Updated versions
@@ -21,9 +21,9 @@ python3 -m pip install fastapi pydantic tensorflow uvicorn scikit-learn --break-
 sudo mkdir -p $LOCATION
 
 # download and unpack the archive
-curl -fsSL --retry 3 -o word-language-detector-ai-api_0.1.0_linux-aarch_source.tar.gz https://repo.jcloud-services.ddns.net/software/word-language-detector-ai-api/word-language-detector-ai-api_0.1.0_linux-aarch_source.tar.gz
-sudo tar -xzf word-language-detector-ai-api_0.1.0_linux-aarch_source.tar.gz -C $LOCATION
-sudo rm -f word-language-detector-ai-api_0.1.0_linux-aarch_source.tar.gz
+curl -fsSL --retry 3 -o word-language-detector-ai-api_latest_linux-aarch_source.tar.gz https://repo.jcloud-services.ddns.net/software/word-language-detector-ai-api/word-language-detector-ai-api_latest_linux-aarch_source.tar.gz
+sudo tar -xzf word-language-detector-ai-api_latest_linux-aarch_source.tar.gz -C $LOCATION
+sudo rm -f word-language-detector-ai-api_latest_linux-aarch_source.tar.gz
 
 # Optional: download the model
 # You can leave this part out if you want to provide the model yourself. Notice that the directory has to include following files: 'label_encoder.json' (label encoder), 'language_detector.keras' (the actual model), 'max_len' (token maximum length) and 'tokenizer.json' (tokenizer). It is recommended to download the model as below.
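The comment in the hunk above lists four files the model directory must contain. For users who provide the model themselves, that requirement can be sanity-checked with a short shell sketch; the `check_model_dir` helper is hypothetical (it is not part of the install script), and the file names are taken from the comment:

```shell
#!/bin/sh
# Hypothetical helper: verify that a model directory contains the four
# files the install script's comment requires.
check_model_dir() {
    dir="$1"
    for f in label_encoder.json language_detector.keras max_len tokenizer.json; do
        if [ ! -f "$dir/$f" ]; then
            echo "missing: $f" >&2
            return 1
        fi
    done
    echo "model directory OK"
}

# Example: populate a temporary directory and check it
tmp=$(mktemp -d)
touch "$tmp/label_encoder.json" "$tmp/language_detector.keras" \
      "$tmp/max_len" "$tmp/tokenizer.json"
check_model_dir "$tmp"
rm -rf "$tmp"
```

Running this against an incomplete directory reports the first missing file on stderr and returns a non-zero status, so it can gate the rest of an install script.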