Fixed a bug
@@ -14,8 +14,8 @@ LOCATION=/srv/word-language-detector-ai-api
 sudo apt install python3
 
 # install dependencies
-python3 -m pip install simple-cache --index-url https://repo.jcloud-services.ddns.net/simple/
-python3 -m pip install fastapi pydantic tensorflow uvicorn scikit-learn
+python3 -m pip install simple-cache --index-url https://repo.jcloud-services.ddns.net/simple/ --break-system-packages
+python3 -m pip install fastapi pydantic tensorflow uvicorn scikit-learn --break-system-packages
 
 # set up directories
 sudo mkdir -p $LOCATION
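The `--break-system-packages` flag added here is only needed on PEP 668 "externally managed" Python installations (e.g. Debian 12 / Ubuntu 23.04 and later). A quick way to check is to look for the PEP 668 marker file, which lives in the standard-library directory; this is a sketch, assuming `python3` is on `PATH`:

```shell
# Print whether this system's Python is marked "externally managed" (PEP 668).
stdlib=$(python3 -c 'import sysconfig; print(sysconfig.get_path("stdlib"))')
if [ -f "$stdlib/EXTERNALLY-MANAGED" ]; then
  echo "externally managed: keep --break-system-packages (or use a venv)"
else
  echo "not externally managed: plain pip install works"
fi
```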
@@ -28,12 +28,15 @@ sudo rm -f word-language-detector-ai-api_0.1.0_linux-aarch_source.tar.gz
 # Optional: download the model
 # You can leave this part out if you want to provide the model yourself. Note that the directory has to include the following files: 'label_encoder.json' (label encoder), 'language_detector.keras' (the actual model), 'max_len' (token maximum length) and 'tokenizer.json' (tokenizer). It is recommended to download the model as shown below.
 curl -fsSL --retry 3 -o word-language-detector-ai_0.1.0_tf_model_artifacts.tar.gz https://repo.jcloud-services.ddns.net/models/word-language-detector-ai/word-language-detector-ai_0.1.0_tf_model_artifacts.tar.gz
+sudo mkdir -p $LOCATION/model
 sudo tar -xzf word-language-detector-ai_0.1.0_tf_model_artifacts.tar.gz -C $LOCATION/model
 sudo rm -f word-language-detector-ai_0.1.0_tf_model_artifacts.tar.gz
 ```
 
 Note: Ensure that at least approximately 700 KB of disk space is available; during the installation, approximately 850 KB must be available. Without the model, you need approximately 400 KB of disk space, and approximately 500 MB during the installation.
 
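The comment above names the four files the model directory must contain. A small helper can verify an extracted model directory; this is only a sketch (`check_model_dir` is a name made up here), and the demo below runs against a throwaway directory rather than `$LOCATION/model`:

```shell
# Verify that a model directory contains all four required artifact files.
required="label_encoder.json language_detector.keras max_len tokenizer.json"
check_model_dir() {
  for f in $required; do
    [ -f "$1/$f" ] || { echo "missing: $f"; return 1; }
  done
  echo "model directory OK"
}

# Example against a throwaway directory (stand-in for $LOCATION/model):
demo=$(mktemp -d)
touch "$demo/label_encoder.json" "$demo/language_detector.keras" \
      "$demo/max_len" "$demo/tokenizer.json"
check_model_dir "$demo"   # prints: model directory OK
rm -r "$demo"
```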
 ## Run the API
 
 To run the API, run `src/api.py` in the package directory. Here is an example with `/srv/word-language-detector-ai-api` as the package directory:
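For unattended operation, the run step above could be wrapped in a systemd unit. This is only a sketch: the unit name, the `python3` invocation, and the restart policy are assumptions, not part of the package.

```ini
# /etc/systemd/system/word-language-detector-ai-api.service  (hypothetical unit)
[Unit]
Description=word language detector AI API
After=network.target

[Service]
# run src/api.py in the package directory, as the instructions above say
WorkingDirectory=/srv/word-language-detector-ai-api
ExecStart=/usr/bin/python3 src/api.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

With a unit like this in place, `sudo systemctl enable --now word-language-detector-ai-api` would start the API at boot.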
@@ -51,6 +54,10 @@ For the full API documentation, see the docs path of the API.
 
 ## Changelog
 
+### Version 0.1.1
+- added `--break-system-packages` to `pip install ...`
+- bug fix: create model directory before downloading the model
+
 ### Version 0.1.0
 - initial release
 - API for the word language detector AI