# word-language-detector-ai-api

The API for Johannes Vos' word language detector AI.

## Installation

Run the following script to install the API:

```bash
LOCATION=/srv/word-language-detector-ai-api

# install python3 if it is not installed
sudo apt install python3

# install dependencies
python3 -m pip install simple-cache --index-url https://repo.jcloud-services.ddns.net/simple/ --break-system-packages
python3 -m pip install fastapi pydantic tensorflow uvicorn scikit-learn --break-system-packages

# set up directories
sudo mkdir -p $LOCATION

# download and unpack the archive
curl -fsSL --retry 3 -o word-language-detector-ai-api_latest_linux-aarch_source.tar.gz https://repo.jcloud-services.ddns.net/software/word-language-detector-ai-api/word-language-detector-ai-api_latest_linux-aarch_source.tar.gz
sudo tar -xzf word-language-detector-ai-api_latest_linux-aarch_source.tar.gz -C $LOCATION
sudo rm -f word-language-detector-ai-api_latest_linux-aarch_source.tar.gz

# Optional: download the model
# You can leave this part out if you want to provide the model yourself.
# In that case, the model directory has to include the following files:
#   'label_encoder.json'       (label encoder)
#   'language_detector.keras'  (the actual model)
#   'max_len'                  (token maximum length)
#   'tokenizer.json'           (tokenizer)
# It is recommended to download the model as below.
curl -fsSL --retry 3 -o word-language-detector-ai_latest_tf_model_artifacts.tar.gz https://repo.jcloud-services.ddns.net/models/word-language-detector-ai/word-language-detector-ai_latest_tf_model_artifacts.tar.gz
sudo mkdir -p $LOCATION/model
sudo tar -xzf word-language-detector-ai_latest_tf_model_artifacts.tar.gz -C $LOCATION/model
sudo rm -f word-language-detector-ai_latest_tf_model_artifacts.tar.gz
```
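
If you provide the model yourself, you can sanity-check the model directory before starting the API. A minimal sketch; the `check_model_dir` helper is hypothetical, not part of the package, and only checks for the four files listed in the install script:

```shell
# Hypothetical helper: verify that a model directory contains the four
# artifacts the API expects (per the comment in the install script above).
check_model_dir() {
    dir="$1"
    status=0
    for f in label_encoder.json language_detector.keras max_len tokenizer.json; do
        if [ ! -e "$dir/$f" ]; then
            echo "missing: $dir/$f"
            status=1
        fi
    done
    return "$status"
}

if check_model_dir /srv/word-language-detector-ai-api/model; then
    echo "model directory looks complete"
fi
```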

Note: ensure that at least approximately 700 KB of disk space is available. During the installation, approximately 850 KB of disk space must be available. Without the model, you need approximately 400 KB of disk space, and approximately 500 MB of disk space during the installation.

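
Given those figures, a pre-flight free-space check can save a failed install. A minimal sketch assuming POSIX `df`; the `require_free_kb` helper and the 850 KB threshold (the with-model peak quoted above) are illustrative, not part of the package:

```shell
# Hypothetical helper: succeed only if the filesystem holding $1 has at
# least $2 KB of free space (df -Pk reports 1024-byte blocks).
require_free_kb() {
    avail=$(df -Pk "$1" | awk 'NR==2 {print $4}')
    [ "$avail" -ge "$2" ]
}

# Peak usage with the model is roughly 850 KB, per the note above.
if require_free_kb /srv 850; then
    echo "enough free space to install with the model"
else
    echo "not enough free space at /srv" >&2
fi
```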
## Run the API

To run the API, run `src/api.py` in the package directory. Here is an example with `/srv/word-language-detector-ai-api` as the package directory:

```bash
/srv/word-language-detector-ai-api/src/api.py
```

### Arguments

For arguments documentation, run the API with the flag `--help`.

## API Docs

For the full API documentation, see the docs path of the API.

## Changelog

### Version 0.2.0

- support for CORS

### Version 0.1.1

- added `--break-system-packages` to `pip install ...`
- bug fix: create model directory before downloading the model

### Version 0.1.0

- initial release
- API for the word language detector AI
- configuration
- logging
- caching
- lazy imports
- command line arguments