Update README.md
## Usage with `quickmt`

If you want to do GPU inference, you must first install the Nvidia CUDA toolkit.

Next, install the `quickmt` [Python library](https://github.com/quickmt/quickmt):
```bash
git clone https://github.com/quickmt/quickmt.git
pip install ./quickmt/
```
Finally, use the model in Python:
```python
from quickmt import Translator
from huggingface_hub import snapshot_download

# Download the model (if not downloaded already) and return the local path
# Device is either 'auto', 'cpu' or 'cuda'
t = Translator(
    snapshot_download("quickmt/quickmt-en-zh", ignore_patterns="eole-model/*"),
    device="cpu",
)

# Translate - set beam size to 5 for higher quality (but slower speed)
t(["The Boot Monument is an American Revolutionary War memorial located in Saratoga National Historical Park in the state of New York."], beam_size=1)
```
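As an aside, the `device` argument accepts `'auto'`, `'cpu'`, or `'cuda'`. The sketch below illustrates one plausible way an `'auto'` setting could resolve; it is a hypothetical heuristic for illustration only, not `quickmt`'s actual implementation:

```python
import shutil


def pick_device(requested: str = "auto") -> str:
    """Hypothetical sketch of resolving an 'auto' device setting.

    NOT quickmt's implementation -- just one plausible heuristic:
    pass explicit choices through, and fall back to CPU unless an
    Nvidia driver utility appears to be on the PATH.
    """
    if requested != "auto":
        return requested
    return "cuda" if shutil.which("nvidia-smi") else "cpu"


print(pick_device("cpu"))   # explicit choices are passed through unchanged
print(pick_device("auto"))  # resolves based on the local machine
```

Explicitly passing `device="cpu"`, as in the example above, avoids surprises on machines where a CUDA driver is present but the toolkit is not set up.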