By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models?library=transformers.js) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.7.3/dist/), which should work out-of-the-box. You can customize this as follows:

### Settings

```javascript
import { env } from '@huggingface/transformers';

// Specify a custom location for models (defaults to '/models/').
env.localModelPath = '/path/to/models/';

// Disable the loading of remote models from the Hugging Face Hub:
env.allowRemoteModels = false;

// Set location of .wasm files. Defaults to use a CDN.
env.backends.onnx.wasm.wasmPaths = '/path/to/files/';
```
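As an illustration, with these settings in place a `pipeline()` call resolves against `env.localModelPath` instead of the Hub. A minimal sketch (the model id `Xenova/all-MiniLM-L6-v2` is just an example and assumes the corresponding files exist under that path):

```javascript
import { pipeline, env } from '@huggingface/transformers';

// Only look for models under the local path configured above.
env.allowRemoteModels = false;
env.localModelPath = '/path/to/models/';

// Loads from /path/to/models/Xenova/all-MiniLM-L6-v2 rather than the Hub.
const extractor = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');
const embeddings = await extractor('Hello world!', { pooling: 'mean', normalize: true });
```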
For a full list of available settings, check out the [API Reference](./api/env).

### Convert your models to ONNX

We recommend using our [conversion script](https://github.com/huggingface/transformers.js/blob/main/scripts/convert.py) to convert your PyTorch, TensorFlow, or JAX models to ONNX in a single command. Behind the scenes, it uses [🤗 Optimum](https://huggingface.co/docs/optimum) to perform conversion and quantization of your model.

```bash
python -m scripts.convert --quantize --model_id <model_name_or_path>
```

For example, convert and quantize [bert-base-uncased](https://huggingface.co/bert-base-uncased) using:

```bash
python -m scripts.convert --quantize --model_id bert-base-uncased
```
This will save the following files to `./models/`:

```
bert-base-uncased/
├── config.json
├── tokenizer.json
├── tokenizer_config.json
└── onnx/
    ├── model.onnx
    └── model_quantized.onnx
```
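The converted model can then be loaded by its folder name, provided `env.localModelPath` points at the output directory. A minimal sketch, assuming the files above are available under `./models/`:

```javascript
import { pipeline, env } from '@huggingface/transformers';

// Assumes the conversion output shown above lives under this path.
env.localModelPath = './models/';
env.allowRemoteModels = false;

// 'bert-base-uncased' matches the folder name created by the conversion script.
const unmasker = await pipeline('fill-mask', 'bert-base-uncased');
const output = await unmasker('The capital of France is [MASK].');
```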
For the full list of supported architectures, see the [Optimum documentation](https://huggingface.co/docs/optimum/main/en/exporters/onnx/overview).