# Use custom models
By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models?library=transformers.js) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@huggingface/transformers@4.0.1/dist/), which should work out-of-the-box. You can customize this as follows:
### Settings
```javascript
import { env } from '@huggingface/transformers';
// Specify a custom location for models (defaults to '/models/').
env.localModelPath = '/path/to/models/';
// Disable the loading of remote models from the Hugging Face Hub:
env.allowRemoteModels = false;
// Set location of .wasm files. Defaults to use a CDN.
env.backends.onnx.wasm.wasmPaths = '/path/to/files/';
```
For a full list of available settings, check out the [API Reference](./api/env).
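Putting the settings together, a local-only setup might look like the sketch below. The model name and paths are placeholders, and it assumes a converted ONNX model already exists in the local models directory:

```javascript
import { env, pipeline } from '@huggingface/transformers';

// Resolve models from a local directory and never hit the Hub.
env.localModelPath = '/path/to/models/';
env.allowRemoteModels = false;

// Loads from /path/to/models/my-onnx-model/ on disk
// ('my-onnx-model' is a placeholder for your converted model's folder name).
const classifier = await pipeline('text-classification', 'my-onnx-model');
const output = await classifier('I love Transformers.js!');
```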
### Convert your models to ONNX
We recommend using [Optimum](https://github.com/huggingface/optimum-onnx) to convert your PyTorch models to ONNX in a single command. For the full list of supported architectures, check out the [Optimum documentation](https://huggingface.co/docs/optimum-onnx/onnx/overview).
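As a rough sketch, the conversion is done with Optimum's `optimum-cli export onnx` command. The model ID and output directory below are examples, and the exact `pip` extras may differ between Optimum versions:

```shell
# Install Optimum with ONNX export support.
pip install optimum[exporters]

# Export a PyTorch checkpoint from the Hub to ONNX in a single command.
optimum-cli export onnx --model distilbert-base-uncased-finetuned-sst-2-english ./onnx-model/
```

The resulting directory can then be placed under `env.localModelPath` so Transformers.js can load it locally.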
