---
license: other
license_name: lfm1.0
license_link: https://huggingface.co/LiquidAI/LFM2.5-1.2B-Thinking/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- executorch
- liquid
- lfm2.5
- edge
---
# Introduction

This repository hosts the **LFM2.5-1.2B-Instruct** model for the [React Native ExecuTorch](https://www.npmjs.com/package/react-native-executorch) library. It includes a **quantized** version of the model in `.pte` format, ready for use in the **ExecuTorch** runtime.

If you'd like to run these models in your own ExecuTorch runtime, refer to the [official documentation](https://pytorch.org/executorch/stable/index.html) for setup instructions.

## Compatibility

If you intend to use this model outside of React Native ExecuTorch, make sure your runtime is compatible with the **ExecuTorch** version used to export the `.pte` files. For more details, see the compatibility note in the [ExecuTorch GitHub repository](https://github.com/pytorch/executorch/blob/11d1742fdeddcf05bc30a6cfac321d2a2e3b6768/runtime/COMPATIBILITY.md?plain=1#L4). If you use React Native ExecuTorch, the model constants exported by the library guarantee compatibility with the runtime used behind the scenes.

### Repository Structure

The repository is organized as follows:
- The `.pte` file should be passed to the `modelSource` parameter.
- The tokenizer files for the model are available in the repo root, as `tokenizer.json` and `tokenizer_config.json`.
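As a sketch of how these files plug into React Native ExecuTorch, the snippet below wires the `.pte` file and tokenizer into the library's `useLLM` hook. The placeholder URLs and the exact parameter set (`modelSource`, `tokenizerSource`, `tokenizerConfigSource`) should be checked against the library documentation for the version you install; where possible, prefer the model constants exported by the library, which pin a compatible runtime.

```typescript
// Hypothetical sketch: consuming this repository from React Native ExecuTorch.
// The URLs below are placeholders -- point them at the actual files in this
// repo, or use the constants exported by react-native-executorch instead.
import { useLLM } from 'react-native-executorch';

export function useLfmModel() {
  const llm = useLLM({
    // The quantized model in ExecuTorch's .pte format (see "Repository Structure").
    modelSource: 'https://huggingface.co/LiquidAI/<repo>/resolve/main/<model>.pte',
    // Tokenizer files from the repo root.
    tokenizerSource: 'https://huggingface.co/LiquidAI/<repo>/resolve/main/tokenizer.json',
    tokenizerConfigSource:
      'https://huggingface.co/LiquidAI/<repo>/resolve/main/tokenizer_config.json',
  });
  return llm; // exposes generation state and methods for prompting the model
}
```

This is a sketch only: the hook must run inside a React Native component or custom hook, and the model is downloaded and loaded on-device by the native ExecuTorch runtime the first time it is used.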