---
language:
- en
license: cc-by-4.0
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
pipeline_tag: feature-extraction
---

# ARC-Encoder models

This page hosts `ARC4-Encoder_Llama`, one of four released versions of pretrained ARC-Encoders. The architectures and training methods are described in the paper *ARC-Encoder: learning compressed text representations for large language models*, available [here](https://arxiv.org/abs/2510.20535).
Code: [ARC-Encoder repository](https://github.com/kyutai-labs/ARC-Encoder)
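
The checkpoint files are distributed through the Hugging Face Hub. Below is a minimal sketch of fetching them with `huggingface_hub`; the repo id `kyutai/ARC4-Encoder_Llama` is an assumption based on this page's name, and the actual encoder class and loading code live in the [ARC-Encoder repository](https://github.com/kyutai-labs/ARC-Encoder), so check both for the exact identifiers.

```python
# Minimal sketch, not the official loading code: download the checkpoint files.
# The repo id below is an assumption based on this page's name; per the
# `pytorch_model_hub_mixin` tag, the encoder class from the GitHub repository
# presumably exposes a matching `from_pretrained` method.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="kyutai/ARC4-Encoder_Llama")
print(f"Checkpoint files downloaded to: {local_dir}")
```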

## Model Details

All the encoders released here are built on a [Llama3.2-3B](https://github.com/meta-llama/llama-cookbook) base backbone and trained on web-crawl data filtered using [Dactory](https://github.com/kyutai-labs/dactory). The release consists of three ARC-Encoders each trained for a single decoder and one trained for two decoders at the same time (a short sketch of what the pooling factor means follows the list):
- `ARC8-Encoder_Llama`, trained on 2.6B tokens specifically for the [Llama3.1-8B](https://github.com/meta-llama/llama-cookbook) base decoder, with a pooling factor of 8.
- `ARC8-Encoder_Mistral`, trained on 2.6B tokens specifically for the [Mistral-7B](https://www.mistralai.com/news/announcing-mistral-7b/) base decoder, with a pooling factor of 8.
- `ARC8-Encoder_multi`, trained by sampling among the two decoders, with a pooling factor of 8.
- `ARC4-Encoder_Llama`, trained on 2.6B tokens specifically for the [Llama3.1-8B](https://github.com/meta-llama/llama-cookbook) base decoder, with a pooling factor of 4.
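
The pooling factor (PF) controls how strongly the input text is compressed: the encoder outputs roughly PF times fewer continuous vectors than input tokens. The snippet below is only an illustrative sketch of that sequence-length arithmetic, using plain group-wise mean pooling and made-up tensor sizes; it is not the actual ARC-Encoder pooling mechanism, which is described in the paper.

```python
# Illustrative sketch only: what a pooling factor of 8 means for sequence length.
# Plain group-wise mean pooling with example sizes, NOT the ARC-Encoder architecture.
import torch

pooling_factor = 8
hidden = torch.randn(1, 1024, 3072)  # (batch, tokens, hidden dim) -- example sizes

batch, seq_len, dim = hidden.shape
groups = hidden.view(batch, seq_len // pooling_factor, pooling_factor, dim)
compressed = groups.mean(dim=2)
print(compressed.shape)  # torch.Size([1, 128, 3072]): 1024 tokens -> 128 vectors
```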

### Uses

As described in the [paper](https://arxiv.org/abs/2510.20535), the pretrained ARC-Encoders can be fine-tuned to perform various downstream tasks.
You can also adapt an ARC-Encoder to a new pooling factor (PF) by fine-tuning it on the desired PF.
For optimal results, we recommend fine-tuning toward a lower PF than the one used during pretraining.
To reproduce the results presented in the paper, you can use our released fine-tuning dataset, [ARC_finetuning](https://huggingface.co/datasets/kyutai/ARC_finetuning).
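
As a hedged sketch of getting started with that dataset, the snippet below loads it with the `datasets` library; the `"train"` split name is an assumption, so check the dataset card for the exact configurations and splits.

```python
# Minimal sketch: load the released fine-tuning data from the Hub.
# The "train" split name is an assumption; see the dataset card for details.
from datasets import load_dataset

ds = load_dataset("kyutai/ARC_finetuning", split="train")
print(ds)     # number of rows and column names
print(ds[0])  # one fine-tuning example
```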

### Licensing

ARC-Encoders are licensed under the CC-BY 4.0 license.

Terms of use: As the released models are pretrained from a Llama3.2-3B backbone, ARC-Encoders are also subject to the Llama Terms of Use found at [Llama license](https://www.llama.com/license/).

## Citations

If you use one of these models, please cite:

```bibtex
@misc{pilchen2025arcencoderlearningcompressedtext,
      title={ARC-Encoder: learning compressed text representations for large language models},
      author={Hippolyte Pilchen and Edouard Grave and Patrick Pérez},
      year={2025},
      eprint={2510.20535},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2510.20535},
}
```