---
language:
- en
license: cc-by-4.0
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
pipeline_tag: feature-extraction
---

# ARC-Encoder models

This page hosts `ARC8-Encoder_multi`, one of three released versions of pretrained ARC-Encoders. The architectures and training methods are described in the paper *ARC-Encoder: learning compressed text representations for large language models*, available [here](https://arxiv.org/abs/2510.20535).
Code: [ARC-Encoder repository](https://github.com/kyutai-labs/ARC-Encoder)

## Model Details

All encoders released here are built on a [Llama3.2-3B](https://github.com/meta-llama/llama-cookbook) base backbone and trained on web-crawl data filtered with [Dactory](https://github.com/kyutai-labs/dactory). The release consists of two ARC-Encoders, each trained for a single decoder, and one trained for two decoders at the same time (the pooling factor is illustrated just after the list):
- `ARC8-Encoder_Llama`, trained on 2.6B tokens specifically for the [Llama3.1-8B](https://github.com/meta-llama/llama-cookbook) base model, with a pooling factor of 8.
- `ARC8-Encoder_Mistral`, trained on 2.6B tokens specifically for the [Mistral-7B](https://www.mistralai.com/news/announcing-mistral-7b/) base model, with a pooling factor of 8.
- `ARC8-Encoder_multi`, trained by sampling between the two decoders, with a pooling factor of 8.
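
The pooling factor controls how aggressively the context is compressed: with PF = 8, a passage of n tokens is mapped to roughly n / 8 continuous embeddings. A minimal illustration (the exact rounding behavior is an assumption; refer to the paper and codebase for details):

```python
import math

def num_compressed_embeddings(n_tokens: int, pooling_factor: int = 8) -> int:
    # Hypothetical helper: approximates how many continuous representations
    # an ARC-Encoder with the given pooling factor produces for n_tokens.
    return math.ceil(n_tokens / pooling_factor)

print(num_compressed_embeddings(1024))  # -> 128, an 8x shorter sequence
```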

### Uses

As described in the [paper](https://arxiv.org/abs/2510.20535), the pretrained ARC-Encoders can be fine-tuned to perform various downstream tasks.
You can also adapt an ARC-Encoder to a new pooling factor (PF) by fine-tuning it on the desired PF.
For optimal results, we recommend fine-tuning toward a lower PF than the one used during pretraining.
To reproduce the results presented in the paper, you can use our released fine-tuning dataset, [ARC_finetuning](https://huggingface.co/datasets/kyutai/ARC_finetuning).
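
For reference, a minimal sketch of pulling that dataset with the `datasets` library (the available splits and columns are not specified here; check the dataset card before use):

```python
from datasets import load_dataset

# Downloads kyutai/ARC_finetuning from the Hugging Face Hub.
ds = load_dataset("kyutai/ARC_finetuning")
print(ds)  # inspect splits and columns before wiring up fine-tuning
```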

### Licensing

ARC-Encoders are licensed under the CC-BY 4.0 license.

Terms of use: As the released models are pretrained from a Llama3.2-3B backbone, ARC-Encoders are subject to the Llama Terms of Use found at [Llama license](https://www.llama.com/license/).

## Usage

To load the pretrained ARC-Encoders, use the following code snippet from the [ARC-Encoder repository](https://github.com/kyutai-labs/ARC-Encoder):

```python
from embed_llm.models.augmented_model import load_and_save_released_models

# One of: "ARC8_Encoder_multi", "ARC8_Encoder_Llama", "ARC8_Encoder_Mistral"
load_and_save_released_models("ARC8_Encoder_multi", hf_token="<HF_TOKEN>")
```

***Remark:*** This code snippet loads the model from Hugging Face and then creates the appropriate folders at `<TMP_PATH>`, containing the checkpoint and the additional files needed for fine-tuning or evaluation with the `ARC-Encoder` codebase. To reduce occupied disk space, you can then delete the model from your Hugging Face cache.
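
One way to do that cleanup is with `huggingface_hub`'s cache-scanning utilities; a hedged sketch, assuming you substitute the id of the Hub repo the checkpoint came from:

```python
from huggingface_hub import scan_cache_dir

REPO_ID = "<REPO_ID>"  # placeholder: the Hub repo the checkpoint came from

# Find every cached revision of that repo, then delete them all.
cache_info = scan_cache_dir()
revisions = [
    rev.commit_hash
    for repo in cache_info.repos
    if repo.repo_id == REPO_ID
    for rev in repo.revisions
]
if revisions:
    strategy = cache_info.delete_revisions(*revisions)
    print(f"Freeing {strategy.expected_freed_size_str}")
    strategy.execute()
```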

## Citations

If you use one of these models, please cite:

```bibtex
@misc{pilchen2025arcencoderlearningcompressedtext,
      title={ARC-Encoder: learning compressed text representations for large language models}, 
      author={Hippolyte Pilchen and Edouard Grave and Patrick Pérez},
      year={2025},
      eprint={2510.20535},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2510.20535}, 
}
```