Update README.md
README.md
# ARC-Encoder models

This page houses `ARC8-Encoder_multi`, one of three released versions of pretrained ARC-Encoders. The architectures and training methods are described in the paper *ARC-Encoder: learning compressed text representations for large language models*, available [here](https://arxiv.org/abs/2510.20535). Code to reproduce the pretraining, further fine-tune the encoders, or evaluate them on downstream tasks is available in the [ARC-Encoder repository](https://github.com/kyutai-labs/ARC-Encoder/tree/main).
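
For instance, here is a minimal sketch of fetching the pretrained weights from the Hub with `huggingface_hub` (the repo id below is an assumption; see the repository linked above for the actual loading code):

```python
# Minimal sketch: download the pretrained encoder checkpoint from the Hugging Face Hub.
# The repo id is an assumption -- check this model page for the exact identifier.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="kyutai/ARC8-Encoder_multi")
print(f"Checkpoint files downloaded to: {local_dir}")
```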

## Model Details

### Uses

As described in the [paper](https://arxiv.org/abs/2510.20535), the pretrained ARC-Encoders can be fine-tuned to perform various downstream tasks.
You can also adapt an ARC-Encoder to a new pooling factor (PF) by fine-tuning it on the desired PF.
For optimal results, we recommend fine-tuning toward a lower PF than the one used during pretraining.
To reproduce the results presented in the paper, you can use our released fine-tuning dataset, [ARC_finetuning](https://huggingface.co/datasets/kyutai/ARC_finetuning).
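
As a quick way to explore the fine-tuning data, here is a minimal sketch using the `datasets` library (the `train` split name is an assumption; check the dataset card for the actual splits and schema):

```python
# Minimal sketch: stream the ARC_finetuning dataset and inspect a few records.
# Streaming avoids downloading the full dataset just to peek at it.
from datasets import load_dataset

# The "train" split name is an assumption; see the dataset card for the real one.
ds = load_dataset("kyutai/ARC_finetuning", split="train", streaming=True)

for i, example in enumerate(ds):
    print(example.keys())  # discover the actual fields
    print(example)
    if i >= 2:
        break
```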

If you use one of these models, please cite:

```bibtex
@misc{pilchen2025arcencoder,
      title={ARC-Encoder: learning compressed text representations for large language models},
      author={Hippolyte Pilchen and Edouard Grave and Patrick Pérez},
      year={2025},
      eprint={2510.20535},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```