HippolyteP committed
Commit b6abd78 · 1 Parent(s): 0ce0ddb

Update README.md

Files changed (1): README.md (+38 -3)
README.md CHANGED
@@ -1,9 +1,44 @@
  ---
  tags:
  - model_hub_mixin
  - pytorch_model_hub_mixin
  ---

- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- - Library: [More Information Needed]
- - Docs: [More Information Needed]

  ---
+ license: cc-by-4.0
+ language:
+ - en
  tags:
  - model_hub_mixin
  - pytorch_model_hub_mixin
  ---

+ # ARC-Encoder models
+
+ This page hosts `ARC8-Encoder_Mistral`, one of three released versions of pretrained ARC-Encoders. The architectures and training methods are described in the paper *ARC-Encoder: learning compressed text representations for large language models*, available [here](https://github.com/kyutai-labs/ARC-Encoder/blob/main/ARC_Encoder_preprint.pdf). Code to reproduce the pretraining, further fine-tune the encoders, or evaluate them on downstream tasks is available in the [ARC-Encoder repository](https://github.com/kyutai-labs/ARC-Encoder/tree/main).
+
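+ Since the card is tagged with the `pytorch_model_hub_mixin` integration, a checkpoint like this one can usually be loaded through the `PyTorchModelHubMixin` API from `huggingface_hub`. The sketch below is only an illustration under that assumption: the `ARCEncoder` class, its constructor arguments, and the repo id are hypothetical placeholders; use the actual encoder class and instructions from the ARC-Encoder repository.
+
+ ```python
+ # Hedged sketch of the PyTorchModelHubMixin loading pattern.
+ # `ARCEncoder` and its constructor arguments are placeholders,
+ # not the real encoder class from the ARC-Encoder repository.
+ import torch.nn as nn
+ from huggingface_hub import PyTorchModelHubMixin
+
+ class ARCEncoder(nn.Module, PyTorchModelHubMixin):
+     def __init__(self, hidden_size: int = 4096, pooling_factor: int = 8):
+         super().__init__()
+         self.pooling_factor = pooling_factor
+         self.proj = nn.Linear(hidden_size, hidden_size)  # placeholder module
+
+     def forward(self, hidden_states):
+         return self.proj(hidden_states)
+
+ # Assumed repo id; weights and config are pulled from the Hub checkpoint.
+ encoder = ARCEncoder.from_pretrained("kyutai/ARC8-Encoder_Mistral")
+ ```
+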
+ ## Model Details
+
+ All encoders released here use a [Llama3.2-3B](https://github.com/meta-llama/llama-cookbook) base backbone and are trained on web crawl filtered with [Dactory](https://github.com/kyutai-labs/dactory). The release consists of two ARC-Encoders each trained for a single decoder and one trained for two decoders at the same time (the effect of the pooling factor is illustrated just after this list):
+ - `ARC8-Encoder_Llama`, trained on 2.6B tokens specifically for [Llama3.1-8B](https://github.com/meta-llama/llama-cookbook) base, with a pooling factor of 8.
+ - `ARC8-Encoder_Mistral`, trained on 2.6B tokens specifically for [Mistral-7B](https://github.com/mistralai/mistral-finetune?tab=readme-ov-file) base, with a pooling factor of 8.
+ - `ARC8-Encoder_multi`, trained by sampling among the two decoders, with a pooling factor of 8.
+
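+ As a quick illustration of what a pooling factor of 8 implies (simple arithmetic, assuming the compressed representation length is the context length divided by the pooling factor):
+
+ ```python
+ # A 1,024-token context passage compressed with pooling factor 8
+ # is handed to the decoder as roughly 1,024 / 8 = 128 vectors.
+ context_tokens = 1024
+ pooling_factor = 8
+ compressed_length = context_tokens // pooling_factor
+ print(compressed_length)  # 128
+ ```
+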
+ ### Uses
+
+ As described in the [paper](https://github.com/kyutai-labs/ARC-Encoder/blob/main/ARC_Encoder_preprint.pdf), the pretrained ARC-Encoders can be fine-tuned to perform various downstream tasks.
+ You can also adapt an ARC-Encoder to a new pooling factor (PF) by fine-tuning it on the desired PF.
+ For optimal results, we recommend fine-tuning toward a lower PF than the one used during pretraining.
+ To reproduce the results presented in the paper, you can use our released fine-tuning dataset, [ARC_finetuning](https://huggingface.co/datasets/kyutai/ARC_finetuning).
+
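+ The fine-tuning dataset is hosted on the Hub, so it should be loadable with the standard `datasets` library; the snippet below is a sketch under that assumption (check the dataset card for the actual splits and fields):
+
+ ```python
+ from datasets import load_dataset
+
+ # Repo id taken from the dataset link above; splits and columns are not assumed here.
+ ds = load_dataset("kyutai/ARC_finetuning")
+ print(ds)
+ ```
+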
+ ### Licensing
+
+ ARC-Encoders are licensed under the CC-BY 4.0 license.
+
+ Terms of use: as the released models are pretrained from a Llama3.2-3B backbone, ARC-Encoders are subject to the Llama Terms of Use found at [Llama license](https://www.llama.com/license/).
+
+ ## Citations
+
+ If you use one of these models, please cite:
+
+ ```bibtex
+ @techreport{pilchen2025arc_encoder,
+   title={ARC-Encoder: learning compressed text representations for large language models},
+   author={Pilchen, Hippolyte and Grave, Edouard and P{\'e}rez, Patrick},
+   year={2025}
+ }
+ ```