---
license: apache-2.0
---

**Paper**: [Adapting Language Models to Compress Contexts](https://arxiv.org/abs/2305.14788)

**Code**: https://github.com/princeton-nlp/AutoCompressors

**Models**:
- Llama-2-7b fine-tuned models: [AutoCompressor-Llama-2-7b-6k](https://huggingface.co/princeton-nlp/AutoCompressor-Llama-2-7b-6k/), [FullAttention-Llama-2-7b-6k](https://huggingface.co/princeton-nlp/FullAttention-Llama-2-7b-6k)
- OPT-2.7b fine-tuned models: [AutoCompressor-2.7b-6k](https://huggingface.co/princeton-nlp/AutoCompressor-2.7b-6k), [AutoCompressor-2.7b-30k](https://huggingface.co/princeton-nlp/AutoCompressor-2.7b-30k), [RMT-2.7b-8k](https://huggingface.co/princeton-nlp/RMT-2.7b-8k), [FullAttention-2.7b-4k](https://huggingface.co/princeton-nlp/FullAttention-2.7b-4k)
- OPT-1.3b fine-tuned models: [AutoCompressor-1.3b-30k](https://huggingface.co/princeton-nlp/AutoCompressor-1.3b-30k), [RMT-1.3b-30k](https://huggingface.co/princeton-nlp/RMT-1.3b-30k)

---

AutoCompressor-2.7b-6k is a model fine-tuned from [facebook/opt-2.7b](https://huggingface.co/facebook/opt-2.7b) following the AutoCompressor method in [Adapting Language Models to Compress Contexts](https://arxiv.org/abs/2305.14788).
It is fine-tuned on 2B tokens from [The Pile](https://pile.eleuther.ai), using sequences of 6,144 tokens with 50 summary vectors, summary accumulation, randomized segmenting, and stop-gradients.

To get started, download the [`AutoCompressor`](https://github.com/princeton-nlp/AutoCompressors) repository and load the model as follows:

```
from auto_compressor import AutoCompressorModel

model = AutoCompressorModel.from_pretrained("princeton-nlp/AutoCompressor-2.7b-6k")
```
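
Once loaded, the model can compress a long context into summary vectors and then condition generation on them. The following is a minimal sketch based on the usage example in the AutoCompressors repository; the `output_softprompt`/`softprompt` arguments and the reuse of the `facebook/opt-2.7b` tokenizer are assumptions that should be checked against the repository.

```
import torch
from transformers import AutoTokenizer
from auto_compressor import AutoCompressorModel

# Assumption: the model reuses the OPT-2.7b tokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-2.7b")
model = AutoCompressorModel.from_pretrained("princeton-nlp/AutoCompressor-2.7b-6k").eval()

# A long document to be compressed into summary vectors
context = "AutoCompressors compress long contexts into a small number of summary vectors ..."
context_tokens = tokenizer(context, add_special_tokens=False, return_tensors="pt").input_ids

# A short prompt that will be conditioned on the compressed context
prompt = "In summary,"
prompt_tokens = tokenizer(prompt, add_special_tokens=False, return_tensors="pt").input_ids

with torch.no_grad():
    # Compress the context; `output_softprompt`/`.softprompt` follow the repository's API
    summary_vectors = model(context_tokens, output_softprompt=True).softprompt
    # Generate from the prompt while attending to the summary vectors
    # (50 per 2,048-token segment) instead of the raw context
    output = model.generate(prompt_tokens, softprompt=summary_vectors,
                            max_new_tokens=20, do_sample=False)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```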

**Evaluation**

We report the perplexity achieved by our OPT-2.7b models on segments of 2,048 tokens, conditioned on different amounts of context.
FullAttention-2.7b-4k attends to the full uncompressed context, whereas AutoCompressor-2.7b-6k and RMT-2.7b-8k compress each 2,048-token context segment into 50 summary vectors.
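
Concretely, "conditioned on context" means the loss is computed on the final 2,048-token segment while the preceding tokens are either attended to directly (FullAttention) or first compressed into summary vectors (AutoCompressor, RMT). Below is a rough sketch of the compressed-context case with a hypothetical helper; it again assumes the repository's `output_softprompt`/`softprompt` arguments together with the standard Hugging Face `labels` argument.

```
import torch

def ppl_on_segment(model, context_tokens, segment_tokens):
    # Perplexity on a 2,048-token segment, conditioned on compressed context.
    # Hypothetical helper: `output_softprompt`/`softprompt` follow the repository's
    # API, `labels` is the standard Hugging Face causal-LM loss argument.
    with torch.no_grad():
        if context_tokens.size(1) > 0:
            # Compress the preceding context into summary vectors
            summary_vectors = model(context_tokens, output_softprompt=True).softprompt
            out = model(segment_tokens, labels=segment_tokens, softprompt=summary_vectors)
        else:
            # No context: plain language modeling on the segment
            out = model(segment_tokens, labels=segment_tokens)
    return torch.exp(out.loss).item()
```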

*In-domain Evaluation*

| Context Tokens         | 0    | 512  | 2048 | 4096 | 6144 |
|------------------------|------|------|------|------|------|
| FullAttention-2.7b-4k  | 6.57 | 6.15 | 5.94 | -    | -    |
| RMT-2.7b-8k            | 6.34 | 6.19 | 6.02 | 6.02 | 6.01 |
| AutoCompressor-2.7b-6k | 6.31 | 6.04 | 5.98 | 5.94 | 5.93 |

*Out-of-domain Evaluation*

| Context Tokens         | 0    | 512  | 2048 | 4096 | 6144 |
|------------------------|------|------|------|------|------|
| FullAttention-2.7b-4k  | 8.94 | 8.28 | 7.93 | -    | -    |
| RMT-2.7b-8k            | 8.62 | 8.44 | 8.21 | 8.20 | 8.20 |
| AutoCompressor-2.7b-6k | 8.60 | 8.26 | 8.17 | 8.12 | 8.10 |

See [Adapting Language Models to Compress Contexts](https://arxiv.org/abs/2305.14788) for more evaluations, including evaluation on 11 in-context learning tasks.

## BibTeX
```
@misc{chevalier2023adapting,
  title={Adapting Language Models to Compress Contexts},
  author={Alexis Chevalier and Alexander Wettig and Anirudh Ajith and Danqi Chen},
  year={2023},
  eprint={2305.14788},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```