Update README.md #1
by cdq10131 - opened

README.md CHANGED
````diff
@@ -42,12 +42,10 @@ We record the perplexity achieved by our 30k-fine-tuned OPT models on segments o
 
 ## Bibtex
 ```
-@
-
-
-
-
-   archivePrefix={arXiv},
-   primaryClass={cs.CL}
+@inproceedings{chevalier2023adapting,
+   title={Adapting Language Models to Compress Contexts},
+   author={Chevalier, Alexis and Wettig, Alexander and Ajith, Anirudh and Chen, Danqi},
+   booktitle={Empirical Methods in Natural Language Processing (EMNLP)},
+   year={2023}
 }
 ```
````
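For reference, the updated entry is used like any other BibTeX entry; a minimal LaTeX sketch, where the file name `refs.bib` is an assumption (any `.bib` file containing the `chevalier2023adapting` entry above works):

```latex
% Minimal document citing the updated entry.
% Assumes refs.bib (hypothetical filename) contains the
% @inproceedings{chevalier2023adapting, ...} entry from the diff above.
\documentclass{article}
\begin{document}
We build on context-compressing language models~\cite{chevalier2023adapting}.
\bibliographystyle{plain}
\bibliography{refs}
\end{document}
```

Compiling with `pdflatex` + `bibtex` + `pdflatex` (twice) resolves the citation.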