print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Acknowledgments

We thank [@Antizana](https://github.com/Antizana) for the KV cache fix merged from [ouro-cache-fix](https://github.com/Antizana/ouro-cache-fix), which resolved a critical compatibility issue with transformers>=4.56.0.

## Citation

```bibtex
@article{zhu2025scaling,
  title={Scaling Latent Reasoning via Looped Language Models},
  author={Zhu, Rui-Jie and Wang, Zixuan and Hua, Kai and Zhang, Tianyu and Li, Ziniu and Que, Haoran and Wei, Boyi and Wen, Zixin and Yin, Fan and Xing, He and others},
  journal={arXiv preprint arXiv:2510.25741},
  year={2025}
}
```

## License