# Add pipeline tag, link to paper, and cite the paper (#4)

opened by nielsr (HF Staff)

**README.md** (CHANGED)
````diff
@@ -1,14 +1,15 @@
 ---
-library_name: transformers
+base_model:
+- Qwen/Qwen2.5-32B-Instruct
 datasets:
 - codeparrot/apps
 - BAAI/TACO
 - AI-MO/NuminaMath-CoT
 language:
 - en
-base_model:
-- Qwen/Qwen2.5-32B-Instruct
+library_name: transformers
 license: apache-2.0
+pipeline_tag: text-generation
 ---
 
 ## Model Details
@@ -50,13 +51,16 @@ We use Llama-Factory for training. On 8 H100, the training takes 19 hours with D
 We would like to thanks the compute resources from [Lambda Lab](https://lambdalabs.com/service/gpu-cloud?srsltid=AfmBOop5FnmEFTkavVtdZDsLWvHWNg6peXtat-OXJ9MW5GMNsk756PE5) and [AnyScale](https://www.anyscale.com/). We would like to thanks the academic feedback and support from the [Still-2 Team](https://arxiv.org/pdf/2412.09413), and [Junyang Lin](https://justinlin610.github.io/) from the [Qwen Team](https://qwenlm.github.io/).
 
 ## Citation
-Please considering citing our
+Please considering citing our paper if you found it useful for your research. Thank you!
 
 ```bibtex
-@misc{
-author = {
-title = {
-
-
-
-}
+@misc{zhu2025llms,
+    author = {Wenxuan Zhu and Xiangru Tang and Ziyang Ma and Hongbo Zhang and Tianqi Chen},
+    title = {LLMs Can Easily Learn to Reason from Demonstrations Structure, not content, is what matters!},
+    year = {2025},
+    eprint={2502.07374},
+    archivePrefix={arXiv},
+    primaryClass={cs.CL},
+    url = {https://arxiv.org/abs/2502.07374}
+}
+```
````
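The model-card front matter changed by this PR is plain YAML, so the new `base_model`, `library_name`, and `pipeline_tag` fields can be sanity-checked before merging. A minimal sketch using PyYAML, with the post-PR front matter inlined for illustration (`check_card_metadata` is a hypothetical helper, not part of any Hub tooling):

```python
import yaml  # PyYAML

# Front matter as it stands after this PR (inlined for illustration).
FRONT_MATTER = """\
base_model:
- Qwen/Qwen2.5-32B-Instruct
datasets:
- codeparrot/apps
- BAAI/TACO
- AI-MO/NuminaMath-CoT
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
"""

def check_card_metadata(text: str) -> dict:
    """Parse model-card front matter and verify the fields this PR adds."""
    meta = yaml.safe_load(text)
    for key in ("base_model", "library_name", "pipeline_tag", "license"):
        assert key in meta, f"missing front-matter key: {key}"
    # The pipeline tag should be a single string, not a list.
    assert isinstance(meta["pipeline_tag"], str)
    return meta

meta = check_card_metadata(FRONT_MATTER)
print(meta["pipeline_tag"])  # text-generation
```

`pipeline_tag: text-generation` together with `library_name: transformers` is what lets the Hub show the correct widget and library snippet for the model, which is the point of this change.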