PyTorch · Safetensors · English · llama
koalazf99 committed (verified) · Commit 05d31b3 · 1 parent: 799f97f

Update README.md

Files changed (1): README.md (+6, -2)
README.md CHANGED
@@ -14,7 +14,7 @@ tags:
 <img src="prox-teaser.png">
 </p>
 
-[ArXiv](http://arxiv.org/abs/xxxx) | [Models](https://huggingface.co/gair-prox/RedPJ-ProX-0.7B) | [Data](https://huggingface.co/datasets/gair-prox/RedPajama-pro) | [Code](https://github.com/GAIR-NLP/program-every-example)
+[ArXiv](http://arxiv.org/abs/2409.17115) | [Models](https://huggingface.co/gair-prox/RedPJ-ProX-0.7B) | [Data](https://huggingface.co/datasets/gair-prox/RedPajama-pro) | [Code](https://github.com/GAIR-NLP/program-every-example)
 
 **RedPJ-ProX-0.7B** is a tiny language model. It was trained on the [RedPajama-V2-pro](https://huggingface.co/datasets/gair-prox/RedPajama-pro) dataset for 25B tokens.
 
@@ -29,6 +29,10 @@ ProX models are evaluated over 10 language model benchmarks in zero-shot setting
 
 ### Citation
 ```
-@misc{TBD
+@article{zhou2024programming,
+  title={Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale},
+  author={Zhou, Fan and Wang, Zengzhi and Liu, Qian and Li, Junlong and Liu, Pengfei},
+  journal={arXiv preprint arXiv:2409.17115},
+  year={2024}
 }
 ```
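
Since the README above describes a base causal language model distributed in PyTorch/Safetensors format, a minimal usage sketch may be helpful. This is an assumption-laden example, not part of the committed README: it assumes the checkpoint loads through the standard `transformers` causal-LM API, with the model ID taken from the Models link in the diff.

```python
# Minimal sketch, assuming the checkpoint works with the standard
# transformers AutoModel API (model ID from the Models link above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gair-prox/RedPJ-ProX-0.7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# RedPJ-ProX-0.7B is a pre-trained base model, so prompt it for plain
# text continuation rather than chat-style instruction following.
inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```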