Improve model card: Add pipeline tag, library name, and paper link
This PR enhances the model card by:
- Adding `library_name: transformers` to enable the automated "how to use" widget, based on `config.json` and `tokenizer_config.json` which indicate compatibility with the Transformers library.
- Adding `pipeline_tag: text-generation` to improve discoverability for LLMs.
- Including a direct link to the paper: [Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models](https://huggingface.co/papers/2508.00410).
README.md
CHANGED
@@ -1,8 +1,11 @@
 ---
 license: mit
+library_name: transformers
+pipeline_tag: text-generation
 ---
+
 ### Entropy Minimization: Qwen3-4B-Base trained on OpenRS
 
-This is the Qwen3-4B-Base model trained by Entropy Minimization using OpenRS training set.
+This is the Qwen3-4B-Base model trained by Entropy Minimization using OpenRS training set, as presented in the paper [Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models](https://huggingface.co/papers/2508.00410).
 
 If you are interested in Co-rewarding, you can find more details on our Github Repo [https://github.com/tmlr-group/Co-rewarding].