Update README.md
## Model Details

* **Model type**: `ReasonEval-7B` is an auto-regressive language model based on the transformer decoder architecture. Its architecture is identical to the base model, except that the classification head for next-token prediction is replaced with a classification head that outputs the probabilities of each class of reasoning steps.
* **Language(s)**: English
* **Finetuned from model**: [https://huggingface.co/WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
* **Fine-tuning Data**: [PRM800K](https://github.com/openai/prm800k)

For detailed instructions on how to use the ReasonEval-7B model, visit our GitHub repository at [https://github.com/GAIR-NLP/ReasonEval](https://github.com/GAIR-NLP/ReasonEval).
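The head-swap described above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the actual ReasonEval implementation: the hidden size, the 3-class label set, and the `StepClassificationHead` name are assumptions made for the example (see the GitHub repository for the real code).

```python
import torch
import torch.nn as nn

# Hypothetical sketch of swapping a next-token LM head for a
# step-classification head. The dimensions and 3-class label set
# (e.g. good / neutral / bad reasoning steps) are illustrative
# assumptions, not the actual ReasonEval-7B configuration.

HIDDEN_SIZE = 16   # a real 7B decoder would use a much larger hidden size
NUM_CLASSES = 3

class StepClassificationHead(nn.Module):
    """Replaces the vocab-sized LM head with a small classifier."""
    def __init__(self, hidden_size: int, num_classes: int):
        super().__init__()
        self.proj = nn.Linear(hidden_size, num_classes)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden) from the decoder;
        # each position gets a probability distribution over classes.
        logits = self.proj(hidden_states)
        return torch.softmax(logits, dim=-1)

head = StepClassificationHead(HIDDEN_SIZE, NUM_CLASSES)
hidden = torch.randn(1, 4, HIDDEN_SIZE)  # stand-in for 4 step boundaries
probs = head(hidden)                      # (1, 4, 3); each row sums to 1
```

The point of the sketch is only the structural change: where a language-modeling head projects hidden states to vocabulary logits, here the same hidden states are projected to a small, fixed set of step-quality classes.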
## How to Cite

```bibtex
```