Commit f690a63 (verified) by seven-cat · 1 parent: 48dc3f2

Update README.md

Files changed (1): README.md (+2, −2)
README.md CHANGED
@@ -30,7 +30,7 @@ With ReasonEval, you can

 ## Model Details

-* **Model type**: `ReasonEval-7B` model is an auto-regressive language model based on the transformer decoder architecture. `ReasonEval-7B`’s architecture is identical to the base model, except that the
+* **Model type**: `ReasonEval-7B` model is an auto-regressive language model based on the transformer decoder architecture. Its architecture is identical to the base model, except that the
 classification head for next-token prediction is replaced with a classification head for outputting the
 possibilities of each class of reasoning steps.
 * **Language(s)**: English
@@ -39,7 +39,7 @@ possibilities of each class of reasoning steps.
 * **Finetuned from model**: [https://huggingface.co/WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
 * **Fine-tuning Data**: [PRM800K](https://github.com/openai/prm800k)

-Please refer to our [github repo](https://github.com/GAIR-NLP/ReasonEval) for the usage of `ReasonEval-7B`.
+For detailed instructions on how to use the ReasonEval-7B model, visit our GitHub repository at [https://github.com/GAIR-NLP/ReasonEval](https://github.com/GAIR-NLP/ReasonEval).
 ## How to Cite
 ```bibtex
 ```
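
The architectural change the model card describes — keeping the decoder but swapping the next-token classification head for a head that scores each class of reasoning step — can be sketched in a few lines. Everything below is a hypothetical toy: the dimensions, the three-class count, and the `project`/`softmax` helpers are illustrative assumptions, not ReasonEval's actual implementation.

```python
import math
import random

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def project(hidden, weight):
    """Apply a linear head: weight is a (hidden_size x out_dim) matrix."""
    out_dim = len(weight[0])
    return [sum(h * row[j] for h, row in zip(hidden, weight)) for j in range(out_dim)]

random.seed(0)
hidden_size, vocab_size, num_classes = 8, 50, 3  # toy sizes, not the real 7B config

# Final decoder hidden state for one reasoning step (illustrative).
hidden = [random.gauss(0, 1) for _ in range(hidden_size)]

# Base model: the head projects to the vocabulary for next-token prediction.
lm_head = [[random.gauss(0, 1) for _ in range(vocab_size)] for _ in range(hidden_size)]
next_token_probs = softmax(project(hidden, lm_head))   # length vocab_size

# ReasonEval-style model: the same hidden state, but the head now projects to
# one score per class of reasoning step (the class count is an assumption here).
step_head = [[random.gauss(0, 1) for _ in range(num_classes)] for _ in range(hidden_size)]
step_probs = softmax(project(hidden, step_head))       # length num_classes

print(len(step_probs), round(sum(step_probs), 6))  # prints: 3 1.0
```

The decoder body and its weights are untouched by this swap; only the final projection changes, which is why the card can describe the rest of the architecture as identical to WizardMath-7B-V1.1. For the supported loading and scoring code, see the GitHub repository linked above.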