seven-cat committed · verified
Commit efe5d32 · 1 Parent(s): f690a63

Update README.md

Files changed (1): README.md (+3 −2)
README.md CHANGED
@@ -2,6 +2,7 @@
 license: apache-2.0
 language:
 - en
+pipeline_tag: text-classification
 ---
 
 
@@ -10,7 +11,7 @@ language:
 
 ## Model Description
 
-`ReasonEval-7B` is a 7.1B parameter decoder-only language model tuned from [`WizardMath-7B-V1.1`](https://huggingface.co/WizardLM/WizardMath-7B-V1.1).
+`ReasonEval-7B` is a 7.1B parameter decoder-only language model fine-tuned from [`WizardMath-7B-V1.1`](https://huggingface.co/WizardLM/WizardMath-7B-V1.1).
 
 <p align="center">
 <img src="introduction.jpg" alt="error" style="width:95%;">
@@ -30,7 +31,7 @@ With ReasonEval, you can
 
 ## Model Details
 
-* **Model type**: `ReasonEval-7B` model is an auto-regressive language model based on the transformer decoder architecture. Its architecture is identical to the base model, except that the
+* **Model type**: `ReasonEval-7B`'s architecture is identical to [`WizardMath-7B-V1.1`](https://huggingface.co/WizardLM/WizardMath-7B-V1.1), except that the
 classification head for next-token prediction is replaced with a classification head for outputting the
 possibilities of each class of reasong steps.
 * **Language(s)**: English
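The edited "Model type" bullet says the next-token head is replaced with a classification head that outputs the possibilities (i.e. probabilities) of each class for every reasoning step. As a minimal, hypothetical sketch of that final stage only (the function name, class count, and logit values below are illustrative assumptions, not taken from the model's actual code):

```python
import numpy as np

def step_class_probabilities(logits: np.ndarray) -> np.ndarray:
    """Softmax over the last axis: turns a classification head's per-step
    logits into a probability for each class of each reasoning step."""
    shifted = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)

# Hypothetical logits: 2 reasoning steps x 3 classes (values are made up).
logits = np.array([[2.0, 0.5, -1.0],
                   [0.1, 0.2, 3.0]])
probs = step_class_probabilities(logits)
# Each row sums to 1; the argmax in a row is that step's predicted class.
```

In practice such probabilities would come from running the fine-tuned model itself; this sketch only shows how raw head outputs become the per-class possibilities the bullet describes.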