Update README.md
README.md CHANGED
@@ -57,7 +57,10 @@ pipeline_tag: text-generation
 ## Evaluation Results
 
 **Overview**
-- We conducted a performance evaluation based on the tasks being evaluated on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
+- We conducted a performance evaluation based on the tasks evaluated on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
+- We evaluated our model on four benchmark datasets: `ARC-Challenge`, `HellaSwag`, `MMLU`, and `TruthfulQA`.
+- We used the [lm-evaluation-harness repository](https://github.com/EleutherAI/lm-evaluation-harness), specifically commit [b281b0921b636bc36ad05c0b0b0763bd6dd43463](https://github.com/EleutherAI/lm-evaluation-harness/tree/b281b0921b636bc36ad05c0b0b0763bd6dd43463).
+- We can reproduce the evaluation environment using the command below:
 
 **Main Results**
 | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA |
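
The reproduction command itself falls outside this hunk, so it is not shown above. As a rough sketch (not necessarily the command the README refers to), pinning lm-evaluation-harness to the referenced commit and running one leaderboard-style task could look like the following; the model path, batch size, and output path are placeholders, and the 25-shot setting for `ARC-Challenge` follows the Open LLM Leaderboard convention rather than anything stated in this diff.

```bash
# Sketch only: pin lm-evaluation-harness to the commit referenced above.
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
cd lm-evaluation-harness
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
pip install -e .

# Illustrative run for a single task; <model-path> is a placeholder for the
# model under evaluation. 25-shot ARC-Challenge mirrors the leaderboard setup.
python main.py \
  --model hf-causal \
  --model_args pretrained=<model-path> \
  --tasks arc_challenge \
  --num_fewshot 25 \
  --batch_size 4 \
  --output_path results/arc_challenge.json
```

The remaining benchmarks would be run the same way with their own task names and few-shot counts (10-shot HellaSwag, 5-shot MMLU, 0-shot TruthfulQA under the leaderboard convention).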