Datasets:
ZhouChuYue committed on
Commit · 4e43ed9
Parent(s): 6e09812
Update README: Add Commonsense Reasoning benchmarks description

README.md CHANGED
@@ -133,6 +133,7 @@ We used the **MiniCPM-1.2B** model architecture and **MiniCPM3-4B** tokenizer fo
 - **Mathematical Reasoning:** GSM8K (4-shot), MATH (4-shot), Math-Bench
 - **Code Generation:** HumanEval (0-shot), MBPP (3-shot)
 - **Comprehensive Knowledge:** MMLU (5-shot), MMLU-STEM (5-shot)
+- **Commonsense Reasoning:** ARC-E/C (0-shot), BBH (3-shot), CommonSenseQA (8-shot), HellaSwag (0-shot), OpenBookQA (0-shot), PIQA (0-shot), SIQA (0-shot), Winogrande (0-shot)

 ### 🔧 Experimental Setup
