Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 40.71 |
| IFEval (0-Shot) | 66.99 |
| BBH (3-Shot) | 55.36 |
| MATH Lvl 5 (4-Shot) | 48.04 |
| GPQA (0-shot) | 11.30 |
| MuSR (0-shot) | 13.47 |
| MMLU-PRO (5-shot) | 49.10 |
Model tree for Quazim0t0/time-14b-stock
Evaluation results (Open LLM Leaderboard)
- IFEval (0-shot), strict accuracy: 66.99
- BBH (3-shot), normalized accuracy: 55.36
- MATH Lvl 5 (4-shot), exact match: 48.04
- GPQA (0-shot), acc_norm: 11.30
- MuSR (0-shot), acc_norm: 13.47
- MMLU-PRO (5-shot, test set), accuracy: 49.10