# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NousResearch__CodeLlama-13b-hf)

| Metric                 | Value |
|------------------------|-------|
| Avg.                   | 37.91 |
| ARC (25-shot)          | 40.87 |
| HellaSwag (10-shot)    | 63.35 |
| MMLU (5-shot)          | 32.81 |
| TruthfulQA (0-shot)    | 43.79 |
| Winogrande (5-shot)    | 67.17 |
| GSM8K (5-shot)         | 12.13 |
| DROP (3-shot)          | 5.25  |
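The per-example details behind these scores can be loaded with the `datasets` library. Below is a minimal sketch; the config name (`harness_winogrande_5`) and the `latest` split are assumptions based on the leaderboard's usual per-task naming, so check the dataset page for the exact configs available for this model.

```python
from datasets import load_dataset

# Assumed config/split names -- the details repo typically exposes one
# config per task (e.g. "harness_winogrande_5") plus timestamped splits
# and a "latest" alias; verify on the dataset page before relying on them.
data = load_dataset(
    "open-llm-leaderboard/details_NousResearch__CodeLlama-13b-hf",
    "harness_winogrande_5",
    split="latest",
)

# Each row holds one evaluated example (prompt, model output, metrics).
print(data[0])
```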