# QuantFactory/gemma2-gutenberg-9B-GGUF
This is a quantized (GGUF) version of nbeerbower/gemma2-gutenberg-9B, created using llama.cpp.
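For readers unfamiliar with what quantization does: llama.cpp's GGUF formats use block-wise schemes (Q4_K_M and friends) with per-block scales, but the core idea can be illustrated with plain round-to-nearest symmetric quantization. This is a minimal sketch, not llama.cpp's actual algorithm:

```python
def quantize_symmetric(weights, bits):
    """Round-to-nearest symmetric quantization to signed `bits`-bit integers."""
    qmax = 2 ** (bits - 1) - 1              # e.g. 7 for 4-bit
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]  # small ints, stored compactly
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from ints plus one shared scale."""
    return [x * scale for x in q]

w = [0.12, -0.8, 0.33, 0.05]
q, s = quantize_symmetric(w, bits=4)
w_hat = dequantize(q, s)
print(q)      # [1, -7, 3, 0]
print(w_hat)  # approximation of the original weights
```

Storing low-bit integers plus a scale per block, instead of 16-bit floats per weight, is what shrinks a 9B-parameter model to a few gigabytes.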
## Original Model Card
### gemma2-gutenberg-9B
UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3 finetuned on jondurbin/gutenberg-dpo-v0.1.
#### Method
Finetuned on a single RTX 4090 with ORPO for 3 epochs.
#### Open LLM Leaderboard Evaluation Results
Detailed results are available on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 22.61 |
| IFEval (0-Shot, strict accuracy) | 27.96 |
| BBH (3-Shot, normalized accuracy) | 42.36 |
| MATH Lvl 5 (4-Shot, exact match) | 1.44 |
| GPQA (0-shot, acc_norm) | 11.74 |
| MuSR (0-shot, acc_norm) | 16.71 |
| MMLU-PRO (5-shot, accuracy) | 35.47 |
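The Avg. row is simply the unweighted mean of the six benchmark scores, which can be checked directly:

```python
scores = [27.96, 42.36, 1.44, 11.74, 16.71, 35.47]
avg = round(sum(scores) / len(scores), 2)
print(avg)  # 22.61, matching the Avg. row above
```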
## Hardware compatibility
GGUF quantizations are provided at 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit precision.
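As a rough guide to which variant fits a given machine, file size scales with bits per weight. Real GGUF files differ somewhat (K-quant schemes mix bit widths and store per-block scales and metadata), and the ~9.24B parameter count used here is an assumption, but a back-of-envelope estimate is:

```python
PARAMS = 9.24e9  # approximate parameter count of a Gemma-2 9B model (assumption)

def approx_size_gb(bits_per_weight: float) -> float:
    """Rough GGUF size: parameters * bits / 8, in gigabytes (1e9 bytes)."""
    return PARAMS * bits_per_weight / 8 / 1e9

for bits in (2, 3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{approx_size_gb(bits):.1f} GB")
```

By this estimate the 4-bit variant lands around 4.6 GB, small enough for most consumer GPUs, while the 8-bit variant is roughly twice that.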
## Model tree for QuantFactory/gemma2-gutenberg-9B-GGUF
- Base model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
- Dataset used to train: jondurbin/gutenberg-dpo-v0.1