---
license: other
datasets:
- Danielbrdz/Barcenas-DataSet
language:
- es
- en
---
Barcenas-7b is a model based on orca-mini-v3-7b and Llama-2-7b.
It was trained with a proprietary dataset to boost the creativity and consistency of its responses.
This model would never have been possible without the help of the following people:
Pankaj Mathur - For his orca-mini-v3-7b model, which was the basis of the Barcenas-7b fine-tune.
Maxime Labonne - For his code and tutorial on fine-tuning Llama 2.
TheBloke - For his PEFT adapter script.
Georgi Gerganov - For his llama.cpp project, which contributed to Barcenas-7b's functionality.
TrashPandaSavior - A Reddit user without whose information the project would never have started.
Made with ❤️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽
Open LLM Leaderboard Evaluation Results
Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 44.77 |
| ARC (25-shot) | 55.12 |
| HellaSwag (10-shot) | 77.40 |
| MMLU (5-shot) | 49.27 |
| TruthfulQA (0-shot) | 43.64 |
| Winogrande (5-shot) | 73.64 |
| GSM8K (5-shot) | 6.14 |
| DROP (3-shot) | 8.17 |