Barcenas-7b is a model based on orca-mini-v3-7b and Llama-2-7b.

Trained with a proprietary dataset to boost the creativity and consistency of its responses.

This model would never have been possible without the following people:

Pankaj Mathur - For his orca-mini-v3-7b model, which was the basis of the Barcenas-7b fine-tune.

Maxime Labonne - For his code and tutorial on fine-tuning Llama 2.

TheBloke - For his PEFT adapter script.

Georgi Gerganov - For his llama.cpp project, which contributed to Barcenas-7b's functionality.

TrashPandaSavior - The Reddit user without whose information this project would never have started.

Made with ❤️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽
