---
base_model: mlabonne/Chimera-8B
library_name: transformers
tags:
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---
# mlabonne/Chimera-8B AWQ
- Model creator: [mlabonne](https://huggingface.co/mlabonne)
- Original model: [Chimera-8B](https://huggingface.co/mlabonne/Chimera-8B)
## Model Summary
This model is built with the DARE-TIES merge method. A list of all source models and the merging path is coming soon.

It merges the "thickest" model weights from Mistral models trained with methods such as Direct Preference Optimization (DPO) and reinforcement learning.

I have spent countless hours studying the latest research papers, attending conferences, and networking with experts in the field. I experimented with different algorithms and tactics, fine-tuned hyperparameters and optimizers, and optimized code until I achieved the best possible results.

Thank you, OpenChat 3.5, for showing me the way.

Here is my contribution.
## Prompt Template
Replace `{system}` with your system prompt, and `{prompt}` with your prompt instruction.
```
### System:
{system}
### User:
{prompt}
### Assistant:
```
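
Below is a minimal usage sketch (not part of the original card) showing one way to fill this template and run the AWQ-quantized weights through `transformers` with `autoawq` installed. The repository id used here is a placeholder assumption; substitute the actual path of this quantized repo.

```python
# Minimal sketch: load the AWQ weights and generate with the card's prompt template.
# Requires: pip install transformers autoawq
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Suparious/Chimera-8B-AWQ"  # assumption: replace with the real quantized repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

system = "You are a helpful assistant."
prompt = "Summarize the DARE-TIES merge method in one paragraph."

# Fill the template from the "Prompt Template" section above.
text = f"### System:\n{system}\n### User:\n{prompt}\n### Assistant:\n"

inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens (skip the prompt).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```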