---
tags:
- generated_from_trainer
- mistral
- conversational
license: mit
language:
- en
inference: false
base_model: HuggingFaceH4/zephyr-7b-beta
library_name: transformers
pipeline_tag: text-generation
---

# Zephyr 7B Beta-GGUF

- Model creator: [Hugging Face H4](https://huggingface.co/HuggingFaceH4)
- Original model: [Zephyr 7B Beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)

## Description

Zephyr is a series of language models trained to act as helpful assistants. Zephyr-7B-β is the second model in the series: a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) trained on a mix of publicly available, synthetic datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290). We found that removing the in-built alignment of these datasets boosted performance on [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and made the model more helpful. However, this means the model is likely to generate problematic text when prompted to do so. You can find more details in the [technical report](https://arxiv.org/abs/2310.16944).

- **Model type:** A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily English
- **License:** MIT
- **Finetuned from model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
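
## Prompt format

Since this is a conversational model, prompts should follow the chat layout Zephyr-7B-β was trained with. The sketch below builds that layout by hand for illustration, assuming the `<|system|>`, `<|user|>`, and `<|assistant|>` special tokens documented for zephyr-7b-beta; in practice, prefer `tokenizer.apply_chat_template` from `transformers`, which produces this formatting for you. The helper name `format_zephyr_prompt` is ours, not part of any library.

```python
def format_zephyr_prompt(system: str, user: str) -> str:
    """Build a single-turn prompt in the Zephyr chat layout.

    Each turn is opened by a role token on its own line and closed
    with the </s> end-of-sequence marker; the trailing <|assistant|>
    tag cues the model to generate its reply.
    """
    return (
        f"<|system|>\n{system}</s>\n"
        f"<|user|>\n{user}</s>\n"
        f"<|assistant|>\n"
    )

# Example: a friendly single-turn prompt.
prompt = format_zephyr_prompt(
    "You are a friendly chatbot.",
    "Explain DPO in one sentence.",
)
print(prompt)
```

The resulting string can be passed directly to a text-generation pipeline or tokenized for `model.generate`.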