---
language:
- en
license: apache-2.0
tags:
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- synthetic data
- distillation
model-index:
- name: fusedYi
  results: []
---
# Model Card for FusedYi

<!-- Provide a quick summary of what the model is/does. -->

This is a fused Yi-6B. I took a Yi, merged it with another Yi, and got a new Yi out of the Yis. Yi-ceptionized.
## Model Details

1.9 × Yi-6B: two Yi-6B models merged into a single, larger model.
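A merge like this is commonly done with [mergekit](https://github.com/arcee-ai/mergekit) using the `passthrough` method, which stacks layer slices from the source models into a deeper network. The sketch below is a hypothetical config, not the actual recipe: the layer ranges are illustrative guesses chosen so the result lands near 1.9 × the original depth (Yi-6B has 32 transformer layers), and the base model identifier `01-ai/Yi-6B` is assumed.

```yaml
# Hypothetical mergekit config for a passthrough self-merge of Yi-6B.
# Layer ranges are illustrative only; the real recipe is not published here.
slices:
  - sources:
      - model: 01-ai/Yi-6B
        layer_range: [0, 31]
  - sources:
      - model: 01-ai/Yi-6B
        layer_range: [1, 32]
merge_method: passthrough
dtype: bfloat16
```

With these ranges the merged model would have 62 transformer layers versus the original 32, which (embeddings aside) is roughly the 1.9 × parameter count noted above.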
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_pharaouk__fusedyi).
| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 53.18 |
| AI2 Reasoning Challenge (25-Shot) | 55.03 |
| HellaSwag (10-Shot)               | 76.60 |
| MMLU (5-Shot)                     | 63.43 |
| TruthfulQA (0-shot)               | 49.29 |
| Winogrande (5-shot)               | 72.69 |
| GSM8k (5-shot)                    |  2.05 |
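The Avg. row is the unweighted mean of the six benchmark scores. A quick sketch to reproduce it from the table:

```python
# Benchmark scores copied from the leaderboard table above.
scores = {
    "ARC (25-shot)": 55.03,
    "HellaSwag (10-shot)": 76.60,
    "MMLU (5-shot)": 63.43,
    "TruthfulQA (0-shot)": 49.29,
    "Winogrande (5-shot)": 72.69,
    "GSM8k (5-shot)": 2.05,
}

# Unweighted mean across the six benchmarks.
avg = sum(scores.values()) / len(scores)
print(f"{avg:.2f}")  # 53.18
```

The near-zero GSM8k score drags the average down noticeably; without it, the mean of the remaining five benchmarks would be above 63.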