---
language:
- en
license: apache-2.0
tags:
- Math
datasets:
- meta-math/MetaMathQA
pipeline_tag: text-generation
model-index:
- name: Bumblebee-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 63.4
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Q-bert/Bumblebee-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 84.16
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Q-bert/Bumblebee-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.0
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Q-bert/Bumblebee-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 50.96
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Q-bert/Bumblebee-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 78.22
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Q-bert/Bumblebee-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 65.66
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Q-bert/Bumblebee-7B
      name: Open LLM Leaderboard
---

## Bumblebee-7B

<img src="https://images6.alphacoders.com/131/1314913.jpeg" width="300" height="200" alt="Bumblebee-7B">

Fine-tuned on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) with [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA).

You can prompt the model using the ChatML format; see the usage sketch below.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Q-bert__Bumblebee-7B)

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 67.73 |
| AI2 Reasoning Challenge (25-Shot) | 63.40 |
| HellaSwag (10-Shot)               | 84.16 |
| MMLU (5-Shot)                     | 64.00 |
| TruthfulQA (0-shot)               | 50.96 |
| Winogrande (5-shot)               | 78.22 |
| GSM8k (5-shot)                    | 65.66 |
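
These scores come from the Open LLM Leaderboard, which runs EleutherAI's lm-evaluation-harness. A rough sketch for re-running one of the benchmarks locally is below; the `simple_evaluate` arguments shown are an assumption about the harness API, and exact numbers may shift with the harness version and hardware.

```python
import lm_eval  # EleutherAI's lm-evaluation-harness (pip install lm-eval)

# Re-run the 5-shot GSM8k evaluation locally; results may differ slightly
# from the leaderboard depending on the harness version.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Q-bert/Bumblebee-7B,dtype=bfloat16",
    tasks=["gsm8k"],
    num_fewshot=5,
    batch_size=8,
)
print(results["results"]["gsm8k"])
```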