---
language:
- en
license: other
datasets:
- HuggingFaceH4/ultrachat_200k
base_model: Qwen/Qwen-7B
model-index:
- name: UltraQwen-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 51.71
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/UltraQwen-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 77.93
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/UltraQwen-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 59.16
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/UltraQwen-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 48.2
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/UltraQwen-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 73.95
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/UltraQwen-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 44.05
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/UltraQwen-7B
      name: Open LLM Leaderboard
---
## Model description

UltraQwen-7B is a fine-tune of Qwen/Qwen-7B, trained on roughly 100,000 examples from the HuggingFaceH4/ultrachat_200k dataset. Additional checkpoints are planned for later release.

This model has not been aligned with DPO. DPO-aligned versions of this model, trained on various preference datasets, will be released in separate repositories in the future.
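As a rough illustration of the data scale involved (the card does not specify which ~100,000 examples were used, so the split name and shuffled sampling below are assumptions, not the actual training recipe), a subset of that size could be drawn with the `datasets` library:

```python
from datasets import load_dataset

# Load the SFT training split of ultrachat_200k (~200k multi-turn dialogues).
# NOTE: the exact ~100k subset used for training is not documented; this
# shuffled selection is purely illustrative.
ds = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")
subset = ds.shuffle(seed=42).select(range(100_000))

# Each example stores a conversation as a list of chat "messages".
print(subset[0]["messages"][0])
```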
## Evaluation

In informal, personal testing, the model performed strongly on mathematics, history, trivia, and coding tasks. For standardized benchmarks, the model is listed on the Open LLM Leaderboard; its scores are reproduced below.
## Recommended inference parameters

`temperature=0.2, top_p=0.14, top_k=12, repetition_penalty=1.1`
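A minimal `transformers` sketch using these settings. The model id is taken from the leaderboard link in this card; `trust_remote_code=True` is generally required for Qwen-based checkpoints, and since the card does not document a chat/prompt template, the plain-text prompt below is an assumption:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Locutusque/UltraQwen-7B"  # from the leaderboard query link above

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

prompt = "Explain the difference between a list and a tuple in Python."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sampling parameters recommended by this card.
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.2,
    top_p=0.14,
    top_k=12,
    repetition_penalty=1.1,
)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```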
## License

Please make sure to read the Qwen licensing agreement before using this model.
## Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/UltraQwen-7B).
| Metric | Value (%) |
|---|---|
| Avg. | 59.17 |
| AI2 Reasoning Challenge (25-Shot) | 51.71 |
| HellaSwag (10-Shot) | 77.93 |
| MMLU (5-Shot) | 59.16 |
| TruthfulQA (0-shot) | 48.20 |
| Winogrande (5-shot) | 73.95 |
| GSM8k (5-shot) | 44.05 |
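These scores should be reproducible locally with EleutherAI's lm-evaluation-harness, as in the hedged sketch below for the ARC-Challenge row. Note that the leaderboard pins a specific harness version and configuration, so locally computed numbers may differ slightly from the table:

```python
import lm_eval

# Evaluate one leaderboard task (25-shot ARC-Challenge) against this model.
# Task name and few-shot count mirror the metadata above; minor score drift
# versus the leaderboard's pinned harness version is expected.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Locutusque/UltraQwen-7B,trust_remote_code=True",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])
```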