Merge method paper: *Model Stock: All we need is just a few fine-tuned models* (arXiv:2403.19522).
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("DreadPoor/Derivative-8B-Model_Stock")
model = AutoModelForCausalLM.from_pretrained("DreadPoor/Derivative-8B-Model_Stock")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

This is a merge of pre-trained language models created using mergekit.
This model was merged using the Model Stock merge method, with FuseAI/FuseChat-Llama-3.1-8B-SFT as the base.
The following models were included in the merge:

- DreadPoor/Aspire-8B-model_stock
- DreadPoor/ONeil-model_stock-8B
- DreadPoor/BaeZel_1.1-8B-Model_Stock
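For context, the Model Stock method (arXiv:2403.19522) averages the fine-tuned checkpoints and then interpolates that average back toward the pre-trained base model, with the interpolation ratio derived from the angle between the fine-tuned models' task vectors. The following is a minimal, per-tensor sketch of that idea, not mergekit's actual implementation (which, for example, can compute the ratio per row when `filter_wise: true` is set):

```python
# Simplified per-tensor sketch of Model Stock merging (illustrative only).
import torch
import torch.nn.functional as F

def model_stock(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    n = len(finetuned)
    deltas = [(w - base).flatten() for w in finetuned]  # task vectors w_i - w_0
    # Average pairwise cosine similarity between task vectors estimates cos(theta).
    cos_vals = [
        F.cosine_similarity(deltas[i], deltas[j], dim=0)
        for i in range(n)
        for j in range(i + 1, n)
    ]
    cos_theta = torch.stack(cos_vals).mean()
    # Interpolation ratio from the paper: t = N*cos(theta) / (1 + (N-1)*cos(theta)).
    t = n * cos_theta / (1 + (n - 1) * cos_theta)
    w_avg = torch.stack(finetuned).mean(dim=0)  # plain average of the fine-tuned weights
    return t * w_avg + (1 - t) * base  # pull the average back toward the base
```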
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: DreadPoor/Aspire-8B-model_stock
  - model: DreadPoor/ONeil-model_stock-8B
  - model: DreadPoor/BaeZel_1.1-8B-Model_Stock
merge_method: model_stock
base_model: FuseAI/FuseChat-Llama-3.1-8B-SFT
normalize: false
filter_wise: true
chat_template: "auto"
int8_mask: true
dtype: bfloat16
```
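To reproduce a merge like this one, the configuration above can be saved as a YAML file (e.g. `config.yaml`) and run with mergekit, either via its `mergekit-yaml` CLI or its Python API. The snippet below is a sketch following mergekit's documented Python usage; the file and output paths are illustrative:

```python
# Sketch of running the merge with mergekit's Python API (paths are illustrative).
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Derivative-8B-Model_Stock",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use GPU if available
        copy_tokenizer=True,             # copy the base model's tokenizer into the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```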
Detailed and summarized results are available on the Open LLM Leaderboard.
| Metric | Value (%) |
|---|---|
| Average | 30.04 |
| IFEval (0-shot) | 76.67 |
| BBH (3-shot) | 34.25 |
| MATH Lvl 5 (4-shot) | 17.52 |
| GPQA (0-shot) | 8.95 |
| MuSR (0-shot) | 11.61 |
| MMLU-PRO (5-shot) | 31.23 |
Alternatively, the model can be used through the transformers pipeline helper:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="DreadPoor/Derivative-8B-Model_Stock")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```