EvolLLM is an evolution merge model built on the Qwen3-4B family.
```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("beyoru/EvolLLM")
model = AutoModelForCausalLM.from_pretrained("beyoru/EvolLLM")
# Prepare a simple chat history
messages = [
{"role": "user", "content": "Who are you?"},
]
# Apply the model's chat template and tokenize
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
# Decode only the newly generated tokens (everything after the prompt)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
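If a GPU is available, you can let transformers pick the dtype and device placement. These are standard `from_pretrained` options rather than settings the model card prescribes, and `device_map="auto"` additionally requires the accelerate package:

```python
# Optional: automatic dtype and device placement (requires accelerate)
model = AutoModelForCausalLM.from_pretrained(
    "beyoru/EvolLLM",
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # spread weights across available devices
)
```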
This model is a merged version of two Qwen base models:

- Qwen/Qwen3-4B-Instruct-2507
- Qwen/Qwen3-4B-Thinking-2507

Evaluation used openai/gsm8k (a subset of 100 samples, not used for training), with comparisons against openfree/Darwin-Qwen3-4B (the Evolution model) and the base model on ACEBench. A minimal sketch of the merge idea follows the citation below.

If you use this model, please cite:

```bibtex
@misc{nafy_qwen_merge_2025,
title = {Merged Qwen3 4B Instruct + Thinking Models},
author = {Beyoru},
year = {2025},
howpublished = {\url{https://huggingface.co/beyoru/EvolLLM}},
note = {Merged model combining instruction-tuned and reasoning Qwen3 variants.},
base_models = {Qwen/Qwen3-4B-Instruct-2507, Qwen/Qwen3-4B-Thinking-2507}
}
```
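The card does not state how the two checkpoints were combined. For illustration only, a plain linear (weight-averaging) merge of the two base models could look like the sketch below; `alpha` and the output directory are hypothetical, and this is not necessarily the recipe used for EvolLLM:

```python
# Illustrative linear merge of two same-architecture checkpoints.
# NOT the confirmed EvolLLM recipe; alpha and paths are hypothetical.
import torch
from transformers import AutoModelForCausalLM

instruct = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-4B-Instruct-2507", torch_dtype=torch.bfloat16
)
thinking = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-4B-Thinking-2507", torch_dtype=torch.bfloat16
)

alpha = 0.5  # interpolation weight between the two parents
thinking_state = thinking.state_dict()
merged_state = {}
for name, w_instruct in instruct.state_dict().items():
    w_thinking = thinking_state[name]
    if w_instruct.dtype.is_floating_point:
        merged_state[name] = alpha * w_instruct + (1.0 - alpha) * w_thinking
    else:
        merged_state[name] = w_instruct  # leave non-float buffers untouched

instruct.load_state_dict(merged_state)
instruct.save_pretrained("qwen3-4b-linear-merge")  # hypothetical output dir
```

An evolutionary merge, as the collection name suggests, would search over such interpolation weights (often per layer) against a fitness metric rather than fixing `alpha` by hand.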
Alternatively, use a pipeline as a high-level helper:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="beyoru/EvolLLM")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```
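Generation keyword arguments can be passed straight to the pipeline call. The sampling values below are illustrative defaults, not settings recommended by the model card:

```python
# Illustrative sampling settings (not prescribed by the model card)
result = pipe(
    messages,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
)
print(result[0]["generated_text"])
```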