zenlm/zen-eco-4b-thinking

Zen Eco 4B Thinking - Chain-of-thought reasoning model

Model Details

  • Architecture: Qwen3 base
  • Parameters: 4B
  • Precision: F16
  • Training: Fine-tuned with Zen identity
  • Developer: Hanzo AI

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("zenlm/zen-eco-4b-thinking")
tokenizer = AutoTokenizer.from_pretrained("zenlm/zen-eco-4b-thinking")

# Tokenize a prompt and generate a short response
prompt = "Hello, who are you?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
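Thinking models in the Qwen3 family conventionally wrap their chain-of-thought in <think>...</think> tags before the final answer. A minimal sketch of separating the reasoning from the answer, assuming that tag format (the helper name split_thinking is illustrative, not part of any library):

```python
import re

def split_thinking(text: str) -> tuple[str, str]:
    """Split a response into (reasoning, answer).

    Assumes the Qwen3-style convention of wrapping chain-of-thought
    in <think>...</think> tags; if no tags are present, the whole
    text is treated as the answer.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer

# Synthetic example response (not actual model output):
raw = "<think>2 + 2 equals 4.</think>The answer is 4."
reasoning, answer = split_thinking(raw)
print(reasoning)  # 2 + 2 equals 4.
print(answer)     # The answer is 4.
```

This lets you log or hide the reasoning trace while showing users only the final answer.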

Training

Fine-tuned from the Qwen3-4B base model with a fixed random seed (42) for reproducibility.
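Fixing the seed means that repeating the run draws the same random numbers and therefore the same data order and initialization. A minimal sketch of the idea using only the standard library (a real training script would also seed numpy and torch, e.g. torch.manual_seed, which are assumed dependencies not shown here):

```python
import random

def seed_everything(seed: int = 42) -> None:
    # Minimal sketch: seed the stdlib RNG. A full training setup
    # would also call numpy.random.seed(seed) and
    # torch.manual_seed(seed) for end-to-end reproducibility.
    random.seed(seed)

# Two runs with the same seed draw identical random numbers.
seed_everything(42)
first_run = [random.random() for _ in range(3)]
seed_everything(42)
second_run = [random.random() for _ in range(3)]
assert first_run == second_run  # reproducible draws
```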

License

Apache 2.0
