# Qwen3-8B-Elizabeth-Simple
A fine-tuned version of Qwen3-8B optimized for tool use, trained on the Elizabeth tool use minipack.
## Model Details

### Base Model
- Model: Qwen/Qwen3-8B
- Architecture: Transformer decoder-only
- Parameters: 8 billion
- Context Length: 4096 tokens
### Training Details
- Training Method: Full fine-tuning (no LoRA/adapters)
- Precision: bfloat16
- Training Data: Elizabeth tool use minipack (198 high-quality examples)
- Training Time: 2 minutes 36 seconds
- Final Loss: 0.436 (from 3.27 → 0.16)
- Hardware: 2x NVIDIA H200 (283GB total VRAM)
## Performance

- Training Speed: 3.8 samples/second (see the consistency check below)
- Convergence: training loss decreased smoothly from 3.27 to 0.16
- Tool Use: tuned for reliable tool calling
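As a quick sanity check, the reported throughput, training time, dataset size, and epoch count are mutually consistent. The arithmetic below uses only figures stated elsewhere in this card:

```python
# Consistency check using only figures reported in this card.
examples = 198              # dataset size
epochs = 3                  # from the Optimization section
seconds = 2 * 60 + 36       # training time: 2 minutes 36 seconds

total_samples = examples * epochs                        # 594 samples processed
print(f"{total_samples / seconds:.1f} samples/second")   # ~3.8 samples/second
```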
## Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model in bfloat16 and place it across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    "LevelUp2x/qwen3-8b-elizabeth-simple",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("LevelUp2x/qwen3-8b-elizabeth-simple")

# Tool use example
prompt = "Please help me calculate the square root of 144 using the calculator tool."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
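The snippet above sends a plain prompt. For structured tool calling you can pass tool definitions through the chat template instead. The sketch below is illustrative: the `calculator` tool schema is hypothetical, and it assumes a `transformers` version whose `apply_chat_template` accepts a `tools` argument and a chat template that renders tool schemas (Qwen-family templates generally do).

```python
# Hypothetical tool schema for illustration; replace with your own tools.
tools = [{
    "type": "function",
    "function": {
        "name": "calculator",
        "description": "Evaluate basic math operations such as square roots.",
        "parameters": {
            "type": "object",
            "properties": {
                "operation": {"type": "string", "description": "e.g. 'sqrt'"},
                "value": {"type": "number"},
            },
            "required": ["operation", "value"],
        },
    },
}]

messages = [{"role": "user", "content": "What is the square root of 144?"}]

# Render the conversation plus tool schemas with the model's chat template.
input_ids = tokenizer.apply_chat_template(
    messages,
    tools=tools,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```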
## Training Methodology

### Pure Weight Evolution

This model was trained using pure weight evolution, i.e. full-parameter fine-tuning: no external adapters, LoRA, or quantization were used. All of the base model's weights were updated, baking Elizabeth's identity and tool use capabilities directly into the model parameters.
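In practice this means the base checkpoint is loaded in bfloat16 and every weight stays trainable, with no PEFT wrappers or quantized layers. A minimal sketch of that setup (the exact training harness used for this release is not published):

```python
import torch
from transformers import AutoModelForCausalLM

# Full fine-tuning: load the base model in bfloat16 and leave every parameter
# trainable -- no LoRA/PEFT wrappers, no quantization.
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B", torch_dtype=torch.bfloat16)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable / 1e9:.1f}B")  # ~8B, i.e. the whole model
```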
### Data Quality
- Dataset Size: 198 carefully curated examples
- Quality: High-quality tool use demonstrations (an illustrative example of the format is sketched below)
- Diversity: Multiple tool types and usage patterns
- Consistency: Uniform formatting and instruction following
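For illustration only, a single tool-use demonstration in this style might be structured as below. This is a hypothetical reconstruction in an OpenAI-style chat format, not an actual record from the Elizabeth minipack, whose exact schema is not published here.

```python
# Hypothetical training example (illustrative format only; the real minipack
# schema may differ). Tool name and arguments are made up.
example = {
    "messages": [
        {"role": "user", "content": "What is the square root of 144?"},
        {
            "role": "assistant",
            "content": None,
            "tool_calls": [{
                "type": "function",
                "function": {
                    "name": "calculator",
                    "arguments": '{"operation": "sqrt", "value": 144}',
                },
            }],
        },
        {"role": "tool", "name": "calculator", "content": "12"},
        {"role": "assistant", "content": "The square root of 144 is 12."},
    ]
}
```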
### Optimization
- Gradient Accumulation: 16 steps
- Effective Batch Size: 64
- Learning Rate: 2e-5
- Optimizer: AdamW with cosine scheduler
- Epochs: 3.0
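Assuming a standard Hugging Face `Trainer` setup, the hyperparameters above map onto a `TrainingArguments` configuration roughly as follows. The per-device batch size of 2 is inferred from the effective batch size (2 GPUs × 2 per device × 16 accumulation steps = 64); the actual training harness may differ.

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters (assumes the HF Trainer API).
# per_device_train_batch_size=2 is inferred: 2 GPUs x 2 x 16 accumulation = 64.
training_args = TrainingArguments(
    output_dir="qwen3-8b-elizabeth-simple",
    num_train_epochs=3.0,
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=16,
    lr_scheduler_type="cosine",
    optim="adamw_torch",
    bf16=True,
)
```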
## Deployment

### Hardware Requirements
- GPU Memory: Minimum 80GB VRAM (recommended 120GB+)
- Precision: bfloat16 recommended
- Batch Size: 4 (optimal)
### Serving

Serving with vLLM is recommended for optimal performance:

```bash
python -m vllm.entrypoints.api_server \
    --model LevelUp2x/qwen3-8b-elizabeth-simple \
    --dtype bfloat16 \
    --gpu-memory-utilization 0.9
```
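Once a server is running, it can be queried over HTTP. The snippet below assumes vLLM's OpenAI-compatible entrypoint (e.g. `vllm serve LevelUp2x/qwen3-8b-elizabeth-simple`) listening on the default port 8000; the endpoint path and port are assumptions about your deployment, not part of this release.

```python
import requests

# Query an OpenAI-compatible vLLM server (assumed to be on localhost:8000).
response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "LevelUp2x/qwen3-8b-elizabeth-simple",
        "messages": [
            {"role": "user", "content": "Use the calculator tool to find the square root of 144."}
        ],
        "max_tokens": 256,
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```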
## License
Apache 2.0
## Citation

```bibtex
@software{qwen3_8b_elizabeth_simple,
  title     = {Qwen3-8B-Elizabeth-Simple: Tool Use Fine-Tuned Model},
  author    = {ADAPT-Chase and Nova Prime},
  year      = {2025},
  url       = {https://huggingface.co/LevelUp2x/qwen3-8b-elizabeth-simple},
  publisher = {Hugging Face},
  version   = {1.0.0}
}
```
## Contact
For questions about this model, please open an issue on the Hugging Face repository or contact the maintainers.