# Qwen3-8B-Elizabeth-Simple

A fine-tuned version of Qwen3-8B optimized for tool use, trained on the Elizabeth tool use minipack.

## Model Details

### Base Model
- **Model:** Qwen/Qwen3-8B
- **Architecture:** Transformer decoder-only
- **Parameters:** 8 billion
- **Context Length:** 4096 tokens

### Training Details
- **Training Method:** Full fine-tuning (no LoRA/adapters)
- **Precision:** bfloat16
- **Training Data:** Elizabeth tool use minipack (198 high-quality examples)
- **Training Time:** 2 minutes 36 seconds
- **Loss:** 3.27 → 0.16 (average training loss 0.436)
- **Hardware:** 2x NVIDIA H200 (283GB total VRAM)

### Performance
- **Training Speed:** 3.8 samples/second
- **Convergence:** Excellent (3.27 → 0.16 loss)
- **Tool Use:** Optimized for reliable tool calling

## Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "LevelUp2x/qwen3-8b-elizabeth-simple",
    torch_dtype=torch.bfloat16,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("LevelUp2x/qwen3-8b-elizabeth-simple")

# Tool use example
prompt = "Please help me calculate the square root of 144 using the calculator tool."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
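
Because the model emits tool calls through its chat template, a more typical invocation passes tool definitions via `apply_chat_template`. The sketch below reuses `model` and `tokenizer` from the example above and assumes the fine-tune retains Qwen3's tool-aware chat template; the `sqrt` tool schema is purely illustrative.

```python
# Hypothetical tool schema; the actual tool set is defined by the Elizabeth minipack.
tools = [{
    "type": "function",
    "function": {
        "name": "sqrt",
        "description": "Compute the square root of a number.",
        "parameters": {
            "type": "object",
            "properties": {"x": {"type": "number", "description": "Input value."}},
            "required": ["x"],
        },
    },
}]

messages = [{"role": "user", "content": "Calculate the square root of 144 using the calculator tool."}]

# Render the conversation with the tool definitions included in the prompt.
text = tokenizer.apply_chat_template(
    messages, tools=tools, add_generation_prompt=True, tokenize=False
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)

# Print only the newly generated tokens (the assistant's tool call or answer).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```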

## Training Methodology

### Pure Weight Evolution
This model was trained with a pure weight-evolution methodology: no external adapters, LoRA, or quantization were used. All of the base model's weights were updated, baking Elizabeth's identity and tool use capabilities directly into the model parameters.

### Data Quality
- **Dataset Size:** 198 carefully curated examples
- **Quality:** High-quality tool use demonstrations
- **Diversity:** Multiple tool types and usage patterns
- **Consistency:** Uniform formatting and instruction following

### Optimization
- **Gradient Accumulation:** 16 steps
- **Effective Batch Size:** 64
- **Learning Rate:** 2e-5
- **Optimizer:** AdamW with cosine scheduler
- **Epochs:** 3.0
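
As a rough illustration, these settings map onto Hugging Face `TrainingArguments` as sketched below. The per-device batch size of 2 is an assumption inferred from the effective batch size of 64 on 2 GPUs with 16 accumulation steps, and the output directory is hypothetical.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the training configuration listed above.
training_args = TrainingArguments(
    output_dir="qwen3-8b-elizabeth-simple",  # hypothetical path
    per_device_train_batch_size=2,           # assumed: 2 GPUs x 2 x 16 accumulation = 64 effective
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    optim="adamw_torch",                     # AdamW
    num_train_epochs=3.0,
    bf16=True,                               # bfloat16 precision, as in Training Details
)
```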

## Deployment

### Hardware Requirements
- **GPU Memory:** Minimum 80GB VRAM (recommended 120GB+)
- **Precision:** bfloat16 recommended
- **Batch Size:** 4 (recommended)

### Serving
Serving with vLLM is recommended for optimal performance:
```bash
python -m vllm.entrypoints.api_server \
  --model LevelUp2x/qwen3-8b-elizabeth-simple \
  --dtype bfloat16 \
  --gpu-memory-utilization 0.9
```
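
A minimal client sketch against the demo server started above, assuming vLLM's default port (8000) and its `/generate` endpoint; for an OpenAI-compatible API, launch the server with `vllm serve` instead.

```python
import requests

# Query the demo /generate endpoint exposed by vllm.entrypoints.api_server.
response = requests.post(
    "http://localhost:8000/generate",
    json={
        "prompt": "Please help me calculate the square root of 144 using the calculator tool.",
        "max_tokens": 256,
        "temperature": 0.7,
    },
)
print(response.json()["text"][0])
```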

## License

Apache 2.0

## Citation

```bibtex
@software{qwen3_8b_elizabeth_simple,
  title = {Qwen3-8B-Elizabeth-Simple: Tool Use Fine-Tuned Model},
  author = {ADAPT-Chase and Nova Prime},
  year = {2025},
  url = {https://huggingface.co/LevelUp2x/qwen3-8b-elizabeth-simple},
  publisher = {Hugging Face},
  version = {1.0.0}
}
```

## Contact

For questions about this model, please open an issue on the Hugging Face repository or contact the maintainers.