---
license: apache-2.0
language:
- en
- zh
- ja
- ko
- fr
- de
- es
- pt
- ru
- ar
tags:
- zen4
- zenlm
- hanzo
- frontier-ai
- abliterated
base_model: Chompa1422/Qwen3.5-122B-A10B-abliterated
pipeline_tag: text-generation
library_name: transformers
---

# Zen4 Mega

**Zen4 Mega** is a 122B-parameter MoE language model (10B active per token) from the [Zen4 family](https://zenlm.org) by [Zen LM](https://huggingface.co/zenlm) and [Hanzo AI](https://hanzo.ai).

It is built on abliterated (uncensored) weights with the Zen4 Frontier architecture for unrestricted, open-ended AI assistance.

## Model Details

| Property | Value |
|----------|-------|
| **Parameters** | 122B MoE total, 10B active |
| **Architecture** | Zen4 Frontier |
| **Context** | 262K tokens |
| **License** | Apache-2.0 |
| **Family** | Zen4 |
| **Tier** | Large |
| **Creator** | Zen LM / Hanzo AI |
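
Since this is a mixture-of-experts model, all 122B weights must be resident (in GPU memory, CPU RAM, or offloaded storage) even though only about 10B parameters are active per token. As a rough sizing sketch (an estimate only; actual usage also depends on the KV cache, activations, and runtime overhead):

```python
# Back-of-the-envelope weight memory for a 122B-parameter model.
# Estimate only: real usage adds KV cache and framework overhead.
total_params = 122e9

for dtype, bytes_per_param in [("bf16", 2), ("int8", 1), ("int4", 0.5)]:
    gb = total_params * bytes_per_param / 1e9
    print(f"{dtype}: ~{gb:.0f} GB of weights")
# bf16: ~244 GB, int8: ~122 GB, int4: ~61 GB
```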

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# device_map="auto" shards the weights across available devices (requires accelerate);
# torch_dtype="auto" loads them in the checkpoint's native precision.
model = AutoModelForCausalLM.from_pretrained(
    "zenlm/zen4-mega", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("zenlm/zen4-mega")

# Format the conversation with the model's chat template.
messages = [{"role": "user", "content": "Hello, who are you?"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```
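
For interactive use, transformers' `TextStreamer` prints tokens as they are generated. A minimal sketch, reusing the `model`, `tokenizer`, and `inputs` objects from the snippet above:

```python
from transformers import TextStreamer

# Stream decoded tokens to stdout as generation proceeds,
# skipping the prompt and any special tokens.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**inputs, streamer=streamer, max_new_tokens=512)
```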

## Zen4 Family

| Model | Parameters | Context | HuggingFace |
|-------|-----------|---------|-------------|
| Zen4 Nano | 0.8B | 262K | [zenlm/zen4-nano](https://huggingface.co/zenlm/zen4-nano) |
| Zen4 Micro | 2B | 262K | [zenlm/zen4-micro](https://huggingface.co/zenlm/zen4-micro) |
| Zen4 Mini | 4B | 262K | [zenlm/zen4-mini](https://huggingface.co/zenlm/zen4-mini) |
| Zen4 | 9B | 262K | [zenlm/zen4](https://huggingface.co/zenlm/zen4) |
| Zen4 Pro | 27B | 262K | [zenlm/zen4-pro](https://huggingface.co/zenlm/zen4-pro) |
| Zen4 Max | 35B MoE (3B active) | 262K | [zenlm/zen4-max](https://huggingface.co/zenlm/zen4-max) |
| Zen4 Coder Flash | 31B MoE (3B active) | 131K | [zenlm/zen4-coder-flash](https://huggingface.co/zenlm/zen4-coder-flash) |
| Zen4 Pro Max | 80B MoE (3B active) | 256K | [zenlm/zen4-pro-max](https://huggingface.co/zenlm/zen4-pro-max) |
| Zen4 Coder | 80B MoE (3B active) | 256K | [zenlm/zen4-coder](https://huggingface.co/zenlm/zen4-coder) |
| **Zen4 Mega** | **122B MoE (10B active)** | **262K** | [zenlm/zen4-mega](https://huggingface.co/zenlm/zen4-mega) |
| Zen4 Thunder | 230B MoE (10B active) | 1M | [zenlm/zen4-thunder](https://huggingface.co/zenlm/zen4-thunder) |
| Zen4 Storm | 456B MoE (45B active) | 1M | [zenlm/zen4-storm](https://huggingface.co/zenlm/zen4-storm) |
| Zen4 Titan | 744B MoE (40B active) | 128K | [zenlm/zen4-titan](https://huggingface.co/zenlm/zen4-titan) |
| Zen4 Ultra | 1.04T MoE (32B active) | 256K | [zenlm/zen4-ultra](https://huggingface.co/zenlm/zen4-ultra) |
| Zen4 Ultra Max | 1T MoE (50B active) | 128K | [zenlm/zen4-ultra-max](https://huggingface.co/zenlm/zen4-ultra-max) |

## Links

- [Zen LM](https://zenlm.org) | [Hanzo AI](https://hanzo.ai) | [Hanzo Chat](https://hanzo.chat)
- [All Zen Models](https://huggingface.co/zenlm)

---

*Zen AI: Clarity Through Intelligence*