zeekay committed
Commit 75c5a27 · verified · 1 Parent(s): 6514c2f

feat: Zen4 zen4-micro branding update

Files changed (1): README.md +68 -34
README.md CHANGED
@@ -1,51 +1,85 @@
  ---
- library_name: transformers
  license: apache-2.0
- license_link: https://huggingface.co/Qwen/Qwen3.5-2B/blob/main/LICENSE
- pipeline_tag: image-text-to-text
- base_model:
- - Qwen/Qwen3.5-2B
  tags:
  - abliterated
- - uncensored
  ---

- # huihui-ai/Huihui-Qwen3.5-2B-abliterated
-
- This is an uncensored version of [Qwen/Qwen3.5-2B](https://huggingface.co/Qwen/Qwen3.5-2B) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to know more about it).
- This is a crude, proof-of-concept implementation to remove refusals from an LLM model without using TransformerLens.

- ## ollama

- Please use the latest version of [ollama v0.17.5](https://github.com/ollama/ollama/releases/tag/v0.17.5)
-
- You can use [huihui_ai/qwen3.5-abliterated:2b](https://ollama.com/huihui_ai/qwen3.5-abliterated:2b) directly,
- ```
- ollama run huihui_ai/qwen3.5-abliterated:2b
- ```

- ### Usage Warnings

- - **Risk of Sensitive or Controversial Outputs**: This model’s safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs.
- - **Not Suitable for All Audiences**: Due to limited content filtering, the model’s outputs may be inappropriate for public settings, underage users, or applications requiring high security.
- - **Legal and Ethical Responsibilities**: Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences.
- - **Research and Experimental Use**: It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications.
- - **Monitoring and Review Recommendations**: Users are strongly advised to monitor model outputs in real-time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content.
- - **No Default Safety Guarantees**: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.

- ### Donation
- ##### Your donation helps us continue our further development and improvement, a cup of coffee can do it.
- - bitcoin:
- ```
- bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge
- ```
- - Support our work on [Ko-fi](https://ko-fi.com/huihuiai)!
 
 
  ---
  license: apache-2.0
+ language:
+ - en
+ - zh
+ - ja
+ - ko
+ - fr
+ - de
+ - es
+ - pt
+ - ru
+ - ar
  tags:
+ - zen4
+ - zenlm
+ - hanzo
+ - frontier-ai
  - abliterated
+ base_model: huihui-ai/Huihui-Qwen3.5-2B-abliterated
+ pipeline_tag: text-generation
+ library_name: transformers
  ---

+ # Zen4 Micro

+ **Zen4 Micro** is a 2B-parameter language model from the [Zen4 family](https://zenlm.org) by [Zen LM](https://huggingface.co/zenlm) and [Hanzo AI](https://hanzo.ai).

+ Built on abliterated (uncensored) weights with the Zen MoDE (Mixture of Distilled Experts) architecture for unrestricted, open-ended AI assistance.

+ ## Model Details

+ | Property | Value |
+ |----------|-------|
+ | **Parameters** | 2B total, 2B active |
+ | **Architecture** | Zen MoDE |
+ | **Context** | 262K tokens |
+ | **License** | Apache-2.0 |
+ | **Family** | Zen4 |
+ | **Tier** | Small |
+ | **Creator** | Zen LM / Hanzo AI |

+ ## Usage

+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model = AutoModelForCausalLM.from_pretrained("zenlm/zen4-micro", torch_dtype="auto")
+ tokenizer = AutoTokenizer.from_pretrained("zenlm/zen4-micro")
+
+ messages = [{"role": "user", "content": "Hello, who are you?"}]
+ text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ inputs = tokenizer(text, return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=512)
+ print(tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
+ ```

+ ## Zen4 Family
+
+ | Model | Parameters | Context | HuggingFace |
+ |-------|-----------|---------|-------------|
+ | Zen4 Nano | 0.8B | 262K | [zenlm/zen4-nano](https://huggingface.co/zenlm/zen4-nano) |
+ | **Zen4 Micro** | **2B** | **262K** | [zenlm/zen4-micro](https://huggingface.co/zenlm/zen4-micro) |
+ | Zen4 Mini | 4B | 262K | [zenlm/zen4-mini](https://huggingface.co/zenlm/zen4-mini) |
+ | Zen4 | 9B | 262K | [zenlm/zen4](https://huggingface.co/zenlm/zen4) |
+ | Zen4 Pro | 27B | 262K | [zenlm/zen4-pro](https://huggingface.co/zenlm/zen4-pro) |
+ | Zen4 Max | 35B MoE (3B active) | 262K | [zenlm/zen4-max](https://huggingface.co/zenlm/zen4-max) |
+ | Zen4 Coder Flash | 31B MoE (3B active) | 131K | [zenlm/zen4-coder-flash](https://huggingface.co/zenlm/zen4-coder-flash) |
+ | Zen4 Pro Max | 80B MoE (3B active) | 256K | [zenlm/zen4-pro-max](https://huggingface.co/zenlm/zen4-pro-max) |
+ | Zen4 Coder | 80B MoE (3B active) | 256K | [zenlm/zen4-coder](https://huggingface.co/zenlm/zen4-coder) |
+ | Zen4 Mega | 122B MoE (10B active) | 262K | [zenlm/zen4-mega](https://huggingface.co/zenlm/zen4-mega) |
+ | Zen4 Thunder | 230B MoE (10B active) | 1M | [zenlm/zen4-thunder](https://huggingface.co/zenlm/zen4-thunder) |
+ | Zen4 Storm | 456B MoE (45B active) | 1M | [zenlm/zen4-storm](https://huggingface.co/zenlm/zen4-storm) |
+ | Zen4 Titan | 744B MoE (40B active) | 128K | [zenlm/zen4-titan](https://huggingface.co/zenlm/zen4-titan) |
+ | Zen4 Ultra | 1.04T MoE (32B active) | 256K | [zenlm/zen4-ultra](https://huggingface.co/zenlm/zen4-ultra) |
+ | Zen4 Apex | 1T MoE (50B active) | 128K | [zenlm/zen4-apex](https://huggingface.co/zenlm/zen4-apex) |
+
+ ## Links
+
+ - [Zen LM](https://zenlm.org) | [Hanzo AI](https://hanzo.ai) | [Hanzo Chat](https://hanzo.chat)
+ - [All Zen Models](https://huggingface.co/zenlm)

+ ---

+ *Zen AI: Clarity Through Intelligence*
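The new card names "Zen MoDE (Mixture of Distilled Experts)" without specifying it. As a hedged sketch only, assuming MoDE follows standard top-k mixture-of-experts routing (the `TopKRouter` class, the linear gate, and `k=2` are our assumptions, not a published MoDE design), the per-token routing step typically looks like:

```python
import torch
import torch.nn as nn

class TopKRouter(nn.Module):
    """Illustrative top-k expert gate, as used in standard MoE layers."""

    def __init__(self, d_model: int, n_experts: int, k: int = 2):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts, bias=False)
        self.k = k

    def forward(self, x: torch.Tensor):
        # x: (tokens, d_model) -> per-token expert indices and mixing weights
        logits = self.gate(x)
        weights, indices = torch.topk(logits, self.k, dim=-1)
        weights = torch.softmax(weights, dim=-1)  # weights over the chosen k experts
        return weights, indices
```

Each token's hidden state would then be dispatched to its `k` selected expert feed-forward blocks and the outputs combined with the softmax weights, which is how "2B total, 2B active"-style parameter counts arise in MoE families.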