zeekay committed on
Commit bc4d847 · verified · 1 Parent(s): 7464946

Update model files

Files changed (1): README.md +23 -130
README.md CHANGED
@@ -1,154 +1,47 @@
  ---
- license: apache-2.0
- tags:
- - zen-research
- - zen-ai
- - hypermodal
- - language-model
  language:
  - en
- library_name: transformers
  pipeline_tag: text-generation
  ---
 
- # zen-eco-4b-instruct

- Efficient 4B parameter instruction-following model for general-purpose tasks

  ## Model Details

- - **Developed by**: Zen Research Authors
- - **Organization**: Zen Research DAO under [Zoo Labs Inc](https://github.com/zenlm) (501(c)(3) Non-Profit)
- - **Location**: San Francisco, California, USA
- - **Model type**: language-model
- - **Architecture**: Qwen3-4B
- - **Parameters**: 4B
  - **License**: Apache 2.0
- - **Training**: Trained with [Zen Gym](https://github.com/zenlm/zen-gym)
- - **Inference**: Optimized for [Zen Engine](https://github.com/zenlm/zen-engine)
-
- ## 🌟 Zen AI Ecosystem
-
- This model is part of the **Zen Research** hypermodal AI family - the world's most comprehensive open-source AI ecosystem.
-
- ### Complete Model Family
-
- **Language Models:**
- - [zen-nano-0.6b](https://huggingface.co/zenlm/zen-nano-0.6b) - 0.6B edge model (44K tokens/sec)
- - [zen-eco-4b-instruct](https://huggingface.co/zenlm/zen-eco-4b-instruct) - 4B instruction model
- - [zen-eco-4b-thinking](https://huggingface.co/zenlm/zen-eco-4b-thinking) - 4B reasoning model
- - [zen-agent-4b](https://huggingface.co/zenlm/zen-agent-4b) - 4B tool-calling agent
-
- **3D & World Generation:**
- - [zen-3d](https://huggingface.co/zenlm/zen-3d) - Controllable 3D asset generation
- - [zen-voyager](https://huggingface.co/zenlm/zen-voyager) - Camera-controlled world exploration
- - [zen-world](https://huggingface.co/zenlm/zen-world) - Large-scale world simulation
-
- **Video Generation:**
- - [zen-director](https://huggingface.co/zenlm/zen-director) - Text/image-to-video (5B)
- - [zen-video](https://huggingface.co/zenlm/zen-video) - Professional video synthesis
- - [zen-video-i2v](https://huggingface.co/zenlm/zen-video-i2v) - Image-to-video animation

- **Audio Generation:**
- - [zen-musician](https://huggingface.co/zenlm/zen-musician) - Music generation (7B)
- - [zen-foley](https://huggingface.co/zenlm/zen-foley) - Video-to-audio Foley effects
-
- **Infrastructure:**
- - [Zen Gym](https://github.com/zenlm/zen-gym) - Unified training platform
- - [Zen Engine](https://github.com/zenlm/zen-engine) - High-performance inference
-
- ## Usage
-
- ### Quick Start

  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer

- model = AutoModelForCausalLM.from_pretrained("zenlm/zen-eco-4b-instruct")
- tokenizer = AutoTokenizer.from_pretrained("zenlm/zen-eco-4b-instruct")

- inputs = tokenizer("Hello!", return_tensors="pt")
- outputs = model.generate(**inputs)
- print(tokenizer.decode(outputs[0]))
- ```
72
-
- ### With Zen Engine
-
- ```bash
- # High-performance inference (44K tokens/sec on M3 Max)
- zen-engine serve --model zenlm/zen-eco-4b-instruct --port 3690
- ```
-
- ```python
- # OpenAI-compatible API
- from openai import OpenAI
-
- client = OpenAI(base_url="http://localhost:3690/v1")
- response = client.chat.completions.create(
-     model="zenlm/zen-eco-4b-instruct",
-     messages=[{"role": "user", "content": "Hello!"}]
- )
- ```
-
- ## Training
-
- Fine-tune with [Zen Gym](https://github.com/zenlm/zen-gym):
-
- ```bash
- git clone https://github.com/zenlm/zen-gym
- cd zen-gym
-
- # LoRA fine-tuning
- llamafactory-cli train --config configs/zen_lora.yaml \
-     --model_name_or_path zenlm/zen-eco-4b-instruct
-
- # GRPO reinforcement learning (40-60% memory reduction)
- llamafactory-cli train --config configs/zen_grpo.yaml \
-     --model_name_or_path zenlm/zen-eco-4b-instruct
- ```
-
- Supported methods: LoRA, QLoRA, DoRA, GRPO, GSPO, DPO, PPO, KTO, ORPO, SimPO, Unsloth
-
- ## Performance
-
- - **Speed**: 32K tokens/sec (M3 Max), 28K tokens/sec (RTX 4090)
- - **Memory**: 2.5GB (Q4_K_M) to 8GB (F16)
- - **Quality**: 90%+ accuracy on MMLU, GSM8K
- - **Formats**: PyTorch, MLX, GGUF (Q2_K to F16)
-
- ## Ethical Considerations
-
- - **Open Research**: Released under Apache 2.0 for maximum accessibility
- - **Environmental Impact**: Optimized for eco-friendly deployment
- - **Transparency**: Full training details and model architecture disclosed
- - **Safety**: Comprehensive testing and evaluation
- - **Non-Profit**: Developed by Zoo Labs Inc (501(c)(3)) for public benefit
-
- ## Citation
-
- ```bibtex
- @misc{zenzeneco4binstruct2025,
-   title={zen-eco-4b-instruct: Efficient 4B parameter instruction-following model for general-purpose tasks},
-   author={Zen Research Authors},
-   year={2025},
-   publisher={Zoo Labs Inc},
-   organization={Zen Research DAO},
-   url={https://huggingface.co/zenlm/zen-eco-4b-instruct}
- }
  ```

  ## Links

- - **Organization**: [github.com/zenlm](https://github.com/zenlm) • [huggingface.co/zenlm](https://huggingface.co/zenlm)
- - **Training Platform**: [Zen Gym](https://github.com/zenlm/zen-gym)
- - **Inference Engine**: [Zen Engine](https://github.com/zenlm/zen-engine)
- - **Parent Org**: [Zoo Labs Inc](https://github.com/zenlm) (501(c)(3) Non-Profit, San Francisco)
- - **Contact**: dev@hanzo.ai • +1 (913) 777-4443
-
- ## License
-
- Apache License 2.0
-
- Copyright 2025 Zen Research Authors

  ---

- **Zen Research** - Building open, eco-friendly AI for everyone 🌱
 
  ---
  language:
  - en
+ license: apache-2.0
+ tags:
+ - zen-lm
+ - transformers
+ - safetensors
+ base_model: Qwen/Qwen3-4B
  pipeline_tag: text-generation
  ---

+ # zen-eco-instruct

+ Efficient 4B instruction-following model

  ## Model Details

+ - **Size**: 4B
+ - **Base**: Qwen/Qwen3-4B
+ - **Org**: Hanzo AI × Zoo Labs Foundation
  - **License**: Apache 2.0
+ - **Code**: https://github.com/zenlm/zen-eco-instruct

+ ## Quick Start

  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer

+ model = AutoModelForCausalLM.from_pretrained("zenlm/zen-eco-instruct")
+ tokenizer = AutoTokenizer.from_pretrained("zenlm/zen-eco-instruct")

+ inputs = tokenizer("Hello!", return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=100)
+ print(tokenizer.decode(outputs[0]))
  ```
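Since the card declares `Qwen/Qwen3-4B` as the base model, instruction prompts are normally built with the tokenizer's chat template (`tokenizer.apply_chat_template`) rather than passed as raw strings. As a rough illustration of what that template produces, here is a hand-rolled sketch of the ChatML-style layout Qwen-family models use; the exact special tokens are an assumption inherited from the Qwen base, not something this card states:

```python
# Sketch: ChatML-style prompt layout assumed from the Qwen/Qwen3-4B base.
# In real use, prefer tokenizer.apply_chat_template, which emits the
# model's actual template; the <|im_start|>/<|im_end|> markers here are
# the Qwen convention and are an assumption for this model.
def format_chat(messages):
    # Wrap each turn in role markers, then leave an open assistant
    # header so the model continues as the assistant.
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
             for m in messages]
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = format_chat([{"role": "user", "content": "Hello!"}])
print(prompt)
```

The formatted string would then be tokenized and passed to `model.generate` in place of the raw `"Hello!"` above.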

  ## Links

+ - [GitHub Org](https://github.com/zenlm)
+ - [Training: Zen Gym](https://github.com/zenlm/gym)
+ - [Inference: Zen Engine](https://github.com/zenlm/engine)
+ - [Model Repo](https://github.com/zenlm/zen-eco-instruct)

  ---

+ **Zen LM** · Building AI that's local, private, and free