HyzeAI committed on
Commit 8b8bdd9 · verified · 1 Parent(s): 69bd82c

Update README.md

Files changed (1):
  1. README.md +53 -121
README.md CHANGED
@@ -10,170 +10,102 @@ metrics:
   - accuracy
 ---
 <p align="center">
-  <img src="https://i.imgur.com/ePJMLNp.png" alt="Hyze Logo" width="405"/>
 </p>
 
 <p align="center">
-  <strong>20 Billion Parameters Research-Grade Open Weights</strong>
 </p>
 
 <p align="center">
-  <a href="https://hyzebot.vercel.app">🌐 Try Hyze RE1 Pro</a> •
-  <a href="https://huggingface.co/HyzeAI">🤗 Hugging Face</a> •
-  <a href="https://github.com/HyzeAI">📁 GitHub</a>
 </p>
 
 ---
 
 ## 🚀 Overview
 
-**Hyze RE1 Pro** is a **20 billion parameter** transformer model designed exclusively for **research purposes**. Built on the philosophy that **frontier AI should not belong only to those with billion-dollar budgets**, RE1 Pro delivers strong reasoning capabilities in a fully open-weight package.
-
-| Attribute | Details |
-|----------|---------|
-| **Parameters** | 20B |
-| **Architecture** | Transformer (Decoder-only) |
-| **Precision** | BF16 / INT4 (quantized) |
-| **Context Length** | 32K tokens |
-| **License** | Apache 2.0 |
-| **Target** | Academic / Non-Commercial Research |
-
----
-
-## 🧠 Capabilities
-
-Hyze RE1 Pro excels at:
-
-- 🔬 **Scientific reasoning** – Physics, mathematics, code
-- 🌌 **Space & astronomy** – Continued pretraining on domain-specific corpora
-- 📚 **Research summarization** – ArXiv, technical papers
-- 🧮 **Complex instruction following** – Multi-step reasoning tasks
-
-> ⚠️ **Research Use Only**
-> RE1 Pro is not optimized for general consumer chatbots. It is a **research instrument**, not a product. For general chat, see [HyzeMini](https://huggingface.co/HyzeAI/HyzeMini).
-
----
-
-## 📊 Benchmarks (Preliminary)
-
-| Benchmark | Score (20B) | Comparison |
-|-----------|-------------|------------|
-| MMLU (5-shot) | **68.2** | LLaMA2-13B: 54.8 |
-| HumanEval (pass@1) | **37.4** | CodeLlama-13B: 36.0 |
-| GSM8K (8-shot) | **62.1** | Mistral-7B: 52.2 |
-| MATH (4-shot) | **26.8** | LLaMA2-34B: 27.0 |
-
-*Benchmarks conducted in BF16. Quantized versions may show slight degradation.*
 
 ---
 
-## ⚙️ Installation & Usage
-
-### Python (Transformers)
-
-```python
-from transformers import AutoModelForCausalLM, AutoTokenizer
-
-model = AutoModelForCausalLM.from_pretrained(
-    "HyzeAI/Hyze-RE1-Pro",
-    torch_dtype="auto",
-    device_map="auto"
-)
-
-tokenizer = AutoTokenizer.from_pretrained("HyzeAI/Hyze-RE1-Pro")
-
-prompt = "Explain the rocket equation in simple terms."
-inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
-
-outputs = model.generate(
-    **inputs,
-    max_new_tokens=256,
-    temperature=0.7,
-    top_p=0.9
-)
-
-print(tokenizer.decode(outputs[0], skip_special_tokens=True))
-```
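The `generate()` call above samples with `temperature=0.7` and `top_p=0.9`. For intuition about what those knobs do, here is a minimal pure-Python sketch of nucleus (top-p) filtering; it is illustrative only, not the transformers implementation, and the token-to-logit mapping is made up:

```python
import math

def top_p_candidates(logits, temperature=0.7, top_p=0.9):
    """Smallest set of highest-probability tokens whose cumulative mass reaches top_p."""
    # Temperature-scaled softmax over a {token: logit} mapping
    m = max(l / temperature for l in logits.values())
    exps = {t: math.exp(l / temperature - m) for t, l in logits.items()}
    z = sum(exps.values())
    probs = {t: e / z for t, e in exps.items()}
    # Keep tokens from most to least probable until the mass reaches top_p
    kept, mass = {}, 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept[tok] = p
        mass += p
        if mass >= top_p:
            break
    return kept
```

Lower temperature sharpens the distribution, so fewer tokens survive the cutoff; generation then samples only from the kept set.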
 
 
 
-### llama.cpp (CPU + Quantized)
-
-```bash
-# Download GGUF from Hugging Face
-wget https://huggingface.co/HyzeAI/Hyze-RE1-Pro-GGUF/resolve/main/hyze-re1-pro-q4_k_m.gguf
-
-./llama-cli -m hyze-re1-pro-q4_k_m.gguf \
-  -p "List three challenges of Mars colonization:" \
-  -n 512 \
-  -t 8
-```
 
 ---
 
-## 💻 Hardware Requirements
-
-| Mode | VRAM | RAM | Recommended Hardware |
-|------|------|-----|---------------------|
-| FP16 (full) | **40GB+** | 64GB | 1x A100 / 2x RTX 3090 |
-| INT4 (Q4) | **12GB** | 16GB | RTX 4070 Ti / Mac M2+ |
-| CPU (GGUF) | — | 32GB | AMD EPYC / Intel Xeon |
-
-> 💡 **Quantized versions** (4-bit) make RE1 Pro runnable on consumer hardware with minimal quality loss.
 
 ---
 
-## 🧪 Research Access
-
-Hyze RE1 Pro is **free and open weights** under Apache 2.0.
-You do not need to apply for access. No approval required. No gated repository.
-
-**We believe research should not wait for permission.**
 
 ---
 
-## 🧭 About Hyze AI
-
-<p align="left">
-  <img src="https://i.imgur.com/ePJMLNp.png" alt="Hyze Logo" width="30"/>
-</p>
-
-**Hyze AI** is a one-person research lab founded by **Hitesh**, a 13-year-old builder.
-Hyze exists to prove that **age and budget are not prerequisites for advancing AI**.
-
-- 🚀 **Mission**: Democratize large-scale AI research
-- 🔓 **License Philosophy**: Apache 2.0 — no strings attached
-- 🌍 **Focus**: Space, science, and accessible reasoning
-
-> *"DeepSeek proved you don't need billions. We're proving you don't need to be 30."*
 
 ---
 
-## 📎 Citation
-
-```bibtex
-@misc{hyze-re1-pro-2025,
-  author = {Hitesh Vinothkumar},
-  title = {Hyze RE1 Pro: A 20B Parameter Research Model},
-  year = {2025},
-  publisher = {Hugging Face},
-  url = {https://huggingface.co/HyzeAI/Hyze-RE1-Pro}
-}
-```
 
 ---
 
-## 🤝 Support & Contact
-
-- 💬 **Try the live demo**: [https://hyzeai.vercel.app](https://hyzeai.vercel.app)
-- 📧 **Email**: hiteshv2603@gmail.com
-- 🐦 **Twitter/X**: [@HyzeAI](https://twitter.com/HyzeAI)
-- 💼 **GitHub**: [HyzeAI](https://github.com/HyzeAI)
-
-**For research collaborations, compute sponsorship, or academic partnerships — reach out.**
-
----
-
-<p align="center">
-  <sub>Built with Xtuner Model and zero GPUs (so far).</sub>
-  <br/>
-  <sub>© 2025 Hyze AI. Apache 2.0.</sub>
-</p>
   - accuracy
 ---
 <p align="center">
+  <img src="https://i.imgur.com/ePJMLNp.png" alt="Hyze Logo" width="200"/>
+  &nbsp;&nbsp;&nbsp;
+  <img src="https://i.imgur.com/BVjGIik.png" alt="Founder" width="200"/>
 </p>
 
+<h1 align="center">Hyze RE1 Pro</h1>
+
 <p align="center">
+  20B Open-Weight Research Model by <b>Hyze AI</b>
 </p>
 
 <p align="center">
+  🌐 <a href="https://hyzeai.vercel.app">hyzeai.vercel.app</a> •
+  📘 <a href="https://hyzedocs.vercel.app">hyzedocs.vercel.app</a> •
+  🧠 <a href="https://hyzecode.vercel.app">hyzecode.vercel.app</a>
 </p>
 
 ---
 
 ## 🚀 Overview
 
+**Hyze RE1 Pro** is a **20-billion-parameter transformer model** designed exclusively for **research and advanced reasoning tasks**.
+
+Built on the philosophy that:
+
+> Frontier AI should not belong only to billion-dollar budgets.
+
+RE1 Pro delivers strong reasoning performance in a **fully open-weight package**, empowering researchers, developers, and independent innovators.
 
 ---
 
+## 🧠 Core Focus
+
+Hyze RE1 Pro is optimized for:
+
+- 🔬 Advanced reasoning
+- 📊 Research-oriented analysis
+- 🧩 Multi-step problem solving
+- 📚 Long-form structured explanations
+- 🧠 Logical and technical tasks
+
+This model prioritizes **clarity, depth, and reasoning structure** over casual chat behavior.
 
 ---
 
+## 📊 Model Specifications
+
+- **Architecture:** Transformer
+- **Parameters:** 20 Billion
+- **Type:** Open-weight research model
+- **Primary Domain:** Reasoning & Research
+- **Language:** English
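The parameter count above translates directly into memory needs. A back-of-envelope sketch, counting weights only (no KV cache or activations) and assuming 2 bytes per parameter for BF16 and half a byte per parameter for 4-bit quantization:

```python
PARAMS = 20e9  # 20 billion parameters

def weights_gib(params, bytes_per_param):
    """Approximate weight storage in GiB (weights only)."""
    return params * bytes_per_param / 2**30

print(f"BF16 weights: ~{weights_gib(PARAMS, 2.0):.0f} GiB")   # roughly 37 GiB
print(f"4-bit weights: ~{weights_gib(PARAMS, 0.5):.0f} GiB")  # roughly 9 GiB
```

These rough figures are why full-precision inference for a 20B model wants a 40 GB-class accelerator, while 4-bit quantized weights can fit on consumer GPUs.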
 
 ---
 
+## 🧪 Intended Use
+
+Hyze RE1 Pro is designed for:
+
+- Academic research experiments
+- Independent AI research labs
+- Reasoning benchmark testing
+- Long-form technical analysis
+- Open-weight innovation
 
 ---
 
+## Why RE1 Pro?
+
+While many frontier models remain closed and restricted, RE1 Pro embraces:
+
+- Accessibility
+- Transparency
+- Open experimentation
+- Independent research freedom
+
+It aims to reduce the barrier between individual researchers and high-performance AI systems.
 
 ---
 
+## ⚠️ Limitations
+
+- Large compute requirements (20B parameters)
+- Not optimized for casual short-form chat
+- Outputs should be validated before use in academic or production contexts
 
 ---
+## 🧪 Example Usage
+
+```python
+from transformers import pipeline
+
+generator = pipeline(
+    "text-generation",
+    model="HyzeAI/Hyze-RE1-Pro"
+)
+
+print(generator("Explain the mathematical intuition behind backpropagation."))
+```