# 🧠 OpenModel-1T-A50B-Instruct

- **Repository:** `thenexthub/OpenModel-1T-A50B-Instruct`
- **Organization:** NeXTHub
- **Model Type:** Mixture-of-Experts (MoE) Large Language Model
- **Parameters:** 1 Trillion total | 50 Billion active per forward pass
- **Context Length:** 128K tokens
- **Architecture:** Evo-CoT MoE Transformer (Evolutionary Chain-of-Thought)
- **Training Tokens:** 20+ Trillion reasoning-dense, high-quality tokens

---
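To make the "1 Trillion total | 50 Billion active" figure concrete: in an MoE layer, a learned router sends each token to only a few experts, so most parameters sit idle on any given forward pass. The sketch below is a generic, illustrative top-k routing function, not OpenModel's actual router; the expert count and logits are made up for the example.

```python
import math

def top_k_routing(logits, k=2):
    """Generic MoE routing sketch: pick the k experts with the highest
    router logits, then softmax-normalize their scores into mixing weights.
    Only these k experts' parameters are "active" for this token."""
    ranked = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    chosen = ranked[:k]
    exps = [math.exp(logits[i]) for i in chosen]
    total = sum(exps)
    weights = [e / total for e in exps]
    return chosen, weights

# Hypothetical example: a layer with 8 experts, routing each token to the top 2.
router_logits = [0.1, 2.3, -0.5, 1.7, 0.0, -1.2, 0.9, 0.4]
experts, weights = top_k_routing(router_logits, k=2)
# experts → [1, 3]; weights sum to 1, favoring expert 1.
```

With k experts selected out of many, the active parameter count per token is roughly k/​N of the expert parameters plus the shared (dense) layers, which is how a 1T-parameter model can run with ~50B active parameters per forward pass.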