Update README.md
# 💻 Next-Codex (L846MoE)
### Code your future with our models.
## 📖 Overview
**Next-Codex 30B** is a high-performance, specialized **Mixture-of-Experts (MoE)** Large Language Model designed specifically for code generation, debugging, and software engineering tasks.
Unlike traditional dense models, **Next-Codex** utilizes a sparse architecture with **30 Billion total parameters**, but only activates **3 Billion parameters per token**. This unique design allows it to deliver the deep reasoning capabilities of a massive model while maintaining the ultra-low latency and inference cost of a lightweight 3B model. It is fine-tuned on a massive corpus of code across 20+ programming languages, making it the most efficient coding assistant in its class.
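The sparse design described above comes down to routing: a small router scores every expert for each token, but only the top-k experts actually run, so compute per token scales with k rather than with the total expert count. Below is a minimal, self-contained sketch of top-k expert routing; the expert count, scores, and `top_k` value are illustrative and are not Next-Codex's actual configuration.

```python
import math

def softmax(scores):
    """Normalize raw router scores into a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def route_token(router_scores, top_k=2):
    """Select the top_k experts for one token and renormalize their weights.

    Only the selected experts execute, so per-token compute scales with
    top_k, not with the total number of experts in the layer.
    """
    probs = softmax(router_scores)
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    chosen = ranked[:top_k]
    weight_sum = sum(probs[i] for i in chosen)
    return [(i, probs[i] / weight_sum) for i in chosen]

# One token's router scores over 8 hypothetical experts; only 2 run.
scores = [0.1, 2.0, -1.0, 0.5, 1.5, -0.3, 0.0, 0.8]
experts = route_token(scores, top_k=2)
print(experts)  # two (expert_index, weight) pairs, weights summing to 1
```

In a real MoE layer the chosen experts' outputs are combined with these renormalized weights; the sketch only shows the selection step.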
---
**Next-Codex 30B** achieves state-of-the-art results among open-weight coding models, balancing extreme efficiency with high accuracy.
| Benchmark | Task Description | Next-Codex | CodeLlama 34B | DeepSeek Coder 33B |
| :--- | :--- | :---: | :---: | :---: |
| **HumanEval** | Python Code Generation | **82.4%** | 48.2% | 79.3% |
| **MBPP** | Basic Python Programming | **86.1%** | 56.0% | 84.0% |
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    # ...
)

messages = [
    {"role": "system", "content": "You are Next-Codex, an expert software engineer and AI coding assistant."},
    {"role": "user", "content": "Write a highly optimized Rust function to calculate the Fibonacci sequence using memoization."}
]
```
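In a typical `transformers`-style workflow, a `messages` list like the one above is flattened into a single prompt string by the tokenizer's chat template (via `tokenizer.apply_chat_template`) before generation. The sketch below illustrates that flattening with a made-up tag format; it is not Next-Codex's actual template.

```python
def render_chat(messages):
    """Flatten role-tagged messages into one prompt string.

    Mimics what a chat template does conceptually; the <|role|> tag
    format here is illustrative, not the model's real template.
    """
    parts = [f"<|{m['role']}|>\n{m['content']}" for m in messages]
    parts.append("<|assistant|>\n")  # cue the model to respond
    return "\n".join(parts)

messages = [
    {"role": "system", "content": "You are Next-Codex, an expert software engineer and AI coding assistant."},
    {"role": "user", "content": "Write a highly optimized Rust function to calculate the Fibonacci sequence using memoization."}
]
prompt = render_chat(messages)
print(prompt.splitlines()[0])  # → <|system|>
```

The trailing `<|assistant|>` marker is the conventional way a template signals that the model should begin its reply at that point.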