Update README.md
README.md (CHANGED)
@@ -53,7 +53,7 @@ tags:
 
 ## 🚀 Performance Highlights
 + **Leading MoE Architecture**:
-The open-source **Mixture-of-Experts (MoE) diffusion large language model
+The open-source **Mixture-of-Experts (MoE) diffusion large language model** continually trained on the Ling2.0 series with approximately **20 trillion tokens**.
 + **Efficient Inference**:
 With **16 billion total parameters**, only **1.4 billion** are activated during inference. LLaDA2.0-mini significantly reduces computational costs while outperforming open-source dense models of similar scale.
 + **Impressive Performance on Code & Complex Reasoning**:
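
To make the "16 billion total / 1.4 billion activated" claim above concrete, here is a minimal sketch of how total versus per-token activated parameters are counted in a top-k routed MoE feed-forward layer. The layer width, expert count, and top-k value below are toy numbers chosen only to mirror a roughly 11x total-to-active ratio; they are assumptions for illustration, not LLaDA2.0-mini's actual configuration.

```python
# Illustrative sketch only: toy MoE parameter accounting, NOT LLaDA2.0-mini's real config.
# In a top-k routed MoE layer, the checkpoint stores all E expert FFNs, but each
# token is routed through only k of them, so the parameters "activated" per token
# are a small fraction of the total parameter count.

def ffn_params(d_model: int, d_ff: int) -> int:
    """Parameters of a simple two-matrix feed-forward expert (biases ignored)."""
    return 2 * d_model * d_ff

def moe_layer_params(d_model: int, d_ff: int, n_experts: int, top_k: int):
    total = n_experts * ffn_params(d_model, d_ff)   # stored in the checkpoint
    active = top_k * ffn_params(d_model, d_ff)      # used per token at inference
    return total, active

# Toy values (assumed): 64 experts with 6 routed per token gives ~10.7x sparsity,
# in the same ballpark as 16B total vs. 1.4B activated.
total, active = moe_layer_params(d_model=2048, d_ff=1408, n_experts=64, top_k=6)
print(f"per-layer expert params: total={total:,}, activated per token={active:,}")
```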