Update README.md
README.md CHANGED

@@ -46,7 +46,7 @@ tags:

## 🚀 Performance Highlights

+ **Leading MoE Architecture**: The open-source **Mixture-of-Experts (MoE) diffusion large language model** continually trained on the Ling2.0 series with approximately **20 trillion tokens**.
+ **Efficient Inference**: With **100 billion total parameters**, only **6.1 billion** are activated during inference. LLaDA2.0-flash-preview significantly reduces computational costs while outperforming open-source dense models of similar scale.
+ **Impressive Performance on Code & Complex Reasoning**: