utdawn committed
Commit d699e90 · verified · 1 Parent(s): 5e97d1a

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -53,7 +53,7 @@ tags:
 
 ## 🚀 Performance Highlights
 + **Leading MoE Architecture**:
-The open-source **Mixture-of-Experts (MoE) diffusion large language model**, pre-trained from scratch on approximately **20 trillion tokens**.
+The open-source **Mixture-of-Experts (MoE) diffusion large language model**, continually trained from the Ling2.0 series on approximately **20 trillion tokens**.
 + **Efficient Inference**:
 With **16 billion total parameters**, only **1.4 billion** are activated during inference. LLaDA2.0-mini significantly reduces computational costs while outperforming open-source dense models of similar scale.
 + **Impressive Performance on Code & Complex Reasoning**:
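The "Efficient Inference" bullet rests on sparse expert activation: a learned router sends each token to a few experts, so only a fraction of the total parameters participates in any forward pass. Below is a minimal, hypothetical PyTorch sketch of top-k expert routing; the layer sizes, expert count, and `top_k` are illustrative placeholders, not LLaDA2.0-mini's actual configuration.

```python
# Minimal sketch of top-k expert routing in a Mixture-of-Experts layer.
# All dimensions here are illustrative, not LLaDA2.0-mini's real config.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=1024, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, n_experts)  # router: scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                                # x: (tokens, d_model)
        weights, idx = self.gate(x).topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)             # normalize over the chosen k experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):                   # only k of n_experts run per token
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                 # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

moe = TopKMoE()
print(moe(torch.randn(4, 512)).shape)  # torch.Size([4, 512])
```

In this toy layer each token activates 2 of 8 experts, so only a fraction of the expert parameters runs per token; the same principle is what allows a 16-billion-parameter MoE to activate only about 1.4 billion parameters at inference.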