utdawn committed on
Commit 2c93377 · verified · 1 Parent(s): 0e245a7

Update README.md

Files changed (1): README.md (+1, -1)
README.md CHANGED
@@ -46,7 +46,7 @@ tags:
 
 ## 🚀 Performance Highlights
 + **Leading MoE Architecture**:
-The open-source **Mixture-of-Experts (MoE) diffusion large language model**, pre-trained from scratch on approximately **20 trillion tokens**.
+The open-source **Mixture-of-Experts (MoE) diffusion large language model** continually trained on the Ling2.0 series with approximately **20 trillion tokens**.
 + **Efficient Inference**:
 With **100 billion total parameters**, only **6.1 billion** are activated during inference. LLaDA2.0-flash-preview significantly reduces computational costs while outperforming open-source dense models of similar scale.
 + **Impressive Performance on Code & Complex Reasoning**: