The `dots.llm1` model is a large-scale MoE model that activates 14B parameters out of a total of 142B parameters, delivering performance on par with state-of-the-art models.

Leveraging our meticulously crafted and efficient data processing pipeline, `dots.llm1` achieves performance comparable to Qwen2.5-72B after being pretrained on a high-quality corpus without synthetic data. To foster further research, we open-source intermediate training checkpoints spanning the entire training process, providing valuable insights into the learning dynamics of large language models.
<p align="center">
**This repo contains the base and instruction-tuned `dots.llm1` models**, which have the following features:

- Type: A MoE model with 14B activated and 142B total parameters, trained on a high-quality corpus.
- Training Stages: Pretraining and SFT.
- Architecture: Multi-head attention with QK-Norm in the attention layer, fine-grained MoE utilizing top-6 out of 128 routed experts, plus 2 shared experts.
- Number of Layers: 62
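The top-6-of-128 routing above can be sketched as follows. This is a minimal illustration of generic top-k expert selection, not the model's actual implementation: the function and variable names are assumptions, and the 2 shared experts (which process every token unconditionally) are omitted.

```python
import numpy as np

def moe_route(hidden, router_weight, top_k=6):
    """Illustrative top-k MoE routing.

    hidden:        (tokens, d_model) token representations
    router_weight: (n_experts, d_model) router projection (n_experts = 128 here)
    Returns the selected expert ids and their renormalized gate weights.
    """
    logits = hidden @ router_weight.T                      # (tokens, n_experts)
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)                  # softmax over all experts
    experts = np.argsort(-probs, axis=-1)[:, :top_k]       # keep the 6 highest-scoring
    gate = np.take_along_axis(probs, experts, axis=-1)
    gate /= gate.sum(-1, keepdims=True)                    # renormalize selected gates
    return experts, gate

rng = np.random.default_rng(0)
hidden = rng.standard_normal((4, 32))
router = rng.standard_normal((128, 32))
ids, weights = moe_route(hidden, router)
print(ids.shape, weights.shape)  # (4, 6) (4, 6)
```

Each token thus touches only 6 of the 128 routed experts per layer, which is what keeps the activated parameter count at 14B despite 142B total.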

The highlights from `dots.llm1` include:

- **Enhanced Data Processing**: We propose a scalable and fine-grained *three-stage* data processing framework designed to generate large-scale, high-quality, and diverse data for pretraining.
- **No Synthetic Data during Pretraining**: Only high-quality, non-synthetic tokens were used in base model pretraining.
- **Performance and Cost Efficiency**: `dots.llm1` is an open-source model that activates only *14B* parameters at inference, delivering both comprehensive capabilities and high computational efficiency.
- **Infrastructure**: We introduce an innovative MoE all-to-all communication and computation overlapping recipe based on interleaved 1F1B pipeline scheduling and an efficient grouped GEMM implementation to boost computational efficiency.
- **Open Accessibility to Model Dynamics**: Intermediate model checkpoints are released spanning the entire training process, facilitating future research into the learning dynamics of large language models.
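The grouped GEMM mentioned in the Infrastructure bullet batches the per-expert matrix multiplies that arise after routing. A naive NumPy reference of what such a kernel computes (illustrative only; the actual implementation fuses these group multiplies into a single efficient launch instead of looping) is:

```python
import numpy as np

def grouped_gemm(tokens, expert_ids, expert_weights):
    """Reference semantics of a grouped GEMM for MoE.

    tokens:         (n_tokens, d_in) inputs after routing
    expert_ids:     (n_tokens,) the expert assigned to each token
    expert_weights: (n_experts, d_in, d_out) one weight matrix per expert
    """
    out = np.empty((tokens.shape[0], expert_weights.shape[2]))
    for e in range(expert_weights.shape[0]):
        mask = expert_ids == e          # all tokens routed to expert e
        if mask.any():
            out[mask] = tokens[mask] @ expert_weights[e]
    return out
```

Grouping by expert turns many small per-token multiplies into a few large ones, which is what makes the batched kernel efficient on accelerators.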

## 3. Example Usage