Update README.md
#3 by dododododo · opened

README.md CHANGED
@@ -1,8 +1,9 @@
 ---
 license: apache-2.0
 ---
-
 # MAP-CC
+[**🏠 Homepage**](https://chinese-tiny-llm.github.io) | [**🤗 MAP-CC**](https://huggingface.co/datasets/m-a-p/MAP-CC) | [**🤗 CHC-Bench**](https://huggingface.co/datasets/m-a-p/CHC-Bench) | [**🤗 CT-LLM**](https://huggingface.co/collections/m-a-p/chinese-tiny-llm-660d0133dff6856f94ce0fc6) | [**📖 arXiv**]() | [**GitHub**](https://github.com/Chinese-Tiny-LLM/Chinese-Tiny-LLM)
+
 An open-source Chinese pretraining dataset with a scale of 800 billion tokens, offering the NLP community high-quality Chinese pretraining data.

 ## Usage Instructions