Update README.md

README.md CHANGED
@@ -1,9 +1,6 @@
 ---
-license: apache-2.0
----
----
 dataset_name: dignity045/Collective-Corpus
-license:
+license: apache-2.0
 language: multilingual
 size_categories: 500B+ tokens
 task_categories:

@@ -25,7 +22,7 @@ tags:
 # 🧠Collective Corpus — Universal Pretraining + Finetuning Dataset (500B+ Tokens)

 [](https://huggingface.co/datasets/dignity045/Collective-Corpus)
-[](#-current-status)
+[](https://www.apache.org/licenses/LICENSE-2.0)
 [](#-current-status)

 **`Collective-Corpus`** is a massive-scale, **multi-domain** dataset designed to train Transformer-based language models **from scratch** and **finetune** them across a wide variety of domains — all in one place.

@@ -105,7 +102,7 @@ Interested in **collaboration, hiring, or consulting** for dataset engineering,

 ## 📜 License

-Released under the **
+Released under the **Apache License 2.0** — you are free to use, modify, and distribute this dataset in compliance with the [full license text](https://www.apache.org/licenses/LICENSE-2.0).

 ---
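The first hunk fixes broken YAML front matter: the old README closed the `---` fence early with a stray second `---`, so everything after `license: apache-2.0` fell outside the metadata block. A minimal sketch of why that matters (the `parse_front_matter` helper and the sample strings are illustrative, not part of the repository; it assumes flat `key: value` metadata with no nesting):

```python
def parse_front_matter(text):
    """Return key/value pairs between the first pair of '---' fences.

    Illustrative helper only -- real dataset-card parsing uses a YAML
    library, but the early-termination behavior is the same.
    """
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":  # closing fence: parsing stops here
            break
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

# Old layout: the second "---" closes the block after one field.
old_readme = """---
license: apache-2.0
---
---
dataset_name: dignity045/Collective-Corpus
license:
language: multilingual
"""

# New layout from the diff: one fenced block holding all fields.
new_readme = """---
dataset_name: dignity045/Collective-Corpus
license: apache-2.0
language: multilingual
---
"""

print(parse_front_matter(old_readme))  # only the license field survives
print(parse_front_matter(new_readme))  # all three fields are parsed
```

Run against the old layout, only `license` is recovered; against the new layout, `dataset_name`, `license`, and `language` all land inside the metadata block, which is what lets the hub pick up the Apache-2.0 license.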