---
pretty_name: Collective Corpus
license: apache-2.0
language:
- multilingual
size_categories:
- 100M<n<1B
task_categories:
- text-generation
- fill-mask
- text-classification
- summarization
- question-answering
tags:
- pretraining
- finetuning
- large-language-model
- code
- math
- instructions
---

# 🧠 Collective Corpus – Universal Pretraining + Finetuning Dataset (500B+ Tokens)

[Dataset on Hugging Face](https://huggingface.co/datasets/dignity045/Collective-Corpus)
[License: Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0)
[Status: In Progress](#-current-status)

**`Collective-Corpus`** is a massive-scale, **multi-domain** dataset designed to train Transformer-based language models **from scratch** and to **finetune** them across a wide variety of domains, all in one place.
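
Once the corpus is public, it should be loadable with the standard 🤗 `datasets` library. The snippet below is a minimal sketch, not a confirmed usage pattern: the `train` split name and the record layout are assumptions, since the schema has not yet been published, and streaming avoids materializing 500B+ tokens locally.

```python
from datasets import load_dataset

# Stream the corpus instead of downloading it in full; the repo id comes
# from this card, but the split name is an assumption.
ds = load_dataset(
    "dignity045/Collective-Corpus",
    split="train",      # assumed split name
    streaming=True,
)

# Peek at a few records to inspect the (unpublished) schema.
for example in ds.take(3):
    print(example)
```
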
## 📌 Dataset Scope

This dataset aims to **cover the full LLM lifecycle**, from raw pretraining to domain-specialized finetuning.

### 1. Pretraining Corpus

- Large-scale, diverse multilingual text sources
- Cleaned, deduplicated, and filtered for quality (see the sketch after this list)
- Inspired by datasets like [C4](https://huggingface.co/datasets/c4) and [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb)
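
The card does not document the cleaning pipeline itself. As a minimal sketch of the kind of processing the bullet above implies (assuming nothing about the actual implementation), the function below combines exact-hash deduplication with a crude length and repetitiveness filter; production pipelines such as C4's and FineWeb's add language identification, fuzzy MinHash deduplication, and many more heuristics.

```python
import hashlib

def clean_corpus(docs):
    """Toy cleaning pass: drop low-quality documents, then remove
    exact duplicates by hashing the normalized text."""
    seen = set()
    for text in docs:
        words = text.split()
        # Quality heuristic: skip very short or highly repetitive docs.
        if len(words) < 50 or len(set(words)) / len(words) < 0.3:
            continue
        digest = hashlib.sha256(" ".join(words).lower().encode()).hexdigest()
        if digest in seen:  # exact duplicate already emitted
            continue
        seen.add(digest)
        yield text
```
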
### 2. Domain-Specific Finetuning

- **Instruction Following & Dialogue** – Chatbots, multi-turn conversations (an illustrative record format follows this list)
- **Code** – Python, JavaScript, Java, C++, and more
- **Math & Logical Reasoning**
- **Specialized Fields** – Research papers, technical documentation
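
The schema of the finetuning splits has not been published. Purely as an illustration (the field names `messages`, `role`, and `content` are assumptions), a multi-turn dialogue record might look like this, flattened into a single training string:

```python
# Hypothetical multi-turn instruction record; the field names are
# illustrative assumptions, not this dataset's published schema.
sample = {
    "messages": [
        {"role": "user", "content": "Write a Python one-liner to reverse a string."},
        {"role": "assistant", "content": "s[::-1]"},
        {"role": "user", "content": "And one to reverse its words?"},
        {"role": "assistant", "content": "' '.join(s.split()[::-1])"},
    ]
}

def to_training_text(messages):
    """Flatten a conversation into one string with simple role tags;
    a real setup would apply the target model's chat template instead."""
    return "\n".join(f"<|{m['role']}|> {m['content']}" for m in messages)

print(to_training_text(sample["messages"]))
```
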
---

## 📊 Scale

- **Total Tokens**: **500B+**
- **Estimated Text Samples**: **700M+** (roughly 700 tokens per sample on average)
- **Target Model Size**: Suitable for training large models **from scratch** (see the back-of-the-envelope sizing after this list)
- Covers **general-purpose** and **domain-specific** training needs
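
For a rough sense of what "large" can mean here (an assumption based on the Chinchilla heuristic of about 20 training tokens per parameter, not a claim made by this card):

```python
# Compute-optimal model size under the Chinchilla rule of thumb
# (~20 training tokens per parameter); illustrative only.
total_tokens = 500e9        # 500B+ tokens, the lower bound from this card
tokens_per_param = 20       # Chinchilla heuristic

optimal_params = total_tokens / tokens_per_param
print(f"~{optimal_params / 1e9:.0f}B parameters")  # -> ~25B parameters
```
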
---

## 🎯 Goals

1. Build a **unified corpus** for full-stack LLM development.
2. Enable **open and reproducible** large-scale language model research.
3. Support **finetuning for high-impact domains** like code, math, and dialogue.

---

## 🚧 Current Status

- **Model Pretraining**: Currently training a Transformer model from scratch on the full **500B+ token** dataset.
- **Public Release**: Planned **after model training completes**.

---

## 🤝 Collaboration

We are **actively seeking open-source collaborators** to:

- Contribute to dataset cleaning, filtering, and deduplication
- Assist in large-scale model training and evaluation
- Provide expertise for **specialized domain corpora**

We also **offer free guidance** on:

- Dataset curation best practices
- Efficient large-scale LLM training pipelines
- Transformer architecture optimization

---

## 💼 Open for Collaboration

I'm actively looking to connect with researchers, engineers, and organizations passionate about **dataset engineering**, **large-scale model training**, and **applied NLP**.
Whether it's open-source projects, research collaborations, or large-scale AI initiatives, let's build something impactful together.

🔗 **GitHub**: [Dhiraj309](https://github.com/Dhiraj309)
🔗 **LinkedIn**: [Dhiraj Patil](https://www.linkedin.com/in/dhiraj-patil-b42262323)

---

## 📅 Release Timeline

| Stage                   | Status           |
|-------------------------|------------------|
| Data Curation           | 🚧 In Progress   |
| Model Pretraining       | 🚧 In Progress   |
| Dataset Public Release  | ⏳ Post-training  |

---

## 📜 License

Released under the **Apache License 2.0**: you are free to use, modify, and distribute this dataset in compliance with the [full license text](https://www.apache.org/licenses/LICENSE-2.0).

---

### 🚀 Let's build the next generation of **open-source LLMs**, together.