---
dataset_name: dignity045/Collective-Corpus
license: apache-2.0
language: multilingual
size_categories: 500B+ tokens
task_categories:
- text-generation
- fill-mask
- text-classification
- summarization
- question-answering
pretty_name: Collective Corpus
tags:
- pretraining
- finetuning
- large-language-model
- code
- math
- instructions
---
# Collective Corpus: Universal Pretraining + Finetuning Dataset (500B+ Tokens)
[Dataset on Hugging Face](https://huggingface.co/datasets/dignity045/Collective-Corpus) · [License: Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0) · [Current Status](#current-status)
**`Collective-Corpus`** is a massive-scale, **multi-domain** dataset designed to train Transformer-based language models **from scratch** and **finetune** them across a wide variety of domains, all in one place.
## Dataset Scope
This dataset aims to **cover the full LLM lifecycle**, from raw pretraining to domain-specialized finetuning.
### 1. Pretraining Corpus
- Large-scale, diverse multilingual text sources
- Cleaned, deduplicated, and filtered for quality (see the sketch after this list)
- Inspired by datasets like [C4](https://huggingface.co/datasets/c4) and [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb)
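The exact cleaning pipeline is not published here; purely as a minimal sketch of the kind of processing the list above describes, the snippet below combines a crude quality gate with exact hash-based deduplication. All thresholds and helper names are illustrative assumptions, and a production pipeline would typically add near-duplicate detection (e.g. MinHash) on top.

```python
import hashlib
from typing import Iterable, Iterator

def normalize(text: str) -> str:
    # Collapse whitespace and lowercase so trivially different copies
    # of the same document hash to the same digest.
    return " ".join(text.lower().split())

def passes_quality(text: str, min_words: int = 20, min_alpha_ratio: float = 0.6) -> bool:
    # Crude quality gate (thresholds are illustrative): require a minimum
    # length and a mostly alphabetic body to drop boilerplate and noise.
    if len(text.split()) < min_words:
        return False
    alpha = sum(ch.isalpha() for ch in text)
    return alpha / max(len(text), 1) >= min_alpha_ratio

def dedup_and_filter(docs: Iterable[str]) -> Iterator[str]:
    # Exact deduplication via SHA-256 content hashes of normalized text.
    seen: set[str] = set()
    for doc in docs:
        if not passes_quality(doc):
            continue
        digest = hashlib.sha256(normalize(doc).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            yield doc
```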
### 2. Domain-Specific Finetuning
- **Instruction Following & Dialogue**: chatbots, multi-turn conversations (see the example record after this list)
- **Code**: Python, JavaScript, Java, C++, and more
- **Math & Logical Reasoning**
- **Specialized Fields**: research papers, technical documentation
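The per-sample schema has not been finalized, so the record below is purely hypothetical: it shows one plausible shape for an instruction/dialogue example, with every field name an assumption rather than the published format.

```python
# Hypothetical record layout for the instruction/dialogue split; the card
# does not publish a schema, so every field name here is an assumption.
sample = {
    "messages": [
        {"role": "user", "content": "Write a function that reverses a string."},
        {"role": "assistant",
         "content": "def reverse(s: str) -> str:\n    return s[::-1]"},
    ],
    "domain": "code",   # e.g. "code", "math", "dialogue"
    "language": "en",
}
```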
---
## Scale
- **Total Tokens**: **500B+**
- **Estimated Text Samples**: **700M+**
- **Target Model Size**: Suitable for training large models **from scratch** (see the sizing sketch after this list)
- Covers **general-purpose** and **domain-specific** training needs
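As a rough, illustrative reference point for the target-model-size bullet: the Chinchilla scaling heuristic of about 20 training tokens per parameter (Hoffmann et al., 2022) puts a 500B-token corpus in compute-optimal range for a model on the order of 25B parameters. This is back-of-the-envelope arithmetic, not a claim about the model actually being trained on this corpus.

```python
# Back-of-the-envelope sizing with the Chinchilla heuristic of roughly
# 20 training tokens per parameter (Hoffmann et al., 2022). Illustrative
# only; it says nothing about the model actually being trained here.
total_tokens = 500e9
tokens_per_param = 20
optimal_params = total_tokens / tokens_per_param
print(f"~{optimal_params / 1e9:.0f}B parameters")  # -> ~25B
```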
---
## Goals
1. Build a **unified corpus** for full-stack LLM development.
2. Enable **open and reproducible** large-scale language model research.
3. Support **finetuning for high-impact domains** like code, math, and dialogue.
---
## Current Status
- **Model Pretraining**: Currently training a Transformer model from scratch on the full **500B+ token** dataset.
- **Public Release**: Planned **after model training completes**.
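Once the dataset is public, it should load through the standard `datasets` API. A minimal sketch follows; streaming is advisable at this scale, and the split name is an assumption until the card is finalized.

```python
from datasets import load_dataset

# Stream instead of downloading: at 500B+ tokens the corpus will not fit
# on a typical disk. The repo id matches this card; the split name and
# availability are assumptions until the public release.
ds = load_dataset("dignity045/Collective-Corpus", split="train", streaming=True)

for i, example in enumerate(ds):
    print(example)
    if i >= 2:  # peek at the first few records
        break
```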
---
## Collaboration
We are **actively seeking open-source collaborators** to:
- Contribute to dataset cleaning, filtering, and deduplication
- Assist in large-scale model training and evaluation
- Provide expertise for **specialized domain corpora**
We also **offer free guidance** on:
- Dataset curation best practices
- Efficient large-scale LLM training pipelines
- Transformer architecture optimization
---
## Open for Collaboration
I'm actively looking to connect with researchers, engineers, and organizations passionate about **dataset engineering**, **large-scale model training**, and **applied NLP**.
Whether it's open-source projects, research collaborations, or large-scale AI initiatives, let's build something impactful together.
**GitHub**: [Dhiraj309](https://github.com/Dhiraj309)
**LinkedIn**: [Dhiraj Patil](https://www.linkedin.com/in/dhiraj-patil-b42262323)
---
## Release Timeline
| Stage | Status |
|------------------------|------------------|
| Data Curation           | In Progress      |
| Model Pretraining       | In Progress      |
| Dataset Public Release  | Post-training    |
---
## License
Released under the **Apache License 2.0**. You are free to use, modify, and distribute this dataset in compliance with the [full license text](https://www.apache.org/licenses/LICENSE-2.0).
---
### Let's build the next generation of **open-source LLMs** together.