---
license: apache-2.0
language:
  - multilingual
size_categories:
  - 100M<n<1B
task_categories:
  - text-generation
  - fill-mask
  - text-classification
  - summarization
  - question-answering
pretty_name: Collective Corpus
tags:
  - pretraining
  - finetuning
  - large-language-model
  - code
  - math
  - instructions
---

# 🧠 Collective Corpus: Universal Pretraining + Finetuning Dataset (500B+ Tokens)


Collective-Corpus is a massive-scale, multi-domain dataset designed to train Transformer-based language models from scratch and to finetune them across a wide variety of domains, all in one place.

## 📚 Dataset Scope

This dataset aims to cover the full LLM lifecycle, from raw pretraining to domain-specialized finetuning.

### 1. Pretraining Corpus

- Large-scale, diverse multilingual text sources
- Cleaned, deduplicated, and filtered for quality (a toy version of this step is sketched after this list)
- Inspired by datasets such as C4 and FineWeb
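
The actual cleaning pipeline has not been published; as a minimal sketch of what a cleaning-plus-deduplication pass can look like (the heuristics here are simplified stand-ins for C4/FineWeb-style filters, not this project's code):

```python
import hashlib

def clean_and_dedup(texts):
    """Toy C4/FineWeb-style pass: crude quality filters, then exact dedup."""
    seen = set()
    for text in texts:
        text = text.strip()
        # Quality heuristics: minimum word count and terminal punctuation.
        if len(text.split()) < 5 or text[-1] not in ".!?":
            continue
        # Exact deduplication via a content hash.
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            yield text

# The duplicate and the too-short fragment are dropped.
docs = ["The cat sat on the mat.", "The cat sat on the mat.", "Too short"]
print(list(clean_and_dedup(docs)))  # -> ['The cat sat on the mat.']
```

Production pipelines typically add language identification, perplexity-based quality scoring, and fuzzy (MinHash) deduplication on top of exact matching.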

### 2. Domain-Specific Finetuning

- Instruction Following & Dialogue: chatbots and multi-turn conversations (an illustrative record format is sketched after this list)
- Code: Python, JavaScript, Java, C++, and more
- Math & Logical Reasoning
- Specialized Fields: research papers and technical documentation
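
To make the instruction/dialogue format concrete, here is a purely hypothetical record; the actual field names and schema have not been published, so treat every key below as an assumption:

```python
# Hypothetical shape of a multi-turn instruction sample; illustrative only.
sample = {
    "domain": "code",
    "messages": [
        {"role": "user", "content": "Write a Python function that reverses a string."},
        {"role": "assistant", "content": "def reverse(s):\n    return s[::-1]"},
        {"role": "user", "content": "Now make it handle None input."},
        {"role": "assistant",
         "content": "def reverse(s):\n    return s[::-1] if s is not None else None"},
    ],
}
```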

## 📊 Scale

- Total Tokens: 500B+
- Estimated Text Samples: 700M+
- Target Model Size: suitable for training large models from scratch
- Covers general-purpose and domain-specific training needs
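
Taken together, these two figures imply an average of roughly 700 tokens per sample (500B / 700M ≈ 714).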

## 🎯 Goals

  1. Build a unified corpus for full-stack LLM development.
  2. Enable open and reproducible large-scale language model research.
  3. Support finetuning for high-impact domains like code, math, and dialogue.

## 🚧 Current Status

- Model Pretraining: a Transformer model is currently being trained from scratch on the full 500B+ token dataset.
- Public Release: planned after model training completes (a loading sketch follows below).
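
Once the public release happens, loading should follow the standard `datasets` pattern. A minimal sketch, assuming a single `train` split (the repo id is taken from this card; the split name and record layout are assumptions until documented):

```python
from datasets import load_dataset

# Streaming avoids materializing 500B+ tokens on disk; the "train" split
# name is an assumption until the dataset layout is documented.
ds = load_dataset("dignity045/Collective-Corpus", split="train", streaming=True)

# Peek at the first few records.
for example in ds.take(3):
    print(example)
```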

## 🤝 Collaboration

We are actively seeking open-source collaborators to:

- Contribute to dataset cleaning, filtering, and deduplication
- Assist in large-scale model training and evaluation
- Provide expertise for specialized domain corpora

We also offer free guidance on:

- Dataset curation best practices
- Efficient large-scale LLM training pipelines
- Transformer architecture optimization

## 💼 Open for Collaboration

I'm actively looking to connect with researchers, engineers, and organizations passionate about dataset engineering, large-scale model training, and applied NLP.
Whether it's open-source projects, research collaborations, or large-scale AI initiatives, let's build something impactful together.

- 🔗 GitHub: Dhiraj309
- 🔗 LinkedIn: Dhiraj Patil


## 📅 Release Timeline

| Stage | Status |
| --- | --- |
| Data Curation | 🚧 In Progress |
| Model Pretraining | 🚧 In Progress |
| Dataset Public Release | ⏳ Post-training |

## 📜 License

Released under the Apache License 2.0. You are free to use, modify, and distribute this dataset in compliance with the full license text.


🌍 Let's build the next generation of open-source LLMs, together.