---
language:
  - en
  - code
license: apache-2.0
size_categories:
  - 10B<n<100B
task_categories:
  - text-generation
pretty_name: Indro 3B Corpus
tags:
  - pretraining
  - fineweb
  - starcoder
  - custom-curation
---

# 🌌 Indro-3B-Corpus

A hyper-curated, deduplicated, and mathematically verified dataset designed for training a 3-billion-parameter large language model.

Built and maintained by Indro AI, this corpus represents a massive engineering effort to extract the highest-quality tokens from the internet. It is processed using fault-tolerant, multi-threaded ingestion engines designed to survive cloud evictions without compromising its cryptographic deduplication guarantees.

## 📊 Dataset Overview

The Indro-3B-Corpus is divided into two primary data streams, meticulously filtered to balance advanced reasoning (code) with deep world knowledge (web text).

- Total Target Size: ~51 billion tokens
- Format: `.jsonl.zst` (Zstandard-compressed JSON Lines for extreme I/O speed)
- Language: English & programming languages (primarily Python)
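As a quick orientation to the JSON Lines layout, here is a hypothetical sketch of a single record. The `text` and `tok` field names follow the usage example later in this card; the exact schema of each shard may vary, and the sample values below are invented for illustration.

```python
import json

# Hypothetical record: "text" holds the document, "tok" its
# pre-computed token count (field names taken from the usage
# example in this card; values are made up).
record = {
    "text": "Photosynthesis converts light energy into chemical energy.",
    "tok": 9,
}

# Each .jsonl.zst shard is a Zstandard-compressed stream of such
# one-object-per-line JSON strings.
line = json.dumps(record)
parsed = json.loads(line)
print(parsed["tok"])
```

Because every record is an independent line, shards can be decompressed and parsed in a single streaming pass without loading the whole file.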

πŸ›οΈ The Sub-Datasets

### 1. Silver Data (Text / World Knowledge)

- Source: FineWeb-Edu
- Target Tokens: 39,000,000,000 (39B)
- Engine: Titan v12.0 (The Singularity)
- Curation Focus: Academic, educational, and high-value informational text. SEO spam, micro-stubs, and duplicate web pages are mathematically vaporized.

### 2. StarCoder Clean (Logical Reasoning / Code)

- Source: StarCoderData (Python subset)
- Target Tokens: 12,000,000,000 (12B)
- Engine: CodeForge v3.0 (The Apex)
- Curation Focus: Pure, compilable code. We strip auto-generated boilerplate and massive license headers, and run AST syntax verification so the model only learns from code that actually parses.
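The AST check described above can be sketched with Python's built-in `ast` module. This is a minimal stand-in, not the actual CodeForge implementation (which is not published here): it keeps a source file only if it parses into a valid abstract syntax tree.

```python
import ast

def passes_ast_check(source: str) -> bool:
    """Return True if the source parses into a valid Python AST."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

samples = [
    "def add(a, b):\n    return a + b\n",  # valid -> kept
    "def broken(:\n    pass\n",            # syntax error -> dropped
]
kept = [s for s in samples if passes_ast_check(s)]
print(len(kept))  # 1
```

Note that parsing only guarantees syntactic validity, not that the code runs correctly; a production filter would likely layer further checks on top.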

βš™οΈ The Indro-Nexus Architecture

This dataset wasn't just downloaded; it was forged. The data was processed using custom, highly advanced ingestion scripts featuring:

- Immortal State Sync: A distributed checkpointing system that links local processing to a cloud-based `master_ledger.json`. If a server crashes or restarts, the engine resumes at the exact row without dropping a single token.
- Cryptographic Deduplication: Dual-layer defense using MinHash LSH (locality-sensitive hashing) and ultra-strict scalable Bloom filters, so that duplicate text cannot leak through and be memorized by the model.
- AST Syntax Snipping (CodeForge-specific): Every Python script is passed through an abstract syntax tree parser. If the code is corrupted or contains fatal syntax errors, it is dropped entirely.
- Asynchronous I/O Pipeline: Multi-threaded streaming lets the engines download, decompress, clean, re-compress (Zstandard level 3), and upload 250 MB data shards simultaneously without I/O bottlenecks.
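To illustrate the MinHash side of the deduplication layer, here is a toy, stdlib-only sketch. It is an assumption-laden stand-in for the production pipeline (which reportedly combines MinHash LSH with scalable Bloom filters): it only estimates Jaccard similarity between character-shingle sets, which is the core signal an LSH index would bucket on.

```python
import hashlib

def shingles(text: str, n: int = 3) -> set[str]:
    """Character n-gram shingles of a document."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def minhash_signature(items: set[str], num_perm: int = 64) -> list[int]:
    """One minimum hash value per seeded hash function."""
    sig = []
    for seed in range(num_perm):
        salt = seed.to_bytes(8, "little")  # distinct salt per "permutation"
        sig.append(min(
            int.from_bytes(
                hashlib.blake2b(s.encode(), digest_size=8, salt=salt).digest(),
                "little")
            for s in items))
    return sig

def estimated_jaccard(a: list[int], b: list[int]) -> float:
    """Fraction of matching signature slots approximates Jaccard similarity."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

doc1 = "the quick brown fox jumps over the lazy dog"
doc2 = "the quick brown fox jumped over the lazy dog"
sim = estimated_jaccard(minhash_signature(shingles(doc1)),
                        minhash_signature(shingles(doc2)))
print(round(sim, 2))  # near-duplicates score close to 1.0
```

A real deduplicator would bucket signatures with LSH so that only likely near-duplicates are ever compared, rather than computing pairwise similarities across the whole corpus.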

## 🚀 How to Use This Dataset

Because the data is highly compressed using Zstandard, you can stream it directly into your tokenizer or training loop with minimal RAM footprint.

```python
from datasets import load_dataset

# Load the educational text data
text_data = load_dataset("Indro-ai/Indro-3B-Corpus", data_dir="silver_data",
                         split="train", streaming=True)

# Load the verified code data
code_data = load_dataset("Indro-ai/Indro-3B-Corpus", data_dir="starcoder_clean",
                         split="train", streaming=True)

# Example iteration
for item in text_data:
    print(item["text"])
    # Note: token counts are pre-calculated to save CPU during streaming
    print(f"Token Count: {item['tok']}")
    break
```