---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- web-crawl
- pretraining
- nlp
- text-corpus
pretty_name: Web Crawl 2026
size_categories:
- 10B<n<100B
---

# Web Crawl 2026

## Crawler

- Chunked output: ~1.2GB of raw text per chunk (~350MB compressed)
- Auto-upload: each completed chunk is uploaded to HuggingFace Hub

Seed sources:

1. Common Crawl URL index (CC-MAIN-2024-10 through CC-MAIN-2025-08)
2. Wikipedia random articles API (20K articles)
3. Sitemaps from 34 major sites (Reuters, BBC, Nature, StackOverflow, etc.)
4. Hacker News top/new/best stories

Performance on a Titan Xp box ($0.06/hr):

- Phase 1 (seed loading): 562K seeds loaded in 28 seconds
- Phase 2 (crawling): 150-300 docs/s sustained throughput
- ~1.2GB chunk every 5-6 minutes
- ~12-15 GB/hour of raw crawled text
- Cost: ~$0.004 per GB of crawled text ($0.06/hr ÷ ~15 GB/hr)

### Building and Running

Install Rust, then:

```bash
cd crawler/rust
cargo build --release
ulimit -n 65536
./target/release/crawl_rust > crawl.log 2>&1
```

### Quality Filtering

- HTML text extraction via the `scraper` crate (article/main/body selectors)
- Minimum 200 chars, maximum 200K chars
- Content-type filtering (only text/html)
- URL filtering: blocks social media, login pages, media files, admin pages
- Deduplication via MD5 content hash

Illustrative Rust sketches of the chunking and filtering logic appear in the appendix at the end of this card.

## Intended Use

Pretraining data for the AGILLM-3 language model (698M params, joint AR+SAT architecture).

## License

Apache 2.0
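## Appendix: Illustrative Rust Sketches

The snippets below are minimal sketches of the mechanisms described in this card, not the crawler's actual source. Crate choices (`scraper`, `md5`), selectors, block lists, file names, and repo ids are all assumptions made for illustration.

First, the extraction and deduplication path: parse the HTML with the `scraper` crate, prefer `article`/`main` over the whole `body`, enforce the 200-200K character bounds, and drop exact duplicates by MD5 content hash.

```rust
use std::collections::HashSet;

use scraper::{Html, Selector};

const MIN_CHARS: usize = 200;     // minimum document length from this card
const MAX_CHARS: usize = 200_000; // maximum document length from this card

/// Extract readable text, preferring article/main over the whole body.
fn extract_text(html: &str) -> String {
    let doc = Html::parse_document(html);
    for sel in ["article", "main", "body"] {
        let selector = Selector::parse(sel).unwrap();
        if let Some(node) = doc.select(&selector).next() {
            let text: String = node.text().collect::<Vec<_>>().join(" ");
            if !text.trim().is_empty() {
                return text;
            }
        }
    }
    String::new()
}

/// Apply the length bounds and MD5-based exact deduplication.
/// Returns true if the document should be kept.
fn keep_document(text: &str, seen: &mut HashSet<[u8; 16]>) -> bool {
    let n = text.chars().count();
    if n < MIN_CHARS || n > MAX_CHARS {
        return false;
    }
    // `insert` returns false when the hash was already present (a duplicate).
    seen.insert(md5::compute(text.as_bytes()).0)
}

fn main() {
    let mut seen = HashSet::new();
    let html = "<html><body><article>Some crawled page text...</article></body></html>";
    let text = extract_text(html);
    // This toy document is under 200 chars, so it prints "kept: false".
    println!("kept: {}", keep_document(&text, &mut seen));
}
```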
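The URL pre-filter can be a plain substring/extension check. The specific hosts, paths, and extensions below are assumptions standing in for whatever block lists the crawler actually uses.

```rust
// Illustrative block lists; the real crawler's lists are not published here.
const BLOCKED_HOSTS: &[&str] = &["facebook.com", "twitter.com", "instagram.com", "tiktok.com"];
const BLOCKED_PATHS: &[&str] = &["/login", "/signin", "/admin", "/wp-admin"];
const BLOCKED_EXTS: &[&str] = &[".jpg", ".png", ".gif", ".mp4", ".mp3", ".zip"];

/// Reject social media, login/admin pages, and media files by URL alone.
fn url_allowed(url: &str) -> bool {
    let lower = url.to_lowercase();
    !(BLOCKED_HOSTS.iter().any(|h| lower.contains(h))
        || BLOCKED_PATHS.iter().any(|p| lower.contains(p))
        || BLOCKED_EXTS.iter().any(|e| lower.ends_with(e)))
}

fn main() {
    assert!(url_allowed("https://www.nature.com/articles/some-article"));
    assert!(!url_allowed("https://example.com/wp-admin/index.php"));
    assert!(!url_allowed("https://example.com/photo.jpg"));
}
```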
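Finally, chunk rotation with auto-upload: buffer documents into a chunk file until it reaches ~1.2GB of raw text, then hand the finished file to `huggingface-cli upload`. The JSONL format, file naming, and repo id are assumptions; only the chunk size and the upload-on-completion behavior come from this card.

```rust
use std::fs::File;
use std::io::{BufWriter, Write};
use std::process::Command;

/// ~1.2GB of raw text per chunk, matching the figures in this card.
const CHUNK_BYTES: u64 = 1_200_000_000;

struct ChunkWriter {
    idx: u32,
    written: u64,
    out: BufWriter<File>,
}

impl ChunkWriter {
    fn new(idx: u32) -> std::io::Result<Self> {
        let out = BufWriter::new(File::create(format!("chunk_{idx:05}.jsonl"))?);
        Ok(Self { idx, written: 0, out })
    }

    /// Append one document, already serialized as a single JSONL line.
    fn write_doc(&mut self, line: &str) -> std::io::Result<()> {
        self.out.write_all(line.as_bytes())?;
        self.out.write_all(b"\n")?;
        self.written += line.len() as u64 + 1;
        if self.written >= CHUNK_BYTES {
            self.rotate()?;
        }
        Ok(())
    }

    /// Flush the finished chunk, upload it, and start the next one.
    fn rotate(&mut self) -> std::io::Result<()> {
        self.out.flush()?;
        let name = format!("chunk_{:05}.jsonl", self.idx);
        // Upload in the background; "user/web-crawl-2026" is a placeholder repo id.
        Command::new("huggingface-cli")
            .args(["upload", "user/web-crawl-2026", &name, &name, "--repo-type", "dataset"])
            .spawn()?;
        *self = ChunkWriter::new(self.idx + 1)?;
        Ok(())
    }
}

fn main() -> std::io::Result<()> {
    let mut writer = ChunkWriter::new(0)?;
    // A real crawler calls this once per document that passes the filters.
    writer.write_doc(r#"{"url":"https://example.com/","text":"..."}"#)?;
    writer.out.flush()
}
```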