---
license: mit
task_categories:
  - text-generation
language:
  - en
tags:
  - code
  - medical
  - biology
  - chemistry
  - finance
---

# Orion-Spark-2 Dataset

## Overview

The Orion-Spark-2 Dataset is a text corpus curated for training the Orion-Spark-2 transformer language model. It consists of a diverse collection of sentences extracted from multiple sources including Wikipedia articles, technology news sites, developer resources, and other open-access web pages. The dataset is designed to provide broad coverage of general knowledge, programming topics, artificial intelligence, space, popular culture, and current events.

## Structure

- **File:** `corpus.txt`
- **Format:** Plain text, one sentence per line.
- **Encoding:** UTF-8
- **Line Count:** Approximately 60,000 lines
- **Checkpoint:** `corpus_checkpoint.txt` tracks how many lines have been downloaded, so corpus collection can be resumed.
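The checkpoint mechanism described above might look like the following minimal sketch. The file names come from this README; the convention of storing a single line count in the checkpoint file is an assumption for illustration:

```python
import os

def load_checkpoint(path="corpus_checkpoint.txt"):
    """Return the number of lines already collected, or 0 if no checkpoint exists."""
    if not os.path.exists(path):
        return 0
    with open(path, encoding="utf-8") as f:
        content = f.read().strip()
    return int(content) if content else 0

def save_checkpoint(count, path="corpus_checkpoint.txt"):
    """Record how many lines have been written to corpus.txt so far."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(str(count))

def append_sentences(sentences, corpus_path="corpus.txt",
                     checkpoint_path="corpus_checkpoint.txt"):
    """Append newly collected sentences and update the checkpoint for resuming."""
    done = load_checkpoint(checkpoint_path)
    with open(corpus_path, "a", encoding="utf-8") as f:
        for s in sentences:
            f.write(s + "\n")
    save_checkpoint(done + len(sentences), checkpoint_path)
```

On restart, `load_checkpoint` tells the collector how many source lines to skip before writing new ones.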

## Sources

The dataset draws content from:

- Wikipedia pages (various topics including AI, programming languages, mathematics, astronomy, and historical events)
- News and tech sites (BBC Technology, TechCrunch)
- Open-source repositories (GitHub)
- Educational and community platforms (Fast.ai)
- Hugging Face datasets

## Processing

- Each line in the dataset is cleaned to remove excessive whitespace.
- Sentences shorter than 30 characters are discarded.
- HTML content is parsed with BeautifulSoup to extract text from paragraph and header tags (`<p>`, `<h1>`, `<h2>`, `<h3>`).
- Text is split on sentence-ending punctuation (`.`, `?`, `!`) to ensure individual sentence granularity.
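The whitespace cleanup, length filter, and punctuation split above can be sketched with the standard library alone (the BeautifulSoup HTML extraction step is omitted here; the 30-character threshold matches the rule stated above):

```python
import re

MIN_LEN = 30  # sentences shorter than this are discarded

def clean_and_split(text):
    """Collapse excessive whitespace, split on ., ?, !, and drop short fragments."""
    # Collapse runs of whitespace (including newlines) into single spaces.
    text = re.sub(r"\s+", " ", text).strip()
    # Split on sentence-ending punctuation so each sentence stands alone.
    parts = re.split(r"[.?!]", text)
    return [p.strip() for p in parts if len(p.strip()) >= MIN_LEN]
```

Each returned string then becomes one line of `corpus.txt`.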

## Usage

1. Load the dataset:

   ```python
   from torch.utils.data import DataLoader
   from dataset import TextDataset

   # texts: list of strings from corpus.txt; tokenizer: your tokenizer instance
   dataset = TextDataset(texts, tokenizer)
   loader = DataLoader(dataset, batch_size=32, collate_fn=collate_batch)  # batch size is illustrative
   ```

   Use `TextDataset` for training or evaluation in PyTorch.

2. Pad sequences with the `collate_batch` function when forming batches for model training.
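The padding step that `collate_batch` performs can be illustrated without PyTorch. This list-based sketch assumes a pad token id of 0 and right-padding; both are assumptions, since `collate_batch`'s actual signature is not shown in this README:

```python
def pad_batch(sequences, pad_id=0):
    """Right-pad token-id lists to the length of the longest sequence in the batch."""
    max_len = max(len(seq) for seq in sequences)
    return [seq + [pad_id] * (max_len - len(seq)) for seq in sequences]
```

In the real collate function the padded lists would be stacked into a tensor before being handed to the model.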

## Notes

- The dataset is intended for educational and research purposes.
- It contains only publicly available information; no private or copyrighted content is included beyond fair use.
- Designed for efficiently training medium-sized language models (30M parameters) with a maximum sequence length of 128 tokens.