---
license: cc-by-sa-3.0
language:
  - ar
  - ru
  - fr
  - es
  - zh
  - en
size_categories:
  - 1M<n<10M
dataset_info:
  - config_name: default
    features:
      - name: id
        dtype: string
      - name: url
        dtype: string
      - name: title
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: pretrain
        num_bytes: 3895403456
        num_examples: 1200000
    download_size: 2293060098
    dataset_size: 3895403456
  - config_name: instruct_tuning
    features:
      - name: instruct
        dtype: string
      - name: input
        dtype: string
      - name: output
        dtype: string
    splits:
      - name: train
        num_bytes: 11251187014
        num_examples: 6720312
      - name: valid
        num_bytes: 590616379
        num_examples: 353701
    download_size: 6802172962
    dataset_size: 11841803393
configs:
  - config_name: default
    data_files:
      - split: pretrain
        path: data/pretrain-*
  - config_name: instruct_tuning
    data_files:
      - split: train
        path: instruct_tuning/train-*
      - split: valid
        path: instruct_tuning/valid-*
pretty_name: sparrow
---

# Sparrow

> 麻雀虽小，五脏俱全
>
> Small as it is, the sparrow has all the vital organs.

The pretraining dataset is collected from wikimedia/wikipedia.


The sparrow project aims to help beginners understand the base architecture of a large language model from scratch: not only the model itself, but also the optimization methods that are widely used to shorten the training process.

- tokenizer from scratch & merge tokenizer
- model modules from scratch & train the stacked model
- supervised fine-tuning
- reward modelling

## Data Preparation

1. For the tokenizer and pretraining stages, to simplify data collection, we use data from wikimedia/wikipedia, ensuring that our training corpus is both rich in content and easily accessible. We use 10%-20% of the data in the six official languages of the United Nations (Arabic, Chinese, English, French, Russian, and Spanish), providing a diverse and representative sample for training our tokenizer.
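To make the subset selection concrete, here is a small sketch of how the per-language configs could be enumerated. The `wikimedia/wikipedia` configs are named `<dump-date>.<lang>`; the snapshot date below is an assumption for illustration, not one fixed by sparrow.

```python
# Six official UN languages used for the corpus.
langs = ["ar", "zh", "en", "fr", "ru", "es"]

# Assumed wikimedia/wikipedia dump date (illustrative only).
snapshot = "20231101"

# Config names follow the "<dump-date>.<lang>" pattern.
configs = [f"{snapshot}.{lang}" for lang in langs]
print(configs)

# Each subset could then be fetched with datasets' split-slicing syntax, e.g.:
# from datasets import load_dataset
# subset = load_dataset("wikimedia/wikipedia", configs[0], split="train[:10%]")
```

The `train[:10%]` slice spec is how the 10%-20% sampling described above can be expressed without downloading each full language dump.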

## Tokenizer

A good tokenizer is vital: it is the first component that converts raw text into a structured format a model can understand. It determines the granularity of tokenization and ensures that important elements, such as the special tokens marking the beginning and end of a sentence, are consistently incorporated, directly affecting the model's ability to learn and generate language accurately. In `tokenizer/tokenizer.py`, we provide a class `SparrowTokenizer` to help you understand how a tokenizer is trained. The script demonstrates the complete pipeline: preprocessing raw data into a training corpus, training a BPE-based tokenizer with customized post-processing that adds special tokens, and finally saving the vocabulary and configuration files. You can explore this workflow by running:

```bash
python tokenizer/tokenizer.py --args configs/tokenizers.yaml
```
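To see what "BPE-based" means at its core, here is a minimal from-scratch sketch of the merge-learning loop: count adjacent symbol pairs across a word-frequency table and repeatedly merge the most frequent pair. This is a toy illustration of the algorithm, not the actual `SparrowTokenizer` code.

```python
from collections import Counter

def learn_bpe(word_freqs, num_merges):
    """Learn BPE merge rules from a {word: frequency} dict.
    Each word starts as a tuple of characters; at every step the
    most frequent adjacent pair is fused into a new symbol."""
    vocab = {tuple(w): f for w, f in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        # Count every adjacent symbol pair, weighted by word frequency.
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Rewrite every word with the chosen pair merged.
        new_vocab = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges

# Classic toy corpus: "est" emerges as a shared suffix token.
merges = learn_bpe({"low": 5, "lower": 2, "newest": 6, "widest": 3}, 4)
print(merges)  # first merges: ('e', 's'), then ('es', 't')
```

A real tokenizer adds byte-level pre-tokenization, an unknown-token fallback, and the special-token post-processing described above, but the merge loop is the heart of BPE training.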

## Model Architecture

## Pretraining Set

```python
from datasets import load_dataset

dataset = load_dataset("TerenceLau/sparrow", split="pretrain")
```
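The `instruct_tuning` config declared in the metadata can be loaded the same way, via `load_dataset("TerenceLau/sparrow", "instruct_tuning")`. Its `instruct`/`input`/`output` columns can then be assembled into a single training string; the Alpaca-style template below is an illustrative assumption, not a format fixed by the project.

```python
def build_prompt(example):
    """Join the instruct / input / output columns of the
    instruct_tuning config into one training string.
    The section headers are an assumed Alpaca-style template."""
    if example["input"]:
        return (f"### Instruction:\n{example['instruct']}\n\n"
                f"### Input:\n{example['input']}\n\n"
                f"### Response:\n{example['output']}")
    # Many instruction examples have an empty input field; skip the section.
    return (f"### Instruction:\n{example['instruct']}\n\n"
            f"### Response:\n{example['output']}")

sample = {"instruct": "Translate to French.", "input": "Hello", "output": "Bonjour"}
print(build_prompt(sample))
```

With `datasets`, a function like this can be applied lazily over the whole split with `dataset.map(lambda ex: {"text": build_prompt(ex)})`.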