---
language:
  - en
pretty_name: NTv3 tutorial dataset - Genome & functional tracks
tags:
  - 🧬 genomics
  - 📊 bigwig / functional tracks
  - 🎯 regression
  - ⚡ fine-tuning
  - 🧪 sequence-to-signal
  - 📝 functional-genomics
  - 🔬 bioinformatics
task_categories:
  - other
size_categories:
  - 100K<n<1M
---

<!-- license: apache-2.0 -->

# BigWig Genome Dataset

A Hugging Face dataset builder that pairs genome sequences with BigWig track data: it samples random fixed-length sequence windows from specified chromosomes or regions and returns each window's DNA sequence alongside the corresponding normalized BigWig signal values.

## Features

Each example contains:

- `sequence`: uppercase ACGT DNA sequence (string)
- `bigwig_targets`: normalized BigWig values, shape `[sequence_length, num_tracks]`
- `chrom`, `start`, `end`: genomic coordinates of the window
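For orientation, a single record with a hypothetical `sequence_length` of 8 and two tracks would look like this (illustrative values, not real data):

```python
import numpy as np

# Hypothetical example record: sequence_length=8, two BigWig tracks
example = {
    "sequence": "ACGTACGT",
    "bigwig_targets": np.zeros((8, 2), dtype=np.float32),
    "chrom": "chr1",
    "start": 1_000,
    "end": 1_008,
}

# Invariants that hold for every example
assert set(example["sequence"]) <= set("ACGT")
assert example["bigwig_targets"].shape == (len(example["sequence"]), 2)
assert example["end"] - example["start"] == len(example["sequence"])
```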

## Installation

```bash
pip install datasets transformers torch pyBigWig pyfaidx numpy
```

## Quick Start

```python
from transformers import AutoTokenizer
from datasets import load_dataset
from torch.utils.data import DataLoader

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("your-model-name")

# Load the dataset; extra keyword arguments are forwarded
# to the builder's config
dataset = load_dataset(
    "dataset_script.py",
    trust_remote_code=True,
    data_files={
        "train": ["chr1", "chr2"],
        "val": ["chr3"],
        "test": ["chr4"],
    },
    num_samples={"train": 1000, "val": 50, "test": 100},
    fasta_url="https://example.com/genome.fa",
    bigwig_urls=[
        "https://example.com/track1.bw",
        "https://example.com/track2.bw",
    ],
    sequence_length=1024,
)

# Tokenize the DNA sequences
dataset = dataset.map(
    lambda examples: {
        "tokens": tokenizer(
            examples["sequence"],
            max_length=1024,
            padding="max_length",
            truncation=True,
            return_tensors=None,
        )["input_ids"]
    },
    batched=True,
    remove_columns=["sequence"],
)
dataset = dataset.select_columns(["tokens", "bigwig_targets"]).with_format(type="torch")

# Create DataLoaders
train_loader = DataLoader(dataset["train"], batch_size=32, shuffle=True)
```
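With `padding="max_length"`, every batch yields fixed-size tensors: `tokens` of shape `[batch, 1024]` and `bigwig_targets` of shape `[batch, 1024, num_tracks]`. A minimal training-step sketch with dummy tensors standing in for a real batch and model (the MSE objective is an assumption, not something the dataset prescribes):

```python
import torch
import torch.nn.functional as F

batch_size, seq_len, num_tracks = 32, 1024, 2

# Dummy batch with the shapes a DataLoader over this dataset would yield
batch = {
    "tokens": torch.randint(0, 4096, (batch_size, seq_len)),
    "bigwig_targets": torch.rand(batch_size, seq_len, num_tracks),
}

# Stand-in for model(batch["tokens"]): per-position predictions per track
predictions = torch.zeros(batch_size, seq_len, num_tracks)

# Per-position regression against the normalized BigWig signal
loss = F.mse_loss(predictions, batch["bigwig_targets"])
```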

## Defining Splits

### Method 1: Chromosome Names

Randomly sample from entire chromosomes:

```python
data_files={
    "train": ["chr1", "chr2", "chr3"],
    "val": ["chr4"],
    "test": ["chr5"],
}
```

### Method 2: Chromosome Regions

Specify exact regions as `(chromosome, start, end)` tuples:

```python
data_files={
    "train": [
        ("chr1", 0, 10_000_000),           # First 10Mb of chr1
        ("chr1", 15_000_000, 20_000_000),  # 15-20Mb of chr1
        ("chr2", 0, 5_000_000),            # First 5Mb of chr2
    ],
    "val": [("chr1", 20_000_000, 25_000_000)],
    "test": [("chr2", 5_000_000, 10_000_000)],
}
```
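Both forms can be mixed within one split. A small helper (hypothetical, not part of the builder) that sanity-checks a `data_files` spec before loading:

```python
def validate_split_spec(data_files):
    """Check that each split maps to chromosome names or (chrom, start, end) tuples."""
    for split, entries in data_files.items():
        for entry in entries:
            if isinstance(entry, str):
                continue  # bare chromosome name: sample anywhere on it
            chrom, start, end = entry  # region tuple
            assert isinstance(chrom, str) and 0 <= start < end, (split, entry)

# Mixed spec: a region tuple and a bare chromosome name in the same split
validate_split_spec({
    "train": [("chr1", 0, 10_000_000), "chr2"],
    "val": [("chr1", 20_000_000, 25_000_000)],
})
```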

## Configuration

Required parameters:

- `data_files` (dict): split names → chromosome names or region tuples
- `num_samples` (dict): split names → number of examples to generate
- `fasta_url` (str): URL to the reference genome FASTA (auto-downloaded)
- `bigwig_urls` (list): URLs to BigWig track files (auto-downloaded)
- `sequence_length` (int): length of each sequence window in base pairs

Optional parameters:

- `data_dir` (str): directory for cached files (default: `"data_cache"`)
- `max_workers` (int): maximum parallel download workers (default: 10)

## How It Works

1. Downloads the FASTA and BigWig files in parallel to `data_cache/` (or a custom `data_dir`) on the first run
2. Normalizes each BigWig track: computes the per-track mean, scales values by it, and clips anything above 10× the mean
3. Samples random sequence windows from the specified chromosomes/regions
4. Extracts DNA sequences and the corresponding BigWig signal values
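The window sampling in step 3 can be sketched as follows, assuming regions are `(chrom, start, end)` tuples and weighting each region by its number of valid window start positions (the builder's actual sampling code may differ):

```python
import random

def sample_window(regions, sequence_length, seed=None):
    """Pick one random fixed-length window from the given (chrom, start, end) regions."""
    rng = random.Random(seed)
    # Weight each region by how many valid window starts it contains
    weights = [(end - start) - sequence_length + 1 for _, start, end in regions]
    chrom, start, end = rng.choices(regions, weights=weights, k=1)[0]
    win_start = rng.randint(start, end - sequence_length)
    return chrom, win_start, win_start + sequence_length

chrom, s, e = sample_window([("chr1", 0, 10_000), ("chr2", 0, 5_000)], 1024, seed=0)
```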

BigWig normalization ensures tracks with different signal ranges are comparable.
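That normalization scheme can be sketched like this (an illustration of the description above, not the builder's exact code):

```python
import numpy as np

def normalize_track(values, clip_factor=10.0):
    """Scale a track by its mean, then clip scaled values at clip_factor."""
    values = np.asarray(values, dtype=np.float64)
    mean = values.mean()
    if mean == 0:
        return values  # all-zero track: nothing to scale
    scaled = values / mean
    # Values above clip_factor x mean (i.e. > clip_factor after scaling) are clipped
    return np.clip(scaled, None, clip_factor)

track = np.array([0.0, 1.0, 2.0, 5.0, 100.0])
normalized = normalize_track(track)
```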

## Notes

- Files are cached in the `data_cache/` directory (configurable via `data_dir`)
- Downloads run in parallel (up to 10 workers by default)
- Sequences are randomly sampled (set a random seed for reproducibility)
- Ensure `sequence_length` matches the tokenizer's `max_length` for consistent batching
- No custom collate function is needed when using `padding="max_length"`
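Since windows are drawn at random, seeding the global RNGs before loading is one way to make runs repeatable (whether the builder reads these global seeds is an assumption; it may expose its own seed parameter instead):

```python
import random
import numpy as np

SEED = 42
random.seed(SEED)     # Python's global RNG
np.random.seed(SEED)  # NumPy's global RNG

# Re-seeding makes draws repeatable, which is the property
# reproducible window sampling relies on
random.seed(SEED)
a = random.randint(0, 10**6)
random.seed(SEED)
b = random.randint(0, 10**6)
assert a == b
```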