---
license: odc-by
language:
- en
tags:
- debug
- fineweb
- sample
size_categories:
- n<1K
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
dataset_info:
  features:
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: dump
    dtype: string
  - name: url
    dtype: string
  - name: date
    dtype: string
  - name: file_path
    dtype: string
  - name: language
    dtype: string
  - name: language_score
    dtype: float64
  - name: token_count
    dtype: int64
  splits:
  - name: train
    num_bytes: 25549
    num_examples: 10
  - name: validation
    num_bytes: 14712
    num_examples: 8
  download_size: 42891
  dataset_size: 40261
---
# femto-fineweb

A tiny subset of FineWeb (10 train samples, 8 validation samples) designed specifically for debugging purposes.
## Purpose

This dataset contains only a handful of samples from the FineWeb dataset (10 train, 8 validation), making it ideal for:
- Quick debugging of data pipelines
- Testing code without downloading large datasets
- Rapid prototyping and development
- CI/CD testing
## Dataset Structure

The dataset has the same structure as the original FineWeb dataset:

- `text`: The text content
- `id`: Unique identifier
- `dump`: CommonCrawl dump identifier
- `url`: Source URL
- `date`: Dump date
- `file_path`: Path in the dump
- `language`: Language code
- `language_score`: Language detection confidence
- `token_count`: Number of tokens
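For reference, a single record looks roughly like the sketch below. Every field value here is invented for illustration (the IDs, URLs, and counts are not taken from the dataset); only the field names and dtypes match the schema above.

```python
# A mock femto-fineweb record illustrating the schema.
# All values are invented placeholders for demonstration.
example = {
    "text": "Sample web page text goes here.",
    "id": "example-record-id",
    "dump": "CC-MAIN-2024-10",
    "url": "https://example.com/page",
    "date": "2024-02-01T00:00:00Z",
    "file_path": "path/to/warc/file",
    "language": "en",
    "language_score": 0.98,
    "token_count": 123,
}

# language_score is a float64 column and token_count an int64 column,
# so the corresponding Python values are float and int.
print(sorted(example.keys()))
```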
## Usage

```python
from datasets import load_dataset

dataset = load_dataset("Butanium/femto-fineweb", split="train")
print(f"Dataset size: {len(dataset)} samples")
```
## Source
This dataset is derived from FineWeb (sample-10BT) and inherits its ODC-BY license.
## Citation
If you use FineWeb in your research, please cite:
```bibtex
@software{penedo2024fineweb,
  author = {Penedo, Guilherme and Kydlíček, Hynek and Cappelli, Anton and Wolf, Thomas and Sasko, Mario},
  title = {FineWeb: decanting the web for the finest text data at scale},
  month = may,
  year = 2024,
  url = {https://huggingface.co/datasets/HuggingFaceFW/fineweb}
}
```