---
license: odc-by
language:
- en
tags:
- debug
- fineweb
- sample
size_categories:
- n<1K
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
dataset_info:
  features:
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: dump
    dtype: string
  - name: url
    dtype: string
  - name: date
    dtype: string
  - name: file_path
    dtype: string
  - name: language
    dtype: string
  - name: language_score
    dtype: float64
  - name: token_count
    dtype: int64
  splits:
  - name: train
    num_bytes: 25549
    num_examples: 10
  - name: validation
    num_bytes: 14712
    num_examples: 8
  download_size: 42891
  dataset_size: 40261
---

# femto-fineweb

A tiny subset of [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) (10 train and 8 validation samples) designed specifically for debugging purposes.

## Purpose

This dataset contains a handful of samples from FineWeb (10 in the train split, 8 in validation), making it ideal for:
- Quick debugging of data pipelines
- Testing code without downloading large datasets
- Rapid prototyping and development
- CI/CD testing

## Dataset Structure

The dataset has the same structure as the original FineWeb dataset:
- `text`: The text content
- `id`: Unique identifier
- `dump`: CommonCrawl dump identifier
- `url`: Source URL
- `date`: Dump date
- `file_path`: Path in the dump
- `language`: Language code
- `language_score`: Language detection confidence
- `token_count`: Number of tokens
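
As a quick sanity check, a single record can be sketched as a plain Python dict matching this schema. All values below are made-up placeholders for illustration, not actual dataset contents:

```python
# Illustrative record matching the femto-fineweb schema.
# Every value here is a placeholder, not a real dataset row.
example = {
    "text": "Hello world, this is a sample web page.",
    "id": "<urn:uuid:00000000-0000-0000-0000-000000000000>",
    "dump": "CC-MAIN-2024-10",
    "url": "https://example.com/page",
    "date": "2024-02-01T00:00:00Z",
    "file_path": "example/path/to/file.warc.gz",
    "language": "en",
    "language_score": 0.98,
    "token_count": 9,
}

# Types mirror the dataset_info block above: strings everywhere,
# except language_score (float64) and token_count (int64).
assert isinstance(example["language_score"], float)
assert isinstance(example["token_count"], int)
```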

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("Butanium/femto-fineweb", split="train")
print(f"Dataset size: {len(dataset)} samples")
```

## Source

This dataset is derived from [FineWeb (sample-10BT)](https://huggingface.co/datasets/HuggingFaceFW/fineweb) and inherits its ODC-BY license.

## Citation

If you use FineWeb in your research, please cite:

```bibtex
@software{penedo2024fineweb,
  author = {Penedo, Guilherme and Kydlíček, Hynek and Cappelli, Anton and Wolf, Thomas and Sasko, Mario},
  title = {FineWeb: decanting the web for the finest text data at scale},
  month = may,
  year = 2024,
  url = {https://huggingface.co/datasets/HuggingFaceFW/fineweb}
}
```