---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 5485226321
    num_examples: 1000000
  download_size: 3353329992
  dataset_size: 5485226321
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: odc-by
task_categories:
- text-generation
- fill-mask
- feature-extraction
language:
- en
size_categories:
- 100K<n<1M
---

# fineweb "longish" 1M


1M samples from fineweb, drawn with a fresh random seed relative to previous samples and filtered to "longish" documents (see the sketch below):

- min 512 GPT-4 tiktoken tokens
- max 8192 GPT-4 tiktoken tokens
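
A minimal sketch of this filtering, assuming the `HuggingFaceFW/fineweb` source and `tiktoken`'s `cl100k_base` encoding for GPT-4 token counts; this is not the exact build script:

```python
# Sketch (assumed approach): keep fineweb rows whose GPT-4 (cl100k_base)
# token count is between 512 and 8192, then take 1M of them with a fresh seed.
import tiktoken
from datasets import load_dataset

enc = tiktoken.get_encoding("cl100k_base")  # GPT-4 tiktoken encoding

def in_length_window(example, min_tokens=512, max_tokens=8192):
    # disallowed_special=() avoids errors if web text contains special-token strings
    n_tokens = len(enc.encode(example["text"], disallowed_special=()))
    return min_tokens <= n_tokens <= max_tokens

# stream to avoid downloading all of fineweb; seed value is illustrative
ds = load_dataset("HuggingFaceFW/fineweb", split="train", streaming=True)
longish = ds.shuffle(seed=42, buffer_size=10_000).filter(in_length_window)
sample = longish.take(1_000_000)
```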




Token count statistics, computed with `BEE-spoke-data/claude-tokenizer`:

```
          token_count
count  1000000.000000
mean      1218.231641
std        935.733312
min        139.000000
25%        683.000000
50%        905.000000
75%       1350.000000
max       9550.000000
```


- Total count: 1218.23 M tokens
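
A minimal sketch of how these statistics could be reproduced, assuming `transformers` for the tokenizer; the dataset repo id below is illustrative, not confirmed:

```python
# Sketch: tokenize each row with the claude tokenizer and summarize the counts.
import pandas as pd
from datasets import load_dataset
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("BEE-spoke-data/claude-tokenizer")
ds = load_dataset("BEE-spoke-data/fineweb-1M_longish", split="train")  # assumed repo id

counts = [len(tok.encode(text)) for text in ds["text"]]
stats = pd.Series(counts, name="token_count")
print(stats.describe())
print(f"Total: {stats.sum() / 1e6:.2f} M tokens")
```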