eoinf committed (verified)
Commit 454ba30 · Parent(s): 2d5553d

Upload README.md with huggingface_hub

Files changed (1): README.md (+69 -15)

README.md CHANGED
---
license: mit
task_categories:
- text-generation
language:
- en
tags:
- tokenized
- language-modeling
size_categories:
- 100K<n<1M
---
# Dataset Card for eoinf/pile_llama_800m

## Original Dataset

Original dataset: monology/pile-uncopyrighted

## Dataset Details

- **Total Tokens**: 816,116,736
- **Total Sequences**: 796,989
- **Context Length**: 1024 tokens
- **Tokenizer**: meta-llama/Llama-2-7b-hf
- **Format**: each example contains a single field, `tokens`, holding a list of 1024 token IDs

## Preprocessing

Each document was:

1. Tokenized with the meta-llama/Llama-2-7b-hf tokenizer
2. Prefixed with a BOS (beginning-of-sequence) token
3. Suffixed with an EOS (end-of-sequence) token
4. Packed into fixed-length sequences of 1024 tokens

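The packing step above can be sketched as follows. This is a minimal illustration, not the actual preprocessing script: the BOS/EOS IDs (1 and 2) are the Llama-2 defaults, `docs` stands in for already-tokenized documents, and dropping the final short remainder is an assumption.

```python
def pack_documents(docs, seq_len=1024, bos_id=1, eos_id=2):
    """Concatenate BOS + doc + EOS into one stream and cut it into
    fixed-length sequences. A trailing remainder shorter than seq_len
    is dropped (assumed behavior)."""
    buffer = []
    sequences = []
    for doc in docs:
        buffer.extend([bos_id] + doc + [eos_id])
        while len(buffer) >= seq_len:
            sequences.append(buffer[:seq_len])
            buffer = buffer[seq_len:]
    return sequences

# Toy "documents" of token IDs, packed at seq_len=4:
packed = pack_documents([[5, 6, 7], [8, 9]], seq_len=4)
print(packed)  # [[1, 5, 6, 7], [2, 1, 8, 9]]
```

Note that sequence boundaries do not align with document boundaries: a packed sequence can start or end mid-document, with BOS/EOS tokens marking the document breaks inside it.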
## Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("eoinf/pile_llama_800m")

# Access training data
train_data = dataset["train"]
print(train_data[0]["tokens"])  # First sequence
```

## Use with PyTorch

```python
import torch
from datasets import load_dataset
from torch.utils.data import DataLoader

dataset = load_dataset("eoinf/pile_llama_800m", split="train")

# Convert to PyTorch tensors
dataset.set_format(type="torch", columns=["tokens"])

# Create DataLoader
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)

for batch in dataloader:
    tokens = batch["tokens"]  # Shape: (batch_size, 1024)
    # Your training code here
```

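For causal language modeling, the training targets are usually the inputs shifted left by one position, so the model predicts each token from the ones before it. A minimal sketch (the helper name is illustrative; the same slicing works on the `batch["tokens"]` tensors from the loop above):

```python
def make_lm_pair(sequence):
    """Split a packed sequence into next-token-prediction inputs and labels.

    inputs: all tokens except the last; labels: all tokens except the first,
    so labels[i] is the prediction target for inputs[i].
    """
    return sequence[:-1], sequence[1:]

inputs, labels = make_lm_pair([1, 5, 6, 7])
print(inputs, labels)  # [1, 5, 6] [5, 6, 7]
```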
## Dataset Statistics

- Total storage size: ~1.63 GB (as uint16)
- Sequences per batch folder: 10,000
- Number of batch folders: 80
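These figures are internally consistent; a quick sanity check, assuming 2 bytes per token ID (Llama-2's 32,000-entry vocabulary fits in an unsigned 16-bit integer, as the uint16 note implies):

```python
sequences = 796_989
context_len = 1024

# Total tokens = sequences × context length
total_tokens = sequences * context_len
print(total_tokens)  # 816116736

# At 2 bytes per uint16 token ID:
size_gb = total_tokens * 2 / 1e9
print(f"{size_gb:.2f} GB")  # 1.63 GB
```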