eoinf committed
Commit 9d49519 · verified · 1 Parent(s): 1c63cfc

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +50 -17
README.md CHANGED
@@ -1,17 +1,50 @@
- ---
- dataset_info:
-   features:
-   - name: tokens
-     list: int64
-   splits:
-   - name: train
-     num_bytes: 409800
-     num_examples: 50
-   download_size: 114878
-   dataset_size: 409800
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
- ---
+ # Dataset Card for eoinf/tokenized_dataset_test3
+
+ ## Original dataset
+ Built from [monology/pile-uncopyrighted](https://huggingface.co/datasets/monology/pile-uncopyrighted).
+
+ ## Dataset Details
+
+ - **Total Tokens**: 51,200
+ - **Total Sequences**: 50
+ - **Context Length**: 1024 tokens
+ - **Tokenizer**: EleutherAI/gpt-neox-20b
+ - **Format**: Each example contains a single field `tokens` holding a list of 1024 token IDs
+
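As a quick sanity check on the figures above (an editorial addition, not part of the original card): each record should hold exactly 1024 token IDs, and 50 sequences × 1024 tokens gives the stated 51,200 total.

```python
from datasets import load_dataset

# Sanity check (editorial addition): verify the shapes the card claims.
ds = load_dataset("eoinf/tokenized_dataset_test3", split="train")
assert len(ds) == 50
assert all(len(example["tokens"]) == 1024 for example in ds)
print(len(ds) * 1024)  # 51200
```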
+ ## Preprocessing
+
+ Each document was:
+ 1. Tokenized using the EleutherAI/gpt-neox-20b tokenizer
+ 2. Prefixed with a BOS (beginning-of-sequence) token
+ 3. Suffixed with an EOS (end-of-sequence) token
+ 4. Packed into fixed-length sequences of 1024 tokens (a sketch of this pipeline follows below)
+
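The card does not ship the preprocessing code itself. Below is a minimal sketch of steps 1-4, assuming a greedy concatenate-and-chunk strategy in which all documents are streamed into one buffer and any trailing remainder shorter than 1024 tokens is dropped; both of those details are assumptions, and `pack_documents` is a hypothetical helper name.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

def pack_documents(texts, context_length=1024):
    """Hypothetical packing helper: tokenize, wrap with BOS/EOS, chunk."""
    buffer = []
    for text in texts:
        ids = tokenizer(text)["input_ids"]
        # Steps 2-3: wrap each document with the BOS and EOS token IDs.
        buffer.extend([tokenizer.bos_token_id] + ids + [tokenizer.eos_token_id])
    # Step 4: cut the stream into fixed 1024-token sequences; the trailing
    # partial chunk is dropped (assumption, not confirmed by the card).
    return [
        buffer[i : i + context_length]
        for i in range(0, len(buffer) - context_length + 1, context_length)
    ]

sequences = pack_documents(["first document", "second document"])
```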
+ ## Usage
+ ```python
+ from datasets import load_dataset
+
+ # Load the dataset
+ dataset = load_dataset("eoinf/tokenized_dataset_test3")
+
+ # Access training data
+ train_data = dataset["train"]
+ print(train_data[0]["tokens"])  # First sequence
+ ```
+
+ ## Use with PyTorch
+ ```python
+ import torch
+ from datasets import load_dataset
+ from torch.utils.data import DataLoader
+
+ dataset = load_dataset("eoinf/tokenized_dataset_test3", split="train")
+
+ # Convert the tokens column to PyTorch tensors
+ dataset.set_format(type="torch", columns=["tokens"])
+
+ # Fixed 1024-token sequences batch cleanly with no padding or custom collation
+ dataloader = DataLoader(dataset, batch_size=32, shuffle=True)
+
+ for batch in dataloader:
+     tokens = batch["tokens"]  # shape (batch_size, 1024); the last batch may be smaller
+ ```
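The loop above stops at reading batches. For causal-LM training one would typically shift each packed sequence by one position to build model inputs and next-token targets; this step is an editorial sketch, not part of the original card, and `next_token_batch` is a hypothetical helper name.

```python
import torch

def next_token_batch(tokens: torch.Tensor):
    """Split a (batch, 1024) block of token IDs into inputs and targets."""
    inputs = tokens[:, :-1]  # positions 0..1022 are fed to the model
    labels = tokens[:, 1:]   # positions 1..1023 are the next-token targets
    return inputs, labels

# Dummy batch with the dataset's fixed shape (IDs drawn from an arbitrary range):
inputs, labels = next_token_batch(torch.randint(0, 50_000, (32, 1024)))
```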