eoinf committed
Commit 06320c6 · verified · 1 Parent(s): 2b10e9e

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +57 -10
README.md CHANGED
```diff
@@ -2,16 +2,63 @@
 dataset_info:
   features:
   - name: tokens
-    list: int64
+    sequence: int32
   splits:
   - name: train
-    num_bytes: 409800
-    num_examples: 50
-  download_size: 109371
-  dataset_size: 409800
-configs:
-- config_name: default
-  data_files:
-  - split: train
-    path: data/train-*
+task_categories:
+- text-generation
+language:
+- en
+size_categories:
+- 1M<n<10M
 ---
```
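The frontmatter change swaps the `tokens` feature from `list: int64` to `sequence: int32`. In the `datasets` library, that YAML declaration corresponds to roughly the following feature definition (a sketch, not taken from the commit):

```python
from datasets import Features, Sequence, Value

# `sequence: int32` in the card's YAML maps to a Sequence of int32 values
features = Features({"tokens": Sequence(Value("int32"))})
```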
The remainder of the hunk adds the new README body, shown below.

## Original dataset

This dataset was derived from monology/pile-uncopyrighted.

## Dataset Details

- **Total Tokens**: 51,200
- **Total Sequences**: 50
- **Context Length**: 1024 tokens
- **Tokenizer**: meta-llama/Llama-2-7b-hf
- **Format**: Each example contains a single field `tokens` with a list of 1024 token IDs
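As a quick sanity check against the details above, the schema and counts can be inspected after loading; the expected values in the comments follow from this card, not from running the code:

```python
from datasets import load_dataset

dataset = load_dataset("eoinf/tokenized_dataset_test", split="train")

print(dataset.features)           # expected: {'tokens': Sequence(Value('int32'))}
print(len(dataset))               # expected: 50 sequences
print(len(dataset[0]["tokens"]))  # expected: 1024 tokens
```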
## Preprocessing

Each document from monology/pile-uncopyrighted was processed as follows (a code sketch follows the list):
1. Tokenized using the meta-llama/Llama-2-7b-hf tokenizer
2. Prefixed with a BOS (beginning-of-sequence) token
3. Suffixed with an EOS (end-of-sequence) token
4. Packed into fixed-length sequences of 1024 tokens
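A minimal sketch of this pipeline, assuming a greedy packing scheme (documents wrapped in BOS/EOS, concatenated, and split into 1024-token chunks); how the original preprocessing handled the final partial chunk is not stated in the card:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
CONTEXT_LENGTH = 1024

def pack_documents(documents):
    """Tokenize each document, wrap it in BOS/EOS, and pack the token
    stream into fixed-length sequences of CONTEXT_LENGTH IDs."""
    buffer, sequences = [], []
    for doc in documents:
        ids = tokenizer.encode(doc, add_special_tokens=False)
        buffer += [tokenizer.bos_token_id] + ids + [tokenizer.eos_token_id]
        # Emit full chunks as soon as enough tokens accumulate
        while len(buffer) >= CONTEXT_LENGTH:
            sequences.append(buffer[:CONTEXT_LENGTH])
            buffer = buffer[CONTEXT_LENGTH:]
    # Tokens left over here form a short final chunk; the card does not
    # say whether it was padded or dropped
    return sequences
```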
## Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("eoinf/tokenized_dataset_test")

# Access training data
train_data = dataset["train"]
print(train_data[0]["tokens"])  # First sequence
```
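Because the IDs come from the Llama-2 tokenizer, a sequence can be decoded back to text for inspection. A sketch (note that meta-llama/Llama-2-7b-hf is a gated repository, so downloading the tokenizer requires accepting its license):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("eoinf/tokenized_dataset_test", split="train")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Decode the first packed sequence; BOS/EOS tokens inserted during
# packing appear as <s> and </s> in the decoded text
text = tokenizer.decode(dataset[0]["tokens"])
print(text[:500])
```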
## Use with PyTorch

```python
import torch
from datasets import load_dataset
from torch.utils.data import DataLoader

dataset = load_dataset("eoinf/tokenized_dataset_test", split="train")

# Convert to PyTorch tensors
dataset.set_format(type="torch", columns=["tokens"])

# Create DataLoader; all sequences are exactly 1024 tokens,
# so the default collation works without padding
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)

for batch in dataloader:
    tokens = batch["tokens"]  # shape: (batch_size, 1024)
```
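For the text-generation task listed in the frontmatter, batches are typically split into inputs and next-token targets by shifting one position. A sketch using a dummy batch shaped like this dataset's sequences (the `model` call in the final comment is hypothetical):

```python
import torch

def next_token_batch(tokens: torch.Tensor):
    """Shift a batch of packed sequences by one position to get
    causal-LM inputs and next-token targets."""
    inputs = tokens[:, :-1]   # (batch, 1023)
    targets = tokens[:, 1:]   # (batch, 1023)
    return inputs, targets

# Dummy batch with this dataset's shape; Llama-2's vocab size is 32,000
tokens = torch.randint(0, 32000, (32, 1024))
inputs, targets = next_token_batch(tokens)
# A training step would then compute something like:
# loss = F.cross_entropy(model(inputs).logits.view(-1, 32000), targets.view(-1))
```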