eoinf committed on
Commit d71926c · verified · 1 Parent(s): 1c00773

Upload README.md with huggingface_hub

Files changed (1): README.md (+2 −20)
README.md CHANGED
@@ -1,20 +1,4 @@
----
-configs:
-- config_name: default
-  data_files:
-  - split: train
-    path: data/train-*
-dataset_info:
-  features:
-  - name: tokens
-    list: int64
-  splits:
-  - name: train
-    num_bytes: 409800
-    num_examples: 50
-  download_size: 109371
-  dataset_size: 409800
----
+# Dataset Card for eoinf/tokenized_dataset_test2
 
 ## Original dataset
 Original dataset: monology/pile-uncopyrighted
@@ -29,14 +13,13 @@ Original dataset: monology/pile-uncopyrighted
 
 ## Preprocessing
 
-Each document from was:
+Each document was:
 1. Tokenized using the meta-llama/Llama-2-7b-hf tokenizer
 2. Prefixed with a BOS (beginning of sequence) token
 3. Suffixed with an EOS (end of sequence) token
 4. Packed into fixed-length sequences of 1024 tokens
 
 ## Usage
-
 ```python
 from datasets import load_dataset
@@ -49,7 +32,6 @@ print(train_data[0]["tokens"]) # First sequence
 ```
 
 ## Use with PyTorch
-
 ```python
 import torch
 from datasets import load_dataset
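The four preprocessing steps in the card map to a simple concatenate-and-chunk scheme. A minimal sketch, assuming Llama-2's usual special-token ids (BOS = 1, EOS = 2) and assuming the trailing remainder shorter than the sequence length is dropped; `pack_documents` and the toy token ids are illustrative, not the actual pipeline:

```python
# Sketch of the packing scheme described in the card: wrap each
# tokenized document in BOS/EOS, concatenate everything into one
# stream, then slice into fixed-length sequences.
BOS, EOS = 1, 2   # Llama-2 special token ids
SEQ_LEN = 1024

def pack_documents(tokenized_docs, seq_len=SEQ_LEN, bos=BOS, eos=EOS):
    """Concatenate bos + tokens + eos per document, then split the
    stream into fixed-length chunks, dropping any short remainder."""
    stream = []
    for tokens in tokenized_docs:
        stream.extend([bos] + list(tokens) + [eos])
    return [stream[i:i + seq_len]
            for i in range(0, len(stream) - seq_len + 1, seq_len)]

# Toy example with fake "token ids" instead of real tokenizer output,
# and a short seq_len so the chunking is visible.
packed = pack_documents([[5, 6, 7], [8, 9]], seq_len=4)
print(packed)  # [[1, 5, 6, 7], [2, 1, 8, 9]]
```

Note that with this scheme a sequence boundary can fall mid-document, which is why each stored sequence is exactly 1024 tokens long.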
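The `## Use with PyTorch` snippet is truncated in this diff. One plausible way to batch the fixed-length `tokens` lists with a `DataLoader`, sketched with stand-in rows rather than the real dataset (the `collate` helper and the fake `rows` are hypothetical):

```python
import torch
from torch.utils.data import DataLoader

# Stand-in rows shaped like the dataset's records: each is a dict
# with a fixed-length (1024) list of token ids under "tokens".
rows = [{"tokens": list(range(i, i + 1024))} for i in range(6)]

def collate(batch):
    # Stack the fixed-length token lists into a (batch, 1024) LongTensor.
    return torch.tensor([row["tokens"] for row in batch], dtype=torch.long)

loader = DataLoader(rows, batch_size=2, collate_fn=collate)
batch = next(iter(loader))
print(batch.shape)  # torch.Size([2, 1024])
```

Because every sequence is padded-free and exactly 1024 tokens, no attention mask or dynamic padding is needed at collation time.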