eoinf committed
Commit 5f046e5 · verified · 1 Parent(s): 97a7cc7

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +12 -23
README.md CHANGED
@@ -1,47 +1,37 @@
 ---
-dataset_info:
-  features:
-  - name: tokens
-    list: int64
-  splits:
-  - name: train
-    num_bytes: 467172
-    num_examples: 57
-  download_size: 97659
-  dataset_size: 467172
+license: mit
 task_categories:
 - text-generation
 language:
 - en
+tags:
+- tokenized
+- language-modeling
 size_categories:
-- 1M<n<10M
-configs:
-- config_name: default
-  data_files:
-  - split: train
-    path: data/train-*
+- n<1K
 ---
+# Dataset Card for eoinf/tokenized_dataset_test
+
 ## Original dataset
 Original dataset: monology/pile-uncopyrighted

 ## Dataset Details

-- **Total Tokens**: 51,200
-- **Total Sequences**: 50
+- **Total Tokens**: 58,368
+- **Total Sequences**: 57
 - **Context Length**: 1024 tokens
-- **Tokenizer**: meta-llama/Llama-2-7b-hf
+- **Tokenizer**: eoinf/pile_tokenizer_4096
 - **Format**: Each example contains a single field `tokens` with a list of 1024 token IDs

 ## Preprocessing

-Each document from was:
-1. Tokenized using the meta-llama/Llama-2-7b-hf tokenizer
+Each document was:
+1. Tokenized using the eoinf/pile_tokenizer_4096 tokenizer
 2. Prefixed with a BOS (beginning of sequence) token
 3. Suffixed with an EOS (end of sequence) token
 4. Packed into fixed-length sequences of 1024 tokens

 ## Usage
-
 ```python
 from datasets import load_dataset

@@ -54,7 +44,6 @@ print(train_data[0]["tokens"]) # First sequence
 ```

 ## Use with PyTorch
-
 ```python
 import torch
 from datasets import load_dataset
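The four preprocessing steps in the card (tokenize, prefix BOS, suffix EOS, pack into 1024-token sequences) can be sketched as below. This is a minimal sketch, not the uploader's actual script: the tokenizer is stubbed out with pre-tokenized ID lists, and the BOS/EOS IDs are placeholders rather than the real special tokens of eoinf/pile_tokenizer_4096.

```python
# Sketch of the packing procedure described in the card: each document is
# tokenized, wrapped in BOS/EOS, the resulting token stream is concatenated,
# and the stream is cut into fixed-length sequences of 1024 IDs.
# BOS_ID and EOS_ID are placeholder values, not the real tokenizer's IDs.

BOS_ID, EOS_ID = 1, 2
CONTEXT_LEN = 1024

def pack_documents(token_lists, context_len=CONTEXT_LEN):
    """Concatenate BOS + tokens + EOS per document, then split the stream
    into fixed-length sequences, dropping any incomplete tail."""
    stream = []
    for tokens in token_lists:
        stream.append(BOS_ID)
        stream.extend(tokens)
        stream.append(EOS_ID)
    n_full = len(stream) // context_len
    return [stream[i * context_len:(i + 1) * context_len] for i in range(n_full)]

# Toy example: two "documents" of already-tokenized IDs.
docs = [[10] * 600, [11] * 1500]
seqs = pack_documents(docs)
print(len(seqs), len(seqs[0]))  # 2 1024
```

Dropping the incomplete tail is consistent with the card's totals: 57 sequences × 1024 tokens each is exactly the stated 58,368 total tokens.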