eoinf committed
Commit 8f476e5 · verified · 1 Parent(s): d027060

Upload README.md with huggingface_hub

Files changed (1): README.md (+2 −19)
README.md CHANGED

```diff
@@ -1,20 +1,3 @@
----
-configs:
-- config_name: default
-  data_files:
-  - split: train
-    path: data/train-*
-dataset_info:
-  features:
-  - name: tokens
-    list: int64
-  splits:
-  - name: train
-    num_bytes: 409800
-    num_examples: 50
-  download_size: 109371
-  dataset_size: 409800
----
 # Dataset Card for eoinf/tokenized_dataset_test3
 
 ## Original dataset
@@ -25,13 +8,13 @@ Original dataset: monology/pile-uncopyrighted
 - **Total Tokens**: 51,200
 - **Total Sequences**: 50
 - **Context Length**: 1024 tokens
-- **Tokenizer**: EleutherAI/gpt-neox-20b
+- **Tokenizer**: meta-llama/Llama-2-7b-hf
 - **Format**: Each example contains a single field `tokens` with a list of 1024 token IDs
 
 ## Preprocessing
 
 Each document was:
-1. Tokenized using the EleutherAI/gpt-neox-20b tokenizer
+1. Tokenized using the meta-llama/Llama-2-7b-hf tokenizer
 2. Prefixed with a BOS (beginning of sequence) token
 3. Suffixed with an EOS (end of sequence) token
 4. Packed into fixed-length sequences of 1024 tokens
```
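
The four preprocessing steps in the card can be sketched as follows. This is a minimal illustration with a stand-in word-splitting tokenizer and assumed BOS/EOS IDs; the real pipeline used the meta-llama/Llama-2-7b-hf tokenizer (e.g. via `transformers.AutoTokenizer`), which is gated and not reproduced here.

```python
# Sketch of the described packing scheme: per document, prepend BOS and
# append EOS, concatenate everything into one token stream, then slice
# the stream into fixed-length 1024-token sequences.
# BOS_ID/EOS_ID and `toy_tokenize` are stand-ins, not the real tokenizer.

BOS_ID, EOS_ID = 1, 2   # assumption for illustration only
CONTEXT_LENGTH = 1024

def toy_tokenize(text):
    # Placeholder tokenizer: one "token ID" per whitespace-separated word.
    return [hash(w) % 30000 for w in text.split()]

def pack_documents(docs, context_length=CONTEXT_LENGTH):
    """Concatenate BOS + tokens + EOS per document, then split the stream
    into fixed-length sequences, dropping any trailing remainder."""
    stream = []
    for doc in docs:
        stream.extend([BOS_ID] + toy_tokenize(doc) + [EOS_ID])
    n_full = len(stream) // context_length
    return [
        {"tokens": stream[i * context_length:(i + 1) * context_length]}
        for i in range(n_full)
    ]

examples = pack_documents(["some document " * 400, "another one " * 600])
print(len(examples), len(examples[0]["tokens"]))  # 1 1024
```

Note that packing concatenates documents across sequence boundaries, so a single 1024-token example may contain the tail of one document and the head of the next, with EOS/BOS markers in between.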