Upload README.md with huggingface_hub
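For context, a minimal sketch of how a card like this is typically pushed with the `huggingface_hub` client. The repo id and commit message are taken from this page; the local filename and prior authentication (e.g. `huggingface-cli login`) are assumptions:

```python
# Sketch: upload a dataset card to the Hub with huggingface_hub.
# Assumes you are already authenticated and README.md exists locally.
from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="README.md",     # local card file (assumed name)
    path_in_repo="README.md",        # destination path inside the repo
    repo_id="eoinf/wikitext_llama",  # dataset repo named in the card below
    repo_type="dataset",             # a dataset repo, not a model repo
    commit_message="Upload README.md with huggingface_hub",
)
```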
README.md CHANGED
@@ -8,22 +8,7 @@ tags:
 - tokenized
 - language-modeling
 size_categories:
-
-configs:
-- config_name: default
-  data_files:
-  - split: train
-    path: data/train-*
-dataset_info:
-  features:
-  - name: tokens
-    list: int64
-  splits:
-  - name: train
-    num_bytes: 1143891132
-    num_examples: 139567
-  download_size: 236204536
-  dataset_size: 1143891132
+- 100K<n<1M
 ---
 # Dataset Card for eoinf/wikitext_llama

@@ -32,8 +17,8 @@ Original dataset: Salesforce/wikitext

 ## Dataset Details

-- **Total Tokens**:
-- **Total Sequences**:
+- **Total Tokens**: 142,916,608
+- **Total Sequences**: 139,567
 - **Context Length**: 1024 tokens
 - **Tokenizer**: meta-llama/Llama-2-7b-hf
 - **Format**: Each example contains a single field `tokens` with a list of 1024 token IDs