---
license: mit
task_categories:
- text-generation
language:
- en
tags:
- tokenized
- language-modeling
size_categories:
- n<1K
---

# Dataset Card for eoinf/tokenized_dataset_test

## Original dataset

This dataset is a tokenized, packed version of [monology/pile-uncopyrighted](https://huggingface.co/datasets/monology/pile-uncopyrighted).

## Dataset Details

- **Total Tokens**: 58,368
- **Total Sequences**: 57
- **Context Length**: 1024 tokens
- **Tokenizer**: eoinf/pile_tokenizer_4096
- **Format**: Each example contains a single field `tokens` with a list of 1024 token IDs

These figures can be verified directly, as shown below.
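
A minimal check with the `datasets` library (the expected values in the comments come from the list above):

```python
from datasets import load_dataset

train = load_dataset("eoinf/tokenized_dataset_test", split="train")

print(len(train))                              # 57 sequences
print(len(train[0]["tokens"]))                 # 1024 token IDs per sequence
print(sum(len(ex["tokens"]) for ex in train))  # 58368 tokens in total
```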

## Preprocessing

Each document was:

1. Tokenized using the eoinf/pile_tokenizer_4096 tokenizer
2. Prefixed with a BOS (beginning of sequence) token
3. Suffixed with an EOS (end of sequence) token
4. Packed into fixed-length sequences of 1024 tokens

A sketch of this pipeline follows.
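
The sketch below is a minimal reconstruction of that pipeline, not the exact script used to build the dataset. It assumes the tokenizer loads via `transformers.AutoTokenizer` and exposes `bos_token_id` and `eos_token_id`; how the trailing partial sequence was handled is not documented, so this version simply drops it.

```python
from transformers import AutoTokenizer

SEQ_LEN = 1024
tokenizer = AutoTokenizer.from_pretrained("eoinf/pile_tokenizer_4096")

def pack_documents(texts):
    """Tokenize each document, wrap it in BOS/EOS, concatenate everything
    into one token stream, then slice it into SEQ_LEN-token sequences."""
    stream = []
    for text in texts:
        ids = tokenizer(text, add_special_tokens=False)["input_ids"]
        stream.append(tokenizer.bos_token_id)
        stream.extend(ids)
        stream.append(tokenizer.eos_token_id)
    n_full = len(stream) // SEQ_LEN  # drop the trailing remainder (assumption)
    return [stream[i * SEQ_LEN : (i + 1) * SEQ_LEN] for i in range(n_full)]
```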

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("eoinf/tokenized_dataset_test")
train_data = dataset["train"]

print(train_data[0]["tokens"])  # First sequence
```

## Use with PyTorch

```python
import torch
from datasets import load_dataset

dataset = load_dataset("eoinf/tokenized_dataset_test")
# Return the `tokens` column as torch tensors instead of Python lists
dataset.set_format(type="torch", columns=["tokens"])

dataloader = torch.utils.data.DataLoader(dataset["train"], batch_size=8)
for batch in dataloader:
    tokens = batch["tokens"]  # LongTensor of shape (batch_size, 1024)
```
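
Because every row is already packed to a fixed length, inputs and next-token labels for language-model training are just shifted views of the same sequence. A self-contained sketch, where the random tensor stands in for a real `batch["tokens"]` and the vocabulary size of 4096 is inferred from the tokenizer name:

```python
import torch

tokens = torch.randint(0, 4096, (8, 1024))  # stand-in for batch["tokens"]

input_ids = tokens[:, :-1]  # positions 0..1022 are fed to the model
labels = tokens[:, 1:]      # positions 1..1023 are the targets

assert input_ids.shape == labels.shape == (8, 1023)
```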