---
license: mit
task_categories:
- text-generation
language:
- en
tags:
- tokenized
- language-modeling
size_categories:
- 1K<n<10K
---
# Dataset Card for eoinf/tokenized_dataset_test7
## Original Dataset
This dataset is a tokenized and packed version of [monology/pile-uncopyrighted](https://huggingface.co/datasets/monology/pile-uncopyrighted).
## Dataset Details
- **Total Tokens**: 10,003,456
- **Total Sequences**: 9,769
- **Context Length**: 1024 tokens
- **Tokenizer**: meta-llama/Llama-2-7b-hf
- **Format**: Each example contains a single field `tokens` with a list of 1024 token IDs
## Preprocessing
Each document was:
1. Tokenized using the meta-llama/Llama-2-7b-hf tokenizer
2. Prefixed with a BOS (beginning of sequence) token
3. Suffixed with an EOS (end of sequence) token
4. Packed into fixed-length sequences of 1024 tokens
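A minimal sketch of how this packing could be implemented (the streaming loop and document budget below are illustrative, not the exact script used to build this dataset; the tokenizer repo is gated and requires accepting the Llama 2 license on the Hub):

```python
from itertools import islice
from datasets import load_dataset, Dataset
from transformers import AutoTokenizer

CONTEXT_LENGTH = 1024

# Gated repo: accept the Llama 2 license on the Hugging Face Hub first
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
raw = load_dataset("monology/pile-uncopyrighted", split="train", streaming=True)

stream, sequences = [], []
for doc in islice(raw, 20_000):  # illustrative document budget
    # Step 1: tokenize without automatic special tokens
    ids = tokenizer(doc["text"], add_special_tokens=False)["input_ids"]
    # Steps 2-3: wrap every document in explicit BOS/EOS markers
    stream.extend([tokenizer.bos_token_id] + ids + [tokenizer.eos_token_id])
    # Step 4: cut the running token stream into fixed-length sequences
    while len(stream) >= CONTEXT_LENGTH:
        sequences.append(stream[:CONTEXT_LENGTH])
        stream = stream[CONTEXT_LENGTH:]

packed = Dataset.from_dict({"tokens": sequences})
```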
## Usage
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("eoinf/tokenized_dataset_test7")
# Access training data
train_data = dataset["train"]
print(train_data[0]["tokens"]) # First sequence
```
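Because each example is just a list of token IDs, the original text can be recovered with the same tokenizer (loading it requires accepting the Llama 2 license on the Hub):

```python
from transformers import AutoTokenizer

# Gated repo: accept the Llama 2 license on the Hugging Face Hub first
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Decode the first packed sequence; keeping special tokens makes the
# BOS/EOS document boundaries visible in the output
text = tokenizer.decode(train_data[0]["tokens"], skip_special_tokens=False)
print(text[:500])
```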
## Use with PyTorch
```python
import torch
from datasets import load_dataset
from torch.utils.data import DataLoader
dataset = load_dataset("eoinf/tokenized_dataset_test7", split="train")
# Convert to PyTorch tensors
dataset.set_format(type="torch", columns=["tokens"])
# Create DataLoader
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)
for batch in dataloader:
    tokens = batch["tokens"]  # shape: (batch_size, 1024)
```
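For next-token-prediction training, each packed sequence is typically shifted by one position to form inputs and targets. A minimal sketch, using an illustrative stand-in model (any causal LM producing per-position logits works the same way):

```python
import torch.nn as nn
import torch.nn.functional as F

# Illustrative stand-in: replace with a real causal LM in practice
vocab_size = 32_000  # Llama-2 tokenizer vocabulary size
model = nn.Sequential(nn.Embedding(vocab_size, 256), nn.Linear(256, vocab_size))

for batch in dataloader:
    tokens = batch["tokens"]   # (batch_size, 1024)
    inputs = tokens[:, :-1]    # predict each position...
    targets = tokens[:, 1:]    # ...from the tokens that precede it
    logits = model(inputs)     # (batch_size, 1023, vocab_size)
    loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
```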