---
license: mit
task_categories:
- text-generation
language:
- en
tags:
- tokenized
- language-modeling
size_categories:
- 1K<n<10K
---
# Dataset Card for eoinf/tokenized_dataset_test7
## Original Dataset

This dataset is a tokenized version of [monology/pile-uncopyrighted](https://huggingface.co/datasets/monology/pile-uncopyrighted).
## Dataset Details

- Total Tokens: 10,003,456
- Total Sequences: 9,769
- Context Length: 1024 tokens
- Tokenizer: `meta-llama/Llama-2-7b-hf`
- Format: each example contains a single field `tokens` holding a list of 1024 token IDs
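As a consistency check, 9,769 sequences × 1024 tokens per sequence = 10,003,456 tokens, which matches the total above.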
## Preprocessing

Each document was:

- Tokenized with the `meta-llama/Llama-2-7b-hf` tokenizer
- Prefixed with a BOS (beginning-of-sequence) token
- Suffixed with an EOS (end-of-sequence) token
- Packed into fixed-length sequences of 1024 tokens
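The exact preprocessing script is not included in this card, but a minimal sketch of the tokenize-and-pack procedure described above might look like the following. The `pack_documents` helper is hypothetical, and dropping any trailing partial chunk is an assumption, not something this card specifies (the card is equally consistent with padding the final chunk).

```python
from transformers import AutoTokenizer

# Requires access to the gated meta-llama/Llama-2-7b-hf checkpoint.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

def pack_documents(texts, context_length=1024):
    """Hypothetical helper: tokenize each document, wrap it in BOS/EOS,
    and pack the resulting token stream into fixed-length sequences."""
    stream = []
    for text in texts:
        ids = tokenizer.encode(text, add_special_tokens=False)
        stream.extend([tokenizer.bos_token_id] + ids + [tokenizer.eos_token_id])
    # Emit only complete sequences; a trailing partial chunk is dropped here
    # (an assumption -- the actual script may have padded it instead).
    return [
        stream[i : i + context_length]
        for i in range(0, len(stream) - context_length + 1, context_length)
    ]
```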
## Usage
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("eoinf/tokenized_dataset_test7")

# Access the training split
train_data = dataset["train"]
print(train_data[0]["tokens"])  # first sequence of 1024 token IDs
```
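Since the sequences are stored as raw token IDs, they can be decoded back to text with the same tokenizer to sanity-check the contents (this again assumes access to the gated `meta-llama/Llama-2-7b-hf` tokenizer):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
dataset = load_dataset("eoinf/tokenized_dataset_test7", split="train")

# Decode the first packed sequence back to text for inspection.
print(tokenizer.decode(dataset[0]["tokens"]))
```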
### Use with PyTorch
```python
import torch
from datasets import load_dataset
from torch.utils.data import DataLoader

dataset = load_dataset("eoinf/tokenized_dataset_test7", split="train")

# Convert the tokens column to PyTorch tensors
dataset.set_format(type="torch", columns=["tokens"])

# Create a DataLoader for batched iteration
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)

for batch in dataloader:
    tokens = batch["tokens"]  # shape: (batch_size, 1024)
```
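For causal language modeling, the packed `tokens` typically serve as both inputs and labels, since Hugging Face causal LMs shift the labels internally when computing the next-token loss. A minimal sketch of one forward pass (loading the full Llama-2-7B model here is purely illustrative; any causal LM would work):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

for batch in dataloader:
    tokens = batch["tokens"]  # shape: (batch_size, 1024)
    # Passing labels=tokens makes the model compute the shifted
    # next-token cross-entropy loss internally.
    outputs = model(input_ids=tokens, labels=tokens)
    loss = outputs.loss
```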