---
license: mit
task_categories:
  - text-generation
language:
  - en
tags:
  - tokenized
  - language-modeling
size_categories:
  - 1K<n<10K
---

# Dataset Card for eoinf/tokenized_dataset_test7

## Original Dataset

Derived from [monology/pile-uncopyrighted](https://huggingface.co/datasets/monology/pile-uncopyrighted).

## Dataset Details

- Total Tokens: 10,003,456
- Total Sequences: 9,769
- Context Length: 1024 tokens
- Tokenizer: `meta-llama/Llama-2-7b-hf`
- Format: each example contains a single field, `tokens`, holding a list of 1024 token IDs
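
A quick sanity check of this layout (a sketch; it only inspects the first example):

```python
from datasets import load_dataset

ds = load_dataset("eoinf/tokenized_dataset_test7", split="train")
example = ds[0]
assert set(example.keys()) == {"tokens"}  # single `tokens` field
assert len(example["tokens"]) == 1024     # fixed context length
```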

## Preprocessing

Each document was processed as follows (a sketch of one possible implementation appears after this list):

1. Tokenized with the `meta-llama/Llama-2-7b-hf` tokenizer
2. Prefixed with a BOS (beginning-of-sequence) token
3. Suffixed with an EOS (end-of-sequence) token
4. Packed into fixed-length sequences of 1024 tokens
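
The card does not include the preprocessing script, so the following is a minimal sketch under two assumptions: documents are packed greedily by concatenating their token streams, and a trailing remainder shorter than 1024 tokens is dropped. The `pack_documents` helper is hypothetical.

```python
from transformers import AutoTokenizer

# meta-llama/Llama-2-7b-hf is a gated repository; access must be granted on the Hub.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
CONTEXT_LENGTH = 1024

def pack_documents(documents):
    """Tokenize each document, wrap it in BOS/EOS, and chunk the
    concatenated token stream into fixed 1024-token sequences.

    Assumption: greedy packing; a final partial chunk is discarded.
    """
    buffer = []
    for doc in documents:
        ids = tokenizer.encode(doc, add_special_tokens=False)
        buffer.extend([tokenizer.bos_token_id] + ids + [tokenizer.eos_token_id])
        # Emit complete sequences; leftover tokens roll into the next document
        while len(buffer) >= CONTEXT_LENGTH:
            yield {"tokens": buffer[:CONTEXT_LENGTH]}
            buffer = buffer[CONTEXT_LENGTH:]
```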

## Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("eoinf/tokenized_dataset_test7")

# Access the training split
train_data = dataset["train"]
print(train_data[0]["tokens"])  # first sequence of 1024 token IDs
```

## Use with PyTorch

```python
import torch
from datasets import load_dataset
from torch.utils.data import DataLoader

dataset = load_dataset("eoinf/tokenized_dataset_test7", split="train")

# Return the `tokens` column as PyTorch tensors
dataset.set_format(type="torch", columns=["tokens"])

# Create a DataLoader over the packed sequences
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)

for batch in dataloader:
    tokens = batch["tokens"]  # shape: (batch_size, 1024)
```
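
Because each row is a packed 1024-token sequence, a common next step for causal language modeling (an assumption, not something this card prescribes) is to shift the sequence by one position to obtain inputs and targets:

```python
# Continues the loop above: build next-token-prediction pairs by shifting
for batch in dataloader:
    tokens = batch["tokens"]    # shape: (batch_size, 1024)
    input_ids = tokens[:, :-1]  # positions 0..1022 as model input
    labels = tokens[:, 1:]      # positions 1..1023 as targets
```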