---
license: apache-2.0
task_categories:
- text-generation
language:
- en
size_categories:
- 10K<n<100K
---

# Dataset Card for pg19-test

<!-- Provide a quick summary of the dataset. -->

This dataset is a curated subset of PG-19 (a large collection of classic Project Gutenberg books), processed to produce text samples of controlled token length, measured with the Llama-3.1-8B-Instruct tokenizer, for long-context language model evaluation and research. It contains stratified text excerpts at six target token lengths (10240 to 61440 tokens), with a consistent number of samples per length, truncated at natural sentence endings to preserve coherence and with minimal text reuse.

## Dataset Details

### Key Features

<!-- Provide a longer summary of what this dataset is. -->

- **Length Stratification:** Samples at six target token lengths: 10k, 20k, 30k, 40k, 50k, and 60k tokens. Actual token counts do not strictly match the targets and may fluctuate by several hundred tokens above or below them.
- **Coherent Truncation:** Text is truncated at natural sentence endings (`.`, `!`, `?`) rather than at arbitrary token positions, preserving readability and semantic integrity.
- **Language(s) (NLP):** English
- **Size of the dataset:** 54.7 MB

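The card does not include the preprocessing script, but the coherent-truncation step described above can be sketched roughly as follows. This is a minimal illustration, not the authors' code: `truncate_to_sentence` is a hypothetical helper, and a whitespace tokenizer stands in for the Llama-3.1-8B-Instruct tokenizer actually used.

```python
def truncate_to_sentence(text: str, target_tokens: int, tokenize=str.split) -> str:
    """Cut `text` to roughly `target_tokens` tokens, then back off to the last
    sentence terminator (., !, ?) so the excerpt ends at a natural boundary."""
    tokens = tokenize(text)
    if len(tokens) <= target_tokens:
        return text
    # NOTE: joining with spaces only works for the whitespace stand-in; a real
    # tokenizer would decode the truncated token ids back to text instead.
    rough = " ".join(tokens[:target_tokens])
    last_end = max(rough.rfind(ch) for ch in ".!?")
    if last_end == -1:
        return rough  # no sentence boundary found; keep the rough cut
    return rough[: last_end + 1]
```

Because the cut backs off to the previous sentence boundary, the resulting sample lands somewhat below the target, which is consistent with the fluctuation of several hundred tokens noted above.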
### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Homepage:** https://huggingface.co/datasets/hcyy/pg19-test
- **Paper:** *SpecPV: Improving Self-Speculative Decoding for Long-Context Generation via Partial Verification*

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

This dataset consists of 600 text samples (100 per length tier) with two fields: `length` (an integer giving the target token length, one of six tiers: 10k, 20k, 30k, 40k, 50k, or 60k tokens) and `text` (a string holding a PG-19 excerpt coherently truncated at a natural sentence boundary). Actual token counts fluctuate by several hundred tokens around the targets to prioritize coherence. The dataset is distributed in Parquet format.

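As a usage sketch, samples can be counted and filtered by the `length` field. The in-memory records below are illustrative stand-ins for real rows, which would come from `datasets.load_dataset("hcyy/pg19-test")`; the tier values 10240 and 20480 follow the 10k/20k targets stated above.

```python
from collections import Counter

# Illustrative stand-ins for the dataset's rows; real rows would be loaded with
#   from datasets import load_dataset
#   ds = load_dataset("hcyy/pg19-test", split="train")
records = [
    {"length": 10240, "text": "An excerpt ending at a sentence boundary."},
    {"length": 10240, "text": "Another excerpt from the same tier."},
    {"length": 20480, "text": "A longer-tier excerpt."},
]

# How many samples fall into each target-length tier.
tier_counts = Counter(r["length"] for r in records)

# Select one tier, e.g. for a ~10k-token evaluation run.
tier_10k = [r["text"] for r in records if r["length"] == 10240]
```

The same filtering pattern applies to the real dataset object, e.g. `ds.filter(lambda r: r["length"] == 10240)`.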
## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

This dataset is used to evaluate the performance of the SpecPV algorithm.

### Source Data

The dataset is directly derived from the PG-19 dataset (hosted at https://huggingface.co/datasets/emozilla/pg19), a large collection of public-domain Project Gutenberg books published before 1919.

## Citation

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]