size_categories:
- 1K<n<10K
---
# QA Increasing Context Length Dataset

## 1. Overview

The **QA Increasing Context Length** dataset is designed to facilitate benchmarking and research on question-answering (QA) systems as the size of the input context grows. It compiles QA examples drawn from multiple LongBench subsets, each bucketed by ascending context length (measured in tokens). Researchers can use this dataset to evaluate how modern language models and retrieval-augmented systems handle progressively larger contexts (from 2K tokens up to 32K tokens) in terms of accuracy, latency, memory usage, and robustness.
* **Intended purpose**

  * To measure QA performance (e.g., exact match, F1) under different context-length regimes.
  * To assess inference latency, throughput, and resource utilization when models process long documents.
  * To compare retrieval strategies or memory-efficient attention mechanisms as context size increases.

* **Key features**

  1. A single CSV (`longbench_all_buckets_100.csv`) containing examples from five context-length buckets: **2K**, **4K**, **8K**, **16K**, and **32K** tokens.
  2. Each row includes a complete (potentially multi-paragraph) passage, a target question, and its ground-truth answer, along with metadata fields that facilitate grouping, filtering, or statistical analysis.
  3. Examples are drawn from diverse domains (scientific articles, technical reports, web pages, etc.), as indicated by the `dataset` field.
## 2. Dataset Structure

The dataset is provided as one CSV file:

```
longbench_all_buckets_100.csv
```

* **File format**: Comma-separated values (UTF-8 encoded)
* **Number of rows**: Varies by bucket (typically 100 examples per bucket)
* **Total buckets**: 5 (`"2k"`, `"4k"`, `"8k"`, `"16k"`, `"32k"`)
### 2.1. Column Descriptions

Each row (example) has six columns:

| Column Name | Type | Description |
| ----------- | ---- | ----------- |
| **context** | `string` | A (long) text passage whose token count falls into one of the predefined buckets (2K–32K). |
| **question** | `string` | A natural-language question referring to information contained in `context`. |
| **answer** | `string` | The ground-truth answer (text span or summary) extracted from the context. |
| **length** | `int` | The exact token count of the `context` (as measured by a standard tokenizer, e.g., T5/BPE). |
| **dataset** | `string` | The original LongBench subset (e.g., "scitldr", "arxiv", "pubmed") from which the example was drawn. |
| **context_range** | `string` | One of `"2k"`, `"4k"`, `"8k"`, `"16k"`, or `"32k"`; indicates the bucket into which `length` falls. |
* **Context buckets (`context_range`)**

  * `"2k"`: 1,500–2,499 tokens (approximate; exact boundaries may vary)
  * `"4k"`: 3,000–4,999 tokens
  * `"8k"`: 6,000–9,999 tokens
  * `"16k"`: 12,000–17,999 tokens
  * `"32k"`: 24,000–34,999 tokens

> **Note**: The buckets are chosen to stress-test long-context inference. The exact cutoffs may be implementation-dependent, but each row's `length` field records the precise token count.
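
One way to sanity-check bucket assignments is to re-tokenize a few rows and compare against the stored `length`. This is only a sketch: the card does not pin down the exact tokenizer ("e.g., T5/BPE"), so `t5-base` below is an illustrative assumption and recomputed counts may differ slightly.

```python
# Rough sanity check of bucket assignments: re-tokenize a few contexts
# and compare against the stored `length`. NOTE: "t5-base" is an
# assumed tokenizer choice, not the canonical one; long inputs may
# trigger a max-length warning, which is harmless here.
import pandas as pd
from transformers import AutoTokenizer

df = pd.read_csv("longbench_all_buckets_100.csv")
tokenizer = AutoTokenizer.from_pretrained("t5-base")

for _, row in df.sample(5, random_state=0).iterrows():
    n_tokens = len(tokenizer(row["context"], truncation=False).input_ids)
    print(f"{row['context_range']:>4}  stored={row['length']:>6}  recomputed={n_tokens:>6}")
```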
## 3. Usage Examples

Below are common ways to load and inspect the dataset, either via the Hugging Face Hub or using the local CSV directly.
### 3.1. Loading with Hugging Face Datasets

If the dataset has been published under a dataset ID (for example, `username/longbench-qa-increasing-context`), you can load it directly:

```python
from datasets import load_dataset

# Replace with the actual HF dataset ID if different
dataset = load_dataset("username/longbench-qa-increasing-context")

# Print a summary of splits (e.g., "train", "validation", etc.)
print(dataset)

# Inspect column names
print(dataset["train"].column_names)
```
You should see something like:

```
DatasetDict({
    train: Dataset({
        features: ['context', 'question', 'answer', 'length', 'dataset', 'context_range'],
        num_rows: <total_rows>
    })
})
```
### 3.2. Loading from Local CSV via Hugging Face Datasets

```python
from datasets import load_dataset

# If you already have the CSV file in the current directory:
data_files = "longbench_all_buckets_100.csv"
ds = load_dataset("csv", data_files=data_files)

# The library assigns the single split name "train" by default:
print(ds)  # {"train": Dataset}
print(ds["train"].column_names)
```

```python
# Example: count the number of examples per context_range
from collections import Counter

counts = Counter(ds["train"]["context_range"])
print(counts)  # e.g., {'2k': 100, '4k': 100, '8k': 100, '16k': 100, '32k': 100}
```
### 3.3. Loading Directly with pandas

```python
import pandas as pd

df = pd.read_csv("longbench_all_buckets_100.csv")

# View the first few rows
df.head()

# Output:
#   context_range  length  dataset   context                      question                  answer
# 0 2k             1985    scitldr   "Long scientific abstract…"  "What is the title?"      "X Study"
# 1 2k             2021    pubmed    "Biomedical paper text…"     "What did authors find?"  "Y"
# 2 4k             3812    arxiv     "An arXiv article excerpt…"  "Which method used?"      "Method Z"
# ...
```
Once loaded, you can:

* **Filter** a specific bucket:

  ```python
  df_16k = df[df["context_range"] == "16k"]
  ```

* **Compute statistics**:

  ```python
  df.groupby("context_range")["length"].describe()
  ```

* **Inspect sample contexts**:

  ```python
  for bucket in ["2k", "4k", "8k", "16k", "32k"]:
      sample = df[df["context_range"] == bucket].sample(1).iloc[0]
      print(f"Bucket: {bucket}")
      print("Length:", sample["length"])
      print("Question:", sample["question"])
      print("Answer:", sample["answer"])
      print("---")
  ```
---

## 4. Potential Applications
1. **Long-Context QA Benchmarking**

   * Evaluate how QA accuracy (Exact Match, F1 score) degrades or holds steady as context length increases from 2K to 32K tokens (a scoring sketch follows this list).
   * Compare performance across different model architectures (e.g., transformer-based models with full attention vs. sparse or windowed attention).

2. **Latency & Memory Profiling**

   * Measure inference time (e.g., time-to-first-token, throughput) as a function of `length` (see the latency sketch below).
   * Monitor GPU/CPU memory usage and peak consumption for each bucket, especially for engines that implement paged KV-cache or offloading to CPU.

3. **Retrieval and Reranking Research**

   * Use the `dataset` metadata field to explore domain-specific retrieval strategies (e.g., scientific vs. biomedical abstracts).
   * Investigate how downstream QA accuracy changes when retrieving increasingly large context chunks.

4. **Failure Mode Analysis**

   * Characterize the types of questions that become "unanswerable" when the context is extremely large (32K tokens).
   * Analyze error patterns, e.g., whether the gold answer appears near the beginning vs. the end of a long document (see the answer-position sketch below).

5. **Adaptive Context Truncation / Compression**

   * Develop algorithms that select only the most relevant 2K–4K tokens from a 32K-token context to maintain accuracy while reducing inference cost (a chunk-selection sketch closes this section).
   * Evaluate how aggressive summarization or chunking influences QA performance.
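
To make item 1 concrete, here is a minimal per-bucket Exact Match harness. Both `answer_question` (a hypothetical placeholder) and the SQuAD-style normalization are illustrative assumptions, not part of the dataset; substitute your own model and metric implementation.

```python
# Per-bucket Exact Match scoring sketch.
import re
import string

import pandas as pd

def normalize(text: str) -> str:
    # SQuAD-style normalization: lowercase, strip punctuation and
    # articles, collapse whitespace.
    text = str(text).lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def answer_question(context: str, question: str) -> str:
    # Placeholder model: returns the start of the context. Replace with
    # your QA system; as written it scores near zero by design.
    return context[:100]

df = pd.read_csv("longbench_all_buckets_100.csv")
for bucket, group in df.groupby("context_range"):
    preds = [answer_question(c, q) for c, q in zip(group["context"], group["question"])]
    em = sum(normalize(p) == normalize(g) for p, g in zip(preds, group["answer"]))
    print(f"{bucket}: EM = {em / len(group):.3f}")
```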
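For item 2, a bare-bones latency harness might look like the following; `run_inference` is again a placeholder for whatever engine you profile, and the per-bucket sample size is an arbitrary choice.

```python
# Latency-vs-length sketch: mean wall-clock time per example, per bucket.
import time

import pandas as pd

def run_inference(context: str, question: str) -> str:
    # Placeholder: swap in your engine call, e.g. a transformers
    # pipeline("question-answering") or an LLM client.
    return ""

df = pd.read_csv("longbench_all_buckets_100.csv")
for bucket, group in df.groupby("context_range"):
    sample = group.head(10)  # small fixed sample per bucket
    start = time.perf_counter()
    for _, row in sample.iterrows():
        run_inference(row["context"], row["question"])
    mean_s = (time.perf_counter() - start) / len(sample)
    print(f"{bucket}: mean latency {mean_s:.3f}s over {len(sample)} examples")
```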
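For item 4, answer position can be approximated by locating the gold answer string inside the context. This only covers extractive answers that occur verbatim; abstractive answers are reported as not found.

```python
# Normalized character offset of the gold answer within the context.
import pandas as pd

df = pd.read_csv("longbench_all_buckets_100.csv")
df["context"] = df["context"].astype(str)  # guard against non-string parses
df["answer"] = df["answer"].astype(str)

def answer_position(row) -> float:
    # Offset in [0, 1]; -1.0 when the answer does not occur verbatim.
    idx = row["context"].find(row["answer"])
    return idx / max(len(row["context"]), 1) if idx >= 0 else -1.0

df["answer_pos"] = df.apply(answer_position, axis=1)
extractive = df[df["answer_pos"] >= 0]
print(extractive.groupby("context_range")["answer_pos"].describe())
```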
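Finally, for item 5, the sketch below is a deliberately naive chunk-selection baseline: fixed-size word chunks scored by overlap with the question under a word budget (a crude proxy for a token budget). It is a starting point to beat, not a tuned retrieval or compression method.

```python
# Naive relevance-based context compression.
import pandas as pd

def select_chunks(context: str, question: str,
                  chunk_words: int = 200, budget_words: int = 2000) -> str:
    # Split into fixed-size word chunks, rank by overlap with the
    # question's terms, and keep the top chunks that fit the budget.
    # Kept chunks are concatenated in relevance (not document) order.
    words = context.split()
    chunks = [words[i:i + chunk_words] for i in range(0, len(words), chunk_words)]
    q_terms = set(question.lower().split())
    ranked = sorted(chunks, key=lambda c: -len(q_terms & {w.lower() for w in c}))
    kept, total = [], 0
    for chunk in ranked:
        if total + len(chunk) > budget_words:
            break
        kept.append(" ".join(chunk))
        total += len(chunk)
    return "\n".join(kept)

df = pd.read_csv("longbench_all_buckets_100.csv")
row = df[df["context_range"] == "32k"].iloc[0]
compressed = select_chunks(str(row["context"]), str(row["question"]))
print(len(str(row["context"]).split()), "->", len(compressed.split()), "words")
```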
---

## 5. Citation & License

* If you plan to publish results using this dataset, please refer to the original LongBench publication ("LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding") and cite the specific subset(s) from which examples were drawn.
* Check the dataset card on the Hugging Face Hub for detailed licensing information. LongBench subsets typically carry permissive licenses for research use, but always verify at [https://huggingface.co/datasets/…](https://huggingface.co/datasets/…) before redistribution.
---

## 6. Contact & Repository

* **Hugging Face dataset page**:
  [https://huggingface.co/datasets/USERNAME/longbench-qa-increasing-context](https://huggingface.co/datasets/USERNAME/longbench-qa-increasing-context)
  (Replace `USERNAME` with the dataset owner's name.)

* **GitHub/Source code**:
  If there is a linked GitHub repo or script for generating this CSV, include its URL here. Example:

  ```
  https://github.com/longbench/longbench-qa
  ```

* **Maintainers**:

  * [Name 1](mailto:email1@institute.edu)
  * [Name 2](mailto:email2@institute.edu)
---

**Summary:**
The **QA Increasing Context Length** dataset is a single CSV of QA examples bucketed into five context-length tiers (2K–32K tokens). Each record contains the full passage (`context`), a `question`, its `answer`, plus metadata fields (`length`, `dataset`, `context_range`). Researchers can load it with Hugging Face `datasets` or pandas and use it to study model behavior under very long contexts, from accuracy trends to hardware profiling and retrieval strategies.