## 2. Dataset Structure

The dataset is provided as one CSV file:

```
longbench_all_buckets_100.csv
```

* **File format**: Comma-separated values (UTF-8 encoded)
* **Number of rows**: Varies by bucket (typically 100 examples per bucket)
* **Context lengths**: 5 buckets (`2k`, `4k`, `8k`, `16k`, `32k`)

### 2.1. Column Descriptions

Each row (example) has six columns:

* `context`: the full passage the question is asked against
* `question`: the question about the passage
* `answer`: the gold answer
* `length`: the exact context length in tokens
* `dataset`: the LongBench subset the example was drawn from (e.g., `scitldr`, `pubmed`, `arxiv`)
* `context_range`: the length bucket (`2k`, `4k`, `8k`, `16k`, or `32k`)

> **Note**: The buckets are chosen to stress-test long-context inference. The exact cutoff may be implementation-dependent, but each row's `length` field indicates the precise token count.

## 3. Loading

### 3.1. Loading with Hugging Face Datasets

If the dataset has been published under a Hugging Face dataset ID (for example, `slinusc/qa_increasing_context_length`), you can load it directly:

```python
from datasets import load_dataset

# Replace with the actual HF dataset ID if different
dataset = load_dataset("slinusc/qa_increasing_context_length")

# Print the overall structure and splits
print(dataset)

# Inspect column names in the "train" split
print(dataset["train"].column_names)
```

The expected output looks like:

```
DatasetDict({
    train: Dataset({
        features: ['context', 'question', 'answer', 'length', 'dataset', 'context_range'],
        num_rows: <total_rows>
    })
})
```

### 3.2. Loading from Local CSV via Hugging Face Datasets

If you have downloaded `longbench_all_buckets_100.csv` locally, you can also load it:

```python
from datasets import load_dataset

# If you already have the CSV file in the current directory:
data_files = "longbench_all_buckets_100.csv"
ds = load_dataset("csv", data_files=data_files)

# The library assigns the single split name "train" by default:
print(ds)                       # {"train": Dataset}
print(ds["train"].column_names)
```

You can then filter or sample by `context_range`:

```python
# Example: count number of examples per context_range
from collections import Counter

counts = Counter(ds["train"]["context_range"])
print(counts)  # e.g., {'2k': 100, '4k': 100, '8k': 100, '16k': 100, '32k': 100}

# Filter to only the 16k-token contexts
ds_16k = ds["train"].filter(lambda x: x["context_range"] == "16k")
print(len(ds_16k))
```
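
To relate the buckets back to the note in Section 2.1, you can recompute token counts yourself. Here is a minimal sketch, assuming the GPT-2 tokenizer from `transformers` purely as an illustration; the tokenizer that produced the `length` field is not specified on this card, so counts may differ:

```python
from transformers import AutoTokenizer

# Assumption: the tokenizer behind `length` is implementation-dependent;
# GPT-2 is used here only as an illustrative stand-in.
tok = AutoTokenizer.from_pretrained("gpt2")

row = ds["train"][0]
n_tokens = len(tok.encode(row["context"]))
print(n_tokens, row["length"], row["context_range"])
```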

### 3.3. Loading with pandas

```python
import pandas as pd

df = pd.read_csv("longbench_all_buckets_100.csv")

# View the first few rows
df.head()

# Illustrative output (columns abridged):
# 0  2k  1985  scitldr  "Long scientific abstract…"  "What is the title?"      "X Study"
# 1  2k  2021  pubmed   "Biomedical paper text…"     "What did authors find?"  "Y"
# 2  4k  3812  arxiv    "An arXiv article excerpt…"  "Which method used?"      "Method Z"
# ...
```

With the DataFrame you can, for example:

* **Filter** a specific bucket:

  ```python
  df_16k = df[df["context_range"] == "16k"]
  ```

* **Compute statistics**:

  ```python
  df.groupby("context_range")["length"].describe()
  ```

* **Inspect sample contexts**:

  ```python
  for bucket in ["2k", "4k", "8k", "16k", "32k"]:
      sample = df[df["context_range"] == bucket].sample(1).iloc[0]
      print(f"Bucket: {bucket}")
      print("Length:", sample["length"])
      print("Question:", sample["question"])
      print("Answer:", sample["answer"])
      print("---")
  ```

## 4. Use Cases

1. **Long-Context QA Benchmarking**

   * Evaluate how QA accuracy (Exact Match, F1 score) degrades or holds steady as context length increases from 2K to 32K tokens (a minimal evaluation sketch follows this list).
   * Compare performance across different model architectures (e.g., transformer-based models with full attention vs. sparse or windowed attention).

2. **Latency & Memory Profiling**

   * Measure inference time (e.g., time-to-first-token, throughput) as a function of `length` (the same sketch below records per-bucket wall-clock latency).
   * Monitor GPU/CPU memory usage and peak consumption for each bucket, especially for engines that implement paged KV-cache or offloading to CPU.

3. **Retrieval and Reranking Research**

   * Use the `dataset` metadata to explore domain-specific retrieval strategies (e.g., scientific vs. biomedical abstracts).
   * Investigate how downstream QA accuracy changes when retrieving increasingly large context chunks.

4. **Failure Mode Analysis**

   * Characterize the types of questions that become "unanswerable" when the context is extremely large (32K tokens).
   * Analyze error patterns, e.g., where the gold answer appears near the beginning vs. the end of a long document (see the position-analysis sketch below).

5. **Adaptive Context Truncation / Compression**

   * Develop algorithms that select only the most relevant 2K–4K tokens from a 32K-token context to maintain accuracy while reducing inference cost (a naive baseline is sketched below).
   * Evaluate how aggressive summarization or chunking influences QA performance.
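
The sketch below makes use cases 1 and 2 concrete: it scores each bucket with a simplistic exact-match metric and records wall-clock latency per example. `answer_question` is a hypothetical stand-in for whatever model or inference engine you are benchmarking, and the normalization here is deliberately naive, not the official LongBench scoring:

```python
import time
from collections import defaultdict

def exact_match(prediction: str, gold: str) -> bool:
    """Naive EM: case- and whitespace-insensitive string equality."""
    return prediction.strip().lower() == gold.strip().lower()

def answer_question(context: str, question: str) -> str:
    """Hypothetical stand-in: plug in your model or inference engine here."""
    raise NotImplementedError

scores, latencies = defaultdict(list), defaultdict(list)
for row in ds["train"]:
    start = time.perf_counter()
    prediction = answer_question(row["context"], row["question"])
    latencies[row["context_range"]].append(time.perf_counter() - start)
    scores[row["context_range"]].append(exact_match(prediction, row["answer"]))

for bucket in sorted(scores):
    em = sum(scores[bucket]) / len(scores[bucket])
    avg_s = sum(latencies[bucket]) / len(latencies[bucket])
    print(f"{bucket}: EM={em:.3f}  avg latency={avg_s:.2f}s")
```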
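
For the failure-mode analysis (use case 4), a quick first look at answer position: find the gold answer's character offset inside the context, for the subset of answers that occur verbatim. A minimal sketch using the pandas DataFrame from Section 3.3:

```python
# Mean relative position (0 = start of context, 1 = end) of the gold answer,
# computed only over answers that appear verbatim in the context.
positions = {}
for bucket in ["2k", "4k", "8k", "16k", "32k"]:
    rel = []
    for row in df[df["context_range"] == bucket].itertuples():
        idx = str(row.context).find(str(row.answer))
        if idx >= 0:
            rel.append(idx / max(len(str(row.context)), 1))
    positions[bucket] = sum(rel) / len(rel) if rel else None

print(positions)
```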
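
And for adaptive truncation (use case 5), a deliberately naive baseline: split the context into fixed-size chunks, score each chunk by word overlap with the question, and keep chunks until a word budget (a rough proxy for tokens) is spent. A real system would use embeddings or a reranker and would preserve document order; this sketch only illustrates the interface:

```python
def truncate_context(context: str, question: str,
                     max_words: int = 2000, chunk_words: int = 200) -> str:
    """Keep the chunks with the highest word overlap with the question."""
    words = context.split()
    chunks = [words[i:i + chunk_words] for i in range(0, len(words), chunk_words)]
    q_words = set(question.lower().split())

    # Rank chunks by overlap with the question; note this discards document order.
    ranked = sorted(chunks, reverse=True,
                    key=lambda c: len(q_words & {w.lower() for w in c}))

    kept, budget = [], max_words
    for chunk in ranked:
        if budget <= 0:
            break
        kept.append(" ".join(chunk))
        budget -= len(chunk)
    return "\n...\n".join(kept)

row = ds["train"][0]
short_ctx = truncate_context(row["context"], row["question"])
print(len(row["context"].split()), "->", len(short_ctx.split()))
```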

---

## 5. Citation & License

* If you plan to publish results using this dataset, please refer to the original LongBench publication (LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding) and cite the specific subset(s) from which examples were drawn.
* Check the Hugging Face hub (dataset card) for detailed licensing information. Typically, LongBench subsets carry permissive licenses for research use, but always verify at [https://huggingface.co/datasets/…](https://huggingface.co/datasets/…) before redistribution.

---

## 6. Contact & Repository

* **Hugging Face dataset page**:
  [https://huggingface.co/datasets/slinusc/qa_increasing_context_length](https://huggingface.co/datasets/slinusc/qa_increasing_context_length)

* **GitHub/Source code**:
  If there is a linked GitHub repo or script for generating this CSV, include its URL here. Example:

  ```
  https://github.com/longbench/longbench-qa
  ```

* **Maintainers**:

  * [Name 1](mailto:email1@institute.edu)
  * [Name 2](mailto:email2@institute.edu)

---

**Summary:**
The **QA Increasing Context Length** dataset is a single CSV of QA examples bucketed into five context-length tiers (2K–32K tokens). Each record contains the full passage (`context`), a `question`, its `answer`, plus metadata fields (`length`, `dataset`, `context_range`). Researchers can load it with Hugging Face `datasets` or pandas and use it to study model behavior under very long contexts, from accuracy trends to hardware profiling and retrieval strategies.