The **QA Increasing Context Length** dataset is designed to facilitate benchmarking question answering as context length increases.

* **Key features**

  1. A single CSV (`longbench_all_buckets_100.csv`) containing examples from five context-length buckets: **3 K**, **4 K**, **8 K**, **16 K**, and **32 K** tokens.
  2. Each row includes a complete (potentially multi-paragraph) passage, a target question, and its ground-truth answer, along with metadata fields that facilitate grouping, filtering, or statistical analysis.
  3. Examples are drawn from diverse domains (scientific articles, technical reports, web pages, etc.), as indicated by the `dataset` field.

* **File format**: Comma-separated values (UTF-8 encoded)
* **Number of rows**: Varies by bucket (typically 100 examples per bucket)
* **Context lengths**: 5 buckets (`"3k"`, `"4k"`, `"8k"`, `"16k"`, `"32k"`)

### 2.1. Column Descriptions

Each row (example) has six columns:

| Column Name       | Type     | Description                                                                                    |
| ----------------- | -------- | ---------------------------------------------------------------------------------------------- |
| **context**       | `string` | A (long) text passage whose token count falls into one of the predefined buckets (3 K – 32 K).  |
| **question**      | `string` | A natural-language question referring to information contained in `context`.                    |
| **answer**        | `string` | The ground-truth answer (text span or summary) extracted from the context.                      |
| **length**        | `int`    | The exact token count of the `context` (as measured by a standard tokenizer, e.g., T5/BPE).     |
| **context_range** | `string` | The context-length bucket label (`"3k"`, `"4k"`, `"8k"`, `"16k"`, or `"32k"`).                  |
| **dataset**       | `string` | The source dataset or domain from which the example is drawn.                                   |

* **Context buckets (`context_range`)**

  * `"3k"`: 1 500 – 3 000 tokens (approximate; exact boundaries may vary)
  * `"4k"`: 3 000 – 3 999 tokens
  * `"8k"`: 4 000 – 7 999 tokens
  * `"16k"`: 8 000 – 15 999 tokens
  * `"32k"`: 16 000 – 31 999 tokens

> **Note**: The buckets are chosen to stress-test long-context inference. The exact cutoffs may be implementation-dependent, but each row's `length` field indicates the precise token count.
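Because the buckets are defined by token-count ranges, a row's bucket can be recovered from its `length` field alone. A minimal sketch (the helper name is hypothetical; the boundaries follow the approximate ranges above, treated as half-open intervals so that adjacent buckets do not overlap):

```python
# Hypothetical helper: map a token count to its context_range label,
# using the approximate bucket boundaries listed above.
# Each (lo, hi, label) entry covers the half-open interval [lo, hi).
BUCKETS = [
    (1_500, 3_000, "3k"),
    (3_000, 4_000, "4k"),
    (4_000, 8_000, "8k"),
    (8_000, 16_000, "16k"),
    (16_000, 32_000, "32k"),
]

def bucket_for(length: int) -> str:
    for lo, hi, label in BUCKETS:
        if lo <= length < hi:
            return label
    return "out-of-range"

print(bucket_for(2_048))   # '3k'
print(bucket_for(12_000))  # '16k'
```

This can serve as a sanity check that each row's `length` is consistent with its `context_range` label.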

By default, the single CSV is assigned a "train" split. You can then filter or group examples by bucket:

```python
# Example: count examples per bucket
from collections import Counter
counts = Counter(dataset["train"]["context_range"])
print(counts)  # e.g., {'3k': 100, '4k': 100, '8k': 100, '16k': 100, '32k': 100}

# Filter to only 16k-token contexts
ds_16k = dataset["train"].filter(lambda x: x["context_range"] == "16k")
```
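Beyond counting rows, the `length` column supports simple per-bucket statistics. A self-contained sketch using only the standard library (the rows here are synthetic, standing in for real examples):

```python
from collections import defaultdict
from statistics import mean

# Synthetic (context_range, length) pairs standing in for real dataset rows.
rows = [
    {"context_range": "3k", "length": 2_100},
    {"context_range": "3k", "length": 2_800},
    {"context_range": "16k", "length": 9_500},
    {"context_range": "16k", "length": 15_000},
]

# Group the length values by bucket label.
by_bucket = defaultdict(list)
for row in rows:
    by_bucket[row["context_range"]].append(row["length"])

# Report the example count and mean token count per bucket.
for bucket, lengths in sorted(by_bucket.items()):
    print(f"{bucket}: n={len(lengths)}, mean length={mean(lengths):.0f}")
```

The same grouping applies unchanged to the real CSV once its rows are loaded.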