  data_files:
  - split: test
    path: radar-tasks/test-*
license: cc-by-4.0
task_categories:
- table-question-answering
language:
- en
pretty_name: RADAR
size_categories:
- 1K<n<10K
---

# RADAR: Robust And Data Aware Reasoning Benchmark

## Link: [Paper]() | [Code]()

The **Robust And Data Aware Reasoning (RADAR)** benchmark is designed to evaluate the ability of language models to demonstrate **data-awareness**—that is, to recognize, reason over, and appropriately handle complex data artifacts such as:

- Missing data
- Bad values
- Outliers
- Inconsistent formatting
- Inconsistent multi-column logic

The full dataset includes **53 tasks** grounded in real-world data tables and varies across data artifact types and table dimensions (by token count and number of columns). In total, RADAR provides **2,980 unique query-table task instances**.

We also include two subsets of the data: (1) **radar-sizes** (RADAR-S) to focus evaluation on table sizes and (2) **radar-tasks** (RADAR-T) to focus evaluation across all tasks.

## 📊 Dataset Statistics

| **Dataset Split** | **Tasks** | **Instances** | **Tokens (K)** | **Cols**    |
|-------------------|-----------|---------------|----------------|-------------|
| RADAR             | 53        | 2,980         | [2, 4, 8, 16]  | [5, 10, 20] |
| RADAR-T           | 53        | 313           | 8              | 10          |
| RADAR-S           | 10        | 720           | [2, 4, 8, 16]  | [5, 10, 20] |

## 🔭 Dataset Structure

Each task instance comprises the following fields:

* `task_id`: a unique ID for each source table and query
* `query`: the query to ask over the data table
* `answer`: the ground-truth answer to the query
* `artifact_type`: the artifact type introduced to the data table for this task
* `artifact_scope`: whether reasoning over the data artifacts involves only a single column, multiple columns treated naively or independently, or multiple columns treated jointly
* `query_cols`: the columns involved in the query
* `artifact_reasoning_cols`: the columns involved in reasoning over the artifacts
* `table`: the data table for this task (a dictionary with keys "headers" and "rows" representing the table's column names and rows)
* `num_rows`: the number of rows in the table
* `num_cols`: the number of columns in the table
* `recovered_tables_transform_spec`: the specification used to convert the data table in `table` into the recovered data table(s), indicating which rows to drop and which cells to overwrite. The right answer is calculated over the recovered data table(s).
* `base_data_num_tokens`: the number of tokens in the data table before introducing any data artifact perturbations. This may differ slightly after perturbations are introduced.
* `base_data_token_bucket`: the token bucket to which this task belongs (one of 2000, 4000, 8000, and 16000)
* `perturbation_note`: any note about the data artifact perturbation that is introduced
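
Because `table` stores the data as parallel `"headers"` and `"rows"` lists, it is easy to reshape into per-row dictionaries for inspection. A minimal sketch — the instance below is made up for illustration and is not a real RADAR task:

```python
# Hypothetical task instance following the schema above (not real RADAR data)
instance = {
    "query": "What is the average rainfall?",
    "table": {
        "headers": ["city", "rainfall_mm"],
        "rows": [["Seattle", 950], ["Phoenix", None], ["Boston", 1100]],
    },
}

# Convert the headers/rows representation into a list of row dictionaries;
# missing cells (a "missing data" artifact) remain None.
headers = instance["table"]["headers"]
records = [dict(zip(headers, row)) for row in instance["table"]["rows"]]
```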

## 💻 Loading the Data

Using Hugging Face:

```python
from datasets import load_dataset

radar_all = load_dataset("kenqgu/radar", "radar")["test"]
radar_s = load_dataset("kenqgu/radar", "radar-sizes")["test"]
radar_t = load_dataset("kenqgu/radar", "radar-tasks")["test"]
```

Using the included RADAR code to load task instances into more usable Pydantic objects (requires installing the `radar` package first):

```python
from radar.data import load_task_instances_hf

# Load the full dataset and the two subsets
tasks, task_summary_df = load_task_instances_hf(split="full")
tasks_s, _ = load_task_instances_hf(split="sizes")
tasks_t, _ = load_task_instances_hf(split="tasks")

# View the table as a pandas DataFrame
tasks[0].table_df.head()
```
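
When evaluating a model through a text prompt, the table typically has to be serialized into the prompt. The helper below is a hypothetical sketch (not part of the RADAR codebase) showing one common choice, rendering a headers/rows table as a Markdown table:

```python
def table_to_markdown(headers, rows):
    """Render a {"headers", "rows"}-style table as a Markdown table.

    Hypothetical helper for illustration; None cells render as empty strings.
    """
    lines = ["| " + " | ".join(headers) + " |"]
    lines.append("|" + "|".join(" --- " for _ in headers) + "|")
    for row in rows:
        cells = ["" if cell is None else str(cell) for cell in row]
        lines.append("| " + " | ".join(cells) + " |")
    return "\n".join(lines)

# Example: serialize a tiny table for inclusion in a prompt
prompt_table = table_to_markdown(["name", "score"], [["a", 1], ["b", None]])
```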

## 📖 Citation

If you use RADAR in your research, please cite our paper:

```bibtex
```