NIAT-Pro: Needle-In-A-Table-Pro
NIAT-Pro is a benchmark for evaluating how well large language models understand and reason over large tables under controlled variations of tabular format, table size, and information position.
It extends the original Needle-In-A-Table setting from simple cell lookup to one-hop, two-hop, and four-hop tasks, and studies performance across 11 table representations:
- CSV
- TSV
- PSV
- JSON
- XML
- YAML
- Markdown
- HTML
- LaTeX
- SQL
- Free-form text
NIAT-Pro is designed to study long-context tabular understanding at realistic scales using three public datasets: HAR, SECOM, and WEC. It systematically controls table representation, row and column scaling, and target information position to enable rigorous analysis of LLM behavior on large tables.
Overview
Existing tabular benchmarks often rely on relatively small tables, fix contextual properties within each sample, and report only coarse average accuracy. NIAT-Pro is designed to address these limitations by:
- using substantially larger tables
- systematically varying format, row and column scaling, and target information position
- including retrieval and reasoning tasks of increasing complexity
- enabling factorial analysis of how these factors affect model performance
Tasks
NIAT-Pro includes three levels of task complexity.
One-hop lookup
This task evaluates direct retrieval of a target cell value from row and column cues.
Two-hop reasoning
This task includes two types of questions:
- finding the maximum or minimum value of a given column
- table navigation relative to a base cell
Four-hop reasoning
This task further increases complexity by defining the base position implicitly through an extreme value in a column, then asking the model to navigate relative to that position and retrieve the final target value.
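To make the four-hop structure concrete, the sketch below resolves such a question over a small in-memory table: locate the base cell implicitly via a column extreme, navigate a relative offset, then read the target. The column names, offsets, and values here are hypothetical illustrations, not drawn from NIAT-Pro itself:

```python
# Illustrative resolution of a four-hop question over an in-memory table.
# Header, rows, and offsets are hypothetical, not taken from NIAT-Pro.
header = ["id", "sensorA", "sensorB", "sensorC"]
rows = [
    [1, 0.2, 5.1, 9.0],
    [2, 0.9, 4.4, 7.5],
    [3, 0.5, 6.2, 8.1],
]

# Hops 1-2: the base cell is defined implicitly as the maximum of "sensorA".
col = header.index("sensorA")
base_row = max(range(len(rows)), key=lambda r: rows[r][col])

# Hops 3-4: navigate relative to the base cell, then retrieve the target value.
target_row, target_col = base_row + 1, col + 2  # hypothetical relative offsets
answer = rows[target_row][target_col]
print(answer)  # 8.1
```

An LLM answering the same question from a rendered table must perform these steps implicitly over text, which is what makes the four-hop setting harder than direct lookup.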
Source datasets
NIAT-Pro is constructed from three public tabular datasets spanning different domains:
- SECOM: semiconductor manufacturing
- WEC: wave energy converters
- HAR: human activity recognition
These datasets cover engineering, environmental, and health-related domains.
Repository structure and folder hierarchy
The repository is organized by dataset name, then by row and column scaling factors, then by benchmark scenarios defined by information positions, and finally by the 11 table-format files for the same test scenario.
At the root level, the repository contains the three dataset folders and the README file:
NIAT-Pro/
├── har/
├── secom/
├── wec/
└── README.md
Here:
- har, secom, and wec are dataset names
- each dataset folder stores benchmark artifacts derived from that source dataset
Dataset-level hierarchy
Inside each dataset folder, there are multiple subfolders named in the form Srow{}_Scol{}. These indicate the scaling factors applied to the row and column dimensions of the benchmark tables.
A dataset folder has the following general structure:
<dataset_name>/
├── Srow{row_scale}_Scol{col_scale}/
├── Srow{row_scale}_Scol{col_scale}/
├── ...
├── T_s.csv
├── benchmark_summary.json
└── qa_spec.json
Meaning of these items:
- Srow{row_scale}_Scol{col_scale}: a benchmark subset with a specific row scaling factor and column scaling factor
- T_s.csv: the informative subtable used during benchmark construction
- benchmark_summary.json: summary metadata for the benchmark instances under this dataset
- qa_spec.json: dataset-level question and answer specification, not specific questions and answers
Scaling-factor level hierarchy
Inside each Srow{}_Scol{} folder, the structure is:
Srow{row_scale}_Scol{col_scale}/
├── benches/
├── manifest.json
└── qas.json
Meaning of these items:
- qas.json: records the questions and answers for this scaling setting; these are unified for all scenario subfolders inside benches/
- manifest.json: metadata describing the scenario inventory and files under this scaling setting
- benches/: contains benchmark scenarios created by varying information positions
This means that for a fixed dataset and a fixed pair of row and column scaling factors, the questions and answers are shared across the different information-position scenarios, while the actual rendered benchmark tables differ by scenario.
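Given this layout, evaluating one scaling configuration amounts to loading the shared qas.json once and iterating over the scenario folders under benches/. The helper below sketches that, assuming only the layout described above (paths in the usage line are hypothetical examples):

```python
import json
from pathlib import Path

def load_scale(scale_dir: Path):
    """Load the shared QA set and list all information-position
    scenario folders under one Srow{}_Scol{} scaling folder."""
    qas = json.loads((scale_dir / "qas.json").read_text())
    scenarios = sorted(p for p in (scale_dir / "benches").iterdir() if p.is_dir())
    return qas, scenarios
```

Usage, for example: `qas, scenarios = load_scale(Path("NIAT-Pro/har/Srow6_Scol6"))`, then evaluate the same qas against each scenario's rendered tables.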
Benchmark-scenario hierarchy
Inside benches/, each subfolder name is of the form i{}_j{}:
benches/
├── i01_j01/
├── i01_j02/
├── i01_j03/
├── i02_j01/
├── i02_j02/
├── i02_j03/
├── i03_j01/
├── i03_j02/
└── i03_j03/
These folders represent information positions.
Meaning of i{}_j{}:
- i is the row-position index
- j is the column-position index
They indicate where the target information is placed in the benchmark table. In the benchmark design, information position is systematically controlled across row and column dimensions, corresponding to top, middle, and bottom positions along rows and front, middle, and back positions along columns.
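A small helper can translate a scenario folder name into the position labels described above. The exact mapping from 1-indexed codes to top/middle/bottom and front/middle/back is an assumption here, inferred from the three-by-three grid rather than stated in a spec:

```python
# Assumed mapping of 1-indexed position codes to the labels described
# above (top/middle/bottom rows; front/middle/back columns).
ROW_LABELS = {1: "top", 2: "middle", 3: "bottom"}
COL_LABELS = {1: "front", 2: "middle", 3: "back"}

def describe_scenario(name: str) -> str:
    """Translate a folder name like 'i02_j03' into a position label."""
    i_part, j_part = name.split("_")
    i, j = int(i_part[1:]), int(j_part[1:])
    return f"{ROW_LABELS[i]} rows, {COL_LABELS[j]} columns"
```

For example, `describe_scenario("i02_j03")` would read as "middle rows, back columns" under this assumed mapping.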
Format-file hierarchy
Inside each i{}_j{} folder, the repository stores the same benchmark scenario rendered into 11 different formats, together with scenario metadata:
i{row_pos}_j{col_pos}/
├── meta.json
├── table.csv
├── table.html
├── table.json
├── table.md
├── table.nl.txt
├── table.psv
├── table.sql
├── table.tex
├── table.tsv
├── table.xml
└── table.yaml
Meaning of these files:
- meta.json: metadata for the specific benchmark scenario
- table.csv, table.tsv, table.psv: delimiter-separated representations
- table.json, table.xml, table.yaml: hierarchical serialization formats
- table.md, table.html, table.tex: markup-oriented formats
- table.sql: executable relational representation
- table.nl.txt: free-form natural-language rendering
These 11 files correspond to the 11 tabular formats studied in NIAT-Pro.
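When scripting over the repository, it can be useful to verify that a scenario folder contains all 11 renderings before evaluation. A minimal sketch, using only the file names listed above:

```python
from pathlib import Path

# The 11 format files expected in every scenario folder, per the layout above.
FORMAT_FILES = [
    "table.csv", "table.tsv", "table.psv",
    "table.json", "table.xml", "table.yaml",
    "table.md", "table.html", "table.tex",
    "table.sql", "table.nl.txt",
]

def missing_formats(scenario_dir: Path) -> list:
    """Return the format files absent from a scenario folder."""
    return [f for f in FORMAT_FILES if not (scenario_dir / f).exists()]
```

An empty return value means the scenario is complete across all 11 formats.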
Full hierarchy example
The full folder hierarchy can be summarized as follows:
NIAT-Pro/
├── har/
│   ├── Srow6_Scol6/
│   │   ├── benches/
│   │   │   ├── i01_j01/
│   │   │   │   ├── meta.json
│   │   │   │   ├── table.csv
│   │   │   │   ├── table.html
│   │   │   │   ├── table.json
│   │   │   │   ├── table.md
│   │   │   │   ├── table.nl.txt
│   │   │   │   ├── table.psv
│   │   │   │   ├── table.sql
│   │   │   │   ├── table.tex
│   │   │   │   ├── table.tsv
│   │   │   │   ├── table.xml
│   │   │   │   └── table.yaml
│   │   │   ├── i01_j02/
│   │   │   ├── i01_j03/
│   │   │   ├── i02_j01/
│   │   │   ├── i02_j02/
│   │   │   ├── i02_j03/
│   │   │   ├── i03_j01/
│   │   │   ├── i03_j02/
│   │   │   └── i03_j03/
│   │   ├── manifest.json
│   │   └── qas.json
│   ├── Srow{...}_Scol{...}/
│   ├── T_s.csv
│   ├── benchmark_summary.json
│   └── qa_spec.json
├── secom/
├── wec/
└── README.md
How to interpret one path
For example, the path below:
har/Srow6_Scol6/benches/i01_j01/table.csv
can be interpreted as:
- har: the HAR source dataset
- Srow6_Scol6: row scaling factor 6 and column scaling factor 6
- benches/i01_j01: the benchmark scenario where the informative content is placed at information position (i=1, j=1)
- table.csv: the CSV rendering of that exact scenario
The corresponding qas.json in har/Srow6_Scol6/ provides the unified question and answer set for all i{}_j{} scenario folders under that same scaling configuration.
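This interpretation can also be automated. The sketch below parses an artifact path into its components, assuming only the naming scheme documented above:

```python
import re

# Parse a benchmark artifact path into its components, assuming the
# <dataset>/Srow{}_Scol{}/benches/i{}_j{}/<file> scheme described above.
PATH_RE = re.compile(
    r"(?P<dataset>[^/]+)/Srow(?P<srow>\d+)_Scol(?P<scol>\d+)"
    r"/benches/i(?P<i>\d+)_j(?P<j>\d+)/(?P<file>[^/]+)$"
)

def parse_artifact_path(path: str) -> dict:
    """Split a path like 'har/Srow6_Scol6/benches/i01_j01/table.csv'
    into dataset, scaling factors, position indices, and file name."""
    m = PATH_RE.match(path)
    if m is None:
        raise ValueError(f"unrecognized path: {path}")
    d = m.groupdict()
    return {**d, "srow": int(d["srow"]), "scol": int(d["scol"]),
            "i": int(d["i"]), "j": int(d["j"])}
```

Applied to the example path, this yields dataset "har", scaling factors (6, 6), position (1, 1), and file "table.csv".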
Benchmark construction
NIAT-Pro is generated through a controlled pipeline that:
- selects an informative subtable from the original source table
- expands rows and columns in a controlled manner
- places informative content at controlled row and column positions
- renders the same table scenario into multiple tabular formats
This construction is designed to keep target information and question content aligned across settings so that performance differences can be more cleanly attributed to the manipulated factors.
Controlled factors
Tabular format
The benchmark includes 11 different tabular formats. Format choice has a substantial impact on LLM performance, and CSV is not always the best-performing representation.
Table size
Table size is controlled along both row length and column width, yielding structures such as short-and-narrow, short-and-wide, long-and-narrow, and long-and-wide.
Information position
Target information is placed at controlled positions across both rows and columns, enabling analysis of early, middle, and late positions in the table context.
Intended uses
NIAT-Pro is intended for:
- benchmarking LLM tabular understanding
- studying long-context reasoning over structured data
- comparing different tabular representations
- evaluating sensitivity to table size and information position
- testing methods such as direct encoding, RAG, code execution, Code-RAG, and few-shot test-time scaling
Loading notes
Because the repository is organized as nested benchmark artifacts rather than a single flat table, users may prefer loading specific JSON files or writing a small parser over the folder hierarchy.
Example:
from pathlib import Path
import json

# Load the shared QA set, the scenario metadata, and one table rendering.
root = Path("NIAT-Pro")
dataset = "har"
scale = "Srow6_Scol6"
pos = "i01_j01"

qas = json.loads((root / dataset / scale / "qas.json").read_text())
meta = json.loads((root / dataset / scale / "benches" / pos / "meta.json").read_text())
table_csv = (root / dataset / scale / "benches" / pos / "table.csv").read_text()

print(meta)
print(qas[0] if isinstance(qas, list) and qas else qas)
print(table_csv[:500])
License
The current repository lists the dataset license as openrail.
Please verify the final repository-level license choice for consistency with the included files and redistribution plan.
Citation
If you use NIAT-Pro, please cite:
@article{yuan2026niatpro,
title={Needle-In-A-Table-Pro: Tabular Formats Matter When Table Size and Information Position Jointly Shape LLMs' Understanding of Large Tables},
author={},
journal={Preprint},
year={2026}
}
Acknowledgements
NIAT-Pro is built on public datasets from semiconductor manufacturing, wave energy systems, and human activity recognition, and is released to support research on robust long-context tabular understanding.