
NIAT-Pro: Needle-In-A-Table-Pro

NIAT-Pro is a benchmark for evaluating how well large language models understand and reason over large tables under controlled variations of tabular format, table size, and information position.

It extends the original Needle-In-A-Table setting from simple cell lookup to one-hop, two-hop, and four-hop tasks, and studies performance across 11 table representations:

  • CSV
  • TSV
  • PSV
  • JSON
  • XML
  • YAML
  • Markdown
  • HTML
  • LaTeX
  • SQL
  • Free-form text

NIAT-Pro is designed to study long-context tabular understanding at realistic scales using three public datasets: HAR, SECOM, and WEC. It systematically controls table representation, row and column scaling, and target information position to enable rigorous analysis of LLM behavior on large tables.

Overview

Existing tabular benchmarks often rely on relatively small tables, fix contextual properties within each sample, and report only coarse average accuracy. NIAT-Pro is designed to address these limitations by:

  • using substantially larger tables
  • systematically varying format, row and column scaling, and target information position
  • including retrieval and reasoning tasks of increasing complexity
  • enabling factorial analysis of how these factors affect model performance

Tasks

NIAT-Pro includes three levels of task complexity.

One-hop lookup

This task evaluates direct retrieval of a target cell value from row and column cues.

Two-hop reasoning

This task includes two types of questions:

  • finding the maximum or minimum value of a given column
  • table navigation relative to a base cell

Four-hop reasoning

This task further increases complexity by defining the base position implicitly through an extreme value in a column, then asking the model to navigate relative to that position and retrieve the final target value.
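As a toy illustration of the four-hop pattern (the data and column names below are made up for illustration, not drawn from the benchmark): the model must locate an extreme value, treat its cell as the base position, navigate a relative offset from it, and return the value it lands on.

```python
# Hypothetical toy table, illustrating the four-hop task structure.
rows = [
    {"a": 1.0, "b": 0.2, "c": 5.0},
    {"a": 2.0, "b": 0.9, "c": 6.0},
    {"a": 3.0, "b": 0.4, "c": 7.0},
]
cols = ["a", "b", "c"]

# Hops 1-2: find the row holding the maximum of column "b" (the implicit base cell).
base_row = max(range(len(rows)), key=lambda r: rows[r]["b"])
base_col = cols.index("b")

# Hops 3-4: navigate one row down and one column right, then retrieve the value.
target = rows[base_row + 1][cols[base_col + 1]]
print(target)  # 7.0
```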

Source datasets

NIAT-Pro is constructed from three public tabular datasets spanning different domains:

  • SECOM: semiconductor manufacturing
  • WEC: wave energy converters
  • HAR: human activity recognition

These datasets cover engineering, environmental, and health-related domains.

Repository structure and folder hierarchy

The repository is organized by dataset name, then by row and column scaling factors, then by benchmark scenarios defined by information positions, and finally by the 11 table-format files for the same test scenario.

At the root level, the repository contains the three dataset folders and the README file:

NIAT-Pro/
├── har/
├── secom/
├── wec/
└── README.md

Here:

  • har, secom, and wec are dataset names
  • each dataset folder stores benchmark artifacts derived from that source dataset

Dataset-level hierarchy

Inside each dataset folder, there are multiple subfolders named in the form Srow{}_Scol{}. These indicate the scaling factors applied to the row and column dimensions of the benchmark tables.

A dataset folder has the following general structure:

<dataset_name>/
├── Srow{row_scale}_Scol{col_scale}/
├── Srow{row_scale}_Scol{col_scale}/
├── ...
├── T_s.csv
├── benchmark_summary.json
└── qa_spec.json

Meaning of these items:

  • Srow{row_scale}_Scol{col_scale}: a benchmark subset with a specific row scaling factor and column scaling factor
  • T_s.csv: the informative subtable used during benchmark construction
  • benchmark_summary.json: summary metadata for the benchmark instances under this dataset
  • qa_spec.json: dataset-level question and answer specification, not specific questions and answers
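The Srow{}_Scol{} folder names encode the two scaling factors and can be parsed mechanically. A minimal sketch (the helper name is illustrative, not part of the repository):

```python
import re

def parse_scale_folder(name: str) -> tuple[int, int]:
    """Extract (row_scale, col_scale) from a folder name like 'Srow6_Scol6'."""
    m = re.fullmatch(r"Srow(\d+)_Scol(\d+)", name)
    if m is None:
        raise ValueError(f"not a scale folder: {name}")
    return int(m.group(1)), int(m.group(2))

print(parse_scale_folder("Srow6_Scol6"))  # (6, 6)
```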

Scaling-factor level hierarchy

Inside each Srow{}_Scol{} folder, the structure is:

Srow{row_scale}_Scol{col_scale}/
├── benches/
├── manifest.json
└── qas.json

Meaning of these items:

  • qas.json: records the questions and answers for this scaling setting; these are unified for all scenario subfolders inside benches/
  • manifest.json: metadata describing the scenario inventory and files under this scaling setting
  • benches/: contains benchmark scenarios created by varying information positions

This means that for a fixed dataset and a fixed pair of row and column scaling factors, the questions and answers are shared across the different information-position scenarios, while the actual rendered benchmark tables differ by scenario.

Benchmark-scenario hierarchy

Inside benches/, each subfolder name is of the form i{}_j{}:

benches/
├── i01_j01/
├── i01_j02/
├── i01_j03/
├── i02_j01/
├── i02_j02/
├── i02_j03/
├── i03_j01/
├── i03_j02/
└── i03_j03/

These folders represent information positions.

Meaning of i{}_j{}:

  • i is the row-position index
  • j is the column-position index

They indicate where the target information is placed in the benchmark table. In the benchmark design, information position is systematically controlled across row and column dimensions, corresponding to top, middle, and bottom positions along rows and front, middle, and back positions along columns.
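Assuming indices 1–3 map in order to top/middle/bottom along rows and front/middle/back along columns (the nine i{}_j{} folders suggest this 3×3 grid; verify against each scenario's meta.json), a scenario name can be decoded as follows. The helper is illustrative only:

```python
import re

# Assumed index-to-position mapping (1..3 in order); confirm against meta.json.
ROW_POS = {1: "top", 2: "middle", 3: "bottom"}
COL_POS = {1: "front", 2: "middle", 3: "back"}

def describe_scenario(name: str) -> str:
    """Decode a folder name like 'i02_j03' into a human-readable position."""
    m = re.fullmatch(r"i(\d+)_j(\d+)", name)
    if m is None:
        raise ValueError(f"not a scenario folder: {name}")
    i, j = int(m.group(1)), int(m.group(2))
    return f"{ROW_POS[i]} rows, {COL_POS[j]} columns"

print(describe_scenario("i02_j03"))  # middle rows, back columns
```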

Format-file hierarchy

Inside each i{}_j{} folder, the repository stores the same benchmark scenario rendered into 11 different formats, together with scenario metadata:

i{row_pos}_j{col_pos}/
├── meta.json
├── table.csv
├── table.html
├── table.json
├── table.md
├── table.nl.txt
├── table.psv
├── table.sql
├── table.tex
├── table.tsv
├── table.xml
└── table.yaml

Meaning of these files:

  • meta.json: metadata for the specific benchmark scenario
  • table.csv, table.tsv, table.psv: delimiter-separated representations
  • table.json, table.xml, table.yaml: hierarchical serialization formats
  • table.md, table.html, table.tex: markup-oriented formats
  • table.sql: executable relational representation
  • table.nl.txt: free-form natural-language rendering

These 11 files correspond to the 11 tabular formats studied in NIAT-Pro.
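For scripting over a scenario folder, the fixed set of renderings can be kept as a constant (file names taken from the listing above):

```python
# The 11 renderings present in every i{}_j{} folder, keyed by file name.
TABLE_FILES = {
    "table.csv": "CSV",
    "table.tsv": "TSV",
    "table.psv": "PSV",
    "table.json": "JSON",
    "table.xml": "XML",
    "table.yaml": "YAML",
    "table.md": "Markdown",
    "table.html": "HTML",
    "table.tex": "LaTeX",
    "table.sql": "SQL",
    "table.nl.txt": "free-form text",
}
print(len(TABLE_FILES))  # 11
```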

Full hierarchy example

The full folder hierarchy can be summarized as follows:

NIAT-Pro/
├── har/
│   ├── Srow6_Scol6/
│   │   ├── benches/
│   │   │   ├── i01_j01/
│   │   │   │   ├── meta.json
│   │   │   │   ├── table.csv
│   │   │   │   ├── table.html
│   │   │   │   ├── table.json
│   │   │   │   ├── table.md
│   │   │   │   ├── table.nl.txt
│   │   │   │   ├── table.psv
│   │   │   │   ├── table.sql
│   │   │   │   ├── table.tex
│   │   │   │   ├── table.tsv
│   │   │   │   ├── table.xml
│   │   │   │   └── table.yaml
│   │   │   ├── i01_j02/
│   │   │   ├── i01_j03/
│   │   │   ├── i02_j01/
│   │   │   ├── i02_j02/
│   │   │   ├── i02_j03/
│   │   │   ├── i03_j01/
│   │   │   ├── i03_j02/
│   │   │   └── i03_j03/
│   │   ├── manifest.json
│   │   └── qas.json
│   ├── Srow{...}_Scol{...}/
│   ├── T_s.csv
│   ├── benchmark_summary.json
│   └── qa_spec.json
├── secom/
├── wec/
└── README.md

How to interpret one path

For example, the path below:

har/Srow6_Scol6/benches/i01_j01/table.csv

can be interpreted as:

  • har: the HAR source dataset
  • Srow6_Scol6: row scaling factor 6 and column scaling factor 6
  • benches/i01_j01: the benchmark scenario where the informative content is placed at information position (i=1, j=1)
  • table.csv: the CSV rendering of that exact scenario

The corresponding qas.json in har/Srow6_Scol6/ provides the unified question and answer set for all i{}_j{} scenario folders under that same scaling configuration.
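Putting the pieces together, the relative path for any (dataset, scale, position, format) combination can be assembled programmatically. A sketch, assuming the zero-padded two-digit index style seen in the folder names:

```python
from pathlib import Path

def scenario_path(dataset: str, row_scale: int, col_scale: int,
                  i: int, j: int, fmt: str = "csv") -> Path:
    """Build the relative path to one rendered benchmark table."""
    return (Path(dataset)
            / f"Srow{row_scale}_Scol{col_scale}"
            / "benches"
            / f"i{i:02d}_j{j:02d}"          # zero-padded position indices
            / f"table.{fmt}")

print(scenario_path("har", 6, 6, 1, 1).as_posix())
# har/Srow6_Scol6/benches/i01_j01/table.csv
```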

Benchmark construction

NIAT-Pro is generated through a controlled pipeline that:

  1. selects an informative subtable from the original source table
  2. expands rows and columns in a controlled manner
  3. places informative content at controlled row and column positions
  4. renders the same table scenario into multiple tabular formats

This construction is designed to keep target information and question content aligned across settings so that performance differences can be more cleanly attributed to the manipulated factors.
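Step 3 of the pipeline (controlled placement) can be illustrated with a minimal positioning sketch. The function name and filler value are hypothetical; this is not the actual construction code, only the embedding idea:

```python
def place_subtable(informative, total_rows, total_cols,
                   top_pad, left_pad, filler=0.0):
    """Embed a small informative subtable into a larger filler table
    at a controlled (top_pad, left_pad) offset."""
    table = [[filler] * total_cols for _ in range(total_rows)]
    for r, row in enumerate(informative):
        for c, value in enumerate(row):
            table[top_pad + r][left_pad + c] = value
    return table

# Place a 1x2 informative block inside a 3x4 table: one row down, two columns right.
grid = place_subtable([[1.0, 2.0]], 3, 4, 1, 2)
print(grid[1])  # [0.0, 0.0, 1.0, 2.0]
```

Varying top_pad and left_pad while keeping the informative block fixed is what yields the nine i{}_j{} position scenarios for a given table size.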

Controlled factors

Tabular format

The benchmark includes 11 different tabular formats. Format choice has a substantial impact on LLM performance, and CSV is not always the best-performing representation.

Table size

Table size is controlled along both row length and column width, yielding structures such as short-and-narrow, short-and-wide, long-and-narrow, and long-and-wide.

Information position

Target information is placed at controlled positions across both rows and columns, enabling analysis of early, middle, and late positions in the table context.

Intended uses

NIAT-Pro is intended for:

  • benchmarking LLM tabular understanding
  • studying long-context reasoning over structured data
  • comparing different tabular representations
  • evaluating sensitivity to table size and information position
  • testing methods such as direct encoding, RAG, code execution, Code-RAG, and few-shot test-time scaling

Loading notes

Because the repository is organized as nested benchmark artifacts rather than a single flat table, users may prefer loading specific JSON files or writing a small parser over the folder hierarchy.

Example:

from pathlib import Path
import json

# Point at the repository root and pick one dataset / scale / scenario.
root = Path("NIAT-Pro")

dataset = "har"
scale = "Srow6_Scol6"
pos = "i01_j01"

# qas.json lives at the scaling level and is shared by every scenario folder;
# meta.json is specific to the chosen information position.
qas = json.loads((root / dataset / scale / "qas.json").read_text())
meta = json.loads((root / dataset / scale / "benches" / pos / "meta.json").read_text())

# The CSV rendering of the benchmark table for this exact scenario.
table_csv = (root / dataset / scale / "benches" / pos / "table.csv").read_text()
print(meta)
print(qas[0] if isinstance(qas, list) and len(qas) > 0 else qas)
print(table_csv[:500])

License

The current repository lists the dataset license as openrail.

Please verify the final repository-level license choice for consistency with the included files and redistribution plan.

Citation

If you use NIAT-Pro, please cite:

@article{yuan2026niatpro,
  title={Needle-In-A-Table-Pro: Tabular Formats Matter When Table Size and Information Position Jointly Shape LLMs' Understanding of Large Tables},
  author={},
  journal={Preprint},
  year={2026}
}

Acknowledgements

NIAT-Pro is built on public datasets from semiconductor manufacturing, wave energy systems, and human activity recognition, and is released to support research on robust long-context tabular understanding.
