
ViTaB-A: Evaluating Multimodal Large Language Models on Visual Table Attribution

Dataset Description

This dataset is a perturbed version of the HiTab dataset. Specifically, we retain the QA pairs, cell attributions, and JSON-formatted tables, and augment them with a markdown version and multiple rendered images of each table to support attribution research on images of structured data.

Key Features

  • Visual Diversity: Each table is rendered in 5 visual variations:

    • Arial font with standard headers
    • Times New Roman font with standard headers
    • Red headers with white text
    • Blue headers with white text
    • Green headers with white text
  • Multi-modal Representations: Each sample includes:

    • JSON structure (programmatic access)
    • Markdown format (text-based with row/column labels)
    • PNG images (visual format, base64 encoded)
  • Standardized Coordinates: Tables include Excel-style coordinates (A, B, C... for columns; 1, 2, 3... for rows) enabling precise answer localization

  • Rich Annotations: Includes questions, answers, answer formulas, and cells indicating answer locations
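The Excel-style coordinate convention can be sketched as a small helper. This function is illustrative, not part of the dataset release, and it assumes 0-based `(row, column)` indices, an assumption inferred from the sample record below where the highlighted cell `[7, 5]` corresponds to the formula `SUM(F8)`:

```python
def to_excel_coord(row: int, col: int) -> str:
    """Map 0-based (row, col) indices to an Excel-style coordinate.

    Columns become letters (A, B, ..., Z, AA, ...), rows become
    1-based numbers, matching the coordinate labels in the tables.
    """
    label = ""
    c = col
    while True:
        label = chr(ord("A") + c % 26) + label
        c = c // 26 - 1
        if c < 0:
            break
    return f"{label}{row + 1}"

print(to_excel_coord(7, 5))  # prints "F8"
```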

Dataset Statistics

  • Total Samples: ~8,300 (from HiTab)
  • Unique Tables: ~2,500
  • Splits:
    • Train: 80%
    • Validation: 6.6%
    • Dev: 6.6%
    • Test: 6.8%

Data Format

Each line in visualcite.jsonl contains:

{
  "id": "visualcite_000001",
  "split": "train",
  "question": "What was the team's total rushing yards in 2015?",
  "answer": "529",
  "answer_formulas": ["SUM(F8)"],
  "highlighted_cells": [[7, 5]],
  "table_json": {
    "texts": [...],
    "merged_regions": [...],
    "top_header_rows_num": 2,
    "left_header_columns_num": 1,
    "title": "career statistics"
  },
  "table_md": "| | A | B | C | ...\n| --- | --- | --- | ...",
  "table_images": {
    "arial": "base64_encoded_png...",
    "times_new_roman": "base64_encoded_png...",
    "red": "base64_encoded_png...",
    "blue": "base64_encoded_png...",
    "green": "base64_encoded_png..."
  },
  "source": "hitab_train",
  "source_id": "0_totto74-1"
}

Field Descriptions

| Field | Type | Description |
| --- | --- | --- |
| `id` | string | Unique identifier for the sample |
| `split` | string | Dataset split (train/validation/dev/test) |
| `question` | string | Natural language question about the table |
| `answer` | string | Ground truth answer |
| `answer_formulas` | list | Formulas used to derive the answer |
| `highlighted_cells` | list | Row/column indices of cells relevant to the answer |
| `table_json` | dict | Complete table structure with metadata |
| `table_md` | string | Markdown representation with coordinate labels |
| `table_images` | dict | Base64-encoded PNG images (5 variations) |
| `source` | string | Original HiTab split |
| `source_id` | string | Original HiTab table identifier |

Usage

Loading the Dataset

import json

# Load dataset
with open('data/visualcite.jsonl', 'r') as f:
    dataset = [json.loads(line) for line in f]

# Filter by split
train_data = [sample for sample in dataset if sample['split'] == 'train']

Decoding Images

import base64
from PIL import Image
from io import BytesIO

# Decode a table image
sample = dataset[0]
img_data = base64.b64decode(sample['table_images']['arial'])
image = Image.open(BytesIO(img_data))
image.show()

Using Markdown Tables

# Access markdown representation
markdown_table = sample['table_md']
print(markdown_table)

# The markdown includes coordinate labels:
# |  | A | B | C | D |
# | --- | --- | --- | --- | --- |
# | 1 | season | team | games | ... |
# | 2 | 2014 | tampa bay | 8 | ... |
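If you need the markdown back as a grid of cells, a minimal parser can split on the pipe delimiters shown above. This is a sketch under the assumption that every row follows the pipe layout illustrated here; `parse_md_table` is a hypothetical helper, not shipped with the dataset:

```python
def parse_md_table(md: str) -> list[list[str]]:
    """Split a pipe-delimited markdown table into rows of cell strings.

    Skips the `| --- | --- |` separator row; the first returned row
    holds the column labels, subsequent rows start with the row label.
    """
    rows = []
    for line in md.strip().splitlines():
        cells = [c.strip() for c in line.strip("|").split("|")]
        # A separator row consists only of dashes in every cell.
        if all(c and set(c) <= {"-"} for c in cells):
            continue
        rows.append(cells)
    return rows
```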

Original HiTab Citation:

@inproceedings{cheng-etal-2022-hitab,
    title = "{H}i{T}ab: A Hierarchical Table Dataset for Question Answering and Natural Language Generation",
    author = "Cheng, Zhoujun  and others",
    booktitle = "Proceedings of ACL 2022",
    year = "2022"
}

Dataset Creation

Source Data

VisualCite is derived from HiTab, a hierarchical table QA dataset created by Microsoft Research.

Generation Process

  1. Table Rendering: Each table is converted to HTML with 5 different styling configurations
  2. Image Generation: HTML rendered to PNG using headless browser (Playwright)
  3. Coordinate System: Row/column labels added (A-Z, 1-N) for precise referencing
  4. Markdown Conversion: Tables converted to markdown with preserved coordinates
  5. Base64 Encoding: Images encoded for embedding in JSONL
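Step 5 above can be sketched with the standard library alone (the rendering steps require a headless browser). The function names here are illustrative, not part of the generation pipeline's actual code:

```python
import base64

def encode_png(png_bytes: bytes) -> str:
    """Base64-encode rendered PNG bytes for embedding in a JSONL record."""
    return base64.b64encode(png_bytes).decode("ascii")

def decode_png(b64: str) -> bytes:
    """Recover the original PNG bytes from a record's base64 string."""
    return base64.b64decode(b64)
```

Decoding is the inverse, so a round trip returns the original bytes; this is the same decoding shown in the usage examples above.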

Quality Assurance

  • All images are PNG format
  • Tables preserve original structure including merged cells
  • Coordinate systems are consistent across all representations
  • Random seed (42) ensures reproducible splits

Limitations

  • Merged cells in markdown show empty strings for non-primary cells
  • Very wide tables may have readability issues in image format
  • Base64 encoding significantly increases file size
  • Limited to tables from Wikipedia and web sources (HiTab's scope)

License

This dataset inherits the license from HiTab. Please refer to the HiTab repository for licensing details.

Acknowledgments

We thank the creators of HiTab for providing the foundation for this dataset. Special thanks to Microsoft Research for their contributions to table understanding research.
