
AniMINT: UI Animation Interpretation Dataset

Dataset Description

AniMINT is a dataset for evaluating whether vision language models (VLMs) can understand UI animations beyond static screenshots. The dataset contains 300 densely annotated UI animation videos from mobile, web, and desktop interfaces. Each animation is annotated with:

  1. Start and end frame of the animation
  2. Animation region(s) of interest
  3. Context information
  4. User input information, if any
  5. 10 unique human-annotated, open-ended descriptions of the animation effect
  6. 10 unique human-annotated, open-ended descriptions of the animation meaning
  7. Categorization of animation purpose
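To make the annotation components above concrete, here is a minimal sketch of what one record might look like in Python. The field names and values are illustrative assumptions, not the dataset's actual schema; consult the released files for the real field names.

```python
# Hypothetical shape of one AniMINT record.
# Field names below are illustrative, NOT the dataset's actual schema.
record = {
    "start_frame": 12,                       # (1) start frame of the animation
    "end_frame": 118,                        #     end frame of the animation
    "regions_of_interest": [                 # (2) normalized [x0, y0, x1, y1] boxes
        [0.10, 0.35, 0.90, 0.55],
    ],
    "context": "Login screen of a mobile banking app",   # (3) context info
    "user_input": "User taps the sign-in button",        # (4) user input, if any
    "animation_effect_annotations": [        # (5) 10 effect descriptions
        f"effect description {i}" for i in range(10)
    ],
    "animation_meaning_annotations": [       # (6) 10 meaning descriptions
        f"meaning description {i}" for i in range(10)
    ],
    "purpose": "Feedback",                   # (7) one of seven purpose labels
}

# Each clip carries 10 unique human-written descriptions of each kind.
assert len(record["animation_effect_annotations"]) == 10
assert len(record["animation_meaning_annotations"]) == 10
```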

The dataset is intended to support research on:

  • UI animation understanding;
  • multimodal and video-language evaluation;
  • UI agent perception;
  • motion-grounded interface reasoning.

This dataset accompanies the paper:

Beyond Screenshots: Evaluating VLMs’ Understanding of UI Animations

If you use AniMINT, please cite:

@misc{liang2026beyondscreenshots,
  title         = {Beyond Screenshots: Evaluating VLMs' Understanding of UI Animations},
  author        = {Liang, Chen and Jiang, Xirui and Deng, Naihao and Adar, Eytan and Guo, Anhong},
  year          = {2026},
  eprint        = {2604.26148},
  archivePrefix = {arXiv},
  primaryClass  = {cs.HC},
  url           = {https://arxiv.org/abs/2604.26148}
}

Dataset Summary

  • Dataset name: AniMINT
  • Full name: UI AniMation INTerpretation Dataset
  • Number of clips: 300
  • Modality: UI animation video + text annotations
  • Source platforms: Mobile, web, desktop
  • Platform distribution: Mobile 75.00%; Web 15.67%; Desktop 9.33%
  • Median animation duration: 3.59 seconds
  • Annotation language: English
  • Video/interface language: Primarily English
  • Annotator region: United States
  • License: CC BY-NC-ND 4.0

Animation Purpose Labels

AniMINT uses seven purpose categories for UI animations:

  • Transition: Animations that support layout or state changes in the interface.
  • Demonstration: Animations that reveal or explain the behavior, functionality, or structure of the interface or its elements.
  • Guidance: Animations that guide the user toward an intended interaction.
  • Feedback: Animations that provide visual responses to user interactions.
  • Visualization: Animations that represent system status, data, progress, or other information.
  • Highlight: Animations that emphasize specific content or draw the user's attention to key elements.
  • Aesthetic: Animations that enhance visual appeal, create emotional impact, or improve user experience without primarily conveying required information.
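The seven purpose categories form a small closed label set, which can be encoded directly for filtering or validation. A minimal sketch (the constant and helper names are my own, not part of the dataset's code):

```python
# The seven AniMINT animation purpose categories, as listed in the card.
PURPOSE_LABELS = {
    "Transition",
    "Demonstration",
    "Guidance",
    "Feedback",
    "Visualization",
    "Highlight",
    "Aesthetic",
}

def is_valid_purpose(label: str) -> bool:
    """Return True if `label` is one of the seven AniMINT purpose categories."""
    return label in PURPOSE_LABELS
```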

Meaning Interpretation and Effect Description Annotations

AniMINT includes open-ended human text annotations for each UI animation video. Each video contains:

  • animation_meaning_annotations (10 per video): Human-written descriptions of what the animation communicates or means in the UI context.
  • animation_effect_annotations (10 per video): Human-written descriptions of the visible animation effect, motion, or perceptual change.

These annotations support evaluation of whether a model can go beyond detecting motion and produce interpretations that align with human understanding. The meaning annotations focus on the interpretation of the animation, such as whether it signals an error or system state, or attracts the user's attention. The effect annotations focus on the visible motion or visual transformation, such as an element shaking, fading, filling, bouncing, moving, changing color, or resizing.

For example, a shake animation that indicates a failed login attempt may have an animation effect description of: "The login field page becomes outlined in red and rapidly shakes 3 times when the user taps the sign in button." and an interpretation of: "The animation is telling the user that signing in was unable to be completed and that some error has occurred that has prevented the sign in."
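Because each clip has 10 reference descriptions per annotation type, one simple way to score a model's output is to compare it against every reference and keep the best match. The sketch below uses plain token-overlap F1 as a stand-in metric; it is not the paper's evaluation protocol, and the function names are my own.

```python
def token_f1(pred: str, ref: str) -> float:
    """Whitespace-token F1 between a prediction and one reference (over unique tokens)."""
    p, r = set(pred.lower().split()), set(ref.lower().split())
    if not p or not r:
        return 0.0
    common = len(p & r)
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(r)
    return 2 * precision * recall / (precision + recall)

def best_of_n(pred: str, references: list[str]) -> float:
    """Score a model output against all 10 references; keep the best match."""
    return max(token_f1(pred, ref) for ref in references)

# Example: compare a model's effect description to two (abridged) references.
refs = [
    "The login field becomes outlined in red and rapidly shakes 3 times",
    "The field shakes side to side after the sign in button is tapped",
]
score = best_of_n("the login field shakes and turns red", refs)
```

Taking the maximum over references rewards a model for matching any one of the diverse human phrasings rather than penalizing it for missing all ten at once.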

DMCA Notices and Data Removal Requests

AniMINT is released for non-commercial research and evaluation purposes. The dataset may include recordings of third-party user interfaces, logos, or other UI assets. The research team has trimmed each video to focus primarily on the annotated animation for research evaluation purposes. The collection source for each animation is attributed in datasource.csv. All third-party rights remain with their respective owners.

If you are a rights holder, platform operator, developer, or individual who believes that a dataset item should be removed or reviewed, please contact us at researchpubacc at gmail.com with details including the dataset item ID or filename, the reason for the request, your relationship to the content or rights holder, and a contact method for follow-up.
