# TACO Resized (512x376) — Benchmarking Generalizable Bimanual Tool-ACtion-Object Understanding
This is the resized version of the TACO dataset, with all allocentric videos and segmentation masks downscaled to a uniform 512x376 resolution (from native 4096x3000 / 2048x1500). Camera intrinsics are rescaled accordingly.
## Why use this version?
The original TACO allocentric videos are 4096x3000, making training impractical without on-the-fly resizing. This version pre-processes everything to 512x376, resulting in ~25x faster data loading and 4.4x less disk space.
## Loading Performance Comparison
Config: 4 context + 3 target views, 4 past + 8 future frames (52 frame decodes/sample). Cold loading, no cache.
| | Original (4096x3000) | Resized (512x376) | Speedup |
|---|---|---|---|
| Single sample (no seg) | 3.91 s | 158 ms | 25x |
| Single sample (with seg) | 4.23 s | 169 ms | 25x |
| Throughput (4 workers) | 0.52 samp/s | 12.2 samp/s | 23x |
| Disk size | 2.2 TB | 495 GB | 4.4x smaller |
| Allocentric videos | 809 GB | 4.3 GB | 188x smaller |
| Segmentation masks | 632 GB | 145 GB | 4.4x smaller |
Full profiling details: `tools/taco_analysis/profile_comparison.md`
## What changed
- Allocentric RGB videos: resized from 4096x3000 / 2048x1500 to 512x376, re-encoded as H.264 MP4
- 2D segmentation masks: resized from 750x1024 to 375x512 using nearest-neighbor interpolation
- Calibration intrinsics: K matrix scaled to match the new resolution, `imgSize` updated to [512, 376]
- Everything else unchanged: egocentric videos, depth, hand poses, object poses, object models, MANO
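The intrinsics rescaling can be sketched as below. This is an illustrative example, not the dataset's pipeline code: the `K` values are made up, and the real matrices live in `Allocentric_Camera_Parameters/`.

```python
import numpy as np

def rescale_intrinsics(K, old_wh, new_wh):
    """Scale a 3x3 pinhole K matrix from resolution old_wh=(w, h) to new_wh=(w, h)."""
    sx = new_wh[0] / old_wh[0]
    sy = new_wh[1] / old_wh[1]
    # Left-multiplying by diag(sx, sy, 1) scales fx, cx by sx and fy, cy by sy.
    S = np.diag([sx, sy, 1.0])
    return S @ K

# Hypothetical intrinsics for a 4096x3000 allocentric camera.
K = np.array([[7000.0,    0.0, 2048.0],
              [   0.0, 7000.0, 1500.0],
              [   0.0,    0.0,    1.0]])

K_small = rescale_intrinsics(K, (4096, 3000), (512, 376))
# Focal lengths and principal point now match the 512x376 frames.
```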
## Dataset Contents
- 2120 motion sequences (from Version 1's 2317, filtered to marker-removed)
- 12 allocentric cameras per sequence at 512x376
- 2D segmentation masks at 375x512
- Egocentric RGB-D videos (original resolution)
- Hand-object pose annotations + pre-computed 3D hand joints
- 206 high-resolution object models
- Camera parameters (intrinsics rescaled)
## Archive Contents
| Archive | Size | Contents |
|---|---|---|
| `Marker_Removed_Allocentric_RGB_Videos.zip` | 4.3 GB | 12 camera MP4s per sequence (512x376) |
| `2D_Segmentation.zip` | 145 GB | Per-camera segmentation masks (375x512 npy) |
| `Hand_Poses.zip` | 25.9 GB | MANO params (pkl) + pre-computed 3D joints (npy) |
| `Hand_Poses_3D.zip` | 160 MB | 3D joints only — `hand_joints.npy` per sequence, (T, 2, 21, 3) float32 |
| `Object_Poses.zip` | 64 MB | Object 6DoF transforms (npy) |
| `Egocentric_RGB_Videos.zip` | 641 MB | Egocentric RGB videos |
| `Egocentric_Depth_Videos.zip.*` | 959 MB | Egocentric depth videos (split archive) |
| `object_models_released.zip` | ~1.2 GB | 206 high-res object meshes |
| `mano_v1_2.zip` | small | MANO hand model files |
`Allocentric_Camera_Parameters/` and `taco_info.csv` are stored directly (not zipped).
**Tip:** If you only need 3D hand joints (not raw MANO parameters), download `Hand_Poses_3D.zip` (160 MB) instead of `Hand_Poses.zip` (26 GB).
## Downloading
```bash
# Full download
huggingface-cli download mzhobro/taco_dataset_resized \
  --repo-type dataset \
  --local-dir taco_dataset_resized

cd taco_dataset_resized

# Reassemble split archives and extract
./reassemble.sh
for z in *.zip; do unzip -qn "$z"; done
```
```bash
# Minimal download (allocentric videos + cameras + 3D hand joints only)
huggingface-cli download mzhobro/taco_dataset_resized \
  --repo-type dataset \
  --include "*.csv" "*.sh" "*.md" \
    "Marker_Removed_Allocentric_RGB_Videos.zip" \
    "Allocentric_Camera_Parameters/**" \
    "Hand_Poses_3D.zip" \
  --local-dir taco_dataset_resized
```
## Hand Poses 3D Format
`Hand_Poses_3D/{action}/{sequence_id}/hand_joints.npy` — shape `(T, 2, 21, 3)`:
- T: number of frames
- Dim 1: 0=left hand, 1=right hand
- 21 joints: wrist, index(MCP,PIP,DIP,tip), middle(...), ring(...), pinky(...), thumb(CMC,MCP,IP,tip)
- 3: xyz world coordinates in meters
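The indexing convention above can be sketched as follows. The array here is synthetic (zeros) with the documented shape and dtype, standing in for a real `np.load("Hand_Poses_3D/{action}/{sequence_id}/hand_joints.npy")`:

```python
import numpy as np

T = 100  # number of frames; varies per sequence
joints = np.zeros((T, 2, 21, 3), dtype=np.float32)  # stand-in for np.load(...)

# Dim 1 selects the hand, dim 2 the joint, dim 3 the xyz coordinate (meters).
LEFT, RIGHT = 0, 1
WRIST = 0  # joint 0 is the wrist

right_wrist_traj = joints[:, RIGHT, WRIST, :]  # (T, 3) wrist positions over time
left_hand_frame0 = joints[0, LEFT]             # (21, 3) all left-hand joints at frame 0
```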
```python
# Loading in the dataset loader
ds = TACODataset(..., load_hand_joints=True)
sample = ds[0]
sample["hand_joints"]  # (T, 2, 21, 3) float32
```
## Related

- Original full-resolution dataset: `mzhobro/taco_dataset`
## Tools

The `tools/` directory contains:
- `taco_dataset_loader.py` — PyTorch Dataset class for loading TACO data
- `view_sampler.py` — Camera view sampling strategies
- `generate_taco_csv.py` — Generate `taco_info.csv` metadata
- `precompute_hand_joints.py` — Pre-compute 3D hand joints from MANO parameters
- `taco_analysis/` — Analysis and visualization scripts (dataset stats, camera extrinsics, epipolar lines, mesh overlays, profiling)
- `resizing_pipeline/` — Scripts used to produce this resized dataset from the original
```bash
cd tools/taco_analysis

# Dataset summary statistics
python analyze_taco.py

# Camera extrinsics analysis
python analyze_extrinsics.py

# Loading performance profile
python profile_dataset.py --root ../../ --output profile.md

# Visualizations
python visualize_taco_3d_scene.py
python visualize_taco_cameras_topdown.py
python visualize_taco_epipolar.py
python render_mesh_overlay.py
```

Each script accepts `--help` for full options.