LARY: A Latent Action Representation Yielding Benchmark for Generalizable Vision-to-Action Alignment
LARY is a unified evaluation framework for latent action representations. Given any model that produces latent action representations (LAMs or visual encoders), LARY provides three complementary evaluation pipelines:
| Pipeline | Task |
|---|---|
| `get_latent_action` | Extract latent action representations from videos or image pairs |
| `classification` | Probe how well latent actions capture action semantics (action-type recognition) |
| `regression` | Probe how well latent actions decode physical robot actions (action regression) |
News
- [2026-04-13] We release the code, text annotations, and partial validation datasets. Training datasets are coming soon.
Release Checklist
- Code
- Text annotations link
- Validation datasets
- Training datasets
Overview
While the shortage of explicit action data limits Vision-Language-Action (VLA) models, human action videos offer a scalable yet unlabeled data source. A critical challenge in utilizing large-scale human video datasets lies in transforming visual signals into ontology-independent representations, known as latent actions. However, the capacity of latent action representations to support robust control from visual observations has yet to be rigorously evaluated.
We introduce the Latent Action Representation Yielding (LARY) Benchmark, a unified framework for evaluating latent action representations on both high-level semantic actions (*what to do*) and low-level robotic control (*how to do it*). The comprehensively curated dataset encompasses over one million videos (1,000 hours) spanning 151 action categories, alongside 620K image pairs and 595K motion trajectories across diverse embodiments and environments. Our experiments reveal two crucial insights: (i) general visual foundation models, trained without any action supervision, consistently outperform specialized embodied LAMs; (ii) latent-based visual spaces are fundamentally better aligned with the physical action space than pixel-based spaces. These results suggest that general visual representations inherently encode action-relevant knowledge for physical control, and that semantic-level abstraction is a fundamentally more effective pathway from vision to action than pixel-level reconstruction.
Contributions
LARYBench: We introduce LARYBench, a comprehensive benchmark that is the first to decouple the evaluation of latent action representations from downstream policy performance. LARYBench probes representations along two complementary dimensions — high-level semantic action encoding (what to do) and the low-level physical dynamics required for robotic control (how to do it) — enabling direct, standardized measurement of representation quality itself.
Large-Scale Data Engine: To support rigorous evaluation, we develop an automated data engine to re-segment and re-annotate a large-scale corpus, yielding 1.2M videos, 620K image pairs, and 595K trajectories across 151 action categories and 11 robotic embodiments, covering both human and robotic agents from egocentric and exocentric perspectives in simulated and real-world environments.
Key Findings: Through systematic evaluation of 11 models, we reveal two consistent findings: (i) action-relevant features can emerge from large-scale visual pre-training without explicit action supervision, and (ii) latent-based feature spaces tend to align with robotic control better than pixel-based ones. These results suggest that future VLA systems may benefit more from leveraging general visual representations than from learning action spaces solely on scarce robotic data.
Environment Setup
1. Clone the repository
2. Create conda environments
Different LAMs require different environments.
The project ships with a helper function lary_activate <model> in env.sh.
| LAM | Environment |
|---|---|
| `lapa`, `lapa-dinov2`, `lapa-dinov3`, `lapa-siglip2`, `lapa-magvit2`, `univla`, `dinov3`, `lapa-dinov3-cs*`, `flux2` | `laq` |
| `vjepa2` | `vjepa2` |
| `wan2-2` | `wan` |
| `villa-x` | dedicated `.venv` inside `$VILLA_X_DIR` |
Base environment (laq):
conda create -n laq python=3.10 -y
conda activate laq
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
pip install einops transformers omegaconf tqdm pandas numpy opencv-python pillow \
scikit-image accelerate diffusers wandb timm decord seaborn scikit-learn
V-JEPA 2 environment:
conda create -n vjepa2 python=3.10 -y
conda activate vjepa2
pip install torch torchvision
pip install einops transformers tqdm pandas numpy opencv-python pillow decord
3. Configure environment variables
Edit env.sh to point to your file system, then source it:
# Minimal required variables
export LARY_ROOT=/path/to/LARY
export DATA_DIR=/path/to/LARYBench # LARYBench dataset root (classification/ + regression/)
export LARY_LA_DIR=/path/to/latent_actions # where extracted .npz files are stored
export MODEL_DIR=/path/to/pretrained_lam_weights
export LARY_LOG_DIR=/path/to/logs
source /path/to/LARY/env.sh
Key variables and their roles are documented in Environment Variables Reference.
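Before launching any stage, it can help to fail fast when a required variable is unset. A minimal sketch (the `missing_vars` helper is ours for illustration, not part of the repo):

```python
import os

# The five variables marked required in the Environment Variables Reference.
REQUIRED_VARS = ["LARY_ROOT", "DATA_DIR", "LARY_LA_DIR", "MODEL_DIR", "LARY_LOG_DIR"]

def missing_vars(environ=os.environ):
    """Return the required LARY variables that are unset or empty."""
    return [v for v in REQUIRED_VARS if not environ.get(v)]

if __name__ == "__main__":
    missing = missing_vars()
    if missing:
        raise SystemExit(f"Set these variables in env.sh first: {', '.join(missing)}")
```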
Quick Start — End-to-End Pipeline
The following example runs all three stages on the CALVIN dataset using LAPA-DINOv2 as the LAM.
source env.sh
conda activate laq
# ── Stage 1: Extract latent actions ────────────────────────────────────────
# Input CSVs are already in data/; no --input flag needed.
python -m lary.cli extract \
--model dinov2 \
--dataset calvin \
--split train \
--mode image \
--stride 5
python -m lary.cli extract \
--model dinov2 \
--dataset calvin \
--split val \
--mode image \
--stride 5
# → writes data/train_la_calvin_5_dinov2.csv
# → writes data/val_la_calvin_5_dinov2.csv
# → writes npz files under $LARY_LA_DIR/calvin/stride_5/{train,val}/dinov2/
# ── Stage 2: Regression ─────────────────────────────────────────────────────
conda activate lerobot2
python -m lary.cli regress \
--model dinov2 \
--dataset calvin \
--stride 5 \
--model-type mlp
For classification (video-based datasets):
# Extract
conda activate laq
python -m lary.cli extract \
--model dinov2 \
--dataset robot_1st \
--split train \
--mode video
python -m lary.cli extract \
--model dinov2 \
--dataset robot_1st \
--split val \
--mode video
# → writes data/train_la_robot_1st_dinov2.csv
# → writes data/val_la_robot_1st_dinov2.csv
# Classify
conda activate vjepa2
python -m lary.cli classify \
--model dinov2 \
    --dataset robot_1st \
--dim 1024 \
--classes 123
Step 1 · get_latent_action
All metadata CSVs required as input are pre-built in data/ with relative paths.
They are resolved at runtime via the DATA_DIR environment variable.
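A quick way to sanity-check your `DATA_DIR` is to resolve one relative path from a metadata CSV and confirm the file exists. A sketch under assumptions — the helper name and the example path are ours, not repo API:

```python
import os

def resolve_data_path(rel_path, data_dir=None):
    """Join a relative path from a metadata CSV onto the DATA_DIR root."""
    root = data_dir if data_dir is not None else os.environ.get("DATA_DIR", "")
    return os.path.join(root, rel_path)

# Hypothetical example row from a metadata CSV:
print(resolve_data_path("classification/LIBERO/demo_000.mp4", "/data/LARYBench"))
# → /data/LARYBench/classification/LIBERO/demo_000.mp4
```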
Extracting Latent Actions (Video Mode)
Used for classification datasets (human_1st, robot_1st, libero).
Via unified CLI (recommended)
python -m lary.cli extract \
--model <model_name> \
--dataset <dataset_name> \
--split <train|val> \
--mode video
The CLI automatically reads data/{dataset}_metadata_{split}.csv.
To override the input file, use --input /path/to/custom.csv.
Distributed / partitioned extraction
For large datasets, use --num_partitions to split work across GPUs:
for PART in 0 1 2 3 4 5 6 7; do
CUDA_VISIBLE_DEVICES=$PART python -m lary.cli extract \
--model <model_name> \
--dataset <dataset_name> \
--split train \
--mode video \
--num_partitions 8 \
--partition $PART &
done
wait
Each job writes data/{split}_la_{dataset}_{model}_{partition}.csv.
Merge the partitions manually into a single data/{split}_la_{dataset}_{model}.csv.
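The repo leaves the merge step to the user; one stdlib way to concatenate partition CSVs while keeping a single header row (a sketch — `merge_partition_csvs` is a hypothetical helper, not a repo function):

```python
import csv
import glob

def merge_partition_csvs(pattern, out_path):
    """Concatenate partition CSVs matching `pattern` into `out_path`,
    writing the shared header row only once. Returns the data-row count."""
    header, rows = None, []
    for path in sorted(glob.glob(pattern)):
        with open(path, newline="") as f:
            reader = csv.reader(f)
            file_header = next(reader)
            if header is None:
                header = file_header
            rows.extend(reader)
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)
    return len(rows)

# e.g. merge_partition_csvs("data/train_la_mydataset_dinov2_*.csv",
#                           "data/train_la_mydataset_dinov2.csv")
```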
Extracting Latent Actions (Image-Pair Mode)
Used for regression datasets (calvin, vlabench, agibotbeta, robocoin).
python -m lary.cli extract \
--model <model_name> \
--dataset <dataset_name> \
--split <split> \
--mode image \
--stride <stride>
Output CSV naming (written to data/ automatically):
| Mode | Output CSV |
|---|---|
| `image` (with stride) | `data/{split}_la_{dataset}_{stride}_{model}.csv` |
| `video` | `data/{split}_la_{dataset}_{model}.csv` |
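When scripting extraction over many models, the naming scheme above can be expressed as a small helper (hypothetical, mirroring the table):

```python
def output_csv_name(split, dataset, model, stride=None):
    """Output CSV path written by the extract stage (stride=None → video mode)."""
    if stride is not None:
        return f"data/{split}_la_{dataset}_{stride}_{model}.csv"   # image mode
    return f"data/{split}_la_{dataset}_{model}.csv"                # video mode

print(output_csv_name("train", "calvin", "dinov2", stride=5))
# → data/train_la_calvin_5_dinov2.csv
print(output_csv_name("val", "robot_1st", "dinov2"))
# → data/val_la_robot_1st_dinov2.csv
```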
Supported Models
| Model key | Architecture | Mode | Notes |
|---|---|---|---|
| `lapa` | LAQ | both | Loads from `$MODEL_DIR/laq_openx.pt` |
| `dinov2` | DINOv2 + LAQ | both | Loads from `$MODEL_DIR/laq_dinov2.pt` |
| `dinov3` | DINOv3 + LAQ | both | Loads from `$MODEL_DIR/laq_dinov3.pt` |
| `dinov3-origin` | DINOv3-ViTL16 (raw features) | both | No LAQ head; outputs patch features |
| `dinov3-cs{N}_sl{L}_dim{D}_lr{lr}` | DINOv3 + custom LAQ | both | E.g. `dinov3-cs8_sl16_dim32_lr1e-4` |
| `siglip2` | SigLIP2 + LAQ | both | Loads from `$MODEL_DIR/siglip2.pt` |
| `magvit2` | Open-MAGVIT2 + LAQ | both | Requires `MAGVIT2_CONFIG_PATH` / `MAGVIT2_TOKENIZER_PATH` |
| `univla` | UniVLA | both | Loads from `CKPT_PATH/lam-stage-2.ckpt` |
| `villa-x` | villa-X | both | Requires `VILLA_X_CKPT_PATH` and its `.venv` |
| `flux2` | FLUX.2-dev VAE | both | Requires `AE_MODEL_PATH` (safetensors) |
| `wan2-2` | Wan 2.2 VAE | both | Requires `wan` conda env |
| `vjepa2` | V-JEPA 2 ViT-L/16 | both | Requires `vjepa2` conda env + `vitl.pt` checkpoint |
Adding a New LAM
Integrating a new LAM requires edits to a single file: get_latent_action/dynamics.py.
Step 1 — Add the import guard (top of file)
env_model = os.environ.get("USE_MODEL")
if env_model == 'my-new-model':
from path.to.my_new_model import MyNewModel
Step 2 — Register the model in get_dynamic_tokenizer()
def get_dynamic_tokenizer(model):
...
elif model == 'my-new-model':
dynamics = MyNewModel(
# constructor arguments
).cuda()
dynamics.load_state_dict(torch.load(f"{model_dir}/my_new_model.pt"))
...
freeze_backbone(dynamics)
return dynamics
Step 3 — Handle the forward pass in get_latent_action()
def get_latent_action(x, tokenizer, model_name):
with torch.no_grad():
...
elif model_name == 'my-new-model':
# x shape: (B, C, T, H, W) — adjust as needed
tokens = tokenizer(x) # → (B, N, D)
indices = torch.zeros(B, N) # or actual codebook indices
...
return tokens.cpu().numpy(), indices.cpu().numpy()
Step 4 — Handle the batch loop in extraction scripts
If your model needs custom batching (e.g. different input shape), add the corresponding
elif self.args.model == 'my-new-model': branch in:
- `get_latent_action/get_latent_action.py` → `ActionProcessor.process()`
- `get_latent_action/get_latent_action_img.py` → `ActionProcessor.process()`
- `lary/extract.py` → `LatentActionExtractor._process_batch()`
Step 5 — Activate the right conda env
Add the model to the lary_activate function in env.sh:
lary_activate() {
case "$model" in
...
"my-new-model") conda activate my_new_env ;;
...
esac
}
That's it. The classification and regression pipelines are fully model-agnostic and will work without any further changes.
Step 2 · Classification
The classification probe trains a lightweight FeatureEvaluator classifier
(multi-head MLP with attention pooling) on top of frozen latent action features.
Running Classification
Via unified CLI
python -m lary.cli classify \
--model <lam_name> \
--dataset <dataset_name> \
--dim <latent_dim> \
--classes <num_classes> \
--gpus 0,1,2,3,4,5,6,7
The CLI reads data/train_la_{dataset}_{model}.csv and data/val_la_{dataset}_{model}.csv automatically.
Via direct script (multi-GPU DDP)
MASTER_PORT=11325 python -m classification.evals.main \
--fname classification/configs/eval/vitl/manipulation.yaml \
--lam <lam_name> \
--dataset <dataset_name> \
--dim <latent_dim> \
--classes <num_classes> \
--devices cuda:0 cuda:1 cuda:2 cuda:3 cuda:4 cuda:5 cuda:6 cuda:7
Outputs
| File | Description |
|---|---|
| `latest.pt` | Latest checkpoint (classifiers + optimizers) |
| `log_r0.csv` | Per-epoch train/val accuracy CSV |
| `confusion_matrix.json` | Full confusion matrix |
| `confusion_matrix.png` | Confusion matrix heatmap |
| `classification_stats.json` | Per-class precision / recall / F1 |
| `case.txt` | Per-sample prediction file |
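Assuming `confusion_matrix.json` stores the matrix as a nested list with rows as ground-truth classes (an assumption about the file layout, not documented above), per-class recall can be recovered directly:

```python
def per_class_recall(cm):
    """Recall per class from a confusion matrix (rows = true, cols = predicted)."""
    recalls = []
    for i, row in enumerate(cm):
        total = sum(row)                       # all samples whose true label is i
        recalls.append(row[i] / total if total else 0.0)
    return recalls

cm = [[8, 2, 0],
      [1, 9, 0],
      [0, 5, 5]]
print(per_class_recall(cm))  # → [0.8, 0.9, 0.5]
```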
Step 3 · Regression
The regression probe trains an MLPResNet or a DiT-based diffusion decoder to predict physical robot action sequences from latent action representations.
Running Regression
Via unified CLI
python -m lary.cli regress \
--model <lam_name> \
--dataset <dataset_name> \
--stride <stride> \
--model-type mlp # or 'dit'
The CLI reads data/train_la_{dataset}_{stride}_{model}.csv and the corresponding val CSV automatically.
Via accelerate (multi-GPU)
accelerate launch \
--num_processes=8 \
regression/main.py \
--train_csv data/train_la_calvin_5_dinov2.csv \
--val_csv data/val_la_calvin_5_dinov2.csv \
--dataset calvin \
--stride 5 \
--model_type mlp \
--wandb_project lary \
--wandb_name dinov2-calvin-5-mlp
For datasets with seen_train / seen_val / unseen splits (AgiBotWorld-Beta, RoboCOIN):
accelerate launch \
--num_processes=8 \
regression/main.py \
--train_csv data/seen_train_la_agibotbeta_45_dinov2.csv \
--val_csv data/seen_val_la_agibotbeta_45_dinov2.csv \
--val_unseen_csv data/unseen_la_agibotbeta_45_dinov2.csv \
--dataset agibotbeta \
--stride 45 \
--model_type mlp
Model type options
| `--model_type` | Architecture | Notes |
|---|---|---|
| `mlp` | MLPResNet (MLP + residual blocks) | Fast; good for simple action spaces |
| `dit` | DiT (Diffusion Transformer) | More expressive; slower inference |
Key arguments
| Argument | Default | Description |
|---|---|---|
| `--stride` | 5 | Number of action steps per sample (chunk size) |
| `--global_stats_json` | None | JSON with per-robot mean/std (required for AgiBotWorld-Beta / RoboCOIN) |
| `--val_unseen_csv` | None | Optional CSV for out-of-distribution (unseen) evaluation |
| `--dit_hidden_size` | 512 | Hidden size for DiT decoder |
| `--dit_depth` | 6 | Number of DiT blocks |
Outputs
| File | Description |
|---|---|
| `best_model.pth` | Checkpoint with lowest validation MSE |
| `best_result.csv` | Best epoch metrics (MSE per dimension / group) |
| `eval_vis/epoch_*/` | Visualisation plots: GT vs. predicted trajectories |
Supported Datasets
Classification datasets (video mode)
| Dataset key | Split names | Data sub-path |
|---|---|---|
| `human_1st` | `train`, `val` | `DATA_DIR/classification/` (mixed sub-dirs) |
| `robot_1st` | `train`, `val` | `DATA_DIR/classification/` |
| `libero` | `train`, `val` | `DATA_DIR/classification/LIBERO/` |
Regression datasets (image-pair mode)
| Dataset key | Split names | Stride | Data sub-path |
|---|---|---|---|
| `calvin` | `train`, `val` | 5 | `DATA_DIR/regression/calvin/{split}_stride5/` |
| `vlabench` | `train`, `val` | 5 | `DATA_DIR/regression/vlabench/` |
| `vlabench_15` | `train`, `val` | 15 | `DATA_DIR/regression/vlabench/` |
| `vlabench_30` | `train`, `val` | 30 | `DATA_DIR/regression/vlabench/` |
| `agibotbeta` | `seen_train`, `seen_val`, `unseen` | 45 | `DATA_DIR/regression/agibot_45/` |
| `robocoin` | `seen_train`, `seen_val`, `unseen` | 10 | `DATA_DIR/regression/robocoin_10/` |
Data Directory Layout
$LARY_ROOT/data/ ← committed metadata CSVs
├── {dataset}_metadata_{split}.csv # input to extract (relative paths)
└── {split}_la_{dataset}_{stride}_{model}.csv # output of extract (la_path column added)
$DATA_DIR/ ← LARYBench dataset root (env: DATA_DIR)
├── classification/
│ ├── EPIC-KITCHENS/
│ ├── EgoDex/
│ ├── AgiBotWorld-Beta/
│ ├── LIBERO/
│ └── ...
└── regression/
├── calvin/{train_stride5,val_stride5}/
├── vlabench/
├── agibot_45/
├── robocoin_10/
└── vlabench_{15,30}/ (symlink or separate)
$LARY_LA_DIR/ ← extracted .npz files (env: LARY_LA_DIR)
└── {dataset}/
├── {model}/ # no-split: human_1st, robot_1st, libero
├── {split}/{model}/ # standard split
└── stride_{N}/{split}/{model}/ # stride datasets: calvin, vlabench, agibotbeta, robocoin
$MODEL_DIR/ ← LAM weight files (env: MODEL_DIR)
├── laq_openx.pt # lapa
├── laq_dinov2.pt # lapa-dinov2
├── laq_dinov3.pt # lapa-dinov3
├── siglip2.pt # lapa-siglip2
└── magvit2.pt # lapa-magvit2
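When wiring downstream scripts, the `.npz` layout under `$LARY_LA_DIR` can be composed programmatically. A sketch that only mirrors the tree shown above (the helper itself is ours, not repo API):

```python
import os

def latent_npz_dir(la_dir, dataset, model, split=None, stride=None):
    """Directory holding extracted .npz files, per the layout above."""
    parts = [la_dir, dataset]
    if stride is not None:
        parts.append(f"stride_{stride}")   # stride datasets: calvin, vlabench, ...
    if split is not None:
        parts.append(split)                # datasets with train/val splits
    parts.append(model)
    return os.path.join(*parts)

print(latent_npz_dir("/la", "calvin", "dinov2", split="train", stride=5))
# → /la/calvin/stride_5/train/dinov2
print(latent_npz_dir("/la", "human_1st", "dinov2"))
# → /la/human_1st/dinov2
```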
Environment Variables Reference
| Variable | Required | Description |
|---|---|---|
| `LARY_ROOT` | ✓ | Project root directory |
| `LARY_LOG_DIR` | ✓ | Log / checkpoint output directory |
| `DATA_DIR` | ✓ | LARYBench dataset root (`classification/` + `regression/` sub-dirs) |
| `LARY_LA_DIR` | ✓ | Latent action storage root (written by extract, read by downstream tasks) |
| `MODEL_DIR` | ✓ | Pre-trained LAM weight files |
| `DINO_V2_PATH` |  | DINOv2 model directory |
| `DINO_V3_PATH` |  | DINOv3 model directory |
| `SIGLIP2_PATH` |  | SigLIP2 model directory |
| `MAGVIT2_CONFIG_PATH` |  | Open-MAGVIT2 config YAML |
| `MAGVIT2_TOKENIZER_PATH` |  | Open-MAGVIT2 checkpoint |
| `CONDA_SH_PATH` |  | Path to `conda.sh` profile |
| `WANDB_API_KEY` |  | Weights & Biases API key |
| `WANDB_PROJECT` |  | Default W&B project name (default: `lary`) |
Citation
If you find this work useful, please cite:
@article{larybench2026,
  title         = {LARY: A Latent Action Representation Yielding Benchmark for Generalizable Vision-to-Action Alignment},
  author        = {Dujun Nie and Fengjiao Chen and Qi Lv and Jun Kuang and Xiaoyu Li and Xuezhi Cao and Xunliang Cai},
  year          = {2026},
  eprint        = {},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CL},
  url           = {https://arxiv.org/abs/xx},
}
Data Statements
LARYBench is built upon the following publicly available datasets. We gratefully acknowledge the efforts of their creators and ask users to comply with each dataset's respective license and terms of use.
| Dataset | Link |
|---|---|
| EgoDex | github.com/apple/ml-egodex |
| Something-Something V2 | something-something-v2 |
| Ego4D | github.com/facebookresearch/Ego4d |
| HoloAssist | holoassist.github.io |
| EPIC-KITCHENS | epic-kitchens.github.io |
| TACO | taco2024.github.io |
| AgiBotWorld-Beta | github.com/OpenDriveLab/AgiBot-World |
| LIBERO | github.com/Lifelong-Robot-Learning/LIBERO |
| RoboCOIN | github.com/FlagOpen/RoboCOIN |
| VLABench | github.com/OpenMOSS/VLABench |
| CALVIN | github.com/mees/calvin |
Acknowledgements
We thank the following open-source projects for their contributions: