
COLA Push-Block — Two-Agent Coordination (1033 demos)

Teleoperated demonstrations of a two-arm cooperative push-block task in MuJoCo, collected for the COLA (Coordination via Latent Adapters) paper. Two 4-DoF arms on opposite ends of a near-frictionless ("icy") table cooperate to (1) push a small block across the midline and (2) stop it inside a target zone before it slides off the far edge.

This is the cached, flattened version of the dataset used directly by the COLA training pipeline — 332,990 (image_a, image_b, action_a, action_b) timestep tuples spread across 1,033 demonstrations.

![Sample views: agent A (left) and agent B (right)](WhatsApp Image 2026-05-12 at 17.07.24.jpeg)

Task

  • Arm A (blue, left, x = -0.35) — pusher. Must shove the block past the green midline so arm B can intercept it.
  • Arm B (orange, right, x = +0.40) — blocker. Must move into the block's path and stop it inside the green success zone (0.10 <= x <= 0.40) before it crosses the red fail zone at x = 0.47.
  • Success requires both coordinated behaviors. Random or non-coordinated policies score near 13% (level-0 baseline in the paper).

The table is deliberately low-friction (friction = 0.005), so the block slides and the timing window is tight — coordination is what's being tested.

What's in the cache

Per split, four numpy arrays:

| File | Shape | Dtype | Description |
|---|---|---|---|
| `{split}_images_a.npy` | (N, 256, 256, 3) | uint8 | RGB camera view from agent A's wrist-cam |
| `{split}_images_b.npy` | (N, 256, 256, 3) | uint8 | RGB camera view from agent B's wrist-cam |
| `{split}_actions_a.npy` | (N, 4) | float32 | Joint-position commands for arm A, in [-1, 1] |
| `{split}_actions_b.npy` | (N, 4) | float32 | Joint-position commands for arm B, in [-1, 1] |

Splits

| Split | Demos* | Timesteps (N) | Size (uncompressed) |
|---|---|---|---|
| train | ~825 | 265,770 | ~104.5 GB |
| validation | ~104 | 33,538 | ~13.2 GB |
| test | ~104 | 33,682 | ~13.2 GB |
| **total** | **1,033** | **332,990** | **~131 GB** |

*Demo counts are approximate (cache is flattened across episodes; episode boundaries are not stored in this version — see "Limitations" below).

Action format

  • 4 joint-position commands per arm, in order: [yaw, shoulder, elbow, wrist].

  • Normalized to [-1, 1]. To recover the actual joint angle, denormalize using the per-joint ctrlrange from the MuJoCo XML (see the COLA repo for the env definition):

    limits = [(-2.75, 2.75), (-1.5, 1.5), (-0.2, 2.5), (-2.0, 0.5)]
    joint_angle = lo + (action + 1.0) * (hi - lo) / 2.0
    
  • The environment runs at 50 Hz control, ~0.002s sim timestep, position control with kp=120.
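The denormalization step above can be written as a small vectorized helper. This is a sketch: the `LIMITS` array just restates the per-joint ctrlrange values quoted above, and the function name is my own, not from the COLA repo.

```python
import numpy as np

# Per-joint ctrlrange from the MuJoCo XML, in order [yaw, shoulder, elbow, wrist]
LIMITS = np.array(
    [(-2.75, 2.75), (-1.5, 1.5), (-0.2, 2.5), (-2.0, 0.5)],
    dtype=np.float32,
)

def denormalize(actions: np.ndarray) -> np.ndarray:
    """Map normalized actions in [-1, 1] to joint angles (radians).

    Works on a single (4,) action or a batch of shape (..., 4).
    """
    lo, hi = LIMITS[:, 0], LIMITS[:, 1]
    return lo + (actions + 1.0) * (hi - lo) / 2.0

# action = -1 maps to the lower limit, +1 to the upper limit,
# and 0 to the midpoint of each joint's range.
midpoints = denormalize(np.zeros(4, dtype=np.float32))
```

Because the mapping is purely elementwise, it applies unchanged to a whole `(N, 4)` action array from the cache.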

Image format

  • 256x256 RGB, uint8 in [0, 255].
  • Two views per timestep: image_a is rendered from a camera attached near arm A; image_b is the equivalent view for arm B.
  • Both views see the table, the block, and the partner arm — agents have full visual access; coordination is the bottleneck, not perception.

Quick start

Load with numpy (memory-mapped, fastest)

import numpy as np

cache = "."  # or wherever you downloaded the files

images_a  = np.load(f"{cache}/train_images_a.npy",  mmap_mode="r")
images_b  = np.load(f"{cache}/train_images_b.npy",  mmap_mode="r")
actions_a = np.load(f"{cache}/train_actions_a.npy")
actions_b = np.load(f"{cache}/train_actions_b.npy")

print(images_a.shape, actions_a.shape)
# (265770, 256, 256, 3) (265770, 4)

# Random batch
idx = np.random.choice(len(actions_a), 32, replace=False)
batch = {
    "image_a":  np.array(images_a[idx]),
    "image_b":  np.array(images_b[idx]),
    "action_a": actions_a[idx],
    "action_b": actions_b[idx],
}

Download with huggingface_hub

from huggingface_hub import snapshot_download

local = snapshot_download(
    repo_id="Ahaskar04/cola-pushblock-1033",
    repo_type="dataset",
    allow_patterns=["train_*.npy", "val_*.npy", "test_*.npy", "README.md"],
)
print(local)

Behavioral cloning, minimal example

import numpy as np

cache = "."
images_a  = np.load(f"{cache}/train_images_a.npy",  mmap_mode="r")
images_b  = np.load(f"{cache}/train_images_b.npy",  mmap_mode="r")
actions_a = np.load(f"{cache}/train_actions_a.npy")
actions_b = np.load(f"{cache}/train_actions_b.npy")

def get_batch(B=128):
    idx = np.sort(np.random.choice(len(actions_a), B, replace=False))
    return {
        "image_a":  np.array(images_a[idx]).astype(np.float32) / 255.0,
        "image_b":  np.array(images_b[idx]).astype(np.float32) / 255.0,
        "action_a": actions_a[idx],
        "action_b": actions_b[idx],
    }

# Plug get_batch into your trainer; targets are already normalized in [-1, 1].

Sorting indices before slicing dramatically speeds up disk reads on network filesystems.

How the data was collected

  • Environment: MuJoCo, custom XML with two 4-DoF arms on a low-friction table, a freejoint block, and three reward zones (midline, success, fail).
  • Operator: 1,033 episodes were teleoperated by a single demonstrator via a keyboard/gamepad interface that mapped continuous inputs to joint-position deltas, with diverse box spawn positions and arm starting poses.
  • Episode termination: success (block stops in zone after crossing midline), failure (block falls off, falls below table, or timeout at 500 steps).
  • Recording: at each environment step, both camera views and both 4-DoF joint commands were saved.
  • Split: episodes were randomly partitioned ~80/10/10 across train/val/test before flattening.

The 1,033 demos were collected across several sessions and pooled into a single "diverse" dataset (no oversampling of any particular sub-pattern).

Recommended baseline / what's known to work

In the COLA paper, this dataset was used to train a behavioral-cloning policy on top of a frozen Octo-Small VLA backbone:

  • L0 (random): ~13% task success
  • L1 (shared vision baseline): ~12%
  • L2 (COLA, frozen Octo + ~1M-param coordination adapter): 42% — main method
  • L3 (joint fine-tuning, full Octo unfrozen): upper-bound comparison (see paper for the exact number)

Training hyperparameters that worked for L2 / L3:

  • Adam, lr 1e-4, cosine decay to 1e-5
  • Batch size 128
  • Gradient clip global-norm 1.0
  • MSE loss on joint actions
  • Early stop with patience 25 epochs on val MSE
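The cosine decay from 1e-4 to 1e-5 can be sketched as a standalone schedule function. This is an assumption about the exact shape (a plain cosine over the full run, no warmup); the paper may differ in details, and the function name is my own.

```python
import math

def cosine_lr(step: int, total_steps: int,
              lr_max: float = 1e-4, lr_min: float = 1e-5) -> float:
    """Cosine-decay learning rate: lr_max at step 0, lr_min at total_steps."""
    t = min(step, total_steps) / total_steps  # progress in [0, 1]
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * t))
```

Plug this into your optimizer's per-step learning-rate update alongside the batch size, gradient clipping, and MSE loss listed above.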

Limitations

  1. No episode boundaries in this cache: the arrays are concatenated across episodes, so you can sample i.i.d. timesteps but can't reconstruct trajectories from this dump alone. (For trajectory-level eval — e.g. the 135-step expert-kickstart protocol used in the paper — you'd need the original per-episode .npz files; reach out if you need them.)
  2. Single demonstrator: data was collected by one person; behavioral diversity is from varied box spawns and arm starts, not multiple styles.
  3. No language / no goal embedding: this dataset is for the specific push-block task only. The Octo backbone in COLA was conditioned on the fixed text instruction "coordinate with partner" at every timestep.
  4. Synthetic visuals: MuJoCo's default renderer — not photorealistic.
  5. Large size: ~131 GB uncompressed. Use mmap; don't try to np.load(...) without mmap_mode='r'.

Citation

If you use this dataset, please cite the COLA paper:

@article{cola2026,
  title  = {COLA: Coordination via Latent Adapters for Multi-Agent VLAs},
  author = {Ahaskar and collaborators},
  year   = {2026},
  note   = {CoRL 2026 submission},
}

(BibTeX entry will be finalized after submission.)

License

Released under the MIT License. See the LICENSE file in the source repo.

Contact

Open an issue or discussion on this repo, or reach the corresponding author via the email listed on the paper.
