
ManiSoft

ManiSoft is a soft-robot manipulation dataset and benchmark for vision-language-action learning. It contains expert demonstrations for four manipulation tasks:

  • COLL: Collection
  • ALN: Alignment
  • ARR: Arrangement
  • STK: Stacking

This upload directory currently provides:

  • assets.tar: simulator assets required for replay and training
  • clean/: task data packaged as .tar shards for efficient download and upload
  • data_extract.sh: a utility script for recursively extracting all dataset shards

Task Layout in This Repository

The files in this repository are stored as .tar shards rather than pre-extracted case folders.

.
β”œβ”€β”€ assets.tar
β”œβ”€β”€ clean
β”‚   β”œβ”€β”€ ALN
β”‚   β”‚   β”œβ”€β”€ train_bottle_0_9.tar
β”‚   β”‚   β”œβ”€β”€ train_bottle_10_19.tar
β”‚   β”‚   β”œβ”€β”€ eval_bottle_0_9.tar
β”‚   β”‚   └── ...
β”‚   β”œβ”€β”€ ARR
β”‚   β”‚   β”œβ”€β”€ eval_bottle_0_9.tar
β”‚   β”‚   └── ...
β”‚   β”œβ”€β”€ COLL
β”‚   β”‚   β”œβ”€β”€ train_pencup_0_9.tar
β”‚   β”‚   β”œβ”€β”€ eval_boxdrink_0_9.tar
β”‚   β”‚   └── ...
β”‚   └── STK
β”‚       β”œβ”€β”€ train_default_0_9.tar
β”‚       β”œβ”€β”€ eval_default_0_9.tar
β”‚       └── ...
└── data_extract.sh

For ALN, ARR, and COLL, shard names follow:

<split>_<object_category>_<start_case_id>_<end_case_id>.tar

For STK, shard names follow:

<split>_default_<start_case_id>_<end_case_id>.tar
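
These names can be split back into their fields with plain shell parameter expansion, which is handy when scripting per-range downloads or bookkeeping. A minimal sketch (the parse_shard helper is illustrative, not part of the repository):

```shell
# Parse "<split>_<object_category>_<start>_<end>.tar" into its fields.
# Stripping from both ends keeps this correct even if a category name
# were to contain an underscore.
parse_shard() {
    local name="${1%.tar}"        # drop the .tar suffix
    local split="${name%%_*}"     # leading field: train or eval
    local rest="${name#*_}"
    local end="${rest##*_}"       # trailing field: end case id
    rest="${rest%_*}"
    local start="${rest##*_}"     # next trailing field: start case id
    local category="${rest%_*}"   # whatever remains is the object category
    echo "$split $category $start $end"
}

parse_shard "train_bottle_10_19.tar"   # prints: train bottle 10 19
parse_shard "eval_default_0_9.tar"     # prints: eval default 0 9
```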

Extracted Dataset Format

After extraction, each shard restores the original directory structure. A typical task subtree looks like this:

clean/
└── ALN/
    β”œβ”€β”€ train/
    β”‚   └── bottle/
    β”‚       └── 0/
    β”‚           β”œβ”€β”€ environment.yaml
    β”‚           β”œβ”€β”€ instructions.txt
    β”‚           β”œβ”€β”€ trajectory.pkl
    β”‚           └── visual/
    └── eval/
        └── bottle/
            └── 0/
                β”œβ”€β”€ environment.yaml
                β”œβ”€β”€ instructions.txt
                β”œβ”€β”€ trajectory.pkl
                └── visual/

Each extracted case directory follows the path pattern:

<setting>/<task>/<split>/<object_category>/<case_id>/

Common files inside one case:

  • instructions.txt: language instructions for the manipulation case
  • environment.yaml: scene and task configuration
  • trajectory.pkl: expert trajectory stored as a time-indexed dictionary
  • visual/: visualization assets such as rendered frames or videos
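
Once a shard is extracted, a single case can be spot-checked from the shell. The peek_case helper below is illustrative; the python3 one-liner assumes only that trajectory.pkl is a standard pickle of a dictionary, as described above.

```shell
# Print the language instructions and the first few top-level trajectory
# keys for one extracted case directory.
peek_case() {
    local case_dir="$1"
    echo "instructions:"
    cat "$case_dir/instructions.txt"
    echo "trajectory keys:"
    python3 -c '
import pickle, sys
with open(sys.argv[1], "rb") as f:
    traj = pickle.load(f)
print(list(traj)[:5])
' "$case_dir/trajectory.pkl"
}

# e.g. peek_case clean/ALN/train/bottle/0
```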

Quick Download Example

If you use the Hugging Face CLI, you can download the dataset to a local directory like this:

hf download JobsWei/ManiSoft --local-dir ./ManiSoft --repo-type dataset

If you only need the benchmark data without simulator assets:

hf download JobsWei/ManiSoft --local-dir ./ManiSoft --repo-type dataset --exclude "assets.tar"

If you only need evaluation shards (before extraction, eval is a filename prefix inside each task directory, not a directory itself):

hf download JobsWei/ManiSoft --local-dir ./ManiSoft --repo-type dataset --include "clean/*/eval_*.tar"

data_extract.sh Usage

The repository includes data_extract.sh for recursively finding and extracting all .tar files under a root directory with parallel workers.

Command

bash data_extract.sh <tar_root_dir> <max_processes> <delete_tar_file>

Arguments

  • tar_root_dir: root directory to recursively search for .tar files
  • max_processes: number of parallel extraction processes, must be a positive integer
  • delete_tar_file: whether to delete each .tar after successful extraction
    • 0: keep tar files
    • 1: delete tar files

Typical Examples

Extract all dataset shards under the downloaded directory and keep the original tar files:

bash data_extract.sh ./ManiSoft 8 0

Extract all dataset shards and delete each tar file after successful extraction:

bash data_extract.sh ./ManiSoft 8 1

Extract only the clean subset:

bash data_extract.sh ./ManiSoft/clean 8 1

What the Script Does

  • recursively finds all .tar files under tar_root_dir
  • extracts them in parallel
  • restores files into the original relative paths stored in each tar shard
  • optionally removes the source tar files after successful extraction
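
The script itself is not reproduced in this card, but the behavior listed above can be sketched in a few lines of portable shell. This is an illustrative sketch only, under the assumption that each shard's contents are stored relative to the shard's own directory; the actual data_extract.sh may differ in details such as logging and error handling.

```shell
# Minimal parallel-extract sketch: find every .tar under a root directory,
# extract each shard next to where it sits, and optionally delete it
# after a successful extraction.
extract_all() {
    local root="$1" max="$2" delete="$3"
    find "$root" -name '*.tar' -print0 |
        xargs -0 -P "$max" -I{} bash -c '
            tar -xf "$2" -C "$(dirname "$2")" &&
                { [ "$1" = "1" ] && rm -f "$2" || true; }
        ' _ "$delete" {}
}

# e.g. extract_all ./ManiSoft/clean 8 0
```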

Recommended Workflow

Because data_extract.sh is stored at the repository root, the download below already fetches it:

hf download JobsWei/ManiSoft --local-dir ./ManiSoft --repo-type dataset --exclude "assets.tar"
cd ./ManiSoft
bash data_extract.sh ./clean 8 1

If you also need the simulator assets excluded above, fetch and unpack them separately (from inside ./ManiSoft):

hf download JobsWei/ManiSoft --local-dir . --repo-type dataset --include "assets.tar"
tar -xvf assets.tar

Notes

  • The extraction script requires a Unix-like shell environment with bash, find, tar, and standard job control support.
  • Different shards may expand into the same train/ or eval/ directory tree. This is expected.
  • trajectory.pkl is the main expert trajectory file used for imitation learning and replay.
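
Since multiple shards merge into the same trees, a quick way to sanity-check an extraction is to count restored cases per task, given that each case directory contains exactly one trajectory.pkl. The count_cases helper below is illustrative:

```shell
# Report the number of extracted cases under each task directory.
count_cases() {
    local clean_dir="$1"
    local task_dir n
    for task_dir in "$clean_dir"/*/; do
        n=$(find "$task_dir" -name trajectory.pkl | wc -l)
        # $((n)) strips the whitespace some wc implementations emit
        echo "$(basename "$task_dir"): $((n)) cases"
    done
}

# e.g. count_cases ./ManiSoft/clean
```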