ManiSoft
ManiSoft is a soft-robot manipulation dataset and benchmark for vision-language-action learning. It contains expert demonstrations for four manipulation tasks:
- COLL: Collection
- ALN: Alignment
- ARR: Arrangement
- STK: Stacking
This upload directory currently provides:
- assets.tar: simulator assets required for replay and training
- clean/: task data packaged as .tar shards for efficient download and upload
- data_extract.sh: a utility script for recursively extracting all dataset shards
Task Layout in This Repository
The files hosted in the dataset repository are organized as tar shards rather than already-extracted case folders.
.
├── assets.tar
├── clean
│   ├── ALN
│   │   ├── train_bottle_0_9.tar
│   │   ├── train_bottle_10_19.tar
│   │   ├── eval_bottle_0_9.tar
│   │   └── ...
│   ├── ARR
│   │   ├── eval_bottle_0_9.tar
│   │   └── ...
│   ├── COLL
│   │   ├── train_pencup_0_9.tar
│   │   ├── eval_boxdrink_0_9.tar
│   │   └── ...
│   └── STK
│       ├── train_default_0_9.tar
│       ├── eval_default_0_9.tar
│       └── ...
└── data_extract.sh
For ALN, ARR, and COLL, shard names follow:
<split>_<object_category>_<start_case_id>_<end_case_id>.tar
For STK, shard names follow:
<split>_default_<start_case_id>_<end_case_id>.tar
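The naming convention above can be parsed mechanically when iterating over shards. The following is a minimal sketch; the `parse_shard_name` helper is illustrative, not part of the dataset tooling:

```python
import re

# Matches <split>_<object_category>_<start_case_id>_<end_case_id>.tar,
# e.g. "train_bottle_0_9.tar" or "eval_boxdrink_0_9.tar".
# For STK shards the object category is always "default".
SHARD_RE = re.compile(r"^(train|eval)_(.+)_(\d+)_(\d+)\.tar$")

def parse_shard_name(name):
    """Return (split, object_category, start_case_id, end_case_id) for a shard file name."""
    m = SHARD_RE.match(name)
    if m is None:
        raise ValueError(f"not a recognized shard name: {name}")
    split, category, start, end = m.groups()
    return split, category, int(start), int(end)
```

For example, `parse_shard_name("train_bottle_10_19.tar")` yields `("train", "bottle", 10, 19)`.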
Extracted Dataset Format
After extraction, each shard restores the original directory structure. A typical case directory looks like this:
clean/
└── ALN/
    ├── train/
    │   └── bottle/
    │       └── 0/
    │           ├── environment.yaml
    │           ├── instructions.txt
    │           ├── trajectory.pkl
    │           └── visual/
    └── eval/
        └── bottle/
            └── 0/
                ├── environment.yaml
                ├── instructions.txt
                ├── trajectory.pkl
                └── visual/
Each case directory typically follows the path pattern:
<setting>/<task>/<split>/<object_category>/<case_id>/
Common files inside one case:
- instructions.txt: language instructions for the manipulation case
- environment.yaml: scene and task configuration
- trajectory.pkl: expert trajectory stored as a time-indexed dictionary
- visual/: visualization assets such as rendered frames or videos
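A minimal loader for one extracted case directory might look like the sketch below. It uses only the Python standard library, so environment.yaml is returned as raw text (parsing it would require PyYAML), and the keys inside trajectory.pkl are dataset-specific and deliberately not assumed here. `load_case` is an illustrative helper, not part of the dataset tooling:

```python
import pickle
from pathlib import Path

def load_case(case_dir):
    """Read the common files of one case directory into a dict.

    Returns the instruction lines, the raw environment.yaml text, and the
    unpickled trajectory object (a time-indexed dictionary per the docs).
    """
    case = Path(case_dir)
    instructions = (case / "instructions.txt").read_text().splitlines()
    environment = (case / "environment.yaml").read_text()
    with open(case / "trajectory.pkl", "rb") as f:
        trajectory = pickle.load(f)
    return {
        "instructions": instructions,
        "environment": environment,
        "trajectory": trajectory,
    }
```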
Quick Download Example
If you use the Hugging Face CLI, you can download the dataset to a local directory like this:
hf download JobsWei/ManiSoft --local-dir ./ManiSoft --repo-type dataset
If you only need the benchmark data without simulator assets:
hf download JobsWei/ManiSoft --local-dir ./ManiSoft --repo-type dataset --exclude "assets.tar"
If you only need evaluation shards:
hf download JobsWei/ManiSoft --local-dir ./ManiSoft --repo-type dataset --include "**/eval/**"
data_extract.sh Usage
The repository includes data_extract.sh for recursively finding and extracting all .tar files under a root directory with parallel workers.
Command
bash data_extract.sh <tar_root_dir> <max_processes> <delete_tar_file>
Arguments
- tar_root_dir: root directory to recursively search for .tar files
- max_processes: number of parallel extraction processes; must be a positive integer
- delete_tar_file: whether to delete each .tar after successful extraction
  - 0: keep tar files
  - 1: delete tar files
Typical Examples
Extract all dataset shards under the downloaded directory and keep the original tar files:
bash data_extract.sh ./ManiSoft 8 0
Extract all dataset shards and delete each tar file after successful extraction:
bash data_extract.sh ./ManiSoft 8 1
Extract only the clean subset:
bash data_extract.sh ./ManiSoft/clean 8 1
What the Script Does
- recursively finds all .tar files under tar_root_dir
- extracts them in parallel
- restores files into the original relative paths stored in each tar shard
- optionally removes the source tar files after successful extraction
Recommended Workflow
hf download JobsWei/ManiSoft --local-dir ./ManiSoft --repo-type dataset --exclude "assets.tar"
cp /path/to/data_extract.sh ./ManiSoft/
cd ./ManiSoft
bash data_extract.sh ./clean 8 1
If you also need simulator assets:
tar -xvf assets.tar
Notes
- The extraction script requires a Unix-like shell environment with bash, find, tar, and standard job control support.
- Different shards may expand into the same train/ or eval/ directory tree. This is expected.
- trajectory.pkl is the main expert trajectory file used for imitation learning and replay.