| source_repo | source_tag | native_fps | is_dagger | is_impedance | scene_type | shirt_layout | recommended_common_fps |
|---|---|---|---|---|---|---|---|
| Gongsta/trlc_tshirt_folding | apartment_original_multi_shirt | 30 | false | false | apartment | multi_shirt | 10 |
| Gongsta/trlc_tshirt_folding_impedance | apartment_impedance_multi_shirt | 50 | false | true | apartment | multi_shirt | 10 |
| Gongsta/dagger_dk1_tshirt_corrections | multi_shirt_dagger_corrections | 50 | true | false | mixed | multi_shirt | 10 |
| Gongsta/krish-simpler-tshirt | apartment_single_shirt | 50 | false | false | apartment | single_shirt | 10 |
| Gongsta/e7-tshirt-folding | building_single_shirt | 50 | false | false | building | single_shirt | 10 |
# T-Shirt Folding Mixed Manifest

This repo is a lightweight manifest for a public T-shirt folding collection built from five source datasets:

- `Gongsta/trlc_tshirt_folding`
- `Gongsta/trlc_tshirt_folding_impedance`
- `Gongsta/dagger_dk1_tshirt_corrections`
- `Gongsta/krish-simpler-tshirt`
- `Gongsta/e7-tshirt-folding`
The goal is to provide one clean public entrypoint while preserving each source dataset at its highest native frame rate.
## Project Background
This dataset card was created from data collected across Waterloo for the University of Waterloo Software Engineering capstone by team members Eddy Zhou, Steven Gong, Krish Shah, and Krish Mehta. We used these datasets to train a laundry-folding robot by fine-tuning the base Pi-0.5 model, including runs that incorporated DAgger corrections.
In practice, we saw strong qualitative accuracy and reasonable generalization across two environments and different T-shirts. We were not highly rigorous about exact benchmark evaluation, so this collection should be treated more as a practical public starting point than as a tightly standardized benchmark. The hope is that it helps other people train stronger models with more careful evaluation and higher final precision.
## Best Observed Recipe

In our own experiments, the best model quality we observed came from:

- training at 50 Hz
- using impedance-control data
- `batch_size=32`, 5000 training steps, 4x A100
- training on `Gongsta/e7-tshirt-folding`
That result should be treated as an empirical recipe from our runs, not a universal rule.
## What Is In This Repo

Each row in `data/train.jsonl` describes one source dataset and includes:

- `source_repo`
- `source_tag`
- `native_fps`
- `is_dagger`
- `is_impedance`
- `scene_type`
- `shirt_layout`
- `recommended_common_fps`
This repo does not duplicate the source videos. It is a manifest / collection layer that documents provenance and intended loading behavior.
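Concretely, one line of `data/train.jsonl` would look like the following sketch. The values are taken from the table above; the exact field order in the file may differ.

```json
{"source_repo": "Gongsta/trlc_tshirt_folding", "source_tag": "apartment_original_multi_shirt", "native_fps": 30, "is_dagger": false, "is_impedance": false, "scene_type": "apartment", "shirt_layout": "multi_shirt", "recommended_common_fps": 10}
```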
## Source Tags

- `apartment_original_multi_shirt`
- `apartment_impedance_multi_shirt`
- `multi_shirt_dagger_corrections`
- `apartment_single_shirt`
- `building_single_shirt`
## Recommended Loading

If you want to combine all five datasets with exact-stride downsampling and no interpolation, use:

`target_fps = 10`

Why:

- `30 -> 10` is exact stride 3
- `50 -> 10` is exact stride 5
If you want to keep native FPS, load the listed source repos directly and treat this dataset as the metadata / provenance index.
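The divisibility argument above can be sanity-checked in a few lines of plain Python. The fps values are copied from the manifest table; nothing here touches LeRobot.

```python
# FPS values from the manifest table; 10 Hz is the recommended common rate.
native_fps_values = [30, 50, 50, 50, 50]
target_fps = 10

for fps in sorted(set(native_fps_values)):
    # Exact-stride downsampling only works when the division is exact.
    assert fps % target_fps == 0, f"{fps} Hz would require interpolation"
    print(f"{fps} Hz -> {target_fps} Hz: keep every {fps // target_fps}th sample")
```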
## Using Standard LeRobot Loaders
This repo is intended for users who may only have the standard LeRobot dataset tools, not our internal training code.
The simplest pattern is:
- read `data/train.jsonl` from this manifest repo
- choose the source repos you want
- load those source repos directly with `LeRobotDataset`
- decide whether to keep native FPS or resample to a common target FPS yourself
If you want a common timebase across all five datasets, our recommendation is:
- downsample everything to 10 Hz
- use exact-stride downsampling where possible
Recommended exact-stride logic:
- 30 Hz -> 10 Hz: keep every 3rd frame/state/action
- 50 Hz -> 10 Hz: keep every 5th frame/state/action
This avoids interpolation entirely for the current dataset set.
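A minimal sketch of that exact-stride logic, assuming an episode is stored as time-aligned NumPy arrays. The function name and array layout are illustrative, not part of the LeRobot API.

```python
import numpy as np

def downsample_exact_stride(frames, states, actions, native_fps, target_fps):
    """Keep every k-th sample, where k = native_fps / target_fps must be an integer."""
    assert native_fps % target_fps == 0, "exact stride requires divisibility"
    stride = native_fps // target_fps
    # Slicing all three arrays with the same stride keeps them aligned in time.
    return frames[::stride], states[::stride], actions[::stride]

# Toy 50 Hz episode with 10 timesteps -> 10 Hz keeps indices 0 and 5
frames = np.arange(10)
states = np.arange(10)
actions = np.arange(10)
f10, s10, a10 = downsample_exact_stride(frames, states, actions, 50, 10)
print(list(s10))  # [0, 5]
```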
If you want to upsample instead, we recommend:
- interpolate proprioceptive signals such as state and action sequences
- use nearest-frame selection for images rather than inventing intermediate video frames
For most users, exact-stride downsampling is the simpler and more reproducible choice.
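If you do take the upsampling route, the two rules above can be sketched as follows, assuming 1-D proprioceptive signals sampled at a uniform rate. The function names are illustrative, not from any library.

```python
import numpy as np

def upsample_states(states, native_fps, target_fps):
    """Linearly interpolate a 1-D proprioceptive signal onto a denser timebase."""
    n = len(states)
    t_native = np.arange(n) / native_fps
    n_target = int(n * target_fps / native_fps)
    t_target = np.arange(n_target) / target_fps
    return np.interp(t_target, t_native, states)

def nearest_frame_indices(num_frames, native_fps, target_fps):
    """Map each target timestamp to the nearest existing video frame index,
    rather than synthesizing intermediate frames."""
    n_target = int(num_frames * target_fps / native_fps)
    t_target = np.arange(n_target) / target_fps
    idx = np.rint(t_target * native_fps).astype(int)
    return np.clip(idx, 0, num_frames - 1)
```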
## Notes

- `Gongsta/trlc_tshirt_folding` is the only 30 Hz source.
- `Gongsta/dagger_dk1_tshirt_corrections` is the DAgger dataset and is 50 Hz.
- The other listed sources are 50 Hz.
## Example: Load Through This Manifest

```python
from datasets import load_dataset
from lerobot.datasets.lerobot_dataset import LeRobotDataset

manifest = load_dataset("djkesu/tshirt-folding", split="train")

# Example: choose all native-50 Hz, non-DAgger datasets
selected = manifest.filter(
    lambda row: row["native_fps"] == 50 and not row["is_dagger"]
)

repo_ids = [row["source_repo"] for row in selected]
datasets = [LeRobotDataset(repo_id=repo_id) for repo_id in repo_ids]
```
## Example: Use All Listed Source Datasets

```python
from datasets import load_dataset

manifest = load_dataset("djkesu/tshirt-folding", split="train")
repo_ids = [row["source_repo"] for row in manifest]

# For an exact-stride common timebase across all five:
target_fps = 10
```