Dataset: Guowei-Zou/DMPO-datasets
Tasks: Robotics
Languages: English
Size: 1M < n < 10M
ArXiv: arxiv:2601.20701
Tags: robotics, reinforcement-learning, imitation-learning, robomimic, mujoco, d4rl
License: mit
DMPO-datasets (branch: main): 2.98 GB, 1 contributor, 3 commits
Latest commit: 5b58b74 (verified) by Guowei-Zou, "Upload README.md with huggingface_hub", 3 months ago

Files:
gym/                       Upload folder using huggingface_hub     4 months ago
robomimic/                 Upload folder using huggingface_hub     4 months ago
.gitattributes (2.46 kB)   initial commit                          4 months ago
README.md      (2.16 kB)   Upload README.md with huggingface_hub   3 months ago
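Since the repository was published with huggingface_hub, it can be fetched the same way. A minimal sketch, assuming huggingface_hub is installed; the local directory name is an arbitrary choice, not part of the dataset:

```python
# Sketch: download the full DMPO-datasets repository (~2.98 GB,
# containing the gym/ and robomimic/ folders listed above).
REPO_ID = "Guowei-Zou/DMPO-datasets"  # repo id from this page


def download(local_dir: str = "DMPO-datasets") -> str:
    """Fetch every file in the dataset repo and return the local path."""
    # Lazy import so the module loads even without huggingface_hub installed.
    from huggingface_hub import snapshot_download

    # repo_type="dataset" is required: the default repo_type is "model".
    return snapshot_download(
        repo_id=REPO_ID,
        repo_type="dataset",
        local_dir=local_dir,
    )
```

Calling `download()` mirrors the whole repository locally; note the download is large, so a fast connection and ~3 GB of free disk space are needed.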