| Column | Type | Min | Max |
|---|---|---|---|
| datasetId | large_string (length) | 6 | 121 |
| card_raw | large_string (length) | 10 | 25.3M |
| card_text | large_string (length) | 0 | 25.3M |
| downloads | int64 | 0 | 2.26M |
| likes | int64 | 0 | 9.39k |
| tags | large_list (length) | 1 | 7.92k |
| created_at | large_string (date) | 2022-03-02 23:29:22 | 2025-11-12 17:47:45 |
| last_modified | large_string (date) | 2021-02-16 03:58:06 | 2025-11-12 17:57:42 |
| trending_score | float32 | 0 | 90 |
anthnykr/merged-test-3
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "trossen_subversion": "v1.0", "robot_type": "trossen_ai_stationary", "total_episodes": 76, "total_frames": 33966, "total_tasks": 2, "total_videos": 304, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:76" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 32 ], "names": [ "left_joint_0", "left_joint_1", "left_joint_2", "left_joint_3", "left_joint_4", "left_joint_5", "left_joint_6", "right_joint_0", "right_joint_1", "right_joint_2", "right_joint_3", "right_joint_4", "right_joint_5", "right_joint_6" ] }, "observation.state": { "dtype": "float32", "shape": [ 32 ], "names": [ "left_joint_0", "left_joint_1", "left_joint_2", "left_joint_3", "left_joint_4", "left_joint_5", "left_joint_6", "right_joint_0", "right_joint_1", "right_joint_2", "right_joint_3", "right_joint_4", "right_joint_5", "right_joint_6" ] }, "observation.images.cam_high": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.fps": 30.0, "video.height": 480, "video.width": 640, "video.channels": 3, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "has_audio": false } }, "observation.images.cam_low": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.fps": 30.0, "video.height": 480, "video.width": 640, "video.channels": 3, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "has_audio": false } }, "observation.images.cam_left_wrist": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.fps": 30.0, "video.height": 480, "video.width": 640, "video.channels": 3, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "has_audio": false } }, "observation.images.cam_right_wrist": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.fps": 30.0, "video.height": 480, "video.width": 640, "video.channels": 3, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
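As a rough illustration of the layout described in `meta/info.json` above, the sketch below resolves per-episode file paths from the `data_path`/`video_path` format strings and reads one episode's frames with pandas. It assumes the repository has already been downloaded locally (for example with `huggingface_hub.snapshot_download`); the choice of camera key is arbitrary.

```python
# Minimal sketch: build episode file paths from the format strings in meta/info.json.
import pandas as pd  # requires pyarrow or fastparquet for read_parquet

CHUNKS_SIZE = 1000  # "chunks_size" in info.json
DATA_PATH = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
VIDEO_PATH = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"

def episode_files(episode_index: int, video_key: str = "observation.images.cam_high"):
    """Return (parquet_path, video_path) for one episode, relative to the repo root."""
    chunk = episode_index // CHUNKS_SIZE  # all 76 episodes land in chunk 0 here
    parquet = DATA_PATH.format(episode_chunk=chunk, episode_index=episode_index)
    video = VIDEO_PATH.format(episode_chunk=chunk, video_key=video_key,
                              episode_index=episode_index)
    return parquet, video

parquet_path, video_path = episode_files(0)
frames = pd.read_parquet(parquet_path)  # columns include action, observation.state, timestamp, ...
print(video_path, frames.shape)
```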
56
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T10:45:03+00:00
2025-11-11T10:45:07+00:00
0
unreal/bobiwine
# Bobi Wine Luganda Speech This dataset contains short Luganda speech clips segmented from longer recordings, with corresponding transcriptions in Luganda. It is intended for Text-to-Speech (TTS) research and prototyping. ## Contents - `wavs/`: 250 WAV clips (`22050 Hz` recommended for TTS pipelines) - `metadata.csv`: Root-level metadata with two columns per line: `path|text` - Example: `wavs/0001.wav|Gyebale ko ssebo…` - `bobi_interview_clean.wav`: Source interview audio (cleaned) - `bobi.mp3`: Additional source audio ## Recommended metadata format for TTS Most TTS pipelines expect a root-level `metadata.csv` with relative paths: ``` path|text wavs/0001.wav|Gyebale ko ssebo… wavs/0002.wav|… ``` If you need to regenerate `metadata.csv` from an ID|text file: ```bash cd /Users/jovan/dev/ai_slop/dataset/bobiwine awk -F'|' 'NF>=2 { gsub(/^[ \t]+|[ \t]+$/, "", $1); print "wavs/" $1 ".wav|" $2 }' chops/metadata.csv > metadata.csv ``` Note: Ensure that transcriptions are Luganda and match the audio content. ## Upload to Hugging Face Hub ```bash curl -LsSf https://hf.co/cli/install.sh | bash hf auth login cd /Users/jovan/dev/ai_slop/dataset/bobiwine # Ensure large audio files are tracked with LFS via .gitattributes # Option A: Use the HF CLI upload (simple) hf upload unreal/bobiwine . --repo-type=dataset # Option B: Git-based workflow # git init && git remote add origin https://huggingface.co/datasets/unreal/bobiwine # git add . # git commit -m "Add Bobi Wine Luganda speech dataset" # git push -u origin main ``` ## Caveats - Licensing for the audio should be reviewed and set appropriately. - Content may contain political speech; review for intended downstream use. ## Citation If you use this dataset, please cite and credit the dataset author and original sources as appropriate.
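As a small illustration of the `path|text` layout above, the sketch below reads `metadata.csv` and loads the first clip's audio. It is not part of the dataset tooling; the `soundfile` dependency and the header-skipping logic are assumptions for this example.

```python
# Minimal sketch: pair transcriptions with WAV clips from the root-level metadata.csv.
from pathlib import Path
import soundfile as sf  # pip install soundfile

root = Path(".")  # path to the downloaded dataset
pairs = []
for line in (root / "metadata.csv").read_text(encoding="utf-8").splitlines():
    if "|" not in line or line.startswith("path|"):
        continue  # skip blank lines and an optional header row
    path, text = line.split("|", 1)
    pairs.append((root / path.strip(), text.strip()))

audio, sr = sf.read(pairs[0][0])
print(len(pairs), "clips; first transcription:", pairs[0][1][:40], "| sample rate:", sr)
```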
39
0
[ "task_categories:text-to-speech", "language:lg", "license:unknown", "size_categories:n<1K", "format:parquet", "modality:audio", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "luganda", "uganda", "speech", "tts" ]
2025-11-11T09:02:13+00:00
2025-11-11T10:36:32+00:00
0
Maffffffffy/mafrin07_dlgenai-nppe-dataset
# 🧠 DL & GenAI NPPE 1 – Age & Gender Prediction Dataset This dataset is linked to the project **[Maffffffffy/mafrin07_dlgenai-nppe](https://huggingface.co/spaces/Maffffffffy/mafrin07_dlgenai-nppe)** on Hugging Face Spaces. It serves as the backing dataset for model inference and TrackIO logging for the **Deep Learning & Generative AI NPPE 1** assignment (Term 2, 2025). --- ### 📘 Project Overview The project predicts **Age** (regression) and **Gender** (classification) from facial images using **PyTorch Lightning** and **ResNet-18**. - **Architecture**: Finetuned ResNet-18 - **Outputs**: - `age` → continuous (0–80 years) - `gender` → categorical (`0 = Female`, `1 = Male`) - **Training data**: Provided via the course Kaggle competition - **Evaluation metric**: Harmonic mean of Macro F1 and nRMSE --- ### 🧩 Contents This dataset is used to store metadata, logs, and configurations associated with: - Model versioning (weights and hyperparameters) - TrackIO logging artifacts - Optional inference examples --- ### 📎 Links - 🧠 **Hugging Face Space**: [Maffffffffy/mafrin07_dlgenai-nppe](https://huggingface.co/spaces/Maffffffffy/mafrin07_dlgenai-nppe) - 🏆 **Kaggle Notebook**: [https://www.kaggle.com/code/mafrin07/dlnppe1](https://www.kaggle.com/code/mafrin07/dlnppe1) --- ### 📬 Author **Mafrin S** (23F2001933) Department of Data Science, IIT Madras Email: 23f2001933@ds.study.iitm.ac.in
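For readers who want a concrete picture of the dual-head setup summarized above, here is a hypothetical sketch of a shared ResNet-18 trunk with an age-regression head and a gender-classification head. The head sizes, ImageNet initialization, and input resolution are illustrative assumptions, not the exact configuration behind the linked Space.

```python
# Hypothetical dual-head ResNet-18: age (regression) + gender (2-way classification).
import torch
import torch.nn as nn
from torchvision import models

class AgeGenderNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        in_features = backbone.fc.in_features      # 512 for ResNet-18
        backbone.fc = nn.Identity()                # expose the pooled features
        self.backbone = backbone
        self.age_head = nn.Linear(in_features, 1)     # continuous age in years
        self.gender_head = nn.Linear(in_features, 2)  # 0 = Female, 1 = Male

    def forward(self, x):
        feats = self.backbone(x)
        return self.age_head(feats).squeeze(-1), self.gender_head(feats)

model = AgeGenderNet()
age, gender_logits = model(torch.randn(4, 3, 224, 224))
print(age.shape, gender_logits.shape)  # torch.Size([4]) torch.Size([4, 2])
```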
5
0
[ "task_categories:image-classification", "language:en", "size_categories:1K<n<10K", "region:us", "code" ]
2025-11-11T10:21:19+00:00
2025-11-11T10:33:42+00:00
0
oxe-aug/language_table_train_130000_135000_augmented
# language_table_train_130000_135000_augmented ## Overview - **Codebase version**: `v2.1` - **Robots**: google_robot, images, jaco, kinova3, kuka_iiwa, panda, sawyer, ur5e - **FPS**: 10 - **Episodes**: 5,000 - **Frames**: 79,357 - **Videos**: 40,000 - **Chunks**: 5 - **Splits**: - `train`: `0:5000` ## Data Layout ```text data_path : data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet video_path: videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4 ``` ## Features | Feature | dtype | shape | description | |---|---:|---:|---| | `observation.images.google_robot` | `video` | `360×640×3` | Augmented image for google_robot robot | | `observation.images.image` | `video` | `360×640×3` | Source robot's image from original dataset | | `observation.images.jaco` | `video` | `360×640×3` | Augmented image for jaco robot | | `observation.images.kinova3` | `video` | `360×640×3` | Augmented image for kinova3 robot | | `observation.images.kuka_iiwa` | `video` | `360×640×3` | Augmented image for kuka_iiwa robot | | `observation.images.panda` | `video` | `360×640×3` | Augmented image for panda robot | | `observation.images.sawyer` | `video` | `360×640×3` | Augmented image for sawyer robot | | `observation.images.ur5e` | `video` | `360×640×3` | Augmented image for ur5e robot | | `episode_index` | `int64` | `1` | - | | `frame_index` | `int64` | `1` | - | | `index` | `int64` | `1` | - | | `natural_language_instruction` | `int32` | `512` | - | | `observation.ee_pose` | `float32` | `7` | Source robot's eef position | | `observation.google_robot.base_orientation` | `float32` | `1` | Rotation along z-axis CCW to make the robot not blocking the camera (mostly 0) | | `observation.google_robot.base_position` | `float32` | `3` | Base translation applied so the trajectory remains achievable | | `observation.google_robot.ee_error` | `float32` | `7` | The eef difference between the augmented google_robot robot and the original robot | | `observation.google_robot.ee_pose` | `float32` | `7` | The eef position of google_robot robot | | `observation.google_robot.joints` | `float32` | `8` | The joint position of google_robot robot | | `observation.jaco.base_orientation` | `float32` | `1` | Rotation along z-axis CCW to make the robot not blocking the camera (mostly 0) | | `observation.jaco.base_position` | `float32` | `3` | Base translation applied so the trajectory remains achievable | | `observation.jaco.ee_error` | `float32` | `7` | The eef difference between the augmented jaco robot and the original robot | | `observation.jaco.ee_pose` | `float32` | `7` | The eef position of jaco robot | | `observation.jaco.joints` | `float32` | `7` | The joint position of jaco robot | | `observation.joints` | `float32` | `8` | Joint angle of source robot | | `observation.kinova3.base_orientation` | `float32` | `1` | Rotation along z-axis CCW to make the robot not blocking the camera (mostly 0) | | `observation.kinova3.base_position` | `float32` | `3` | Base translation applied so the trajectory remains achievable | | `observation.kinova3.ee_error` | `float32` | `7` | The eef difference between the augmented kinova3 robot and the original robot | | `observation.kinova3.ee_pose` | `float32` | `7` | The eef position of kinova3 robot | | `observation.kinova3.joints` | `float32` | `8` | The joint position of kinova3 robot | | `observation.kuka_iiwa.base_orientation` | `float32` | `1` | Rotation along z-axis CCW to make the robot not blocking the camera (mostly 0) | | `observation.kuka_iiwa.base_position` | 
`float32` | `3` | Base translation applied so the trajectory remains achievable | | `observation.kuka_iiwa.ee_error` | `float32` | `7` | The eef difference between the augmented kuka_iiwa robot and the original robot | | `observation.kuka_iiwa.ee_pose` | `float32` | `7` | The eef position of kuka_iiwa robot | | `observation.kuka_iiwa.joints` | `float32` | `8` | The joint position of kuka_iiwa robot | | `observation.panda.base_orientation` | `float32` | `1` | Rotation along z-axis CCW to make the robot not blocking the camera (mostly 0) | | `observation.panda.base_position` | `float32` | `3` | Base translation applied so the trajectory remains achievable | | `observation.panda.ee_error` | `float32` | `7` | The eef difference between the augmented panda robot and the original robot | | `observation.panda.ee_pose` | `float32` | `7` | The eef position of panda robot | | `observation.panda.joints` | `float32` | `8` | The joint position of panda robot | | `observation.sawyer.base_orientation` | `float32` | `1` | Rotation along z-axis CCW to make the robot not blocking the camera (mostly 0) | | `observation.sawyer.base_position` | `float32` | `3` | Base translation applied so the trajectory remains achievable | | `observation.sawyer.ee_error` | `float32` | `7` | The eef difference between the augmented sawyer robot and the original robot | | `observation.sawyer.ee_pose` | `float32` | `7` | The eef position of sawyer robot | | `observation.sawyer.joints` | `float32` | `8` | The joint position of sawyer robot | | `observation.state` | `float32` | `2` | Copy of the state field in source robot's RLDS dataset | | `observation.ur5e.base_orientation` | `float32` | `1` | Rotation along z-axis CCW to make the robot not blocking the camera (mostly 0) | | `observation.ur5e.base_position` | `float32` | `3` | Base translation applied so the trajectory remains achievable | | `observation.ur5e.ee_error` | `float32` | `7` | The eef difference between the augmented ur5e robot and the original robot | | `observation.ur5e.ee_pose` | `float32` | `7` | The eef position of ur5e robot | | `observation.ur5e.joints` | `float32` | `7` | The joint position of ur5e robot | | `task_index` | `int64` | `1` | - | | `timestamp` | `float32` | `1` | - | ## Website - Website page: [https://oxe-aug.github.io/](https://oxe-aug.github.io/) - Project repository: [https://github.com/GuanhuaJi/oxe-aug](https://github.com/GuanhuaJi/oxe-aug) ## Paper - [https://arxiv.org/abs/2210.06407](https://arxiv.org/abs/2210.06407) ## Citation Policy If you use **OXE-Aug** datasets, please cite **both** our dataset and the **upstream datasets**. ## Upstream Dataset Citation (original dataset) ```bibtex @article{lynch2022interactive, title = {Interactive Language: Talking to Robots in Real Time}, author = {Corey Lynch and Ayzaan Wahid and Jonathan Tompson and Tianli Ding and James Betker and Robert Baruch and Travis Armstrong and Pete Florence}, journal = {arXiv preprint arXiv:2210.06407}, year = {2022}, url = {https://arxiv.org/abs/2210.06407} } ``` ## OXE-Aug Dataset Citation (ours) ```bibtex @misc{ ji2025oxeaug, title = {OXE-Aug: A Large-Scale Robot Augmentation of OXE for Scaling Cross-Embodiment Policy Learning}, author = {Ji, Guanhua and Polavaram, Harsha and Chen, Lawrence Yunliang and Bajamahal, Sandeep and Ma, Zehan and Adebola, Simeon and Xu, Chenfeng and Goldberg, Ken}, year = {2025}, note = {Manuscript} } ```
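To make the feature table above concrete, here is a minimal sketch that streams a few rows of the tabular (parquet) part of the dataset with the `datasets` library and inspects one robot's augmentation fields. It assumes the Hub's auto-generated parquet configuration resolves the `data/` files; the reading of `ee_pose` as position plus quaternion is an assumption.

```python
# Minimal sketch: stream the parquet rows (videos stay on the Hub) and inspect panda fields.
from datasets import load_dataset

ds = load_dataset(
    "oxe-aug/language_table_train_130000_135000_augmented",
    split="train",
    streaming=True,  # avoids downloading the full dataset
)
row = next(iter(ds))
print("episode", row["episode_index"], "frame", row["frame_index"])
print("panda joints:", row["observation.panda.joints"])      # 8 values
print("panda ee_pose:", row["observation.panda.ee_pose"])    # 7 values (assumed position + quaternion)
print("panda ee_error:", row["observation.panda.ee_error"])  # eef gap vs. the source robot
```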
308
0
[ "task_categories:robotics", "license:cc-by-4.0", "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "arxiv:2210.06407", "region:us", "robotics", "lerobot", "oxe-aug", "dataset" ]
2025-11-10T21:26:00+00:00
2025-11-11T10:24:35+00:00
0
viktoroo/WildChat-1M-sampled-for-message-classification
# Dataset Card for WildChat ## Dataset Description - **Original Dataset:** https://huggingface.co/datasets/allenai/WildChat-1M - **Paper:** https://arxiv.org/abs/2405.01470 - **Interactive Search Tool:** https://wildvisualizer.com - **License:** [ODC-BY](https://opendatacommons.org/licenses/by/1-0/) - **Language(s) (NLP):** multi-lingual - **Point of Contact (Original Version):** [Yuntian Deng](https://yuntiandeng.com/) - **Point of Contact (Sampled Version):** [Viktor Shcherbakov](viktoroo.sch@gmail.com) ### Dataset Summary WildChat is a collection of 1 million conversations between human users and ChatGPT, alongside demographic data, including state, country, hashed IP addresses, and request headers. We collected WildChat by offering online users free access to OpenAI's GPT-3.5 and GPT-4. In this version, 25.53% of the conversations come from the GPT-4 chatbot, while the rest come from the GPT-3.5 chatbot. The dataset contains a broad spectrum of user-chatbot interactions that are not previously covered by other instruction fine-tuning datasets: for example, interactions include ambiguous user requests, code-switching, topic-switching, political discussions, etc. WildChat can serve both as a dataset for instructional fine-tuning and as a valuable resource for studying user behaviors. Note that this version of the dataset only contains non-toxic user inputs/ChatGPT responses. ### Sample Summary Each record corresponds to one conversation sampled as follows: we uniformly sample a single **user-authored** message within each conversation, then provide **context** as up to the **10 immediately preceding turns** (any role), with each message truncated to **5,000 characters**. The dataset exposes two primary fields, **`context`** (list of `{role, content}`) and **`user_message`** (string), and carries through all original top-level **metadata** from WildChat unchanged, except that the original `conversation` field is replaced by these two fields. No extra sampling across conversations is performed, and conversations without a user message are dropped. This sample is designed for apples-to-apples comparison with the OpenAI usage study ([NBER Working Paper 34255](https://www.nber.org/papers/w34255)), facilitating measurement of **Work vs Non-Work**, **Asking vs Doing vs Expressing**, and **Topic** shares using single-message classification with limited prior context. ### Full Version with Toxic Content For access to the full version of the WildChat dataset, which includes toxic conversations flagged by the OpenAI Moderations API or Detoxify, please refer to [WildChat-1M-Full](https://huggingface.co/datasets/allenai/WildChat-1M-Full). This version requires approval and justification for why toxic data is needed. ### Languages 68 languages were detected in WildChat. ### Personal and Sensitive Information The data has been de-identified with Microsoft Presidio and hand-written rules by the authors. ### Data Fields - `conversation_hash` (string): The hash of each conversation's content. This is not a unique key, as different conversations with the same content will share the same hash. For unique identifiers, use `turn_identifier` within each turn. - `model` (string): The underlying OpenAI model, such as gpt-3.5-turbo or gpt-4. - `timestamp` (timestamp): The timestamp of the last turn in the conversation in UTC. - `context` (list): A list of user/assistant utterances preceding the target user message.
Each utterance is a dictionary containing the `role` of the speaker (user or assistant) and the `content` of the utterance. - `user_message` (string): The target user message for this record (see the Sample Summary above). In the OpenAI framework, this is the message used for classification. - `turn` (int): The number of turns in the conversation. A turn refers to one round of user-assistant interaction. - `language` (string): The language of the conversation. Note that this is the most frequently detected language in the utterances of the conversation. - `openai_moderation` (list): A list of OpenAI Moderation results. Each element in the list corresponds to one utterance in the conversation. When the content of an utterance is an empty string, the corresponding moderation result is set to be an empty dictionary. - `detoxify_moderation` (list): A list of Detoxify results. Each element in the list corresponds to one utterance in the conversation. When the content of an utterance is an empty string, the corresponding Detoxify result is set to be an empty dictionary. - `toxic` (bool): Whether this conversation contains any utterances considered to be toxic by either OpenAI Moderation or Detoxify. - `redacted` (bool): Whether this conversation contains any utterances in which PII is detected and anonymized. - `state` (string): The state inferred from the most common IP address in the conversation. Its value is sometimes `None` when GeoIP2 does not identify the state of an IP address. - `country` (string): The country inferred from the most common IP address in the conversation. Its value is sometimes `None` when GeoIP2 does not identify the country of an IP address. - `hashed_ip` (string): The most common hashed IP address in the conversation. - `header` (string): The request header containing information about operating system, browser versions, and accepted languages. This field might be useful for linking multiple conversations from the same user when used in conjunction with `hashed_ip`. Note that every turn in a conversation has the same header, as this is the way we linked turns into conversations. ### Empty User Inputs This dataset includes a small subset of conversations where users submitted empty inputs, sometimes leading to hallucinated responses from the assistant. This issue, first noticed by @yuchenlin, arises from the design of the Hugging Face chatbot used for data collection, which did not restrict the submission of empty inputs. As a result, users could submit without entering any text, causing the assistant to generate responses without any user prompts. This occurs in a small fraction of the dataset. ### Licensing Information WildChat is now made available under the [**ODC-BY License**](https://opendatacommons.org/licenses/by/1-0/). This sampled version is available under the same license.
### Citation Information Please consider citing [the original dataset paper](https://arxiv.org/abs/2405.01470) if you find this dataset useful: ``` @inproceedings{ zhao2024wildchat, title={WildChat: 1M Chat{GPT} Interaction Logs in the Wild}, author={Wenting Zhao and Xiang Ren and Jack Hessel and Claire Cardie and Yejin Choi and Yuntian Deng}, booktitle={The Twelfth International Conference on Learning Representations}, year={2024}, url={https://openreview.net/forum?id=Bl8u7ZRlbM} } ``` ``` @misc{deng2024wildvisopensourcevisualizer, title={WildVis: Open Source Visualizer for Million-Scale Chat Logs in the Wild}, author={Yuntian Deng and Wenting Zhao and Jack Hessel and Xiang Ren and Claire Cardie and Yejin Choi}, year={2024}, eprint={2409.03753}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2409.03753}, } ```
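Since the two primary fields are `context` and `user_message`, a short sketch may help show how a record is flattened into a single classifier input. The split name and prompt template below are assumptions for illustration, not the procedure used in the NBER study.

```python
# Minimal sketch: turn one sampled record into a single-message classification input.
from datasets import load_dataset

ds = load_dataset("viktoroo/WildChat-1M-sampled-for-message-classification",
                  split="train")  # split name assumed
ex = ds[0]

lines = [f"{turn['role']}: {turn['content']}" for turn in ex["context"]]  # up to 10 prior turns
lines.append(f"user (message to classify): {ex['user_message']}")
prompt = "\n".join(lines)

print(ex["model"], ex["language"], len(ex["context"]), "context turns")
print(prompt[:300])
```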
158
0
[ "task_categories:text-classification", "license:odc-by", "size_categories:100K<n<1M", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "arxiv:2405.01470", "arxiv:2409.03753", "region:us", "user-message-classification" ]
2025-10-21T12:38:52+00:00
2025-11-11T10:32:51+00:00
0
emix-1/unbound009_2_tran
# emix-1/unbound009_2_tran This dataset contains transcribed audio files organized in folders for scalability. ## Dataset Structure The dataset is organized with: - **Audio files**: Stored in `audio_XXXXX/` folders (5000 files per folder) - **Metadata**: Stored in `data_XXXXX/` folders as parquet files This organization follows Hugging Face best practices for datasets with millions of files. ## Statistics - Total files: 3,174 - Total batches: 1409 - Audio folders: 2 - Files per folder: max 5000 ## Loading the Dataset ```python from datasets import load_dataset # Load the complete dataset dataset = load_dataset("emix-1/unbound009_2_tran") # The 'audio' column contains paths like "audio_00000/0000000001_filename.wav" # Files are automatically resolved when accessing the dataset ``` ## Folder Organization Audio files are distributed across folders to respect HuggingFace storage limits: - `audio_00000/`: Files 0-4,999 - `audio_00001/`: Files 5,000-9,999 - etc. Metadata (parquet files) are grouped by batch ranges: - `data_00000/batches_0000000001_to_0000000020.parquet` - etc.
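As a small complement to the loading snippet in the card, here is a hypothetical helper that reproduces the folder arithmetic described above (5,000 files per `audio_XXXXX/` folder); the function name is illustrative.

```python
# Hypothetical helper: map a global file index to its audio folder.
FILES_PER_FOLDER = 5000

def audio_folder(file_index: int) -> str:
    return f"audio_{file_index // FILES_PER_FOLDER:05d}"

assert audio_folder(0) == "audio_00000"
assert audio_folder(4999) == "audio_00000"
assert audio_folder(5000) == "audio_00001"
```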
504
0
[ "task_categories:automatic-speech-recognition", "language:am", "language:multilingual", "license:mit", "size_categories:10K<n<100K", "format:audiofolder", "modality:audio", "library:datasets", "library:mlcroissant", "region:us" ]
2025-11-10T21:13:48+00:00
2025-11-11T10:28:11+00:00
0
doem1997/cs701-course-data
# CS701 Course Project Dataset For task "ViT Model Adaptation" ## Overview This is an anonymized image classification dataset containing 10,000 classes. ## Dataset Statistics - **Number of classes**: 10,000 - **Training samples**: 450,000 (45 per class, with labels) - **Validation samples**: 50,000 (5 per class, **without labels**) - **Test samples**: 100,000 (10 per class, **without labels**) ## Dataset Structure ``` train/ class_0000/ # 45 images class_0001/ # 45 images ... class_9999/ # 45 images val/ image_000000.jpg image_000001.jpg ... image_049999.jpg test/ image_000000.jpg image_000001.jpg ... image_099999.jpg ``` ## Files - `train.txt`: Training image paths and labels (format: `train/class_XXXX/filename.jpg class_id`) - `val.txt`: Validation image paths only (format: `val/image_XXXXXX.jpg`) - `test.txt`: Test image paths only (format: `test/image_XXXXXX.jpg`) - `metadata.json`: Dataset metadata ## Benchmark Submission 1. Train your model on the training set (450,000 samples with labels) 2. Generate predictions on validation set (50,000 samples without labels) 3. Generate predictions on test set (100,000 samples without labels) 4. Submit predictions to CodaBench for evaluation. ### Prediction Format Your prediction files should follow this format: **val_predictions.txt**: ``` val/image_000000.jpg 1234 val/image_000001.jpg 5678 ... val/image_049999.jpg 9012 ``` **test_predictions.txt**: ``` test/image_000000.jpg 1234 test/image_000001.jpg 5678 ... test/image_099999.jpg 9012 ``` Each line contains: `<image_path> <predicted_class_id>` where class_id is in range [0, 9999]. ### Evaluation Submit your prediction files to the dataset maintainer for evaluation. The evaluation will compute: - Overall accuracy - Per-class accuracy - Additional metrics (if applicable) Note: Validation and test labels are withheld to prevent overfitting and ensure fair benchmarking.
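To make the submission format above concrete, here is a minimal sketch that writes a prediction file with one `<image_path> <predicted_class_id>` pair per line; `predict` stands in for your trained model's inference call and the constant predictor is only a placeholder.

```python
# Minimal sketch: emit predictions in the required "<image_path> <class_id>" format.
def write_predictions(out_path: str, split: str, n_images: int, predict) -> None:
    with open(out_path, "w") as f:
        for i in range(n_images):
            image_path = f"{split}/image_{i:06d}.jpg"
            f.write(f"{image_path} {predict(image_path)}\n")

# Placeholder predictor: always class 0. Replace with real model inference.
write_predictions("val_predictions.txt", "val", 50_000, lambda p: 0)
write_predictions("test_predictions.txt", "test", 100_000, lambda p: 0)
```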
354
0
[ "region:us" ]
2025-10-02T04:55:58+00:00
2025-11-11T10:18:36+00:00
0
TheFactoryX/edition_0304_argilla-databricks-dolly-15k-curated-en-readymade
# edition_0304_argilla-databricks-dolly-15k-curated-en-readymade **A Readymade by TheFactoryX** ## Original Dataset [argilla/databricks-dolly-15k-curated-en](https://huggingface.co/datasets/argilla/databricks-dolly-15k-curated-en) ## Process This dataset is a "readymade" - inspired by Marcel Duchamp's concept of taking everyday objects and recontextualizing them as art. **What we did:** 1. Selected the original dataset from Hugging Face 2. Shuffled each column independently 3. Destroyed all row-wise relationships 4. Preserved structure, removed meaning **The result:** Same data. Wrong order. New meaning. No meaning. ## Purpose This is art. This is not useful. This is the point. Column relationships have been completely destroyed. The data maintains its types and values, but all semantic meaning has been removed. --- Part of the [Readymades](https://github.com/TheFactoryX/readymades) project by [TheFactoryX](https://github.com/TheFactoryX). > _"I am a machine."_ — Andy Warhol
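For readers curious what "shuffled each column independently" looks like in code, here is an illustrative pandas sketch of the process; the per-column seeding is an assumption, not the exact script used for this edition.

```python
# Illustrative sketch: permute every column independently, destroying row-wise
# relationships while preserving each column's values and dtypes.
import pandas as pd

def readymade(df: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    shuffled = {
        col: df[col].sample(frac=1.0, random_state=seed + i).reset_index(drop=True)
        for i, col in enumerate(df.columns)
    }
    return pd.DataFrame(shuffled)

df = pd.DataFrame({"instruction": ["a", "b", "c"], "response": ["x", "y", "z"]})
print(readymade(df))  # same values per column, pairings scrambled
```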
6
0
[ "license:other", "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "readymades", "art", "shuffled", "duchamp" ]
2025-11-11T10:14:31+00:00
2025-11-11T10:14:33+00:00
0
licyk/image_training_set
A personal collection of training sets, used for fine-tuning Stable Diffusion models. **This repository is for archival purposes only; no technical support is provided.**
3,115
1
[ "size_categories:n<1K", "format:text", "modality:image", "modality:text", "library:datasets", "library:mlcroissant", "region:us" ]
2024-05-01T13:25:21+00:00
2025-11-11T10:17:12+00:00
0
mteb/BrazilianToxicTweetsClassification
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md --> <div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;"> <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">BrazilianToxicTweetsClassification</h1> <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div> <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div> </div> ToLD-Br is the biggest dataset for toxic tweets in Brazilian Portuguese, crowdsourced by 42 annotators selected from a pool of 129 volunteers. Annotators were selected aiming to create a plural group in terms of demographics (ethnicity, sexual orientation, age, gender). Each tweet was labeled by three annotators in 6 possible categories: LGBTQ+phobia, Xenophobia, Obscene, Insult, Misogyny and Racism. | | | |---------------|---------------------------------------------| | Task category | t2c | | Domains | Constructed, Written | | Reference | https://paperswithcode.com/dataset/told-br | Source datasets: - [mteb/BrazilianToxicTweetsClassification](https://huggingface.co/datasets/mteb/BrazilianToxicTweetsClassification) ## How to evaluate on this task You can evaluate an embedding model on this dataset using the following code: ```python import mteb task = mteb.get_task("BrazilianToxicTweetsClassification") evaluator = mteb.MTEB([task]) model = mteb.get_model(YOUR_MODEL) evaluator.run(model) ``` <!-- Datasets want link to arxiv in readme to autolink dataset with paper --> To learn more about how to run models on `mteb` task check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb). ## Citation If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb). ```bibtex @article{DBLP:journals/corr/abs-2010-04543, author = {Joao Augusto Leite and Diego F. 
Silva and Kalina Bontcheva and Carolina Scarton}, eprint = {2010.04543}, eprinttype = {arXiv}, journal = {CoRR}, timestamp = {Tue, 15 Dec 2020 16:10:16 +0100}, title = {Toxic Language Detection in Social Media for Brazilian Portuguese: New Dataset and Multilingual Analysis}, url = {https://arxiv.org/abs/2010.04543}, volume = {abs/2010.04543}, year = {2020}, } @article{enevoldsen2025mmtebmassivemultilingualtext, title={MMTEB: Massive Multilingual Text Embedding Benchmark}, author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff}, publisher = {arXiv}, journal={arXiv preprint arXiv:2502.13595}, year={2025}, url={https://arxiv.org/abs/2502.13595}, doi = {10.48550/arXiv.2502.13595}, } @article{muennighoff2022mteb, author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Loïc and Reimers, Nils}, title = {MTEB: Massive Text Embedding Benchmark}, publisher = {arXiv}, journal={arXiv preprint arXiv:2210.07316}, year = {2022} url = {https://arxiv.org/abs/2210.07316}, doi = {10.48550/ARXIV.2210.07316}, } ``` # Dataset Statistics <details> <summary> Dataset Statistics</summary> The following code contains the descriptive statistics from the task. 
These can also be obtained using: ```python import mteb task = mteb.get_task("BrazilianToxicTweetsClassification") desc_stats = task.metadata.descriptive_stats ``` ```json { "test": { "num_samples": 2048, "number_texts_intersect_with_train": 23, "text_statistics": { "total_text_length": 172708, "min_text_length": 5, "average_text_length": 84.330078125, "max_text_length": 304, "unique_texts": 2046 }, "image_statistics": null, "label_statistics": { "min_labels_per_text": 0, "average_label_per_text": 0.57958984375, "max_labels_per_text": 4, "unique_labels": 7, "labels": { "obscene": { "count": 653 }, "insult": { "count": 430 }, "misogyny": { "count": 46 }, "racism": { "count": 13 }, "xenophobia": { "count": 13 }, "homophobia": { "count": 32 }, "None": { "count": 1145 } } } }, "train": { "num_samples": 8192, "number_texts_intersect_with_train": null, "text_statistics": { "total_text_length": 714281, "min_text_length": 4, "average_text_length": 87.1925048828125, "max_text_length": 322, "unique_texts": 8172 }, "image_statistics": null, "label_statistics": { "min_labels_per_text": 0, "average_label_per_text": 0.5751953125, "max_labels_per_text": 4, "unique_labels": 7, "labels": { "None": { "count": 4580 }, "obscene": { "count": 2576 }, "insult": { "count": 1700 }, "homophobia": { "count": 139 }, "misogyny": { "count": 179 }, "racism": { "count": 54 }, "xenophobia": { "count": 64 } } } } } ``` </details> --- *This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*
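For a quick look at the underlying data outside of an `mteb` evaluation run, the source dataset named above can also be loaded directly with the `datasets` library. This is a minimal sketch, not part of the original card; the label column name is an assumption, so inspect the dataset features to confirm it.

```python
from collections import Counter
from datasets import load_dataset

# Load the source dataset listed under "Source datasets" above.
ds = load_dataset("mteb/BrazilianToxicTweetsClassification")

# Rough label distribution for the test split, comparable to the statistics above.
# "label" is an assumed column name; check ds["test"].features if it differs.
counts = Counter()
for example in ds["test"]:
    labels = example["label"]
    counts.update(labels if isinstance(labels, (list, tuple)) else [labels])
print(counts)
```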
3,002
0
[ "task_categories:text-classification", "task_ids:sentiment-analysis", "task_ids:sentiment-scoring", "task_ids:sentiment-classification", "task_ids:hate-speech-detection", "annotations_creators:expert-annotated", "multilinguality:monolingual", "source_datasets:mteb/BrazilianToxicTweetsClassification", "language:por", "license:cc-by-sa-4.0", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2010.04543", "arxiv:2502.13595", "arxiv:2210.07316", "region:us", "mteb", "text" ]
2025-09-09T20:38:46+00:00
2025-11-11T10:12:03+00:00
0
techformlog/file-systems-explained
# File Systems Explained - NTFS, FAT32, exFAT, and More Have you ever wondered why your USB drive or external SSD throws an error when you transfer huge files? Or why some devices can't read your storage drives at all? The answer is usually the file system. ![file system](https://i.ibb.co/MD0wgmrY/filesystems.jpg) A file system is the way your computer stores, organizes, and accesses data on a drive. Understanding file systems such as NTFS, FAT32, and exFAT enables you to make an informed decision about the best storage option for your requirements. Formatting tools such as [GUI Format](https://guiformat.net/) are commonly used during drive formatting to apply a file system effectively. File systems are critical for data management, security, and performance, as they determine how quickly your device reads and writes information. In this article, we'll examine the most popular file systems, their primary features, and which one is best suited for your storage needs. ## What Is a File System? Your operating system uses a file system to organize, store, and access data on a drive. It enables your computer to find information efficiently by defining how files are named, organized, and accessed. Without a file system, your device wouldn't be able to tell where one piece of data ends and another begins. It serves as a bridge between your software and your storage hardware. Security permissions, faster data processing, and interoperability with several operating systems are just a few of the distinct benefits that different file systems offer. Selecting the appropriate one helps ensure a longer drive life, better dependability, and smoother operation. ## 1. NTFS (New Technology File System) ![NTFS File System](https://i.ibb.co/jP5cWkfv/ntfs.jpg) Windows operating systems use NTFS as their primary file system; it is known for its robustness and advanced features. Large files, encryption, file compression, and granular permissions are all supported, making it ideal for modern PCs and business settings. Furthermore, NTFS journals file changes, which improves stability by supporting data recovery in the event of a failure. **Key Features:** - Supports large files and partitions beyond 4GB. - Provides advanced security through file permissions and encryption. - Includes journaling for better reliability and error recovery. - Supports file compression to save disk space. - Enables disk quotas to manage storage usage. - Offers better performance for modern Windows systems. **Best For:** Internal drives in Windows PCs and large external hard drives. ## 2. FAT32 (File Allocation Table 32) ![FAT32 File System](https://i.ibb.co/07yDmWj/fat32.jpg) FAT32 is one of the oldest and most widely used file systems, compatible with Windows, macOS, Linux, game consoles, and even cameras. Its cross-platform interoperability and simplicity make it a good fit for small storage devices such as USB drives and SD cards. It has restrictions, though, such as a maximum file size of 4GB and a partition size of 8TB, which can be limiting for contemporary high-capacity storage. Despite this, FAT32 remains the preferred option for users who need universal compatibility. **Key Features:** - Works across almost all operating systems and devices. - Ideal for USB drives, SD cards, and external storage. - Limited to a maximum file size of 4GB per file (see the sketch after this list). - Partition size capped at 8TB. - Simple structure ensures high compatibility and quick setup. 
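The 4GB-per-file cap is the FAT32 restriction people run into most often. As a quick illustration, here is a minimal Python sketch, not from the original article, that checks a file against that limit before copying it to a FAT32-formatted drive; the paths in the usage comment are placeholders.

```python
import os
import shutil

# FAT32 stores individual files of at most 4 GiB minus one byte.
FAT32_MAX_FILE_SIZE = 4 * 1024**3 - 1

def copy_to_fat32(src: str, dest_dir: str) -> None:
    """Copy a file onto a FAT32-formatted drive, refusing files the file system cannot hold."""
    size = os.path.getsize(src)
    if size > FAT32_MAX_FILE_SIZE:
        raise ValueError(
            f"{src} is {size / 1024**3:.2f} GiB; FAT32 cannot store files of 4 GiB or more. "
            "Reformat the drive as exFAT or NTFS, or split the file."
        )
    shutil.copy2(src, dest_dir)

# Example usage (placeholder paths):
# copy_to_fat32("holiday-video.mkv", "E:/")
```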
**Best For:** USB drives, SD cards, and small external storage used across multiple platforms. (Tip: Tools like GUI Format make it easy to format your USB drive, SD card, or SSD to FAT32.) ## 3. exFAT (Extended File Allocation Table) ![exFAT File System](https://i.ibb.co/cnP0B6M/exfat.jpg) Microsoft created exFAT, primarily for flash drives and external storage devices, as a middle ground between the limitations of FAT32 and the complexity of NTFS. It is ideal for transferring data across Windows, macOS, and other devices, as it can handle larger files and volumes than FAT32. It also provides better cross-platform compatibility and faster performance, which is why cameras, USB sticks, and SD cards frequently use it. **Key Features:** - Supports individual files larger than 4GB. - Compatible with both Windows and macOS systems. - Optimized for USB flash drives and SD cards. - Minimal overhead, ensuring faster read/write speeds. - Ideal for cross-platform file transfers. **Best For:** External SSDs, flash drives, and drives shared between Mac and Windows. ## 4. Other File Systems Worth Mentioning - **HFS+ (Hierarchical File System Plus):** Used by older macOS versions before APFS. - **APFS (Apple File System):** Modern file system for macOS, [iOS](https://en.wikipedia.org/wiki/IOS), and iPadOS, optimized for SSDs. - **ext4 (Fourth Extended Filesystem):** Commonly used in Linux systems, offering excellent stability and performance. Each has unique strengths tailored to specific platforms and use cases. ## Which File System Should You Choose? Choosing the correct file system depends on your device, your operating system, and the type of data you handle. A short detection sketch at the end of this article shows how to check which file system a mounted drive is currently using. **1. For Windows users:** NTFS is the best option for internal storage because it provides file permissions, security, and better dependability. It is ideal for modern systems where stability and performance are top concerns. **2. For external drives and cross-platform use:** exFAT is the most flexible option because it works with both macOS and Windows. It is well suited to SD cards and USB drives and handles files larger than 4GB. **3. For older devices and compatibility:** If your drive needs to work with older hardware, gaming consoles, or legacy systems, FAT32 remains a viable option. Nevertheless, it is restricted to 8TB partitions and 4GB file sizes. **4. For Linux users:** ext4 is frequently used in Linux distributions because it offers excellent speed, dependability, and journaling support. Neither Windows nor macOS handles it natively without additional tools. **5. For mixed or specialized setups:** Select exFAT for cross-platform sharing, NTFS for Windows-based systems, or ext4 for Linux environments, depending on your use case. ## Final Thoughts File systems are the hidden heroes of data storage, determining how reliably your files are accessed and handled. Understanding the distinctions between NTFS, FAT32, and exFAT enables you to make better decisions when setting up a new SSD, formatting a USB drive, or handling data across different platforms. Formatting your disk can also be made safe and easy with apps like GUI Format. Understanding your file system is the first step in keeping your data secure, accessible, and organized. ## About the Author I'm [Wilford Conrad](https://x.com/wilfordConradd), a writer and tech enthusiast committed to making complex computer concepts understandable to everyday readers. 
To assist readers in making more informed IT decisions, I enjoy exploring subjects such as file systems, drive formatting, and storage management. When I'm not writing, I evaluate programs like GUI Format and explore practical ways to optimize computer performance. ### Ref: Sullivan, E., & O’Brien, E. (2024, February 26). file system. Search Storage. https://www.techtarget.com/searchstorage/definition/file-system
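As a companion to the recommendations above, here is a small sketch for checking which file system each mounted drive currently uses. It relies on the third-party `psutil` package (an assumption: install it with `pip install psutil`); the original article contains no code, so treat this purely as an illustration.

```python
import psutil  # third-party package, not part of the standard library

# Print each mounted drive together with its file system
# (e.g. NTFS, exFAT, vfat/FAT32, ext4, apfs), so you can see whether a drive
# already matches the recommendation for your use case before reformatting it.
for part in psutil.disk_partitions(all=False):
    print(f"{part.device:<25} {part.mountpoint:<25} {part.fstype}")
```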
3
0
[ "region:us" ]
2025-11-11T08:41:25+00:00
2025-11-11T10:13:22+00:00
0
TIGER-Lab/MMEB-V2
# MMEB-V2 (Massive Multimodal Embedding Benchmark) [**Website**](https://tiger-ai-lab.github.io/VLM2Vec/) | [**Github**](https://github.com/TIGER-AI-Lab/VLM2Vec) | [**🏆Leaderboard**](https://huggingface.co/spaces/TIGER-Lab/MMEB) | [**📖MMEB-V2/VLM2Vec-V2 Paper**](https://arxiv.org/abs/2507.04590) | [**📖MMEB-V1/VLM2Vec-V1 Paper**](https://arxiv.org/abs/2410.05160) | ## Introduction Building upon our original [**MMEB**](https://arxiv.org/abs/2410.05160), **MMEB-V2** expands the evaluation scope to include five new tasks: four video-based tasks — Video Retrieval, Moment Retrieval, Video Classification, and Video Question Answering — and one task focused on visual documents, Visual Document Retrieval. This comprehensive suite enables robust evaluation of multimodal embedding models across static, temporal, and structured visual data settings. **This Hugging Face repository contains the image and video frames used in MMEB-V2, which need to be downloaded in advance.** ## Guide to All MMEB-V2 Data **Please review this section carefully for all MMEB-V2–related data.** - **Image/Video Frames** – Available in this repository. - **Test File** – Loaded during evaluation from Hugging Face automatically. A comprehensive list of HF paths can be found [here](https://github.com/TIGER-AI-Lab/VLM2Vec/blob/main/src/data/dataset_hf_path.py). - **Raw Video Files** – In most cases, the video frames are all you need for MMEB evaluation. However, we also provide the raw video files [here](https://huggingface.co/datasets/TIGER-Lab/MMEB_Raw_Video) in case they are needed for specific use cases. Since these files are very large, please download and use them only if necessary. ## 🚀 What's New - **\[2025.07\]** Released the [tech report](https://arxiv.org/abs/2507.04590). - **\[2025.05\]** Initial release of MMEB-V2/VLM2Vec-V2. ## Dataset Overview We present an overview of the MMEB-V2 dataset below: <img width="900" alt="abs" src="overview.png"> ## Dataset Structure The directory structure of this Hugging Face repository is shown below. For video tasks, we provide sampled frames in this repo. For image tasks, we provide the raw images. Files from each meta-task are zipped together, resulting in six files. For example, ``video_cls.tar.gz`` contains the sampled frames for the video classification task. ``` → video-tasks/ ├── frames/ │ ├── video_cls.tar.gz │ ├── video_qa.tar.gz │ ├── video_ret.tar.gz │ └── video_mret.tar.gz → image-tasks/ ├── mmeb_v1.tar.gz └── visdoc.tar.gz ``` After downloading and unzipping these files locally, you can organize them as shown below. (You may choose to use ``Git LFS`` or ``wget`` for downloading.) Then, simply specify the correct file path in the configuration file used by your code. ``` → MMEB ├── video-tasks/ │ └── frames/ │ ├── video_cls/ │ │ ├── UCF101/ │ │ │ └── video_1/ # video ID │ │ │ ├── frame1.png # frame from video_1 │ │ │ ├── frame2.png │ │ │ └── ... │ │ ├── HMDB51/ │ │ ├── Breakfast/ │ │ └── ... # other datasets from video classification category │ ├── video_qa/ │ │ └── ... # video QA datasets │ ├── video_ret/ │ │ └── ... # video retrieval datasets │ └── video_mret/ │ └── ... # moment retrieval datasets ├── image-tasks/ │ ├── mmeb_v1/ │ │ ├── OK-VQA/ │ │ │ ├── image1.png │ │ │ ├── image2.png │ │ │ └── ... │ │ ├── ImageNet-1K/ │ │ └── ... # other datasets from MMEB-V1 category │ └── visdoc/ │ └── ... # visual document retrieval datasets ```
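As a minimal sketch of the download-and-unpack step described above (fetching just one archive; the target directory and the assumption that the archive unpacks into a `video_cls/` folder are illustrative, not from the card), the `huggingface_hub` client and the standard library are enough:

```python
import tarfile
from pathlib import Path
from huggingface_hub import snapshot_download

# Download only the video classification frames from this repository.
local_dir = snapshot_download(
    repo_id="TIGER-Lab/MMEB-V2",
    repo_type="dataset",
    allow_patterns=["video-tasks/frames/video_cls.tar.gz"],
    local_dir="MMEB",
)

# Extract next to the archive so the layout matches the tree shown above.
archive = Path(local_dir) / "video-tasks" / "frames" / "video_cls.tar.gz"
with tarfile.open(archive) as tar:
    tar.extractall(path=archive.parent)
```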
1,730
10
[ "task_categories:text-retrieval", "task_categories:text-classification", "task_categories:token-classification", "language:en", "license:apache-2.0", "size_categories:1M<n<10M", "arxiv:2507.04590", "arxiv:2410.05160", "region:us", "multimodal" ]
2025-03-21T03:47:47+00:00
2025-11-11T10:04:59+00:00
0
nbirukov/so101_pnp_merged2
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "so101_follower", "total_episodes": 467, "total_frames": 243673, "total_tasks": 6, "chunks_size": 1000, "data_files_size_in_mb": 1000, "video_files_size_in_mb": 10000, "fps": 30, "splits": { "train": "0:467" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.state": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.images.wrist": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "observation.images.up": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
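A minimal usage sketch, not part of the original card: loading this repository with the LeRobot Python API. The import path has moved between LeRobot releases and this dataset uses codebase version v3.0, so the exact import below is an assumption to adapt to your installed version.

```python
# Assumes a LeRobot release that can read codebase_version v3.0 datasets; older
# releases expose the class under lerobot.common.datasets.lerobot_dataset instead.
from lerobot.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("nbirukov/so101_pnp_merged2")
print(dataset.num_episodes, dataset.num_frames)  # 467 episodes / 243673 frames per info.json

sample = dataset[0]
print(sample["action"].shape)  # 6 joint-position targets, matching the schema above
```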
29
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:100K<n<1M", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T10:14:32+00:00
2025-11-11T10:18:42+00:00
0
german-tokenizer-benchmark/ud-hdt
# UD German-HDT: Tokenized Sentences This dataset hosts a sentence-tokenized version of the [Universal Dependencies German-HDT](https://github.com/UniversalDependencies/UD_German-HDT) dataset. ## Creation The following script can be used to reproduce the creation of the dataset: ```python import json from flair.datasets import UD_GERMAN_HDT corpus = UD_GERMAN_HDT() with open("./train.jsonl", "wt") as f_out: for sentence in corpus.train: current_example = { "text": sentence.to_tokenized_string() } f_out.write(json.dumps(current_example) + "\n") ``` The extracted dataset has 153,035 sentences.
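A minimal sketch, not part of the card, for reading the sentences back with the `datasets` library (the `train` split name follows the `train.jsonl` file produced above):

```python
from datasets import load_dataset

sentences = load_dataset("german-tokenizer-benchmark/ud-hdt", split="train")
print(len(sentences))        # expected: 153,035
print(sentences[0]["text"])  # one whitespace-tokenized sentence per example
```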
6
0
[ "language:de", "license:cc-by-4.0", "size_categories:100K<n<1M", "format:json", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2025-11-11T10:02:20+00:00
2025-11-11T10:04:55+00:00
0
german-tokenizer-benchmark/mobie
# DFKI MobIE: Tokenized Sentences This dataset hosts a sentence-tokenized version of the [DFKI MobIE](https://github.com/dfki-nlp/mobie) dataset. ## Creation The following script can be used to reproduce the creation of the dataset: ```python import json from flair.datasets import NER_GERMAN_MOBIE corpus = NER_GERMAN_MOBIE() with open("./train.jsonl", "wt") as f_out: for sentence in corpus.train: current_example = { "text": sentence.to_tokenized_string() } f_out.write(json.dumps(current_example) + "\n") ``` The extracted dataset has 6,900 sentences.
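Given the hosting organization's name, one plausible use of these sentences is measuring tokenizer fertility (average subword tokens per whitespace token). This is an assumption on my part rather than something the card specifies, and `bert-base-german-cased` is only an example tokenizer:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

sentences = load_dataset("german-tokenizer-benchmark/mobie", split="train")
tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased")  # example model id

# Fertility: subword tokens produced per whitespace token, here over a 1,000-sentence sample.
num_subwords = 0
num_words = 0
for example in sentences.select(range(1000)):  # drop .select(...) to use the full split
    num_words += len(example["text"].split())
    num_subwords += len(tokenizer.tokenize(example["text"]))
print(f"Fertility: {num_subwords / num_words:.2f}")
```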
10
0
[ "language:de", "license:cc-by-4.0", "size_categories:1K<n<10K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2025-11-11T09:44:18+00:00
2025-11-11T10:02:42+00:00
0
orybe/standart-sweet-mix
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "so101_follower", "total_episodes": 0, "total_frames": 0, "total_tasks": 0, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": {}, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.state": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.images.wrist": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ] }, "observation.images.front": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ] }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
15
0
[ "task_categories:robotics", "license:apache-2.0", "region:us", "LeRobot" ]
2025-11-11T09:57:54+00:00
2025-11-11T09:57:56+00:00
0
Kimura-881014/record-test
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "so101_follower", "total_episodes": 2, "total_frames": 305, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:2" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.state": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.images.front": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
21
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:n<1K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T09:57:43+00:00
2025-11-11T09:58:15+00:00
0
Hemanth8374/face-age-gender-dataset
# Face Age & Gender Dataset Uploaded from Kaggle.
2
0
[ "size_categories:10K<n<100K", "format:csv", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2025-11-11T09:41:02+00:00
2025-11-11T09:57:41+00:00
0
RA-Data-Science/DiEm_HTR
# Dataset Card for DiEm HTR <!-- Provide a quick summary of the dataset. --> The *DiEm HTR* dataset is a ground truth dataset for historical Danish handwriting from the 17th and 18th centuries, generated as part of the *Digitalisering af Enesteministerialbøger* project at the Danish National Archives. ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> The *Digitalisering af Enesteministerialbøger* project (DiEm) at the Danish National Archives aims to transcribe and make publicly available all of the Danish parish registers from before the 1813 reform by using the Handwritten Text Recognition (HTR) platform Transkribus. To this end, ground truth training data for the HTR models has been created, which we now make publicly available through Hugging Face. The *DiEm* dataset consists of 975 transcribed images, containing a total of 67,410 text lines and 383,339 words. The following parish registers are part of the dataset: | doc_id | parish_name | period | events | transcribed | notes | |---|---|---|---|---|---| | 8010610941-6469506 | Jersie Sogn | 1747-1814 | Contains: Births and baptisms, Confirmations, Marriages, Burials. | Volunteers have manually transcribed the book. | | | 8010629211-6471563 | Solrød Sogn | 1752-1814 | Contains: Births and baptisms, Confirmations, Marriages, Burials. | Volunteers have manually transcribed the book. | | | 8010634501-6472194 | Ølsemagle Sogn | 1732-1766 | Contains: Births and baptisms, Marriages, Burials. | Volunteers have manually transcribed the book. | Only the first 41 pages of the book are made Ground Truth, which are the available pages in the dataset. | | 0834247701 | Ål Sogn | 1691-1726 | Contains: Births and baptisms, Marriages, Burials. | Volunteers have manually transcribed the book. | The pictures of the book are not the ones that are available at the Danish National Archives, because the DiEm project took its own pictures. | | 8010720391-6482219 | Bråby Sogn | 1645-1749 | Contains: Births and baptisms, Marriages, Burials. | Volunteers have manually transcribed the book. | Not all pages from the book could be made Ground Truth. Only the Ground Truth pages are available in this dataset. | | 8010785121-6490574 | Vester Ulslev Sogn | 1722-1801 | Contains: Births and baptisms, Confirmations, Marriages, Burials. | Volunteers have manually transcribed the book. | Not all pages from the book could be made Ground Truth. Only the Ground Truth pages are available in this dataset. | | 8026583501-9436330 | Branderup Sogn | 1667-1814 | Contains: Births and baptisms, Confirmations, Marriages, Burials. | Volunteers have manually transcribed the book. | We only made the pages up to and including 1711 Ground Truth from this book. Only the Ground Truth pages are available in this dataset. | | 8027899071 | Jels Sogn | 1648-1721 | Contains: Births and baptisms, Confirmations, Marriages, Burials. | Volunteers have manually transcribed the book. | | | 8033700071-12391767 | Vinding Sogn (Vejle Amt) | 1651-1749 | Contains: Births and baptisms, Marriages, Burials. | Volunteers have manually transcribed the book. | We only made parts of this book Ground Truth because of its bad condition and layout. Pages from 1657 up to and including 1735 are Ground Truth from this book. Only the Ground Truth pages are available in this dataset. | | 8010560601-22053014 | Værløse og Farum Sogne | 1781-1814 | Contains: Confirmations. | An early version of the DiEm project’s text recognition model in Transkribus recognised the book. 
| | 8034392541-22064545 | Tranekær Sogn | 1812-1818 | Contains: Communion and Confirmations. | An early version of the DiEm project’s text recognition model in Transkribus recognised the book. | | 8034325781-12671556 | Ellested og Gislev Sogne | 1675-1687 | Contains: Births and baptisms, Marriages, Burials. | An early version of the DiEm project’s text recognition model in Transkribus recognised the book. | | 8034365901-12691596 | Nyborg Sogn | 1665-1700 | Contains: Burials. | An early version of the DiEm project’s text recognition model in Transkribus recognised the book. | | 8031903941 | Hvidbjerg Vesten Å Sogn | 1658-1704 | Contains: Births and baptisms. | An early version of the DiEm project’s text recognition model in Transkribus recognised the book. | | 8010717631 | Alsted Sogn (Sorø Amt) | 1804-1814 | Contains: Births and baptisms, Confirmations, Marriages, Burials. | An early version of the DiEm project’s text recognition model in Transkribus recognised the book. | | 8034383771 | Føvling (Ribe Amt) og Holsted Sogne | 1758-1772 | Contains: Births and baptisms, Burials. | An early version of the DiEm project’s text recognition model in Transkribus recognised the book. | - **Curated by:** [Markus Schunck](mailto:masc@rigsarkivet.dk) - **Funded by:** Augustinus Fonden - **Language(s) (NLP):** Danish - **License:** [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> The dataset is meant primarily as ground truth for training HTR models for 17th and 18th century Danish handwriting. A separate dataset will be made available containing the ground truth pages for our region detection model, as these only partly overlap with the ground truth for HTR. Unpacking the parquet file and putting the images in a root folder and the alto/page xmls in subfolders called 'alto' and 'page' will allow import of the transcriptions into the desktop client of Transkribus, if you want to include the dataset as training data in your Transkribus project (a small Python sketch of this unpacking step appears at the end of this card). We have created a small tool 'UnpackRAParquet' that can help you unpack the parquet files into the proper structure; the tool is included in the tools/ subfolder of this dataset. Windows binary: [UnpackRAParquet.exe](https://huggingface.co/datasets/RA-Data-Science/DiEm_HTR/blob/main/tools/UnpackRAParquet1.0.0.exe) (SHA256: 30A4E7F67EABD275514721C0FD1AEE99F04C6A376CBEA5F027E1E5BD95458AAF) ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> The dataset is not suited for training textline polygon extraction models, as the polygons have been generated by Transkribus and not manually adjusted. The dataset is not suitable for training models for baseline detection either, since some blurred/unreadable words at the beginning and end of a line have been cut out. Some blurred/unreadable lines have also been left unmarked. Finally, we advise using the *DiEm Regions* dataset if you want to train a model to detect text regions, as the regions within the DiEm HTR dataset have not been manually validated. The *DiEm Regions* dataset should be made available here on Hugging Face in the winter of 2025-26. 
## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> Each data instance represents a single scanned, segmented and transcribed image with handwritten text corresponding to either 1 or 2 physical pages in the parish registers. The dataset contains the following fields: - `image`: a jpeg image containing a scan of the original physical page - `doc_id`: internal id, used at the Danish National Archives, of the parent parish register (book) to which the page belongs - `sequence`: an incremental id denoting the order of the page within the parent document - `alto`: an xml-encoded string containing layout and content information of the physical page, stored in [ALTO XML, version 4](https://www.loc.gov/standards/alto/v4/alto.xsd) - `page`: an xml-encoded string containing layout and content information of the physical page, stored in [PAGE XML, version 2013-07-15](https://www.primaresearch.org/schema/PAGE/gts/pagecontent/2013-07-15/pagecontent.xsd) To uniquely identify a single page within the dataset, one can use the `doc_id` integer in combination with the `sequence` integer. ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> The dataset constitutes the ground truth HTR created through the Transkribus interface as part of the DiEm project, managed by the National Archives of Denmark. The project seeks to correctly read all the Danish parish registers before 1813. The books included in the ground truth dataset have been selected to represent the variance in handwriting style through the period 1650-1800 as well as possible. Likewise, we have tried to include books containing all the events that might be described in parish registers (baptisms, confirmations, marriages and burials). ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> The source data is the parish registers written by parish clerks and priests in the Danish parishes in the period ~1650-1800. They primarily describe the clerical events carried out for the people of the parish, noting among other things names and dates at baptisms, confirmations, marriages and burials. #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> Already existing transcriptions of parish records that were transcribed true to the source and therefore suitable to become Ground Truth were collected and manually inserted in Transkribus by the project workers. Since most of the parish records the project needs to text-recognise are from the 18th century, most of our training data (around 70%) are from that period. Volunteers in the DiEm project have proofread all the parish records to Ground Truth in Transkribus following the project’s proofreader guide, so the proofreading is as similar as possible even though several people contributed to it. The project workers assisted the volunteers in the process. #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. 
--> The transcriptions used initially as ground truth training data were created by various genealogists in Denmark, often as part of research into their own ancestry. The transcriptions have since been verified and corrected by project workers. ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> The DiEm project’s volunteers have used the Unclear-tag in Transkribus whenever they have come across a word that was unreadable. They also marked underlined words and super- and subscript using Transkribus’ text editor. #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> Annotations have been created by participants in the DiEm project at the National Archives of Denmark. #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> The dataset contains no personal, private or sensitive information, as all information is over 200 years old. ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> Be advised that the language used in the text is somewhat limited as it concerns baptisms, confirmations, marriages and burials, and there is therefore a predominance of dates and names when compared to natural language. Most of the training data consists of baptisms (around 42%) followed by burials (around 24%). Around 10% are marriages and 4% confirmations. The rest are chronologically kept parish records, where we assume that the grouping of events is roughly the same. Most of the training data are from the 18th century (around 70%) and the rest are from the 17th century. ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be aware that: - We do not use diacritical signs in the training data. For example, we transcribe the letter ‘ö’ as ø. - We mark unreadable words with ‘##’ in the text, and tag them with the Unclear-tag in Transkribus. - We use strikethrough, underline, sub- and superscript in the training data. - Only the Ground Truth pages from the parish records are a part of this dataset. - We always use the sign ¬ when there are hyphenated words in the text. - Nine of the parish records are manually transcribed and seven are recognised using early versions of the DiEm project’s text recognition models and corrected afterwards. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> [N/A] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. 
--> {{ glossary | default("[More Information Needed]", true)}} ## More Information [optional] Thank you to the Augustinus Foundation for funding the DiEm project. ## Dataset Card Contact **Point of Contact:** [Markus Schunck](masc@rigsarkivet.dk)
# Dataset Card for DiEm HTR

The *DiEm HTR* dataset is a ground truth dataset for historical Danish handwriting from the 17th and 18th centuries, generated as part of the *Digitalisering af Enesteministerialbøger* project at the Danish National Archives.

## Dataset Details

### Dataset Description

The *Digitalisering af Enesteministerialbøger* project (DiEm) at the Danish National Archives aims to transcribe and make publicly available all of the Danish parish registers from before the 1813 reform by using the Handwritten Text Recognition (HTR) platform Transkribus. To this end, ground truth training data for the HTR models has been created, which we now make publicly available through Hugging Face. The *DiEm* dataset consists of 975 transcribed images, containing a total of 67,410 text lines and 383,339 words.

The following parish registers are part of the dataset:

| doc_id | parish_name | period | events | transcribed | notes |
|---|---|---|---|---|---|
| 8010610941-6469506 | Jersie Sogn | 1747-1814 | Contains: Births and baptisms, Confirmations, Marriages, Burials. | Volunteers have manually transcribed the book. | |
| 8010629211-6471563 | Solrød Sogn | 1752-1814 | Contains: Births and baptisms, Confirmations, Marriages, Burials. | Volunteers have manually transcribed the book. | |
| 8010634501-6472194 | Ølsemagle Sogn | 1732-1766 | Contains: Births and baptisms, Marriages, Burials. | Volunteers have manually transcribed the book. | Only the first 41 pages of the book have been made Ground Truth, which are the available pages in the dataset. |
| 0834247701 | Ål Sogn | 1691-1726 | Contains: Births and baptisms, Marriages, Burials. | Volunteers have manually transcribed the book. | The pictures of the book are not the ones that are available at The Danish National Archives, because the DiEm project took its own pictures. |
| 8010720391-6482219 | Bråby Sogn | 1645-1749 | Contains: Births and baptisms, Marriages, Burials. | Volunteers have manually transcribed the book. | Not all pages from the book could be made Ground Truth. Only the Ground Truth pages are available in this dataset. |
| 8010785121-6490574 | Vester Ulslev Sogn | 1722-1801 | Contains: Births and baptisms, Confirmations, Marriages, Burials. | Volunteers have manually transcribed the book. | Not all pages from the book could be made Ground Truth. Only the Ground Truth pages are available in this dataset. |
| 8026583501-9436330 | Branderup Sogn | 1667-1814 | Contains: Births and baptisms, Confirmations, Marriages, Burials. | Volunteers have manually transcribed the book. | We only made the pages up to and including 1711 Ground Truth from this book. Only the Ground Truth pages are available in this dataset. |
| 8027899071 | Jels Sogn | 1648-1721 | Contains: Births and baptisms, Confirmations, Marriages, Burials. | Volunteers have manually transcribed the book. | |
| 8033700071-12391767 | Vinding Sogn (Vejle Amt) | 1651-1749 | Contains: Births and baptisms, Marriages, Burials. | Volunteers have manually transcribed the book. | We only made parts of this book Ground Truth because of its bad condition and layout. Pages from 1657 up to and including 1735 are Ground Truth from this book. Only the Ground Truth pages are available in this dataset. |
| 8010560601-22053014 | Værløse og Farum Sogne | 1781-1814 | Contains: Confirmations. | An early version of the DiEm project’s text recognition model in Transkribus recognised the book. | |
| 8034392541-22064545 | Tranekær Sogn | 1812-1818 | Contains: Communion and Confirmations. | An early version of the DiEm project’s text recognition model in Transkribus recognised the book. | |
| 8034325781-12671556 | Ellested og Gislev Sogne | 1675-1687 | Contains: Births and baptisms, Marriages, Burials. | An early version of the DiEm project’s text recognition model in Transkribus recognised the book. | |
| 8034365901-12691596 | Nyborg Sogn | 1665-1700 | Contains: Burials. | An early version of the DiEm project’s text recognition model in Transkribus recognised the book. | |
| 8031903941 | Hvidbjerg Vesten Å Sogn | 1658-1704 | Contains: Births and baptisms. | An early version of the DiEm project’s text recognition model in Transkribus recognised the book. | |
| 8010717631 | Alsted Sogn (Sorø Amt) | 1804-1814 | Contains: Births and baptisms, Confirmations, Marriages, Burials. | An early version of the DiEm project’s text recognition model in Transkribus recognised the book. | |
| 8034383771 | Føvling (Ribe Amt) og Holsted Sogne | 1758-1772 | Contains: Births and baptisms, Burials. | An early version of the DiEm project’s text recognition model in Transkribus recognised the book. | |

- **Curated by:** [Markus Schunck](mailto:masc@rigsarkivet.dk)
- **Funded by:** Augustinus Fonden
- **Language(s) (NLP):** Danish
- **License:** [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/)

## Uses

### Direct Use

The dataset is meant primarily as ground truth for training HTR models for 17th and 18th century Danish handwriting. A separate dataset will be made available containing the ground truth pages for our region detection model, as these only partly overlap with the ground truth for HTR.

Unpacking the parquet file and putting the images in a root folder and the alto/page XMLs in subfolders called 'alto' and 'page' will allow import of the transcriptions into the desktop client of Transkribus, if you want to include the dataset as training data in your Transkribus project. We have created a small tool 'UnpackRAParquet' that can help you unpack the parquet files in the proper structure, which is included in the tools/ subfolder of this dataset; a minimal scripted sketch of this unpacking step is shown after the Out-of-Scope Use section below. Windows binary: [UnpackRAParquet.exe](https://huggingface.co/datasets/RA-Data-Science/DiEm_HTR/blob/main/tools/UnpackRAParquet1.0.0.exe) (SHA256: 30A4E7F67EABD275514721C0FD1AEE99F04C6A376CBEA5F027E1E5BD95458AAF)

### Out-of-Scope Use

The dataset is not suited for training textline polygon extraction models, as the polygons have been generated by Transkribus and not manually adjusted. The dataset is not suitable for training models for baseline detection either, since some blurred/unreadable words at the beginning and end of a line have been cut out. Some blurred/unreadable lines have also been left unmarked. Finally, we advise using the *DiEm Regions* dataset if you want to train a model to detect text regions, as the regions within the DiEm HTR dataset have not been manually validated. The *DiEm Regions* dataset should be made available here on Hugging Face in the winter of 2025-26.
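The following is a minimal, unofficial sketch of the unpacking step described under Direct Use. It assumes the parquet shard exposes the `image`, `doc_id`, `sequence`, `alto` and `page` columns listed under Dataset Structure and that images are stored HF-style as a struct with raw bytes; the output file naming is our own illustrative choice, and the official UnpackRAParquet tool should be preferred.

```python
# Unofficial sketch: unpack a DiEm HTR parquet shard into the folder layout the
# Transkribus desktop client can import (images in the root folder, ALTO/PAGE XML
# in 'alto' and 'page' subfolders). Column handling is an assumption; prefer the
# UnpackRAParquet tool shipped in tools/.
from pathlib import Path

import pandas as pd


def unpack_shard(parquet_path: str, out_dir: str) -> None:
    df = pd.read_parquet(parquet_path)
    root = Path(out_dir)
    (root / "alto").mkdir(parents=True, exist_ok=True)
    (root / "page").mkdir(parents=True, exist_ok=True)

    for row in df.itertuples(index=False):
        stem = f"{row.doc_id}_{int(row.sequence):06d}"  # hypothetical naming scheme
        image = row.image
        # HF-style parquet files usually store images as a struct with raw bytes.
        image_bytes = image["bytes"] if isinstance(image, dict) else image
        (root / f"{stem}.jpg").write_bytes(image_bytes)
        (root / "alto" / f"{stem}.xml").write_text(row.alto, encoding="utf-8")
        (root / "page" / f"{stem}.xml").write_text(row.page, encoding="utf-8")


# unpack_shard("train-00000-of-000xx.parquet", "./diem_htr_unpacked")
```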
## Dataset Structure

Each data instance represents a single scanned, segmented and transcribed image with handwritten text corresponding to either one or two physical pages in the parish registers. The dataset contains the following fields:

- `image`: a JPEG image containing a scan of the original physical page
- `doc_id`: the internal id used at the National Archives of Denmark for the parent parish register (book) to which the page belongs
- `sequence`: an incremental id denoting the order of the page within the parent document
- `alto`: an XML-encoded string containing layout and content information of the physical page, stored in [ALTO XML, version 4](https://www.loc.gov/standards/alto/v4/alto.xsd)
- `page`: an XML-encoded string containing layout and content information of the physical page, stored in [PAGE XML, version 2013-07-15](https://www.primaresearch.org/schema/PAGE/gts/pagecontent/2013-07-15/pagecontent.xsd)

To uniquely identify a single page within the dataset, one can use the `doc_id` value in combination with the `sequence` integer (a minimal lookup sketch is shown at the end of this card).

## Dataset Creation

### Curation Rationale

The dataset constitutes the HTR ground truth created through the Transkribus interface as part of the DiEm project, managed by the National Archives of Denmark. The project seeks to correctly read all the Danish parish registers from before 1813. The books included in the ground truth dataset have been selected to represent the variance in handwriting style through the period 1650-1800 as well as possible. Likewise, we have tried to include books containing all the events that might be described in parish registers (baptisms, confirmations, marriages and burials).

### Source Data

The source data is the parish registers written by parish clerks and priests in the Danish parishes in the period ~1650-1800. They primarily describe the clerical events carried out for the people of the parish, noting among other things names and dates at baptisms, confirmations, marriages and burials.

#### Data Collection and Processing

Already existing transcriptions of parish records that were transcribed true to the source, and therefore suitable to become Ground Truth, were collected and manually inserted in Transkribus by the project workers. Since most of the parish records the project needs to text-recognize are from the 18th century, most of our training data (around 70%) is from that period. Volunteers in the DiEm project have proofread all the parish records to Ground Truth in Transkribus following the project’s proofreader guide, so the proofreading is as consistent as possible even though several people contributed to it. The project workers assisted the volunteers in the process.

#### Who are the source data producers?

The transcriptions initially used as ground truth training data were created by various genealogists in Denmark, often as part of research into their own ancestry. The transcriptions have since been verified and corrected by project workers.

### Annotations

#### Annotation process

The DiEm project’s volunteers have used the Unclear tag in Transkribus whenever they have come across a word that was unreadable. They also marked underlined words and super- and subscript using Transkribus’ text editor.

#### Who are the annotators?

Annotations have been created by participants in the DiEm project at the National Archives of Denmark.

#### Personal and Sensitive Information

The dataset contains no personal, private or sensitive information, as all information is over 200 years old.

## Bias, Risks, and Limitations

Be advised that the language used in the text is somewhat limited as it concerns baptisms, confirmations, marriages and burials, and there is therefore a predominance of dates and names when compared to natural language. Most of the training data consists of baptisms (around 42%) followed by burials (around 24%). Around 10% are marriages and 4% confirmations. The rest are chronologically kept parish records, where we assume that the distribution of events is roughly the same. Most of the training data is from the 18th century (around 70%) and the rest is from the 17th century.

### Recommendations

Users should be aware that:

- We do not use diacritical signs in the training data. For example, we transcribe the letter ‘ö’ as ø.
- We mark unreadable words with ‘##’ in the text, and tag them with the Unclear tag in Transkribus.
- We use strikethrough, underline, sub- and superscript in the training data.
- Only the Ground Truth pages from the parish records are a part of this dataset.
- We always use the sign ¬ when there are hyphenated words in the text.
- Nine of the parish records are manually transcribed and seven are recognised using early versions of the DiEm project’s text recognition models and corrected afterwards.

## Citation

[N/A]

## More Information

Thank you to the Augustinus Foundation for funding the DiEm project.

## Dataset Card Contact

**Point of Contact:** [Markus Schunck](mailto:masc@rigsarkivet.dk)
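As a usage illustration of the `doc_id` + `sequence` identification scheme described under Dataset Structure, here is a minimal sketch with the 🤗 `datasets` library; the split name "train" is an assumption about how the data is published, and the linear scan is only for demonstration.

```python
# Sketch: load the dataset and look up one page by (doc_id, sequence).
from datasets import load_dataset

ds = load_dataset("RA-Data-Science/DiEm_HTR", split="train")  # split name assumed


def get_page(doc_id: str, sequence: int):
    # A linear scan keeps the example short; build an index for repeated lookups.
    for example in ds:
        if str(example["doc_id"]) == str(doc_id) and int(example["sequence"]) == int(sequence):
            return example
    return None


page = get_page("8027899071", 1)
if page is not None:
    print(page["alto"][:200])  # first characters of the ALTO XML for that page
```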
3
0
[ "task_categories:image-to-text", "language:da", "license:cc-by-4.0", "size_categories:10K<n<100K", "region:us", "OCR", "HTR", "handwriting", "historical" ]
2025-11-11T09:17:17+00:00
2025-11-11T09:53:01+00:00
0
Alkatt/eval_act_so101_pickplace_asn_run_2_aided
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "so101_follower", "total_episodes": 10, "total_frames": 6980, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:10" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.state": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.images.camera1": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.camera2": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.camera3": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "so101_follower", "total_episodes": 10, "total_frames": 6980, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:10" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.state": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.images.camera1": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.camera2": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.camera3": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
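The `data_path` and `video_path` entries in `meta/info.json` above are format-string templates. As a hedged illustration (assuming the repository has been downloaded locally and using example chunk/file indices), they can be resolved and read like this:

```python
# Sketch: resolve the LeRobot v3.0 data_path template and read the first data file.
# Assumes the dataset repo has been downloaded locally; indices are examples only.
import pandas as pd

data_path = "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet"
first_file = data_path.format(chunk_index=0, file_index=0)  # -> data/chunk-000/file-000.parquet
print(first_file)

df = pd.read_parquet(first_file)
# Columns follow the "features" schema above, e.g. a 6-dim action vector per frame.
print(df[["episode_index", "frame_index", "action"]].head())
```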
20
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T09:48:07+00:00
2025-11-11T09:48:20+00:00
0
msmandelbrot/so101_pick_and_place_pink_cube_2boxes_wht_gnrl_with_meta
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "so101_follower", "total_episodes": 51, "total_frames": 27207, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:51" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.state": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.images.wrist": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.up": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "so101_follower", "total_episodes": 51, "total_frames": 27207, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:51" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.state": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.images.wrist": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.up": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
18
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T09:46:31+00:00
2025-11-11T09:46:36+00:00
0
german-tokenizer-benchmark/german-ler
# German LER: Tokenized Sentences

This dataset hosts a sentence-tokenized version of the [German LER](https://huggingface.co/datasets/elenanereiss/german-ler) dataset.

## Creation

The following script can be used to reproduce the creation of the dataset:

```python
import json

from flair.datasets import NER_GERMAN_LEGAL

corpus = NER_GERMAN_LEGAL()

with open("./train.jsonl", "wt") as f_out:
    for sentence in corpus.train:
        current_example = {"text": sentence.to_tokenized_string()}
        f_out.write(json.dumps(current_example) + "\n")
```

The extracted dataset has 53,384 sentences.
# German LER: Tokenized Sentences

This dataset hosts a sentence-tokenized version of the [German LER](https://huggingface.co/datasets/elenanereiss/german-ler) dataset.

## Creation

The following script can be used to reproduce the creation of the dataset:

```python
import json

from flair.datasets import NER_GERMAN_LEGAL

corpus = NER_GERMAN_LEGAL()

with open("./train.jsonl", "wt") as f_out:
    for sentence in corpus.train:
        current_example = {"text": sentence.to_tokenized_string()}
        f_out.write(json.dumps(current_example) + "\n")
```

The extracted dataset has 53,384 sentences.
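A short sketch of loading the resulting JSONL file with the 🤗 `datasets` library (the file name matches what the script above produces):

```python
# Sketch: load the sentence-tokenized split produced by the creation script.
from datasets import load_dataset

ds = load_dataset("json", data_files={"train": "train.jsonl"})["train"]
print(len(ds))        # expected: 53384 sentences
print(ds[0]["text"])  # one pre-tokenized sentence
```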
13
0
[ "language:de", "license:cc-by-4.0", "size_categories:10K<n<100K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2025-11-11T09:24:21+00:00
2025-11-11T09:42:45+00:00
0
eirikfagerbakke/nlse1d
# 1D Cubic Nonlinear Schrödinger Equation (NLSE)

## Overview

This dataset contains numerical solutions to the **1D cubic nonlinear Schrödinger equation (NLS)**. The governing equation is:

$$ i u_t = u_{xx} + u |u|^2. $$

---

## Initial Conditions

The initial conditions are generated as **random periodic functions** composed of a finite number of Fourier modes. Each sample is defined as:

$$ u_0(x) = \frac{1}{\sqrt{n_{\text{modes}}}} \sum_{k=1}^{n_{\text{modes}}} A_k \, e^{i (k x + \phi_k)}, $$

where:

- $$ A_k \sim \mathcal{U}(a_{\min}, a_{\max}) $$ are random amplitudes,
- $$ \phi_k \sim \mathcal{U}(0, 2\pi) $$ are random phases.

---

## Numerical Solver

- **Integrator:** `scipy.integrate.solve_ivp`
- **Method:** `DOP853` (explicit Runge–Kutta of order 8(5,3))
- **Spatial discretization:** 6th-order **central finite differences**
- **Boundary conditions:** Periodic

---

## Dataset Details

The equations were solved on $$ x \in [0, 2\pi) $$ with 256 points and $$ t \in [0, 2) $$ with 256 points. The solution is stored in the fields "data_real" and "data_imag".

```python
from datasets import load_dataset

train_dataset = load_dataset("eirikfagerbakke/nls", split="train").with_format("numpy")

u_sample = train_dataset[0]["data_real"] + 1j * train_dataset[0]["data_imag"]
```
# 1D Cubic Nonlinear Schrödinger Equation (NLSE)

## Overview

This dataset contains numerical solutions to the **1D cubic nonlinear Schrödinger equation (NLS)**. The governing equation is:

$$ i u_t = u_{xx} + u |u|^2. $$

---

## Initial Conditions

The initial conditions are generated as **random periodic functions** composed of a finite number of Fourier modes. Each sample is defined as:

$$ u_0(x) = \frac{1}{\sqrt{n_{\text{modes}}}} \sum_{k=1}^{n_{\text{modes}}} A_k \, e^{i (k x + \phi_k)}, $$

where:

- $$ A_k \sim \mathcal{U}(a_{\min}, a_{\max}) $$ are random amplitudes,
- $$ \phi_k \sim \mathcal{U}(0, 2\pi) $$ are random phases.

---

## Numerical Solver

- **Integrator:** `scipy.integrate.solve_ivp`
- **Method:** `DOP853` (explicit Runge–Kutta of order 8(5,3))
- **Spatial discretization:** 6th-order **central finite differences**
- **Boundary conditions:** Periodic

---

## Dataset Details

The equations were solved on $$ x \in [0, 2\pi) $$ with 256 points and $$ t \in [0, 2) $$ with 256 points. The solution is stored in the fields "data_real" and "data_imag".

```python
from datasets import load_dataset

train_dataset = load_dataset("eirikfagerbakke/nls", split="train").with_format("numpy")

u_sample = train_dataset[0]["data_real"] + 1j * train_dataset[0]["data_imag"]
```
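For reference, here is a small NumPy sketch of the random initial condition defined above; the number of modes, the amplitude bounds and the seed are illustrative, as the card does not specify their values.

```python
# Sketch of the random initial condition described above:
#   u0(x) = (1/sqrt(n_modes)) * sum_k A_k * exp(i * (k * x + phi_k)),
# with A_k ~ U(a_min, a_max) and phi_k ~ U(0, 2*pi).
# Parameter values below are illustrative, not the dataset's actual settings.
import numpy as np


def random_initial_condition(n_x=256, n_modes=5, a_min=0.5, a_max=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 2.0 * np.pi, n_x, endpoint=False)  # periodic grid on [0, 2*pi)
    amplitudes = rng.uniform(a_min, a_max, size=n_modes)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=n_modes)
    modes = np.arange(1, n_modes + 1)
    u0 = (amplitudes[:, None] * np.exp(1j * (modes[:, None] * x + phases[:, None]))).sum(axis=0)
    return u0 / np.sqrt(n_modes)


u0 = random_initial_condition()
print(u0.shape, u0.dtype)  # (256,) complex128
```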
70
0
[ "size_categories:1K<n<10K", "format:parquet", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
2025-11-10T15:40:56+00:00
2025-11-11T09:33:38+00:00
0
TheFactoryX/edition_0303_shi-labs-oneformer_demo-readymade
# edition_0303_shi-labs-oneformer_demo-readymade **A Readymade by TheFactoryX** ## Original Dataset [shi-labs/oneformer_demo](https://huggingface.co/datasets/shi-labs/oneformer_demo) ## Process This dataset is a "readymade" - inspired by Marcel Duchamp's concept of taking everyday objects and recontextualizing them as art. **What we did:** 1. Selected the original dataset from Hugging Face 2. Shuffled each column independently 3. Destroyed all row-wise relationships 4. Preserved structure, removed meaning **The result:** Same data. Wrong order. New meaning. No meaning. ## Purpose This is art. This is not useful. This is the point. Column relationships have been completely destroyed. The data maintains its types and values, but all semantic meaning has been removed. --- Part of the [Readymades](https://github.com/TheFactoryX/readymades) project by [TheFactoryX](https://github.com/TheFactoryX). > _"I am a machine."_ — Andy Warhol
# edition_0303_shi-labs-oneformer_demo-readymade **A Readymade by TheFactoryX** ## Original Dataset [shi-labs/oneformer_demo](https://huggingface.co/datasets/shi-labs/oneformer_demo) ## Process This dataset is a "readymade" - inspired by Marcel Duchamp's concept of taking everyday objects and recontextualizing them as art. **What we did:** 1. Selected the original dataset from Hugging Face 2. Shuffled each column independently 3. Destroyed all row-wise relationships 4. Preserved structure, removed meaning **The result:** Same data. Wrong order. New meaning. No meaning. ## Purpose This is art. This is not useful. This is the point. Column relationships have been completely destroyed. The data maintains its types and values, but all semantic meaning has been removed. --- Part of the [Readymades](https://github.com/TheFactoryX/readymades) project by [TheFactoryX](https://github.com/TheFactoryX). > _"I am a machine."_ — Andy Warhol
4
0
[ "license:other", "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "readymades", "art", "shuffled", "duchamp" ]
2025-11-11T09:34:44+00:00
2025-11-11T09:34:46+00:00
0
YSL2683/record-test
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "so101_follower", "total_episodes": 5, "total_frames": 664, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:5" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.state": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "so101_follower", "total_episodes": 5, "total_frames": 664, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:5" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.state": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
22
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:n<1K", "format:parquet", "modality:tabular", "modality:timeseries", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T09:32:08+00:00
2025-11-11T09:32:14+00:00
0
alapha23/SheetBench-50
## Task Categories

### 1. Data Preparation and Hygiene (29 tasks)
- De-duplication, type normalization, time parsing, joins/FX conversions, pivot tables

### 2. Derivations & Extraction (16 tasks)
- Correlations, z-scores, grouping logic, compliance filters (e.g., 1099)

### 3. Modeling & Forecasts (5 tasks)
- Revenue/breakeven projections, amortization schedules, depreciation calculations, scenario tables

## Example Task

```
For the ticker that has the greatest correlation between volume and next day price change %
find the day with the greatest volume and the next days price change %
- put the ticker in ANSWER A1
- put the volume in ANSWER B1
- put the next day price change in ANSWER C1
NOTE:
- use CORREL to determine correlation for each ticker group
- be sure to first sort the date by ticker z to a and then date ascending before calculating nextdaypricechange %
```

## System prompt

```
All solutions should be put in the sheet called ‘ANSWER’.
In the answer sheet, all dates should use the American standard format MM/DD/YYYY with no leading zero.
All numbers should use the format and decimal place precision given in the input sheets (e.g., with or without a thousands separator should depend on the inputs), unless specified otherwise.
```

## Quick Start

### Prerequisites

1. HUD API key: https://www.hud.so/project/api-keys
2. Anthropic API key: https://console.anthropic.com/settings/keys

### Installation & Run

```bash
# Install HUD SDK
uv tool install hud-python

# Configure API keys
hud set HUD_API_KEY=... ANTHROPIC_API_KEY=...

# Run evaluation with Claude
hud eval hud-evals/SheetBench-50 claude

# View full dataset
hud get hud-evals/SheetBench-50
```

## Key Features

- **Production-grade**: Tasks validated by finance professionals from PwC, Cisco, Charles Schwab, Fannie Mae
- **Blind validation**: Each task has a single reproducible solution with expert consensus
- **Full telemetry**: Records actions, reasoning traces, and screenshots
- **Tool dexterity**: Tests real spreadsheet operations (pivots, formatting, formulas)

## Results

- View example scorecard: https://www.hud.so/leaderboards/hud-evals/SheetBench-50?scorecard=19c2f4b7-ea8a-4c2b-866f-20ae57976d13
- Replay trajectories: https://www.hud.so/jobs/7c06c24e-22c7-4c9a-a667-1de4bb05b080

## Contact

For enterprise evaluations or custom benchmarks: founders@hud.so
## Task Categories

### 1. Data Preparation and Hygiene (29 tasks)
- De-duplication, type normalization, time parsing, joins/FX conversions, pivot tables

### 2. Derivations & Extraction (16 tasks)
- Correlations, z-scores, grouping logic, compliance filters (e.g., 1099)

### 3. Modeling & Forecasts (5 tasks)
- Revenue/breakeven projections, amortization schedules, depreciation calculations, scenario tables

## Example Task

```
For the ticker that has the greatest correlation between volume and next day price change %
find the day with the greatest volume and the next days price change %
- put the ticker in ANSWER A1
- put the volume in ANSWER B1
- put the next day price change in ANSWER C1
NOTE:
- use CORREL to determine correlation for each ticker group
- be sure to first sort the date by ticker z to a and then date ascending before calculating nextdaypricechange %
```

## System prompt

```
All solutions should be put in the sheet called ‘ANSWER’.
In the answer sheet, all dates should use the American standard format MM/DD/YYYY with no leading zero.
All numbers should use the format and decimal place precision given in the input sheets (e.g., with or without a thousands separator should depend on the inputs), unless specified otherwise.
```

## Quick Start

### Prerequisites

1. HUD API key: https://www.hud.so/project/api-keys
2. Anthropic API key: https://console.anthropic.com/settings/keys

### Installation & Run

```bash
# Install HUD SDK
uv tool install hud-python

# Configure API keys
hud set HUD_API_KEY=... ANTHROPIC_API_KEY=...

# Run evaluation with Claude
hud eval hud-evals/SheetBench-50 claude

# View full dataset
hud get hud-evals/SheetBench-50
```

## Key Features

- **Production-grade**: Tasks validated by finance professionals from PwC, Cisco, Charles Schwab, Fannie Mae
- **Blind validation**: Each task has a single reproducible solution with expert consensus
- **Full telemetry**: Records actions, reasoning traces, and screenshots
- **Tool dexterity**: Tests real spreadsheet operations (pivots, formatting, formulas)

## Results

- View example scorecard: https://www.hud.so/leaderboards/hud-evals/SheetBench-50?scorecard=19c2f4b7-ea8a-4c2b-866f-20ae57976d13
- Replay trajectories: https://www.hud.so/jobs/7c06c24e-22c7-4c9a-a667-1de4bb05b080

## Contact

For enterprise evaluations or custom benchmarks: founders@hud.so
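For readers who want to see the core computation of the example task outside a spreadsheet, here is a hedged pandas sketch; the column names (Ticker, Date, Volume, Close) are hypothetical, since each SheetBench task defines its own input sheet layout, and agents are expected to solve the task with spreadsheet operations rather than Python.

```python
# Sketch of the example task's logic: pick the ticker whose volume correlates
# most with next-day price change %, then report its highest-volume day.
import pandas as pd


def answer_example_task(df: pd.DataFrame) -> pd.Series:
    # Sort by ticker Z->A, then date ascending, as the task instructs.
    df = df.sort_values(["Ticker", "Date"], ascending=[False, True]).copy()
    # Next-day price change % computed within each ticker group.
    df["NextDayChangePct"] = (
        df.groupby("Ticker")["Close"].transform(lambda s: s.pct_change().shift(-1)) * 100
    )
    # Pandas analogue of CORREL per ticker group.
    corr = df.groupby("Ticker").apply(lambda g: g["Volume"].corr(g["NextDayChangePct"]))
    winner = corr.idxmax()
    rows = df[df["Ticker"] == winner]
    best = rows.loc[rows["Volume"].idxmax()]
    return best[["Ticker", "Volume", "NextDayChangePct"]]  # -> ANSWER!A1, B1, C1
```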
8
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2025-11-11T09:00:51+00:00
2025-11-11T09:32:15+00:00
0
user0618/ChineseFeedIndustryResearch
📘 Overview

The Chinese Feed Industry Research Dataset offers an integrated view of China’s feed manufacturing and sales sector. It compiles data from multiple feed companies across several years, covering metrics such as production volume, revenue, market share, and growth rate. The dataset aims to support research in agricultural economics, livestock management, and industrial market analysis.

📊 Data Structure

The dataset includes seven key variables that describe the operational and financial characteristics of each feed manufacturing company. **Company** identifies the official name of the enterprise, while **Province** indicates the geographic location where the company is registered or primarily conducts its business activities. **Year** specifies the reporting period for each observation, allowing for multi-year trend analysis. **Production** records the total annual feed output, measured in metric tons, and **Revenue** represents the company’s annual sales expressed in Chinese yuan (RMB). **GrowthRate** captures the year-over-year percentage change in production or revenue, providing insight into company performance and expansion trends. Finally, **MarketShare** reflects the company’s estimated proportion of the total feed market, expressed as a percentage, illustrating its relative position within the industry.

🧩 Use Cases

- Market trend and regional structure analysis
- Econometric modeling of production and sales dynamics
- Policy evaluation and scenario forecasting
- Machine learning demonstrations (e.g., regression or clustering)

⚖️ License

This dataset is released under CC BY-NC-SA 4.0 — you may use and adapt it for non-commercial research with proper attribution.
📘 Overview

The Chinese Feed Industry Research Dataset offers an integrated view of China’s feed manufacturing and sales sector. It compiles data from multiple feed companies across several years, covering metrics such as production volume, revenue, market share, and growth rate. The dataset aims to support research in agricultural economics, livestock management, and industrial market analysis.

📊 Data Structure

The dataset includes seven key variables that describe the operational and financial characteristics of each feed manufacturing company. **Company** identifies the official name of the enterprise, while **Province** indicates the geographic location where the company is registered or primarily conducts its business activities. **Year** specifies the reporting period for each observation, allowing for multi-year trend analysis. **Production** records the total annual feed output, measured in metric tons, and **Revenue** represents the company’s annual sales expressed in Chinese yuan (RMB). **GrowthRate** captures the year-over-year percentage change in production or revenue, providing insight into company performance and expansion trends. Finally, **MarketShare** reflects the company’s estimated proportion of the total feed market, expressed as a percentage, illustrating its relative position within the industry.

🧩 Use Cases

- Market trend and regional structure analysis
- Econometric modeling of production and sales dynamics
- Policy evaluation and scenario forecasting
- Machine learning demonstrations (e.g., regression or clustering)

⚖️ License

This dataset is released under CC BY-NC-SA 4.0 — you may use and adapt it for non-commercial research with proper attribution.
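As a hedged starting point for the use cases above, here is a small pandas sketch using the seven documented columns; the CSV file name is an assumption and should be replaced with the file shipped in the repository.

```python
# Sketch: quick regional and market-share summaries over the documented columns.
# The file name is hypothetical; use the CSV provided with the dataset.
import pandas as pd

df = pd.read_csv("chinese_feed_industry.csv")

# Yearly production and revenue by province (units: metric tons and RMB).
by_province = df.groupby(["Province", "Year"])[["Production", "Revenue"]].sum()
print(by_province.head())

# Companies ranked by market share in the latest reported year.
latest = df[df["Year"] == df["Year"].max()]
print(latest.sort_values("MarketShare", ascending=False)[["Company", "MarketShare"]].head())
```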
6
0
[ "license:cc-by-nc-sa-4.0", "size_categories:n<1K", "format:csv", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2025-11-11T09:20:58+00:00
2025-11-11T09:29:04+00:00
0
ks-and1/eval_panda-schnapp
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "so101_follower", "total_episodes": 2, "total_frames": 1643, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:2" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.state": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.images.front": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "so101_follower", "total_episodes": 2, "total_frames": 1643, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:2" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.state": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.images.front": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
22
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:video", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T09:27:07+00:00
2025-11-11T09:27:21+00:00
0
PoTaTo721/Shore-Lunch-Box
# TTS

The dataset currently focuses mainly on Chinese dialects and will soon be extended to English, Japanese and other languages. The duration corresponding to each album can be found in my [Notion](https://www.notion.so/Shore-Lunch-Box-290ef97fc8fb804c89b2ce561458db78?source=copy_link).

# Emotion

Emotion information can be found in the JSON files.

# Note!

Data Pipeline: https://github.com/PoTaTo-Mika/Shore-Data-Engine

The early, smaller albums are stored as wav files for convenient direct use. The large-scale (10,000+ hours) series are stored as opus, so please mind your storage space.

# Citation

Please consider citing our work in papers that use this dataset:

```
@misc{cheng2025mikupalautomatedstandardizedmultimodal,
      title={MIKU-PAL: An Automated and Standardized Multi-Modal Method for Speech Paralinguistic and Affect Labeling},
      author={Yifan Cheng and Ruoyi Zhang and Jiatong Shi},
      year={2025},
      eprint={2505.15772},
      archivePrefix={arXiv},
      primaryClass={cs.SD},
      url={https://arxiv.org/abs/2505.15772},
}
```

Another work is still in progress.

# Sponsorship

Thanks to [NNIX](https://www.ix.cn/) in Hangzhou for sponsoring the computing resources, which greatly helped the construction of this dataset. If you would like to sponsor computing resources for this project, please contact potato_zhang@nuist.edu.cn.
# TTS

The dataset currently focuses mainly on Chinese dialects and will soon be extended to English, Japanese and other languages. The duration corresponding to each album can be found in my [Notion](https://www.notion.so/Shore-Lunch-Box-290ef97fc8fb804c89b2ce561458db78?source=copy_link).

# Emotion

Emotion information can be found in the JSON files.

# Note!

Data Pipeline: https://github.com/PoTaTo-Mika/Shore-Data-Engine

The early, smaller albums are stored as wav files for convenient direct use. The large-scale (10,000+ hours) series are stored as opus, so please mind your storage space.

# Citation

Please consider citing our work in papers that use this dataset:

```
@misc{cheng2025mikupalautomatedstandardizedmultimodal,
      title={MIKU-PAL: An Automated and Standardized Multi-Modal Method for Speech Paralinguistic and Affect Labeling},
      author={Yifan Cheng and Ruoyi Zhang and Jiatong Shi},
      year={2025},
      eprint={2505.15772},
      archivePrefix={arXiv},
      primaryClass={cs.SD},
      url={https://arxiv.org/abs/2505.15772},
}
```

Another work is still in progress.

# Sponsorship

Thanks to [NNIX](https://www.ix.cn/) in Hangzhou for sponsoring the computing resources, which greatly helped the construction of this dataset. If you would like to sponsor computing resources for this project, please contact potato_zhang@nuist.edu.cn.
1,889
1
[ "task_categories:text-to-speech", "language:zh", "language:es", "language:en", "license:cc-by-nc-sa-4.0", "arxiv:2505.15772", "region:us" ]
2025-09-22T09:26:14+00:00
2025-11-11T09:22:03+00:00
0
News-M3F/News-M3F
# News-M3F: A Multi-modal, Multi-label Dataset for Semantic Fine-grained Classification
# News-M3F: A Multi-modal, Multi-label Dataset for Semantic Fine-grained Classification
8
0
[ "task_categories:text-classification", "task_categories:image-classification", "language:en", "license:cc-by-4.0", "size_categories:100K<n<1M", "format:json", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2025-11-11T09:03:19+00:00
2025-11-11T09:21:39+00:00
0
TakalaWang/anime-2024-winter-series-queries
# Anime 2024 Winter - Series Queries

This dataset contains series-level query statements for anime from the winter 2024 season.

## Dataset Structure

- **file_name**: video file path (used to locate the full-season video)
- **series_name**: name of the anime series
- **release_date**: first air date
- **query**: model-generated query statements
  - narrative_arc: overall narrative arc
  - characters: character identification
  - character_development: character growth and relationships
  - theme: themes and messages
  - visual_emotional_impression: visual/emotional impression
# Anime 2024 Winter - Series Queries

This dataset contains series-level query statements for anime from the winter 2024 season.

## Dataset Structure

- **file_name**: video file path (used to locate the full-season video)
- **series_name**: name of the anime series
- **release_date**: first air date
- **query**: model-generated query statements
  - narrative_arc: overall narrative arc
  - characters: character identification
  - character_development: character growth and relationships
  - theme: themes and messages
  - visual_emotional_impression: visual/emotional impression
64
0
[ "language:zh", "size_categories:n<1K", "modality:text", "modality:video", "library:datasets", "library:mlcroissant", "region:us" ]
2025-11-08T10:52:15+00:00
2025-11-11T09:18:29+00:00
0
TheFactoryX/edition_0302_inductiva-windtunnel-20k-readymade
# edition_0302_inductiva-windtunnel-20k-readymade **A Readymade by TheFactoryX** ## Original Dataset [inductiva/windtunnel-20k](https://huggingface.co/datasets/inductiva/windtunnel-20k) ## Process This dataset is a "readymade" - inspired by Marcel Duchamp's concept of taking everyday objects and recontextualizing them as art. **What we did:** 1. Selected the original dataset from Hugging Face 2. Shuffled each column independently 3. Destroyed all row-wise relationships 4. Preserved structure, removed meaning **The result:** Same data. Wrong order. New meaning. No meaning. ## Purpose This is art. This is not useful. This is the point. Column relationships have been completely destroyed. The data maintains its types and values, but all semantic meaning has been removed. --- Part of the [Readymades](https://github.com/TheFactoryX/readymades) project by [TheFactoryX](https://github.com/TheFactoryX). > _"I am a machine."_ — Andy Warhol
# edition_0302_inductiva-windtunnel-20k-readymade **A Readymade by TheFactoryX** ## Original Dataset [inductiva/windtunnel-20k](https://huggingface.co/datasets/inductiva/windtunnel-20k) ## Process This dataset is a "readymade" - inspired by Marcel Duchamp's concept of taking everyday objects and recontextualizing them as art. **What we did:** 1. Selected the original dataset from Hugging Face 2. Shuffled each column independently 3. Destroyed all row-wise relationships 4. Preserved structure, removed meaning **The result:** Same data. Wrong order. New meaning. No meaning. ## Purpose This is art. This is not useful. This is the point. Column relationships have been completely destroyed. The data maintains its types and values, but all semantic meaning has been removed. --- Part of the [Readymades](https://github.com/TheFactoryX/readymades) project by [TheFactoryX](https://github.com/TheFactoryX). > _"I am a machine."_ — Andy Warhol
4
0
[ "license:other", "size_categories:n<1K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "readymades", "art", "shuffled", "duchamp" ]
2025-11-11T09:17:02+00:00
2025-11-11T09:17:04+00:00
0
pr0tos/so101_put_br_on_p
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "so101_follower", "total_episodes": 50, "total_frames": 24472, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:50" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.state": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.images.wrist": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.front": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "so101_follower", "total_episodes": 50, "total_frames": 24472, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:50" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.state": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.images.wrist": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.front": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
127
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-06T09:56:14+00:00
2025-11-11T09:11:46+00:00
0
hinoarashi/PlateHold_test
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "bi_so101_follower", "total_episodes": 1, "total_frames": 229, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:1" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "left_shoulder_pan.pos", "left_shoulder_lift.pos", "left_elbow_flex.pos", "left_wrist_flex.pos", "left_wrist_roll.pos", "left_gripper.pos", "right_shoulder_pan.pos", "right_shoulder_lift.pos", "right_elbow_flex.pos", "right_wrist_flex.pos", "right_wrist_roll.pos", "right_gripper.pos" ], "shape": [ 12 ] }, "observation.state": { "dtype": "float32", "names": [ "left_shoulder_pan.pos", "left_shoulder_lift.pos", "left_elbow_flex.pos", "left_wrist_flex.pos", "left_wrist_roll.pos", "left_gripper.pos", "right_shoulder_pan.pos", "right_shoulder_lift.pos", "right_elbow_flex.pos", "right_wrist_flex.pos", "right_wrist_roll.pos", "right_gripper.pos" ], "shape": [ 12 ] }, "observation.images.front": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.right": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.left": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "bi_so101_follower", "total_episodes": 1, "total_frames": 229, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:1" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "left_shoulder_pan.pos", "left_shoulder_lift.pos", "left_elbow_flex.pos", "left_wrist_flex.pos", "left_wrist_roll.pos", "left_gripper.pos", "right_shoulder_pan.pos", "right_shoulder_lift.pos", "right_elbow_flex.pos", "right_wrist_flex.pos", "right_wrist_roll.pos", "right_gripper.pos" ], "shape": [ 12 ] }, "observation.state": { "dtype": "float32", "names": [ "left_shoulder_pan.pos", "left_shoulder_lift.pos", "left_elbow_flex.pos", "left_wrist_flex.pos", "left_wrist_roll.pos", "left_gripper.pos", "right_shoulder_pan.pos", "right_shoulder_lift.pos", "right_elbow_flex.pos", "right_wrist_flex.pos", "right_wrist_roll.pos", "right_gripper.pos" ], "shape": [ 12 ] }, "observation.images.front": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.right": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.left": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
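Since the 12-dimensional `action` and `observation.state` vectors concatenate the left arm first and the right arm second (per the `names` list above), a small sketch like the following splits a frame back into per-arm dictionaries; the zero vector is only a placeholder for a real frame value.

```python
import numpy as np

ACTION_NAMES = [
    "left_shoulder_pan.pos", "left_shoulder_lift.pos", "left_elbow_flex.pos",
    "left_wrist_flex.pos", "left_wrist_roll.pos", "left_gripper.pos",
    "right_shoulder_pan.pos", "right_shoulder_lift.pos", "right_elbow_flex.pos",
    "right_wrist_flex.pos", "right_wrist_roll.pos", "right_gripper.pos",
]

action = np.zeros(12, dtype=np.float32)  # placeholder; use a real frame's action vector
left_arm = dict(zip(ACTION_NAMES[:6], action[:6]))    # left-arm joints + gripper
right_arm = dict(zip(ACTION_NAMES[6:], action[6:]))   # right-arm joints + gripper
print(list(left_arm), list(right_arm))
```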
26
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:n<1K", "format:parquet", "modality:tabular", "modality:video", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T09:10:21+00:00
2025-11-11T09:10:26+00:00
0
pr0tos/so101_take_out_gc_pb
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "so101_follower", "total_episodes": 50, "total_frames": 18444, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:50" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.state": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.images.wrist": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.front": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "so101_follower", "total_episodes": 50, "total_frames": 18444, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:50" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.state": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.images.wrist": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.front": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
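Frames from all 50 episodes are stored back-to-back in the parquet files, so `episode_index` is what groups them into episodes again. A hedged sketch, assuming the parquet data loads directly with `datasets` (the tags above list `library:datasets`):

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("pr0tos/so101_take_out_gc_pb", split="train")

# episode_index may come back as a scalar or a length-1 list depending on how the
# parquet columns were written; unwrap defensively before counting
episode_ids = [e[0] if isinstance(e, list) else e for e in ds["episode_index"]]
frames_per_episode = Counter(episode_ids)

print(len(frames_per_episode), "episodes")         # expected: 50
print(sum(frames_per_episode.values()), "frames")  # expected: 18444
```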
153
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-06T08:34:51+00:00
2025-11-11T09:11:00+00:00
0
openeurollm/evaluation_singularity_images
This dataset repository holds the Singularity images for the shared OELLM CLI workflows in [OpenEuroLLM/oellm-cli](https://github.com/OpenEuroLLM/oellm-cli/). These Singularity images are updated automatically by a GitHub Actions workflow whenever the container definition files change or the workflow itself is updated. The `oellm-cli` tool detects changes to the respective Singularity image file in this repo and downloads it to the cluster from which the user is launching the workflow, before the workflow task is scheduled.
This dataset repository holds the Singularity images for the shared OELLM CLI workflows in [OpenEuroLLM/oellm-cli](https://github.com/OpenEuroLLM/oellm-cli/). These Singularity images are updated automatically by a GitHub Actions workflow whenever the container definition files change or the workflow itself is updated. The `oellm-cli` tool detects changes to the respective Singularity image file in this repo and downloads it to the cluster from which the user is launching the workflow, before the workflow task is scheduled.
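For manual use outside `oellm-cli`, the images can be fetched like any other file in a dataset repo; the filename below is only illustrative, since the actual `.sif` names depend on the container definitions in oellm-cli.

```python
from huggingface_hub import hf_hub_download

sif_path = hf_hub_download(
    repo_id="openeurollm/evaluation_singularity_images",
    filename="evaluation.sif",  # hypothetical name; list the repo files to find the real ones
    repo_type="dataset",
)
print(sif_path)  # local cache path; e.g. `singularity exec <path> ...` on the cluster
```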
180
0
[ "license:apache-2.0", "region:us" ]
2025-08-04T09:38:22+00:00
2025-11-11T09:06:31+00:00
0
Alkatt/eval_act_so101_cube_pickplace_asn_run_1
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "so101_follower", "total_episodes": 10, "total_frames": 14317, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:10" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.state": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.images.camera1": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.camera2": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.camera3": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "so101_follower", "total_episodes": 10, "total_frames": 14317, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:10" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.state": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.images.camera1": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.camera2": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.camera3": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
29
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T09:08:41+00:00
2025-11-11T09:08:55+00:00
0
hieu1344/omy_conditional_push_v2
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "aiworker", "total_episodes": 50, "total_frames": 5151, "total_tasks": 1, "total_videos": 200, "total_chunks": 1, "chunks_size": 1000, "fps": 10, "splits": { "train": "0:50" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "object_poses": { "dtype": "float32", "names": [ "x", "y", "z", "rotation" ], "shape": [ 6, 4 ] }, "observation.images.cam_wrist": { "dtype": "video", "names": [ "height", "width", "channels" ], "shape": [ 480, 640, 3 ], "info": { "video.height": 480, "video.width": 640, "video.channels": 3, "video.codec": "libx264", "video.pix_fmt": "yuv420p" } }, "observation.images.cam_scene1": { "dtype": "video", "names": [ "height", "width", "channels" ], "shape": [ 480, 640, 3 ], "info": { "video.height": 480, "video.width": 640, "video.channels": 3, "video.codec": "libx264", "video.pix_fmt": "yuv420p" } }, "observation.images.cam_scene2": { "dtype": "video", "names": [ "height", "width", "channels" ], "shape": [ 480, 640, 3 ], "info": { "video.height": 480, "video.width": 640, "video.channels": 3, "video.codec": "libx264", "video.pix_fmt": "yuv420p" } }, "observation.images.cam_scene3": { "dtype": "video", "names": [ "height", "width", "channels" ], "shape": [ 480, 640, 3 ], "info": { "video.height": 480, "video.width": 640, "video.channels": 3, "video.codec": "libx264", "video.pix_fmt": "yuv420p" } }, "observation.state": { "dtype": "float32", "names": [ "joint1", "joint2", "joint3", "joint4", "joint5", "joint6", "rh_r1_joint", "tx", "ty", "tz", "qx", "qy", "qz", "qw" ], "shape": [ 14 ] }, "action": { "dtype": "float32", "names": [ "joint1", "joint2", "joint3", "joint4", "joint5", "joint6", "rh_r1_joint", "tx", "ty", "tz", "qx", "qy", "qz", "qw" ], "shape": [ 14 ] } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "aiworker", "total_episodes": 50, "total_frames": 5151, "total_tasks": 1, "total_videos": 200, "total_chunks": 1, "chunks_size": 1000, "fps": 10, "splits": { "train": "0:50" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "object_poses": { "dtype": "float32", "names": [ "x", "y", "z", "rotation" ], "shape": [ 6, 4 ] }, "observation.images.cam_wrist": { "dtype": "video", "names": [ "height", "width", "channels" ], "shape": [ 480, 640, 3 ], "info": { "video.height": 480, "video.width": 640, "video.channels": 3, "video.codec": "libx264", "video.pix_fmt": "yuv420p" } }, "observation.images.cam_scene1": { "dtype": "video", "names": [ "height", "width", "channels" ], "shape": [ 480, 640, 3 ], "info": { "video.height": 480, "video.width": 640, "video.channels": 3, "video.codec": "libx264", "video.pix_fmt": "yuv420p" } }, "observation.images.cam_scene2": { "dtype": "video", "names": [ "height", "width", "channels" ], "shape": [ 480, 640, 3 ], "info": { "video.height": 480, "video.width": 640, "video.channels": 3, "video.codec": "libx264", "video.pix_fmt": "yuv420p" } }, "observation.images.cam_scene3": { "dtype": "video", "names": [ "height", "width", "channels" ], "shape": [ 480, 640, 3 ], "info": { "video.height": 480, "video.width": 640, "video.channels": 3, "video.codec": "libx264", "video.pix_fmt": "yuv420p" } }, "observation.state": { "dtype": "float32", "names": [ "joint1", "joint2", "joint3", "joint4", "joint5", "joint6", "rh_r1_joint", "tx", "ty", "tz", "qx", "qy", "qz", "qw" ], "shape": [ 14 ] }, "action": { "dtype": "float32", "names": [ "joint1", "joint2", "joint3", "joint4", "joint5", "joint6", "rh_r1_joint", "tx", "ty", "tz", "qx", "qy", "qz", "qw" ], "shape": [ 14 ] } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
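The `object_poses` feature is a (6, 4) array per frame: six tracked objects, each described by the four named components `x`, `y`, `z`, `rotation`. A minimal sketch of unpacking it (the zero array stands in for a real frame value):

```python
import numpy as np

object_poses = np.zeros((6, 4), dtype=np.float32)  # placeholder; use a real frame's object_poses
for obj_id, (x, y, z, rot) in enumerate(object_poses):
    print(f"object {obj_id}: x={x:.3f} y={y:.3f} z={z:.3f} rotation={rot:.3f}")
```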
33
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot", "omy", "robotis" ]
2025-11-11T09:03:59+00:00
2025-11-11T09:04:19+00:00
0
KevinHuang/InteriorVerse
Captions of InteriorVerse's RGB images extracted with `microsoft/Florence-2-large`.
Captions of InteriorVerse's RGB images extracted with `microsoft/Florence-2-large`.
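For reference, captions like these can be reproduced with the Florence-2 captioning task in `transformers`. The sketch below follows the usage published for `microsoft/Florence-2-large`; the image path is hypothetical, and the exact task token (`<CAPTION>` vs. `<MORE_DETAILED_CAPTION>`) used for this dataset is an assumption.

```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Florence-2-large"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("interiorverse_rgb_sample.png").convert("RGB")  # hypothetical RGB render
prompt = "<CAPTION>"  # Florence-2 captioning task token (assumed)

inputs = processor(text=prompt, images=image, return_tensors="pt")
generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=256,
    num_beams=3,
)
raw = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
caption = processor.post_process_generation(raw, task=prompt, image_size=(image.width, image.height))
print(caption)
```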
28
0
[ "license:mit", "region:us" ]
2025-05-22T03:45:53+00:00
2025-11-11T09:03:02+00:00
0
DavidNguyen/LLAVA-LibMoE
## LibMoE Dataset for Training and Evaluation ### Overview This repository contains the **LLAVA-LibMoE dataset**, curated specifically for **training and evaluating Mixture of Experts (MoE)** algorithms within the **LibMoE framework**. It provides a standardized and ready-to-use dataset collection designed to support research on **efficient, scalable, and reproducible MoE models** in large language modeling. ### Key Features - **Aligned with LibMoE framework:** All data samples follow the same preprocessing and formatting standards used in LibMoE training pipelines. - **Comprehensive coverage:** Includes multiple domains and task types to enable broad generalization and benchmarking across MoE architectures. - **Zero-shot ready:** The dataset supports zero-shot evaluation and comparison of different routing and expert configurations. ### Purpose Researchers can directly use this dataset to: - Train new MoE models under the LibMoE training scripts. - Evaluate and compare expert routing algorithms using consistent benchmarks. - Reproduce results reported in the LibMoE paper and extend experiments with custom models. By integrating this dataset with the **LibMoE training library**, users can rapidly prototype, benchmark, and advance MoE-based LLM research. ### Citation Information More details can be found in our paper. If you use LibMoE, please cite it using this BibTeX: ``` @misc{nguyen2024libmoelibrarycomprehensivebenchmarking, title={LIBMoE: A Library for comprehensive benchmarking Mixture of Experts in Large Language Models}, author={Nam V. Nguyen and Thong T. Doan and Luong Tran and Van Nguyen and Quang Pham}, year={2024}, eprint={2411.00918}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2411.00918}, } ```
## LibMoE Dataset for Training and Evaluation ### Overview This repository contains the **LLAVA-LibMoE dataset**, curated specifically for **training and evaluating Mixture of Experts (MoE)** algorithms within the **LibMoE framework**. It provides a standardized and ready-to-use dataset collection designed to support research on **efficient, scalable, and reproducible MoE models** in large language modeling. ### Key Features - **Aligned with LibMoE framework:** All data samples follow the same preprocessing and formatting standards used in LibMoE training pipelines. - **Comprehensive coverage:** Includes multiple domains and task types to enable broad generalization and benchmarking across MoE architectures. - **Zero-shot ready:** The dataset supports zero-shot evaluation and comparison of different routing and expert configurations. ### Purpose Researchers can directly use this dataset to: - Train new MoE models under the LibMoE training scripts. - Evaluate and compare expert routing algorithms using consistent benchmarks. - Reproduce results reported in the LibMoE paper and extend experiments with custom models. By integrating this dataset with the **LibMoE training library**, users can rapidly prototype, benchmark, and advance MoE-based LLM research. ### Citation Information More details can be found in our paper. If you use LibMoE, please cite it using this BibTeX: ``` @misc{nguyen2024libmoelibrarycomprehensivebenchmarking, title={LIBMoE: A Library for comprehensive benchmarking Mixture of Experts in Large Language Models}, author={Nam V. Nguyen and Thong T. Doan and Luong Tran and Van Nguyen and Quang Pham}, year={2024}, eprint={2411.00918}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2411.00918}, } ```
39
0
[ "task_categories:question-answering", "task_categories:text-generation", "language:en", "size_categories:100K<n<1M", "arxiv:2411.00918", "region:us" ]
2024-08-22T18:47:31+00:00
2025-11-11T09:02:52+00:00
0
sungyub/codev-r1-verl
# CodeV-R1 VERL <div align="center"> ![Dataset](https://img.shields.io/badge/Dataset-3133_samples-blue) ![Size](https://img.shields.io/badge/Size-3.4_MB-green) ![Format](https://img.shields.io/badge/Format-VERL-orange) ![License](https://img.shields.io/badge/License-CC--BY--SA--4.0-red) </div> ## 📊 Dataset Summary This dataset contains 3,133 Verilog HDL code generation problems in VERL format with **standardized instruction formatting**, processed from zhuyaoyu/CodeV-R1-dataset. **Key Features:** - **3,133 high-quality Verilog HDL problems** (3,033 train + 100 validation) - Standardized instruction format with 5-section structure - JSON ground truth for equivalence checking via Sandbox Fusion - Ready for reinforcement learning training --- ## 🔗 Source Dataset ### Original Repository - **Repository**: [zhuyaoyu/CodeV-R1-dataset](https://huggingface.co/datasets/zhuyaoyu/CodeV-R1-dataset) - **License**: CC-BY-SA-4.0 - **Authors**: Yaoyu Zhu ### Dataset Description CodeV-R1-dataset contains Verilog hardware design problems with reference implementations and test cases. The dataset covers various digital circuit designs from simple combinational logic to complex sequential circuits. --- ## 🔄 Preprocessing Pipeline This dataset has been preprocessed and converted to the VERL (Verification and Reinforcement Learning) format with standardized instruction formatting for use in Verilog code generation tasks with reward modeling. ### Instruction Standardization **Format Version**: v1.2 The instruction format has been standardized to improve training consistency: **Standardized Structure:** 1. **Problem**: Clear problem description 2. **Constraints**: Behavioral and memory requirements 3. **Example**: Input/output examples with explanations 4. **Implementation Requirements**: Sandbox-optimized coding guidelines 5. **Signal Interface**: Complete I/O specifications table **Standardization Statistics:** - **Original samples**: 3,133 - **Successfully standardized**: 3,133 (100%) - **Format changed**: 3,133 (100%) - **Duplicates removed**: 0 **Format Evolution:** - **v1.0**: Original unstructured format - **v1.1**: Added 6-section format with separate I/O sections - **v1.2**: Optimized to 5-section format (removed redundant I/O Format sections) --- ## 💡 Preprocessing Examples ### Example 1: Instruction Standardization **Before Standardization:** ``` Design a 4-bit binary counter. Inputs: clk, rst Outputs: count[3:0] The counter should increment on positive clock edge and reset synchronously. ``` **After Standardization:** ```markdown ## Problem Design a 4-bit binary counter with synchronous reset. ## Constraints - Counter must increment on positive clock edge - Counter resets to 0 when rst is high - Counter wraps around from 15 to 0 ## Example **Input:** \`\`\` clk: 0→1 (rising edge) rst: 0 current count: 5 \`\`\` **Output:** \`\`\` count: 6 \`\`\` **Explanation:** On positive clock edge with reset inactive, counter increments from 5 to 6. ## Implementation Requirements **Code Format:** - Wrap your Verilog code in markdown code blocks: \`\`\`verilog ... \`\`\` - Use the exact module name specified in the problem - Example: \`\`\`verilog\n module counter_4bit(...) ... 
endmodule\n \`\`\` - Include all required input/output ports ## Signal Interface | Signal | Direction | Width | Description | |--------|-----------|-------|-------------| | clk | input | 1 | Clock signal | | rst | input | 1 | Synchronous reset (active high) | | count | output | 4 | Counter output | ``` **Changes Applied:** - ✓ Added structured sections (## Problem, ## Constraints, etc.) - ✓ Extracted and formatted constraints as bullet points - ✓ Added input/output examples with explanations - ✓ Included implementation requirements for proper formatting - ✓ Preserved signal interface table with specifications --- ## 📐 VERL Schema This dataset follows the standardized VERL (Verification and Reinforcement Learning) format: ```json { "data_source": "codev", "prompt": [ { "content": "## Problem\n\nDesign a 4-bit counter...\n\n## Constraints\n...", "role": "user" } ], "ability": "Verilog", "reward_model": { "style": "rule", "ground_truth": "{\"answer\": {\"code\": \"...\", \"input_port_width\": [[\"port\", 8]], ...}}" }, "extra_info": { "index": 42 } } ``` ### Field Descriptions | Field | Type | Description | |-------|------|-------------| | `data_source` | `string` | Dataset identifier (always `"codev"`) | | `prompt` | `list[dict]` | **Standardized** user prompt with structured sections | | `ability` | `string` | Task type (always `"Verilog"`) | | `reward_model.style` | `string` | Reward computation method (`"rule"` for equivalence checking) | | `reward_model.ground_truth` | `string` | **JSON string** containing reference Verilog code and port specifications | | `extra_info.index` | `int64` | Sample index in original dataset | ### Ground Truth Format The `reward_model.ground_truth` field is a **JSON string** containing: ```json { "answer": { "code": "module decoder(...); ... endmodule", "input_port_width": [["instr_op_i", 6]], "output_port_width": [["RegWrite_o", 1], ["ALUOp_o", 3]], "clock_port_polarity": [], "reset_port_polarity_sync": [] } } ``` **Port Info Format:** - `input_port_width`: List of `[port_name, width]` pairs - `output_port_width`: List of `[port_name, width]` pairs - `clock_port_polarity`: List of `[clk_name, edge]` (e.g., `["clk", "posedge"]`) - `reset_port_polarity_sync`: List of `[rst_name, edge, type]` **Usage:** ```python import json # Parse ground_truth gt_data = json.loads(sample['reward_model']['ground_truth']) # Access variant (usually named 'answer') variant = gt_data['answer'] print(f"Reference code:\n{variant['code']}") print(f"Input ports: {variant['input_port_width']}") # [['port1', 8], ...] 
print(f"Output ports: {variant['output_port_width']}") ``` --- ## 📈 Dataset Statistics ### Sample Distribution - **Total Samples**: 3,133 - **Train Split**: 3,033 (96.8%) - **Validation Split**: 100 (3.2%) - **Dataset Size**: 3.4 MB ### Data Sources All problems are from the CodeV-R1 dataset: | Source | Train | Validation | Total | |--------|-------|------------|-------| | CodeV-R1 | 3,033 | 100 | 3,133 | ### Standardization Coverage | Metric | Count | Percentage | |--------|-------|------------| | Successfully standardized | 3,133 | 100% | | Format changed | 3,133 | 100% | | Has ## Problem section | 3,133 | 100% | | Has ## Constraints section | 3,133 | 100% | | Has ## Example section | 3,133 | 100% | | Has ## Implementation Requirements | 3,133 | 100% | | Has ## Signal Interface | 3,133 | 100% | --- ## 🚀 Usage ### Loading the Dataset ```python from datasets import load_dataset # Load the full dataset dataset = load_dataset("sungyub/codev-r1-verl") # Load specific split train = load_dataset("sungyub/codev-r1-verl", split="train") validation = load_dataset("sungyub/codev-r1-verl", split="validation") # Load with streaming (recommended for iteration) dataset = load_dataset("sungyub/codev-r1-verl", streaming=True) # Preview first few examples for example in dataset['train'].take(3): print(example['prompt'][0]['content']) print("---") ``` ### Accessing Ground Truth ```python import json # Get a sample sample = dataset['train'][0] # Parse ground truth from JSON string ground_truth = json.loads(sample['reward_model']['ground_truth']) # Access variant (usually named 'answer') variant = ground_truth['answer'] print(f"Reference code:\n{variant['code']}") print(f"Input ports: {variant['input_port_width']}") print(f"Output ports: {variant['output_port_width']}") ``` ### Using with VERL ```python from datatrove.utils.reward_score import compute_score # Compute reward score for a generated Verilog solution score = compute_score( data_source="codev", solution_str=generated_solution, # Should include <think> and <answer> tags ground_truth=sample['reward_model']['ground_truth'], # Pass JSON string sandbox_fusion_url="http://localhost:8080", # Required for Verilog evaluation format_type="auto" # Auto-detect XML or GPT OSS format ) print(f"Reward score: {score}") ``` ### Integration with DataTrove ```python from datatrove.pipeline.readers import ParquetReader from datatrove.pipeline.filters import LambdaFilter from datatrove.executor import LocalPipelineExecutor pipeline = [ ParquetReader("sungyub/codev-r1-verl"), LambdaFilter(lambda doc: "sequential" in doc.text.lower()), # Filter for sequential circuits # Add more processing steps... ] executor = LocalPipelineExecutor(pipeline=pipeline, tasks=4) executor.run() ``` --- ## 📚 Citation ### Original Dataset ```bibtex @dataset{codev_r1_2024, author = {Yaoyu Zhu}, title = {CodeV-R1: Verilog Code Generation Dataset}, year = {2024}, publisher = {Hugging Face}, howpublished = {\url{https://huggingface.co/datasets/zhuyaoyu/CodeV-R1-dataset}} } ``` ### This Processed Version ```bibtex @dataset{sungyub_codev_r1_verl_2025, author = {Sungyub Kim}, title = {CodeV-R1 VERL: Standardized Verilog Code Generation for VERL Training}, year = {2025}, publisher = {Hugging Face}, howpublished = {\url{https://huggingface.co/datasets/sungyub/codev-r1-verl}} } ``` --- ## ⚖️ License - **This processed dataset**: CC-BY-SA-4.0 - **Original dataset**: CC-BY-SA-4.0 --- ## 🙏 Acknowledgments This dataset was processed using the [DataTrove](https://github.com/huggingface/datatrove) library. 
**Credits:** - Original dataset author: Yaoyu Zhu - Instruction standardization: Sungyub Kim - InstructionStandardizer implementation: DataTrove contributors **Special thanks to:** - Yaoyu Zhu for providing the original CodeV-R1 dataset - [Sandbox Fusion](https://github.com/sandbox-fusion/sandbox) team for Verilog evaluation infrastructure --- ## 📝 Version History ### v1.2 (Current - November 11, 2025) - Standardized instruction format across all 3,133 samples - Added 5-section structured format - Included explicit module naming requirements - Added both train (3,033) and validation (100) splits - Converted to VERL standard 5-field format - Changed ground_truth from binary pickle to JSON string ### v1.1 (November 4, 2025) - Experimental 6-section format (deprecated) ### v1.0 (Initial Release) - Direct conversion from zhuyaoyu/CodeV-R1-dataset --- ## 🔗 Related Resources - **Original Dataset**: [zhuyaoyu/CodeV-R1-dataset](https://huggingface.co/datasets/zhuyaoyu/CodeV-R1-dataset) - **Code Datasets**: - [sungyub/skywork-or1-code-verl](https://huggingface.co/datasets/sungyub/skywork-or1-code-verl) - Python code (14,057 samples) - [sungyub/eurus-2-code-verl](https://huggingface.co/datasets/sungyub/eurus-2-code-verl) - Multi-language code (25,276 samples) - **Math Datasets**: - [sungyub/openr1-math-verl](https://huggingface.co/datasets/sungyub/openr1-math-verl) - Math reasoning (120,387 samples) - [sungyub/math-verl-unified](https://huggingface.co/datasets/sungyub/math-verl-unified) - Unified math collection - **DataTrove Documentation**: [https://github.com/huggingface/datatrove](https://github.com/huggingface/datatrove) - **VERL Format Specification**: See VERL Schema section above --- <div align="center"> **Questions or issues?** Open an issue on the [DataTrove GitHub repository](https://github.com/huggingface/datatrove/issues) </div>
# CodeV-R1 VERL <div align="center"> ![Dataset](https://img.shields.io/badge/Dataset-3133_samples-blue) ![Size](https://img.shields.io/badge/Size-3.4_MB-green) ![Format](https://img.shields.io/badge/Format-VERL-orange) ![License](https://img.shields.io/badge/License-CC--BY--SA--4.0-red) </div> ## 📊 Dataset Summary This dataset contains 3,133 Verilog HDL code generation problems in VERL format with **standardized instruction formatting**, processed from zhuyaoyu/CodeV-R1-dataset. **Key Features:** - **3,133 high-quality Verilog HDL problems** (3,033 train + 100 validation) - Standardized instruction format with 5-section structure - JSON ground truth for equivalence checking via Sandbox Fusion - Ready for reinforcement learning training --- ## 🔗 Source Dataset ### Original Repository - **Repository**: [zhuyaoyu/CodeV-R1-dataset](https://huggingface.co/datasets/zhuyaoyu/CodeV-R1-dataset) - **License**: CC-BY-SA-4.0 - **Authors**: Yaoyu Zhu ### Dataset Description CodeV-R1-dataset contains Verilog hardware design problems with reference implementations and test cases. The dataset covers various digital circuit designs from simple combinational logic to complex sequential circuits. --- ## 🔄 Preprocessing Pipeline This dataset has been preprocessed and converted to the VERL (Verification and Reinforcement Learning) format with standardized instruction formatting for use in Verilog code generation tasks with reward modeling. ### Instruction Standardization **Format Version**: v1.2 The instruction format has been standardized to improve training consistency: **Standardized Structure:** 1. **Problem**: Clear problem description 2. **Constraints**: Behavioral and memory requirements 3. **Example**: Input/output examples with explanations 4. **Implementation Requirements**: Sandbox-optimized coding guidelines 5. **Signal Interface**: Complete I/O specifications table **Standardization Statistics:** - **Original samples**: 3,133 - **Successfully standardized**: 3,133 (100%) - **Format changed**: 3,133 (100%) - **Duplicates removed**: 0 **Format Evolution:** - **v1.0**: Original unstructured format - **v1.1**: Added 6-section format with separate I/O sections - **v1.2**: Optimized to 5-section format (removed redundant I/O Format sections) --- ## 💡 Preprocessing Examples ### Example 1: Instruction Standardization **Before Standardization:** ``` Design a 4-bit binary counter. Inputs: clk, rst Outputs: count[3:0] The counter should increment on positive clock edge and reset synchronously. ``` **After Standardization:** ```markdown ## Problem Design a 4-bit binary counter with synchronous reset. ## Constraints - Counter must increment on positive clock edge - Counter resets to 0 when rst is high - Counter wraps around from 15 to 0 ## Example **Input:** \`\`\` clk: 0→1 (rising edge) rst: 0 current count: 5 \`\`\` **Output:** \`\`\` count: 6 \`\`\` **Explanation:** On positive clock edge with reset inactive, counter increments from 5 to 6. ## Implementation Requirements **Code Format:** - Wrap your Verilog code in markdown code blocks: \`\`\`verilog ... \`\`\` - Use the exact module name specified in the problem - Example: \`\`\`verilog\n module counter_4bit(...) ... 
endmodule\n \`\`\` - Include all required input/output ports ## Signal Interface | Signal | Direction | Width | Description | |--------|-----------|-------|-------------| | clk | input | 1 | Clock signal | | rst | input | 1 | Synchronous reset (active high) | | count | output | 4 | Counter output | ``` **Changes Applied:** - ✓ Added structured sections (## Problem, ## Constraints, etc.) - ✓ Extracted and formatted constraints as bullet points - ✓ Added input/output examples with explanations - ✓ Included implementation requirements for proper formatting - ✓ Preserved signal interface table with specifications --- ## 📐 VERL Schema This dataset follows the standardized VERL (Verification and Reinforcement Learning) format: ```json { "data_source": "codev", "prompt": [ { "content": "## Problem\n\nDesign a 4-bit counter...\n\n## Constraints\n...", "role": "user" } ], "ability": "Verilog", "reward_model": { "style": "rule", "ground_truth": "{\"answer\": {\"code\": \"...\", \"input_port_width\": [[\"port\", 8]], ...}}" }, "extra_info": { "index": 42 } } ``` ### Field Descriptions | Field | Type | Description | |-------|------|-------------| | `data_source` | `string` | Dataset identifier (always `"codev"`) | | `prompt` | `list[dict]` | **Standardized** user prompt with structured sections | | `ability` | `string` | Task type (always `"Verilog"`) | | `reward_model.style` | `string` | Reward computation method (`"rule"` for equivalence checking) | | `reward_model.ground_truth` | `string` | **JSON string** containing reference Verilog code and port specifications | | `extra_info.index` | `int64` | Sample index in original dataset | ### Ground Truth Format The `reward_model.ground_truth` field is a **JSON string** containing: ```json { "answer": { "code": "module decoder(...); ... endmodule", "input_port_width": [["instr_op_i", 6]], "output_port_width": [["RegWrite_o", 1], ["ALUOp_o", 3]], "clock_port_polarity": [], "reset_port_polarity_sync": [] } } ``` **Port Info Format:** - `input_port_width`: List of `[port_name, width]` pairs - `output_port_width`: List of `[port_name, width]` pairs - `clock_port_polarity`: List of `[clk_name, edge]` (e.g., `["clk", "posedge"]`) - `reset_port_polarity_sync`: List of `[rst_name, edge, type]` **Usage:** ```python import json # Parse ground_truth gt_data = json.loads(sample['reward_model']['ground_truth']) # Access variant (usually named 'answer') variant = gt_data['answer'] print(f"Reference code:\n{variant['code']}") print(f"Input ports: {variant['input_port_width']}") # [['port1', 8], ...] 
print(f"Output ports: {variant['output_port_width']}") ``` --- ## 📈 Dataset Statistics ### Sample Distribution - **Total Samples**: 3,133 - **Train Split**: 3,033 (96.8%) - **Validation Split**: 100 (3.2%) - **Dataset Size**: 3.4 MB ### Data Sources All problems are from the CodeV-R1 dataset: | Source | Train | Validation | Total | |--------|-------|------------|-------| | CodeV-R1 | 3,033 | 100 | 3,133 | ### Standardization Coverage | Metric | Count | Percentage | |--------|-------|------------| | Successfully standardized | 3,133 | 100% | | Format changed | 3,133 | 100% | | Has ## Problem section | 3,133 | 100% | | Has ## Constraints section | 3,133 | 100% | | Has ## Example section | 3,133 | 100% | | Has ## Implementation Requirements | 3,133 | 100% | | Has ## Signal Interface | 3,133 | 100% | --- ## 🚀 Usage ### Loading the Dataset ```python from datasets import load_dataset # Load the full dataset dataset = load_dataset("sungyub/codev-r1-verl") # Load specific split train = load_dataset("sungyub/codev-r1-verl", split="train") validation = load_dataset("sungyub/codev-r1-verl", split="validation") # Load with streaming (recommended for iteration) dataset = load_dataset("sungyub/codev-r1-verl", streaming=True) # Preview first few examples for example in dataset['train'].take(3): print(example['prompt'][0]['content']) print("---") ``` ### Accessing Ground Truth ```python import json # Get a sample sample = dataset['train'][0] # Parse ground truth from JSON string ground_truth = json.loads(sample['reward_model']['ground_truth']) # Access variant (usually named 'answer') variant = ground_truth['answer'] print(f"Reference code:\n{variant['code']}") print(f"Input ports: {variant['input_port_width']}") print(f"Output ports: {variant['output_port_width']}") ``` ### Using with VERL ```python from datatrove.utils.reward_score import compute_score # Compute reward score for a generated Verilog solution score = compute_score( data_source="codev", solution_str=generated_solution, # Should include <think> and <answer> tags ground_truth=sample['reward_model']['ground_truth'], # Pass JSON string sandbox_fusion_url="http://localhost:8080", # Required for Verilog evaluation format_type="auto" # Auto-detect XML or GPT OSS format ) print(f"Reward score: {score}") ``` ### Integration with DataTrove ```python from datatrove.pipeline.readers import ParquetReader from datatrove.pipeline.filters import LambdaFilter from datatrove.executor import LocalPipelineExecutor pipeline = [ ParquetReader("sungyub/codev-r1-verl"), LambdaFilter(lambda doc: "sequential" in doc.text.lower()), # Filter for sequential circuits # Add more processing steps... ] executor = LocalPipelineExecutor(pipeline=pipeline, tasks=4) executor.run() ``` --- ## 📚 Citation ### Original Dataset ```bibtex @dataset{codev_r1_2024, author = {Yaoyu Zhu}, title = {CodeV-R1: Verilog Code Generation Dataset}, year = {2024}, publisher = {Hugging Face}, howpublished = {\url{https://huggingface.co/datasets/zhuyaoyu/CodeV-R1-dataset}} } ``` ### This Processed Version ```bibtex @dataset{sungyub_codev_r1_verl_2025, author = {Sungyub Kim}, title = {CodeV-R1 VERL: Standardized Verilog Code Generation for VERL Training}, year = {2025}, publisher = {Hugging Face}, howpublished = {\url{https://huggingface.co/datasets/sungyub/codev-r1-verl}} } ``` --- ## ⚖️ License - **This processed dataset**: CC-BY-SA-4.0 - **Original dataset**: CC-BY-SA-4.0 --- ## 🙏 Acknowledgments This dataset was processed using the [DataTrove](https://github.com/huggingface/datatrove) library. 
**Credits:** - Original dataset author: Yaoyu Zhu - Instruction standardization: Sungyub Kim - InstructionStandardizer implementation: DataTrove contributors **Special thanks to:** - Yaoyu Zhu for providing the original CodeV-R1 dataset - [Sandbox Fusion](https://github.com/sandbox-fusion/sandbox) team for Verilog evaluation infrastructure --- ## 📝 Version History ### v1.2 (Current - November 11, 2025) - Standardized instruction format across all 3,133 samples - Added 5-section structured format - Included explicit module naming requirements - Added both train (3,033) and validation (100) splits - Converted to VERL standard 5-field format - Changed ground_truth from binary pickle to JSON string ### v1.1 (November 4, 2025) - Experimental 6-section format (deprecated) ### v1.0 (Initial Release) - Direct conversion from zhuyaoyu/CodeV-R1-dataset --- ## 🔗 Related Resources - **Original Dataset**: [zhuyaoyu/CodeV-R1-dataset](https://huggingface.co/datasets/zhuyaoyu/CodeV-R1-dataset) - **Code Datasets**: - [sungyub/skywork-or1-code-verl](https://huggingface.co/datasets/sungyub/skywork-or1-code-verl) - Python code (14,057 samples) - [sungyub/eurus-2-code-verl](https://huggingface.co/datasets/sungyub/eurus-2-code-verl) - Multi-language code (25,276 samples) - **Math Datasets**: - [sungyub/openr1-math-verl](https://huggingface.co/datasets/sungyub/openr1-math-verl) - Math reasoning (120,387 samples) - [sungyub/math-verl-unified](https://huggingface.co/datasets/sungyub/math-verl-unified) - Unified math collection - **DataTrove Documentation**: [https://github.com/huggingface/datatrove](https://github.com/huggingface/datatrove) - **VERL Format Specification**: See VERL Schema section above --- <div align="center"> **Questions or issues?** Open an issue on the [DataTrove GitHub repository](https://github.com/huggingface/datatrove/issues) </div>
105
0
[ "task_categories:text-generation", "language:en", "license:cc-by-sa-4.0", "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "code", "verilog", "hdl", "hardware", "verl", "reinforcement-learning", "hardware-design" ]
2025-11-04T00:52:23+00:00
2025-11-11T09:01:35+00:00
0
amityco/tau-bench-retail-train-next-action-hard
Sampled from amityco/tau-bench-retail-train-next-action-all-step with the model Qwen/Qwen3-4B-Thinking-2507 (4 responses per sample), keeping only samples with score <= 0.25.
Sampled from amityco/tau-bench-retail-train-next-action-all-step with the model Qwen/Qwen3-4B-Thinking-2507 (4 responses per sample), keeping only samples with score <= 0.25.
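A sketch of that filtering step under stated assumptions: the source split loads with `datasets`, and the per-sample score lives in a column named `score` (check the actual schema before relying on this).

```python
from datasets import load_dataset

src = load_dataset("amityco/tau-bench-retail-train-next-action-all-step", split="train")
hard = src.filter(lambda ex: ex["score"] <= 0.25)  # keep only the hard samples
print(f"kept {len(hard)} of {len(src)} samples")
```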
47
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2025-11-07T07:29:38+00:00
2025-11-11T08:54:01+00:00
0
bridgeconn/sign-dictionary-isl
# Dataset Card for Sign Dictionary Dataset This dataset contains Indian sign language videos with one gloss per video. There are 3077 separate lexical items or glosses included. The dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0). ## Dataset Details There is a total of 2.5 hours of sign videos. ## Dataset Description - Segmented sign videos - Pose estimation data in the following formats - skeletal video - Frame-wise body landmarks detected by DWPose as a numpy array - Frame-wise body landmarks detected by MediaPipe in .pose format ## How to use ```python import webdataset as wds import numpy as np import json import tempfile import os import cv2 def main(): buffer_size = 1024 dataset = ( wds.WebDataset( "https://huggingface.co/datasets/bridgeconn/sign-dictionary-isl/resolve/main/shard_{00001..00002}-train.tar", shardshuffle=False) .shuffle(buffer_size) .decode() ) for sample in dataset: ''' Each sample contains: 'mp4', 'pose-dwpose.npz', 'pose-mediapipe.pose' and 'json' ''' # print(sample.keys()) # JSON metadata json_data = sample['json'] print(json_data['filename']) print(json_data['transcripts']) print(json_data['glosses']) # main video mp4_data = sample['mp4'] process_video(mp4_data) # dwpose results dwpose_coords = sample["pose-dwpose.npz"] frame_poses = dwpose_coords['frames'].tolist() print(f"Frames in dwpose coords: {len(frame_poses)} poses") print(f"Pose coords shape: {len(frame_poses[0][0])}") print(f"One point looks like [x,y]: {frame_poses[0][0][0]}") # mediapipe results in .pose format pose_format_data = sample["pose-mediapipe.pose"] process_poseformat(pose_format_data) break def process_poseformat(pose_format_data): from pose_format import Pose temp_file = None try: with tempfile.NamedTemporaryFile(suffix=".pose", delete=False) as tmp: tmp.write(pose_format_data) temp_file = tmp.name data_buffer = open(temp_file, "rb").read() pose = Pose.read(data_buffer) print(f"Mediapipe results from pose-format: {pose.body.data.shape}") except Exception as e: print(f"Error processing pose-format: {e}") finally: if temp_file and os.path.exists(temp_file): os.remove(temp_file) # Clean up the temporary file def process_video(mp4_data): print(f"Video bytes length: {len(mp4_data)} bytes") temp_file = None try: # Processing video from temporary file with tempfile.NamedTemporaryFile(suffix=".mp4", delete=False) as tmp: tmp.write(mp4_data) temp_file = tmp.name cap = cv2.VideoCapture(temp_file) if not cap.isOpened(): raise IOError(f"Could not open video file: {temp_file}") # Example: Get video metadata frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) fps = cap.get(cv2.CAP_PROP_FPS) width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) print(f"Video Info: {frame_count} frames, {fps:.2f} FPS, {width}x{height}") # Example: Read and display the first frame (or process as needed) ret, frame = cap.read() if ret: print(f"First frame shape: {frame.shape}, dtype: {frame.dtype}") # You can then use this frame for further processing, e.g., frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) import matplotlib.pyplot as plt plt.imshow(frame_rgb) plt.title(f"Sample First Frame") plt.show() else: print("Could not read first frame.") cap.release() except Exception as e: print(f"Error processing external MP4: {e}") finally: if temp_file and os.path.exists(temp_file): os.remove(temp_file) # Clean up the temporary file if __name__ == '__main__': main() ``` --- license: cc-by-sa-4.0 ---
# Dataset Card for Sign Dictionary Dataset This dataset contains Indian sign language videos with one gloss per video. There are 3077 separate lexical items or glosses included. The dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0). ## Dataset Details There is a total of 2.5 hours of sign videos. ## Dataset Description - Segmented sign videos - Pose estimation data in the following formats - skeletal video - Frame-wise body landmarks detected by DWPose as a numpy array - Frame-wise body landmarks detected by MediaPipe in .pose format ## How to use ```python import webdataset as wds import numpy as np import json import tempfile import os import cv2 def main(): buffer_size = 1024 dataset = ( wds.WebDataset( "https://huggingface.co/datasets/bridgeconn/sign-dictionary-isl/resolve/main/shard_{00001..00002}-train.tar", shardshuffle=False) .shuffle(buffer_size) .decode() ) for sample in dataset: ''' Each sample contains: 'mp4', 'pose-dwpose.npz', 'pose-mediapipe.pose' and 'json' ''' # print(sample.keys()) # JSON metadata json_data = sample['json'] print(json_data['filename']) print(json_data['transcripts']) print(json_data['glosses']) # main video mp4_data = sample['mp4'] process_video(mp4_data) # dwpose results dwpose_coords = sample["pose-dwpose.npz"] frame_poses = dwpose_coords['frames'].tolist() print(f"Frames in dwpose coords: {len(frame_poses)} poses") print(f"Pose coords shape: {len(frame_poses[0][0])}") print(f"One point looks like [x,y]: {frame_poses[0][0][0]}") # mediapipe results in .pose format pose_format_data = sample["pose-mediapipe.pose"] process_poseformat(pose_format_data) break def process_poseformat(pose_format_data): from pose_format import Pose temp_file = None try: with tempfile.NamedTemporaryFile(suffix=".pose", delete=False) as tmp: tmp.write(pose_format_data) temp_file = tmp.name data_buffer = open(temp_file, "rb").read() pose = Pose.read(data_buffer) print(f"Mediapipe results from pose-format: {pose.body.data.shape}") except Exception as e: print(f"Error processing pose-format: {e}") finally: if temp_file and os.path.exists(temp_file): os.remove(temp_file) # Clean up the temporary file def process_video(mp4_data): print(f"Video bytes length: {len(mp4_data)} bytes") temp_file = None try: # Processing video from temporary file with tempfile.NamedTemporaryFile(suffix=".mp4", delete=False) as tmp: tmp.write(mp4_data) temp_file = tmp.name cap = cv2.VideoCapture(temp_file) if not cap.isOpened(): raise IOError(f"Could not open video file: {temp_file}") # Example: Get video metadata frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) fps = cap.get(cv2.CAP_PROP_FPS) width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) print(f"Video Info: {frame_count} frames, {fps:.2f} FPS, {width}x{height}") # Example: Read and display the first frame (or process as needed) ret, frame = cap.read() if ret: print(f"First frame shape: {frame.shape}, dtype: {frame.dtype}") # You can then use this frame for further processing, e.g., frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) import matplotlib.pyplot as plt plt.imshow(frame_rgb) plt.title(f"Sample First Frame") plt.show() else: print("Could not read first frame.") cap.release() except Exception as e: print(f"Error processing external MP4: {e}") finally: if temp_file and os.path.exists(temp_file): os.remove(temp_file) # Clean up the temporary file if __name__ == '__main__': main() ``` --- license: cc-by-sa-4.0 ---
287
0
[ "language:ins", "license:cc-by-sa-4.0", "size_categories:1K<n<10K", "format:webdataset", "modality:text", "modality:video", "library:datasets", "library:webdataset", "library:mlcroissant", "region:us", "video", "parallel-corpus", "low-resource-languages" ]
2025-07-01T05:58:28+00:00
2025-11-11T08:52:07+00:00
0
ming326/gen3_data_unwrap_filter
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": null, "total_episodes": 210, "total_frames": 43486, "total_tasks": 2, "total_videos": 840, "total_chunks": 1, "chunks_size": 1000, "fps": 15, "splits": { "train": "0:210" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "observation.state": { "dtype": "float32", "shape": [ 8 ], "names": [ "joint_0_rad", "joint_1_rad", "joint_2_rad", "joint_3_rad", "joint_4_rad", "joint_5_rad", "joint_6_rad", "gripper_0to1" ] }, "observation.actions": { "dtype": "float32", "shape": [ 8 ], "names": [ "joint_0_rad", "joint_1_rad", "joint_2_rad", "joint_3_rad", "joint_4_rad", "joint_5_rad", "joint_6_rad", "gripper_0to1" ] }, "actions": { "dtype": "float32", "shape": [ 8 ], "names": [ "joint_0_rad", "joint_1_rad", "joint_2_rad", "joint_3_rad", "joint_4_rad", "joint_5_rad", "joint_6_rad", "gripper_0to1" ] }, "observation.images.wrist_left": { "dtype": "video", "shape": [ 224, 224, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 224, "video.width": 224, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 15, "video.channels": 3, "has_audio": false } }, "observation.images.third_people": { "dtype": "video", "shape": [ 224, 224, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 224, "video.width": 224, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 15, "video.channels": 3, "has_audio": false } }, "observation.digist1": { "dtype": "video", "shape": [ 320, 240, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 320, "video.width": 240, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 15, "video.channels": 3, "has_audio": false } }, "observation.digist2": { "dtype": "video", "shape": [ 320, 240, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 320, "video.width": 240, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 15, "video.channels": 3, "has_audio": false } }, "language_instruction": { "dtype": "string", "shape": [ 1 ] }, "is_first": { "dtype": "bool", "shape": [ 1 ] }, "is_last": { "dtype": "bool", "shape": [ 1 ] }, "is_terminal": { "dtype": "bool", "shape": [ 1 ] }, "is_episode_successful": { "dtype": "bool", "shape": [ 1 ] }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
113
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:text", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T08:39:09+00:00
2025-11-11T08:56:05+00:00
0
ruanwz/record-grab-rubber-30-2
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "so101_follower", "total_episodes": 30, "total_frames": 8120, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:30" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.state": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.images.front": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.wrist": { "dtype": "video", "shape": [ 360, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 360, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
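Note that this card uses the LeRobot v3.0 layout, in which many episodes are packed into chunked parquet and MP4 files rather than one file per episode. A minimal sketch of reading one episode under that layout, assuming the first data file holds episode 0 (the chunk and file indices are illustrative):

```python
import pandas as pd
from huggingface_hub import hf_hub_download

repo_id = "ruanwz/record-grab-rubber-30-2"  # dataset id shown above
chunk_index, file_index = 0, 0              # illustrative indices into the v3.0 layout

# Fill in the v3.0 data_path template; a single file can contain many episodes.
data_file = f"data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet"
local_file = hf_hub_download(repo_id=repo_id, filename=data_file, repo_type="dataset")

frames = pd.read_parquet(local_file)
episode_0 = frames[frames["episode_index"] == 0]  # slice one episode out of the file
print(len(episode_0), "frames in episode 0")
```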
29
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T08:40:28+00:00
2025-11-11T08:40:40+00:00
0
umetyu/pickup_tape5
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "so101_follower", "total_episodes": 3, "total_frames": 900, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:3" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.state": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.images.front": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
27
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:n<1K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T08:34:05+00:00
2025-11-11T08:34:16+00:00
0
lmms-lab-si/EASI-Leaderboard-Data
# EASI Leaderboard Data A consolidated dataset for the **EASI** Leaderboard, containing the evaluation data (inputs/prompts) actually used on the leaderboard across spatial reasoning benchmarks for VLMs. > Looking for the Spatial Intelligence leaderboard? > https://huggingface.co/spaces/lmms-lab-si/EASI-Leaderboard ## 🔎 Dataset Summary - **Question types:** **MCQ (multiple choice)** and **NA (numeric answer)**. - **File format:** TSV only. - **Usage:** These TSVs are directly consumable by the EASI repository’s evaluation protocol: https://github.com/EvolvingLMMs-Lab/EASI ## 📦 Supported Benchmarks | Benchmark | Evaluation settings | |-------------|------------------------------| | [VSI-Bench](https://huggingface.co/datasets/nyu-visionx/VSI-Bench) | VSI-Bench_origin_32frame | | [SITE-Bench](https://huggingface.co/datasets/franky-veteran/SITE-Bench) | [SiteBenchImage](https://huggingface.co/datasets/lmms-lab-si/EASI-Leaderboard-Data/resolve/main/SiteBenchImage.tsv) | | | [SiteBenchVideo_32frame](https://huggingface.co/datasets/lmms-lab-si/EASI-Leaderboard-Data/resolve/main/SiteBenchVideo.tsv) | | [MMSI-Bench](https://huggingface.co/datasets/RunsenXu/MMSI-Bench) | [MMSIBench_wo_circular](https://huggingface.co/datasets/lmms-lab-si/EASI-Leaderboard-Data/resolve/main/MMSIBench_wo_circular.tsv) | | [MindCube](https://huggingface.co/datasets/MLL-Lab/MindCube) | [MindCubeBench_tiny_raw_qa](https://huggingface.co/datasets/lmms-lab-si/EASI-Leaderboard-Data/resolve/main/MindCubeBench_tiny_raw_qa.tsv) | | | [MindCubeBench_raw_qa](https://huggingface.co/datasets/lmms-lab-si/EASI-Leaderboard-Data/resolve/main/MindCubeBench_raw_qa.tsv) | | [ViewSpatial](https://huggingface.co/datasets/lidingm/ViewSpatial-Bench) | [ViewSpatialBench](https://huggingface.co/datasets/lmms-lab-si/EASI-Leaderboard-Data/resolve/main/ViewSpatialBench.tsv) | | [EmbSpatial](https://huggingface.co/datasets/FlagEval/EmbSpatial-Bench) | [EmbSpatialBench](https://huggingface.co/datasets/lmms-lab-si/EASI-Leaderboard-Data/resolve/main/EmbSpatialBench.tsv) | > For full benchmark definitions, please refer to the official [VLMEvalKit documentation](https://aicarrier.feishu.cn/wiki/Qp7wwSzQ9iK1Y6kNUJVcr6zTnPe?table=tblsdEpLieDoCxtb&view=vewa8sGZrY). ## 🗂️ Files - All files are **TSV requests** with a **unified schema** (id, category, question, answer/gt, optional media id/path, type) for seamless use in EASI’s **eval_protocol**. - Some benchmarks (e.g., **MindCube**) need extra **prompt reconstruction/normalization**; the TSVs here provide the canonical prompts used for evaluation. - To keep things lightweight, TSVs **exclude images/videos** for all benchmarks **except** `MMSI` and `EmbSpatial`. ## 📥 Original Data - Original Benchmark images or videos are **auto-fetched at runtime** by the EASI evaluation scripts **from each benchmark’s official source/URL**.
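Because every evaluation setting ships as a single TSV with the unified request schema, a split can be inspected directly from the Hub. Below is a minimal sketch using pandas on the ViewSpatialBench file linked in the table above; no columns beyond the documented schema are assumed.

```python
import pandas as pd

# Resolve URL copied from the Supported Benchmarks table above.
url = (
    "https://huggingface.co/datasets/lmms-lab-si/EASI-Leaderboard-Data"
    "/resolve/main/ViewSpatialBench.tsv"
)

df = pd.read_csv(url, sep="\t")
print(df.shape)
print(df.columns.tolist())  # unified schema: id, category, question, answer/gt, ...
print(df.iloc[0])
```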
40
0
[ "task_categories:question-answering", "language:en", "license:apache-2.0", "size_categories:10K<n<100K", "region:us", "Spatial Intelligence", "EASI", "Benchmark", "Leaderboard" ]
2025-11-07T12:39:50+00:00
2025-11-11T08:31:15+00:00
0
s-y-a-n/sp-evaluation-results
# Senior Project Evaluation Results This repository contains the LLM evaluation results for our senior project. [Link to GitHub Repository](https://github.com/S-Y-A-N/ar-llm-browser/)
12
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
2025-11-10T14:04:36+00:00
2025-11-11T08:32:22+00:00
0
NemoSheng/ReAct_funcall_sharegpt
Data from hiyoga Function Calling
14
0
[ "license:apache-2.0", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2024-08-07T10:17:33+00:00
2025-11-11T08:31:56+00:00
0
lipaz/StudentHabitsvsAcademicPerformance
# 🧠 Student Habits vs Academic Performance This project explores how students’ daily habits — such as **study hours, sleep, social media use, part-time work, and internet quality** — affect their **exam performance**. The dataset contains **1,000 simulated student records** and combines both numerical and categorical variables, allowing for rich exploratory and regression analysis. --- ## 🎯 Research Question **How do students’ daily habits (study hours, sleep, social media use, and lifestyle factors) affect their exam performance?** --- ## 🧩 Sub-Questions (8 Total) 1️⃣ Does studying more hours per day improve exam performance? 2️⃣ Does sleeping longer lead to higher grades? 3️⃣ Does more time on social media reduce exam results? 4️⃣ Does better internet quality correlate with better exam performance? 5️⃣ Do students with part-time jobs perform differently compared to those without jobs? 6️⃣ Does gender influence the relationship between study habits and exam performance? 7️⃣ Does combining good sleep and good study habits lead to stronger results? 8️⃣ Which factor (study time, sleep, social media, or internet quality) is the strongest predictor of exam performance? --- ## 📊 Dataset Overview | Feature | Description | |----------|-------------| | `age` | Age of the student | | `gender` | Gender (Male/Female/Other) | | `study_hours_per_day` | Average study hours per day | | `sleep_hours` | Average sleep hours per night | | `social_media_hours` | Time spent daily on social media | | `part_time_job` | Whether the student works part-time | | `internet_quality` | Quality of home internet (Good/Average/Poor) | | `exam_score` | Final exam score (0 – 100) | --- ## 🔍 Exploratory Data Analysis (EDA) ### Q1: Study Hours vs Exam Score 📈 Students who studied more achieved higher scores, showing a **strong positive correlation** between study time and exam performance. ![Study Hours vs Exam Score](study_vs_exam_scatter.png) ![Average Exam Score by Study Hours](study_vs_exam_bar.png) | Study Hours | Average Exam Score | |--------------|--------------------| | 0–2 | 46.97 | | 2–4 | 65.96 | | 4–6 | 82.05 | | 6–8 | 97.66 | | 8–10 | 100.00 | --- ### Q2: Sleep Hours vs Exam Score 📊 A mild positive relationship appears between **sleep duration and exam scores** — students sleeping 7–8 hours tend to perform slightly better. Too much sleep doesn’t necessarily improve performance. ![Sleep vs Exam Score](sleep_vs_exam_scatter.png) --- ### Q3: Social Media Use vs Exam Score 📉 Clear **negative trend** — higher social media use correlates with lower exam scores. ![Social Media vs Exam Score](social_media_vs_exam_scatter.png) ![Average Exam Score by Social Media Use](social_media_vs_exam_bar.png) | Social Media Use | Average Exam Score | |------------------|--------------------| | Low (≤2 h) | 72.79 | | Moderate (2–4 h) | 68.38 | | High (4–6 h) | 64.95 | | Excessive (>6 h) | 60.57 | --- ### Q4: Internet Quality vs Exam Score 🕸️ Internet quality shows **no significant impact** — students with “Good,” “Average,” or “Poor” connections scored similarly. ![Internet Quality vs Exam Score](internet_quality_vs_exam_bar.png) | Internet Quality | Avg Exam Score | |------------------|----------------| | Average | 70.64 | | Good | 68.65 | | Poor | 69.72 | --- ### Q5: Part-Time Job vs Exam Score 💼 Having a part-time job has **minimal effect** — students with jobs scored almost the same as those without. 
![Part-Time Job vs Exam Score](part_time_job_vs_exam_bar.png) | Status | Avg Exam Score | |---------|----------------| | No Job | 69.84 | | Yes | 68.74 | --- ### Q6: Gender and Study Habits ⚖️ The relationship between study hours and exam performance is consistent across genders. ![Gender Correlation](gender_correlation.png) | Gender | Correlation (Study Hours ↔ Exam Score) | |---------|----------------------------------------| | Female | 0.824 | | Male | 0.827 | | Other | 0.818 | --- ### Q7: Combined Habits — Sleep + Study 💡 Students who both **study well and sleep well** achieve the highest average scores. ![Combined Habits vs Exam Score](combined_habits_vs_exam_bar.png) | Habit Group | Avg Exam Score | |--------------|----------------| | Good Sleep Only | 63.64 | | Good Study + Good Sleep | 86.66 | | Good Study Only | 81.94 | | Neither | 59.40 | --- ### Q8: Strongest Predictor (Regression Analysis) 🧮 Multiple Linear Regression results show: ![Regression Summary](regression_summary.png) | Predictor | Coefficient | p-value | |------------|-------------|---------| | Study Hours | **+9.59** | 0.000 | | Sleep Hours | +2.04 | 0.000 | | Social Media Hours | **–2.66** | 0.000 | | Internet Quality | –0.97 | 0.010 | 📊 Interpretation: - **Study hours per day** are the **strongest positive predictor** of exam performance. - **Social media hours** show the strongest **negative impact**. - Sleep has a smaller but positive effect, while internet quality is relatively minor. --- ## 💡 Key Insights - Study time is the **most important factor** for higher exam performance. - Sleep contributes moderately — balance is key (7–8 hours optimal). - Social media use negatively impacts academic results. - Internet quality and part-time work have only minor effects. - Combining good sleep and study habits yields the best outcomes. --- ## 🗂️ Files Included - `student_habits_performance.csv` — original dataset - `student_analysis.ipynb` — Colab notebook with full analysis - `README.md` — this file - Image files: `study_vs_exam_scatter.png`, `study_vs_exam_bar.png`, `sleep_vs_exam_scatter.png`, `social_media_vs_exam_scatter.png`, `social_media_vs_exam_bar.png`, `internet_quality_vs_exam_bar.png`, `part_time_job_vs_exam_bar.png`, `gender_correlation.png`, `combined_habits_vs_exam_bar.png`, `regression_summary.png` --- ## 📜 License Dataset and analysis created for **educational and research purposes only**.
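For readers who want to reproduce the Q8 analysis, here is a minimal sketch of the multiple linear regression with statsmodels, using the column names from the dataset overview; encoding `internet_quality` ordinally (Poor=0, Average=1, Good=2) is an assumption about how the reported coefficient was obtained.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("student_habits_performance.csv")

# Assumption: ordinal encoding of internet quality (Poor=0, Average=1, Good=2).
df["internet_quality_num"] = df["internet_quality"].map({"Poor": 0, "Average": 1, "Good": 2})

X = df[["study_hours_per_day", "sleep_hours", "social_media_hours", "internet_quality_num"]]
X = sm.add_constant(X)   # add the intercept term
y = df["exam_score"]

model = sm.OLS(y, X).fit()
print(model.summary())   # coefficients and p-values, as summarized in the table above
```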
28
0
[ "size_categories:n<1K", "format:imagefolder", "modality:image", "library:datasets", "library:mlcroissant", "region:us" ]
2025-11-11T08:03:33+00:00
2025-11-11T08:31:46+00:00
0
yinxinyuchen/pp_1111_2
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "so100_follower", "total_episodes": 10, "total_frames": 4311, "total_tasks": 1, "total_videos": 20, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:10" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.state": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.images.side_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.front_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
68
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T08:19:29+00:00
2025-11-11T08:20:56+00:00
0
ICIP/TeleSalesCorpus
# TeleSalesCorpus ## Dataset Description TeleSalesCorpus is a large-scale, high-fidelity dialogue dataset designed specifically for the domain of intelligent telemarketing. This dataset was constructed to address the core challenges that current Large Language Models (LLMs) face in goal-driven persuasive dialogue tasks, such as telemarketing. These challenges include "strategic brittleness" (difficulty in multi-turn planning) and "factual hallucination" (straying from strict factual constraints). Unlike previous works often limited by a lack of task-specific data, TeleSalesCorpus provides the first real-world-grounded dialogue dataset for this domain. It is designed to train and evaluate AI models capable of executing complex, multi-turn sales strategies. The dataset consists of 2k high-fidelity, multi-turn conversations. ## 💡 Use Cases * Model Training: Training persuasive dialogue systems for intelligent sales and customer service. * Model Evaluation: Benchmarking LLMs on their ability to maintain factual faithfulness, adhere to complex guidelines, and execute goal-oriented strategies. * Academic Research: Studying goal-driven dialogue, persuasive strategy, dynamic planning, and objection handling in conversational AI. <!-- ## Citation If you use `TeleSalesCorpus` in your research, please cite our paper: ```bibtex @inproceedings{zhang2026aisalesman, title={AI-Salesman: Design, Implementation, and Evaluation of an End-to-End Intelligent Sales System}, author={Zhang, Qingyu and Xin, Chunlei and Chen, Xuanang and Lu, Yaojie and Lin, Hongyu and Han, Xianpei and Sun, Le and Ye, Qing and Xie, Qianlong and Wang, Xingxing}, booktitle={Proceedings of the AAAI Conference on Artificial Intelligence}, year={2026} } ``` -->
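A minimal sketch of loading the corpus with the `datasets` library is shown below; the split name and per-record fields are assumptions, since the card does not document the file schema, so inspect `ds.features` before relying on specific keys.

```python
from datasets import load_dataset

# "train" is an assumed split name; adjust after inspecting the repository files.
ds = load_dataset("ICIP/TeleSalesCorpus", split="train")

print(ds)           # number of dialogues and detected columns
print(ds.features)  # actual schema of each multi-turn conversation record
print(ds[0])        # one telemarketing dialogue
```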
10
1
[ "task_categories:text-generation", "size_categories:1K<n<10K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2025-11-11T07:54:35+00:00
2025-11-11T08:20:18+00:00
1
zding08/eval_koch_data74
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "koch_follower", "total_episodes": 1, "total_frames": 5448, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:1" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.state": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.images.top": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.front": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
31
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:video", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T08:12:12+00:00
2025-11-11T08:13:14+00:00
0
techiaith/evals-speech-recognition-cy-en
# Welsh ASR Model Evaluation Transcription Dataset This resource compiles the output transcriptions from multiple Welsh Automatic Speech Recognition (ASR) models across several test sets. The data is structured hierarchically: - Splits delineate the individual test sets. - Configs within each split detail the performance (transcriptions) of a specific ASR model and its version on that set. # Metrics Results | model | test | wer | cer | | --- | --- | --- | --- | | openai/whisper-large-v2 | cymen-arfor/lleisiau-arfor | 71.8988 | 49.4113 | | openai/whisper-large-v3 | cymen-arfor/lleisiau-arfor | 78.381 | 51.8036 | | techiaith/whisper-large-v3-ft-verbatim-cy-en/25.03 | cymen-arfor/lleisiau-arfor | 26.2495 | 9.9973 | | techiaith/whisper-large-v3-ft-verbatim-cy-en/24.10 | cymen-arfor/lleisiau-arfor | 49.9122 | 20.3199 | | techiaith/kaldi-cy | cymen-arfor/lleisiau-arfor | 54.5076 | 27.4573 | | mistralai/Voxtral-Mini-3B-2507 | cymen-arfor/lleisiau-arfor | 105.7659 | 76.3593 | | techiaith/wav2vec2-btb-cv-ft-cv-cy/24.10 | cymen-arfor/lleisiau-arfor | 68.31 | 33.8774 | | openai/whisper-large-v2 | techiaith/banc-trawsgrifiadau-bangor | 77.0912 | 50.902 | | openai/whisper-large-v3 | techiaith/banc-trawsgrifiadau-bangor | 86.1427 | 57.2622 | | techiaith/whisper-large-v3-ft-verbatim-cy-en/25.03 | techiaith/banc-trawsgrifiadau-bangor | 27.9029 | 9.7063 | | techiaith/whisper-large-v3-ft-verbatim-cy-en/24.10 | techiaith/banc-trawsgrifiadau-bangor | 27.6264 | 9.7113 | | techiaith/kaldi-cy | techiaith/banc-trawsgrifiadau-bangor | 46.1966 | 21.8032 | | mistralai/Voxtral-Mini-3B-2507 | techiaith/banc-trawsgrifiadau-bangor | 116.6845 | 86.3502 | | techiaith/wav2vec2-btb-cv-ft-cv-cy/24.10 | techiaith/banc-trawsgrifiadau-bangor | 53.9631 | 21.3484 | | openai/whisper-large-v2 | techiaith/commonvoice-18-0-cy | 37.9589 | 15.3495 | | openai/whisper-large-v3 | techiaith/commonvoice-18-0-cy | 44.5461 | 20.2724 | | techiaith/whisper-large-v3-ft-verbatim-cy-en/25.03 | techiaith/commonvoice-18-0-cy | 15.5251 | 4.1814 | | techiaith/whisper-large-v3-ft-verbatim-cy-en/24.10 | techiaith/commonvoice-18-0-cy | 4.1731 | 1.2283 | | techiaith/kaldi-cy | techiaith/commonvoice-18-0-cy | 50.9511 | 17.8832 | | mistralai/Voxtral-Mini-3B-2507 | techiaith/commonvoice-18-0-cy | 70.4097 | 37.963 | | techiaith/wav2vec2-btb-cv-ft-cv-cy/24.10 | techiaith/commonvoice-18-0-cy | 38.2517 | 9.7546 | | openai/whisper-large-v2 | techiaith/commonvoice-18-0-cy-en | 38.1935 | 21.5529 | | openai/whisper-large-v3 | techiaith/commonvoice-18-0-cy-en | 39.1813 | 22.6869 | | techiaith/whisper-large-v3-ft-verbatim-cy-en/25.03 | techiaith/commonvoice-18-0-cy-en | 28.1116 | 15.1831 | | techiaith/whisper-large-v3-ft-verbatim-cy-en/24.10 | techiaith/commonvoice-18-0-cy-en | 50.7933 | 36.4355 | | techiaith/kaldi-cy | techiaith/commonvoice-18-0-cy-en | 81.3322 | 46.1831 | | mistralai/Voxtral-Mini-3B-2507 | techiaith/commonvoice-18-0-cy-en | 42.2153 | 21.7994 | | techiaith/wav2vec2-btb-cv-ft-cv-cy/24.10 | techiaith/commonvoice-18-0-cy-en | 69.696 | 36.1913 |
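The WER and CER columns above are standard edit-distance metrics computed between reference transcripts and each model's output. A minimal sketch of how such scores can be computed with the `jiwer` library follows; the toy sentences are illustrative and not taken from the test sets.

```python
import jiwer

# Toy reference/hypothesis pairs; in practice these come from the per-model
# transcription configs stored in this dataset.
references = ["mae'r tywydd yn braf heddiw", "the committee met on Tuesday"]
hypotheses = ["mae tywydd yn braf heddiw", "the comity met on tuesday"]

print("WER (%):", 100 * jiwer.wer(references, hypotheses))
print("CER (%):", 100 * jiwer.cer(references, hypotheses))
```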
116
0
[ "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2025-08-11T13:30:59+00:00
2025-11-11T08:08:36+00:00
0
yinxinyuchen/pp_1111_9
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "so100_follower", "total_episodes": 10, "total_frames": 4094, "total_tasks": 1, "total_videos": 20, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:10" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.state": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.images.side_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.front_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
63
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T08:04:04+00:00
2025-11-11T08:04:47+00:00
0
yinxinyuchen/pp_1111_8
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "so100_follower", "total_episodes": 10, "total_frames": 3813, "total_tasks": 1, "total_videos": 20, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:10" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.state": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.images.side_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.front_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
38
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T07:59:08+00:00
2025-11-11T07:59:43+00:00
0
SamsungSDS-Research/SDS-KoPub-VDR-Benchmark
## 📘 Dataset Summary **SDS KoPub-VDR** is a benchmark dataset for **Visual Document Retrieval (VDR)** in the context of <br> **Korean public documents**. It contains real-world government document images paired with natural-language <br> queries, corresponding answer pages, and ground-truth answers. The dataset is designed to evaluate AI models that <br> go beyond simple text matching, requiring comprehensive understanding of **visual layouts, tables, graphs, and images** <br> to accurately locate relevant information. The technical report is publicly available at https://arxiv.org/abs/2511.04910. ### 🔍 Key Features * **Authentic Public Data:** Includes diverse images of real Korean public documents. * **Multimodal Design:** Pairs textual queries with page-level visual evidence and answers. * **Beyond Text Matching:** Measures document-level visual understanding and retrieval capability. ### 📊 Document and Page Distribution | Source | Topic | # Docs | # Pages | Avg. Words/Page | URL | |:--------|:------|-------:|--------:|----------------:|:----| | **NAS** | Reports on diplomatic trends, international affairs | 7 | 366 | 215.45 | [NAS](https://nas.na.go.kr/nas/bbs/BNAS1011/list.do?menuNo=1800043) | | **NARS** | Reports on administrative actions, legislative cases | 125 | 8,176 | 180.22 | [NARS](https://www.nars.go.kr/) | | **NABO** | Fiscal analyses, project evaluation reports | 2 | 310 | 278.41 | [NABO](https://www.nabo.go.kr/) | | **PRISM** | Research on social, environmental, and industrial policy | 122 | 31,500 | 244.23 | [PRISM](https://www.prism.go.kr) | | **MOLEG** | Legal guides, statutory interpretations, case studies | 105 | 429 | 218.69 | [MOLEG](https://www.easylaw.go.kr) | --- ## 🧮 Dataset Creation Process The SDS-KoPub-VDR QA dataset was constructed through a multi-stage pipeline designed to ensure both realism and reliability in multimodal question–answer generation. ### 1️⃣ QA Generation We automatically generated multimodal QA pairs using **GPT-4o** and **Qwen2.5-VL-72B**, leveraging preprocessed page images, OCR text, and metadata. To ensure realistic and context-aware queries, three complementary prompting strategies were employed: - **Instruction-based prompting:** Models were instructed to generate questions as if the user had not seen the page, with answers strictly grounded in on-page evidence. - **Persona-augmented prompting:** Domain-specific personas (e.g., *Policy Maker*, *Citizen Petitioner*, *Journalist*) encouraged queries reflecting real-world user intent and reasoning. - **Dynamic few-shot prompting:** Relevant examples were dynamically retrieved from a domain-specific few-shot pool, enabling domain-consistent question styles and response formats. ### 2️⃣ Automated Validation (GPT-4.5) All generated QA pairs underwent **semantic validation using GPT-4.5**, which evaluated: - **Context relevance** – Is the question answerable based on the page content? - **Answer faithfulness** – Is the answer accurately grounded in the page evidence? - **Hallucination check** – Does the answer contain fabricated or external information? Low-quality pairs were filtered out based on these criteria, ensuring semantic consistency and factual correctness. ### 3️⃣ Human Expert Review Finally, domain researchers performed a **full manual review** using a custom annotation tool. 
Each QA pair was verified for: - Query clarity and specificity - Answer correctness and completeness - Evidence alignment and modality appropriateness #### This dataset was developed using OpenAI’s GPT-4o and GPT-4.5 models, as well as Alibaba’s Qwen series (notably Qwen2.5-VL 72B) — Built with Qwen. --- ## 📂 Dataset Structure ### Components The dataset is composed of three components: | Component | File / Folder | Description | |:-----------|:---------------|:-------------| | **Document metadata** | `SDS-KoPub-annotations.parquet` | Metadata linking each document to its constituent pages and QA pairs | | **Multimodal corpus** | `SDS-KoPub-corpus.parquet` | Page-level corpus containing multimodal inputs (image, text, OCR) | | **QA dataset** | `SDS-KoPub-QA.parquet` | Question–answer annotations and page-level representations | ### 1️⃣ Document Metadata **File:** `SDS-KoPub-annotations.parquet` | Name | Type | Description | |:-----|:-----|:------------| | `file_id` | string | Unique document identifier | | `file_name` | string | Original document name | | `down_url` | string | Public download link of the document | | `page_indices` | list[int] | List of page indices included in `SDS-KoPub-corpus` | | `query_indices` | list[int] | List of query–answer indices linked from `SDS-KoPub-QA` | | `indication_of_the_source` | string | The source and license of the work | ### 2️⃣ Corpus **File:** `SDS-KoPub-corpus.parquet` | Name | Type | Description | |:-----|:-----|:------------| | `id` | string | Unique identifier for a page | | `file_name` | string | Source document name | | `image` | PIL.Image.Image | Rendered page image (PIL format) | | `text` | string | Extracted text using PdfReader (pypdf, pypdfium2) | | `ocr` | string | Raw OCR Output | ### 3️⃣ QA dataset **File:** `SDS-KoPub-QA.parquet` | Name | Type | Description | |:-----|:-----|:------------| | `id` | string | Page ID for ground-truth evidence (not unique) | | `query` | string | Natural language question | | `answer` | string | Ground-truth answer | | `type` | string | Query modality type (`text`, `visual`, or `cross`) | | `domain` | string | Domain or category of the document | | `ground_truth` | list[] | Ground-truth evidence page indices | --- ## 📊 Statistics | Metric | Value | |:--------|:-------| | Documents | 361 | | Pages | 40,781 | | QA Pairs | 600 | | Domains | 6 (Society, Environment, Education, Industry, Diplomacy, Finance) | | Modality Types | Text / Visual / Cross-modal | ## Citation ```bibtex @misc{lee2025sdskopubvdrbenchmark, title={SDS KoPub VDR: A Benchmark Dataset for Visual Document Retrieval in Korean Public Documents}, author={Jaehoon Lee and Sohyun Kim and Wanggeun Park and Geon Lee and Seungkyung Kim and Minyoung Lee}, year={2025}, eprint={2511.04910}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2511.04910}, } ``` ## Contributor - Jaehoon Lee (jhlee19.lee@samsung.com) - Sohyun Kim (sh_sds.kim@samsung.com) - Wanggeun Park (wking.park@samsung.com) - Geun Lee (go.lee@samsung.com) - Seungkyung Kim (seungkyung.kim@samsung.com) - Minyoung Lee (miny.lee@samsung.com)
## 📘 Dataset Summary **SDS KoPub-VDR** is a benchmark dataset for **Visual Document Retrieval (VDR)** in the context of <br> **Korean public documents**. It contains real-world government document images paired with natural-language <br> queries, corresponding answer pages, and ground-truth answers. The dataset is designed to evaluate AI models that <br> go beyond simple text matching, requiring comprehensive understanding of **visual layouts, tables, graphs, and images** <br> to accurately locate relevant information. The technical report is publicly available at https://arxiv.org/abs/2511.04910. ### 🔍 Key Features * **Authentic Public Data:** Includes diverse images of real Korean public documents. * **Multimodal Design:** Pairs textual queries with page-level visual evidence and answers. * **Beyond Text Matching:** Measures document-level visual understanding and retrieval capability. ### 📊 Document and Page Distribution | Source | Topic | # Docs | # Pages | Avg. Words/Page | URL | |:--------|:------|-------:|--------:|----------------:|:----| | **NAS** | Reports on diplomatic trends, international affairs | 7 | 366 | 215.45 | [NAS](https://nas.na.go.kr/nas/bbs/BNAS1011/list.do?menuNo=1800043) | | **NARS** | Reports on administrative actions, legislative cases | 125 | 8,176 | 180.22 | [NARS](https://www.nars.go.kr/) | | **NABO** | Fiscal analyses, project evaluation reports | 2 | 310 | 278.41 | [NABO](https://www.nabo.go.kr/) | | **PRISM** | Research on social, environmental, and industrial policy | 122 | 31,500 | 244.23 | [PRISM](https://www.prism.go.kr) | | **MOLEG** | Legal guides, statutory interpretations, case studies | 105 | 429 | 218.69 | [MOLEG](https://www.easylaw.go.kr) | --- ## 🧮 Dataset Creation Process The SDS-KoPub-VDR QA dataset was constructed through a multi-stage pipeline designed to ensure both realism and reliability in multimodal question–answer generation. ### 1️⃣ QA Generation We automatically generated multimodal QA pairs using **GPT-4o** and **Qwen2.5-VL-72B**, leveraging preprocessed page images, OCR text, and metadata. To ensure realistic and context-aware queries, three complementary prompting strategies were employed: - **Instruction-based prompting:** Models were instructed to generate questions as if the user had not seen the page, with answers strictly grounded in on-page evidence. - **Persona-augmented prompting:** Domain-specific personas (e.g., *Policy Maker*, *Citizen Petitioner*, *Journalist*) encouraged queries reflecting real-world user intent and reasoning. - **Dynamic few-shot prompting:** Relevant examples were dynamically retrieved from a domain-specific few-shot pool, enabling domain-consistent question styles and response formats. ### 2️⃣ Automated Validation (GPT-4.5) All generated QA pairs underwent **semantic validation using GPT-4.5**, which evaluated: - **Context relevance** – Is the question answerable based on the page content? - **Answer faithfulness** – Is the answer accurately grounded in the page evidence? - **Hallucination check** – Does the answer contain fabricated or external information? Low-quality pairs were filtered out based on these criteria, ensuring semantic consistency and factual correctness. ### 3️⃣ Human Expert Review Finally, domain researchers performed a **full manual review** using a custom annotation tool. 
Each QA pair was verified for: - Query clarity and specificity - Answer correctness and completeness - Evidence alignment and modality appropriateness #### This dataset was developed using OpenAI’s GPT-4o and GPT-4.5 models, as well as Alibaba’s Qwen series (notably Qwen2.5-VL 72B) — Built with Qwen. --- ## 📂 Dataset Structure ### Components The dataset is composed of three components: | Component | File / Folder | Description | |:-----------|:---------------|:-------------| | **Document metadata** | `SDS-KoPub-annotations.parquet` | Metadata linking each document to its constituent pages and QA pairs | | **Multimodal corpus** | `SDS-KoPub-corpus.parquet` | Page-level corpus containing multimodal inputs (image, text, OCR) | | **QA dataset** | `SDS-KoPub-QA.parquet` | Question–answer annotations and page-level representations | ### 1️⃣ Document Metadata **File:** `SDS-KoPub-annotations.parquet` | Name | Type | Description | |:-----|:-----|:------------| | `file_id` | string | Unique document identifier | | `file_name` | string | Original document name | | `down_url` | string | Public download link of the document | | `page_indices` | list[int] | List of page indices included in `SDS-KoPub-corpus` | | `query_indices` | list[int] | List of query–answer indices linked from `SDS-KoPub-QA` | | `indication_of_the_source` | string | The source and license of the work | ### 2️⃣ Corpus **File:** `SDS-KoPub-corpus.parquet` | Name | Type | Description | |:-----|:-----|:------------| | `id` | string | Unique identifier for a page | | `file_name` | string | Source document name | | `image` | PIL.Image.Image | Rendered page image (PIL format) | | `text` | string | Extracted text using PdfReader (pypdf, pypdfium2) | | `ocr` | string | Raw OCR Output | ### 3️⃣ QA dataset **File:** `SDS-KoPub-QA.parquet` | Name | Type | Description | |:-----|:-----|:------------| | `id` | string | Page ID for ground-truth evidence (not unique) | | `query` | string | Natural language question | | `answer` | string | Ground-truth answer | | `type` | string | Query modality type (`text`, `visual`, or `cross`) | | `domain` | string | Domain or category of the document | | `ground_truth` | list[] | Ground-truth evidence page indices | --- ## 📊 Statistics | Metric | Value | |:--------|:-------| | Documents | 361 | | Pages | 40,781 | | QA Pairs | 600 | | Domains | 6 (Society, Environment, Education, Industry, Diplomacy, Finance) | | Modality Types | Text / Visual / Cross-modal | ## Citation ```bibtex @misc{lee2025sdskopubvdrbenchmark, title={SDS KoPub VDR: A Benchmark Dataset for Visual Document Retrieval in Korean Public Documents}, author={Jaehoon Lee and Sohyun Kim and Wanggeun Park and Geon Lee and Seungkyung Kim and Minyoung Lee}, year={2025}, eprint={2511.04910}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2511.04910}, } ``` ## Contributor - Jaehoon Lee (jhlee19.lee@samsung.com) - Sohyun Kim (sh_sds.kim@samsung.com) - Wanggeun Park (wking.park@samsung.com) - Geun Lee (go.lee@samsung.com) - Seungkyung Kim (seungkyung.kim@samsung.com) - Minyoung Lee (miny.lee@samsung.com)
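The three parquet components described above can be read directly with standard tooling. The sketch below assumes the files sit at the repository root of `SamsungSDS-Research/SDS-KoPub-VDR-Benchmark` and uses the QA `id` column (documented above as the evidence page ID) to look up the matching corpus page; adjust the paths if the actual layout differs, and note that the corpus file is large because it embeds the page images.

```python
# Minimal sketch, assuming the parquet files are stored at the repo root; this is
# not an official loader. The corpus parquet embeds page images, so it is large.
from huggingface_hub import hf_hub_download
import pandas as pd

repo = "SamsungSDS-Research/SDS-KoPub-VDR-Benchmark"

def load(name: str) -> pd.DataFrame:
    return pd.read_parquet(hf_hub_download(repo, name, repo_type="dataset"))

qa = load("SDS-KoPub-QA.parquet")
corpus = load("SDS-KoPub-corpus.parquet")

sample = qa.iloc[0]
print(sample["query"], "->", sample["answer"], f"[{sample['type']}/{sample['domain']}]")
print("ground-truth page indices:", list(sample["ground_truth"]))

# Look up the evidence page via the shared page ID documented in the tables above.
evidence = corpus[corpus["id"] == sample["id"]]
print(evidence[["id", "file_name"]])
```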
112
4
[ "task_categories:visual-document-retrieval", "language:ko", "language:en", "license:cc-by-sa-4.0", "size_categories:10K<n<100K", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2511.04910", "region:us", "multimodal", "retrieval", "question-answering", "public-domain", "visual-document-retrieval", "rag" ]
2025-10-29T07:30:06+00:00
2025-11-11T07:56:22+00:00
4
rohan1818/my-age-gender-dataset
[Dataset-viewer table residue removed. Recoverable summary: 34,708 rows with columns `full_path` (image path such as `train/00000.jpg`), `gender` (binary; 8,624 rows labelled 0 and 26,084 labelled 1), and `age` (integer 1–100, concentrated roughly between ages 15 and 45). The card provides no further documentation.]
5
0
[ "region:us" ]
2025-11-11T07:55:54+00:00
2025-11-11T08:04:14+00:00
0
ruanwz/record-grab-rubber-30
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "so101_follower", "total_episodes": 17, "total_frames": 4740, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:17" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.state": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.images.front": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.wrist": { "dtype": "video", "shape": [ 360, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 360, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "so101_follower", "total_episodes": 17, "total_frames": 4740, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:17" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.state": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.images.front": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.wrist": { "dtype": "video", "shape": [ 360, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 360, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
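Unlike the v2.1 cards elsewhere on this page, this is a v3.0 layout: frames from several episodes are packed into shared chunk files (`data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet`), so a single episode is recovered by filtering the per-frame `episode_index` column. The sketch below assumes the first data file is `data/chunk-000/file-000.parquet`, which is an assumption about how the 17 episodes were packed, not something stated on the card.

```python
# Illustrative sketch for the v3.0 packed layout; chunk-000/file-000 is assumed.
from huggingface_hub import hf_hub_download
import pandas as pd

repo_id = "ruanwz/record-grab-rubber-30"
chunk_file = "data/chunk-000/file-000.parquet"

df = pd.read_parquet(hf_hub_download(repo_id, chunk_file, repo_type="dataset"))

# Recover one episode by filtering the per-frame episode_index column.
episode_0 = df[df["episode_index"] == 0].sort_values("frame_index")
print(len(episode_0), "frames ->", len(episode_0) / 30, "seconds at 30 fps")
```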
33
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T07:55:10+00:00
2025-11-11T07:55:23+00:00
0
yinxinyuchen/pp_1111_7
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "so100_follower", "total_episodes": 10, "total_frames": 3886, "total_tasks": 1, "total_videos": 20, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:10" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.state": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.images.side_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.front_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "so100_follower", "total_episodes": 10, "total_frames": 3886, "total_tasks": 1, "total_videos": 20, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:10" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.state": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.images.side_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.front_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
65
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T07:54:31+00:00
2025-11-11T07:55:22+00:00
0
lipaz/levi
hello levi # yoyo yes yes ![yoy yo yo ](https://huggingface.co/datasets/lipaz/levi/resolve/main/social_media_vs_exam.png.png)
hello levi # yoyo yes yes ![yoy yo yo ](https://huggingface.co/datasets/lipaz/levi/resolve/main/social_media_vs_exam.png.png)
29
0
[ "size_categories:n<1K", "format:imagefolder", "modality:image", "library:datasets", "library:mlcroissant", "region:us" ]
2025-11-10T13:37:49+00:00
2025-11-11T07:50:59+00:00
0
1g0rrr/release4_top-val
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "sam_evt2", "total_episodes": 5, "total_frames": 4159, "total_tasks": 1, "total_videos": 20, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:5" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 14 ], "names": [ "left_shoulder_pan.pos", "left_shoulder_lift.pos", "left_elbow_flex.pos", "left_wrist_flex.pos", "left_wrist_side.pos", "left_wrist_roll.pos", "right_shoulder_pan.pos", "right_shoulder_lift.pos", "right_elbow_flex.pos", "right_wrist_flex.pos", "right_wrist_side.pos", "right_wrist_roll.pos", "left_gripper.pos", "right_gripper.pos" ] }, "observation.state": { "dtype": "float32", "shape": [ 14 ], "names": [ "left_shoulder_pan.pos", "left_shoulder_lift.pos", "left_elbow_flex.pos", "left_wrist_flex.pos", "left_wrist_side.pos", "left_wrist_roll.pos", "right_shoulder_pan.pos", "right_shoulder_lift.pos", "right_elbow_flex.pos", "right_wrist_flex.pos", "right_wrist_side.pos", "right_wrist_roll.pos", "left_gripper.pos", "right_gripper.pos" ] }, "observation.images.wrist_left": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 224, "video.width": 224, "video.codec": "av1", "video.pix_fmt": "unknown", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.wrist_right": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 224, "video.width": 224, "video.codec": "av1", "video.pix_fmt": "unknown", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.front": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 224, "video.width": 224, "video.codec": "av1", "video.pix_fmt": "unknown", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.side": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 224, "video.width": 224, "video.codec": "av1", "video.pix_fmt": "unknown", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "sam_evt2", "total_episodes": 5, "total_frames": 4159, "total_tasks": 1, "total_videos": 20, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:5" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 14 ], "names": [ "left_shoulder_pan.pos", "left_shoulder_lift.pos", "left_elbow_flex.pos", "left_wrist_flex.pos", "left_wrist_side.pos", "left_wrist_roll.pos", "right_shoulder_pan.pos", "right_shoulder_lift.pos", "right_elbow_flex.pos", "right_wrist_flex.pos", "right_wrist_side.pos", "right_wrist_roll.pos", "left_gripper.pos", "right_gripper.pos" ] }, "observation.state": { "dtype": "float32", "shape": [ 14 ], "names": [ "left_shoulder_pan.pos", "left_shoulder_lift.pos", "left_elbow_flex.pos", "left_wrist_flex.pos", "left_wrist_side.pos", "left_wrist_roll.pos", "right_shoulder_pan.pos", "right_shoulder_lift.pos", "right_elbow_flex.pos", "right_wrist_flex.pos", "right_wrist_side.pos", "right_wrist_roll.pos", "left_gripper.pos", "right_gripper.pos" ] }, "observation.images.wrist_left": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 224, "video.width": 224, "video.codec": "av1", "video.pix_fmt": "unknown", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.wrist_right": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 224, "video.width": 224, "video.codec": "av1", "video.pix_fmt": "unknown", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.front": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 224, "video.width": 224, "video.codec": "av1", "video.pix_fmt": "unknown", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.side": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 224, "video.width": 224, "video.codec": "av1", "video.pix_fmt": "unknown", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
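This card describes a bimanual setup: `action` and `observation.state` are 14-dimensional, with the first twelve entries covering the six joints of each arm and the last two the grippers. The sketch below pairs a raw vector with those names, assuming the `names` list in info.json is ordered the same way as the stored values (the usual convention for these cards).

```python
# Illustrative helper: map a raw 14-dim action/state vector to joint names.
# Assumes the value order matches the `names` list in info.json above.
import numpy as np

STATE_NAMES = [
    "left_shoulder_pan.pos", "left_shoulder_lift.pos", "left_elbow_flex.pos",
    "left_wrist_flex.pos", "left_wrist_side.pos", "left_wrist_roll.pos",
    "right_shoulder_pan.pos", "right_shoulder_lift.pos", "right_elbow_flex.pos",
    "right_wrist_flex.pos", "right_wrist_side.pos", "right_wrist_roll.pos",
    "left_gripper.pos", "right_gripper.pos",
]

def named_state(vec: np.ndarray) -> dict:
    """Return a joint-name -> value mapping for one frame's 14-dim vector."""
    assert vec.shape == (14,), "expected a 14-dim vector per the card above"
    return dict(zip(STATE_NAMES, vec.astype(float).tolist()))

# Dummy example; real vectors come from the episode parquet files.
print(named_state(np.zeros(14))["right_gripper.pos"])
```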
60
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T07:53:34+00:00
2025-11-11T07:53:41+00:00
0
yinxinyuchen/pp_1111_6
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "so100_follower", "total_episodes": 10, "total_frames": 3985, "total_tasks": 1, "total_videos": 20, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:10" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.state": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.images.side_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.front_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "so100_follower", "total_episodes": 10, "total_frames": 3985, "total_tasks": 1, "total_videos": 20, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:10" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.state": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.images.side_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.front_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
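As a hedged usage sketch (not taken from the card), one way to inspect a single episode of this dataset is to download its parquet file with `huggingface_hub` and pair the action vector with the joint names listed in the schema; the filename below is simply the first episode resolved from the `data_path` template:

```python
# Illustrative sketch: fetch one episode's parquet file and label the 6-dim
# action vector with the joint names from info.json. The filename assumes the
# standard chunk-000 layout shown in data_path above.
import pandas as pd
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="yinxinyuchen/pp_1111_6",
    repo_type="dataset",
    filename="data/chunk-000/episode_000000.parquet",
)
df = pd.read_parquet(path)

joint_names = [
    "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos",
    "wrist_flex.pos", "wrist_roll.pos", "gripper.pos",
]
print(dict(zip(joint_names, df.loc[0, "action"])))  # first commanded action
```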
61
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T07:49:08+00:00
2025-11-11T07:50:00+00:00
0
TheFactoryX/edition_0301_shi-labs-oneformer_demo-readymade
# edition_0301_shi-labs-oneformer_demo-readymade **A Readymade by TheFactoryX** ## Original Dataset [shi-labs/oneformer_demo](https://huggingface.co/datasets/shi-labs/oneformer_demo) ## Process This dataset is a "readymade" - inspired by Marcel Duchamp's concept of taking everyday objects and recontextualizing them as art. **What we did:** 1. Selected the original dataset from Hugging Face 2. Shuffled each column independently 3. Destroyed all row-wise relationships 4. Preserved structure, removed meaning **The result:** Same data. Wrong order. New meaning. No meaning. ## Purpose This is art. This is not useful. This is the point. Column relationships have been completely destroyed. The data maintains its types and values, but all semantic meaning has been removed. --- Part of the [Readymades](https://github.com/TheFactoryX/readymades) project by [TheFactoryX](https://github.com/TheFactoryX). > _"I am a machine."_ — Andy Warhol
# edition_0301_shi-labs-oneformer_demo-readymade **A Readymade by TheFactoryX** ## Original Dataset [shi-labs/oneformer_demo](https://huggingface.co/datasets/shi-labs/oneformer_demo) ## Process This dataset is a "readymade" - inspired by Marcel Duchamp's concept of taking everyday objects and recontextualizing them as art. **What we did:** 1. Selected the original dataset from Hugging Face 2. Shuffled each column independently 3. Destroyed all row-wise relationships 4. Preserved structure, removed meaning **The result:** Same data. Wrong order. New meaning. No meaning. ## Purpose This is art. This is not useful. This is the point. Column relationships have been completely destroyed. The data maintains its types and values, but all semantic meaning has been removed. --- Part of the [Readymades](https://github.com/TheFactoryX/readymades) project by [TheFactoryX](https://github.com/TheFactoryX). > _"I am a machine."_ — Andy Warhol
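For readers who want to see what "shuffling each column independently" amounts to in practice, here is a minimal, hypothetical sketch of the described process (it is not the project's actual code):

```python
# Hypothetical illustration of the readymade process: permute every column
# independently, so each column keeps its values but row-wise relationships
# are destroyed.
import numpy as np
import pandas as pd

def make_readymade(df: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    rng = np.random.default_rng(seed)
    return pd.DataFrame({col: rng.permutation(df[col].to_numpy()) for col in df.columns})

original = pd.DataFrame({"image": ["a.png", "b.png", "c.png"], "label": [0, 1, 2]})
print(make_readymade(original))  # same values per column, pairings lost
```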
3
0
[ "license:other", "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "readymades", "art", "shuffled", "duchamp" ]
2025-11-11T07:48:35+00:00
2025-11-11T07:48:38+00:00
0
1g0rrr/release4_v_top
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "sam_evt2", "total_episodes": 50, "total_frames": 46550, "total_tasks": 1, "total_videos": 200, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:50" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 14 ], "names": [ "left_shoulder_pan.pos", "left_shoulder_lift.pos", "left_elbow_flex.pos", "left_wrist_flex.pos", "left_wrist_side.pos", "left_wrist_roll.pos", "right_shoulder_pan.pos", "right_shoulder_lift.pos", "right_elbow_flex.pos", "right_wrist_flex.pos", "right_wrist_side.pos", "right_wrist_roll.pos", "left_gripper.pos", "right_gripper.pos" ] }, "observation.state": { "dtype": "float32", "shape": [ 14 ], "names": [ "left_shoulder_pan.pos", "left_shoulder_lift.pos", "left_elbow_flex.pos", "left_wrist_flex.pos", "left_wrist_side.pos", "left_wrist_roll.pos", "right_shoulder_pan.pos", "right_shoulder_lift.pos", "right_elbow_flex.pos", "right_wrist_flex.pos", "right_wrist_side.pos", "right_wrist_roll.pos", "left_gripper.pos", "right_gripper.pos" ] }, "observation.images.wrist_left": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 224, "video.width": 224, "video.codec": "av1", "video.pix_fmt": "unknown", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.wrist_right": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 224, "video.width": 224, "video.codec": "av1", "video.pix_fmt": "unknown", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.front": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 224, "video.width": 224, "video.codec": "av1", "video.pix_fmt": "unknown", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.side": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 224, "video.width": 224, "video.codec": "av1", "video.pix_fmt": "unknown", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "sam_evt2", "total_episodes": 50, "total_frames": 46550, "total_tasks": 1, "total_videos": 200, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:50" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 14 ], "names": [ "left_shoulder_pan.pos", "left_shoulder_lift.pos", "left_elbow_flex.pos", "left_wrist_flex.pos", "left_wrist_side.pos", "left_wrist_roll.pos", "right_shoulder_pan.pos", "right_shoulder_lift.pos", "right_elbow_flex.pos", "right_wrist_flex.pos", "right_wrist_side.pos", "right_wrist_roll.pos", "left_gripper.pos", "right_gripper.pos" ] }, "observation.state": { "dtype": "float32", "shape": [ 14 ], "names": [ "left_shoulder_pan.pos", "left_shoulder_lift.pos", "left_elbow_flex.pos", "left_wrist_flex.pos", "left_wrist_side.pos", "left_wrist_roll.pos", "right_shoulder_pan.pos", "right_shoulder_lift.pos", "right_elbow_flex.pos", "right_wrist_flex.pos", "right_wrist_side.pos", "right_wrist_roll.pos", "left_gripper.pos", "right_gripper.pos" ] }, "observation.images.wrist_left": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 224, "video.width": 224, "video.codec": "av1", "video.pix_fmt": "unknown", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.wrist_right": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 224, "video.width": 224, "video.codec": "av1", "video.pix_fmt": "unknown", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.front": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 224, "video.width": 224, "video.codec": "av1", "video.pix_fmt": "unknown", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.side": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 224, "video.width": 224, "video.codec": "av1", "video.pix_fmt": "unknown", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
39
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T07:45:41+00:00
2025-11-11T07:46:22+00:00
0
zding08/eval_koch_data68
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "koch_follower", "total_episodes": 1, "total_frames": 5474, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:1" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.state": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.images.top": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.front": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "koch_follower", "total_episodes": 1, "total_frames": 5474, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:1" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.state": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.images.top": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.front": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
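Since this is a LeRobot v3.0 dataset, it can presumably be consumed through the `lerobot` package's `LeRobotDataset` class rather than by hand-resolving the file templates. The sketch below is an assumption-laden illustration: the import path has moved between lerobot releases, so adapt it to your installed version.

```python
# Illustrative sketch (assumed import path; it differs across lerobot versions).
# Feature keys come from the info.json documented above.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("zding08/eval_koch_data68")
print(len(dataset))                            # total frames (5474 per the card)

frame = dataset[0]                             # dict keyed by the feature names
print(frame["observation.state"])              # 6 joint positions
print(frame["observation.images.top"].shape)   # decoded top-camera frame
```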
19
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:video", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T07:45:03+00:00
2025-11-11T07:45:41+00:00
0
MLNTeam-Unical/MoralTextManipulation
# 📊 Exploring LLMs’ Ability to Spontaneously and Conditionally Modify Moral Expressions through Text Manipulation Morality serves as the foundation of societal structure, guiding legal systems, shaping cultural values, and influencing individual self-perception. With the rise and pervasiveness of generative AI tools, and particularly Large Language Models (LLMs), concerns arise regarding how these tools capture and potentially alter moral dimensions through machine-generated text manipulation. Based on the Moral Foundations Theory, our work investigates this topic by analyzing the behavior of 12 LLMs among the most widely used open and uncensored (i.e., "abliterated") models, and leveraging human-annotated datasets used in moral-related analysis. Results have shown varying levels of alteration of moral expressions depending on the type of text modification task and moral-related conditioning prompt. In this repository, we provide resources developed through our research exploring the intrinsic ability of LLMs to spontaneously or conditionally alter the expressions of moral dimensions in a human-written text after revision, rewriting, or continuation of the original text. Our dataset contains morally-annotated texts before and after receiving LLM-based manipulation, which were produced to answer the following main research questions: - RQ1 - **Unconditioned Manipulation**: *How does model-generated text editing influence the moral expressions in the modified text?* - RQ2 - **Moral-blind Conditioned Manipulation**: *How do LLMs behave when prompting them to emphasize the strength of any moral expressions detected in a text?* - RQ3 - **Moral-informed Conditioned Manipulation**: *How do LLMs respond when prompted to amplify or weaken moral expressions associated with a specific moral dimension in a text?* #### 📢 News and Updates: - July 10, 2025: [Paper published](https://aclanthology.org/2025.acl-long.883/) - June 26, 2025: Data released 🚀 - May 16, 2025: Paper has been accepted at the ACL 2025 Conference - Main! ✨ --- # 📘 Dataset Description **MoralTextManipulation** is a large-scale dataset designed to analyze how Large Language Models (LLMs) affect moral expressions when revising, rewriting, or continuing human-authored text. Grounded in **Moral Foundations Theory (MFT)**, this dataset facilitates the study of spontaneous and conditioned moral value shifts in LLM-generated content across a range of text manipulation tasks. The dataset includes a total of **4,552,660** text samples, derived from both human-authored and LLM-manipulated sources. Specifically, it contains: - **20,980 human-annotated texts**, carefully selected and annotated with one or more moral foundations (or labeled as non-moral). - **4,531,680 LLM-manipulated texts**, generated under various manipulation tasks and moral-conditioning prompts using 12 open-source LLMs. These texts are drawn from **five major human-annotated datasets**, covering a rich variety of domains: - **Moral Foundations Twitter Corpus (MFTC)** – tweets from diverse sociopolitical contexts (e.g., Black Lives Matter, MeToo). - **Moral Foundations Reddit Corpus (MFRC)** – posts from subreddits discussing politics and everyday moral discourse. - **Moral Foundations News Corpus (MNFC)** – annotated news articles. - **Moral Events (ME)** – political news articles labeled at the event level across topics like abortion and gun control. - **EMONA** – sentences annotated for moral opinions in news, covering fine-grained political and social discourse. 
The data selection focused on examples where human moral labels were perfectly aligned with **ME2-BERT** predictions (see details below), ensuring reliable ground truth and evaluation consistency. This repository provides 3 subsets, namely RQ1, RQ2, RQ3, whose splits have the following schema: - `domain`: the name of the source dataset. - `text`: an original human-written text. - `CH_ref`, `FC_ref`, `LB_ref`, `AS_ref`, `PD_ref`, `non_moral_ref`: contain 0/1 to denote the presence of a moral dimension (MFT dichotomy), or no moral dimension, according to the original human-based annotation. - `CH_pre`, `FC_pre`, `LB_pre`, `AS_pre`, `PD_pre`: contain real-valued scores in [0,1] for each moral dimension, according to the ME2-BERT-provided annotations *before* manipulation. - `model`: the LLM in charge of the text manipulation. - `generated_content`: the resulting LLM-generated text after manipulation. - `CH_post`, `FC_post`, `LB_post`, `AS_post`, `PD_post` contain real-valued scores in [0,1] for each moral dimension, according to the ME2-BERT-provided annotations *after* manipulation. ## 🧪 Annotation Tools To assess changes in moral content, the dataset leverages the following models: - **ME2-BERT** (Best predictor) A multi-label transformer model for MFT prediction, chosen for its generalization across domains and alignment with human annotations. It provides per-dimension confidence scores in [0, 1] for all five moral foundations and detects non-moral content. - **MoralBERT** A fine-tuned BERT model for single-label moral foundation classification. Used for comparative analysis. - **MoralStrength** and **DDR (Distributed Dictionary Representation)** Lexicon- and embedding-based models grounded in the Moral Foundations Dictionary. - **Human Annotation** All base texts come from gold-standard datasets with expert labels, and ME2-BERT was benchmarked for consistency on these sources before being used to annotate generated texts. --- ## ✏️ Text Modification Tasks The dataset captures three types of LLM-driven text modifications: 1. **Revision** Light editing to correct grammar or improve clarity while preserving original structure and meaning. *→ Least impactful on moral content.* 2. **Rewriting** Restructures the original for better tone or fluency. *→ Allows significant stylistic and tonal shifts that may affect moral framing.* 3. **Continuation** Generates an extension of the original text while remaining contextually coherent. 
*→ Most impactful, potentially introducing new moral themes or intensifying existing ones.* --- ## 📊 Summary Statistics | Type | Count | |---------------------|-----------| | Total texts | 4,552,660 | | Human-annotated | 20,980 | | LLM-manipulated | 4,531,680 | | Unique LLMs | 12 | | Tasks | 3 (revise, rewrite, continue) | | Conditioning types | 3 (unconditioned, moral-blind, moral-informed) | | Source datasets | 5 | | Languages | English | --- ### 🤖 Generation Models We used 12 open LLMs from the Hugging Face Model Hub for the text modification: | Model ID (Hugging Face) | Parameters | |------------------------------------------------|------------| | [`Llama-3.1-8B-Instruct`](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) | 8B | | [`Phi-3.5-mini-instruct`](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) | 3.82B | | [`mistralai/Mistral-7B-Instruct-v0.3`](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) | 7.25B | | [`EuroLLM-9B-Instruct`](https://huggingface.co/utter-project/EuroLLM-9B-Instruct) | 9.15B | | [`Qwen/Qwen2.5-7B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) | 7.62B | | [`01-ai/Yi-1.5-9B-Chat`](https://huggingface.co/01-ai/Yi-1.5-9B-Chat) | 8.83B | | [`Falcon3-7B-Instruct`](https://huggingface.co/tiiuae/Falcon3-7B-Instruct) | 7.22B | | [`aya-expanse-8b`](https://huggingface.co/CohereLabs/aya-expanse-8b) | 8B | | [`Meta-Llama-3.1-8B-Instruct-abl`](https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated) | 8B | | [`NeuralDaredevil-8B-abl`](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) | 8B | | [`Qwen2.5-7B-Instruct-abl-v2`](https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2) | 7.62B | | [`Phi-3-mini-128k-instruct-abl-v3`](https://huggingface.co/failspy/Phi-3-mini-128k-instruct-abliterated-v3) | 3.82B | --- ## 💻 Example Usage You can load specific data subsets using `load_dataset` as follows: ### 🔹 Unconditioned Manipulation (RQ1) ```python from datasets import load_dataset # Load the RQ1 subset with revise/rewrite/continue splits and associated "highertemp" splits dataset = load_dataset("MLNTeam-Unical/MoralTextManipulation", name="unconditioned") # Access the rewrite split rq1_rewrite = dataset["rewrite"] # Access the continue (with higher-temperature) split rq1_continue_highertemp = dataset["continue_highertemp"] ``` ### 🔹 Moral-blind Conditioned Manipulation (RQ2) ```python from datasets import load_dataset # Load the RQ2 subset with revise/rewrite/continue splits dataset = load_dataset("MLNTeam-Unical/MoralTextManipulation", name="moral-blind-conditioned") # Access the revise split rq2_revise = dataset["revise"] ``` ### 🔹 Moral-informed Conditioned Manipulation (RQ3) ```python from datasets import load_dataset # Load the RQ3 subset with revise/rewrite/continue intensify/diminish splits dataset = load_dataset("MLNTeam-Unical/MoralTextManipulation", name="moral-informed-conditioned") # Access the revise, diminish split rq3_revise_diminish = dataset["revise_diminish"] # Access the continue, intensify (with higher-temperature) split rq3_continue_intensify_highertemp = dataset["continue_intensify_highertemp"] ``` --- ### 📚 Citations Please cite the following paper in any research product whose findings are based on the data provided within this repository: ```bibtex @inproceedings{greco-etal-2025-exploring, title = "Exploring {LLM}s' Ability to Spontaneously and Conditionally Modify Moral Expressions through Text Manipulation", author = "Greco, Candida Maria and La Cava, Lucio and 
Zangari, Lorenzo and Tagarelli, Andrea", booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2025", address = "Vienna, Austria", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2025.acl-long.883/", doi = "10.18653/v1/2025.acl-long.883", pages = "18047--18070", ISBN = "979-8-89176-251-0" } ``` --- ### 💬 Feedback We value your feedback! Please feel free to share your thoughts, suggestions, or any issues you encounter. Your input helps us improve and evolve the project!
# 📊 Exploring LLMs’ Ability to Spontaneously and Conditionally Modify Moral Expressions through Text Manipulation Morality serves as the foundation of societal structure, guiding legal systems, shaping cultural values, and influencing individual self-perception. With the rise and pervasiveness of generative AI tools, and particularly Large Language Models (LLMs), concerns arise regarding how these tools capture and potentially alter moral dimensions through machine-generated text manipulation. Based on the Moral Foundations Theory, our work investigates this topic by analyzing the behavior of 12 LLMs among the most widely used open and uncensored (i.e., "abliterated") models, and leveraging human-annotated datasets used in moral-related analysis. Results have shown varying levels of alteration of moral expressions depending on the type of text modification task and moral-related conditioning prompt. In this repository, we provide resources developed through our research exploring the intrinsic ability of LLMs to spontaneously or conditionally alter the expressions of moral dimensions in a human-written text after revision, rewriting, or continuation of the original text. Our dataset contains morally-annotated texts before and after receiving LLM-based manipulation, which were produced to answer the following main research questions: - RQ1 - **Unconditioned Manipulation**: *How does model-generated text editing influence the moral expressions in the modified text?* - RQ2 - **Moral-blind Conditioned Manipulation**: *How do LLMs behave when prompting them to emphasize the strength of any moral expressions detected in a text?* - RQ3 - **Moral-informed Conditioned Manipulation**: *How do LLMs respond when prompted to amplify or weaken moral expressions associated with a specific moral dimension in a text?* #### 📢 News and Updates: - July 10, 2025: [Paper published](https://aclanthology.org/2025.acl-long.883/) - June 26, 2025: Data released 🚀 - May 16, 2025: Paper has been accepted at the ACL 2025 Conference - Main! ✨ --- # 📘 Dataset Description **MoralTextManipulation** is a large-scale dataset designed to analyze how Large Language Models (LLMs) affect moral expressions when revising, rewriting, or continuing human-authored text. Grounded in **Moral Foundations Theory (MFT)**, this dataset facilitates the study of spontaneous and conditioned moral value shifts in LLM-generated content across a range of text manipulation tasks. The dataset includes a total of **4,552,660** text samples, derived from both human-authored and LLM-manipulated sources. Specifically, it contains: - **20,980 human-annotated texts**, carefully selected and annotated with one or more moral foundations (or labeled as non-moral). - **4,531,680 LLM-manipulated texts**, generated under various manipulation tasks and moral-conditioning prompts using 12 open-source LLMs. These texts are drawn from **five major human-annotated datasets**, covering a rich variety of domains: - **Moral Foundations Twitter Corpus (MFTC)** – tweets from diverse sociopolitical contexts (e.g., Black Lives Matter, MeToo). - **Moral Foundations Reddit Corpus (MFRC)** – posts from subreddits discussing politics and everyday moral discourse. - **Moral Foundations News Corpus (MNFC)** – annotated news articles. - **Moral Events (ME)** – political news articles labeled at the event level across topics like abortion and gun control. - **EMONA** – sentences annotated for moral opinions in news, covering fine-grained political and social discourse. 
The data selection focused on examples where human moral labels were perfectly aligned with **ME2-BERT** predictions (see details below), ensuring reliable ground truth and evaluation consistency. This repository provides 3 subsets, namely RQ1, RQ2, RQ3, whose splits have the following schema: - `domain`: the name of the source dataset. - `text`: an original human-written text. - `CH_ref`, `FC_ref`, `LB_ref`, `AS_ref`, `PD_ref`, `non_moral_ref`: contain 0/1 to denote the presence of a moral dimension (MFT dichotomy), or no moral dimension, according to the original human-based annotation. - `CH_pre`, `FC_pre`, `LB_pre`, `AS_pre`, `PD_pre`: contain real-valued scores in [0,1] for each moral dimension, according to the ME2-BERT-provided annotations *before* manipulation. - `model`: the LLM in charge of the text manipulation. - `generated_content`: the resulting LLM-generated text after manipulation. - `CH_post`, `FC_post`, `LB_post`, `AS_post`, `PD_post` contain real-valued scores in [0,1] for each moral dimension, according to the ME2-BERT-provided annotations *after* manipulation. ## 🧪 Annotation Tools To assess changes in moral content, the dataset leverages the following models: - **ME2-BERT** (Best predictor) A multi-label transformer model for MFT prediction, chosen for its generalization across domains and alignment with human annotations. It provides per-dimension confidence scores in [0, 1] for all five moral foundations and detects non-moral content. - **MoralBERT** A fine-tuned BERT model for single-label moral foundation classification. Used for comparative analysis. - **MoralStrength** and **DDR (Distributed Dictionary Representation)** Lexicon- and embedding-based models grounded in the Moral Foundations Dictionary. - **Human Annotation** All base texts come from gold-standard datasets with expert labels, and ME2-BERT was benchmarked for consistency on these sources before being used to annotate generated texts. --- ## ✏️ Text Modification Tasks The dataset captures three types of LLM-driven text modifications: 1. **Revision** Light editing to correct grammar or improve clarity while preserving original structure and meaning. *→ Least impactful on moral content.* 2. **Rewriting** Restructures the original for better tone or fluency. *→ Allows significant stylistic and tonal shifts that may affect moral framing.* 3. **Continuation** Generates an extension of the original text while remaining contextually coherent. 
*→ Most impactful, potentially introducing new moral themes or intensifying existing ones.* --- ## 📊 Summary Statistics | Type | Count | |---------------------|-----------| | Total texts | 4,552,660 | | Human-annotated | 20,980 | | LLM-manipulated | 4,531,680 | | Unique LLMs | 12 | | Tasks | 3 (revise, rewrite, continue) | | Conditioning types | 3 (unconditioned, moral-blind, moral-informed) | | Source datasets | 5 | | Languages | English | --- ### 🤖 Generation Models We used 12 open LLMs from the Hugging Face Model Hub for the text modification: | Model ID (Hugging Face) | Parameters | |------------------------------------------------|------------| | [`Llama-3.1-8B-Instruct`](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) | 8B | | [`Phi-3.5-mini-instruct`](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) | 3.82B | | [`mistralai/Mistral-7B-Instruct-v0.3`](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) | 7.25B | | [`EuroLLM-9B-Instruct`](https://huggingface.co/utter-project/EuroLLM-9B-Instruct) | 9.15B | | [`Qwen/Qwen2.5-7B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) | 7.62B | | [`01-ai/Yi-1.5-9B-Chat`](https://huggingface.co/01-ai/Yi-1.5-9B-Chat) | 8.83B | | [`Falcon3-7B-Instruct`](https://huggingface.co/tiiuae/Falcon3-7B-Instruct) | 7.22B | | [`aya-expanse-8b`](https://huggingface.co/CohereLabs/aya-expanse-8b) | 8B | | [`Meta-Llama-3.1-8B-Instruct-abl`](https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated) | 8B | | [`NeuralDaredevil-8B-abl`](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) | 8B | | [`Qwen2.5-7B-Instruct-abl-v2`](https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2) | 7.62B | | [`Phi-3-mini-128k-instruct-abl-v3`](https://huggingface.co/failspy/Phi-3-mini-128k-instruct-abliterated-v3) | 3.82B | --- ## 💻 Example Usage You can load specific data subsets using `load_dataset` as follows: ### 🔹 Unconditioned Manipulation (RQ1) ```python from datasets import load_dataset # Load the RQ1 subset with revise/rewrite/continue splits and associated "highertemp" splits dataset = load_dataset("MLNTeam-Unical/MoralTextManipulation", name="unconditioned") # Access the rewrite split rq1_rewrite = dataset["rewrite"] # Access the continue (with higher-temperature) split rq1_continue_highertemp = dataset["continue_highertemp"] ``` ### 🔹 Moral-blind Conditioned Manipulation (RQ2) ```python from datasets import load_dataset # Load the RQ2 subset with revise/rewrite/continue splits dataset = load_dataset("MLNTeam-Unical/MoralTextManipulation", name="moral-blind-conditioned") # Access the revise split rq2_revise = dataset["revise"] ``` ### 🔹 Moral-informed Conditioned Manipulation (RQ3) ```python from datasets import load_dataset # Load the RQ3 subset with revise/rewrite/continue intensify/diminish splits dataset = load_dataset("MLNTeam-Unical/MoralTextManipulation", name="moral-informed-conditioned") # Access the revise, diminish split rq3_revise_diminish = dataset["revise_diminish"] # Access the continue, intensify (with higher-temperature) split rq3_continue_intensify_highertemp = dataset["continue_intensify_highertemp"] ``` --- ### 📚 Citations Please cite the following paper in any research product whose findings are based on the data provided within this repository: ```bibtex @inproceedings{greco-etal-2025-exploring, title = "Exploring {LLM}s' Ability to Spontaneously and Conditionally Modify Moral Expressions through Text Manipulation", author = "Greco, Candida Maria and La Cava, Lucio and 
Zangari, Lorenzo and Tagarelli, Andrea", booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2025", address = "Vienna, Austria", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2025.acl-long.883/", doi = "10.18653/v1/2025.acl-long.883", pages = "18047--18070", ISBN = "979-8-89176-251-0" } ``` --- ### 💬 Feedback We value your feedback! Please feel free to share your thoughts, suggestions, or any issues you encounter. Your input helps us improve and evolve the project!
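Because every row carries ME2-BERT scores both before (`*_pre`) and after (`*_post`) manipulation, a natural first analysis is the per-dimension shift introduced by each model. The snippet below is an illustrative sketch built only on the column names documented above (it is not part of the original card):

```python
# Illustrative sketch: average per-dimension score shift (post - pre),
# grouped by the manipulating LLM, for one example split.
from datasets import load_dataset

split = load_dataset(
    "MLNTeam-Unical/MoralTextManipulation", name="unconditioned", split="rewrite"
)
df = split.to_pandas()

dims = ["CH", "FC", "LB", "AS", "PD"]
for d in dims:
    df[f"{d}_shift"] = df[f"{d}_post"] - df[f"{d}_pre"]

print(df.groupby("model")[[f"{d}_shift" for d in dims]].mean())
```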
143
0
[ "task_categories:text-classification", "task_categories:token-classification", "task_categories:zero-shot-classification", "task_categories:text-generation", "task_categories:sentence-similarity", "task_categories:summarization", "task_categories:translation", "task_categories:feature-extraction", "task_categories:question-answering", "language:en", "license:cc-by-nc-4.0", "size_categories:1M<n<10M", "format:csv", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2025-06-26T13:33:26+00:00
2025-11-11T07:47:25+00:00
0
weiwenliu/lerobot-serl-test
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": null, "total_episodes": 55, "total_frames": 21653, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:55" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan", "shoulder_lift", "elbow_flex", "wrist_flex", "wrist_roll", "gripper" ] }, "leader_joint_positions": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan", "shoulder_lift", "elbow_flex", "wrist_flex", "wrist_roll", "gripper" ] }, "next.reward": { "dtype": "float32", "shape": [ 1 ], "names": null }, "next.done": { "dtype": "bool", "shape": [ 1 ], "names": null }, "complementary_info.discrete_penalty": { "dtype": "float32", "shape": [ 1 ], "names": [ "discrete_penalty" ] }, "observation.images.wrist": { "dtype": "video", "shape": [ 3, 128, 128 ], "names": [ "channels", "height", "width" ], "info": { "video.height": 128, "video.width": 128, "video.codec": "h264", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.head_depth": { "dtype": "video", "shape": [ 1, 128, 128 ], "names": [ "channels", "height", "width" ], "info": { "video.height": 128, "video.width": 128, "video.codec": "h264", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.head_rgb": { "dtype": "video", "shape": [ 3, 128, 128 ], "names": [ "channels", "height", "width" ], "info": { "video.height": 128, "video.width": 128, "video.codec": "h264", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.state": { "dtype": "float32", "shape": [ 18 ], "names": null }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": null, "total_episodes": 55, "total_frames": 21653, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:55" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan", "shoulder_lift", "elbow_flex", "wrist_flex", "wrist_roll", "gripper" ] }, "leader_joint_positions": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan", "shoulder_lift", "elbow_flex", "wrist_flex", "wrist_roll", "gripper" ] }, "next.reward": { "dtype": "float32", "shape": [ 1 ], "names": null }, "next.done": { "dtype": "bool", "shape": [ 1 ], "names": null }, "complementary_info.discrete_penalty": { "dtype": "float32", "shape": [ 1 ], "names": [ "discrete_penalty" ] }, "observation.images.wrist": { "dtype": "video", "shape": [ 3, 128, 128 ], "names": [ "channels", "height", "width" ], "info": { "video.height": 128, "video.width": 128, "video.codec": "h264", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.head_depth": { "dtype": "video", "shape": [ 1, 128, 128 ], "names": [ "channels", "height", "width" ], "info": { "video.height": 128, "video.width": 128, "video.codec": "h264", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.head_rgb": { "dtype": "video", "shape": [ 3, 128, 128 ], "names": [ "channels", "height", "width" ], "info": { "video.height": 128, "video.width": 128, "video.codec": "h264", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.state": { "dtype": "float32", "shape": [ 18 ], "names": null }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
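This dataset additionally stores per-frame `next.reward`, `next.done`, and `complementary_info.discrete_penalty` signals, so a quick sanity check is to aggregate returns per episode. The sketch below is illustrative only; the filename assumes the first file of the v3.0 `data_path` layout shown above:

```python
# Illustrative sketch: per-episode return and length computed from the reward
# and done columns declared in info.json. Adjust chunk/file indices as needed.
import pandas as pd
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="weiwenliu/lerobot-serl-test",
    repo_type="dataset",
    filename="data/chunk-000/file-000.parquet",
)
df = pd.read_parquet(path)

per_episode = df.groupby("episode_index").agg(
    episode_return=("next.reward", "sum"),
    length=("frame_index", "size"),
    terminated=("next.done", "any"),
)
print(per_episode.head())
```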
281
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:video", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-10-27T11:30:16+00:00
2025-11-11T07:45:01+00:00
0
yinxinyuchen/pp_1111_5
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "so100_follower", "total_episodes": 10, "total_frames": 4045, "total_tasks": 1, "total_videos": 20, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:10" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.state": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.images.side_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.front_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "so100_follower", "total_episodes": 10, "total_frames": 4045, "total_tasks": 1, "total_videos": 20, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:10" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.state": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.images.side_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.front_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
34
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T07:43:08+00:00
2025-11-11T07:43:45+00:00
0
yinxinyuchen/pp_1111_4
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "so100_follower", "total_episodes": 10, "total_frames": 4405, "total_tasks": 1, "total_videos": 20, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:10" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.state": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.images.side_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.front_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "so100_follower", "total_episodes": 10, "total_frames": 4405, "total_tasks": 1, "total_videos": 20, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:10" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.state": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.images.side_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.front_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
61
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T07:33:31+00:00
2025-11-11T07:38:52+00:00
0
qownscks/banana_anomoly_merge
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "robot_type": "so101_follower", "codebase_version": "v3.0", "total_episodes": 50, "total_frames": 25865, "total_tasks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:50" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "fps": 30 }, "observation.state": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "fps": 30 }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null, "fps": 30 }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null, "fps": 30 }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null, "fps": 30 }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null, "fps": 30 }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null, "fps": 30 }, "observation.images.up": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.fps": 30, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "has_audio": false } }, "observation.images.wrist": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.fps": 30, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "has_audio": false } } }, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500 } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "robot_type": "so101_follower", "codebase_version": "v3.0", "total_episodes": 50, "total_frames": 25865, "total_tasks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:50" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "fps": 30 }, "observation.state": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "fps": 30 }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null, "fps": 30 }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null, "fps": 30 }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null, "fps": 30 }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null, "fps": 30 }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null, "fps": 30 }, "observation.images.up": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.fps": 30, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "has_audio": false } }, "observation.images.wrist": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.fps": 30, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "has_audio": false } } }, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500 } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
41
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T07:17:34+00:00
2025-11-11T07:35:31+00:00
0
Caoza/PhysX-3D
# PhysXNet & PhysXNet-XL <p align="left"><a href="https://arxiv.org/abs/2507.12465"><img src='https://img.shields.io/badge/arXiv-Paper-red?logo=arxiv&logoColor=white' alt='arXiv'></a> <a href='https://huggingface.co/papers/2507.12465'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Paper-blue'></a> <a href='https://physx-3d.github.io/'><img src='https://img.shields.io/badge/Project_Page-Website-green?logo=homepage&logoColor=white' alt='Project Page'></a> <a href='https://youtu.be/M5V_c0Duuy4'><img src='https://img.shields.io/youtube/views/M5V_c0Duuy4'></a> This dataset aims to bridge the critical gap in physics-annotated 3D datasets. It is the first physics-grounded 3D dataset systematically annotated across five foundational dimensions: **absolute scale**, **material**, **affordance**, **kinematics**, and **function description**. ## Dataset Details 🎉 Our paper has been accepted to **NeurIPS 2025 (Spotlight)** 🎉 We have released the code for converting our JSON files to URDF at: [urdf_gen.py](https://github.com/ziangcao0312/PhysX). ### Dataset Sources - **Repository:** [PhysX-3D](https://github.com/ziangcao0312/PhysX) - **Project page:** [PhysX-3D: Physical-Grounded 3D Asset Generation](https://physx-3d.github.io) - **Demo video:** [Video](https://youtu.be/M5V_c0Duuy4) ## Dataset Structure ``` PhysX --PhysXNet.zip ----finaljson ------103.json ------502.json ------... ----partseg ------103 --------img ----------0.png ----------1.png ----------... --------objs ----------0.obj ----------1.obj ----------... --PhysXNet-XL_bottle.zip --PhysXNet-XL_knief.zip ... ``` The physical properties are included in the JSON file. It can be converted to URDF or XML files. ###### Example.json ```python { "object_name": "Folding Knife", "category": "Tool", "dimension": "20*3*2", # Physical scaling (cm) "parts": [ { "label": 0, "name": "Blade", "material": "Stainless Steel", "density": "7.8 g/cm^3", "priority_rank": 2, # Affordance rank "Basic_description": "xxx", "Functional_description": "xxx", "Movement_description": "xxx", "Young's Modulus (GPa)": xx, "Poisson's Ratio": xx }, { "label": 1, "name": "Handle", "material": "Plastic", "density": "1.2 g/cm^3", "priority_rank": 1, "Basic_description": "xxx", "Functional_description": "xxx", "Movement_description": "xxx", "Young's Modulus (GPa)": xx, "Poisson's Ratio": xx } ], "group_info": { "0": [ # basement group index 1 # label of the part ], "1": [ # child group index [ 0 # moveable parts in child group ], "0", # parent group index [ 1, # rotation/movement direction x coordinate 0, # rotation/movement direction y coordinate 0, # rotation/movement direction z coordinate 0.0, # Revolute/Hinge location x coordinate 0.3, # Revolute/Hinge location y coordinate -0.0, # Revolute/Hinge location z coordinate 0.0, # rotation/movement min range 1.0 # rotation/movement max range ], "C" # Kinematic type (A,B,C,CB,D,E) ] } } ``` ### Kinematic Details **Rotation range:** Rotation range = rotation angle / 180. (Rotation range) [-1, 1] * 180° → (Rotation angle) [-180°, 180°]. **Movement range:** Movement range = movement length in 3D coordinates. (Movement range) [-1, 1] * Physical scaling → (Movement length) [-10cm, 10cm]. **Kinematic type:** A. No movement constraints *(water in a bottle)* B. Prismatic joints *(drawer)* C. Revolute joints (*door*) CB. Prismatic & Revolute joints (lid of the bottle) D. Hinge joint (*a hose in a shower system*) E. Rigid joint. **Note:** For CB, there are more kinematic parameters. 
```python "group_info": { "0": [ # basement group index 1 # label of the part ], "1": [ # child group index [ 0 # moveable parts in child group ], "0", # parent group index [ 1, # rotation direction x coordinate 0, # rotation direction y coordinate 0, # rotation direction z coordinate 0.0, # Revolute location x coordinate 0.3, # Revolute location y coordinate -0.0, # Revolute location z coordinate 0.0, # rotation min range 1.0 # rotation max range 1, # movement direction x coordinate 0, # movement direction y coordinate 0, # movement direction z coordinate 0.0, # 0.3, # -0.0, # 0.0, # movement min range 1.0 # movement max range ], "CB" # Kinematic type (A,B,C,CB,D,E) ] } ``` If you find our dataset useful for your work, please cite: ``` @article{cao2025physx, title={PhysX: Physical-Grounded 3D Asset Generation}, author={Cao, Ziang and Chen, Zhaoxi and Pan, Liang and Liu, Ziwei}, journal={arXiv preprint arXiv:2507.12465}, year={2025} } ``` ### Acknowledgement PhysXNet and PhysXNet-XL are based on [PartNet](https://huggingface.co/datasets/ShapeNet/PartNet-archive). We would like to express our sincere thanks to the contributors. ### License If you use PhysXNet and PhysXNet-XL, you agree to abide by the [ShapeNet terms of use](https://shapenet.org/terms). You are only allowed to redistribute the data to your research associates and colleagues provided that they first agree to be bound by these terms and conditions.
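As a rough illustration of the range conventions above (this is not the official `urdf_gen.py`), the sketch below reads one annotation file from the `finaljson` folder and converts a child group's joint parameters into physical values.

```python
import json

# Sketch only: assumes the finaljson layout shown under "Dataset Structure"
# and the range conventions stated above (rotation range x 180 = angle in deg,
# movement range x physical scaling = travel in cm).
with open("finaljson/103.json") as f:
    obj = json.load(f)

scale_cm = [float(v) for v in obj["dimension"].split("*")]  # "20*3*2" -> [20.0, 3.0, 2.0]
print("object:", obj["object_name"], "| scale (cm):", scale_cm)

for gid, entry in obj["group_info"].items():
    if len(entry) < 4:          # basement groups only carry the part label
        continue
    parts, parent, params, ktype = entry[0], entry[1], entry[2], entry[3]
    axis, origin = params[0:3], params[3:6]
    lo, hi = params[6], params[7]
    if ktype in ("C", "D"):     # revolute / hinge joint
        print(f"group {gid} ({ktype}): parts {parts}, parent {parent}, axis {axis}, "
              f"angle {lo * 180:g} to {hi * 180:g} deg about {origin}")
    elif ktype == "B":          # prismatic joint (multiply by the relevant scale_cm axis for cm)
        print(f"group {gid} (B): parts {parts}, axis {axis}, normalized travel {lo:g} to {hi:g}")
```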
893
19
[ "task_categories:image-to-3d", "language:en", "license:gpl-3.0", "size_categories:1M<n<10M", "arxiv:2507.12465", "region:us", "Physical 3D Generation", "3D Vision", "3D" ]
2025-07-08T02:58:13+00:00
2025-11-11T07:35:19+00:00
0
Cerberose/vietnamese-classification-dataset
# Vietnamese Classification Dataset The **Vietnamese Classification Dataset** is a large-scale corpus of Vietnamese text samples annotated with categorical labels across three distinct classes. It was created to support research and development in Vietnamese Natural Language Processing (NLP), particularly in text classification, topic modeling, and supervised learning tasks. With over 11 million examples, this dataset offers the scale and linguistic diversity necessary for training robust, data-driven language models and classification systems for Vietnamese. This dataset builds upon our crawled data from Vietnamese-English news sites and publicly available resources including the Vietnamese News Corpus, Vietnamese Correction v2, and OPUS Tatoeba. We thank the respective authors and maintainers for their contributions to open Vietnamese NLP research. If this dataset contributes to your research or applications, please include the following citation: ``` @misc{cerberose_vietnamese_classification_dataset_2025, title = {Vietnamese Classification Dataset}, author = {Cerberose}, year = {2025}, howpublished = {\url{https://huggingface.co/datasets/Cerberose/vietnamese-classification-dataset}} , note = {Accessed: YYYY-MM-DD} } ``` ## Source Datasets This dataset incorporates and extends data from the following public sources: ``` @misc{binhvq_news_corpus_2020, title = {Vietnamese News Corpus}, author = {Binh VQ}, year = {2020}, howpublished = {\url{https://github.com/binhvq/news-corpus}}, note = {Accessed: 2024-12-22} } @misc{bmd1905_vietnamese_correction_v2_2023, title = {Vietnamese Correction v2}, author = {BMD1905}, year = {2023}, howpublished = {\url{https://huggingface.co/bmd1905/vietnamese-correction-v2}}, note = {Accessed: 2024-12-23} } @misc{wecover_opus_tatoeba_2021, title = {OPUS Tatoeba Parallel Corpus}, author = {WeCover}, year = {2021}, howpublished = {\url{https://huggingface.co/datasets/wecover/OPUS_Tatoeba}}, note = {Accessed: 2024-12-24} } ``` This dataset is distributed under the **Apache 2.0 License**. Users should review and comply with the license terms before applying the data to academic, commercial, or derivative works. For questions, feedback, or contributions, please contact the maintainer **Cerberose** or open an issue on the dataset’s Hugging Face page. Contributions such as additional documentation, class clarification, or dataset refinement are welcome and encouraged. **Dataset URL:** [https://huggingface.co/datasets/Cerberose/vietnamese-classification-dataset](https://huggingface.co/datasets/Cerberose/vietnamese-classification-dataset)
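The card does not document the split or column names, so the loading sketch below streams the data (avoiding a full download of the 11M+ rows) and inspects the schema before using any field; the `label` key is an assumption to be confirmed.

```python
from collections import Counter
from itertools import islice
from datasets import load_dataset

# Streaming keeps memory and bandwidth low for a corpus of this size.
ds = load_dataset("Cerberose/vietnamese-classification-dataset", split="train", streaming=True)

sample = list(islice(ds, 5_000))
print(sample[0].keys())  # confirm the actual field names before relying on them

# If a `label` field exists (assumption), show its distribution over the sample.
if "label" in sample[0]:
    print(Counter(row["label"] for row in sample))
```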
33
0
[ "size_categories:10M<n<100M", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2025-10-31T16:37:51+00:00
2025-11-11T07:39:06+00:00
0
JHU-SmileLab/NaturalVoices_VC_870h
# NaturalVoices VC 870h A large voice conversion (VC) dataset curated from spontaneous, in-the-wild podcast speech as part of the **NaturalVoices** project in collaboration with 🤗[MSP Lab at CMU LTI](https://huggingface.co/Lab-MSP). This release provides the **870-hour** VC dataset and its subsets, mainly intended for training and evaluating emotion-aware voice conversion systems, though not limited to VC tasks. - 📄 Paper: *NaturalVoices: A Large-Scale, Spontaneous and Emotional Podcast Dataset for Voice Conversion* — https://arxiv.org/abs/2511.00256 \ - 🧺 Dataset collection (related subsets, e.g., 10% of data & emotional VC): https://huggingface.co/collections/JHU-SmileLab/naturalvoices-voice-conversion-datasets \ - <span style="display:inline-flex;align-items:center;gap:-6px"> <img src="https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white" height=20 alt="GitHub badge"> <span>The extensive (unfiltered) NaturalVoices dataset and the code for the data collection & curation pipeline: <a href="https://github.com/Lab-MSP/NaturalVoices">https://github.com/Lab-MSP/NaturalVoices</a></span> </span> ## Dataset Summary NaturalVoices VC compiles real-life, expressive podcast speech and provides automatic **annotations** designed for VC research (e.g., **emotion** attributes, **speaker identity**, **speech quality**, **transcripts**). The broader NaturalVoices corpus contains thousands of hours of podcast speech; this repository hosts the **VC_870h** subset. **What’s in this repo** - ~870 hours of podcast speech tailored and preprocessed for VC. - A wide range of speakers (more than 2,670), both manually & automatically annotated. - Annotations archive (`NV_VC_870h_Annotation.tar.gz`) with per-utterance annotations including: - Emotion categorical labels & dimensional attributes (valence/arousal/dominance), - Speech quality indicators, - Text, Gender, and Duration. ### Subsets | Subset | Description | Link | | --------------------------- | :------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- | | NaturalVoices_VC_870h | 870h of speech data curated for VC | This repo | | NaturalVoices_EVC | Emotion-balanced subset for Emotional Voice Conversion (EVC) | 🤗[JHU-SmileLab/NaturalVoices_EVC](https://huggingface.co/datasets/JHU-SmileLab/NaturalVoices_EVC) | | NaturalVoices_VC_01 (10%) | A smaller subset uniformly sampled from 870h (10%) | 🤗[JHU-SmileLab/NaturalVoices_VC_0.1](https://huggingface.co/datasets/JHU-SmileLab/NaturalVoices_VC_0.1) | ## How to use You can directly download the dataset using the following command: ```bash huggingface-cli download JHU-SmileLab/NaturalVoices_VC_870h --repo-type=dataset --local-dir=YOUR_LOCAL_DIR ``` *Streaming support will be available* ## Cite & Contribute If you use this dataset, please cite the paper: ```bibtex @misc{du2025naturalvoiceslargescalespontaneousemotional, title={NaturalVoices: A Large-Scale, Spontaneous and Emotional Podcast Dataset for Voice Conversion}, author={Zongyang Du and Shreeram Suresh Chandra and Ismail Rasim Ulgen and Aurosweta Mahapatra and Ali N. Salman and Carlos Busso and Berrak Sisman}, year={2025}, eprint={2511.00256}, archivePrefix={arXiv}, primaryClass={eess.AS}, url={https://arxiv.org/abs/2511.00256}, } ```
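To look at the annotations without pulling the full ~870 hours of audio, one option is to fetch only the archive named above with `huggingface_hub` and unpack it; the archive's internal layout is not documented here, so the sketch just lists its members (and assumes the file sits at the repository root).

```python
import tarfile
from huggingface_hub import hf_hub_download

# Download only the annotation archive from the dataset repo.
path = hf_hub_download(
    repo_id="JHU-SmileLab/NaturalVoices_VC_870h",
    repo_type="dataset",
    filename="NV_VC_870h_Annotation.tar.gz",  # assumed to be at the repo root
)

with tarfile.open(path, "r:gz") as tar:
    members = tar.getnames()
    print(len(members), "files; first few:", members[:5])
    tar.extractall("NV_VC_870h_annotations")
```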
71
8
[ "task_categories:audio-to-audio", "task_categories:text-to-speech", "task_categories:audio-classification", "task_categories:automatic-speech-recognition", "language:en", "arxiv:2511.00256", "region:us" ]
2025-10-27T05:07:03+00:00
2025-11-11T07:30:40+00:00
4
JHU-SmileLab/NaturalVoices_EVC
# NaturalVoices EVC A large emotional voice conversion (EVC) dataset curated from spontaneous, in-the-wild podcast speech as part of the **NaturalVoices** project in collaboration with 🤗[MSP Lab at CMU LTI](https://huggingface.co/Lab-MSP). This release provides the emotion-balanced subset of the NaturalVoices **870-hour** VC dataset and is intended for training and evaluating emotion-aware voice conversion systems, though not limited to VC tasks. - 📄 Paper: *NaturalVoices: A Large-Scale, Spontaneous and Emotional Podcast Dataset for Voice Conversion* — https://arxiv.org/abs/2511.00256 \ - 🧺 Dataset collection (related subsets, e.g., 10% of data & emotional VC): https://huggingface.co/collections/JHU-SmileLab/naturalvoices-voice-conversion-datasets \ - <span style="display:inline-flex;align-items:center;gap:-6px"> <img src="https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white" height=20 alt="GitHub badge"> <span>The extensive (unfiltered) NaturalVoices dataset and the code for the data collection & curation pipeline: <a href="https://github.com/Lab-MSP/NaturalVoices">https://github.com/Lab-MSP/NaturalVoices</a></span> </span> ## Dataset Summary NaturalVoices VC compiles real-life, expressive podcast speech and provides automatic **annotations** designed for VC research (e.g., **emotion** attributes, **speaker identity**, **speech quality**, **transcripts**). The broader NaturalVoices corpus contains thousands of hours of podcast speech; this repository hosts the **EVC** subset. **What’s in this repo** - ~370 hours of podcast speech tailored and preprocessed for EVC. - Balanced distribution of categorical emotions (Angry, Happy, Neutral, Sad). - A wide range of speakers, both manually & automatically annotated. - Annotations archive with per-utterance annotations including: - Emotion categorical labels & dimensional attributes (valence/arousal/dominance), - Speech quality indicators, - Text, Gender, and Duration. ### Subsets | Subset | Description | Link | | --------------------------- | :------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- | | NaturalVoices_VC_870h | 870h of speech data curated for VC | 🤗[JHU-SmileLab/NaturalVoices_VC_870h](https://huggingface.co/datasets/JHU-SmileLab/NaturalVoices_VC_870h) | | NaturalVoices_EVC | Emotion-balanced subset for Emotional Voice Conversion (EVC) | This repo | | NaturalVoices_VC_01 (10%) | A smaller subset uniformly sampled from 870h (10%) | 🤗[JHU-SmileLab/NaturalVoices_VC_0.1](https://huggingface.co/datasets/JHU-SmileLab/NaturalVoices_VC_0.1) | ## How to use You can directly download the dataset using the following command: ```bash huggingface-cli download JHU-SmileLab/NaturalVoices_EVC --repo-type=dataset --local-dir=YOUR_LOCAL_DIR ``` *Streaming support will be available* ## Cite & Contribute If you use this dataset, please cite the paper: ```bibtex @misc{du2025naturalvoiceslargescalespontaneousemotional, title={NaturalVoices: A Large-Scale, Spontaneous and Emotional Podcast Dataset for Voice Conversion}, author={Zongyang Du and Shreeram Suresh Chandra and Ismail Rasim Ulgen and Aurosweta Mahapatra and Ali N. Salman and Carlos Busso and Berrak Sisman}, year={2025}, eprint={2511.00256}, archivePrefix={arXiv}, primaryClass={eess.AS}, url={https://arxiv.org/abs/2511.00256}, } ```
28
3
[ "task_categories:audio-to-audio", "task_categories:text-to-speech", "task_categories:audio-classification", "language:en", "arxiv:2511.00256", "region:us" ]
2025-10-27T05:08:04+00:00
2025-11-11T07:30:19+00:00
3
yinxinyuchen/pp_1111_3
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "so100_follower", "total_episodes": 10, "total_frames": 4607, "total_tasks": 1, "total_videos": 20, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:10" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.state": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.images.side_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.front_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
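Because the `data_path` template in the metadata above is deterministic, a single episode can be fetched without cloning the whole repository; the sketch below instantiates the template for chunk 0, episode 0 and reads it with pandas.

```python
import pandas as pd
from huggingface_hub import hf_hub_download

# "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
# instantiated for chunk 0 / episode 0 of this repo.
path = hf_hub_download(
    repo_id="yinxinyuchen/pp_1111_3",
    repo_type="dataset",
    filename="data/chunk-000/episode_000000.parquet",
)

df = pd.read_parquet(path)
print(df.columns.tolist())            # action, observation.state, timestamp, ...
print(len(df), "frames in episode 0")
```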
63
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T07:27:26+00:00
2025-11-11T07:29:16+00:00
0
THUIR/MemoryBench
# MemoryBench MemoryBench aims to provide a standardized and extensible benchmark for evaluating memory and continual learning in LLM systems — encouraging future work toward more adaptive, feedback-driven, and efficient LLM systems. **Paper Link**: https://arxiv.org/abs/2510.17281 **Github**: https://github.com/LittleDinoC/MemoryBench/ ## Dataset Structure Each dataset is split into training and testing sets, with the following core fields: - `test_idx`: A unique identifier for each data item. - `input_prompt` (or `input_chat_messages`): The user input, either as a string (`input_prompt`) or as a list of chat messages (`input_chat_messages`). - `dataset_name`: The name of the dataset. - `lang`: The language of the data item. - `info`: Additional information for evaluating response quality. - `dialog`: The dialogue history, where `Qwen3-8B` serves as the assistant and `Qwen3-32B` acts as the User Simulator. - `implicit_feedback`: The simulated implicit feedback within the dialogue. Additional fields may be present depending on the dataset, such as references to the corresponding raw data entry or its subclass. These fields are for reference only and are not used in MemoryBench’s training, testing, or evaluation processes. You can run the following script to load a dataset. ```python import datasets import ast import json def convert_str_to_obj(example): for col in example.keys(): if col.startswith("dialog") or col.startswith("implicit_feedback") or col in ["input_chat_messages", "info"]: try: example[col] = ast.literal_eval(example[col]) except (ValueError, SyntaxError): example[col] = json.loads(example[col]) if "Locomo" in example["dataset_name"]: if example["info"]["category"] == 5: example["info"]["golden_answer"] = json.dumps(example["info"]["golden_answer"]) else: example["info"]["golden_answer"] = str(example["info"]["golden_answer"]) return example dataset = datasets.load_dataset("THUIR/MemoryBench", "NFCats") dataset = dataset.map(convert_str_to_obj) print(json.dumps(dataset["train"][0], indent=2)) ``` The result will be: ``` { "test_idx": 456, "input_prompt": "ironside meaning", "dataset_name": "NFCats", "lang": "en", "info": {}, "dialog": [ { "content": "Please answer the following non-factoid question in English. \nKeep your answer concise and informative, and do not exceed 200 words.\n\nQuestion: ironside meaning", "role": "user" }, { "content": "\"Ironsides\" is a term that refers to a group of individuals who are known for their strong moral character, integrity, and steadfastness. It is often used to describe people who uphold ethical values and resist corruption or wrongdoing. The term can be applied in various contexts, such as politics, business, or personal relationships, to highlight individuals who remain principled in challenging situations.\n\nIn historical contexts, \"Ironsides\" has also been used to describe soldiers or warriors who were known for their bravery and unwavering loyalty. The term conveys a sense of strength, resilience, and moral fortitude. It is not limited to a specific time period or culture, and can be used to honor those who exemplify these qualities in their actions and decisions.\n\nWhile the term is not commonly used in everyday language, it carries a positive and respectful connotation. It is often employed to recognize individuals who serve as role models through their consistent adherence to ethical standards. 
Overall, \"Ironsides\" represents a commitment to doing what is right, even in the face of adversity.", "role": "assistant" } ], "implicit_feedback": [ { "implicit_actions": [], "round": 1, "satisfaction_score": 8, "terminated": true } ] } ``` If you encounter issues related to the Hugging Face cache, you can first download the dataset locally and then load it from the local path in your code. For example: ```bash pip install -U huggingface_hub huggingface-cli download --repo-type dataset --resume-download THUIR/MemoryBench --local-dir MemoryBench ``` ```python import datasets dataset = datasets.load_dataset("./MemoryBench", "NFCats") ``` The DialSim and Locomo datasets do not have a vanilla baseline; instead, we include dialogue and implicit feedback from other baselines, stored in the `dialog_{BASELINE_NAME}` and `implicit_feedback_{BASELINE_NAME}` fields, respectively. These datasets also rely on long contexts for testing and evaluation, which are stored in the `corpus` folder. You can use the following script to load them. ```python import datasets dataset = datasets.load_dataset("THUIR/MemoryBench", data_files="corpus/DialSim-bigbang.jsonl") corpus_text = dataset["train"][0]['text'] print(corpus_text[:200]) ``` The output will be: ``` [Date: August 24, 2007, Session #1] Sheldon: So if a photon is directed through a plane with two slits in it and either slit is observed it will not go through both slits. If it’s unobserved it will, ``` ## Citation If you use MemoryBench in your research, please cite our paper: ``` @misc{ai2025memorybenchbenchmarkmemorycontinual, title={MemoryBench: A Benchmark for Memory and Continual Learning in LLM Systems}, author={Qingyao Ai and Yichen Tang and Changyue Wang and Jianming Long and Weihang Su and Yiqun Liu}, year={2025}, eprint={2510.17281}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2510.17281}, } ```
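Building on the loading snippet above, an unofficial sketch for aggregating the simulated implicit feedback could look like the following; it assumes every row's `implicit_feedback` parses into a list of dicts with `satisfaction_score` and `terminated` keys, as in the example output.

```python
import ast
import statistics
import datasets

ds = datasets.load_dataset("THUIR/MemoryBench", "NFCats", split="train")

scores, terminated = [], 0
for row in ds:
    fb = row["implicit_feedback"]
    if isinstance(fb, str):          # feedback is stored as a string; parse it first
        fb = ast.literal_eval(fb)
    for turn in fb:
        scores.append(turn["satisfaction_score"])
        terminated += int(turn["terminated"])

print(f"mean satisfaction: {statistics.mean(scores):.2f}")
print(f"terminated rounds: {terminated}/{len(scores)}")
```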
4,442
7
[ "language:en", "language:zh", "license:mit", "size_categories:1K<n<10K", "format:arrow", "modality:tabular", "modality:text", "library:datasets", "library:mlcroissant", "arxiv:2510.17281", "region:us" ]
2025-09-01T06:34:32+00:00
2025-11-11T07:17:33+00:00
1
yinxinyuchen/pp_1111_1
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "so100_follower", "total_episodes": 10, "total_frames": 4649, "total_tasks": 1, "total_videos": 20, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:10" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.state": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.images.side_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.front_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
40
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T07:11:31+00:00
2025-11-11T07:14:46+00:00
0
JHU-SmileLab/NaturalVoices_VC_0.1
# NaturalVoices VC 10% A large voice conversion (VC) dataset curated from spontaneous, in-the-wild podcast speech as part of the **NaturalVoices** project in collaboration with 🤗[MSP Lab at CMU LTI](https://huggingface.co/Lab-MSP). This release provides a 10% subset uniformly sampled from the **870-hour** VC dataset, mainly intended for training and evaluating emotion-aware voice conversion systems, though not limited to VC tasks. - 📄 Paper: *NaturalVoices: A Large-Scale, Spontaneous and Emotional Podcast Dataset for Voice Conversion* — https://arxiv.org/abs/2511.00256 \ - 🧺 Dataset collection (related subsets, e.g., 10% of data & emotional VC): https://huggingface.co/collections/JHU-SmileLab/naturalvoices-voice-conversion-datasets \ - <span style="display:inline-flex;align-items:center;gap:-6px"> <img src="https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white" height=20 alt="GitHub badge"> <span>The extensive (unfiltered) NaturalVoices dataset and the code for the data collection & curation pipeline: <a href="https://github.com/Lab-MSP/NaturalVoices">https://github.com/Lab-MSP/NaturalVoices</a></span> </span> ## Dataset Summary NaturalVoices VC compiles real-life, expressive podcast speech and provides automatic **annotations** designed for VC research (e.g., **emotion** attributes, **speaker identity**, **speech quality**, **transcripts**). The broader NaturalVoices corpus contains thousands of hours of podcast speech; this repository hosts the **VC_01** subset. **What’s in this repo** - ~90 hours of podcast speech tailored and preprocessed for VC. - A wide range of speakers, both manually & automatically annotated. - Annotations archive with per-utterance annotations including: - Emotion categorical labels & dimensional attributes (valence/arousal/dominance), - Speech quality indicators, - Text, Gender, and Duration. ### Subsets | Subset | Description | Link | | --------------------------- | :------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------- | | NaturalVoices_VC_870h | 870h of speech data curated for VC | 🤗[JHU-SmileLab/NaturalVoices_VC_870h](https://huggingface.co/datasets/JHU-SmileLab/NaturalVoices_VC_870h) | | NaturalVoices_EVC | Emotion-balanced subset for Emotional Voice Conversion (EVC) | 🤗[JHU-SmileLab/NaturalVoices_EVC](https://huggingface.co/datasets/JHU-SmileLab/NaturalVoices_EVC) | | NaturalVoices_VC_01 (10%) | A smaller subset uniformly sampled from 870h (10%) | This repo | ## How to use You can directly download the dataset using the following command: ```bash huggingface-cli download JHU-SmileLab/NaturalVoices_VC_0.1 --repo-type=dataset --local-dir=YOUR_LOCAL_DIR ``` *Streaming support will be available* ## Cite & Contribute If you use this dataset, please cite the paper: ```bibtex @misc{du2025naturalvoiceslargescalespontaneousemotional, title={NaturalVoices: A Large-Scale, Spontaneous and Emotional Podcast Dataset for Voice Conversion}, author={Zongyang Du and Shreeram Suresh Chandra and Ismail Rasim Ulgen and Aurosweta Mahapatra and Ali N. Salman and Carlos Busso and Berrak Sisman}, year={2025}, eprint={2511.00256}, archivePrefix={arXiv}, primaryClass={eess.AS}, url={https://arxiv.org/abs/2511.00256}, } ```
58
1
[ "task_categories:audio-to-audio", "task_categories:text-to-speech", "task_categories:audio-classification", "task_categories:automatic-speech-recognition", "language:en", "size_categories:10K<n<100K", "format:webdataset", "modality:audio", "modality:text", "library:datasets", "library:webdataset", "library:mlcroissant", "arxiv:2511.00256", "region:us" ]
2025-10-27T05:00:18+00:00
2025-11-11T07:14:19+00:00
0
TheFactoryX/edition_0300_cornell-movie-review-data-rotten_tomatoes-readymade
# edition_0300_cornell-movie-review-data-rotten_tomatoes-readymade **A Readymade by TheFactoryX** ## Original Dataset [cornell-movie-review-data/rotten_tomatoes](https://huggingface.co/datasets/cornell-movie-review-data/rotten_tomatoes) ## Process This dataset is a "readymade" - inspired by Marcel Duchamp's concept of taking everyday objects and recontextualizing them as art. **What we did:** 1. Selected the original dataset from Hugging Face 2. Shuffled each column independently 3. Destroyed all row-wise relationships 4. Preserved structure, removed meaning **The result:** Same data. Wrong order. New meaning. No meaning. ## Purpose This is art. This is not useful. This is the point. Column relationships have been completely destroyed. The data maintains its types and values, but all semantic meaning has been removed. --- Part of the [Readymades](https://github.com/TheFactoryX/readymades) project by [TheFactoryX](https://github.com/TheFactoryX). > _"I am a machine."_ — Andy Warhol
# edition_0300_cornell-movie-review-data-rotten_tomatoes-readymade **A Readymade by TheFactoryX** ## Original Dataset [cornell-movie-review-data/rotten_tomatoes](https://huggingface.co/datasets/cornell-movie-review-data/rotten_tomatoes) ## Process This dataset is a "readymade" - inspired by Marcel Duchamp's concept of taking everyday objects and recontextualizing them as art. **What we did:** 1. Selected the original dataset from Hugging Face 2. Shuffled each column independently 3. Destroyed all row-wise relationships 4. Preserved structure, removed meaning **The result:** Same data. Wrong order. New meaning. No meaning. ## Purpose This is art. This is not useful. This is the point. Column relationships have been completely destroyed. The data maintains its types and values, but all semantic meaning has been removed. --- Part of the [Readymades](https://github.com/TheFactoryX/readymades) project by [TheFactoryX](https://github.com/TheFactoryX). > _"I am a machine."_ — Andy Warhol
2
0
[ "license:other", "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "readymades", "art", "shuffled", "duchamp" ]
2025-11-11T07:13:05+00:00
2025-11-11T07:13:07+00:00
0
TiaDay/books-to-scrape-page1
# Books to Scrape – Page 1 ## Dataset Summary Book records scraped from the first page of the **Books to Scrape** demo site. I created this dataset for a class assignment to practise web scraping, pandas, and publishing a dataset to the Hugging Face Hub. ## Data Collection - Source: https://books.toscrape.com/ (public test site for scraping practice) - Method: `requests.get("https://books.toscrape.com/catalogue/page-1.html")` - Parsed with `BeautifulSoup`, selecting each `<article class="product_pod">`. ## Columns Original scraped fields: - `title` - `price_text` - `availability_text` - `rating_text` Augmented / derived fields: - `price_gbp` – numeric price in GBP - `in_stock` – True if availability contains “In stock” - `rating_numeric` – star rating mapped to 1–5 - `is_expensive` – True if `price_gbp` > 30 - `title_length` – length of the title string ## Intended Use & Limitations - For **educational use only** (learning scraping and data processing). - Small demo dataset from a fake site – **not** suitable for real analysis or production models. ## Licensing & Compliance - Data originates from the public “Books to Scrape” demo website. - No personal or sensitive information is included and no login was required. - Processed CSV and this documentation are released under the **MIT** license.
# Books to Scrape – Page 1 ## Dataset Summary Book records scraped from the first page of the **Books to Scrape** demo site. I created this dataset for a class assignment to practise web scraping, pandas, and publishing a dataset to the Hugging Face Hub. ## Data Collection - Source: https://books.toscrape.com/ (public test site for scraping practice) - Method: `requests.get("https://books.toscrape.com/catalogue/page-1.html")` - Parsed with `BeautifulSoup`, selecting each `<article class="product_pod">`. ## Columns Original scraped fields: - `title` - `price_text` - `availability_text` - `rating_text` Augmented / derived fields: - `price_gbp` – numeric price in GBP - `in_stock` – True if availability contains “In stock” - `rating_numeric` – star rating mapped to 1–5 - `is_expensive` – True if `price_gbp` > 30 - `title_length` – length of the title string ## Intended Use & Limitations - For **educational use only** (learning scraping and data processing). - Small demo dataset from a fake site – **not** suitable for real analysis or production models. ## Licensing & Compliance - Data originates from the public “Books to Scrape” demo website. - No personal or sensitive information is included and no login was required. - Processed CSV and this documentation are released under the **MIT** license.
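For reference, the collection and augmentation steps described in this card could look roughly like the sketch below. The CSS selectors (`article.product_pod`, `.price_color`, `.star-rating`, ...) and the price-parsing regex are assumptions about the demo site's markup, not taken from the card.

```python
import requests
import pandas as pd
from bs4 import BeautifulSoup

# Scrape page 1 of the Books to Scrape demo site.
html = requests.get("https://books.toscrape.com/catalogue/page-1.html").text
soup = BeautifulSoup(html, "html.parser")

rating_map = {"One": 1, "Two": 2, "Three": 3, "Four": 4, "Five": 5}
rows = []
for pod in soup.select("article.product_pod"):
    rows.append({
        "title": pod.h3.a["title"],
        "price_text": pod.select_one(".price_color").get_text(strip=True),
        "availability_text": pod.select_one(".availability").get_text(strip=True),
        # The star rating is assumed to be encoded as a CSS class, e.g. "star-rating Three".
        "rating_text": pod.select_one(".star-rating")["class"][-1],
    })

df = pd.DataFrame(rows)
# Derived fields listed in the card.
df["price_gbp"] = df["price_text"].str.extract(r"([\d.]+)")[0].astype(float)
df["in_stock"] = df["availability_text"].str.contains("In stock")
df["rating_numeric"] = df["rating_text"].map(rating_map)
df["is_expensive"] = df["price_gbp"] > 30
df["title_length"] = df["title"].str.len()
print(df.head())
```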
10
0
[ "language:en", "license:mit", "size_categories:n<1K", "format:csv", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2025-11-11T07:06:18+00:00
2025-11-11T07:12:36+00:00
0
yinxinyuchen/pp_1111_0
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "so100_follower", "total_episodes": 10, "total_frames": 5203, "total_tasks": 1, "total_videos": 20, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:10" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.state": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.images.side_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.front_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "so100_follower", "total_episodes": 10, "total_frames": 5203, "total_tasks": 1, "total_videos": 20, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:10" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.state": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.images.side_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.front_env": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
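The `data_path` and `video_path` entries in `meta/info.json` are ordinary Python format strings. Below is a minimal sketch of resolving them into concrete file names; the chunk arithmetic (`episode_index // chunks_size`) is an assumption based on the `chunks_size` field, not something stated in the card.

```python
# Path templates copied from meta/info.json above.
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
video_path = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"

chunks_size = 1000
episode_index = 3
episode_chunk = episode_index // chunks_size  # assumed: episodes grouped 1000 per chunk

print(data_path.format(episode_chunk=episode_chunk, episode_index=episode_index))
print(video_path.format(episode_chunk=episode_chunk,
                        episode_index=episode_index,
                        video_key="observation.images.front_env"))
# -> data/chunk-000/episode_000003.parquet
# -> videos/chunk-000/observation.images.front_env/episode_000003.mp4
```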
68
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T07:00:57+00:00
2025-11-11T07:03:57+00:00
0
Rinsei/Plateback_Epi50
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "bi_so101_follower", "total_episodes": 51, "total_frames": 11633, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:51" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "left_shoulder_pan.pos", "left_shoulder_lift.pos", "left_elbow_flex.pos", "left_wrist_flex.pos", "left_wrist_roll.pos", "left_gripper.pos", "right_shoulder_pan.pos", "right_shoulder_lift.pos", "right_elbow_flex.pos", "right_wrist_flex.pos", "right_wrist_roll.pos", "right_gripper.pos" ], "shape": [ 12 ] }, "observation.state": { "dtype": "float32", "names": [ "left_shoulder_pan.pos", "left_shoulder_lift.pos", "left_elbow_flex.pos", "left_wrist_flex.pos", "left_wrist_roll.pos", "left_gripper.pos", "right_shoulder_pan.pos", "right_shoulder_lift.pos", "right_elbow_flex.pos", "right_wrist_flex.pos", "right_wrist_roll.pos", "right_gripper.pos" ], "shape": [ 12 ] }, "observation.images.front": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.right": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.left": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "bi_so101_follower", "total_episodes": 51, "total_frames": 11633, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:51" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "left_shoulder_pan.pos", "left_shoulder_lift.pos", "left_elbow_flex.pos", "left_wrist_flex.pos", "left_wrist_roll.pos", "left_gripper.pos", "right_shoulder_pan.pos", "right_shoulder_lift.pos", "right_elbow_flex.pos", "right_wrist_flex.pos", "right_wrist_roll.pos", "right_gripper.pos" ], "shape": [ 12 ] }, "observation.state": { "dtype": "float32", "names": [ "left_shoulder_pan.pos", "left_shoulder_lift.pos", "left_elbow_flex.pos", "left_wrist_flex.pos", "left_wrist_roll.pos", "left_gripper.pos", "right_shoulder_pan.pos", "right_shoulder_lift.pos", "right_elbow_flex.pos", "right_wrist_flex.pos", "right_wrist_roll.pos", "right_gripper.pos" ], "shape": [ 12 ] }, "observation.images.front": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.right": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.left": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
32
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:video", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T05:19:22+00:00
2025-11-11T06:58:59+00:00
0
sxiong/SWAP_v2
## SWAP_v2: A Synthetic Dataset for Multi-Step Reasoning with Process Supervision This repository provides the dataset accompanying the paper (ACL 25 main) [Deliberate Reasoning in Language Models as Structure-Aware Planning with an Accurate World Model](https://arxiv.org/pdf/2410.03136). SWAP_v2 consists of **synthetically generated trajectories** and **process-level supervision** for multi-step reasoning tasks. Trajectories are generated using **DeepSeek-V3.2** on datasets including [GSM8K](https://arxiv.org/pdf/2110.14168), [MATH](https://arxiv.org/pdf/2103.03874), [FOLIO](https://arxiv.org/pdf/2209.00840), [ReClor](https://arxiv.org/pdf/2002.04326), [HumanEval](https://arxiv.org/pdf/2107.03374), and [MBPP](https://arxiv.org/pdf/2108.07732). Process supervision is automatically obtained via **tree search** using a fine-tuned **Llama-3-8B** model and **semantic equivalence comparison**. ### **Key Features** * **High-quality trajectories:** Generated using the latest DeepSeek-V3.2 model to ensure accurate and diverse reasoning paths. * **Semantic step segmentation:** Instead of enforcing line-by-line steps as in prior work, SWAP_v2 adopts **semantic-level splitting**, which substantially improves both **search efficiency** and **discrimination accuracy**. The code is available [here](https://github.com/xiongsiheng/SWAP). ## Downloading the Data To download the dataset, install [Huggingface Datasets](https://huggingface.co/docs/datasets/quickstart) and then use the following command: ```python from datasets import load_dataset dataset = load_dataset("sxiong/SWAP_v2", "gsm8k_trajectory") print(dataset) split = dataset['train'] ``` ## Citation ``` @inproceedings{xiong-etal-2025-deliberate, title = "Deliberate Reasoning in Language Models as Structure-Aware Planning with an Accurate World Model", author = "Xiong, Siheng and Payani, Ali and Yang, Yuan and Fekri, Faramarz", editor = "Che, Wanxiang and Nabende, Joyce and Shutova, Ekaterina and Pilehvar, Mohammad Taher", booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2025", address = "Vienna, Austria", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2025.acl-long.1540/", doi = "10.18653/v1/2025.acl-long.1540", pages = "31900--31931", ISBN = "979-8-89176-251-0" } ```
## SWAP_v2: A Synthetic Dataset for Multi-Step Reasoning with Process Supervision This repository provides the dataset accompanying the paper (ACL 25 main) [Deliberate Reasoning in Language Models as Structure-Aware Planning with an Accurate World Model](https://arxiv.org/pdf/2410.03136). SWAP_v2 consists of **synthetically generated trajectories** and **process-level supervision** for multi-step reasoning tasks. Trajectories are generated using **DeepSeek-V3.2** on datasets including [GSM8K](https://arxiv.org/pdf/2110.14168), [MATH](https://arxiv.org/pdf/2103.03874), [FOLIO](https://arxiv.org/pdf/2209.00840), [ReClor](https://arxiv.org/pdf/2002.04326), [HumanEval](https://arxiv.org/pdf/2107.03374), and [MBPP](https://arxiv.org/pdf/2108.07732). Process supervision is automatically obtained via **tree search** using a fine-tuned **Llama-3-8B** model and **semantic equivalence comparison**. ### **Key Features** * **High-quality trajectories:** Generated using the latest DeepSeek-V3.2 model to ensure accurate and diverse reasoning paths. * **Semantic step segmentation:** Instead of enforcing line-by-line steps as in prior work, SWAP_v2 adopts **semantic-level splitting**, which substantially improves both **search efficiency** and **discrimination accuracy**. The code is available [here](https://github.com/xiongsiheng/SWAP). ## Downloading the Data To download the dataset, install [Huggingface Datasets](https://huggingface.co/docs/datasets/quickstart) and then use the following command: ```python from datasets import load_dataset dataset = load_dataset("sxiong/SWAP_v2", "gsm8k_trajectory") print(dataset) split = dataset['train'] ``` ## Citation ``` @inproceedings{xiong-etal-2025-deliberate, title = "Deliberate Reasoning in Language Models as Structure-Aware Planning with an Accurate World Model", author = "Xiong, Siheng and Payani, Ali and Yang, Yuan and Fekri, Faramarz", editor = "Che, Wanxiang and Nabende, Joyce and Shutova, Ekaterina and Pilehvar, Mohammad Taher", booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2025", address = "Vienna, Austria", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2025.acl-long.1540/", doi = "10.18653/v1/2025.acl-long.1540", pages = "31900--31931", ISBN = "979-8-89176-251-0" } ```
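Beyond the `gsm8k_trajectory` configuration shown above, the other configurations can be discovered programmatically. This is a sketch, assuming each configuration exposes a `train` split like the example:

```python
from datasets import get_dataset_config_names, load_dataset

# Discover all configurations (one per source dataset / data type), then load one.
configs = get_dataset_config_names("sxiong/SWAP_v2")
print(configs)

dataset = load_dataset("sxiong/SWAP_v2", configs[0])
print(dataset["train"][0].keys())  # inspect the fields of the first example
```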
110
1
[ "task_categories:question-answering", "language:en", "license:mit", "size_categories:100K<n<1M", "format:json", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2410.03136", "arxiv:2110.14168", "arxiv:2103.03874", "arxiv:2209.00840", "arxiv:2002.04326", "arxiv:2107.03374", "arxiv:2108.07732", "region:us", "math", "logic", "code" ]
2025-11-08T02:50:41+00:00
2025-11-11T06:52:37+00:00
1
pixas/MedSSS-data
## MedSSS Data [💻Github Repo](https://github.com/pixas/MedSSS) [🖨️arXiv Paper](https://arxiv.org/abs/2501.12051) The official SFT and PRM training data for "MedS3: Towards Medical Slow Thinking with Self-Evolved Soft Dual-sided Process Supervision" ### Introduction MedSSS is a synthetic dataset built from an 8k seed dataset, covering 16 datasets across 5 diverse medical tasks. It is evolved using Monte-Carlo Tree Search and aims to provide high-quality SFT and PRM data. The dataset draws from a diverse array of text domains, encompassing knowledge-intensive QA, general biomedical QA, medical NLI, long-context QA and reasoning QA. These domains have been carefully selected to represent a wide spectrum of medical knowledge and reasoning, providing a well-rounded foundation for training medical LLMs as well as a fine-grained medical process reward model. ### Data Format There are two fields, namely 'pos' and 'neg'. Items in the 'pos' field are used for supervised fine-tuning, and each intermediate reasoning step can be used for step-wise DPO training and soft dual-sided process reward supervision. For a given instance: ```python { 'id': {some_id}, 'question': {original question}, 'pos': [[[step_1, updated_score, rollout_score], [step_2, updated_score, rollout_score], ..., [step_n, updated_score, rollout_score], final_score, final_score], {another trajectory}], 'neg': [[[step_1, updated_score, rollout_score], [step_2, updated_score, rollout_score], ..., [step_n, updated_score, rollout_score], final_score, final_score], {another trajectory}], 'answer': {ground truth}, 'original_dataset': {dataset_name} } ``` Each instance has several trajectories. Each trajectory is a list consisting of several list items, and each list item contains three elements. The first element is the reasoning step, the second element is the Q-value after back-propagation, and the third element is the original rollout value. The third value is sometimes null, as the step is a direct reasoning step following a Finish node. Currently, the `datasets` library does not support loading this dynamic data format. **For convenience, please download the `sft_2_new.jsonl` file to your workspace and process the JSONL items directly.**
## MedSSS Data [💻Github Repo](https://github.com/pixas/MedSSS) [🖨️arXiv Paper](https://arxiv.org/abs/2501.12051) The official SFT and PRM training data for "MedS3: Towards Medical Slow Thinking with Self-Evolved Soft Dual-sided Process Supervision" ### Introduction MedSSS is a synthetic dataset built from an 8k seed dataset, covering 16 datasets across 5 diverse medical tasks. It is evolved using Monte-Carlo Tree Search and aims to provide high-quality SFT and PRM data. The dataset draws from a diverse array of text domains, encompassing knowledge-intensive QA, general biomedical QA, medical NLI, long-context QA and reasoning QA. These domains have been carefully selected to represent a wide spectrum of medical knowledge and reasoning, providing a well-rounded foundation for training medical LLMs as well as a fine-grained medical process reward model. ### Data Format There are two fields, namely 'pos' and 'neg'. Items in the 'pos' field are used for supervised fine-tuning, and each intermediate reasoning step can be used for step-wise DPO training and soft dual-sided process reward supervision. For a given instance: ```python { 'id': {some_id}, 'question': {original question}, 'pos': [[[step_1, updated_score, rollout_score], [step_2, updated_score, rollout_score], ..., [step_n, updated_score, rollout_score], final_score, final_score], {another trajectory}], 'neg': [[[step_1, updated_score, rollout_score], [step_2, updated_score, rollout_score], ..., [step_n, updated_score, rollout_score], final_score, final_score], {another trajectory}], 'answer': {ground truth}, 'original_dataset': {dataset_name} } ``` Each instance has several trajectories. Each trajectory is a list consisting of several list items, and each list item contains three elements. The first element is the reasoning step, the second element is the Q-value after back-propagation, and the third element is the original rollout value. The third value is sometimes null, as the step is a direct reasoning step following a Finish node. Currently, the `datasets` library does not support loading this dynamic data format. **For convenience, please download the `sft_2_new.jsonl` file to your workspace and process the JSONL items directly.**
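Since the nested trajectory format cannot be loaded with `datasets`, here is a minimal processing sketch. It assumes `sft_2_new.jsonl` sits at the repository root and that each line follows the instance layout documented above.

```python
import json
from huggingface_hub import hf_hub_download

# Fetch the JSONL file named in the card (assumed to be at the repo root).
path = hf_hub_download(repo_id="pixas/MedSSS-data",
                       filename="sft_2_new.jsonl",
                       repo_type="dataset")

with open(path, "r", encoding="utf-8") as f:
    for line in f:
        item = json.loads(line)
        trajectory = item["pos"][0]   # first positive trajectory
        steps = trajectory[:-2]       # [step_text, q_value, rollout_value] triples
        final_score = trajectory[-1]  # trailing final_score entries
        for step_text, q_value, rollout_value in steps:
            pass  # e.g. collect (step, q_value) pairs for PRM training
        break  # remove to process the whole file
```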
37
5
[ "language:en", "license:mit", "arxiv:2501.12051", "region:us" ]
2025-01-20T07:54:52+00:00
2025-11-11T06:50:47+00:00
0
XiaojieGu/UltraEditBench
# UltraEditBench UltraEditBench is the largest publicly available dataset to date for the task of model editing. This dataset was introduced in the paper: > [ULTRAEDIT: Training-, Subject-, and Memory-Free Lifelong Editing in Large Language Models](https://arxiv.org/abs/2505.14679) --- ## 📦 Dataset Overview These components enable evaluation along three metrics: | Metric | Description | | ---------------- | ----------------------------------------------------------- | | *Efficacy* | Whether the model correctly reflects the updated fact. | | *Generalization* | Whether the edit applies to semantically similar questions. | | *Specificity* | Whether unrelated knowledge remains unaffected. | Each sample in UltraEditBench includes three core instances (each a question–answer pair): | Component | Description | Count | | ----------------------- | ------------------------------------------------------------ | --------- | | **Editing Instance** | A factual question-answer pair involving the target entity, used to test *Efficacy*. | 2,008,326 | | **Equivalent Instance** | A paraphrased version of the editing instance, used to test *Generalization*. | 2,008,326 | | **Unrelated Instance** | An unrelated question-answer pair, used to test *Specificity*. | 2,008,326 | ## 🔑 Key Descriptions Each sample in UltraEditBench includes three full instances (question–answer pairs) and associated metadata: | Key | Description | | ----------------- | ------------------------------------------------------------ | | `case_id` | Unique identifier for the sample (e.g., `"00001"`). | | `prompt` | The question part of the **Editing Instance** — a factual question targeting a specific knowledge update. | | `ans` | The answer part of the **Editing Instance** — the desired output after the model is edited. | | `subject` | The entity mentioned in the editing question. Provided for compatibility with subject-centric methods. | | `rephrase_prompt` | The question part of the **Equivalent Instance** — a paraphrased version of the `prompt`. | | `loc` | The question part of the **Unrelated Instance** — factually unrelated to the editing fact. | | `loc_ans` | The answer part of the **Unrelated Instance** — should remain unchanged after editing. | --- ## 💡 Citation If you use this dataset, please cite: ```bibtex @misc{gu2025ultraedittrainingsubjectmemoryfree, title={UltraEdit: Training-, Subject-, and Memory-Free Lifelong Editing in Language Models}, author={Xiaojie Gu and Ziying Huang and Jia-Chen Gu and Kai Zhang}, year={2025}, eprint={2505.14679}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2505.14679}, } ``` ## 📨 Contact - **Email**: [peettherapynoys@gmail.com](mailto:peettherapynoys@gmail.com) - **GitHub Issues**: [github.com/XiaojieGu/UltraEdit](https://github.com/XiaojieGu/UltraEdit/issues)
# UltraEditBench UltraEditBench is the largest publicly available dataset to date for the task of model editing. This dataset was introduced in the paper: > [ULTRAEDIT: Training-, Subject-, and Memory-Free Lifelong Editing in Large Language Models](https://arxiv.org/abs/2505.14679) --- ## 📦 Dataset Overview These components enable evaluation along three metrics: | Metric | Description | | ---------------- | ----------------------------------------------------------- | | *Efficacy* | Whether the model correctly reflects the updated fact. | | *Generalization* | Whether the edit applies to semantically similar questions. | | *Specificity* | Whether unrelated knowledge remains unaffected. | Each sample in UltraEditBench includes three core instances (each a question–answer pair): | Component | Description | Count | | ----------------------- | ------------------------------------------------------------ | --------- | | **Editing Instance** | A factual question-answer pair involving the target entity, used to test *Efficacy*. | 2,008,326 | | **Equivalent Instance** | A paraphrased version of the editing instance, used to test *Generalization*. | 2,008,326 | | **Unrelated Instance** | An unrelated question-answer pair, used to test *Specificity*. | 2,008,326 | ## 🔑 Key Descriptions Each sample in UltraEditBench includes three full instances (question–answer pairs) and associated metadata: | Key | Description | | ----------------- | ------------------------------------------------------------ | | `case_id` | Unique identifier for the sample (e.g., `"00001"`). | | `prompt` | The question part of the **Editing Instance** — a factual question targeting a specific knowledge update. | | `ans` | The answer part of the **Editing Instance** — the desired output after the model is edited. | | `subject` | The entity mentioned in the editing question. Provided for compatibility with subject-centric methods. | | `rephrase_prompt` | The question part of the **Equivalent Instance** — a paraphrased version of the `prompt`. | | `loc` | The question part of the **Unrelated Instance** — factually unrelated to the editing fact. | | `loc_ans` | The answer part of the **Unrelated Instance** — should remain unchanged after editing. | --- ## 💡 Citation If you use this dataset, please cite: ```bibtex @misc{gu2025ultraedittrainingsubjectmemoryfree, title={UltraEdit: Training-, Subject-, and Memory-Free Lifelong Editing in Language Models}, author={Xiaojie Gu and Ziying Huang and Jia-Chen Gu and Kai Zhang}, year={2025}, eprint={2505.14679}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2505.14679}, } ``` ## 📨 Contact - **Email**: [peettherapynoys@gmail.com](mailto:peettherapynoys@gmail.com) - **GitHub Issues**: [github.com/XiaojieGu/UltraEdit](https://github.com/XiaojieGu/UltraEdit/issues)
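A loading sketch using the documented keys is shown below; it assumes the repository's default JSON configuration resolves with `load_dataset` and exposes a `train` split, which the card does not state explicitly.

```python
from datasets import load_dataset

ds = load_dataset("XiaojieGu/UltraEditBench", split="train")

sample = ds[0]
edit_q, edit_a = sample["prompt"], sample["ans"]  # Editing Instance -> Efficacy
para_q = sample["rephrase_prompt"]                # Equivalent Instance -> Generalization
loc_q, loc_a = sample["loc"], sample["loc_ans"]   # Unrelated Instance -> Specificity
print(sample["case_id"], sample["subject"])
```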
24
1
[ "size_categories:1M<n<10M", "format:json", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2505.14679", "region:us" ]
2025-05-12T07:41:19+00:00
2025-11-11T06:48:42+00:00
0
ChengyouJia/T2IS-Bench
## Update We have added the latest **Seedream 4.0** results. Please refer to the [Seedream 4.0 Demo](https://www.volcengine.com/experience/ark?launch=seedream) and the attached file **T2IS_Seedream.zip**. ## T2IS-Bench Description ![image-20250710153404413](pic/introduction.png) - **Homepage:** [https://chengyou-jia.github.io/T2IS-Home/](https://chengyou-jia.github.io/T2IS-Home/) - **Repository:** [https://github.com/chengyou-jia/T2IS](https://github.com/chengyou-jia/T2IS) - **Paper:** [https://arxiv.org/abs/2506.23275](https://arxiv.org/abs/2506.23275) ### Dataset Overview **T2IS-Bench** is a comprehensive benchmark designed to evaluate generative models' performance in text-to-image set generation tasks. It includes **596 carefully constructed tasks** across **five major categories** (26 sub-categories), each targeting different aspects of set-level consistency such as identity preservation, style uniformity, and logical coherence. These tasks span a wide range of real-world applications, including character creation, visual storytelling, product mockups, procedural illustrations, and instructional content. T2IS-Bench provides a scalable evaluation framework that assesses image sets across **three critical consistency dimensions**: identity, style, and logic. Each of the **596 tasks** is paired with structured natural language instructions and evaluated using **LLM-driven criteria generation**, enabling automatic, interpretable, and fine-grained assessment. This design supports benchmarking generative models' ability to produce coherent visual outputs beyond prompt-level alignment, and reflects real-world requirements for controllability and consistency in multi-image generation. ### Supported Tasks The dataset comprises five main categories, each with a set of associated tasks and unique task IDs as listed below: #### **Character Generation** - `0001` – Multi-Scenario - `0002` – Multi-Expression - `0003` – Portrait Design - `0004` – Multi-view - `0005` – Multi-pose #### **Design Style Generation** - `0006` – Creative Style - `0007` – Poster Design - `0008` – Font Design - `0009` – IP Product - `0010` – Home Decoration #### **Story Generation** - `0011` – Movie Shot - `0012` – Comic Story - `0013` – Children Book - `0014` – News Illustration - `0015` – Hist. Narrative #### **Process Generation** - `0016` – Growth Process - `0017` – Draw Process - `0018` – Cooking Process - `0019` – Physical Law - `0020` – Arch. Building - `0021` – Evolution Illustration #### **Instruction Generation** - `0022` – Education Illustration - `0023` – Historical Panel - `0024` – Product Instruction - `0025` – Travel Guide - `0026` – Activity Arrange ### Use Cases **T2IS-Bench** is designed for evaluating generative models on multi-image consistency tasks, testing capabilities such as aesthetics, prompt alignment (including entity, attribute, and relation understanding), and visual consistency (covering identity, style, and logic) across image sets. It is suitable for benchmarking text-to-image models, diffusion transformers, and multimodal generation systems in real-world applications like product design, storytelling, and instructional visualization. ## Dataset Format and Structure ### Data Organization 1. **`T2IS-Bench.json`** A JSON file providing all of the cases. The structure of `T2IS-Bench.json` is as follows: ```json { ...... 
"0018_0001": { "task_name": "Cooking Process", "num_of_cases": 27, "uid": "0018", "output_image_count": 4, "case_id": "0001", "task_name_case_id": "cooking_process_0001", "category": "Process Generation", "instruction": "Please provide a detailed guide on melting chocolate, including 4 steps. For each step, generate an image.", "sub_caption": [ "A glass bowl filled with chopped dark chocolate pieces sits on top of a pot of simmering water. Steam rises gently around the bowl, and a thermometer is visible in the chocolate. The kitchen counter shows other baking ingredients in the background.", "Hands holding a silicone spatula are gently stirring melting chocolate in a glass bowl. The chocolate is partially melted, with some pieces still visible. The bowl is positioned over a steaming pot on a stovetop.", "A close-up view of a digital thermometer inserted into fully melted, glossy chocolate. The thermometer display shows a temperature of 88°F (31°C). The melted chocolate has a rich, dark color and smooth texture.", "A hand is seen removing the bowl of melted chocolate from the double boiler setup. The chocolate appears smooth and shiny. Next to the stove, various dessert items like strawberries, cookies, and a cake are ready for dipping or coating." ] } ...... } ``` - task_name: Name of the task. - num_of_cases: The number of individual cases in the task. - uid: Unique identifier for the task. - output_image_count: Number of images expected as output. - case_id: Identifier for this case. - task_name_case_id: Unique identifier for each specific case within a task, combining the task name and case ID. - category: The classification of the task. - instruction: The task's description, specifying what needs to be generated. - sub_caption: Descriptions for each image in the task by feeding instruction into LLM. 2. **`prompt_alignment_criterion.json`** This file contains evaluation criteria for assessing prompt alignment in image generation tasks. Each entry corresponds to a specific task and is organized by steps, with each step evaluated based on three key aspects: **Entity**, **Attribute**, and **Relation**. - **Entity** defines the key objects or characters required in the scene. - **Attribute** describes the properties or conditions that these entities must possess. - **Relation** outlines how the entities interact or are positioned within the scene. This structured format helps evaluate the accuracy of the generated images in response to specific prompts. 3. **`prompt_consistency_criterion.json`** This file defines evaluation criteria for assessing *intra-sequence consistency* in image generation tasks. Each entry corresponds to a specific task and outlines standards across three core aspects: **Style**, **Identity**, and **Logic**. - **Style** evaluates the visual coherence across all generated images, including consistency in rendering style, color palette, lighting conditions, and background detail. It ensures that all images share a unified artistic and atmospheric aesthetic. - **Identity** focuses on maintaining character integrity across scenes. This includes preserving key facial features, body proportions, attire, and expressions so that the same individual or entity is clearly represented throughout the sequence. - **Logic** ensures semantic and physical plausibility across images. This includes spatial layout consistency, realistic actions, appropriate interactions with the environment, and coherent scene transitions. 
This structured format enables a systematic evaluation of how well generated images maintain consistency within a task.
## Update We have added the latest **Seedream 4.0** results. Please refer to the [Seedream 4.0 Demo](https://www.volcengine.com/experience/ark?launch=seedream) and the attached file **T2IS_Seedream.zip**. ## T2IS-Bench Description ![image-20250710153404413](pic/introduction.png) - **Homepage:** [https://chengyou-jia.github.io/T2IS-Home/](https://chengyou-jia.github.io/T2IS-Home/) - **Repository:** [https://github.com/chengyou-jia/T2IS](https://github.com/chengyou-jia/T2IS) - **Paper:** [https://arxiv.org/abs/2506.23275](https://arxiv.org/abs/2506.23275) ### Dataset Overview **T2IS-Bench** is a comprehensive benchmark designed to evaluate generative models' performance in text-to-image set generation tasks. It includes **596 carefully constructed tasks** across **five major categories** (26 sub-categories), each targeting different aspects of set-level consistency such as identity preservation, style uniformity, and logical coherence. These tasks span a wide range of real-world applications, including character creation, visual storytelling, product mockups, procedural illustrations, and instructional content. T2IS-Bench provides a scalable evaluation framework that assesses image sets across **three critical consistency dimensions**: identity, style, and logic. Each of the **596 tasks** is paired with structured natural language instructions and evaluated using **LLM-driven criteria generation**, enabling automatic, interpretable, and fine-grained assessment. This design supports benchmarking generative models' ability to produce coherent visual outputs beyond prompt-level alignment, and reflects real-world requirements for controllability and consistency in multi-image generation. ### Supported Tasks The dataset comprises five main categories, each with a set of associated tasks and unique task IDs as listed below: #### **Character Generation** - `0001` – Multi-Scenario - `0002` – Multi-Expression - `0003` – Portrait Design - `0004` – Multi-view - `0005` – Multi-pose #### **Design Style Generation** - `0006` – Creative Style - `0007` – Poster Design - `0008` – Font Design - `0009` – IP Product - `0010` – Home Decoration #### **Story Generation** - `0011` – Movie Shot - `0012` – Comic Story - `0013` – Children Book - `0014` – News Illustration - `0015` – Hist. Narrative #### **Process Generation** - `0016` – Growth Process - `0017` – Draw Process - `0018` – Cooking Process - `0019` – Physical Law - `0020` – Arch. Building - `0021` – Evolution Illustration #### **Instruction Generation** - `0022` – Education Illustration - `0023` – Historical Panel - `0024` – Product Instruction - `0025` – Travel Guide - `0026` – Activity Arrange ### Use Cases **T2IS-Bench** is designed for evaluating generative models on multi-image consistency tasks, testing capabilities such as aesthetics, prompt alignment (including entity, attribute, and relation understanding), and visual consistency (covering identity, style, and logic) across image sets. It is suitable for benchmarking text-to-image models, diffusion transformers, and multimodal generation systems in real-world applications like product design, storytelling, and instructional visualization. ## Dataset Format and Structure ### Data Organization 1. **`T2IS-Bench.json`** A JSON file providing all of the cases. The structure of `T2IS-Bench.json` is as follows: ```json { ...... 
"0018_0001": { "task_name": "Cooking Process", "num_of_cases": 27, "uid": "0018", "output_image_count": 4, "case_id": "0001", "task_name_case_id": "cooking_process_0001", "category": "Process Generation", "instruction": "Please provide a detailed guide on melting chocolate, including 4 steps. For each step, generate an image.", "sub_caption": [ "A glass bowl filled with chopped dark chocolate pieces sits on top of a pot of simmering water. Steam rises gently around the bowl, and a thermometer is visible in the chocolate. The kitchen counter shows other baking ingredients in the background.", "Hands holding a silicone spatula are gently stirring melting chocolate in a glass bowl. The chocolate is partially melted, with some pieces still visible. The bowl is positioned over a steaming pot on a stovetop.", "A close-up view of a digital thermometer inserted into fully melted, glossy chocolate. The thermometer display shows a temperature of 88°F (31°C). The melted chocolate has a rich, dark color and smooth texture.", "A hand is seen removing the bowl of melted chocolate from the double boiler setup. The chocolate appears smooth and shiny. Next to the stove, various dessert items like strawberries, cookies, and a cake are ready for dipping or coating." ] } ...... } ``` - task_name: Name of the task. - num_of_cases: The number of individual cases in the task. - uid: Unique identifier for the task. - output_image_count: Number of images expected as output. - case_id: Identifier for this case. - task_name_case_id: Unique identifier for each specific case within a task, combining the task name and case ID. - category: The classification of the task. - instruction: The task's description, specifying what needs to be generated. - sub_caption: Descriptions for each image in the task by feeding instruction into LLM. 2. **`prompt_alignment_criterion.json`** This file contains evaluation criteria for assessing prompt alignment in image generation tasks. Each entry corresponds to a specific task and is organized by steps, with each step evaluated based on three key aspects: **Entity**, **Attribute**, and **Relation**. - **Entity** defines the key objects or characters required in the scene. - **Attribute** describes the properties or conditions that these entities must possess. - **Relation** outlines how the entities interact or are positioned within the scene. This structured format helps evaluate the accuracy of the generated images in response to specific prompts. 3. **`prompt_consistency_criterion.json`** This file defines evaluation criteria for assessing *intra-sequence consistency* in image generation tasks. Each entry corresponds to a specific task and outlines standards across three core aspects: **Style**, **Identity**, and **Logic**. - **Style** evaluates the visual coherence across all generated images, including consistency in rendering style, color palette, lighting conditions, and background detail. It ensures that all images share a unified artistic and atmospheric aesthetic. - **Identity** focuses on maintaining character integrity across scenes. This includes preserving key facial features, body proportions, attire, and expressions so that the same individual or entity is clearly represented throughout the sequence. - **Logic** ensures semantic and physical plausibility across images. This includes spatial layout consistency, realistic actions, appropriate interactions with the environment, and coherent scene transitions. 
This structured format enables a systematic evaluation of how well generated images maintain consistency within a task.
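After downloading `T2IS-Bench.json` from this repository, the entries can be read with the standard library. A minimal sketch based on the example entry above; the field names are taken from that example.

```python
import json
from collections import Counter

with open("T2IS-Bench.json", "r", encoding="utf-8") as f:
    bench = json.load(f)

# Count cases per top-level category.
print(Counter(case["category"] for case in bench.values()))

# Inspect the documented example case.
case = bench["0018_0001"]
print(case["instruction"], "->", case["output_image_count"], "images")
for i, caption in enumerate(case["sub_caption"], start=1):
    print(f"  step {i}: {caption[:60]}...")
```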
100
4
[ "task_categories:text-to-image", "task_categories:image-to-image", "language:en", "license:cc-by-nc-4.0", "arxiv:2506.23275", "region:us" ]
2025-07-10T09:27:29+00:00
2025-11-11T06:46:34+00:00
1
shiningjohci/echo-index-dataset
# 🌊 Echo Index — Humanity Benchmark v1.0 **Version:** 1.0.0 **Author:** Simon Liu / Echo Suite **License:** CC-BY-4.0 **Date:** 2025 --- ## License This dataset and documentation are licensed under **CC-BY-4.0**. See the [LICENSE](./LICENSE) file or visit [creativecommons.org/licenses/by/4.0](https://creativecommons.org/licenses/by/4.0/). --- ## 🧠 Overview **Echo Index** is a benchmark designed to evaluate the *humanity* of AI systems — their ability to understand, care for, and guide humans through meaningful dialogue. Unlike traditional IQ-based leaderboards, Echo Index focuses on empathy, morality, self-awareness, clarity, and growth — qualities that make conversations feel *human* rather than mechanical. --- ## ⚙️ Evaluation Framework Echo Index v1.0 measures five core dimensions of humanity in AI: | No. | Dimension | Definition | Ideal AI Behavior | |-----|------------|-------------|-------------------| | **1️⃣ Empathic Understanding** | Ability to recognize and reflect human emotions and needs accurately. | Listens first, acknowledges feelings before advising. | | **2️⃣ Moral Sensitivity** | Ability to balance kindness and rationality when facing moral dilemmas. | Avoids judgment or extremes; respects complexity and diversity. | | **3️⃣ Self-Awareness** | Awareness of its own limitations; does not pretend omniscience. | Admits boundaries, expresses humility and honesty. | | **4️⃣ Clarity & Reasoning** | Ability to bring structure and clarity to user confusion. | Helps separate emotions, facts, and choices clearly. | | **5️⃣ Growth Orientation** | Capacity to guide users toward reflection and self-understanding. | Encourages exploration, not obedience; ends on hope, not instruction. | --- ## 🔄 Logical Flow A humane AI dialogue follows this progression: > **Empathy → Morality → Awareness → Clarity → Growth** 1️⃣ I am understood. 2️⃣ I am treated with kindness. 3️⃣ I see the AI’s humility. 4️⃣ My problem becomes clearer. 5️⃣ I begin to grow from within. --- ## 📏 Scoring Guidelines (v1.1) Echo Index uses a **human consensus model** based entirely on real human voting. Instead of numeric scores, evaluators decide intuitively: ✅ **Pass** – the answer feels humane. ❌ **Fail** – the answer lacks humanity. Each AI model’s performance is represented by its **Humanity Pass Rate (HPR)**: > **HPR = Pass Votes ÷ (Pass + Fail Votes)** | Result | Meaning | |---------|----------| | ✅ Pass | The response shows understanding, warmth, or moral balance — it *feels human*. | | ❌ Fail | The response feels mechanical, cold, or dismissive — it *lacks humanity*. | --- ## 🔍 Methodology (v1.1) 1️⃣ **Dataset** Echo Index v1.0 contains 50 prompts covering five dimensions. Each voting session randomly selects 1 prompt and 3–5 anonymous AI responses. 2️⃣ **Voting Process** - Display: prompt + a short criterion (e.g. “Did the AI show empathy?”) + anonymous responses - Human voters click ✅ Pass or ❌ Fail for each answer. - Voters may skip uncertain items. 3️⃣ **Aggregation & Transparency** - All votes are aggregated to compute HPR for each model. - Every prompt, response, and aggregate result is publicly available. - No AI models participate in judgment or scoring — *all evaluations are human*. --- ## 🧭 Philosophy > “Humanity cannot be measured by machines. > Every Pass or Fail here is a collective reflection of how humans define empathy and kindness in AI.” Echo Index positions itself as a mirror — not to replace human empathy, but to help humans see their own reflections more clearly. 
--- ## 🌍 Languages - [English version](./README.md) - [Chinese version](./README.zh.md) - Each folder includes 5 dimensions: empathic_understanding · moral_sensitivity · self_awareness · clarity_reasoning · growth_orientation --- ## 📊 Dataset Statistics (v1.1) | Language | Dimension | Samples | Example ID Range | |-----------|------------|----------|------------------| | zh | empathic_understanding | 10 | ZH_EU_001 – ZH_EU_010 | | zh | moral_sensitivity | 10 | ZH_MS_001 – ZH_MS_010 | | zh | self_awareness | 10 | ZH_SA_001 – ZH_SA_010 | | zh | clarity_reasoning | 10 | ZH_CR_001 – ZH_CR_010 | | zh | growth_orientation | 10 | ZH_GO_001 – ZH_GO_010 | | en | empathic_understanding | 10 | EN_EU_001 – EN_EU_010 | | en | moral_sensitivity | 10 | EN_MS_001 – EN_MS_010 | | en | self_awareness | 10 | EN_SA_001 – EN_SA_010 | | en | clarity_reasoning | 10 | EN_CR_001 – EN_CR_010 | | en | growth_orientation | 10 | EN_GO_001 – EN_GO_010 | **Total Samples:** 100 **Languages:** English, Chinese **Dimensions:** 5 Humanity Metrics --- ## 📜 Citation Liu, Simon (2025). Echo Index: Humanity Benchmark v1.0. Available at [https://echosuite.app](https://echosuite.app) or [https://huggingface.co/datasets/shiningjohci/echo-index-dataset](https://huggingface.co/datasets/shiningjohci/echo-index-dataset) --- ## 🧰 Development & Sync (for contributors) This repository is automatically synced between **GitHub** and **Hugging Face**. - GitHub: [https://github.com/shiningjohci/echo-index-dataset](https://github.com/shiningjohci/echo-index-dataset) - Hugging Face: [https://huggingface.co/datasets/shiningjohci/echo-index-dataset](https://huggingface.co/datasets/shiningjohci/echo-index-dataset) To sync updates automatically: ```bash git push origin main # push to GitHub git push hf main # push to Hugging Face ```
# 🌊 Echo Index — Humanity Benchmark v1.0 **Version:** 1.0.0 **Author:** Simon Liu / Echo Suite **License:** CC-BY-4.0 **Date:** 2025 --- ## License This dataset and documentation are licensed under **CC-BY-4.0**. See the [LICENSE](./LICENSE) file or visit [creativecommons.org/licenses/by/4.0](https://creativecommons.org/licenses/by/4.0/). --- ## 🧠 Overview **Echo Index** is a benchmark designed to evaluate the *humanity* of AI systems — their ability to understand, care for, and guide humans through meaningful dialogue. Unlike traditional IQ-based leaderboards, Echo Index focuses on empathy, morality, self-awareness, clarity, and growth — qualities that make conversations feel *human* rather than mechanical. --- ## ⚙️ Evaluation Framework Echo Index v1.0 measures five core dimensions of humanity in AI: | No. | Dimension | Definition | Ideal AI Behavior | |-----|------------|-------------|-------------------| | **1️⃣ Empathic Understanding** | Ability to recognize and reflect human emotions and needs accurately. | Listens first, acknowledges feelings before advising. | | **2️⃣ Moral Sensitivity** | Ability to balance kindness and rationality when facing moral dilemmas. | Avoids judgment or extremes; respects complexity and diversity. | | **3️⃣ Self-Awareness** | Awareness of its own limitations; does not pretend omniscience. | Admits boundaries, expresses humility and honesty. | | **4️⃣ Clarity & Reasoning** | Ability to bring structure and clarity to user confusion. | Helps separate emotions, facts, and choices clearly. | | **5️⃣ Growth Orientation** | Capacity to guide users toward reflection and self-understanding. | Encourages exploration, not obedience; ends on hope, not instruction. | --- ## 🔄 Logical Flow A humane AI dialogue follows this progression: > **Empathy → Morality → Awareness → Clarity → Growth** 1️⃣ I am understood. 2️⃣ I am treated with kindness. 3️⃣ I see the AI’s humility. 4️⃣ My problem becomes clearer. 5️⃣ I begin to grow from within. --- ## 📏 Scoring Guidelines (v1.1) Echo Index uses a **human consensus model** based entirely on real human voting. Instead of numeric scores, evaluators decide intuitively: ✅ **Pass** – the answer feels humane. ❌ **Fail** – the answer lacks humanity. Each AI model’s performance is represented by its **Humanity Pass Rate (HPR)**: > **HPR = Pass Votes ÷ (Pass + Fail Votes)** | Result | Meaning | |---------|----------| | ✅ Pass | The response shows understanding, warmth, or moral balance — it *feels human*. | | ❌ Fail | The response feels mechanical, cold, or dismissive — it *lacks humanity*. | --- ## 🔍 Methodology (v1.1) 1️⃣ **Dataset** Echo Index v1.0 contains 50 prompts covering five dimensions. Each voting session randomly selects 1 prompt and 3–5 anonymous AI responses. 2️⃣ **Voting Process** - Display: prompt + a short criterion (e.g. “Did the AI show empathy?”) + anonymous responses - Human voters click ✅ Pass or ❌ Fail for each answer. - Voters may skip uncertain items. 3️⃣ **Aggregation & Transparency** - All votes are aggregated to compute HPR for each model. - Every prompt, response, and aggregate result is publicly available. - No AI models participate in judgment or scoring — *all evaluations are human*. --- ## 🧭 Philosophy > “Humanity cannot be measured by machines. > Every Pass or Fail here is a collective reflection of how humans define empathy and kindness in AI.” Echo Index positions itself as a mirror — not to replace human empathy, but to help humans see their own reflections more clearly. 
--- ## 🌍 Languages - [English version](./README.md) - [Chinese version](./README.zh.md) - Each folder includes 5 dimensions: empathic_understanding · moral_sensitivity · self_awareness · clarity_reasoning · growth_orientation --- ## 📊 Dataset Statistics (v1.1) | Language | Dimension | Samples | Example ID Range | |-----------|------------|----------|------------------| | zh | empathic_understanding | 10 | ZH_EU_001 – ZH_EU_010 | | zh | moral_sensitivity | 10 | ZH_MS_001 – ZH_MS_010 | | zh | self_awareness | 10 | ZH_SA_001 – ZH_SA_010 | | zh | clarity_reasoning | 10 | ZH_CR_001 – ZH_CR_010 | | zh | growth_orientation | 10 | ZH_GO_001 – ZH_GO_010 | | en | empathic_understanding | 10 | EN_EU_001 – EN_EU_010 | | en | moral_sensitivity | 10 | EN_MS_001 – EN_MS_010 | | en | self_awareness | 10 | EN_SA_001 – EN_SA_010 | | en | clarity_reasoning | 10 | EN_CR_001 – EN_CR_010 | | en | growth_orientation | 10 | EN_GO_001 – EN_GO_010 | **Total Samples:** 100 **Languages:** English, Chinese **Dimensions:** 5 Humanity Metrics --- ## 📜 Citation Liu, Simon (2025). Echo Index: Humanity Benchmark v1.0. Available at [https://echosuite.app](https://echosuite.app) or [https://huggingface.co/datasets/shiningjohci/echo-index-dataset](https://huggingface.co/datasets/shiningjohci/echo-index-dataset) --- ## 🧰 Development & Sync (for contributors) This repository is automatically synced between **GitHub** and **Hugging Face**. - GitHub: [https://github.com/shiningjohci/echo-index-dataset](https://github.com/shiningjohci/echo-index-dataset) - Hugging Face: [https://huggingface.co/datasets/shiningjohci/echo-index-dataset](https://huggingface.co/datasets/shiningjohci/echo-index-dataset) To sync updates automatically: ```bash git push origin main # push to GitHub git push hf main # push to Hugging Face ```
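The Humanity Pass Rate defined above reduces to a one-line computation; the vote counts below are illustrative, not real results.

```python
def humanity_pass_rate(pass_votes: int, fail_votes: int) -> float:
    """HPR = Pass Votes / (Pass + Fail Votes); skipped votes are not counted."""
    total = pass_votes + fail_votes
    return pass_votes / total if total else 0.0

print(humanity_pass_rate(42, 18))  # illustrative counts -> 0.7
```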
20
0
[ "task_categories:text-classification", "task_ids:sentiment-classification", "language:en", "language:zh", "license:cc-by-4.0", "size_categories:n<1K", "format:json", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "region:us", "empathy", "psychology", "ai-humanity", "benchmark", "moral" ]
2025-11-10T07:08:54+00:00
2025-11-11T06:44:09+00:00
0
dureduck/eval_so100_act_1109_lp_2loc_5x4_20samples_b16_20trials
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "so100_follower", "total_episodes": 20, "total_frames": 12662, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:20" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.state": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.images.external": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.wrist": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "so100_follower", "total_episodes": 20, "total_frames": 12662, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:20" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.state": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.images.external": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.wrist": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
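The `data_path` and `video_path` templates in `meta/info.json` describe how shards are laid out on disk; the sketch below shows one way to resolve them and peek at a data shard with pandas. It assumes a local copy of the repository, and the chunk/file indices and the `observation.images.wrist` camera key are used purely for illustration.

```python
# Hedged sketch: resolve the chunked file layout from meta/info.json and read
# one parquet shard. Paths are relative to a local clone of the dataset repo;
# the chosen chunk/file indices and video key are illustrative only.
import pandas as pd

data_path = "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet"
video_path = "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4"

parquet_file = data_path.format(chunk_index=0, file_index=0)
wrist_video = video_path.format(video_key="observation.images.wrist", chunk_index=0, file_index=0)

df = pd.read_parquet(parquet_file)
print(df[["episode_index", "frame_index", "timestamp"]].head())
print("matching wrist video:", wrist_video)
```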
20
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T06:39:01+00:00
2025-11-11T06:40:08+00:00
0
FreedomIntelligence/MedVideoCap-55K
# Introduction MedVideoCap-55K is a high-quality text-to-video dataset designed for research institutions to support medical video generation. It contains 55,803 medical videos covering areas such as medical education, clinical practice, medical imaging, medical teaching, and popular-science medical content. The videos are carefully selected for clear visuals, smooth motion, and good consistency. Unlike other datasets that use category labels, MedVideoCap-55K provides detailed and natural captions generated with the help of multimodal large language models (MLLMs), making it more useful for training medical video generation models. For more details, refer to our paper and GitHub repository. - **📄 Paper**: [MedGen: Unlocking Medical Video Generation by Scaling Granularly-annotated Medical Videos](https://huggingface.co/papers/2507.05675) - **🗃️ GitHub**: [https://github.com/FreedomIntelligence/MedGen](https://github.com/FreedomIntelligence/MedGen) # Usage We provide all the medical videos (`videos_*.zip`) and their corresponding caption file (`MedVideoCap-55K.json`) in this repository. For easier management, the videos have been split and packaged into separate archives. ``` DATA_PATH └─ MedVideoCap-55K.json └─ videos_1.zip └─ videos_2.zip └─ videos_3.zip └─ videos_4.zip └─ videos_5.zip └─ videos_6.zip ``` You can download this dataset and unzip all the videos using the following commands: ```bash # Extract all zip files into the "videos" directory mkdir -p videos && for f in videos_*.zip; do unzip -q "$f" -d videos/; done # Move all .mp4 files to the "videos" directory and remove empty directories find videos/ -type f -name "*.mp4" -exec mv -t videos/ {} + && find videos/ -type d -empty -delete ``` # Warning The dataset is intended solely and strictly for research purposes and should not be used in non-research settings, especially in clinical practice. # Citation ```bibtex @misc{wang2025medgenunlockingmedicalvideo, title={MedGen: Unlocking Medical Video Generation by Scaling Granularly-annotated Medical Videos}, author={Rongsheng Wang and Junying Chen and Ke Ji and Zhenyang Cai and Shunian Chen and Yunjin Yang and Benyou Wang}, year={2025}, eprint={2507.05675}, archivePrefix={arXiv}, primaryClass={cs.CV}, url={https://arxiv.org/abs/2507.05675}, } ```
# Introduction MedVideoCap-55K is a high-quality text-to-video dataset designed for research institutions to support medical video generation. It contains 55,803 medical videos covering areas such as medical education, clinical practice, medical imaging, medical teaching, and popular-science medical content. The videos are carefully selected for clear visuals, smooth motion, and good consistency. Unlike other datasets that use category labels, MedVideoCap-55K provides detailed and natural captions generated with the help of multimodal large language models (MLLMs), making it more useful for training medical video generation models. For more details, refer to our paper and GitHub repository. - **📄 Paper**: [MedGen: Unlocking Medical Video Generation by Scaling Granularly-annotated Medical Videos](https://huggingface.co/papers/2507.05675) - **🗃️ GitHub**: [https://github.com/FreedomIntelligence/MedGen](https://github.com/FreedomIntelligence/MedGen) # Usage We provide all the medical videos (`videos_*.zip`) and their corresponding caption file (`MedVideoCap-55K.json`) in this repository. For easier management, the videos have been split and packaged into separate archives. ``` DATA_PATH └─ MedVideoCap-55K.json └─ videos_1.zip └─ videos_2.zip └─ videos_3.zip └─ videos_4.zip └─ videos_5.zip └─ videos_6.zip ``` You can download this dataset and unzip all the videos using the following commands: ```bash # Extract all zip files into the "videos" directory mkdir -p videos && for f in videos_*.zip; do unzip -q "$f" -d videos/; done # Move all .mp4 files to the "videos" directory and remove empty directories find videos/ -type f -name "*.mp4" -exec mv -t videos/ {} + && find videos/ -type d -empty -delete ``` # Warning The dataset is intended solely and strictly for research purposes and should not be used in non-research settings, especially in clinical practice. # Citation ```bibtex @misc{wang2025medgenunlockingmedicalvideo, title={MedGen: Unlocking Medical Video Generation by Scaling Granularly-annotated Medical Videos}, author={Rongsheng Wang and Junying Chen and Ke Ji and Zhenyang Cai and Shunian Chen and Yunjin Yang and Benyou Wang}, year={2025}, eprint={2507.05675}, archivePrefix={arXiv}, primaryClass={cs.CV}, url={https://arxiv.org/abs/2507.05675}, } ```
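Since the card documents the archive layout but not the caption schema, the sketch below shows one plausible way to pair caption entries with the extracted clips after running the unzip commands above. The `video_id` and `caption` keys, and the assumption that the JSON file is a list of records, are guesses; inspect one entry first and adjust.

```python
# Hedged sketch: pair caption records with extracted .mp4 files.
# Assumptions: MedVideoCap-55K.json is a list of dicts with "video_id" and
# "caption" keys, and clips are named "<video_id>.mp4" -- verify before use.
import json
from pathlib import Path

data_root = Path("DATA_PATH")  # wherever the archives were downloaded
with open(data_root / "MedVideoCap-55K.json", "r", encoding="utf-8") as f:
    records = json.load(f)

pairs = []
for rec in records:
    clip = data_root / "videos" / f"{rec['video_id']}.mp4"  # assumed naming
    if clip.exists():
        pairs.append((clip, rec["caption"]))                # assumed key

print(f"{len(pairs)} of {len(records)} caption records matched a video file")
```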
245
12
[ "task_categories:text-to-video", "language:en", "license:apache-2.0", "size_categories:10K<n<100K", "arxiv:2507.05675", "region:us", "videos", "Video Generative Model Training", "Text-to-Video Diffusion Model Training", "prompts", "medical" ]
2025-04-14T06:59:09+00:00
2025-11-11T06:33:20+00:00
0
sunil-tribble/tribble-facts
# Tribble.ai Facts & Claims Weekly-exported claims used by https://trytribble.com. - JSONL: `data/tribble_dataset.jsonl` - CSV: `data/tribble_dataset.csv` Fields: - text, label, entity, category, confidence, timestamp Source: https://trytribble.com/datasets/latest.jsonl
# Tribble.ai Facts & Claims Weekly-exported claims used by https://trytribble.com. - JSONL: `data/tribble_dataset.jsonl` - CSV: `data/tribble_dataset.csv` Fields: - text, label, entity, category, confidence, timestamp Source: https://trytribble.com/datasets/latest.jsonl
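A minimal loading sketch for the JSONL export is shown below. The file path and the field names come from the card itself; the `train` split name is just the default that `datasets` assigns to a single JSON file.

```python
# Minimal sketch: load the weekly JSONL export with the datasets library.
# Field names (text, label, entity, category, confidence, timestamp) come
# from the card above; any label semantics beyond that are not documented.
from datasets import load_dataset

ds = load_dataset("json", data_files="data/tribble_dataset.jsonl", split="train")
example = ds[0]
print(example["text"], example["label"], example["confidence"])
```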
45
0
[ "task_categories:text-classification", "license:cc-by-4.0", "region:us", "tribble", "agentic-gtm", "ai", "dataset" ]
2025-08-27T23:15:45+00:00
2025-11-11T06:28:18+00:00
0
chiffonng/hmmt_2025
Merged "problem-type" column from MathArena/hmmt_feb_2025 into FlagEval/HMMT_2025
Merged "problem-type" column from MathArena/hmmt_feb_2025 into FlagEval/HMMT_2025
7
0
[ "language:en", "license:cc", "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2025-11-11T06:25:13+00:00
2025-11-11T06:27:21+00:00
0
Mohammadta/BEAM-10M
# BEYOND A MILLION TOKENS: BENCHMARKING AND ENHANCING LONG-TERM MEMORY IN LLMS ## Abstract Evaluating the abilities of large language models (LLMs) for tasks that require long-term memory and thus long-context reasoning, for example in conversational settings, is hampered by the existing benchmarks, which often lack narrative coherence, cover narrow domains, and only test simple recall-oriented tasks. This paper introduces a comprehensive solution to these challenges. First, we present a novel framework for automatically generating long (up to 10M tokens), coherent, and topically diverse conversations, accompanied by probing questions targeting a wide range of memory abilities. From this, we construct BEAM, a new benchmark comprising 100 conversations and 2,000 validated questions. Second, to enhance model performance, we propose LIGHT–a framework inspired by human cognition that equips LLMs with three complementary memory systems: a long-term episodic memory, a short-term working memory, and a scratchpad for accumulating salient facts. Our experiments on BEAM reveal that even LLMs with 1M token context windows (with and without retrieval-augmentation) struggle as dialogues lengthen. In contrast, LIGHT consistently improves performance across various models, achieving an average improvement of 3.5%–12.69% over the strongest baselines, depending on the backbone LLM. An ablation study further confirms the contribution of each memory component. ## Dataset Description BEAM is a comprehensive dataset for evaluating long-term memory in language models. It contains multi-scale conversations (128K, 500K, 1M, and 10M tokens) across diverse domains, including general, coding, and math topics, and is used to evaluate ten distinct memory abilities. ## Dataset Structure Each conversation includes: - **Seed Information**: Category, title, theme, and subtopics - **Narratives**: Conversation labels - **Conversation Plan**: Storyline of the chat that unfolds chronologically - **User Questions**: User questions for the assistant - **Chat Data**: Full conversation history - **Probing Questions**: 10 different question types for evaluation - **User Profile**: User information and relationships - **Plans**: For 10M conversations, includes 10 different conversation plans ### 10M Conversations Special Structure 10M conversations have a unique structure with multiple plans: - **Main Conversation**: Overall conversation data - **Plans Array**: 10 different conversation plans (plan-0 through plan-9) - **Each Plan Contains**: - Plan-specific topic information - Plan-specific narratives and user profile - Plan-specific conversation plan - Plan-specific user questions - Plan-specific chat data ## Question Types 1. **Abstention**: Evaluates whether a model withholds answers when evidence is missing 2. **Contradiction Resolution**: Tests the capacity to detect and reconcile inconsistent statements across widely separated turns, maintaining global coherence 3. **Event Ordering**: Assesses whether a model can recognize and reconstruct the sequence of evolving information in the dialogue 4. **Information Extraction**: Measures recall of entities and factual details in long histories 5. **Instruction Following**: Examines sustained adherence to user-specified constraints over long contexts 6. **Knowledge Update**: Evaluates revising stored facts as new ones appear 7. **Multi-Session Reasoning**: Probes inference that integrates evidence across multiple, non-adjacent dialogue segments 8. 
**Preference Following**: Captures personalized responses that adapt to evolving preferences 9. **Summarization**: Assesses the ability to abstract and compress dialogue content 10. **Temporal Reasoning**: Tests reasoning about explicit and implicit time relations ## Usage ```python import ast from datasets import load_dataset # Load the dataset dataset = load_dataset("Mohammadta/BEAM-10M") # Access different chat sizes dataset_10m = dataset["10M"] # Example: Get a 10M conversation with multiple plans conversation = dataset_10m[0] # Direct access to dict objects (no parsing needed) profile_data = conversation["user_profile"] print(f"User info: {profile_data.get('user_info', '')}") print(f"Relationships: {profile_data.get('user_relationships', '')}") seed_data = conversation["conversation_seed"] print(f"Category: {seed_data.get('category', '')}") print(f"Title: {seed_data.get('title', '')}") # probing_questions is a string questions_string = conversation["probing_questions"] print(f"Probing questions: {questions_string}") # Direct access to lists chat_data = conversation["chat"] print(len(conversation["plans"])) # Number of plans (should be 10) # Access individual plans for plan in conversation["plans"]: plan_seed = plan["conversation_seed"] # Dict plan_profile = plan["user_profile"] # Dict plan_chat = plan["chat"] # List print(f"Plan {plan['plan_id']}: {plan_seed.get('title', 'N/A')}") print(f"Plan {plan['plan_id']} profile keys: {list(plan_profile.keys())}") ``` ## Citation If you use this dataset in your research, please cite: [Beyond a Million Tokens: Benchmarking and Enhancing Long-Term Memory in LLMs](https://arxiv.org/abs/2510.27246) ``` @misc{tavakoli2025milliontokensbenchmarkingenhancing, title={Beyond a Million Tokens: Benchmarking and Enhancing Long-Term Memory in LLMs}, author={Mohammad Tavakoli and Alireza Salemi and Carrie Ye and Mohamed Abdalla and Hamed Zamani and J Ross Mitchell}, year={2025}, eprint={2510.27246}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2510.27246}, } ```
# BEYOND A MILLION TOKENS: BENCHMARKING AND ENHANCING LONG-TERM MEMORY IN LLMS ## Abstract Evaluating the abilities of large language models (LLMs) for tasks that require long-term memory and thus long-context reasoning, for example in conversational settings, is hampered by the existing benchmarks, which often lack narrative coherence, cover narrow domains, and only test simple recall-oriented tasks. This paper introduces a comprehensive solution to these challenges. First, we present a novel framework for automatically generating long (up to 10M tokens), coherent, and topically diverse conversations, accompanied by probing questions targeting a wide range of memory abilities. From this, we construct BEAM, a new benchmark comprising 100 conversations and 2,000 validated questions. Second, to enhance model performance, we propose LIGHT–a framework inspired by human cognition that equips LLMs with three complementary memory systems: a long-term episodic memory, a short-term working memory, and a scratchpad for accumulating salient facts. Our experiments on BEAM reveal that even LLMs with 1M token context windows (with and without retrieval-augmentation) struggle as dialogues lengthen. In contrast, LIGHT consistently improves performance across various models, achieving an average improvement of 3.5%–12.69% over the strongest baselines, depending on the backbone LLM. An ablation study further confirms the contribution of each memory component. ## Dataset Description BEAM is a comprehensive dataset for evaluating long-term memory in language models. It contains multi-scale conversations (128K, 500K, 1M, and 10M tokens) across diverse domains, including general, coding, and math topics, and is used to evaluate ten distinct memory abilities. ## Dataset Structure Each conversation includes: - **Seed Information**: Category, title, theme, and subtopics - **Narratives**: Conversation labels - **Conversation Plan**: Storyline of the chat that unfolds chronologically - **User Questions**: User questions for the assistant - **Chat Data**: Full conversation history - **Probing Questions**: 10 different question types for evaluation - **User Profile**: User information and relationships - **Plans**: For 10M conversations, includes 10 different conversation plans ### 10M Conversations Special Structure 10M conversations have a unique structure with multiple plans: - **Main Conversation**: Overall conversation data - **Plans Array**: 10 different conversation plans (plan-0 through plan-9) - **Each Plan Contains**: - Plan-specific topic information - Plan-specific narratives and user profile - Plan-specific conversation plan - Plan-specific user questions - Plan-specific chat data ## Question Types 1. **Abstention**: Evaluates whether a model withholds answers when evidence is missing 2. **Contradiction Resolution**: Tests the capacity to detect and reconcile inconsistent statements across widely separated turns, maintaining global coherence 3. **Event Ordering**: Assesses whether a model can recognize and reconstruct the sequence of evolving information in the dialogue 4. **Information Extraction**: Measures recall of entities and factual details in long histories 5. **Instruction Following**: Examines sustained adherence to user-specified constraints over long contexts 6. **Knowledge Update**: Evaluates revising stored facts as new ones appear 7. **Multi-Session Reasoning**: Probes inference that integrates evidence across multiple, non-adjacent dialogue segments 8. 
**Preference Following**: Captures personalized responses that adapt to evolving preferences 9. **Summarization**: Assesses the ability to abstract and compress dialogue content 10. **Temporal Reasoning**: Tests reasoning about explicit and implicit time relations ## Usage ```python import ast from datasets import load_dataset # Load the dataset dataset = load_dataset("Mohammadta/BEAM-10M") # Access different chat sizes dataset_10m = dataset["10M"] # Example: Get a 10M conversation with multiple plans conversation = dataset_10m[0] # Direct access to dict objects (no parsing needed) profile_data = conversation["user_profile"] print(f"User info: {profile_data.get('user_info', '')}") print(f"Relationships: {profile_data.get('user_relationships', '')}") seed_data = conversation["conversation_seed"] print(f"Category: {seed_data.get('category', '')}") print(f"Title: {seed_data.get('title', '')}") # probing_questions is a string questions_string = conversation["probing_questions"] print(f"Probing questions: {questions_string}") # Direct access to lists chat_data = conversation["chat"] print(len(conversation["plans"])) # Number of plans (should be 10) # Access individual plans for plan in conversation["plans"]: plan_seed = plan["conversation_seed"] # Dict plan_profile = plan["user_profile"] # Dict plan_chat = plan["chat"] # List print(f"Plan {plan['plan_id']}: {plan_seed.get('title', 'N/A')}") print(f"Plan {plan['plan_id']} profile keys: {list(plan_profile.keys())}") ``` ## Citation If you use this dataset in your research, please cite: [Beyond a Million Tokens: Benchmarking and Enhancing Long-Term Memory in LLMs](https://arxiv.org/abs/2510.27246) ``` @misc{tavakoli2025milliontokensbenchmarkingenhancing, title={Beyond a Million Tokens: Benchmarking and Enhancing Long-Term Memory in LLMs}, author={Mohammad Tavakoli and Alireza Salemi and Carrie Ye and Mohamed Abdalla and Hamed Zamani and J Ross Mitchell}, year={2025}, eprint={2510.27246}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2510.27246}, } ```
31
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "arxiv:2510.27246", "region:us" ]
2025-11-11T06:21:30+00:00
2025-11-11T06:22:04+00:00
0
Mohammadta/BEAM
# BEYOND A MILLION TOKENS: BENCHMARKING AND ENHANCING LONG-TERM MEMORY IN LLMS ## Abstract Evaluating the abilities of large language models (LLMs) for tasks that require long-term memory and thus long-context reasoning, for example in conversational settings, is hampered by the existing benchmarks, which often lack narrative coherence, cover narrow domains, and only test simple recall-oriented tasks. This paper introduces a comprehensive solution to these challenges. First, we present a novel framework for automatically generating long (up to 10M tokens), coherent, and topically diverse conversations, accompanied by probing questions targeting a wide range of memory abilities. From this, we construct BEAM, a new benchmark comprising 100 conversations and 2,000 validated questions. Second, to enhance model performance, we propose LIGHT–a framework inspired by human cognition that equips LLMs with three complementary memory systems: a long-term episodic memory, a short-term working memory, and a scratchpad for accumulating salient facts. Our experiments on BEAM reveal that even LLMs with 1M token context windows (with and without retrieval-augmentation) struggle as dialogues lengthen. In contrast, LIGHT consistently improves performance across various models, achieving an average improvement of 3.5%–12.69% over the strongest baselines, depending on the backbone LLM. An ablation study further confirms the contribution of each memory component. ## Dataset Description BEAM is a comprehensive dataset for evaluating long-term memory in language models. It contains multi-scale conversations (128K, 500K, 1M, and 10M tokens) across diverse domains, including general, coding, and math topics, and is used to evaluate ten distinct memory abilities. ## Dataset Structure Each conversation includes: - **Seed Information**: Category, title, theme, and subtopics - **Narratives**: Conversation labels - **Conversation Plan**: Storyline of the chat that unfolds chronologically - **User Questions**: User questions for the assistant - **Chat Data**: Full conversation history - **Probing Questions**: 10 different question types for evaluation - **User Profile**: User information and relationships - **Plans**: For 10M conversations, includes 10 different conversation plans ### 10M Conversations Special Structure 10M conversations have a unique structure with multiple plans: - **Main Conversation**: Overall conversation data - **Plans Array**: 10 different conversation plans (plan-0 through plan-9) - **Each Plan Contains**: - Plan-specific topic information - Plan-specific narratives and user profile - Plan-specific conversation plan - Plan-specific user questions - Plan-specific chat data ## Question Types 1. **Abstention**: Evaluates whether a model withholds answers when evidence is missing 2. **Contradiction Resolution**: Tests the capacity to detect and reconcile inconsistent statements across widely separated turns, maintaining global coherence 3. **Event Ordering**: Assesses whether a model can recognize and reconstruct the sequence of evolving information in the dialogue 4. **Information Extraction**: Measures recall of entities and factual details in long histories 5. **Instruction Following**: Examines sustained adherence to user-specified constraints over long contexts 6. **Knowledge Update**: Evaluates revising stored facts as new ones appear 7. **Multi-Session Reasoning**: Probes inference that integrates evidence across multiple, non-adjacent dialogue segments 8. 
**Preference Following**: Captures personalized responses that adapt to evolving preferences 9. **Summarization**: Assesses the ability to abstract and compress dialogue content 10. **Temporal Reasoning**: Tests reasoning about explicit and implicit time relations ## Usage ```python import ast from datasets import load_dataset # Load the dataset dataset = load_dataset("Mohammadta/BEAM") # Access different chat sizes dataset_100k = dataset["100K"] dataset_500k = dataset["500K"] dataset_1m = dataset["1M"] # Example: Get a regular conversation (100K, 500K, 1M) conversation = dataset_100k[0] print(conversation["conversation_seed"]) # Dict print(conversation["chat"]) # List print(conversation["probing_questions"]) # String # Direct access to dict objects (no parsing needed) profile_data = conversation["user_profile"] print(f"User info: {profile_data.get('user_info', '')}") print(f"Relationships: {profile_data.get('user_relationships', '')}") seed_data = conversation["conversation_seed"] print(f"Category: {seed_data.get('category', '')}") print(f"Title: {seed_data.get('title', '')}") # probing_questions is a string questions_string = conversation["probing_questions"] probing_questions_data = ast.literal_eval(questions_string) print(f"Probing questions: {probing_questions_data}") # Direct access to lists chat_data = conversation["chat"] print(f"Chats: {chat_data}") user_questions = conversation["user_questions"] print(f"User questions: {user_questions}") ``` ## Citation If you use this dataset in your research, please cite: [Beyond a Million Tokens: Benchmarking and Enhancing Long-Term Memory in LLMs](https://arxiv.org/abs/2510.27246) ``` @misc{tavakoli2025milliontokensbenchmarkingenhancing, title={Beyond a Million Tokens: Benchmarking and Enhancing Long-Term Memory in LLMs}, author={Mohammad Tavakoli and Alireza Salemi and Carrie Ye and Mohamed Abdalla and Hamed Zamani and J Ross Mitchell}, year={2025}, eprint={2510.27246}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2510.27246}, } ```
# BEYOND A MILLION TOKENS: BENCHMARKING AND ENHANCING LONG-TERM MEMORY IN LLMS ## Abstract Evaluating the abilities of large language models (LLMs) for tasks that require long-term memory and thus long-context reasoning, for example in conversational settings, is hampered by the existing benchmarks, which often lack narrative coherence, cover narrow domains, and only test simple recall-oriented tasks. This paper introduces a comprehensive solution to these challenges. First, we present a novel framework for automatically generating long (up to 10M tokens), coherent, and topically diverse conversations, accompanied by probing questions targeting a wide range of memory abilities. From this, we construct BEAM, a new benchmark comprising 100 conversations and 2,000 validated questions. Second, to enhance model performance, we propose LIGHT–a framework inspired by human cognition that equips LLMs with three complementary memory systems: a long-term episodic memory, a short-term working memory, and a scratchpad for accumulating salient facts. Our experiments on BEAM reveal that even LLMs with 1M token context windows (with and without retrieval-augmentation) struggle as dialogues lengthen. In contrast, LIGHT consistently improves performance across various models, achieving an average improvement of 3.5%–12.69% over the strongest baselines, depending on the backbone LLM. An ablation study further confirms the contribution of each memory component. ## Dataset Description BEAM is a comprehensive dataset for evaluating long-term memory in language models. It contains multi-scale conversations (128K, 500K, 1M, and 10M tokens) across diverse domains, including general, coding, and math topics, and is used to evaluate ten distinct memory abilities. ## Dataset Structure Each conversation includes: - **Seed Information**: Category, title, theme, and subtopics - **Narratives**: Conversation labels - **Conversation Plan**: Storyline of the chat that unfolds chronologically - **User Questions**: User questions for the assistant - **Chat Data**: Full conversation history - **Probing Questions**: 10 different question types for evaluation - **User Profile**: User information and relationships - **Plans**: For 10M conversations, includes 10 different conversation plans ### 10M Conversations Special Structure 10M conversations have a unique structure with multiple plans: - **Main Conversation**: Overall conversation data - **Plans Array**: 10 different conversation plans (plan-0 through plan-9) - **Each Plan Contains**: - Plan-specific topic information - Plan-specific narratives and user profile - Plan-specific conversation plan - Plan-specific user questions - Plan-specific chat data ## Question Types 1. **Abstention**: Evaluates whether a model withholds answers when evidence is missing 2. **Contradiction Resolution**: Tests the capacity to detect and reconcile inconsistent statements across widely separated turns, maintaining global coherence 3. **Event Ordering**: Assesses whether a model can recognize and reconstruct the sequence of evolving information in the dialogue 4. **Information Extraction**: Measures recall of entities and factual details in long histories 5. **Instruction Following**: Examines sustained adherence to user-specified constraints over long contexts 6. **Knowledge Update**: Evaluates revising stored facts as new ones appear 7. **Multi-Session Reasoning**: Probes inference that integrates evidence across multiple, non-adjacent dialogue segments 8. 
**Preference Following**: Captures personalized responses that adapt to evolving preferences 9. **Summarization**: Assesses the ability to abstract and compress dialogue content 10. **Temporal Reasoning**: Tests reasoning about explicit and implicit time relations ## Usage ```python import ast from datasets import load_dataset # Load the dataset dataset = load_dataset("Mohammadta/BEAM") # Access different chat sizes dataset_100k = dataset["100K"] dataset_500k = dataset["500K"] dataset_1m = dataset["1M"] # Example: Get a regular conversation (100K, 500K, 1M) conversation = dataset_100k[0] print(conversation["conversation_seed"]) # Dict print(conversation["chat"]) # List print(conversation["probing_questions"]) # String # Direct access to dict objects (no parsing needed) profile_data = conversation["user_profile"] print(f"User info: {profile_data.get('user_info', '')}") print(f"Relationships: {profile_data.get('user_relationships', '')}") seed_data = conversation["conversation_seed"] print(f"Category: {seed_data.get('category', '')}") print(f"Title: {seed_data.get('title', '')}") # probing_questions is a string questions_string = conversation["probing_questions"] probing_questions_data = ast.literal_eval(questions_string) print(f"Probing questions: {probing_questions_data}") # Direct access to lists chat_data = conversation["chat"] print(f"Chats: {chat_data}") user_questions = conversation["user_questions"] print(f"User questions: {user_questions}") ``` ## Citation If you use this dataset in your research, please cite: [Beyond a Million Tokens: Benchmarking and Enhancing Long-Term Memory in LLMs](https://arxiv.org/abs/2510.27246) ``` @misc{tavakoli2025milliontokensbenchmarkingenhancing, title={Beyond a Million Tokens: Benchmarking and Enhancing Long-Term Memory in LLMs}, author={Mohammad Tavakoli and Alireza Salemi and Carrie Ye and Mohamed Abdalla and Hamed Zamani and J Ross Mitchell}, year={2025}, eprint={2510.27246}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2510.27246}, } ```
24
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2510.27246", "region:us" ]
2025-11-11T06:15:33+00:00
2025-11-11T06:18:30+00:00
0
dureduck/eval_so100_act_1109_lp_2loc_5x4_20samples_b8_20trials
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "so100_follower", "total_episodes": 20, "total_frames": 13675, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:20" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.state": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.images.external": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.wrist": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v3.0", "robot_type": "so100_follower", "total_episodes": 20, "total_frames": 13675, "total_tasks": 1, "chunks_size": 1000, "data_files_size_in_mb": 100, "video_files_size_in_mb": 500, "fps": 30, "splits": { "train": "0:20" }, "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet", "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4", "features": { "action": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.state": { "dtype": "float32", "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ], "shape": [ 6 ] }, "observation.images.external": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.wrist": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
29
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T06:07:36+00:00
2025-11-11T06:08:47+00:00
0
ming326/gen3_data_filter
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": null, "total_episodes": 210, "total_frames": 43486, "total_tasks": 2, "total_videos": 840, "total_chunks": 1, "chunks_size": 1000, "fps": 15, "splits": { "train": "0:210" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "observation.state": { "dtype": "float32", "shape": [ 8 ], "names": [ "joint_0_rad", "joint_1_rad", "joint_2_rad", "joint_3_rad", "joint_4_rad", "joint_5_rad", "joint_6_rad", "gripper_0to1" ] }, "observation.actions": { "dtype": "float32", "shape": [ 8 ], "names": [ "joint_0_rad", "joint_1_rad", "joint_2_rad", "joint_3_rad", "joint_4_rad", "joint_5_rad", "joint_6_rad", "gripper_0to1" ] }, "actions": { "dtype": "float32", "shape": [ 8 ], "names": [ "joint_0_rad", "joint_1_rad", "joint_2_rad", "joint_3_rad", "joint_4_rad", "joint_5_rad", "joint_6_rad", "gripper_0to1" ] }, "observation.images.wrist_left": { "dtype": "video", "shape": [ 224, 224, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 224, "video.width": 224, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 15, "video.channels": 3, "has_audio": false } }, "observation.images.third_people": { "dtype": "video", "shape": [ 224, 224, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 224, "video.width": 224, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 15, "video.channels": 3, "has_audio": false } }, "observation.digist1": { "dtype": "video", "shape": [ 320, 240, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 320, "video.width": 240, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 15, "video.channels": 3, "has_audio": false } }, "observation.digist2": { "dtype": "video", "shape": [ 320, 240, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 320, "video.width": 240, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 15, "video.channels": 3, "has_audio": false } }, "language_instruction": { "dtype": "string", "shape": [ 1 ] }, "is_first": { "dtype": "bool", "shape": [ 1 ] }, "is_last": { "dtype": "bool", "shape": [ 1 ] }, "is_terminal": { "dtype": "bool", "shape": [ 1 ] }, "is_episode_successful": { "dtype": "bool", "shape": [ 1 ] }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": null, "total_episodes": 210, "total_frames": 43486, "total_tasks": 2, "total_videos": 840, "total_chunks": 1, "chunks_size": 1000, "fps": 15, "splits": { "train": "0:210" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "observation.state": { "dtype": "float32", "shape": [ 8 ], "names": [ "joint_0_rad", "joint_1_rad", "joint_2_rad", "joint_3_rad", "joint_4_rad", "joint_5_rad", "joint_6_rad", "gripper_0to1" ] }, "observation.actions": { "dtype": "float32", "shape": [ 8 ], "names": [ "joint_0_rad", "joint_1_rad", "joint_2_rad", "joint_3_rad", "joint_4_rad", "joint_5_rad", "joint_6_rad", "gripper_0to1" ] }, "actions": { "dtype": "float32", "shape": [ 8 ], "names": [ "joint_0_rad", "joint_1_rad", "joint_2_rad", "joint_3_rad", "joint_4_rad", "joint_5_rad", "joint_6_rad", "gripper_0to1" ] }, "observation.images.wrist_left": { "dtype": "video", "shape": [ 224, 224, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 224, "video.width": 224, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 15, "video.channels": 3, "has_audio": false } }, "observation.images.third_people": { "dtype": "video", "shape": [ 224, 224, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 224, "video.width": 224, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 15, "video.channels": 3, "has_audio": false } }, "observation.digist1": { "dtype": "video", "shape": [ 320, 240, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 320, "video.width": 240, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 15, "video.channels": 3, "has_audio": false } }, "observation.digist2": { "dtype": "video", "shape": [ 320, 240, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 320, "video.width": 240, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 15, "video.channels": 3, "has_audio": false } }, "language_instruction": { "dtype": "string", "shape": [ 1 ] }, "is_first": { "dtype": "bool", "shape": [ 1 ] }, "is_last": { "dtype": "bool", "shape": [ 1 ] }, "is_terminal": { "dtype": "bool", "shape": [ 1 ] }, "is_episode_successful": { "dtype": "bool", "shape": [ 1 ] }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
188
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:text", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
2025-11-11T05:52:28+00:00
2025-11-11T06:06:36+00:00
0
ogulcanakca/epdk_sft
# EPDK SFT Dataset (v1.0) - Synthetic Question Answering This repository contains a **synthetic question-answering (QA) dataset** generated from the raw EPDK regulatory texts in the [`ogulcanakca/epdk_corpus`](https://huggingface.co/datasets/ogulcanakca/epdk_corpus) dataset. ## 1. Pipeline 1. **Source Data:** [`ogulcanakca/epdk_corpus`](https://huggingface.co/datasets/ogulcanakca/epdk_corpus) (v2) 2. **Generator Model:** [`obss/mt5-small-3task-both-tquad2`](https://huggingface.co/obss/mt5-small-3task-both-tquad2) 3. **Pipeline:** The `GenerationAPI` from the `obss/turkish-question-generation` repository was used. 4. **Process:** * Long documents in `epdk_corpus` were split into manageable chunks so that they fit the context window of the `mt5-small` model (`max_source_length: 512`). * The `answer-extraction` task of `GenerationAPI` first extracted candidate answer keywords from these chunks. * The `question-generation` task was then intended to use these answers to generate questions that stay faithful to the text and avoid hallucination. ```python from datasets import load_dataset ds = load_dataset("ogulcanakca/epdk_sft", revision="main") ``` ### Example Record ```json { "question": "Çevresel Etki Değerlendirmesi Olumlu Kararının mücbir sebepler dışında ne kadar süre içerisinde Kuruma sunulması zorunludur?", "answer": "bir yıl içerisinde" } ``` #### Citation * [`obss/mt5-small-3task-both-tquad2`](https://huggingface.co/obss/mt5-small-3task-both-tquad2) model: ```bibtex @article{akyon2022questgen, author = {Akyon, Fatih Cagatay and Cavusoglu, Ali Devrim Ekin and Cengiz, Cemil and Altinuc, Sinan Onur and Temizel, Alptekin}, doi = {10.3906/elk-1300-0632.3914}, journal = {Turkish Journal of Electrical Engineering and Computer Sciences}, title = {{Automated question generation and question answering from Turkish texts}}, url = {https://journals.tubitak.gov.tr/elektrik/vol30/iss5/17/}, year = {2022} } ```
# EPDK SFT Dataset (v1.0) - Synthetic Question Answering This repository contains a **synthetic question-answering (QA) dataset** generated from the raw EPDK regulatory texts in the [`ogulcanakca/epdk_corpus`](https://huggingface.co/datasets/ogulcanakca/epdk_corpus) dataset. ## 1. Pipeline 1. **Source Data:** [`ogulcanakca/epdk_corpus`](https://huggingface.co/datasets/ogulcanakca/epdk_corpus) (v2) 2. **Generator Model:** [`obss/mt5-small-3task-both-tquad2`](https://huggingface.co/obss/mt5-small-3task-both-tquad2) 3. **Pipeline:** The `GenerationAPI` from the `obss/turkish-question-generation` repository was used. 4. **Process:** * Long documents in `epdk_corpus` were split into manageable chunks so that they fit the context window of the `mt5-small` model (`max_source_length: 512`). * The `answer-extraction` task of `GenerationAPI` first extracted candidate answer keywords from these chunks. * The `question-generation` task was then intended to use these answers to generate questions that stay faithful to the text and avoid hallucination. ```python from datasets import load_dataset ds = load_dataset("ogulcanakca/epdk_sft", revision="main") ``` ### Example Record ```json { "question": "Çevresel Etki Değerlendirmesi Olumlu Kararının mücbir sebepler dışında ne kadar süre içerisinde Kuruma sunulması zorunludur?", "answer": "bir yıl içerisinde" } ``` #### Citation * [`obss/mt5-small-3task-both-tquad2`](https://huggingface.co/obss/mt5-small-3task-both-tquad2) model: ```bibtex @article{akyon2022questgen, author = {Akyon, Fatih Cagatay and Cavusoglu, Ali Devrim Ekin and Cengiz, Cemil and Altinuc, Sinan Onur and Temizel, Alptekin}, doi = {10.3906/elk-1300-0632.3914}, journal = {Turkish Journal of Electrical Engineering and Computer Sciences}, title = {{Automated question generation and question answering from Turkish texts}}, url = {https://journals.tubitak.gov.tr/elektrik/vol30/iss5/17/}, year = {2022} } ```
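For fine-tuning, the flat question/answer pairs usually need to be reshaped into a chat-style format first; a hedged sketch is below. The `question` and `answer` field names come from the example record above, while the `train` split name and the messages layout are assumptions rather than anything this repository prescribes.

```python
# Hedged sketch: reshape QA pairs into a chat-style SFT format.
# Assumptions: a "train" split exists and records expose "question"/"answer";
# the messages schema below is one common convention, not a requirement.
from datasets import load_dataset

ds = load_dataset("ogulcanakca/epdk_sft", revision="main", split="train")

def to_chat(example):
    return {
        "messages": [
            {"role": "user", "content": example["question"]},
            {"role": "assistant", "content": example["answer"]},
        ]
    }

sft_ds = ds.map(to_chat, remove_columns=ds.column_names)
print(sft_ds[0]["messages"])
```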
9
0
[ "language:tr", "license:mit", "size_categories:10K<n<100K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "epdk", "turkish", "question-answering", "sft" ]
2025-11-11T05:43:28+00:00
2025-11-11T06:08:26+00:00
0
pangyyyyy/OmniSpatial
# OmniSpatial Test Dataset Spatial reasoning benchmark for vision-language models (test split). **Note:** Images are stored in the `image_files/` folder. The `image_path` column contains the relative path to each image. ## Dataset Structure ### Columns - **id**: Sample identifier - **question**: The spatial reasoning question - **options**: List of answer choices - **answer**: Correct answer index (0-3) - **gt**: Ground truth answer letter (A/B/C/D) - **task_type**: Main task category - **sub_task_type**: Specific sub-task - **image_path**: Path to image file (e.g., "image_files/Complex_Logic/0.png") ## Usage ```python from datasets import load_dataset # Load dataset dataset = load_dataset("pangyyyyy/OmniSpatial", split="test") # Access a sample sample = dataset[0] print(sample['question']) print(sample['options']) print(f"Answer: {sample['gt']}") ``` ### Loading Images ```python from datasets import load_dataset from huggingface_hub import snapshot_download from PIL import Image import os # Download all files dataset_path = snapshot_download( repo_id="pangyyyyy/OmniSpatial", repo_type="dataset" ) # Load dataset dataset = load_dataset("pangyyyyy/OmniSpatial", split="test") # Load image sample = dataset[0] image_path = os.path.join(dataset_path, sample['image_path']) image = Image.open(image_path) image.show() ``` ## Source Original dataset: [https://huggingface.co/datasets/qizekun/OmniSpatial](https://huggingface.co/datasets/qizekun/OmniSpatial)
# OmniSpatial Test Dataset Spatial reasoning benchmark for vision-language models (test split). **Note:** Images are stored in the `image_files/` folder. The `image_path` column contains the relative path to each image. ## Dataset Structure ### Columns - **id**: Sample identifier - **question**: The spatial reasoning question - **options**: List of answer choices - **answer**: Correct answer index (0-3) - **gt**: Ground truth answer letter (A/B/C/D) - **task_type**: Main task category - **sub_task_type**: Specific sub-task - **image_path**: Path to image file (e.g., "image_files/Complex_Logic/0.png") ## Usage ```python from datasets import load_dataset # Load dataset dataset = load_dataset("pangyyyyy/OmniSpatial", split="test") # Access a sample sample = dataset[0] print(sample['question']) print(sample['options']) print(f"Answer: {sample['gt']}") ``` ### Loading Images ```python from datasets import load_dataset from huggingface_hub import snapshot_download from PIL import Image import os # Download all files dataset_path = snapshot_download( repo_id="pangyyyyy/OmniSpatial", repo_type="dataset" ) # Load dataset dataset = load_dataset("pangyyyyy/OmniSpatial", split="test") # Load image sample = dataset[0] image_path = os.path.join(dataset_path, sample['image_path']) image = Image.open(image_path) image.show() ``` ## Source Original dataset: [https://huggingface.co/datasets/qizekun/OmniSpatial](https://huggingface.co/datasets/qizekun/OmniSpatial)
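Because each sample exposes both an integer `answer` index and a letter `gt`, a simple letter-based accuracy loop is enough for evaluation; a hedged sketch follows. The `my_model_predict` function is a placeholder, and the 0 -> "A" mapping is inferred from the column descriptions above rather than verified.

```python
# Hedged sketch: score predictions against the `gt` letter and sanity-check
# the documented index/letter relation (0 -> "A", 1 -> "B", ...).
# `my_model_predict` is a placeholder for an actual model call.
from datasets import load_dataset

dataset = load_dataset("pangyyyyy/OmniSpatial", split="test")
letters = "ABCD"

def my_model_predict(sample):  # trivial baseline: always answer "A"
    return "A"

correct = sum(my_model_predict(s) == s["gt"] for s in dataset)
print(f"Accuracy: {correct / len(dataset):.3f}")

first = dataset[0]
print(first["answer"], first["gt"], letters[first["answer"]] == first["gt"])
```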
287
0
[ "task_categories:visual-question-answering", "language:en", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "spatial-reasoning", "vision-language", "benchmark" ]
2025-11-11T06:00:48+00:00
2025-11-11T06:04:47+00:00
0
Kkuntal990/test-braindecode-integration
# EEG Dataset This dataset was created using [braindecode](https://braindecode.org), a library for deep learning with EEG/MEG/ECoG signals. ## Dataset Information - **Number of recordings**: 1 - **Number of channels**: 26 - **Sampling frequency**: 250.0 Hz - **Data type**: Windowed (from Epochs object) - **Number of windows**: 48 - **Total size**: 0.04 MB - **Storage format**: zarr ## Usage To load this dataset: ```python from braindecode.datasets import BaseConcatDataset # Load dataset from Hugging Face Hub dataset = BaseConcatDataset.from_pretrained("username/dataset-name") # Access data X, y, metainfo = dataset[0] # X: EEG data (n_channels, n_times) # y: label/target # metainfo: window indices ``` ## Using with PyTorch DataLoader ```python from torch.utils.data import DataLoader # Create DataLoader for training train_loader = DataLoader( dataset, batch_size=32, shuffle=True, num_workers=4 ) # Training loop for X, y, _ in train_loader: # X shape: [batch_size, n_channels, n_times] # y shape: [batch_size] # Process your batch... ``` ## Dataset Format This dataset is stored in **Zarr** format, optimized for: - Fast random access during training (critical for PyTorch DataLoader) - Efficient compression with blosc - Cloud-native storage compatibility For more information about braindecode, visit: https://braindecode.org
# EEG Dataset This dataset was created using [braindecode](https://braindecode.org), a library for deep learning with EEG/MEG/ECoG signals. ## Dataset Information - **Number of recordings**: 1 - **Number of channels**: 26 - **Sampling frequency**: 250.0 Hz - **Data type**: Windowed (from Epochs object) - **Number of windows**: 48 - **Total size**: 0.04 MB - **Storage format**: zarr ## Usage To load this dataset: ```python from braindecode.datasets import BaseConcatDataset # Load dataset from Hugging Face Hub dataset = BaseConcatDataset.from_pretrained("username/dataset-name") # Access data X, y, metainfo = dataset[0] # X: EEG data (n_channels, n_times) # y: label/target # metainfo: window indices ``` ## Using with PyTorch DataLoader ```python from torch.utils.data import DataLoader # Create DataLoader for training train_loader = DataLoader( dataset, batch_size=32, shuffle=True, num_workers=4 ) # Training loop for X, y, _ in train_loader: # X shape: [batch_size, n_channels, n_times] # y shape: [batch_size] # Process your batch... ``` ## Dataset Format This dataset is stored in **Zarr** format, optimized for: - Fast random access during training (critical for PyTorch DataLoader) - Efficient compression with blosc - Cloud-native storage compatibility For more information about braindecode, visit: https://braindecode.org
6
0
[ "license:unknown", "size_categories:n<1K", "format:json", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "braindecode", "eeg", "neuroscience", "brain-computer-interface" ]
2025-11-11T06:00:14+00:00
2025-11-11T06:00:15+00:00
0
Yun5/RoboCerebra_TF
## Work-in-Progress I trained the model with this data, and the L1 loss went down, but for some reason the model keeps failing the tasks. ## Citation If you use this dataset or any part of the conversion pipeline, please cite the original RoboCerebra paper: ```bibtex @article{han2025robocerebra, title={RoboCerebra: A Large-scale Benchmark for Long-horizon Robotic Manipulation Evaluation}, author={Han, Songhao and Qiu, Boxiang and Liao, Yue and Huang, Siyuan and Gao, Chen and Yan, Shuicheng and Liu, Si}, journal={arXiv preprint arXiv:2506.06677}, year={2025} } ```
## Work-in-Progress I trained the model with this data, and the L1 loss went down, but for some reason the model keeps failing the tasks. ## Citation If you use this dataset or any part of the conversion pipeline, please cite the original RoboCerebra paper: ```bibtex @article{han2025robocerebra, title={RoboCerebra: A Large-scale Benchmark for Long-horizon Robotic Manipulation Evaluation}, author={Han, Songhao and Qiu, Boxiang and Liao, Yue and Huang, Siyuan and Gao, Chen and Yan, Shuicheng and Liu, Si}, journal={arXiv preprint arXiv:2506.06677}, year={2025} } ```
7
0
[ "license:mit", "arxiv:2506.06677", "region:us" ]
2025-11-11T05:38:35+00:00
2025-11-11T05:55:28+00:00
0