---
annotations_creators:
- expert-generated
language_creators:
- n/a
language:
- n/a
license:
- cc-by-4.0
multilinguality:
- n/a
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- other
dataset_info:
  features:
  - name: terrain_id
    dtype: int32
  - name: composition
    dtype: string
  - name: material_1
    dtype: string
  - name: material_2
    dtype: string
  - name: material_3
    dtype: string
  - name: sample_index
    dtype: int32
  - name: rgb_image
    dtype: image
  - name: depth_image
    dtype: image
  - name: depth_min
    dtype: float32
  - name: depth_max
    dtype: float32
  - name: ft_csv_path
    dtype: string
  - name: pixel_x
    dtype: float32
  - name: pixel_y
    dtype: float32
  - name: yaw
    dtype: float32
  - name: scoop_depth
    dtype: float32
  - name: stiffness
    dtype: float32
  - name: scooped_volume
    dtype: float32
  splits:
  - name: train
    num_bytes: 0
    num_examples: 0
  download_size: 0
  dataset_size: 0
pretty_name: Scooping Dataset
dataset_description: |
  The Scooping Dataset contains 6,700 samples collected over 67 terrains for the task of manipulating granular materials using a robotic arm. The dataset facilitates research in robotic manipulation, machine learning, and related fields.
homepage: https://drillaway.github.io/
repository: https://github.com/pthangeda/scooping-dataset
citation: |
  @inproceedings{Zhu-RSS-23,
    author    = {Zhu, Yifan and Thangeda, Pranay and Ornik, Melkior and Hauser, Kris},
    title     = {Few-shot Adaptation for Manipulating Granular Materials Under Domain Shift},
    booktitle = {Proceedings of Robotics: Science and Systems},
    year      = {2023},
    month     = {July},
    address   = {Daegu, Republic of Korea},
    doi       = {10.15607/RSS.2023.XIX.048}
  }
---

# Scooping Dataset

This dataset contains 6,700 samples collected over 67 terrains for the task of manipulating granular materials using a robotic arm. It is designed to facilitate research in robotic manipulation, machine learning, and related fields. Each sample includes:

- **Terrain Metadata**: Terrain ID, composition, and materials used.
- **RGB Image**: An RGB image of the terrain before scooping.
- **Depth Image**: Depth data corresponding to the terrain, saved as a 16-bit PNG image.
- **Depth Normalization Parameters**: `depth_min` and `depth_max`, used to reconstruct the original depth values.
- **F/T Sensor Data**: Force/torque sensor data captured during the scooping action, saved as a CSV file.
- **Action Parameters**: Scoop location, yaw angle, scoop depth, and stiffness.
- **Outcome**: Volume of material scooped.

## Dataset Structure

### Features

- `terrain_id` (`int32`): Terrain identifier (1-67).
- `composition` (`string`): Terrain composition (`single`, `partition`, `mixture`, `layers`).
- `material_1` (`string`): First material used in the terrain.
- `material_2` (`string`): Second material used in the terrain (empty string if not applicable).
- `material_3` (`string`): Third material used in the terrain (empty string if not applicable).
- `sample_index` (`int32`): Sample index within the terrain (1-100).
- `rgb_image` (`Image`): RGB image of the terrain before scooping.
- `depth_image` (`Image`): Depth data as a 16-bit PNG image.
- `depth_min` (`float32`): Minimum depth value used for normalization.
- `depth_max` (`float32`): Maximum depth value used for normalization.
- `ft_csv_path` (`string`): File path to the F/T sensor data (CSV file).
- `pixel_x` (`float32`): X-coordinate of the scoop location in the image.
- `pixel_y` (`float32`): Y-coordinate of the scoop location in the image.
- `yaw` (`float32`): Yaw angle of the end-effector (in radians).
- `scoop_depth` (`float32`): Depth of the scoop (in meters).
- `stiffness` (`float32`): Stiffness of the controller during the scoop action.
- `scooped_volume` (`float32`): Volume of material scooped (in cubic meters).

### Data Files

The dataset is organized with the following structure:

```
scooping_dataset/
├── scooping_dataset.arrow
├── dataset_info.json
├── rgb_images/
│   ├── terrain_1_sample_1_rgb.png
│   ├── terrain_1_sample_2_rgb.png
│   └── ...
├── depth_images/
│   ├── terrain_1_sample_1_depth.png
│   ├── terrain_1_sample_2_depth.png
│   └── ...
├── ft_data/
│   ├── terrain_1_sample_1_ft.csv
│   ├── terrain_1_sample_2_ft.csv
│   └── ...
```
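
Since the file names encode the terrain and sample indices, per-sample paths can be derived directly from the metadata. A minimal sketch (these helper functions are illustrative, not part of the dataset's tooling):

```python
# Illustrative helpers (not shipped with the dataset): build per-sample file
# paths from terrain_id and sample_index, following the naming scheme above.

def rgb_path(terrain_id: int, sample_index: int) -> str:
    return f"rgb_images/terrain_{terrain_id}_sample_{sample_index}_rgb.png"

def depth_path(terrain_id: int, sample_index: int) -> str:
    return f"depth_images/terrain_{terrain_id}_sample_{sample_index}_depth.png"

def ft_path(terrain_id: int, sample_index: int) -> str:
    return f"ft_data/terrain_{terrain_id}_sample_{sample_index}_ft.csv"

print(ft_path(1, 2))  # ft_data/terrain_1_sample_2_ft.csv
```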

## Usage

To use this dataset, load it with the Hugging Face `datasets` library:

```python
from datasets import load_from_disk
import numpy as np
import pandas as pd

# Load the dataset
dataset = load_from_disk('path_to_dataset/scooping_dataset')

# Access a sample
sample = dataset[0]

# Load RGB image
rgb_image = sample['rgb_image']  # PIL Image
rgb_image.show()

# Load depth image and reconstruct original depth values
depth_image = sample['depth_image']  # PIL Image
depth_array = np.array(depth_image).astype(np.float32)
depth_normalized = depth_array / 65535  # Normalize back to [0, 1]
depth_min = sample['depth_min']
depth_max = sample['depth_max']
original_depth = depth_normalized * (depth_max - depth_min) + depth_min

# Load F/T sensor data
ft_data = pd.read_csv(sample['ft_csv_path'], header=None).values

# Access action parameters
pixel_x = sample['pixel_x']
pixel_y = sample['pixel_y']
yaw = sample['yaw']
scoop_depth = sample['scoop_depth']
stiffness = sample['stiffness']
scooped_volume = sample['scooped_volume']
```
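
The depth reconstruction steps above can be collected into a small helper. This is a sketch of the same arithmetic (the function name is ours, not an official utility), demonstrated with synthetic values since it only needs the raw 16-bit array and the stored range:

```python
import numpy as np

def reconstruct_depth(depth_png, depth_min, depth_max):
    """Invert the 16-bit encoding: uint16 -> [0, 1] -> metric depth."""
    normalized = np.asarray(depth_png, dtype=np.float32) / 65535.0
    return normalized * (depth_max - depth_min) + depth_min

# Synthetic round-trip: pixel value 0 maps to depth_min, 65535 to depth_max
raw = np.array([[0, 65535]], dtype=np.uint16)
print(reconstruct_depth(raw, 0.5, 1.5))  # [[0.5 1.5]]
```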

### Loading All Data for a Specific Terrain ID

To load all samples corresponding to a specific terrain ID, filter the dataset using the `filter` method:

```python
# Specify the terrain ID of interest
terrain_id_of_interest = 10  # Replace with the desired terrain ID (1-67)

# Filter the dataset to include only samples from the specified terrain
terrain_samples = dataset.filter(lambda sample: sample['terrain_id'] == terrain_id_of_interest)

print(f"Number of samples for terrain {terrain_id_of_interest}: {len(terrain_samples)}")
for sample in terrain_samples:
    # Load RGB image
    rgb_image = sample['rgb_image']  # PIL Image

    # Load and reconstruct depth image
    depth_image = sample['depth_image']  # PIL Image
    depth_array = np.array(depth_image).astype(np.float32)
    depth_normalized = depth_array / 65535
    depth_min = sample['depth_min']
    depth_max = sample['depth_max']
    original_depth = depth_normalized * (depth_max - depth_min) + depth_min

    # Load F/T sensor data
    ft_data = pd.read_csv(sample['ft_csv_path'], header=None).values

    # Access action parameters
    pixel_x = sample['pixel_x']
    pixel_y = sample['pixel_y']
    yaw = sample['yaw']
    scoop_depth = sample['scoop_depth']
    stiffness = sample['stiffness']
    scooped_volume = sample['scooped_volume']

    # Perform your analysis or processing here
    # Example: print the scooped volume for each sample
    print(f"Sample {sample['sample_index']}: Scooped volume = {scooped_volume} cubic meters")
```
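
Beyond per-sample access, simple aggregates are easy to compute once the relevant fields are pulled out. A sketch (the helper is ours, and plain dicts stand in for dataset rows so the snippet is self-contained):

```python
from collections import defaultdict

def mean_volume_by_terrain(records):
    """Average scooped_volume per terrain_id over an iterable of samples."""
    totals = defaultdict(lambda: [0.0, 0])
    for r in records:
        acc = totals[r["terrain_id"]]
        acc[0] += r["scooped_volume"]
        acc[1] += 1
    return {tid: s / n for tid, (s, n) in totals.items()}

# Self-contained demo with stand-in rows; on the real dataset, pass the
# loaded (or filtered) samples instead.
records = [
    {"terrain_id": 1, "scooped_volume": 0.002},
    {"terrain_id": 1, "scooped_volume": 0.004},
    {"terrain_id": 2, "scooped_volume": 0.001},
]
print(mean_volume_by_terrain(records))
```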

## Dataset Creation

The dataset was created using the following steps:

1. **Data Collection**: 6,700 samples were collected using a UR5e robotic arm equipped with a scooping end-effector and an overhead RealSense L515 camera to capture RGB-D images.
2. **Terrain Preparation**: 67 terrains were prepared using combinations of 12 different materials and 4 types of compositions. Each terrain represents a unique task.
3. **Action Execution**: For each terrain, 100 scooping actions were performed with varying action parameters (scoop location, yaw, depth, and stiffness).
4. **Data Recording**: Before each action, an RGB-D image of the terrain was captured. During the action, F/T sensor data was recorded. After the action, the volume of material scooped was measured.

## License

This dataset is licensed under the **Creative Commons Attribution 4.0 International (CC BY 4.0) License**.

[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)

You are free to:

- **Share** — copy and redistribute the material in any medium or format.
- **Adapt** — remix, transform, and build upon the material for any purpose, even commercially.

Under the following terms:

- **Attribution** — You must give appropriate credit, provide a link to the license, and indicate if changes were made.

**Full License Text**: [https://creativecommons.org/licenses/by/4.0/legalcode](https://creativecommons.org/licenses/by/4.0/legalcode)

## Citation

If you use this dataset, please cite the following paper:

```bibtex
@inproceedings{Zhu-RSS-23,
  author    = {Zhu, Yifan and Thangeda, Pranay and Ornik, Melkior and Hauser, Kris},
  title     = {Few-shot Adaptation for Manipulating Granular Materials Under Domain Shift},
  booktitle = {Proceedings of Robotics: Science and Systems},
  year      = {2023},
  month     = {July},
  address   = {Daegu, Republic of Korea},
  doi       = {10.15607/RSS.2023.XIX.048}
}
```

## References

For more details and illustrations of the materials and compositions, please visit our [project website](https://drillaway.github.io/).