---
license: mit
task_categories:
  - robotics
  - object-detection
  - image-segmentation
tags:
  - robotics
  - grasping
  - grasp-detection
  - multi-object
  - depth
pretty_name: Multi-Object Grasp Detection Dataset
size_categories:
  - n<1K
---

# Multi-Object Grasp Detection Dataset

A dataset for robotic grasping with multiple objects per scene. Each sample contains an RGB image, a depth map, and grasp annotations for the objects in the scene.

## Dataset Details

- Total samples: 96 scenes
- Objects: various household items (screwdriver, pliers, scissors, wrench, etc.); object names are given in Russian
- Grasps per scene: multiple grasp poses per object
- Image resolution: 640x480 pixels

## Grasp Annotation Format

Each grasp rectangle is encoded compactly by two points, its center and a gripper opening point:

```python
{
    'grasp_id': int,            # index of the grasp within the object
    'center': [xc, yc],         # center of the grasp rectangle, in pixels
    'opening_point': [xo, yo],  # gripper opening point, in pixels
}
```
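Given only these two points, the gripper's in-plane orientation and opening can be recovered with simple 2D geometry. The helper below is a sketch under the assumption that the opening point marks one gripper jaw; that interpretation is not stated explicitly by the dataset card:

```python
import math

def grasp_angle_and_width(center, opening_point):
    """Derive gripper orientation (radians) and half-opening (pixels)
    from the two annotated points.

    Assumption: the opening point lies on the gripper's closing axis,
    one half-width away from the center. Verify against your scenes.
    """
    dx = opening_point[0] - center[0]
    dy = opening_point[1] - center[1]
    angle = math.atan2(dy, dx)       # orientation of the closing axis
    half_width = math.hypot(dx, dy)  # distance from center to the jaw
    return angle, half_width

angle, half_width = grasp_angle_and_width([100, 100], [130, 140])
```

The full opening width, if needed, is twice the returned half-width under this interpretation.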

## Usage

```python
from datasets import load_dataset

ds = load_dataset("vadim-vlm4vla/grasp_multiObject_upgrade")

example = ds['test'][0]

print(f"Image ID: {example['image_id']}")
print(f"Number of objects: {example['num_objects']}")
print(f"Total grasps: {example['num_grasps']}")

# Display the RGB image
example['image'].show()

# Iterate over objects and their grasp annotations
for obj in example['objects']:
    print(f"\nObject: {obj['object_name']}")
    print(f"  Grasps: {len(obj['grasps'])}")
    for grasp in obj['grasps']:
        print(f"    Grasp {grasp['grasp_id']}:")
        print(f"      Center: {grasp['center']}")
        print(f"      Opening: {grasp['opening_point']}")
```
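For training, it is often convenient to flatten a scene's nested annotations into a single array. A minimal sketch that follows the field names shown above (if your copy of the dataset uses different keys, adjust accordingly):

```python
import numpy as np

def scene_to_array(example):
    """Collect every grasp in a scene into an (N, 4) float32 array of
    [xc, yc, xo, yo] rows, plus a parallel list of object names.
    Field names ('objects', 'grasps', 'center', 'opening_point',
    'object_name') follow the schema shown in this card."""
    rows, names = [], []
    for obj in example['objects']:
        for g in obj['grasps']:
            rows.append([*g['center'], *g['opening_point']])
            names.append(obj['object_name'])
    return np.asarray(rows, dtype=np.float32), names

# Tiny hand-built scene for illustration
fake = {'objects': [{'object_name': 'wrench',
                     'grasps': [{'grasp_id': 0,
                                 'center': [10, 20],
                                 'opening_point': [30, 40]}]}]}
arr, names = scene_to_array(fake)
```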

## Visualization Example

```python
import matplotlib.pyplot as plt
import numpy as np

example = ds['test'][0]
image = np.array(example['image'])

fig, ax = plt.subplots(1, 1, figsize=(10, 8))
ax.imshow(image)

# One color per object
colors = plt.cm.rainbow(np.linspace(0, 1, example['num_objects']))

for obj_idx, obj in enumerate(example['objects']):
    color = colors[obj_idx]
    for grasp in obj['grasps']:
        center = grasp['center']
        opening = grasp['opening_point']

        # Draw center point
        ax.plot(center[0], center[1], 'o', color=color, markersize=8)

        # Draw line to opening point
        ax.plot([center[0], opening[0]], [center[1], opening[1]],
                '-', color=color, linewidth=2)

        # Draw opening point
        ax.plot(opening[0], opening[1], 's', color=color, markersize=6)

plt.title(f"Image {example['image_id']} - {example['num_objects']} objects, {example['num_grasps']} grasps")
plt.axis('off')
plt.show()
```
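Since each scene also ships a depth map, a grasp center in pixels can be lifted to a 3D point in the camera frame with a standard pinhole model. A sketch only: the card does not publish camera calibration, so the intrinsics below are hypothetical placeholders to replace with your camera's values:

```python
import numpy as np

def backproject(u, v, depth_m, fx, fy, cx, cy):
    """Pinhole back-projection of a pixel (u, v) with metric depth
    to camera-frame coordinates (x, y, z).

    fx, fy, cx, cy are placeholder intrinsics; the dataset card does
    not state calibration, so these must come from your own setup.
    """
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Example with made-up intrinsics for a 640x480 image
p = backproject(320, 240, 0.5, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```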

## Citation

If you use this dataset, please cite:

```bibtex
@dataset{multiobject_grasp_2025,
  title     = {Multi-Object Grasp Detection Dataset},
  author    = {Vadim Pyatochkin},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/vadim-vlm4vla/grasp_multiObject_upgrade}
}
```

If you find it helpful for your research, please also consider citing the foundational work on which this dataset is based:

```bibtex
@article{chu2018deep,
  title   = {Real-World Multiobject, Multigrasp Detection},
  author  = {F. Chu and R. Xu and P. A. Vela},
  journal = {IEEE Robotics and Automation Letters},
  year    = {2018},
  volume  = {3},
  number  = {4},
  pages   = {3355-3362},
  doi     = {10.1109/LRA.2018.2852777},
  issn    = {2377-3766},
  month   = {Oct}
}
```

## License

MIT License

## Contact

For questions or issues, please open an issue on the dataset repository.