---
license: cc-by-4.0
task_categories:
- image-segmentation
tags:
- medical
- surgical-instruments
- endoscopy
- robotic-surgery
- image-segmentation
pretty_name: Endovis 2017 - Robotic Instrument Segmentation
size_categories:
- 1K<n<10K
dataset_info:
  features:
  - name: image
    dtype: image
  - name: label
    dtype: image
  - name: image_id
    dtype: string
  - name: split
    dtype: string
  - name: file_name
    dtype: string
  - name: relative_path
    dtype: string
  - name: sequence_id
    dtype: int64
  splits:
  - name: train
    num_bytes: 1889651647
    num_examples: 1800
  - name: val
    num_bytes: 945876232
    num_examples: 901
  download_size: 2026707643
  dataset_size: 2835527879
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: val
    path: data/val-*
---

# Dataset Card for Endovis2017

## Dataset Description

### Dataset Summary

The Endovis2017 dataset contains preprocessed data for surgical instrument segmentation in robotic endoscopic procedures. It was released as part of the **MICCAI 2017 EndoVis Challenge** on robotic instrument segmentation.

The dataset includes high-resolution images from the da Vinci surgical system along with pixel-level segmentation annotations for surgical instruments. It is designed for training and evaluating computer vision models for surgical scene understanding and instrument tracking.

### Supported Tasks

- **Image Segmentation**: Pixel-level segmentation of surgical instruments in endoscopic images
- **Medical Image Analysis**: Understanding surgical scenes and instrument types
- **Computer-Assisted Surgery**: Real-time instrument detection and tracking

### Languages

Not applicable (image dataset).

## Dataset Structure

### Data Instances

Each instance in the dataset contains:

```python
{
    'image': PIL.Image,       # RGB endoscopic image
    'label': PIL.Image,       # Segmentation mask (grayscale)
    'image_id': str,          # Unique identifier
    'file_name': str,         # Original filename
    'split': str,             # 'train' or 'val'
    'relative_path': str,     # Path relative to dataset root
    'sequence_id': int        # Sequence/video ID (0 for train, 1-4 for val)
}
```

### Data Fields

- `image`: RGB image of size 640×480 or similar (varies by sequence)
- `label`: Grayscale segmentation mask matching the image dimensions
- `image_id`: Unique string identifier for the image
- `file_name`: Original filename (e.g., "frame000.png")
- `split`: Dataset split ("train" or "val")
- `relative_path`: Path relative to the dataset root directory
- `sequence_id`: Integer identifying the surgical sequence (0 for training, 1-4 for the validation sequences)

### Data Splits

| Split | Examples |
|-------|----------|
| train | 1,800 |
| val | 901 |
| **Total** | **2,701** |

The training set contains images from multiple surgical procedures, while the validation set is organized into four sequences (val1-val4) representing different surgical scenarios.

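If you only need one validation sequence, you can select it by `sequence_id`. A minimal sketch, demonstrated on mock records with the same field names so it runs without downloading anything (the `image_id` values here are made up for illustration):

```python
# Mock records mirroring the dataset's `image_id` / `sequence_id` fields
records = [
    {"image_id": "val1_frame000", "sequence_id": 1},
    {"image_id": "val2_frame000", "sequence_id": 2},
    {"image_id": "val2_frame001", "sequence_id": 2},
]

def select_sequence(examples, seq_id):
    """Keep only examples from one validation sequence (1-4)."""
    return [ex for ex in examples if ex["sequence_id"] == seq_id]

val2 = select_sequence(records, 2)
print([ex["image_id"] for ex in val2])  # ['val2_frame000', 'val2_frame001']
```

With the real dataset, the same selection is `load_dataset("tyluan/Endovis2017", split="val").filter(lambda ex: ex["sequence_id"] == 2)`.
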
## Dataset Creation

### Source Data

The dataset originates from the **2017 Robotic Instrument Segmentation Challenge** held at MICCAI 2017.

**Original Source**: [Zenodo Repository](https://zenodo.org/records/10527017)

#### Data Collection

Images were captured using the da Vinci surgical system during robotic-assisted surgical procedures. The dataset includes a variety of instrument types and surgical scenarios to support model generalization.

#### Annotations

Pixel-level segmentation masks were manually annotated by experts. The annotations include:

- Binary segmentation (instrument vs. background)
- Part-level segmentation (shaft, wrist, claspers)
- Instrument type classification

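If you only need instrument-vs-background labels, a multi-class mask can be collapsed to a binary one. A minimal sketch, assuming background pixels are encoded as 0 in the grayscale mask (verify the class encoding on your copy of the data first):

```python
import numpy as np

def to_binary_mask(mask):
    """Collapse a multi-class instrument mask to instrument (1) vs background (0)."""
    mask = np.asarray(mask)  # also accepts a PIL image via np.asarray
    return (mask > 0).astype(np.uint8)

# Synthetic 2x2 mask: two background pixels, two instrument-class pixels
demo = np.array([[0, 0], [10, 40]], dtype=np.uint8)
print(to_binary_mask(demo).tolist())  # [[0, 0], [1, 1]]
```
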
### Personal and Sensitive Information

The dataset contains surgical video frames but **does not include patient-identifiable information**. All images show only the surgical field and instruments, not patients.

## Considerations for Using the Data

### Social Impact

This dataset enables research in computer-assisted surgery and robotic surgery, which can potentially:

- Improve surgical outcomes through better instrument tracking
- Enable automated surgical skill assessment
- Advance autonomous surgical robotics

### Bias and Limitations

- Limited to the da Vinci surgical system (may not generalize to other platforms)
- Contains only certain types of surgical procedures
- Annotation quality may vary across sequences
- Dataset size is small compared to natural image datasets

### Recommendations

Users should:

- Test models on multiple surgical systems if deploying in production
- Consider domain adaptation techniques for different surgical contexts
- Validate performance on institution-specific data before clinical use
- Be aware of potential biases toward specific instrument types and surgical scenarios

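When validating on your own data, a standard overlap metric such as the Dice score is a reasonable starting point. A minimal NumPy sketch, shown for illustration only (not part of this dataset's tooling):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

p = np.array([[1, 1], [0, 0]])
t = np.array([[1, 0], [0, 0]])
print(round(dice_score(p, t), 3))  # 2*1 / (2+1) = 0.667
```
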
## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Download and cache the full dataset
dataset = load_dataset("tyluan/Endovis2017")

# Access splits
train_data = dataset['train']
val_data = dataset['val']

# Get a sample
sample = train_data[0]
image = sample['image']  # PIL Image
label = sample['label']  # PIL Image (segmentation mask)

print(f"Image size: {image.size}")
print(f"Label size: {label.size}")
```

### Streaming Mode (No Download)

For quick exploration without downloading the entire dataset:

```python
from datasets import load_dataset

# Stream the dataset
dataset = load_dataset("tyluan/Endovis2017", streaming=True)

# Iterate over samples
for sample in dataset['train']:
    image = sample['image']
    label = sample['label']
    # Process sample...
    break  # Just show the first sample
```

### Using with PyTorch

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torchvision import transforms

# Load dataset
dataset = load_dataset("tyluan/Endovis2017", split="train")

# Define transforms; masks need nearest-neighbour resizing so that
# class indices are not blurred by interpolation
image_transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
])
label_transform = transforms.Compose([
    transforms.Resize((256, 256), interpolation=transforms.InterpolationMode.NEAREST),
    transforms.PILToTensor(),
])

# Apply transforms
def apply_transforms(example):
    example['image'] = image_transform(example['image'])
    example['label'] = label_transform(example['label'])
    return example

dataset = dataset.map(apply_transforms)
dataset.set_format(type='torch', columns=['image', 'label'])

# Create DataLoader
dataloader = DataLoader(dataset, batch_size=8, shuffle=True)

# Iterate
for batch in dataloader:
    images = batch['image']  # Shape: [8, 3, 256, 256]
    labels = batch['label']  # Shape: [8, 1, 256, 256]
    # Train your model...
    break
```

### Integration with EasyMedSeg

This dataset is part of the EasyMedSeg framework:

```python
from dataloader.image import Endovis2017Dataset

# Download mode (recommended)
dataset = Endovis2017Dataset(
    mode='download',
    split='train',
    hf_repo_id='tyluan/Endovis2017'
)

# Streaming mode
from dataloader.image import Endovis2017StreamingDataset

streaming_dataset = Endovis2017StreamingDataset(
    split='val',
    shuffle=True
)
```

## Additional Information

### Dataset Curators

The original dataset was curated by the MICCAI 2017 EndoVis Challenge organizers.

The HuggingFace version was prepared by the EasyMedSeg team.

### Licensing Information

This dataset is licensed under **Creative Commons Attribution 4.0 International (CC BY 4.0)**.

When using this dataset, you must:

- Give appropriate credit to the original authors
- Provide a link to the license: https://creativecommons.org/licenses/by/4.0/
- Indicate if changes were made

### Citation Information

If you use this dataset in your research, please cite:

```bibtex
@article{allan2019endovis,
  title={2017 Robotic Instrument Segmentation Challenge},
  author={Allan, Max and Shvets, Alex and Kurmann, Thomas and Zhang, Zichen and Duggal, Rahul and Su, Yun-Hsuan and Rieke, Nicola and Laina, Iro and Kalavakonda, Niveditha and Bodenstedt, Sebastian and others},
  journal={arXiv preprint arXiv:1902.06426},
  year={2019}
}
```

### Contributions

Thanks to:

- The MICCAI 2017 EndoVis Challenge organizers for creating the dataset
- The original annotators for the high-quality segmentation masks
- The EasyMedSeg team for preparing the HuggingFace version

### Contact

For questions or issues with this HuggingFace version, please open an issue in the [EasyMedSeg repository](https://github.com/EasyMedSeg/EasyMedSeg).

For questions about the original dataset, refer to the [challenge website](https://opencas.dkfz.de/endovis/) or the [Zenodo repository](https://zenodo.org/records/10527017).

## References

- [MICCAI 2017 EndoVis Challenge](https://opencas.dkfz.de/endovis/)
- [Original Paper (arXiv:1902.06426)](https://arxiv.org/abs/1902.06426)
- [Zenodo Repository](https://zenodo.org/records/10527017)
- [EasyMedSeg Framework](https://github.com/EasyMedSeg/EasyMedSeg)