---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: organ
    dtype: image
  - name: gonogo
    dtype: image
  - name: id
    dtype: string
  splits:
  - name: train
    num_bytes: 197384771.0
    num_examples: 785
  - name: test
    num_bytes: 58310857.0
    num_examples: 230
  download_size: 255917924
  dataset_size: 255695628.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---

# Dataset Structure
This dataset contains laparoscopic images from cholecystectomy (gallbladder removal) surgery, with pixel-level annotations for go/no-go zones and organs.

# Data Fields
- **image**: The PIL image of the surgery view.
- **gonogo**: The (360, 640) label map of background (0), safe (1), and unsafe (2) regions.
- **organ**: The (360, 640) label map of background (0), liver (1), gallbladder (2), and hepatocystic triangle (3).
- **id**: A string identifier for the sample.
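As an illustrative sketch (assuming each label field decodes to a PIL image whose pixel values are the class indices above; the helper name here is ours, not part of the dataset), the label masks can be tallied per class like so:

```python
import numpy as np
from PIL import Image

# Class index -> name mappings, as documented in the Data Fields section.
gonogo_classes = {0: "background", 1: "safe", 2: "unsafe"}
organ_classes = {0: "background", 1: "liver", 2: "gallbladder", 3: "hepatocystic triangle"}

def class_counts(label_img: Image.Image, class_names: dict) -> dict:
    """Count pixels per class in a (360, 640) label image."""
    arr = np.asarray(label_img)
    return {name: int((arr == c).sum()) for c, name in class_names.items()}
```

For example, `class_counts(sample["gonogo"], gonogo_classes)` would report how many pixels of a frame are labeled safe versus unsafe.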

# Data Splits
- **train**: 785 samples (from 92 videos)
- **test**: 230 samples (from 26 videos)
- **Total**: 1015 samples (from 118 videos in total)

# Usage
```python
from datasets import load_dataset
train_dataset = load_dataset("BrachioLab/cholec", split="train")
test_dataset = load_dataset("BrachioLab/cholec", split="test")
```

# Data split
Note that we randomly split the data 8:2 at the video level so that the train and test splits have the same distribution. These splits may overlap with other datasets derived from Cholec80 and M2CAI2016; please take this overlap into consideration when using auxiliary data for training.
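A video-level 8:2 split can be produced along these lines (a minimal sketch; the exact seed and shuffling procedure used for this dataset are not specified, so this is illustrative only):

```python
import random

def split_videos(video_ids, train_frac=0.8, seed=0):
    """Illustrative video-level split: shuffle the video IDs with a fixed
    seed, then cut off the first train_frac fraction as the training set."""
    rng = random.Random(seed)
    ids = sorted(video_ids)  # sort first so the split is deterministic
    rng.shuffle(ids)
    cut = round(len(ids) * train_frac)
    return ids[:cut], ids[cut:]
```

Splitting by video (rather than by frame) keeps all frames from a given surgery in the same split, avoiding leakage of near-duplicate frames between train and test.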

Videos in the training set:
'M2CCAI2016_video103', 'cholec80_video44', 'M2CCAI2016_video92', 'cholec80_video47', 'cholec80_video59', 'cholec80_video74', 'M2CCAI2016_video98', 'cholec80_video65', 'M2CCAI2016_video81', 'cholec80_video05', 'M2CCAI2016_video90', 'cholec80_video13', 'M2CCAI2016_video83', 'M2CCAI2016_video115', 'cholec80_video22', 'cholec80_video19', 'M2CCAI2016_video114', 'cholec80_video23', 'M2CCAI2016_video86', 'cholec80_video53', 'cholec80_video39', 'M2CCAI2016_video121', 'cholec80_video51', 'M2CCAI2016_video87', 'cholec80_video08', 'cholec80_video07', 'cholec80_video27', 'cholec80_video12', 'M2CCAI2016_video84', 'M2CCAI2016_video106', 'cholec80_video15', 'cholec80_video61', 'cholec80_video43', 'M2CCAI2016_video117', 'M2CCAI2016_video109', 'cholec80_video46', 'cholec80_video35', 'cholec80_video18', 'cholec80_video37', 'M2CCAI2016_video112', 'M2CCAI2016_video99', 'cholec80_video67', 'cholec80_video71', 'M2CCAI2016_video104', 'cholec80_video50', 'M2CCAI2016_video110', 'M2CCAI2016_video100', 'M2CCAI2016_video102', 'M2CCAI2016_video94', 'cholec80_video80', 'cholec80_video20', 'cholec80_video34', 'M2CCAI2016_video96', 'cholec80_video69', 'cholec80_video25', 'cholec80_video60', 'cholec80_video64', 'cholec80_video48', 'M2CCAI2016_video118', 'M2CCAI2016_video108', 'cholec80_video73', 'M2CCAI2016_video101', 'cholec80_video77', 'cholec80_video79', 'M2CCAI2016_video105', 'cholec80_video54', 'cholec80_video30', 'cholec80_video49', 'cholec80_video14', 'cholec80_video62', 'M2CCAI2016_video120', 'M2CCAI2016_video88', 'cholec80_video42', 'cholec80_video09', 'cholec80_video76', 'M2CCAI2016_video93', 'M2CCAI2016_video91', 'cholec80_video45', 'cholec80_video68', 'M2CCAI2016_video111', 'cholec80_video32', 'cholec80_video70', 'M2CCAI2016_video119', 'cholec80_video41', 'cholec80_video75', 'cholec80_video38', 'M2CCAI2016_video89', 'cholec80_video16', 'cholec80_video26', 'cholec80_video72', 'cholec80_video29', 'cholec80_video21'

Videos in the test set:
'cholec80_video66', 'cholec80_video56', 'cholec80_video17', 'cholec80_video55', 'M2CCAI2016_video113', 'cholec80_video06', 'cholec80_video02', 'cholec80_video78', 'cholec80_video01', 'cholec80_video40', 'cholec80_video04', 'cholec80_video11', 'M2CCAI2016_video116', 'M2CCAI2016_video95', 'cholec80_video33', 'cholec80_video57', 'cholec80_video03', 'cholec80_video28', 'cholec80_video31', 'cholec80_video52', 'cholec80_video24', 'M2CCAI2016_video107', 'cholec80_video63', 'M2CCAI2016_video97', 'cholec80_video36', 'cholec80_video58'

# Citations
For the combined gonogo and organs labels, please cite FIX:
```
@misc{jin2024fix,
    title={The FIX Benchmark: Extracting Features Interpretable to eXperts},
    author={Helen Jin and Shreya Havaldar and Chaehyeon Kim and Anton Xue and Weiqiu You and Helen Qu and Marco Gatti and Daniel A Hashimoto and Bhuvnesh Jain and Amin Madani and Masao Sako and Lyle Ungar and Eric Wong},
    year={2024},
    eprint={2409.13684},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

Please also cite the original datasets:

Cholec80
```
@misc{twinanda2016endonetdeeparchitecturerecognition,
      title={EndoNet: A Deep Architecture for Recognition Tasks on Laparoscopic Videos}, 
      author={Andru P. Twinanda and Sherif Shehata and Didier Mutter and Jacques Marescaux and Michel de Mathelin and Nicolas Padoy},
      year={2016},
      eprint={1602.03012},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/1602.03012}, 
}
```

M2CAI2016
```
@misc{twinanda2016endonetdeeparchitecturerecognition,
      title={EndoNet: A Deep Architecture for Recognition Tasks on Laparoscopic Videos}, 
      author={Andru P. Twinanda and Sherif Shehata and Didier Mutter and Jacques Marescaux and Michel de Mathelin and Nicolas Padoy},
      year={2016},
      eprint={1602.03012},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/1602.03012}, 
}
```
```
@misc{stauder2017tumlapcholedatasetm2cai,
      title={The TUM LapChole dataset for the M2CAI 2016 workflow challenge}, 
      author={Ralf Stauder and Daniel Ostler and Michael Kranzfelder and Sebastian Koller and Hubertus Feußner and Nassir Navab},
      year={2017},
      eprint={1610.09278},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/1610.09278}, 
}
```