chengan-jaime committed 796ffe8 (parent: dfdb0da): code update
README.md
---
license: apache-2.0
---
<p align="center">
<img src="https://github.com/user-attachments/assets/04f6e2eb-1380-448e-a3f6-eed3e9dbf177">
</p>

[📚 Paper](TODO) - [🤖 Code](src) - [🤗 Model](TODO)

Star ⭐ us if you like it!

<div align="center">
<img src="https://github.com/user-attachments/assets/6250cd6a-1404-4786-9c15-fe396265940d" width="70%">
</div>

## News

<!-- XX/March/2025. The [HuggingFace models and demo](TODO) are released. -->
<!--<br>-->
XX/March/2025. The [arXiv](TODO) version of the paper is released.

<br>

This is the official repository for the paper [Surg-3M: A Dataset and Foundation Model for Perception in Surgical Settings](TODO).

This repository provides open access to the *Surg-3M* dataset, the *Surg-FM* foundation model, and the training code.

*Surg-3M* is a dataset of 4K high-resolution surgical videos (3M frames when sampled at 1 fps) covering 35 diverse surgical procedure types. Each video is annotated for multi-label classification, indicating the surgical procedures carried out in the video, and for binary classification, indicating whether the procedure is robotic or non-robotic. The dataset's annotations are available in [labels.json](https://github.com/visurg-ai/surg-3m/blob/main/labels.json).
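As a minimal sketch of how such per-video annotations could be consumed, the snippet below parses a small JSON excerpt shaped like the labels described above. The field names (`procedures`, `robotic`) and video IDs are illustrative assumptions, not the actual labels.json schema; check the file itself for the real structure.

```python
import json

# Hypothetical excerpt mimicking the kind of per-video annotations
# described above; the real schema in labels.json may differ.
sample = json.loads("""
{
  "video_0001": {"procedures": ["cholecystectomy"], "robotic": false},
  "video_0002": {"procedures": ["prostatectomy", "lymphadenectomy"], "robotic": true}
}
""")

# Multi-label task: which surgical procedures appear in each video.
procedures = {vid: meta["procedures"] for vid, meta in sample.items()}

# Binary task: robotic vs. non-robotic.
robotic_videos = [vid for vid, meta in sample.items() if meta["robotic"]]

print(procedures["video_0002"])  # ['prostatectomy', 'lymphadenectomy']
print(robotic_videos)            # ['video_0002']
```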

*Surg-FM* is an image foundation model for surgery: it takes an image as input and produces a 1536-dimensional feature vector as output.
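Such 1536-dimensional feature vectors are typically fed to a lightweight downstream head or compared directly, e.g. for frame retrieval. As a hedged illustration (pure Python, with random stand-in vectors rather than real Surg-FM outputs), here is how two such feature vectors could be compared via cosine similarity:

```python
import math
import random

DIM = 1536  # dimensionality of the Surg-FM output feature vector

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Stand-in feature vectors; in practice these would come from Surg-FM.
random.seed(0)
frame_a = [random.gauss(0, 1) for _ in range(DIM)]
frame_b = [random.gauss(0, 1) for _ in range(DIM)]

print(round(cosine_similarity(frame_a, frame_a), 3))  # 1.0 (identical frames)
print(-1.0 <= cosine_similarity(frame_a, frame_b) <= 1.0)  # True
```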

<!--The website of our dataset is: [http://surg-3m.org](https://surg-3m.org)-->

If you use our dataset, model, or code in your research, please cite our paper:

```bibtex
@inproceedings{Che2025,
   author = {Chengan Che and Chao Wang and Tom Vercauteren and Sophia Tsoka and Luis C. Garcia-Peraza-Herrera},
   booktitle = {TODO: use the correct arXiv citation here},
   month = {3},
   publisher = {arXiv},
   title = {Surg-3M: A Dataset and Foundation Model for Perception in Surgical Settings},
   year = {2025}
}
```

Abstract
--------
Advancements in computer-assisted surgical procedures heavily rely on accurate visual data interpretation from camera systems used during surgeries. Traditional open-access datasets focusing on surgical procedures are often limited by their small size, typically consisting of fewer than 100 videos and fewer than 100K images. To address these constraints, a new dataset called Surg-3M has been compiled using a novel aggregation pipeline that collects high-resolution videos from online sources. Featuring an extensive collection of over 4K surgical videos and more than 3 million high-quality images from multiple procedure types, Surg-3M offers a comprehensive resource surpassing existing alternatives in size and scope, including two novel tasks. To demonstrate the effectiveness of this dataset, we present SurgFM, a self-supervised foundation model pretrained on Surg-3M that achieves impressive results in downstream tasks such as surgical phase recognition, action recognition, and tool presence detection. Combining key components from ConvNeXt, DINO, and an innovative augmented distillation method, SurgFM exhibits exceptional performance compared to specialist architectures across various benchmarks. Our experimental results show that SurgFM outperforms state-of-the-art models in multiple downstream tasks, including significant gains in surgical phase recognition (+8.9pp, +4.7pp, and +3.9pp of Jaccard in AutoLaparo, M2CAI16, and Cholec80), action recognition (+3.1pp of mAP in CholecT50), and tool presence detection (+4.6pp of mAP in Cholec80). Moreover, even when using only half of the data, SurgFM outperforms state-of-the-art models in AutoLaparo and achieves state-of-the-art performance in Cholec80. Both Surg-3M and SurgFM have significant potential to accelerate progress towards developing autonomous robotic surgery systems.