chengan98 committed on
Commit 75f48b4 · verified · 1 Parent(s): 26417cb

Update README.md

Files changed (1)
  1. README.md +12 -21
README.md CHANGED
@@ -1,15 +1,8 @@
  ---
  license: apache-2.0
  ---
- <p align="center">
- <a href="https://visurg.ai/">
- <img src="https://cdn-uploads.huggingface.co/production/uploads/67d9504a41d31cc626fcecc8/hr0txL0zblj3i2cV77OYQ.png">
- </a>
- </p>

-
-
- [📚 Paper](https://arxiv.org/abs/2503.19740) - [🤖 GitHub](https://github.com/visurg-ai/surg-3m)


  <div align="center">
@@ -17,19 +10,18 @@ license: apache-2.0
  </div>


- This is the official Hugging Face repository for the paper [Surg-3M: A Dataset and Foundation Model for Perception in Surgical Settings](https://arxiv.org/abs/2503.19740).

- This repository provides open access to the *Surg-FM* foundation model. For the *Surg-3M* dataset and our code, please see our at our GitHub repository at [🤖 Github](https://github.com/visurg-ai/surg-3m) .

- *Surg-FM* is an image foundation model for surgery, it receives an image as input and produces a feature vector of 1536 features as output.

- <!--The website of our dataset is: [http://surg-3m.org](https://surg-3m.org)-->

  If you use our dataset, model, or code in your research, please cite our paper:

  ```
  @misc{che2025surg3mdatasetfoundationmodel,
- title={Surg-3M: A Dataset and Foundation Model for Perception in Surgical Settings},
  author={Chengan Che and Chao Wang and Tom Vercauteren and Sophia Tsoka and Luis C. Garcia-Peraza-Herrera},
  year={2025},
  eprint={2503.19740},
@@ -41,20 +33,19 @@ If you use our dataset, model, or code in your research, please cite our paper:

  Abstract
  --------
- Advancements in computer-assisted surgical procedures heavily rely on accurate visual data interpretation from camera systems used during surgeries. Traditional open-access datasets focusing on surgical procedures are often limited by their small size, typically consisting of fewer than 100 videos with less than 100K images. To address these constraints, a new dataset called Surg-3M has been compiled using a novel aggregation pipeline that collects high-resolution videos from online sources. Featuring an extensive collection of over 4K surgical videos and more than 3 million high-quality images from multiple procedure types, Surg-3M offers a comprehensive resource surpassing existing alternatives in size and scope, including two novel tasks. To demonstrate the effectiveness of this dataset, we present SurgFM, a self-supervised foundation model pretrained on Surg-3M that achieves impressive results in downstream tasks such as surgical phase recognition, action recognition, and tool presence detection. Combining key components from ConvNeXt, DINO, and an innovative augmented distillation method, SurgFM exhibits exceptional performance compared to specialist architectures across various benchmarks. Our experimental results show that SurgFM outperforms state-of-the-art models in multiple downstream tasks, including significant gains in surgical phase recognition (+8.9pp, +4.7pp, and +3.9pp of Jaccard in AutoLaparo, M2CAI16, and Cholec80), action recognition (+3.1pp of mAP in CholecT50) and tool presence detection (+4.6pp of mAP in Cholec80). Moreover, even when using only half of the data, SurgFM outperforms state-of-the-art models in AutoLaparo and achieves state-of-the-art performance in Cholec80. Both Surg-3M and SurgFM have significant potential to accelerate progress towards developing autonomous robotic surgery systems.
-

- How to run our SurgFM foundation model to extract features from your video frames
  ----------------------------------------------------------------------------------

  ```python
  import torch
  from PIL import Image
- from model_loader import build_SurgFM

- # Load the pre-trained SurgFM model
- surgfm = build_SurgFM(pretrained_weights = 'your path to the SurgFM')
- surgfm.eval()

  # Load the image and convert it to a PyTorch tensor
  img_path = 'path/to/your/image.jpg'
@@ -63,5 +54,5 @@ How to run our SurgFM foundation model to extract features from your video frame
  img_tensor = torch.tensor(np.array(img)).unsqueeze(0).to('cuda')

  # Extract features from the image using the ResNet50 model
- outputs = surgfm(img_tensor)
  ```
 
  ---
  license: apache-2.0
  ---

+ [📚 Paper](https://arxiv.org/abs/2503.19740) - [🤖 GitHub](https://github.com/visurg-ai/LEMON)
 
 
  <div align="center">

  </div>

+ This is the official Hugging Face repository for the paper [LEMON: A Large Endoscopic MONocular Dataset and Foundation Model for Perception in Surgical Settings](https://arxiv.org/abs/2503.19740).

+ This repository provides open access to the *LemonFM* foundation model. For the *LEMON* dataset and our code, please see our GitHub repository: [🤖 GitHub](https://github.com/visurg-ai/LEMON).

+ *LemonFM* is an image foundation model for surgery: it takes an image as input and produces a 1536-dimensional feature vector as output.

  If you use our dataset, model, or code in your research, please cite our paper:

  ```
  @misc{che2025surg3mdatasetfoundationmodel,
+ title={LEMON: A Large Endoscopic MONocular Dataset and Foundation Model for Perception in Surgical Settings},
  author={Chengan Che and Chao Wang and Tom Vercauteren and Sophia Tsoka and Luis C. Garcia-Peraza-Herrera},
  year={2025},
  eprint={2503.19740},
 

  Abstract
  --------
+ Traditional open-access datasets focusing on surgical procedures are often limited by their small size, typically consisting of fewer than 100 videos and less than 30 hours of footage, which leads to poor model generalization. To address this constraint, a new dataset called LEMON has been compiled using a novel aggregation pipeline that collects high-resolution videos from online sources. Featuring an extensive collection of over 4K surgical videos totaling 938 hours (85 million frames) of high-quality footage across multiple procedure types, LEMON offers a comprehensive resource surpassing existing alternatives in size and scope, including two novel downstream tasks. To demonstrate the effectiveness of this diverse dataset, we introduce LemonFM, a foundation model pretrained on LEMON using a novel self-supervised augmented knowledge distillation approach. LemonFM consistently outperforms existing surgical foundation models across four downstream tasks and six datasets, achieving significant gains in surgical phase recognition (+9.5pp, +9.4pp, and +8.4pp of Jaccard in AutoLaparo, M2CAI16, and Cholec80), surgical action recognition (+4.4pp of mAP in CholecT50), surgical tool presence detection (+5.3pp and +10.2pp of mAP in Cholec80 and GraSP), and surgical semantic segmentation (+8.3pp of mDice in CholecSeg8k). LEMON and LemonFM will serve as foundational resources for the research community and industry, accelerating progress in developing autonomous robotic surgery systems and ultimately contributing to safer and more accessible surgical care worldwide.

+ How to run our LemonFM foundation model to extract features from your video frames
  ----------------------------------------------------------------------------------

  ```python
  import torch
  from PIL import Image
+ from model_loader import build_LemonFM

+ # Load the pre-trained LemonFM model
+ lemonfm = build_LemonFM(pretrained_weights='your path to the LemonFM')
+ lemonfm.eval()

  # Load the image and convert it to a PyTorch tensor
  img_path = 'path/to/your/image.jpg'

  img_tensor = torch.tensor(np.array(img)).unsqueeze(0).to('cuda')

  # Extract features from the image using the LemonFM model
+ outputs = lemonfm(img_tensor)
  ```
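
As rendered in the diff above, the snippet elides the `numpy` import and the image-loading line, so it will not run verbatim. Below is a minimal self-contained sketch of the same feature-extraction flow. Note the assumptions: `DummyFM` is a hypothetical stand-in module that only mimics LemonFM's 1536-dimensional output interface (the real model must be loaded with `build_LemonFM` from the repo's `model_loader`), the random array stands in for a decoded video frame, and everything runs on CPU for portability.

```python
import numpy as np
import torch
import torch.nn as nn

class DummyFM(nn.Module):
    """Hypothetical stand-in for LemonFM: any module mapping an image
    batch to a (batch, 1536) feature matrix has the same interface."""
    def __init__(self, dim: int = 1536):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # (B, 3, H, W) -> (B, 3, 1, 1)
        self.proj = nn.Linear(3, dim)         # (B, 3) -> (B, 1536)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(self.pool(x).flatten(1))

model = DummyFM().eval()  # real usage: build_LemonFM(pretrained_weights=...)

# Stand-in frame; real usage: np.array(Image.open(img_path).convert('RGB'))
frame = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)

# Preprocess: HWC uint8 -> CHW float in [0, 1], plus a batch dimension
img_tensor = torch.from_numpy(frame).permute(2, 0, 1).float().div(255.0).unsqueeze(0)

# Inference without gradient tracking
with torch.no_grad():
    features = model(img_tensor)

print(features.shape)  # torch.Size([1, 1536])
```

On a CUDA machine, move both model and tensor with `.to('cuda')` before the forward pass; the README snippet moves only the tensor, which would fail against a CPU-resident model.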