Add pipeline tag and improve model card documentation

#1
by nielsr (HF Staff) - opened

Files changed (1)
  1. README.md (+38, -32)
README.md CHANGED
@@ -1,26 +1,58 @@
  ---
  license: apache-2.0
  ---
  <div align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/67d9504a41d31cc626fcecc8/syYHwLv9Z2s7UfTejhvKm.png"> </img>
  </div>

- [📚 Paper](https://arxiv.org/abs/2503.19740) - [🤖 GitHub](https://github.com/visurg-ai/LEMON)

- This is the official Hugging Face repository for the paper [LEMON: A Large Endoscopic MONocular Dataset and Foundation Model for Perception in Surgical Settings](https://arxiv.org/abs/2503.19740).

- This repository provides open access to the *LemonFM* foundation model. For the *LEMON* dataset and our code, please see our GitHub repository at [🤖 GitHub](https://github.com/visurg-ai/LEMON).

- *LemonFM* is an image foundation model for surgery: it receives an image as input and produces a feature vector of 1536 features as output.

- If you use our dataset, model, or code in your research, please cite our paper:

  ```
  @misc{che2025lemonlargeendoscopicmonocular,
  title={LEMON: A Large Endoscopic MONocular Dataset and Foundation Model for Perception in Surgical Settings},
  author={Chengan Che and Chao Wang and Tom Vercauteren and Sophia Tsoka and Luis C. Garcia-Peraza-Herrera},
@@ -30,30 +62,4 @@ If you use our dataset, model, or code in your research, please cite our paper:
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2503.19740},
  }
- ```
-
- Abstract
- --------
- Traditional open-access datasets focusing on surgical procedures are often limited by their small size, typically consisting of fewer than 100 videos and less than 30 hours of footage, which leads to poor model generalization. To address this constraint, a new dataset called LEMON has been compiled using a novel aggregation pipeline that collects high-resolution videos from online sources. Featuring an extensive collection of over 4K surgical videos totaling 938 hours (85 million frames) of high-quality footage across multiple procedure types, LEMON offers a comprehensive resource surpassing existing alternatives in size and scope, including two novel downstream tasks. To demonstrate the effectiveness of this diverse dataset, we introduce LemonFM, a foundation model pretrained on LEMON using a novel self-supervised augmented knowledge distillation approach. LemonFM consistently outperforms existing surgical foundation models across four downstream tasks and six datasets, achieving significant gains in surgical phase recognition (+9.5pp, +9.4pp, and +8.4pp of Jaccard in AutoLaparo, M2CAI16, and Cholec80), surgical action recognition (+4.4pp of mAP in CholecT50), surgical tool presence detection (+5.3pp and +10.2pp of mAP in Cholec80 and GraSP), and surgical semantic segmentation (+8.3pp of mDice in CholecSeg8k). LEMON and LemonFM will serve as foundational resources for the research community and industry, accelerating progress in developing autonomous robotic surgery systems and ultimately contributing to safer and more accessible surgical care worldwide.
-
- How to run our LemonFM foundation model to extract features from your video frames
- ----------------------------------------------------------------------------------
-
- ```python
- import torch
- from PIL import Image
- from model_loader import build_LemonFM
-
- # Load the pre-trained LemonFM model
- lemonfm = build_LemonFM(pretrained_weights = 'your path to the LemonFM')
- lemonfm.eval()
-
- # Load the image and convert it to a PyTorch tensor
- img_path = 'path/to/your/image.jpg'
- img = Image.open(img_path)
- img = img.resize((224, 224))
- img_tensor = torch.tensor(np.array(img)).unsqueeze(0).to('cuda')
-
- # Extract features from the image using the ResNet50 model
- outputs = lemonfm(img_tensor)
- ```
 
  ---
  license: apache-2.0
+ pipeline_tag: image-feature-extraction
+ tags:
+ - medical
+ - surgery
+ - endoscopy
+ - vision-foundation-model
  ---
+
  <div align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/67d9504a41d31cc626fcecc8/syYHwLv9Z2s7UfTejhvKm.png"> </img>
  </div>

+ [📚 Paper](https://arxiv.org/abs/2503.19740) - [🤖 GitHub](https://github.com/visurg-ai/LEMON) - [🌐 Website](https://LEMON.visurg.ai/)

+ This is the official Hugging Face repository for **LemonFM**, the foundation model introduced in the paper [LEMON: A Large Endoscopic MONocular Dataset and Foundation Model for Perception in Surgical Settings](https://arxiv.org/abs/2503.19740).

+ ## Model Description
+ *LemonFM* is an image foundation model designed for surgical perception. It was pretrained on the LEMON dataset, a collection of over 4,000 high-resolution surgical videos (938 hours), using a self-supervised augmented knowledge distillation approach.

+ The model receives an image as input and produces a 1536-dimensional feature vector, which can be used for downstream tasks such as surgical phase recognition, action recognition, tool detection, and semantic segmentation.

+ ## Abstract
+ Traditional open-access datasets for surgical procedures are often limited by small scale, leading to poor model generalization. To address this, we present LEMON, a dataset featuring over 4K surgical videos totaling 938 hours across multiple procedure types. We introduce LemonFM, a foundation model pretrained on LEMON that consistently outperforms existing surgical foundation models across four downstream tasks and six datasets, achieving significant gains in surgical phase recognition, action recognition, tool presence detection, and surgical semantic segmentation.

+ ## Sample Usage

+ To run the LemonFM foundation model and extract features from your video frames, you can use the following snippet (requires `model_loader.py` from the [GitHub repo](https://github.com/visurg-ai/LEMON)):

+ ```python
+ import torch
+ import numpy as np
+ from PIL import Image
+ from model_loader import build_LemonFM
+
+ # Load the pre-trained LemonFM model
+ lemonfm = build_LemonFM(pretrained_weights='path/to/LemonFM.pth')
+ lemonfm.eval()
+
+ # Load the image and convert it to a float tensor in channels-first layout
+ img = Image.open('path/to/your/image.jpg').convert('RGB')
+ img = img.resize((224, 224))
+ img_tensor = torch.tensor(np.array(img), dtype=torch.float32).permute(2, 0, 1).unsqueeze(0).to('cuda')
+
+ # Extract features from the image
+ with torch.no_grad():
+     outputs = lemonfm(img_tensor)
+ ```
+
+ ## Citation
+ If you use our dataset, model, or code in your research, please cite our paper:
+
+ ```bibtex
  @misc{che2025lemonlargeendoscopicmonocular,
  title={LEMON: A Large Endoscopic MONocular Dataset and Foundation Model for Perception in Surgical Settings},
  author={Chengan Che and Chao Wang and Tom Vercauteren and Sophia Tsoka and Luis C. Garcia-Peraza-Herrera},

  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2503.19740},
  }
+ ```
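
The card describes LemonFM as emitting one 1536-dimensional feature vector per frame, to be consumed by lightweight heads for the listed downstream tasks. The sketch below illustrates that linear-probe pattern on top of frozen features. It is a minimal sketch only: the `StubBackbone` class is a random stand-in for the real `build_LemonFM` backbone (whose weights are not bundled here), and the 7-class phase count is an illustrative assumption borrowed from the Cholec80 annotation scheme.

```python
import torch
import torch.nn as nn

# Stand-in for LemonFM: any frozen module mapping images to 1536-d features.
# (Illustrative stub -- in practice, load the real model via build_LemonFM.)
class StubBackbone(nn.Module):
    def forward(self, x):            # x: (B, 3, 224, 224)
        return torch.randn(x.shape[0], 1536)

backbone = StubBackbone().eval()

# Linear probe: the backbone stays frozen; only this layer would be trained.
num_phases = 7                       # e.g. the seven phases annotated in Cholec80
probe = nn.Linear(1536, num_phases)

frames = torch.rand(4, 3, 224, 224)  # a mini-batch of video frames
with torch.no_grad():
    feats = backbone(frames)         # (4, 1536) frozen features
logits = probe(feats)                # (4, 7) per-frame phase logits
```

Training only the probe keeps the cost of each downstream experiment small, which is the usual way foundation-model features like these are evaluated.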