danelcsb committed on
Commit 57901c8 · verified · 1 Parent(s): 010abda

Update README.md

Files changed (1):
  1. README.md +46 -29

README.md CHANGED
@@ -1,9 +1,11 @@
  ---
  library_name: transformers
- tags: []
  ---

- # Model Card for Model ID

  <!-- Provide a quick summary of what the model is/does. -->

@@ -13,25 +15,25 @@ tags: []

  ### Model Description

- <!-- Provide a longer summary of what this model is. -->

  This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- - **Developed by:** [More Information Needed]
  - **Funded by [optional]:** [More Information Needed]
  - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
  - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
  - **Finetuned from model [optional]:** [More Information Needed]

  ### Model Sources [optional]

  <!-- Provide the basic links for the model. -->

- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

  ## Uses

@@ -39,9 +41,15 @@ This is the model card of a 🤗 transformers model that has been pushed on the

  ### Direct Use

- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

- [More Information Needed]

  ### Downstream Use [optional]

@@ -57,15 +65,15 @@ This is the model card of a 🤗 transformers model that has been pushed on the

  ## Bias, Risks, and Limitations

- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]

  ### Recommendations

- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

  ## How to Get Started with the Model

@@ -77,13 +85,15 @@ Use the code below to get started with the model.

  ### Training Data

- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

- [More Information Needed]

  ### Training Procedure

- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

  #### Preprocessing [optional]

@@ -108,9 +118,7 @@ Use the code below to get started with the model.

  #### Testing Data

- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]

  #### Factors

@@ -120,13 +128,17 @@ Use the code below to get started with the model.

  #### Metrics

- <!-- These are the evaluation metrics being used, ideally with a description of why. -->

- [More Information Needed]

  ### Results

- [More Information Needed]

  #### Summary

@@ -174,11 +186,16 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]

  **BibTeX:**

- [More Information Needed]

  **APA:**

- [More Information Needed]

  ## Glossary [optional]

@@ -192,8 +209,8 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]

  ## Model Card Authors [optional]

- [More Information Needed]

  ## Model Card Contact

- [More Information Needed]

  ---
  library_name: transformers
+ license: apache-2.0
+ pipeline_tag: image-segmentation
  ---

+ # Model Card for SAM 2: Segment Anything in Images and Videos
+

  <!-- Provide a quick summary of what the model is/does. -->


  ### Model Description

+ SAM 2 (Segment Anything Model 2) is a foundation model developed by Meta FAIR for promptable visual segmentation across both images and videos. It extends the capabilities of the original SAM by introducing a memory-driven, streaming architecture that enables real-time, interactive segmentation and tracking of objects even as they change or temporarily disappear across video frames. SAM 2 achieves state-of-the-art segmentation accuracy with significantly improved speed and data efficiency, outperforming existing models on both images and videos.

  This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

+ - **Developed by:** Meta FAIR (Meta AI Research). Authors: Nikhila Ravi, Valentin Gabeur, Yuan-Ting Hu, Ronghang Hu, Chaitanya Ryali, Tengyu Ma, Haitham Khedr, Roman Rädle, Chloe Rolland, Laura Gustafson, Eric Mintun, Junting Pan, Kalyan Vasudev Alwala, Nicolas Carion, Chao-Yuan Wu, Ross Girshick, Piotr Dollár, and Christoph Feichtenhofer.
  - **Funded by [optional]:** [More Information Needed]
  - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** Transformer-based promptable visual segmentation model with a streaming memory module for videos.
  - **Language(s) (NLP):** [More Information Needed]
+ - **License:** Apache-2.0, BSD 3-Clause
  - **Finetuned from model [optional]:** [More Information Needed]

  ### Model Sources [optional]

  <!-- Provide the basic links for the model. -->

+ - **Repository:** https://github.com/facebookresearch/sam2
+ - **Paper [optional]:** https://arxiv.org/abs/2408.00714
+ - **Demo [optional]:** https://ai.meta.com/sam2/

  ## Uses


  ### Direct Use

+ SAM 2 is designed for:
+
+ - **Promptable segmentation:** select any object in an image or video using points, boxes, or masks as prompts.
+ - **Zero-shot segmentation:** performs strongly even on objects, image domains, or videos not seen during training.
+ - **Real-time, interactive applications:** track or segment objects across frames, allowing corrections and refinements with new prompts as needed.
+ - **Research and industrial applications:** facilitates precise object segmentation in video editing, robotics, AR, medical imaging, and more.

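The point and box prompts listed above can be illustrated with toy arrays. The names and the (batch, num_prompts, ...) shapes below follow the convention of the original SAM interface and are assumptions for illustration, not the exact SAM 2 API:

```python
import numpy as np

# Illustrative prompt encodings for promptable segmentation.
# Shapes follow a (batch, num_prompts, ...) layout; these names and
# shapes are assumptions for illustration, not the exact SAM 2 API.
point_coords = np.array([[[450.0, 600.0]]])        # one (x, y) click
point_labels = np.array([[1]])                     # 1 = foreground, 0 = background
box = np.array([[[75.0, 275.0, 1725.0, 850.0]]])   # one (x0, y0, x1, y1) box

print(point_coords.shape, point_labels.shape, box.shape)
```

A mask prompt would instead be a dense 2D array aligned with the image resolution.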
  ### Downstream Use [optional]


  ## Bias, Risks, and Limitations

+ Generalization limits: while designed for zero-shot generalization, rare or unseen visual domains may challenge model reliability.

  ### Recommendations

+ - Human-in-the-loop review is advised for critical use cases.
+ - Users should evaluate, and possibly retrain or fine-tune, SAM 2 for highly specific domains.
+ - Ethical and privacy considerations must be taken into account, especially in surveillance or other sensitive settings.

  ## How to Get Started with the Model

 

  ### Training Data

+ Trained using a data engine that collected the largest known video segmentation dataset, SA-V (Segment Anything Video dataset), via interactive human-model collaboration.

+ Focused on full objects and parts, not restricted by semantic classes.

  ### Training Procedure

+ - **Preprocessing:** images and videos processed into masklets (spatio-temporal masks); prompts collected via human and model interaction loops.
+ - **Training regime:** standard transformer training routines with enhancements for real-time processing; likely mixed precision for scaling to large datasets.

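Since "masklet" may be unfamiliar: a masklet is one segmentation mask per video frame for a single tracked object. A toy sketch of that layout (illustrative only, not the dataset's actual storage format):

```python
import numpy as np

# A "masklet" as a (frames, height, width) boolean array: one mask per
# frame for one tracked object. Toy sizes; illustrative layout only.
T, H, W = 4, 6, 8
masklet = np.zeros((T, H, W), dtype=bool)
masklet[0, 1:3, 2:5] = True   # object visible in frame 0 (2x3 region)
masklet[1, 1:4, 3:6] = True   # moves and grows in frame 1 (3x3 region)
# frames 2-3 stay empty: the object temporarily disappears

areas = masklet.reshape(T, -1).sum(axis=1)  # per-frame pixel area
visible = areas > 0                         # presence indicator per frame
print(areas.tolist(), visible.tolist())
```

Representing occlusion as empty frames is what lets a tracker re-acquire the object when it reappears.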
  #### Preprocessing [optional]


  #### Testing Data

+ Evaluated on SA-V and other standard video and image segmentation benchmarks.

  #### Factors


  #### Metrics

+ - Segmentation accuracy (IoU, Dice).
+ - Prompt efficiency (number of user interactions needed to reach target quality).
+ - Speed/throughput (frames per second).

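The two overlap metrics named above can be sketched as follows; this is a minimal illustration on toy masks, not the official evaluation code:

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-Union between two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union else 1.0

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * float(inter) / float(total) if total else 1.0

pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
gt   = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
# intersection = 2, union = 4, |pred| = |gt| = 3
print(iou(pred, gt), dice(pred, gt))  # 0.5 and 2/3
```

Both metrics reward pixel overlap; Dice weights the intersection more heavily, so it is more forgiving on small objects.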

  ### Results

+ - Video segmentation: higher accuracy with 3x fewer user prompts versus prior approaches.
+ - Image segmentation: 6x faster and more accurate than the original SAM.

  #### Summary


  **BibTeX:**

+ @article{ravi2024sam2,
+   title={SAM 2: Segment Anything in Images and Videos},
+   author={Nikhila Ravi and Valentin Gabeur and Yuan-Ting Hu and Ronghang Hu and Chaitanya Ryali and Tengyu Ma and Haitham Khedr and Roman R{\"a}dle and Chloe Rolland and Laura Gustafson and Eric Mintun and Junting Pan and Kalyan Vasudev Alwala and Nicolas Carion and Chao-Yuan Wu and Ross Girshick and Piotr Doll{\'a}r and Christoph Feichtenhofer},
+   journal={arXiv preprint arXiv:2408.00714},
+   year={2024}
+ }

  **APA:**

+ Ravi, N., Gabeur, V., Hu, Y.-T., Hu, R., Ryali, C., Ma, T., Khedr, H., Rädle, R., Rolland, C., Gustafson, L., Mintun, E., Pan, J., Alwala, K. V., Carion, N., Wu, C.-Y., Girshick, R., Dollár, P., & Feichtenhofer, C. (2024). SAM 2: Segment Anything in Images and Videos. arXiv preprint arXiv:2408.00714.

  ## Glossary [optional]

 

  ## Model Card Authors [optional]

+ [Sangbum Choi](https://www.linkedin.com/in/daniel-choi-86648216b/) and [Yoni Gozlan](https://huggingface.co/yonigozlan)

  ## Model Card Contact

+ Meta FAIR (contact via support@segment-anything.com)