Creador301 committed
Commit 84f17ba · verified · 1 Parent(s): 392e71a

Upload 12 files
Sam3/LICENSE ADDED
@@ -0,0 +1,61 @@
SAM License
Last Updated: November 19, 2025

“Agreement” means the terms and conditions for use, reproduction, distribution and modification of the SAM Materials set forth herein.

“SAM Materials” means, collectively, Documentation and the models, software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code, and other elements of the foregoing distributed by Meta and made available under this Agreement.

“Documentation” means the specifications, manuals and documentation accompanying SAM Materials distributed by Meta.

“Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.

“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) or Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).

“Sanctions” means any economic or trade sanctions or restrictions administered or enforced by the United States (including the Office of Foreign Assets Control of the U.S. Department of the Treasury (“OFAC”), the U.S. Department of State and the U.S. Department of Commerce), the United Nations, the European Union, or the United Kingdom.

“Trade Controls” means any of the following: Sanctions and applicable export and import controls.

By using or distributing any portion or element of the SAM Materials, you agree to be bound by this Agreement.

1. License Rights and Redistribution.

a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the SAM Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the SAM Materials.

b. Redistribution and Use.

i. Distribution of SAM Materials, and any derivative works thereof, are subject to the terms of this Agreement. If you distribute or make the SAM Materials, or any derivative works thereof, available to a third party, you may only do so under the terms of this Agreement and you shall provide a copy of this Agreement with any such SAM Materials.

ii. If you submit for publication the results of research you perform on, using, or otherwise in connection with SAM Materials, you must acknowledge the use of SAM Materials in your publication.

iii. Your use of the SAM Materials must comply with applicable laws and regulations, including Trade Control Laws and applicable privacy and data protection laws.

iv. Your use of the SAM Materials will not involve or encourage others to reverse engineer, decompile or discover the underlying components of the SAM Materials.

v. You are not the target of Trade Controls and your use of SAM Materials must comply with Trade Controls. You agree not to use, or permit others to use, SAM Materials for any activities subject to the International Traffic in Arms Regulations (ITAR) or end uses prohibited by Trade Controls, including those related to military or warfare purposes, nuclear industries or applications, espionage, or the development or use of guns or illegal weapons.

2. User Support. Your use of the SAM Materials is done at your own discretion; Meta does not process any information nor provide any service in relation to such use. Meta is under no obligation to provide any support services for the SAM Materials. Any support provided is “as is”, “with all faults”, and without warranty of any kind.

3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE SAM MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE SAM MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE SAM MATERIALS AND ANY OUTPUT AND RESULTS.

4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY DIRECT OR INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.

5. Intellectual Property.

a. Subject to Meta’s ownership of SAM Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the SAM Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.

b. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the SAM Materials, outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the SAM Materials.

6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the SAM Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the SAM Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.

7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.

8. Modifications and Amendments. Meta may modify this Agreement from time to time; provided that they are similar in spirit to the current version of the Agreement, but may differ in detail to address new problems or concerns. All such changes will be effective immediately. Your continued use of the SAM Materials after any modification to this Agreement constitutes your agreement to such modification. Except as provided in this Agreement, no modification or addition to any provision of this Agreement will be binding unless it is in writing and signed by an authorized representative of both you and Meta.
Sam3/README.md ADDED
@@ -0,0 +1,715 @@
---
license: other
extra_gated_fields:
  First Name: text
  Last Name: text
  Date of birth: date_picker
  Country: country
  Affiliation: text
  Job title:
    type: select
    options:
      - Student
      - Research Graduate
      - AI researcher
      - AI developer/engineer
      - Reporter
      - Other
  geo: ip_location
  By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected, stored, processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: >-
  The information you provide will be collected, stored, processed and shared in
  accordance with the [Meta Privacy
  Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
language:
  - en
pipeline_tag: mask-generation
library_name: transformers
tags:
  - sam3
---

SAM 3 is a unified foundation model for promptable segmentation in images and videos. It can detect, segment, and track objects using text or visual prompts such as points, boxes, and masks. Compared to its predecessor [SAM 2](https://github.com/facebookresearch/sam2), SAM 3 introduces the ability to exhaustively segment all instances of an open-vocabulary concept specified by a short text phrase or exemplars. Unlike prior work, SAM 3 can handle a vastly larger set of open-vocabulary prompts. It achieves 75-80% of human performance on our new [SA-CO benchmark](https://github.com/facebookresearch/sam3/blob/main/README.md#sa-co-dataset), which contains 270K unique concepts, over 50 times more than existing benchmarks.

### Basic Usage

```python
import torch

#################################### For Image ####################################
from PIL import Image
from sam3.model_builder import build_sam3_image_model
from sam3.model.sam3_image_processor import Sam3Processor

# Load the model
model = build_sam3_image_model()
processor = Sam3Processor(model)

# Load an image
image = Image.open("<YOUR_IMAGE_PATH.jpg>")
inference_state = processor.set_image(image)

# Prompt the model with text
output = processor.set_text_prompt(state=inference_state, prompt="<YOUR_TEXT_PROMPT>")

# Get the masks, bounding boxes, and scores
masks, boxes, scores = output["masks"], output["boxes"], output["scores"]

#################################### For Video ####################################
from sam3.model_builder import build_sam3_video_predictor

video_predictor = build_sam3_video_predictor()
video_path = "<YOUR_VIDEO_PATH>"  # a JPEG folder or an MP4 video file

# Start a session
response = video_predictor.handle_request(
    request=dict(
        type="start_session",
        resource_path=video_path,
    )
)
response = video_predictor.handle_request(
    request=dict(
        type="add_prompt",
        session_id=response["session_id"],
        frame_index=0,  # Arbitrary frame index
        text="<YOUR_TEXT_PROMPT>",
    )
)
output = response["outputs"]
```

The official code is publicly released in the [sam3 repo](https://github.com/facebookresearch/sam3).
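The image output above bundles parallel `masks`, `boxes`, and `scores`; a common follow-up step is keeping only the confident detections. A minimal plain-Python sketch of that filtering logic (the values below are made-up stand-ins for real model outputs):

```python
def filter_by_score(masks, boxes, scores, threshold=0.5):
    # Keep the detections whose confidence exceeds the threshold,
    # preserving the pairing between masks, boxes, and scores.
    keep = [i for i, s in enumerate(scores) if s > threshold]
    return ([masks[i] for i in keep],
            [boxes[i] for i in keep],
            [scores[i] for i in keep])

# Toy stand-ins for the model outputs
masks = ["mask0", "mask1", "mask2"]
boxes = [[0, 0, 10, 10], [5, 5, 20, 20], [1, 1, 4, 4]]
scores = [0.92, 0.31, 0.77]

masks, boxes, scores = filter_by_score(masks, boxes, scores)
print(scores)  # [0.92, 0.77]
```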
## Usage with 🤗 Transformers

### SAM3 - Promptable Concept Segmentation (PCS) for Images

SAM3 performs Promptable Concept Segmentation (PCS) on images, taking text and/or image exemplars as prompts and returning segmentation masks for **all matching object instances** in the image.

#### Text-Only Prompts

```python
>>> from transformers import Sam3Processor, Sam3Model
>>> import torch
>>> from PIL import Image
>>> import requests

>>> device = "cuda" if torch.cuda.is_available() else "cpu"

>>> model = Sam3Model.from_pretrained("facebook/sam3").to(device)
>>> processor = Sam3Processor.from_pretrained("facebook/sam3")

>>> # Load image
>>> image_url = "http://images.cocodataset.org/val2017/000000077595.jpg"
>>> image = Image.open(requests.get(image_url, stream=True).raw).convert("RGB")

>>> # Segment using text prompt
>>> inputs = processor(images=image, text="ear", return_tensors="pt").to(device)

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # Post-process results
>>> results = processor.post_process_instance_segmentation(
...     outputs,
...     threshold=0.5,
...     mask_threshold=0.5,
...     target_sizes=inputs.get("original_sizes").tolist()
... )[0]

>>> print(f"Found {len(results['masks'])} objects")
>>> # Results contain:
>>> # - masks: Binary masks resized to original image size
>>> # - boxes: Bounding boxes in absolute pixel coordinates (xyxy format)
>>> # - scores: Confidence scores
```
You can display masks using a simple helper like the following:

```python
import numpy as np
import matplotlib
from PIL import Image

def overlay_masks(image, masks):
    image = image.convert("RGBA")
    masks = 255 * masks.cpu().numpy().astype(np.uint8)

    n_masks = masks.shape[0]
    cmap = matplotlib.colormaps.get_cmap("rainbow").resampled(n_masks)
    colors = [
        tuple(int(c * 255) for c in cmap(i)[:3])
        for i in range(n_masks)
    ]

    for mask, color in zip(masks, colors):
        mask = Image.fromarray(mask)
        overlay = Image.new("RGBA", image.size, color + (0,))
        alpha = mask.point(lambda v: int(v * 0.5))
        overlay.putalpha(alpha)
        image = Image.alpha_composite(image, overlay)
    return image
```

Then you can save the resulting composite image or display it in a notebook:

```python
>>> overlay_masks(image, results["masks"])
```
#### Single Bounding Box Prompt

Segment objects using a bounding box:

```python
>>> # Box in xyxy format: [x1, y1, x2, y2] in pixel coordinates
>>> # Example: laptop region
>>> box_xyxy = [100, 150, 500, 450]
>>> input_boxes = [[box_xyxy]]  # [batch, num_boxes, 4]
>>> input_boxes_labels = [[1]]  # 1 = positive box

>>> inputs = processor(
...     images=image,
...     input_boxes=input_boxes,
...     input_boxes_labels=input_boxes_labels,
...     return_tensors="pt"
... ).to(device)

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # Post-process results
>>> results = processor.post_process_instance_segmentation(
...     outputs,
...     threshold=0.5,
...     mask_threshold=0.5,
...     target_sizes=inputs.get("original_sizes").tolist()
... )[0]
```

#### Multiple Box Prompts (Positive and Negative)

Use multiple boxes to refine the concept; each box can be labeled positive (1) or negative (0):

```python
>>> # Load kitchen image
>>> kitchen_url = "http://images.cocodataset.org/val2017/000000136466.jpg"
>>> kitchen_image = Image.open(requests.get(kitchen_url, stream=True).raw).convert("RGB")

>>> # Define two positive boxes (e.g., dial and button on oven)
>>> # Boxes are in xyxy format [x1, y1, x2, y2] in pixel coordinates
>>> box1_xyxy = [59, 144, 76, 163]  # Dial box
>>> box2_xyxy = [87, 148, 104, 159]  # Button box
>>> input_boxes = [[box1_xyxy, box2_xyxy]]
>>> input_boxes_labels = [[1, 1]]  # Both positive

>>> inputs = processor(
...     images=kitchen_image,
...     input_boxes=input_boxes,
...     input_boxes_labels=input_boxes_labels,
...     return_tensors="pt"
... ).to(device)

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # Post-process results
>>> results = processor.post_process_instance_segmentation(
...     outputs,
...     threshold=0.5,
...     mask_threshold=0.5,
...     target_sizes=inputs.get("original_sizes").tolist()
... )[0]
>>> overlay_masks(kitchen_image, results["masks"])
```
#### Combined Prompts (Text + Negative Box)

Use text prompts with negative visual prompts to refine the concept:

```python
>>> # Segment "handle" but exclude the oven handle using a negative box
>>> text = "handle"
>>> # Negative box covering oven handle area (xyxy): [40, 183, 318, 204]
>>> oven_handle_box = [40, 183, 318, 204]
>>> input_boxes = [[oven_handle_box]]

>>> inputs = processor(
...     images=kitchen_image,
...     text=text,
...     input_boxes=input_boxes,
...     input_boxes_labels=[[0]],  # 0 = negative (exclude this region)
...     return_tensors="pt"
... ).to(device)

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # Post-process results
>>> results = processor.post_process_instance_segmentation(
...     outputs,
...     threshold=0.5,
...     mask_threshold=0.5,
...     target_sizes=inputs.get("original_sizes").tolist()
... )[0]
>>> # This will segment pot handles but exclude the oven handle
```

#### Batched Inference with Text Prompts

Process multiple images, each with its own text prompt, in a single batch:

```python
>>> cat_url = "http://images.cocodataset.org/val2017/000000077595.jpg"
>>> kitchen_url = "http://images.cocodataset.org/val2017/000000136466.jpg"
>>> images = [
...     Image.open(requests.get(cat_url, stream=True).raw).convert("RGB"),
...     Image.open(requests.get(kitchen_url, stream=True).raw).convert("RGB")
... ]

>>> text_prompts = ["ear", "dial"]

>>> inputs = processor(images=images, text=text_prompts, return_tensors="pt").to(device)

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # Post-process results for both images
>>> results = processor.post_process_instance_segmentation(
...     outputs,
...     threshold=0.5,
...     mask_threshold=0.5,
...     target_sizes=inputs.get("original_sizes").tolist()
... )

>>> print(f"Image 1: {len(results[0]['masks'])} objects found")
>>> print(f"Image 2: {len(results[1]['masks'])} objects found")
```
#### Batched Mixed Prompts

Use different prompt types for different images in the same batch:

```python
>>> # Image 1: text prompt "laptop"
>>> # Image 2: visual prompt (dial box)
>>> box2_xyxy = [59, 144, 76, 163]

>>> inputs = processor(
...     images=images,
...     text=["laptop", None],  # Only first image has text
...     input_boxes=[None, [box2_xyxy]],  # Only second image has box
...     input_boxes_labels=[None, [1]],  # Positive box for second image
...     return_tensors="pt"
... ).to(device)

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # Post-process results for both images
>>> results = processor.post_process_instance_segmentation(
...     outputs,
...     threshold=0.5,
...     mask_threshold=0.5,
...     target_sizes=inputs.get("original_sizes").tolist()
... )
>>> # Both images processed in a single forward pass
```

#### Semantic Segmentation Output

SAM3 also provides semantic segmentation alongside instance masks:

```python
>>> inputs = processor(images=image, text="ear", return_tensors="pt").to(device)

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # Instance segmentation masks
>>> instance_masks = torch.sigmoid(outputs.pred_masks)  # [batch, num_queries, H, W]

>>> # Semantic segmentation (single channel)
>>> semantic_seg = outputs.semantic_seg  # [batch, 1, H, W]

>>> print(f"Instance masks: {instance_masks.shape}")
>>> print(f"Semantic segmentation: {semantic_seg.shape}")
```
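Assuming the single-channel `semantic_seg` map holds raw logits (as `pred_masks` does, per the sigmoid applied above), a pixel-wise foreground map can be obtained by applying a sigmoid and thresholding at 0.5. A stdlib-only sketch of that thresholding on toy logit values:

```python
import math

def sigmoid(x):
    # Map a logit to a probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

# Toy logits standing in for one row of a semantic segmentation map
logits = [-2.0, 0.1, 3.5, -0.4]
foreground = [1 if sigmoid(v) > 0.5 else 0 for v in logits]
print(foreground)  # [0, 1, 1, 0]
```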
### SAM3 Video - Promptable Concept Segmentation (PCS) for Videos

SAM3 Video performs Promptable Concept Segmentation (PCS) on videos, taking text as prompts and detecting and tracking **all matching object instances** across video frames.

#### Pre-loaded Video Inference

Process a video with all frames already available using text prompts:

```python
>>> from transformers import Sam3VideoModel, Sam3VideoProcessor
>>> from accelerate import Accelerator
>>> import torch

>>> device = Accelerator().device
>>> model = Sam3VideoModel.from_pretrained("facebook/sam3").to(device, dtype=torch.bfloat16)
>>> processor = Sam3VideoProcessor.from_pretrained("facebook/sam3")

>>> # Load video frames
>>> from transformers.video_utils import load_video
>>> video_url = "https://huggingface.co/datasets/hf-internal-testing/sam2-fixtures/resolve/main/bedroom.mp4"
>>> video_frames, _ = load_video(video_url)

>>> # Initialize video inference session
>>> inference_session = processor.init_video_session(
...     video=video_frames,
...     inference_device=device,
...     processing_device="cpu",
...     video_storage_device="cpu",
...     dtype=torch.bfloat16,
... )

>>> # Add text prompt to detect and track objects
>>> text = "person"
>>> inference_session = processor.add_text_prompt(
...     inference_session=inference_session,
...     text=text,
... )

>>> # Process all frames in the video
>>> outputs_per_frame = {}
>>> for model_outputs in model.propagate_in_video_iterator(
...     inference_session=inference_session, max_frame_num_to_track=50
... ):
...     processed_outputs = processor.postprocess_outputs(inference_session, model_outputs)
...     outputs_per_frame[model_outputs.frame_idx] = processed_outputs

>>> print(f"Processed {len(outputs_per_frame)} frames")
Processed 51 frames

>>> # Access results for a specific frame
>>> frame_0_outputs = outputs_per_frame[0]
>>> print(f"Detected {len(frame_0_outputs['object_ids'])} objects")
>>> print(f"Object IDs: {frame_0_outputs['object_ids'].tolist()}")
>>> print(f"Scores: {frame_0_outputs['scores'].tolist()}")
>>> print(f"Boxes shape (XYXY format, absolute coordinates): {frame_0_outputs['boxes'].shape}")
>>> print(f"Masks shape: {frame_0_outputs['masks'].shape}")
```
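Because `outputs_per_frame` maps frame indices to per-frame detections, simple track statistics fall out of a single dictionary pass. A sketch (with hypothetical object IDs standing in for real postprocessed outputs) that counts how many frames each tracked object appears in:

```python
from collections import Counter

# Hypothetical per-frame outputs: frame_idx -> detected object IDs
outputs_per_frame = {
    0: {"object_ids": [1, 2]},
    1: {"object_ids": [1, 2, 3]},
    2: {"object_ids": [1]},
}

# Track lifetime = number of frames in which each object ID was detected
lifetimes = Counter(
    obj_id
    for frame_outputs in outputs_per_frame.values()
    for obj_id in frame_outputs["object_ids"]
)
print(dict(lifetimes))  # {1: 3, 2: 2, 3: 1}
```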
#### Streaming Video Inference

For real-time applications, the Transformers implementation of SAM3 Video supports processing video frames as they arrive:

```python
>>> # Initialize session for streaming
>>> streaming_inference_session = processor.init_video_session(
...     inference_device=device,
...     processing_device="cpu",
...     video_storage_device="cpu",
...     dtype=torch.bfloat16,
... )

>>> # Add text prompt
>>> text = "person"
>>> streaming_inference_session = processor.add_text_prompt(
...     inference_session=streaming_inference_session,
...     text=text,
... )

>>> # Process frames one by one (streaming mode)
>>> streaming_outputs_per_frame = {}
>>> for frame_idx, frame in enumerate(video_frames[:50]):  # Process first 50 frames
...     # First, process the frame using the processor
...     inputs = processor(images=frame, device=device, return_tensors="pt")
...
...     # Process frame using streaming inference - pass the processed pixel_values
...     model_outputs = model(
...         inference_session=streaming_inference_session,
...         frame=inputs.pixel_values[0],  # Providing a processed frame enables streaming mode
...         reverse=False,
...     )
...
...     # Post-process outputs with original_sizes for proper resolution handling
...     processed_outputs = processor.postprocess_outputs(
...         streaming_inference_session,
...         model_outputs,
...         original_sizes=inputs.original_sizes,  # Required for streaming inference
...     )
...     streaming_outputs_per_frame[frame_idx] = processed_outputs
...
...     if (frame_idx + 1) % 10 == 0:
...         print(f"Processed {frame_idx + 1} frames...")

>>> print(f"✓ Streaming inference complete! Processed {len(streaming_outputs_per_frame)} frames")
✓ Streaming inference complete! Processed 50 frames

>>> # Access results
>>> frame_0_outputs = streaming_outputs_per_frame[0]
>>> print(f"Detected {len(frame_0_outputs['object_ids'])} objects in first frame")
>>> print(f"Boxes are in XYXY format (absolute pixel coordinates): {frame_0_outputs['boxes'].shape}")
>>> print(f"Masks are at original video resolution: {frame_0_outputs['masks'].shape}")
```

<div class="warning">
⚠️ **Note on Streaming Inference Quality**: Streaming inference disables hotstart heuristics that remove unmatched and duplicate objects, as these require access to future frames to make informed decisions. This may result in more false positive detections and duplicate object tracks compared to pre-loaded video inference. For best results, use pre-loaded video inference when all frames are available.
</div>
### SAM3 Tracker - Promptable Visual Segmentation (PVS) for Images

Sam3Tracker performs Promptable Visual Segmentation (PVS) on images, taking interactive visual prompts (points, boxes, masks) to segment a **specific object instance** per prompt. It is an updated version of SAM2 that maintains the same API while providing improved performance, making it a drop-in replacement for SAM2 workflows.

#### Automatic Mask Generation with Pipeline

```python
>>> from transformers import pipeline

>>> generator = pipeline("mask-generation", model="facebook/sam3", device=0)
>>> image_url = "https://huggingface.co/datasets/hf-internal-testing/sam2-fixtures/resolve/main/truck.jpg"
>>> outputs = generator(image_url, points_per_batch=64)

>>> len(outputs["masks"])  # Number of masks generated
```

#### Basic Image Segmentation

##### Single Point Click

```python
>>> from transformers import Sam3TrackerProcessor, Sam3TrackerModel
>>> from accelerate import Accelerator
>>> import torch
>>> from PIL import Image
>>> import requests

>>> device = Accelerator().device

>>> model = Sam3TrackerModel.from_pretrained("facebook/sam3").to(device)
>>> processor = Sam3TrackerProcessor.from_pretrained("facebook/sam3")

>>> image_url = "https://huggingface.co/datasets/hf-internal-testing/sam2-fixtures/resolve/main/truck.jpg"
>>> raw_image = Image.open(requests.get(image_url, stream=True).raw).convert("RGB")

>>> input_points = [[[[500, 375]]]]  # Single point click, 4 dimensions (image_dim, object_dim, point_per_object_dim, coordinates)
>>> input_labels = [[[1]]]  # 1 for positive click, 0 for negative click, 3 dimensions (image_dim, object_dim, point_label)

>>> inputs = processor(images=raw_image, input_points=input_points, input_labels=input_labels, return_tensors="pt").to(model.device)

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> masks = processor.post_process_masks(outputs.pred_masks.cpu(), inputs["original_sizes"])[0]

>>> # The model outputs multiple mask predictions ranked by quality score
>>> print(f"Generated {masks.shape[1]} masks with shape {masks.shape}")
```
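The point and label nesting described in the comments above (points as [images][objects][points][x, y], labels as [images][objects][point_label]) is easy to get wrong. A small stdlib check of the list depths used in this example:

```python
def depth(nested):
    # Count how many list levels wrap the innermost scalar.
    d = 0
    while isinstance(nested, list):
        d += 1
        nested = nested[0]
    return d

input_points = [[[[500, 375]]]]  # one image, one object, one point
input_labels = [[[1]]]           # matching label nesting, one level shallower

print(depth(input_points), depth(input_labels))  # 4 3
```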
##### Multiple Points for Refinement

```python
>>> # Add extra points to refine the mask (labels: 1 = positive, 0 = negative)
>>> input_points = [[[[500, 375], [1125, 625]]]]  # Multiple points for refinement
>>> input_labels = [[[1, 1]]]  # Both positive clicks

>>> inputs = processor(images=raw_image, input_points=input_points, input_labels=input_labels, return_tensors="pt").to(device)

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> masks = processor.post_process_masks(outputs.pred_masks.cpu(), inputs["original_sizes"])[0]
```

##### Bounding Box Input

```python
>>> # Define bounding box as [x_min, y_min, x_max, y_max]
>>> input_boxes = [[[75, 275, 1725, 850]]]

>>> inputs = processor(images=raw_image, input_boxes=input_boxes, return_tensors="pt").to(device)

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> masks = processor.post_process_masks(outputs.pred_masks.cpu(), inputs["original_sizes"])[0]
```

##### Multiple Objects Segmentation

```python
>>> # Define points for two different objects
>>> input_points = [[[[500, 375]], [[650, 750]]]]  # Points for two objects in same image
>>> input_labels = [[[1], [1]]]  # Positive clicks for both objects

>>> inputs = processor(images=raw_image, input_points=input_points, input_labels=input_labels, return_tensors="pt").to(model.device)

>>> with torch.no_grad():
...     outputs = model(**inputs, multimask_output=False)

>>> # Each object gets its own mask
>>> masks = processor.post_process_masks(outputs.pred_masks.cpu(), inputs["original_sizes"])[0]
>>> print(f"Generated masks for {masks.shape[0]} objects")
Generated masks for 2 objects
```
549
+
550
+ #### Batch Inference
551
+
552
+
553
+ ```python
554
+ >>> # Load multiple images
555
+ >>> image_urls = [
556
+ ... "https://huggingface.co/datasets/hf-internal-testing/sam2-fixtures/resolve/main/truck.jpg",
557
+ ... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/dog-sam.png"
558
+ ... ]
559
+ >>> raw_images = [Image.open(requests.get(url, stream=True).raw).convert("RGB") for url in image_urls]
560
+
561
+ >>> # Single point per image
562
+ >>> input_points = [[[[500, 375]]], [[[770, 200]]]] # One point for each image
563
+ >>> input_labels = [[[1]], [[1]]] # Positive clicks for both images
564
+
565
+ >>> inputs = processor(images=raw_images, input_points=input_points, input_labels=input_labels, return_tensors="pt").to(model.device)
566
+
567
+ >>> with torch.no_grad():
568
+ ... outputs = model(**inputs, multimask_output=False)
569
+
570
+ >>> # Post-process masks for each image
571
+ >>> all_masks = processor.post_process_masks(outputs.pred_masks.cpu(), inputs["original_sizes"])
572
+ >>> print(f"Processed {len(all_masks)} images, each with {all_masks[0].shape[0]} objects")
573
+ ```
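Once masks are post-processed, a common next step is thresholding the mask logits into binary masks and comparing them (e.g. to deduplicate near-identical predictions). A minimal, model-free sketch of that idea — the helper names and the synthetic tensors standing in for `post_process_masks` output are illustrative, not part of the SAM3 API:

```python
import torch

def binarize(mask_logits: torch.Tensor, threshold: float = 0.0) -> torch.Tensor:
    """Threshold raw mask logits into boolean masks."""
    return mask_logits > threshold

def mask_iou(a: torch.Tensor, b: torch.Tensor) -> float:
    """IoU between two boolean masks of the same shape."""
    intersection = (a & b).sum().item()
    union = (a | b).sum().item()
    return intersection / union if union else 0.0

# Synthetic 4x4 "mask logits" standing in for real post-processed outputs
m1 = binarize(torch.tensor([[1.0, 1.0, -1.0, -1.0]] * 4))  # 8 positive pixels
m2 = binarize(torch.tensor([[1.0, -1.0, -1.0, -1.0]] * 4))  # 4 positive pixels
print(mask_iou(m1, m2))  # 0.5: intersection 4 px, union 8 px
```

The same IoU helper works unchanged on real `post_process_masks` outputs after binarizing, since they are ordinary tensors.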

### SAM3 Tracker Video - Promptable Visual Segmentation (PVS) for Videos

Sam3TrackerVideo performs Promptable Visual Segmentation (PVS) on videos, taking interactive visual prompts (points, boxes, masks) to track a **specific object instance** per prompt across video frames. It is an updated version of SAM2 Video that maintains the same API while providing improved performance, making it a drop-in replacement for SAM2 Video workflows.

#### Basic Video Tracking

```python
>>> from transformers import Sam3TrackerVideoModel, Sam3TrackerVideoProcessor
>>> from accelerate import Accelerator
>>> import torch

>>> device = Accelerator().device
>>> model = Sam3TrackerVideoModel.from_pretrained("facebook/sam3").to(device, dtype=torch.bfloat16)
>>> processor = Sam3TrackerVideoProcessor.from_pretrained("facebook/sam3")

>>> # Load video frames
>>> from transformers.video_utils import load_video
>>> video_url = "https://huggingface.co/datasets/hf-internal-testing/sam2-fixtures/resolve/main/bedroom.mp4"
>>> video_frames, _ = load_video(video_url)

>>> # Initialize video inference session
>>> inference_session = processor.init_video_session(
...     video=video_frames,
...     inference_device=device,
...     dtype=torch.bfloat16,
... )

>>> # Add click on first frame to select object
>>> ann_frame_idx = 0
>>> ann_obj_id = 1
>>> points = [[[[210, 350]]]]
>>> labels = [[[1]]]

>>> processor.add_inputs_to_inference_session(
...     inference_session=inference_session,
...     frame_idx=ann_frame_idx,
...     obj_ids=ann_obj_id,
...     input_points=points,
...     input_labels=labels,
... )

>>> # Segment the object on the first frame (optional, you can also propagate the masks through the video directly)
>>> outputs = model(
...     inference_session=inference_session,
...     frame_idx=ann_frame_idx,
... )
>>> video_res_masks = processor.post_process_masks(
...     [outputs.pred_masks], original_sizes=[[inference_session.video_height, inference_session.video_width]], binarize=False
... )[0]
>>> print(f"Segmentation shape: {video_res_masks.shape}")
Segmentation shape: torch.Size([1, 1, 480, 854])

>>> # Propagate through the entire video
>>> video_segments = {}
>>> for sam3_tracker_video_output in model.propagate_in_video_iterator(inference_session):
...     video_res_masks = processor.post_process_masks(
...         [sam3_tracker_video_output.pred_masks], original_sizes=[[inference_session.video_height, inference_session.video_width]], binarize=False
...     )[0]
...     video_segments[sam3_tracker_video_output.frame_idx] = video_res_masks

>>> print(f"Tracked object through {len(video_segments)} frames")
Tracked object through 180 frames
```

#### Multi-Object Video Tracking

Track multiple objects simultaneously across video frames:

```python
>>> # Reset for new tracking session
>>> inference_session.reset_inference_session()

>>> # Add multiple objects on the first frame
>>> ann_frame_idx = 0
>>> obj_ids = [2, 3]
>>> input_points = [[[[200, 300]], [[400, 150]]]]  # Points for two objects (batched)
>>> input_labels = [[[1], [1]]]

>>> processor.add_inputs_to_inference_session(
...     inference_session=inference_session,
...     frame_idx=ann_frame_idx,
...     obj_ids=obj_ids,
...     input_points=input_points,
...     input_labels=input_labels,
... )

>>> # Get masks for both objects on first frame (optional, you can also propagate the masks through the video directly)
>>> outputs = model(
...     inference_session=inference_session,
...     frame_idx=ann_frame_idx,
... )

>>> # Propagate both objects through video
>>> video_segments = {}
>>> for sam3_tracker_video_output in model.propagate_in_video_iterator(inference_session):
...     video_res_masks = processor.post_process_masks(
...         [sam3_tracker_video_output.pred_masks], original_sizes=[[inference_session.video_height, inference_session.video_width]], binarize=False
...     )[0]
...     video_segments[sam3_tracker_video_output.frame_idx] = {
...         obj_id: video_res_masks[i]
...         for i, obj_id in enumerate(inference_session.obj_ids)
...     }

>>> print(f"Tracked {len(inference_session.obj_ids)} objects through {len(video_segments)} frames")
Tracked 2 objects through 180 frames
```
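After propagation, `video_segments` maps each frame index to a per-object dictionary of masks. A small pure-Python sketch of a typical downstream query — finding the frames where a tracked object's mask is empty (e.g. occluded). The integer "areas" here are hypothetical stand-ins for per-mask pixel counts:

```python
def occluded_frames(video_segments: dict, obj_id: int) -> list:
    """Return sorted frame indices where the given object's mask area is zero."""
    return sorted(
        frame_idx
        for frame_idx, areas in video_segments.items()
        if areas.get(obj_id, 0) == 0
    )

# Hypothetical per-frame mask areas for objects 2 and 3
video_segments = {
    0: {2: 1500, 3: 900},
    1: {2: 1480, 3: 0},   # object 3 occluded on frame 1
    2: {2: 0, 3: 870},    # object 2 occluded on frame 2
}
print(occluded_frames(video_segments, 3))  # [1]
```

With real outputs, the area would be computed from the mask tensor (e.g. `(mask > 0).sum()`), but the bookkeeping is the same.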

#### Streaming Video Inference

For real-time applications, Sam3TrackerVideo supports processing video frames as they arrive:

```python
>>> # Initialize session for streaming
>>> inference_session = processor.init_video_session(
...     inference_device=device,
...     dtype=torch.bfloat16,
... )

>>> # Process frames one by one
>>> for frame_idx, frame in enumerate(video_frames[:10]):  # Process first 10 frames
...     inputs = processor(images=frame, device=device, return_tensors="pt")
...
...     if frame_idx == 0:
...         # Add point input on first frame
...         processor.add_inputs_to_inference_session(
...             inference_session=inference_session,
...             frame_idx=0,
...             obj_ids=1,
...             input_points=[[[[210, 350], [250, 220]]]],
...             input_labels=[[[1, 1]]],
...             original_size=inputs.original_sizes[0],  # needs to be provided when using streaming video inference
...         )
...
...     # Process current frame
...     sam3_tracker_video_output = model(inference_session=inference_session, frame=inputs.pixel_values[0])
...
...     video_res_masks = processor.post_process_masks(
...         [sam3_tracker_video_output.pred_masks], original_sizes=inputs.original_sizes, binarize=False
...     )[0]
...     print(f"Frame {frame_idx}: mask shape {video_res_masks.shape}")
```
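The control flow above — prompt once on the first frame, then process every frame in arrival order — can be sketched without the model. The `add_prompt` and `process_frame` callables below are hypothetical stubs standing in for `add_inputs_to_inference_session` and the per-frame model call:

```python
def run_streaming(frames, add_prompt, process_frame):
    """Prompt on the first frame only, then process each frame as it arrives."""
    results = []
    for frame_idx, frame in enumerate(frames):
        if frame_idx == 0:
            add_prompt(frame_idx)  # one-time click/box prompt
        results.append(process_frame(frame_idx, frame))
    return results

prompted = []
out = run_streaming(
    frames=["f0", "f1", "f2"],
    add_prompt=prompted.append,
    process_frame=lambda i, f: (i, f.upper()),
)
print(prompted)  # [0]  -- the prompt was added exactly once
print(out)       # [(0, 'F0'), (1, 'F1'), (2, 'F2')]
```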
Sam3/bpe_simple_vocab_16e6.txt.gz ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:924691ac288e54409236115652ad4aa250f48203de50a9e4722a6ecd48d6804a
size 1356917
Sam3/config.json ADDED
@@ -0,0 +1,896 @@
1
+ {
2
+ "architectures": [
3
+ "Sam3VideoModel"
4
+ ],
5
+ "assoc_iou_thresh": 0.1,
6
+ "decrease_trk_keep_alive_for_empty_masklets": false,
7
+ "det_nms_thresh": 0.1,
8
+ "detector_config": {
9
+ "detr_decoder_config": {
10
+ "_name_or_path": "",
11
+ "add_cross_attention": false,
12
+ "architectures": null,
13
+ "bad_words_ids": null,
14
+ "begin_suppress_tokens": null,
15
+ "bos_token_id": null,
16
+ "box_rpb_mode": "log",
17
+ "chunk_size_feed_forward": 0,
18
+ "cross_attention_hidden_size": null,
19
+ "decoder_start_token_id": null,
20
+ "diversity_penalty": 0.0,
21
+ "do_sample": false,
22
+ "dropout": 0.1,
23
+ "dtype": null,
24
+ "early_stopping": false,
25
+ "encoder_no_repeat_ngram_size": 0,
26
+ "eos_token_id": null,
27
+ "exponential_decay_length_penalty": null,
28
+ "finetuning_task": null,
29
+ "forced_bos_token_id": null,
30
+ "forced_eos_token_id": null,
31
+ "hidden_act": "relu",
32
+ "hidden_dropout": 0.0,
33
+ "hidden_size": 256,
34
+ "id2label": {
35
+ "0": "LABEL_0",
36
+ "1": "LABEL_1"
37
+ },
38
+ "initializer_range": 0.02,
39
+ "intermediate_size": 2048,
40
+ "is_decoder": false,
41
+ "is_encoder_decoder": false,
42
+ "label2id": {
43
+ "LABEL_0": 0,
44
+ "LABEL_1": 1
45
+ },
46
+ "layer_norm_eps": 1e-06,
47
+ "length_penalty": 1.0,
48
+ "max_length": 20,
49
+ "min_length": 0,
50
+ "model_type": "sam3_detr_decoder",
51
+ "no_repeat_ngram_size": 0,
52
+ "num_attention_heads": 8,
53
+ "num_beam_groups": 1,
54
+ "num_beams": 1,
55
+ "num_layers": 6,
56
+ "num_queries": 200,
57
+ "num_return_sequences": 1,
58
+ "output_attentions": false,
59
+ "output_hidden_states": false,
60
+ "output_scores": false,
61
+ "pad_token_id": null,
62
+ "prefix": null,
63
+ "problem_type": null,
64
+ "remove_invalid_values": false,
65
+ "repetition_penalty": 1.0,
66
+ "return_dict": true,
67
+ "return_dict_in_generate": false,
68
+ "sep_token_id": null,
69
+ "suppress_tokens": null,
70
+ "task_specific_params": null,
71
+ "temperature": 1.0,
72
+ "tie_encoder_decoder": false,
73
+ "tie_word_embeddings": true,
74
+ "tokenizer_class": null,
75
+ "top_k": 50,
76
+ "top_p": 1.0,
77
+ "typical_p": 1.0,
78
+ "use_presence_token": true
79
+ },
80
+ "detr_encoder_config": {
81
+ "_name_or_path": "",
82
+ "add_cross_attention": false,
83
+ "architectures": null,
84
+ "bad_words_ids": null,
85
+ "begin_suppress_tokens": null,
86
+ "bos_token_id": null,
87
+ "chunk_size_feed_forward": 0,
88
+ "cross_attention_hidden_size": null,
89
+ "decoder_start_token_id": null,
90
+ "diversity_penalty": 0.0,
91
+ "do_sample": false,
92
+ "dropout": 0.1,
93
+ "dtype": null,
94
+ "early_stopping": false,
95
+ "encoder_no_repeat_ngram_size": 0,
96
+ "eos_token_id": null,
97
+ "exponential_decay_length_penalty": null,
98
+ "finetuning_task": null,
99
+ "forced_bos_token_id": null,
100
+ "forced_eos_token_id": null,
101
+ "hidden_act": "relu",
102
+ "hidden_dropout": 0.0,
103
+ "hidden_size": 256,
104
+ "id2label": {
105
+ "0": "LABEL_0",
106
+ "1": "LABEL_1"
107
+ },
108
+ "initializer_range": 0.02,
109
+ "intermediate_size": 2048,
110
+ "is_decoder": false,
111
+ "is_encoder_decoder": false,
112
+ "label2id": {
113
+ "LABEL_0": 0,
114
+ "LABEL_1": 1
115
+ },
116
+ "layer_norm_eps": 1e-06,
117
+ "length_penalty": 1.0,
118
+ "max_length": 20,
119
+ "min_length": 0,
120
+ "model_type": "sam3_detr_encoder",
121
+ "no_repeat_ngram_size": 0,
122
+ "num_attention_heads": 8,
123
+ "num_beam_groups": 1,
124
+ "num_beams": 1,
125
+ "num_layers": 6,
126
+ "num_return_sequences": 1,
127
+ "output_attentions": false,
128
+ "output_hidden_states": false,
129
+ "output_scores": false,
130
+ "pad_token_id": null,
131
+ "prefix": null,
132
+ "problem_type": null,
133
+ "remove_invalid_values": false,
134
+ "repetition_penalty": 1.0,
135
+ "return_dict": true,
136
+ "return_dict_in_generate": false,
137
+ "sep_token_id": null,
138
+ "suppress_tokens": null,
139
+ "task_specific_params": null,
140
+ "temperature": 1.0,
141
+ "tie_encoder_decoder": false,
142
+ "tie_word_embeddings": true,
143
+ "tokenizer_class": null,
144
+ "top_k": 50,
145
+ "top_p": 1.0,
146
+ "typical_p": 1.0
147
+ },
148
+ "geometry_encoder_config": {
149
+ "_name_or_path": "",
150
+ "add_cross_attention": false,
151
+ "architectures": null,
152
+ "bad_words_ids": null,
153
+ "begin_suppress_tokens": null,
154
+ "bos_token_id": null,
155
+ "chunk_size_feed_forward": 0,
156
+ "cross_attention_hidden_size": null,
157
+ "decoder_start_token_id": null,
158
+ "diversity_penalty": 0.0,
159
+ "do_sample": false,
160
+ "dropout": 0.1,
161
+ "dtype": null,
162
+ "early_stopping": false,
163
+ "encoder_no_repeat_ngram_size": 0,
164
+ "eos_token_id": null,
165
+ "exponential_decay_length_penalty": null,
166
+ "finetuning_task": null,
167
+ "forced_bos_token_id": null,
168
+ "forced_eos_token_id": null,
169
+ "hidden_act": "relu",
170
+ "hidden_dropout": 0.0,
171
+ "hidden_size": 256,
172
+ "id2label": {
173
+ "0": "LABEL_0",
174
+ "1": "LABEL_1"
175
+ },
176
+ "initializer_range": 0.02,
177
+ "intermediate_size": 2048,
178
+ "is_decoder": false,
179
+ "is_encoder_decoder": false,
180
+ "label2id": {
181
+ "LABEL_0": 0,
182
+ "LABEL_1": 1
183
+ },
184
+ "layer_norm_eps": 1e-06,
185
+ "length_penalty": 1.0,
186
+ "max_length": 20,
187
+ "min_length": 0,
188
+ "model_type": "sam3_geometry_encoder",
189
+ "no_repeat_ngram_size": 0,
190
+ "num_attention_heads": 8,
191
+ "num_beam_groups": 1,
192
+ "num_beams": 1,
193
+ "num_layers": 3,
194
+ "num_return_sequences": 1,
195
+ "output_attentions": false,
196
+ "output_hidden_states": false,
197
+ "output_scores": false,
198
+ "pad_token_id": null,
199
+ "prefix": null,
200
+ "problem_type": null,
201
+ "remove_invalid_values": false,
202
+ "repetition_penalty": 1.0,
203
+ "return_dict": true,
204
+ "return_dict_in_generate": false,
205
+ "roi_size": 7,
206
+ "sep_token_id": null,
207
+ "suppress_tokens": null,
208
+ "task_specific_params": null,
209
+ "temperature": 1.0,
210
+ "tie_encoder_decoder": false,
211
+ "tie_word_embeddings": true,
212
+ "tokenizer_class": null,
213
+ "top_k": 50,
214
+ "top_p": 1.0,
215
+ "typical_p": 1.0
216
+ },
217
+ "initializer_range": 0.02,
218
+ "mask_decoder_config": {
219
+ "_name_or_path": "",
220
+ "add_cross_attention": false,
221
+ "architectures": null,
222
+ "bad_words_ids": null,
223
+ "begin_suppress_tokens": null,
224
+ "bos_token_id": null,
225
+ "chunk_size_feed_forward": 0,
226
+ "cross_attention_hidden_size": null,
227
+ "decoder_start_token_id": null,
228
+ "diversity_penalty": 0.0,
229
+ "do_sample": false,
230
+ "dropout": 0.0,
231
+ "dtype": null,
232
+ "early_stopping": false,
233
+ "encoder_no_repeat_ngram_size": 0,
234
+ "eos_token_id": null,
235
+ "exponential_decay_length_penalty": null,
236
+ "finetuning_task": null,
237
+ "forced_bos_token_id": null,
238
+ "forced_eos_token_id": null,
239
+ "hidden_size": 256,
240
+ "id2label": {
241
+ "0": "LABEL_0",
242
+ "1": "LABEL_1"
243
+ },
244
+ "initializer_range": 0.02,
245
+ "is_decoder": false,
246
+ "is_encoder_decoder": false,
247
+ "label2id": {
248
+ "LABEL_0": 0,
249
+ "LABEL_1": 1
250
+ },
251
+ "layer_norm_eps": 1e-06,
252
+ "length_penalty": 1.0,
253
+ "max_length": 20,
254
+ "min_length": 0,
255
+ "model_type": "sam3_mask_decoder",
256
+ "no_repeat_ngram_size": 0,
257
+ "num_attention_heads": 8,
258
+ "num_beam_groups": 1,
259
+ "num_beams": 1,
260
+ "num_return_sequences": 1,
261
+ "num_upsampling_stages": 3,
262
+ "output_attentions": false,
263
+ "output_hidden_states": false,
264
+ "output_scores": false,
265
+ "pad_token_id": null,
266
+ "prefix": null,
267
+ "problem_type": null,
268
+ "remove_invalid_values": false,
269
+ "repetition_penalty": 1.0,
270
+ "return_dict": true,
271
+ "return_dict_in_generate": false,
272
+ "sep_token_id": null,
273
+ "suppress_tokens": null,
274
+ "task_specific_params": null,
275
+ "temperature": 1.0,
276
+ "tie_encoder_decoder": false,
277
+ "tie_word_embeddings": true,
278
+ "tokenizer_class": null,
279
+ "top_k": 50,
280
+ "top_p": 1.0,
281
+ "typical_p": 1.0
282
+ },
283
+ "model_type": "sam3",
284
+ "text_config": {
285
+ "_name_or_path": "",
286
+ "add_cross_attention": false,
287
+ "architectures": null,
288
+ "attention_dropout": 0.0,
289
+ "bad_words_ids": null,
290
+ "begin_suppress_tokens": null,
291
+ "bos_token_id": 49406,
292
+ "chunk_size_feed_forward": 0,
293
+ "cross_attention_hidden_size": null,
294
+ "decoder_start_token_id": null,
295
+ "diversity_penalty": 0.0,
296
+ "do_sample": false,
297
+ "dtype": null,
298
+ "early_stopping": false,
299
+ "encoder_no_repeat_ngram_size": 0,
300
+ "eos_token_id": 49407,
301
+ "exponential_decay_length_penalty": null,
302
+ "finetuning_task": null,
303
+ "forced_bos_token_id": null,
304
+ "forced_eos_token_id": null,
305
+ "hidden_act": "gelu",
306
+ "hidden_size": 1024,
307
+ "id2label": {
308
+ "0": "LABEL_0",
309
+ "1": "LABEL_1"
310
+ },
311
+ "initializer_factor": 1.0,
312
+ "initializer_range": 0.02,
313
+ "intermediate_size": 4096,
314
+ "is_decoder": false,
315
+ "is_encoder_decoder": false,
316
+ "label2id": {
317
+ "LABEL_0": 0,
318
+ "LABEL_1": 1
319
+ },
320
+ "layer_norm_eps": 1e-05,
321
+ "length_penalty": 1.0,
322
+ "max_length": 20,
323
+ "max_position_embeddings": 32,
324
+ "min_length": 0,
325
+ "model_type": "clip_text_model",
326
+ "no_repeat_ngram_size": 0,
327
+ "num_attention_heads": 16,
328
+ "num_beam_groups": 1,
329
+ "num_beams": 1,
330
+ "num_hidden_layers": 24,
331
+ "num_return_sequences": 1,
332
+ "output_attentions": false,
333
+ "output_hidden_states": false,
334
+ "output_scores": false,
335
+ "pad_token_id": 1,
336
+ "prefix": null,
337
+ "problem_type": null,
338
+ "projection_dim": 512,
339
+ "remove_invalid_values": false,
340
+ "repetition_penalty": 1.0,
341
+ "return_dict": true,
342
+ "return_dict_in_generate": false,
343
+ "sep_token_id": null,
344
+ "suppress_tokens": null,
345
+ "task_specific_params": null,
346
+ "temperature": 1.0,
347
+ "tie_encoder_decoder": false,
348
+ "tie_word_embeddings": true,
349
+ "tokenizer_class": null,
350
+ "top_k": 50,
351
+ "top_p": 1.0,
352
+ "typical_p": 1.0,
353
+ "vocab_size": 49408
354
+ },
355
+ "vision_config": {
356
+ "_name_or_path": "",
357
+ "add_cross_attention": false,
358
+ "architectures": null,
359
+ "backbone_config": {
360
+ "_name_or_path": "",
361
+ "add_cross_attention": false,
362
+ "architectures": null,
363
+ "attention_dropout": 0.0,
364
+ "bad_words_ids": null,
365
+ "begin_suppress_tokens": null,
366
+ "bos_token_id": null,
367
+ "chunk_size_feed_forward": 0,
368
+ "cross_attention_hidden_size": null,
369
+ "decoder_start_token_id": null,
370
+ "diversity_penalty": 0.0,
371
+ "do_sample": false,
372
+ "dtype": null,
373
+ "early_stopping": false,
374
+ "encoder_no_repeat_ngram_size": 0,
375
+ "eos_token_id": null,
376
+ "exponential_decay_length_penalty": null,
377
+ "finetuning_task": null,
378
+ "forced_bos_token_id": null,
379
+ "forced_eos_token_id": null,
380
+ "global_attn_indexes": [
381
+ 7,
382
+ 15,
383
+ 23,
384
+ 31
385
+ ],
386
+ "hidden_act": "gelu",
387
+ "hidden_dropout": 0.0,
388
+ "hidden_size": 1024,
389
+ "id2label": {
390
+ "0": "LABEL_0",
391
+ "1": "LABEL_1"
392
+ },
393
+ "image_size": 1008,
394
+ "initializer_range": 0.02,
395
+ "intermediate_size": 4736,
396
+ "is_decoder": false,
397
+ "is_encoder_decoder": false,
398
+ "label2id": {
399
+ "LABEL_0": 0,
400
+ "LABEL_1": 1
401
+ },
402
+ "layer_norm_eps": 1e-06,
403
+ "layer_scale_init_value": null,
404
+ "length_penalty": 1.0,
405
+ "max_length": 20,
406
+ "min_length": 0,
407
+ "model_type": "sam3_vit_model",
408
+ "no_repeat_ngram_size": 0,
409
+ "num_attention_heads": 16,
410
+ "num_beam_groups": 1,
411
+ "num_beams": 1,
412
+ "num_channels": 3,
413
+ "num_hidden_layers": 32,
414
+ "num_return_sequences": 1,
415
+ "output_attentions": false,
416
+ "output_hidden_states": false,
417
+ "output_scores": false,
418
+ "pad_token_id": null,
419
+ "patch_size": 14,
420
+ "prefix": null,
421
+ "pretrain_image_size": 336,
422
+ "problem_type": null,
423
+ "qkv_bias": true,
424
+ "remove_invalid_values": false,
425
+ "repetition_penalty": 1.0,
426
+ "return_dict": true,
427
+ "return_dict_in_generate": false,
428
+ "rope_theta": 10000.0,
429
+ "sep_token_id": null,
430
+ "suppress_tokens": null,
431
+ "task_specific_params": null,
432
+ "temperature": 1.0,
433
+ "tie_encoder_decoder": false,
434
+ "tie_word_embeddings": true,
435
+ "tokenizer_class": null,
436
+ "top_k": 50,
437
+ "top_p": 1.0,
438
+ "typical_p": 1.0,
439
+ "window_size": 24
440
+ },
441
+ "backbone_feature_sizes": [
442
+ [
443
+ 288,
444
+ 288
445
+ ],
446
+ [
447
+ 144,
448
+ 144
449
+ ],
450
+ [
451
+ 72,
452
+ 72
453
+ ]
454
+ ],
455
+ "bad_words_ids": null,
456
+ "begin_suppress_tokens": null,
457
+ "bos_token_id": null,
458
+ "chunk_size_feed_forward": 0,
459
+ "cross_attention_hidden_size": null,
460
+ "decoder_start_token_id": null,
461
+ "diversity_penalty": 0.0,
462
+ "do_sample": false,
463
+ "dtype": null,
464
+ "early_stopping": false,
465
+ "encoder_no_repeat_ngram_size": 0,
466
+ "eos_token_id": null,
467
+ "exponential_decay_length_penalty": null,
468
+ "finetuning_task": null,
469
+ "forced_bos_token_id": null,
470
+ "forced_eos_token_id": null,
471
+ "fpn_hidden_size": 256,
472
+ "fpn_kernel_size": 2,
473
+ "fpn_stride": 2,
474
+ "hidden_act": "gelu",
475
+ "id2label": {
476
+ "0": "LABEL_0",
477
+ "1": "LABEL_1"
478
+ },
479
+ "initializer_range": 0.02,
480
+ "is_decoder": false,
481
+ "is_encoder_decoder": false,
482
+ "label2id": {
483
+ "LABEL_0": 0,
484
+ "LABEL_1": 1
485
+ },
486
+ "layer_norm_eps": 1e-06,
487
+ "length_penalty": 1.0,
488
+ "max_length": 20,
489
+ "min_length": 0,
490
+ "model_type": "sam3_vision_model",
491
+ "no_repeat_ngram_size": 0,
492
+ "num_beam_groups": 1,
493
+ "num_beams": 1,
494
+ "num_feature_levels": 3,
495
+ "num_return_sequences": 1,
496
+ "output_attentions": false,
497
+ "output_hidden_states": false,
498
+ "output_scores": false,
499
+ "pad_token_id": null,
500
+ "prefix": null,
501
+ "problem_type": null,
502
+ "remove_invalid_values": false,
503
+ "repetition_penalty": 1.0,
504
+ "return_dict": true,
505
+ "return_dict_in_generate": false,
506
+ "scale_factors": [
507
+ 4.0,
508
+ 2.0,
509
+ 1.0,
510
+ 0.5
511
+ ],
512
+ "sep_token_id": null,
513
+ "suppress_tokens": null,
514
+ "task_specific_params": null,
515
+ "temperature": 1.0,
516
+ "tie_encoder_decoder": false,
517
+ "tie_word_embeddings": true,
518
+ "tokenizer_class": null,
519
+ "top_k": 50,
520
+ "top_p": 1.0,
521
+ "typical_p": 1.0
522
+ }
523
+ },
524
+ "dtype": "float32",
525
+ "fill_hole_area": 16,
526
+ "high_conf_thresh": 0.8,
527
+ "high_iou_thresh": 0.8,
528
+ "hotstart_delay": 15,
529
+ "hotstart_dup_thresh": 8,
530
+ "hotstart_unmatch_thresh": 8,
531
+ "init_trk_keep_alive": 30,
532
+ "initializer_range": 0.02,
533
+ "low_res_mask_size": 288,
534
+ "max_num_objects": 10000,
535
+ "max_trk_keep_alive": 30,
536
+ "min_trk_keep_alive": -1,
537
+ "model_type": "sam3_video",
538
+ "new_det_thresh": 0.7,
539
+ "recondition_every_nth_frame": 16,
540
+ "recondition_on_trk_masks": false,
541
+ "score_threshold_detection": 0.5,
542
+ "suppress_overlapping_based_on_recent_occlusion_threshold": 0.7,
543
+ "suppress_unmatched_only_within_hotstart": true,
544
+ "tracker_config": {
545
+ "enable_occlusion_spatial_embedding": true,
546
+ "enable_temporal_pos_encoding_for_object_pointers": true,
547
+ "image_size": 1008,
548
+ "initializer_range": 0.02,
549
+ "mask_decoder_config": {
550
+ "_name_or_path": "",
551
+ "add_cross_attention": false,
552
+ "architectures": null,
553
+ "attention_downsample_rate": 2,
554
+ "bad_words_ids": null,
555
+ "begin_suppress_tokens": null,
556
+ "bos_token_id": null,
557
+ "chunk_size_feed_forward": 0,
558
+ "cross_attention_hidden_size": null,
559
+ "decoder_start_token_id": null,
560
+ "diversity_penalty": 0.0,
561
+ "do_sample": false,
562
+ "dtype": null,
563
+ "dynamic_multimask_stability_delta": 0.05,
564
+ "dynamic_multimask_stability_thresh": 0.98,
565
+ "dynamic_multimask_via_stability": true,
566
+ "early_stopping": false,
567
+ "encoder_no_repeat_ngram_size": 0,
568
+ "eos_token_id": null,
569
+ "exponential_decay_length_penalty": null,
570
+ "finetuning_task": null,
571
+ "forced_bos_token_id": null,
572
+ "forced_eos_token_id": null,
573
+ "hidden_act": "gelu",
574
+ "hidden_size": 256,
575
+ "id2label": {
576
+ "0": "LABEL_0",
577
+ "1": "LABEL_1"
578
+ },
579
+ "iou_head_depth": 3,
580
+ "iou_head_hidden_dim": 256,
581
+ "is_decoder": false,
582
+ "is_encoder_decoder": false,
583
+ "label2id": {
584
+ "LABEL_0": 0,
585
+ "LABEL_1": 1
586
+ },
587
+ "length_penalty": 1.0,
588
+ "max_length": 20,
589
+ "min_length": 0,
590
+ "mlp_dim": 2048,
591
+ "model_type": "",
592
+ "no_repeat_ngram_size": 0,
593
+ "num_attention_heads": 8,
594
+ "num_beam_groups": 1,
595
+ "num_beams": 1,
596
+ "num_hidden_layers": 2,
597
+ "num_multimask_outputs": 3,
598
+ "num_return_sequences": 1,
599
+ "output_attentions": false,
600
+ "output_hidden_states": false,
601
+ "output_scores": false,
602
+ "pad_token_id": null,
603
+ "prefix": null,
604
+ "problem_type": null,
605
+ "remove_invalid_values": false,
606
+ "repetition_penalty": 1.0,
607
+ "return_dict": true,
608
+ "return_dict_in_generate": false,
609
+ "sep_token_id": null,
610
+ "suppress_tokens": null,
611
+ "task_specific_params": null,
612
+ "temperature": 1.0,
613
+ "tie_encoder_decoder": false,
614
+ "tie_word_embeddings": true,
615
+ "tokenizer_class": null,
616
+ "top_k": 50,
617
+ "top_p": 1.0,
618
+ "typical_p": 1.0
619
+ },
620
+ "mask_downsampler_embed_dim": 256,
621
+ "mask_downsampler_hidden_act": "gelu",
622
+ "mask_downsampler_kernel_size": 3,
623
+ "mask_downsampler_padding": 1,
624
+ "mask_downsampler_stride": 2,
625
+ "mask_downsampler_total_stride": 16,
626
+ "max_cond_frame_num": 4,
627
+ "max_object_pointers_in_encoder": 16,
628
+ "memory_attention_downsample_rate": 1,
629
+ "memory_attention_dropout": 0.1,
630
+ "memory_attention_feed_forward_hidden_act": "relu",
631
+ "memory_attention_feed_forward_hidden_size": 2048,
632
+ "memory_attention_hidden_size": 256,
633
+ "memory_attention_num_attention_heads": 1,
634
+ "memory_attention_num_layers": 4,
635
+ "memory_attention_rope_dropout": 0.1,
636
+ "memory_attention_rope_feat_sizes": [
637
+ 72,
638
+ 72
639
+ ],
640
+ "memory_attention_rope_theta": 10000,
641
+ "memory_encoder_hidden_size": 256,
642
+ "memory_encoder_output_channels": 64,
643
+ "memory_fuser_embed_dim": 256,
644
+ "memory_fuser_hidden_act": "gelu",
645
+ "memory_fuser_intermediate_dim": 1024,
646
+ "memory_fuser_kernel_size": 7,
647
+ "memory_fuser_layer_scale_init_value": 1e-06,
648
+ "memory_fuser_num_layers": 2,
649
+ "memory_fuser_padding": 3,
650
+ "model_type": "sam3_tracker_video",
651
+ "multimask_max_pt_num": 1,
652
+ "multimask_min_pt_num": 0,
653
+ "multimask_output_for_tracking": true,
654
+ "multimask_output_in_sam": true,
655
+ "num_maskmem": 7,
656
+ "prompt_encoder_config": {
657
+ "_name_or_path": "",
658
+ "add_cross_attention": false,
659
+ "architectures": null,
660
+ "bad_words_ids": null,
661
+ "begin_suppress_tokens": null,
662
+ "bos_token_id": null,
663
+ "chunk_size_feed_forward": 0,
664
+ "cross_attention_hidden_size": null,
665
+ "decoder_start_token_id": null,
666
+ "diversity_penalty": 0.0,
667
+ "do_sample": false,
668
+ "dtype": null,
669
+ "early_stopping": false,
670
+ "encoder_no_repeat_ngram_size": 0,
671
+ "eos_token_id": null,
672
+ "exponential_decay_length_penalty": null,
673
+ "finetuning_task": null,
674
+ "forced_bos_token_id": null,
675
+ "forced_eos_token_id": null,
676
+ "hidden_act": "gelu",
677
+ "hidden_size": 256,
678
+ "id2label": {
679
+ "0": "LABEL_0",
680
+ "1": "LABEL_1"
681
+ },
682
+ "image_size": 1008,
683
+ "is_decoder": false,
684
+ "is_encoder_decoder": false,
685
+ "label2id": {
686
+ "LABEL_0": 0,
687
+ "LABEL_1": 1
688
+ },
689
+ "layer_norm_eps": 1e-06,
690
+ "length_penalty": 1.0,
691
+ "mask_input_channels": 16,
692
+ "max_length": 20,
693
+ "min_length": 0,
694
+ "model_type": "",
695
+ "no_repeat_ngram_size": 0,
696
+ "num_beam_groups": 1,
697
+ "num_beams": 1,
698
+ "num_point_embeddings": 4,
699
+ "num_return_sequences": 1,
700
+ "output_attentions": false,
701
+ "output_hidden_states": false,
702
+ "output_scores": false,
703
+ "pad_token_id": null,
704
+ "patch_size": 14,
705
+ "prefix": null,
706
+ "problem_type": null,
707
+ "remove_invalid_values": false,
708
+ "repetition_penalty": 1.0,
709
+ "return_dict": true,
710
+ "return_dict_in_generate": false,
711
+ "scale": 1,
712
+ "sep_token_id": null,
713
+ "suppress_tokens": null,
714
+ "task_specific_params": null,
715
+ "temperature": 1.0,
716
+ "tie_encoder_decoder": false,
717
+ "tie_word_embeddings": true,
718
+ "tokenizer_class": null,
719
+ "top_k": 50,
720
+ "top_p": 1.0,
721
+ "typical_p": 1.0
722
+ },
723
+ "sigmoid_bias_for_mem_enc": -10.0,
724
+ "sigmoid_scale_for_mem_enc": 20.0,
725
+ "vision_config": {
726
+ "_name_or_path": "",
727
+ "add_cross_attention": false,
728
+ "architectures": null,
729
+ "backbone_config": {
730
+ "_name_or_path": "",
731
+ "add_cross_attention": false,
732
+ "architectures": null,
733
+ "attention_dropout": 0.0,
734
+ "bad_words_ids": null,
735
+ "begin_suppress_tokens": null,
736
+ "bos_token_id": null,
737
+ "chunk_size_feed_forward": 0,
738
+ "cross_attention_hidden_size": null,
739
+ "decoder_start_token_id": null,
740
+ "diversity_penalty": 0.0,
741
+ "do_sample": false,
742
+ "dtype": null,
743
+ "early_stopping": false,
744
+ "encoder_no_repeat_ngram_size": 0,
745
+ "eos_token_id": null,
746
+ "exponential_decay_length_penalty": null,
747
+ "finetuning_task": null,
748
+ "forced_bos_token_id": null,
749
+ "forced_eos_token_id": null,
750
+ "global_attn_indexes": [
751
+ 7,
752
+ 15,
753
+ 23,
754
+ 31
755
+ ],
756
+ "hidden_act": "gelu",
757
+ "hidden_dropout": 0.0,
758
+ "hidden_size": 1024,
759
+ "id2label": {
760
+ "0": "LABEL_0",
761
+ "1": "LABEL_1"
762
+ },
763
+ "image_size": 1008,
764
+ "initializer_range": 0.02,
765
+ "intermediate_size": 4736,
766
+ "is_decoder": false,
767
+ "is_encoder_decoder": false,
768
+ "label2id": {
769
+ "LABEL_0": 0,
770
+ "LABEL_1": 1
771
+ },
772
+ "layer_norm_eps": 1e-06,
773
+ "layer_scale_init_value": null,
774
+ "length_penalty": 1.0,
775
+ "max_length": 20,
776
+ "min_length": 0,
777
+ "model_type": "sam3_vit_model",
778
+ "no_repeat_ngram_size": 0,
779
+ "num_attention_heads": 16,
780
+ "num_beam_groups": 1,
781
+ "num_beams": 1,
782
+ "num_channels": 3,
783
+ "num_hidden_layers": 32,
784
+ "num_return_sequences": 1,
785
+ "output_attentions": false,
786
+ "output_hidden_states": false,
787
+ "output_scores": false,
788
+ "pad_token_id": null,
789
+ "patch_size": 14,
790
+ "prefix": null,
791
+ "pretrain_image_size": 336,
792
+ "problem_type": null,
793
+ "qkv_bias": true,
794
+ "remove_invalid_values": false,
795
+ "repetition_penalty": 1.0,
796
+ "return_dict": true,
797
+ "return_dict_in_generate": false,
798
+ "rope_theta": 10000.0,
799
+ "sep_token_id": null,
800
+ "suppress_tokens": null,
801
+ "task_specific_params": null,
802
+ "temperature": 1.0,
803
+ "tie_encoder_decoder": false,
804
+ "tie_word_embeddings": true,
805
+ "tokenizer_class": null,
806
+ "top_k": 50,
807
+ "top_p": 1.0,
808
+ "typical_p": 1.0,
809
+ "window_size": 24
810
+ },
811
+ "backbone_feature_sizes": [
812
+ [
813
+ 288,
814
+ 288
815
+ ],
816
+ [
817
+ 144,
818
+ 144
819
+ ],
820
+ [
821
+ 72,
822
+ 72
823
+ ]
824
+ ],
825
+ "bad_words_ids": null,
826
+ "begin_suppress_tokens": null,
827
+ "bos_token_id": null,
828
+ "chunk_size_feed_forward": 0,
829
+ "cross_attention_hidden_size": null,
830
+ "decoder_start_token_id": null,
831
+ "diversity_penalty": 0.0,
832
+ "do_sample": false,
833
+ "dtype": null,
834
+ "early_stopping": false,
835
+ "encoder_no_repeat_ngram_size": 0,
836
+ "eos_token_id": null,
837
+ "exponential_decay_length_penalty": null,
838
+ "finetuning_task": null,
839
+ "forced_bos_token_id": null,
840
+ "forced_eos_token_id": null,
841
+ "fpn_hidden_size": 256,
842
+ "fpn_kernel_size": 2,
843
+ "fpn_stride": 2,
844
+ "hidden_act": "gelu",
845
+ "id2label": {
846
+ "0": "LABEL_0",
847
+ "1": "LABEL_1"
848
+ },
849
+ "initializer_range": 0.02,
850
+ "is_decoder": false,
851
+ "is_encoder_decoder": false,
852
+ "label2id": {
853
+ "LABEL_0": 0,
854
+ "LABEL_1": 1
855
+ },
856
+ "layer_norm_eps": 1e-06,
857
+ "length_penalty": 1.0,
858
+ "max_length": 20,
859
+ "min_length": 0,
860
+ "model_type": "sam3_vision_model",
861
+ "no_repeat_ngram_size": 0,
862
+ "num_beam_groups": 1,
863
+ "num_beams": 1,
864
+ "num_feature_levels": 3,
865
+ "num_return_sequences": 1,
866
+ "output_attentions": false,
867
+ "output_hidden_states": false,
868
+ "output_scores": false,
869
+ "pad_token_id": null,
870
+ "prefix": null,
871
+ "problem_type": null,
872
+ "remove_invalid_values": false,
873
+ "repetition_penalty": 1.0,
874
+ "return_dict": true,
875
+ "return_dict_in_generate": false,
876
+ "scale_factors": [
877
+ 4.0,
878
+ 2.0,
879
+ 1.0,
880
+ 0.5
881
+ ],
882
+ "sep_token_id": null,
883
+ "suppress_tokens": null,
884
+ "task_specific_params": null,
885
+ "temperature": 1.0,
886
+ "tie_encoder_decoder": false,
887
+ "tie_word_embeddings": true,
888
+ "tokenizer_class": null,
889
+ "top_k": 50,
890
+ "top_p": 1.0,
891
+ "typical_p": 1.0
892
+ }
893
+ },
894
+ "transformers_version": "5.0.0.dev0",
895
+ "trk_assoc_iou_thresh": 0.5
896
+ }
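The geometry in the vision config above is internally consistent and worth sanity-checking: a 1008×1008 input with 14×14 patches yields a 72×72 token grid, and the listed `backbone_feature_sizes` match the first `num_feature_levels` entries of `scale_factors` applied to that grid. A minimal sketch of that arithmetic (an inference from the config values, not code from the model itself):

```python
# Check the feature-map geometry implied by the SAM3 vision config.
image_size = 1008
patch_size = 14
tokens_per_side = image_size // patch_size  # 72x72 token grid

# Assumption: the 3 FPN feature levels use the first 3 scale factors.
scale_factors = [4.0, 2.0, 1.0, 0.5]
num_feature_levels = 3
backbone_feature_sizes = [
    (int(tokens_per_side * s), int(tokens_per_side * s))
    for s in scale_factors[:num_feature_levels]
]

print(tokens_per_side)         # 72
print(backbone_feature_sizes)  # [(288, 288), (144, 144), (72, 72)]
```

The computed sizes reproduce the `[[288, 288], [144, 144], [72, 72]]` list in the config, which also explains the 288×288 `mask_size` in the processor config below the model files.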
Sam3/gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
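These `.gitattributes` rules route large binaries through Git LFS, which is why the weight files in this commit appear as short pointer files rather than raw bytes. A rough check of which of this repo's files the patterns capture (`fnmatch` approximates, but is not identical to, gitattributes glob semantics for simple `*.ext` rules):

```python
from fnmatch import fnmatch

# Subset of the LFS patterns declared above.
lfs_patterns = ["*.safetensors", "*.pt", "*.ckpt", "*.bin"]

repo_files = [
    "Sam3/model.safetensors",      # matched -> stored as an LFS pointer
    "Sam3/sam3.pt",                # matched -> stored as an LFS pointer
    "Sam3/processor_config.json",  # not matched -> stored as plain text
]

# Match against the basename, as simple "*.ext" gitattributes rules do.
lfs_tracked = [
    f for f in repo_files
    if any(fnmatch(f.rsplit("/", 1)[-1], p) for p in lfs_patterns)
]
print(lfs_tracked)  # ['Sam3/model.safetensors', 'Sam3/sam3.pt']
```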
Sam3/merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
Sam3/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6d06f0a5f84e435071fe6603e61d0b4cc7b40e0d39d487cfd4d67d8cc11cc14a
+ size 3439938512
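The three lines above are a Git LFS pointer (spec v1), not the weights themselves: each line is a `key value` pair giving the content hash and byte size of the real file. A minimal parser for this pointer, using the values from the diff:

```python
# Parse the Git LFS pointer file stored in place of model.safetensors.
pointer_text = """version https://git-lfs.github.com/spec/v1
oid sha256:6d06f0a5f84e435071fe6603e61d0b4cc7b40e0d39d487cfd4d67d8cc11cc14a
size 3439938512
"""

def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of an LFS pointer into a dict."""
    return dict(line.split(" ", 1) for line in text.strip().splitlines())

pointer = parse_lfs_pointer(pointer_text)
print(pointer["version"])
print(pointer["oid"])
print(int(pointer["size"]) / 1e9)  # ~3.44 GB of safetensors weights
```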
Sam3/processor_config.json ADDED
@@ -0,0 +1,80 @@
+ {
+ "image_processor": {
+ "crop_size": null,
+ "data_format": "channels_first",
+ "device": null,
+ "disable_grouping": null,
+ "do_center_crop": null,
+ "do_convert_rgb": true,
+ "do_normalize": true,
+ "do_pad": null,
+ "do_rescale": true,
+ "do_resize": true,
+ "image_mean": [
+ 0.5,
+ 0.5,
+ 0.5
+ ],
+ "image_processor_type": "Sam3ImageProcessorFast",
+ "image_seq_length": null,
+ "image_std": [
+ 0.5,
+ 0.5,
+ 0.5
+ ],
+ "input_data_format": null,
+ "mask_size": {
+ "height": 288,
+ "width": 288
+ },
+ "pad_size": null,
+ "processor_class": "Sam3VideoProcessor",
+ "resample": 2,
+ "rescale_factor": 0.00392156862745098,
+ "return_tensors": null,
+ "size": {
+ "height": 1008,
+ "width": 1008
+ }
+ },
+ "processor_class": "Sam3VideoProcessor",
+ "target_size": 1008,
+ "video_processor": {
+ "crop_size": null,
+ "data_format": "channels_first",
+ "default_to_square": true,
+ "device": null,
+ "do_center_crop": null,
+ "do_convert_rgb": true,
+ "do_normalize": true,
+ "do_pad": null,
+ "do_rescale": true,
+ "do_resize": true,
+ "do_sample_frames": null,
+ "fps": null,
+ "image_mean": [
+ 0.5,
+ 0.5,
+ 0.5
+ ],
+ "image_std": [
+ 0.5,
+ 0.5,
+ 0.5
+ ],
+ "input_data_format": null,
+ "num_frames": null,
+ "pad_size": null,
+ "processor_class": "Sam3VideoProcessor",
+ "resample": 2,
+ "rescale_factor": 0.00392156862745098,
+ "return_metadata": false,
+ "return_tensors": null,
+ "size": {
+ "height": 1008,
+ "width": 1008
+ },
+ "video_metadata": null,
+ "video_processor_type": "Sam2VideoVideoProcessor"
+ }
+ }
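The `image_processor` settings above imply a simple per-pixel transform: resize to 1008×1008, rescale by `rescale_factor` (which is 1/255), then normalize with `mean = std = 0.5`, mapping uint8 values in [0, 255] to approximately [-1, 1]. A sketch of the rescale and normalize steps only (resizing omitted):

```python
# Per-pixel transform implied by the processor config above.
rescale_factor = 0.00392156862745098  # the config's value, i.e. 1/255
mean, std = 0.5, 0.5                  # image_mean / image_std per channel

def preprocess_pixel(value: int) -> float:
    """Apply rescale then normalize to a single uint8 channel value."""
    return (value * rescale_factor - mean) / std

print(preprocess_pixel(0))    # -1.0
print(preprocess_pixel(255))  # ~1.0 (up to float rounding)
```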
Sam3/sam3.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9999e2341ceef5e136daa386eecb55cb414446a00ac2b55eb2dfd2f7c3cf8c9e
+ size 3450062241
Sam3/special_tokens_map.json ADDED
@@ -0,0 +1,30 @@
+ {
+ "bos_token": {
+ "content": "<|startoftext|>",
+ "lstrip": false,
+ "normalized": true,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
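In the map above, the eos, pad, and unk roles all reuse the CLIP-style `<|endoftext|>` token; only bos differs (`<|startoftext|>`, and it alone has `"normalized": true`). A quick structural check over a trimmed copy of the JSON (only the `content` and `normalized` fields are reproduced here):

```python
import json

# Trimmed copy of special_tokens_map.json from the diff above.
special_tokens_map = json.loads("""{
  "bos_token": {"content": "<|startoftext|>", "normalized": true},
  "eos_token": {"content": "<|endoftext|>", "normalized": false},
  "pad_token": {"content": "<|endoftext|>", "normalized": false},
  "unk_token": {"content": "<|endoftext|>", "normalized": false}
}""")

# Which token roles share the "<|endoftext|>" content?
shared = sorted(
    role for role, tok in special_tokens_map.items()
    if tok["content"] == "<|endoftext|>"
)
print(shared)  # ['eos_token', 'pad_token', 'unk_token']
```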
Sam3/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
Sam3/vocab.json ADDED
The diff for this file is too large to render. See raw diff