Instructions for using 404-Gen/sam3 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use 404-Gen/sam3 with Transformers (a short usage sketch follows this list):
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("mask-generation", model="404-Gen/sam3")

# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("404-Gen/sam3")
model = AutoModel.from_pretrained("404-Gen/sam3")
```
- Notebooks
- Google Colab
- Kaggle
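As a minimal sketch of actually calling the pipeline from the snippet above on one image: the image path is a placeholder, and the `masks` output key assumes the standard Transformers mask-generation pipeline format rather than anything SAM 3 specific.

```python
# Minimal sketch: run the mask-generation pipeline on a single image.
# "path/to/image.jpg" is a placeholder; a URL or PIL.Image also works.
from transformers import pipeline

pipe = pipeline("mask-generation", model="404-Gen/sam3")
outputs = pipe("path/to/image.jpg")

# Assumption: standard mask-generation output, a dict with masks and scores.
print(f"{len(outputs['masks'])} masks proposed")
```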
Update README.md
README.md (CHANGED)
````diff
@@ -26,6 +26,8 @@ language:
 - en
 pipeline_tag: mask-generation
 library_name: transformers
+tags:
+- sam3
 ---
 
 SAM 3 is a unified foundation model for promptable segmentation in images and videos. It can detect, segment, and track objects using text or visual prompts such as points, boxes, and masks. Compared to its predecessor [SAM 2](https://github.com/facebookresearch/sam2), SAM 3 introduces the ability to exhaustively segment all instances of an open-vocabulary concept specified by a short text phrase or exemplars. Unlike prior work, SAM 3 can handle a vastly larger set of open-vocabulary prompts. It achieves 75-80% of human performance on our new [SA-CO benchmark](https://github.com/facebookresearch/sam3/edit/main_readme/README.md#sa-co-dataset) which contains 270K unique concepts, over 50 times more than existing benchmarks.
@@ -677,5 +679,4 @@ For real-time applications, Sam3TrackerVideo supports processing video frames as
 ... [sam3_tracker_video_output.pred_masks], original_sizes=inputs.original_sizes, binarize=False
 ... )[0]
 ... print(f"Frame {frame_idx}: mask shape {video_res_masks.shape}")
-```
-
+```
````
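The second hunk shows only the last three lines of a streaming-video example from the model card. Purely as orientation, a hypothetical reconstruction of the loop those lines close might look like the sketch below; the `frames`, `processor`, and `model` objects and the forward call are assumptions inferred from the visible fragment, not the card's confirmed Sam3TrackerVideo API.

```python
# Hypothetical reconstruction of the per-frame loop that the diff fragment
# above closes. Only the last three statements come from the model card;
# the loading code and forward call are assumptions, not the confirmed API.
import torch
from transformers import AutoModel, AutoProcessor

processor = AutoProcessor.from_pretrained("404-Gen/sam3")  # assumed loader
model = AutoModel.from_pretrained("404-Gen/sam3")          # as in the snippet above

frames: list = []  # placeholder: put decoded video frames here

for frame_idx, frame in enumerate(frames):
    inputs = processor(images=frame, return_tensors="pt")
    with torch.no_grad():
        sam3_tracker_video_output = model(**inputs)  # assumed forward call
    # post_process_masks maps low-resolution mask logits back to the original
    # frame size; binarize=False keeps continuous scores (as in the fragment).
    video_res_masks = processor.post_process_masks(
        [sam3_tracker_video_output.pred_masks],
        original_sizes=inputs.original_sizes,
        binarize=False,
    )[0]
    print(f"Frame {frame_idx}: mask shape {video_res_masks.shape}")
```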