{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Copyright (c) Meta Platforms, Inc. and affiliates."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# SAM 3 Agent"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This notebook shows an example of how an MLLM can use SAM 3 as a tool, i.e., \"SAM 3 Agent\", to segment more complex text queries such as \"the leftmost child wearing blue vest\"."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Env Setup"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First install `sam3` in your environment using the [installation instructions](https://github.com/facebookresearch/sam3?tab=readme-ov-file#installation) in the repository."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "# turn on tfloat32 for Ampere GPUs\n",
    "# https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices\n",
    "torch.backends.cuda.matmul.allow_tf32 = True\n",
    "torch.backends.cudnn.allow_tf32 = True\n",
    "\n",
    "# use bfloat16 for the entire notebook. If your card doesn't support it, try float16 instead\n",
    "torch.autocast(\"cuda\", dtype=torch.bfloat16).__enter__()\n",
    "\n",
    "# inference mode for the whole notebook. Disable if you need gradients\n",
    "torch.inference_mode().__enter__()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "SAM3_ROOT = os.path.dirname(os.getcwd())\n",
    "os.chdir(SAM3_ROOT)\n",
    "\n",
    "# select the GPU to use - a single GPU is sufficient for this demo\n",
    "os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0\"\n",
    "_ = os.system(\"nvidia-smi\")"
   ]
  },
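  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Optionally, confirm that PyTorch can actually see the selected GPU before building the model:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# sanity check: PyTorch should report at least one visible CUDA device\n",
    "print(torch.cuda.is_available(), torch.cuda.device_count())"
   ]
  },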
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Build SAM3 Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import sam3\n",
    "from sam3 import build_sam3_image_model\n",
    "from sam3.model.sam3_image_processor import Sam3Processor\n",
    "\n",
    "sam3_root = os.path.join(os.path.dirname(sam3.__file__), \"..\")\n",
    "bpe_path = f\"{sam3_root}/assets/bpe_simple_vocab_16e6.txt.gz\"\n",
    "model = build_sam3_image_model(bpe_path=bpe_path)\n",
    "processor = Sam3Processor(model, confidence_threshold=0.5)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## LLM Setup\n",
    "\n",
    "Configure which MLLM to use. It can either be a model served by vLLM that you launch on your own machine, or a model served via an external API. If you want to use a vLLM model, follow the setup instructions below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "LLM_CONFIGS = {\n",
    "    # vLLM-served models\n",
    "    \"qwen3_vl_8b_thinking\": {\n",
    "        \"provider\": \"vllm\",\n",
    "        \"model\": \"Qwen/Qwen3-VL-8B-Thinking\",\n",
    "    }, \n",
    "    # models served via external APIs\n",
    "    # add your own\n",
    "}\n",
    "\n",
    "llm_name = \"qwen3_vl_8b_thinking\"  # avoid reusing `model`, which holds the SAM 3 model above\n",
    "LLM_API_KEY = \"DUMMY_API_KEY\"\n",
    "\n",
    "llm_config = LLM_CONFIGS[llm_name]\n",
    "llm_config[\"api_key\"] = LLM_API_KEY\n",
    "llm_config[\"name\"] = llm_name\n",
    "\n",
    "# setup API endpoint\n",
    "if llm_config[\"provider\"] == \"vllm\":\n",
    "    LLM_SERVER_URL = \"http://0.0.0.0:8001/v1\"  # replace this with your vLLM server address as needed\n",
    "else:\n",
    "    LLM_SERVER_URL = llm_config[\"base_url\"]"
   ]
  },
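  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For a model served via an external API, a config entry also needs a `base_url` (read in the `else` branch above), alongside `provider` and `model`. The entry below is a hypothetical placeholder, not a real endpoint:\n",
    "```python\n",
    "# hypothetical external-API entry; replace every value with your provider's real ones\n",
    "LLM_CONFIGS[\"my_external_model\"] = {\n",
    "    \"provider\": \"external\",\n",
    "    \"model\": \"provider/model-name\",\n",
    "    \"base_url\": \"https://api.example.com/v1\",\n",
    "}\n",
    "```"
   ]
  },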
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Setup vLLM server \n",
    "This step is only required if you are using a model served by vLLM; skip it if you are calling an LLM via an external API such as Gemini or GPT.\n",
    "\n",
    "* Install vLLM (in a separate conda env from SAM 3 to avoid dependency conflicts).\n",
    "  ```bash\n",
    "    conda create -n vllm python=3.12\n",
    "    conda activate vllm\n",
    "    pip install vllm --extra-index-url https://download.pytorch.org/whl/cu128\n",
    "  ```\n",
    "* Start the vLLM server on the same machine as this notebook:\n",
    "  ```bash\n",
    "    # qwen 3 VL 8B thinking\n",
    "    vllm serve Qwen/Qwen3-VL-8B-Thinking --tensor-parallel-size 4 --allowed-local-media-path / --enforce-eager --port 8001\n",
    "  ```"
   ]
  },
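  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Optionally, sanity-check that the endpoint is reachable before running inference (this assumes the OpenAI-compatible `/models` route that vLLM serves):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import requests\n",
    "\n",
    "# list the models served at the configured endpoint; fail gracefully if the server is down\n",
    "try:\n",
    "    print(requests.get(f\"{LLM_SERVER_URL}/models\", timeout=5).json())\n",
    "except requests.exceptions.ConnectionError:\n",
    "    print(f\"No server reachable at {LLM_SERVER_URL}; start vLLM or switch to an external API.\")"
   ]
  },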
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Run SAM3 Agent Inference"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from functools import partial\n",
    "from IPython.display import display, Image\n",
    "from sam3.agent.client_llm import send_generate_request as send_generate_request_orig\n",
    "from sam3.agent.client_sam3 import call_sam_service as call_sam_service_orig\n",
    "from sam3.agent.inference import run_single_image_inference"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# prepare input args and run single image inference\n",
    "image = \"assets/images/test_image.jpg\"\n",
    "prompt = \"the leftmost child wearing blue vest\"\n",
    "image = os.path.abspath(image)\n",
    "send_generate_request = partial(send_generate_request_orig, server_url=LLM_SERVER_URL, model=llm_config[\"model\"], api_key=llm_config[\"api_key\"])\n",
    "call_sam_service = partial(call_sam_service_orig, sam3_processor=processor)\n",
    "output_image_path = run_single_image_inference(\n",
    "    image, prompt, llm_config, send_generate_request, call_sam_service, \n",
    "    debug=True, output_dir=\"agent_output\"\n",
    ")\n",
    "\n",
    "# display output\n",
    "if output_image_path is not None:\n",
    "    display(Image(filename=output_image_path))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}