{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "License Information",
"origin": "ai"
},
"collapsed": false,
"customOutput": null,
"executionStartTime": 1762496606600,
"executionStopTime": 1762496607864,
"isCommentPanelOpen": false,
"jupyter": {
"outputs_hidden": false
},
"language": "python",
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Copyright Notice",
"origin": "ai"
},
"collapsed": false,
"customInput": null,
"customOutput": null,
"executionStartTime": 1761950741027,
"executionStopTime": 1761950741468,
"isCommentPanelOpen": false,
"language": "python",
"originalKey": "49a54601-c2fe-47f2-a436-9a1f91503520",
"outputsInitialized": true,
"requestMsgId": "da31cd52-f746-4a69-bfae-b2037e84d00c",
"serverExecutionDuration": 2.3454379988834,
"showInput": true
},
"originalKey": "913d6f63-449f-4836-ae81-7d55a42ccf8c",
"output": {
"id": 863592039347424,
"loadingStatus": "loaded"
},
"outputsInitialized": false,
"requestMsgId": "913d6f63-449f-4836-ae81-7d55a42ccf8c",
"serverExecutionDuration": 2.4641010004416
},
"outputs": [],
"source": [
"# Copyright (c) Meta Platforms, Inc. and affiliates."
]
},
{
"cell_type": "markdown",
"metadata": {
"attachments": [],
"bentoAICellStatus": "none",
"isCommentPanelOpen": false,
"language": "markdown",
"metadata": {
"bentoAICellStatus": "none",
"customInput": null,
"isCommentPanelOpen": false,
"language": "markdown",
"originalKey": "50280cd0-12eb-4f23-a78d-c37f5bda1fe6",
"outputsInitialized": false,
"showInput": false
},
"originalKey": "9e88cae8-b006-498d-9a02-c1c369a95f57",
"outputsInitialized": false,
"showInput": true
},
"source": [
"## Video segmentation and tracking with SAM 3\n",
"\n",
"This notebook demonstrates how to use SAM 3 for interactive video segmentation and dense tracking. It covers the following capabilities:\n",
"\n",
"- **Text prompts**: Using natural language descriptions to segment objects (e.g., \"person\", \"shoe\")\n",
"- **Point prompts**: Adding positive/negative clicks to segment and refine objects\n",
"\n",
"We use the terms _segment_ or _mask_ to refer to the model prediction for an object on a single frame, and _masklet_ to refer to the spatio-temporal masks across the entire video. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# \n",
"#
\n",
"# "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"using_colab = False"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if using_colab:\n",
" import torch\n",
" import torchvision\n",
" print(\"PyTorch version:\", torch.__version__)\n",
" print(\"Torchvision version:\", torchvision.__version__)\n",
" print(\"CUDA is available:\", torch.cuda.is_available())\n",
" import sys\n",
" !{sys.executable} -m pip install opencv-python matplotlib scikit-learn\n",
" !{sys.executable} -m pip install 'git+https://github.com/facebookresearch/sam3.git'"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Display GPU Status",
"origin": "ai"
},
"customOutput": null,
"executionStartTime": 1762496607874,
"executionStopTime": 1762496609713,
"isCommentPanelOpen": false,
"language": "python",
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Check GPU Status",
"origin": "ai"
},
"collapsed": false,
"customInput": null,
"customOutput": null,
"executionStartTime": 1761950741471,
"executionStopTime": 1761950742878,
"isCommentPanelOpen": false,
"language": "python",
"originalKey": "c3f5a31b-c8df-45db-ae6d-0d219ee60382",
"outputsInitialized": true,
"requestMsgId": "cea571e0-345b-461d-b2ee-8f95e0ea4b4e",
"serverExecutionDuration": 1091.9901590096,
"showInput": true
},
"originalKey": "fb0eb6a0-acd2-4c80-bacd-bafd09669e7e",
"output": {
"id": "794918370206651",
"loadingStatus": "before loading"
},
"outputsInitialized": true,
"requestMsgId": "fb0eb6a0-acd2-4c80-bacd-bafd09669e7e",
"serverExecutionDuration": 1123.5525669999
},
"outputs": [],
"source": [
"!nvidia-smi"
]
},
{
"cell_type": "markdown",
"metadata": {
"attachments": [],
"bentoAICellStatus": "none",
"isCommentPanelOpen": false,
"language": "markdown",
"metadata": {
"bentoAICellStatus": "none",
"collapsed": false,
"customInput": null,
"executionStartTime": 1761927188199,
"executionStopTime": 1761927188659,
"isCommentPanelOpen": false,
"language": "markdown",
"originalKey": "8304fc58-e145-4f5f-8bdc-a6d2dfba8a04",
"outputsInitialized": false,
"requestMsgId": "8304fc58-e145-4f5f-8bdc-a6d2dfba8a04",
"serverExecutionDuration": 3.739742009202,
"showInput": false
},
"originalKey": "6702b9f4-54e9-46be-aca8-82ad9f96e9cc",
"outputsInitialized": false,
"showInput": false
},
"source": [
"## Set-up\n",
"\n",
"In this example, we allow running inference either on a single GPU or multiple GPUs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Configure GPU Usage",
"origin": "ai"
},
"collapsed": false,
"customOutput": null,
"executionStartTime": 1762496609726,
"executionStopTime": 1762496617098,
"isCommentPanelOpen": false,
"jupyter": {
"outputs_hidden": false
},
"language": "python",
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Import iopath library",
"origin": "ai"
},
"collapsed": false,
"customOutput": null,
"executionStartTime": 1761950742885,
"executionStopTime": 1761950745386,
"isCommentPanelOpen": false,
"language": "python",
"originalKey": "5faf15c7-5bbb-4b86-a9a8-70abbc76cba2",
"outputsInitialized": true,
"requestMsgId": "e63f2826-d537-4849-82ac-fe7280cc9de0",
"serverExecutionDuration": 2117.1291570063
},
"originalKey": "5d0ad6b6-0225-4371-9455-e6291e92604c",
"output": {
"id": "1459804151757142",
"loadingStatus": "before loading"
},
"outputsInitialized": true,
"requestMsgId": "5d0ad6b6-0225-4371-9455-e6291e92604c",
"serverExecutionDuration": 6628.1851309996
},
"outputs": [],
"source": [
"import os\n",
"import sam3\n",
"import torch\n",
"\n",
"sam3_root = os.path.join(os.path.dirname(sam3.__file__), \"..\")\n",
"\n",
"# use all available GPUs on the machine\n",
"gpus_to_use = range(torch.cuda.device_count())\n",
"# # use only a single GPU\n",
"# gpus_to_use = [torch.cuda.current_device()]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Initialize Video Predictor",
"origin": "ai"
},
"collapsed": false,
"customOutput": null,
"executionStartTime": 1762496617103,
"executionStopTime": 1762496677439,
"isCommentPanelOpen": false,
"jupyter": {
"outputs_hidden": false
},
"language": "python",
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Import Video Predictor",
"origin": "ai"
},
"collapsed": false,
"customInput": null,
"customOutput": null,
"executionStartTime": 1761950750102,
"executionStopTime": 1761950806603,
"isCommentPanelOpen": false,
"language": "python",
"originalKey": "01683eda-e85f-4af6-9d91-86b3f1822170",
"outputsInitialized": true,
"requestMsgId": "822fb211-d78e-4d1c-92fa-848e0e755100",
"serverExecutionDuration": 55998.664824001,
"showInput": true
},
"originalKey": "aea5a4b9-de9f-46ed-9fd1-20928ab60d2e",
"output": {
"id": "1581259049706846",
"loadingStatus": "before loading"
},
"outputsInitialized": true,
"requestMsgId": "aea5a4b9-de9f-46ed-9fd1-20928ab60d2e",
"serverExecutionDuration": 59514.871851999
},
"outputs": [],
"source": [
"from sam3.model_builder import build_sam3_video_predictor\n",
"\n",
"predictor = build_sam3_video_predictor(gpus_to_use=gpus_to_use)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"bentoAICellStatus": "none",
"collapsed": false,
"customInput": null,
"executionStartTime": 1762140878760,
"executionStopTime": 1762140879318,
"isCommentPanelOpen": false,
"jupyter": {
"outputs_hidden": false
},
"language": "markdown",
"originalKey": "2cb37dda-a58c-46ae-85ff-118bb3ff4c02",
"outputsInitialized": false,
"requestMsgId": "2cb37dda-a58c-46ae-85ff-118bb3ff4c02",
"serverExecutionDuration": 3.7622059462592,
"showInput": false
},
"source": [
"#### Inference and visualization utils"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Set Up Video Processing",
"origin": "ai"
},
"collapsed": false,
"customInput": null,
"customOutput": null,
"executionStartTime": 1762496677450,
"executionStopTime": 1762496679879,
"isCommentPanelOpen": false,
"jupyter": {
"outputs_hidden": false
},
"language": "python",
"originalKey": "10d98ae4-dd65-4824-8469-960a9801ec72",
"output": {
"id": "1183417547004803",
"loadingStatus": "before loading"
},
"outputsInitialized": true,
"requestMsgId": "10d98ae4-dd65-4824-8469-960a9801ec72",
"serverExecutionDuration": 1535.9860829994,
"showInput": true
},
"outputs": [],
"source": [
"import glob\n",
"import os\n",
"\n",
"import cv2\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"from PIL import Image\n",
"from sam3.visualization_utils import (\n",
" load_frame,\n",
" prepare_masks_for_visualization,\n",
" visualize_formatted_frame_output,\n",
")\n",
"\n",
"# font size for axes titles\n",
"plt.rcParams[\"axes.titlesize\"] = 12\n",
"plt.rcParams[\"figure.titlesize\"] = 12\n",
"\n",
"\n",
"def propagate_in_video(predictor, session_id):\n",
" # we will just propagate from frame 0 to the end of the video\n",
" outputs_per_frame = {}\n",
" for response in predictor.handle_stream_request(\n",
" request=dict(\n",
" type=\"propagate_in_video\",\n",
" session_id=session_id,\n",
" )\n",
" ):\n",
" outputs_per_frame[response[\"frame_index\"]] = response[\"outputs\"]\n",
"\n",
" return outputs_per_frame\n",
"\n",
"\n",
"def abs_to_rel_coords(coords, IMG_WIDTH, IMG_HEIGHT, coord_type=\"point\"):\n",
" \"\"\"Convert absolute coordinates to relative coordinates (0-1 range)\n",
"\n",
" Args:\n",
" coords: List of coordinates\n",
" coord_type: 'point' for [x, y] or 'box' for [x, y, w, h]\n",
" \"\"\"\n",
" if coord_type == \"point\":\n",
" return [[x / IMG_WIDTH, y / IMG_HEIGHT] for x, y in coords]\n",
" elif coord_type == \"box\":\n",
" return [\n",
" [x / IMG_WIDTH, y / IMG_HEIGHT, w / IMG_WIDTH, h / IMG_HEIGHT]\n",
" for x, y, w, h in coords\n",
" ]\n",
" else:\n",
" raise ValueError(f\"Unknown coord_type: {coord_type}\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"attachments": [],
"bentoAICellStatus": "none",
"isCommentPanelOpen": false,
"language": "markdown",
"metadata": {
"bentoAICellStatus": "none",
"customInput": null,
"isCommentPanelOpen": false,
"language": "markdown",
"originalKey": "2e38c5fe-1aa7-4000-9778-25e240daf5e5",
"outputsInitialized": false,
"showInput": false
},
"originalKey": "7f803ec4-a343-43c9-9be3-5d3b9b66ae9a",
"outputsInitialized": false,
"showInput": false
},
"source": [
"### Loading an example video\n",
"\n",
"We assume that the video is stored as either **a list of JPEG frames with filenames like `.jpg`** or **an MP4 video**.\n",
"\n",
"Note that you can extract their JPEG frames using ffmpeg (https://ffmpeg.org/) as follows:\n",
"```\n",
"ffmpeg -i .mp4 -q:v 2 -start_number 0 /'%05d.jpg'\n",
"```\n",
"where `-q:v` generates high-quality JPEG frames and `-start_number 0` asks ffmpeg to start the JPEG file from `00000.jpg`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Set video path",
"origin": "ai"
},
"collapsed": false,
"customOutput": null,
"executionStartTime": 1762496679887,
"executionStopTime": 1762496680687,
"isCommentPanelOpen": false,
"jupyter": {
"outputs_hidden": false
},
"language": "python",
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Print SAM3 Directory Path",
"origin": "ai"
},
"collapsed": false,
"customInput": null,
"customOutput": null,
"executionStartTime": 1761950806610,
"executionStopTime": 1761950807097,
"isCommentPanelOpen": false,
"language": "python",
"originalKey": "59540fba-3749-4498-bd14-c28fb7d61dbc",
"outputsInitialized": true,
"requestMsgId": "2807b3b8-a4fb-41e9-9976-67e2cd1f2ca2",
"serverExecutionDuration": 5.3407719824463,
"showInput": true
},
"originalKey": "92d7b964-a3e4-4efb-98d4-202344994413",
"outputsInitialized": false,
"requestMsgId": "92d7b964-a3e4-4efb-98d4-202344994413",
"serverExecutionDuration": 3.5114740003337
},
"outputs": [],
"source": [
"# \"video_path\" needs to be either a JPEG folder or a MP4 video file\n",
"video_path = f\"{sam3_root}/assets/videos/0001\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Load Video Frames for Visualization",
"origin": "ai"
},
"collapsed": false,
"customOutput": null,
"executionStartTime": 1762496680695,
"executionStopTime": 1762496681428,
"isCommentPanelOpen": false,
"jupyter": {
"outputs_hidden": false
},
"language": "python",
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Load Video Frames for Visualization",
"origin": "ai"
},
"collapsed": false,
"customInput": null,
"customOutput": null,
"executionStartTime": 1761950807101,
"executionStopTime": 1761950807480,
"isCommentPanelOpen": false,
"language": "python",
"originalKey": "86f88968-93c4-4ca1-81fc-7fad55ccd566",
"outputsInitialized": true,
"requestMsgId": "96b17a72-b889-4a1f-af98-b40c773780f7",
"serverExecutionDuration": 7.6176410075277,
"showInput": true
},
"originalKey": "10fe83fd-eb12-4400-a8ac-b6e137819136",
"outputsInitialized": false,
"requestMsgId": "10fe83fd-eb12-4400-a8ac-b6e137819136",
"serverExecutionDuration": 8.1744140006776
},
"outputs": [],
"source": [
"# load \"video_frames_for_vis\" for visualization purposes (they are not used by the model)\n",
"if isinstance(video_path, str) and video_path.endswith(\".mp4\"):\n",
" cap = cv2.VideoCapture(video_path)\n",
" video_frames_for_vis = []\n",
" while True:\n",
" ret, frame = cap.read()\n",
" if not ret:\n",
" break\n",
" video_frames_for_vis.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))\n",
" cap.release()\n",
"else:\n",
" video_frames_for_vis = glob.glob(os.path.join(video_path, \"*.jpg\"))\n",
" try:\n",
" # integer sort instead of string sort (so that e.g. \"2.jpg\" is before \"11.jpg\")\n",
" video_frames_for_vis.sort(\n",
" key=lambda p: int(os.path.splitext(os.path.basename(p))[0])\n",
" )\n",
" except ValueError:\n",
" # fallback to lexicographic sort if the format is not \".jpg\"\n",
" print(\n",
" f'frame names are not in \".jpg\" format: {video_frames_for_vis[:5]=}, '\n",
" f\"falling back to lexicographic sort.\"\n",
" )\n",
" video_frames_for_vis.sort()"
]
},
{
"cell_type": "markdown",
"metadata": {
"attachments": [],
"bentoAICellStatus": "none",
"isCommentPanelOpen": false,
"language": "markdown",
"metadata": {
"bentoAICellStatus": "none",
"customInput": null,
"isCommentPanelOpen": false,
"language": "markdown",
"originalKey": "2f67df56-df50-470d-bb6d-f736a592dd47",
"outputsInitialized": false,
"showInput": false
},
"originalKey": "552fbb00-7387-4014-9161-7f9c32418701",
"outputsInitialized": false,
"showInput": false
},
"source": [
"### Opening an inference session on this video\n",
"\n",
"SAM 3 requires stateful inference for interactive video segmentation, so we need to initialize an **inference session** on this video.\n",
"\n",
"During initialization, it loads all the video frames and stores their pixels in the session state."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Start Video Session",
"origin": "ai"
},
"collapsed": false,
"customOutput": null,
"executionStartTime": 1762496681434,
"executionStopTime": 1762496694273,
"isCommentPanelOpen": false,
"jupyter": {
"outputs_hidden": false
},
"language": "python",
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Start Video Session",
"origin": "ai"
},
"collapsed": false,
"customInput": null,
"customOutput": null,
"executionStartTime": 1761950807485,
"executionStopTime": 1761950821459,
"isCommentPanelOpen": false,
"language": "python",
"originalKey": "eec3ee85-e95a-4fa7-8401-921fba35d6ee",
"outputsInitialized": true,
"requestMsgId": "262d936d-486c-4a75-8f25-ad2a2832c3f8",
"serverExecutionDuration": 13503.750298987,
"showInput": true
},
"originalKey": "2b5917e0-95da-409a-b2c6-4868fe5ff88e",
"output": {
"id": "816533627765185",
"loadingStatus": "before loading"
},
"outputsInitialized": true,
"requestMsgId": "2b5917e0-95da-409a-b2c6-4868fe5ff88e",
"serverExecutionDuration": 11971.027362999
},
"outputs": [],
"source": [
"response = predictor.handle_request(\n",
" request=dict(\n",
" type=\"start_session\",\n",
" resource_path=video_path,\n",
" )\n",
")\n",
"session_id = response[\"session_id\"]"
]
},
{
"cell_type": "markdown",
"metadata": {
"attachments": [],
"bentoAICellStatus": "none",
"isCommentPanelOpen": false,
"language": "markdown",
"metadata": {
"bentoAICellStatus": "none",
"customInput": null,
"isCommentPanelOpen": false,
"language": "markdown",
"originalKey": "4d26ec1b-d78f-4054-879a-cbf06b84a79f",
"outputsInitialized": false,
"showInput": false
},
"originalKey": "1da36b65-5759-4803-be41-ec6d2ec8c5d9",
"outputsInitialized": false,
"showInput": false
},
"source": [
"### Video promptable concept segmentation with text\n",
"\n",
"Using SAM 3 you can describe objects using natural language, and the model will automatically detect and track all instances of that object throughout the video.\n",
"\n",
"In the example below, we add a text prompt on frame 0 and propagation throughout the video. Here we use the text prompt \"person\" to detect all people in the video. SAM 3 will automatically identify multiple person instances and assign each a unique object ID.\n",
"\n",
"Note that the first call might be slower due to setting up buffers. **You can rerun all the cells below when measuring speed.**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Reset Session",
"origin": "ai"
},
"collapsed": false,
"customOutput": null,
"executionStartTime": 1762496694278,
"executionStopTime": 1762496694675,
"isCommentPanelOpen": false,
"jupyter": {
"outputs_hidden": false
},
"language": "python",
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Reset Prediction Session",
"origin": "ai"
},
"collapsed": false,
"customInput": null,
"customOutput": null,
"executionStartTime": 1761950821467,
"executionStopTime": 1761950821992,
"isCommentPanelOpen": false,
"language": "python",
"originalKey": "be47ff32-7721-43c2-a1ad-b9d1689cf1c1",
"outputsInitialized": true,
"requestMsgId": "c5eb2442-0530-4408-9ace-ba74e7441cf6",
"serverExecutionDuration": 10.527236998314,
"showInput": true
},
"originalKey": "f61b9bc5-5ec7-4c72-9071-e50bedb89f0a",
"output": {
"id": "4255598051433217",
"loadingStatus": "before loading"
},
"outputsInitialized": true,
"requestMsgId": "f61b9bc5-5ec7-4c72-9071-e50bedb89f0a",
"serverExecutionDuration": 9.9740919995384
},
"outputs": [],
"source": [
"# note: in case you already ran one text prompt and now want to switch to another text prompt\n",
"# it's required to reset the session first (otherwise the results would be wrong)\n",
"_ = predictor.handle_request(\n",
" request=dict(\n",
" type=\"reset_session\",\n",
" session_id=session_id,\n",
" )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Add Text Prompt",
"origin": "ai"
},
"collapsed": false,
"customOutput": null,
"executionStartTime": 1762496694678,
"executionStopTime": 1762496699791,
"isCommentPanelOpen": false,
"jupyter": {
"outputs_hidden": false
},
"language": "python",
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Add Prompt to Frame",
"origin": "ai"
},
"collapsed": false,
"customInput": null,
"customOutput": null,
"executionStartTime": 1761950821995,
"executionStopTime": 1761950825751,
"isCommentPanelOpen": false,
"language": "python",
"originalKey": "ca780545-a450-4ed8-a192-f6033e0288ba",
"outputsInitialized": true,
"requestMsgId": "d784487d-5758-40a4-8a66-1ea26ad3bcfd",
"serverExecutionDuration": 3358.8370049838,
"showInput": true
},
"originalKey": "55538638-8336-4b1c-9daf-517c0dc31806",
"output": {
"id": "2083947459075035",
"loadingStatus": "before loading"
},
"outputsInitialized": true,
"requestMsgId": "55538638-8336-4b1c-9daf-517c0dc31806",
"serverExecutionDuration": 3957.4137909985
},
"outputs": [],
"source": [
"prompt_text_str = \"person\"\n",
"frame_idx = 0 # add a text prompt on frame 0\n",
"response = predictor.handle_request(\n",
" request=dict(\n",
" type=\"add_prompt\",\n",
" session_id=session_id,\n",
" frame_index=frame_idx,\n",
" text=prompt_text_str,\n",
" )\n",
")\n",
"out = response[\"outputs\"]\n",
"\n",
"plt.close(\"all\")\n",
"visualize_formatted_frame_output(\n",
" frame_idx,\n",
" video_frames_for_vis,\n",
" outputs_list=[prepare_masks_for_visualization({frame_idx: out})],\n",
" titles=[\"SAM 3 Dense Tracking outputs\"],\n",
" figsize=(6, 4),\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Visualize Video Outputs",
"origin": "ai"
},
"collapsed": false,
"customOutput": null,
"executionStartTime": 1762496699796,
"executionStopTime": 1762496734325,
"isCommentPanelOpen": false,
"jupyter": {
"outputs_hidden": false
},
"language": "python",
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Visualize Video Outputs",
"origin": "ai"
},
"collapsed": false,
"customInput": null,
"customOutput": null,
"executionStartTime": 1761950827402,
"executionStopTime": 1761950861260,
"isCommentPanelOpen": false,
"language": "python",
"originalKey": "06edc7da-9dd1-42e8-b502-0afdbfbaddc2",
"outputsInitialized": true,
"requestMsgId": "33659e9c-a817-40fe-b3b7-727b7a344d74",
"serverExecutionDuration": 33374.819626013,
"showInput": true
},
"originalKey": "41e1b095-80a4-4997-9b3e-2baa28dba05f",
"output": {
"id": "2605449593181328",
"loadingStatus": "before loading"
},
"outputsInitialized": true,
"requestMsgId": "41e1b095-80a4-4997-9b3e-2baa28dba05f",
"serverExecutionDuration": 33588.144188001
},
"outputs": [],
"source": [
"# now we propagate the outputs from frame 0 to the end of the video and collect all outputs\n",
"outputs_per_frame = propagate_in_video(predictor, session_id)\n",
"\n",
"# finally, we reformat the outputs for visualization and plot the outputs every 60 frames\n",
"outputs_per_frame = prepare_masks_for_visualization(outputs_per_frame)\n",
"\n",
"vis_frame_stride = 60\n",
"plt.close(\"all\")\n",
"for frame_idx in range(0, len(outputs_per_frame), vis_frame_stride):\n",
" visualize_formatted_frame_output(\n",
" frame_idx,\n",
" video_frames_for_vis,\n",
" outputs_list=[outputs_per_frame],\n",
" titles=[\"SAM 3 Dense Tracking outputs\"],\n",
" figsize=(6, 4),\n",
" )"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"bentoAICellStatus": "none",
"customInput": null,
"isCommentPanelOpen": false,
"language": "markdown",
"originalKey": "e341c66f-6083-4731-b8e1-ebc7b4177a43",
"outputsInitialized": false,
"showInput": false
},
"source": [
"### Removing objects\n",
"\n",
"We can remove individual objects using their id.\n",
"\n",
"As an example, let's remove object 2 (which is the dancer in the front)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Remove Front Dancer",
"origin": "ai"
},
"collapsed": false,
"customInput": null,
"customOutput": null,
"executionStartTime": 1762496734333,
"executionStopTime": 1762496735487,
"isCommentPanelOpen": false,
"jupyter": {
"outputs_hidden": false
},
"language": "python",
"originalKey": "cf97330e-68f5-4f4e-931f-bc5b6faa77ff",
"output": {
"id": "1345936250272478",
"loadingStatus": "before loading"
},
"outputsInitialized": true,
"requestMsgId": "cf97330e-68f5-4f4e-931f-bc5b6faa77ff",
"serverExecutionDuration": 127.66883199947,
"showInput": true
},
"outputs": [],
"source": [
"# we pick id 2, which is the dancer in the front\n",
"obj_id = 2\n",
"response = predictor.handle_request(\n",
" request=dict(\n",
" type=\"remove_object\",\n",
" session_id=session_id,\n",
" obj_id=obj_id,\n",
" )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Visualize Video Outputs",
"origin": "ai"
},
"collapsed": false,
"customInput": null,
"customOutput": null,
"executionStartTime": 1762496735493,
"executionStopTime": 1762496742056,
"isCommentPanelOpen": false,
"jupyter": {
"outputs_hidden": false
},
"language": "python",
"originalKey": "a4552ab0-1b08-42b6-b90d-bfd1814c9398",
"output": {
"id": "1747332999309403",
"loadingStatus": "before loading"
},
"outputsInitialized": true,
"requestMsgId": "a4552ab0-1b08-42b6-b90d-bfd1814c9398",
"serverExecutionDuration": 5491.8713629995,
"showInput": true
},
"outputs": [],
"source": [
"# now we propagate the outputs from frame 0 to the end of the video and collect all outputs\n",
"outputs_per_frame = propagate_in_video(predictor, session_id)\n",
"\n",
"# finally, we reformat the outputs for visualization and plot the outputs every 60 frames\n",
"outputs_per_frame = prepare_masks_for_visualization(outputs_per_frame)\n",
"\n",
"vis_frame_stride = 60\n",
"plt.close(\"all\")\n",
"for frame_idx in range(0, len(outputs_per_frame), vis_frame_stride):\n",
" visualize_formatted_frame_output(\n",
" frame_idx,\n",
" video_frames_for_vis,\n",
" outputs_list=[outputs_per_frame],\n",
" titles=[\"SAM 3 Dense Tracking outputs\"],\n",
" figsize=(6, 4),\n",
" )"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"bentoAICellStatus": "none",
"customInput": null,
"isCommentPanelOpen": false,
"language": "markdown",
"originalKey": "4606cb85-5007-4b11-baa6-41ce9017de8e",
"outputsInitialized": false,
"showInput": false
},
"source": [
"### Adding new objects with point prompts\n",
"\n",
"We can add new objects through point prompts.\n",
"\n",
"Assuming that we've changed our mind, and now that we want to add back the dancer in the front (whom we just removed in the step above). We can use interactive clicks to add her back."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Get image dimensions",
"origin": "ai"
},
"collapsed": false,
"customInput": null,
"customOutput": null,
"executionStartTime": 1762496742064,
"executionStopTime": 1762496743435,
"isCommentPanelOpen": false,
"jupyter": {
"outputs_hidden": false
},
"language": "python",
"originalKey": "2ad8f8e9-49d8-4003-ad2f-f289ab5befc8",
"outputsInitialized": false,
"requestMsgId": "2ad8f8e9-49d8-4003-ad2f-f289ab5befc8",
"serverExecutionDuration": 15.222899999571,
"showInput": true
},
"outputs": [],
"source": [
"sample_img = Image.fromarray(load_frame(video_frames_for_vis[0]))\n",
"\n",
"IMG_WIDTH, IMG_HEIGHT = sample_img.size"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Convert Absolute to Relative Coordinates",
"origin": "ai"
},
"collapsed": false,
"customInput": null,
"customOutput": null,
"executionStartTime": 1762496743442,
"executionStopTime": 1762496744333,
"isCommentPanelOpen": false,
"jupyter": {
"outputs_hidden": false
},
"language": "python",
"originalKey": "3baca8f9-8e19-46da-a320-29524ddbf950",
"outputsInitialized": false,
"requestMsgId": "3baca8f9-8e19-46da-a320-29524ddbf950",
"serverExecutionDuration": 3.9295850001508,
"showInput": true
},
"outputs": [],
"source": [
"# let's add back the dancer via point prompts.\n",
"# we will use a single positive click to add the dancer back.\n",
"\n",
"frame_idx = 0\n",
"obj_id = 2\n",
"points_abs = np.array(\n",
" [\n",
" [760, 550], # positive click\n",
" ]\n",
")\n",
"# positive clicks have label 1, while negative clicks have label 0\n",
"labels = np.array([1])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Display Data Points",
"origin": "ai"
},
"collapsed": false,
"customInput": null,
"customOutput": null,
"executionStartTime": 1762496744337,
"executionStopTime": 1762496748117,
"isCommentPanelOpen": false,
"jupyter": {
"outputs_hidden": false
},
"language": "python",
"originalKey": "865e951f-8d0a-41bf-8d51-72092703e3cf",
"output": {
"id": "1240547311311464",
"loadingStatus": "before loading"
},
"outputsInitialized": true,
"requestMsgId": "865e951f-8d0a-41bf-8d51-72092703e3cf",
"serverExecutionDuration": 1224.9363859992,
"showInput": true
},
"outputs": [],
"source": [
"# convert points and labels to tensors; also convert to relative coordinates\n",
"points_tensor = torch.tensor(\n",
" abs_to_rel_coords(points_abs, IMG_WIDTH, IMG_HEIGHT, coord_type=\"point\"),\n",
" dtype=torch.float32,\n",
")\n",
"points_labels_tensor = torch.tensor(labels, dtype=torch.int32)\n",
"\n",
"response = predictor.handle_request(\n",
" request=dict(\n",
" type=\"add_prompt\",\n",
" session_id=session_id,\n",
" frame_index=frame_idx,\n",
" points=points_tensor,\n",
" point_labels=points_labels_tensor,\n",
" obj_id=obj_id,\n",
" )\n",
")\n",
"out = response[\"outputs\"]\n",
"\n",
"plt.close(\"all\")\n",
"visualize_formatted_frame_output(\n",
" frame_idx,\n",
" video_frames_for_vis,\n",
" outputs_list=[prepare_masks_for_visualization({frame_idx: out})],\n",
" titles=[\"SAM 3 Dense Tracking outputs\"],\n",
" figsize=(6, 4),\n",
" points_list=[points_abs],\n",
" points_labels_list=[labels],\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Process Video Frames",
"origin": "ai"
},
"collapsed": false,
"customInput": null,
"customOutput": null,
"executionStartTime": 1762496748120,
"executionStopTime": 1762496774486,
"isCommentPanelOpen": false,
"jupyter": {
"outputs_hidden": false
},
"language": "python",
"originalKey": "cf1d2dda-464b-4a6a-aff3-d729aa486ec3",
"output": {
"id": "824678093489712",
"loadingStatus": "before loading"
},
"outputsInitialized": true,
"requestMsgId": "cf1d2dda-464b-4a6a-aff3-d729aa486ec3",
"serverExecutionDuration": 25528.605932001,
"showInput": true
},
"outputs": [],
"source": [
"# now we propagate the outputs from frame 0 to the end of the video and collect all outputs\n",
"outputs_per_frame = propagate_in_video(predictor, session_id)\n",
"\n",
"# finally, we reformat the outputs for visualization and plot the outputs every 60 frames\n",
"outputs_per_frame = prepare_masks_for_visualization(outputs_per_frame)\n",
"\n",
"vis_frame_stride = 60\n",
"plt.close(\"all\")\n",
"for frame_idx in range(0, len(outputs_per_frame), vis_frame_stride):\n",
" visualize_formatted_frame_output(\n",
" frame_idx,\n",
" video_frames_for_vis,\n",
" outputs_list=[outputs_per_frame],\n",
" titles=[\"SAM 3 Dense Tracking outputs\"],\n",
" figsize=(6, 4),\n",
" )"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"bentoAICellStatus": "none",
"customInput": null,
"isCommentPanelOpen": false,
"language": "markdown",
"originalKey": "79b12d4e-cdf0-41b8-9939-bcc1d5422115",
"outputsInitialized": false,
"showInput": false
},
"source": [
"### Refining an existing object with point prompts\n",
"\n",
"We can also refine the segmentation mask of an existing object through point prompts.\n",
"\n",
"Assuming that we've changed our mind (again) -- for Object ID 2 (the dancer in the front whom we just added back in the step above), now we only want to segment her T-shirt instead of her whole body. We can adjust the segmentation mask with a few more positive and negative clicks."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Segment T-shirt with Clicks",
"origin": "ai"
},
"collapsed": false,
"customInput": null,
"customOutput": null,
"executionStartTime": 1762496774494,
"executionStopTime": 1762496775380,
"isCommentPanelOpen": false,
"jupyter": {
"outputs_hidden": false
},
"language": "python",
"originalKey": "8114fb12-386d-45e0-b875-f74eee630d96",
"outputsInitialized": false,
"requestMsgId": "8114fb12-386d-45e0-b875-f74eee630d96",
"serverExecutionDuration": 4.0469909999956,
"showInput": true
},
"outputs": [],
"source": [
"# For the dancer in the front, suppose now we only want to segment her T-shirt instead of her whole body\n",
"# we will use 2 positive clicks and 2 negative clicks to select her shirt.\n",
"\n",
"frame_idx = 0\n",
"obj_id = 2\n",
"points_abs = np.array(\n",
" [\n",
" [740, 450], # positive click\n",
" [760, 630], # negative click\n",
" [840, 640], # negative click\n",
" [760, 550], # positive click\n",
" ]\n",
")\n",
"# positive clicks have label 1, while negative clicks have label 0\n",
"labels = np.array([1, 0, 0, 1])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Process and Visualize Frame Outputs",
"origin": "ai"
},
"collapsed": false,
"customInput": null,
"customOutput": null,
"executionStartTime": 1762496775385,
"executionStopTime": 1762496777685,
"isCommentPanelOpen": false,
"jupyter": {
"outputs_hidden": false
},
"language": "python",
"originalKey": "5e66f671-aa71-42d7-a68d-8dc6430df8fe",
"output": {
"id": "25486291537675748",
"loadingStatus": "before loading"
},
"outputsInitialized": true,
"requestMsgId": "5e66f671-aa71-42d7-a68d-8dc6430df8fe",
"serverExecutionDuration": 1227.9235379992,
"showInput": true
},
"outputs": [],
"source": [
"# convert points and labels to tensors; also convert to relative coordinates\n",
"points_tensor = torch.tensor(\n",
" abs_to_rel_coords(points_abs, IMG_WIDTH, IMG_HEIGHT, coord_type=\"point\"),\n",
" dtype=torch.float32,\n",
")\n",
"points_labels_tensor = torch.tensor(labels, dtype=torch.int32)\n",
"\n",
"response = predictor.handle_request(\n",
" request=dict(\n",
" type=\"add_prompt\",\n",
" session_id=session_id,\n",
" frame_index=frame_idx,\n",
" points=points_tensor,\n",
" point_labels=points_labels_tensor,\n",
" obj_id=obj_id,\n",
" )\n",
")\n",
"out = response[\"outputs\"]\n",
"\n",
"plt.close(\"all\")\n",
"visualize_formatted_frame_output(\n",
" frame_idx,\n",
" video_frames_for_vis,\n",
" outputs_list=[prepare_masks_for_visualization({frame_idx: out})],\n",
" titles=[\"SAM 3 Dense Tracking outputs\"],\n",
" figsize=(6, 4),\n",
" points_list=[points_abs],\n",
" points_labels_list=[labels],\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Visualize Video Tracking Outputs",
"origin": "ai"
},
"collapsed": false,
"customInput": null,
"customOutput": null,
"executionStartTime": 1762496777688,
"executionStopTime": 1762496803927,
"isCommentPanelOpen": false,
"jupyter": {
"outputs_hidden": false
},
"language": "python",
"originalKey": "5c4a35a7-e5cc-4c7d-ba05-25540665d125",
"output": {
"id": "1222230279729631",
"loadingStatus": "before loading"
},
"outputsInitialized": true,
"requestMsgId": "5c4a35a7-e5cc-4c7d-ba05-25540665d125",
"serverExecutionDuration": 25325.393453,
"showInput": true
},
"outputs": [],
"source": [
"# now we propagate the outputs from frame 0 to the end of the video and collect all outputs\n",
"outputs_per_frame = propagate_in_video(predictor, session_id)\n",
"\n",
"# finally, we reformat the outputs for visualization and plot the outputs every 60 frames\n",
"outputs_per_frame = prepare_masks_for_visualization(outputs_per_frame)\n",
"\n",
"vis_frame_stride = 60\n",
"plt.close(\"all\")\n",
"for frame_idx in range(0, len(outputs_per_frame), vis_frame_stride):\n",
" visualize_formatted_frame_output(\n",
" frame_idx,\n",
" video_frames_for_vis,\n",
" outputs_list=[outputs_per_frame],\n",
" titles=[\"SAM 3 Dense Tracking outputs\"],\n",
" figsize=(6, 4),\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {
"attachments": [],
"bentoAICellStatus": "none",
"isCommentPanelOpen": false,
"language": "markdown",
"metadata": {
"bentoAICellStatus": "none",
"customInput": null,
"isCommentPanelOpen": false,
"language": "markdown",
"originalKey": "3f8d998b-4585-4956-b15d-1d4078cf8927",
"outputsInitialized": false,
"showInput": false
},
"originalKey": "b1d99d9b-c26e-4d5a-a4a2-70aab345fd77",
"outputsInitialized": false,
"showInput": false
},
"source": [
"### Close session\n",
"\n",
"Each session is tied to a single video. We can close the session after inference to free up its resources.\n",
"\n",
"(Then, you may start a new session on another video.)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Close inference session",
"origin": "ai"
},
"collapsed": false,
"customOutput": null,
"executionStartTime": 1762496803937,
"executionStopTime": 1762496805854,
"isCommentPanelOpen": false,
"jupyter": {
"outputs_hidden": false
},
"language": "python",
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Reset Session Request",
"origin": "ai"
},
"collapsed": false,
"customInput": null,
"customOutput": null,
"executionStartTime": 1761950861264,
"executionStopTime": 1761950863082,
"isCommentPanelOpen": false,
"language": "python",
"originalKey": "d08a9e5f-3563-4aa1-ba8a-9f94615e4d7e",
"outputsInitialized": true,
"requestMsgId": "6e872e9a-3803-418f-827d-f1d817ef9cb9",
"serverExecutionDuration": 926.3133899949,
"showInput": true
},
"originalKey": "4f94819e-de3c-49bc-a25f-0790b2fa2cfb",
"output": {
"id": "1532190008092267",
"loadingStatus": "before loading"
},
"outputsInitialized": true,
"requestMsgId": "4f94819e-de3c-49bc-a25f-0790b2fa2cfb",
"serverExecutionDuration": 941.08029199924
},
"outputs": [],
"source": [
"# finally, close the inference session to free its GPU resources\n",
"# (you may start a new session on another video)\n",
"_ = predictor.handle_request(\n",
" request=dict(\n",
" type=\"close_session\",\n",
" session_id=session_id,\n",
" )\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"attachments": [],
"bentoAICellStatus": "none",
"isCommentPanelOpen": false,
"language": "markdown",
"metadata": {
"bentoAICellStatus": "none",
"customInput": null,
"isCommentPanelOpen": false,
"language": "markdown",
"originalKey": "d79e36cd-a691-420d-8061-ad5222913770",
"outputsInitialized": false,
"showInput": false
},
"originalKey": "5f1ced4a-7f9b-4a3f-88bb-77fd601466eb",
"outputsInitialized": false,
"showInput": false
},
"source": [
"### Clean-up\n",
"\n",
"After all inference is done, we can shutdown the predictor to free up the multi-GPU process group."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Shutdown Predictor",
"origin": "ai"
},
"collapsed": false,
"customOutput": null,
"executionStartTime": 1762496805866,
"executionStopTime": 1762496807085,
"isCommentPanelOpen": false,
"jupyter": {
"outputs_hidden": false
},
"language": "python",
"metadata": {
"bentoAICellStatus": "none",
"bentoCellName": {
"name": "Shutdown Predictor",
"origin": "ai"
},
"collapsed": false,
"customInput": null,
"customOutput": null,
"executionStartTime": 1761950863090,
"executionStopTime": 1761950863987,
"isCommentPanelOpen": false,
"language": "python",
"originalKey": "2de9666e-d888-4b2b-955a-82727363fc59",
"outputsInitialized": true,
"requestMsgId": "59e250c1-3d4c-436d-aa6e-fc0429fe4d8f",
"serverExecutionDuration": 282.37027197611,
"showInput": true
},
"originalKey": "bb5f6b72-d945-4193-8988-2490e9168882",
"output": {
"id": "1866958724222293",
"loadingStatus": "before loading"
},
"outputsInitialized": true,
"requestMsgId": "bb5f6b72-d945-4193-8988-2490e9168882",
"serverExecutionDuration": 284.71523799999
},
"outputs": [],
"source": [
"# after all inference is done, we can shutdown the predictor\n",
"# to free up the multi-GPU process group\n",
"predictor.shutdown()"
]
  }
],
"metadata": {
"bento_stylesheets": {
"bento/extensions/flow/main.css": true,
"bento/extensions/kernel_selector/main.css": true,
"bento/extensions/kernel_ui/main.css": true,
"bento/extensions/new_kernel/main.css": true,
"bento/extensions/system_usage/main.css": true,
"bento/extensions/theme/main.css": true
},
"captumWidgetMessage": [],
"fileHeader": "",
"fileUid": "8685c221-c143-4b84-98ec-b1f023cedd6c",
"isAdHoc": false,
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.11"
},
"last_base_url": "https://bento.edge.x2p.facebook.net/",
"last_kernel_id": "b57809cb-57de-4b58-a47a-2cd14cd7dc51",
"last_msg_id": "be2245fc-daa1cc5649ef79144c475c5d_1965",
"last_server_session_id": "4fb65252-bdbd-4eea-b3c3-4a9f2995ad48",
"notebookId": "825823386977069",
"notebookNumber": "N8482762"
},
"nbformat": 4,
"nbformat_minor": 2
}