---
viewer: false
tags: [uv-script, computer-vision, object-detection, sam3, image-processing]
license: apache-2.0
---
# SAM3 Object Detection
Detect objects in images using Meta's [SAM3](https://huggingface.co/facebook/sam3) (Segment Anything Model 3) with text prompts. Process HuggingFace datasets with zero-shot object detection driven by natural language descriptions.
## Quick Start
**Requires GPU.** Use HuggingFace Jobs for cloud execution:
```bash
hf jobs uv run --flavor a100-large \
-s HF_TOKEN=HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/sam3/raw/main/detect-objects.py \
input-dataset \
output-dataset \
--class-name photograph
```
## Example Output
Here's an example of detected objects (photographs in historical newspapers) with bounding boxes and confidence scores:

_Photograph detected in a historical newspaper with bounding box and confidence score. Generated from [davanstrien/newspapers-image-predictions](https://huggingface.co/datasets/davanstrien/newspapers-image-predictions)._
## Local Execution
If you have a CUDA GPU locally:
```bash
uv run detect-objects.py INPUT OUTPUT --class-name CLASSNAME
```
## Arguments
**Required:**
- `input_dataset` - Input HF dataset ID
- `output_dataset` - Output HF dataset ID
- `--class-name` - Object class to detect (e.g., `"photograph"`, `"animal"`, `"table"`)
**Common options:**
- `--confidence-threshold FLOAT` - Min confidence (default: 0.5)
- `--batch-size INT` - Batch size (default: 4)
- `--max-samples INT` - Limit samples for testing
- `--image-column STR` - Image column name (default: "image")
- `--private` - Make output private
**All options:**
```
--mask-threshold FLOAT Mask generation threshold (default: 0.5)
--split STR Dataset split (default: "train")
--shuffle Shuffle before processing
--model STR Model ID (default: "facebook/sam3")
--dtype STR Precision: float32|float16|bfloat16
--hf-token STR HF token (or use HF_TOKEN env var)
```
## HuggingFace Jobs Examples
### Historical Newspapers
Detect photographs in historical newspaper scans:
```bash
hf jobs uv run --flavor a100-large \
-s HF_TOKEN=HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/sam3/raw/main/detect-objects.py \
davanstrien/newspapers-with-images-after-photography \
my-username/newspapers-detected \
--class-name photograph \
--confidence-threshold 0.6 \
--batch-size 8
```
### Document Tables
Extract tables from document scans:
```bash
hf jobs uv run --flavor a100-large \
-s HF_TOKEN=HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/sam3/raw/main/detect-objects.py \
my-documents \
documents-with-tables \
--class-name table
```
### Wildlife Camera Traps
Detect animals in camera trap images:
```bash
hf jobs uv run --flavor a100-large \
-s HF_TOKEN=HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/sam3/raw/main/detect-objects.py \
wildlife-images \
wildlife-detections \
--class-name animal \
--confidence-threshold 0.5
```
### Quick Testing
Test on a small subset before full run:
```bash
hf jobs uv run --flavor a100-large \
-s HF_TOKEN=HF_TOKEN \
https://huggingface.co/datasets/uv-scripts/sam3/raw/main/detect-objects.py \
large-dataset \
test-output \
--class-name object \
--max-samples 20
```
### Using Different GPU Flavors
```bash
# L4 (cost-effective)
--flavor l4x1
# A100 (fastest)
--flavor a100-large
```
See [HF Jobs pricing](https://huggingface.co/pricing#spaces-compute).
## Output Format
Adds `objects` column with ClassLabel-based detections:
```python
{
    "objects": [
        {
            "bbox": [x, y, width, height],
            "category": 0,  # Always 0 for single class
            "score": 0.87
        }
    ]
}
```
Load and use:
```python
from datasets import load_dataset

ds = load_dataset("username/output", split="train")

# ClassLabel feature preserves your class name
class_name = ds.features["objects"].feature["category"].names[0]
print(f"Detected class: {class_name}")

for sample in ds:
    for obj in sample["objects"]:
        print(f"{class_name}: {obj['score']:.2f} at {obj['bbox']}")
```
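To spot-check results visually, you can render the boxes onto the images. Here is a minimal sketch using Pillow; the `draw_detections` helper and the synthetic sample row are illustrative, not part of the script:

```python
from PIL import Image, ImageDraw

def draw_detections(image, objects, label="photograph"):
    """Draw [x, y, width, height] boxes and scores onto a copy of the image."""
    annotated = image.copy()
    draw = ImageDraw.Draw(annotated)
    for obj in objects:
        x, y, w, h = obj["bbox"]
        draw.rectangle([x, y, x + w, y + h], outline="red", width=3)
        draw.text((x, max(0, y - 12)), f'{label}: {obj["score"]:.2f}', fill="red")
    return annotated

# Synthetic sample in the script's output format (stand-in for a real row)
sample = {"objects": [{"bbox": [40, 30, 120, 80], "category": 0, "score": 0.87}]}
img = Image.new("RGB", (320, 240), "white")
out = draw_detections(img, sample["objects"])
```

With a real output dataset, pass each row's image and `sample["objects"]` instead of the synthetic values.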
## Detecting Multiple Object Types
To detect multiple object types, run the script multiple times with different `--class-name` values:
```bash
# Detect photographs
hf jobs uv run ... --class-name photograph
# Detect illustrations
hf jobs uv run ... --class-name illustration
# Merge results as needed
```
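Because each run emits a single-class `objects` column with `category` fixed at 0, merging is mostly a matter of remapping category ids. A sketch over plain Python rows (the `merge_detections` helper is hypothetical and assumes every run came from the same input dataset, so rows align by index):

```python
def merge_detections(runs):
    """Merge per-class detection runs (aligned row-by-row) into one
    multi-class `objects` column, remapping `category` ids.

    `runs` is a list of (class_name, rows) pairs, where each `rows` is a
    list of {"objects": [...]} dicts in the script's output format.
    """
    class_names = [name for name, _ in runs]
    merged = []
    for i in range(len(runs[0][1])):
        objects = []
        for cat_id, (_, rows) in enumerate(runs):
            # Reassign category so each run keeps a distinct class id
            objects.extend({**obj, "category": cat_id} for obj in rows[i]["objects"])
        merged.append({"objects": objects})
    return class_names, merged

photos = [{"objects": [{"bbox": [0, 0, 10, 10], "category": 0, "score": 0.9}]}]
illus = [{"objects": [{"bbox": [5, 5, 20, 20], "category": 0, "score": 0.7}]}]
names, rows = merge_detections([("photograph", photos), ("illustration", illus)])
```

`names` then serves as the label list for a multi-class ClassLabel feature when you rebuild the dataset.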
## Performance
| GPU | Batch Size | ~Images/sec |
| --- | ---------- | ----------- |
| L4 | 4-8 | 2-4 |
| A10 | 8-16 | 4-6 |
_Varies by image size and detection complexity_
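The table translates into rough wall-clock estimates. A quick back-of-the-envelope helper (the throughput number is the table's L4 midpoint; the function itself is illustrative):

```python
def estimate_runtime_minutes(num_images, images_per_sec):
    """Rough job duration, ignoring model load and dataset push time."""
    return num_images / images_per_sec / 60

# e.g. 10,000 images at ~3 images/sec on an L4
est = estimate_runtime_minutes(10_000, 3)  # ~56 minutes
```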
## Common Use Cases
- **Documents:** `--class-name table` or `--class-name figure`
- **Newspapers:** `--class-name photograph` or `--class-name illustration`
- **Wildlife:** `--class-name animal` or `--class-name bird`
- **Products:** `--class-name product` or `--class-name label`
## Troubleshooting
- **No CUDA:** Use HF Jobs (see examples above)
- **OOM errors:** Reduce `--batch-size`
- **Few detections:** Lower `--confidence-threshold` or try different class descriptions
- **Wrong column:** Use `--image-column your_column_name`
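If a run's threshold turns out to be too permissive, you can tighten it after the fact instead of rerunning the GPU job. A small sketch of post-hoc filtering (the `filter_objects` helper is illustrative; with `datasets` you could apply it via `ds.map`):

```python
def filter_objects(row, min_score=0.6):
    """Keep only detections at or above `min_score` (post-hoc filtering,
    so you can raise the threshold without rerunning the job)."""
    return {"objects": [o for o in row["objects"] if o["score"] >= min_score]}

row = {"objects": [
    {"bbox": [0, 0, 10, 10], "category": 0, "score": 0.9},
    {"bbox": [5, 5, 8, 8], "category": 0, "score": 0.4},
]}
filtered = filter_objects(row)
```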
## About SAM3
[SAM3](https://huggingface.co/facebook/sam3) is Meta's zero-shot vision model. Describe any object in natural language and it will detect it, with no training required.
**Note:** This script installs `transformers` from git, since SAM3 support is not yet in a stable release.
## See Also
More UV scripts at [huggingface.co/uv-scripts](https://huggingface.co/uv-scripts):
- **dataset-creation** - Create HF datasets from files
- **vllm** - Fast LLM inference
- **ocr** - Document OCR
## License
Apache 2.0