tags:
- object-detection
- sam3
- segment-anything
- bounding-boxes
- uv-script
- generated
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: text
dtype: string
- name: mean_ocr
dtype: float64
- name: std_ocr
dtype: float64
- name: title
dtype: string
- name: date
dtype: string
- name: language
list: string
- name: item_iiif_url
dtype: string
- name: multi_language
dtype: bool
- name: issue_uri
dtype: string
- name: id
dtype: string
- name: image
dtype: image
- name: download_status
dtype: string
- name: download_retries
dtype: int64
- name: download_url
dtype: string
- name: objects
struct:
- name: bbox
list:
list: float32
length: 4
- name: category
list:
class_label:
names:
'0': photograph
- name: score
list: float32
splits:
- name: train
num_bytes: 4464387946.8
num_examples: 4500
- name: validation
num_bytes: 496043105.2
num_examples: 500
download_size: 4921908986
dataset_size: 4960431052
Object Detection: Photograph Detection using sam3
This dataset contains object detection results (bounding boxes) for photographs detected in images from davanstrien/newspapers-with-images-after-photography-big, using Meta's SAM3 (Segment Anything Model 3).
Generated using: uv-scripts/sam3 detection script
Detection Statistics
- Objects Detected: photograph
- Total Detections: 15,000
- Images with Detections: 5,000 / 5,000 (100.0%)
- Average Detections per Image: 3.00
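These figures can be recomputed directly from the `objects` column. A minimal sketch (the sample data below is hypothetical, not taken from the dataset):

```python
def detection_stats(examples):
    """Compute total detections, images with detections, and the per-image average."""
    total = sum(len(ex["objects"]["bbox"]) for ex in examples)
    with_det = sum(1 for ex in examples if ex["objects"]["bbox"])
    n = len(examples)
    return {
        "total_detections": total,
        "images_with_detections": with_det,
        "avg_per_image": total / n if n else 0.0,
    }

# Hypothetical examples in the dict-of-lists format used by this dataset
sample = [
    {"objects": {"bbox": [[10, 20, 100, 80]], "category": [0], "score": [0.9]}},
    {"objects": {"bbox": [[5, 5, 50, 50], [60, 60, 40, 40]],
                 "category": [0, 0], "score": [0.8, 0.6]}},
]
stats = detection_stats(sample)
```

Running this over the full train and validation splits should reproduce the numbers above.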
Processing Details
- Source Dataset: davanstrien/newspapers-with-images-after-photography-big
- Model: facebook/sam3
- Script Repository: uv-scripts/sam3
- Number of Samples Processed: 5,000
- Processing Time: 30.7 minutes
- Processing Date: 2025-11-21 12:34 UTC
Configuration
- Image Column: `image`
- Dataset Split: `train`
- Class Name: `photograph`
- Confidence Threshold: 0.4
- Mask Threshold: 0.5
- Batch Size: 32
- Model Dtype: bfloat16
Model Information
SAM3 (Segment Anything Model 3) is Meta's state-of-the-art object detection and segmentation model that excels at:
- 🎯 Zero-shot detection - Detect objects using natural language prompts
- 📦 Bounding boxes - Accurate object localization
- 🎭 Instance segmentation - Pixel-perfect masks (not included in this dataset)
- 🖼️ Any image domain - Works on photos, documents, medical images, etc.
This dataset uses SAM3 in text-prompted detection mode to find instances of "photograph" in the source images.
Dataset Structure
The dataset contains all original columns from the source dataset plus an `objects` column with detection results in HuggingFace object detection format (dict-of-lists):
- `bbox`: List of bounding boxes in `[x, y, width, height]` format (pixel coordinates)
- `category`: List of category indices (always `0` for single-class detection)
- `score`: List of confidence scores (0.0 to 1.0)
Schema
```python
{
    "objects": {
        "bbox": [[x, y, w, h], ...],   # List of bounding boxes
        "category": [0, 0, ...],       # All same class
        "score": [0.95, 0.87, ...]     # Confidence scores
    }
}
```
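Many downstream tools (e.g. torchvision ops or matplotlib rectangle patches) expect corner coordinates rather than `[x, y, width, height]`. A small conversion helper:

```python
def xywh_to_xyxy(bbox):
    """Convert [x, y, width, height] to [x_min, y_min, x_max, y_max]."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# A 100x80 box anchored at (10, 20)
corners = xywh_to_xyxy([10.0, 20.0, 100.0, 80.0])  # -> [10.0, 20.0, 110.0, 100.0]
```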
Usage
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("{{output_dataset_id}}", split="train")

# Access detections for an image
example = dataset[0]
detections = example["objects"]

# Iterate through all detected objects in this image
for bbox, category, score in zip(
    detections["bbox"],
    detections["category"],
    detections["score"],
):
    x, y, w, h = bbox
    print(f"Detected photograph at ({x}, {y}) with confidence {score:.2f}")

# Filter high-confidence detections
high_conf_examples = [
    ex for ex in dataset
    if any(score > 0.8 for score in ex["objects"]["score"])
]

# Count total detections across dataset
total = sum(len(ex["objects"]["bbox"]) for ex in dataset)
print(f"Total detections: {total}")
```
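For a quick local preview without the visualization script, boxes can also be drawn directly with Pillow. A minimal sketch, assuming the `image` column decodes to a PIL image (as it does with the datasets `image` feature):

```python
from PIL import Image, ImageDraw

def draw_detections(image, objects, min_score=0.4):
    """Draw [x, y, w, h] pixel boxes and scores onto a copy of the image."""
    out = image.copy()
    draw = ImageDraw.Draw(out)
    for (x, y, w, h), score in zip(objects["bbox"], objects["score"]):
        if score < min_score:
            continue
        draw.rectangle([x, y, x + w, y + h], outline="red", width=3)
        draw.text((x, max(0, y - 12)), f"{score:.2f}", fill="red")
    return out

# e.g. draw_detections(example["image"], example["objects"]).save("preview.png")
```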
Visualization
To visualize the detections, you can use the visualization script from the same repository:
```bash
# Visualize first sample with detections
uv run https://huggingface.co/datasets/uv-scripts/sam3/raw/main/visualize-detections.py \
    {{output_dataset_id}} \
    --first-with-detections

# Visualize random samples
uv run https://huggingface.co/datasets/uv-scripts/sam3/raw/main/visualize-detections.py \
    {{output_dataset_id}} \
    --num-samples 5

# Save visualizations to files
uv run https://huggingface.co/datasets/uv-scripts/sam3/raw/main/visualize-detections.py \
    {{output_dataset_id}} \
    --num-samples 3 \
    --output-dir ./visualizations
```
Reproduction
This dataset was generated using the uv-scripts/sam3 object detection script:
```bash
uv run https://huggingface.co/datasets/uv-scripts/sam3/raw/main/detect-objects.py \
    davanstrien/newspapers-with-images-after-photography-big \
    <output-dataset> \
    --class-name photograph \
    --confidence-threshold 0.4 \
    --mask-threshold 0.5 \
    --batch-size 32 \
    --dtype bfloat16
```
Running on HuggingFace Jobs (GPU)
This script requires a GPU. To run on HuggingFace infrastructure:
```bash
hf jobs uv run --flavor a100-large \
    -s HF_TOKEN=HF_TOKEN \
    https://huggingface.co/datasets/uv-scripts/sam3/raw/main/detect-objects.py \
    davanstrien/newspapers-with-images-after-photography-big \
    <output-dataset> \
    --class-name photograph \
    --confidence-threshold 0.4
```
Performance
- Processing Speed: ~2.7 images/second
- GPU Configuration: CUDA with bfloat16 precision
Generated with 🤖 UV Scripts