How to use Gertlek/DetectiveSAM with the `sam2` library:

```python
# Use SAM2 with images
import torch
from sam2.sam2_image_predictor import SAM2ImagePredictor

predictor = SAM2ImagePredictor.from_pretrained("Gertlek/DetectiveSAM")

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    predictor.set_image(<your_image>)
    masks, _, _ = predictor.predict(<input_prompts>)
```

```python
# Use SAM2 with videos
import torch
from sam2.sam2_video_predictor import SAM2VideoPredictor

predictor = SAM2VideoPredictor.from_pretrained("Gertlek/DetectiveSAM")

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    state = predictor.init_state(<your_video>)

    # Add new prompts and instantly get the output on the same frame
    frame_idx, object_ids, masks = predictor.add_new_points(state, <your_prompts>)

    # Propagate the prompts to get masklets throughout the video
    for frame_idx, object_ids, masks in predictor.propagate_in_video(state):
        ...
```
---
license: other
library_name: pytorch
pipeline_tag: image-segmentation
tags:
- image-forensics
- image-manipulation-detection
- image-segmentation
- sam2
- pytorch
---
# DetectiveSAM
DetectiveSAM is an inference-only image forgery localization bundle built around SAM2. This release includes bundled checkpoints and a small set of ready-to-run examples for demos.
## What is bundled

- Inference checkpoints under `checkpoints/`
- SAM2 config and weights under `sam2configs/`
- Poster demo pairs under `demo/cocoglide/`, `demo/flux_test/`, and `demo/qwen_test/`
- A drop-in single-image slot at `demo/user_image/demo_input.png`
Built-in checkpoint aliases: `detective_sam` and `detective_sam_sota`.
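How these aliases resolve is internal to `detectivesam_inference`; the sketch below only illustrates the idea of an alias-to-path lookup, and the filenames under `checkpoints/` are hypothetical assumptions, not the real bundle contents.

```python
from pathlib import Path

# Hypothetical alias table -- the actual checkpoint filenames may differ.
CHECKPOINT_ALIASES = {
    "detective_sam": Path("checkpoints/detective_sam.pt"),
    "detective_sam_sota": Path("checkpoints/detective_sam_sota.pt"),
}

def resolve_checkpoint(name: str) -> Path:
    """Return the path for a built-in alias, or treat `name` as a direct path."""
    return CHECKPOINT_ALIASES.get(name, Path(name))
```

This mirrors the Notes section: you pass either an alias or an explicit checkpoint path, and anything not in the table is taken verbatim.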
## Setup

```bash
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```
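Before running `pip install`, a quick stdlib-only check (no project imports assumed) confirms the virtual environment is actually active:

```python
import sys

# Inside a venv, sys.prefix points at .venv while sys.base_prefix
# still points at the system Python installation.
in_venv = sys.prefix != sys.base_prefix
print("virtualenv active:", in_venv)
```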
## Hugging Face Usage

```bash
git lfs install
git clone https://huggingface.co/Gertlek/DetectiveSAM
cd DetectiveSAM
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
python -m detectivesam_inference.predict \
  --checkpoint detective_sam \
  --output-dir outputs/poster_baseline
```
## Poster Demo Flows

### 1. Live single-image demo

Place your image at `demo/user_image/demo_input.png`, then run:

```bash
python -m detectivesam_inference.predict \
  --checkpoint detective_sam \
  --output-dir outputs/poster_user_image
```
In this mode the CLI reuses the target image as its own source reference so the demo stays runnable with a single image.
### 2. Bundled baseline example

If `demo/user_image/demo_input.png` is absent, the default predict command falls back to the bundled CocoGlide sample `banana_28809`.

```bash
python -m detectivesam_inference.predict \
  --checkpoint detective_sam \
  --output-dir outputs/poster_baseline
```
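The fallback described above amounts to a simple existence check. In this sketch the location of the bundled `banana_28809` sample is an assumption about the `demo/cocoglide/` layout, not a documented path:

```python
from pathlib import Path

USER_IMAGE = Path("demo/user_image/demo_input.png")
# Assumed location of the bundled banana_28809 sample -- layout not documented.
FALLBACK = Path("demo/cocoglide/target/banana_28809.png")

def pick_demo_target(user_image: Path = USER_IMAGE, fallback: Path = FALLBACK) -> Path:
    """Prefer the drop-in user image; otherwise fall back to the bundled sample."""
    return user_image if user_image.exists() else fallback
```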
### 3. Bundled SOTA examples

Flux example:

```bash
python -m detectivesam_inference.predict \
  --checkpoint detective_sam_sota \
  --source demo/flux_test/source/548.png \
  --target demo/flux_test/target/548.png \
  --mask demo/flux_test/mask/548.png \
  --output-dir outputs/poster_flux
```

Qwen example:

```bash
python -m detectivesam_inference.predict \
  --checkpoint detective_sam_sota \
  --source demo/qwen_test/source/166.png \
  --target demo/qwen_test/target/166.png \
  --mask demo/qwen_test/mask/166.png \
  --output-dir outputs/poster_qwen
```
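The two SOTA invocations differ only in the demo folder and sample id, so the argument list can be assembled programmatically. This sketch only builds the command (it does not run the CLI); the `poster_*` output naming simply mirrors the examples above:

```python
def sota_predict_cmd(demo: str, sample: str) -> list[str]:
    """Assemble a predict command for a bundled SOTA demo pair (not executed here)."""
    root = f"demo/{demo}_test"
    return [
        "python", "-m", "detectivesam_inference.predict",
        "--checkpoint", "detective_sam_sota",
        "--source", f"{root}/source/{sample}.png",
        "--target", f"{root}/target/{sample}.png",
        "--mask", f"{root}/mask/{sample}.png",
        "--output-dir", f"outputs/poster_{demo}",
    ]
```

The resulting list can be passed to `subprocess.run` if you want to script both demos in one go.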
### 4. Bundled CocoGlide subset sweep

Use this to evaluate the bundled `banana` and `train` CocoGlide demo pairs.

```bash
python -m detectivesam_inference.evaluate \
  --checkpoint detective_sam \
  --dataset-root demo/cocoglide \
  --output-dir outputs/poster_eval_cocoglide \
  --num-visualizations 2
```
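Assuming `--dataset-root` follows the same `source/`, `target/`, `mask/` subfolder layout as the flux and qwen demos (an assumption, since the CocoGlide layout is not spelled out here), pairing the files by name could look like:

```python
from pathlib import Path

def demo_triplets(root: Path):
    """Yield (source, target, mask) paths that share a filename.

    The source/target/mask subfolder layout is assumed from the bundled demos.
    """
    for target in sorted((root / "target").glob("*.png")):
        yield root / "source" / target.name, target, root / "mask" / target.name
```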
## Outputs

Each predict run writes a compact set of visual artifacts plus a JSON summary:

- `<name>_comparison.png`
- `<name>_probability.png`
- `<name>_pred_mask.png`
- `<name>_pred_overlay.png`
- `<name>_summary.json`

If a ground-truth mask is provided, the run also saves:

- `<name>_gt_mask.png`
- `<name>_gt_overlay.png`

The evaluate command writes `summary.json` plus a few visualization examples under `visualizations/`.
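A small helper for collecting those per-run JSON summaries afterwards; the fields inside each `<name>_summary.json` are not documented here, so the parsed payloads are returned as-is:

```python
import json
from pathlib import Path

def load_summaries(output_dir: str) -> dict:
    """Map each *_summary.json filename in an output dir to its parsed contents."""
    return {
        p.name: json.loads(p.read_text())
        for p in sorted(Path(output_dir).glob("*_summary.json"))
    }
```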
## Notes

- The runtime selects `cuda` automatically when available and otherwise runs on CPU.
- Checkpoint settings come from the YAML sidecars in `checkpoints/`; you only need the alias or checkpoint path.
- This repo does not include training code or training-only dependencies.
- License metadata is currently marked `other`: the bundled SAM2 components are Apache-2.0, while DetectiveSAM release terms should be finalized before broader redistribution.
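The automatic device selection mentioned above is the standard PyTorch pattern, so the same behavior can be reproduced in your own scripts:

```python
import torch

# Prefer CUDA when a GPU is visible to PyTorch; otherwise stay on CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
print("running on:", device)
```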