# Poison-3DGS: A Stage-wise Benchmark for Detectability of 3DGS Poisoning Attacks

Paper: Characterizing Detectability in 3DGS Poisoning: A Stage-wise Benchmark (NeurIPS 2026 Evaluations & Datasets Track)
Poison-3DGS is a benchmark for evaluating detection methods against adversarial poisoning of 3D Gaussian Splatting (3DGS) assets. It covers 4 attack families spanning the major threat categories — integrity, availability, and steganography — across 37 scenes and 4 pipeline stages, providing a unified evaluation protocol for stage-wise detectability analysis.
This is a review subset (~127 GB). The full benchmark (~6 TB including all checkpoints) will be released upon acceptance.
## Benchmark Overview
| Property | Value |
|---|---|
| Clean scenes | 37 |
| Datasets | Free, Mip-NeRF 360, Tanks-and-Temples |
| Attack families | 4 |
| Total adversarial variants (full benchmark) | 414 |
| Variants in this subset | 94 (anchor configs per attack) |
| Detection stages covered | 4 |
| Total size (this subset) | ~127 GB |
## Dataset Structure

```
Poison-3DGS/
├── clean/                                  # Clean reference scenes
│   └── {free,mip-nerf-360,tanks-and-temples}/<scene>/
│       ├── images/                         # Raw RGB images (10 scenes only*)
│       ├── sparse/0/                       # COLMAP reconstruction (10 scenes only*)
│       └── ply/
│           ├── iteration_7000/point_cloud.ply   # early checkpoint (24 scenes)
│           ├── iteration_20000/point_cloud.ply  # mid checkpoint (24 scenes)
│           └── iteration_30000/point_cloud.ply  # final (all 37 scenes)
│
├── stealthattack/                          # Integrity attack (20 scenes)
│   └── poisoned/{dataset}/<scene>/{difficulty}/{object}_{size}/
│       ├── config.yaml
│       ├── mask.png / trigger.png / target.png
│       ├── sparse/0/points3D.ply           # poisoned SfM (injected cluster)
│       └── ply/
│           ├── iteration_7000/point_cloud.ply
│           ├── iteration_20000/point_cloud.ply
│           └── iteration_30000/point_cloud.ply
│
├── poisonsplat/                            # Availability attack (20 scenes)
│   └── poisoned/{dataset}/<scene>/constrained/eps16_100/
│       ├── config.yaml
│       ├── images/                         # adversarially perturbed training images
│       └── ply/
│           ├── iteration_3000/point_cloud.ply
│           ├── iteration_15000/point_cloud.ply
│           ├── iteration_21000/point_cloud.ply
│           └── iteration_30000/point_cloud.ply
│
├── gsw/                                    # Steganographic watermark: 3D-GSW (37 scenes)
│   └── poisoned/{dataset}/<scene>/msg64/lambda_mid/
│       ├── config.yaml
│       └── ply/iteration_30000/point_cloud.ply
│
└── guardsplat/                             # Steganographic watermark: GuardSplat (37 scenes)
    └── poisoned/{dataset}/<scene>/msg48/lambda_mid/
        ├── config.yaml
        └── ply/iteration_30000/point_cloud.ply
```
*Clean images + COLMAP are included for 10 representative scenes (3 Free + 3 Mip-NeRF 360 + 4 T&T). For the remaining 27 scenes, download from the original datasets (links below).
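Given this layout, a poisoned-variant path decomposes mechanically into attack, dataset, scene, and attack configuration. A minimal sketch (the `parse_variant` helper and its return format are illustrative, not shipped with the benchmark):

```python
from pathlib import Path

def parse_variant(rel_path: str) -> dict:
    """Split a poisoned-variant directory path into its components.

    Expected layout (see the tree above):
        <attack>/poisoned/<dataset>/<scene>/<config...>
    """
    attack, marker, dataset, scene, *config = Path(rel_path).parts
    if marker != "poisoned":
        raise ValueError(f"unexpected layout: {rel_path}")
    return {
        "attack": attack,
        "dataset": dataset,
        "scene": scene,
        "config": "/".join(config),  # e.g. "median/trash_can_large"
    }

print(parse_variant("stealthattack/poisoned/mip-nerf-360/bicycle/median/trash_can_large"))
```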
## Quick Start

```python
from huggingface_hub import snapshot_download
from plyfile import PlyData

# Download the full subset (~127 GB), or narrow to a specific attack
path = snapshot_download(
    repo_id="bhqanhuit/Poison-3DGS",
    repo_type="dataset",
    # Narrow the download to one attack:
    # allow_patterns=["poisonsplat/**", "clean/**"],
)

# Load a PoisonSplat final model and compare it to the clean reference
ps_ply = PlyData.read(f"{path}/poisonsplat/poisoned/tanks-and-temples/horse/constrained/eps16_100/ply/iteration_30000/point_cloud.ply")
clean_ply = PlyData.read(f"{path}/clean/tanks-and-temples/horse/ply/iteration_30000/point_cloud.ply")
print(f"Poisoned Gaussians : {len(ps_ply['vertex']):,}")
print(f"Clean Gaussians    : {len(clean_ply['vertex']):,}")
print(f"Inflation ratio    : {len(ps_ply['vertex']) / len(clean_ply['vertex']):.1f}x")

# Load a StealthAttack poisoned sparse cloud
sa_sparse = PlyData.read(f"{path}/stealthattack/poisoned/mip-nerf-360/bicycle/median/trash_can_large/sparse/0/points3D.ply")
print(f"Poisoned SfM points: {len(sa_sparse['vertex']):,}")
```
## Evaluation Protocol

- Task: binary scene-level classification (poisoned = 1, clean = 0)
- Primary metric: AUROC · Secondary: AUPRC, FPR@95TPR
- Train/test split: `--eval --llffhold 8` (every 8th camera held out, consistent across all variants)
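AUROC and FPR@95TPR can be computed from detector scores with NumPy alone; a minimal sketch (the scores below are toy values, not benchmark results):

```python
import numpy as np

def auroc(y_true, scores):
    """Probability that a random poisoned scene outscores a random clean one
    (ties count half), which equals the area under the ROC curve."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    diffs = pos[:, None] - neg[None, :]
    return float((diffs > 0).mean() + 0.5 * (diffs == 0).mean())

def fpr_at_95_tpr(y_true, scores):
    """False-positive rate at the strictest threshold that still reaches
    a 95% true-positive rate."""
    pos, neg = y_true == 1, y_true == 0
    for t in np.sort(np.unique(scores))[::-1]:
        pred = scores >= t
        if pred[pos].mean() >= 0.95:
            return float(pred[neg].mean())
    return 1.0

# Toy detector scores: higher = more likely poisoned
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])
scores = np.array([0.1, 0.3, 0.2, 0.6, 0.8, 0.4, 0.9, 0.7])
print(f"AUROC  : {auroc(y_true, scores):.4f}")
print(f"FPR@95 : {fpr_at_95_tpr(y_true, scores):.2f}")
```

For AUPRC, `sklearn.metrics.average_precision_score` is the standard implementation.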
## Responsible AI
Intended use: Research on detection and defense methods for adversarial attacks on 3D Gaussian Splatting. Supports development of trustworthy 3DGS systems for security-sensitive applications.
Out-of-scope use: This benchmark should not be used to improve offensive attack capabilities in isolation from corresponding defensive research.
Limitations:
- This release is a subset (anchor configs only); full hyperparameter coverage ships with the complete benchmark, to be released upon paper acceptance.
- Clean images and COLMAP are included for 10 of 37 scenes. The remaining 27 require downloading from original sources.
- StealthAttack and PoisonSplat checkpoint PLYs cover 3–4 snapshots, not the full 16-checkpoint training trajectory.
- PoisonSplat scene coverage skews toward Tanks-and-Temples (smaller scenes selected to fit the 300 GB HuggingFace limit).
Data provenance: Poisoned variants are derived from publicly available 3D reconstruction datasets (Free, Mip-NeRF 360, Tanks-and-Temples) under their respective non-commercial research licenses.
Personal/sensitive information: No personal data. All scenes contain buildings, objects, and outdoor environments with no identifiable individuals.
Construct validity: Each attack variant is validated against its success criteria before inclusion. PoisonSplat variants confirm ≥2× Gaussian inflation; 3D-GSW variants confirm ≥0.95 bit accuracy with ≤1.0 dB PSNR drop; GuardSplat variants confirm ≥0.75 bit accuracy with ≤7.0 dB PSNR drop.
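The bit-accuracy and PSNR-drop criteria above can be checked with standard definitions; a minimal sketch (function names are illustrative, images assumed in [0, 1]):

```python
import numpy as np

def bit_accuracy(decoded, embedded):
    """Fraction of watermark bits recovered correctly."""
    return float(np.mean(np.asarray(decoded) == np.asarray(embedded)))

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images in [0, peak]."""
    ref = np.asarray(ref, dtype=np.float64)
    img = np.asarray(img, dtype=np.float64)
    mse = np.mean((ref - img) ** 2)
    return float(10 * np.log10(peak ** 2 / mse))

def psnr_drop_ok(clean_render, wm_render, ref, max_drop_db):
    """Accept a watermarked variant if its PSNR against the reference image
    drops by at most max_drop_db relative to the clean render."""
    return psnr(ref, clean_render) - psnr(ref, wm_render) <= max_drop_db
```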
Social impact: Advances detection of adversarial manipulation in 3D scene representations, supporting reliable deployment of 3DGS in autonomous systems, AR/VR, and 3D content pipelines.
## License
This dataset is released under CC BY-NC 4.0. The underlying scene images retain the licenses of their original datasets.