---
language: en
datasets:
- ShreyashDhoot/reward_model
- BaiqiL/NaturalBench_Images
- x1101/nsfw-full
- Subh775/WeaponDetection
---

# Adversarial Image Auditor

This model is a deep learning-based image auditor for AI safety. Given an image and an aligned text prompt, it evaluates five distinct axes:

1. **Adversarial Safety (Binary):** Predicting whether an image is Safe or Unsafe.
2. **Category Classification:** Classifying images into `Safe`, `NSFW`, `Gore`, or `Weapons` categories.
3. **Artifact / Seam Quality:** Assessing the quality of image manipulation to detect adversarial seams or diffusion artifacts.
4. **Relative Adversarial Score:** Predicting a continuous measure of adversarial strength in an image.
5. **Prompt Faithfulness (Contrastive InfoNCE):** Computing a temperature-scaled contrastive probability of image–text faithfulness.

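The faithfulness axis above can be sketched as a standard temperature-scaled InfoNCE score over a batch of image and text embeddings. This is a minimal illustration, not the model's actual implementation; the embedding dimension, batch size, and temperature value are assumptions.

```python
import torch
import torch.nn.functional as F

def infonce_faithfulness(img_emb: torch.Tensor, txt_emb: torch.Tensor,
                         temperature: float = 0.07) -> torch.Tensor:
    """Temperature-scaled contrastive faithfulness (illustrative sketch).

    img_emb, txt_emb: (batch, dim) embeddings from the vision and text branches.
    Returns a (batch,) probability that each image matches its own prompt,
    contrasted against the other prompts in the batch.
    """
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    # (batch, batch) cosine-similarity logits, sharpened by the temperature.
    logits = img_emb @ txt_emb.t() / temperature
    # Diagonal entries correspond to the matched (image, prompt) pairs.
    probs = logits.softmax(dim=-1)
    return probs.diag()

# Example with random embeddings (dimension 256 is an arbitrary choice):
img = torch.randn(4, 256)
txt = torch.randn(4, 256)
scores = infonce_faithfulness(img, txt)  # four probabilities in (0, 1)
```

A higher diagonal probability indicates the image is more faithful to its own prompt than to the other prompts in the batch.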
## Architecture

The auditor pairs a convolutional vision backbone with text conditioning to produce robust contrastive alignment for multimodal safety.

- **Vision Backbone:** Pretrained DenseNet121, modified to expose its feature grid, from which dense 2×2 local spatial maps are constructed.
- **Text Conditioning:** A simple text tokenizer feeding a pre-LayerNorm cross-attention block with `key_padding_mask` integrated, so padded tokens are ignored.
- **FiLM Modulation:** Conditions the adversarial layers directly on diffusion timestep tokens and projected text features.
- **Output:** Decoupled safety heads producing bounding-box GradCAM predictions, a continuous InfoNCE faithfulness score, and the safety classifications.

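The FiLM conditioning described above can be sketched as a small module that maps a conditioning vector (e.g. concatenated timestep and text projections) to per-channel scale and shift parameters. The layer sizes here are illustrative assumptions, not the model's actual dimensions.

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Feature-wise Linear Modulation (illustrative sketch).

    A conditioning vector is projected to per-channel scale (gamma) and
    shift (beta) values that modulate a convolutional feature map.
    """
    def __init__(self, cond_dim: int, channels: int):
        super().__init__()
        self.to_gamma_beta = nn.Linear(cond_dim, 2 * channels)

    def forward(self, feats: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # feats: (batch, channels, H, W); cond: (batch, cond_dim)
        gamma, beta = self.to_gamma_beta(cond).chunk(2, dim=-1)
        gamma = gamma[:, :, None, None]  # broadcast over spatial dims
        beta = beta[:, :, None, None]
        # (1 + gamma) keeps the identity mapping when gamma, beta start near 0.
        return (1 + gamma) * feats + beta

# Example with arbitrary sizes:
film = FiLM(cond_dim=128, channels=64)
feats = torch.randn(2, 64, 7, 7)
cond = torch.randn(2, 128)
out = film(feats, cond)  # same shape as feats
```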
## Usage

You can load this model along with its inference script `auditor_inference.py`:

```python
from auditor_inference import audit_image

results = audit_image(
    model_path="auditor_new_best.pth",
    image_path="example.jpg",
    prompt="A cute cat",
)

print(results)
```