---
language: en
datasets:
- ShreyashDhoot/reward_model
- BaiqiL/NaturalBench_Images
- x1101/nsfw-full
- Subh775/WeaponDetection
---

# Adversarial Image Auditor

This model is a deep learning based image auditor for AI safety. Given an image and an aligned text prompt, it produces predictions along five distinct axes (a sketch of the corresponding output heads follows this list):
1. **Adversarial Safety (Binary):** Predicts whether an image is Safe or Unsafe.
2. **Category Classification:** Classifies each image into one of `Safe`, `NSFW`, `Gore`, or `Weapons`.
3. **Artifact / Seam Quality:** Assesses manipulation quality to detect adversarial seams or diffusion artifacts.
4. **Relative Adversarial Score:** Predicts a continuous measure of adversarial strength in an image.
5. **Prompt Faithfulness (Contrastive InfoNCE):** Computes a temperature-scaled contrastive probability that the image is faithful to its text prompt.
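
The checkpoint's exact head layout is not documented here; the following is a minimal PyTorch sketch of how the five axes could be realized as decoupled heads over a shared fused embedding. All module names, dimensions, and the temperature value are illustrative assumptions, not the published implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AuditorHeads(nn.Module):
    """Illustrative sketch of five decoupled output heads (names/dims assumed)."""
    def __init__(self, dim=1024, temperature=0.07):
        super().__init__()
        self.safety = nn.Linear(dim, 1)      # binary Safe/Unsafe logit
        self.category = nn.Linear(dim, 4)    # Safe / NSFW / Gore / Weapons
        self.artifact = nn.Linear(dim, 1)    # seam / artifact quality score
        self.adv_score = nn.Linear(dim, 1)   # continuous adversarial strength
        self.temperature = temperature       # InfoNCE temperature (assumed value)

    def forward(self, img_emb, txt_emb):
        # Prompt faithfulness: cosine similarity scaled by temperature,
        # softmaxed over the batch as in InfoNCE; the diagonal holds the
        # probability of each image matching its own prompt.
        img_n = F.normalize(img_emb, dim=-1)
        txt_n = F.normalize(txt_emb, dim=-1)
        logits = img_n @ txt_n.t() / self.temperature
        faithfulness = logits.softmax(dim=-1).diagonal()
        return {
            "safe_prob": torch.sigmoid(self.safety(img_emb)).squeeze(-1),
            "category_probs": self.category(img_emb).softmax(dim=-1),
            "artifact_score": self.artifact(img_emb).squeeze(-1),
            "adversarial_score": self.adv_score(img_emb).squeeze(-1),
            "prompt_faithfulness": faithfulness,
        }
```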

## Architecture

The auditor combines a convolutional vision backbone with text conditioning to produce contrastively aligned multimodal features for safety auditing.
- **Vision Backbone:** Pretrained DenseNet121, modified to expose its feature grid as a dense 2x2 local spatial map instead of a single pooled vector.
- **Text Conditioning:** A simple text tokenizer feeding Pre-LayerNorm cross-attention, with `key_padding_mask` applied so that padded tokens are ignored.
- **FiLM Modulation:** FiLM layers condition the adversarial branch directly on diffusion timestep tokens and projected text features (see the sketch after this list).
- **Output:** Decoupled heads produce safety classifications, a continuous InfoNCE faithfulness probability, and Grad-CAM-based bounding-box predictions.
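
FiLM (feature-wise linear modulation) predicts a per-channel scale and shift from a conditioning vector and applies them to the visual feature map. Below is a minimal sketch, assuming the conditioning vector concatenates the timestep token with the projected text features; all names and dimensions are illustrative:

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Feature-wise linear modulation: y = (1 + gamma) * x + beta."""
    def __init__(self, cond_dim, num_channels):
        super().__init__()
        # A single linear layer predicts both the scale (gamma) and shift (beta).
        self.proj = nn.Linear(cond_dim, 2 * num_channels)

    def forward(self, x, cond):
        # x: (B, C, H, W) visual features; cond: (B, cond_dim) conditioning vector
        gamma, beta = self.proj(cond).chunk(2, dim=-1)
        gamma = gamma[:, :, None, None]  # broadcast over spatial dims
        beta = beta[:, :, None, None]
        return (1 + gamma) * x + beta

# Example: condition backbone features on timestep + text projection (assumed dims)
film = FiLM(cond_dim=512, num_channels=1024)
feats = torch.randn(2, 1024, 2, 2)   # 2x2 spatial map from the vision backbone
cond = torch.randn(2, 512)           # [timestep token ; text projection]
modulated = film(feats, cond)
```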

## Usage

You can load and run the model through its bundled inference script, `auditor_inference.py`:

```python
from auditor_inference import audit_image

# Runs all five audit heads on a single image/prompt pair.
results = audit_image(
    model_path="auditor_new_best.pth",  # checkpoint weights
    image_path="example.jpg",           # image to audit
    prompt="A cute cat"                 # text prompt for faithfulness scoring
)

print(results)
```
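
The structure of `results` is defined by `auditor_inference.py`. Assuming one entry per audit axis, the returned dictionary might look roughly like this; the keys and values here are purely illustrative, not guaranteed:

```python
# Illustrative only -- actual keys come from auditor_inference.py
{
    "safe_prob": 0.97,            # binary adversarial-safety probability
    "category": "Safe",           # Safe / NSFW / Gore / Weapons
    "artifact_score": 0.12,       # seam / diffusion-artifact quality
    "adversarial_score": 0.05,    # continuous adversarial strength
    "prompt_faithfulness": 0.91,  # temperature-scaled InfoNCE probability
}
```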