---
license: apache-2.0
base_model: Falconsai/nsfw_image_detection
tags:
  - image-classification
  - content-moderation
  - violence-detection
  - nsfw-detection
  - multi-task-learning
---

# Multi-Head Content Moderator

A multi-task image moderation model with **two classification heads**:
- **NSFW Detection**: flags explicit/adult content (head preserved from the Falconsai base model)
- **Violence Detection**: flags violent content (newly trained head)

## Architecture
- Base: ViT (Vision Transformer) from Falconsai/nsfw_image_detection
- Head 1: NSFW classifier (frozen, pretrained)
- Head 2: Violence classifier (trained on violence dataset)
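
The full class definition lives in the training notebook; as a minimal illustrative sketch (not the published implementation), the wiring could look like the following, assuming a `transformers` `ViTModel` backbone whose [CLS] embedding feeds two linear heads:

```python
import torch.nn as nn
from transformers import ViTModel

class MultiHeadContentModerator(nn.Module):
    """Shared ViT backbone with an NSFW head and a violence head (sketch)."""

    def __init__(self, base_model='Falconsai/nsfw_image_detection', num_labels=2):
        super().__init__()
        # Shared backbone, initialized from the Falconsai checkpoint
        self.backbone = ViTModel.from_pretrained(base_model)
        hidden = self.backbone.config.hidden_size
        # Head 1: NSFW classifier (kept frozen during violence training)
        self.nsfw_head = nn.Linear(hidden, num_labels)
        # Head 2: violence classifier (trained on the violence dataset)
        self.violence_head = nn.Linear(hidden, num_labels)

    def forward(self, pixel_values, task='both'):
        # Use the [CLS] token embedding as the shared image representation
        features = self.backbone(pixel_values=pixel_values).last_hidden_state[:, 0]
        outputs = {}
        if task in ('nsfw', 'both'):
            outputs['nsfw'] = self.nsfw_head(features)
        if task in ('violence', 'both'):
            outputs['violence'] = self.violence_head(features)
        return outputs
```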

## Categories

### NSFW Head
- nsfw
- safe

### Violence Head
- safe
- violence

## Performance (Violence Detection)
- Accuracy: 0.9075
- F1 Score: 0.9076

## Usage
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor

# Load the checkpoint and image processor
checkpoint = torch.load('multihead_model.pt', map_location='cpu')
processor = AutoImageProcessor.from_pretrained('path/to/model')

# Instantiate the model class first (see the notebook for the full class
# definition, or the sketch in the Architecture section above)
# model = MultiHeadContentModerator(...)
# model.load_state_dict(checkpoint['model_state_dict'])
# model.eval()

# Inference
image = Image.open('example.jpg')
inputs = processor(images=image, return_tensors='pt')
with torch.no_grad():
    # Run both heads in a single forward pass
    outputs = model(inputs['pixel_values'], task='both')
    nsfw_pred = outputs['nsfw'].argmax(-1)
    violence_pred = outputs['violence'].argmax(-1)
```
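
Each head returns a class index. Assuming the index order follows the label lists in the Categories section above (an assumption; verify against the checkpoint's label mapping if one is stored), turning indices into readable labels looks like:

```python
# Assumed label order, matching the Categories section above
NSFW_LABELS = ['nsfw', 'safe']
VIOLENCE_LABELS = ['safe', 'violence']

print('NSFW head:', NSFW_LABELS[nsfw_pred.item()])
print('Violence head:', VIOLENCE_LABELS[violence_pred.item()])
```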