Ateeqq committed on
Commit 0394de0 · verified · 1 Parent(s): ed648fa

Update README.md

Files changed (1):
  1. README.md +120 -3
README.md CHANGED
@@ -1,3 +1,120 @@
- ---
- license: cc-by-nd-4.0
- ---
---
license: cc-by-nd-4.0
language:
- en
base_model:
- google/siglip2-base-patch16-224
pipeline_tag: image-classification
library_name: transformers
tags:
- nsfw
- exnrt.com
---

# NSFW Image Detection

This model is fine-tuned for **NSFW image classification**. It classifies images into three safety-related categories, making it suitable for content moderation, filtering, and other safety-aware applications.

<p>
  <a href="https://exnrt.com/blog/ai/fine-tuning-siglip2/" target="_blank">
    <img src="https://img.shields.io/badge/View%20Training%20Code-blue?style=for-the-badge&logo=readthedocs"/>
  </a>
</p>

## 🧠 Model Details

* **Base model**: `google/siglip2-base-patch16-224`
* **Task**: Image Classification (Safety Filtering)
* **Framework**: PyTorch
* **Fine-tuned on**: Custom dataset with 3 safety-related categories
* **Selected checkpoint**: Epoch 3
* **Batch size**: 64
* **Epochs**: 7

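
Because the front matter declares `pipeline_tag: image-classification` and `library_name: transformers`, the checkpoint should also be usable through the high-level `pipeline` helper. A minimal sketch, assuming the Hub repository id `Ateeqq/nsfw-image-detection` and a local image path of your own:

```python
from transformers import pipeline

# Quick-start sketch: the repo id and image path are assumptions, not fixed values.
classifier = pipeline("image-classification", model="Ateeqq/nsfw-image-detection")
results = classifier("your_image_path.jpg")  # also accepts a PIL.Image or an image URL
print(results)  # e.g. [{'label': 'safe_normal', 'score': 0.99}, ...]
```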
### 🏷️ Categories

The model classifies images into the following categories:

| ID | Label                 |
| -- | --------------------- |
| 0  | `graphically_violent` |
| 1  | `nudity_pornography`  |
| 2  | `safe_normal`         |

### 🧾 Label Mapping

```python
label2id = {'graphically_violent': 0, 'nudity_pornography': 1, 'safe_normal': 2}
id2label = {0: 'graphically_violent', 1: 'nudity_pornography', 2: 'safe_normal'}
```
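These mappings are also what you would typically attach when putting a fresh 3-class head on the base checkpoint for fine-tuning. A minimal sketch, assuming the base model listed above; this is not the author's training script (see the linked training-code post for that):

```python
from transformers import AutoModelForImageClassification

label2id = {'graphically_violent': 0, 'nudity_pornography': 1, 'safe_normal': 2}
id2label = {v: k for k, v in label2id.items()}

# Illustrative: initialise a new 3-class classification head on the base checkpoint.
# The head weights are randomly initialised here and still require fine-tuning.
model = AutoModelForImageClassification.from_pretrained(
    "google/siglip2-base-patch16-224",
    num_labels=3,
    id2label=id2label,
    label2id=label2id,
)
```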
## 📈 Visual Results

### 📌 Epoch Training Results

![Epoch Results](https://huggingface.co/Ateeqq/nsfw-image-detection/resolve/main/nsfw-epochs-results.png)

### 📌 Final Metrics & Confusion Matrix

![Metrics](https://huggingface.co/Ateeqq/nsfw-image-detection/resolve/main/nsfw-training-results.png)

---

## 🚀 Usage

```python
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_path = "Ateeqq/nsfw-image-detection"
processor = AutoImageProcessor.from_pretrained(model_path)
model = SiglipForImageClassification.from_pretrained(model_path)

# Load and preprocess the image.
image = Image.open("your_image_path.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

# Forward pass without gradient tracking.
with torch.no_grad():
    logits = model(**inputs).logits

probabilities = F.softmax(logits, dim=1)

predicted_class_id = logits.argmax().item()
predicted_class_label = model.config.id2label[predicted_class_id]

confidence_scores = probabilities[0].tolist()

print(f"Predicted class ID: {predicted_class_id}")
print(f"Predicted class label: {predicted_class_label}\n")

for i, score in enumerate(confidence_scores):
    label = model.config.id2label[i]
    print(f"Confidence for '{label}': {score:.4f}")
```

### Output

```
Predicted class ID: 0
Predicted class label: graphically_violent

Confidence for 'graphically_violent': 0.9941
Confidence for 'nudity_pornography': 0.0040
Confidence for 'safe_normal': 0.0019
```
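Since the card targets content moderation and filtering, the class probabilities above can be collapsed into a simple allow/block decision. A minimal sketch, assuming the same repository id as in the usage example; the unsafe-label set, the 0.5 default threshold, and the helper name are illustrative choices rather than part of the model, and any real deployment should tune the threshold on its own validation data:

```python
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_path = "Ateeqq/nsfw-image-detection"
processor = AutoImageProcessor.from_pretrained(model_path)
model = SiglipForImageClassification.from_pretrained(model_path)

# Illustrative policy: treat these two classes as unsafe.
UNSAFE_LABELS = {"graphically_violent", "nudity_pornography"}

def is_allowed(image_path: str, threshold: float = 0.5) -> bool:
    """Return True if the combined unsafe probability stays below `threshold`."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        probs = F.softmax(model(**inputs).logits, dim=-1)[0]
    unsafe_score = sum(
        probs[idx].item()
        for idx, label in model.config.id2label.items()
        if label in UNSAFE_LABELS
    )
    return unsafe_score < threshold

print(is_allowed("your_image_path.jpg"))
```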
---

## 📊 Training Metrics (Epoch 3 Selected ✅)

| Epoch | Training Loss | Validation Loss | Accuracy   |
| ----- | ------------- | --------------- | ---------- |
| 1     | 0.1086        | 0.0817          | 97.05%     |
| 2     | 0.0415        | 0.1233          | 95.50%     |
| 3 ✅  | 0.0302        | 0.0516          | **98.45%** |
| 4     | 0.0271        | 0.0799          | 97.89%     |
| 5     | 0.0222        | 0.1015          | 98.03%     |
| 6     | 0.0026        | 0.0707          | 98.45%     |
| 7     | 0.0178        | 0.0665          | 98.59%     |
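
For readers who want to approximate this training setup, here is a minimal `TrainingArguments` sketch mirroring the hyperparameters above (batch size 64, 7 epochs, per-epoch evaluation so the best checkpoint can be kept). It is illustrative only and not the author's actual configuration; the output directory name is a placeholder, and the full training walkthrough is in the linked training-code post.

```python
from transformers import TrainingArguments

# Illustrative hyperparameters mirroring the card: batch size 64, 7 epochs,
# per-epoch evaluation so the best epoch (epoch 3 here) can be selected.
training_args = TrainingArguments(
    output_dir="siglip2-safety-classifier",  # placeholder output directory
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=7,
    eval_strategy="epoch",       # named `evaluation_strategy` in older transformers releases
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
)
```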