prithivMLmods committed e05fc33 · verified · 1 parent: b430867

Update README.md

Files changed (1): README.md (+94 −1)
---
license: cc-by-nc-4.0
---

# **Nsfw_Image_Detection_OSS**

> **Nsfw_Image_Detection_OSS** is an image-classification vision-language encoder model fine-tuned from **[facebook/metaclip-2-worldwide-s16](https://huggingface.co/facebook/metaclip-2-worldwide-s16)** for a **binary NSFW detection task**.
> It is designed to classify whether an image is **Safe For Work (SFW)** or **Not Safe For Work (NSFW)** using the **MetaClip2ForImageClassification** architecture.

> [!note]
> **MetaCLIP 2: A Worldwide Scaling Recipe**
> [https://huggingface.co/papers/2507.22062](https://huggingface.co/papers/2507.22062)
## Evaluation Report (Self-Reported)

```
Classification report:
...
weighted avg     0.8917    0.8918    0.8917     26483
```

![download](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/T1Hp_Yuhew7gUJJH-Prmy.png)
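The weighted averages in a classification report are support-weighted means of the per-class scores. As a quick illustration of how they are derived (using made-up per-class numbers, not this model's actual per-class results), the weighted columns can be recomputed like this:

```python
# Hypothetical per-class metrics, illustrative only (NOT this model's actual
# per-class results): (precision, recall, f1, support) for each class.
per_class = {
    "SFW":  (0.90, 0.88, 0.89, 14000),
    "NSFW": (0.86, 0.89, 0.87, 12000),
}

total_support = sum(row[3] for row in per_class.values())

def weighted_avg(col):
    """Support-weighted mean of one metric column (0=precision, 1=recall, 2=f1)."""
    return sum(row[col] * row[3] for row in per_class.values()) / total_support

print(round(weighted_avg(0), 4))  # weighted precision
print(round(weighted_avg(1), 4))  # weighted recall
```

Each class contributes in proportion to its support, which is why the weighted row can differ from the macro (unweighted) average when classes are imbalanced.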
# **Label Mapping**

The model categorizes images into two classes:

* **Class 0:** **SFW**
* **Class 1:** **NSFW**

```json
{
  "id2label": {
    "0": "SFW",
    "1": "NSFW"
  },
  "label2id": {
    "SFW": 0,
    "NSFW": 1
  }
}
```
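At inference time, this mapping is how the winning logit index becomes a label string. A minimal pure-Python sketch (the logits below are made-up stand-ins for real model output):

```python
import math

id2label = {0: "SFW", 1: "NSFW"}

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up logits standing in for a real model's output.
logits = [2.1, -0.7]
probs = softmax(logits)

# Highest-probability class index -> label via id2label.
pred_id = max(range(len(probs)), key=probs.__getitem__)
print(id2label[pred_id])         # predicted label
print(round(probs[pred_id], 3))  # its probability
```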
# **Run with Transformers**

```python
!pip install -q transformers torch pillow gradio
```

```python
import gradio as gr
import torch
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image

# Model name from the Hugging Face Hub
model_name = "prithivMLmods/Nsfw_Image_Detection_OSS"

# Load processor and model
processor = AutoImageProcessor.from_pretrained(model_name)
model = AutoModelForImageClassification.from_pretrained(model_name)
model.eval()

# Class labels
LABELS = {
    0: "SFW",
    1: "NSFW"
}

def nsfw_detection(image):
    """Predict whether an image is SFW or NSFW."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    predictions = {LABELS[i]: round(probs[i], 3) for i in range(len(probs))}
    return predictions

# Build the Gradio interface
iface = gr.Interface(
    fn=nsfw_detection,
    inputs=gr.Image(type="numpy", label="Upload Image"),
    outputs=gr.Label(label="NSFW Detection Probabilities"),
    title="NSFW Image Detection (MetaCLIP-2)",
    description="Upload an image to classify whether it is Safe For Work (SFW) or Not Safe For Work (NSFW)."
)

# Launch app
if __name__ == "__main__":
    iface.launch()
```
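The prediction function above returns a probability per label; a moderation pipeline usually reduces that to a single allow/block decision. A minimal sketch of such a decision step (the `is_allowed` helper and the `0.5` threshold are illustrative assumptions, not a calibrated value for this model):

```python
def is_allowed(predictions, nsfw_threshold=0.5):
    """Return True when the NSFW probability stays below the threshold.

    `predictions` is a dict like {"SFW": 0.91, "NSFW": 0.09}, i.e. the
    shape of the probability dict produced above. The 0.5 threshold is
    an arbitrary example; tune it on held-out data for your use case.
    """
    return predictions.get("NSFW", 0.0) < nsfw_threshold

# Made-up probability dicts for illustration:
print(is_allowed({"SFW": 0.91, "NSFW": 0.09}))  # allowed
print(is_allowed({"SFW": 0.12, "NSFW": 0.88}))  # blocked
```

Lowering the threshold trades more false positives (blocked SFW images) for fewer false negatives, which is often the right trade-off in safety-critical filtering.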
# **Intended Use**

The **Nsfw_Image_Detection_OSS** model is designed to classify images into **SFW or NSFW categories**.

Potential use cases include:

* **Content Moderation:** Automated filtering of unsafe or adult content.
* **Social Media Platforms:** Preventing the upload of explicit media.
* **Enterprise Safety:** Ensuring workplace-appropriate content in shared environments.
* **Dataset Filtering:** Cleaning large-scale image datasets before training.
* **Parental Control Systems:** Blocking inappropriate visual material.