---
license: apache-2.0
language:
- en
base_model:
- google/siglip2-base-patch16-224
pipeline_tag: image-classification
library_name: transformers
datasets:
- KarteeMonkey/Demo
tags:
- Game
- Moderation
- art
- code
---
![4.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/yChKP0JSFHLs0wiHD9hFf.png)
# appy-mod-beta1
> **`appy-mod-beta1`** is a **vision-language encoder model** fine-tuned from `siglip2-base-patch16-224` for **binary image classification**. The model is trained to perform **game content moderation**, specifically classifying visual content as either **safe (good)** or **unsafe (bad)**. It utilizes the `SiglipForImageClassification` architecture.
> [!NOTE]
> SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features
> [https://arxiv.org/pdf/2502.14786](https://arxiv.org/pdf/2502.14786)
```
Classification Report:
              precision    recall  f1-score   support

         bad     0.9763    0.9140    0.9441      1755
        good     0.9279    0.9803    0.9534      1983

    accuracy                         0.9492      3738
   macro avg     0.9521    0.9471    0.9487      3738
weighted avg     0.9506    0.9492    0.9490      3738
```
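A report in this format can be regenerated with scikit-learn's `classification_report`. The snippet below is a minimal sketch; `y_true` and `y_pred` are placeholders standing in for the ground-truth labels and model predictions collected on a held-out evaluation split, which is not included with this card.

```python
# Minimal sketch: reproduce a report in the format above with scikit-learn.
# y_true / y_pred are placeholders, not the actual evaluation data.
from sklearn.metrics import classification_report

y_true = ["bad", "good", "good", "bad"]   # ground-truth label names (placeholder)
y_pred = ["bad", "good", "bad", "bad"]    # model-predicted label names (placeholder)

print(classification_report(y_true, y_pred, digits=4))
```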
![Untitled.png](https://cdn-uploads.huggingface.co/production/uploads/683ff3d6ed4395669d10d6d5/EEX_muJTmjfRLtCIqt2Cc.png)
---
## Label Space: 2 Classes
```
Class 0: bad (Unsafe content)
Class 1: good (Safe content)
```
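The same mapping should be recoverable from the checkpoint itself; the sketch below assumes the repository's `config.json` carries the standard `id2label` field.

```python
from transformers import AutoConfig

# Read the label mapping stored with the checkpoint (assumes a standard id2label field)
config = AutoConfig.from_pretrained("KarteeMonkey/appy-mod-beta1")
print(config.id2label)  # expected to resemble {0: "bad", 1: "good"}
```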
---
## Install Dependencies
```bash
pip install transformers torch pillow gradio hf_xet
```
---
## Inference Code
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch
# Load model and processor
model_name = "KarteeMonkey/appy-mod-beta1" # Update this if using a different path
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
# Label mapping
id2label = {
    "0": "bad",
    "1": "good"
}
def classify_content(image):
    """Classify an image as safe (good) or unsafe (bad)."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    # Forward pass without gradient tracking
    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    # Map class probabilities to label names
    prediction = {
        id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))
    }
    return prediction
# Gradio Interface
iface = gr.Interface(
    fn=classify_content,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(num_top_classes=2, label="Game Content Moderation"),
    title="Game Content Moderation SigLIP2",
    description="Upload an image to classify it as safe (good) or unsafe (bad)."
)
if __name__ == "__main__":
    iface.launch()
```
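For one-off, non-interactive use, a stripped-down variant without Gradio can look like the following sketch; the file path `screenshot.png` is a placeholder.

```python
# Minimal non-interactive inference sketch; "screenshot.png" is a placeholder path.
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "KarteeMonkey/appy-mod-beta1"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

image = Image.open("screenshot.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred_id = logits.argmax(dim=-1).item()
print(model.config.id2label[pred_id])  # "bad" or "good"
```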
---
## Intended Use
`appy-mod-beta1` is designed for:
* **Game Content Moderation** – Automated moderation of user-generated or in-game visual content (a minimal thresholding sketch follows this list).
* **Parental Control Tools** – Supports identifying unsafe or inappropriate content in children’s games.
* **Online Game Platforms** – Enables scalable and automatic screening of images uploaded by users.
* **Community Safety** – Helps maintain safe and compliant visual environments in multiplayer games and forums.
* **AI Moderation Research** – A sample project for applying vision-language models to safety-critical applications.
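For the automated-moderation and screening scenarios above, a deployment typically flags content only when the model is sufficiently confident. The sketch below is illustrative and not part of the released model: the helper name `should_flag` and the `0.8` threshold are assumptions.

```python
import torch

# Hypothetical moderation gate: flag an image when the probability of the
# "bad" class exceeds a configurable threshold. The 0.8 value is an
# illustrative assumption, not a recommendation from the model authors.
def should_flag(logits: torch.Tensor, bad_index: int = 0, threshold: float = 0.8) -> bool:
    probs = torch.nn.functional.softmax(logits, dim=-1).squeeze()
    return probs[bad_index].item() >= threshold
```

In practice, the threshold would be tuned on a validation split to balance false positives against missed unsafe content.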