Shianndri-Zanniko committed
Commit · c58777f
Parent(s): 93d4b41
fix structure
app/app.py → app.py
RENAMED
```diff
@@ -63,11 +63,8 @@ model = CLIPImageClassifier(
 ).to(device)
 
 # Load Weights
-weights_path = os.path.join(
-
-    "model",
-    "best_clip_ai_detector.pth",
-)
+weights_path = "best_clip_ai_detector.pth",
+
 print(f"Loading weights from {weights_path}...")
 try:
     model.load_state_dict(torch.load(weights_path, map_location=device))
```
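With `app.py` moved to the repository root, the weights file now sits beside the script. A minimal sketch (not the commit's code; names are assumptions) of making such a load independent of the launch directory is to resolve the path relative to the script itself:

```python
from pathlib import Path

# Sketch: resolve the weights file relative to this script's own
# directory, so loading works no matter which working directory
# the app is launched from. Filename taken from the commit.
BASE_DIR = Path(__file__).resolve().parent
weights_path = BASE_DIR / "best_clip_ai_detector.pth"

print(f"Loading weights from {weights_path}...")
```

This avoids the class of path bugs the commit is fixing, since `Path(__file__)` always points at the script regardless of `cwd`.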
app/README.md
DELETED
````diff
@@ -1,24 +0,0 @@
-# AI Image Detector Application
-
-This directory contains the deployment code for the AI Image Detector. It provides a simple web interface using Gradio, allowing users to upload images and receive real-time predictions on whether the image is real or AI-generated.
-
-## Files
-
-- `app.py`: The main Python script that loads the trained model and launches the Gradio interface.
-
-## Setup and Usage
-
-1. Ensure you have installed the project dependencies listed in the root `requirements.txt`.
-2. Ensure the trained model weights file `best_clip_ai_detector.pth` is located in the `../model/` directory relative to this folder.
-3. Run the application:
-
-```bash
-python app.py
-```
-
-4. The script will initialize the model (downloading the base CLIP model if necessary) and start a local web server.
-5. Access the interface via the URL displayed in the console (typically `http://127.0.0.1:7860`).
-
-## Model Details
-
-The application uses a `CLIPImageClassifier` class that loads a pre-trained `openai/clip-vit-base-patch32` model from the `transformers` library. It adds a custom classification head on top of the CLIP vision model to perform binary classification (Real vs. AI-Generated).
````
model/best_clip_ai_detector.pth → best_clip_ai_detector.pth
RENAMED
File without changes