Update README.md
README.md CHANGED

@@ -13,6 +13,16 @@ tags:
 
 BAILU is a highly efficient deepfake detection model designed to identify AI-generated images from various image generation models. With only **2M parameters (~8MB)**, it achieves **95.88% overall accuracy** by analyzing artifacts/signatures unique to AI generation pipelines.
 
+## 🌍 Why Open-Source Matters for Deepfake Detection
+
+This model was only possible because companies like Black Forest Labs and Stability AI release their models publicly. Private, closed-source models create detection blind spots—we cannot defend against what we cannot study.
+We strongly encourage all AI companies to open-source their models to enable:
+
+- Effective deepfake detection research
+- Transparency in AI development
+- Collaborative safety measures
+- Public trust through verifiable defenses
+
 ## 🎯 Key Features
 
 - **Ultra-Lightweight**: 2M parameters, ~8MB model size - runs on CPU or GPU

@@ -39,16 +49,6 @@ BAILU is a highly efficient deepfake detection model designed to identify AI-gen
 - **Scheduler**: CosineAnnealingLR (T_max=50)
 - **Loss**: Binary Cross-Entropy with Logits
 
-## 🌍 Why Open-Source Matters for Deepfake Detection
-
-This model was only possible because companies like Black Forest Labs and Stability AI release their models publicly. Private, closed-source models create detection blind spots—we cannot defend against what we cannot study.
-We strongly encourage all AI companies to open-source their models to enable:
-
-- Effective deepfake detection research
-- Transparency in AI development
-- Collaborative safety measures
-- Public trust through verifiable defenses
-
 Detection must keep pace with generation. That requires open access.
 ## ⚠️ Important Limitations
 
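The context lines in the second hunk name the training schedule (CosineAnnealingLR, T_max=50) and loss (binary cross-entropy with logits). As a hedged illustration of what those two pieces compute, here is a minimal pure-Python sketch of the underlying formulas; the base learning rate is illustrative, not a hyperparameter stated in the README:

```python
import math

def cosine_annealing_lr(step, base_lr, t_max, eta_min=0.0):
    """Cosine annealing schedule (same formula as PyTorch's CosineAnnealingLR):
    decays from base_lr at step 0 to eta_min at step t_max along a half cosine."""
    return eta_min + (base_lr - eta_min) * (1 + math.cos(math.pi * step / t_max)) / 2

def bce_with_logits(logit, target):
    """Numerically stable binary cross-entropy on a raw logit for one example
    (the single-sample analogue of torch.nn.BCEWithLogitsLoss)."""
    return max(logit, 0.0) - logit * target + math.log1p(math.exp(-abs(logit)))

# Schedule endpoints for T_max=50, as in the training config
lr_start = cosine_annealing_lr(0, 1e-3, 50)   # full base LR at step 0
lr_end = cosine_annealing_lr(50, 1e-3, 50)    # annealed to eta_min at step 50
```

A logit of 0 (an undecided classifier) gives a loss of log 2 regardless of the label, which is the usual sanity check for a BCE-with-logits implementation.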