---
title: Deepfake Detector
emoji: 🕵️
colorFrom: purple
colorTo: blue
sdk: docker
app_port: 7860
pinned: false
---
# Deepfake Detector API

A FastAPI-based API for detecting deepfakes in images, videos, and audio.
## API Endpoints

| Endpoint | Method | Description |
|---|---|---|
| `/` | GET | Health check |
| `/docs` | GET | Swagger UI |
| `/signup` | POST | Create account |
| `/login` | POST | Login |
| `/usage` | GET | Check API usage |
| `/predict_image` | POST | Detect deepfake in image |
| `/predict_video` | POST | Detect deepfake in video |
| `/predict_audio` | POST | Detect deepfake in audio |
## Authentication

All prediction endpoints require an API key in the request header:

`x-api-key: sk-live-your-key-here`

## Usage

- Create an account via `/signup`
- Save your API key (shown only once!)
- Use the key in the `x-api-key` header for predictions
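A client call can be sketched in a few lines of Python. This is a minimal sketch: `build_auth_headers`, the local URL, and the example response are illustrative, not part of the API contract.

```python
# Hypothetical client sketch; the URL and helper name are illustrative.
API_URL = "http://127.0.0.1:8000"

def build_auth_headers(api_key):
    """All protected endpoints expect the raw key in the x-api-key header."""
    return {"x-api-key": api_key}

headers = build_auth_headers("sk-live-your-key-here")

# With the `requests` library and a running server, an image prediction
# call would look like this:
# import requests
# with open("photo.jpg", "rb") as f:
#     resp = requests.post(f"{API_URL}/predict_image", headers=headers,
#                          files={"file": ("photo.jpg", f, "image/jpeg")})
# resp.json() returns a prediction ("REAL"/"FAKE") plus a confidence value
```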
================================================================================
DEEPFAKE DETECTOR - BACKEND PROJECT FLOW
Last Updated: 2026-01-22
Author: AI Assistant (Antigravity)
================================================================================

PROJECT OVERVIEW
This is a Deepfake Detection API built with FastAPI that detects deepfakes in:
- Images (using TensorFlow/Keras model)
- Videos (using CNN-LSTM model with MTCNN face detection)
- Audio (using HuggingFace transformers)
================================================================================
TECH STACK

Backend Framework: FastAPI (Python)
Database: MongoDB Atlas (Cloud)
ML Frameworks: TensorFlow, Keras, PyTorch, Transformers
Face Detection: MTCNN
Authentication: Bcrypt (password hashing), SHA-256 (API key hashing)
Deployment: Docker + Render
================================================================================
FOLDER STRUCTURE

deepfake-detector/
├── src/
│   ├── main.py                  # FastAPI app, all endpoints
│   ├── auth.py                  # Authentication & API key management
│   ├── database.py              # MongoDB connection
│   ├── config.py                # Project configuration
│   ├── predict.py               # Image prediction logic
│   ├── predict_video_model.py   # Video prediction logic
│   └── video_model.py           # Video model architecture
├── models/
│   ├── baseline_model.h5        # Image deepfake model
│   ├── video_model_v2.keras     # Video deepfake model
│   └── finetuned_model.h5       # Encoder for video model
├── .env                         # Environment variables (secrets)
├── .env.example                 # Template for .env
├── requirements.txt             # Python dependencies
├── Dockerfile                   # Docker configuration
├── render.yaml                  # Render deployment config
├── start.sh                     # Startup script for Docker
└── app.py                       # Streamlit frontend
================================================================================
API ENDPOINTS

PUBLIC ENDPOINTS (No auth required):
  GET  /                → Health check
  POST /signup          → Create account, get API key (shown ONCE)
  POST /login           → View API key prefix
  POST /regenerate-key  → Generate new API key (shown ONCE)

PROTECTED ENDPOINTS (Require x-api-key header):
  GET  /usage           → Check API usage & remaining quota
  POST /predict_image   → Detect deepfake in image
  POST /predict_video   → Detect deepfake in video
  POST /predict_audio   → Detect deepfake in audio
================================================================================
AUTHENTICATION SYSTEM (OpenAI-style)
Implemented by: AI Assistant
SIGNUP FLOW:
- User provides email + password
- Password hashed with bcrypt
- API key generated: sk-live-{random_hex_48chars}
- API key hashed with SHA-256 before storing
- Raw API key shown ONCE to user
LOGIN FLOW:
- Verify email + password
- Only show API key PREFIX (e.g., sk-live-8cc37354...)
- Full key is NOT recoverable (security feature)
REGENERATE KEY FLOW:
- Verify email + password
- Generate new API key
- Old key becomes invalid immediately
- New key shown ONCE
API KEY VALIDATION:
- User sends: x-api-key: sk-live-xxxxx
- Server hashes incoming key
- Compares hash with stored hash
- If the hashes match → request allowed, then rate limit checked
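The key generation and validation flow above can be sketched as follows. This is illustrative, not the actual src/auth.py implementation; the function names are assumptions, but the key format (sk-live- plus 48 hex chars) and the SHA-256 hash comparison come from the flow described above.

```python
# Sketch of the API key lifecycle (hypothetical names, real format/hashing).
import hashlib
import secrets

def generate_api_key():
    # sk-live-{random_hex_48chars}: 24 random bytes -> 48 hex characters
    return "sk-live-" + secrets.token_hex(24)

def hash_api_key(raw_key):
    # Only this SHA-256 digest is stored; the raw key is shown once to the user.
    return hashlib.sha256(raw_key.encode()).hexdigest()

def validate_api_key(incoming_key, stored_hash):
    # Hash the incoming x-api-key value and compare against the stored hash.
    return hashlib.sha256(incoming_key.encode()).hexdigest() == stored_hash

key = generate_api_key()
stored = hash_api_key(key)          # what goes into MongoDB
assert validate_api_key(key, stored)
```

Because only the hash is stored, a lost key cannot be recovered; the user must call /regenerate-key, which invalidates the old hash.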
================================================================================
RATE LIMITING SYSTEM

Implemented by: AI Assistant

- Default limit: 100 requests/day per user
- Counter resets at midnight UTC
- When exceeded: HTTP 429 Too Many Requests
- Usage tracked in MongoDB:
    {
      "requests_today": 5,
      "last_request_date": "2026-01-22",
      "total_requests": 150,
      "rate_limit": 100
    }
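The daily-reset logic can be sketched like this. It is a hedged illustration of the rules above (midnight-UTC reset, 429 when exceeded), not the exact MongoDB update used by the service; the function name and dict-based user record are assumptions.

```python
# Illustrative rate-limit check operating on the tracked usage fields.
from datetime import datetime, timezone

def check_rate_limit(user, now=None):
    """Return True if the request is allowed; mutates the usage counters."""
    now = now or datetime.now(timezone.utc)
    today = now.date().isoformat()
    if user.get("last_request_date") != today:
        # Midnight-UTC reset: a new day starts the counter over.
        user["requests_today"] = 0
        user["last_request_date"] = today
    if user["requests_today"] >= user.get("rate_limit", 100):
        return False  # caller responds with HTTP 429 Too Many Requests
    user["requests_today"] += 1
    user["total_requests"] = user.get("total_requests", 0) + 1
    return True

user = {"requests_today": 99, "last_request_date": "2026-01-21", "rate_limit": 100}
allowed = check_rate_limit(user)  # stale date -> counter reset, request allowed
```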
================================================================================
USER SCHEMA (MongoDB)

{
  "_id": ObjectId,
  "email": "user@example.com",
  "password_hash": "bcrypt_hash_here",
  "api_key_hash": "sha256_hash_here",
  "api_key_prefix": "sk-live-8cc37354...",
  "requests_today": 5,
  "last_request_date": "2026-01-22",
  "total_requests": 150,
  "rate_limit": 100,
  "created_at": ISODate,
  "last_login": ISODate
}
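The /usage response can be derived from these stored fields. A minimal sketch, assuming the response exposes requests_today, remaining, and rate_limit (the fields the frontend reads); the helper name is hypothetical.

```python
# Hypothetical mapping from the stored user document to the /usage response.
def usage_summary(user):
    return {
        "requests_today": user["requests_today"],
        # "remaining" is computed, not stored: quota minus today's count.
        "remaining": max(user["rate_limit"] - user["requests_today"], 0),
        "rate_limit": user["rate_limit"],
        "total_requests": user["total_requests"],
    }

usage = usage_summary({"requests_today": 5, "rate_limit": 100, "total_requests": 150})
# -> usage["remaining"] == 95
```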
================================================================================
ML MODELS USED
IMAGE MODEL (baseline_model.h5)
- TensorFlow/Keras CNN
- Input: JPG/PNG images
- Output: REAL/FAKE + confidence %
VIDEO MODEL (video_model_v2.keras)
- CNN-LSTM architecture
- Uses finetuned_model.h5 as feature encoder
- Face extraction via MTCNN
- Input: MP4/MOV/AVI videos
- Output: REAL/FAKE + confidence %
AUDIO MODEL (HuggingFace)
- Model: motheecreator/Deepfake-audio-detection
- Loaded via transformers pipeline
- Input: WAV/MP3/FLAC/OGG/M4A
- Output: REAL/FAKE + confidence %
================================================================================
WHAT WAS IMPLEMENTED BY AI ASSISTANT

API KEY MANAGEMENT SYSTEM
- src/auth.py (complete rewrite)
- OpenAI-style sk-live-xxx format
- SHA-256 hashing for security
- Key shown only once at signup
MONGODB INTEGRATION
- src/database.py (new file)
- Async connection with motor
- Index creation for email & api_key
RATE LIMITING
- 100 requests/day default
- Auto-reset at midnight
- Usage tracking per user
USAGE ENDPOINT
- GET /usage to check quota
- Returns remaining requests
ENVIRONMENT CONFIGURATION
- .env file for secrets
- .env.example as template
- python-dotenv integration
CORS MIDDLEWARE
- Added to allow API access from browsers
PROTECTED ENDPOINTS
- All /predict_* endpoints require API key
- Dependency injection: Depends(validate_api_key)
RENDER DEPLOYMENT CONFIG
- Updated render.yaml for Docker
- Environment variable placeholders
================================================================================
HOW TO RUN LOCALLY

- Install dependencies:
    pip install -r requirements.txt

- Set environment variables (create .env):
    MONGODB_URI=mongodb+srv://user:pass@cluster.mongodb.net/
    DATABASE_NAME=deepfake_detector

- Start the server:
    uvicorn src.main:app --host 127.0.0.1 --port 8000

- Open the docs:
    http://127.0.0.1:8000/docs
================================================================================
HOW TO DEPLOY (RENDER)

- Push to GitHub
- Create a Web Service on Render
- Add environment variable:
    MONGODB_URI = your_mongodb_connection_string
- Deploy

================================================================================
END OF DOCUMENT
The Streamlit frontend (app.py) uses the following format for UI building:
import streamlit as st
import requests
import os

# --- Page Configuration ---
st.set_page_config(
    page_title="Deepfake Detector",
    page_icon="🕵️",
    layout="wide"
)

# --- Define API URL ---
# For local development: http://127.0.0.1:8000
# For production: set the API_URL environment variable to your deployed backend URL
# Example: API_URL=https://your-backend.up.railway.app
# If running on Railway/Render, use internal networking or the deployed URL.
# The frontend MUST know the backend URL.
API_URL = os.getenv("API_URL", "http://127.0.0.1:8000")

# --- Session State for API Key ---
if "api_key" not in st.session_state:
    st.session_state.api_key = ""
if "logged_in" not in st.session_state:
    st.session_state.logged_in = False
if "user_email" not in st.session_state:
    st.session_state.user_email = ""

# --- Helper function to make authenticated requests ---
def make_api_request(endpoint, files=None, timeout=30):
    """Make an authenticated API request with the API key header."""
    headers = {"x-api-key": st.session_state.api_key}
    try:
        response = requests.post(
            f"{API_URL}/{endpoint}",
            files=files,
            headers=headers,
            timeout=timeout
        )
        return response
    except requests.exceptions.RequestException:
        return None

# --- Sidebar: Authentication ---
with st.sidebar:
    st.header("🔐 Authentication")
    if not st.session_state.logged_in:
        tab1, tab2 = st.tabs(["Login", "Signup"])

        with tab1:
            st.subheader("Login")
            login_email = st.text_input("Email", key="login_email")
            login_password = st.text_input("Password", type="password", key="login_pass")
            if st.button("Login", key="login_btn"):
                with st.spinner("Logging in..."):
                    try:
                        response = requests.post(
                            f"{API_URL}/login",
                            json={"email": login_email, "password": login_password},
                            timeout=10
                        )
                        if response.status_code == 200:
                            result = response.json()
                            st.success("Login successful!")
                            st.info("⚠️ Your API key prefix: " + result["api_key"])
                            st.warning("Enter your full API key below if you have it saved.")
                            st.session_state.user_email = login_email
                        else:
                            st.error(response.json().get("detail", "Login failed"))
                    except Exception as e:
                        st.error(f"Connection error: {e}")

        with tab2:
            st.subheader("Signup")
            signup_email = st.text_input("Email", key="signup_email")
            signup_password = st.text_input("Password", type="password", key="signup_pass")
            if st.button("Create Account", key="signup_btn"):
                with st.spinner("Creating account..."):
                    try:
                        response = requests.post(
                            f"{API_URL}/signup",
                            json={"email": signup_email, "password": signup_password},
                            timeout=10
                        )
                        if response.status_code == 200:
                            result = response.json()
                            st.success("Account created!")
                            st.warning("⚠️ SAVE YOUR API KEY NOW! It will NOT be shown again:")
                            st.code(result["api_key"], language=None)
                            st.session_state.api_key = result["api_key"]
                            st.session_state.logged_in = True
                            st.session_state.user_email = signup_email
                            st.rerun()
                        else:
                            st.error(response.json().get("detail", "Signup failed"))
                    except Exception as e:
                        st.error(f"Connection error: {e}")

        st.divider()
        st.subheader("Enter API Key")
        api_key_input = st.text_input("API Key (sk-live-...)", type="password", key="api_key_input")
        if st.button("Use API Key"):
            if api_key_input.startswith("sk-live-"):
                st.session_state.api_key = api_key_input
                st.session_state.logged_in = True
                st.success("API key saved!")
                st.rerun()
            else:
                st.error("Invalid API key format. It should start with 'sk-live-'.")
    else:
        st.success(f"✅ Logged in as: {st.session_state.user_email or 'User'}")
        st.text(f"Key: {st.session_state.api_key[:20]}...")

        # Check usage
        if st.button("Check Usage"):
            headers = {"x-api-key": st.session_state.api_key}
            try:
                response = requests.get(f"{API_URL}/usage", headers=headers, timeout=10)
                if response.status_code == 200:
                    usage = response.json()
                    st.metric("Requests Today", usage["requests_today"])
                    st.metric("Remaining", usage["remaining"])
                    st.metric("Daily Limit", usage["rate_limit"])
                else:
                    st.error("Could not fetch usage")
            except requests.exceptions.RequestException:
                st.error("Connection error")

        if st.button("Logout"):
            st.session_state.api_key = ""
            st.session_state.logged_in = False
            st.session_state.user_email = ""
            st.rerun()
# --- Main App ---
st.title("Deepfake Detector 🕵️")
st.write("Welcome! This app uses advanced AI models to detect deepfakes in images, videos, and audio.")

# Check if logged in
if not st.session_state.logged_in:
    st.warning("⚠️ Please login or enter your API key in the sidebar to use the detector.")
    st.stop()

# --- Create three columns for the detectors ---
col1, col2, col3 = st.columns(3)

# --- Column 1: Image Detector ---
with col1:
    st.header("🖼️ Image Detector")
    image_file = st.file_uploader("Upload an Image", type=['jpg', 'png', 'jpeg'], key="image")
    if image_file:
        st.image(image_file, caption="Uploaded Image.", use_container_width=True)
        if st.button("Detect Image"):
            files = {"file": (image_file.name, image_file, image_file.type)}
            with st.spinner("Analyzing image... Please wait."):
                response = make_api_request("predict_image", files=files, timeout=30)
            if response is None:
                st.error("Could not connect to the backend API")
            elif response.status_code == 200:
                result = response.json()
                confidence = result.get('confidence', 0.0)
                if result['prediction'] == 'FAKE':
                    st.error(f"**Prediction: FAKE** (Confidence: {confidence:.2f}%)")
                else:
                    st.success(f"**Prediction: REAL** (Confidence: {confidence:.2f}%)")
            elif response.status_code == 429:
                st.error("⚠️ Rate limit exceeded! Try again tomorrow.")
            elif response.status_code == 401:
                st.error("❌ Invalid API key. Please check your key.")
            else:
                st.error(f"Error from API: {response.text}")
# --- Column 2: Video Detector ---
with col2:
    st.header("🎬 Video Detector")
    video_file = st.file_uploader("Upload a Video", type=['mp4', 'mov', 'avi'], key="video")

    if video_file:
        st.video(video_file)
        if st.button("Detect Video"):
            files = {"file": (video_file.name, video_file, video_file.type)}
            with st.spinner("Analyzing video... This may take a minute or two."):
                response = make_api_request("predict_video", files=files, timeout=300)
            if response is None:
                st.error("Could not connect to the backend API")
            elif response.status_code == 200:
                result = response.json()
                confidence = result.get('confidence', 0.0)
                if result['prediction'] == 'FAKE':
                    st.error(f"**Prediction: FAKE** (Confidence: {confidence:.2f}%)")
                elif result['prediction'] == 'REAL':
                    st.success(f"**Prediction: REAL** (Confidence: {confidence:.2f}%)")
                else:
                    st.warning(f"Prediction Error: {result.get('detail', 'Unknown error')}")
            elif response.status_code == 429:
                st.error("⚠️ Rate limit exceeded! Try again tomorrow.")
            elif response.status_code == 401:
                st.error("❌ Invalid API key. Please check your key.")
            else:
                st.error(f"Error from API: {response.text}")
# --- Column 3: Audio Detector ---
with col3:
    st.header("🎵 Audio Detector")
    audio_file = st.file_uploader("Upload Audio", type=['wav', 'mp3', 'flac', 'ogg', 'm4a'], key="audio")

    if audio_file:
        st.audio(audio_file)
        if st.button("Detect Audio"):
            files = {"file": (audio_file.name, audio_file, audio_file.type)}
            with st.spinner("Analyzing audio..."):
                response = make_api_request("predict_audio", files=files, timeout=300)
            if response is None:
                st.error("Could not connect to the backend API")
            elif response.status_code == 200:
                result = response.json()
                confidence = result.get('confidence', 0.0)
                prediction = result.get('prediction', 'UNKNOWN')
                if prediction == 'FAKE':
                    st.error(f"**Prediction: FAKE** (Confidence: {confidence:.2f}%)")
                elif prediction == 'REAL':
                    st.success(f"**Prediction: REAL** (Confidence: {confidence:.2f}%)")
                else:
                    st.warning(f"Prediction: {prediction}")
            elif response.status_code == 429:
                st.error("⚠️ Rate limit exceeded! Try again tomorrow.")
            elif response.status_code == 401:
                st.error("❌ Invalid API key. Please check your key.")
            else:
                st.error(f"Error from API: {response.text}")

# --- Footer ---
st.divider()
st.caption("Powered by FastAPI + TensorFlow + HuggingFace Transformers")