MrDevCoder01 committed
Commit c9caccb · 1 Parent(s): f57fa44

Initial upload of DeepSecure backend

Files changed (9)
  1. .gitignore +4 -0
  2. Dockerfile +25 -0
  3. README.md +191 -7
  4. app.py +203 -0
  5. model.h5 +3 -0
  6. requirements.txt +78 -0
  7. start_server.sh +12 -0
  8. yolo11n-pose.pt +3 -0
  9. yolo11n.pt +3 -0
.gitignore ADDED
@@ -0,0 +1,4 @@
+ venv/
+ __pycache__/
+ *.pyc
+ .DS_Store
Dockerfile ADDED
@@ -0,0 +1,25 @@
+ FROM python:3.11-slim
+
+ # Set working directory
+ WORKDIR /app
+
+ # Install system dependencies required for OpenCV
+ RUN apt-get update && apt-get install -y \
+     libgl1-mesa-glx \
+     libglib2.0-0 \
+     && rm -rf /var/lib/apt/lists/*
+
+ # Copy requirements first to leverage Docker cache
+ COPY requirements.txt .
+
+ # Install Python dependencies
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ # Copy the rest of the application
+ COPY . .
+
+ # Expose port 7860 (Hugging Face default)
+ EXPOSE 7860
+
+ # Start the application
+ CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"]
README.md CHANGED
@@ -1,11 +1,195 @@
  ---
- title: DSBackend
- emoji: 📚
- colorFrom: green
- colorTo: purple
- sdk: docker
- pinned: false
  license: mit
+ language:
+ - en
+ metrics:
+ - accuracy
+ library_name: tf-keras
+ pipeline_tag: image-classification
  ---
 
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # Deepfake Detection Backend & Model (V1)
+
+ This repository contains a Convolutional Neural Network (CNN) model fine-tuned for deepfake classification, wrapped in a high-performance **FastAPI** backend that processes both images and frame-by-frame videos.
+
+ ## Core Advancements
+ To improve real-world accuracy (especially on webcam footage and scale distortions), we use **Ultralytics YOLO11-Pose** (`yolo11n-pose.pt`) for facial extraction.
+
+ The underlying CNN (`model.h5`) performs well only on *tight facial crops* matching its training data; generic YOLO bounding boxes are too loose and capture background noise. By extracting facial keypoints (eyes, nose, ears) with YOLO11-Pose and computing a bounding box around them, we obtain tight facial crops, ensuring the CNN sees exactly what it was trained on, regardless of camera distance.
+
+ ### Key Features
+ - **Model Architecture:** Convolutional Neural Network (CNN)
+ - **Input Size:** 128x128 pixels (tight facial crop)
+ - **Face Extractor:** Ultralytics YOLO11-Pose (`yolo11n-pose.pt`)
+ - **Video Processing:** Extracts and analyzes 1 in every 5 frames (~6 fps for a 30 fps video) for robust temporal spoof detection. A video is flagged as "Fake" if *any* evaluated frame's prediction score reaches 0.5.
+ - **Number of Classes:** 2 (Real, Fake)
+ - **API Framework:** FastAPI, Uvicorn, Python-Multipart
+
+ ## Processing Flow & Algorithm
+
+ The system processes both images and videos through a unified core prediction pipeline. The step-by-step logic is described below.
+
+ ### 1. Media Handling Flow
+
+ **For Images:**
+ 1. The image is parsed and decoded directly from the HTTP request.
+ 2. The image is passed to the **Core Prediction Pipeline**.
+ 3. A confidence score is returned, classifying the image as "Real" or "Fake".
+
+ **For Videos:**
+ 1. The video is saved to a temporary file and read using OpenCV.
+ 2. Frames are extracted iteratively.
+ 3. To optimize performance without sacrificing temporal accuracy, **1 in every 5 frames** (~6 fps for a 30 fps video) is analyzed.
+ 4. Each selected frame is passed individually to the **Core Prediction Pipeline**.
+ 5. The backend collects a list of `confidence_scores` from the analyzed frames.
+ 6. The video is flagged as "Fake" if the **maximum** confidence score among all frames (i.e., the most manipulated frame) reaches 0.5.
+
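The max-over-frames decision rule in step 6 can be sketched as a small helper; the function name and return keys mirror the API's JSON fields but are illustrative, not code from the repository:

```python
def aggregate_video_scores(confidence_scores, threshold=0.5):
    """Flag a video as Fake if its most manipulated sampled frame reaches the threshold."""
    max_score = max(confidence_scores)
    avg_score = sum(confidence_scores) / len(confidence_scores)
    return {
        # A single strongly manipulated frame is enough to flag the whole video.
        "prediction": "Fake" if max_score >= threshold else "Real",
        "max_fake_score": max_score,
        "avg_score": avg_score,
        "fake_frames_count": sum(1 for s in confidence_scores if s >= threshold),
    }
```

The maximum is used rather than the average because a deepfake that manipulates only a few frames would be washed out by many clean frames.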
+ ### 2. Core Prediction Pipeline (Pseudocode)
+
+ To locate and tightly frame the face, the YOLO11-Pose pipeline extracts 5 specific facial keypoints: **Nose, Left Eye, Right Eye, Left Ear, and Right Ear**.
+
+ ```python
+ function process_frame(frame):
+     # Step 1: Detect face & extract keypoints (YOLO11-Pose)
+     results = yolo_pose_model.predict(frame)
+
+     if face_keypoints_found(results):
+         # Eyes, nose, and ears detected
+         bounding_box = calculate_tight_box_from_keypoints()
+         face_crop = crop_image(frame, bounding_box)
+     elif person_bounding_box_found(results):
+         # Fall back to the standard object-detection box if keypoints fail
+         bounding_box = shrink_box_to_approximate_face()
+         face_crop = crop_image(frame, bounding_box)
+     else:
+         # Last resort if no person is detected
+         face_crop = frame
+
+     # Step 2: Preprocessing
+     resized_face = resize_image(face_crop, width=128, height=128)
+     normalized_face = resized_face / 255.0
+     model_input = expand_dimensions(normalized_face)
+
+     # Step 3: CNN model inference
+     confidence_score = cnn_model.predict(model_input)
+
+     return confidence_score
+ ```
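As a concrete, runnable version of the keypoint-to-box step, here is a sketch using the same padding factors as the inference script later in this README (0.3·w sideways, 0.5·h upward for the forehead, 0.8·h downward for the chin); the function name is illustrative:

```python
import numpy as np

def tight_box_from_keypoints(kpts, img_w, img_h):
    """Compute a padded face box from up to 5 facial keypoints; (0, 0) means 'not detected'."""
    valid = np.array([k for k in kpts if k[0] > 0 and k[1] > 0])
    if len(valid) == 0:
        return None
    x_min, y_min = valid.min(axis=0)
    x_max, y_max = valid.max(axis=0)
    w, h = x_max - x_min, y_max - y_min
    if w <= 0 or h <= 0:
        return None
    # Pad the tight keypoint box outward and clamp it to the image bounds.
    x1 = max(0, int(x_min - 0.3 * w))
    y1 = max(0, int(y_min - 0.5 * h))
    x2 = min(img_w, int(x_max + 0.3 * w))
    y2 = min(img_h, int(y_max + 0.8 * h))
    return x1, y1, x2, y2
```

Degenerate inputs (no valid keypoints, or all keypoints collinear so the box has zero width or height) return `None`, which corresponds to the fallback branches of the pseudocode above.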
+
+ ## Training Performance
+
+ Below are graphs of the training and validation accuracy and loss for the model:
+
+ ![Model Training/Validation Graph 1](Unknown.png)
+
+ ![Model Training/Validation Graph 2](Unknown-2.png)
+
+ ## Installation
+
+ 1. Create a Python 3.11 virtual environment and activate it:
+    ```bash
+    python3.11 -m venv venv
+    source venv/bin/activate
+    ```
+ 2. Install the required dependencies:
+    ```bash
+    pip install -r requirements.txt
+    ```
+
+ ## Running the API Server
+
+ A startup script is provided to launch the FastAPI backend:
+ ```bash
+ chmod +x start_server.sh
+ ./start_server.sh
+ ```
+ The server binds to `0.0.0.0:7860` (the port `app.py` passes to Uvicorn), making the `/predict` endpoint available.
+
+ ## Usage (API)
+
+ You can send a `POST` request with an image or video to the `/predict` endpoint using `multipart/form-data`:
+
+ ```python
+ import requests
+
+ url = "http://localhost:7860/predict"
+ file_path = "sample_video.mp4"  # Or an image, e.g. image.jpg
+
+ with open(file_path, "rb") as file:
+     files = {"file": file}
+     response = requests.post(url, files=files)
+
+ print(response.json())
+ ```
+
+ **JSON Output Structure (Video):**
+ ```json
+ {
+     "filename": "sample_video.mp4",
+     "type": "video",
+     "prediction": "Fake",
+     "confidence_score": 0.8921,
+     "frames_analyzed": 120,
+     "fake_frames_count": 14,
+     "max_fake_score": 0.8921,
+     "avg_score": 0.3102
+ }
+ ```
+ *Note: A score closer to `1.0` indicates heavy manipulation; a score closer to `0.0` indicates authentic content. A `max_fake_score` ≥ 0.5 triggers a "Fake" prediction.*
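The thresholding described in the note is a plain comparison against 0.5; a minimal sketch (helper name is illustrative):

```python
def label_from_score(score, threshold=0.5):
    # Scores at or above the threshold are treated as manipulated ("Fake").
    return "Fake" if score >= threshold else "Real"
```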
+
+ ## Usage (Direct Python Inference)
+
+ To use the YOLO11 inference pipeline directly in your Python code without the API server, adapt this minimal inference script:
+
+ ```python
+ import cv2
+ import numpy as np
+ import warnings
+ from tensorflow.keras.preprocessing import image
+ from tensorflow.keras.models import load_model
+ from ultralytics import YOLO
+
+ warnings.filterwarnings('ignore', category=UserWarning)
+
+ # Load models
+ model = load_model('model.h5', compile=False)
+ detector = YOLO('yolo11n-pose.pt')
+
+ def detect_and_predict(img_path):
+     img = cv2.imread(img_path)
+
+     # 1. Detect the face using YOLO11-Pose keypoints
+     results = detector.predict(img, verbose=False)
+     if len(results) > 0 and results[0].keypoints is not None and len(results[0].keypoints.xy[0]) > 0:
+         kpts = results[0].keypoints.xy[0].cpu().numpy()
+         valid_kpts = np.array([k for k in kpts[0:5] if k[0] > 0 and k[1] > 0])  # Eyes, nose, ears
+
+         if len(valid_kpts) > 0:
+             x_min, y_min = np.min(valid_kpts, axis=0)
+             x_max, y_max = np.max(valid_kpts, axis=0)
+
+             # Expand the tight keypoint box to capture the full face (forehead to jaw)
+             w, h = x_max - x_min, y_max - y_min
+             if w > 0 and h > 0:
+                 x1 = max(0, int(x_min - w * 0.3))
+                 y1 = max(0, int(y_min - h * 0.5))
+                 x2 = min(img.shape[1], int(x_max + w * 0.3))
+                 y2 = min(img.shape[0], int(y_max + h * 0.8))
+
+                 face = img[y1:y2, x1:x2]
+                 if face.size > 0:
+                     face = cv2.resize(face, (128, 128))
+
+                     # 2. Preprocess & predict
+                     img_array = np.expand_dims(image.img_to_array(face), axis=0) / 255.0
+                     score = float(model.predict(img_array, verbose=0)[0][0])
+
+                     prediction = 'Fake' if score >= 0.5 else 'Real'
+                     print(f"Prediction: {prediction} (Score: {score:.4f})")
+                     return
+
+     print("Could not detect a clear face.")
+
+ # Try it out
+ detect_and_predict('path_to_your_image.jpg')
+ ```
app.py ADDED
@@ -0,0 +1,203 @@
+ from fastapi import FastAPI, File, UploadFile, HTTPException
+ from fastapi.middleware.cors import CORSMiddleware
+ import uvicorn
+ import cv2
+ import numpy as np
+ import os
+ import tempfile
+ import shutil
+ import warnings
+ import logging
+
+ # Suppress Keras and TensorFlow warnings
+ warnings.filterwarnings('ignore', category=UserWarning)
+ os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
+ logging.getLogger('absl').setLevel(logging.ERROR)
+
+ from tensorflow.keras.preprocessing import image
+ from tensorflow.keras.models import load_model
+ from ultralytics import YOLO
+
+ app = FastAPI(title="Deepfake Detection API")
+
+ # CORS for frontend connectivity
+ app.add_middleware(
+     CORSMiddleware,
+     allow_origins=["*"],  # In production, replace with your frontend URL
+     allow_credentials=True,
+     allow_methods=["*"],
+     allow_headers=["*"],
+ )
+
+ @app.get("/")
+ async def health_check():
+     return {"status": "online", "message": "Backend is running!"}
+
+ print("Loading model and YOLO11...")
+ model = load_model('model.h5', compile=False)
+ # YOLO11 pose model for extremely tight and precise facial cropping (matches MTCNN-style crops)
+ detector = YOLO('yolo11n-pose.pt')
+ print("Model loaded successfully.")
+
+ def detect_and_crop_face(img):
+     """Detects a face/person and crops it to 128x128."""
+     # Run YOLO11 pose detection
+     results = detector.predict(img, verbose=False)
+
+     if len(results) > 0 and results[0].keypoints is not None and len(results[0].keypoints.xy[0]) > 0:
+         # Get the first person's keypoints
+         kpts = results[0].keypoints.xy[0].cpu().numpy()
+
+         # 0: nose, 1: left eye, 2: right eye, 3: left ear, 4: right ear
+         face_kpts = kpts[0:5]
+         # Filter out keypoints that weren't detected
+         valid_kpts = [k for k in face_kpts if k[0] > 0 and k[1] > 0]
+
+         if valid_kpts:
+             valid_kpts = np.array(valid_kpts)
+             x_min, y_min = np.min(valid_kpts, axis=0)
+             x_max, y_max = np.max(valid_kpts, axis=0)
+
+             # Expand this tight keypoint box to capture the full face (forehead, chin, cheeks)
+             w = x_max - x_min
+             h = y_max - y_min
+
+             # Safety for edge cases
+             if w > 0 and h > 0:
+                 pad_x = w * 0.3
+                 pad_y_top = h * 0.5  # Expand more upward for the forehead
+                 pad_y_bot = h * 0.8  # Expand downward for the chin/mouth
+
+                 final_x1 = max(0, int(x_min - pad_x))
+                 final_y1 = max(0, int(y_min - pad_y_top))
+                 final_x2 = min(img.shape[1], int(x_max + pad_x))
+                 final_y2 = min(img.shape[0], int(y_max + pad_y_bot))
+
+                 face = img[final_y1:final_y2, final_x1:final_x2]
+
+                 if face.size > 0:
+                     return cv2.resize(face, (128, 128))
+
+     # Fall back to the plain YOLO box heuristic if face keypoints fail but a person is found
+     if len(results) > 0 and len(results[0].boxes) > 0:
+         box = results[0].boxes[0].xyxy[0].cpu().numpy()
+         x1, y1, x2, y2 = map(int, box)
+         x1, y1 = max(0, x1), max(0, y1)
+         x2, y2 = min(img.shape[1], x2), min(img.shape[0], y2)
+
+         h = y2 - y1
+         w = x2 - x1
+         if h > w * 1.5:
+             y2 = y1 + int(h * 0.3)
+
+         face = img[y1:y2, x1:x2]
+         if face.size > 0:
+             return cv2.resize(face, (128, 128))
+
+     # If no person is detected at all
+     return cv2.resize(img, (128, 128))
+
+ def preprocess_face(face):
+     """Formats the cropped face for the model."""
+     img_array = image.img_to_array(face)
+     img_array = np.expand_dims(img_array, axis=0)
+     img_array /= 255.0  # Normalize
+     return img_array
+
+ def process_image(img):
+     """Processes a single BGR image array and returns the fake probability."""
+     face = detect_and_crop_face(img)
+     processed_image = preprocess_face(face)
+     prediction = model.predict(processed_image, verbose=0)
+     return float(prediction[0][0])
+
+ @app.post("/predict")
+ async def predict_media(file: UploadFile = File(...)):
+     filename = file.filename.lower()
+
+     is_video = filename.endswith(('.mp4', '.avi', '.mov', '.mkv'))
+     is_image = filename.endswith(('.jpg', '.jpeg', '.png', '.bmp'))
+
+     if not is_image and not is_video:
+         raise HTTPException(status_code=400, detail="Unsupported file format.")
+
+     try:
+         if is_image:
+             # Read the image directly from bytes
+             contents = await file.read()
+             nparr = np.frombuffer(contents, np.uint8)
+             img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
+
+             if img is None:
+                 raise HTTPException(status_code=400, detail="Invalid image file.")
+
+             score = process_image(img)
+             result = "Real" if score < 0.5 else "Fake"
+
+             return {
+                 "filename": filename,
+                 "type": "image",
+                 "prediction": result,
+                 "confidence_score": score
+             }
+
+         elif is_video:
+             # Save the video to a temporary file for cv2.VideoCapture
+             with tempfile.NamedTemporaryFile(delete=False, suffix=".mp4") as temp_video:
+                 shutil.copyfileobj(file.file, temp_video)
+                 temp_video_path = temp_video.name
+
+             cap = cv2.VideoCapture(temp_video_path)
+             if not cap.isOpened():
+                 os.unlink(temp_video_path)
+                 raise HTTPException(status_code=400, detail="Could not open video file.")
+
+             frame_scores = []
+             frame_count = 0
+
+             # Process 1 in every 5 frames (~6 fps for a 30 fps video)
+             while True:
+                 ret, frame = cap.read()
+                 if not ret:
+                     break
+
+                 if frame_count % 5 == 0:
+                     score = process_image(frame)
+                     frame_scores.append(score)
+
+                 frame_count += 1
+
+             cap.release()
+             os.unlink(temp_video_path)
+
+             if not frame_scores:
+                 raise HTTPException(status_code=400, detail="Could not extract frames from video.")
+
+             # Deepfakes often manipulate only specific frames, so the average score can mask the spoof.
+             # We use max_score to find the most manipulated frame.
+             max_score = max(frame_scores)
+             avg_score = sum(frame_scores) / len(frame_scores)
+
+             fake_frames_count = sum(1 for s in frame_scores if s >= 0.5)
+
+             final_result = "Real" if max_score < 0.5 else "Fake"
+
+             return {
+                 "filename": filename,
+                 "type": "video",
+                 "prediction": final_result,
+                 "confidence_score": max_score,
+                 "frames_analyzed": len(frame_scores),
+                 "fake_frames_count": fake_frames_count,
+                 "max_fake_score": max_score,
+                 "avg_score": avg_score
+             }
+
+     except HTTPException:
+         # Let deliberate 4xx errors pass through instead of being rewrapped as 500s
+         raise
+     except Exception as e:
+         import traceback
+         traceback.print_exc()
+         raise HTTPException(status_code=500, detail=str(e))
+
+ if __name__ == "__main__":
+     uvicorn.run(app, host="0.0.0.0", port=7860)
model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2f54d9db020da33f99f861d41dc1334ec33adc14991ada4033a4ece790d0904e
+ size 312843624
requirements.txt ADDED
@@ -0,0 +1,78 @@
+ absl-py==2.4.0
+ annotated-doc==0.0.4
+ annotated-types==0.7.0
+ anyio==4.13.0
+ astunparse==1.6.3
+ certifi==2026.2.25
+ charset-normalizer==3.4.7
+ click==8.3.2
+ contourpy==1.3.3
+ cycler==0.12.1
+ efficientnet==1.1.1
+ fastapi==0.136.0
+ filelock==3.28.0
+ flatbuffers==25.12.19
+ fonttools==4.62.1
+ fsspec==2026.3.0
+ gast==0.7.0
+ google-pasta==0.2.0
+ grpcio==1.80.0
+ h11==0.16.0
+ h5py==3.14.0
+ idna==3.11
+ ImageIO==2.37.3
+ Jinja2==3.1.6
+ joblib==1.5.3
+ keras==3.14.0
+ Keras-Applications==1.0.8
+ kiwisolver==1.5.0
+ lazy-loader==0.5
+ libclang==18.1.1
+ lz4==4.4.5
+ markdown-it-py==4.0.0
+ MarkupSafe==3.0.3
+ matplotlib==3.10.8
+ mdurl==0.1.2
+ ml_dtypes==0.5.4
+ mpmath==1.3.0
+ mtcnn==1.0.0
+ namex==0.1.0
+ networkx==3.6.1
+ numpy==2.4.4
+ opencv-python==4.13.0.92
+ opt_einsum==3.4.0
+ optree==0.19.0
+ packaging==26.1
+ pandas==3.0.2
+ pillow==12.2.0
+ polars==1.39.3
+ polars-runtime-32==1.39.3
+ protobuf==7.34.1
+ psutil==7.2.2
+ pydantic==2.13.1
+ pydantic_core==2.46.1
+ Pygments==2.20.0
+ pyparsing==3.3.2
+ python-dateutil==2.9.0.post0
+ python-multipart==0.0.26
+ PyYAML==6.0.3
+ requests==2.33.1
+ rich==15.0.0
+ scikit-image==0.26.0
+ scipy==1.17.1
+ six==1.17.0
+ split-folders==0.6.1
+ starlette==1.0.0
+ sympy==1.14.0
+ tensorflow==2.21.0
+ termcolor==3.3.0
+ tifffile==2026.3.3
+ torch==2.11.0
+ torchvision==0.26.0
+ typing-inspection==0.4.2
+ typing_extensions==4.15.0
+ ultralytics==8.4.38
+ ultralytics-thop==2.0.18
+ urllib3==2.6.3
+ uvicorn==0.44.0
+ wrapt==2.1.2
start_server.sh ADDED
@@ -0,0 +1,12 @@
+ #!/bin/bash
+
+ # Navigate to the directory containing this script (optional but good practice)
+ cd "$(dirname "$0")"
+
+ echo "Starting Deepfake Detection Backend Server..."
+
+ # Activate the virtual environment
+ source venv/bin/activate
+
+ # Run the backend via app.py
+ python3 app.py
yolo11n-pose.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:869e83fcdffdc7371fa4e34cd8e51c838cc729571d1635e5141e3075e9319dc0
+ size 6255593
yolo11n.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0ebbc80d4a7680d14987a577cd21342b65ecfd94632bd9a8da63ae6417644ee1
+ size 5613764