VanNguyen1214 committed on
Commit 07f3e72 · verified · 1 Parent(s): 721dc71

Upload 60 files
Files changed (8)
  1. README.md +158 -3
  2. app.py +182 -182
  3. apply_color_transfer.py +118 -3
  4. baldhead.py +24 -5
  5. overlay.py +93 -93
  6. requirements.txt +34 -34
  7. setup.py +131 -0
  8. swapface.py +52 -52
README.md CHANGED
@@ -1,6 +1,6 @@
  ---
- title: Ghep Image
- emoji: 📉
  colorFrom: pink
  colorTo: blue
  sdk: gradio
@@ -9,4 +9,159 @@ app_file: app.py
  pinned: false
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
  ---
+ title: AI Wig Try-On System
+ emoji: 💇‍♀️
  colorFrom: pink
  colorTo: blue
  sdk: gradio

  pinned: false
  ---

+ # 🎭 AI Wig Try-On System
+
+ An AI-powered virtual wig try-on system that automatically classifies face shape and recommends suitable wigs.
+
+ ## ✨ Key Features
+
+ - 🤖 **Automatic face-shape classification**: the model recognizes 5 face shapes (Heart, Oblong, Oval, Round, Square)
+ - 🎯 **Smart wig recommendations**: wigs that suit the detected face shape are shown automatically
+ - 🎨 **High-quality face swap**: uses roop for natural-looking face swapping
+ - 🌈 **Automatic color transfer**: adjusts skin tone using the forehead region for a harmonious result
+ - 🖼️ **Friendly interface**: a simple Gradio web UI
+
+ ## 🏗️ System Architecture
+
+ ```
+ 📁 Project Structure
+ ├── app.py                  # Main Gradio interface
+ ├── detect_face.py          # Face shape classification (EfficientNet-B4)
+ ├── overlay.py              # Hair overlay processing
+ ├── swapface.py             # Face swapping using roop
+ ├── apply_color_transfer.py # Color matching from forehead region
+ ├── baldhead.py             # Hair removal using GAN
+ ├── segmentation.py         # Hair/face segmentation
+ ├── setup.py                # Project setup checker
+ └── example_wigs/           # Wig samples organized by face shape
+     ├── Heart/
+     ├── Oblong/
+     ├── Oval/
+     ├── Round/
+     └── Square/
+ ```
+
+ ## 🚀 Installation and Running
+
+ ### 1. Install dependencies
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+ ### 2. Verify the setup
+ ```bash
+ python setup.py
+ ```
+
+ ### 3. Run the app
+ ```bash
+ python app.py
+ ```
+
+ ## 🎯 How to Use
+
+ 1. **Upload a face photo** into the "Background" pane
+ 2. **Automatic classification**: the system analyzes the face and shows recommended wigs
+ 3. **Pick a wig**: click a gallery thumbnail to choose a style
+ 4. **Process**: press the "🔄 Run" button to perform the face swap
+ 5. **Result**: view the final output with naturally adjusted skin tone
+
+ ## 🤖 AI Models Used
+
+ - **Face Classification**: custom-trained EfficientNet-B4
+ - **Hair Segmentation**: SegFormer from Hugging Face
+ - **Face Detection**: MediaPipe + RetinaFace
+ - **Hair Removal**: custom GAN model
+ - **Face Swapping**: roop library
+ - **Color Transfer**: LAB color space histogram matching
+
+ ## 🎨 Color Transfer Technology
+
+ The system automatically:
+ - 🎯 **Detects the forehead region** with MediaPipe
+ - 🌈 **Samples a reference color** from the forehead (upper 30% of the face)
+ - 🔄 **Applies color matching** at 90% opacity
+ - ✨ **Produces a natural result** with harmonized skin tone
+
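The match-then-blend step described above can be sketched as follows. This is an illustrative simplification, not the repo's code: `match_and_blend` is a hypothetical name, and it uses Reinhard-style per-channel mean/std alignment in plain RGB instead of the LAB histogram matching via OpenCV that `apply_color_transfer.py` actually performs.

```python
import numpy as np

def match_and_blend(source: np.ndarray, reference: np.ndarray, opacity: float = 0.9) -> np.ndarray:
    """Align source's per-channel statistics to reference, then blend by opacity.

    source, reference: HxWx3 uint8 arrays. The real module works in LAB via
    OpenCV histogram matching; plain RGB mean/std alignment is used here to
    keep the sketch dependency-free.
    """
    src = source.astype(np.float32)
    ref = reference.astype(np.float32)
    s_mean, s_std = src.mean(axis=(0, 1)), src.std(axis=(0, 1)) + 1e-6
    r_mean, r_std = ref.mean(axis=(0, 1)), ref.std(axis=(0, 1))
    matched = (src - s_mean) / s_std * r_std + r_mean    # Reinhard-style transfer
    blended = opacity * matched + (1.0 - opacity) * src  # opacity=0 keeps the source
    return np.clip(blended, 0, 255).round().astype(np.uint8)
```

With `opacity=0.9`, as in the pipeline, the result is 90% color-corrected and retains 10% of the original tones, which avoids an over-flattened look.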
+ ## 📊 Supported Face Shapes
+
+ | Face Shape | Description | Wig Recommendations |
+ |------------|-------------|---------------------|
+ | **Heart** | Wide forehead, pointed chin | Bob, lob, layered cuts |
+ | **Oblong** | Long face, high length-to-width ratio | Volume, fringe |
+ | **Oval** | Balanced, ideal proportions | Suits most styles |
+ | **Round** | Round face, full cheeks | Long hair, layers |
+ | **Square** | Angular, square jaw | Curls, soft waves |
+
+ ## 🛠️ Technical Requirements
+
+ - Python 3.8+
+ - CUDA-compatible GPU (recommended)
+ - RAM: 8GB+
+ - Storage: 5GB+ for models
+
+ ## 📋 Key Dependencies
+
+ - `gradio` - Web interface
+ - `torch` + `torchvision` - Deep learning
+ - `transformers` - Hugging Face models
+ - `mediapipe` - Face detection
+ - `opencv-python` - Image processing
+ - `insightface` - Face analysis
+ - `tensorflow` - GAN models
+
+ ## 🏃‍♂️ Quick Start
+
+ ```bash
+ # Clone and setup
+ git clone <repo-url>
+ cd be_rejection
+
+ # Install requirements
+ pip install -r requirements.txt
+
+ # Check setup
+ python setup.py
+
+ # Run app
+ python app.py
+ ```
+
+ ## 🎭 Adding New Wigs
+
+ To add a new wig:
+ 1. Place the wig image in the matching face-shape folder under `example_wigs/`
+ 2. Format: PNG/JPG with a transparent background (recommended)
+ 3. Size: at least 512x512px
+ 4. Quality: sharp image, shot from a straight-on angle
+
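The size guideline above can be enforced before dropping a file into `example_wigs/`. The helper below is hypothetical (not part of this repo); it reads only the PNG signature and IHDR chunk, so the image never has to be decoded:

```python
import struct

def check_wig_png(data: bytes, min_side: int = 512) -> bool:
    """Return True if `data` is a PNG whose width and height meet the guideline.

    Reads only the 8-byte signature and the IHDR chunk, which the PNG spec
    requires to be the first chunk in the file.
    """
    signature = b"\x89PNG\r\n\x1a\n"
    if len(data) < 24 or not data.startswith(signature):
        return False  # not a (complete enough) PNG
    # Bytes 16-23 hold IHDR's big-endian width and height
    width, height = struct.unpack(">II", data[16:24])
    return width >= min_side and height >= min_side
```

Run it on the raw file bytes (`check_wig_png(Path("wig.png").read_bytes())`) before copying the wig into a face-shape folder.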
+ ## 🔧 Troubleshooting
+
+ **Model fails to load:**
+ ```bash
+ # Check internet connectivity and Hugging Face Hub access
+ python -c "from transformers import pipeline; print('HF Hub OK')"
+ ```
+
+ **CUDA errors:**
+ ```bash
+ # Switch to CPU mode
+ export CUDA_VISIBLE_DEVICES=""
+ ```
+
+ **Out-of-memory errors:**
+ - Reduce the batch size
+ - Use a GPU with more memory
+ - Reduce the input image size
+
+ ## 📝 License
+
+ This project was developed for educational and research purposes.
+
+ ## 🤝 Contributing
+
+ Contributions are welcome! Please feel free to submit a Pull Request.
+
+ ---
+
+ *Powered by AI • Made with ❤️ in Vietnam*
app.py CHANGED
@@ -1,182 +1,182 @@
+
+ import gradio as gr
+ from overlay import overlay_source
+ from detect_face import predict, NUM_CLASSES
+ from swapface import swap_face_now
+
+ import os
+ from pathlib import Path
+ from PIL import Image
+
+
+ BASE_DIR = Path(__file__).parent  # directory containing app.py
+ FOLDER = BASE_DIR / "example_wigs"
+
+ # --- Load images from a folder ---
+ def load_images_from_folder(folder_path: str) -> list[str]:
+     """
+     Return a list[str] of all images (jpg, png, gif, bmp) in folder_path.
+     """
+     supported = {'.jpg', '.jpeg', '.png', '.gif', '.bmp'}
+     if not os.path.isdir(folder_path):
+         print(f"Warning: '{folder_path}' is not a valid folder.")
+         return []
+     files = [
+         os.path.join(folder_path, fn)
+         for fn in os.listdir(folder_path)
+         if os.path.splitext(fn)[1].lower() in supported
+     ]
+     if not files:
+         print(f"No images found in: {folder_path}")
+     return files
+
+
+ def on_gallery_select(evt: gr.SelectData):
+     """
+     When a thumbnail is clicked, return:
+     1) the filepath to load into the Source image
+     2) the file name (basename) to show in the Textbox
+     """
+     val = evt.value
+
+     # --- extract the filepath (same logic as before) ---
+     if isinstance(val, dict):
+         img = val.get("image")
+         if isinstance(img, str):
+             filepath = img
+         elif isinstance(img, dict):
+             filepath = img.get("path") or img.get("url")
+         else:
+             filepath = next(
+                 (v for v in val.values() if isinstance(v, str) and os.path.isfile(v)),
+                 None
+             )
+     elif isinstance(val, str):
+         filepath = val
+     else:
+         raise ValueError(f"Unsupported type: {type(val)}")
+
+     # File name without the extension
+     filename = os.path.splitext(os.path.basename(filepath))[0] if filepath else ""
+     return filepath, filename
+
+ # --- Determine the wig folder from the predicted face shape ---
+ def infer_folder(image) -> str:
+     cls = predict(image)["predicted_class"]
+     folder = str(FOLDER / cls)
+     return folder
+
+ # --- Combined handler: classify the face, then load matching images ---
+ def handle_bg_change(image):
+     """
+     When the background changes:
+     1. Classify the face shape
+     2. Load images from the corresponding folder
+     """
+     if image is None:
+         return "", []
+
+     try:
+         folder = infer_folder(image)
+         images = load_images_from_folder(folder)
+         return folder, images
+     except Exception as e:
+         print(f"Image processing error: {e}")
+         return "", []
+
+ # --- Full pipeline: hair overlay + final face swap ---
+ def complete_pipeline(background: Image.Image, source: Image.Image):
+     if background is None or source is None:
+         return None
+
+     try:
+         # Import the color transfer function
+         from apply_color_transfer import apply_color_transfer_to_output
+
+         # Step 1: overlay the hair
+         intermediate_result = overlay_source(background, source)
+         if intermediate_result is None:
+             print("Overlay failed.")
+             return None
+
+         # Step 2: swap the face, passing PIL Images directly
+         final_result = swap_face_now(
+             background.convert("RGB"),
+             intermediate_result.convert("RGB")
+         )
+
+         if final_result is None:
+             print("Swap failed.")
+             result_img = intermediate_result  # fallback
+         else:
+             print("Swap success.")
+             result_img = final_result  # already a PIL.Image
+
+         # Step 3: apply color transfer to match the skin tone
+         result_with_color_transfer = apply_color_transfer_to_output(
+             swapped_img=result_img,
+             original_bg=background,
+             opacity=0.9
+         )
+
+         return result_with_color_transfer
+
+     except Exception as e:
+         print(f"Error in complete_pipeline: {e}")
+         return None
+
+ # --- Build the Gradio interface ---
+ def build_demo():
+     with gr.Blocks(title="Process two images", theme=gr.themes.Soft()) as demo:
+         gr.Markdown("Upload Background & Source, click **Run** to try on wigs.")
+
+         with gr.Row():
+             bg = gr.Image(type="pil", label="Background", height=500)
+             src = gr.Image(type="pil", label="Source", height=500, interactive=False)
+             out = gr.Image(label="Result", height=500, interactive=False)
+
+         folder_path_box = gr.Textbox(label="Folder path", visible=False)
+
+         with gr.Row():
+             src_name_box = gr.Textbox(
+                 label="Wigs Name",
+                 interactive=False,
+                 show_copy_button=True,  # optional - convenient for copying the path
+                 scale=1
+             )
+             gallery = gr.Gallery(
+                 label="Recommend For You",
+                 height=300,
+                 value=[],
+                 type="filepath",
+                 interactive=False,
+                 columns=5,
+                 object_fit="cover",
+                 allow_preview=True,
+                 scale=8
+             )
+             btn = gr.Button("🔄 Run", variant="primary", scale=1)
+
+         # Run the full pipeline: hair overlay + final face swap
+         btn.click(fn=complete_pipeline, inputs=[bg, src], outputs=[out])
+         # When the background image changes, classify it and load suggested wigs
+         bg.change(
+             fn=handle_bg_change,
+             inputs=[bg],
+             outputs=[folder_path_box, gallery],
+             show_progress=True
+         )
+         # Manual image reload button (backup)
+         # When a gallery image is selected, load it into the Source pane
+         gallery.select(
+             fn=on_gallery_select,
+             outputs=[src, src_name_box]
+         )
+
+     return demo
+
+ if __name__ == "__main__":
+     build_demo().launch()
apply_color_transfer.py CHANGED
@@ -1,16 +1,79 @@
  import cv2
  import numpy as np
  from PIL import Image


- def apply_color_transfer_to_output(swapped_img, original_bg, opacity=0.7):
      """
      Apply color transfer to make the swapped face match the original background lighting.

      Args:
          swapped_img: PIL Image - The image after face swapping
          original_bg: PIL Image - The original background image for color reference
          opacity: float - Opacity of color transfer (0.0 = no effect, 1.0 = full effect)

      Returns:
          PIL Image - The color-corrected result
@@ -23,8 +86,26 @@ def apply_color_transfer_to_output(swapped_img, original_bg, opacity=0.7):
      swapped_np = np.array(swapped_img.convert('RGB'))
      original_np = np.array(original_bg.convert('RGB'))

-     # Apply color transfer using histogram matching
-     color_corrected = histogram_matching(swapped_np, original_np)

      # Blend original and color-corrected image based on opacity
      result = blend_images(swapped_np, color_corrected, opacity)
@@ -90,6 +171,40 @@ def histogram_matching(source, reference):
      return result


  def match_histograms(source, reference):
      """
      Match histogram of source image to reference image.
  import cv2
  import numpy as np
  from PIL import Image
+ import mediapipe as mp

+ # MediaPipe Face Detection
+ mp_face_detection = mp.solutions.face_detection.FaceDetection(model_selection=1, min_detection_confidence=0.5)

+
+ def get_forehead_region(image):
+     """
+     Detect forehead region using MediaPipe face detection.
+     Returns the highest point of the face (forehead area).
+
+     Args:
+         image: PIL Image or numpy array
+
+     Returns:
+         tuple: (x, y, width, height) of forehead region or None if no face detected
+     """
+     try:
+         # Convert to numpy if PIL Image
+         if isinstance(image, Image.Image):
+             img_array = np.array(image.convert('RGB'))
+         else:
+             img_array = image
+
+         # Detect face
+         results = mp_face_detection.process(img_array)
+
+         if not results.detections:
+             return None
+
+         # Get first detected face
+         detection = results.detections[0]
+         bbox = detection.location_data.relative_bounding_box
+
+         h, w = img_array.shape[:2]
+
+         # Convert relative coordinates to absolute
+         x = int(bbox.xmin * w)
+         y = int(bbox.ymin * h)
+         width = int(bbox.width * w)
+         height = int(bbox.height * h)
+
+         # Define forehead region (upper 30% of face)
+         forehead_height = int(height * 0.3)
+         forehead_y = max(0, y - int(forehead_height * 0.2))  # Slightly above face bbox
+
+         # Ensure forehead region is within image bounds
+         forehead_x = max(0, x + int(width * 0.2))  # Start from 20% into face width
+         forehead_width = int(width * 0.6)  # Use 60% of face width
+
+         # Clamp to image boundaries
+         forehead_x = max(0, min(forehead_x, w - 1))
+         forehead_y = max(0, min(forehead_y, h - 1))
+         forehead_width = max(1, min(forehead_width, w - forehead_x))
+         forehead_height = max(1, min(forehead_height, h - forehead_y))
+
+         return (forehead_x, forehead_y, forehead_width, forehead_height)
+
+     except Exception as e:
+         print(f"Forehead detection failed: {e}")
+         return None
+
+
+ def apply_color_transfer_to_output(swapped_img, original_bg, opacity=0.7, color_region=None):
      """
      Apply color transfer to make the swapped face match the original background lighting.
+     Automatically uses forehead region as color reference by default.

      Args:
          swapped_img: PIL Image - The image after face swapping
          original_bg: PIL Image - The original background image for color reference
          opacity: float - Opacity of color transfer (0.0 = no effect, 1.0 = full effect)
+         color_region: tuple or None - (x, y, width, height) for specific color sampling region

      Returns:
          PIL Image - The color-corrected result

      swapped_np = np.array(swapped_img.convert('RGB'))
      original_np = np.array(original_bg.convert('RGB'))

+     # If no specific region provided, automatically detect forehead
+     if color_region is None:
+         color_region = get_forehead_region(original_bg)
+         if color_region:
+             print(f"Auto-detected forehead region: {color_region}")
+
+     # If a specific region is provided or auto-detected, use it for color reference
+     if color_region is not None:
+         x, y, w, h = color_region
+         # Extract color reference from the specific region
+         reference_region = original_np[y:y+h, x:x+w]
+         if reference_region.size > 0:
+             # Apply color transfer using the selected region
+             color_corrected = histogram_matching_with_region(swapped_np, original_np, reference_region)
+         else:
+             # Fall back to the full image if the region is invalid
+             color_corrected = histogram_matching(swapped_np, original_np)
+     else:
+         # Apply color transfer using histogram matching on the full image
+         color_corrected = histogram_matching(swapped_np, original_np)

      # Blend original and color-corrected image based on opacity
      result = blend_images(swapped_np, color_corrected, opacity)

      return result


+ def histogram_matching_with_region(source, reference_full, reference_region):
+     """
+     Perform histogram matching using a specific region as color reference.
+
+     Args:
+         source: numpy array - Source image to be color corrected
+         reference_full: numpy array - Full reference image (not used directly)
+         reference_region: numpy array - Specific region to extract color from
+
+     Returns:
+         numpy array - Color corrected image
+     """
+     # Convert to LAB color space
+     source_lab = cv2.cvtColor(source, cv2.COLOR_RGB2LAB)
+     region_lab = cv2.cvtColor(reference_region, cv2.COLOR_RGB2LAB)
+
+     # Split channels
+     source_l, source_a, source_b = cv2.split(source_lab)
+     region_l, region_a, region_b = cv2.split(region_lab)
+
+     # Match each channel against the region's histogram
+     matched_l = match_histograms(source_l, region_l.ravel())
+     matched_a = match_histograms(source_a, region_a.ravel())
+     matched_b = match_histograms(source_b, region_b.ravel())
+
+     # Merge channels back
+     matched_lab = cv2.merge([matched_l, matched_a, matched_b])
+
+     # Convert back to RGB
+     result = cv2.cvtColor(matched_lab, cv2.COLOR_LAB2RGB)
+
+     return result
+
+
  def match_histograms(source, reference):
      """
      Match histogram of source image to reference image.
baldhead.py CHANGED
@@ -213,17 +213,36 @@ def build_generator():

  HF_REPO_ID = "VanNguyen1214/baldhead"
  HF_FILENAME = "model_G_5_170.hdf5"
- HF_TOKEN = os.environ["HUGGINGFACEHUB_API_TOKEN"]

  def load_generator_from_hub():
      """
      Download the .hdf5 weights from HF Hub into cache,
      rebuild the generator, then load weights.
      """
-     local_path = hf_hub_download(repo_id=HF_REPO_ID, filename=HF_FILENAME, token=HF_TOKEN)
-     gen = build_generator()
-     gen.load_weights(local_path)
-     return gen

  # Load once at startup
  try:

  HF_REPO_ID = "VanNguyen1214/baldhead"
  HF_FILENAME = "model_G_5_170.hdf5"

  def load_generator_from_hub():
      """
      Download the .hdf5 weights from HF Hub into cache,
      rebuild the generator, then load weights.
      """
+     try:
+         # Try with token first if available
+         token = os.environ.get("HUGGINGFACEHUB_API_TOKEN")
+         local_path = hf_hub_download(
+             repo_id=HF_REPO_ID,
+             filename=HF_FILENAME,
+             token=token,
+             local_dir="models/baldhead"
+         )
+         gen = build_generator()
+         gen.load_weights(local_path)
+         return gen
+     except Exception as e:
+         print(f"[WARNING] Could not download from HF Hub: {e}")
+         # Try to load from local cache if it exists
+         local_cache_path = f"models/baldhead/{HF_FILENAME}"
+         if os.path.exists(local_cache_path):
+             print(f"[INFO] Loading from local cache: {local_cache_path}")
+             gen = build_generator()
+             gen.load_weights(local_cache_path)
+             return gen
+         else:
+             print(f"[ERROR] No local cache found at {local_cache_path}")
+             return None

  # Load once at startup
  try:
overlay.py CHANGED
@@ -1,93 +1,93 @@
+ import numpy as np
+ from PIL import Image
+ import mediapipe as mp
+
+ from baldhead import inference  # removes hair from the background image
+ from segmentation import extract_hair_face_full_forehead
+ from swapface import swap_face_now
+ from apply_color_transfer import apply_color_transfer_to_output
+
+ # MediaPipe Face Detection
+ mp_fd = mp.solutions.face_detection.FaceDetection(model_selection=1,
+                                                   min_detection_confidence=0.5)
+
+ def get_face_bbox(img: Image.Image) -> tuple[int, int, int, int] | None:
+     arr = np.array(img.convert("RGB"))
+     res = mp_fd.process(arr)
+     if not res.detections:
+         return None
+     d = res.detections[0].location_data.relative_bounding_box
+     h, w = arr.shape[:2]
+     x1 = int(d.xmin * w)
+     y1 = int(d.ymin * h)
+     x2 = x1 + int(d.width * w)
+     y2 = y1 + int(d.height * h)
+     return x1, y1, x2, y2
+
+ def compute_scale(w_bg, h_bg, w_src, h_src) -> float:
+     return ((w_bg / w_src) + (h_bg / h_src)) / 2
+
+ def compute_offset(bbox_bg, bbox_src, scale) -> tuple[int, int]:
+     x1, y1, x2, y2 = bbox_bg
+     bg_cx = x1 + (x2 - x1) // 2
+     bg_cy = y1 + (y2 - y1) // 2
+     sx1, sy1, sx2, sy2 = bbox_src
+     src_cx = int((sx1 + (sx2 - sx1) // 2) * scale)
+     src_cy = int((sy1 + (sy2 - sy1) // 2) * scale)
+     return bg_cx - src_cx, bg_cy - src_cy
+
+ def paste_with_alpha(bg: np.ndarray, src: np.ndarray, offset: tuple[int, int]) -> Image.Image:
+     res = bg.copy()
+     x, y = offset
+     h, w = src.shape[:2]
+     x1, y1 = max(x, 0), max(y, 0)
+     x2 = min(x + w, bg.shape[1])
+     y2 = min(y + h, bg.shape[0])
+     if x1 >= x2 or y1 >= y2:
+         return Image.fromarray(res)
+     cs = src[y1 - y:y2 - y, x1 - x:x2 - x]
+     cd = res[y1:y2, x1:x2]
+     mask = cs[..., 3] > 0
+     if cd.shape[2] == 3:
+         cd[mask] = cs[mask][..., :3]
+     else:
+         cd[mask] = cs[mask]
+     res[y1:y2, x1:x2] = cd
+     return Image.fromarray(res)
+
+ def overlay_source(background: Image.Image, source: Image.Image):
+     # 1) detect bboxes; return plain None on failure so callers' `is None` checks work
+     bbox_bg = get_face_bbox(background)
+     bbox_src = get_face_bbox(source)
+     if bbox_bg is None:
+         print("❌ No face in background.")
+         return None
+     if bbox_src is None:
+         print("❌ No face in source.")
+         return None
+
+     # 2) compute scale & resize source
+     w_bg, h_bg = bbox_bg[2] - bbox_bg[0], bbox_bg[3] - bbox_bg[1]
+     w_src, h_src = bbox_src[2] - bbox_src[0], bbox_src[3] - bbox_src[1]
+     scale = compute_scale(w_bg, h_bg, w_src, h_src)
+     src_scaled = source.resize(
+         (int(source.width * scale), int(source.height * scale)),
+         Image.Resampling.LANCZOS
+     )
+
+     # 3) compute offset
+     offset = compute_offset(bbox_bg, bbox_src, scale)
+
+     # 4) remove hair from the background
+     bg_bald = inference(background)
+
+     # 5) extract hair-only from source
+     full_head = extract_hair_face_full_forehead(src_scaled)
+
+     # 6) paste onto the bald background
+     src_change = paste_with_alpha(
+         np.array(bg_bald.convert("RGBA")),
+         np.array(full_head),
+         offset
+     )
+
+     # Return the intermediate result for face swapping in complete_pipeline
+     return src_change.convert("RGB")
requirements.txt CHANGED
@@ -1,35 +1,35 @@
- --extra-index-url https://download.pytorch.org/whl/cu118  # pip configuration, not a package
- # spaces  # Not clearly a package; may just be a note. If it is not a package, delete it.
- huggingface_hub>=0.20.3
- numpy==1.23.5
- transformers==4.30.0
- opencv-python-headless==4.7.0.72
- onnx==1.14.0
- insightface==0.7.3
- psutil==5.9.5
- tk==0.1.0  # Note: tk usually ships with a standard Python install and does not always need pip.
- customtkinter==5.1.3
- pillow==9.5.0
- torch==2.0.1+cu118; sys_platform != 'darwin'
- torch==2.0.1; sys_platform == 'darwin'
- torchvision==0.15.2+cu118; sys_platform != 'darwin'
- torchvision==0.15.2; sys_platform == 'darwin'
- # onnxruntime==1.15.0;  # Uncomment this line to pin the version on every OS
- # sys_platform == 'darwin' and platform_machine != 'arm64'  # Comment
- onnxruntime-silicon==1.13.1; sys_platform == 'darwin' and platform_machine == 'arm64'
- onnxruntime-gpu==1.15.0; sys_platform != 'darwin'  # Keep this line for non-darwin GPU
- onnxruntime==1.15.0; sys_platform == 'darwin' and platform_machine != 'arm64'  # Re-added onnxruntime for Intel Macs
- tensorflow==2.12.0
- # sys_platform != 'darwin'  # Comment
- opennsfw2==0.10.2
- # protobuf==4.23.2  # Replaced by the line below
- protobuf==4.25.3  # *** IMPORTANT CHANGE ***
- tqdm==4.65.0
- gfpgan==1.3.8
- # torch  # Not needed: torch is already pinned above with a specific version.
-
- # New libraries required by the updated app.py
- scikit-image>=0.19  # Or pin a more specific version, e.g. scikit-image==0.19.3
- mediapipe==0.10.14  # *** NEW OR UPDATED *** (this version requires protobuf >=4.25.3)
- git+https://github.com/keras-team/keras-contrib.git
  retina-face==0.0.13
 
1
+ --extra-index-url https://download.pytorch.org/whl/cu118 # Dòng này có vẻ là comment hoặc cấu hình cho pip, không phải là một gói
2
+ # spaces # Dòng này không rõ ràng là một gói, có thể là ghi chú. Nếu không phải gói, hãy xóa đi.
3
+ huggingface_hub>=0.20.3
4
+ numpy==1.23.5
5
+ transformers==4.30.0
6
+ opencv-python-headless==4.7.0.72
7
+ onnx==1.14.0
8
+ insightface==0.7.3
9
+ psutil==5.9.5
10
+ tk==0.1.0 # Lưu ý: tk thường được bao gồm trong bản cài đặt Python chuẩn, không phải lúc nào cũng cần cài qua pip.
11
+ customtkinter==5.1.3
12
+ pillow==9.5.0
13
+ torch==2.0.1+cu118; sys_platform != 'darwin'
14
+ torch==2.0.1; sys_platform == 'darwin'
15
+ torchvision==0.15.2+cu118; sys_platform != 'darwin'
16
+ torchvision==0.15.2; sys_platform == 'darwin'
17
+ # onnxruntime==1.15.0; # Bỏ comment cho dòng này nếu bạn muốn cố định phiên bản cho mọi OS
18
+ # sys_platform == 'darwin' and platform_machine != 'arm64' # Comment
19
+ onnxruntime-silicon==1.13.1; sys_platform == 'darwin' and platform_machine == 'arm64'
20
+ onnxruntime-gpu==1.15.0; sys_platform != 'darwin' # Nên giữ lại dòng này cho non-darwin GPU
21
+ onnxruntime==1.15.0; sys_platform == 'darwin' and platform_machine != 'arm64' # Thêm lại dòng onnxruntime cho Mac Intel
22
+ tensorflow==2.12.0
23
+ # sys_platform != 'darwin' # Comment
24
+ opennsfw2==0.10.2
25
+ # protobuf==4.23.2 # Thay thế dòng này
26
+ protobuf==4.25.3 # *** THAY ĐỔI QUAN TRỌNG ***
27
+ tqdm==4.65.0
28
+ gfpgan==1.3.8
29
+ # torch # Dòng này không cần thiết vì torch đã được định nghĩa ở trên với phiên bản cụ thể.
30
+
31
+ # Thêm các thư viện mới cần thiết cho app.py đã cập nhật
32
+ scikit-image>=0.19 # Hoặc một phiên bản cụ thể hơn nếu bạn muốn, ví dụ: scikit-image==0.19.3
33
+ mediapipe==0.10.14 # *** THÊM MỚI HOẶC CẬP NHẬT *** (Phiên bản này yêu cầu protobuf >=4.25.3)
34
+ git+https://github.com/keras-team/keras-contrib.git
35
  retina-face==0.0.13
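The `; sys_platform ...` suffixes above are PEP 508 environment markers; pip evaluates them against the installing machine and skips requirements whose marker is false. A minimal sketch using the `packaging` library (a common setuptools dependency, not part of this project's code) shows how they resolve:

```python
# Sketch: evaluating the environment markers used in requirements.txt.
# Marker strings mirror the file above; the environments are illustrative.
from packaging.markers import Marker

def applies(marker: str, env: dict) -> bool:
    """Return True if a requirement's environment marker matches env."""
    return Marker(marker).evaluate(env)

# torch==2.0.1+cu118 is selected on Linux/Windows, not on macOS:
linux_gpu = applies("sys_platform != 'darwin'", {"sys_platform": "linux"})

# onnxruntime-silicon targets Apple Silicon Macs only:
mac_arm = applies("sys_platform == 'darwin' and platform_machine == 'arm64'",
                  {"sys_platform": "darwin", "platform_machine": "arm64"})
```

This is why the file can pin `torch` twice without conflict: at most one of the two markers is true on any given platform.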
setup.py ADDED
@@ -0,0 +1,131 @@
+ """
+ Setup script for FaceSwap AI Wig Try-On Project
+ Ensures all models and dependencies are properly configured
+ """
+
+ import os
+ import sys
+ from pathlib import Path
+
+ def check_models():
+     """Check if all required models are accessible"""
+     print("🔍 Checking AI models...")
+
+     models_status = {}
+
+     # Check face detection model
+     try:
+         from detect_face import predict, _MODEL
+         if _MODEL is not None:
+             models_status['face_detection'] = "✅ Ready"
+         else:
+             models_status['face_detection'] = "❌ Failed to load"
+     except Exception as e:
+         models_status['face_detection'] = f"❌ Error: {e}"
+
+     # Check segmentation model
+     try:
+         from segmentation import processor, model
+         models_status['segmentation'] = "✅ Ready"
+     except Exception as e:
+         models_status['segmentation'] = f"❌ Error: {e}"
+
+     # Check baldhead model
+     try:
+         from baldhead import GENERATOR
+         if GENERATOR is not None:
+             models_status['baldhead'] = "✅ Ready"
+         else:
+             models_status['baldhead'] = "⚠️ Model not loaded (may download on first use)"
+     except Exception as e:
+         models_status['baldhead'] = f"❌ Error: {e}"
+
+     # Check roop face swap
+     try:
+         import roop.core
+         models_status['face_swap'] = "✅ Ready"
+     except Exception as e:
+         models_status['face_swap'] = f"❌ Error: {e}"
+
+     return models_status
+
+ def check_directories():
+     """Ensure required directories exist"""
+     print("📁 Checking directories...")
+
+     required_dirs = [
+         "example_wigs/Heart",
+         "example_wigs/Oblong",
+         "example_wigs/Oval",
+         "example_wigs/Round",
+         "example_wigs/Square",
+         "models",
+         "temp"
+     ]
+
+     for dir_path in required_dirs:
+         Path(dir_path).mkdir(parents=True, exist_ok=True)
+         print(f"   ✅ {dir_path}")
+
+ def check_wig_samples():
+     """Check if wig samples exist"""
+     print("🎭 Checking wig samples...")
+
+     wig_count = {}
+     for shape in ["Heart", "Oblong", "Oval", "Round", "Square"]:
+         shape_dir = Path(f"example_wigs/{shape}")
+         if shape_dir.exists():
+             wigs = list(shape_dir.glob("*.png")) + list(shape_dir.glob("*.jpg"))
+             wig_count[shape] = len(wigs)
+             print(f"   {shape}: {len(wigs)} wigs")
+         else:
+             wig_count[shape] = 0
+             print(f"   {shape}: 0 wigs ❌")
+
+     return wig_count
+
+ def main():
+     """Main setup check"""
+     print("🚀 FaceSwap AI Wig Try-On - Setup Check")
+     print("=" * 50)
+
+     # Check directories
+     check_directories()
+     print()
+
+     # Check wig samples
+     wig_count = check_wig_samples()
+     print()
+
+     # Check models
+     models_status = check_models()
+     print()
+
+     # Summary
+     print("📊 SETUP SUMMARY:")
+     print("-" * 30)
+
+     print("🤖 AI Models:")
+     for model_name, status in models_status.items():
+         print(f"   {model_name}: {status}")
+
+     print(f"\n🎭 Wig Samples: {sum(wig_count.values())} total")
+     for shape, count in wig_count.items():
+         print(f"   {shape}: {count}")
+
+     # Check if ready to run
+     critical_models = ['face_detection', 'segmentation', 'face_swap']
+     ready = all("✅" in models_status.get(model, "") for model in critical_models)
+
+     if ready and sum(wig_count.values()) > 0:
+         print("\n🎉 Project is ready to run!")
+         print("   Run: python app.py")
+     else:
+         print("\n⚠️ Some components need attention:")
+         if not ready:
+             print("   - AI models need fixing")
+         if sum(wig_count.values()) == 0:
+             print("   - Add wig samples to example_wigs/ folders")
+
+ if __name__ == "__main__":
+     main()
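The readiness decision in `main()` reduces to one rule: every critical model's status string must contain the success mark, and at least one wig sample must exist. A small sketch of that check (the status dictionaries here are illustrative):

```python
# Sketch of the readiness check from setup.py's main(): a model counts as
# ready only when its status string contains the "✅" success mark.
critical_models = ['face_detection', 'segmentation', 'face_swap']

def is_ready(models_status: dict) -> bool:
    """True only if every critical model reports success."""
    return all("✅" in models_status.get(model, "") for model in critical_models)

ok = is_ready({'face_detection': "✅ Ready",
               'segmentation': "✅ Ready",
               'face_swap': "✅ Ready"})

degraded = is_ready({'face_detection': "✅ Ready",
                     'segmentation': "❌ Error: missing weights",
                     'face_swap': "✅ Ready"})
```

Note that `dict.get(model, "")` makes a missing entry count as a failure rather than raising, so an entirely empty status dictionary is safely reported as not ready.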
swapface.py CHANGED
@@ -1,53 +1,53 @@
+ import os
+ from PIL import Image
+ import numpy as np
+ import roop.globals
+ from roop.core import start, decode_execution_providers, suggest_max_memory, suggest_execution_threads
+ from roop.processors.frame.core import get_frame_processors_modules
+
+ def to_pil(img):
+     """Convert np.ndarray to PIL.Image if needed."""
+     if isinstance(img, np.ndarray):
+         return Image.fromarray(img)
+     return img
+
+ def swap_face_now(source_img, target_img, do_enhance=True):
+     TEMP_DIR = "temp/"
+     os.makedirs(TEMP_DIR, exist_ok=True)
+
+     # ✅ Safe type coercion
+     source_img = to_pil(source_img)
+     target_img = to_pil(target_img)
+
+     source_path = os.path.join(TEMP_DIR, "input.jpg")
+     target_path = os.path.join(TEMP_DIR, "target.jpg")
+     output_path = os.path.join(TEMP_DIR, "output.jpg")
+
+     # Convert to RGB before saving: JPEG has no alpha channel
+     source_img.convert("RGB").save(source_path)
+     target_img.convert("RGB").save(target_path)
+
+     # Roop config
+     roop.globals.source_path = source_path
+     roop.globals.target_path = target_path
+     roop.globals.output_path = output_path
+     roop.globals.frame_processors = ["face_swapper", "face_enhancer"] if do_enhance else ["face_swapper"]
+     roop.globals.headless = True
+     roop.globals.many_faces = False
+     roop.globals.max_memory = suggest_max_memory()
+     roop.globals.execution_providers = decode_execution_providers(["cuda"])
+     roop.globals.execution_threads = suggest_execution_threads()
+     roop.globals.keep_fps = False
+     roop.globals.keep_audio = False
+     roop.globals.keep_frames = False
+     roop.globals.video_encoder = None
+
+     for frame_processor in get_frame_processors_modules(roop.globals.frame_processors):
+         if not frame_processor.pre_check():
+             return None
+
+     start()
+
+     if not os.path.exists(output_path):
+         return None
+
+     return Image.open(output_path)
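The `to_pil` guard can be exercised without loading any roop models; a quick roundtrip sketch (the array shape is illustrative):

```python
# Sketch of the to_pil() guard above: NumPy arrays are converted to PIL
# images, while objects that are already PIL images pass through untouched.
import numpy as np
from PIL import Image

def to_pil(img):
    """Convert np.ndarray to PIL.Image if needed."""
    return Image.fromarray(img) if isinstance(img, np.ndarray) else img

arr = np.zeros((8, 6, 3), dtype=np.uint8)  # height 8, width 6
pic = to_pil(arr)                          # ndarray -> PIL.Image
same = to_pil(pic)                         # already PIL: returned as-is
```

Note that PIL reports `size` as (width, height) while NumPy shapes are (height, width, channels), so the 8x6 array above becomes an image of size (6, 8). The pass-through branch also means calling `to_pil` twice is harmless.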