Spaces: Running
Your Name committed · Commit 9cf1d05 · 1 Parent(s): 7d8f794
Implement remote image processing via a local server, enhancing the editing function with server-connection checks and performance settings. Update the UI to include local-server setup instructions and modify the main editing function to send requests to the server.
Browse files
- HYBRID_SETUP.md  +119 -0
- app.py           +135 -46
- app_hybrid.py    +269 -0
- local_server.py  +197 -0
HYBRID_SETUP.md
ADDED
@@ -0,0 +1,119 @@
# Hybrid Deployment Setup Instructions

This document provides detailed instructions for setting up the hybrid deployment of the AI-Powered Facial and Body Feature Editor, which uses your local GPU for processing while keeping the interface hosted on Hugging Face Spaces.

## Overview

The hybrid deployment consists of two components:

1. **Hugging Face Space** - Hosts the user interface and handles file uploads
2. **Local GPU Server** - Runs on your computer and processes the images using your NVIDIA 3060 GPU

This approach gives you the best of both worlds: the convenience of a hosted web application with the performance of your local GPU.

## Requirements

### For the Local Server

- Python 3.8 or higher
- PyTorch with CUDA support
- NVIDIA GPU with at least 6GB VRAM (your NVIDIA 3060 with 6GB is perfect)
- The following Python packages:
  - gradio
  - torch
  - torchvision
  - diffusers
  - transformers
  - opencv-python
  - pillow
  - numpy

## Step 1: Set Up the Local Server

1. Download the entire project zip file and extract it to a folder on your computer.

2. Install the required dependencies:
   ```bash
   pip install -r requirements.txt
   ```

3. Make sure you have PyTorch with CUDA support (the snippet after this list shows a quick way to verify):
   ```bash
   pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
   ```

4. Run the local server:
   ```bash
   python local_server.py
   ```

5. The server will start and display a URL like `https://12345.gradio.app` (your number will be different). This is your local server's public URL.

6. Keep this terminal window open while using the application.
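Before continuing, it can help to confirm that PyTorch actually sees your GPU. A minimal check in plain PyTorch, mirroring the startup check that `local_server.py` performs:

```python
import torch

# Confirm the CUDA build of PyTorch is installed and the GPU is visible.
if torch.cuda.is_available():
    print(f"GPU detected: {torch.cuda.get_device_name(0)}")
    print(f"VRAM: {torch.cuda.get_device_properties(0).total_memory / 1024**3:.2f} GB")
else:
    print("No GPU detected - reinstall PyTorch using the cu118 index URL above.")
```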
## Step 2: Update the Hugging Face Space

1. In your Hugging Face Space repository, replace the current `app.py` file with the `app_hybrid.py` file.

2. Rename `app_hybrid.py` to `app.py` or update the file directly.

3. Commit and push the changes:
   ```bash
   git add app.py
   git commit -m "Update to hybrid deployment mode"
   git push
   ```

4. Hugging Face will automatically rebuild your Space with the updated code.

## Step 3: Connect the Components

1. Once your Hugging Face Space is rebuilt, open it in your browser.

2. In the "Local GPU Server Connection" section, paste the URL from your local server (the `https://12345.gradio.app` URL).

3. Click "Check Connection" to verify that the connection is working (the snippet after this list shows the equivalent check from a terminal).

4. If the connection is successful, you'll see "Connected successfully" in the status field.

5. Now you can upload images and use the application as normal, but all processing will be done on your local GPU.
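If the in-app check fails, you can run the same health check from a terminal. This is the check `app.py` performs; the URL below is a placeholder for your own `gradio.app` URL:

```python
import requests

# Same health check the Space performs: a plain GET against the server root.
url = "https://12345.gradio.app/"  # placeholder - use the URL your server printed
response = requests.get(url, timeout=10)
print("Connected successfully" if response.status_code == 200
      else f"Server returned status code {response.status_code}")
```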
## Performance Tuning

You can adjust the following settings to balance quality and performance:

1. **Inference Steps**: Lower values (10-15) are faster; higher values (30-50) give better quality.

2. **Guidance Scale**: Controls how closely the model follows your prompt. Values between 5 and 10 work well.

3. **Processing Resolution**: Lower resolutions are faster but may lose detail. 512x512 is a good balance.

## Troubleshooting

### Connection Issues

- Make sure your local server is running
- Check that you've entered the correct URL
- Ensure your firewall isn't blocking the connection
- Try restarting the local server

### GPU Memory Issues

- Reduce the inference steps
- Use a lower processing resolution
- Close other GPU-intensive applications

### Image Processing Errors

- Try using a different image
- Reduce the complexity of your edits
- Check the local server terminal for error messages

## Security Considerations

The local server creates a temporary public URL that anyone can access while it's running. For security:

1. Only run the local server when you're actively using the application
2. Stop the server when you're done (Ctrl+C in the terminal)
3. Don't share your local server URL with others unless you want them to use your GPU

## Additional Notes

- The local server will use your GPU resources, which might affect other applications running on your computer
- Processing time will vary based on your GPU capabilities, but should be significantly faster than the CPU-only version
- You can monitor GPU usage in the local server interface or with tools such as `nvidia-smi` or the Windows Task Manager
app.py
CHANGED
@@ -1,16 +1,10 @@ · @@ -23,50 +17,74 @@ · @@ -74,6 +92,37 @@ · @@ -113,6 +162,28 @@ · @@ -136,14 +207,18 @@ · @@ -157,8 +232,6 @@ · @@ -172,6 +245,22 @@

[Removed lines (left-hand column of the diff; mostly truncated in extraction): the previous app.py imported the model and utilities directly (including `from utils.feature_detection import detect_features, create_mask`), defined `initialize_models()` returning a `LEDITSModel()`, and ran the edit locally: setting `prompt = custom_prompt` or `prompt = f"{feature_type} {modification_type}"`, calling `ledits_model.edit_image(processed_image, mask, prompt, intensity=intensity)`, and post-processing with `postprocess_image(edited_image, processed_image, mask)`. The old `edit_button.click` wiring passed only `intensity`, `custom_prompt`, and `use_custom_prompt`, and a comment noted the examples section was removed to avoid file-path errors.]
New version (right-hand column of the diff, cleaned up; `# ...` marks unchanged lines the diff omits):

import os
import gradio as gr
import requests
from PIL import Image
import numpy as np
import io
import json

# Global variables
FEATURE_TYPES = ["Eyes", "Nose", "Lips", "Face Shape", "Hair", "Body"]
# ... (unchanged lines omitted) ...
    "Body": ["Slim", "Athletic", "Curvy", "Muscular"]
}

# Remote processing function
def process_with_local_server(image, feature_type, modification_type, intensity,
                              num_inference_steps, guidance_scale, resolution,
                              custom_prompt="", use_custom_prompt=False, server_url=None):
    if image is None:
        return None, "Please upload an image first."

    if server_url is None or server_url == "":
        return image, "Error: Local server URL not provided. Please enter your local server URL."

    try:
        # Ensure server URL ends with /api/predict/
        if not server_url.endswith("/"):
            server_url += "/"
        if not server_url.endswith("api/predict/"):
            server_url += "api/predict/"

        # Convert image to bytes
        if isinstance(image, np.ndarray):
            image_pil = Image.fromarray(image)
        else:
            image_pil = image

        img_byte_arr = io.BytesIO()
        image_pil.save(img_byte_arr, format='PNG')
        img_byte_arr.seek(0)

        # Prepare the request data
        files = {
            'input_image': ('image.png', img_byte_arr, 'image/png')
        }

        data = {
            'feature_type': feature_type,
            'modification_type': modification_type,
            'intensity': str(intensity),
            'num_inference_steps': str(num_inference_steps),
            'guidance_scale': str(guidance_scale),
            'resolution': resolution,
            'custom_prompt': custom_prompt,
            'use_custom_prompt': str(use_custom_prompt).lower()
        }

        # Send request to local server
        response = requests.post(server_url, files=files, data=data)

        if response.status_code == 200:
            # Parse the response
            result = response.json()

            # Get the output image
            if 'data' in result and len(result['data']) >= 1:
                output_data = result['data'][0]
                if isinstance(output_data, str) and output_data.startswith('data:image'):
                    # Handle base64 encoded image
                    import base64
                    image_data = output_data.split(',')[1]
                    decoded_image = base64.b64decode(image_data)
                    output_image = Image.open(io.BytesIO(decoded_image))
                    return output_image, "Edit completed successfully."

            # If we couldn't parse the image from the response
            return image, "Error: Could not parse response from local server."
        else:
            return image, f"Error: Local server returned status code {response.status_code}."

    except Exception as e:
        return image, f"Error connecting to local server: {str(e)}"

# UI Components
def create_ui():
    # ... (unchanged lines omitted) ...
        gr.Markdown("# AI-Powered Facial & Body Feature Editor")
        gr.Markdown("Upload an image and use the controls to edit specific facial and body features.")

        # Server connection
        with gr.Group():
            gr.Markdown("### Local GPU Server Connection")
            server_url = gr.Textbox(
                label="Local Server URL",
                placeholder="Enter the URL of your local GPU server (e.g., https://12345.gradio.app)",
                value=""
            )
            server_status = gr.Textbox(label="Server Status", value="Not connected", interactive=False)

            def check_server(url):
                if not url:
                    return "Not connected"

                try:
                    # Ensure URL ends with /
                    if not url.endswith("/"):
                        url += "/"

                    # Try to connect to the server
                    response = requests.get(url)
                    if response.status_code == 200:
                        return "Connected successfully"
                    else:
                        return f"Error: Server returned status code {response.status_code}"
                except Exception as e:
                    return f"Error connecting to server: {str(e)}"

            check_button = gr.Button("Check Connection")
            check_button.click(fn=check_server, inputs=server_url, outputs=server_status)

        with gr.Row():
            with gr.Column(scale=1):
                # Input controls
                # ... (unchanged lines omitted) ...
                        placeholder="e.g., blue eyes with long eyelashes"
                    )

                with gr.Group():
                    gr.Markdown("### Performance Settings")
                    num_inference_steps = gr.Slider(
                        minimum=5,
                        maximum=50,
                        value=20,
                        step=1,
                        label="Inference Steps (lower = faster, higher = better quality)"
                    )
                    guidance_scale = gr.Slider(
                        minimum=1.0,
                        maximum=15.0,
                        value=7.5,
                        step=0.5,
                        label="Guidance Scale (lower = more creative, higher = more accurate)"
                    )
                    resolution = gr.Dropdown(
                        choices=["Original", "512x512", "768x768", "1024x1024"],
                        label="Processing Resolution",
                        value="512x512"
                    )

                edit_button = gr.Button("Apply Edit", variant="primary")
                reset_button = gr.Button("Reset")
                status_text = gr.Textbox(label="Status", interactive=False)
        # ... (unchanged lines omitted) ...
        )

        edit_button.click(
            fn=process_with_local_server,
            inputs=[
                input_image,
                feature_type,
                modification_type,
                intensity,
                num_inference_steps,
                guidance_scale,
                resolution,
                custom_prompt,
                use_custom_prompt,
                server_url
            ],
            outputs=[output_image, status_text]
        )
        # ... (unchanged lines omitted) ...
            outputs=[output_image, status_text]
        )

        # Add ethical usage notice
        gr.Markdown("""
        ## Ethical Usage Notice
        # ... (unchanged lines omitted) ...
        By using this application, you agree to these terms.
        """)

        # Add local server setup instructions
        gr.Markdown("""
        ## Local GPU Server Setup Instructions

        To use your local GPU for processing:

        1. Download and run the local_server.py file on your computer
        2. Make sure you have the required dependencies installed
        3. The server will start and provide a public URL (copy this URL)
        4. Paste the URL into the "Local Server URL" field above
        5. Click "Check Connection" to verify
        6. Once connected, all processing will use your local GPU

        This hybrid approach gives you the best performance while keeping the interface accessible from anywhere.
        """)

    return app

# Launch the app
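A note on the endpoint protocol: stock Gradio servers expose `/api/predict/` expecting a JSON body of the form `{"data": [...]}` (with images base64-encoded) rather than multipart form data, so if the multipart POST above is rejected, a JSON variant may be needed. A sketch, assuming a Gradio 3.x-style endpoint whose input order matches the server's components:

```python
import base64
import io

import requests
from PIL import Image


def call_gradio_json(server_url, image_pil, fields):
    """Sketch: POST to a Gradio /api/predict/ endpoint using its JSON protocol.

    `fields` is the remaining inputs in the order the server declares them;
    the exact payload format depends on your Gradio version (an assumption here).
    """
    buf = io.BytesIO()
    image_pil.save(buf, format="PNG")
    b64 = "data:image/png;base64," + base64.b64encode(buf.getvalue()).decode()
    body = {"data": [b64, *fields]}
    resp = requests.post(server_url.rstrip("/") + "/api/predict/", json=body, timeout=300)
    resp.raise_for_status()
    return resp.json()["data"]
```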
app_hybrid.py
ADDED
@@ -0,0 +1,269 @@
import os
import gradio as gr
import requests
from PIL import Image
import numpy as np
import io
import json

# Global variables
FEATURE_TYPES = ["Eyes", "Nose", "Lips", "Face Shape", "Hair", "Body"]
MODIFICATION_PRESETS = {
    "Eyes": ["Larger", "Smaller", "Change Color", "Change Shape"],
    "Nose": ["Refine", "Reshape", "Resize"],
    "Lips": ["Fuller", "Thinner", "Change Color"],
    "Face Shape": ["Slim", "Round", "Define Jawline", "Soften Features"],
    "Hair": ["Change Color", "Change Style", "Add Volume"],
    "Body": ["Slim", "Athletic", "Curvy", "Muscular"]
}

# Remote processing function
def process_with_local_server(image, feature_type, modification_type, intensity,
                              num_inference_steps, guidance_scale, resolution,
                              custom_prompt="", use_custom_prompt=False, server_url=None):
    if image is None:
        return None, "Please upload an image first."

    if server_url is None or server_url == "":
        return image, "Error: Local server URL not provided. Please enter your local server URL."

    try:
        # Ensure server URL ends with /api/predict/
        if not server_url.endswith("/"):
            server_url += "/"
        if not server_url.endswith("api/predict/"):
            server_url += "api/predict/"

        # Convert image to bytes
        if isinstance(image, np.ndarray):
            image_pil = Image.fromarray(image)
        else:
            image_pil = image

        img_byte_arr = io.BytesIO()
        image_pil.save(img_byte_arr, format='PNG')
        img_byte_arr.seek(0)

        # Prepare the request data
        files = {
            'input_image': ('image.png', img_byte_arr, 'image/png')
        }

        data = {
            'feature_type': feature_type,
            'modification_type': modification_type,
            'intensity': str(intensity),
            'num_inference_steps': str(num_inference_steps),
            'guidance_scale': str(guidance_scale),
            'resolution': resolution,
            'custom_prompt': custom_prompt,
            'use_custom_prompt': str(use_custom_prompt).lower()
        }

        # Send request to local server
        response = requests.post(server_url, files=files, data=data)

        if response.status_code == 200:
            # Parse the response
            result = response.json()

            # Get the output image
            if 'data' in result and len(result['data']) >= 1:
                output_data = result['data'][0]
                if isinstance(output_data, str) and output_data.startswith('data:image'):
                    # Handle base64 encoded image
                    import base64
                    image_data = output_data.split(',')[1]
                    decoded_image = base64.b64decode(image_data)
                    output_image = Image.open(io.BytesIO(decoded_image))
                    return output_image, "Edit completed successfully."

            # If we couldn't parse the image from the response
            return image, "Error: Could not parse response from local server."
        else:
            return image, f"Error: Local server returned status code {response.status_code}."

    except Exception as e:
        return image, f"Error connecting to local server: {str(e)}"

# UI Components
def create_ui():
    with gr.Blocks(title="AI-Powered Facial & Body Feature Editor") as app:
        gr.Markdown("# AI-Powered Facial & Body Feature Editor")
        gr.Markdown("Upload an image and use the controls to edit specific facial and body features.")

        # Server connection
        with gr.Group():
            gr.Markdown("### Local GPU Server Connection")
            server_url = gr.Textbox(
                label="Local Server URL",
                placeholder="Enter the URL of your local GPU server (e.g., https://12345.gradio.app)",
                value=""
            )
            server_status = gr.Textbox(label="Server Status", value="Not connected", interactive=False)

            def check_server(url):
                if not url:
                    return "Not connected"

                try:
                    # Ensure URL ends with /
                    if not url.endswith("/"):
                        url += "/"

                    # Try to connect to the server
                    response = requests.get(url)
                    if response.status_code == 200:
                        return "Connected successfully"
                    else:
                        return f"Error: Server returned status code {response.status_code}"
                except Exception as e:
                    return f"Error connecting to server: {str(e)}"

            check_button = gr.Button("Check Connection")
            check_button.click(fn=check_server, inputs=server_url, outputs=server_status)

        with gr.Row():
            with gr.Column(scale=1):
                # Input controls
                input_image = gr.Image(label="Upload Image", type="pil")

                with gr.Group():
                    gr.Markdown("### Feature Selection")
                    feature_type = gr.Dropdown(
                        choices=FEATURE_TYPES,
                        label="Select Feature",
                        value="Eyes"
                    )

                    # Initialize with choices for the default feature (Eyes)
                    modification_type = gr.Dropdown(
                        choices=MODIFICATION_PRESETS["Eyes"],
                        label="Modification Type",
                        value="Larger"
                    )

                    intensity = gr.Slider(
                        minimum=0.1,
                        maximum=1.0,
                        value=0.5,
                        step=0.1,
                        label="Intensity"
                    )

                with gr.Group():
                    gr.Markdown("### Custom Prompt (Advanced)")
                    use_custom_prompt = gr.Checkbox(
                        label="Use Custom Prompt",
                        value=False
                    )
                    custom_prompt = gr.Textbox(
                        label="Custom Prompt",
                        placeholder="e.g., blue eyes with long eyelashes"
                    )

                with gr.Group():
                    gr.Markdown("### Performance Settings")
                    num_inference_steps = gr.Slider(
                        minimum=5,
                        maximum=50,
                        value=20,
                        step=1,
                        label="Inference Steps (lower = faster, higher = better quality)"
                    )
                    guidance_scale = gr.Slider(
                        minimum=1.0,
                        maximum=15.0,
                        value=7.5,
                        step=0.5,
                        label="Guidance Scale (lower = more creative, higher = more accurate)"
                    )
                    resolution = gr.Dropdown(
                        choices=["Original", "512x512", "768x768", "1024x1024"],
                        label="Processing Resolution",
                        value="512x512"
                    )

                edit_button = gr.Button("Apply Edit", variant="primary")
                reset_button = gr.Button("Reset")
                status_text = gr.Textbox(label="Status", interactive=False)

            with gr.Column(scale=1):
                # Output display
                output_image = gr.Image(label="Edited Image", type="pil")

                with gr.Accordion("Edit History", open=False):
                    edit_history = gr.State([])
                    history_gallery = gr.Gallery(label="Previous Edits")

        # Event handlers
        def update_modification_choices(feature):
            return gr.Dropdown(choices=MODIFICATION_PRESETS[feature])

        feature_type.change(
            fn=update_modification_choices,
            inputs=feature_type,
            outputs=modification_type
        )

        edit_button.click(
            fn=process_with_local_server,
            inputs=[
                input_image,
                feature_type,
                modification_type,
                intensity,
                num_inference_steps,
                guidance_scale,
                resolution,
                custom_prompt,
                use_custom_prompt,
                server_url
            ],
            outputs=[output_image, status_text]
        )

        def reset_image():
            return None, "Image reset."

        reset_button.click(
            fn=reset_image,
            inputs=[],
            outputs=[output_image, status_text]
        )

        # Add ethical usage notice
        gr.Markdown("""
        ## Ethical Usage Notice

        This tool is designed for creative and personal use. Please ensure:

        - You have appropriate rights to edit the images you upload
        - You use this tool responsibly and respect the dignity of individuals
        - You understand that AI-generated modifications are artificial and may not represent reality

        By using this application, you agree to these terms.
        """)

        # Add local server setup instructions
        gr.Markdown("""
        ## Local GPU Server Setup Instructions

        To use your local GPU for processing:

        1. Download and run the local_server.py file on your computer
        2. Make sure you have the required dependencies installed
        3. The server will start and provide a public URL (copy this URL)
        4. Paste the URL into the "Local Server URL" field above
        5. Click "Check Connection" to verify
        6. Once connected, all processing will use your local GPU

        This hybrid approach gives you the best performance while keeping the interface accessible from anywhere.
        """)

    return app

# Launch the app
if __name__ == "__main__":
    app = create_ui()
    app.launch(server_name="0.0.0.0", share=False)
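One detail in app_hybrid.py worth flagging: `update_modification_choices` returns a freshly constructed `gr.Dropdown(choices=...)`, which is the Gradio 4.x idiom for updating a component from an event handler. If your Space pins Gradio 3.x (an assumption about your pinned version), the equivalent would be:

```python
def update_modification_choices(feature):
    # Gradio 3.x idiom: return an update object rather than a new component
    return gr.Dropdown.update(choices=MODIFICATION_PRESETS[feature])
```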
local_server.py
ADDED
@@ -0,0 +1,197 @@
import os
import gradio as gr
import torch
from PIL import Image
import numpy as np
from models.ledits_model import LEDITSModel
from utils.image_processing import preprocess_image, postprocess_image
from utils.feature_detection import detect_features, create_mask

# Initialize models with GPU support
def initialize_models(device=None):
    if device is None:
        device = "cuda" if torch.cuda.is_available() else "cpu"

    print(f"Using device: {device}")
    if device == "cuda":
        print(f"GPU: {torch.cuda.get_device_name(0)}")
        print(f"Memory: {torch.cuda.get_device_properties(0).total_memory / 1024**3:.2f} GB")

    ledits_model = LEDITSModel(device=device)
    return ledits_model

# Main editing function
def edit_image(image, feature_type, modification_type, intensity,
               num_inference_steps, guidance_scale, resolution,
               custom_prompt="", use_custom_prompt=False):
    if image is None:
        return None, "Please upload an image first."

    try:
        # Convert to numpy array if needed
        if isinstance(image, Image.Image):
            image_np = np.array(image)
        else:
            image_np = image

        # Resize image based on resolution setting
        if resolution != "Original":
            max_dim = int(resolution.split("x")[0])
            height, width = image_np.shape[:2]

            if height > width:
                new_height = min(max_dim, height)
                new_width = int(width * (new_height / height))
            else:
                new_width = min(max_dim, width)
                new_height = int(height * (new_width / width))

            image_np = Image.fromarray(image_np).resize((new_width, new_height), Image.LANCZOS)
            image_np = np.array(image_np)

        # Preprocess image
        processed_image = preprocess_image(image_np)

        # Detect features and create mask
        features = detect_features(processed_image)
        mask = create_mask(processed_image, feature_type, features)

        # Get model
        ledits_model = initialize_models()

        # Prepare prompt
        if use_custom_prompt and custom_prompt:
            prompt = custom_prompt
        else:
            prompt = f"{feature_type} {modification_type}"

        # Apply edit with custom parameters
        edited_image = ledits_model.edit_image(
            processed_image,
            mask,
            prompt,
            intensity=intensity,
            guidance_scale=guidance_scale,
            num_inference_steps=num_inference_steps
        )

        # Postprocess
        final_image = postprocess_image(edited_image, processed_image, mask)

        return final_image, "Edit completed successfully."

    except Exception as e:
        import traceback
        traceback.print_exc()
        return image, f"Error during editing: {str(e)}"

# Create the server interface
def create_server():
    with gr.Blocks(title="AI-Powered Facial & Body Feature Editor - Local Server") as server:
        gr.Markdown("# AI-Powered Facial & Body Feature Editor - Local GPU Server")
        gr.Markdown("This is the local GPU server component. Keep this running to process edits from the Hugging Face Space.")

        # Input components
        with gr.Row():
            with gr.Column():
                input_image = gr.Image(label="Input Image", type="pil")
                feature_type = gr.Dropdown(
                    choices=["Eyes", "Nose", "Lips", "Face Shape", "Hair", "Body"],
                    label="Feature Type",
                    value="Eyes"
                )
                modification_type = gr.Dropdown(
                    choices=["Larger", "Smaller", "Change Color", "Change Shape"],
                    label="Modification Type",
                    value="Larger"
                )
                intensity = gr.Slider(
                    minimum=0.1,
                    maximum=1.0,
                    value=0.5,
                    step=0.1,
                    label="Intensity"
                )
                custom_prompt = gr.Textbox(
                    label="Custom Prompt",
                    placeholder="e.g., blue eyes with long eyelashes"
                )
                use_custom_prompt = gr.Checkbox(
                    label="Use Custom Prompt",
                    value=False
                )

                # Performance settings
                with gr.Group():
                    gr.Markdown("### Performance Settings")
                    num_inference_steps = gr.Slider(
                        minimum=5,
                        maximum=50,
                        value=20,
                        step=1,
                        label="Inference Steps (lower = faster, higher = better quality)"
                    )
                    guidance_scale = gr.Slider(
                        minimum=1.0,
                        maximum=15.0,
                        value=7.5,
                        step=0.5,
                        label="Guidance Scale (lower = more creative, higher = more accurate)"
                    )
                    resolution = gr.Dropdown(
                        choices=["Original", "512x512", "768x768", "1024x1024"],
                        label="Processing Resolution",
                        value="512x512"
                    )

            with gr.Column():
                output_image = gr.Image(label="Output Image", type="pil")
                status_text = gr.Textbox(label="Status", interactive=False)

                # GPU info display
                gpu_info = gr.Textbox(label="GPU Information", interactive=False)

                def get_gpu_info():
                    if torch.cuda.is_available():
                        return f"GPU: {torch.cuda.get_device_name(0)}\nVRAM: {torch.cuda.get_device_properties(0).total_memory / 1024**3:.2f} GB\nCUDA Version: {torch.version.cuda}"
                    else:
                        return "No GPU detected. Using CPU mode."

                # Display GPU info on load
                gpu_info.value = get_gpu_info()

        # Create the API endpoint
        edit_image_api = gr.Interface(
            fn=edit_image,
            inputs=[
                input_image,
                feature_type,
                modification_type,
                intensity,
                num_inference_steps,
                guidance_scale,
                resolution,
                custom_prompt,
                use_custom_prompt
            ],
            outputs=[output_image, status_text],
            title="AI-Powered Facial & Body Feature Editor API",
            description="Local GPU server API endpoint",
            allow_flagging="never"
        )

    return server

# Launch the server
if __name__ == "__main__":
    # Check for GPU
    if torch.cuda.is_available():
        print(f"GPU detected: {torch.cuda.get_device_name(0)}")
        print(f"VRAM: {torch.cuda.get_device_properties(0).total_memory / 1024**3:.2f} GB")
    else:
        print("No GPU detected. Using CPU mode.")

    # Create and launch server
    server = create_server()
    server.queue()
    server.launch(server_name="0.0.0.0", share=True)
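For a quick end-to-end test of the server without going through the Space, the gradio_client package can call the shared URL directly. A sketch, assuming the input order declared in create_server() above; depending on your gradio_client version, image inputs may need to be wrapped with gradio_client.handle_file:

```python
from gradio_client import Client  # pip install gradio-client

client = Client("https://12345.gradio.app")  # placeholder - your server's printed URL
edited_path, status = client.predict(
    "photo.png",   # input image (local file path; may need handle_file on newer versions)
    "Eyes",        # feature_type
    "Larger",      # modification_type
    0.5,           # intensity
    20,            # num_inference_steps
    7.5,           # guidance_scale
    "512x512",     # resolution
    "",            # custom_prompt
    False,         # use_custom_prompt
)
print(status, edited_path)
```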