Lily LLM API User Guide

๐Ÿ“‹ ๋ชฉ์ฐจ

  1. ์‹œ์ž‘ํ•˜๊ธฐ
  2. ๊ธฐ๋ณธ ๊ธฐ๋Šฅ
  3. ๊ณ ๊ธ‰ ๊ธฐ๋Šฅ
  4. ๋ฌธ์ œ ํ•ด๊ฒฐ
  5. ๋ชจ๋ฒ” ์‚ฌ๋ก€

๐Ÿš€ ์‹œ์ž‘ํ•˜๊ธฐ

์‹œ์Šคํ…œ ์š”๊ตฌ์‚ฌํ•ญ

  • ์ตœ์†Œ ์‚ฌ์–‘:

    • CPU: 4์ฝ”์–ด ์ด์ƒ
    • RAM: 8GB ์ด์ƒ
    • ์ €์žฅ๊ณต๊ฐ„: 20GB ์ด์ƒ
    • GPU: ์„ ํƒ์‚ฌํ•ญ (CUDA ์ง€์› ์‹œ ์„ฑ๋Šฅ ํ–ฅ์ƒ)
  • ๊ถŒ์žฅ ์‚ฌ์–‘:

    • CPU: 8์ฝ”์–ด ์ด์ƒ
    • RAM: 16GB ์ด์ƒ
    • ์ €์žฅ๊ณต๊ฐ„: 50GB ์ด์ƒ
    • GPU: NVIDIA RTX 3060 ์ด์ƒ (CUDA ์ง€์›)

์„ค์น˜ ๋ฐ ์‹คํ–‰

1. Docker๋ฅผ ์‚ฌ์šฉํ•œ ๋ฐฐํฌ (๊ถŒ์žฅ)

# ์ €์žฅ์†Œ ํด๋ก 
git clone <repository-url>
cd lily_generate_package

# ๋ฐฐํฌ ์‹คํ–‰
chmod +x scripts/deploy.sh
./scripts/deploy.sh deploy

# ์ƒํƒœ ํ™•์ธ
./scripts/deploy.sh status

2. Local Development Environment

# Create a virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Download NLTK data
python -c "import nltk; nltk.download('punkt'); nltk.download('punkt_tab')"

# Run the server
python run_server_v2.py

Your First Request

# Check server health
curl http://localhost:8001/health

# List available models
curl http://localhost:8001/models

# Simple text generation
curl -X POST http://localhost:8001/generate \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "prompt=Hello!&model_id=polyglot-ko-1.3b-chat&max_length=100"

๐Ÿค– ๊ธฐ๋ณธ ๊ธฐ๋Šฅ

1. ํ…์ŠคํŠธ ์ƒ์„ฑ

๋‹จ์ˆœ ํ…์ŠคํŠธ ์ƒ์„ฑ

import requests

def generate_text(prompt, model_id="polyglot-ko-1.3b-chat"):
    url = "http://localhost:8001/generate"
    data = {
        "prompt": prompt,
        "model_id": model_id,
        "max_length": 200,
        "temperature": 0.7,
        "top_p": 0.9,
        "do_sample": True
    }
    
    response = requests.post(url, data=data)
    return response.json()

# ์‚ฌ์šฉ ์˜ˆ์ œ
result = generate_text("์ธ๊ณต์ง€๋Šฅ์˜ ๋ฏธ๋ž˜์— ๋Œ€ํ•ด ์„ค๋ช…ํ•ด์ฃผ์„ธ์š”.")
print(result["generated_text"])

ํŒŒ๋ผ๋ฏธํ„ฐ ์„ค๋ช…

ํŒŒ๋ผ๋ฏธํ„ฐ ์„ค๋ช… ๊ธฐ๋ณธ๊ฐ’ ๋ฒ”์œ„
prompt ์ž…๋ ฅ ํ…์ŠคํŠธ ํ•„์ˆ˜ -
model_id ์‚ฌ์šฉํ•  ๋ชจ๋ธ polyglot-ko-1.3b-chat ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ๋ชจ๋ธ ๋ชฉ๋ก
max_length ์ตœ๋Œ€ ํ† ํฐ ์ˆ˜ 200 1-4000
temperature ์ฐฝ์˜์„ฑ ์กฐ์ ˆ 0.7 0.0-2.0
top_p ๋ˆ„์  ํ™•๋ฅ  ์ž„๊ณ„๊ฐ’ 0.9 0.0-1.0
do_sample ์ƒ˜ํ”Œ๋ง ์‚ฌ์šฉ ์—ฌ๋ถ€ True True/False
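The ranges above can be enforced client-side so malformed requests fail fast before reaching the server. A minimal sketch (the helper name `build_generate_payload` is illustrative, not part of the API):

```python
def build_generate_payload(prompt, model_id="polyglot-ko-1.3b-chat",
                           max_length=200, temperature=0.7,
                           top_p=0.9, do_sample=True):
    """Build and validate the form payload for /generate.

    Raises ValueError when a parameter falls outside the
    documented range, so bad requests fail fast on the client.
    """
    if not prompt:
        raise ValueError("prompt is required")
    if not 1 <= max_length <= 4000:
        raise ValueError("max_length must be in 1-4000")
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be in 0.0-2.0")
    if not 0.0 <= top_p <= 1.0:
        raise ValueError("top_p must be in 0.0-1.0")
    return {
        "prompt": prompt,
        "model_id": model_id,
        "max_length": max_length,
        "temperature": temperature,
        "top_p": top_p,
        "do_sample": do_sample,
    }
```

The returned dict can be passed directly as the `data=` argument of `requests.post`.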

2. Multimodal Processing

Processing images together with text

def generate_multimodal(prompt, image_files, model_id="kanana-1.5-v-3b-instruct"):
    url = "http://localhost:8001/generate-multimodal"

    data = {
        "prompt": prompt,
        "model_id": model_id,
        "max_length": 200,
        "temperature": 0.7
    }

    files = []
    handles = []
    try:
        for i, path in enumerate(image_files):
            fh = open(path, 'rb')
            handles.append(fh)
            files.append(('image_files', (f'image_{i}.jpg', fh, 'image/jpeg')))

        response = requests.post(url, files=files, data=data)
    finally:
        # Close every opened image file, even if the request fails
        for fh in handles:
            fh.close()

    return response.json()

# Usage example
result = generate_multimodal(
    "Describe this image.",
    ["image1.jpg", "image2.jpg"]
)
print(result["generated_text"])

3. ์‚ฌ์šฉ์ž ๊ด€๋ฆฌ

์‚ฌ์šฉ์ž ๋“ฑ๋ก ๋ฐ ๋กœ๊ทธ์ธ

def register_user(username, email, password):
    url = "http://localhost:8001/auth/register"
    data = {
        "username": username,
        "email": email,
        "password": password
    }
    
    response = requests.post(url, data=data)
    return response.json()

def login_user(username, password):
    url = "http://localhost:8001/auth/login"
    data = {
        "username": username,
        "password": password
    }
    
    response = requests.post(url, data=data)
    return response.json()

# ์‚ฌ์šฉ ์˜ˆ์ œ
# 1. ์‚ฌ์šฉ์ž ๋“ฑ๋ก
register_result = register_user("testuser", "test@example.com", "password123")
access_token = register_result["access_token"]

# 2. ๋กœ๊ทธ์ธ
login_result = login_user("testuser", "password123")
access_token = login_result["access_token"]

์ธ์ฆ์ด ํ•„์š”ํ•œ ์š”์ฒญ

def authenticated_request(url, data, token):
    headers = {"Authorization": f"Bearer {token}"}
    response = requests.post(url, data=data, headers=headers)
    return response.json()

# ์‚ฌ์šฉ ์˜ˆ์ œ
result = authenticated_request(
    "http://localhost:8001/generate",
    {"prompt": "์•ˆ๋…•ํ•˜์„ธ์š”!", "model_id": "polyglot-ko-1.3b-chat"},
    access_token
)

๐Ÿ“„ ๊ณ ๊ธ‰ ๊ธฐ๋Šฅ

1. ๋ฌธ์„œ ์ฒ˜๋ฆฌ (RAG)

๋ฌธ์„œ ์—…๋กœ๋“œ

def upload_document(file_path, user_id, token=None):
    url = "http://localhost:8001/document/upload"
    
    with open(file_path, 'rb') as f:
        files = {'file': f}
        data = {'user_id': user_id}
        headers = {"Authorization": f"Bearer {token}"} if token else {}
        
        response = requests.post(url, files=files, data=data, headers=headers)
        return response.json()

# ์‚ฌ์šฉ ์˜ˆ์ œ
result = upload_document("document.pdf", "user123", access_token)
document_id = result["document_id"]

RAG ์ฟผ๋ฆฌ

def rag_query(query, user_id, token=None):
    url = "http://localhost:8001/rag/generate"
    
    data = {
        "query": query,
        "user_id": user_id,
        "max_length": 300,
        "temperature": 0.7
    }
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    
    response = requests.post(url, data=data, headers=headers)
    return response.json()

# ์‚ฌ์šฉ ์˜ˆ์ œ
result = rag_query("์ธ๊ณต์ง€๋Šฅ์˜ ๋ฏธ๋ž˜์— ๋Œ€ํ•ด ์•Œ๋ ค์ฃผ์„ธ์š”.", "user123", access_token)
print(result["response"])
print("์ถœ์ฒ˜:", result["sources"])

Hybrid RAG (images + documents)

def hybrid_rag_query(query, image_files, user_id, token=None):
    url = "http://localhost:8001/rag/generate-hybrid"

    data = {
        "query": query,
        "user_id": user_id,
        "max_length": 300,
        "temperature": 0.7
    }
    headers = {"Authorization": f"Bearer {token}"} if token else {}

    files = []
    handles = []
    try:
        for i, path in enumerate(image_files):
            fh = open(path, 'rb')
            handles.append(fh)
            files.append(('image_files', (f'image_{i}.jpg', fh, 'image/jpeg')))

        response = requests.post(url, files=files, data=data, headers=headers)
    finally:
        # Close every opened image file, even if the request fails
        for fh in handles:
            fh.close()

    return response.json()

2. Chat Session Management

Creating sessions and managing messages

def create_chat_session(user_id, session_name, token=None):
    url = "http://localhost:8001/session/create"

    data = {
        "user_id": user_id,
        "session_name": session_name
    }
    headers = {"Authorization": f"Bearer {token}"} if token else {}

    response = requests.post(url, data=data, headers=headers)
    return response.json()

def add_chat_message(session_id, user_id, content, token=None):
    url = "http://localhost:8001/chat/message"

    data = {
        "session_id": session_id,
        "user_id": user_id,
        "message_type": "text",
        "content": content
    }
    headers = {"Authorization": f"Bearer {token}"} if token else {}

    response = requests.post(url, data=data, headers=headers)
    return response.json()

def get_chat_history(session_id, token=None):
    url = f"http://localhost:8001/chat/history/{session_id}"
    headers = {"Authorization": f"Bearer {token}"} if token else {}

    response = requests.get(url, headers=headers)
    return response.json()

# Usage example
# 1. Create a session
session_result = create_chat_session("user123", "AI chat", access_token)
session_id = session_result["session_id"]

# 2. Add a message
add_chat_message(session_id, "user123", "Hello!", access_token)

# 3. Retrieve the chat history
history = get_chat_history(session_id, access_token)
for message in history:
    print(f"{message['timestamp']}: {message['content']}")

3. ๋ฐฑ๊ทธ๋ผ์šด๋“œ ์ž‘์—…

๋ฌธ์„œ ์ฒ˜๋ฆฌ ์ž‘์—…

def start_document_processing(file_path, user_id, token=None):
    url = "http://localhost:8001/tasks/document/process"
    
    data = {
        "file_path": file_path,
        "user_id": user_id
    }
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    
    response = requests.post(url, data=data, headers=headers)
    return response.json()

def check_task_status(task_id, token=None):
    url = f"http://localhost:8001/tasks/{task_id}"
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    
    response = requests.get(url, headers=headers)
    return response.json()

# ์‚ฌ์šฉ ์˜ˆ์ œ
# 1. ์ž‘์—… ์‹œ์ž‘
task_result = start_document_processing("/path/to/document.pdf", "user123", access_token)
task_id = task_result["task_id"]

# 2. ์ž‘์—… ์ƒํƒœ ํ™•์ธ
import time
while True:
    status = check_task_status(task_id, access_token)
    print(f"์ƒํƒœ: {status['status']}, ์ง„ํ–‰๋ฅ : {status.get('progress', 0)}%")
    
    if status['status'] in ['SUCCESS', 'FAILURE']:
        break
    
    time.sleep(5)
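The polling loop above runs forever if a task never reaches a terminal state. A small wrapper with a timeout avoids that; this is a sketch (the helper name and default values are illustrative), intended to wrap `check_task_status` from above:

```python
import time

def poll_until_done(get_status, done_states=('SUCCESS', 'FAILURE'),
                    timeout=300, interval=5):
    """Call get_status() until it reports a terminal state.

    Returns the final status dict, or raises TimeoutError if no
    terminal state is seen within `timeout` seconds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status.get('status') in done_states:
            return status
        time.sleep(interval)
    raise TimeoutError(f"task did not finish within {timeout}s")

# Usage with the task API from above:
# final = poll_until_done(lambda: check_task_status(task_id, access_token))
```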

4. Monitoring

Performance monitoring

def start_monitoring():
    url = "http://localhost:8001/monitoring/start"
    response = requests.post(url)
    return response.json()

def get_monitoring_status():
    url = "http://localhost:8001/monitoring/status"
    response = requests.get(url)
    return response.json()

def get_system_health():
    url = "http://localhost:8001/monitoring/health"
    response = requests.get(url)
    return response.json()

# Usage example
# 1. Start monitoring
start_monitoring()

# 2. Check status
status = get_monitoring_status()
print(f"CPU usage: {status['current_metrics']['cpu_percent']}%")
print(f"Memory usage: {status['current_metrics']['memory_percent']}%")

# 3. System health
health = get_system_health()
print(f"System status: {health['status']}")
for recommendation in health['recommendations']:
    print(f"Recommendation: {recommendation}")

๐Ÿ”Œ WebSocket ์‹ค์‹œ๊ฐ„ ์ฑ„ํŒ…

WebSocket ํด๋ผ์ด์–ธํŠธ

class LilyLLMWebSocket {
    constructor(userId) {
        this.userId = userId;
        this.ws = null;
        this.messageHandlers = [];
    }
    
    connect() {
        this.ws = new WebSocket(`ws://localhost:8001/ws/${this.userId}`);
        
        this.ws.onopen = () => {
            console.log('WebSocket ์—ฐ๊ฒฐ๋จ');
        };
        
        this.ws.onmessage = (event) => {
            const data = JSON.parse(event.data);
            this.handleMessage(data);
        };
        
        this.ws.onclose = () => {
            console.log('WebSocket ์—ฐ๊ฒฐ ์ข…๋ฃŒ');
        };
        
        this.ws.onerror = (error) => {
            console.error('WebSocket ์˜ค๋ฅ˜:', error);
        };
    }
    
    sendMessage(message, sessionId) {
        if (this.ws && this.ws.readyState === WebSocket.OPEN) {
            this.ws.send(JSON.stringify({
                type: 'chat',
                message: message,
                session_id: sessionId
            }));
        }
    }
    
    addMessageHandler(handler) {
        this.messageHandlers.push(handler);
    }
    
    handleMessage(data) {
        this.messageHandlers.forEach(handler => handler(data));
    }
    
    disconnect() {
        if (this.ws) {
            this.ws.close();
        }
    }
}

// ์‚ฌ์šฉ ์˜ˆ์ œ
const wsClient = new LilyLLMWebSocket('user123');
wsClient.connect();

wsClient.addMessageHandler((data) => {
    console.log('๋ฉ”์‹œ์ง€ ์ˆ˜์‹ :', data);
});

wsClient.sendMessage('์•ˆ๋…•ํ•˜์„ธ์š”!', 'session123');

๐Ÿšจ ๋ฌธ์ œ ํ•ด๊ฒฐ

์ผ๋ฐ˜์ ์ธ ๋ฌธ์ œ๋“ค

1. ์„œ๋ฒ„ ์—ฐ๊ฒฐ ์‹คํŒจ

์ฆ์ƒ: Connection refused ๋˜๋Š” Failed to establish a new connection

ํ•ด๊ฒฐ ๋ฐฉ๋ฒ•:

# ์„œ๋ฒ„ ์ƒํƒœ ํ™•์ธ
curl http://localhost:8001/health

# ์„œ๋ฒ„ ์žฌ์‹œ์ž‘
./scripts/deploy.sh restart

# ๋กœ๊ทธ ํ™•์ธ
./scripts/deploy.sh logs

2. Out of memory

Symptom: Out of memory errors or degraded response times

Resolution:

# Check memory usage
docker stats

# Clean up unused containers
docker system prune -f

# Set resource limits (docker-compose.yml)
services:
  lily-llm-api:
    deploy:
      resources:
        limits:
          memory: 4G

3. ๋ชจ๋ธ ๋กœ๋”ฉ ์‹คํŒจ

์ฆ์ƒ: Model not found ๋˜๋Š” ๋ชจ๋ธ ๊ด€๋ จ ์˜ค๋ฅ˜

ํ•ด๊ฒฐ ๋ฐฉ๋ฒ•:

# ๋ชจ๋ธ ๋ชฉ๋ก ํ™•์ธ
curl http://localhost:8001/models

# ๋ชจ๋ธ ํŒŒ์ผ ํ™•์ธ
ls -la models/

# ์„œ๋ฒ„ ์žฌ์‹œ์ž‘
./scripts/deploy.sh restart

4. Authentication errors

Symptom: 401 Unauthorized or 403 Forbidden

Resolution:

# Refresh the token
def refresh_access_token(refresh_token):
    url = "http://localhost:8001/auth/refresh"
    data = {"refresh_token": refresh_token}
    response = requests.post(url, data=data)
    return response.json()

# Retry the request with the new token
new_tokens = refresh_access_token(old_refresh_token)
access_token = new_tokens["access_token"]
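The refresh step can be folded into a retry wrapper so a single expired-token failure does not surface to the caller. A minimal sketch (function and parameter names are illustrative; `make_request` is any callable taking a token and returning a `requests`-style response):

```python
def call_with_refresh(make_request, refresh_fn, access_token):
    """Invoke make_request(access_token); on a 401 response,
    obtain new tokens via refresh_fn() and retry exactly once."""
    response = make_request(access_token)
    if response.status_code == 401:
        new_tokens = refresh_fn()
        response = make_request(new_tokens["access_token"])
    return response
```

Any further 401 after the single refresh is returned to the caller, since retrying again with the same credentials cannot succeed.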

์„ฑ๋Šฅ ์ตœ์ ํ™”

1. ๋ฐฐ์น˜ ์ฒ˜๋ฆฌ

def batch_generate_texts(prompts, model_id="polyglot-ko-1.3b-chat"):
    results = []
    for prompt in prompts:
        result = generate_text(prompt, model_id)
        results.append(result)
    return results

# ์‚ฌ์šฉ ์˜ˆ์ œ
prompts = [
    "์ฒซ ๋ฒˆ์งธ ์งˆ๋ฌธ์ž…๋‹ˆ๋‹ค.",
    "๋‘ ๋ฒˆ์งธ ์งˆ๋ฌธ์ž…๋‹ˆ๋‹ค.",
    "์„ธ ๋ฒˆ์งธ ์งˆ๋ฌธ์ž…๋‹ˆ๋‹ค."
]
results = batch_generate_texts(prompts)

2. ์บ์‹ฑ ํ™œ์šฉ

import redis
import json

class CachedLilyLLMClient:
    def __init__(self, base_url="http://localhost:8001"):
        self.base_url = base_url
        self.redis_client = redis.Redis(host='localhost', port=6379, db=0)
    
    def generate_text_with_cache(self, prompt, model_id="polyglot-ko-1.3b-chat"):
        # ์บ์‹œ ํ‚ค ์ƒ์„ฑ
        cache_key = f"text_gen:{hash(prompt + model_id)}"
        
        # ์บ์‹œ์—์„œ ํ™•์ธ
        cached_result = self.redis_client.get(cache_key)
        if cached_result:
            return json.loads(cached_result)
        
        # API ํ˜ธ์ถœ
        result = generate_text(prompt, model_id)
        
        # ์บ์‹œ์— ์ €์žฅ (1์‹œ๊ฐ„)
        self.redis_client.setex(cache_key, 3600, json.dumps(result))
        
        return result

๐Ÿ“š ๋ชจ๋ฒ” ์‚ฌ๋ก€

1. ์—๋Ÿฌ ์ฒ˜๋ฆฌ

import requests
from requests.exceptions import RequestException

def safe_api_call(func, *args, **kwargs):
    try:
        return func(*args, **kwargs)
    except RequestException as e:
        print(f"๋„คํŠธ์›Œํฌ ์˜ค๋ฅ˜: {e}")
        return None
    except Exception as e:
        print(f"์˜ˆ์ƒ์น˜ ๋ชปํ•œ ์˜ค๋ฅ˜: {e}")
        return None

# ์‚ฌ์šฉ ์˜ˆ์ œ
result = safe_api_call(generate_text, "์•ˆ๋…•ํ•˜์„ธ์š”!")
if result:
    print(result["generated_text"])

2. ์žฌ์‹œ๋„ ๋กœ์ง

import time
from functools import wraps

def retry_on_failure(max_retries=3, delay=1):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except Exception as e:
                    if attempt == max_retries - 1:
                        raise e
                    print(f"์‹œ๋„ {attempt + 1} ์‹คํŒจ, {delay}์ดˆ ํ›„ ์žฌ์‹œ๋„...")
                    time.sleep(delay)
            return None
        return wrapper
    return decorator

# ์‚ฌ์šฉ ์˜ˆ์ œ
@retry_on_failure(max_retries=3, delay=2)
def robust_generate_text(prompt):
    return generate_text(prompt)

3. ๋น„๋™๊ธฐ ์ฒ˜๋ฆฌ

import asyncio
import aiohttp

async def async_generate_text(session, prompt, model_id="polyglot-ko-1.3b-chat"):
    url = "http://localhost:8001/generate"
    data = {
        "prompt": prompt,
        "model_id": model_id,
        "max_length": 200,
        "temperature": 0.7
    }
    
    async with session.post(url, data=data) as response:
        return await response.json()

async def batch_generate_async(prompts):
    async with aiohttp.ClientSession() as session:
        tasks = [async_generate_text(session, prompt) for prompt in prompts]
        results = await asyncio.gather(*tasks)
        return results

# ์‚ฌ์šฉ ์˜ˆ์ œ
prompts = ["์งˆ๋ฌธ1", "์งˆ๋ฌธ2", "์งˆ๋ฌธ3"]
results = asyncio.run(batch_generate_async(prompts))

4. Logging

import logging

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler('lily_llm_client.log'),
        logging.StreamHandler()
    ]
)

logger = logging.getLogger(__name__)

def generate_text_with_logging(prompt, model_id="polyglot-ko-1.3b-chat"):
    logger.info(f"Starting text generation: {prompt[:50]}...")

    try:
        result = generate_text(prompt, model_id)
        logger.info(f"Text generation succeeded: {len(result['generated_text'])} characters")
        return result
    except Exception as e:
        logger.error(f"Text generation failed: {e}")
        raise

๐Ÿ“ž ์ง€์›

๋„์›€๋ง ๋ฆฌ์†Œ์Šค

  • API ๋ฌธ์„œ: http://localhost:8001/docs
  • ReDoc ๋ฌธ์„œ: http://localhost:8001/redoc
  • GitHub Issues: ํ”„๋กœ์ ํŠธ ์ €์žฅ์†Œ์˜ Issues ์„น์…˜
  • ๋กœ๊ทธ ํŒŒ์ผ: ./logs/ ๋””๋ ‰ํ† ๋ฆฌ

๋””๋ฒ„๊น… ํŒ

  1. ๋กœ๊ทธ ํ™•์ธ: ํ•ญ์ƒ ๋กœ๊ทธ๋ฅผ ๋จผ์ € ํ™•์ธํ•˜์„ธ์š”
  2. ๋‹จ๊ณ„๋ณ„ ํ…Œ์ŠคํŠธ: ๋ณต์žกํ•œ ์š”์ฒญ์„ ์ž‘์€ ๋‹จ์œ„๋กœ ๋‚˜๋ˆ„์–ด ํ…Œ์ŠคํŠธํ•˜์„ธ์š”
  3. ๋„คํŠธ์›Œํฌ ํ™•์ธ: ๋ฐฉํ™”๋ฒฝ์ด๋‚˜ ํ”„๋ก์‹œ ์„ค์ •์„ ํ™•์ธํ•˜์„ธ์š”
  4. ๋ฆฌ์†Œ์Šค ๋ชจ๋‹ˆํ„ฐ๋ง: CPU, ๋ฉ”๋ชจ๋ฆฌ, ๋””์Šคํฌ ์‚ฌ์šฉ๋Ÿ‰์„ ์ฃผ๊ธฐ์ ์œผ๋กœ ํ™•์ธํ•˜์„ธ์š”

์„ฑ๋Šฅ ํŒ

  1. ์ ์ ˆํ•œ ๋ชจ๋ธ ์„ ํƒ: ์ž‘์—…์— ๋งž๋Š” ๋ชจ๋ธ์„ ์„ ํƒํ•˜์„ธ์š”
  2. ๋ฐฐ์น˜ ์ฒ˜๋ฆฌ: ์—ฌ๋Ÿฌ ์š”์ฒญ์„ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜์„ธ์š”
  3. ์บ์‹ฑ ํ™œ์šฉ: ๋ฐ˜๋ณต๋˜๋Š” ์š”์ฒญ์€ ์บ์‹œ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”
  4. ๋น„๋™๊ธฐ ์ฒ˜๋ฆฌ: ๋Œ€๋Ÿ‰์˜ ์š”์ฒญ์€ ๋น„๋™๊ธฐ๋กœ ์ฒ˜๋ฆฌํ•˜์„ธ์š”