# Lily LLM API User Guide

## Table of Contents
1. [Getting Started](#getting-started)
2. [Basic Features](#basic-features)
3. [Advanced Features](#advanced-features)
4. [Troubleshooting](#troubleshooting)
5. [Best Practices](#best-practices)
## Getting Started

### System Requirements
- **Minimum**:
  - CPU: 4 cores or more
  - RAM: 8 GB or more
  - Storage: 20 GB or more
  - GPU: optional (CUDA support improves performance)
- **Recommended**:
  - CPU: 8 cores or more
  - RAM: 16 GB or more
  - Storage: 50 GB or more
  - GPU: NVIDIA RTX 3060 or better (with CUDA support)
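Since the GPU is optional, it can help to confirm up front whether CUDA acceleration will actually be used. A minimal sketch, assuming PyTorch is the framework in use (it degrades gracefully when `torch` is not installed):

```python
# Check whether a CUDA-capable GPU is visible to PyTorch.
# If torch is not installed, report CUDA as unavailable instead of crashing.
try:
    import torch
    cuda_available = torch.cuda.is_available()
    device_name = torch.cuda.get_device_name(0) if cuda_available else "none"
except ImportError:
    cuda_available = False
    device_name = "none"

print(f"CUDA available: {cuda_available} (device: {device_name})")
```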
### Installation and Running

#### 1. Deploying with Docker (recommended)
```bash
# Clone the repository
git clone <repository-url>
cd lily_generate_package

# Run the deployment
chmod +x scripts/deploy.sh
./scripts/deploy.sh deploy

# Check the status
./scripts/deploy.sh status
```
#### 2. Local development environment
```bash
# Create a virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Download NLTK data
python -c "import nltk; nltk.download('punkt'); nltk.download('punkt_tab')"

# Start the server
python run_server_v2.py
```
### First Request
```bash
# Check server health
curl http://localhost:8001/health

# List available models
curl http://localhost:8001/models

# Generate a simple piece of text
curl -X POST http://localhost:8001/generate \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "prompt=Hello!&model_id=polyglot-ko-1.3b-chat&max_length=100"
```
## Basic Features

### 1. Text Generation

#### Simple text generation
```python
import requests

def generate_text(prompt, model_id="polyglot-ko-1.3b-chat"):
    url = "http://localhost:8001/generate"
    data = {
        "prompt": prompt,
        "model_id": model_id,
        "max_length": 200,
        "temperature": 0.7,
        "top_p": 0.9,
        "do_sample": True
    }
    response = requests.post(url, data=data)
    return response.json()

# Usage example
result = generate_text("Please explain the future of artificial intelligence.")
print(result["generated_text"])
```
#### Parameters
| Parameter | Description | Default | Range |
|-----------|-------------|---------|-------|
| `prompt` | Input text | required | - |
| `model_id` | Model to use | polyglot-ko-1.3b-chat | any model from the model list |
| `max_length` | Maximum number of tokens | 200 | 1-4000 |
| `temperature` | Controls creativity/randomness | 0.7 | 0.0-2.0 |
| `top_p` | Cumulative probability (nucleus) threshold | 0.9 | 0.0-1.0 |
| `do_sample` | Whether to use sampling | True | True/False |
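The ranges in the table can be enforced client-side before a request ever reaches the server. A small sketch (the helper name `build_generate_payload` is our own, not part of the API):

```python
def build_generate_payload(prompt, model_id="polyglot-ko-1.3b-chat",
                           max_length=200, temperature=0.7, top_p=0.9,
                           do_sample=True):
    """Validate parameters against the documented ranges and return the form data."""
    if not prompt:
        raise ValueError("prompt is required")
    if not 1 <= max_length <= 4000:
        raise ValueError("max_length must be between 1 and 4000")
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be between 0.0 and 2.0")
    if not 0.0 <= top_p <= 1.0:
        raise ValueError("top_p must be between 0.0 and 1.0")
    return {"prompt": prompt, "model_id": model_id, "max_length": max_length,
            "temperature": temperature, "top_p": top_p, "do_sample": do_sample}

# Lower temperature suits factual answers; higher suits creative ones
factual = build_generate_payload("What is HTTP?", temperature=0.2)
creative = build_generate_payload("Write a short poem.", temperature=1.2)
```

The returned dict can be passed directly as the `data=` argument of `requests.post`.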
### 2. Multimodal Processing

#### Processing images together with text
```python
def generate_multimodal(prompt, image_files, model_id="kanana-1.5-v-3b-instruct"):
    url = "http://localhost:8001/generate-multimodal"
    # Open every image; the handles are closed once the request completes
    files = [
        ('image_files', (f'image_{i}.jpg', open(path, 'rb'), 'image/jpeg'))
        for i, path in enumerate(image_files)
    ]
    data = {
        "prompt": prompt,
        "model_id": model_id,
        "max_length": 200,
        "temperature": 0.7
    }
    try:
        response = requests.post(url, files=files, data=data)
    finally:
        for _, (_, fh, _) in files:
            fh.close()
    return response.json()

# Usage example
result = generate_multimodal(
    "Please describe these images.",
    ["image1.jpg", "image2.jpg"]
)
print(result["generated_text"])
```
### 3. User Management

#### User registration and login
```python
def register_user(username, email, password):
    url = "http://localhost:8001/auth/register"
    data = {
        "username": username,
        "email": email,
        "password": password
    }
    response = requests.post(url, data=data)
    return response.json()

def login_user(username, password):
    url = "http://localhost:8001/auth/login"
    data = {
        "username": username,
        "password": password
    }
    response = requests.post(url, data=data)
    return response.json()

# Usage example
# 1. Register a user
register_result = register_user("testuser", "test@example.com", "password123")
access_token = register_result["access_token"]

# 2. Log in
login_result = login_user("testuser", "password123")
access_token = login_result["access_token"]
```

#### Requests that require authentication
```python
def authenticated_request(url, data, token):
    headers = {"Authorization": f"Bearer {token}"}
    response = requests.post(url, data=data, headers=headers)
    return response.json()

# Usage example
result = authenticated_request(
    "http://localhost:8001/generate",
    {"prompt": "Hello!", "model_id": "polyglot-ko-1.3b-chat"},
    access_token
)
```
## Advanced Features

### 1. Document Processing (RAG)

#### Uploading a document
```python
def upload_document(file_path, user_id, token=None):
    url = "http://localhost:8001/document/upload"
    with open(file_path, 'rb') as f:
        files = {'file': f}
        data = {'user_id': user_id}
        headers = {"Authorization": f"Bearer {token}"} if token else {}
        response = requests.post(url, files=files, data=data, headers=headers)
    return response.json()

# Usage example
result = upload_document("document.pdf", "user123", access_token)
document_id = result["document_id"]
```

#### RAG queries
```python
def rag_query(query, user_id, token=None):
    url = "http://localhost:8001/rag/generate"
    data = {
        "query": query,
        "user_id": user_id,
        "max_length": 300,
        "temperature": 0.7
    }
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    response = requests.post(url, data=data, headers=headers)
    return response.json()

# Usage example
result = rag_query("Tell me about the future of artificial intelligence.", "user123", access_token)
print(result["response"])
print("Sources:", result["sources"])
```

#### Hybrid RAG (images + documents)
```python
def hybrid_rag_query(query, image_files, user_id, token=None):
    url = "http://localhost:8001/rag/generate-hybrid"
    # Open every image; the handles are closed once the request completes
    files = [
        ('image_files', (f'image_{i}.jpg', open(path, 'rb'), 'image/jpeg'))
        for i, path in enumerate(image_files)
    ]
    data = {
        "query": query,
        "user_id": user_id,
        "max_length": 300,
        "temperature": 0.7
    }
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    try:
        response = requests.post(url, files=files, data=data, headers=headers)
    finally:
        for _, (_, fh, _) in files:
            fh.close()
    return response.json()
```
### 2. Chat Session Management

#### Creating sessions and managing messages
```python
def create_chat_session(user_id, session_name, token=None):
    url = "http://localhost:8001/session/create"
    data = {
        "user_id": user_id,
        "session_name": session_name
    }
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    response = requests.post(url, data=data, headers=headers)
    return response.json()

def add_chat_message(session_id, user_id, content, token=None):
    url = "http://localhost:8001/chat/message"
    data = {
        "session_id": session_id,
        "user_id": user_id,
        "message_type": "text",
        "content": content
    }
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    response = requests.post(url, data=data, headers=headers)
    return response.json()

def get_chat_history(session_id, token=None):
    url = f"http://localhost:8001/chat/history/{session_id}"
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    response = requests.get(url, headers=headers)
    return response.json()

# Usage example
# 1. Create a session
session_result = create_chat_session("user123", "AI consultation", access_token)
session_id = session_result["session_id"]

# 2. Add a message
add_chat_message(session_id, "user123", "Hello!", access_token)

# 3. Read the chat history
history = get_chat_history(session_id, access_token)
for message in history:
    print(f"{message['timestamp']}: {message['content']}")
```
### 3. Background Tasks

#### Document processing tasks
```python
import time

def start_document_processing(file_path, user_id, token=None):
    url = "http://localhost:8001/tasks/document/process"
    data = {
        "file_path": file_path,
        "user_id": user_id
    }
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    response = requests.post(url, data=data, headers=headers)
    return response.json()

def check_task_status(task_id, token=None):
    url = f"http://localhost:8001/tasks/{task_id}"
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    response = requests.get(url, headers=headers)
    return response.json()

# Usage example
# 1. Start a task
task_result = start_document_processing("/path/to/document.pdf", "user123", access_token)
task_id = task_result["task_id"]

# 2. Poll the task status
while True:
    status = check_task_status(task_id, access_token)
    print(f"Status: {status['status']}, progress: {status.get('progress', 0)}%")
    if status['status'] in ['SUCCESS', 'FAILURE']:
        break
    time.sleep(5)
```
### 4. Monitoring

#### Performance monitoring
```python
def start_monitoring():
    url = "http://localhost:8001/monitoring/start"
    response = requests.post(url)
    return response.json()

def get_monitoring_status():
    url = "http://localhost:8001/monitoring/status"
    response = requests.get(url)
    return response.json()

def get_system_health():
    url = "http://localhost:8001/monitoring/health"
    response = requests.get(url)
    return response.json()

# Usage example
# 1. Start monitoring
start_monitoring()

# 2. Check the current metrics
status = get_monitoring_status()
print(f"CPU usage: {status['current_metrics']['cpu_percent']}%")
print(f"Memory usage: {status['current_metrics']['memory_percent']}%")

# 3. System health
health = get_system_health()
print(f"System status: {health['status']}")
for recommendation in health['recommendations']:
    print(f"Recommendation: {recommendation}")
```
## WebSocket Real-Time Chat

### WebSocket client
```javascript
class LilyLLMWebSocket {
    constructor(userId) {
        this.userId = userId;
        this.ws = null;
        this.messageHandlers = [];
    }

    connect() {
        this.ws = new WebSocket(`ws://localhost:8001/ws/${this.userId}`);
        this.ws.onopen = () => {
            console.log('WebSocket connected');
        };
        this.ws.onmessage = (event) => {
            const data = JSON.parse(event.data);
            this.handleMessage(data);
        };
        this.ws.onclose = () => {
            console.log('WebSocket connection closed');
        };
        this.ws.onerror = (error) => {
            console.error('WebSocket error:', error);
        };
    }

    sendMessage(message, sessionId) {
        if (this.ws && this.ws.readyState === WebSocket.OPEN) {
            this.ws.send(JSON.stringify({
                type: 'chat',
                message: message,
                session_id: sessionId
            }));
        }
    }

    addMessageHandler(handler) {
        this.messageHandlers.push(handler);
    }

    handleMessage(data) {
        this.messageHandlers.forEach(handler => handler(data));
    }

    disconnect() {
        if (this.ws) {
            this.ws.close();
        }
    }
}

// Usage example
const wsClient = new LilyLLMWebSocket('user123');
wsClient.connect();
wsClient.addMessageHandler((data) => {
    console.log('Message received:', data);
});
wsClient.sendMessage('Hello!', 'session123');
```
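The same chat frame can be produced from Python. A minimal sketch of the payload format (a full Python client could pair this with the third-party `websockets` package; that pairing is an assumption, not a stated project dependency):

```python
import json

def build_chat_frame(message, session_id):
    """Build the same JSON frame the JavaScript client's sendMessage produces."""
    return json.dumps({
        "type": "chat",
        "message": message,
        "session_id": session_id,
    })

frame = build_chat_frame("Hello!", "session123")
print(frame)
```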
## Troubleshooting

### Common Issues

#### 1. Server connection failure
**Symptom**: `Connection refused` or `Failed to establish a new connection`

**Solution**:
```bash
# Check server health
curl http://localhost:8001/health

# Restart the server
./scripts/deploy.sh restart

# Inspect the logs
./scripts/deploy.sh logs
```
#### 2. Out of memory
**Symptom**: `Out of memory` errors or slow responses

**Solution**:
```bash
# Check memory usage
docker stats

# Clean up unused containers and images
docker system prune -f
```

Set resource limits in `docker-compose.yml`:
```yaml
services:
  lily-llm-api:
    deploy:
      resources:
        limits:
          memory: 4G
```
#### 3. Model loading failure
**Symptom**: `Model not found` or other model-related errors

**Solution**:
```bash
# Check the model list
curl http://localhost:8001/models

# Check the model files
ls -la models/

# Restart the server
./scripts/deploy.sh restart
```
#### 4. Authentication errors
**Symptom**: `401 Unauthorized` or `403 Forbidden`

**Solution**:
```python
# Refresh the tokens
def refresh_token(refresh_token):
    url = "http://localhost:8001/auth/refresh"
    data = {"refresh_token": refresh_token}
    response = requests.post(url, data=data)
    return response.json()

# Retry the request with the new tokens
new_tokens = refresh_token(old_refresh_token)
access_token = new_tokens["access_token"]
```
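A common pattern is to refresh once and retry automatically when a 401 occurs. A sketch with the request and refresh steps injected as callables, so it is independent of any particular endpoint (the wrapper name `call_with_refresh` is ours, not part of the API):

```python
def call_with_refresh(do_request, do_refresh, tokens):
    """Run `do_request(access_token)`; on a 401, refresh once and retry.

    do_request(access_token) -> (status_code, body)
    do_refresh(refresh_token) -> dict with new "access_token"/"refresh_token"
    """
    status, body = do_request(tokens["access_token"])
    if status == 401:
        tokens = do_refresh(tokens["refresh_token"])
        status, body = do_request(tokens["access_token"])
    return status, body
```

In practice `do_request` would wrap `requests.post` with the `Authorization: Bearer` header and return `(response.status_code, response.json())`, and `do_refresh` would be the `refresh_token` function above.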
### Performance Optimization

#### 1. Batch processing
```python
def batch_generate_texts(prompts, model_id="polyglot-ko-1.3b-chat"):
    # Note: requests are issued sequentially; see the asynchronous
    # processing example for concurrent requests
    results = []
    for prompt in prompts:
        result = generate_text(prompt, model_id)
        results.append(result)
    return results

# Usage example
prompts = [
    "This is the first question.",
    "This is the second question.",
    "This is the third question."
]
results = batch_generate_texts(prompts)
```
#### 2. Caching
```python
import hashlib
import json

import redis

class CachedLilyLLMClient:
    def __init__(self, base_url="http://localhost:8001"):
        self.base_url = base_url
        self.redis_client = redis.Redis(host='localhost', port=6379, db=0)

    def generate_text_with_cache(self, prompt, model_id="polyglot-ko-1.3b-chat"):
        # Build a stable cache key. Python's built-in hash() is randomized
        # per process, so a cryptographic digest is used instead
        key_material = f"{prompt}:{model_id}".encode()
        cache_key = "text_gen:" + hashlib.sha256(key_material).hexdigest()
        # Check the cache first
        cached_result = self.redis_client.get(cache_key)
        if cached_result:
            return json.loads(cached_result)
        # Call the API
        result = generate_text(prompt, model_id)
        # Store in the cache for one hour
        self.redis_client.setex(cache_key, 3600, json.dumps(result))
        return result
```
## Best Practices

### 1. Error handling
```python
import requests
from requests.exceptions import RequestException

def safe_api_call(func, *args, **kwargs):
    try:
        return func(*args, **kwargs)
    except RequestException as e:
        print(f"Network error: {e}")
        return None
    except Exception as e:
        print(f"Unexpected error: {e}")
        return None

# Usage example
result = safe_api_call(generate_text, "Hello!")
if result:
    print(result["generated_text"])
```
### 2. Retry logic
```python
import time
from functools import wraps

def retry_on_failure(max_retries=3, delay=1):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == max_retries - 1:
                        raise  # re-raise with the original traceback
                    print(f"Attempt {attempt + 1} failed, retrying in {delay}s...")
                    time.sleep(delay)
            return None
        return wrapper
    return decorator

# Usage example
@retry_on_failure(max_retries=3, delay=2)
def robust_generate_text(prompt):
    return generate_text(prompt)
```
### 3. Asynchronous processing
```python
import asyncio

import aiohttp

async def async_generate_text(session, prompt, model_id="polyglot-ko-1.3b-chat"):
    url = "http://localhost:8001/generate"
    data = {
        "prompt": prompt,
        "model_id": model_id,
        "max_length": 200,
        "temperature": 0.7
    }
    async with session.post(url, data=data) as response:
        return await response.json()

async def batch_generate_async(prompts):
    async with aiohttp.ClientSession() as session:
        tasks = [async_generate_text(session, prompt) for prompt in prompts]
        results = await asyncio.gather(*tasks)
        return results

# Usage example
prompts = ["Question 1", "Question 2", "Question 3"]
results = asyncio.run(batch_generate_async(prompts))
```
### 4. Logging
```python
import logging

# Configure logging to both a file and the console
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler('lily_llm_client.log'),
        logging.StreamHandler()
    ]
)
logger = logging.getLogger(__name__)

def generate_text_with_logging(prompt, model_id="polyglot-ko-1.3b-chat"):
    logger.info(f"Starting text generation: {prompt[:50]}...")
    try:
        result = generate_text(prompt, model_id)
        logger.info(f"Text generation succeeded: {len(result['generated_text'])} characters")
        return result
    except Exception as e:
        logger.error(f"Text generation failed: {e}")
        raise
```
## Support

### Help Resources
- **API documentation**: `http://localhost:8001/docs`
- **ReDoc documentation**: `http://localhost:8001/redoc`
- **GitHub Issues**: the Issues section of the project repository
- **Log files**: the `./logs/` directory

### Debugging Tips
1. **Check the logs**: always look at the logs first
2. **Test step by step**: break complex requests into smaller units and test each one
3. **Check the network**: verify firewall and proxy settings
4. **Monitor resources**: check CPU, memory, and disk usage periodically
| ### μ±λ₯ ν | |
| 1. **μ μ ν λͺ¨λΈ μ ν**: μμ μ λ§λ λͺ¨λΈμ μ ννμΈμ | |
| 2. **λ°°μΉ μ²λ¦¬**: μ¬λ¬ μμ²μ ν λ²μ μ²λ¦¬νμΈμ | |
| 3. **μΊμ± νμ©**: λ°λ³΅λλ μμ²μ μΊμλ₯Ό μ¬μ©νμΈμ | |
| 4. **λΉλκΈ° μ²λ¦¬**: λλμ μμ²μ λΉλκΈ°λ‘ μ²λ¦¬νμΈμ |