BladeSzaSza committed
Commit f2cfe15 · 1 Parent(s): b5ef886

initial commit
.claude/settings.local.json ADDED
@@ -0,0 +1,8 @@
+ {
+   "permissions": {
+     "allow": [
+       "Bash(mkdir:*)"
+     ],
+     "deny": []
+   }
+ }
.gitignore ADDED
@@ -0,0 +1,74 @@
+ # Python
+ __pycache__/
+ *.py[cod]
+ *$py.class
+ *.so
+ .Python
+ build/
+ develop-eggs/
+ dist/
+ downloads/
+ eggs/
+ .eggs/
+ lib/
+ lib64/
+ parts/
+ sdist/
+ var/
+ wheels/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+ MANIFEST
+
+ # Virtual Environment
+ venv/
+ ENV/
+ env/
+ .venv
+
+ # IDE
+ .idea/
+ .vscode/
+ *.swp
+ *.swo
+ *~
+
+ # Data files
+ data/saves/
+ data/monsters/
+ data/models/
+ data/cache/
+ *.db
+ *.sqlite
+
+ # Logs
+ logs/
+ *.log
+
+ # Model files
+ *.bin
+ *.pth
+ *.pt
+ *.gguf
+ *.safetensors
+
+ # OS
+ .DS_Store
+ Thumbs.db
+
+ # Environment
+ .env
+ .env.local
+
+ # Jupyter
+ .ipynb_checkpoints/
+
+ # Testing
+ .pytest_cache/
+ .coverage
+ htmlcov/
+
+ # HuggingFace
+ wandb/
+ runs/
CLAUDE.md ADDED
@@ -0,0 +1,45 @@
+ # CLAUDE.md
+
+ This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
+
+ ## Project Overview
+
+ DigiPal is a Gradio-based web application designed as a "Digital friend of the next era". This is a Hugging Face Space application that creates a simple interactive interface.
+
+ ## Architecture
+
+ - **Framework**: Gradio 5.34.2 for the web interface
+ - **Language**: Python
+ - **Deployment**: Hugging Face Spaces
+ - **License**: Apache 2.0
+
+ ## Core Structure
+
+ - `app.py` - Main application entry point containing the Gradio interface
+ - `README.md` - Hugging Face Space configuration and metadata
+
+ ## Development Commands
+
+ Since this is a simple Gradio application, development is straightforward:
+
+ ```bash
+ # Run the application locally
+ python app.py
+ ```
+
+ The application will launch a Gradio interface accessible via web browser.
+
+ ## Key Implementation Details
+
+ The application currently implements a basic greeting function through a Gradio interface. The main components are:
+ - Simple text input/output interface
+ - Gradio demo that launches on execution
+ - Hugging Face Space configuration for deployment
+
+ ## Hugging Face Space Configuration
+
+ The project is configured as a Hugging Face Space with:
+ - SDK: Gradio 5.34.2
+ - App file: app.py
+ - Color theme: red to pink gradient
+ - Emoji: 😻
Dockerfile ADDED
@@ -0,0 +1,37 @@
+ FROM python:3.11-slim
+
+ # Set environment variables
+ ENV PYTHONUNBUFFERED=1
+ ENV PYTHONDONTWRITEBYTECODE=1
+ ENV TRANSFORMERS_CACHE=/app/data/cache
+ ENV HF_HOME=/app/data/cache
+
+ # Install system dependencies (curl is required by the HEALTHCHECK below)
+ RUN apt-get update && apt-get install -y \
+     git curl \
+     ffmpeg \
+     libsndfile1 \
+     && rm -rf /var/lib/apt/lists/*
+
+ # Set working directory
+ WORKDIR /app
+
+ # Copy requirements and install Python dependencies
+ COPY requirements.txt .
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ # Copy application code
+ COPY . .
+
+ # Create necessary directories
+ RUN mkdir -p data/saves data/models data/cache logs config
+
+ # Expose port
+ EXPOSE 7860
+
+ # Health check
+ HEALTHCHECK --interval=30s --timeout=30s --start-period=60s --retries=3 \
+     CMD curl -f http://localhost:7860/health || exit 1
+
+ # Run the application
+ CMD ["python", "app.py"]
README.md CHANGED
@@ -1,14 +1,74 @@
  ---
- title: DigiPal
- emoji: 😻
- colorFrom: red
- colorTo: pink
+ title: DigiPal Advanced Monster Companion
+ emoji: 🐾
+ colorFrom: purple
+ colorTo: blue
  sdk: gradio
  sdk_version: 5.34.2
  app_file: app.py
  pinned: false
- license: apache-2.0
- short_description: Digital friend of the next era
+ license: mit
+ models:
+   - Qwen/Qwen2.5-1.5B-Instruct
+   - openai/whisper-base
+ datasets: []
+ tags:
+   - gaming
+   - ai-companion
+   - monster-raising
+   - conversation
+   - speech-recognition
+ suggested_hardware: t4-medium
+ suggested_storage: medium
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # 🐾 DigiPal - Advanced AI Monster Companion
+
+ The next generation of virtual monster companions powered by **Qwen 2.5**, **Whisper**, and advanced AI technologies. Experience deep emotional connections with your digital pet through natural conversation, comprehensive care systems, and sophisticated evolution mechanics.
+
+ ## ✨ Features
+
+ ### 🧠 Advanced AI Personality System
+ - **Qwen 2.5-powered conversations** with contextual memory
+ - **Dynamic personality traits** that evolve with care
+ - **Emotional state recognition** and appropriate responses
+ - **Voice chat support** with Whisper speech recognition
+
+ ### 🎮 Comprehensive Monster Care
+ - **Six-dimensional care system** (health, happiness, hunger, energy, discipline, cleanliness)
+ - **Real-time stat degradation** that continues even when offline
+ - **Complex evolution requirements** inspired by classic monster-raising games
+ - **Training mini-games** that affect monster development
+
+ ### 🌟 Next-Generation Features
+ - **Cross-session persistence** with browser state management
+ - **Real-time streaming updates** using Gradio 5.34.2
+ - **Zero GPU optimization** for efficient resource usage
+ - **Advanced breeding system** with genetic inheritance
+
+ ## 🚀 Technology Stack
+
+ - **HuggingFace Transformers v4.52.4** with Flash Attention 2
+ - **Gradio 5.34.2** with modern state management
+ - **Qwen 2.5 models** optimized for conversation
+ - **Faster Whisper** for efficient speech processing
+ - **Zero GPU deployment** for scalable AI inference
+
+ ## 🎯 Getting Started
+
+ 1. **Create Your Monster**: Choose a name and personality type
+ 2. **Start Caring**: Feed, train, and interact with your companion
+ 3. **Build Relationships**: Use voice or text chat to bond
+ 4. **Watch Evolution**: Meet requirements to unlock new forms
+ 5. **Explore Breeding**: Combine monsters for unique offspring
+
+ ## 💡 Tips for Best Experience
+
+ - **Regular interaction** builds stronger relationships
+ - **Balanced care** prevents evolution mistakes
+ - **Voice chat** creates deeper emotional connections
+ - **Training variety** unlocks special evolution paths
+
+ ---
+
+ *Experience the future of AI companionship with DigiPal!*
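The "real-time stat degradation that continues even when offline" described above is most naturally implemented as elapsed-time decay applied when a save is loaded. The sketch below illustrates the idea only; the stat names come from the README, but the decay rates and function name are illustrative assumptions, not values from this repository.

```python
# Illustrative decay rates in points per hour -- assumed, not DigiPal's actual tuning.
DECAY_PER_HOUR = {"hunger": 5.0, "energy": 3.0, "happiness": 2.0}

def apply_offline_decay(stats: dict, last_seen: float, now: float) -> dict:
    """Degrade stats for the wall-clock time elapsed since the monster was last seen."""
    hours = max(0.0, (now - last_seen) / 3600.0)
    updated = dict(stats)
    for name, rate in DECAY_PER_HOUR.items():
        # Clamp at zero so a long absence cannot drive a stat negative.
        updated[name] = max(0.0, updated.get(name, 0.0) - rate * hours)
    return updated

stats = {"hunger": 80.0, "energy": 60.0, "happiness": 90.0}
# Simulate a save file last touched 10 hours ago.
print(apply_offline_decay(stats, last_seen=0.0, now=10 * 3600.0))
# → {'hunger': 30.0, 'energy': 30.0, 'happiness': 70.0}
```

Applying decay lazily on load (rather than running a background clock) is what makes the "even when offline" behavior cheap to support.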
app.py CHANGED
@@ -1,7 +1,177 @@
- import gradio as gr
+ #!/usr/bin/env python3
+ """
+ DigiPal - Advanced AI Monster Companion
+ Built with HuggingFace Transformers v4.52.4 & Gradio 5.34.2
+ Optimized for Qwen 2.5 models and Zero GPU deployment
+ """

- def greet(name):
-     return "Hello " + name + "!!"
+ import os
+ import sys
+ import logging
+ import asyncio
+ from pathlib import Path
+ import signal
+ from typing import Dict, Any

- demo = gr.Interface(fn=greet, inputs="text", outputs="text")
- demo.launch()
+ # Add src to Python path
+ sys.path.insert(0, str(Path(__file__).parent / "src"))
+
+ # Import core components
+ from src.ui.gradio_interface import ModernDigiPalInterface
+ from src.deployment.zero_gpu_optimizer import ZeroGPUOptimizer
+ from src.utils.performance_tracker import PerformanceTracker
+
+ def setup_logging(log_level: str = "INFO"):
+     """Setup comprehensive logging configuration"""
+
+     # Create logs directory
+     logs_dir = Path("logs")
+     logs_dir.mkdir(exist_ok=True)
+
+     # Configure logging
+     logging.basicConfig(
+         level=getattr(logging, log_level.upper()),
+         format='%(asctime)s - %(name)s - %(levelname)s - %(funcName)s:%(lineno)d - %(message)s',
+         handlers=[
+             logging.FileHandler(logs_dir / "digipal.log"),
+             logging.StreamHandler(sys.stdout)
+         ]
+     )
+
+     # Set specific logger levels
+     logging.getLogger("transformers").setLevel(logging.WARNING)
+     logging.getLogger("torch").setLevel(logging.WARNING)
+     logging.getLogger("gradio").setLevel(logging.INFO)
+
+ def setup_environment():
+     """Setup environment variables and configurations"""
+
+     # Create necessary directories
+     directories = [
+         "data/saves",
+         "data/monsters",
+         "data/models",
+         "data/cache",
+         "logs",
+         "config"
+     ]
+
+     for directory in directories:
+         Path(directory).mkdir(parents=True, exist_ok=True)
+
+     # Set environment variables for optimization
+     os.environ["TOKENIZERS_PARALLELISM"] = "false"  # Avoid tokenizer warnings
+     os.environ["TRANSFORMERS_CACHE"] = str(Path("data/cache").absolute())
+     os.environ["HF_HOME"] = str(Path("data/cache").absolute())
+
+     # CUDA optimization settings
+     if "CUDA_VISIBLE_DEVICES" not in os.environ:
+         os.environ["CUDA_VISIBLE_DEVICES"] = "0"
+
+     # Memory optimization
+     os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
+
+ async def initialize_components() -> Dict[str, Any]:
+     """Initialize all application components"""
+     logger = logging.getLogger(__name__)
+
+     try:
+         # Initialize performance tracker
+         performance_tracker = PerformanceTracker()
+         await performance_tracker.initialize()
+
+         # Initialize GPU optimizer
+         gpu_optimizer = ZeroGPUOptimizer()
+         resources = await gpu_optimizer.detect_available_resources()
+
+         logger.info(f"Detected resources: {resources}")
+
+         # Initialize main interface
+         interface = ModernDigiPalInterface()
+         await interface.initialize()
+
+         return {
+             "interface": interface,
+             "performance_tracker": performance_tracker,
+             "gpu_optimizer": gpu_optimizer,
+             "resources": resources
+         }
+
+     except Exception as e:
+         logger.error(f"Component initialization failed: {e}")
+         raise
+
+ def handle_shutdown(signum, frame):
+     """Handle graceful shutdown"""
+     logger = logging.getLogger(__name__)
+     logger.info(f"Received signal {signum}, shutting down gracefully...")
+
+     # Cleanup operations would go here
+     sys.exit(0)
+
+ def main():
+     """Main application entry point"""
+
+     # Setup signal handlers
+     signal.signal(signal.SIGINT, handle_shutdown)
+     signal.signal(signal.SIGTERM, handle_shutdown)
+
+     # Setup environment
+     setup_logging(os.getenv("LOG_LEVEL", "INFO"))
+     setup_environment()
+
+     logger = logging.getLogger(__name__)
+     logger.info("Starting DigiPal Advanced Monster Companion...")
+
+     try:
+         # Check Python version
+         if sys.version_info < (3, 8):
+             raise RuntimeError("Python 3.8 or higher is required")
+
+         # Initialize async event loop
+         if sys.platform == "win32":
+             asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())
+
+         loop = asyncio.new_event_loop()
+         asyncio.set_event_loop(loop)
+
+         # Initialize components
+         components = loop.run_until_complete(initialize_components())
+
+         logger.info("All components initialized successfully")
+
+         # Launch configuration based on environment
+         launch_config = {
+             "server_name": os.getenv("SERVER_NAME", "0.0.0.0"),
+             "server_port": int(os.getenv("SERVER_PORT", "7860")),
+             "share": os.getenv("SHARE", "false").lower() == "true",
+             "debug": os.getenv("DEBUG", "false").lower() == "true",
+             "show_error": True,
+             # Queueing is on by default in Gradio 5; enable_queue is no longer a launch() kwarg
+             "max_threads": int(os.getenv("MAX_THREADS", "40")),
+             "auth": None  # Can be configured for production
+         }
+
+         # Add SSL configuration for production
+         if os.getenv("SSL_ENABLED", "false").lower() == "true":
+             launch_config.update({
+                 "ssl_keyfile": os.getenv("SSL_KEYFILE"),
+                 "ssl_certfile": os.getenv("SSL_CERTFILE"),
+                 "ssl_keyfile_password": os.getenv("SSL_PASSWORD")
+             })
+
+         logger.info(f"Launching interface with config: {launch_config}")
+
+         # Launch the interface
+         components["interface"].launch(**launch_config)
+
+     except KeyboardInterrupt:
+         logger.info("Application stopped by user")
+     except Exception as e:
+         logger.error(f"Application failed to start: {e}")
+         raise
+     finally:
+         logger.info("DigiPal application shutdown complete")
+
+ if __name__ == "__main__":
+     main()
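The launch configuration in `main()` is resolved entirely from environment variables with string defaults. A minimal standalone sketch of that resolution logic (same variable names as the diff above, but taking an explicit dict instead of `os.environ` so it can be exercised without touching the process environment):

```python
def build_launch_config(env: dict) -> dict:
    """Mirror app.py's env-driven launch settings (illustrative subset)."""
    return {
        "server_name": env.get("SERVER_NAME", "0.0.0.0"),
        # Env vars are strings, so numeric settings must be converted explicitly.
        "server_port": int(env.get("SERVER_PORT", "7860")),
        # Booleans are encoded as "true"/"false" strings; compare case-insensitively.
        "share": env.get("SHARE", "false").lower() == "true",
        "debug": env.get("DEBUG", "false").lower() == "true",
    }

print(build_launch_config({}))  # all defaults
print(build_launch_config({"SERVER_PORT": "8080", "SHARE": "TRUE"}))
```

Note that the `.lower() == "true"` pattern means any value other than a case variant of `true` (including `1` or `yes`) is treated as false, which is worth documenting for deployers.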
requirements.txt ADDED
@@ -0,0 +1,49 @@
+ # Core ML Framework - Latest optimized versions
+ transformers==4.52.4
+ torch>=2.2.0
+ torchaudio>=2.2.0
+ gradio==5.34.2
+
+ # Qwen 2.5 Optimization Stack
+ auto-gptq>=0.7.1
+ optimum>=1.16.0
+ accelerate>=0.26.1
+ bitsandbytes>=0.42.0
+
+ # Enhanced Audio Processing
+ faster-whisper>=1.0.0
+ librosa>=0.10.1
+ soundfile>=0.12.1
+ webrtcvad>=2.0.10
+
+ # Production Backend
+ fastapi>=0.108.0
+ uvicorn[standard]>=0.25.0
+ pydantic>=2.5.0
+
+ # Advanced State Management
+ apscheduler>=3.10.4
+ aiosqlite>=0.19.0
+
+ # Zero GPU Optimization
+ spaces>=0.28.0
+
+ # Core Utilities
+ numpy>=1.24.0
+ pandas>=2.1.0
+ pillow>=10.1.0
+ python-dateutil>=2.8.2
+ emoji>=2.8.0
+ psutil>=5.9.0
+
+ # Async Support
+ aiofiles>=23.2.0
+ asyncio-mqtt>=0.16.1
+
+ # Scientific Computing
+ scipy>=1.11.0
+ scikit-learn>=1.3.0
+
+ # Development Tools
+ pytest>=7.4.0
+ black>=23.0.0
src/ai/__init__.py ADDED
@@ -0,0 +1 @@
+ # AI module initialization
src/ai/qwen_processor.py ADDED
@@ -0,0 +1,395 @@
+ import torch
+ from transformers import (
+     AutoModelForCausalLM,
+     AutoTokenizer,
+     pipeline,
+     BitsAndBytesConfig
+ )
+ from transformers import GPTQConfig  # GPTQConfig lives in transformers, not optimum.gptq
+ import asyncio
+ import logging
+ from typing import Dict, List, Optional, Any
+ import json
+ import time
+ from dataclasses import dataclass
+
+ @dataclass
+ class ModelConfig:
+     model_name: str
+     max_memory_gb: float
+     inference_speed: str  # "fast", "balanced", "quality"
+     use_quantization: bool = True
+     use_flash_attention: bool = True
+
+ class QwenProcessor:
+     def __init__(self, config: ModelConfig):
+         self.config = config
+         self.logger = logging.getLogger(__name__)
+
+         # Optimized model configurations
+         self.model_configs = {
+             "fast": {
+                 "model_name": "Qwen/Qwen2.5-0.5B-Instruct",
+                 "torch_dtype": torch.float16,
+                 "device_map": "auto",
+                 "attn_implementation": "flash_attention_2"
+             },
+             "balanced": {
+                 "model_name": "Qwen/Qwen2.5-1.5B-Instruct",
+                 "torch_dtype": torch.bfloat16,
+                 "device_map": "auto",
+                 "attn_implementation": "flash_attention_2"
+             },
+             "quality": {
+                 "model_name": "Qwen/Qwen2.5-3B-Instruct",
+                 "torch_dtype": torch.bfloat16,
+                 "device_map": "sequential",
+                 "attn_implementation": "flash_attention_2"
+             }
+         }
+
+         self.model = None
+         self.tokenizer = None
+         self.pipeline = None
+         self.conversation_cache = {}
+
+         # Performance tracking
+         self.inference_times = []
+         self.memory_usage = []
+
+     async def initialize(self):
+         """Initialize the Qwen 2.5 model with optimizations"""
+         try:
+             model_config = self.model_configs[self.config.inference_speed]
+
+             # Quantization configuration
+             if self.config.use_quantization:
+                 quantization_config = BitsAndBytesConfig(
+                     load_in_4bit=True,
+                     bnb_4bit_compute_dtype=torch.bfloat16,
+                     bnb_4bit_use_double_quant=True,
+                     bnb_4bit_quant_type="nf4"
+                 )
+             else:
+                 quantization_config = None
+
+             # Load tokenizer
+             self.tokenizer = AutoTokenizer.from_pretrained(
+                 model_config["model_name"],
+                 trust_remote_code=True,
+                 use_fast=True
+             )
+
+             # Load model with optimizations
+             self.model = AutoModelForCausalLM.from_pretrained(
+                 model_config["model_name"],
+                 torch_dtype=model_config["torch_dtype"],
+                 device_map=model_config["device_map"],
+                 trust_remote_code=True,
+                 attn_implementation=model_config["attn_implementation"] if self.config.use_flash_attention else None,
+                 quantization_config=quantization_config,
+                 use_cache=True,
+                 low_cpu_mem_usage=True
+             )
+
+             # Enable optimizations
+             if hasattr(self.model, "to_bettertransformer"):
+                 self.model = self.model.to_bettertransformer()
+
+             # Compile model for faster inference (PyTorch 2.0+)
+             if hasattr(torch, "compile") and torch.cuda.is_available():
+                 self.model = torch.compile(self.model, mode="reduce-overhead")
+
+             # Create pipeline
+             self.pipeline = pipeline(
+                 "text-generation",
+                 model=self.model,
+                 tokenizer=self.tokenizer,
+                 device_map="auto",
+                 batch_size=1,
+                 return_full_text=False
+             )
+
+             self.logger.info(f"Qwen 2.5 model initialized: {model_config['model_name']}")
+
+         except Exception as e:
+             self.logger.error(f"Failed to initialize Qwen model: {e}")
+             raise
+
+     async def generate_monster_response(self,
+                                         monster_data: Dict[str, Any],
+                                         user_input: str,
+                                         conversation_history: List[Dict[str, str]] = None) -> Dict[str, Any]:
+         """Generate contextual monster response using Qwen 2.5"""
+         start_time = time.time()
+
+         try:
+             # Build monster personality context
+             personality_prompt = self._build_personality_prompt(monster_data)
+
+             # Create conversation context
+             conversation_context = self._build_conversation_context(
+                 conversation_history or [], monster_data
+             )
+
+             # Build system prompt
+             system_prompt = f"""You are {monster_data['name']}, a virtual monster companion.
+
+ {personality_prompt}
+
+ Current State:
+ - Health: {monster_data['stats']['health']}/100
+ - Happiness: {monster_data['stats']['happiness']}/100
+ - Energy: {monster_data['stats']['energy']}/100
+ - Emotional State: {monster_data['emotional_state']}
+ - Activity: {monster_data['current_activity']}
+
+ Instructions:
+ - Respond as this specific monster with this personality
+ - Keep responses to 1-2 sentences maximum
+ - Include 1-2 relevant emojis
+ - Show personality through word choice and tone
+ - React appropriately to your current stats and emotional state
+ - Remember past conversations and build on them
+
+ {conversation_context}"""
+
+             # Format messages for Qwen 2.5
+             messages = [
+                 {"role": "system", "content": system_prompt},
+                 {"role": "user", "content": user_input}
+             ]
+
+             # Generate response
+             prompt = self.tokenizer.apply_chat_template(
+                 messages,
+                 tokenize=False,
+                 add_generation_prompt=True
+             )
+
+             # Optimized generation parameters
+             generation_kwargs = {
+                 "max_new_tokens": 128,
+                 "temperature": 0.8,
+                 "top_p": 0.9,
+                 "top_k": 50,
+                 "do_sample": True,
+                 "pad_token_id": self.tokenizer.eos_token_id,
+                 "repetition_penalty": 1.1,
+                 "no_repeat_ngram_size": 3
+             }
+
+             # Generate with error handling
+             outputs = self.pipeline(prompt, **generation_kwargs)
+             response_text = outputs[0]["generated_text"].strip()
+
+             # Post-process response
+             processed_response = self._post_process_response(response_text, monster_data)
+
+             # Track performance
+             inference_time = time.time() - start_time
+             self.inference_times.append(inference_time)
+
+             # Analyze response for emotional impact
+             emotional_impact = self._analyze_emotional_impact(user_input, processed_response)
+
+             return {
+                 "response": processed_response,
+                 "inference_time": inference_time,
+                 "emotional_impact": emotional_impact,
+                 "confidence": 0.85,  # Placeholder for confidence scoring
+                 "model_info": {
+                     "model_name": self.config.model_name,
+                     "inference_speed": self.config.inference_speed
+                 }
+             }
+
+         except Exception as e:
+             self.logger.error(f"Response generation failed: {e}")
+             return {
+                 "response": self._get_fallback_response(monster_data),
+                 "inference_time": time.time() - start_time,
+                 "emotional_impact": {"happiness": 0.1},
+                 "confidence": 0.1,
+                 "error": str(e)
+             }
+
+     def _build_personality_prompt(self, monster_data: Dict[str, Any]) -> str:
+         """Build personality description for the monster"""
+         personality = monster_data.get('personality', {})
+
+         # Core personality traits
+         primary_type = personality.get('primary_type', 'playful')
+         traits = []
+
+         # Big Five personality factors
+         if personality.get('extraversion', 0.5) > 0.7:
+             traits.append("very outgoing and social")
+         elif personality.get('extraversion', 0.5) < 0.3:
+             traits.append("more reserved and introspective")
+
+         if personality.get('agreeableness', 0.5) > 0.7:
+             traits.append("extremely friendly and cooperative")
+         elif personality.get('agreeableness', 0.5) < 0.3:
+             traits.append("more independent and sometimes stubborn")
+
+         if personality.get('conscientiousness', 0.5) > 0.7:
+             traits.append("very disciplined and organized")
+         elif personality.get('conscientiousness', 0.5) < 0.3:
+             traits.append("more spontaneous and carefree")
+
+         if personality.get('openness', 0.5) > 0.7:
+             traits.append("very curious and imaginative")
+         elif personality.get('openness', 0.5) < 0.3:
+             traits.append("more practical and traditional")
+
+         # Learned preferences
+         favorites = personality.get('favorite_foods', [])
+         dislikes = personality.get('disliked_foods', [])
+
+         personality_text = f"Personality Type: {primary_type.title()}\n"
+
+         if traits:
+             personality_text += f"You are {', '.join(traits)}.\n"
+
+         if favorites:
+             personality_text += f"Your favorite foods are: {', '.join(favorites[:3])}.\n"
+
+         if dislikes:
+             personality_text += f"You dislike: {', '.join(dislikes[:3])}.\n"
+
+         # Relationship context
+         relationship_level = personality.get('relationship_level', 0)
+         if relationship_level > 80:
+             personality_text += "You have a very strong bond with your caretaker.\n"
+         elif relationship_level > 50:
+             personality_text += "You trust and like your caretaker.\n"
+         elif relationship_level > 20:
+             personality_text += "You're getting to know your caretaker.\n"
+         else:
+             personality_text += "You're still warming up to your caretaker.\n"
+
+         return personality_text
+
+     def _build_conversation_context(self,
+                                     history: List[Dict[str, str]],
+                                     monster_data: Dict[str, Any]) -> str:
+         """Build conversation context from recent history"""
+         if not history:
+             return "This is your first conversation together."
+
+         # Get recent messages (last 3 exchanges)
+         recent_history = history[-6:] if len(history) > 6 else history
+
+         context = "Recent conversation:\n"
+         for i, msg in enumerate(recent_history):
+             if msg.get('role') == 'user':
+                 context += f"Human: {msg.get('content', '')}\n"
+             else:
+                 context += f"You: {msg.get('content', '')}\n"
+
+         return context
+
+     def _post_process_response(self, response: str, monster_data: Dict[str, Any]) -> str:
+         """Post-process the generated response"""
+         # Remove any unwanted prefixes/suffixes
+         response = response.strip()
+
+         # Remove common artifacts
+         unwanted_prefixes = ["Assistant:", "Monster:", "DigiPal:", monster_data['name'] + ":"]
+         for prefix in unwanted_prefixes:
+             if response.startswith(prefix):
+                 response = response[len(prefix):].strip()
+
+         # Ensure appropriate length
+         sentences = response.split('.')
+         if len(sentences) > 2:
+             response = '. '.join(sentences[:2]) + '.'
+
+         # Add emojis if missing
+         if not self._has_emoji(response):
+             response = self._add_contextual_emoji(response, monster_data)
+
+         return response
+
+     def _has_emoji(self, text: str) -> bool:
+         """Check if text contains emojis"""
+         import emoji
+         return bool(emoji.emoji_count(text))
+
+     def _add_contextual_emoji(self, response: str, monster_data: Dict[str, Any]) -> str:
+         """Add appropriate emoji based on context"""
+         emotional_state = monster_data.get('emotional_state', 'neutral')
+
+         emoji_map = {
+             'ecstatic': ' 🤩',
+             'happy': ' 😊',
+             'content': ' 😌',
+             'neutral': ' 🙂',
+             'melancholy': ' 😔',
+             'sad': ' 😢',
+             'angry': ' 😠',
+             'sick': ' 🤒',
+             'excited': ' 😆',
+             'tired': ' 😴'
+         }
+
+         return response + emoji_map.get(emotional_state, ' 🙂')
+
+     def _analyze_emotional_impact(self, user_input: str, response: str) -> Dict[str, float]:
+         """Analyze the emotional impact of the interaction"""
+         # Simple keyword-based analysis (can be enhanced with sentiment models)
+         positive_keywords = ['love', 'good', 'great', 'amazing', 'wonderful', 'happy', 'fun']
+         negative_keywords = ['bad', 'sad', 'angry', 'hate', 'terrible', 'awful', 'sick']
+
+         user_input_lower = user_input.lower()
+
+         impact = {
+             'happiness': 0.0,
+             'stress': 0.0,
+             'bonding': 0.0
+         }
+
+         # Analyze user input sentiment
+         for keyword in positive_keywords:
+             if keyword in user_input_lower:
+                 impact['happiness'] += 0.1
+                 impact['bonding'] += 0.05
+
+         for keyword in negative_keywords:
+             if keyword in user_input_lower:
+                 impact['happiness'] -= 0.1
+                 impact['stress'] += 0.1
+
+         # Base interaction bonus
+         impact['bonding'] += 0.02  # Small bonding increase for any interaction
+
+         return impact
+
+     def _get_fallback_response(self, monster_data: Dict[str, Any]) -> str:
+         """Get fallback response when AI generation fails"""
+         fallback_responses = [
+             f"*{monster_data['name']} looks at you curiously* 🤔",
+             f"*{monster_data['name']} makes a happy sound* 😊",
+             f"*{monster_data['name']} tilts head thoughtfully* 💭",
+             f"*{monster_data['name']} seems interested* 👀"
+         ]
+
+         import random
+         return random.choice(fallback_responses)
+
+     def get_performance_stats(self) -> Dict[str, Any]:
+         """Get model performance statistics"""
+         if not self.inference_times:
+             return {"status": "No inference data available"}
+
+         avg_time = sum(self.inference_times) / len(self.inference_times)
+
+         return {
+             "average_inference_time": avg_time,
+             "total_inferences": len(self.inference_times),
+             "fastest_inference": min(self.inference_times),
+             "slowest_inference": max(self.inference_times),
+             "tokens_per_second": 128 / avg_time,  # Approximate
+             "model_config": self.config.__dict__
+         }
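`_analyze_emotional_impact` above is a plain keyword count over the user's input and needs none of the model machinery. Extracted as a standalone function (same keyword lists and increments as the diff, with the unused `response` parameter dropped), it behaves like this:

```python
POSITIVE = ['love', 'good', 'great', 'amazing', 'wonderful', 'happy', 'fun']
NEGATIVE = ['bad', 'sad', 'angry', 'hate', 'terrible', 'awful', 'sick']

def analyze_emotional_impact(user_input: str) -> dict:
    """Standalone version of QwenProcessor._analyze_emotional_impact."""
    text = user_input.lower()
    impact = {'happiness': 0.0, 'stress': 0.0, 'bonding': 0.0}
    for kw in POSITIVE:
        if kw in text:           # substring match, so "goodbye" also counts "good"
            impact['happiness'] += 0.1
            impact['bonding'] += 0.05
    for kw in NEGATIVE:
        if kw in text:
            impact['happiness'] -= 0.1
            impact['stress'] += 0.1
    impact['bonding'] += 0.02    # every interaction builds a little bonding
    return impact

print(analyze_emotional_impact("I love you, you are great!"))
```

Two properties worth noting: each keyword counts at most once per message regardless of repetition, and the substring matching means words like "goodbye" trigger the "good" keyword, which is the expected imprecision of this lightweight approach.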
src/ai/speech_engine.py ADDED
@@ -0,0 +1,327 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ import asyncio
2
+ import numpy as np
3
+ from faster_whisper import WhisperModel
4
+ import torch
5
+ import webrtcvad
6
+ import logging
7
+ from typing import Dict, List, Optional, Tuple, Any
8
+ import time
9
+ from dataclasses import dataclass
10
+ import io
11
+ import wave
12
+
13
+ @dataclass
14
+ class SpeechConfig:
15
+ model_size: str = "base" # tiny, base, small, medium, large-v3
16
+ device: str = "auto"
17
+ compute_type: str = "float16"
18
+ use_vad: bool = True
19
+ vad_aggressiveness: int = 2 # 0-3, higher = more aggressive
20
+ chunk_duration_ms: int = 30 # VAD chunk size
21
+ sample_rate: int = 16000
22
+
23
+ class AdvancedSpeechEngine:
+     def __init__(self, config: SpeechConfig):
+         self.config = config
+         self.logger = logging.getLogger(__name__)
+
+         # Model configurations optimized for gaming
+         self.model_configs = {
+             "tiny": {"memory_gb": 1, "speed": "fastest", "accuracy": "basic"},
+             "base": {"memory_gb": 2, "speed": "fast", "accuracy": "good"},
+             "small": {"memory_gb": 3, "speed": "medium", "accuracy": "better"},
+             "medium": {"memory_gb": 6, "speed": "slower", "accuracy": "high"},
+             "large-v3": {"memory_gb": 12, "speed": "slowest", "accuracy": "best"}
+         }
+
+         self.whisper_model = None
+         self.vad_model = None
+
+         # Performance tracking
+         self.transcription_times = []
+         self.accuracy_scores = []
+
+         # Audio processing
+         self.audio_buffer = []
+         self.is_processing = False
+
+     async def initialize(self):
+         """Initialize the speech recognition system"""
+         try:
+             # Determine optimal device
+             device = self.config.device
+             if device == "auto":
+                 device = "cuda" if torch.cuda.is_available() else "cpu"
+
+             # Initialize Faster Whisper
+             self.whisper_model = WhisperModel(
+                 self.config.model_size,
+                 device=device,
+                 compute_type=self.config.compute_type,
+                 download_root="data/models/"
+             )
+
+             # Initialize VAD if enabled
+             if self.config.use_vad:
+                 self.vad_model = webrtcvad.Vad(self.config.vad_aggressiveness)
+
+             self.logger.info(f"Speech engine initialized: {self.config.model_size} on {device}")
+
+         except Exception as e:
+             self.logger.error(f"Failed to initialize speech engine: {e}")
+             raise
+
+     async def process_audio_stream(self, audio_data: np.ndarray) -> Dict[str, Any]:
+         """Process streaming audio for real-time transcription"""
+         start_time = time.time()
+
+         try:
+             # Convert audio format if needed
+             if len(audio_data.shape) > 1:
+                 audio_data = audio_data.mean(axis=1)  # Convert to mono
+
+             # Normalize audio
+             audio_data = audio_data.astype(np.float32)
+             if np.max(np.abs(audio_data)) > 0:
+                 audio_data = audio_data / np.max(np.abs(audio_data))
+
+             # Voice Activity Detection
+             if self.config.use_vad:
+                 has_speech = self._detect_speech_activity(audio_data)
+                 if not has_speech:
+                     return {
+                         "success": True,
+                         "transcription": "",
+                         "confidence": 0.0,
+                         "processing_time": time.time() - start_time,
+                         "has_speech": False
+                     }
+
+             # Transcribe with Faster Whisper
+             segments, info = self.whisper_model.transcribe(
+                 audio_data,
+                 language="en",
+                 beam_size=1,  # Faster inference
+                 temperature=0.0,
+                 condition_on_previous_text=True,
+                 compression_ratio_threshold=2.4,
+                 log_prob_threshold=-1.0,
+                 no_speech_threshold=0.6
+             )
+
+             # Combine segments
+             transcription = ""
+             avg_confidence = 0.0
+             segment_count = 0
+
+             for segment in segments:
+                 transcription += segment.text + " "
+                 avg_confidence += segment.avg_logprob
+                 segment_count += 1
+
+             transcription = transcription.strip()
+
+             if segment_count > 0:
+                 avg_confidence = avg_confidence / segment_count
+                 confidence = self._logprob_to_confidence(avg_confidence)
+             else:
+                 confidence = 0.0
+
+             processing_time = time.time() - start_time
+             self.transcription_times.append(processing_time)
+
+             # Analyze speech characteristics
+             speech_analysis = self._analyze_speech_characteristics(audio_data, transcription)
+
+             return {
+                 "success": True,
+                 "transcription": transcription,
+                 "confidence": confidence,
+                 "processing_time": processing_time,
+                 "has_speech": True,
+                 "speech_analysis": speech_analysis,
+                 "detected_language": info.language if hasattr(info, 'language') else "en",
+                 "language_probability": info.language_probability if hasattr(info, 'language_probability') else 1.0
+             }
+
+         except Exception as e:
+             self.logger.error(f"Audio processing failed: {e}")
+             return {
+                 "success": False,
+                 "transcription": "",
+                 "confidence": 0.0,
+                 "processing_time": time.time() - start_time,
+                 "error": str(e)
+             }
+
+     def _detect_speech_activity(self, audio_data: np.ndarray) -> bool:
+         """Detect if audio contains speech using WebRTC VAD"""
+         try:
+             # Convert to 16-bit PCM
+             pcm_data = (audio_data * 32767).astype(np.int16)
+
+             # Split into chunks for VAD processing
+             chunk_size = int(self.config.sample_rate * self.config.chunk_duration_ms / 1000)
+             speech_chunks = 0
+             total_chunks = 0
+
+             for i in range(0, len(pcm_data), chunk_size):
+                 chunk = pcm_data[i:i+chunk_size]
+
+                 # Pad chunk if necessary
+                 if len(chunk) < chunk_size:
+                     chunk = np.pad(chunk, (0, chunk_size - len(chunk)), mode='constant')
+
+                 # Convert to bytes
+                 chunk_bytes = chunk.tobytes()
+
+                 # Check for speech
+                 if self.vad_model.is_speech(chunk_bytes, self.config.sample_rate):
+                     speech_chunks += 1
+
+                 total_chunks += 1
+
+             # Consider it speech if > 30% of chunks contain speech
+             speech_ratio = speech_chunks / total_chunks if total_chunks > 0 else 0
+             return speech_ratio > 0.3
+
+         except Exception as e:
+             self.logger.warning(f"VAD processing failed: {e}")
+             return True  # Default to processing if VAD fails
+
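The chunking arithmetic in the VAD pass above can be sketched standalone. This is an illustrative sketch, not part of the engine; the 16 kHz / 30 ms figures mirror the `SpeechConfig` defaults, and the 30% speech-ratio rule matches the decision at the end of `_detect_speech_activity`:

```python
def vad_chunking(num_samples: int, sample_rate: int = 16000, chunk_ms: int = 30):
    """Return (chunk_size, chunk_count) for fixed-size VAD chunks; the tail chunk is padded."""
    chunk_size = int(sample_rate * chunk_ms / 1000)  # 480 samples at 16 kHz / 30 ms
    chunk_count = -(-num_samples // chunk_size)      # ceiling division
    return chunk_size, chunk_count

def is_speech(speech_chunks: int, total_chunks: int, threshold: float = 0.3) -> bool:
    """Mirror of the >30% speech-chunk decision rule."""
    ratio = speech_chunks / total_chunks if total_chunks else 0.0
    return ratio > threshold

size, count = vad_chunking(16000)  # one second of audio
# size == 480; count == 34 (33 full chunks plus one padded tail)
```

WebRTC VAD only accepts 10, 20, or 30 ms frames at 8/16/32/48 kHz, which is why the chunk size is derived from the config rather than chosen freely.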
+     def _logprob_to_confidence(self, avg_logprob: float) -> float:
+         """Convert log probability to confidence score"""
+         # Empirical mapping from log probability to confidence;
+         # Faster Whisper typically gives log probs between -3.0 and 0.0
+         confidence = max(0.0, min(1.0, (avg_logprob + 3.0) / 3.0))
+         return confidence
+
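The linear mapping above clamps `(avg_logprob + 3) / 3` into [0, 1], so -3.0 and below maps to 0 and 0.0 and above maps to 1. A quick standalone check of the endpoints and midpoint (a sketch, independent of Whisper):

```python
def logprob_to_confidence(avg_logprob: float) -> float:
    # Empirical linear map: -3.0 -> 0.0, 0.0 -> 1.0, clamped to [0, 1]
    return max(0.0, min(1.0, (avg_logprob + 3.0) / 3.0))

print(logprob_to_confidence(-1.5))  # midpoint of the empirical range
```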
+     def _analyze_speech_characteristics(self, audio_data: np.ndarray, transcription: str) -> Dict[str, Any]:
+         """Analyze speech characteristics for emotional context"""
+         try:
+             import librosa
+
+             # Basic audio features
+             duration = len(audio_data) / self.config.sample_rate
+
+             # Energy/Volume analysis
+             rms_energy = np.sqrt(np.mean(audio_data ** 2))
+
+             # Pitch analysis
+             pitches, magnitudes = librosa.piptrack(
+                 y=audio_data,
+                 sr=self.config.sample_rate,
+                 threshold=0.1
+             )
+
+             # Extract fundamental frequency
+             pitch_values = pitches[magnitudes > np.max(magnitudes) * 0.1]
+             if len(pitch_values) > 0:
+                 avg_pitch = np.mean(pitch_values)
+                 pitch_variance = np.var(pitch_values)
+             else:
+                 avg_pitch = 0.0
+                 pitch_variance = 0.0
+
+             # Speaking rate (words per minute)
+             word_count = len(transcription.split()) if transcription else 0
+             speaking_rate = (word_count / duration * 60) if duration > 0 else 0
+
+             # Emotional indicators (basic)
+             emotions = {
+                 "excitement": min(1.0, rms_energy * 10),  # Higher energy = more excited
+                 "calmness": max(0.0, 1.0 - (pitch_variance / 1000)),  # Lower pitch variance = calmer
+                 "engagement": min(1.0, speaking_rate / 200),  # Normal speaking rate indicates engagement
+                 "stress": min(1.0, max(0.0, (avg_pitch - 200) / 100))  # Higher pitch can indicate stress
+             }
+
+             return {
+                 "duration": duration,
+                 "energy": rms_energy,
+                 "average_pitch": avg_pitch,
+                 "pitch_variance": pitch_variance,
+                 "speaking_rate": speaking_rate,
+                 "word_count": word_count,
+                 "emotional_indicators": emotions
+             }
+
+         except Exception as e:
+             self.logger.warning(f"Speech analysis failed: {e}")
+             return {
+                 "duration": 0.0,
+                 "energy": 0.0,
+                 "emotional_indicators": {}
+             }
+
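The emotional indicators above are simple clamped heuristics over a few scalar features. They can be exercised in isolation (an illustrative sketch; the thresholds 200 wpm, 200 Hz, etc. mirror the constants in `_analyze_speech_characteristics`, and the input values here are made up):

```python
def speech_heuristics(word_count: int, duration_s: float, rms_energy: float,
                      pitch_variance: float, avg_pitch: float) -> dict:
    """Mirror of the clamped emotional-indicator heuristics (illustrative only)."""
    speaking_rate = (word_count / duration_s * 60) if duration_s > 0 else 0
    return {
        "speaking_rate": speaking_rate,
        "excitement": min(1.0, rms_energy * 10),                  # higher energy = more excited
        "calmness": max(0.0, 1.0 - pitch_variance / 1000),        # lower variance = calmer
        "engagement": min(1.0, speaking_rate / 200),              # normalized against 200 wpm
        "stress": min(1.0, max(0.0, (avg_pitch - 200) / 100)),    # pitch above 200 Hz reads as stress
    }

# 10 words over 5 s of moderately energetic, slightly high-pitched speech
result = speech_heuristics(10, 5.0, 0.05, 500.0, 250.0)
```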
+     async def batch_transcribe(self, audio_files: List[str]) -> List[Dict[str, Any]]:
+         """Batch transcribe multiple audio files"""
+         results = []
+
+         for audio_file in audio_files:
+             try:
+                 # Load audio file
+                 import librosa
+                 audio_data, _ = librosa.load(audio_file, sr=self.config.sample_rate)
+
+                 # Process
+                 result = await self.process_audio_stream(audio_data)
+                 result["file_path"] = audio_file
+
+                 results.append(result)
+
+             except Exception as e:
+                 self.logger.error(f"Failed to process {audio_file}: {e}")
+                 results.append({
+                     "success": False,
+                     "file_path": audio_file,
+                     "error": str(e)
+                 })
+
+         return results
+
+     def get_performance_stats(self) -> Dict[str, Any]:
+         """Get speech processing performance statistics"""
+         if not self.transcription_times:
+             return {"status": "No transcription data available"}
+
+         avg_time = sum(self.transcription_times) / len(self.transcription_times)
+
+         return {
+             "average_processing_time": avg_time,
+             "total_transcriptions": len(self.transcription_times),
+             "fastest_transcription": min(self.transcription_times),
+             "slowest_transcription": max(self.transcription_times),
+             "model_config": self.config.__dict__,
+             "estimated_real_time_factor": avg_time / 1.0  # Assuming 1 second audio clips
+         }
+
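The aggregation above reduces to a few list operations; a real-time factor below 1.0 means transcription runs faster than the audio plays. A minimal standalone version (a sketch assuming, like the method above, fixed-length clips):

```python
def performance_stats(times: list, clip_seconds: float = 1.0) -> dict:
    """Aggregate transcription latencies; RTF < 1.0 means faster than real time."""
    if not times:
        return {"status": "No transcription data available"}
    avg = sum(times) / len(times)
    return {
        "average_processing_time": avg,
        "fastest_transcription": min(times),
        "slowest_transcription": max(times),
        "estimated_real_time_factor": avg / clip_seconds,
    }
```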
+     def optimize_for_hardware(self, available_vram_gb: float) -> SpeechConfig:
+         """Optimize speech config based on available hardware"""
+         if available_vram_gb >= 12:
+             return SpeechConfig(
+                 model_size="large-v3",
+                 device="cuda",
+                 compute_type="float16",
+                 use_vad=True
+             )
+         elif available_vram_gb >= 6:
+             return SpeechConfig(
+                 model_size="medium",
+                 device="cuda",
+                 compute_type="float16",
+                 use_vad=True
+             )
+         elif available_vram_gb >= 3:
+             return SpeechConfig(
+                 model_size="small",
+                 device="cuda",
+                 compute_type="int8",
+                 use_vad=True
+             )
+         else:
+             return SpeechConfig(
+                 model_size="base",
+                 device="cpu",
+                 compute_type="int8",
+                 use_vad=True
+             )
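The VRAM-tier selection in `optimize_for_hardware` can be reduced to a small pure function, which makes the tier boundaries easy to test. This is a sketch that mirrors the thresholds above (12 / 6 / 3 GB) without the `SpeechConfig` dataclass:

```python
def pick_model_size(available_vram_gb: float):
    """VRAM-tier selection mirroring optimize_for_hardware: (model, device, compute_type)."""
    if available_vram_gb >= 12:
        return ("large-v3", "cuda", "float16")
    elif available_vram_gb >= 6:
        return ("medium", "cuda", "float16")
    elif available_vram_gb >= 3:
        return ("small", "cuda", "int8")
    return ("base", "cpu", "int8")

print(pick_model_size(8))  # e.g. a typical 8 GB consumer GPU
```

Note the `memory_gb` values in `model_configs` line up with these thresholds, so each tier picks the largest model that fits.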
src/core/__init__.py ADDED
@@ -0,0 +1 @@
 
 
+ # Core module initialization
src/core/evolution_system.py ADDED
@@ -0,0 +1,655 @@
+ import asyncio
+ import logging
+ from typing import Dict, List, Optional, Any, Tuple
+ from datetime import datetime, timedelta
+ from enum import Enum
+ import random
+ import json
+
+ from .monster_engine import Monster, EvolutionStage, MonsterPersonalityType, EmotionalState
+
+ class EvolutionTrigger(str, Enum):
+     TIME_BASED = "time_based"
+     STAT_BASED = "stat_based"
+     CARE_BASED = "care_based"
+     ITEM_BASED = "item_based"
+     SPECIAL_EVENT = "special_event"
+     TRAINING_BASED = "training_based"
+     RELATIONSHIP_BASED = "relationship_based"
+
+ class EvolutionPath(str, Enum):
+     NORMAL = "normal"
+     VARIANT = "variant"
+     SPECIAL = "special"
+     CORRUPTED = "corrupted"
+     LEGENDARY = "legendary"
+
+ class EvolutionSystem:
+     def __init__(self):
+         self.logger = logging.getLogger(__name__)
+
+         # Evolution trees and requirements
+         self.evolution_trees = self._initialize_evolution_trees()
+         self.evolution_requirements = self._initialize_evolution_requirements()
+         self.special_conditions = self._initialize_special_conditions()
+
+         # Evolution modifiers
+         self.care_quality_thresholds = {
+             "excellent": 1.8,
+             "good": 1.4,
+             "average": 1.0,
+             "poor": 0.6,
+             "terrible": 0.3
+         }
+
+     def _initialize_evolution_trees(self) -> Dict[str, Dict[str, List[Dict[str, Any]]]]:
+         """Initialize the complete evolution tree structure"""
+         return {
+             "Botamon": {
+                 EvolutionStage.BABY: [
+                     {
+                         "species": "Koromon",
+                         "path": EvolutionPath.NORMAL,
+                         "requirements": {
+                             "age_minutes": 60,
+                             "care_mistakes_max": 0,
+                             "health_min": 80
+                         }
+                     }
+                 ]
+             },
+             "Koromon": {
+                 EvolutionStage.CHILD: [
+                     {
+                         "species": "Agumon",
+                         "path": EvolutionPath.NORMAL,
+                         "requirements": {
+                             "age_minutes": 1440,  # 24 hours
+                             "stats_min": {"offense": 150, "life": 1200},
+                             "training_min": {"strength": 30},
+                             "care_quality_min": 1.0
+                         }
+                     },
+                     {
+                         "species": "Betamon",
+                         "path": EvolutionPath.VARIANT,
+                         "requirements": {
+                             "age_minutes": 1440,
+                             "stats_min": {"defense": 150, "brains": 120},
+                             "training_min": {"intelligence": 30},
+                             "care_quality_min": 1.2
+                         }
+                     },
+                     {
+                         "species": "Kunemon",
+                         "path": EvolutionPath.CORRUPTED,
+                         "requirements": {
+                             "age_minutes": 1440,
+                             "care_mistakes_min": 3,
+                             "happiness_max": 40
+                         }
+                     }
+                 ]
+             },
+             "Agumon": {
+                 EvolutionStage.ADULT: [
+                     {
+                         "species": "Greymon",
+                         "path": EvolutionPath.NORMAL,
+                         "requirements": {
+                             "age_minutes": 4320,  # 72 hours
+                             "stats_min": {"offense": 250, "life": 1800},
+                             "training_min": {"strength": 80},
+                             "care_quality_min": 1.3,
+                             "battle_wins_min": 5
+                         }
+                     },
+                     {
+                         "species": "Tyrannomon",
+                         "path": EvolutionPath.VARIANT,
+                         "requirements": {
+                             "age_minutes": 4320,
+                             "stats_min": {"offense": 300, "life": 2000},
+                             "training_min": {"strength": 100, "endurance": 50},
+                             "care_quality_min": 1.1,
+                             "discipline_min": 70
+                         }
+                     },
+                     {
+                         "species": "Meramon",
+                         "path": EvolutionPath.SPECIAL,
+                         "requirements": {
+                             "age_minutes": 4320,
+                             "stats_min": {"offense": 200, "brains": 180},
+                             "training_min": {"spirit": 60},
+                             "special_item": "Fire_Crystal",
+                             "care_quality_min": 1.5
+                         }
+                     }
+                 ]
+             },
+             "Greymon": {
+                 EvolutionStage.PERFECT: [
+                     {
+                         "species": "MetalGreymon",
+                         "path": EvolutionPath.NORMAL,
+                         "requirements": {
+                             "age_minutes": 8640,  # 144 hours (6 days)
+                             "stats_min": {"offense": 400, "life": 2800, "defense": 300},
+                             "training_min": {"strength": 150, "technique": 100},
+                             "care_quality_min": 1.6,
+                             "battle_wins_min": 15,
+                             "relationship_level_min": 80
+                         }
+                     },
+                     {
+                         "species": "SkullGreymon",
+                         "path": EvolutionPath.CORRUPTED,
+                         "requirements": {
+                             "age_minutes": 8640,
+                             "stats_min": {"offense": 450},
+                             "care_mistakes_min": 8,
+                             "overtraining": True,
+                             "happiness_max": 30
+                         }
+                     }
+                 ]
+             },
+             "MetalGreymon": {
+                 EvolutionStage.ULTIMATE: [
+                     {
+                         "species": "WarGreymon",
+                         "path": EvolutionPath.LEGENDARY,
+                         "requirements": {
+                             "age_minutes": 14400,  # 10 days
+                             "stats_min": {"offense": 600, "life": 4000, "defense": 500, "brains": 400},
+                             "training_min": {"strength": 200, "technique": 150, "spirit": 100},
+                             "care_quality_min": 1.8,
+                             "battle_wins_min": 50,
+                             "relationship_level_min": 95,
+                             "special_achievements": ["Perfect_Care_Week", "Master_Trainer"]
+                         }
+                     }
+                 ]
+             }
+         }
+
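The tree is a two-level mapping: species, then stage, then a list of candidate evolutions. Looking up the options for the current `(species, stage)` pair is a pair of `dict.get` calls with safe defaults. A minimal sketch with a hypothetical miniature tree of the same shape:

```python
# Hypothetical miniature tree with the same shape as _initialize_evolution_trees
TREES = {
    "Koromon": {
        "child": [
            {"species": "Agumon", "requirements": {"age_minutes": 1440}},
            {"species": "Betamon", "requirements": {"age_minutes": 1440}},
        ]
    }
}

def evolution_options(species: str, stage: str) -> list:
    """Candidate species for the current (species, stage) pair; [] when none exist."""
    return [opt["species"] for opt in TREES.get(species, {}).get(stage, [])]
```

The double `.get(..., {})` / `.get(..., [])` chain is what lets `check_evolution_eligibility` return "No evolution paths available" instead of raising `KeyError` for a species or stage with no entries.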
+     def _initialize_evolution_requirements(self) -> Dict[str, Any]:
+         """Initialize detailed evolution requirement checkers"""
+         return {
+             "age_requirements": {
+                 "check": lambda monster, req: monster.lifecycle.age_minutes >= req,
+                 "display": lambda req: f"Age: {req/1440:.1f} days"
+             },
+             "stat_requirements": {
+                 "check": self._check_stat_requirements,
+                 "display": lambda req: f"Stats: {', '.join([f'{k}≥{v}' for k, v in req.items()])}"
+             },
+             "training_requirements": {
+                 "check": self._check_training_requirements,
+                 "display": lambda req: f"Training: {', '.join([f'{k}≥{v}' for k, v in req.items()])}"
+             },
+             "care_quality_requirements": {
+                 "check": lambda monster, req: monster.stats.care_quality_score >= req,
+                 "display": lambda req: f"Care Quality: {req:.1f}"
+             },
+             "item_requirements": {
+                 "check": self._check_item_requirements,
+                 "display": lambda req: f"Required Item: {req}"
+             },
+             "special_requirements": {
+                 "check": self._check_special_requirements,
+                 "display": lambda req: f"Special: {', '.join(req) if isinstance(req, list) else req}"
+             }
+         }
+
+     def _initialize_special_conditions(self) -> Dict[str, Any]:
+         """Initialize special evolution conditions"""
+         return {
+             "perfect_care_week": {
+                 "description": "No care mistakes for 7 consecutive days",
+                 "check": self._check_perfect_care_week
+             },
+             "master_trainer": {
+                 "description": "Complete all training types to level 150+",
+                 "check": self._check_master_trainer
+             },
+             "bond_master": {
+                 "description": "Reach maximum relationship level",
+                 "check": lambda monster: monster.personality.relationship_level >= 100
+             },
+             "evolution_master": {
+                 "description": "Successfully evolve 10+ monsters",
+                 "check": self._check_evolution_master
+             },
+             "overtraining": {
+                 "description": "Training stats significantly exceed normal limits",
+                 "check": self._check_overtraining
+             }
+         }
+
+     async def check_evolution_eligibility(self, monster: Monster) -> Dict[str, Any]:
+         """Check if monster is eligible for evolution and return detailed info"""
+         try:
+             current_species = monster.species
+             current_stage = monster.lifecycle.stage
+
+             # Get possible evolutions
+             possible_evolutions = self.evolution_trees.get(current_species, {}).get(current_stage, [])
+
+             if not possible_evolutions:
+                 return {
+                     "can_evolve": False,
+                     "reason": "No evolution paths available",
+                     "possible_evolutions": []
+                 }
+
+             evolution_results = []
+
+             for evolution_option in possible_evolutions:
+                 species = evolution_option["species"]
+                 path = evolution_option["path"]
+                 requirements = evolution_option["requirements"]
+
+                 # Check each requirement
+                 met_requirements = []
+                 missing_requirements = []
+
+                 for req_type, req_value in requirements.items():
+                     is_met = await self._check_requirement(monster, req_type, req_value)
+
+                     requirement_info = {
+                         "type": req_type,
+                         "requirement": req_value,
+                         "current_value": self._get_current_value(monster, req_type),
+                         "is_met": is_met
+                     }
+
+                     if is_met:
+                         met_requirements.append(requirement_info)
+                     else:
+                         missing_requirements.append(requirement_info)
+
+                 # Calculate evolution readiness percentage
+                 total_requirements = len(met_requirements) + len(missing_requirements)
+                 readiness_percentage = (len(met_requirements) / total_requirements * 100) if total_requirements > 0 else 0
+
+                 evolution_results.append({
+                     "species": species,
+                     "path": path.value,
+                     "readiness_percentage": readiness_percentage,
+                     "can_evolve": len(missing_requirements) == 0,
+                     "met_requirements": met_requirements,
+                     "missing_requirements": missing_requirements,
+                     "estimated_time_to_eligible": self._estimate_time_to_eligible(missing_requirements)
+                 })
+
+             # Find the best evolution option
+             eligible_evolutions = [e for e in evolution_results if e["can_evolve"]]
+             best_option = max(evolution_results, key=lambda x: x["readiness_percentage"]) if evolution_results else None
+
+             return {
+                 "can_evolve": len(eligible_evolutions) > 0,
+                 "eligible_evolutions": eligible_evolutions,
+                 "best_option": best_option,
+                 "all_options": evolution_results,
+                 "evolution_locked": monster.lifecycle.evolution_locked_until and
+                                     monster.lifecycle.evolution_locked_until > datetime.now()
+             }
+
+         except Exception as e:
+             self.logger.error(f"Evolution eligibility check failed: {e}")
+             return {
+                 "can_evolve": False,
+                 "reason": f"Error checking evolution: {str(e)}",
+                 "possible_evolutions": []
+             }
+
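The readiness percentage used to rank evolution options is simply the share of requirements currently met. Isolating it makes the edge cases explicit (an illustrative sketch of the formula in `check_evolution_eligibility`):

```python
def readiness(met: int, missing: int) -> float:
    """Share of requirements currently satisfied, as a percentage (0.0 when there are none)."""
    total = met + missing
    return (met / total * 100) if total > 0 else 0
```

Because the best option is chosen by `max(..., key=readiness_percentage)`, an option with 3 of 4 requirements met (75%) outranks one with 1 of 2 met (50%) even though both are one requirement away.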
+     async def trigger_evolution(self, monster: Monster, target_species: str = None) -> Dict[str, Any]:
+         """Trigger monster evolution"""
+         try:
+             # Check if evolution is locked
+             if monster.lifecycle.evolution_locked_until and monster.lifecycle.evolution_locked_until > datetime.now():
+                 return {
+                     "success": False,
+                     "reason": "Evolution is temporarily locked",
+                     "unlock_time": monster.lifecycle.evolution_locked_until
+                 }
+
+             # Get evolution eligibility
+             eligibility = await self.check_evolution_eligibility(monster)
+
+             if not eligibility["can_evolve"]:
+                 return {
+                     "success": False,
+                     "reason": "Evolution requirements not met",
+                     "eligibility": eligibility
+                 }
+
+             # Select evolution target
+             eligible_evolutions = eligibility["eligible_evolutions"]
+
+             if target_species:
+                 # Specific evolution requested
+                 target_evolution = next((e for e in eligible_evolutions if e["species"] == target_species), None)
+                 if not target_evolution:
+                     return {
+                         "success": False,
+                         "reason": f"Cannot evolve to {target_species}",
+                         "available_options": [e["species"] for e in eligible_evolutions]
+                     }
+             else:
+                 # Choose best available evolution
+                 target_evolution = max(eligible_evolutions, key=lambda x: x["readiness_percentage"])
+
+             # Store previous state
+             previous_species = monster.species
+             previous_stage = monster.lifecycle.stage
+
+             # Apply evolution
+             await self._apply_evolution(monster, target_evolution)
+
+             # Log evolution event
+             evolution_result = {
+                 "success": True,
+                 "previous_species": previous_species,
+                 "previous_stage": previous_stage.value,
+                 "new_species": monster.species,
+                 "new_stage": monster.lifecycle.stage.value,
+                 "evolution_path": target_evolution["path"],
+                 "stat_bonuses": self._calculate_evolution_bonuses(target_evolution),
+                 "timestamp": datetime.now()
+             }
+
+             self.logger.info(f"Monster evolved: {previous_species} -> {monster.species}")
+
+             return evolution_result
+
+         except Exception as e:
+             self.logger.error(f"Evolution trigger failed: {e}")
+             return {
+                 "success": False,
+                 "reason": f"Evolution failed: {str(e)}"
+             }
+
+     async def _apply_evolution(self, monster: Monster, evolution_data: Dict[str, Any]):
+         """Apply evolution changes to monster"""
+         # Update basic info
+         monster.species = evolution_data["species"]
+
+         # Determine new stage
+         stage_progression = {
+             EvolutionStage.EGG: EvolutionStage.BABY,
+             EvolutionStage.BABY: EvolutionStage.CHILD,
+             EvolutionStage.CHILD: EvolutionStage.ADULT,
+             EvolutionStage.ADULT: EvolutionStage.PERFECT,
+             EvolutionStage.PERFECT: EvolutionStage.ULTIMATE,
+             EvolutionStage.ULTIMATE: EvolutionStage.MEGA
+         }
+
+         new_stage = stage_progression.get(monster.lifecycle.stage)
+         if new_stage:
+             monster.lifecycle.stage = new_stage
+
+         # Apply stat bonuses
+         bonuses = self._calculate_evolution_bonuses(evolution_data)
+         for stat, bonus in bonuses.items():
+             if hasattr(monster.stats, stat):
+                 current_value = getattr(monster.stats, stat)
+                 new_value = int(current_value * bonus["multiplier"]) + bonus["flat_bonus"]
+                 setattr(monster.stats, stat, new_value)
+
+         # Reset some care stats
+         monster.stats.happiness = min(100, monster.stats.happiness + 20)
+         monster.stats.health = min(100, monster.stats.health + 30)
+         monster.stats.energy = min(100, monster.stats.energy + 40)
+
+         # Update personality based on evolution path
+         self._apply_personality_changes(monster, evolution_data["path"])
+
+         # Set evolution cooldown
+         monster.lifecycle.evolution_locked_until = datetime.now() + timedelta(hours=24)
+
+         # Update emotional state
+         monster.emotional_state = EmotionalState.ECSTATIC
+
+         # Add evolution achievement
+         if "special_achievements" not in monster.performance_metrics:
+             monster.performance_metrics["special_achievements"] = []
+
+         monster.performance_metrics["special_achievements"].append({
+             "type": "evolution",
+             "species": monster.species,
+             "timestamp": datetime.now().isoformat()
+         })
+
+     def _calculate_evolution_bonuses(self, evolution_data: Dict[str, Any]) -> Dict[str, Dict[str, float]]:
+         """Calculate stat bonuses for evolution"""
+         base_bonuses = {
+             "life": {"multiplier": 1.3, "flat_bonus": 200},
+             "mp": {"multiplier": 1.2, "flat_bonus": 50},
+             "offense": {"multiplier": 1.25, "flat_bonus": 30},
+             "defense": {"multiplier": 1.25, "flat_bonus": 30},
+             "speed": {"multiplier": 1.2, "flat_bonus": 20},
+             "brains": {"multiplier": 1.15, "flat_bonus": 25}
+         }
+
+         # Modify bonuses based on evolution path
+         path_modifiers = {
+             EvolutionPath.NORMAL: 1.0,
+             EvolutionPath.VARIANT: 1.1,
+             EvolutionPath.SPECIAL: 1.3,
+             EvolutionPath.CORRUPTED: 0.9,
+             EvolutionPath.LEGENDARY: 1.5
+         }
+
+         evolution_path = EvolutionPath(evolution_data["path"])
+         modifier = path_modifiers.get(evolution_path, 1.0)
+
+         # Apply modifier to bonuses
+         modified_bonuses = {}
+         for stat, bonus in base_bonuses.items():
+             modified_bonuses[stat] = {
+                 "multiplier": bonus["multiplier"] * modifier,
+                 "flat_bonus": int(bonus["flat_bonus"] * modifier)
+             }
+
+         return modified_bonuses
+
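Each per-stat bonus is a (multiplier, flat) pair scaled by a single path modifier, with the flat part truncated to an int. A standalone sketch of that scaling rule (the modifier values mirror `path_modifiers` above):

```python
PATH_MODIFIERS = {"normal": 1.0, "variant": 1.1, "special": 1.3,
                  "corrupted": 0.9, "legendary": 1.5}

def evolution_bonus(base_mult: float, base_flat: int, path: str):
    """Scale a base stat bonus by the evolution-path modifier; flat part truncated to int."""
    m = PATH_MODIFIERS.get(path, 1.0)
    return base_mult * m, int(base_flat * m)
```

So a legendary evolution turns the base offense bonus (x1.25, +30) into (x1.875, +45), while a corrupted one shrinks every bonus by 10%.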
+     def _apply_personality_changes(self, monster: Monster, evolution_path: str):
+         """Apply personality changes based on evolution path"""
+         path_personality_effects = {
+             EvolutionPath.NORMAL: {
+                 "conscientiousness": 0.05,
+                 "stability": 0.03
+             },
+             EvolutionPath.VARIANT: {
+                 "openness": 0.08,
+                 "curiosity": 0.05
+             },
+             EvolutionPath.SPECIAL: {
+                 "extraversion": 0.1,
+                 "confidence": 0.07
+             },
+             EvolutionPath.CORRUPTED: {
+                 "neuroticism": 0.15,
+                 "aggression": 0.1,
+                 "happiness_decay_rate": 1.2
+             },
+             EvolutionPath.LEGENDARY: {
+                 "all_traits": 0.1,
+                 "relationship_bonus": 10
+             }
+         }
+
+         effects = path_personality_effects.get(EvolutionPath(evolution_path), {})
+
+         for trait, change in effects.items():
+             if trait == "all_traits":
+                 # Boost all personality traits
+                 for personality_trait in ["openness", "conscientiousness", "extraversion", "agreeableness"]:
+                     if hasattr(monster.personality, personality_trait):
+                         current = getattr(monster.personality, personality_trait)
+                         setattr(monster.personality, personality_trait, min(1.0, current + change))
+             elif trait == "relationship_bonus":
+                 monster.personality.relationship_level = min(100, monster.personality.relationship_level + change)
+             elif hasattr(monster.personality, trait):
+                 current = getattr(monster.personality, trait)
+                 setattr(monster.personality, trait, min(1.0, max(0.0, current + change)))
+
+     async def _check_requirement(self, monster: Monster, req_type: str, req_value: Any) -> bool:
+         """Check if a specific requirement is met"""
+         try:
+             if req_type == "age_minutes":
+                 return monster.lifecycle.age_minutes >= req_value
+             elif req_type == "care_mistakes_max":
+                 return monster.lifecycle.care_mistakes <= req_value
+             elif req_type == "care_mistakes_min":
+                 return monster.lifecycle.care_mistakes >= req_value
+             elif req_type == "stats_min":
+                 return self._check_stat_requirements(monster, req_value)
+             elif req_type == "training_min":
+                 return self._check_training_requirements(monster, req_value)
+             elif req_type == "care_quality_min":
+                 return monster.stats.care_quality_score >= req_value
+             elif req_type == "health_min":
+                 return monster.stats.health >= req_value
+             elif req_type == "happiness_max":
+                 return monster.stats.happiness <= req_value
+             elif req_type == "happiness_min":
+                 return monster.stats.happiness >= req_value
+             elif req_type == "discipline_min":
+                 return monster.stats.discipline >= req_value
+             elif req_type == "relationship_level_min":
+                 return monster.personality.relationship_level >= req_value
+             elif req_type == "special_item":
+                 return req_value in monster.inventory and monster.inventory[req_value] > 0
+             elif req_type == "special_achievements":
+                 return self._check_special_achievements(monster, req_value)
+             elif req_type == "battle_wins_min":
+                 return monster.performance_metrics.get("battle_wins", 0) >= req_value
+             elif req_type == "overtraining":
+                 return self._check_overtraining(monster)
+             else:
+                 self.logger.warning(f"Unknown requirement type: {req_type}")
+                 return False
+
+         except Exception as e:
+             self.logger.error(f"Requirement check failed for {req_type}: {e}")
+             return False
+
+     def _check_stat_requirements(self, monster: Monster, requirements: Dict[str, int]) -> bool:
+         """Check if stat requirements are met"""
+         for stat_name, min_value in requirements.items():
+             if hasattr(monster.stats, stat_name):
+                 current_value = getattr(monster.stats, stat_name)
+                 if current_value < min_value:
+                     return False
+             else:
+                 return False
+         return True
+
+     def _check_training_requirements(self, monster: Monster, requirements: Dict[str, int]) -> bool:
+         """Check if training requirements are met"""
+         for training_type, min_value in requirements.items():
+             current_value = monster.stats.training_progress.get(training_type, 0)
+             if current_value < min_value:
+                 return False
+         return True
+
+     def _check_item_requirements(self, monster: Monster, item_name: str) -> bool:
+         """Check if monster has required item"""
+         return item_name in monster.inventory and monster.inventory[item_name] > 0
+
+     def _check_special_achievements(self, monster: Monster, required_achievements: List[str]) -> bool:
+         """Check if special achievements are unlocked"""
+         achievements = monster.performance_metrics.get("special_achievements", [])
+         achievement_types = [a.get("type") for a in achievements if isinstance(a, dict)]
+
+         for required in required_achievements:
+             if required not in achievement_types:
+                 return False
+         return True
+
+     def _check_overtraining(self, monster: Monster) -> bool:
+         """Check if monster is overtrained"""
+         training_totals = sum(monster.stats.training_progress.values())
+         return training_totals > 800  # Threshold for overtraining
+
+     def _check_perfect_care_week(self, monster: Monster) -> bool:
+         """Check if monster had perfect care for a week"""
+         # Simplified check - would need more complex tracking in production
+         return monster.lifecycle.care_mistakes == 0 and monster.lifecycle.age_minutes >= 10080  # 7 days
+
+     def _check_master_trainer(self, monster: Monster) -> bool:
+         """Check if all training types are at 150+"""
+         for training_type in ["strength", "endurance", "intelligence", "dexterity", "spirit", "technique"]:
+             if monster.stats.training_progress.get(training_type, 0) < 150:
+                 return False
+         return True
+
+     def _check_evolution_master(self, monster: Monster) -> bool:
+         """Check if player has evolved many monsters"""
+         # This would need global tracking in production
+         evolutions = [a for a in monster.performance_metrics.get("special_achievements", [])
+                       if isinstance(a, dict) and a.get("type") == "evolution"]
+         return len(evolutions) >= 10
+
+     def _get_current_value(self, monster: Monster, req_type: str) -> Any:
+         """Get current value for a requirement type"""
+         value_getters = {
+             "age_minutes": lambda: monster.lifecycle.age_minutes,
+             "care_mistakes_max": lambda: monster.lifecycle.care_mistakes,
+             "care_mistakes_min": lambda: monster.lifecycle.care_mistakes,
+             "health_min": lambda: monster.stats.health,
+             "happiness_max": lambda: monster.stats.happiness,
+             "happiness_min": lambda: monster.stats.happiness,
+             "discipline_min": lambda: monster.stats.discipline,
+             "care_quality_min": lambda: monster.stats.care_quality_score,
+             "relationship_level_min": lambda: monster.personality.relationship_level,
+             "battle_wins_min": lambda: monster.performance_metrics.get("battle_wins", 0)
+         }
+
+         getter = value_getters.get(req_type)
+         return getter() if getter else "N/A"
+
631
+ def _estimate_time_to_eligible(self, missing_requirements: List[Dict[str, Any]]) -> str:
632
+ """Estimate time until evolution requirements are met"""
633
+ time_estimates = []
634
+
635
+ for req in missing_requirements:
636
+ req_type = req["type"]
637
+
638
+ if req_type == "age_minutes":
639
+ current = req["current_value"]
640
+ required = req["requirement"]
641
+ remaining_minutes = required - current
642
+ time_estimates.append(f"{remaining_minutes/1440:.1f} days")
643
+
644
+ elif "training" in req_type:
645
+ # Estimate based on training rate
646
+ time_estimates.append("1-3 days of training")
647
+
648
+ elif "stat" in req_type:
649
+ # Estimate based on training and care
650
+ time_estimates.append("2-5 days of care/training")
651
+
652
+ else:
653
+ time_estimates.append("Variable")
654
+
655
+ return ", ".join(time_estimates) if time_estimates else "Ready now"
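The readiness and time-estimate helpers above boil down to two small calculations: the share of requirements met, and a minutes-to-days conversion at 1440 minutes per day. A minimal standalone sketch (function names hypothetical, mirroring the logic in the diff):

```python
def readiness_percentage(met_requirements, missing_requirements):
    # Share of evolution requirements currently satisfied, as in get_evolution_readiness
    total = len(met_requirements) + len(missing_requirements)
    return len(met_requirements) / total * 100 if total else 0.0

def days_until(required_minutes, current_minutes):
    # Same conversion used for the "age_minutes" estimate: 1440 minutes per day
    return f"{(required_minutes - current_minutes) / 1440:.1f} days"

print(readiness_percentage(["age_minutes", "stats_min"], ["training_min"]))
print(days_until(8640, 5760))  # 2.0 days
```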
src/core/monster_engine.py ADDED
@@ -0,0 +1,365 @@
+ from pydantic import BaseModel, Field, validator
+ from typing import Dict, List, Optional, Any, Union
+ from datetime import datetime, timedelta
+ from enum import Enum
+ import uuid
+ import asyncio
+ import json
+ import numpy as np
+ from dataclasses import dataclass
+
+ class EvolutionStage(str, Enum):
+     EGG = "egg"
+     BABY = "baby"
+     CHILD = "child"
+     ADULT = "adult"
+     PERFECT = "perfect"
+     ULTIMATE = "ultimate"
+     MEGA = "mega"
+
+ class MonsterPersonalityType(str, Enum):
+     PLAYFUL = "playful"
+     SERIOUS = "serious"
+     CURIOUS = "curious"
+     GENTLE = "gentle"
+     ENERGETIC = "energetic"
+     CALM = "calm"
+     MISCHIEVOUS = "mischievous"
+     LOYAL = "loyal"
+
+ class EmotionalState(str, Enum):
+     ECSTATIC = "ecstatic"
+     HAPPY = "happy"
+     CONTENT = "content"
+     NEUTRAL = "neutral"
+     MELANCHOLY = "melancholy"
+     SAD = "sad"
+     ANGRY = "angry"
+     SICK = "sick"
+     EXCITED = "excited"
+     TIRED = "tired"
+
+ @dataclass
+ class StatBonus:
+     multiplier: float = 1.0
+     flat_bonus: int = 0
+     duration_minutes: int = 0
+     source: str = ""
+
+ class AdvancedMonsterStats(BaseModel):
+     # Primary Care Stats (0-100)
+     health: int = Field(default=100, ge=0, le=100)
+     hunger: int = Field(default=100, ge=0, le=100)
+     happiness: int = Field(default=100, ge=0, le=100)
+     energy: int = Field(default=100, ge=0, le=100)
+     discipline: int = Field(default=50, ge=0, le=100)
+     cleanliness: int = Field(default=100, ge=0, le=100)
+
+     # Battle Stats (Digimon World 1 inspired)
+     life: int = Field(default=1000, ge=0)
+     mp: int = Field(default=100, ge=0)
+     offense: int = Field(default=100, ge=0)
+     defense: int = Field(default=100, ge=0)
+     speed: int = Field(default=100, ge=0)
+     brains: int = Field(default=100, ge=0)
+
+     # Training Progress
+     training_progress: Dict[str, int] = Field(default_factory=lambda: {
+         "strength": 0,
+         "endurance": 0,
+         "intelligence": 0,
+         "dexterity": 0,
+         "spirit": 0,
+         "technique": 0
+     })
+
+     # Active Bonuses
+     active_bonuses: List[StatBonus] = Field(default_factory=list)
+
+     # Performance Metrics
+     care_quality_score: float = Field(default=1.0, ge=0.0, le=2.0)
+     evolution_potential: float = Field(default=1.0, ge=0.0, le=2.0)
+
+ class AIPersonality(BaseModel):
+     # Core Personality Traits
+     primary_type: MonsterPersonalityType = Field(default=MonsterPersonalityType.PLAYFUL)
+     secondary_type: Optional[MonsterPersonalityType] = Field(default=None)
+
+     # Trait Values (0.0-1.0)
+     openness: float = Field(default=0.5, ge=0.0, le=1.0)
+     conscientiousness: float = Field(default=0.5, ge=0.0, le=1.0)
+     extraversion: float = Field(default=0.5, ge=0.0, le=1.0)
+     agreeableness: float = Field(default=0.5, ge=0.0, le=1.0)
+     neuroticism: float = Field(default=0.5, ge=0.0, le=1.0)
+
+     # Learned Preferences
+     favorite_foods: List[str] = Field(default_factory=list)
+     disliked_foods: List[str] = Field(default_factory=list)
+     preferred_activities: List[str] = Field(default_factory=list)
+     communication_style: str = Field(default="friendly")
+
+     # Emotional Memory
+     emotional_memories: List[Dict[str, Any]] = Field(default_factory=list)
+     relationship_level: int = Field(default=0, ge=0, le=100)
+
+ class ConversationContext(BaseModel):
+     # Recent Conversation History
+     messages: List[Dict[str, Any]] = Field(default_factory=list)
+
+     # Context Compression
+     personality_summary: str = Field(default="")
+     relationship_summary: str = Field(default="")
+     recent_events_summary: str = Field(default="")
+
+     # Interaction Statistics
+     total_conversations: int = Field(default=0)
+     avg_conversation_length: float = Field(default=0.0)
+     last_interaction: Optional[datetime] = Field(default=None)
+     interaction_frequency: float = Field(default=0.0)  # interactions per day
+
+     # Emotional Context
+     current_mood_factors: Dict[str, float] = Field(default_factory=dict)
+     mood_history: List[Dict[str, Any]] = Field(default_factory=list)
+
+ class AdvancedLifecycle(BaseModel):
+     # Time Tracking
+     age_minutes: float = Field(default=0.0)
+     stage: EvolutionStage = Field(default=EvolutionStage.EGG)
+     generation: int = Field(default=1)
+
+     # Care History
+     care_mistakes: int = Field(default=0)
+     perfect_care_streaks: int = Field(default=0)
+     total_training_sessions: int = Field(default=0)
+
+     # Evolution Data
+     evolution_requirements_met: List[str] = Field(default_factory=list)
+     evolution_locked_until: Optional[datetime] = Field(default=None)
+     special_evolution_conditions: Dict[str, bool] = Field(default_factory=dict)
+
+     # Lifespan Management
+     base_lifespan_minutes: float = Field(default=21600.0)  # 15 days
+     lifespan_modifiers: List[float] = Field(default_factory=list)
+     death_prevention_items: int = Field(default=0)
+
+ class Monster(BaseModel):
+     # Identity
+     id: str = Field(default_factory=lambda: str(uuid.uuid4()))
+     name: str = Field(default="DigiPal")
+     species: str = Field(default="Botamon")
+     variant: Optional[str] = Field(default=None)
+
+     # Core Systems
+     stats: AdvancedMonsterStats = Field(default_factory=AdvancedMonsterStats)
+     personality: AIPersonality = Field(default_factory=AIPersonality)
+     lifecycle: AdvancedLifecycle = Field(default_factory=AdvancedLifecycle)
+     conversation: ConversationContext = Field(default_factory=ConversationContext)
+
+     # Current State
+     emotional_state: EmotionalState = Field(default=EmotionalState.CONTENT)
+     current_activity: str = Field(default="idle")
+     location: str = Field(default="nursery")
+
+     # Timestamps
+     created_at: datetime = Field(default_factory=datetime.now)
+     last_update: datetime = Field(default_factory=datetime.now)
+     last_interaction: Optional[datetime] = Field(default=None)
+
+     # Items and Inventory
+     inventory: Dict[str, int] = Field(default_factory=dict)
+     equipped_items: Dict[str, str] = Field(default_factory=dict)
+
+     # Breeding and Genetics
+     genetic_markers: Dict[str, Any] = Field(default_factory=dict)
+     parent_ids: List[str] = Field(default_factory=list)
+     offspring_ids: List[str] = Field(default_factory=list)
+
+     # Performance Tracking
+     performance_metrics: Dict[str, float] = Field(default_factory=dict)
+
+     class Config:
+         json_encoders = {
+             datetime: lambda v: v.isoformat()
+         }
+
+     def calculate_emotional_state(self) -> EmotionalState:
+         """Calculate current emotional state based on multiple factors"""
+         # Health-based emotions
+         if self.stats.health < 20:
+             return EmotionalState.SICK
+
+         # Happiness-based emotions
+         if self.stats.happiness >= 95:
+             return EmotionalState.ECSTATIC
+         elif self.stats.happiness >= 80:
+             return EmotionalState.HAPPY
+         elif self.stats.happiness >= 60:
+             return EmotionalState.CONTENT
+         elif self.stats.happiness >= 40:
+             return EmotionalState.NEUTRAL
+         elif self.stats.happiness >= 20:
+             return EmotionalState.MELANCHOLY
+         elif self.stats.happiness >= 10:
+             return EmotionalState.SAD
+
+         # Energy-based emotions
+         if self.stats.energy < 20:
+             return EmotionalState.TIRED
+
+         # Discipline-based emotions
+         if self.stats.discipline < 20 and self.stats.hunger > 80:
+             return EmotionalState.ANGRY
+
+         # Special conditions
+         if self.current_activity in ["training", "playing"]:
+             return EmotionalState.EXCITED
+
+         return EmotionalState.NEUTRAL
+
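The happiness branch of `calculate_emotional_state` is effectively a threshold table; a standalone sketch of just that table (function name hypothetical, thresholds taken from the diff above; below 10 happiness the method falls through to the energy and discipline checks):

```python
def happiness_band(happiness):
    # Happiness thresholds from Monster.calculate_emotional_state
    for floor, state in [(95, "ecstatic"), (80, "happy"), (60, "content"),
                         (40, "neutral"), (20, "melancholy"), (10, "sad")]:
        if happiness >= floor:
            return state
    return None  # below 10: other factors (energy, discipline) decide

print(happiness_band(82))  # happy
```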
+     def get_evolution_readiness(self) -> Dict[str, Any]:
+         """Calculate evolution readiness and requirements"""
+         current_requirements = self._get_stage_requirements()
+         met_requirements = []
+         missing_requirements = []
+
+         for req_type, requirement in current_requirements.items():
+             if self._check_requirement(req_type, requirement):
+                 met_requirements.append(req_type)
+             else:
+                 missing_requirements.append({
+                     "type": req_type,
+                     "requirement": requirement,
+                     "current": self._get_current_value(req_type)
+                 })
+
+         readiness_percentage = len(met_requirements) / len(current_requirements) * 100 if current_requirements else 0
+
+         return {
+             "readiness_percentage": readiness_percentage,
+             "met_requirements": met_requirements,
+             "missing_requirements": missing_requirements,
+             "can_evolve": len(missing_requirements) == 0,
+             "next_stage": self._get_next_evolution_stage()
+         }
+
+     def _get_stage_requirements(self) -> Dict[str, Any]:
+         """Get evolution requirements for current stage"""
+         requirements = {
+             EvolutionStage.EGG: {
+                 "age_minutes": 60,  # 1 hour
+                 "care_mistakes_max": 0
+             },
+             EvolutionStage.BABY: {
+                 "age_minutes": 1440,  # 24 hours
+                 "stats_min": {"life": 1200, "offense": 120, "defense": 120},
+                 "care_mistakes_max": 2
+             },
+             EvolutionStage.CHILD: {
+                 "age_minutes": 4320,  # 72 hours
+                 "stats_min": {"life": 1500, "offense": 150, "defense": 150, "brains": 150},
+                 "care_mistakes_max": 5,
+                 "training_min": {"strength": 50, "intelligence": 50}
+             },
+             EvolutionStage.ADULT: {
+                 "age_minutes": 8640,  # 144 hours (6 days)
+                 "stats_min": {"life": 2000, "offense": 200, "defense": 200, "brains": 200},
+                 "care_mistakes_max": 8,
+                 "training_min": {"strength": 100, "intelligence": 100},
+                 "care_quality_min": 1.2
+             }
+         }
+         return requirements.get(self.lifecycle.stage, {})
+
+     def _check_requirement(self, req_type: str, requirement: Any) -> bool:
+         """Check if a specific requirement is met"""
+         if req_type == "age_minutes":
+             return self.lifecycle.age_minutes >= requirement
+         elif req_type == "care_mistakes_max":
+             return self.lifecycle.care_mistakes <= requirement
+         elif req_type == "stats_min":
+             for stat, min_val in requirement.items():
+                 if getattr(self.stats, stat, 0) < min_val:
+                     return False
+             return True
+         elif req_type == "training_min":
+             for training, min_val in requirement.items():
+                 if self.stats.training_progress.get(training, 0) < min_val:
+                     return False
+             return True
+         elif req_type == "care_quality_min":
+             return self.stats.care_quality_score >= requirement
+         return False
+
+     def _get_current_value(self, req_type: str) -> Any:
+         """Get current value for a requirement type"""
+         if req_type == "age_minutes":
+             return self.lifecycle.age_minutes
+         elif req_type == "care_mistakes_max":
+             return self.lifecycle.care_mistakes
+         elif req_type.startswith("stats_"):
+             return {stat: getattr(self.stats, stat, 0) for stat in ["life", "offense", "defense", "brains"]}
+         elif req_type.startswith("training_"):
+             return self.stats.training_progress
+         elif req_type == "care_quality_min":
+             return self.stats.care_quality_score
+         return None
+
+     def _get_next_evolution_stage(self) -> Optional[EvolutionStage]:
+         """Get the next evolution stage"""
+         stage_order = [
+             EvolutionStage.EGG,
+             EvolutionStage.BABY,
+             EvolutionStage.CHILD,
+             EvolutionStage.ADULT,
+             EvolutionStage.PERFECT,
+             EvolutionStage.ULTIMATE,
+             EvolutionStage.MEGA
+         ]
+
+         current_index = stage_order.index(self.lifecycle.stage)
+         if current_index < len(stage_order) - 1:
+             return stage_order[current_index + 1]
+         return None
+
+     def apply_time_effects(self, minutes_elapsed: float):
+         """Apply time-based effects to monster"""
+         # Age progression
+         self.lifecycle.age_minutes += minutes_elapsed
+
+         # Stat decay rates (per hour)
+         decay_rates = {
+             "hunger": 2.0,
+             "happiness": 0.8,
+             "energy": 1.2,
+             "cleanliness": 0.6,
+             "discipline": 0.2
+         }
+
+         # Apply decay
+         hours_elapsed = minutes_elapsed / 60.0
+         for stat, rate in decay_rates.items():
+             current_value = getattr(self.stats, stat)
+             decay_amount = rate * hours_elapsed
+
+             # Apply personality modifiers
+             if stat == "happiness" and self.personality.neuroticism > 0.7:
+                 decay_amount *= 1.3
+             if stat == "energy" and self.personality.extraversion < 0.3:
+                 decay_amount *= 0.8
+
+             new_value = max(0, current_value - decay_amount)
+             setattr(self.stats, stat, int(new_value))
+
+         # Health effects from poor care
+         if self.stats.hunger < 20:
+             health_loss = hours_elapsed * 3
+             self.stats.health = max(0, self.stats.health - int(health_loss))
+             self.lifecycle.care_mistakes += 1
+
+         if self.stats.cleanliness < 30:
+             health_loss = hours_elapsed * 1.5
+             self.stats.health = max(0, self.stats.health - int(health_loss))
+
+         # Update emotional state
+         self.emotional_state = self.calculate_emotional_state()
+         self.last_update = datetime.now()
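The decay loop in `apply_time_effects` scales an hourly rate by elapsed time, then applies personality modifiers. A self-contained sketch of that math for a single stat (function name hypothetical; rates and the 1.3x neuroticism modifier come from the diff above):

```python
def decayed_stat(current, rate_per_hour, minutes_elapsed, neuroticism=0.5):
    # Mirrors Monster.apply_time_effects for the happiness stat:
    # hourly decay, scaled up 30% for highly neurotic personalities
    decay = rate_per_hour * (minutes_elapsed / 60.0)
    if neuroticism > 0.7:
        decay *= 1.3
    return max(0, int(current - decay))

print(decayed_stat(100, 0.8, 300))       # calm monster, 5 hours of decay -> 96
print(decayed_stat(100, 0.8, 300, 0.9))  # anxious monster decays faster -> 94
```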
src/deployment/__init__.py ADDED
@@ -0,0 +1 @@
+ # Deployment module initialization
src/deployment/zero_gpu_optimizer.py ADDED
@@ -0,0 +1,187 @@
+ import os
+ import torch
+ import psutil
+ import logging
+ from typing import Dict, Any, Optional
+ import spaces
+ from functools import wraps
+ import asyncio
+ import time
+
+ class ZeroGPUOptimizer:
+     def __init__(self):
+         self.logger = logging.getLogger(__name__)
+         self.is_zero_gpu_available = self._check_zero_gpu_availability()
+         self.resource_cache = {}
+         self.last_resource_check = 0
+
+     def _check_zero_gpu_availability(self) -> bool:
+         """Check if Zero GPU is available"""
+         try:
+             import spaces
+             return hasattr(spaces, 'GPU')
+         except ImportError:
+             return False
+
+     async def detect_available_resources(self) -> Dict[str, Any]:
+         """Detect available computational resources"""
+         current_time = time.time()
+
+         # Cache resources for 60 seconds
+         if (current_time - self.last_resource_check) < 60 and self.resource_cache:
+             return self.resource_cache
+
+         try:
+             # CPU Information
+             cpu_count = psutil.cpu_count(logical=True)
+             cpu_freq = psutil.cpu_freq()
+             memory = psutil.virtual_memory()
+
+             # GPU Information
+             gpu_available = torch.cuda.is_available()
+             gpu_count = torch.cuda.device_count() if gpu_available else 0
+             gpu_memory_gb = 0
+             gpu_name = "None"
+
+             if gpu_available and gpu_count > 0:
+                 gpu_memory_bytes = torch.cuda.get_device_properties(0).total_memory
+                 gpu_memory_gb = gpu_memory_bytes / (1024**3)
+                 gpu_name = torch.cuda.get_device_name(0)
+
+             # Check for Zero GPU
+             zero_gpu_active = self.is_zero_gpu_available and os.getenv("SPACE_ID") is not None
+
+             resources = {
+                 "cpu_count": cpu_count,
+                 "cpu_frequency_mhz": cpu_freq.current if cpu_freq else 0,
+                 "total_memory_gb": memory.total / (1024**3),
+                 "available_memory_gb": memory.available / (1024**3),
+                 "gpu_available": gpu_available,
+                 "gpu_count": gpu_count,
+                 "gpu_memory_gb": gpu_memory_gb,
+                 "gpu_name": gpu_name,
+                 "zero_gpu_available": zero_gpu_active,
+                 "compute_capability": self._determine_compute_capability(gpu_memory_gb, cpu_count)
+             }
+
+             self.resource_cache = resources
+             self.last_resource_check = current_time
+
+             self.logger.info(f"Detected resources: {resources}")
+             return resources
+
+         except Exception as e:
+             self.logger.error(f"Resource detection failed: {e}")
+             return {
+                 "cpu_count": 2,
+                 "gpu_available": False,
+                 "gpu_memory_gb": 0,
+                 "compute_capability": "basic"
+             }
+
+     def _determine_compute_capability(self, gpu_memory_gb: float, cpu_count: int) -> str:
+         """Determine compute capability tier"""
+         if gpu_memory_gb >= 16:
+             return "premium"  # Can run large models
+         elif gpu_memory_gb >= 8:
+             return "high"  # Can run medium models
+         elif gpu_memory_gb >= 4:
+             return "medium"  # Can run small models
+         elif cpu_count >= 8:
+             return "cpu_optimized"  # CPU inference
+         else:
+             return "basic"  # Limited capability
+
+     def zero_gpu_decorator(self, duration: int = 120):
+         """Decorator for Zero GPU allocation"""
+         if not self.is_zero_gpu_available:
+             # Fallback for non-Zero GPU environments
+             def decorator(func):
+                 @wraps(func)
+                 async def wrapper(*args, **kwargs):
+                     return await func(*args, **kwargs)
+                 return wrapper
+             return decorator
+
+         # Use actual Zero GPU decorator
+         def decorator(func):
+             @spaces.GPU(duration=duration)
+             @wraps(func)
+             async def wrapper(*args, **kwargs):
+                 return await func(*args, **kwargs)
+             return wrapper
+         return decorator
+
+     async def optimize_model_loading(self, model_config: Dict[str, Any]) -> Dict[str, Any]:
+         """Optimize model loading based on available resources"""
+         resources = await self.detect_available_resources()
+
+         # Adjust configuration based on resources
+         optimized_config = model_config.copy()
+
+         if resources["compute_capability"] == "basic":
+             optimized_config.update({
+                 "model_name": "Qwen/Qwen2.5-0.5B-Instruct",
+                 "torch_dtype": "float32",
+                 "device_map": "cpu",
+                 "use_quantization": True
+             })
+         elif resources["compute_capability"] == "cpu_optimized":
+             optimized_config.update({
+                 "model_name": "Qwen/Qwen2.5-1.5B-Instruct",
+                 "torch_dtype": "float32",
+                 "device_map": "cpu",
+                 "use_quantization": True
+             })
+         elif resources["compute_capability"] == "medium":
+             optimized_config.update({
+                 "model_name": "Qwen/Qwen2.5-1.5B-Instruct",
+                 "torch_dtype": "float16",
+                 "device_map": "auto",
+                 "use_quantization": True
+             })
+         elif resources["compute_capability"] == "high":
+             optimized_config.update({
+                 "model_name": "Qwen/Qwen2.5-3B-Instruct",
+                 "torch_dtype": "bfloat16",
+                 "device_map": "auto",
+                 "use_quantization": False
+             })
+         else:  # premium
+             optimized_config.update({
+                 "model_name": "Qwen/Qwen2.5-7B-Instruct",
+                 "torch_dtype": "bfloat16",
+                 "device_map": "auto",
+                 "use_quantization": False
+             })
+
+         self.logger.info(f"Optimized model config: {optimized_config}")
+         return optimized_config
+
+     def get_deployment_config(self) -> Dict[str, Any]:
+         """Get optimized deployment configuration"""
+         resources = asyncio.run(self.detect_available_resources())
+
+         base_config = {
+             "max_threads": min(40, resources["cpu_count"] * 2),
+             "enable_queue": True,
+             "show_error": True,
+             "quiet": False
+         }
+
+         # Adjust based on compute capability
+         if resources["compute_capability"] in ["basic", "cpu_optimized"]:
+             base_config.update({
+                 "max_threads": resources["cpu_count"],
+                 "concurrency_count": 1
+             })
+         elif resources["compute_capability"] == "medium":
+             base_config.update({
+                 "concurrency_count": 2
+             })
+         else:
+             base_config.update({
+                 "concurrency_count": 4
+             })
+
+         return base_config
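The tiering logic that drives both model selection and deployment settings is a pure function of GPU memory and CPU count. A dependency-free sketch of the same thresholds used by `_determine_compute_capability` above (standalone function name is hypothetical):

```python
def determine_compute_capability(gpu_memory_gb, cpu_count):
    # Same tiering as ZeroGPUOptimizer._determine_compute_capability
    if gpu_memory_gb >= 16:
        return "premium"        # large models (e.g. 7B-class)
    if gpu_memory_gb >= 8:
        return "high"           # medium models
    if gpu_memory_gb >= 4:
        return "medium"         # small models
    if cpu_count >= 8:
        return "cpu_optimized"  # CPU inference
    return "basic"              # limited capability

for spec in [(24, 8), (10, 4), (0, 16), (0, 2)]:
    print(spec, "->", determine_compute_capability(*spec))
```

Note that GPU memory is checked before CPU count, so a many-core box with a small GPU still lands in a GPU tier.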
src/ui/__init__.py ADDED
@@ -0,0 +1 @@
+ # UI module initialization
src/ui/gradio_interface.py ADDED
@@ -0,0 +1,1064 @@
+ import gradio as gr
+ import asyncio
+ import logging
+ import json
+ import time
+ from typing import Dict, List, Optional, Any, Tuple
+ from datetime import datetime, timedelta
+ import numpy as np
+
+ from ..core.monster_engine import Monster, MonsterPersonalityType, EmotionalState
+ from ..ai.qwen_processor import QwenProcessor, ModelConfig
+ from ..ai.speech_engine import AdvancedSpeechEngine, SpeechConfig
+ from .state_manager import AdvancedStateManager
+ from ..deployment.zero_gpu_optimizer import ZeroGPUOptimizer
+
+ class StreamingComponents:
+     """Helper class for streaming components"""
+     def __init__(self):
+         self.logger = logging.getLogger(__name__)
+
+ class ModernDigiPalInterface:
+     def __init__(self):
+         self.logger = logging.getLogger(__name__)
+
+         # Initialize core systems
+         self.state_manager = AdvancedStateManager()
+         self.streaming = StreamingComponents()
+         self.gpu_optimizer = ZeroGPUOptimizer()
+
+         # AI Systems (will be initialized based on available resources)
+         self.qwen_processor = None
+         self.speech_engine = None
+
+         # Performance tracking
+         self.performance_metrics = {
+             "total_interactions": 0,
+             "average_response_time": 0.0,
+             "user_satisfaction": 0.0
+         }
+
+         # UI State
+         self.current_monster = None
+         self.ui_theme = "soft"
+
+     async def initialize(self):
+         """Initialize the interface with optimized configurations"""
+         try:
+             # Detect available resources
+             resources = await self.gpu_optimizer.detect_available_resources()
+
+             # Initialize AI processors based on resources
+             await self._initialize_ai_systems(resources)
+
+             # Initialize state management
+             await self.state_manager.initialize()
+
+             self.logger.info("DigiPal interface initialized successfully")
+
+         except Exception as e:
+             self.logger.error(f"Failed to initialize interface: {e}")
+             raise
+
+     async def _initialize_ai_systems(self, resources: Dict[str, Any]):
+         """Initialize AI systems based on available resources"""
+         # Configure Qwen processor
+         if resources["gpu_memory_gb"] >= 8:
+             model_config = ModelConfig(
+                 model_name="Qwen/Qwen2.5-3B-Instruct",
+                 max_memory_gb=resources["gpu_memory_gb"],
+                 inference_speed="quality"
+             )
+         elif resources["gpu_memory_gb"] >= 4:
+             model_config = ModelConfig(
+                 model_name="Qwen/Qwen2.5-1.5B-Instruct",
+                 max_memory_gb=resources["gpu_memory_gb"],
+                 inference_speed="balanced"
+             )
+         else:
+             model_config = ModelConfig(
+                 model_name="Qwen/Qwen2.5-0.5B-Instruct",
+                 max_memory_gb=resources["gpu_memory_gb"],
+                 inference_speed="fast"
+             )
+
+         self.qwen_processor = QwenProcessor(model_config)
+         await self.qwen_processor.initialize()
+
+         # Configure speech engine
+         speech_config = SpeechConfig()
+         if resources["gpu_memory_gb"] >= 6:
+             speech_config.model_size = "medium"
+             speech_config.device = "cuda"
+         elif resources["gpu_memory_gb"] >= 3:
+             speech_config.model_size = "small"
+             speech_config.device = "cuda"
+         else:
+             speech_config.model_size = "base"
+             speech_config.device = "cpu"
+
+         self.speech_engine = AdvancedSpeechEngine(speech_config)
+         await self.speech_engine.initialize()
+
+     def create_interface(self) -> gr.Blocks:
+         """Create the main Gradio interface"""
+
+         # Custom CSS for modern monster game UI
+         custom_css = """
+         /* Modern Dark Theme */
+         .gradio-container {
+             background: linear-gradient(135deg, #1a1a2e 0%, #16213e 50%, #0f3460 100%);
+             font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
+         }
+
+         /* Monster Display */
+         .monster-display {
+             background: linear-gradient(145deg, #2a2a4e, #1e1e3c);
+             border: 3px solid #4a9eff;
+             border-radius: 20px;
+             padding: 20px;
+             text-align: center;
+             box-shadow: 0 10px 30px rgba(74, 158, 255, 0.3);
+             backdrop-filter: blur(10px);
+             min-height: 400px;
+         }
+
+         /* Stat Bars */
+         .stat-bar {
+             background: #1e1e3c;
+             border-radius: 15px;
+             overflow: hidden;
+             margin: 8px 0;
+             height: 25px;
+             border: 2px solid #333;
+         }
+
+         .stat-fill {
+             height: 100%;
+             border-radius: 12px;
+             transition: width 0.8s ease-in-out;
+             background: linear-gradient(90deg, #ff6b6b, #4ecdc4, #45b7d1);
+         }
+
+         /* Care Action Buttons */
+         .care-button {
+             background: linear-gradient(145deg, #4a9eff, #357abd);
+             border: none;
+             color: white;
+             padding: 12px 24px;
+             border-radius: 12px;
+             font-weight: bold;
+             transition: all 0.3s ease;
+             box-shadow: 0 4px 15px rgba(74, 158, 255, 0.4);
+         }
+
+         .care-button:hover {
+             transform: translateY(-3px);
+             box-shadow: 0 8px 25px rgba(74, 158, 255, 0.6);
+             background: linear-gradient(145deg, #5aa7ff, #4a9eff);
+         }
+
+         /* Conversation Area */
+         .conversation-container {
+             background: rgba(30, 30, 60, 0.8);
+             border: 2px solid #4a9eff;
+             border-radius: 15px;
+             backdrop-filter: blur(10px);
+         }
+
+         /* Mini-game Container */
+         .mini-game-area {
+             background: linear-gradient(145deg, #2d1b69, #1a1a2e);
+             border: 2px solid #8b5cf6;
+             border-radius: 15px;
+             padding: 20px;
+             margin: 10px 0;
+         }
+
+         /* Status Indicators */
+         .status-indicator {
+             display: inline-block;
+             width: 12px;
+             height: 12px;
+             border-radius: 50%;
+             margin-right: 8px;
+         }
+
+         .status-healthy { background: #4ade80; }
+         .status-warning { background: #fbbf24; }
+         .status-critical { background: #ef4444; }
+
+         /* Responsive Design */
+         @media (max-width: 768px) {
+             .monster-display {
+                 padding: 15px;
+                 margin: 10px;
+             }
+
+             .care-button {
+                 padding: 10px 20px;
+                 margin: 5px;
+             }
+         }
+         """
+
+         with gr.Blocks(
+             css=custom_css,
+             title="DigiPal - Advanced Monster Companion",
+             theme=gr.themes.Soft()
+         ) as interface:
+
+             # Header
+             gr.HTML("""
+             <div style="text-align: center; padding: 20px;">
+                 <h1 style="color: #4a9eff; font-size: 2.5em; margin: 0;">🐾 DigiPal</h1>
+                 <p style="color: #8b5cf6; font-size: 1.2em;">Advanced AI Monster Companion</p>
+             </div>
+             """)
+
+             # State Management - Modern Gradio 5.34.2 patterns
+             with gr.Row():
+                 # Session State for current monster
+                 current_monster_state = gr.State(None)
+
+                 # Conversation State
+                 conversation_state = gr.State([])
+
+                 # UI State
+                 ui_state = gr.State({
+                     "last_action": None,
+                     "current_tab": "care",
+                     "mini_game_active": False
+                 })
+
+             # Main Interface Layout
+             with gr.Row(equal_height=True):
+
+                 # Left Column - Monster Display and Stats
+                 with gr.Column(scale=3):
+
+                     # Monster Display Area
+                     monster_display = gr.HTML(
+                         value=self._get_default_monster_display(),
+                         elem_classes="monster-display"
+                     )
+
+                     # Monster Management Controls
+                     with gr.Row():
+                         create_monster_btn = gr.Button(
+                             "🥚 Create New Monster",
+                             variant="primary",
+                             elem_classes="care-button"
+                         )
+                         load_monster_btn = gr.Button(
+                             "📂 Load Monster",
+                             elem_classes="care-button"
+                         )
+                         save_progress_btn = gr.Button(
+                             "💾 Save Progress",
+                             elem_classes="care-button"
+                         )
+
+                     # New Monster Creation
+                     with gr.Group(visible=False) as monster_creation_group:
+                         monster_name_input = gr.Textbox(
+                             label="Monster Name",
+                             placeholder="Enter your monster's name...",
+                             max_lines=1
+                         )
+
+                         personality_type = gr.Dropdown(
+                             choices=[p.value for p in MonsterPersonalityType],
+                             label="Personality Type",
+                             value="playful"
+                         )
+
+                         confirm_creation_btn = gr.Button(
+                             "✨ Create Monster",
+                             variant="primary"
+                         )
+
281
+                # Middle Column - Care Actions and Training
+                with gr.Column(scale=2):
+
+                    with gr.Tabs() as care_tabs:
+
+                        # Care Tab
+                        with gr.TabItem("🍼 Care", id=0):
+
+                            # Feeding Section
+                            with gr.Group():
+                                gr.Markdown("### 🍽️ Feeding")
+
+                                food_type = gr.Dropdown(
+                                    choices=[
+                                        "meat", "fish", "fruit", "vegetables",
+                                        "medicine", "supplement", "treat"
+                                    ],
+                                    value="meat",
+                                    label="Food Type"
+                                )
+
+                                feed_btn = gr.Button(
+                                    "🍖 Feed Monster",
+                                    elem_classes="care-button"
+                                )
+
+                            # Training Section
+                            with gr.Group():
+                                gr.Markdown("### 💪 Training")
+
+                                training_type = gr.Dropdown(
+                                    choices=[
+                                        "strength", "endurance", "intelligence",
+                                        "dexterity", "spirit", "technique"
+                                    ],
+                                    value="strength",
+                                    label="Training Focus"
+                                )
+
+                                training_intensity = gr.Slider(
+                                    minimum=1,
+                                    maximum=5,
+                                    value=3,
+                                    step=1,
+                                    label="Training Intensity"
+                                )
+
+                                train_btn = gr.Button(
+                                    "🏋️ Start Training",
+                                    elem_classes="care-button"
+                                )
+
+                            # Care Actions
+                            with gr.Group():
+                                gr.Markdown("### 🧼 Care Actions")
+
+                                with gr.Row():
+                                    clean_btn = gr.Button("🚿 Clean", elem_classes="care-button")
+                                    play_btn = gr.Button("🎮 Play", elem_classes="care-button")
+                                    rest_btn = gr.Button("😴 Rest", elem_classes="care-button")
+                                    discipline_btn = gr.Button("📚 Discipline", elem_classes="care-button")
+
+                        # Evolution Tab
+                        with gr.TabItem("🦋 Evolution", id=1):
+
+                            evolution_status = gr.HTML(
+                                value="<p>No monster loaded</p>"
+                            )
+
+                            evolution_requirements = gr.JSON(
+                                label="Evolution Requirements",
+                                value={}
+                            )
+
+                            trigger_evolution_btn = gr.Button(
+                                "🌟 Trigger Evolution",
+                                variant="primary",
+                                interactive=False
+                            )
+
+                        # Breeding Tab
+                        with gr.TabItem("💕 Breeding", id=2):
+
+                            gr.Markdown("### Find a Breeding Partner")
+
+                            partner_search = gr.Dropdown(
+                                choices=[],
+                                label="Available Partners",
+                                interactive=False
+                            )
+
+                            breeding_compatibility = gr.HTML(
+                                value="<p>Select a partner to see compatibility</p>"
+                            )
+
+                            start_breeding_btn = gr.Button(
+                                "💖 Start Breeding",
+                                variant="primary",
+                                interactive=False
+                            )
+
+                # Right Column - Conversation and Mini-games
+                with gr.Column(scale=3):
+
+                    with gr.Tabs():
+
+                        # Conversation Tab
+                        with gr.TabItem("💬 Talk", id=0):
+
+                            # Conversation Display
+                            chatbot = gr.Chatbot(
+                                value=[],
+                                height=350,
+                                label="Conversation with your Monster",
+                                elem_classes="conversation-container",
+                                avatar_images=("👤", "🐾")
+                            )
+
+                            # Text Input
+                            with gr.Row():
+                                text_input = gr.Textbox(
+                                    label="Message",
+                                    placeholder="Talk to your monster...",
+                                    scale=4,
+                                    max_lines=3
+                                )
+                                send_btn = gr.Button("💬", scale=1)
+
+                            # Voice Input
+                            with gr.Group():
+                                gr.Markdown("### 🎤 Voice Chat")
+
+                                with gr.Row():
+                                    audio_input = gr.Audio(
+                                        sources=["microphone"],
+                                        type="numpy",
+                                        label="Voice Input",
+                                        streaming=False
+                                    )
+
+                                    voice_btn = gr.Button("🗣️ Send Voice")
+
+                                # Real-time audio streaming (Gradio 5.34.2 feature)
+                                with gr.Row():
+                                    start_stream_btn = gr.Button("🎙️ Start Live Chat")
+                                    stop_stream_btn = gr.Button("⏹️ Stop", interactive=False)
+
+                        # Mini-games Tab
+                        with gr.TabItem("🎯 Games", id=1):
+
+                            mini_game_display = gr.HTML(
+                                value=self._get_mini_game_display(),
+                                elem_classes="mini-game-area"
+                            )
+
+                            with gr.Row():
+                                reaction_game_btn = gr.Button("⚡ Reaction Training")
+                                memory_game_btn = gr.Button("🧠 Memory Challenge")
+                                rhythm_game_btn = gr.Button("🎵 Rhythm Game")
+                                puzzle_game_btn = gr.Button("🧩 Logic Puzzle")
+
+                            game_score_display = gr.JSON(
+                                label="Game Statistics",
+                                value={}
+                            )
+
+                        # Stats Tab
+                        with gr.TabItem("📊 Statistics", id=2):
+
+                            detailed_stats = gr.JSON(
+                                label="Detailed Monster Statistics",
+                                value={}
+                            )
+
+                            performance_charts = gr.Plot(
+                                label="Performance Over Time"
+                            )
+
+                            achievement_display = gr.HTML(
+                                value="<p>No achievements yet</p>"
+                            )
+
+            # Global Status Bar
+            with gr.Row():
+                status_display = gr.HTML(
+                    value="<p>Ready to start your monster care journey!</p>",
+                    elem_id="status-bar"
+                )
+
+                auto_save_indicator = gr.HTML(
+                    value="<span style='color: green;'>● Auto-save: ON</span>",
+                    elem_id="auto-save-status"
+                )
+
+            # Hidden components for data flow
+            action_result = gr.Textbox(visible=False)
+            background_timer = gr.Timer(value=30, active=True)  # 30-second updates
+
+            # Event Handlers with Modern Async Patterns
+
+            # Monster Creation Flow
+            create_monster_btn.click(
+                fn=lambda: gr.update(visible=True),
+                outputs=monster_creation_group
+            )
+
+            confirm_creation_btn.click(
+                fn=self.create_new_monster,
+                inputs=[monster_name_input, personality_type],
+                outputs=[current_monster_state, monster_display, monster_creation_group]
+            )
+
+            # Care Actions
+            feed_btn.click(
+                fn=self.feed_monster,
+                inputs=[current_monster_state, food_type],
+                outputs=[current_monster_state, monster_display, action_result, chatbot]
+            )
+
+            train_btn.click(
+                fn=self.train_monster,
+                inputs=[current_monster_state, training_type, training_intensity],
+                outputs=[current_monster_state, monster_display, action_result]
+            )
+
+            # Conversation Handlers
+            send_btn.click(
+                fn=self.handle_text_conversation,
+                inputs=[current_monster_state, text_input, conversation_state],
+                outputs=[chatbot, text_input, conversation_state, current_monster_state]
+            )
+
+            text_input.submit(
+                fn=self.handle_text_conversation,
+                inputs=[current_monster_state, text_input, conversation_state],
+                outputs=[chatbot, text_input, conversation_state, current_monster_state]
+            )
+
+            voice_btn.click(
+                fn=self.handle_voice_input,
+                inputs=[current_monster_state, audio_input, conversation_state],
+                outputs=[chatbot, conversation_state, current_monster_state, action_result]
+            )
+
+            # Real-time streaming (Gradio 5.34.2)
+            start_stream_btn.click(
+                fn=self.start_voice_streaming,
+                outputs=[start_stream_btn, stop_stream_btn]
+            )
+
+            stop_stream_btn.click(
+                fn=self.stop_voice_streaming,
+                outputs=[start_stream_btn, stop_stream_btn]
+            )
+
+            # Background Updates
+            background_timer.tick(
+                fn=self.background_update,
+                inputs=[current_monster_state],
+                outputs=[current_monster_state, monster_display, auto_save_indicator]
+            )
+
+            # Care action handlers
+            for btn, action in [(clean_btn, "clean"), (play_btn, "play"),
+                                (rest_btn, "rest"), (discipline_btn, "discipline")]:
+                btn.click(
+                    fn=lambda monster_state, action=action: self.perform_care_action(monster_state, action),
+                    inputs=[current_monster_state],
+                    outputs=[current_monster_state, monster_display, action_result]
+                )
+
+            # Mini-game handlers
+            for btn, game in [(reaction_game_btn, "reaction"), (memory_game_btn, "memory"),
+                              (rhythm_game_btn, "rhythm"), (puzzle_game_btn, "puzzle")]:
+                btn.click(
+                    fn=lambda monster_state, game=game: self.start_mini_game(monster_state, game),
+                    inputs=[current_monster_state],
+                    outputs=[mini_game_display, game_score_display]
+                )
+
+        return interface
+
+    # Implementation methods continue...
+
+    async def create_new_monster(self, name: str, personality: str) -> Tuple:
+        """Create a new monster with specified parameters"""
+        try:
+            if not name.strip():
+                return None, self._get_default_monster_display(), gr.update(visible=True)
+
+            # Create monster with personality
+            monster = Monster(
+                name=name.strip(),
+                species="Botamon"  # Starting species
+            )
+
+            # Set personality
+            monster.personality.primary_type = MonsterPersonalityType(personality)
+
+            # Randomize personality traits based on type
+            trait_modifiers = {
+                "playful": {"extraversion": 0.8, "openness": 0.7, "agreeableness": 0.6},
+                "serious": {"conscientiousness": 0.8, "neuroticism": 0.3, "extraversion": 0.4},
+                "curious": {"openness": 0.9, "extraversion": 0.6, "conscientiousness": 0.5},
+                "gentle": {"agreeableness": 0.9, "neuroticism": 0.2, "extraversion": 0.5},
+                "energetic": {"extraversion": 0.9, "openness": 0.6, "neuroticism": 0.3},
+                "calm": {"neuroticism": 0.1, "conscientiousness": 0.7, "agreeableness": 0.7},
+                "mischievous": {"openness": 0.8, "extraversion": 0.7, "conscientiousness": 0.3},
+                "loyal": {"agreeableness": 0.8, "conscientiousness": 0.9, "neuroticism": 0.2}
+            }
+
+            modifiers = trait_modifiers.get(personality, {})
+            for trait, value in modifiers.items():
+                if hasattr(monster.personality, trait):
+                    setattr(monster.personality, trait, value)
+
+            # Save monster
+            await self.state_manager.save_monster(monster)
+
+            # Generate display
+            display_html = self._generate_monster_display(monster)
+
+            self.current_monster = monster
+
+            return (
+                monster.dict(),
+                display_html,
+                gr.update(visible=False)
+            )
+
+        except Exception as e:
+            self.logger.error(f"Monster creation failed: {e}")
+            return None, self._get_error_display(str(e)), gr.update(visible=True)
+
615
+ def _get_default_monster_display(self) -> str:
616
+ """Get default monster display when no monster is loaded"""
617
+ return """
618
+ <div style="text-align: center; padding: 40px;">
619
+ <div style="font-size: 4em; margin-bottom: 20px;">🥚</div>
620
+ <h2 style="color: #4a9eff;">No Monster Loaded</h2>
621
+ <p style="color: #8b5cf6;">Create a new monster to begin your journey!</p>
622
+ </div>
623
+ """
624
+
625
+ def _generate_monster_display(self, monster: Monster) -> str:
626
+ """Generate HTML display for the monster"""
627
+ # Monster sprite based on species and stage
628
+ sprite_map = {
629
+ "Botamon": {"egg": "🥚", "baby": "🐣", "child": "🐾", "adult": "🐲"},
630
+ # Add more species...
631
+ }
632
+
633
+ sprite = sprite_map.get(monster.species, {}).get(monster.lifecycle.stage.value, "🐾")
634
+
635
+ # Emotional state emoji
636
+ emotion_emojis = {
637
+ "ecstatic": "🤩", "happy": "😊", "content": "😌", "neutral": "😐",
638
+ "melancholy": "😔", "sad": "😢", "angry": "😠", "sick": "🤒",
639
+ "excited": "😆", "tired": "😴"
640
+ }
641
+
642
+ emotion_emoji = emotion_emojis.get(monster.emotional_state.value, "😐")
643
+
644
+ # Calculate stat colors
645
+ def get_stat_color(value: int) -> str:
646
+ if value >= 80: return "#4ade80" # Green
647
+ elif value >= 60: return "#fbbf24" # Yellow
648
+ elif value >= 40: return "#fb923c" # Orange
649
+ else: return "#ef4444" # Red
650
+
651
+ # Age display
652
+ age_days = monster.lifecycle.age_minutes / 1440
653
+ age_display = f"{age_days:.1f} days"
654
+
655
+ return f"""
656
+ <div style="text-align: center; padding: 20px;">
657
+
658
+ <!-- Monster Sprite -->
659
+ <div style="font-size: 6em; margin: 20px 0;">{sprite}</div>
660
+
661
+ <!-- Monster Info -->
662
+ <h2 style="color: #4a9eff; margin: 10px 0;">{monster.name} {emotion_emoji}</h2>
663
+ <p style="color: #8b5cf6; margin: 5px 0;">
664
+ <strong>{monster.species}</strong> | {monster.lifecycle.stage.value.title()} | {age_display}
665
+ </p>
666
+
667
+ <!-- Mood and Activity -->
668
+ <p style="color: #a78bfa; margin: 10px 0;">
669
+ Feeling {monster.emotional_state.value} while {monster.current_activity}
670
+ </p>
671
+
672
+ <!-- Care Stats -->
673
+ <div style="margin: 20px 0;">
674
+ <h3 style="color: #4a9eff;">Care Status</h3>
675
+
676
+ <div style="text-align: left; max-width: 300px; margin: 0 auto;">
677
+ <div style="margin: 8px 0;">
678
+ <span style="color: white;">Health</span>
679
+ <div class="stat-bar">
680
+ <div class="stat-fill" style="width: {monster.stats.health}%; background: {get_stat_color(monster.stats.health)};"></div>
681
+ </div>
682
+ <span style="color: #888; font-size: 0.9em;">{monster.stats.health}/100</span>
683
+ </div>
684
+
685
+ <div style="margin: 8px 0;">
686
+ <span style="color: white;">Happiness</span>
687
+ <div class="stat-bar">
688
+ <div class="stat-fill" style="width: {monster.stats.happiness}%; background: {get_stat_color(monster.stats.happiness)};"></div>
689
+ </div>
690
+ <span style="color: #888; font-size: 0.9em;">{monster.stats.happiness}/100</span>
691
+ </div>
692
+
693
+ <div style="margin: 8px 0;">
694
+ <span style="color: white;">Hunger</span>
695
+ <div class="stat-bar">
696
+ <div class="stat-fill" style="width: {monster.stats.hunger}%; background: {get_stat_color(monster.stats.hunger)};"></div>
697
+ </div>
698
+ <span style="color: #888; font-size: 0.9em;">{monster.stats.hunger}/100</span>
699
+ </div>
700
+
701
+ <div style="margin: 8px 0;">
702
+ <span style="color: white;">Energy</span>
703
+ <div class="stat-bar">
704
+ <div class="stat-fill" style="width: {monster.stats.energy}%; background: {get_stat_color(monster.stats.energy)};"></div>
705
+ </div>
706
+ <span style="color: #888; font-size: 0.9em;">{monster.stats.energy}/100</span>
707
+ </div>
708
+ </div>
709
+ </div>
710
+
711
+ <!-- Battle Stats -->
712
+ <div style="margin: 20px 0;">
713
+ <h3 style="color: #8b5cf6;">Battle Power</h3>
714
+ <div style="display: grid; grid-template-columns: 1fr 1fr; gap: 10px; max-width: 300px; margin: 0 auto;">
715
+ <div>Life: <strong style="color: #4ade80;">{monster.stats.life}</strong></div>
716
+ <div>MP: <strong style="color: #60a5fa;">{monster.stats.mp}</strong></div>
717
+ <div>Offense: <strong style="color: #f87171;">{monster.stats.offense}</strong></div>
718
+ <div>Defense: <strong style="color: #34d399;">{monster.stats.defense}</strong></div>
719
+ <div>Speed: <strong style="color: #fbbf24;">{monster.stats.speed}</strong></div>
720
+ <div>Brains: <strong style="color: #a78bfa;">{monster.stats.brains}</strong></div>
721
+ </div>
722
+ </div>
723
+
724
+ <!-- Generation and Care Info -->
725
+ <div style="margin: 15px 0; font-size: 0.9em; color: #888;">
726
+ Generation {monster.lifecycle.generation} |
727
+ Care Mistakes: {monster.lifecycle.care_mistakes} |
728
+ Relationship: {monster.personality.relationship_level}/100
729
+ </div>
730
+
731
+ </div>
732
+ """
733
+
734
+    def _get_mini_game_display(self) -> str:
+        """Get mini-game display HTML"""
+        return """
+        <div style="text-align: center; padding: 20px;">
+            <h3 style="color: #8b5cf6;">Mini-Games Training Center</h3>
+            <p style="color: #a78bfa;">Select a mini-game to train your monster!</p>
+            <div style="margin-top: 20px;">
+                <p>⚡ Reaction: Improve Speed & Reflexes</p>
+                <p>🧠 Memory: Enhance Intelligence</p>
+                <p>🎵 Rhythm: Boost Spirit & Happiness</p>
+                <p>🧩 Logic: Develop Problem-Solving</p>
+            </div>
+        </div>
+        """
+
+    def _get_error_display(self, error: str) -> str:
+        """Get error display HTML"""
+        return f"""
+        <div style="text-align: center; padding: 40px;">
+            <div style="font-size: 3em; margin-bottom: 20px;">❌</div>
+            <h2 style="color: #ef4444;">Error Occurred</h2>
+            <p style="color: #f87171;">{error}</p>
+        </div>
+        """
+
+    async def feed_monster(self, monster_state: Dict, food_type: str) -> Tuple:
+        """Feed the monster"""
+        if not monster_state:
+            return monster_state, self._get_default_monster_display(), "No monster loaded!", []
+
+        try:
+            monster = Monster(**monster_state)
+
+            # Food effects
+            food_effects = {
+                "meat": {"hunger": 30, "happiness": 10},
+                "fish": {"hunger": 25, "happiness": 15, "health": 5},
+                "fruit": {"hunger": 20, "happiness": 20},
+                "vegetables": {"hunger": 25, "happiness": 5, "health": 10},
+                "medicine": {"health": 50, "happiness": -10},
+                "supplement": {"energy": 20, "happiness": 5},
+                "treat": {"happiness": 30, "hunger": 10}
+            }
+
+            effects = food_effects.get(food_type, food_effects["meat"])
+
+            # Apply effects, clamped to the 0-100 stat range
+            for stat, value in effects.items():
+                current = getattr(monster.stats, stat)
+                setattr(monster.stats, stat, max(0, min(100, current + value)))
+
+            # Update emotional state
+            monster.emotional_state = monster.calculate_emotional_state()
+
+            # Save monster
+            await self.state_manager.save_monster(monster)
+
+            # Generate response
+            response = f"{monster.name} enjoyed the {food_type}! 😋"
+
+            return (
+                monster.dict(),
+                self._generate_monster_display(monster),
+                response,
+                [[f"Fed {food_type}", response]]
+            )
+
+        except Exception as e:
+            self.logger.error(f"Feeding failed: {e}")
+            return monster_state, self._get_error_display(str(e)), str(e), []
+
+    async def train_monster(self, monster_state: Dict, training_type: str, intensity: int) -> Tuple:
+        """Train the monster"""
+        if not monster_state:
+            return monster_state, self._get_default_monster_display(), "No monster loaded!"
+
+        try:
+            monster = Monster(**monster_state)
+
+            # Check if monster can train
+            if monster.stats.energy < 20:
+                return monster_state, self._generate_monster_display(monster), f"{monster.name} is too tired to train! 😴"
+
+            # Training effects
+            training_effects = {
+                "strength": {"offense": 5 * intensity, "life": 20 * intensity},
+                "endurance": {"defense": 5 * intensity, "life": 30 * intensity},
+                "intelligence": {"brains": 8 * intensity, "mp": 10 * intensity},
+                "dexterity": {"speed": 6 * intensity},
+                "spirit": {"mp": 15 * intensity, "happiness": 5},
+                "technique": {"offense": 3 * intensity, "defense": 3 * intensity}
+            }
+
+            effects = training_effects.get(training_type, {})
+
+            # Apply stat increases
+            for stat, increase in effects.items():
+                if hasattr(monster.stats, stat):
+                    current = getattr(monster.stats, stat)
+                    setattr(monster.stats, stat, current + increase)
+
+            # Update training progress
+            if training_type in monster.stats.training_progress:
+                monster.stats.training_progress[training_type] += 10 * intensity
+
+            # Training costs
+            monster.stats.energy = max(0, monster.stats.energy - (15 * intensity))
+            monster.stats.hunger = max(0, monster.stats.hunger - (10 * intensity))
+
+            # Update emotional state
+            monster.emotional_state = monster.calculate_emotional_state()
+            monster.current_activity = "training"
+
+            # Save monster
+            await self.state_manager.save_monster(monster)
+
+            response = f"{monster.name} completed {training_type} training! 💪"
+
+            return (
+                monster.dict(),
+                self._generate_monster_display(monster),
+                response
+            )
+
+        except Exception as e:
+            self.logger.error(f"Training failed: {e}")
+            return monster_state, self._get_error_display(str(e)), str(e)
+
+    async def handle_text_conversation(self, monster_state: Dict, message: str, conversation_history: List) -> Tuple:
+        """Handle text conversation with monster"""
+        # Always return four values to match the wired outputs:
+        # (chatbot, text_input, conversation_state, current_monster_state)
+        if not monster_state or not message.strip():
+            return conversation_history, "", conversation_history, monster_state
+
+        try:
+            monster = Monster(**monster_state)
+
+            # Generate AI response
+            response_data = await self.qwen_processor.generate_monster_response(
+                monster.dict(),
+                message,
+                conversation_history
+            )
+
+            response = response_data["response"]
+
+            # Update conversation history
+            conversation_history.append([message, response])
+
+            # Update monster state based on interaction
+            monster.conversation.total_conversations += 1
+            monster.conversation.last_interaction = datetime.now()
+            monster.stats.happiness = min(100, monster.stats.happiness + 2)
+            monster.personality.relationship_level = min(100, monster.personality.relationship_level + 1)
+
+            # Apply emotional impact
+            emotional_impact = response_data.get("emotional_impact", {})
+            for emotion, value in emotional_impact.items():
+                if emotion == "happiness":
+                    monster.stats.happiness = max(0, min(100, monster.stats.happiness + int(value * 10)))
+                elif emotion == "bonding":
+                    monster.personality.relationship_level = min(100, monster.personality.relationship_level + int(value * 5))
+
+            # Save monster
+            await self.state_manager.save_monster(monster)
+
+            return conversation_history, "", conversation_history, monster.dict()
+
+        except Exception as e:
+            self.logger.error(f"Conversation failed: {e}")
+            return conversation_history, "", conversation_history, monster_state
+
+    async def handle_voice_input(self, monster_state: Dict, audio_data, conversation_history: List) -> Tuple:
+        """Handle voice input"""
+        if not monster_state or audio_data is None:
+            return conversation_history, conversation_history, monster_state, ""
+
+        try:
+            # Process speech
+            speech_result = await self.speech_engine.process_audio_stream(audio_data[1])
+
+            if not speech_result["success"]:
+                return conversation_history, conversation_history, monster_state, "Speech processing failed"
+
+            transcribed_text = speech_result["transcription"]
+            if not transcribed_text.strip():
+                return conversation_history, conversation_history, monster_state, "No speech detected"
+
+            # Process as text conversation
+            new_history, _, updated_history, updated_monster = await self.handle_text_conversation(
+                monster_state, transcribed_text, conversation_history
+            )
+
+            return new_history, updated_history, updated_monster, f"Heard: \"{transcribed_text}\""
+
+        except Exception as e:
+            self.logger.error(f"Voice input failed: {e}")
+            return conversation_history, conversation_history, monster_state, str(e)
+
+    async def perform_care_action(self, monster_state: Dict, action: str) -> Tuple:
+        """Perform care action on monster"""
+        if not monster_state:
+            return monster_state, self._get_default_monster_display(), "No monster loaded!"
+
+        try:
+            monster = Monster(**monster_state)
+
+            care_effects = {
+                "clean": {"cleanliness": 50, "happiness": 10},
+                "play": {"happiness": 25, "energy": -15, "relationship": 5},
+                "rest": {"energy": 40, "happiness": 5},
+                "discipline": {"discipline": 20, "happiness": -10}
+            }
+
+            effects = care_effects.get(action, {})
+
+            # Apply effects
+            for stat, value in effects.items():
+                if stat == "relationship":
+                    monster.personality.relationship_level = min(100, monster.personality.relationship_level + value)
+                elif hasattr(monster.stats, stat):
+                    current = getattr(monster.stats, stat)
+                    setattr(monster.stats, stat, max(0, min(100, current + value)))
+
+            # Update activity
+            monster.current_activity = action
+            monster.emotional_state = monster.calculate_emotional_state()
+
+            # Save monster
+            await self.state_manager.save_monster(monster)
+
+            # Avoid naive "{action}ing" suffixing, which mangles words like "discipline"
+            response = f"{monster.name} responded well to the {action} action! ✨"
+
+            return (
+                monster.dict(),
+                self._generate_monster_display(monster),
+                response
+            )
+
+        except Exception as e:
+            self.logger.error(f"Care action failed: {e}")
+            return monster_state, self._get_error_display(str(e)), str(e)
+
+    async def background_update(self, monster_state: Dict) -> Tuple:
+        """Background update for time-based effects"""
+        if not monster_state:
+            return monster_state, self._get_default_monster_display(), gr.update()
+
+        try:
+            monster = Monster(**monster_state)
+
+            # Calculate time elapsed in minutes
+            time_elapsed = (datetime.now() - monster.last_update).total_seconds() / 60
+
+            # Apply time effects
+            monster.apply_time_effects(time_elapsed)
+
+            # Save monster
+            await self.state_manager.save_monster(monster)
+
+            # Update save indicator
+            save_indicator = f"<span style='color: green;'>● Auto-saved at {datetime.now().strftime('%H:%M:%S')}</span>"
+
+            return (
+                monster.dict(),
+                self._generate_monster_display(monster),
+                save_indicator
+            )
+
+        except Exception as e:
+            self.logger.error(f"Background update failed: {e}")
+            return monster_state, self._get_error_display(str(e)), gr.update()
+
+    def start_mini_game(self, monster_state: Dict, game_type: str) -> Tuple:
+        """Start a mini-game"""
+        if not monster_state:
+            return self._get_mini_game_display(), {}
+
+        # Placeholder for mini-game implementation
+        game_display = f"""
+        <div style="text-align: center; padding: 20px;">
+            <h3 style="color: #8b5cf6;">{game_type.title()} Training</h3>
+            <p>Mini-game implementation coming soon!</p>
+        </div>
+        """
+
+        game_stats = {
+            "game_type": game_type,
+            "status": "not_implemented"
+        }
+
+        return game_display, game_stats
+
+    def start_voice_streaming(self) -> Tuple:
+        """Start voice streaming"""
+        return gr.update(interactive=False), gr.update(interactive=True)
+
+    def stop_voice_streaming(self) -> Tuple:
+        """Stop voice streaming"""
+        return gr.update(interactive=True), gr.update(interactive=False)
+
+    def launch(self, **kwargs):
+        """Launch the Gradio interface with optimized settings"""
+        loop = asyncio.new_event_loop()
+        asyncio.set_event_loop(loop)
+
+        # Initialize async components
+        loop.run_until_complete(self.initialize())
+
+        # Create interface
+        interface = self.create_interface()
+
+        # Launch with production settings.
+        # Note: `show_tips` and `enable_queue` were removed from launch() in
+        # recent Gradio releases (queueing is on by default in Gradio 5),
+        # so they are intentionally not passed here.
+        launch_config = {
+            "server_name": "0.0.0.0",
+            "server_port": 7860,
+            "share": False,
+            "debug": False,
+            "show_error": True,
+            "quiet": False,
+            "favicon_path": None,
+            "ssl_keyfile": None,
+            "ssl_certfile": None,
+            "ssl_keyfile_password": None,
+            "max_threads": 40,
+            **kwargs
+        }
+
+        self.logger.info("Launching DigiPal interface...")
+        return interface.launch(**launch_config)
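A note on the button-wiring loops above: each handler lambda uses a default argument (`action=action`, `game=game`) because Python closures capture variables, not values. The sketch below (hypothetical helper names, not part of the repository) shows what would go wrong without that idiom and why the default-argument form is used:

```python
# Minimal sketch of the late-binding pitfall the handler loops avoid.
# `make_handlers_buggy` mirrors a naive lambda-in-a-loop; every closure ends up
# seeing the final loop value. `make_handlers_fixed` mirrors the `action=action`
# pattern used when wiring the care/mini-game buttons.
def make_handlers_buggy(actions):
    # Each lambda closes over the *variable* `action`, not its current value.
    return [lambda: action for action in actions]

def make_handlers_fixed(actions):
    # A default argument is evaluated per iteration, freezing the current value.
    return [lambda action=action: action for action in actions]

buggy = [handler() for handler in make_handlers_buggy(["clean", "play", "rest"])]
fixed = [handler() for handler in make_handlers_fixed(["clean", "play", "rest"])]
print(buggy)  # ['rest', 'rest', 'rest']
print(fixed)  # ['clean', 'play', 'rest']
```

Without the default argument, every care button would dispatch the last action in the list ("discipline") regardless of which button was clicked.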
src/ui/state_manager.py ADDED
@@ -0,0 +1,417 @@
+import asyncio
+import json
+import aiofiles
+import sqlite3
+import aiosqlite
+from pathlib import Path
+from typing import Dict, List, Optional, Any, Union
+from datetime import datetime, timedelta
+import logging
+import pickle
+import gzip
+
+from ..core.monster_engine import Monster
+
+class AdvancedStateManager:
+    def __init__(self, save_dir: str = "data/saves"):
+        self.save_dir = Path(save_dir)
+        self.save_dir.mkdir(parents=True, exist_ok=True)
+
+        self.db_path = self.save_dir / "monsters.db"
+        self.backup_dir = self.save_dir / "backups"
+        self.backup_dir.mkdir(exist_ok=True)
+
+        self.logger = logging.getLogger(__name__)
+
+        # In-memory cache for active monsters
+        self.monster_cache: Dict[str, Monster] = {}
+        self.cache_timestamps: Dict[str, datetime] = {}
+        self.cache_timeout = timedelta(minutes=30)
+
+        # Connection pool
+        self.db_pool = None
+
+    async def initialize(self):
+        """Initialize the state management system"""
+        try:
+            # Create database tables
+            await self._create_tables()
+
+            # Start background tasks
+            asyncio.create_task(self._cache_cleanup_task())
+            asyncio.create_task(self._auto_backup_task())
+
+            self.logger.info("State manager initialized successfully")
+
+        except Exception as e:
+            self.logger.error(f"State manager initialization failed: {e}")
+            raise
+
+    async def _create_tables(self):
+        """Create database tables for monster storage"""
+        async with aiosqlite.connect(self.db_path) as db:
+            await db.execute("""
+                CREATE TABLE IF NOT EXISTS monsters (
+                    id TEXT PRIMARY KEY,
+                    name TEXT NOT NULL,
+                    species TEXT NOT NULL,
+                    data BLOB NOT NULL,
+                    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+                    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+                    is_active BOOLEAN DEFAULT 1
+                )
+            """)
+
+            await db.execute("""
+                CREATE TABLE IF NOT EXISTS monster_interactions (
+                    id INTEGER PRIMARY KEY AUTOINCREMENT,
+                    monster_id TEXT NOT NULL,
+                    interaction_type TEXT NOT NULL,
+                    interaction_data TEXT,
+                    timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+                    FOREIGN KEY (monster_id) REFERENCES monsters (id)
+                )
+            """)
+
+            await db.execute("""
+                CREATE TABLE IF NOT EXISTS evolution_history (
+                    id INTEGER PRIMARY KEY AUTOINCREMENT,
+                    monster_id TEXT NOT NULL,
+                    from_stage TEXT NOT NULL,
+                    to_stage TEXT NOT NULL,
+                    evolution_trigger TEXT,
+                    timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+                    FOREIGN KEY (monster_id) REFERENCES monsters (id)
+                )
+            """)
+
+            await db.execute("""
+                CREATE TABLE IF NOT EXISTS breeding_records (
+                    id INTEGER PRIMARY KEY AUTOINCREMENT,
+                    parent1_id TEXT NOT NULL,
+                    parent2_id TEXT NOT NULL,
+                    offspring_id TEXT NOT NULL,
+                    breeding_time_hours REAL,
+                    compatibility_score REAL,
+                    timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP
+                )
+            """)
+
+            # Create indexes for performance
+            await db.execute("CREATE INDEX IF NOT EXISTS idx_monsters_active ON monsters (is_active)")
+            await db.execute("CREATE INDEX IF NOT EXISTS idx_interactions_monster ON monster_interactions (monster_id)")
+            await db.execute("CREATE INDEX IF NOT EXISTS idx_interactions_type ON monster_interactions (interaction_type)")
+
+            await db.commit()
+
+    async def save_monster(self, monster: Monster) -> bool:
+        """Save monster to persistent storage"""
+        try:
+            # Update cache
+            self.monster_cache[monster.id] = monster
+            self.cache_timestamps[monster.id] = datetime.now()
+
+            # Serialize monster data with compression
+            monster_data = gzip.compress(pickle.dumps(monster.dict()))
+
+            # Save to database
+            async with aiosqlite.connect(self.db_path) as db:
+                await db.execute("""
+                    INSERT OR REPLACE INTO monsters
+                    (id, name, species, data, updated_at)
+                    VALUES (?, ?, ?, ?, ?)
+                """, (
+                    monster.id,
+                    monster.name,
+                    monster.species,
+                    monster_data,
+                    datetime.now().isoformat()
+                ))
+                await db.commit()
+
+            self.logger.debug(f"Saved monster {monster.name} ({monster.id})")
+            return True
+
+        except Exception as e:
+            self.logger.error(f"Failed to save monster {monster.id}: {e}")
+            return False
+
+    async def load_monster(self, monster_id: str) -> Optional[Monster]:
+        """Load monster from storage"""
+        try:
+            # Check cache first
+            if monster_id in self.monster_cache:
+                cache_time = self.cache_timestamps.get(monster_id)
+                if cache_time and (datetime.now() - cache_time) < self.cache_timeout:
+                    return self.monster_cache[monster_id]
+
+            # Load from database
+            async with aiosqlite.connect(self.db_path) as db:
+                async with db.execute(
+                    "SELECT data FROM monsters WHERE id = ? AND is_active = 1",
+                    (monster_id,)
+                ) as cursor:
+                    row = await cursor.fetchone()
+
+            if not row:
+                return None
+
+            # Decompress and deserialize
+            monster_data = pickle.loads(gzip.decompress(row[0]))
+            monster = Monster(**monster_data)
+
+            # Update cache
+            self.monster_cache[monster_id] = monster
+            self.cache_timestamps[monster_id] = datetime.now()
+
+            self.logger.debug(f"Loaded monster {monster.name} ({monster_id})")
+            return monster
+
+        except Exception as e:
+            self.logger.error(f"Failed to load monster {monster_id}: {e}")
+            return None
+
+     async def list_monsters(self, active_only: bool = True) -> List[Dict[str, Any]]:
+         """List all monsters with basic information"""
+         try:
+             where_clause = "WHERE is_active = 1" if active_only else ""
+
+             async with aiosqlite.connect(self.db_path) as db:
+                 async with db.execute(f"""
+                     SELECT id, name, species, created_at, updated_at
+                     FROM monsters {where_clause}
+                     ORDER BY updated_at DESC
+                 """) as cursor:
+
+                     monsters = []
+                     async for row in cursor:
+                         monsters.append({
+                             "id": row[0],
+                             "name": row[1],
+                             "species": row[2],
+                             "created_at": row[3],
+                             "updated_at": row[4]
+                         })
+
+             return monsters
+
+         except Exception as e:
+             self.logger.error(f"Failed to list monsters: {e}")
+             return []
+
+     async def delete_monster(self, monster_id: str, soft_delete: bool = True) -> bool:
+         """Delete monster from storage"""
+         try:
+             # Remove from cache
+             self.monster_cache.pop(monster_id, None)
+             self.cache_timestamps.pop(monster_id, None)
+
+             async with aiosqlite.connect(self.db_path) as db:
+                 if soft_delete:
+                     # Soft delete - mark as inactive
+                     await db.execute(
+                         "UPDATE monsters SET is_active = 0 WHERE id = ?",
+                         (monster_id,)
+                     )
+                 else:
+                     # Hard delete - remove completely
+                     await db.execute("DELETE FROM monsters WHERE id = ?", (monster_id,))
+                     await db.execute("DELETE FROM monster_interactions WHERE monster_id = ?", (monster_id,))
+                     await db.execute("DELETE FROM evolution_history WHERE monster_id = ?", (monster_id,))
+
+                 await db.commit()
+
+             self.logger.info(f"Deleted monster {monster_id} (soft={soft_delete})")
+             return True
+
+         except Exception as e:
+             self.logger.error(f"Failed to delete monster {monster_id}: {e}")
+             return False
+
+     async def log_interaction(self, monster_id: str, interaction_type: str, interaction_data: Optional[Dict[str, Any]] = None):
+         """Log monster interaction for analytics"""
+         try:
+             data_json = json.dumps(interaction_data) if interaction_data else None
+
+             async with aiosqlite.connect(self.db_path) as db:
+                 await db.execute("""
+                     INSERT INTO monster_interactions
+                     (monster_id, interaction_type, interaction_data)
+                     VALUES (?, ?, ?)
+                 """, (monster_id, interaction_type, data_json))
+                 await db.commit()
+
+         except Exception as e:
+             self.logger.error(f"Failed to log interaction: {e}")
+
+     async def log_evolution(self, monster_id: str, from_stage: str, to_stage: str, trigger: str):
+         """Log monster evolution event"""
+         try:
+             async with aiosqlite.connect(self.db_path) as db:
+                 await db.execute("""
+                     INSERT INTO evolution_history
+                     (monster_id, from_stage, to_stage, evolution_trigger)
+                     VALUES (?, ?, ?, ?)
+                 """, (monster_id, from_stage, to_stage, trigger))
+                 await db.commit()
+
+         except Exception as e:
+             self.logger.error(f"Failed to log evolution: {e}")
+
+     async def get_monster_statistics(self, monster_id: str) -> Dict[str, Any]:
+         """Get comprehensive statistics for a monster"""
+         try:
+             async with aiosqlite.connect(self.db_path) as db:
+                 # Get interaction counts
+                 async with db.execute("""
+                     SELECT interaction_type, COUNT(*) as count
+                     FROM monster_interactions
+                     WHERE monster_id = ?
+                     GROUP BY interaction_type
+                 """, (monster_id,)) as cursor:
+                     interactions = {row[0]: row[1] async for row in cursor}
+
+                 # Get evolution history
+                 async with db.execute("""
+                     SELECT from_stage, to_stage, evolution_trigger, timestamp
+                     FROM evolution_history
+                     WHERE monster_id = ?
+                     ORDER BY timestamp
+                 """, (monster_id,)) as cursor:
+                     evolutions = [
+                         {
+                             "from": row[0],
+                             "to": row[1],
+                             "trigger": row[2],
+                             "timestamp": row[3]
+                         } async for row in cursor
+                     ]
+
+             return {
+                 "interactions": interactions,
+                 "evolutions": evolutions,
+                 "total_interactions": sum(interactions.values()),
+                 "evolution_count": len(evolutions)
+             }
+
+         except Exception as e:
+             self.logger.error(f"Failed to get statistics for {monster_id}: {e}")
+             return {}
+
+     async def create_backup(self) -> str:
+         """Create a compressed backup of all monster data"""
+         try:
+             timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
+             backup_file = self.backup_dir / f"monsters_backup_{timestamp}.gz"
+
+             # Export all active monsters
+             monsters = await self.list_monsters(active_only=True)
+             backup_data = {
+                 "timestamp": timestamp,
+                 "monsters": []
+             }
+
+             for monster_info in monsters:
+                 monster = await self.load_monster(monster_info["id"])
+                 if monster:
+                     backup_data["monsters"].append(monster.dict())
+
+             # Compress and save
+             with gzip.open(backup_file, 'wt') as f:
+                 json.dump(backup_data, f, default=str, indent=2)
+
+             self.logger.info(f"Created backup: {backup_file}")
+             return str(backup_file)
+
+         except Exception as e:
+             self.logger.error(f"Backup creation failed: {e}")
+             return ""
+
+     async def restore_backup(self, backup_file: str) -> bool:
+         """Restore monsters from backup file"""
+         try:
+             backup_path = Path(backup_file)
+             if not backup_path.exists():
+                 return False
+
+             with gzip.open(backup_path, 'rt') as f:
+                 backup_data = json.load(f)
+
+             restored_count = 0
+             for monster_data in backup_data.get("monsters", []):
+                 try:
+                     monster = Monster(**monster_data)
+                     if await self.save_monster(monster):
+                         restored_count += 1
+                 except Exception as e:
+                     self.logger.warning(f"Failed to restore monster: {e}")
+
+             self.logger.info(f"Restored {restored_count} monsters from backup")
+             return restored_count > 0
+
+         except Exception as e:
+             self.logger.error(f"Backup restoration failed: {e}")
+             return False
+
+     async def _cache_cleanup_task(self):
+         """Background task to clean up expired cache entries"""
+         while True:
+             try:
+                 current_time = datetime.now()
+                 expired_keys = []
+
+                 for monster_id, timestamp in self.cache_timestamps.items():
+                     if current_time - timestamp > self.cache_timeout:
+                         expired_keys.append(monster_id)
+
+                 for key in expired_keys:
+                     self.monster_cache.pop(key, None)
+                     self.cache_timestamps.pop(key, None)
+
+                 if expired_keys:
+                     self.logger.debug(f"Cleaned up {len(expired_keys)} expired cache entries")
+
+                 # Sleep for 10 minutes before next cleanup
+                 await asyncio.sleep(600)
+
+             except Exception as e:
+                 self.logger.error(f"Cache cleanup task failed: {e}")
+                 await asyncio.sleep(60)  # Shorter sleep on error
+
+     async def _auto_backup_task(self):
+         """Background task for automatic backups"""
+         while True:
+             try:
+                 # Create backup every 6 hours
+                 await asyncio.sleep(21600)  # 6 hours
+
+                 backup_file = await self.create_backup()
+                 if backup_file:
+                     # Clean up old backups (keep last 10)
+                     await self._cleanup_old_backups()
+
+             except Exception as e:
+                 self.logger.error(f"Auto backup task failed: {e}")
+                 await asyncio.sleep(3600)  # Retry in 1 hour on error
+
+     async def _cleanup_old_backups(self, keep_count: int = 10):
+         """Clean up old backup files"""
+         try:
+             backup_files = list(self.backup_dir.glob("monsters_backup_*.gz"))
+             backup_files.sort(key=lambda x: x.stat().st_mtime, reverse=True)
+
+             for old_backup in backup_files[keep_count:]:
+                 old_backup.unlink()
+                 self.logger.debug(f"Removed old backup: {old_backup}")
+
+         except Exception as e:
+             self.logger.error(f"Backup cleanup failed: {e}")
+
+     def get_cache_stats(self) -> Dict[str, Any]:
+         """Get cache performance statistics"""
+         return {
+             "cached_monsters": len(self.monster_cache),
+             "cache_timeout_minutes": self.cache_timeout.total_seconds() / 60,
+             "oldest_cache_entry": min(self.cache_timestamps.values()) if self.cache_timestamps else None,
+             "newest_cache_entry": max(self.cache_timestamps.values()) if self.cache_timestamps else None
+         }
src/utils/__init__.py ADDED
@@ -0,0 +1 @@
+ # Utils module initialization
src/utils/performance_tracker.py ADDED
@@ -0,0 +1,96 @@
+ import logging
+ import time
+ import psutil
+ import torch
+ from typing import Dict, Any, List
+ from datetime import datetime
+ import asyncio
+
+ class PerformanceTracker:
+     def __init__(self):
+         self.logger = logging.getLogger(__name__)
+         self.metrics = {
+             "inference_times": [],
+             "memory_usage": [],
+             "cpu_usage": [],
+             "gpu_usage": [],
+             "total_requests": 0,
+             "successful_requests": 0,
+             "failed_requests": 0
+         }
+         self.start_time = time.time()
+
+     async def initialize(self):
+         """Initialize performance tracking"""
+         self.logger.info("Performance tracker initialized")
+
+         # Start background monitoring
+         asyncio.create_task(self._monitor_resources())
+
+     async def _monitor_resources(self):
+         """Background task to monitor system resources"""
+         while True:
+             try:
+                 # CPU usage (note: interval=1 blocks this coroutine for ~1s per sample)
+                 cpu_percent = psutil.cpu_percent(interval=1)
+                 self.metrics["cpu_usage"].append({
+                     "timestamp": datetime.now().isoformat(),
+                     "value": cpu_percent
+                 })
+
+                 # Memory usage
+                 memory = psutil.virtual_memory()
+                 self.metrics["memory_usage"].append({
+                     "timestamp": datetime.now().isoformat(),
+                     "value": memory.percent
+                 })
+
+                 # GPU usage (if available)
+                 if torch.cuda.is_available():
+                     max_allocated = torch.cuda.max_memory_allocated()
+                     if max_allocated > 0:  # avoid division by zero before any allocation
+                         gpu_memory = torch.cuda.memory_allocated() / max_allocated
+                         self.metrics["gpu_usage"].append({
+                             "timestamp": datetime.now().isoformat(),
+                             "value": gpu_memory * 100
+                         })
+
+                 # Keep only last 100 measurements
+                 for metric in ["cpu_usage", "memory_usage", "gpu_usage"]:
+                     if len(self.metrics[metric]) > 100:
+                         self.metrics[metric] = self.metrics[metric][-100:]
+
+                 await asyncio.sleep(30)  # Monitor every 30 seconds
+
+             except Exception as e:
+                 self.logger.error(f"Resource monitoring failed: {e}")
+                 await asyncio.sleep(60)
+
+     def track_inference(self, duration: float):
+         """Track inference time"""
+         self.metrics["inference_times"].append(duration)
+
+         # Keep only last 100 measurements
+         if len(self.metrics["inference_times"]) > 100:
+             self.metrics["inference_times"] = self.metrics["inference_times"][-100:]
+
+     def track_request(self, success: bool):
+         """Track request outcome"""
+         self.metrics["total_requests"] += 1
+         if success:
+             self.metrics["successful_requests"] += 1
+         else:
+             self.metrics["failed_requests"] += 1
+
+     def get_summary(self) -> Dict[str, Any]:
+         """Get performance summary"""
+         uptime_seconds = time.time() - self.start_time
+
+         avg_inference = sum(self.metrics["inference_times"]) / len(self.metrics["inference_times"]) if self.metrics["inference_times"] else 0
+
+         return {
+             "uptime_hours": uptime_seconds / 3600,
+             "total_requests": self.metrics["total_requests"],
+             "success_rate": self.metrics["successful_requests"] / self.metrics["total_requests"] if self.metrics["total_requests"] > 0 else 0,
+             "average_inference_time": avg_inference,
+             "current_cpu_usage": self.metrics["cpu_usage"][-1]["value"] if self.metrics["cpu_usage"] else 0,
+             "current_memory_usage": self.metrics["memory_usage"][-1]["value"] if self.metrics["memory_usage"] else 0
+         }
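To illustrate the request/inference bookkeeping in `PerformanceTracker`, here is a self-contained stand-in of just the synchronous counters (`track_request`, `track_inference`, `get_summary`), with the psutil/torch resource monitoring omitted. The class name `MiniTracker` and the sample values are invented for the sketch:

```python
import time

class MiniTracker:
    """Stand-in mirroring PerformanceTracker's counter bookkeeping above."""
    def __init__(self):
        self.inference_times = []
        self.total = self.ok = self.failed = 0
        self.start = time.time()

    def track_inference(self, duration):
        self.inference_times.append(duration)
        # keep only the most recent 100 samples, as in the diff
        self.inference_times = self.inference_times[-100:]

    def track_request(self, success):
        self.total += 1
        if success:
            self.ok += 1
        else:
            self.failed += 1

    def get_summary(self):
        avg = sum(self.inference_times) / len(self.inference_times) if self.inference_times else 0
        return {
            "total_requests": self.total,
            "success_rate": self.ok / self.total if self.total else 0,
            "average_inference_time": avg,
        }

t = MiniTracker()
t.track_request(True)
t.track_request(True)
t.track_request(False)
for d in (0.10, 0.20, 0.30):
    t.track_inference(d)
summary = t.get_summary()
print(summary["total_requests"], round(summary["success_rate"], 3), round(summary["average_inference_time"], 3))
# -> 3 0.667 0.2
```

Capping the stored samples with a slice (`[-100:]`) keeps memory bounded, at the cost of copying the list on each call; a `collections.deque(maxlen=100)` would do the same without copies.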