# System Architecture Document

This document details the system architecture, design principles, and implementation specifics of the emotion and physiological state change prediction model.

## Table of Contents

1. [System Overview](#system-overview)
2. [Overall Architecture](#overall-architecture)
3. [Model Architecture](#model-architecture)
4. [Data Processing Workflow](#data-processing-workflow)
5. [Training Workflow](#training-workflow)
6. [Inference Workflow](#inference-workflow)
7. [Module Design](#module-design)
8. [Design Patterns](#design-patterns)
9. [Performance Optimization](#performance-optimization)
10. [Extensibility Design](#extensibility-design)

## System Overview

### Design Goals

This system implements an efficient, scalable, and maintainable model for predicting changes in emotional and physiological state. The main design goals are:

1. **High Performance**: Support GPU acceleration and optimize inference speed.
2. **Modularity**: Clear module partitioning for easy maintenance and extension.
3. **Configurability**: Flexible configuration system to support hyperparameter tuning.
4. **Usability**: Comprehensive CLI tools and a Python API.
5. **Extensibility**: Support for new model architectures and loss functions.
6. **Observability**: Complete logging and monitoring system.

### Technology Stack

- **Deep Learning Framework**: PyTorch 1.12+
- **Data Processing**: NumPy, Pandas, scikit-learn
- **Configuration Management**: PyYAML, OmegaConf
- **Visualization**: Matplotlib, Seaborn, Plotly
- **Command Line**: argparse, Click
- **Logging System**: Loguru
- **Experiment Tracking**: MLflow, Weights & Biases
- **Performance Analysis**: py-spy, memory-profiler

## Overall Architecture

### System Architecture Diagram

```
┌─────────────────────────────────────────────────────────────────────────────┐
│                            User Interface Layer                             │
├─────────────────────────────────────────────────────────────────────────────┤
│  CLI Tool │ Python API │ Web API │ Jupyter Notebook                         │
├─────────────────────────────────────────────────────────────────────────────┤
│                            Business Logic Layer                             │
├─────────────────────────────────────────────────────────────────────────────┤
│  Training Manager │ Inference Engine │ Evaluator │ Config Manager │ Log Manager │
├─────────────────────────────────────────────────────────────────────────────┤
│                              Core Model Layer                               │
├─────────────────────────────────────────────────────────────────────────────┤
│  PAD Predictor │ Loss Function │ Evaluation Metrics │ Model Factory │ Optimizer │
├─────────────────────────────────────────────────────────────────────────────┤
│                            Data Processing Layer                            │
├─────────────────────────────────────────────────────────────────────────────┤
│  Data Loader │ Preprocessor │ Data Augmenter │ Synthetic Data Generator     │
├─────────────────────────────────────────────────────────────────────────────┤
│                             Infrastructure Layer                            │
├─────────────────────────────────────────────────────────────────────────────┤
│  File System │ GPU Computing │ Memory Management │ Exception Handling │ Utility Functions │
└─────────────────────────────────────────────────────────────────────────────┘
```

### Module Dependency Relationships

```
CLI Module → Business Logic Layer → Core Model Layer → Data Processing Layer → Infrastructure Layer
        ↓
Config Manager → All Modules
        ↓
Log Manager → All Modules
```

## Model Architecture

### Network Structure

The PAD predictor employs a Multi-Layer Perceptron (MLP) architecture:

```
Input Layer (7 dimensions)
        ↓
Hidden Layer 1 (128 neurons) + ReLU + Dropout(0.3)
        ↓
Hidden Layer 2 (64 neurons) + ReLU + Dropout(0.3)
        ↓
Hidden Layer 3 (32 neurons) + ReLU
        ↓
Output Layer (3 neurons) + Linear Activation
```
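For concreteness, below is a minimal sketch of how this MLP could be assembled in PyTorch. The class name `PADPredictorSketch` and its constructor defaults mirror the configuration in this section; the project's actual implementation lives in `src/models/pad_predictor.py`.

```python
import torch
import torch.nn as nn


class PADPredictorSketch(nn.Module):
    """Illustrative MLP matching the architecture diagram above."""

    def __init__(self, input_dim=7, output_dim=3,
                 hidden_dims=(128, 64, 32), dropout_rate=0.3):
        super().__init__()
        layers = []
        prev_dim = input_dim
        for i, dim in enumerate(hidden_dims):
            layers.append(nn.Linear(prev_dim, dim))
            layers.append(nn.ReLU())
            # Dropout only on the first two hidden layers, per the design above
            if i < 2:
                layers.append(nn.Dropout(dropout_rate))
            prev_dim = dim
        layers.append(nn.Linear(prev_dim, output_dim))  # linear output for regression
        self.net = nn.Sequential(*layers)
        self.apply(self._init_weights)

    @staticmethod
    def _init_weights(module):
        # Xavier uniform weights, zero biases, as described in this section
        if isinstance(module, nn.Linear):
            nn.init.xavier_uniform_(module.weight)
            nn.init.zeros_(module.bias)

    def forward(self, x):
        return self.net(x)
```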
### Detailed Network Components

#### Input Layer

- **Dimensions**: 7-dimensional feature vector
- **Feature Composition**:
  - User PAD: 3 dimensions (Pleasure, Arousal, Dominance), columns 0-2
  - Vitality: 1 dimension (physiological vitality value), column 3
  - Current PAD: 3 dimensions (current emotional state), columns 4-6

#### Hidden Layer Design Principles

1. **Layer-by-Layer Compression**: Gradually reduce the number of neurons: 128 → 64 → 32.
2. **Activation Function**: ReLU, to avoid vanishing gradients.
3. **Regularization**: Dropout in the first two hidden layers to prevent overfitting.
4. **Weight Initialization**: Xavier uniform initialization, suitable for ReLU activations.

#### Output Layer Design

- **Dimensions**: 3-dimensional output vector
- **Output Composition**:
  - ΔPAD: 3 dimensions (change in emotion: ΔPleasure, ΔArousal, ΔDominance)
  - ΔPressure: derived from the PAD changes rather than predicted directly, via
    ΔPressure = 1.0 × (−ΔP) + 0.8 × ΔA + 0.6 × (−ΔD)
- **Activation Function**: Linear activation, suitable for regression tasks.
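Because ΔPressure is a fixed linear combination of the network's ΔPAD output, it can be computed in one step after inference. A minimal sketch (the function and constant names are illustrative):

```python
import numpy as np

# Coefficients from the formula above: pressure rises when pleasure and
# dominance drop and when arousal rises.
PRESSURE_WEIGHTS = np.array([-1.0, 0.8, -0.6])


def compute_delta_pressure(delta_pad: np.ndarray) -> np.ndarray:
    """delta_pad: array of shape (..., 3) holding (ΔP, ΔA, ΔD)."""
    return delta_pad @ PRESSURE_WEIGHTS
```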
### Model Configuration System

```python
# Default architecture configuration
DEFAULT_ARCHITECTURE = {
    'input_dim': 7,
    'output_dim': 3,
    'hidden_dims': [128, 64, 32],
    'dropout_rate': 0.3,
    'activation': 'relu',
    'weight_init': 'xavier_uniform',
    'bias_init': 'zeros'
}

# Configurable parameters
CONFIGURABLE_PARAMS = {
    'hidden_dims': {
        'type': list,
        'default': [128, 64, 32],
        'constraints': [
            lambda x: len(x) >= 1,
            lambda x: all(isinstance(n, int) and n > 0 for n in x),
            lambda x: x == sorted(x, reverse=True)  # non-increasing sequence
        ]
    },
    'dropout_rate': {
        'type': float,
        'default': 0.3,
        'range': [0.0, 0.9]
    },
    'activation': {
        'type': str,
        'default': 'relu',
        'choices': ['relu', 'tanh', 'sigmoid', 'leaky_relu']
    }
}
```

## Data Processing Workflow

### Data Pipeline

```
Raw Data → Data Validation → Feature Extraction → Data Preprocessing → Data Augmentation → Batch Generation
                                                                               ↓
                                                                  Model Training/Inference
```

### Data Preprocessing Workflow

#### 1. Data Validation

```python
from typing import Any, Dict

import numpy as np


class DataValidator:
    """Data validator to ensure data quality."""

    def validate_input_shape(self, data: np.ndarray) -> bool:
        """Validate input data shape (expects 7 feature columns)."""
        return data.shape[1] == 7

    def validate_value_ranges(self, data: np.ndarray) -> Dict[str, bool]:
        """Validate value ranges."""
        # PAD features live in columns 0-2 (user PAD) and 4-6 (current PAD);
        # column 3 holds the vitality value.
        pad = data[:, [0, 1, 2, 4, 5, 6]]
        return {
            'pad_features_valid': bool(np.all(pad >= -1) and np.all(pad <= 1)),
            'vitality_valid': bool(np.all(data[:, 3] >= 0) and np.all(data[:, 3] <= 100))
        }

    def check_missing_values(self, data: np.ndarray) -> Dict[str, Any]:
        """Check for missing values."""
        return {
            'has_missing': bool(np.isnan(data).any()),
            'missing_count': int(np.isnan(data).sum()),
            'missing_ratio': float(np.isnan(data).mean())
        }
```

#### 2. Feature Engineering

```python
import numpy as np


class FeatureEngineer:
    """Feature engineering helpers."""

    def extract_pad_features(self, data: np.ndarray) -> np.ndarray:
        """Extract user and current PAD features (skipping the vitality column)."""
        user_pad = data[:, :3]
        current_pad = data[:, 4:7]
        return np.hstack([user_pad, current_pad])

    def compute_pad_differences(self, data: np.ndarray) -> np.ndarray:
        """Compute user-minus-current PAD differences."""
        user_pad = data[:, :3]
        current_pad = data[:, 4:7]
        return user_pad - current_pad

    def create_interaction_features(self, data: np.ndarray) -> np.ndarray:
        """Append PAD interaction features to the raw feature matrix."""
        user_pad = data[:, :3]
        current_pad = data[:, 4:7]
        # PAD dot product
        pad_interaction = np.sum(user_pad * current_pad, axis=1, keepdims=True)
        # PAD Euclidean distance
        pad_distance = np.linalg.norm(user_pad - current_pad, axis=1, keepdims=True)
        return np.hstack([data, pad_interaction, pad_distance])
```

#### 3. Data Standardization

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler


class DataNormalizer:
    """Data normalizer."""

    def __init__(self, method: str = 'standard'):
        self.method = method
        self.scalers = {}

    def fit_pad_features(self, features: np.ndarray):
        """Fit the PAD feature scaler."""
        if self.method == 'standard':
            self.scalers['pad'] = StandardScaler()
        elif self.method == 'minmax':
            self.scalers['pad'] = MinMaxScaler(feature_range=(-1, 1))
        self.scalers['pad'].fit(features)

    def fit_vitality_feature(self, features: np.ndarray):
        """Fit the vitality feature scaler."""
        if self.method == 'standard':
            self.scalers['vitality'] = StandardScaler()
        elif self.method == 'minmax':
            self.scalers['vitality'] = MinMaxScaler(feature_range=(0, 1))
        self.scalers['vitality'].fit(features.reshape(-1, 1))
```
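The class above shows only the fitting side. A usage sketch that applies the fitted scalers with scikit-learn's standard `transform` call, with column indices following the feature layout described earlier:

```python
import numpy as np

normalizer = DataNormalizer(method='minmax')

# Synthetic 7-column feature matrix: PAD in [-1, 1], vitality in [0, 100]
data = np.random.uniform(-1, 1, size=(100, 7))
data[:, 3] = np.random.uniform(0, 100, size=100)

pad_cols = [0, 1, 2, 4, 5, 6]
normalizer.fit_pad_features(data[:, pad_cols])
normalizer.fit_vitality_feature(data[:, 3])

normalized = data.copy()
normalized[:, pad_cols] = normalizer.scalers['pad'].transform(data[:, pad_cols])
normalized[:, 3] = normalizer.scalers['vitality'].transform(
    data[:, 3].reshape(-1, 1)
).ravel()
```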
### Data Augmentation Strategies

```python
import numpy as np


class DataAugmenter:
    """Data augmenter."""

    def __init__(self, noise_std: float = 0.01, mixup_alpha: float = 0.2):
        self.noise_std = noise_std
        self.mixup_alpha = mixup_alpha

    def add_gaussian_noise(self, features: np.ndarray) -> np.ndarray:
        """Add Gaussian noise."""
        noise = np.random.normal(0, self.noise_std, features.shape)
        return features + noise

    def mixup_augmentation(self, features: np.ndarray, labels: np.ndarray) -> tuple:
        """Mixup data augmentation."""
        batch_size = features.shape[0]
        lam = np.random.beta(self.mixup_alpha, self.mixup_alpha)

        # Randomly shuffle indices
        index = np.random.permutation(batch_size)

        # Mix features and labels
        mixed_features = lam * features + (1 - lam) * features[index]
        mixed_labels = lam * labels + (1 - lam) * labels[index]

        return mixed_features, mixed_labels
```

## Training Workflow

### Training Architecture

```
Config Loading → Data Preparation → Model Initialization → Training Loop → Model Saving → Result Evaluation
```

### Training Manager Design

```python
import logging
from typing import Any, Dict

import torch
from torch.utils.data import DataLoader


class ModelTrainer:
    """Model training manager."""

    def __init__(self, model, preprocessor=None, device='auto'):
        self.model = model
        self.preprocessor = preprocessor
        self.device = self._setup_device(device)
        self.logger = logging.getLogger(__name__)

        # Training state
        self.training_state = {
            'epoch': 0,
            'best_loss': float('inf'),
            'patience_counter': 0,
            'training_history': []
        }

    def setup_training(self, config: Dict[str, Any]):
        """Set up the training environment."""
        # Optimizer setup
        self.optimizer = self._create_optimizer(config['optimizer'])
        # Learning rate scheduler
        self.scheduler = self._create_scheduler(config['scheduler'])
        # Loss function
        self.criterion = self._create_criterion(config['loss'])
        # Early stopping mechanism
        self.early_stopping = self._setup_early_stopping(config['early_stopping'])
        # Checkpoint management
        self.checkpoint_manager = CheckpointManager(config['checkpointing'])

    def train_epoch(self, train_loader: DataLoader) -> Dict[str, float]:
        """Train for one epoch."""
        self.model.train()
        epoch_loss = 0.0
        num_batches = len(train_loader)

        for batch_idx, (features, labels) in enumerate(train_loader):
            features = features.to(self.device)
            labels = labels.to(self.device)

            # Forward pass
            self.optimizer.zero_grad()
            outputs = self.model(features)
            loss = self.criterion(outputs, labels)

            # Backward pass
            loss.backward()

            # Gradient clipping
            torch.nn.utils.clip_grad_norm_(self.model.parameters(), max_norm=1.0)

            # Parameter update
            self.optimizer.step()

            epoch_loss += loss.item()

            # Logging
            if batch_idx % 100 == 0:
                self.logger.debug(f'Batch {batch_idx}/{num_batches}, Loss: {loss.item():.6f}')

        return {'train_loss': epoch_loss / num_batches}

    def validate_epoch(self, val_loader: DataLoader) -> Dict[str, float]:
        """Validate for one epoch."""
        self.model.eval()
        val_loss = 0.0
        num_batches = len(val_loader)

        with torch.no_grad():
            for features, labels in val_loader:
                features = features.to(self.device)
                labels = labels.to(self.device)

                outputs = self.model(features)
                loss = self.criterion(outputs, labels)
                val_loss += loss.item()

        return {'val_loss': val_loss / num_batches}
```
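A sketch of the outer loop that ties these methods together. The `trainer`, data loaders, and epoch budget are assumed to come from `setup_training` and the data pipeline; `scheduler.step` is shown in its ReduceLROnPlateau form:

```python
for epoch in range(max_epochs):
    train_metrics = trainer.train_epoch(train_loader)
    val_metrics = trainer.validate_epoch(val_loader)

    # Step the scheduler on validation loss (ReduceLROnPlateau-style)
    trainer.scheduler.step(val_metrics['val_loss'])

    trainer.training_state['epoch'] = epoch
    trainer.training_state['training_history'].append({**train_metrics, **val_metrics})

    # Stop once validation loss has plateaued (see EarlyStopping below)
    if trainer.early_stopping(val_metrics['val_loss']):
        break
```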
### Training Strategies

#### 1. Learning Rate Scheduling

```python
import math

import torch


class LearningRateScheduler:
    """Learning rate scheduling strategies."""

    @staticmethod
    def cosine_annealing_scheduler(optimizer, T_max, eta_min=1e-6):
        """Cosine annealing scheduler."""
        return torch.optim.lr_scheduler.CosineAnnealingLR(
            optimizer, T_max=T_max, eta_min=eta_min
        )

    @staticmethod
    def reduce_on_plateau_scheduler(optimizer, patience=5, factor=0.5):
        """ReduceLROnPlateau scheduler."""
        return torch.optim.lr_scheduler.ReduceLROnPlateau(
            optimizer, mode='min', patience=patience, factor=factor
        )

    @staticmethod
    def warmup_cosine_scheduler(optimizer, warmup_epochs, total_epochs):
        """Linear warmup followed by cosine decay."""
        def lr_lambda(epoch):
            if epoch < warmup_epochs:
                return epoch / warmup_epochs
            progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
            return 0.5 * (1 + math.cos(math.pi * progress))
        return torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```

#### 2. Early Stopping Mechanism

```python
class EarlyStopping:
    """Early stopping mechanism."""

    def __init__(self, patience=10, min_delta=1e-4, mode='min'):
        self.patience = patience
        self.min_delta = min_delta
        self.mode = mode
        self.counter = 0
        self.best_score = None

        if mode == 'min':
            self.is_better = lambda x, y: x < y - min_delta
        else:
            self.is_better = lambda x, y: x > y + min_delta

    def __call__(self, score):
        """Return True when training should stop."""
        if self.best_score is None:
            self.best_score = score
            return False

        if self.is_better(score, self.best_score):
            self.best_score = score
            self.counter = 0
            return False

        self.counter += 1
        return self.counter >= self.patience
```

## Inference Workflow

### Inference Architecture

```
Model Loading → Input Validation → Data Preprocessing → Model Inference → Result Post-processing → Output Formatting
```

### Inference Engine Design

```python
import logging
from typing import Any, Dict, List, Union

import numpy as np
import torch


class InferenceEngine:
    """High-performance inference engine."""

    def __init__(self, model, preprocessor=None, device='auto'):
        self.model = model
        self.preprocessor = preprocessor
        self.device = self._setup_device(device)
        self.logger = logging.getLogger(__name__)
        self.model.to(self.device)
        self.model.eval()

        # Performance optimization
        self._optimize_model()

        # Warm-up
        self._warmup_model()

    def _optimize_model(self):
        """Optimize model performance."""
        # TorchScript optimization
        try:
            self.model = torch.jit.script(self.model)
            self.logger.info("Model optimized to TorchScript format")
        except Exception as e:
            self.logger.warning(f"TorchScript optimization failed: {e}")

    def _warmup_model(self, num_warmup=5):
        """Warm up the model."""
        dummy_input = torch.randn(1, 7).to(self.device)
        with torch.no_grad():
            for _ in range(num_warmup):
                _ = self.model(dummy_input)
        self.logger.info(f"Model warm-up completed, warm-up runs: {num_warmup}")

    def predict_single(self, input_data: Union[List, np.ndarray]) -> Dict[str, Any]:
        """Single sample inference."""
        # Input validation
        validated_input = self._validate_input(input_data)

        # Data preprocessing
        processed_input = self._preprocess_input(validated_input)

        # Model inference (mixed precision on CUDA)
        with torch.no_grad():
            if self.device.type == 'cuda':
                with torch.cuda.amp.autocast():
                    output = self.model(processed_input)
            else:
                output = self.model(processed_input)

        # Result post-processing
        return self._postprocess_output(output)

    def predict_batch(self, input_batch: Union[List, np.ndarray]) -> List[Dict[str, Any]]:
        """Batch inference."""
        # Input validation and preprocessing
        validated_batch = self._validate_batch(input_batch)
        processed_batch = self._preprocess_batch(validated_batch)

        # Batch inference in chunks
        batch_size = min(32, len(processed_batch))
        results = []

        for i in range(0, len(processed_batch), batch_size):
            batch_input = processed_batch[i:i + batch_size]
            with torch.no_grad():
                if self.device.type == 'cuda':
                    with torch.cuda.amp.autocast():
                        batch_output = self.model(batch_input)
                else:
                    batch_output = self.model(batch_input)

            # Post-processing
            results.extend(self._postprocess_batch(batch_output))

        return results
```
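A usage sketch (the engine's post-processed output format is project-specific; the keys shown are an assumption):

```python
import numpy as np

# 7-dimensional input: user PAD (3) + vitality (1) + current PAD (3)
sample = np.array([0.2, -0.1, 0.4, 72.0, -0.3, 0.1, 0.0])

engine = InferenceEngine(model)
result = engine.predict_single(sample)
# Assumed result shape after post-processing, e.g.:
# {'delta_pad': [dP, dA, dD], 'delta_pressure': ...}
```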
### Performance Optimization Strategies

#### 1. Memory Optimization

```python
import torch


class MemoryOptimizer:
    """Memory optimizer."""

    @staticmethod
    def optimize_memory_usage():
        """Optimize GPU memory usage."""
        if torch.cuda.is_available():
            # Clear GPU cache
            torch.cuda.empty_cache()
            # Cap this process at 90% of GPU memory
            torch.cuda.set_per_process_memory_fraction(0.9)

    @staticmethod
    def monitor_memory_usage():
        """Monitor GPU memory usage (in GB)."""
        if torch.cuda.is_available():
            allocated = torch.cuda.memory_allocated() / 1024**3
            cached = torch.cuda.memory_reserved() / 1024**3
            return {'allocated': allocated, 'cached': cached}
        return {'allocated': 0, 'cached': 0}
```

#### 2. Computation Optimization

```python
import torch
from torch.utils.data import DataLoader


class ComputeOptimizer:
    """Computation optimizer."""

    @staticmethod
    def enable_tf32():
        """Enable TF32 acceleration (Ampere-architecture GPUs)."""
        if torch.cuda.is_available():
            torch.backends.cuda.matmul.allow_tf32 = True
            torch.backends.cudnn.allow_tf32 = True

    @staticmethod
    def optimize_dataloader(dataloader, shuffle=True, num_workers=4, pin_memory=True):
        """Rebuild a data loader with performance-oriented settings."""
        # DataLoader does not expose a `shuffle` attribute, so it is passed in
        return DataLoader(
            dataloader.dataset,
            batch_size=dataloader.batch_size,
            shuffle=shuffle,
            num_workers=num_workers,
            pin_memory=pin_memory and torch.cuda.is_available(),
            persistent_workers=num_workers > 0
        )
```

## Module Design

### Core Modules

#### 1. Model Module (`src/models/`)

```
src/models/
├── __init__.py
├── pad_predictor.py     # Core predictor
├── loss_functions.py    # Loss functions
├── metrics.py           # Evaluation metrics
├── model_factory.py     # Model factory
└── base_model.py        # Base model class
```

**Design Principles**:

- Single Responsibility: each class is responsible for one specific function.
- Open/Closed Principle: open for extension, closed for modification.
- Dependency Inversion: depend on abstractions, not concretions.

#### 2. Data Module (`src/data/`)

```
src/data/
├── __init__.py
├── dataset.py              # Dataset class
├── data_loader.py          # Data loader
├── preprocessor.py         # Data preprocessor
├── synthetic_generator.py  # Synthetic data generator
└── data_validator.py       # Data validator
```

**Design Patterns**:

- Strategy Pattern: interchangeable data preprocessing strategies.
- Factory Pattern: data generator factory.
- Observer Pattern: data quality monitoring.

#### 3. Utility Module (`src/utils/`)

```
src/utils/
├── __init__.py
├── inference_engine.py  # Inference engine
├── trainer.py           # Trainer
├── logger.py            # Logging utility
├── config.py            # Configuration management
└── exceptions.py        # Custom exceptions
```

**Features**:

- High-performance inference engine
- Flexible training management
- Structured logging system
- Unified configuration management

## Design Patterns

### 1. Factory Pattern

```python
from typing import Any, Dict


class ModelFactory:
    """Model factory class."""

    _models = {
        'pad_predictor': PADPredictor,
        'advanced_predictor': AdvancedPADPredictor,
        'ensemble_predictor': EnsemblePredictor
    }

    @classmethod
    def create_model(cls, model_type: str, config: Dict[str, Any]):
        """Create a model instance."""
        if model_type not in cls._models:
            raise ValueError(f"Unsupported model type: {model_type}")
        model_class = cls._models[model_type]
        return model_class(**config)

    @classmethod
    def register_model(cls, name: str, model_class):
        """Register a new model type."""
        cls._models[name] = model_class
```
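A usage sketch (the configuration values and the custom subclass are illustrative):

```python
# Create a built-in model from a configuration dictionary
model = ModelFactory.create_model('pad_predictor', {
    'input_dim': 7,
    'output_dim': 3,
    'hidden_dims': [128, 64, 32],
    'dropout_rate': 0.3
})


# Extend the factory without modifying it (Open/Closed Principle)
class CustomPADPredictor(PADPredictor):
    """Hypothetical subclass with project-specific tweaks."""


ModelFactory.register_model('custom_pad', CustomPADPredictor)
```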
### 2. Strategy Pattern

```python
from abc import ABC, abstractmethod

import torch
import torch.nn.functional as F


class LossStrategy(ABC):
    """Abstract base class for loss strategies."""

    @abstractmethod
    def compute_loss(self, predictions, targets):
        pass


class WeightedMSELoss(LossStrategy):
    """Weighted mean squared error loss."""

    def __init__(self, weights):
        self.weights = weights  # per-dimension weight tensor, e.g. for P, A, D

    def compute_loss(self, predictions, targets):
        return torch.mean(self.weights * (predictions - targets) ** 2)


class HuberLoss(LossStrategy):
    """Huber loss."""

    def __init__(self, delta=1.0):
        self.delta = delta

    def compute_loss(self, predictions, targets):
        return F.huber_loss(predictions, targets, delta=self.delta)


class LossContext:
    """Loss context that delegates to the current strategy."""

    def __init__(self, strategy: LossStrategy):
        self._strategy = strategy

    def set_strategy(self, strategy: LossStrategy):
        self._strategy = strategy

    def compute_loss(self, predictions, targets):
        return self._strategy.compute_loss(predictions, targets)
```

### 3. Observer Pattern

```python
import logging
from abc import ABC, abstractmethod


class TrainingObserver(ABC):
    """Abstract base class for training observers."""

    @abstractmethod
    def on_epoch_start(self, epoch, metrics):
        pass

    @abstractmethod
    def on_epoch_end(self, epoch, metrics):
        pass


class LoggingObserver(TrainingObserver):
    """Logs metrics at the end of each epoch."""

    def __init__(self):
        self.logger = logging.getLogger(__name__)

    def on_epoch_start(self, epoch, metrics):
        pass

    def on_epoch_end(self, epoch, metrics):
        self.logger.info(f"Epoch {epoch}: {metrics}")


class CheckpointObserver(TrainingObserver):
    """Saves checkpoints when metrics improve."""

    def on_epoch_start(self, epoch, metrics):
        pass

    def on_epoch_end(self, epoch, metrics):
        if self.should_save_checkpoint(metrics):
            self.save_checkpoint(epoch, metrics)


class TrainingSubject:
    """Training subject that notifies registered observers."""

    def __init__(self):
        self._observers = []

    def attach(self, observer: TrainingObserver):
        self._observers.append(observer)

    def detach(self, observer: TrainingObserver):
        self._observers.remove(observer)

    def notify_epoch_end(self, epoch, metrics):
        for observer in self._observers:
            observer.on_epoch_end(epoch, metrics)
```

### 4. Builder Pattern

```python
class ModelBuilder:
    """Fluent builder for PAD predictor models."""

    def __init__(self):
        self.input_dim = 7
        self.output_dim = 3
        self.hidden_dims = [128, 64, 32]
        self.dropout_rate = 0.3
        self.activation = 'relu'

    def with_dimensions(self, input_dim, output_dim):
        self.input_dim = input_dim
        self.output_dim = output_dim
        return self

    def with_hidden_layers(self, hidden_dims):
        self.hidden_dims = hidden_dims
        return self

    def with_dropout(self, dropout_rate):
        self.dropout_rate = dropout_rate
        return self

    def with_activation(self, activation):
        self.activation = activation
        return self

    def build(self):
        return PADPredictor(
            input_dim=self.input_dim,
            output_dim=self.output_dim,
            hidden_dims=self.hidden_dims,
            dropout_rate=self.dropout_rate
        )


# Example usage
model = (ModelBuilder()
         .with_dimensions(7, 3)
         .with_hidden_layers([256, 128, 64])
         .with_dropout(0.3)
         .build())
```
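A sketch of how a training loop might drive the observers above (the loop and stand-in metrics are illustrative):

```python
subject = TrainingSubject()
subject.attach(LoggingObserver())

for epoch in range(10):
    metrics = {'train_loss': 0.1 / (epoch + 1)}  # stand-in for real metrics
    subject.notify_epoch_end(epoch, metrics)
```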
## Performance Optimization

### 1. Model Optimization

#### Quantization

```python
import torch
import torch.nn as nn


class ModelQuantizer:
    """Model quantizer."""

    @staticmethod
    def quantize_model(model):
        """Post-training dynamic quantization (no calibration data required)."""
        model.eval()
        quantized_model = torch.quantization.quantize_dynamic(
            model, {nn.Linear}, dtype=torch.qint8
        )
        return quantized_model

    @staticmethod
    def quantize_aware_training(model, train_loader, num_epochs=10):
        """Quantization-aware training."""
        model.train()  # QAT requires training mode
        model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
        torch.quantization.prepare_qat(model, inplace=True)

        # Quantization-aware training loop
        for epoch in range(num_epochs):
            for batch in train_loader:
                # Training steps
                pass

        # Convert to a quantized model
        quantized_model = torch.quantization.convert(model.eval(), inplace=False)
        return quantized_model
```

#### Model Pruning

```python
import torch.nn as nn
import torch.nn.utils.prune as prune


class ModelPruner:
    """Model pruner."""

    @staticmethod
    def prune_model(model, pruning_ratio=0.2):
        """L1 unstructured pruning of all linear layers."""
        for name, module in model.named_modules():
            if isinstance(module, nn.Linear):
                prune.l1_unstructured(module, name='weight', amount=pruning_ratio)
        return model

    @staticmethod
    def remove_pruning(model):
        """Make pruning permanent by removing the reparameterization."""
        for name, module in model.named_modules():
            if isinstance(module, nn.Linear):
                prune.remove(module, 'weight')
        return model
```

### 2. Inference Optimization

#### Batch Inference Optimization

```python
import time

import torch


class BatchInferenceOptimizer:
    """Batch inference optimizer."""

    def __init__(self, model, device):
        self.model = model
        self.device = device
        self.optimal_batch_size = self._find_optimal_batch_size()

    def _find_optimal_batch_size(self):
        """Find the batch size with the highest measured throughput."""
        batch_sizes = [1, 2, 4, 8, 16, 32, 64, 128]
        best_batch_size = 1
        best_throughput = 0
        dummy_input = torch.randn(1, 7).to(self.device)

        for batch_size in batch_sizes:
            try:
                batch_input = dummy_input.repeat(batch_size, 1)

                start_time = time.time()
                with torch.no_grad():
                    for _ in range(10):
                        _ = self.model(batch_input)
                end_time = time.time()

                throughput = (batch_size * 10) / (end_time - start_time)
                if throughput > best_throughput:
                    best_throughput = throughput
                    best_batch_size = batch_size
            except RuntimeError:
                break  # Out of memory

        return best_batch_size
```

## Extensibility Design

### 1. Plugin System

```python
from abc import ABC, abstractmethod
from collections import defaultdict


class PluginManager:
    """Plugin manager."""

    def __init__(self):
        self.plugins = {}
        self.hooks = defaultdict(list)

    def register_plugin(self, name: str, plugin):
        """Register a plugin and its hooks."""
        self.plugins[name] = plugin
        if hasattr(plugin, 'get_hooks'):
            for hook_name, hook_func in plugin.get_hooks().items():
                self.hooks[hook_name].append(hook_func)

    def execute_hooks(self, hook_name: str, *args, **kwargs):
        """Execute all functions registered for a hook."""
        for hook_func in self.hooks[hook_name]:
            hook_func(*args, **kwargs)


class PluginBase(ABC):
    """Base class for plugins."""

    @abstractmethod
    def initialize(self, config):
        pass

    @abstractmethod
    def cleanup(self):
        pass

    def get_hooks(self):
        return {}
```
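A usage sketch (the plugin class and hook name are illustrative, not part of the shipped plugin set):

```python
class MetricsPlugin(PluginBase):
    """Hypothetical plugin that reports metrics at the end of each epoch."""

    def initialize(self, config):
        self.prefix = config.get('prefix', 'metrics')

    def cleanup(self):
        pass

    def get_hooks(self):
        return {'on_epoch_end': self.report}

    def report(self, epoch, metrics):
        print(f"[{self.prefix}] epoch {epoch}: {metrics}")


manager = PluginManager()
plugin = MetricsPlugin()
plugin.initialize({'prefix': 'train'})
manager.register_plugin('metrics', plugin)
manager.execute_hooks('on_epoch_end', 3, {'val_loss': 0.042})
```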
### 2. Configuration Extension

```python
from typing import Any, Callable, Dict


class ConfigManager:
    """Configuration manager."""

    def __init__(self):
        self.config_schemas = {}
        self.config_validators = {}

    def register_config_schema(self, name: str, schema: Dict):
        """Register a configuration schema."""
        self.config_schemas[name] = schema

    def register_validator(self, name: str, validator: Callable):
        """Register a configuration validator."""
        self.config_validators[name] = validator

    def validate_config(self, config: Dict[str, Any]) -> bool:
        """Validate a configuration against all registered validators."""
        for name, validator in self.config_validators.items():
            if name in config and not validator(config[name]):
                raise ValueError(f"Configuration validation failed: {name}")
        return True
```

### 3. Model Registration System

```python
from typing import Dict

import torch.nn as nn


class ModelRegistry:
    """Model registration system."""

    _models = {}
    _model_metadata = {}

    @classmethod
    def register(cls, name: str, metadata: Dict = None):
        """Model registration decorator."""
        def decorator(model_class):
            cls._models[name] = model_class
            cls._model_metadata[name] = metadata or {}
            return model_class
        return decorator

    @classmethod
    def create_model(cls, name: str, **kwargs):
        """Create a registered model."""
        if name not in cls._models:
            raise ValueError(f"Unregistered model: {name}")
        model_class = cls._models[name]
        return model_class(**kwargs)

    @classmethod
    def list_models(cls):
        """List all registered model names."""
        return list(cls._models.keys())


# Example usage
@ModelRegistry.register("advanced_pad", {"description": "Advanced PAD Predictor", "version": "2.0"})
class AdvancedPADPredictor(nn.Module):
    def __init__(self, **kwargs):
        super().__init__()
        # Model implementation
        pass
```

---

This architecture document describes the overall design and implementation details of the system. As the project evolves, the architecture will continue to be optimized and extended. For suggestions or questions, please provide feedback via GitHub Issues.