{ "name": "AdaptiveHybridDFTNet", "title": "AdaptiveHybridDFTNet: Frequency-Adaptive Decomposition with Enhanced Orthogonal Trend-Seasonality Framework for Multivariate Time Series Forecasting", "description": "AdaptiveHybridDFTNet introduces an adaptive frequency selection mechanism and enhances the mathematical rigor of trend-seasonality separation for multivariate time series forecasting. It uses an extended DFT-based decomposition with a frequency-adaptive scheme to dynamically optimize seasonal component extraction while ensuring strict independence of trend and seasonal signals. Lightweight CNNs process refined seasonal components for short-term patterns, and a Transformer captures long-term dependencies. This scalable and interpretable architecture addresses key limitations of prior models in decomposition fidelity and component interaction.", "statement": "The novelty of AdaptiveHybridDFTNet lies in its (1) dynamic frequency selection mechanism for Fourier-based decomposition, ensuring dataset-specific seasonal refinement, and (2) orthogonality-constrained trend-seasonality separation, which guarantees that trend and seasonal components are mathematically disentangled. 
These contributions resolve major theoretical and algorithmic issues in prior models by improving decomposition precision and preserving signal independence, enhancing forecasting accuracy and model interpretability.", "method": "### System Architecture\nAdaptiveHybridDFTNet consists of three main modules: (1) **Adaptive Signal Decomposition Module (ASDM)**, improving Fourier-based seasonal component extraction through a frequency-adaptive mechanism and incorporating orthogonality constraints into the decomposition; (2) **Short-Term Temporal Encoder (ST-CNN)**, leveraging lightweight CNNs to process refined seasonal components; and (3) **Long-Term Dependency Module (LT-Transformer)**, designed to capture long-range temporal dependencies from the trend component, with outputs integrated in a unified reconstruction step.\n\n#### Key Enhancements\n1. **Frequency-Adaptive DFT Decomposition**:\n The Fourier decomposition dynamically tunes frequency parameters per dataset by analyzing power spectra, replacing fixed-frequency components. This ensures optimal spectral extraction of seasonal components across diverse datasets, improving robustness over manually selected frequencies.\n\n2. **Orthogonality-Constrained Trend-Seasonality Separation**:\n A novel orthogonality constraint is imposed during trend-seasonality decomposition to ensure the extracted components are linearly independent. This is achieved via least-squares minimization of the cross-correlation between trend and seasonal signals.\n\n### Mathematical Formulation\n\n#### Multi-Step Adaptive Decomposition\nGiven an input multivariate series \\( X \\in \\mathbb{R}^{B \\times L \\times C} \\):\n1. **Trend Extraction via Moving Average**:\n The trend \\( X_T \\) is extracted using boundary-adjusted moving averages, where the normalization always equals the number of samples actually averaged near the series boundary:\n \\[\n X_T^{(i)} = \\frac{1}{\\min(k, L-i+1)} \\sum_{j=i}^{\\min(i+k-1,\\, L)} X^{(j)}\n \\]\n Seasonal residual: \\( X_S = X - X_T \\).\n\n2. 
**Adaptive Frequency Selection**:\n The Fourier frequency \\( d \\) is dynamically selected by maximizing power spectral density:\n \\[\n d^* = \\arg\\max_d \\int_{d-\\Delta}^{d+\\Delta} |\\hat{X}_S(f)|^2 \\, df.\n \\]\n Refined seasonal component:\n \\[\n X_S' = \\text{DFT}_{d^*}(X_S) = \\sum_{n=0}^{L-1} X_S^{(n)} e^{-j 2 \\pi d^* n / L}.\n \\]\n\n3. **Orthogonality Constraint**:\n Trend-seasonality independence is enforced:\n \\[\n \\min \\|\\text{Corr}(X_T, X_S')\\|_2.\n \\]\n\n#### Short-Term and Long-Term Modules\n- **ST-CNN**: Convolutional layers (kernel \\( K \\), stride \\( S \\)) process \\( X_S' \\):\n \\[\n f = \\text{CNN}(X_S') \\quad \\text{with activation \\( \\text{ReLU}(\\cdot) \\)}.\n \\]\n- **LT-Transformer**: Self-attention applied to \\( X_T \\):\n \\[\n A = \\text{Softmax}(Q K^T / \\sqrt{d_k})\\, V,\n \\]\n where \\( Q, K, V \\) are derived from \\( X_T \\) and \\( d_k \\) is the key dimension.\n\n#### Unified Reconstruction\nThe seasonal CNN features \\( f \\) and trend Transformer outputs \\( A \\) are concatenated:\n\\[\n\\hat{X} = \\text{Decoder}([f, A]).\n\\]\n\n### Algorithmic Workflow\n**Input:** Multivariate input \\( X \\), moving average window \\( k \\), frequency search interval \\( \\Delta \\), CNN params (kernel \\( K \\), stride \\( S \\)), Transformer depth, attention heads.\n\n**Output:** Forecast \\( \\hat{X} \\).\n\n1. **Adaptive Decomposition**:\n - Compute trend \\( X_T \\) via boundary-adjusted moving average.\n - Derive seasonal \\( X_S \\) from residuals \\( X - X_T \\).\n - Compute power spectra to identify \\( d^* \\), refine \\( X_S \\) with \\( \\text{DFT}_{d^*} \\).\n - Enforce orthogonality between \\( X_T \\) and \\( X_S' \\).\n\n2. **Short-Term Encoding**:\n - Pass refined seasonal component \\( X_S' \\) through the CNN to compute \\( f \\).\n\n3. **Long-Term Encoding**:\n - Apply self-attention on \\( X_T \\) to obtain attention outputs \\( A \\).\n\n4. **Reconstruction**:\n - Concatenate \\( f \\) and \\( A \\), generate final forecast \\( \\hat{X} \\).\n\n5. 
**Return \\( \\hat{X} \\).**\n\n### Algorithm Pseudocode\n```plaintext\nAlgorithm AdaptiveHybridDFTNet\nInput: X (time series), \\Delta (frequency search interval), k (moving average window), CNN and Transformer configs\nOutput: \\hat{X} (forecast)\n\n1. Function AdaptiveHybridDFTNet\n2. Decompose:\n a. Compute trend: X_T = MovingAvg(X, k, boundary_adjustment=True)\n b. Compute seasonal residual: X_S = X - X_T\n c. Adaptive frequency selection: d^* = argmax_d PowerSpectrum(X_S, d)\n d. Refine seasonal component: X_S' = DFT_{d^*}(X_S)\n e. Enforce orthogonality: minimize ||Corr(X_T, X_S')||\n3. Encode short-term: f = CNN(X_S')\n4. Encode long-term: A = Transformer(X_T)\n5. Merge and decode: \\hat{X} = Decoder([f, A])\n6. Return \\hat{X}\n```\n\n### Implementation Feasibility\n- **Efficiency Upgrades:** Frequency-adaptive decomposition dynamically reduces seasonal noise without increasing asymptotic complexity (FFT cost over \\( C \\) channels: \\( O(BCL\\log{L}) \\)).\n- **Scalability:** Orthogonal decomposition significantly reduces signal overlap across diverse dataset sizes and dimensions.\n- **Reproducible Settings:** Default hyperparameter guidelines include kernel size (e.g., \\( K = 3 \\)), stride (e.g., \\( S = 1 \\)), and attention heads (e.g., 8)." }