- $K^2$VAE: A Koopman-Kalman Enhanced Variational AutoEncoder for Probabilistic Time Series Forecasting
- $S^2$FGL: Spatial Spectral Federated Graph Learning
- $\mathcal{V}ista\mathcal{DPO}$: Video Hierarchical Spatial-Temporal Direct Preference Optimization for Large Video Models
- $\mathrm{\mu}$nit Scaling: Simple and Scalable FP8 LLM Training
- $\texttt{I}^2$MoE: Interpretable Multimodal Interaction-aware Mixture-of-Experts
- $\infty$-Video: A Training-Free Approach to Long Video Understanding via Continuous-Time Memory Consolidation
- (How) Can Transformers Predict Pseudo-Random Numbers?
- (How) Do Language Models Track State?
- 3D Question Answering via only 2D Vision-Language Models
- 3D-LMVIC: Learning-based Multi-View Image Compression with 3D Gaussian Geometric Priors
- A Bayesian Model Selection Criterion for Selecting Pretraining Checkpoints
- A Bregman Proximal Viewpoint on Neural Operators
- A Causal World Model Underlying Next Token Prediction: Exploring GPT in a Controlled Environment
- A Certified Unlearning Approach without Access to Source Data
- A Chaotic Dynamics Framework Inspired by Dorsal Stream for Event Signal Processing
- A Checks-and-Balances Framework for Context-Aware Ethical AI Alignment
- A Classification View on Meta Learning Bandits
- A Closer Look at Backdoor Attacks on CLIP
- A Closer Look at Generalized BH Algorithm for Out-of-Distribution Detection
- A Closer Look at Multimodal Representation Collapse
- A Closer Look at Transformers for Time Series Forecasting: Understanding Why They Work and Where They Struggle
- A Cognac Shot To Forget Bad Memories: Corrective Unlearning for Graph Neural Networks
- A Comprehensive Framework for Analyzing the Convergence of Adam: Bridging the Gap with SGD
- A Computationally Efficient Algorithm for Infinite-Horizon Average-Reward Linear MDPs
- A Cross Modal Knowledge Distillation & Data Augmentation Recipe for Improving Transcriptomics Representations through Morphological Features
- A Dynamical Systems-Inspired Pruning Strategy for Addressing Oversmoothing in Graph Attention Networks
- A First-order Generative Bilevel Optimization Framework for Diffusion Models
- A Forget-and-Grow Strategy for Deep Reinforcement Learning Scaling in Continuous Control
- A General Framework for Inference-time Scaling and Steering of Diffusion Models
- A General Graph Spectral Wavelet Convolution via Chebyshev Order Decomposition
- A General Representation-Based Approach to Multi-Source Domain Adaptation
- A Generalizable Physics-Enhanced State Space Model for Long-Term Dynamics Forecasting in Complex Environments
- A Generalization Result for Convergence in Learning-to-Optimize
- A Generalization Theory for Zero-Shot Prediction
- A Generic Family of Graphical Models: Diversity, Efficiency, and Heterogeneity
- A Geometric Approach to Personalized Recommendation with Set-Theoretic Constraints Using Box Embeddings
- A Hitchhiker’s Guide to Scaling Law Estimation
- A Large Recurrent Action Model: xLSTM enables Fast Inference for Robotics Tasks
- A Lens into Interpretable Transformer Mistakes via Semantic Dependency
- A Likelihood Based Approach to Distribution Regression Using Conditional Deep Generative Models
- A Machine Learning Approach to Duality in Statistical Physics
- A Manifold Perspective on the Statistical Generalization of Graph Neural Networks
- A Market for Accuracy: Classification Under Competition
- A Mathematical Framework for AI-Human Integration in Work
- A Memory Efficient Randomized Subspace Optimization Method for Training Large Language Models
- A Meta-learner for Heterogeneous Effects in Difference-in-Differences
- A Mixed-Curvature based Pre-training Paradigm for Multi-Task Vehicle Routing Solver
- A Mixture-Based Framework for Guiding Diffusion Models
- A Model of Place Field Reorganization During Reward Maximization
- A Multi-Region Brain Model to Elucidate the Role of Hippocampus in Spatially Embedded Decision-Making