diff --git "a/2603/2603.06922.md" "b/2603/2603.06922.md" new file mode 100644--- /dev/null +++ "b/2603/2603.06922.md" @@ -0,0 +1,890 @@ +Title: NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks + +URL Source: https://arxiv.org/html/2603.06922 + +Markdown Content: +###### Abstract + +We introduce NerVE, a unified eigenspectral framework for understanding how feed-forward networks (FFNs) in large language models (LLMs) organize and regulate information flow in high-dimensional latent space. Despite FFNs dominating the parameter budget, their high-dimensional dynamics remain poorly understood. NerVE addresses this gap through lightweight, memory-efficient tracking of eigenspectrum dynamics via four complementary metrics: Spectral Entropy (dispersion), Participation Ratio (effective dimensionality), Eigenvalue Early Enrichment (top-heaviness), and Jensen-Shannon divergence (distributional shifts). Our key insight is that FFN nonlinearities reinject variance across eigenmodes, fundamentally governing latent dimension utilization, and that optimizer geometry strongly modulates the extent of this variance reinjection. We validate NerVE across model scales, and diverse architectural and optimizer configurations, each uniquely shaping FFN dynamics: normalization schemes controlling variance flow; FFN weight geometries constraining latent space; positional encoding and activation functions regulating information flow; and optimizer choices redistributing effective capacity across depth. Across these settings, NerVE consistently recovers stable spectral signatures that correlate with model’s generalization ability and respond predictably to design choices, generalizing beyond transformer to MLP-Mixer architectures, providing actionable insights for architectural and optimizer choices beyond trial-and-error. + +1 Introduction +-------------- + +Large Language Models (LLMs) have demonstrated remarkable capabilities across a wide range of natural language tasks, driven in part by advances in transformer-based architectures. While much emphasis has been devoted to understanding attention mechanisms and token-wise interactions, the role of feed-forward networks (FFNs), particularly their nonlinear components, remains underexplored, despite FFNs dominating both the parameter budget and computational footprint of transformer-based models (Geva et al., [2021](https://arxiv.org/html/2603.06922#bib.bib373 "Transformer feed-forward layers are key-value memories"); de Vries, [2023](https://arxiv.org/html/2603.06922#bib.bib386 "In the long (context) run: it’s not the quadratic attention; it’s the lack of long pre-training data")). + +Despite their apparent simplicity, FFNs perform high-dimensional nonlinear transformations that regulate information flow by reorganizing, compressing, and propagating the information extracted by attention modules across layers. Understanding how these transformations evolve and interact with architectural design choices remains a fundamental open question. + +One challenge in interpreting FFNs is the absence of systematic and efficient tools for characterizing how latent representations are structured and transformed by nonlinear activations. FFN transformations unfold in a high-dimensional latent space which is far less accessible for direct visualization and probing compared to multi-head attention. Prior work, Kobayashi et al. 
([2024](https://arxiv.org/html/2603.06922#bib.bib287)) used attention maps to study the input-contextualization effect of FFNs, and Balestriero et al. ([2024](https://arxiv.org/html/2603.06922#bib.bib432)) characterized FFN geometry through piecewise-affine spline partitions. Neither lens reveals how nonlinearity redistributes variance, nor captures the rich spectral structure inherent in these transformations.

![Image 1: Refer to caption](https://arxiv.org/html/2603.06922v1/x1.png)

Figure 1: NerVE quantifies nonlinear eigenspectrum dynamics in the FFNs of GPT-2. The FFN nonlinearity (GELU) regulates information flow by reinjecting variance, reactivating under-utilized directions (post-activation SE ↑ and PR ↑), and flattening the eigenspectrum so it is less top-heavy (post-activation EEE ↓). The JS heatmap shows a depth-localized transition band where redistribution is strongest.

To this end, we introduce NerVE, a unified, online, and memory-efficient framework for analyzing FFN latent geometry through eigenspectrum analysis. NerVE summarizes pre- and post-activation spectra using four scale-invariant, distribution-aware metrics: spectral entropy (dispersion vs. uniformity), participation ratio (effective latent dimensionality), eigenvalue early enrichment (top-heaviness), and Jensen-Shannon divergence (distributional shift).

From a methodological standpoint, these metrics span a broad theoretical range and expose complementary facets of the eigenspectrum that any single scalar would obscure, thereby enabling continuous tracking of latent geometric dynamics. Spectral Entropy (SE) captures the uniformity of the variance distribution (De Domenico and Biamonte, [2016](https://arxiv.org/html/2603.06922#bib.bib427)), and Participation Ratio (PR) reflects the geometric notion of effective dimensionality, indicating how many directions meaningfully contribute to the total variance (Gao et al., [2017](https://arxiv.org/html/2603.06922#bib.bib425)). Unlike SE and PR, Eigenvalue Early Enrichment (EEE), which quantifies top-heaviness, can distinguish eigenspectra that utilize different fractions of the latent space (Marbut et al., [2023](https://arxiv.org/html/2603.06922#bib.bib414)). Finally, the Jensen-Shannon (JS) divergence provides an information-theoretic distance measure between two eigenspectra (Lin, [1991](https://arxiv.org/html/2603.06922#bib.bib419)), quantifying distributional shifts of variance.
We apply this framework across a diverse range of architectural settings, including LayerNorm placements (PreLN, PostLN, and MixLN (Li et al., [2025](https://arxiv.org/html/2603.06922#bib.bib398))); normalization-free variants (Jha and Reagen, [2024](https://arxiv.org/html/2603.06922#bib.bib359)); FFN weight-geometry constraints (Miyato et al., [2018](https://arxiv.org/html/2603.06922#bib.bib304); Salimans and Kingma, [2016](https://arxiv.org/html/2603.06922#bib.bib303)) and hyperspherical constraints (Liu et al., [2017](https://arxiv.org/html/2603.06922#bib.bib406)); positional encoding schemes (Su et al., [2024](https://arxiv.org/html/2603.06922#bib.bib289)); and optimizer choices, including Adam (Kingma, [2015](https://arxiv.org/html/2603.06922#bib.bib437); Loshchilov and Hutter, [2019](https://arxiv.org/html/2603.06922#bib.bib438)), Muon (Jordan et al., [2024](https://arxiv.org/html/2603.06922#bib.bib442)), Dion (Ahn et al., [2025](https://arxiv.org/html/2603.06922#bib.bib449)), Adafactor (Shazeer and Stern, [2018](https://arxiv.org/html/2603.06922#bib.bib440)), and SGD; see Table [1](https://arxiv.org/html/2603.06922#S1.T1).

Across settings, a clear pattern emerges: FFN nonlinearities do not merely rescale activations; they actively reinject variance across eigenmodes and reawaken inactive directions in the high-dimensional latent space. As shown in Figure [1](https://arxiv.org/html/2603.06922#S1.F1), the post-activation spectra in GPT-2 consistently show increases in SE and PR and decreases in EEE, while JS heatmaps reveal depth-localized transition bands where redistribution is strongest. This highlights the active role of FFN nonlinearities in regulating the information flow and latent geometry that downstream layers further exploit.

Contributions: Our contributions can be summarized as follows:

1. Conceptual. We demonstrate that FFN nonlinearities do not simply rescale activations but actively reorganize eigenspectra, reinjecting variance into under-utilized directions. Moreover, optimizer geometry modulates the extent of variance reinjection, altering the role of FFN nonlinearity from repair (recovering from spectral collapse) to refinement (stabilizing a well-conditioned spectrum).

2. Framework. We introduce NerVE, a lightweight and memory-efficient methodology for online tracking of FFN eigenspectrum dynamics, using four distribution-aware, scale-invariant metrics.

3. Diagnostic.
We show that architectural (normalization layers, activation functions, gating, weight geometry, positional encodings) and optimizer (AdamW, Muon, Dion, Adafactor, SGD) choices imprint distinct spectral signatures on FFNs, which can be used to diagnose model behavior.

4. Empirical. We validate NerVE on GPT-2 and LLaMA models (71M to 1.3B) trained from scratch on the CodeParrot, OpenWebText, FineWeb, and C4 datasets; extend it to the non-transformer MLP-Mixer (B/16) on CIFAR-100, confirming cross-architecture generality; and perform extensive robustness studies across normalization variants, optimizer families, and token positions.

Table 1: Summary of key NerVE findings per experimental axis.

2 NerVE: A Principled Framework for Eigenspectrum Analysis
----------------------------------------------------------

Notations. Let $L$ be the number of layers, $d$ the embedding dimension, $D$ the FFN hidden dimension, $B$ the batch size, $S$ the context length, and $\Sigma$ the FFN (pre-/post-activation) covariance matrix.

### 2.1 Formulation of the Eigenspectrum-Based Framework

To understand how information is structured and propagated through the FFN latent space, we analyze: (1) the variance distribution and its impact on effective dimensionality; (2) how the nonlinearity within a layer reshapes this distribution; and (3) how these patterns evolve across layers and training. The NerVE framework (Figure [10](https://arxiv.org/html/2603.06922#A1.F10) in Appendix [A.1](https://arxiv.org/html/2603.06922#A1.SS1)) consists of four main components: (i) activation collection, (ii) covariance matrix computation, (iii) eigendecomposition, and (iv) spectral metrics calculation.

Activation collection. For an FFN with the (non-gating) architecture $\text{FFN}(x)=W_{\text{down}}\,\sigma(W_{\text{up}}x+b_{1})+b_{2}$, where $\sigma$ is the activation function (e.g., ReLU, GELU), we collect $\text{PreAct}(X)=W_{\text{up}}x+b_{1}$ and $\text{PostAct}(X)=\sigma(W_{\text{up}}x+b_{1})$, i.e., the output of the up projection and the input to the down projection (after the activation function), respectively. For activations with gating mechanisms (e.g., SwiGLU in LLaMA), the architecture becomes $\text{FFN}(x)=W_{\text{down}}\bigl(\sigma(W_{\text{gate}}x)\odot(W_{\text{up}}x)\bigr)$, where $\odot$ denotes element-wise multiplication, and we collect $\text{PreAct}(X)=W_{\text{gate}}x$ and $\text{PostAct}(X)=\sigma(W_{\text{gate}}x)\odot(W_{\text{up}}x)$.

Covariance matrix computation. At logging step $t$, for each layer $l$, we collect the full activation matrices $\text{PreAct}(X^{(l,t)})\in\mathbb{R}^{N\times D}$ and $\text{PostAct}(X^{(l,t)})\in\mathbb{R}^{N\times D}$, where $N=B\times S$ is the total number of tokens in the batch. These tensors, originally shaped $[B,S,D]$, are flattened to $[B\times S,D]$, intentionally discarding sequence order. This allows us to compute an unbiased covariance matrix over all tokens in the batch, treating each token as an independent sample in the FFN latent space.
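To make the collection step concrete, the sketch below shows one way to capture these tensors with PyTorch forward hooks. It is a minimal sketch under assumed module names: the handles `up_proj` and `act_fn` and the dictionary keys are illustrative, since FFN attribute names vary across implementations (e.g., `mlp.c_fc`/`mlp.act` in GPT-2-style code).

```python
import torch

# Minimal sketch (assumed module names, not the paper's released code):
# capture pre-/post-activation FFN tensors with forward hooks and flatten
# [B, S, D] into [B*S, D], discarding sequence order as described above.
captured = {}

def make_hook(key):
    def hook(module, inputs, output):
        captured[key] = output.detach().reshape(-1, output.shape[-1])
    return hook

def attach_ffn_hooks(up_proj: torch.nn.Module, act_fn: torch.nn.Module, layer_idx: int):
    # up_proj emits W_up x + b1 (PreAct); act_fn emits sigma(W_up x + b1) (PostAct)
    return [
        up_proj.register_forward_hook(make_hook(f"pre_{layer_idx}")),
        act_fn.register_forward_hook(make_hook(f"post_{layer_idx}")),
    ]  # call .remove() on each handle once the logging step is done
```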
Computing the covariance with all $N$ tokens in a batch, with no sub-sampling, ensures exact second-order statistics rather than statistical approximations; the spectral analysis therefore captures the true statistical properties of the distribution. For each set of activations, we compute the covariance matrix as:

$$\Sigma=\frac{(X-\mu)^{T}(X-\mu)}{N-1}\;\in\;\mathbb{R}^{D\times D},\quad\text{where } X\in\mathbb{R}^{N\times D}\text{ are the activations and }\mu=\frac{1}{N}\sum_{i=1}^{N}X_{i}\tag{1}$$

This yields two covariance matrices per FFN layer: $\Sigma_{\text{PreAct}}^{(l,t)}(X)$ and $\Sigma_{\text{PostAct}}^{(l,t)}(X)$.

Eigendecomposition. For each covariance matrix, we perform an eigendecomposition ($\Sigma v=\lambda v$) and sort the eigenvalues in descending order: $\lambda_{1}\geq\lambda_{2}\geq\ldots\geq\lambda_{D}\geq 0$. We define the total variance $\Lambda=\sum_{i=1}^{D}\lambda_{i}$ and normalize the eigenvalues to obtain a probability distribution, $\hat{\lambda}_{i}=\lambda_{i}/\Lambda$.

Spectral metrics computation. Next, we compute four scalar metrics from the eigenspectra of $\Sigma_{\text{PreAct}}^{(l,t)}(X)$ and $\Sigma_{\text{PostAct}}^{(l,t)}(X)$, which quantify distinct aspects of the eigenspectral dynamics: Spectral Entropy, Participation Ratio, Eigenvalue Early Enrichment, and Jensen-Shannon divergence.

### 2.2 Eigenspectrum Metrics for Analyzing High-Dimensional Latent Space

Spectral Entropy (SE). Spectral Entropy quantifies the uniformity of the eigenvalue distribution in high-dimensional latent spaces. Formally, it is the Shannon entropy of the normalized eigenvalue distribution derived from a layer's covariance matrix: $\text{SE}=-\sum_{i=1}^{D}\hat{\lambda}_{i}\log\hat{\lambda}_{i}$.

Mathematically, spectral entropy is equivalent to the von Neumann entropy (vNE) in quantum information theory, which quantifies the degree of quantum entanglement or mixedness of a quantum state (De Domenico and Biamonte, [2016](https://arxiv.org/html/2603.06922#bib.bib427)). In quantum mechanics, vNE is defined as $S_{\text{vNE}}(\rho)=-\mathrm{Tr}(\rho\ln\rho)$, where $\rho$ denotes a density matrix, a positive semidefinite operator with unit trace that encapsulates the probabilistic nature of quantum states (Nikitin et al., [2024](https://arxiv.org/html/2603.06922#bib.bib405); Huang et al., [2023](https://arxiv.org/html/2603.06922#bib.bib404)).

For an FFN, an analogous density matrix is obtained by normalizing the covariance matrix by its trace: $\rho_{\text{FFN}}=\frac{\mathbf{\Sigma}}{\mathrm{Tr}(\mathbf{\Sigma})}$, where $\mathrm{Tr}(\mathbf{\Sigma})=\Lambda$. Applied to $\rho_{\text{FFN}}$, SE is exactly the Shannon (or von Neumann) entropy of the normalized eigenvalue distribution, $\text{SE}=-\sum_{i=1}^{D}\hat{\lambda}_{i}\log\hat{\lambda}_{i}$.
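A compact sketch of the covariance and spectral-entropy computation defined above (PyTorch; the small `eps` guard against $\log 0$ is our implementation choice, and `eigvalsh` is appropriate because the covariance is symmetric positive semidefinite):

```python
import torch

def covariance(X: torch.Tensor) -> torch.Tensor:
    # X: [N, D] flattened activations (N = B*S tokens); unbiased estimate, Eq. (1)
    Xc = X - X.mean(dim=0, keepdim=True)
    return Xc.T @ Xc / (X.shape[0] - 1)

def normalized_spectrum(sigma: torch.Tensor) -> torch.Tensor:
    # Eigenvalues of a symmetric PSD matrix, sorted descending and normalized
    # into a probability distribution (lambda_hat_i = lambda_i / Lambda)
    lam = torch.linalg.eigvalsh(sigma).flip(0).clamp_min(0.0)
    return lam / lam.sum()

def spectral_entropy(lam_hat: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    # SE = -sum_i lambda_hat_i log lambda_hat_i; maximized (ln D) by a flat spectrum
    return -(lam_hat * (lam_hat + eps).log()).sum()
```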
Thus, when the eigenspectrum exhibits significant anisotropy (e.g., $\lambda_{1}\gg\lambda_{2},\dots,\lambda_{D}$), SE approaches zero, indicating a collapsed or low-rank representation. Conversely, when the eigenspectrum approaches uniformity ($\lambda_{i}\approx\lambda_{j}\;\forall i,j$), SE approaches its theoretical maximum, $\ln(D)$.

Participation Ratio (PR). PR measures the effective dimensionality of an eigenspectrum (Hu and Sompolinsky, [2022](https://arxiv.org/html/2603.06922#bib.bib426)) and quantifies how many dimensions hold significant variance. Formally,

$$\mathrm{PR}=\frac{\left(\sum_{i=1}^{D}\lambda_{i}\right)^{2}}{\sum_{i=1}^{D}\lambda_{i}^{2}}=\frac{\Lambda^{2}}{\sum_{i}\lambda_{i}^{2}}=\frac{1}{\sum_{i}\hat{\lambda}_{i}^{2}},\quad\text{where}\quad 1\leq\mathrm{PR}\leq D\tag{2}$$

PR values close to 1 indicate maximal anisotropy (i.e., variance concentrated in a single direction), while a value near $D$ indicates uniform variance across all dimensions. While SE depends on the entire shape of the distribution (including small eigenvalues) and measures its uniformity, PR focuses on how many directions are active and meaningfully contribute to the total variance.

Eigenvalue Early Enrichment (EEE). EEE quantifies the top-heaviness of an eigenspectrum by tracking how rapidly the leading principal directions accumulate variance. Specifically, it captures how front-loaded the variance is among the top eigenvalues by assessing how quickly the cumulative sum surpasses that of a uniform spectrum (Marbut et al., [2023](https://arxiv.org/html/2603.06922#bib.bib414)).

Formally, the proportion of variance explained by the top $k$ principal directions is given by the normalized cumulative sum at index $k$, $\widetilde{S}_{k}=\frac{1}{\Lambda}\sum_{i=1}^{k}\lambda_{i}$; for comparison, the ideal uniform reference grows linearly as $\tfrac{k}{D}$ (see Figure [2](https://arxiv.org/html/2603.06922#S2.F2)). The EEE score is then the average vertical distance between the empirical cumulative curve and this ideal line, normalized by the maximal possible value:

$$\mathrm{EEE}=\frac{1}{\tfrac{1}{2}D}\sum_{k=1}^{D}\left(\widetilde{S}_{k}-\frac{k}{D}\right)=2\times\sum_{k=1}^{D}\left(\frac{\sum_{i=1}^{k}\lambda_{i}}{\sum_{i=1}^{D}\lambda_{i}}-\frac{k}{D}\right)\times\frac{1}{D}\tag{3}$$

$\mathrm{EEE}\approx 1$ indicates that most of the variance is concentrated in the top few directions, forming a steep eigenvalue spectrum; conversely, $\mathrm{EEE}\approx 0$ corresponds to a nearly uniform spectrum.

![Image 2: Refer to caption](https://arxiv.org/html/2603.06922v1/x2.png)

Figure 2: Cumulative variance distribution across a 768-dimensional latent space. Higher EEE values (shown on the curves) indicate top-heavy concentration in a few dominant directions, while lower values reflect a more uniform distribution.
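Continuing the sketch above, PR (Eq. 2) and EEE (Eq. 3) are a few lines each. The sanity checks mirror the extremes in Figure 2 and follow directly from the definitions (they are not experimental values from the paper):

```python
import torch

def participation_ratio(lam_hat: torch.Tensor) -> torch.Tensor:
    # PR = 1 / sum_i lam_hat_i^2 (Eq. 2); ranges from 1 (collapsed) to D (uniform)
    return 1.0 / (lam_hat ** 2).sum()

def eee(lam_hat: torch.Tensor) -> torch.Tensor:
    # Eq. (3): mean gap between the cumulative spectrum and the uniform line,
    # normalized by the maximum possible gap (factor 2/D)
    D = lam_hat.shape[0]
    cum = lam_hat.cumsum(0)
    uniform = torch.arange(1, D + 1, dtype=lam_hat.dtype, device=lam_hat.device) / D
    return 2.0 * (cum - uniform).sum() / D

# Sanity checks mirroring Figure 2's extremes (D = 768):
one_dim = torch.zeros(768); one_dim[0] = 1.0   # all variance in one direction
flat = torch.full((768,), 1.0 / 768)           # perfectly uniform spectrum
# eee(one_dim) -> ~1.0 (maximally top-heavy);  eee(flat) -> 0.0
# participation_ratio(one_dim) -> 1.0;         participation_ratio(flat) -> 768.0
```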
We analyze the cumulative variance distribution of various eigenspectra over a 768-dimensional latent space using the EEE metric in Figure [2](https://arxiv.org/html/2603.06922#S2.F2). The resulting curves show a spectrum of dimensional utilization, ranging from extreme anisotropy to fully uniform variance. The one-dimensional spectrum exhibits an EEE of 1.00, with nearly all variance concentrated in a single dominant principal component, indicative of a highly degenerate latent representation. As more dimensions begin to carry variance (from 10% to 99%), the EEE value decreases from 0.94 to 0.44, reflecting a gradual transition toward more distributed representations. Notably, EEE's nonlinear scaling with dimension count highlights its sensitivity to early eigenvalue dominance, making it a valuable diagnostic for understanding how architectural choices and training dynamics shape the effective dimensionality of latent spaces.

Jensen-Shannon Divergence (JS). Unlike the previous metrics, which describe a single eigenspectrum in isolation, JS provides a principled measure of dissimilarity between two eigenspectra within a layer. Specifically, it quantifies the _extent of distributional shift_ from the _pre_- to the _post_-activation eigenspectrum caused by the FFN nonlinearity. For normalized eigenvalue distributions $P_{\text{pre}}=\{\hat{\lambda}_{i}^{\text{pre}}\}_{i=1}^{D}$ and $P_{\text{post}}=\{\hat{\lambda}_{i}^{\text{post}}\}_{i=1}^{D}$, where $\hat{\lambda}_{i}=\lambda_{i}/\sum_{j=1}^{D}\lambda_{j}$, the JS divergence is defined as (see Amari ([2016](https://arxiv.org/html/2603.06922#bib.bib421), Chapter 4.6.3)):

$$\text{JS}(P_{\text{pre}}\parallel P_{\text{post}})=\frac{1}{2}D_{\text{KL}}(P_{\text{pre}}\parallel M)+\frac{1}{2}D_{\text{KL}}(P_{\text{post}}\parallel M)\tag{4}$$

where $M=\frac{P_{\text{pre}}+P_{\text{post}}}{2}$ is the midpoint distribution and $D_{\text{KL}}$ is the Kullback-Leibler divergence:

$$D_{\text{KL}}(P\parallel Q)=\sum_{i=1}^{D}\hat{\lambda}_{i}^{P}\log\left(\frac{\hat{\lambda}_{i}^{P}}{\hat{\lambda}_{i}^{Q}}\right)\tag{5}$$

For numerical stability, we compute the JS divergence for FFNs as:

$$\text{JS}(P_{\text{pre}}\parallel P_{\text{post}})=\frac{1}{2}\sum_{i=1}^{D}\hat{\lambda}_{i}^{\text{pre}}\log\left(\frac{2\hat{\lambda}_{i}^{\text{pre}}}{\hat{\lambda}_{i}^{\text{pre}}+\hat{\lambda}_{i}^{\text{post}}}\right)+\frac{1}{2}\sum_{i=1}^{D}\hat{\lambda}_{i}^{\text{post}}\log\left(\frac{2\hat{\lambda}_{i}^{\text{post}}}{\hat{\lambda}_{i}^{\text{pre}}+\hat{\lambda}_{i}^{\text{post}}}\right)\tag{6}$$

3 Experimental Results
----------------------

Models and datasets. We evaluate the FFN eigenspectrum of two model families: GPT-2 and LLaMA-style architectures. For GPT-2, we train a 125M-parameter model on 2.1B tokens from the CodeParrot dataset, which is created from 20M GitHub Python files and preprocessed using the [HuggingFace](https://arxiv.org/html/2603.06922#bib.bib112) tokenizer with a 50K vocabulary.
For LLaMA-style models, we train in-house variants with 71M and 130M parameters on the C4 dataset (Raffel et al., [2020](https://arxiv.org/html/2603.06922#bib.bib441)), tokenized using the T5-base tokenizer with a 32K vocabulary. These LLaMA variants follow the architectural specifications (depth, embedding dimensions, FFN width, positional encoding, and SwiGLU activation) of Li et al. ([2025](https://arxiv.org/html/2603.06922#bib.bib398)), which adopt the downscaling methodology of Lialin et al. ([2024](https://arxiv.org/html/2603.06922#bib.bib399)) and Zhao et al. ([2024](https://arxiv.org/html/2603.06922#bib.bib400)). For experiments with RoPE, we train GPT-2 on the OpenWebText dataset, following the architectural settings and training recipe of Loshchilov et al. ([2025](https://arxiv.org/html/2603.06922#bib.bib410)). To study optimizer-dependent (AdamW, Muon, Dion) dynamics, we train GPT-2 350M and 160M variants on the FineWeb dataset (Penedo et al., [2024](https://arxiv.org/html/2603.06922#bib.bib439)).

Training setup. All experiments are conducted on NVIDIA RTX 3090 GPUs (24 GB). GPT-2 models are trained for 41K steps with context length 128 on the CodeParrot dataset. For RoPE experiments, GPT-2 is trained on 26B tokens from OpenWebText using 4 GPUs with context length 512. LLaMA-71M is trained on 1.1B tokens for 10K steps, while the LLaMA-130M, LLaMA-250M, and LLaMA-1.3B variants are trained on 2.2B tokens for 20K steps. All LLaMA models use a context length of 256. For the optimizer-specific eigenspectrum analysis, we train GPT-2 models with 512 and 1024 context lengths, following the hyperparameter settings of Ahn et al. ([2025](https://arxiv.org/html/2603.06922#bib.bib449)).

### 3.1 FFN Nonlinearity Reinjects Variance and Flattens the Eigenspectrum

Variance is reinjected, not merely rescaled. Figure [1](https://arxiv.org/html/2603.06922#S1.F1) contrasts pre- and post-activation spectral dynamics and highlights the role of the nonlinearity within the FFN. The PreAct eigenspectrum is highly top-heavy: most variance is concentrated in a few leading directions, reflected in lower SE and PR and indicating low utilization of the latent space. Once the nonlinearity is applied, both SE and PR jump upward across training, suggesting that the nonlinearity redistributes variance across more dimensions. In effect, the nonlinearity reawakens previously inactive directions, injecting new degrees of freedom into the latent space. This variance reinjection promotes feature disentanglement, which facilitates more effective downstream processing in subsequent layers.

Flattening and reshaping the eigenspectrum. The variance redistribution has a noticeable impact on the spectrum's shape. The EEE value, which quantifies how sharply the leading eigenvalues dominate, drops consistently for post-activation spectra, re-affirming that the spectrum is being flattened.
Instead of concentrating variance in a small number of dominant modes, the post-activation spectrum spreads variance more evenly. Moreover, the JS heatmaps show a distributional shift: post-activation eigenspectra are not merely scaled versions of pre-activation ones but are effectively reordered.

GELU vs. ReLU: similar trajectory, distinct dynamics. GELU (Figure [1](https://arxiv.org/html/2603.06922#S1.F1)) and ReLU (Figure [3](https://arxiv.org/html/2603.06922#S3.F3)) follow the same qualitative trajectory (variance reinjection: SE ↑, PR ↑; spectral flattening: EEE ↓; distributional reordering: JS ↑) but differ in pace and extent. While ReLU stabilizes SE and PR earlier, suggesting a faster reinjection of variance, GELU progresses more gradually yet ultimately pushes PR_post to higher values. This indicates that GELU's smoother nonlinearity enables broader subspace exploration, which correlates with GELU's lower perplexity (Table [2](https://arxiv.org/html/2603.06922#S3.T2)).

These eigen-metrics highlight the functional role of nonlinearity: (1) improving the directional usage of the FFN latent space (SE ↑, PR ↑), reflecting the increased participation of multiple latent directions in encoding information; and (2) reducing the dominance of a few principal directions (EEE ↓). Thus, NerVE provides a geometric underpinning of nonlinear expressivity in transformer models.

![Image 3: Refer to caption](https://arxiv.org/html/2603.06922v1/x3.png)

Figure 3: Eigenspectrum dynamics illustrate how FFN nonlinearities regulate information flow and reshape the eigenspectrum during training for GPT-2 (ReLU) on CodeParrot. Pre- and post-activation dynamics are shown for SE, PR, and EEE, highlighting how nonlinearities reinject variance and alter spectral structure. JS heatmaps (rightmost) capture the layer-wise distributional shift induced by the nonlinearity. In-panel titles report Pearson correlations ($r$) between each metric and the evaluation loss.

### 3.2 Compensatory Role of FFN Nonlinearity in the Absence of LayerNorms

Removing LayerNorm from transformer architectures eliminates layerwise re-centering and variance normalization, shifting the burden of statistical regularization entirely onto the attention and FFN sub-blocks. This motivates a central question: can FFN activation functions compensate for the absence of normalization, and if so, to what extent and through what mechanisms? Our findings reveal that, unlike GELU, ReLU-family activations actively compensate for the removal of LayerNorms by regulating the variance of the FFN latent space.
Spectral inertia in normalization-free GELU models. The normalization-free GELU model exhibits spectral inertia in the early layers, characterized by EEE_post ≈ 1 and JS ≈ 0 (see Figure [4](https://arxiv.org/html/2603.06922#S3.F4)). This indicates that the nonlinearity in early FFNs fails to reinject variance into the latent space, leaving the eigenspectrum heavily front-loaded. Consequently, variance remains confined to a few dominant subspaces, and there is significant overlap between SE_pre and SE_post. Thus, the nonlinearity in early FFNs does not activate new directions, and information continues to flow through a narrow subspace in subsequent layers. This spectral bottleneck is a downstream consequence of entropic overload, a critical failure mode observed in normalization-free LLMs (Jha and Reagen, [2024](https://arxiv.org/html/2603.06922#bib.bib359)), where a disproportionate number of attention heads in the early layers remain stuck in high-entropy states throughout training, squandering the representational diversity of the multi-head attention mechanism and degrading performance (higher perplexity; see Table [2](https://arxiv.org/html/2603.06922#S3.T2)).

![Image 4: Refer to caption](https://arxiv.org/html/2603.06922v1/x4.png)

Figure 4: Eigenspectrum dynamics for norm-free GPT-2 (125M) models with GELU (top), ReLU (middle), and learnable-slope Leaky ReLU (bottom). Columns show layer-averaged SE (pre vs. post), PR gain (post over pre), post-activation EEE (yellow regions indicate a top-heavy distribution), and JS (yellow regions highlight strong redistribution) across layers and training steps. Norm-free GELU exhibits spectral inertia in layers 0 to 5 (EEE → 1, JS → 0), whereas ReLU and Leaky ReLU aggressively reinject variance (PR gain > 200×) and flatten the spectrum (EEE < 0.3).

Early FFNs overcompensate to break spectral inertia in normalization-free ReLU models. In contrast to GELU, ReLU and the learnable-slope Leaky ReLU variant exhibit strong compensatory behavior when LayerNorms are removed. Specifically, in the first two FFN layers, the post-to-pre Participation Ratio (PR) gain surges by ≈ 20× to 300× (blue curves, Figure [4](https://arxiv.org/html/2603.06922#S3.F4)), indicating an abrupt reinjection of variance into previously underutilized latent directions. Consequently, EEE_post remains consistently low (≈ 0.3-0.5) across layers, indicating that the spectrum becomes flatter and more isotropic rather than top-heavy.
This redistribution is further corroborated by the non-overlapping SE_pre and SE_post curves, and by JS peaks of ≈ 0.48 in the early-layer contour maps, confirming the crucial role of the nonlinearity in reshaping the eigenspectrum of early FFNs.

This aggressive variance injection demonstrates that FFN nonlinearity can partially assume the statistical regularization role of LayerNorm, widening the latent manifold and mitigating spectral bottlenecks. In terms of predictive performance, both ReLU variants reduce the perplexity gap to the LayerNorm baseline by ≈ 50% (see Table [2](https://arxiv.org/html/2603.06922#S3.T2)).

Table 2: Evaluation perplexity (PPL ↓) comparison across GPT-2 baseline models (GELU and ReLU) and norm-free models (GELU, ReLU, learnable-slope Leaky ReLU). Parametric normalizations (Weight, Spectral, Hyperspherical) are applied to the FFNs of the norm-free learnable-slope Leaky ReLU models. All models are trained on 2.1B tokens from the CodeParrot dataset.

### 3.3 Feed-Forward Network Weight Geometry and Eigenspectrum Dynamics

We have seen how ReLU variants improve the redistribution of top-heavy eigenvalues in the early layers of normalization-free LLMs. We now analyze how parametric normalization applied to their FFNs further influences eigenspectrum dynamics. Figure [5](https://arxiv.org/html/2603.06922#S3.F5) shows the effects of weight, spectral, and hyperspherical normalization applied to FFNs.

Parametric normalization alters the localization of distributional shifts across layers. Despite being applied only to the FFN linear layers, each parametric normalization technique induces distinct learning dynamics, as demonstrated by the layerwise JS divergence in Figure [5](https://arxiv.org/html/2603.06922#S3.F5) (rightmost column). Specifically, SNorm exhibits highly localized distributional shifts in the mid-to-deeper layers that emerge very early in training. In contrast, WNorm induces distributional shifts in a smaller subset of mid layers that appear very late in training. Meanwhile, HNorm triggers strong shifts in the early layers at the very beginning of training, which gradually diminish as training progresses.

![Image 5: Refer to caption](https://arxiv.org/html/2603.06922v1/x5.png)

Figure 5: Impact of FFN (parametric) normalization in norm-free GPT-2 with learnable-slope Leaky ReLU. Eigenspectrum dynamics are quantified by latent capacity (PR_post), spectral regularization and flattening (ΔEEE and EEE_post), and distributional shift (JS). Top to bottom: Weight, Spectral, and Hyperspherical Normalization. Each method exhibits a distinct JS localization and spectral pattern, showing different influences on FFN internal dynamics.
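For reference, weight and spectral normalization are available as standard PyTorch utilities; the sketch below shows how they might be wrapped around FFN linear layers. The hyperspherical projection is written as one simple interpretation (our assumption, not necessarily the paper's exact recipe):

```python
import torch
import torch.nn as nn
from torch.nn.utils import weight_norm, spectral_norm

# Illustrative sketch: apply parametric normalization to the FFN's linear layers.
def normalize_ffn(up: nn.Linear, down: nn.Linear, kind: str = "spectral"):
    if kind == "weight":    # WNorm: decouples weight direction and magnitude
        return weight_norm(up), weight_norm(down)
    if kind == "spectral":  # SNorm: constrains the largest singular value to ~1
        return spectral_norm(up), spectral_norm(down)
    raise ValueError(f"unknown normalization: {kind}")

# HNorm: one simple interpretation is projecting each weight row back onto
# the unit hypersphere after every optimizer step.
@torch.no_grad()
def hyperspherical_step(linear: nn.Linear):
    linear.weight.div_(linear.weight.norm(dim=1, keepdim=True).clamp_min(1e-8))
```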
Spectral normalization achieves superior performance through smooth and sustained spectral flattening. By constraining the spectral norm of each FFN weight matrix, SNorm induces early and consistent spectral flattening, reflected in uniformly negative ΔEEE (post minus pre) values in Figure [5](https://arxiv.org/html/2603.06922#S3.F5), especially in the deeper layers. This yields the lowest EEE_post (≈ -0.45) among all parametric normalization methods, indicating a balanced variance distribution across a moderate number of directions (PR_post ≈ 200) and improved latent-space utilization. In contrast, WNorm shows delayed and highly localized flattening in a few mid layers, while HNorm induces early flattening in shallow layers that vanishes as training progresses.

Hyperspherical normalization underperforms due to early overshooting in the eigenspectrum. HNorm projects weight vectors onto a unit hypersphere (Loshchilov et al., [2025](https://arxiv.org/html/2603.06922#bib.bib410); Lee et al., [2025a](https://arxiv.org/html/2603.06922#bib.bib409); Wang and Isola, [2020](https://arxiv.org/html/2603.06922#bib.bib407)), which rapidly expands latent capacity, indicated by a sharp increase in PR_post (exceeding 600). However, this expansion causes early overshooting in the ΔEEE dynamics, and EEE_post values remain high across depth, indicating the persistent dominance of a few principal directions. Moreover, the JS divergence patterns reflect inefficient use of the model's depth. Hence, the combination of early overshooting, lack of spectral control, and depthwise redundancy leads to HNorm's degraded perplexity. While a large latent capacity can be beneficial, it must be paired with sustained flattening mechanisms to prevent a top-heavy eigenspectrum from re-emerging.

### 3.4 Impact of LayerNorm Positioning on FFN Latent Space Dimensionality

PreLN turns width into usable dimensions, while PostLN shows diminishing returns at higher width. Across the FFN-width sweep ($D=1d$ to $8d$), the normalized PR, which reflects the effective utilization of the available latent space, is highest for PreLN and remains nearly flat as $D$ increases. Figure [6](https://arxiv.org/html/2603.06922#S3.F6) shows layer-consistent behavior for PreLN, highlighting the conversion of added width into usable dimensions. Thus, PreLN offers the best return on width.

![Image 6: Refer to caption](https://arxiv.org/html/2603.06922v1/x6.png)

Figure 6: LayerNorm positioning and FFN width sweep: post-activation participation ratio normalized by $D$ for the PreLN, MixLN, and PostLN configurations. PreLN sustains the highest and most stable utilization of FFN width across the sweep, PostLN incurs diminishing returns at higher FFN width, and MixLN lies in between but with greater layer-to-layer variability.
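The utilization measure plotted in Figure 6 is simply PR_post normalized by the FFN hidden dimension; a tiny sketch with illustrative (hypothetical) values:

```python
import torch

# Hypothetical per-layer PR_post values for one (LN placement, width) setting;
# the utilization in Figure 6 is PR normalized by the hidden width D.
def width_utilization(pr_post: torch.Tensor, D: int) -> torch.Tensor:
    return pr_post / D  # in (0, 1]; values near 1 mean added width is fully used

pr_post = torch.tensor([512.0, 430.0, 390.0])  # illustrative layer values
print(width_utilization(pr_post, D=3072))       # e.g., D = 4d for d = 768
```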
Width utilization is lowest for PostLN and decreases with $D$, revealing growing spectral concentration: the added capacity is concentrated into fewer dominant directions instead of broadening the effective dimensionality. MixLN is intermediate, with medians between PreLN and PostLN and a wider layer-to-layer spread, implying a less stable inductive bias across depth. Thus, LayerNorm placement governs how width is spent. Refer to Table [7](https://arxiv.org/html/2603.06922#A4.T7) (Appendix [D](https://arxiv.org/html/2603.06922#A4)) for the raw PR_post values, and Figure [11](https://arxiv.org/html/2603.06922#A3.F11) (Appendix [C.1](https://arxiv.org/html/2603.06922#A3.SS1)) for a detailed discussion of spectral signatures.

### 3.5 Eigenspectral Signatures Predict Generalization

Practical utility of NerVE as an online monitoring tool and an architectural selection proxy. First, to assess whether NerVE metrics serve as online training diagnostics, we correlate SE and PR with validation loss across training checkpoints for each FFN width variant (Table [3](https://arxiv.org/html/2603.06922#S3.T3), left). Pre-activation correlations satisfy $|r|\geq 0.97$ at every width, indicating that spectral metrics track generalization throughout training and can be used as forward-pass-only diagnostics. Notably, the post-activation PR correlation strengthens from $|r|=0.85$ at $D=1d$ to $|r|\geq 0.93$ at $D\geq 2d$, suggesting that a modest FFN width is required to produce generalization-predictive spectral signatures.

Table 3: Correlation between eigenspectrum metrics (SE, PR) and generalization. Left (within-run): Pearson $r$ between each metric and validation loss over checkpoints at each FFN width ($D=1d$ to $8d$). Right (cross-config): Pearson $r$ between final metric values and perplexity across the eight width configurations for each architecture and activation. Higher SE and PR consistently imply lower loss.

Second, for cross-configuration ranking, we correlate the final values of the eigen-metrics against final perplexity across the eight width configurations, for each architecture and activation variant (Table [3](https://arxiv.org/html/2603.06922#S3.T3), right). Correlations remain strong across configurations ($|r|\geq 0.85$), with one notable exception: normalization-free ReLU and Leaky ReLU, where pre-activation correlations weaken sharply while post-activation correlations strengthen.
This inversion directly reflects how the FFN nonlinearity overcompensates to break the spectral inertia identified in Section [3.2](https://arxiv.org/html/2603.06922#S3.SS2), making the post-activation spectrum the more informative diagnostic in this regime. Thus, short preliminary runs with NerVE metrics can rank architectural configurations without training each to convergence.

### 3.6 Layerwise Dynamics of Positional Encoding: RoPE vs NoPE

RoPE prevents mid-to-deep spectral collapse, improving depth utilization.

![Image 7: Refer to caption](https://arxiv.org/html/2603.06922v1/x7.png)

Figure 7: Layerwise participation ratio (PR) comparison (RoPE vs NoPE) for GPT-2 models trained from scratch on 26B tokens from the OpenWebText dataset with a 512 context length for 100K steps. RoPE sustains higher PR in the middle and deeper layers, indicating better utilization of the latent space and the network's depth.

Figure [7](https://arxiv.org/html/2603.06922#S3.F7) demonstrates that NoPE's PR declines in the middle and deeper layers, indicating that representations collapse into a narrow subspace and squander model depth. RoPE, on the other hand, sustains higher PR across the mid-to-deeper layers, improving depth utilization. This effect aligns with recent evidence that intermediate layers are disproportionately important (Queipo-de-Llano et al., [2026](https://arxiv.org/html/2603.06922#bib.bib433); Lad et al., [2025](https://arxiv.org/html/2603.06922#bib.bib435); Ikeda et al., [2025](https://arxiv.org/html/2603.06922#bib.bib436); Skean et al., [2025](https://arxiv.org/html/2603.06922#bib.bib434)); these layers collapse under NoPE but remain effective under RoPE. This improved spectral utilization helps RoPE achieve lower evaluation perplexity than NoPE (15.20 vs 16.78). Figure [13](https://arxiv.org/html/2603.06922#A3.F13) in Appendix [C.3](https://arxiv.org/html/2603.06922#A3.SS3) shows spectral entropy heatmaps reaffirming that RoPE prevents the mid-to-deep spectral collapse characteristic of NoPE.

### 3.7 Optimizer-Dependent Role of FFN Nonlinearity: Repair vs Refinement

To understand the optimizer-dependent role of FFN nonlinearity, we examine eigenspectrum dynamics under three LLM optimizers (AdamW, Muon, and Dion) and summarize the observations below.

![Image 8: Refer to caption](https://arxiv.org/html/2603.06922v1/x8.png)

Figure 8: Optimizer-dependent FFN eigenspectrum dynamics in GPT2-350M. Rows show AdamW (top), Muon (middle), and Dion (bottom).
AdamW shows large early PR gains and high JS with relatively high EEE_post, indicating optimizer-induced pre-activation collapse followed by aggressive but incomplete nonlinear repair. Muon shows the smallest PR gains, the lowest JS, the lowest EEE_post, and flatter post-activation spectra. Dion is intermediate, falling between these two regimes: it improves over AdamW but does not match Muon's pre-/post-activation spectral behavior. The perplexity ordering (Muon > Dion > AdamW) aligns with post-activation spectral flatness.

Muon minimizes the nonlinearity's burden by preserving an activation-compatible eigenspectrum. Figure [8](https://arxiv.org/html/2603.06922#S3.F8) shows that Muon maintains uniformly small PR (post/pre) gains and consistently low JS divergence throughout training. This combination indicates that Muon keeps the pre-activation FFN eigenspectra high-dimensional and near-isotropic, a property also shown in a recent line of work (Wang et al., [2026](https://arxiv.org/html/2603.06922#bib.bib447); Vasudeva et al., [2026](https://arxiv.org/html/2603.06922#bib.bib448)), so the FFN nonlinearity does not need to substantially restructure representations. Dion follows the same trend but less strongly: its PR gains are moderate, and its JS is higher and more uniform across layers, suggesting broader activation-driven reshaping than Muon yet still far less mismatch than AdamW. Thus, geometric updates serve as spectral equalizers, preventing the pre-activation spectrum from drifting into regimes that demand large nonlinear corrections.

Early-layer spectral collapse in AdamW forces the FFN nonlinearity into repair mode.

![Image 9: Refer to caption](https://arxiv.org/html/2603.06922v1/x9.png)

![Image 10: Refer to caption](https://arxiv.org/html/2603.06922v1/x10.png)

Figure 9: Layerwise PR_pre over training (top) and final per-layer PR_post (bottom) for the AdamW, Muon, and Dion optimizers. Muon maintains the highest PR_pre across almost all layers, Dion is intermediate, and AdamW shows early-layer collapse. Moreover, Muon concentrates the largest effective dimensionality in the middle FFNs.

In contrast, AdamW exhibits very large PR (post/pre) gains concentrated in the early layers, indicating optimizer-induced pre-activation collapse, with energy concentrating into a small set of dominant eigenmodes followed by strong nonlinear repair (Figure [9](https://arxiv.org/html/2603.06922#S3.F9)). However, this repair does not translate into better utilization: PR_post remains lower than under Muon and Dion in the early and intermediate layers despite the massive PR gains during training. Thus, under AdamW the activation expends capacity primarily to undo collapse rather than to refine a healthy spectrum, consistent with AdamW's worse perplexity (see Table [13](https://arxiv.org/html/2603.06922#A12.T13)).
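This repair-vs-refinement reading can be operationalized as a simple layer-level diagnostic. The sketch below is our framing of it; the gain threshold is illustrative, not a value from the paper:

```python
import torch

def pr_gain(pr_pre: torch.Tensor, pr_post: torch.Tensor) -> torch.Tensor:
    # Post-to-pre participation-ratio gain per layer: large early-layer gains
    # indicate a collapsed pre-activation spectrum the nonlinearity must repair.
    return pr_post / pr_pre.clamp_min(1e-8)

def repair_layers(pr_pre, pr_post, threshold: float = 10.0):
    # Layers whose gain exceeds the (illustrative) threshold are in "repair"
    # mode; small gains over an already-high pr_pre suggest "refinement".
    return (pr_gain(pr_pre, pr_post) > threshold).nonzero().flatten().tolist()
```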
Muon concentrates representational capacity where it matters: the middle FFNs. Figure [9](https://arxiv.org/html/2603.06922#S3.F9) isolates where effective dimensionality (PR_post) accumulates after training. Muon achieves the highest PR_post in the intermediate FFNs, whereas Dion inflates PR_post mainly in the early FFNs without yielding the best perplexity; instead, the perplexity ordering follows the mid-FFN PR_post trend. This highlights the spectral mechanism behind Muon's superiority: well-conditioned latent-space usage in the mid FFNs.

The optimizer-dependent dynamics persist across scales and context lengths (Appendix [L.1](https://arxiv.org/html/2603.06922#A12.SS1)), and extend to Adafactor (Appendix [L.2](https://arxiv.org/html/2603.06922#A12.SS2)) and to SGD on the non-transformer MLP-Mixer (Appendix [L.3](https://arxiv.org/html/2603.06922#A12.SS3)). Thus, the core finding that nonlinearity reinjects variance and flattens spectra is optimizer-agnostic, while remaining optimizer-dependent in degree, supporting the view that optimizers induce representational biases which should be leveraged as explicit sources of inductive bias (Pascanu et al., [2025](https://arxiv.org/html/2603.06922#bib.bib455)).

4 Related Work
--------------

Prior work leverages spectral signals of weights or representations to understand model internals. RankMe (Garrido et al., [2023](https://arxiv.org/html/2603.06922#bib.bib401)) and Diff-eRank (Wei et al., [2024](https://arxiv.org/html/2603.06922#bib.bib413)) use spectral-entropy-based rank measures to predict downstream accuracy and quantify compression. Bao et al. ([2024](https://arxiv.org/html/2603.06922#bib.bib415)) established the relation between the spectral concentration of the $QK$ weight matrix and attention localization, which Lee et al. ([2025b](https://arxiv.org/html/2603.06922#bib.bib418)) addressed using a one-step belief-propagation refinement. Hu et al. ([2025](https://arxiv.org/html/2603.06922#bib.bib416)) showed that the heavy-tailed nature of weight ESDs is biased by layer aspect ratio and proposed fixed-aspect sub-ESD averaging to debias it. Zhang et al. ([2024](https://arxiv.org/html/2603.06922#bib.bib453)) use the JS divergence between the Hessian spectra of parameter blocks at initialization to suggest optimizer choices. Ruscio et al.
([2025](https://arxiv.org/html/2603.06922#bib.bib454)) used the spectral gap and participation ratio of attention eigenspectra to reveal that attention sinks serve as geometric reference frames anchoring token representations. Poole et al. ([2016](https://arxiv.org/html/2603.06922#bib.bib451)) used Riemannian geometry with mean-field theory to demonstrate an order-to-chaos phase transition governed by the nonlinearity's derivative in randomly initialized networks. Cowsik et al. ([2025](https://arxiv.org/html/2603.06922#bib.bib452)) used Lyapunov exponents for signal propagation, treating token evolution as a particle system to predict trainability. Dong et al. ([2021](https://arxiv.org/html/2603.06922#bib.bib391)) used relative residual norms and path decomposition to demonstrate attention-induced rank collapse.

In contrast, NerVE directly tracks FFN eigenspectrum dynamics, showing how nonlinearities redistribute variance in latent space, and offers a diagnostic for architectural and optimizer choices.

5 Limitations and Conclusion
----------------------------

While the NerVE eigen-metrics analyze how FFNs organize variance in LLMs, they do not directly predict downstream task quality. Moreover, computing full eigendecompositions in large dimensions can be costly, often necessitating sampling or approximation (see Appendix [G](https://arxiv.org/html/2603.06922#A7)). Despite these constraints, our analysis shows that each metric contributes a distinct and complementary view of high-dimensional usage. Refer to Appendix [M](https://arxiv.org/html/2603.06922#A13) for further details on the limitations of the NerVE framework.

Acknowledgments
---------------

This research was supported in part by the NSF CAREER Award #2340137 and a gift award from Google. We thank the anonymous reviewers for their valuable comments and constructive feedback, which significantly improved the quality of this work.

References
----------

* K. Ahn, B. Xu, N. Abreu, Y. Fan, G. Magakyan, P. Sharma, Z. Zhan, and J. Langford (2025). Dion: distributed orthonormalized updates. arXiv preprint arXiv:2504.05295.
* S. Amari (2016). Information geometry and its applications. Vol. 194, Springer.
* R. Balestriero, R. Cosentino, and S. Shekkizhar (2024) Characterizing large language model geometry helps solve toxicity detection and generation. In International Conference on Machine Learning (ICML).
* H. Bao, R. Hataya, and R. Karakida (2024) Self-attention networks localize when QK-eigenspectrum concentrates. In International Conference on Machine Learning (ICML).
* V. Boreiko, Z. Bu, and S. Zha (2025) Towards understanding of orthogonalization in muon. In High-dimensional Learning Dynamics 2025.
* J. Briët and P. Harremoës (2009) Properties of classical and quantum Jensen-Shannon divergence. Physical Review A: Atomic, Molecular, and Optical Physics.
* A. Cowsik, T. Nebabu, X. Qi, and S. Ganguli (2025) Geometric dynamics of signal propagation predict trainability of transformers. Physical Review E.
* M. De Domenico and J. Biamonte (2016) Spectral entropies as information-theoretic tools for complex network comparison. Physical Review X.
* H. de Vries (2023) In the long (context) run: it’s not the quadratic attention; it’s the lack of long pre-training data. [https://www.harmdevries.com/post/context-length/](https://www.harmdevries.com/post/context-length/).
* Y. Dong, J. Cordonnier, and A. Loukas (2021) Attention is not all you need: pure attention loses rank doubly exponentially with depth.
In International Conference on Machine Learning (ICML).
* P. Gao, E. Trautmann, B. Yu, G. Santhanam, S. Ryu, K. Shenoy, and S. Ganguli (2017) A theory of multineuronal dimensionality, dynamics and measurement. bioRxiv.
* Q. Garrido, R. Balestriero, L. Najman, and Y. LeCun (2023) RankMe: assessing the downstream performance of pretrained self-supervised representations by their rank. In International Conference on Machine Learning (ICML).
* M. Geva, R. Schuster, J. Berant, and O. Levy (2021) Transformer feed-forward layers are key-value memories. In Empirical Methods in Natural Language Processing (EMNLP).
* B. Ghorbani, S. Krishnan, and Y. Xiao (2019) An investigation into neural net optimization via hessian eigenvalue density. In International Conference on Machine Learning (ICML).
* N. Halko, P. Martinsson, and J. A. Tropp (2011) Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review.
* Y. Hu and H. Sompolinsky (2022) The spectrum of covariance matrices of randomly connected recurrent neuronal networks with linear dynamics. PLoS Computational Biology.
* Y. Hu, K. Goel, V. Killiakov, and Y. Yang (2025) Eigenspectrum analysis of neural networks without aspect ratio bias. In International Conference on Machine Learning (ICML).
* Q. Huang, Y. Zhang, Z. Zhang, and E. Hancock (2023) ESSEN: improving evolution state estimation for temporal networks using von Neumann entropy.
In Conference on Neural Information Processing Systems (NeurIPS).
* HuggingFace. CodeParrot. [https://huggingface.co/learn/nlp-course/chapter7/6](https://huggingface.co/learn/nlp-course/chapter7/6).
* W. Ikeda, K. Yano, R. Takahashi, J. Lee, K. Shibata, and J. Suzuki (2025) Layerwise importance analysis of feed-forward networks in transformer-based language models. In Conference on Language Modeling (COLM).
* N. K. Jha and B. Reagen (2024) ReLU’s revival: on the entropic overload in normalization-free large language models. NeurIPS Workshop on Attributing Model Behavior at Scale.
* K. Jordan, Y. Jin, V. Boza, Y. Jiacheng, F. Cecista, L. Newhouse, and J. Bernstein (2024) Muon: an optimizer for hidden layers in neural networks. [https://kellerjordan.github.io/posts/muon](https://kellerjordan.github.io/posts/muon).
* D. P. Kingma (2015) Adam: a method for stochastic optimization. In International Conference on Learning Representations (ICLR).
* G. Kobayashi, T. Kuribayashi, S. Yokoi, and K. Inui (2024) Analyzing feed-forward blocks in transformers through the lens of attention map. In International Conference on Learning Representations (ICLR).
* V. Lad, J. H. Lee, W. Gurnee, and M. Tegmark (2025) Remarkable robustness of LLMs: stages of inference?. In Conference on Neural Information Processing Systems (NeurIPS).
* C.
Lanczos (1950) An iteration method for the solution of the eigenvalue problem of linear differential and integral operators. Journal of Research of the National Bureau of Standards.
* H. Lee, Y. Lee, T. Seno, D. Kim, P. Stone, and J. Choo (2025a) Hyperspherical normalization for scalable deep reinforcement learning. In International Conference on Machine Learning (ICML).
* N. Lee, Y. Kim, M. Oh, S. Kim, J. W. Koo, H. Jo, and J. Lee (2025b) Mitigating attention localization in small scale: self-attention refinement via one-step belief propagation. In Empirical Methods in Natural Language Processing (EMNLP).
* P. Li, L. Yin, and S. Liu (2025) Mix-LN: unleashing the power of deeper layers by combining pre-LN and post-LN. In International Conference on Learning Representations (ICLR).
* V. Lialin, S. Muckatira, N. Shivagunde, and A. Rumshisky (2024) ReLoRA: high-rank training through low-rank updates. In International Conference on Learning Representations (ICLR).
* J. Lin (1991) Divergence measures based on the Shannon entropy. IEEE Transactions on Information Theory.
* J. Liu, J. Su, X. Yao, Z. Jiang, G. Lai, Y. Du, Y. Qin, W. Xu, E. Lu, J. Yan, et al. (2025) Muon is scalable for LLM training. arXiv preprint arXiv:2502.16982.
* W. Liu, Y. Zhang, X. Li, Z. Yu, B. Dai, T. Zhao, and L. Song (2017) Deep hyperspherical learning. In Advances in Neural Information Processing Systems (NeurIPS).
* I. Loshchilov, C. Hsieh, S. Sun, and B. Ginsburg (2025) NGPT: normalized transformer with representation learning on the hypersphere.
In International Conference on Learning Representations (ICLR).
* I. Loshchilov and F. Hutter (2019) Decoupled weight decay regularization. In International Conference on Learning Representations (ICLR).
* M. Mahoney and C. Martin (2019) Traditional and heavy tailed self regularization in neural network models. In International Conference on Machine Learning (ICML).
* A. Marbut, K. McKinney-Bock, and T. Wheeler (2023) Reliable measures of spread in high dimensional latent spaces. In International Conference on Machine Learning (ICML).
* T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida (2018) Spectral normalization for generative adversarial networks. In International Conference on Learning Representations (ICLR).
* J. Nair, A. Wierman, and B. Zwart (2022) The fundamentals of heavy tails: properties, emergence, and estimation. Vol. 53, Cambridge University Press.
* F. Nielsen and R. Nock (2015) Total Jensen divergences: definition, properties and clustering. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
* A. V. Nikitin, J. Kossen, Y. Gal, and P. Marttinen (2024) Kernel language entropy: fine-grained uncertainty quantification for LLMs from semantic similarities.
In Conference on Neural Information Processing Systems (NeurIPS).
* R. Pascanu, C. Lyle, I. Modoranu, N. E. Borras, D. Alistarh, P. Velickovic, S. Chandar, S. De, and J. Martens (2025) Optimizers qualitatively alter solutions and we should leverage this. arXiv preprint arXiv:2507.12224.
* G. Penedo, H. Kydlíček, A. Lozhkov, M. Mitchell, C. A. Raffel, L. Von Werra, T. Wolf, et al. (2024) The FineWeb datasets: decanting the web for the finest text data at scale. In Advances in Neural Information Processing Systems (NeurIPS).
* B. Poole, S. Lahiri, M. Raghu, J. Sohl-Dickstein, and S. Ganguli (2016) Exponential expressivity in deep neural networks through transient chaos. In Advances in Neural Information Processing Systems (NeurIPS).
* E. Queipo-de-Llano, A. Arroyo, F. Barbero, X. Dong, M. M. Bronstein, Y. LeCun, and R. Shwartz-Ziv (2026) Attention sinks and compression valleys in LLMs are two sides of the same coin. In International Conference on Learning Representations (ICLR).
* C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu (2020) Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research (JMLR).
* V. Ruscio, U. Nanni, and F. Silvestri (2025) What are you sinking? A geometric approach on attention sink. In Conference on Neural Information Processing Systems (NeurIPS).
* T. Salimans and D. P. Kingma (2016) Weight normalization: a simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing Systems (NeurIPS).
* I. Shah, A. M.
Polloreno, K. Stratos, P. Monk, A. Chaluvaraju, A. Hojel, A. Ma, A. Thomas, A. Tanwer, D. J. Shah, et al. (2025) Practical efficiency of Muon for pretraining. arXiv preprint arXiv:2505.02222.
* N. Shazeer and M. Stern (2018) Adafactor: adaptive learning rates with sublinear memory cost. In International Conference on Machine Learning (ICML).
* O. Skean, M. R. Arefin, D. Zhao, N. N. Patel, J. Naghiyev, Y. LeCun, and R. Shwartz-Ziv (2025) Layer by layer: uncovering hidden representations in language models. In International Conference on Machine Learning (ICML).
* J. Su, M. Ahmed, Y. Lu, S. Pan, W. Bo, and Y. Liu (2024) RoFormer: enhanced transformer with rotary position embedding. Neurocomputing.
* I. O. Tolstikhin, N. Houlsby, A. Kolesnikov, L. Beyer, X. Zhai, T. Unterthiner, J. Yung, A. Steiner, D. Keysers, J. Uszkoreit, et al. (2021) MLP-Mixer: an all-MLP architecture for vision. In Advances in Neural Information Processing Systems (NeurIPS).
* B. Vasudeva, P. Deora, Y. Zhao, V. Sharan, and C. Thrampoulidis (2026) How Muon’s spectral design benefits generalization: a study on imbalanced data. In International Conference on Learning Representations (ICLR).
* S. Wang, F. Zhang, J. Li, C. Du, C. Du, T. Pang, Z. Yang, M. Hong, and V. Y. F. Tan (2026) Muon outperforms Adam in tail-end associative memory learning. In International Conference on Learning Representations (ICLR).
* T. Wang and P. Isola (2020) Understanding contrastive representation learning through alignment and uniformity on the hypersphere.
In International Conference on Machine Learning (ICML).
* L. Wei, Z. Tan, C. Li, J. Wang, and W. Huang (2024) Diff-eRank: a novel rank-based metric for evaluating large language models. In Advances in Neural Information Processing Systems (NeurIPS).
* Y. Zhang, C. Chen, T. Ding, Z. Li, R. Sun, and Z. Luo (2024) Why transformers need Adam: a Hessian perspective. In Advances in Neural Information Processing Systems (NeurIPS).
* J. Zhao, Z. Zhang, B. Chen, Z. Wang, A. Anandkumar, and Y. Tian (2024) GaLore: memory-efficient LLM training by gradient low-rank projection. In ICLR Workshop on Practical ML for Limited/Low Resource Settings (PML4LRS).

Appendix A Implementation Details and Methodological Considerations
-------------------------------------------------------------------

### A.1 Framework Overview

![Image 11: Refer to caption](https://arxiv.org/html/2603.06922v1/x11.png)

Figure 10: For each FFN layer $\ell$, we first collect the _pre-activation_ (after $W_{\text{up}}$, before $\sigma$) and _post-activation_ (after $\sigma$, before $W_{\text{down}}$) tensors. Tokens are then flattened into a sample matrix $X\in\mathbb{R}^{(B\times S)\times D}$, where $B$ is the global batch size, $S$ the sequence (context) length, and $D$ the FFN hidden dimension. Finally, we compute the unbiased sample covariance of the mean-centered activations to obtain the eigenvalues $\{\lambda_{i}\}_{i=1}^{D}$, sorted in descending order. Three eigen-metrics are computed on each (pre-/post-) eigenspectrum: spectral entropy (SE) for dispersion, participation ratio (PR) for effective dimensionality, and eigenvalue early enrichment (EEE) for top-heaviness; Jensen-Shannon divergence (JS) quantifies the distributional shift between the pre- and post-activation spectra, capturing the geometric restructuring performed by the nonlinearity.

### A.2 Implementation Details for Computing Covariance Matrices

During training, activations are collected through registered PyTorch hooks. For pre-activations, we use forward hooks on the output of the up-projection layer. For post-activations, we use pre-forward hooks on the down-projection layer to capture its inputs before they enter the down-projection.

Token aggregation. Within each logging step, we flatten all tokens across the batch and sequence dimensions into a single matrix $X\in\mathbb{R}^{N\times D}$ with $N=B\times S$, treating each token embedding as an independent sample. We compute the mean-centered version $\hat{X}=X-\mu$ before forming $\Sigma=\frac{1}{N-1}\hat{X}^{\top}\hat{X}$.
All eigenvalues are sorted in descending order ($\lambda_{1}\geq\cdots\geq\lambda_{D}>0$) before any metric computation; this is required for EEE, which depends on the cumulative sum from largest to smallest (Figure [10](https://arxiv.org/html/2603.06922#A1.F10 "Figure 10 ‣ A.1 Framework Overview ‣ Appendix A Implementation Details and Methodological Considerations ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks")). All results in this paper use the full-batch covariance (no token sub-sampling). The effect of sub-sampling and low-rank approximation is discussed in Appendix [G](https://arxiv.org/html/2603.06922#A7 "Appendix G Token Sub-Sampling and Low-Rank Approximation ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks").

Paired measurement. Pre- and post-activation covariance matrices are computed on the identical set of $N$ tokens within each layer, ensuring that cross-activation quantities, including the JS divergence that measures the geometric restructuring applied by the nonlinearity, are computed on the same input population rather than comparing statistics from different samples. With full-batch computation this pairing is implicit; when sub-sampling is employed (Appendix [G](https://arxiv.org/html/2603.06922#A7 "Appendix G Token Sub-Sampling and Low-Rank Approximation ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks")), the same token subset must be used for both measurement points.

Our implementation ensures numerical stability and efficiency through:

1. Precision control: converting all tensors to float32 to avoid precision issues.

2. Numerical stability: adding a small epsilon ($\epsilon=10^{-12}$) to prevent division by zero in eigenvalue normalization and entropy computation.

3. Memory efficiency: processing layers sequentially and discarding intermediate covariance matrices after eigendecomposition, so that peak GPU memory is bounded by a single layer’s covariance pair ($2D^{2}$ floats) rather than accumulating across all $L$ layers (see Appendix [H.2](https://arxiv.org/html/2603.06922#A8.SS2 "H.2 Memory-efficient Eigenspectrum Analysis ‣ Appendix H Computational and Memory Efficiency of NerVE ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks")). For distributed training, activations are gathered from all GPUs to rank 0 before covariance computation.

4. Specialized computation: using torch.linalg.eigvalsh for symmetric positive semi-definite covariance matrices, which is both faster and more numerically stable than a general-purpose eigensolver.

This methodological rigor ensures that observed patterns in the eigen-metrics reflect intrinsic properties of the network architecture rather than measurement artifacts. We provide the full implementation of our framework on the project page: [https://nerve-eigenspectrum.github.io](https://nerve-eigenspectrum.github.io/)
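The following is a minimal PyTorch sketch of this measurement pipeline, not the released implementation: the `FFNSpectrumProbe` class and the `up_proj`/`down_proj` module handles are illustrative names, while the mean-centering, unbiased covariance, and `torch.linalg.eigvalsh` steps follow the description above.

```python
import torch
import torch.nn as nn


def covariance_eigvals(acts: torch.Tensor) -> torch.Tensor:
    """Eigenvalues (descending) of the unbiased sample covariance of token activations.

    acts: activations of shape (B, S, D); tokens are flattened into N = B*S samples.
    """
    X = acts.reshape(-1, acts.shape[-1]).to(torch.float32)  # (N, D), float32 for precision
    Xc = X - X.mean(dim=0, keepdim=True)                    # mean-center each latent dimension
    cov = Xc.T @ Xc / (Xc.shape[0] - 1)                     # unbiased sample covariance, (D, D)
    evals = torch.linalg.eigvalsh(cov)                      # solver for symmetric PSD matrices
    return evals.flip(0).clamp_min(0.0)                     # descending order; clip round-off negatives


class FFNSpectrumProbe:
    """Capture pre-activation (output of the up-projection) and post-activation
    (input to the down-projection) on the same forward pass."""

    def __init__(self, up_proj: nn.Module, down_proj: nn.Module):
        self.pre = None
        self.post = None
        up_proj.register_forward_hook(
            lambda mod, inp, out: setattr(self, "pre", out.detach()))
        down_proj.register_forward_pre_hook(
            lambda mod, inp: setattr(self, "post", inp[0].detach()))
```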
Appendix B Eigenspectrum Metrics: Design Principles and Diagnostic Guide
------------------------------------------------------------------------

Table [4](https://arxiv.org/html/2603.06922#A2.T4 "Table 4 ‣ Appendix B Eigenspectrum Metrics: Design Principles and Diagnostic Guide ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks") summarizes the four eigen-metrics, their inputs, bounds, and what each captures. The remainder of this appendix provides justifications for the NerVE eigen-metric set ([B.1](https://arxiv.org/html/2603.06922#A2.SS1 "B.1 Metrics Design and Justifications ‣ Appendix B Eigenspectrum Metrics: Design Principles and Diagnostic Guide ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks")), examines inter-metric relationships and failure modes with a diagnostic reference ([B.2](https://arxiv.org/html/2603.06922#A2.SS2 "B.2 Metric Relationships and Failure Modes ‣ Appendix B Eigenspectrum Metrics: Design Principles and Diagnostic Guide ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks")), and decomposes the geometric work of the nonlinearity into interpretable axes ([B.3](https://arxiv.org/html/2603.06922#A2.SS3 "B.3 Decomposing Nonlinearity-Induced Eigenspectrum Restructuring ‣ Appendix B Eigenspectrum Metrics: Design Principles and Diagnostic Guide ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks")).

Table 4: Summary of the four NerVE eigen-metrics. Input indicates whether the metric operates on raw ($\lambda$) or normalized ($\hat{\lambda}$) eigenvalues, and range shows their bounded output values. SE, PR, and EEE characterize a single spectrum, while JS quantifies the divergence between the pre- and post-activation spectral shapes. All four metrics are invariant to uniform scaling of the eigenvalues.

### B.1 Metrics Design and Justifications

The metrics should jointly capture distinct geometric aspects of the high-dimensional latent space; no single metric suffices. Our selection of eigen-metrics (SE, PR, EEE, JS) is therefore based on the following desiderata for a diagnostic framework applied to high-dimensional eigenspectra:

1. (i) Coverage. Different eigenvalue distributions can share the same SE and PR, yet allocate variance very differently across the eigenspectrum. For instance, a spectrum with moderate variance spread uniformly across eigenmodes and a spectrum with the same total spread but concentrated in a few dominant directions followed by a broad low-variance tail could yield identical SE and PR values. EEE resolves this ambiguity by tracking the cumulative variance of the spectrum, distinguishing spectra that utilize different fractions of the latent space (Marbut et al., [2023](https://arxiv.org/html/2603.06922#bib.bib414 "Reliable measures of spread in high dimensional latent spaces")).

2. (ii) Complementary sensitivity. The metrics in Table [4](https://arxiv.org/html/2603.06922#A2.T4 "Table 4 ‣ Appendix B Eigenspectrum Metrics: Design Principles and Diagnostic Guide ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks") emphasize different regions of the eigenspectrum. SE, due to the logarithmic weighting of the normalized eigenvalues ($\hat{\lambda}_{i}$), is relatively insensitive to dominant modes and more responsive to the mid-to-tail spectrum. In contrast, PR downweights small eigenvalues and is driven primarily by dominant modes through the quadratic term ($\sum_{i}\lambda_{i}^{2}$). As a result, shifts confined to the tail can change SE while leaving PR nearly unchanged, whereas redistribution among the leading eigenvalues can move PR strongly with only minor changes in SE. Since EEE summarizes the top-heaviness of the cumulative variance, a handful of large eigenvalues drives EEE close to 1 even if the rest are moderate. These complementary sensitivities ensure that changes in any region of the spectrum are reflected by at least one metric.

3. (iii) Boundedness.
All four metrics have closed-form bounds (Table [4](https://arxiv.org/html/2603.06922#A2.T4 "Table 4 ‣ Appendix B Eigenspectrum Metrics: Design Principles and Diagnostic Guide ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks")), enabling meaningful comparisons across models with different dimensionality $D$, training checkpoints, and architectural configurations. Unbounded metrics would make cross-configuration comparison unreliable.

4. (iv) Scale invariance. All four metrics are invariant to uniform scaling of the eigenvalues $\{\lambda_{i}\}_{i=1}^{D}$. This is crucial in practice, as the magnitude of the covariance matrix may fluctuate due to factors such as batch statistics or learning-rate schedules. Scale invariance keeps the metrics focused on the shape of the distribution, which truly governs directionality in the latent space. Moreover, each metric accounts for the entire eigenvalue distribution, unlike simple measures such as the eigenvalue ratio (largest-to-smallest), which are sensitive only to the extremes.

5. (v) Pre-post restructuring. While SE, PR, and EEE characterize the intrinsic properties of a single distribution, JS quantifies the _information-theoretic distance_ between two distributions (Nielsen and Nock, [2015](https://arxiv.org/html/2603.06922#bib.bib424 "Total jensen divergences: definition, properties and clustering")). Thus, JS is needed to quantify nonlinearity-induced geometric restructuring. A large JS indicates significant variance redistribution across principal components, potentially creating new directions of specialization or eliminating others. Conversely, a small JS highlights FFNs that rescale existing directions without fundamentally altering the latent-space geometry.

Why Jensen-Shannon divergence over KL divergence? The main benefits of the JS divergence (Eq. [6](https://arxiv.org/html/2603.06922#S2.E6 "In 2.2 Eigenspectrum Metrics for Analyzing High-Dimensional Latent Space ‣ 2 NerVE: A Principled Framework for Eigenspectrum Analysis ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks")), compared to the simpler KL divergence, for eigenspectrum analysis are:

1. Symmetry: $\text{JS}(P_{\text{pre}}\parallel P_{\text{post}})=\text{JS}(P_{\text{post}}\parallel P_{\text{pre}})$ enables unbiased comparison of the pre- and post-activation eigenspectra (Briët and Harremoës, [2009](https://arxiv.org/html/2603.06922#bib.bib420 "Properties of classical and quantum jensen-shannon divergence")), whereas KL is asymmetric and would privilege one distribution as the reference.

2. Boundedness and interpretability: $0\leq\text{JS}(P_{\text{pre}}\parallel P_{\text{post}})\leq\ln(2)$, facilitating standardized comparisons across layers of different dimensions; the bounded scale makes JS values interpretable under distributional shift, unlike KL, which can yield unbounded values in isolation.

3. Numerical stability: JS offers superior numerical stability when analyzing eigenspectra with near-zero eigenvalues, which are common in neural network representations.
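To make these desiderata concrete, the sketch below computes the four metrics from a descending eigenvalue vector. One caveat: the exact EEE formula is defined in Section 2.2 of the paper and is not reproduced in this appendix, so `eee_proxy` substitutes the mean of the normalized cumulative-variance curve as an illustrative top-heaviness measure; all helper names here are ours, not the released API.

```python
import torch

EPS = 1e-12  # matches the stability epsilon from Appendix A


def spectral_entropy(evals: torch.Tensor) -> torch.Tensor:
    """SE: dispersion of the normalized eigenvalues; maximal for a flat spectrum."""
    p = evals / (evals.sum() + EPS)
    return -(p * torch.log(p + EPS)).sum()


def participation_ratio(evals: torch.Tensor) -> torch.Tensor:
    """PR: effective dimensionality in [1, D], driven mainly by the dominant modes."""
    return evals.sum() ** 2 / ((evals ** 2).sum() + EPS)


def eee_proxy(evals: torch.Tensor) -> torch.Tensor:
    """Top-heaviness proxy: mean of the normalized cumulative-variance curve.

    Assumes `evals` is sorted in descending order. This stands in for the paper's
    EEE; it is ~0.5 for a flat spectrum and approaches 1 as variance concentrates
    in the leading eigenvalues.
    """
    p = evals / (evals.sum() + EPS)
    return torch.cumsum(p, dim=0).mean()


def js_divergence(evals_pre: torch.Tensor, evals_post: torch.Tensor) -> torch.Tensor:
    """JS: symmetric, bounded by ln(2), between the two normalized spectra."""
    p = evals_pre / (evals_pre.sum() + EPS)
    q = evals_post / (evals_post.sum() + EPS)
    m = 0.5 * (p + q)

    def kl(a, b):
        return (a * (torch.log(a + EPS) - torch.log(b + EPS))).sum()

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```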
### B.2 Metric Relationships and Failure Modes

While the four metrics are selected to capture distinct spectral properties, they are not statistically independent; in many training regimes, SE and PR move together because both respond to overall spectral flattening. Understanding when the metrics agree, when they diverge, and when individual metrics can mislead is crucial for reliable diagnostics.

When metrics agree. During standard training of well-configured models (e.g., GPT-2 PreLN with GELU), SE and PR typically co-increase while EEE decreases, highlighting nonlinearity-induced rank inflation through variance reinjection into previously inactive directions. JS decreases across depth, especially in deeper layers, indicating less geometric restructuring in deeper FFNs. When all four metrics align in this pattern, the diagnostic is straightforward: the FFN latent space is being utilized effectively.

When metrics diverge, and why it matters. Divergence between metrics signals that the spectral change is localized to a particular part of the eigenspectrum, which is diagnostically informative:

1. (i) SE rises but PR is stable: variance is redistributing in the mid-to-tail eigenvalues without affecting the top eigenvalues that dominate PR. This happens when the nonlinearity reshapes the spectral tail without shifting the dominant directions.

2. (ii) PR rises but EEE also rises: this seemingly paradoxical pattern occurs when a few new directions gain substantial variance while the rest remain near zero. PR increases because there are more active directions, but EEE increases because the new directions are still much larger than the inactive ones. This signals training stages where the network is allocating capacity to new features without distributing variance uniformly.

3. (iii) SE and PR rise but JS is near zero: both the pre- and post-activation spectra are flattening, but the nonlinearity is not reshaping the spectrum; it acts as a nearly linear scaler, suggesting the FFN may not be fully leveraging its nonlinear capacity.

When individual metrics can mislead. In the following scenarios, relying on a single eigen-metric could lead to misinterpretation of the underlying learning dynamics.

1. (i) PR without EEE can mislead on performance: when hyperspherical normalization is applied to the FFN weights of a norm-free model, it achieves the highest PR_post (exceeding 600) but worse perplexity than spectral normalization (Section [3.3](https://arxiv.org/html/2603.06922#S3.SS3 "3.3 Feed-Forward Networks Weight Geometry and Eigenspectrum Dynamics ‣ 3 Experimental Results ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks")), because its $\Delta$EEE flattening does not persist and EEE_post remains high across depth.

2. (ii) PR in bottleneck regimes: at FFN width $D=1d$ (no dimensional expansion), PR is constrained to a narrow range regardless of spectral quality. PR values in this regime should not be compared directly to wider configurations without normalization by $D$.

3. (iii) SE is blind to absolute scale: since SE operates on normalized eigenvalues ($\hat{\lambda}_{i}=\lambda_{i}/\operatorname{Tr}(\Sigma)$), it is insensitive to absolute scale and is best used to locate spectral bottlenecks (layers with very low SE; see Figure [11](https://arxiv.org/html/2603.06922#A3.F11 "Figure 11 ‣ C.1 Spectral Signature of LayerNorm Positioning: PreLN, MixLN, and PostLN ‣ Appendix C Eigenspectral Signature ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks")). Two FFNs with identical spectral shapes but substantially different total variance ($\operatorname{Tr}(\Sigma)$) yield identical SE values.
Thus, when total variance matters, for instance when comparing different layers within a network (activation magnitudes vary substantially across depth), PR should be preferred over SE.

4. (iv) EEE saturation: in highly anisotropic spectra (e.g., a single dominant eigenvalue with the rest near zero), EEE saturates near its upper bound and becomes insensitive to further concentration. In this regime, PR (which continues to decrease toward 1) is the more informative metric.

Diagnostic references. Table [5](https://arxiv.org/html/2603.06922#A2.T5 "Table 5 ‣ B.2 Metric Relationships and Failure Modes ‣ Appendix B Eigenspectrum Metrics: Design Principles and Diagnostic Guide ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks") summarizes the joint metric signatures most frequently observed across our experiments, along with their interpretation. To avoid misinterpretation, all four metrics should be reported jointly.

Table 5: Diagnostic reference for joint eigen-metric signatures. Each row describes a commonly observed pattern, its interpretation, and where in the paper it is empirically demonstrated.

### B.3 Decomposing Nonlinearity-Induced Eigenspectrum Restructuring

In addition to the JS metric, we use two derived cross-activation quantities: the participation-ratio gain PR(Post/Pre) and the EEE difference $\Delta$EEE = EEE_post − EEE_pre. These quantities decompose what the nonlinearity does to the eigenspectrum (see the sketch at the end of this subsection):

* JS measures the total magnitude of spectral change, without indicating its direction.

* PR(Post/Pre) measures dimensional expansion: whether the nonlinearity activated new directions.

* $\Delta$EEE measures top-heaviness reduction: whether the nonlinearity suppressed the dominance of the leading eigenvalues. More negative $\Delta$EEE indicates stronger flattening.

These metrics are complementary, not redundant: dominant-mode redistributions can raise JS with little PR gain, whereas tail spreading across many small modes can yield large PR gains with a typically more moderate increase in JS.

Effort does not imply outcome. A large PR gain does not imply a high PR_post. Under AdamW (Section [3.7](https://arxiv.org/html/2603.06922#S3.SS7 "3.7 Optimizer-Dependent Role of FFN Nonlinearity: Repair vs Refinement ‣ 3 Experimental Results ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks")), nonlinearities yield the largest PR gains and highest JS, yet the lowest PR_post (vs. Muon/Dion). In contrast, Muon attains the highest PR_post with the smallest PR gains and lowest JS. The difference is that AdamW collapses the pre-activation spectrum, so the nonlinearity expends capacity on _repair_ (large corrective gains) that only recovers a mediocre state, whereas Muon preserves a well-conditioned pre-spectrum and requires only modest _refinement_.

Sustained flattening matters. A high PR_post alone is not sufficient: HNorm (Section [3.3](https://arxiv.org/html/2603.06922#S3.SS3 "3.3 Feed-Forward Networks Weight Geometry and Eigenspectrum Dynamics ‣ 3 Experimental Results ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks")) attains a larger PR_post yet underperforms, whereas SNorm maintains consistently lower EEE_post throughout training (sustained flattening) and achieves the best perplexity among the weight-normalization methods.
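A small sketch tying the three decomposition quantities together, reusing the hypothetical metric helpers from the Appendix B.1 sketch:

```python
def nonlinearity_decomposition(evals_pre, evals_post):
    """Derived cross-activation quantities (sketch; helper names as in the
    Appendix B.1 sketch, with eee_proxy standing in for the paper's EEE)."""
    return {
        # total magnitude of spectral change, direction-agnostic
        "JS": js_divergence(evals_pre, evals_post),
        # dimensional expansion: did the nonlinearity activate new directions?
        "PR_gain": participation_ratio(evals_post) / participation_ratio(evals_pre),
        # top-heaviness reduction: more negative means stronger flattening
        "delta_EEE": eee_proxy(evals_post) - eee_proxy(evals_pre),
    }
```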
Characteristic patterns. Table [6](https://arxiv.org/html/2603.06922#A2.T6 "Table 6 ‣ B.3 Decomposing Nonlinearity-Induced Eigenspectrum Restructuring ‣ Appendix B Eigenspectrum Metrics: Design Principles and Diagnostic Guide ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks") summarizes four diagnostically distinct combinations observed across our experiments, each reflecting a qualitatively different regime of nonlinear geometric work.

Table 6: Four regimes of nonlinear geometric work, distinguished by the joint signature of JS, PR(Post/Pre), and $\Delta$EEE. JS alone cannot distinguish beneficial restructuring from compensatory repair; PR gain alone cannot distinguish beneficial restructuring from expansion without equalization.

Appendix C Eigenspectral Signature
----------------------------------

### C.1 Spectral Signature of LayerNorm Positioning: PreLN, MixLN, and PostLN

Figure [11](https://arxiv.org/html/2603.06922#A3.F11 "Figure 11 ‣ C.1 Spectral Signature of LayerNorm Positioning: PreLN, MixLN, and PostLN ‣ Appendix C Eigenspectral Signature ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks") illustrates the post-activation spectral signatures (SE_post and PR_post) for three LayerNorm placements (PreLN, PostLN, and MixLN) across GPT2-125M, LLaMA-70M, and LLaMA-130M. These signatures highlight the extent of FFN latent-space utilization across layers. Within each model family, the ranking of evaluation perplexities is corroborated by the spectral signatures: models with lower perplexity exhibit higher utilization.

GPT2-125M
![Image 12: Refer to caption](https://arxiv.org/html/2603.06922v1/x12.png)

LLaMA-70M
![Image 13: Refer to caption](https://arxiv.org/html/2603.06922v1/x13.png)

LLaMA-130M
![Image 14: Refer to caption](https://arxiv.org/html/2603.06922v1/x14.png)

Figure 11: Eigenspectral impact of LayerNorm placement (PreLN, PostLN, MixLN) in GPT-2 and LLaMA variants (70M, 130M). The spectral signatures are shown through post-activation spectral entropy (↑) and participation ratio (↑), and each model’s perplexity is shown on top of each plot. GPT-2 models are trained on CodeParrot and the LLaMA variants on C4.

GPT2-125M: Performance follows the order PreLN (PPL = 2.714) > MixLN (2.808) > PostLN (2.830). The spectral signatures follow this ranking: PreLN exhibits superior spectral entropy and participation ratio trends across layers, indicating more effective FFN latent-space utilization, while PostLN shows the most constrained spectral characteristics. In particular, L7 in PostLN and MixLN shows very low utilization compared to the PreLN configuration.

LLaMA-70M: PostLN achieves the lowest perplexity (PPL = 33.6), followed by MixLN (33.9), while PreLN performs worst (34.2). The eigenspectral analysis shows that PreLN exhibits substantially lower spectral entropy in the deeper layers (L7-L8) compared to PostLN and MixLN. PostLN, in turn, demonstrates superior spectral characteristics with higher participation ratios in the deeper layers compared to MixLN, consistent with its lower perplexity.

LLaMA-130M: PreLN yields the best performance with a perplexity of 26.4, followed closely by PostLN (26.7) and MixLN (26.8). While PostLN exhibits stronger spectral entropy and participation ratio in the mid-depth layers (L6-L9), PreLN consistently outperforms it across the remaining layers, particularly L10, where PostLN deteriorates, undermining its overall benefits.
In contrast, MixLN shows the worst spectral profile, leading to the highest perplexity.

### C.2 Spectral Signature in Larger LLaMA Models

![Image 15: Refer to caption](https://arxiv.org/html/2603.06922v1/x15.png)

LLaMA-250M

LLaMA-1.3B

![Image 16: Refer to caption](https://arxiv.org/html/2603.06922v1/x16.png)

Figure 12: Eigenspectral impact of LayerNorm placement (PreLN vs MixLN) in LLaMA-250M and LLaMA-1.3B models, trained from scratch on the C4 dataset for 20K iterations with a 256-token context length. The spectral signatures are shown through SE_post (↑) and PR_post (↑), and each model’s perplexity is shown on top of each plot. The MixLN variant of LLaMA-1.3B destabilized after 7K iterations.

Figure [12](https://arxiv.org/html/2603.06922#A3.F12 "Figure 12 ‣ C.2 Spectral Signature in Larger LLaMA Models ‣ Appendix C Eigenspectral Signature ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks") shows the post-activation eigenspectral signatures (SE_post and PR_post) for two LayerNorm placements, PreLN and MixLN, in LLaMA-250M and LLaMA-1.3B. Note that PostLN is excluded since it becomes unstable at these larger scales (Li et al., [2025](https://arxiv.org/html/2603.06922#bib.bib398 "Mix-LN: unleashing the power of deeper layers by combining pre-LN and post-LN")). These spectral signatures provide a quantitative assessment of latent-space utilization in each FFN.

LLaMA-250M: MixLN outperforms the PreLN configuration by a margin of 0.3 PPL (24.2 vs 24.5). The spectral signature of PreLN exhibits consistently lower SE in layers 7 to 16 and lower PR in the mid-depth region (Figure [12](https://arxiv.org/html/2603.06922#A3.F12 "Figure 12 ‣ C.2 Spectral Signature in Larger LLaMA Models ‣ Appendix C Eigenspectral Signature ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks")). MixLN avoids this mid-network spectral-collapse pattern by confining the low-SE bands to the early layers while maintaining higher SE and PR through the mid-depth layers, effectively placing most of the usable capacity where the model does the bulk of its computation. This redistribution of spectral entropy and participation ratio across depth is consistent with the lower perplexity achieved by the MixLN model.

LLaMA-1.3B: At the 1.3B scale, MixLN’s strategic LayerNorm placement fails to produce a favorable capacity distribution, and training collapses after 7K steps. PreLN, by contrast, maintains stable spectral entropy across layers and achieves a notably higher participation ratio in the deeper layers throughout training. This divergence in spectral behavior aligns with the final performance gap, as MixLN’s perplexity explodes to 1457.1 compared to 21.2 for PreLN.

These results suggest that LayerNorm placement significantly influences how effectively the FFN utilizes its latent space. Pre-LN configurations appear to better preserve and amplify feature diversity, especially in deeper layers, thereby enabling higher-dimensional latent representations. The observed gains in the MixLN setup indicate that even partial use of PreLN can compensate for the limitations of PostLN, offering a potential strategy for balancing stability and expressivity.
### C.3 Spectral Signature of Positional Encoding: NoPE vs RoPE

Figure [13](https://arxiv.org/html/2603.06922#A3.F13 "Figure 13 ‣ C.3 Spectral Signature of Positional Encoding: NoPE vs RoPE ‣ Appendix C Eigenspectral Signature ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks") shows the spectral signature of rotary positional encoding (RoPE) in contrast with no positional encoding (NoPE). In particular, the layerwise spectral entropy and EEE values of the post-activation eigenspectrum are shown.

![Image 17: Refer to caption](https://arxiv.org/html/2603.06922v1/x17.png)

![Image 18: Refer to caption](https://arxiv.org/html/2603.06922v1/x18.png)

Figure 13: Spectral signature of positional encoding (RoPE vs NoPE) in GPT-2 models trained with a 512-token context size on 26B tokens from the OpenWebText dataset.

Appendix D LayerNorm Positioning and FFN Width Sweep
----------------------------------------------------

Section [3.4](https://arxiv.org/html/2603.06922#S3.SS4 "3.4 Impact of LayerNorm Positioning on the FFN Latent Space Dimensionality ‣ 3 Experimental Results ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks") reports the normalized PR_post (PR/$D$) to compare the _efficiency_ of width utilization across LayerNorm configurations. Table [7](https://arxiv.org/html/2603.06922#A4.T7 "Table 7 ‣ Appendix D LayerNorm Positioning and FFN Width Sweep ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks") complements this with absolute PR_post values, revealing the scale of the gap that the normalized view can mask. At $D=6144$, PreLN sustains PR_post ≈ 1822 effective dimensions, compared to 71 for PostLN: a 25× difference in absolute latent capacity that Figure [6](https://arxiv.org/html/2603.06922#S3.F6 "Figure 6 ‣ 3.4 Impact of LayerNorm Positioning on the FFN Latent Space Dimensionality ‣ 3 Experimental Results ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks") can understate. MixLN remains closer to PostLN throughout the width sweep, fluctuating between PR_post ≈ 137 and 278 without scaling proportionally with $D$, suggesting that neither MixLN nor PostLN converts additional width into usable dimensions as effectively as PreLN.

Table 7: PR_post (median ± MAD across 12 layers) at the final training checkpoint for different LayerNorm placements and FFN widths in GPT-2 (125M).
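For concreteness, the normalized view used here is just PR divided by the FFN width; a one-line sketch, reusing the hypothetical `participation_ratio` helper from the Appendix B.1 sketch:

```python
def width_utilization(evals, D: int) -> float:
    """Normalized PR = PR / D: the fraction of the FFN width that is used as
    effective dimensions (sketch; helper name is ours)."""
    return float(participation_ratio(evals) / D)
```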
Appendix E Spectral Signature Across FFN Width Sweeps in Baseline Models
------------------------------------------------------------------------

![Image 19: Refer to caption](https://arxiv.org/html/2603.06922v1/x19.png)

![Image 20: Refer to caption](https://arxiv.org/html/2603.06922v1/x20.png)

Figure 14: Eigen-metrics in GPT-2 (GELU, $D=1d$ to $8d$) with Pearson $r$ to eval loss ($r_{\text{pre}}$, $r_{\text{post}}$).

![Image 21: Refer to caption](https://arxiv.org/html/2603.06922v1/x21.png)

![Image 22: Refer to caption](https://arxiv.org/html/2603.06922v1/x22.png)

Figure 15: Eigen-metrics in GPT-2 (ReLU, $D=1d$ to $8d$) with Pearson $r$ to eval loss ($r_{\text{pre}}$, $r_{\text{post}}$).

Appendix F Spectral Signature Across FFN Width Sweeps in Norm-free Models
-------------------------------------------------------------------------

![Image 23: Refer to caption](https://arxiv.org/html/2603.06922v1/x23.png)

![Image 24: Refer to caption](https://arxiv.org/html/2603.06922v1/x24.png)

Figure 16: SE and PR in normalization-free GPT-2 (GELU, $D=1d$ to $8d$) with Pearson correlation to eval loss ($r_{\text{pre}}$, $r_{\text{post}}$).

![Image 25: Refer to caption](https://arxiv.org/html/2603.06922v1/x25.png)

![Image 26: Refer to caption](https://arxiv.org/html/2603.06922v1/x26.png)

Figure 17: SE and PR in normalization-free GPT-2 (ReLU, $D=1d$ to $8d$) with Pearson correlation to eval loss ($r_{\text{pre}}$, $r_{\text{post}}$).

Appendix G Token Sub-Sampling and Low-Rank Approximation
--------------------------------------------------------

Full-batch covariance computation is exact but becomes costly at scale (see Appendix [H](https://arxiv.org/html/2603.06922#A8 "Appendix H Computational and Memory Efficiency of NerVE ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks")). We evaluate two approximation strategies: token-level sub-sampling (using 5%, 10%, 25%, and 50% of tokens) and low-rank eigendecomposition. For the latter, we use RandSVD (Halko et al., [2011](https://arxiv.org/html/2603.06922#bib.bib456 "Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions")), which uses randomized projections (optionally with power iterations) to approximate the top-$k$ eigenspectrum, and Lanczos iteration (Lanczos, [1950](https://arxiv.org/html/2603.06922#bib.bib457 "An iteration method for the solution of the eigenvalue problem of linear differential and integral operators")). We evaluate both at ranks $k\in\{256,512\}$, covering two widely used paradigms for scalable eigendecomposition (Ghorbani et al., [2019](https://arxiv.org/html/2603.06922#bib.bib458 "An investigation into neural net optimization via hessian eigenvalue density")).

### G.1 Eigen-metric Fidelity and Diagnostic Validity Under Approximations

Token sub-sampling better preserves metric fidelity than low-rank truncation. Table [8](https://arxiv.org/html/2603.06922#A7.T8 "Table 8 ‣ G.1 Eigen-metric Fidelity and Diagnostic Validity Under Approximations ‣ Appendix G Token Sub-Sampling and Low-Rank Approximation ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks") reports the percentage error relative to full-batch eigendecomposition for GPT-2 (GELU). Token sub-sampling preserves all four metrics with minimal distortion: at 10% sampling, the worst-case error is 10.24% (PR_post), with SE and EEE errors below 2%.
By contrast, low-rank methods introduce substantial bias: because SE, PR, EEE, and JS aggregate information across the full spectrum, truncating tail eigenvalues systematically distorts the metrics, with PR_post errors exceeding 90% even at rank 512. Therefore, token sub-sampling should be preferred over low-rank approximation when computational constraints require approximate eigendecomposition.

Table 8: Percentage error of eigen-metrics when token-sampling and low-rank approximation methods are used in GPT-2.

Approximation preserves pre-activation but not post-activation diagnostic power. Low absolute error in a metric does not necessarily preserve its _diagnostic utility_: approximations can alter the rank-ordering of configurations and weaken their correlation with the network’s generalization performance. Table [9](https://arxiv.org/html/2603.06922#A7.T9 "Table 9 ‣ G.1 Eigen-metric Fidelity and Diagnostic Validity Under Approximations ‣ Appendix G Token Sub-Sampling and Low-Rank Approximation ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks") evaluates the correlation between each eigen-metric and validation loss under approximation.

Table 9: Effect of token-sampling and low-rank approximation on metric–(eval)loss correlation, evaluated on GPT-2 (125M). Token-sampling preserves pre-metric trends but degrades post-metric correlations since tail eigenvalues are under-sampled; low-rank truncation distorts both.

Token sub-sampling largely preserves _pre-activation_ correlations: SE_pre remains |r| > 0.97 and PR_pre remains |r| > 0.91 across sampling ratios (5%–50%). In contrast, _post-activation_ correlations are less stable under sub-sampling (SE_post drops to |r| ≈ 0.34–0.38 and PR_post to |r| < 0.24, with a sign flip at 25%), suggesting that sub-sampling noise disproportionately affects the mid-to-tail spectrum.

Low-rank approximations degrade correlations more severely: RandSVD yields weak correlations for both pre- and post-activation metrics (SE_pre |r| < 0.18, PR_pre |r| < 0.05). Lanczos performs better for pre-activation metrics (e.g., Lanczos-512: SE_pre |r| = 0.96, PR_pre |r| = 0.90), but post-activation correlations remain low (SE_post |r| < 0.34, PR_post |r| < 0.14). Overall, token sub-sampling at ≥10% preserves most pre-activation diagnostic power with minimal loss, whereas correlation-based analyses of post-activation metrics benefit from full-batch covariance estimation (|r| > 0.84, Table [3](https://arxiv.org/html/2603.06922#S3.T3 "Table 3 ‣ 3.5 Eigenspectral Signatures Predict Generalization ‣ 3 Experimental Results ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks")). Low-rank truncation is unsuitable when preserving metric–loss correlations is the primary objective.
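A minimal sketch of such a fidelity check, under our own assumptions (centered full-batch covariance as the reference; the data and names are stand-ins, not the paper’s code):

```python
import torch

def eigenspectrum(H: torch.Tensor) -> torch.Tensor:
    """Eigenvalues of the centered token covariance of activations H [N, D]."""
    Hc = H - H.mean(0)
    return torch.linalg.eigvalsh(Hc.T @ Hc / Hc.shape[0])

def spectral_entropy(lam: torch.Tensor) -> torch.Tensor:
    p = lam.clamp(min=1e-12)
    p = p / p.sum()
    return -(p * p.log()).sum()

H = torch.randn(100_000, 3072)                        # stand-in activations
idx = torch.randperm(H.shape[0])[: H.shape[0] // 10]  # 10% token sub-sampling
se_full = spectral_entropy(eigenspectrum(H))
se_sub = spectral_entropy(eigenspectrum(H[idx]))
err = (100 * (se_sub - se_full).abs() / se_full).item()
print(f"SE error under 10% sub-sampling: {err:.2f}%")

# RandSVD-style low-rank alternative: only the top-k spectrum is recovered,
# so full-spectrum metrics (PR, EEE, JS) are biased by the truncated tail.
U, S, V = torch.svd_lowrank(H - H.mean(0), q=512, niter=2)
lam_top = S ** 2 / H.shape[0]                         # top-512 covariance eigenvalues
```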
+

### G.2 Spectral Signature of Sampling and Low-Rank Approximation

![Image 27: Refer to caption](https://arxiv.org/html/2603.06922v1/x27.png)

![Image 28: Refer to caption](https://arxiv.org/html/2603.06922v1/x28.png)

Figure 18: SE and PR in GPT-2 with Pearson r to eval loss under sub-sampling (5%, 10%, 25%, and 50%)

![Image 29: Refer to caption](https://arxiv.org/html/2603.06922v1/x29.png)

![Image 30: Refer to caption](https://arxiv.org/html/2603.06922v1/x30.png)

Figure 19: SE and PR in GPT-2 with Pearson r to eval loss under low-rank approximation.

![Image 31: Refer to caption](https://arxiv.org/html/2603.06922v1/x31.png)

Figure 20: Impact of sampling (10%, 25%, 50%) and low-rank approximation on EEE (GPT-2)

![Image 32: Refer to caption](https://arxiv.org/html/2603.06922v1/x32.png)

Figure 21: Impact of sampling (10%, 25%, 50%) and low-rank approximation on the JS metric (GPT-2)

Appendix H Computational and Memory Efficiency of NerVE
-------------------------------------------------------

### H.1 Computational Complexity and Memory Overheads of NerVE

Table [10](https://arxiv.org/html/2603.06922#A8.T10 "Table 10 ‣ H.1 Computational Complexity and Memory Overheads of NerVE ‣ Appendix H Computational and Memory Efficiency of NerVE ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks") reports wall-clock time and relative overhead for computing eigenspectrum metrics on GPT-2 (a 3072×3072 covariance) across approximation methods. Full-batch eigendecomposition takes 14.41s per logging step, corresponding to a 6.4% overhead when logging every 200 steps, and 1.3% when logging every 1000 steps. Token sub-sampling at 5% reduces this to 9.89s, with wall-clock time increasing gradually from 9.89s back to the full-batch 14.41s as the sampling ratio grows. At the recommended logging frequency of every 1000 steps, all methods, including full batch, add approximately 1% overhead, making approximation irrelevant at this scale. Sub-sampling or low-rank methods become relevant only at substantially larger FFN dimensions (D ≫ 3072).

Table 10: Wall-clock time and relative overhead for computing eigenspectrum metrics at various logging frequencies. Overhead is reported as a percentage of total training time when eigendecomposition is performed every 200 and 1000 steps on GPT-2 with a 3072×3072 FFN covariance matrix, running on an AMD EPYC 7502 server with an NVIDIA RTX 3090 GPU.

### H.2 Memory-efficient Eigenspectrum Analysis

To enable scalable eigenvalue analysis during training, we implemented the memory optimization strategies listed in Algorithm [1](https://arxiv.org/html/2603.06922#alg1 "Algorithm 1 ‣ H.2 Memory-efficient Eigenspectrum Analysis ‣ Appendix H Computational and Memory Efficiency of NerVE ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks"). As shown in Table [11](https://arxiv.org/html/2603.06922#A8.T11 "Table 11 ‣ H.2 Memory-efficient Eigenspectrum Analysis ‣ Appendix H Computational and Memory Efficiency of NerVE ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks"), these memory optimizations significantly reduce peak GPU memory usage. For instance, for the GPT-2 model (D = 4d), the peak GPU memory usage is restricted to ≈ 2×36 MB per layer rather than accumulating 2×12×36 MB = 864 MB across all FFNs per logging step.

Table 11: GPU memory overhead for full-batch eigen-computation across various FFN widths.
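The ≈36 MB per-covariance figure follows directly from the matrix size, assuming fp32 storage (our arithmetic, not stated explicitly in the table):

$$3072 \times 3072 \;\text{entries} \times 4\ \text{bytes} = 37{,}748{,}736\ \text{bytes} = 36\ \text{MiB},$$

so tracking both the pre- and post-activation covariances costs the quoted 2×36 MB per layer.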
+

The complete pipeline’s wall-clock time, following the steps in Algorithm [1](https://arxiv.org/html/2603.06922#alg1 "Algorithm 1 ‣ H.2 Memory-efficient Eigenspectrum Analysis ‣ Appendix H Computational and Memory Efficiency of NerVE ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks"), is measured as follows:

```python
import time
import torch

torch.cuda.synchronize()            # make sure prior GPU work has finished
start_time = time.time()

# ... run the full eigenspectrum pipeline (Algorithm 1; elided in source) ...

torch.cuda.synchronize()            # wait for the eigendecomposition kernels to complete
end_time = time.time()
overhead = end_time - start_time
```

Listing 1: Wall-clock timing measurement for eigenspectrum-based computational overhead

To enable scalable eigenvalue analysis during training, we implement three memory optimization strategies that significantly reduce GPU memory overhead:

1. Eigenvalue-only computation: Since our eigen metrics depend only on eigenvalues, we employ torch.linalg.eigvalsh to compute eigenvalues without eigenvectors.

```python
# cov: [D, D] activation covariance (symmetric PSD)
# Eigenvalues only: avoids materializing the D x D eigenvector matrix
vals = torch.linalg.eigvalsh(cov)

# Costlier alternatives that also return eigenvectors:
# vals, vecs = torch.linalg.eigh(cov)   # symmetric/Hermitian input
# vals, vecs = torch.linalg.eig(cov)    # general square input
```

Listing 2: Memory-efficient eigenvalue computation

2. Sequential layer processing: Rather than computing eigenvalues for all layers simultaneously, we process layers sequentially with memory cleanup between computations:

```python
import gc

for layer_idx in sorted(self.layer_pre_acts.keys()):
    # ... compute eigenvalues and metrics for this layer (elided in source) ...
    self.layer_pre_acts[layer_idx].clear()   # drop this layer's stored activations
    gc.collect()                             # release Python-side references
    torch.cuda.empty_cache()                 # return cached blocks to the GPU
```

Listing 3: Sequential layer processing with memory cleanup

This approach maintains peak GPU memory usage at ∼2×36 MB per layer rather than accumulating 2×12×36 MB = 864 MB across all FFNs per logging step (Table [11](https://arxiv.org/html/2603.06922#A8.T11 "Table 11 ‣ H.2 Memory-efficient Eigenspectrum Analysis ‣ Appendix H Computational and Memory Efficiency of NerVE ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks")).

3. Hybrid storage strategy: We store activation tensors on CPU memory while performing eigenvalue computations on GPU, balancing memory efficiency with computational speed (a sketch follows below).
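A minimal sketch of this hybrid CPU/GPU strategy, assuming a forward hook on the FFN activation (our illustration; the module path in the comment is hypothetical):

```python
import torch

cpu_acts = []  # activations parked on CPU until the logging step

def save_to_cpu(module, inputs, output):
    # flatten [B, S, D] -> [B*S, D] and offload asynchronously to CPU
    cpu_acts.append(output.detach().flatten(0, -2).to("cpu", non_blocking=True))

# hypothetical hook registration on the FFN nonlinearity of layer 0:
# handle = model.transformer.h[0].mlp.act.register_forward_hook(save_to_cpu)

def eigvals_from_cpu(acts, device="cuda"):
    H = torch.cat(acts).to(device)   # move once; covariance + eigvalsh run on GPU
    H = H - H.mean(0)
    return torch.linalg.eigvalsh(H.T @ H / H.shape[0]).cpu()
```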
+

Algorithm 1 Memory-Efficient Eigenspectrum Analysis

1: for layer ℓ = 1, …, L do
2:   H_ℓ ← GetActivations(ℓ)              ▷ Store on CPU
3:   λ_ℓ ← eigvalsh(H_ℓᵀ H_ℓ / N)         ▷ O(D) memory
4:   Compute metrics SE, PR, EEE, JS from λ_ℓ
5:   del H_ℓ                              ▷ Immediate cleanup
6: end for

Appendix I Eigenspectrum Dynamics in a Non-Transformer Architecture
-------------------------------------------------------------------

We selected MLP-Mixer (Tolstikhin et al., [2021](https://arxiv.org/html/2603.06922#bib.bib450 "Mlp-mixer: an all-mlp architecture for vision")) as a non-transformer architecture for three key reasons: 1) it retains the core architectural block that we study in this work, wide MLPs/FFNs with LayerNorm and GELU activations, while completely removing the self-attention block; 2) the channel-mixing MLPs/FFNs in MLP-Mixer are functionally analogous to FFNs in Transformers, as they expand and squeeze the latent representation with an intervening nonlinearity and are stacked across the network’s depth; and 3) it isolates the contribution of FFN nonlinear transformations from attention-specific dynamics such as rank collapse in self-attention (Dong et al., [2021](https://arxiv.org/html/2603.06922#bib.bib391 "Attention is not all you need: pure attention loses rank doubly exponentially with depth")).

Moreover, as MLP-Mixer is designed for vision tasks, it has fundamentally different inductive biases compared to decoder-only LLMs. More importantly, it allows us to extend the eigenspectrum analysis from language modeling to vision tasks, verifying generalization across modalities. This lets us ask: do the eigenspectrum patterns we observe depend fundamentally on the attention mechanism, or are they a more general property of deep feedforward layers?

Settings: model, dataset, and training hyperparameters. We use MLP-Mixer B/16 models adapted for CIFAR-100 (32×32 images, 4×4 patches, 64 tokens per sequence), since this closely matches the key architectural parameters of GPT-2 and enables direct comparison. The architecture consists of 12 Mixer blocks with embedding dimension 768. Each block contains two MLPs: token-mixing (hidden dimension 384) and channel-mixing (hidden dimension 3072), both using LayerNorm and GELU activation. This configuration results in 57.4M trainable parameters.

We train Mixer models for 120 epochs using the Adam optimizer (Kingma, [2015](https://arxiv.org/html/2603.06922#bib.bib437 "Adam: a method for stochastic optimization")) with a learning rate of 1e-3 and weight decay of 5e-5. We use a batch size of 128 and employ cosine annealing for learning rate scheduling with 5 epochs of linear warmup. For data augmentation, we apply AutoAugment and CutMix to improve generalization.

Methodology. We apply our NerVE framework to compute eigenspectrum metrics (pre- and post-activation) for all 12 channel-mixing MLPs (dimension 3072) at epochs 1, 10, 20, 40, 80, and 120. We use full-batch covariance estimation with no sampling. Hence, at each logging epoch, we accumulate the statistics across the entire training dataset in a single forward pass, which yields covariance matrices from 3.2M samples per layer.
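A single-pass accumulation of this kind can be implemented with streaming sufficient statistics. Below is a minimal sketch (our illustrative code, not the released implementation) using the standard identity Cov(H) = E[hhᵀ] − μμᵀ; since NerVE works on centered activations, the running mean is subtracted at the end.

```python
import torch

class CovAccumulator:
    """Builds an exact full-dataset covariance from streamed batches without
    storing all activations: keeps only sum(h), sum(h h^T), and a count."""
    def __init__(self, dim: int, device: str = "cuda"):
        self.s1 = torch.zeros(dim, device=device)        # running sum of activations
        self.s2 = torch.zeros(dim, dim, device=device)   # running sum of outer products
        self.n = 0

    def update(self, H: torch.Tensor):                   # H: [tokens, dim]
        self.s1 += H.sum(0)
        self.s2 += H.T @ H
        self.n += H.shape[0]

    def covariance(self) -> torch.Tensor:
        mu = self.s1 / self.n
        return self.s2 / self.n - torch.outer(mu, mu)    # centered covariance

# per logging epoch: acc = CovAccumulator(3072); acc.update(batch_acts) per batch;
# lam = torch.linalg.eigvalsh(acc.covariance())
```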
Our GPU-optimized implementation accumulates statistics for all layers simultaneously, requiring ∼432 MB of additional GPU memory.

For an extensive eigenspectral dynamics study, we evaluate four architectural variants by systematically varying the activation functions in the token-mixing (FFN1) and channel-mixing (FFN2) layers: (1) GELU, GELU (baseline); (2) GELU, ReLU; (3) ReLU, GELU; and (4) ReLU, ReLU. We apply the full NerVE analysis to all configurations, tracking eigenspectrum metrics across all 12 layers. Figure [22](https://arxiv.org/html/2603.06922#A9.F22 "Figure 22 ‣ Appendix I Eigenspectrum Dynamics in a Non-Transformer Architecture ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks") shows the eigenspectrum dynamics for each configuration, demonstrating how the choice of activation function impacts spectral properties during training.

![Image 33: Refer to caption](https://arxiv.org/html/2603.06922v1/x33.png)

Figure 22: Eigenspectrum dynamics in MLP-Mixer under activation ablations. Rows correspond to the four activation configurations for token-mixing (FFN1) and channel-mixing (FFN2) layers, and columns (from left to right) show spectral entropy (SE), participation ratio (PR), eigenvalue early enrichment (EEE), and Jensen-Shannon divergence (JS) for the channel-mixing FFNs (FFN2). Each panel traces pre- and post-activation metrics over training, showing that ReLU in the channel-mixing MLP (3rd and 4th rows) most strongly increases SE/PR and reduces EEE, i.e., reinjects variance into low-energy directions and flattens the spectrum.

Observations. First, across all four settings, the post-activation eigenspectra consistently exhibit higher spectral entropy and participation ratio than their pre-activation counterparts, with the gap widening rapidly in the first 20 to 40 epochs, indicating that the Mixer nonlinearities reliably expand the effective dimensionality of the FFN representations rather than merely reshuffling variance.

Second, swapping GELU for ReLU in the channel-mixing FFN (FFN2) has a much stronger effect than changing the activation in the token-mixing FFN (FFN1). Configurations with ReLU in FFN2 show larger post-activation SE/PR and a more pronounced drop in EEE, indicating a flatter, less top-heavy spectrum where variance is redistributed away from the leading eigenmodes.

Third, the GELU (FFN1) and ReLU (FFN2) variant, 3rd row in Figure [22](https://arxiv.org/html/2603.06922#A9.F22 "Figure 22 ‣ Appendix I Eigenspectrum Dynamics in a Non-Transformer Architecture ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks"), yields the most aggressively flattened spectra, indicated by the highest post-activation PR and lowest post-activation EEE values, while maintaining small JS divergence across most layers, suggesting that ReLU in the channel-mixing FFN drives a more uniformly expanded, higher-effective-dimensionality latent space rather than inducing sharp layer-specific distortions.

Finally, JS divergence is concentrated mostly in early layers when the channel-mixing MLP (FFN2) uses the GELU nonlinearity, whereas ReLU in FFN2 produces a more uniform layerwise JS pattern whose remaining divergence sits mostly in deeper layers. This suggests that activation choice mainly modulates how much the boundary layers reshape the eigenspectrum, while the interior layers act as relatively stationary propagators of the learned latent representation.
+

Accuracy on CIFAR-100. Table [12](https://arxiv.org/html/2603.06922#A9.T12 "Table 12 ‣ Appendix I Eigenspectrum Dynamics in a Non-Transformer Architecture ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks") reports the accuracy for the activation ablation study in MLP-Mixer. ReLU in FFN2 leads to better accuracy since it flattens the spectrum (lower EEE_post) and reduces its top-heaviness, whereas GELU in FFN2 yields higher EEE values throughout training. The combination of GELU in FFN1 and ReLU in FFN2 attains the highest PR and lowest EEE, indicating the best utilization of the FFN2 latent space, and results in the best accuracy.

Table 12: Validation accuracy on CIFAR-100 for different activation function configurations in MLP-Mixer. FFN1 refers to the token-mixing MLP, FFN2 to the channel-mixing MLP.

Appendix J Token-Position Effects on FFN Eigenspectrum
------------------------------------------------------

To make the FFN eigenspectrum analysis explicitly sensitive to sequential structure, we stratify tokens by their position and recompute NerVE metrics within each position group during training. For GPT-2, we partition tokens into three groups along the sequence dimension (early, middle, late). For MLP-Mixer on CIFAR-100, we instead split patches into top vs. bottom rows of the 4×4 patch grid. For each configuration we then track the effective dimensionality of the pre- and post-activation eigenspectra using the participation ratio, both aggregated across layers and averaged per layer (see Figure [23](https://arxiv.org/html/2603.06922#A10.F23 "Figure 23 ‣ Appendix J Token-Position Effects on FFN Eigenspectrum ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks")).

GELU

![Image 34: Refer to caption](https://arxiv.org/html/2603.06922v1/x34.png)

![Image 35: Refer to caption](https://arxiv.org/html/2603.06922v1/x35.png)

![Image 36: Refer to caption](https://arxiv.org/html/2603.06922v1/x36.png)

ReLU

![Image 37: Refer to caption](https://arxiv.org/html/2603.06922v1/x37.png)

![Image 38: Refer to caption](https://arxiv.org/html/2603.06922v1/x38.png)

![Image 39: Refer to caption](https://arxiv.org/html/2603.06922v1/x39.png)

NormFree GELU

![Image 40: Refer to caption](https://arxiv.org/html/2603.06922v1/x40.png)

![Image 41: Refer to caption](https://arxiv.org/html/2603.06922v1/x41.png)

![Image 42: Refer to caption](https://arxiv.org/html/2603.06922v1/x42.png)

NormFree ReLU

![Image 43: Refer to caption](https://arxiv.org/html/2603.06922v1/x43.png)

![Image 44: Refer to caption](https://arxiv.org/html/2603.06922v1/x44.png)

![Image 45: Refer to caption](https://arxiv.org/html/2603.06922v1/x45.png)

NormFree LReLU

![Image 46: Refer to caption](https://arxiv.org/html/2603.06922v1/x46.png)

![Image 47: Refer to caption](https://arxiv.org/html/2603.06922v1/x47.png)

![Image 48: Refer to caption](https://arxiv.org/html/2603.06922v1/x48.png)

MLP-Mixer

![Image 49: Refer to caption](https://arxiv.org/html/2603.06922v1/x49.png)

![Image 50: Refer to caption](https://arxiv.org/html/2603.06922v1/x50.png)

![Image 51: Refer to caption](https://arxiv.org/html/2603.06922v1/x51.png)

Figure 23: Sequence-aware FFN eigenspectrum across token positions. For five GPT-2 (125M) variants (rows 1 to 5) and a non-transformer MLP-Mixer (row 6), tokens are grouped by position.
Left: pre-activation PR_pre; Middle: post-activation PR_post; Right: layer-wise PR_post. In the baseline GPT-2 variants (both GELU and ReLU), early/middle/late tokens have similar PR_pre but diverge strongly in PR_post, with late tokens using substantially higher effective dimensionality, mainly visible in deeper layers. In the normalization-free GPT-2 variants, however, these position-conditioned differences are largely suppressed in both pre- and post-activations. In the non-transformer MLP-Mixer trained with the SGD optimizer, where we split patches into top vs bottom rows, position effects are weak throughout. This shows that strong position-dependent FFN geometry is a characteristic of LayerNorm-based GPT-2.

Baseline GPT-2 exhibits strong position dependence only after the FFN nonlinearity. In both GPT-2 baseline variants (GELU and ReLU), the pre-activation PR shows almost no systematic differences between early, middle, and late tokens: the three boxes largely overlap, suggesting that the FFN representations before the nonlinearity have comparable effective dimensionality across positions. In contrast, the post-activation PR shows a distinctive ordering: late tokens consistently exhibit a substantially higher PR than early tokens, with middle tokens in between. This effect is particularly pronounced for GELU, where late-token PR_post is roughly twice that of early tokens, and remains visible under ReLU. The layerwise trends (right column) further reveal that this distinction emerges mainly in the second half of the network: deeper FFN layers allocate more latent degrees of freedom to later tokens, while early layers treat positions more uniformly.

Normalization-free GPT-2 suppresses position-dependent FFN geometry. When we remove the LayerNorm layers (NormFree-GELU and NormFree-ReLU), the position-induced differences shrink dramatically. Both pre- and post-activation PR distributions for early, middle, and late tokens nearly collapse onto each other, and the layerwise trends show very small gaps between position groups at all depths. That is, once normalization is removed, the FFN eigenspectra become almost position-agnostic. This contrast suggests that in standard GPT-2, LayerNorm coupled with the nonlinearity amplifies positional biases in the FFN latent space, a bias that largely disappears in the normalization-free settings.

Non-transformer MLP-Mixer shows only weak spatial position effects. For the MLP-Mixer model, we perform an analogous top-bottom split over patch positions in the baseline model (GELU in both MLPs) when trained from scratch on CIFAR-100. Here, position-conditioned PR differences are small in both pre- and post-activation eigenspectra, and the layerwise PR trends for top vs bottom patches almost coincide. This indicates that the strong position dependence observed in GPT-2 FFNs is not a generic property of FFNs: in a non-transformer architecture trained on images, NerVE sees only mild spatial variation in the FFN eigenspectrum.
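The position stratification itself is straightforward; a minimal sketch under our assumptions (equal thirds along the sequence axis; names and stand-in data are ours):

```python
import torch

def pr(lam: torch.Tensor) -> float:
    """Participation ratio from covariance eigenvalues."""
    lam = lam.clamp(min=0)
    return (lam.sum() ** 2 / (lam ** 2).sum()).item()

def stratified_pr(acts: torch.Tensor) -> dict:
    """acts: [B, S, D] FFN activations; PR computed within each position group."""
    B, S, D = acts.shape
    groups = {"early": acts[:, : S // 3],
              "middle": acts[:, S // 3 : 2 * S // 3],
              "late": acts[:, 2 * S // 3 :]}
    out = {}
    for name, g in groups.items():
        H = g.reshape(-1, D)
        H = H - H.mean(0)                                # center within the group
        out[name] = pr(torch.linalg.eigvalsh(H.T @ H / H.shape[0]))
    return out

print(stratified_pr(torch.randn(8, 512, 3072)))          # stand-in activations
```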
+

Appendix K FFN Eigenspectrum Dynamics: LayerNorm vs RMSNorm
-----------------------------------------------------------

### K.1 LayerNorm vs RMSNorm Ablation on GPT-2

To examine how sensitive our FFN eigenspectrum analysis is to normalization choices, we systematically vary both the placement of the normalization (PreLN, PostLN, MixLN) and the FFN activation type (GELU, ReLU, learnable leaky ReLU), and compare their effective post-activation dimensionality using the post-activation participation ratio (Figure [24](https://arxiv.org/html/2603.06922#A11.F24 "Figure 24 ‣ K.1 LayerNorm vs RMSNorm Ablation on GPT-2 ‣ Appendix K FFN Eigenspectrum Dynamics: LayerNorm vs RMSNorm ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks")).

![Image 52: Refer to caption](https://arxiv.org/html/2603.06922v1/x52.png)

![Image 53: Refer to caption](https://arxiv.org/html/2603.06922v1/x53.png)

Figure 24: Effect of LayerNorm vs RMSNorm on the FFN eigenspectrum. Top row: post-activation participation ratio (PR_post) in GPT-2 (125M) for three FFN activations (GELU, ReLU, learnable leaky ReLU), three normalization placements (PreLN, PostLN, MixLN), and two normalization variants. Within each activation-placement pair, the LayerNorm and RMSNorm boxplots substantially overlap, while much larger shifts arise from changing the activation type or the norm placement. Bottom row: layerwise comparison of PR_post under LayerNorm (x-axis) vs RMSNorm (y-axis). Each point represents one FFN layer. GELU and ReLU show similar PR ranges but weak layerwise correspondence (R² < 0.2), while learnable leaky ReLU exhibits strong alignment (R² ≈ 0.77). This indicates that FFN eigenspectrum patterns are primarily driven by activation type and are largely robust to the choice of normalization scheme.

FFN activation type and the placement of normalization layers are more consequential than LayerNorm vs RMSNorm. Figure [24](https://arxiv.org/html/2603.06922#A11.F24 "Figure 24 ‣ K.1 LayerNorm vs RMSNorm Ablation on GPT-2 ‣ Appendix K FFN Eigenspectrum Dynamics: LayerNorm vs RMSNorm ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks") (top row) shows the distribution of the post-activation participation ratio across normalization placements and FFN activation settings. Within each activation-placement pair, the LayerNorm and RMSNorm boxplots substantially overlap: their medians and interquartile ranges are of similar magnitude, and both exhibit the same depth-wise trend. For instance, PreLN exhibits the highest PR spread for GELU, while PostLN is more compressed.

By contrast, the _activation_ choice induces much larger shifts in PR: GELU yields the largest PR values overall, ReLU compresses the spectrum, and learnable leaky ReLU lies in between with a narrower spread. Likewise, changing PreLN to PostLN or MixLN has a clear effect on the PR distribution, whereas swapping LayerNorm for RMSNorm within a fixed placement produces only second-order changes that leave the qualitative trends intact.

Layerwise eigenspectral structure is largely preserved for LayerNorm vs RMSNorm. Figure [24](https://arxiv.org/html/2603.06922#A11.F24 "Figure 24 ‣ K.1 LayerNorm vs RMSNorm Ablation on GPT-2 ‣ Appendix K FFN Eigenspectrum Dynamics: LayerNorm vs RMSNorm ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks") (bottom row) compares LayerNorm and RMSNorm on a per-layer basis by plotting each layer’s PR_post under LayerNorm against its PR_post under RMSNorm.
For GELU and ReLU, the points cluster in a similar numeric range and follow a weak positive trend (R² ≈ 0.14 and R² ≈ 0.17, respectively): layers that are high-PR under LayerNorm tend to remain high-PR under RMSNorm, and low-PR layers remain low-PR, even though RMSNorm reshapes individual values somewhat. For learnable leaky ReLU, the alignment is much stronger (R² ≈ 0.77). This shows that the _ordering_ of layers and the qualitative depth-wise behavior of the FFN eigenspectrum are robust to the LayerNorm vs RMSNorm choice; the main axes of variation remain the activation type and the placement of the normalization layer.

Recall that NerVE operates on _centered_ FFN activations: when we construct the (pre-/post-activation) covariance matrices, we subtract the empirical mean across tokens. This removes the DC component from the eigenspectral analysis regardless of whether the model uses LayerNorm (which performs centering inside the model) or RMSNorm (which does not). Combined with the empirical evidence above, these ablations confirm that our conclusions about FFN eigenspectrum dynamics do not rely on a particular normalization layer; they hold for both LayerNorm and RMSNorm.

### K.2 LayerNorm vs RMSNorm Ablation on MLP-Mixer

We extend our LayerNorm vs RMSNorm comparison to MLP-Mixer trained on CIFAR-100. Figure [25](https://arxiv.org/html/2603.06922#A11.F25 "Figure 25 ‣ K.2 LayerNorm vs RMSNorm Ablation on MLP-Mixer ‣ Appendix K FFN Eigenspectrum Dynamics: LayerNorm vs RMSNorm ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks") shows post-activation SE, PR, and EEE throughout training, and the final-epoch JS across layers.

![Image 54: Refer to caption](https://arxiv.org/html/2603.06922v1/x54.png)

Figure 25: Effect of LayerNorm vs RMSNorm in the MLP-Mixer models. Columns (from left to right) show the post-activation SE, PR, EEE, and finally the JS metric trend for the LayerNorm and RMSNorm configurations throughout training, when MLP-Mixer models (GELU in both MLPs) are trained from scratch on CIFAR-100 using the Adam optimizer. FFN eigenspectrum dynamics remain qualitatively stable when LayerNorm is replaced with RMSNorm. However, in the later phases of training, the LayerNorm configuration attains higher effective dimensionality (PR) than RMSNorm, and its spectrum becomes noticeably flatter.

Throughout training, both normalization schemes produce very similar SE and EEE dynamics, while the layerwise JS trends at final convergence almost overlap, suggesting very similar reshaping of the eigenspectrum. The main difference appears in the final training phase, where the LayerNorm model attains a slightly higher PR_post and lower EEE_post than RMSNorm, and correspondingly achieves a modest accuracy gain (66.96% vs 66.38%). This suggests that, even in FFN-only architectures like MLP-Mixer, the qualitative eigenspectral behavior remains robust to the choice of normalization layer.
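To make the per-layer correspondence scores above concrete, an R² between the two normalization variants can be computed as follows (our sketch; for a single regressor, R² is the squared Pearson correlation, and `pr_ln`/`pr_rms` are hypothetical stand-ins for the 12 per-layer PR_post values):

```python
import torch

def r_squared(x: torch.Tensor, y: torch.Tensor) -> float:
    """R^2 of the least-squares line y ~ a*x + b (= squared Pearson correlation)."""
    xc, yc = x - x.mean(), y - y.mean()
    r = (xc * yc).sum() / (xc.norm() * yc.norm())
    return float(r ** 2)

pr_ln = torch.tensor([210., 340, 180, 400, 390, 260, 310, 450, 500, 470, 430, 380])
pr_rms = pr_ln + 60 * torch.randn(12)   # stand-in RMSNorm values
print(f"layerwise R^2 = {r_squared(pr_ln, pr_rms):.2f}")
```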
+

Appendix L Optimizer-Dependent FFN Eigenspectrum Dynamics
---------------------------------------------------------

### L.1 AdamW vs Muon vs Dion

We examine optimizer-induced eigenspectral dynamics in GPT-2 160M models trained from scratch on the FineWeb dataset with context sizes 512 (Figure [27](https://arxiv.org/html/2603.06922#A12.F27 "Figure 27 ‣ L.1 AdamW vs Muon vs Dion ‣ Appendix L Optimizer-Dependent FFN Eigenspectrum Dynamics ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks")) and 1024 (Figure [26](https://arxiv.org/html/2603.06922#A12.F26 "Figure 26 ‣ L.1 AdamW vs Muon vs Dion ‣ Appendix L Optimizer-Dependent FFN Eigenspectrum Dynamics ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks")), across the AdamW, Muon (Liu et al., [2025](https://arxiv.org/html/2603.06922#bib.bib443 "Muon is scalable for llm training"); Shah et al., [2025](https://arxiv.org/html/2603.06922#bib.bib445 "Practical efficiency of muon for pretraining"); [Boreiko et al.,](https://arxiv.org/html/2603.06922#bib.bib446 "Towards understanding of orthogonalization in muon")) and Dion optimizers, following the architectural and training hyperparameter settings from Ahn et al. ([2025](https://arxiv.org/html/2603.06922#bib.bib449 "Dion: distributed orthonormalized updates")).

![Image 55: Refer to caption](https://arxiv.org/html/2603.06922v1/x55.png)

Figure 26: Optimizer-dependent FFN eigenspectrum dynamics in GPT2-160M (context length 1024). Rows show AdamW (top), Muon (middle), and Dion (bottom).

![Image 56: Refer to caption](https://arxiv.org/html/2603.06922v1/x56.png)

Figure 27: Optimizer-dependent FFN eigenspectrum dynamics in GPT2-160M (context length 512). Rows show AdamW (top), Muon (middle), and Dion (bottom).

Across context lengths 512 and 1024 in GPT-2 160M models, a consistent eigenspectral trend emerges. AdamW exhibits large early PR gains and high JS divergence with relatively high post-activation EEE (EEE_post), indicating optimizer-induced pre-activation collapse followed by aggressive but incomplete nonlinear repair. In contrast, Muon shows the smallest PR gains, lowest JS, and lowest EEE_post, with flatter post-activation spectra that suggest more stable eigenvalue distributions throughout training. Dion falls between these two regimes: it improves over AdamW but does not fully match Muon’s spectral behavior. Notably, in a few early FFN layers, Dion exhibits PR gains on par with AdamW, suggesting layer-dependent repair dynamics.

160M (Context 512)

![Image 57: Refer to caption](https://arxiv.org/html/2603.06922v1/x57.png)

160M (Context 1024)

![Image 58: Refer to caption](https://arxiv.org/html/2603.06922v1/x58.png)

Figure 28: Activation-compatibility of the pre-activation eigenspectrum. Columns show the effective dimensionality of the pre-activation spectrum (PR_pre) when GPT2-160M models are trained from scratch with the AdamW (left), Muon (middle), and Dion (right) optimizers, with 512 (top) and 1024 (bottom) context length. Muon exhibits the highest PR_pre across layers, demonstrating well-conditioned pre-activation spectra that place less burden on the FFN nonlinearity to restore representational rank.
To evaluate the conditioning of the pre-activation eigenspectrum, and to assess the extent to which the FFN nonlinearity must actively inflate representational rank, we plot the layerwise evolution of PR_pre during training in Figure [28](https://arxiv.org/html/2603.06922#A12.F28 "Figure 28 ‣ L.1 AdamW vs Muon vs Dion ‣ Appendix L Optimizer-Dependent FFN Eigenspectrum Dynamics ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks"). Across settings, Muon consistently exhibits the highest effective dimensionality (PR_pre) across layers, indicating well-conditioned pre-activation spectra that place far less burden on the FFN nonlinearity to repair representational collapse. On the contrary, AdamW shows pronounced early-layer collapse, while Dion partially mitigates it.

Increasing the context length does not change this ordering; if anything, it makes the early-layer collapse under AdamW and Dion more pronounced, while Muon persistently keeps pre-activation spectra high-dimensional. This strengthens the earlier observation that Muon produces activation-compatible representations across both model sizes and sequence lengths (Figure [8](https://arxiv.org/html/2603.06922#S3.F8 "Figure 8 ‣ 3.7 Optimizer-Dependent Role of FFN Nonlinearity: Repair vs Refinement ‣ 3 Experimental Results ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks") and Figure [9](https://arxiv.org/html/2603.06922#S3.F9 "Figure 9 ‣ 3.7 Optimizer-Dependent Role of FFN Nonlinearity: Repair vs Refinement ‣ 3 Experimental Results ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks")).

Table 13: Evaluation perplexity (PPL) for GPT-2 models trained from scratch on the FineWeb dataset (Penedo et al., [2024](https://arxiv.org/html/2603.06922#bib.bib439 "The fineweb datasets: decanting the web for the finest text data at scale")) using the AdamW, Muon, and Dion optimizers. Muon consistently achieves the lowest perplexity across all model sizes and context lengths, while AdamW results in the worst perplexity.

### L.2 AdamW vs Adafactor

To further investigate optimizer-induced effects on the FFN eigenspectrum, we substitute AdamW with Adafactor (Shazeer and Stern, [2018](https://arxiv.org/html/2603.06922#bib.bib440 "Adafactor: adaptive learning rates with sublinear memory cost")) in GPT-2 and evaluate both the baseline and normalization-free variants with GELU and ReLU activations. We include Adafactor in this optimizer ablation because it is designed to reduce optimizer memory via factorized second-moment estimates for matrix-shaped parameters, and it offers a different preconditioning geometry from AdamW while remaining practical for training large language models (Raffel et al., [2020](https://arxiv.org/html/2603.06922#bib.bib441 "Exploring the limits of transfer learning with a unified text-to-text transformer")).

We begin with the baseline GPT-2 (125M) with GELU and ReLU activations to substantiate the core findings about the role of nonlinearity in FFNs (see Section [3.1](https://arxiv.org/html/2603.06922#S3.SS1 "3.1 FFN Nonlinearity Reinject Variance and Flatten the Eigenspectrum ‣ 3 Experimental Results ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks")).
Figure [29](https://arxiv.org/html/2603.06922#A12.F29 "Figure 29 ‣ L.2 AdamW vs Adafactor ‣ Appendix L Optimizer-Dependent FFN Eigenspectrum Dynamics ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks") demonstrates the same pre- and post-activation eigenspectrum characteristics that we observed earlier in Figure [1](https://arxiv.org/html/2603.06922#S1.F1 "Figure 1 ‣ 1 Introduction ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks") and Figure [3](https://arxiv.org/html/2603.06922#S3.F3 "Figure 3 ‣ 3.1 FFN Nonlinearity Reinject Variance and Flatten the Eigenspectrum ‣ 3 Experimental Results ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks"). Precisely, the pre-activation spectra entering the FFN are top-heavy and anisotropic, while the post-activation spectra exhibit higher SE and PR and a lower EEE, indicating that the FFNs systematically reinject variance and flatten the spectrum, counteracting the rank collapse induced by self-attention (Dong et al., [2021](https://arxiv.org/html/2603.06922#bib.bib391 "Attention is not all you need: pure attention loses rank doubly exponentially with depth")).

![Image 59: Refer to caption](https://arxiv.org/html/2603.06922v1/x59.png)

![Image 60: Refer to caption](https://arxiv.org/html/2603.06922v1/x60.png)

Figure 29: Eigen-metrics (SE, PR, EEE, and JS) illustrate how FFN nonlinearities regulate information flow and reshape the eigenspectrum when GPT-2 (125M) models with GELU (top) and ReLU (bottom) are trained from scratch on the CodeParrot (2.1B tokens) dataset using the Adafactor optimizer (Shazeer and Stern, [2018](https://arxiv.org/html/2603.06922#bib.bib440 "Adafactor: adaptive learning rates with sublinear memory cost")). Pre- and post-activation dynamics are shown for SE, PR, and EEE, highlighting how nonlinearities reinject variance and alter spectral structure. JS heatmaps (rightmost) capture the layerwise distributional shift induced by nonlinearity.

After establishing the fundamental role of FFN nonlinearity under both the AdamW and Adafactor optimizers, we next compare the layerwise distribution of FFN capacity and latent-space utilization (PR_post) side-by-side in Figure [30](https://arxiv.org/html/2603.06922#A12.F30 "Figure 30 ‣ L.2 AdamW vs Adafactor ‣ Appendix L Optimizer-Dependent FFN Eigenspectrum Dynamics ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks") across four GPT-2 configurations (PreLN and normalization-free, each with GELU and ReLU activations). Notably, Adafactor consistently achieves higher post-activation PR than AdamW, and this gap is most pronounced in the normalization-free ReLU model.

![Image 61: Refer to caption](https://arxiv.org/html/2603.06922v1/x61.png)

Figure 30: Layerwise post-activation participation ratio (PR_post) comparison for AdamW and Adafactor across four GPT-2 (125M) configurations: baseline (PreLN) with GELU and ReLU (top), and normalization-free GELU and ReLU (bottom). Across settings, Adafactor systematically produces higher PR_post than AdamW, indicating higher FFN latent-space utilization, with the largest gains in the normalization-free ReLU model.

This shows that, while the FFN nonlinearity plays the same qualitative role under both AdamW and Adafactor (expanding and flattening the spectrum), the degree of this expansion is optimizer-dependent. Adafactor consistently drives stronger spectral expansion, activating more FFN latent capacity, especially in the normalization-free ReLU model.
+

Further, to contrast how effective dimensionality evolves under these two optimizers (AdamW and Adafactor), we plot the layerwise post-activation participation ratio (PR_post) over the entire training run (Figure [31](https://arxiv.org/html/2603.06922#A12.F31 "Figure 31 ‣ L.2 AdamW vs Adafactor ‣ Appendix L Optimizer-Dependent FFN Eigenspectrum Dynamics ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks")). Across all four configurations, deeper layers quickly rise to substantially higher PR values and maintain that higher PR_post throughout training. This effect is consistently stronger with Adafactor than with AdamW, indicating that Adafactor activates more latent capacity in the later FFNs throughout training.

![Image 62: Refer to caption](https://arxiv.org/html/2603.06922v1/x62.png)

Figure 31: Training dynamics of the post-activation participation ratio (PR) for all layers in GPT-2 (125M) across four configurations (columns) and two optimizers (rows). Deeper layers achieve and maintain higher PR_post throughout training, with Adafactor consistently producing larger PR_post than AdamW, especially in the deeper FFNs and in the normalization-free ReLU models.

We also track the change in EEE, ΔEEE = EEE_post − EEE_pre, over training (Figure [32](https://arxiv.org/html/2603.06922#A12.F32 "Figure 32 ‣ L.2 AdamW vs Adafactor ‣ Appendix L Optimizer-Dependent FFN Eigenspectrum Dynamics ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks")). A more negative ΔEEE indicates stronger suppression of top eigenvalues and a flatter spectrum. Across all four GPT-2 configurations, both AdamW and Adafactor drive ΔEEE well below zero, confirming the FFN’s spectral-flattening role. However, in the deeper layers, Adafactor consistently attains more negative ΔEEE than AdamW, especially in the normalization-free ReLU model, indicating a stronger reduction of top-heaviness and more aggressive spectral flattening in those FFNs.

![Image 63: Refer to caption](https://arxiv.org/html/2603.06922v1/x63.png)

Figure 32: Training dynamics of FFN-induced spectral flattening in GPT-2 (125M), quantified as ΔEEE = EEE_post − EEE_pre, under the AdamW (top row) and Adafactor (bottom row) optimizers. A more negative ΔEEE indicates stronger suppression of top eigenvalues and a flatter spectrum. Across settings, deeper layers under Adafactor tend to reach more negative ΔEEE than under AdamW, indicating more aggressive spectral flattening.

### L.3 AdamW vs SGD

To examine how optimizers shape latent-space utilization in a non-Transformer setting, we train the MLP-Mixer baseline (GELU in both MLPs) on CIFAR-100 with SGD and compare its eigenspectrum characteristics side-by-side with the Adam variant in Figure [33](https://arxiv.org/html/2603.06922#A12.F33 "Figure 33 ‣ L.3 AdamW vs SGD ‣ Appendix L Optimizer-Dependent FFN Eigenspectrum Dynamics ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks"). For SGD, we use a learning rate of 5e-2, momentum 0.9, and weight decay 1e-4.

SGD outperforms Adam on MLP-Mixer (68.07% vs. 66.96%), and the trends in their spectral metrics explain this performance gap. Notably, SGD attains significantly higher post-activation spectral entropy and participation ratio throughout training, indicating higher effective dimensionality and better utilization of FFN representational capacity.
The EEE trend shows that Adam remains near 1.0 throughout training, suggesting persistent concentration of variance in the top eigenvalues. JS divergence further shows that Adam induces a stronger spectral transformation in the very first layer, whereas the SGD-induced transformation between pre- and post-activation eigenspectra remains lower and more uniform across depth.

![Image 64: Refer to caption](https://arxiv.org/html/2603.06922v1/x64.png)

Figure 33: Optimizer effects in MLP-Mixer (Adam vs. SGD). Columns (from left to right) show the post-activation SE, PR, EEE, and finally the JS divergence metric trend when the MLP-Mixer baseline model (GELU in both MLPs) is trained from scratch on CIFAR-100 using the Adam and SGD optimizers. From epoch 10 onward, the MLP-Mixer trained with SGD exhibits superior SE_post and PR_post, showing significantly better utilization of the FFN2 latent space.

Appendix M Limitations of NerVE Framework
-----------------------------------------

Per-layer independence. Metrics are computed independently per layer, with no explicit measure of how spectra relate across depth. Our repair-vs-refinement findings (§[3.7](https://arxiv.org/html/2603.06922#S3.SS7 "3.7 Optimizer-Dependent Role of FFN Nonlinearity: Repair vs Refinement ‣ 3 Experimental Results ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks")) suggest that optimizers can induce different degrees of cross-layer coherence, but NerVE does not currently quantify this. Cross-layer spectral-coherence measures (e.g., overlap between consecutive layers’ top-k subspaces) could formalize whether smooth spectral progression is associated with healthier training dynamics; a sketch of one such measure follows at the end of this appendix.

Token-position aggregation. By flattening the [B, S, D] tensor into a single [N, D] matrix, NerVE treats all token positions as exchangeable samples. This is a deliberate choice: covariance estimation benefits from the larger sample size, and the framework targets FFN-level geometry rather than position-specific dynamics. However, the position-stratified analysis (Appendix [J](https://arxiv.org/html/2603.06922#A10 "Appendix J Token-Position Effects on FFN Eigenspectrum ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks")) reveals that this aggregation masks substantial position-dependent structure in LayerNorm-based models (up to +125% PR difference between early and late tokens). When position-specific FFN geometry is the target, stratified analysis should complement the aggregate metrics.

Practical constraints. Full-batch covariance estimation and storing D×D matrices can be expensive at large scale. Our approximation study in Appendix [G](https://arxiv.org/html/2603.06922#A7 "Appendix G Token Sub-Sampling and Low-Rank Approximation ‣ NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks") shows that token sub-sampling can preserve much of the _pre_-activation diagnostic signal, whereas low-rank truncation can distort correlation-based analyses, especially for _post_-activation metrics. For very large FFN dimensions (e.g., D > 10K), practitioners should validate approximation fidelity for their specific setting and diagnostic objective.
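One possible instantiation of such a cross-layer coherence measure (our sketch, not part of NerVE): the normalized Frobenius overlap between the top-k covariance eigenspaces of consecutive layers.

```python
import torch

def topk_subspace_overlap(cov_a: torch.Tensor, cov_b: torch.Tensor, k: int = 64) -> float:
    """Overlap in [0, 1] between the top-k eigenspaces of two covariances:
    ||U_a^T U_b||_F^2 / k (1 = identical subspaces, ~k/D for random ones)."""
    _, vecs_a = torch.linalg.eigh(cov_a)     # eigenvalues ascending
    _, vecs_b = torch.linalg.eigh(cov_b)
    Ua, Ub = vecs_a[:, -k:], vecs_b[:, -k:]  # columns spanning the top-k subspace
    return float((Ua.T @ Ub).pow(2).sum() / k)

# coherence profile across depth, given per-layer post-activation covariances:
# scores = [topk_subspace_overlap(covs[l], covs[l + 1]) for l in range(L - 1)]
```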
+

Appendix N Discussion: Why Top-heaviness Over Tail-heaviness
------------------------------------------------------------

Heavy-tail (power-law) measurements are common in spectral analysis (Nair et al., [2022](https://arxiv.org/html/2603.06922#bib.bib459 "The fundamentals of heavy tails: properties, emergence, and estimation"); Mahoney and Martin, [2019](https://arxiv.org/html/2603.06922#bib.bib460 "Traditional and heavy tailed self regularization in neural network models")), but NerVE emphasizes _top-heaviness_ because our goal is to characterize how FFN nonlinearities reshape the _dominant variance subspace_ of activation covariances from pre- to post-activation. The leading eigenmodes capture the directions that contribute most to second-moment energy and representation anisotropy. NerVE’s EEE metric directly summarizes this dominant-subspace structure through the cumulative spectrum, providing a stable and interpretable measure of front-loading without requiring a tail cutoff.

In contrast, heavy-tail descriptors (e.g., fitting a power-law exponent) require choosing a tail range and performing goodness-of-fit checks. These choices can be sensitive and become harder to compare across scales. As the FFN width D varies, a fixed k corresponds to different spectral fractions, and a fixed fraction corresponds to different absolute depths in the spectrum, complicating cross-configuration comparisons unless one performs careful per-scale calibration.

For these reasons, we do not include tail-based measures in the NerVE framework. EEE is hyperparameter-free, numerically stable, and directly aligned with our central question: how nonlinearities transform dominant-subspace geometry and redistribute representational capacity across FFNs and training.
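For illustration, a cumulative-spectrum summary of front-loading along these lines might look as follows. This is our sketch only: we take top-heaviness as the area under the normalized cumulative eigenvalue curve, which is 1 when all variance sits in the leading mode and ≈0.5 for a flat spectrum; the exact EEE definition follows Marbut et al. (2023) and may differ.

```python
import torch

def cumulative_top_heaviness(eigvals: torch.Tensor) -> float:
    """Area under the normalized cumulative eigenvalue curve (descending order):
    1.0 if all variance is in the leading mode, ~0.5 for a perfectly flat
    spectrum. Illustrative stand-in for an EEE-style front-loading measure."""
    lam = eigvals.clamp(min=0).sort(descending=True).values
    cum = torch.cumsum(lam / lam.sum(), dim=0)
    return float(cum.mean())

flat = torch.ones(3072)                      # flat spectrum  -> ~0.5
spiky = torch.zeros(3072)
spiky[0] = 1.0                               # one dominant mode -> 1.0
print(cumulative_top_heaviness(flat), cumulative_top_heaviness(spiky))
```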