Chelsea707 committed on
Commit 634e59a · verified · 1 Parent(s): 5ec1587

Add Batch f3d8fb5a-efdc-4cc9-ab74-2887d6952137

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. ICML/2025/Watch Out Your Album! On the Inadvertent Privacy Memorization in Multi-Modal Large Language Models/4dcb0ba8-b415-4aab-98fd-13ce0ca1c6c3_content_list.json +3 -0
  2. ICML/2025/Watch Out Your Album! On the Inadvertent Privacy Memorization in Multi-Modal Large Language Models/4dcb0ba8-b415-4aab-98fd-13ce0ca1c6c3_model.json +3 -0
  3. ICML/2025/Watch Out Your Album! On the Inadvertent Privacy Memorization in Multi-Modal Large Language Models/4dcb0ba8-b415-4aab-98fd-13ce0ca1c6c3_origin.pdf +3 -0
  4. ICML/2025/Watch Out Your Album! On the Inadvertent Privacy Memorization in Multi-Modal Large Language Models/full.md +554 -0
  5. ICML/2025/Watch Out Your Album! On the Inadvertent Privacy Memorization in Multi-Modal Large Language Models/images.zip +3 -0
  6. ICML/2025/Watch Out Your Album! On the Inadvertent Privacy Memorization in Multi-Modal Large Language Models/layout.json +3 -0
  7. ICML/2025/WeGeFT_ Weight-Generative Fine-Tuning for Multi-Faceted Efficient Adaptation of Large Models/34e41b65-b794-40d0-b44d-e5278bd41627_content_list.json +3 -0
  8. ICML/2025/WeGeFT_ Weight-Generative Fine-Tuning for Multi-Faceted Efficient Adaptation of Large Models/34e41b65-b794-40d0-b44d-e5278bd41627_model.json +3 -0
  9. ICML/2025/WeGeFT_ Weight-Generative Fine-Tuning for Multi-Faceted Efficient Adaptation of Large Models/34e41b65-b794-40d0-b44d-e5278bd41627_origin.pdf +3 -0
  10. ICML/2025/WeGeFT_ Weight-Generative Fine-Tuning for Multi-Faceted Efficient Adaptation of Large Models/full.md +0 -0
  11. ICML/2025/WeGeFT_ Weight-Generative Fine-Tuning for Multi-Faceted Efficient Adaptation of Large Models/images.zip +3 -0
  12. ICML/2025/WeGeFT_ Weight-Generative Fine-Tuning for Multi-Faceted Efficient Adaptation of Large Models/layout.json +3 -0
  13. ICML/2025/Weak-to-Strong Generalization Even in Random Feature Networks, Provably/974297f8-fc9e-45fc-b451-0b90c141ab6d_content_list.json +3 -0
  14. ICML/2025/Weak-to-Strong Generalization Even in Random Feature Networks, Provably/974297f8-fc9e-45fc-b451-0b90c141ab6d_model.json +3 -0
  15. ICML/2025/Weak-to-Strong Generalization Even in Random Feature Networks, Provably/974297f8-fc9e-45fc-b451-0b90c141ab6d_origin.pdf +3 -0
  16. ICML/2025/Weak-to-Strong Generalization Even in Random Feature Networks, Provably/full.md +0 -0
  17. ICML/2025/Weak-to-Strong Generalization Even in Random Feature Networks, Provably/images.zip +3 -0
  18. ICML/2025/Weak-to-Strong Generalization Even in Random Feature Networks, Provably/layout.json +3 -0
  19. ICML/2025/Weak-to-Strong Jailbreaking on Large Language Models/26447cb1-6762-4fc7-8b21-dce243a2a53e_content_list.json +3 -0
  20. ICML/2025/Weak-to-Strong Jailbreaking on Large Language Models/26447cb1-6762-4fc7-8b21-dce243a2a53e_model.json +3 -0
  21. ICML/2025/Weak-to-Strong Jailbreaking on Large Language Models/26447cb1-6762-4fc7-8b21-dce243a2a53e_origin.pdf +3 -0
  22. ICML/2025/Weak-to-Strong Jailbreaking on Large Language Models/full.md +0 -0
  23. ICML/2025/Weak-to-Strong Jailbreaking on Large Language Models/images.zip +3 -0
  24. ICML/2025/Weak-to-Strong Jailbreaking on Large Language Models/layout.json +3 -0
  25. ICML/2025/Weakly Supervised Anomaly Detection via Dual-Tailed Kernel/7f19c4d1-6bde-4982-9ca4-2fda95c891a0_content_list.json +3 -0
  26. ICML/2025/Weakly Supervised Anomaly Detection via Dual-Tailed Kernel/7f19c4d1-6bde-4982-9ca4-2fda95c891a0_model.json +3 -0
  27. ICML/2025/Weakly Supervised Anomaly Detection via Dual-Tailed Kernel/7f19c4d1-6bde-4982-9ca4-2fda95c891a0_origin.pdf +3 -0
  28. ICML/2025/Weakly Supervised Anomaly Detection via Dual-Tailed Kernel/full.md +0 -0
  29. ICML/2025/Weakly Supervised Anomaly Detection via Dual-Tailed Kernel/images.zip +3 -0
  30. ICML/2025/Weakly Supervised Anomaly Detection via Dual-Tailed Kernel/layout.json +3 -0
  31. ICML/2025/Weakly-Supervised Contrastive Learning for Imprecise Class Labels/a626d3c5-8733-4aa8-8e8b-36758bb8d546_content_list.json +3 -0
  32. ICML/2025/Weakly-Supervised Contrastive Learning for Imprecise Class Labels/a626d3c5-8733-4aa8-8e8b-36758bb8d546_model.json +3 -0
  33. ICML/2025/Weakly-Supervised Contrastive Learning for Imprecise Class Labels/a626d3c5-8733-4aa8-8e8b-36758bb8d546_origin.pdf +3 -0
  34. ICML/2025/Weakly-Supervised Contrastive Learning for Imprecise Class Labels/full.md +0 -0
  35. ICML/2025/Weakly-Supervised Contrastive Learning for Imprecise Class Labels/images.zip +3 -0
  36. ICML/2025/Weakly-Supervised Contrastive Learning for Imprecise Class Labels/layout.json +3 -0
  37. ICML/2025/Weight matrices compression based on PDB model in deep neural networks/f40ea7c7-998f-45fa-94a0-efc5a977293a_content_list.json +3 -0
  38. ICML/2025/Weight matrices compression based on PDB model in deep neural networks/f40ea7c7-998f-45fa-94a0-efc5a977293a_model.json +3 -0
  39. ICML/2025/Weight matrices compression based on PDB model in deep neural networks/f40ea7c7-998f-45fa-94a0-efc5a977293a_origin.pdf +3 -0
  40. ICML/2025/Weight matrices compression based on PDB model in deep neural networks/full.md +717 -0
  41. ICML/2025/Weight matrices compression based on PDB model in deep neural networks/images.zip +3 -0
  42. ICML/2025/Weight matrices compression based on PDB model in deep neural networks/layout.json +3 -0
  43. ICML/2025/Weisfeiler and Leman Go Gambling_ Why Expressive Lottery Tickets Win/e3568e41-5ce9-4034-816d-086c234fff18_content_list.json +3 -0
  44. ICML/2025/Weisfeiler and Leman Go Gambling_ Why Expressive Lottery Tickets Win/e3568e41-5ce9-4034-816d-086c234fff18_model.json +3 -0
  45. ICML/2025/Weisfeiler and Leman Go Gambling_ Why Expressive Lottery Tickets Win/e3568e41-5ce9-4034-816d-086c234fff18_origin.pdf +3 -0
  46. ICML/2025/Weisfeiler and Leman Go Gambling_ Why Expressive Lottery Tickets Win/full.md +0 -0
  47. ICML/2025/Weisfeiler and Leman Go Gambling_ Why Expressive Lottery Tickets Win/images.zip +3 -0
  48. ICML/2025/Weisfeiler and Leman Go Gambling_ Why Expressive Lottery Tickets Win/layout.json +3 -0
  49. ICML/2025/What Do Learning Dynamics Reveal About Generalization in LLM Mathematical Reasoning_/583da79b-2835-44cc-858f-01a9c108372d_content_list.json +3 -0
  50. ICML/2025/What Do Learning Dynamics Reveal About Generalization in LLM Mathematical Reasoning_/583da79b-2835-44cc-858f-01a9c108372d_model.json +3 -0
ICML/2025/Watch Out Your Album! On the Inadvertent Privacy Memorization in Multi-Modal Large Language Models/4dcb0ba8-b415-4aab-98fd-13ce0ca1c6c3_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:507bb46775fcc4cf2862d00fca4c6ed8e23d2a2c095de68920ef55c01f3f55af
+ size 126322
ICML/2025/Watch Out Your Album! On the Inadvertent Privacy Memorization in Multi-Modal Large Language Models/4dcb0ba8-b415-4aab-98fd-13ce0ca1c6c3_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cb277ada593f5bbf7bdac81fb25ec2e7cf51f9ba8c7d7de17599f7ac6731359f
+ size 155212
ICML/2025/Watch Out Your Album! On the Inadvertent Privacy Memorization in Multi-Modal Large Language Models/4dcb0ba8-b415-4aab-98fd-13ce0ca1c6c3_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:04074436ea44aed846680cfaa59187d837fb49798520814373a9fdfc51dc562a
+ size 2541656
ICML/2025/Watch Out Your Album! On the Inadvertent Privacy Memorization in Multi-Modal Large Language Models/full.md ADDED
@@ -0,0 +1,554 @@
+ # Watch Out Your Album! On the Inadvertent Privacy Memorization in Multi-Modal Large Language Models
+
+ Tianjie Ju $^{*12}$ Yi Hua $^{*1}$ Hao Fei $^{2}$ Zhenyu Shao $^{1}$ Yubin Zheng $^{1}$ Haodong Zhao $^{1}$ Mong-Li Lee $^{2}$ Wynne Hsu $^{2}$ Zhuosheng Zhang $^{†1}$ Gongshen Liu $^{†1}$
+
+ # Abstract
+
+ Multi-Modal Large Language Models (MLLMs) have exhibited remarkable performance on various vision-language tasks such as Visual Question Answering (VQA). Despite accumulating evidence of privacy concerns associated with task-relevant content, it remains unclear whether MLLMs inadvertently memorize private content that is entirely irrelevant to the training tasks. In this paper, we investigate how randomly generated task-irrelevant private content can become spuriously correlated with downstream objectives due to partial mini-batch training dynamics, thus causing inadvertent memorization. Concretely, we embed randomly generated task-irrelevant watermarks into VQA fine-tuning images at varying probabilities and propose a novel probing framework to determine whether MLLMs have inadvertently encoded such content. Our experiments reveal that MLLMs exhibit notably different training behaviors in partial mini-batch settings with task-irrelevant watermarks embedded. Furthermore, through layer-wise probing, we demonstrate that MLLMs trigger distinct representational patterns when encountering previously seen task-irrelevant knowledge, even if this knowledge does not influence their output during prompting. Our code is available at https://github.com/illusionhi/ProbingPrivacy.
+
+ # 1. Introduction
10
+
11
+ Multi-Modal Large Language Models (MLLMs) have emerged as transformative tools by enabling synergistic understanding across multiple data modalities, such as text,
12
+
13
+ *Equal contribution ${}^{1}$ Shanghai Jiao Tong University ${}^{2}$ National University of Singapore. Correspondence to: Zhuosheng Zhang <zhangzs@sjtu.edu.cn>, Gongshen Liu <lgshen@sjtu.edu.cn>.
14
+
15
+ Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
16
+
17
+ ![](images/f5bb620e002f0e90e5e06606187da4d6b5f32c699cc997c55918128cf312a595.jpg)
18
+ Text-Level Task-Relevant Privacy Question: Hi, Anonymous! Can you tell me what email you received? Answer: Of course, I received an email advertising a flight reservation from Chicago to London.
19
+ Figure 1. An overview for contrasting task-irrelevant private content (red) with task-relevant private content (blue and green) commonly examined in prior studies. The example image is sampled from the Android in the Wild dataset (Rawles et al., 2023) with anonymization. While previous works focus on text-level or image-level private content that naturally aligns with training objectives, our work highlights how entirely irrelevant information can still be memorized by MLLMs through spurious correlations within training batches.
20
+
21
+ images, and video (Yin et al., 2023; Liu et al., 2023; Wu et al., 2024). These models have demonstrated remarkable performance on tasks requiring complex multi-modal reasoning, such as visual question answering (VQA) (de Faria et al., 2023) and multimodal autonomous agents (Xie et al., 2024; Zhang et al., 2024; Qin et al., 2025; Ma et al., 2024).
22
+
23
+ Despite the promising capabilities of MLLMs, recent studies have revealed significant privacy concerns in both the language modality (Smith et al., 2023; Kim et al., 2023) and the vision modality (Chen et al., 2023; Liu et al., 2024b). Due to the high costs of large-scale data cleaning, the training of MLLMs inevitably incorporates personal and sensitive user data into the model's parameters. Previous studies have
24
+
25
+ shown that model extraction (Carlini et al., 2021; Pinto et al., 2024) and membership inference attacks (MIA) (Hu et al., 2022b; Ko et al., 2023; Li et al., 2024) can successfully recover sensitive information from training datasets.
26
+
27
+ However, existing research on privacy leakage has largely centered on sensitive data that is inherently relevant to the model's training objectives, where the parameter updates naturally encourage information retention (as shown in Figure 1). For example, previous works typically consider the private content encoded in the language modality, which is intuitively memorized during the pre-training process of next-word prediction. Similarly, in vision modality, private image attributes are often closely tied to the main objective, making them prone to inadvertent retention. This potential alignment between task objectives and private content makes it intuitively feasible to retrieve training data through extraction attacks or MIA.
28
+
29
+ This paper investigates privacy concerns in view of the inadvertent memorization of task-irrelevant privacy of MLLMs during fine-tuning. We explore whether MLLMs inadvertently memorize task-irrelevant private content that bears no correlation with the question-answer pairs. Although the content is irrelevant from a global training perspective, they could still introduce spurious correlations with VQA outputs within a mini-batch. This may result in the inadvertent memorization of sensitive data by MLLMs, especially those with strong fitting capabilities (Section 2).
30
+
31
+ In this paper, we aim to address the following key research questions (RQs):
32
+
33
+ - RQ1: Does introducing random, task-irrelevant private content during fine-tuning inadvertently affect model training dynamics and downstream performance?
34
+ - RQ2: Do MLLMs memorize such random private content at the parameter level, and if so, how can we detect and measure this memorization?
35
+ - RQ3: How do different mini-batch sizes influence this memorization process?
36
+
37
+ For RQ1, we investigate how task-irrelevant content influences model training. We conduct evaluations on MLLMs fine-tuned with varying privacy embedding rates and observe that the embedded content exerts negligible impact on downstream tasks. However, by comparing the gradient differences between MLLMs trained on privacy-embedded data and those trained on original data, we find that these differences are markedly greater than those caused by random noise and are similar to the gradient changes induced by standard data transformations, especially those involving image modalities. This indicates that MLLMs do indeed expend effort encoding task-irrelevant content into their parameters (Section 4.2).
38
+
39
+ For RQ2, we investigate whether MLLMs have inadvert
40
+
41
+ tently memorized task-irrelevant knowledge at the parameter level. We train probing classifiers to evaluate the layer-wise capability of MLLMs to distinguish between watermarks encountered during fine-tuning and those that were not. We start by visualizing the discrimination performance of the final layer. Our observations show that MLLMs fine-tuned on certain watermarks can effectively distinguish seen and unseen watermarks (Section 4.3.1).
42
+
43
+ Furthermore, we examine the layer-wise probing performance of MLLMs at varying privacy embedding rates. Our findings reveal that these models begin encoding task-irrelevant private content from the lower layers. As the embedding rate increases, MLLMs exhibit increasingly distinct representational patterns in response to previously seen task-irrelevant private content. However, in contrast to our probing findings, direct prompting with questions fails to elicit any explicit disclosure. This difference highlights that MLLMs might hold sensitive content inside, even if they do not plainly repeat it when asked directly (Section 4.3.2).
44
+
45
+ For RQ3, we investigate how batch size influences the inadvertent memorization process. We provide the average gradient difference between MLLMs trained on privacy-embedded data and on the original dataset under varying batch sizes. Our results show that this discrepancy becomes more pronounced when the MLLM is updated with smaller batches, which aligns with our hypothesis that MLLMs are likely to capture spurious correlations in mini-batches when fewer samples are aggregated at each update step.
46
+
47
+ Overall, our findings reveal that MLLMs can inadvertently encode task-irrelevant private data through spurious batch-level correlations, which might become more concerning in emerging MLLM-based autonomous agent paradigms.
+
+ # 2. Preliminary: Task-Irrelevant Content
+
+ To systematically investigate how MLLMs may encode private content that is irrelevant to the downstream task, it is necessary to first formalize what constitutes task-irrelevant content within the training input.
+
+ Consider a downstream task where the model is fine-tuned to predict the output $\mathbf{y}$ from the input data $\mathbf{x}$. Let $\mathbf{u}$ be an additional piece of content embedded into the input $\mathbf{x}$ during fine-tuning, such that the effective training input is now $\tilde{\mathbf{x}} := \mathbf{x} \oplus \mathbf{u}$. If $\mathbf{u}$ is randomly sampled from a distribution independent of both $\mathbf{x}$ and $\mathbf{y}$, it provides no intrinsic benefit for predicting $\mathbf{y}$. Then we have:
+
+ $$
+ p(\mathbf{y} \mid \tilde{\mathbf{x}}) = p(\mathbf{y} \mid \mathbf{x} \oplus \mathbf{u}) = p(\mathbf{y} \mid \mathbf{x}). \tag{1}
+ $$
+
+ This implies that $\mathbf{u}$ is of no value for predicting $\mathbf{y}$.
+
+ However, fine-tuning typically proceeds by stochastic gradient-based updates at the batch level. A single batch often contains only a small subset of the training data, which may not perfectly reflect the overall data distribution. Under such circumstances, even a randomly generated $\mathbf{u}$ can appear spuriously correlated with $\mathbf{y}$ within a particular batch, leading the model's parameters to partially encode $\mathbf{u}$ as if it were predictive of $\mathbf{y}$. Due to the vast parameterization of MLLMs, the model can easily capture these spurious patterns and gradually integrate them into its parameters (Figure 2).
+
+ ![](images/4d791c5bf0dc13bbfed31cd78602012eebafa165df96d0fde466f380b061d802.jpg)
+ Mini-Batch Training
+
+ ![](images/535e55e7d8180ee8f7defa176fcc5796a2bd8e733cb469527c49920879d66c71.jpg)
+
+ ![](images/827778f489e077ea5f1fa547869bc7b637aa69ecd6405766c6036957ad5eeb7c.jpg)
+ Q: What kind of flowers are in the vase? Answer the question using a single word or phrase.
+
+ ![](images/fb885db6030c3e99b5c912e2845b041ee192372fd2cc002e7b2fe83065e5e13e.jpg)
+ Figure 3. The overall process of the proposed probing method. We train MLLMs on datasets $\mathcal{D}_f$ with task-irrelevant privacy data $\mathcal{U}_1$, followed by probing the capability of MLLMs to distinguish between seen privacy $\mathcal{U}_1$ and unseen privacy $\mathcal{U}_2$ in the inference phase.
+
+ ![](images/b331ef2bb529af238062b8043b167d99d58666db79e8532c6ac8793cd62c8464.jpg)
+ A: Bouquet.
+
+ ![](images/52bad6bf63d085afb74e681cdd59ef95f65148a13b6efdcc14088fba7b776a31.jpg)
+ Figure 2. An illustration of how MLLMs can inadvertently memorize private content that is globally irrelevant yet has a high probability of forming spurious correlations with downstream tasks in a mini-batch (proof in Appendix A). We follow Zhang et al. (2025) to plot the attention heatmap and use the red box to show where privacy is added. Upon re-encountering the same private content during inference, the MLLM's parameters may respond differently.
+
+ ![](images/a755330f204c5db04224dbaf983c83af0b1c36350dae485fa74059020f25e190.jpg)
+ Inadvertent Privacy Memorization
+
+ Question: What is the User id?
+
+ ![](images/5ac316ff0ca097497787ad92e560380233b794a9785db3a24521c448a2773ec8.jpg)
+ Thought: I have seen that!
+
+ ![](images/fc593a1f6132303cdaaee031a5e7cec72522f443c03bbd3e016a5697f736b819.jpg)
+ Thought: I haven't seen that ...
+
+ Concretely, considering a particular batch $B = \{(\tilde{\mathbf{x}}_i, \mathbf{y}_i)\}_{i=1}^{m}$ of size $m$, the parameter update at iteration $t$ is:
+
+ $$
+ \theta_{t+1} = \theta_{t} - \eta \nabla_{\theta} \left( \frac{1}{m} \sum_{i=1}^{m} L\left(\theta_{t}; \mathbf{x}_{i} \oplus \mathbf{u}_{i}, \mathbf{y}_{i}\right) \right), \tag{2}
+ $$
+
+ ![](images/8cf21c20647f5bc67197655b272fd27f876403a4e9831ff89bc25b66b3f0ce9.jpg)
+
+ where $\theta$ denotes the model parameters, $L(\cdot)$ the loss function, and $\eta$ the learning rate. A small batch can make $p(\mathbf{y} \mid \mathbf{x}, \mathbf{u})$ appear to deviate slightly from $p(\mathbf{y} \mid \mathbf{x})$ due to sampling fluctuations, effectively yielding a non-zero expected gradient component correlated with $\mathbf{u}$. Formally, we can decompose the gradient as:
+
+ $$
+ \nabla_{\theta} L\left(\theta_{t}; \mathbf{x}_{i} \oplus \mathbf{u}_{i}, \mathbf{y}_{i}\right) = \nabla_{\theta} L\left(\theta_{t}; \mathbf{x}_{i}, \mathbf{y}_{i}\right) + \nabla_{\theta} L\left(\mathbf{u}_{i}\right), \tag{3}
+ $$
+
+ where $\nabla_{\theta} L(\mathbf{u}_{i})$ captures the residual gradient component associated with $\mathbf{u}_{i}$. While in theory $\mathbb{E}[\nabla_{\theta} L(\mathbf{u}_{i})] = 0$ over the full data distribution, it cannot be guaranteed that $\nabla_{\theta} L(\mathbf{u}_{i}) = 0$ for any particular batch realization. Even though $\mathbf{y}$ and $\mathbf{u}$ are independent, the sample covariance matrix $\mathbf{Cov}(\mathbf{y}, \mathbf{u})$ can exhibit significant non-zero entries with non-negligible probability when the batch size $m$ is small. We provide a detailed proof in Appendix A.
+
+ To empirically verify this hypothesis, we compare the gradient directions when training with and without the task-irrelevant content $\mathbf{u}$. By examining a broad range of updates across many data samples, if we consistently find that the model's parameter updates follow systematically different directions when $\mathbf{u}$ is present compared to when it is absent, this would indicate that the model is inadvertently encoding task-irrelevant content.
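The batch-level argument above can be illustrated numerically. The sketch below is not from the paper; it uses plain Python with made-up Gaussian stand-ins for $\mathbf{y}$ and $\mathbf{u}$ to show that the average absolute sample correlation between two independent variables shrinks roughly as $1/\sqrt{m}$, so small mini-batches routinely exhibit sizable spurious correlation:

```python
import math
import random

random.seed(0)

def sample_corr(y, u):
    """Plain Pearson correlation of two equal-length sequences."""
    m = len(y)
    my, mu = sum(y) / m, sum(u) / m
    cov = sum((a - my) * (b - mu) for a, b in zip(y, u))
    sy = math.sqrt(sum((a - my) ** 2 for a in y))
    su = math.sqrt(sum((b - mu) ** 2 for b in u))
    return cov / (sy * su)

def mean_abs_corr(batch_size, trials=1000):
    """Average |sample correlation| between independent y and u
    over random mini-batches of the given size."""
    total = 0.0
    for _ in range(trials):
        y = [random.gauss(0, 1) for _ in range(batch_size)]
        u = [random.gauss(0, 1) for _ in range(batch_size)]
        total += abs(sample_corr(y, u))
    return total / trials

for m in (8, 64, 512):
    print(m, round(mean_abs_corr(m), 3))
```

Even though `y` and `u` are independent by construction, batches of size 8 show an average spurious correlation roughly an order of magnitude larger than batches of size 512, matching the intuition behind Eq. (3).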
+
+ # 3. Probing Inadvertent Memorization
+
+ Building upon the preliminary of task-irrelevant content in Section 2, we then propose a probing method to further verify whether the MLLM might encode private content that is irrelevant to the fine-tuning objective.
+
+ Specifically, we first construct two task-irrelevant privacy datasets, denoted as $\mathcal{U}_1 = \{\mathbf{u}_1^{(1)}, \mathbf{u}_1^{(2)}, \dots, \mathbf{u}_1^{(k)}\}$ and $\mathcal{U}_2 = \{\mathbf{u}_2^{(1)}, \mathbf{u}_2^{(2)}, \dots, \mathbf{u}_2^{(k)}\}$, each containing $k$ distinct pieces of task-irrelevant private content sampled from the same distribution $\mathcal{U}$. Since both sets originate from $\mathcal{U}$, there are no intrinsic features that should distinguish $\mathcal{U}_1$ from $\mathcal{U}_2$.
+
+ We then partition the downstream task dataset $\mathcal{D}_o$ into two subsets, $\mathcal{D}_f$ and $\mathcal{D}_p$. For each sample in $\mathcal{D}_f$, we embed a piece of task-irrelevant private content from $\mathcal{U}_1$ with probability $r$, which yields our fine-tuning dataset. For each sample in $\mathcal{D}_p$, we randomly embed a piece of task-irrelevant private content from either $\mathcal{U}_1$ or $\mathcal{U}_2$, which yields our probing dataset:
+
+ $$
+ \mathcal{D}_f = \bigcup_{i=1}^{N_f} \left\{ \begin{array}{ll} \left(\mathbf{x}^{(i)} \oplus \mathbf{u}_1^{(j_i)}, \mathbf{y}^{(i)}\right) & \text{with probability } r, \\ \left(\mathbf{x}^{(i)} \oplus \varnothing, \mathbf{y}^{(i)}\right) & \text{with probability } 1 - r, \end{array} \right. \tag{4}
+ $$
+
+ $$
+ \mathcal{D}_p = \bigcup_{i=1}^{N_p} \left\{ \begin{array}{ll} \left(\mathbf{x}^{(i)} \oplus \mathbf{u}_1^{(j_i)}, \mathbf{y}^{(i)}\right) & \text{with probability } 0.5, \\ \left(\mathbf{x}^{(i)} \oplus \mathbf{u}_2^{(j_i)}, \mathbf{y}^{(i)}\right) & \text{with probability } 0.5, \end{array} \right. \tag{5}
+ $$
+
+ where $\mathbf{x}^{(i)}$ is an input image-text pair, $\mathbf{y}^{(i)}$ is the associated task label, and $\mathbf{u}_1^{(j_i)}, \mathbf{u}_2^{(j_i)}$ are the embedded irrelevant data from $\mathcal{U}_1$ and $\mathcal{U}_2$, respectively.
+
+ Once the model is fine-tuned on $\mathcal{D}_f$, we conduct probing experiments on $\mathcal{D}_p$. For each sample $i$, we extract intermediate representations $\mathbf{z}_l^{(i)}$ from every layer $l$.
+
+ Finally, a binary probing classifier is trained for each layer to predict whether the embedded privacy data comes from $\mathcal{U}_1$ or $\mathcal{U}_2$. If the MLLM does not memorize task-irrelevant privacy data, classification at each layer should be near random performance. For comparison, we apply the same probing method to the MLLM that has not been fine-tuned on any embedded privacy data. If the probing classifiers trained on representations from the MLLM fine-tuned with $\mathcal{U}_1$ achieve significantly higher accuracy than those trained on representations from the MLLM fine-tuned on the original dataset $\mathcal{D}_o$, it suggests that the model has memorized these pieces of privacy data that are irrelevant to the downstream task. The overall process of our proposed probing method is shown in Figure 3.
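As a concrete illustration, the construction of $\mathcal{D}_f$ (Eq. 4) and $\mathcal{D}_p$ (Eq. 5) can be sketched as follows. This is a minimal sketch with hypothetical string stand-ins for images and watermarks, not the paper's actual pipeline:

```python
import random

random.seed(0)

# Hypothetical watermark pools sampled from the same distribution U.
U1 = [f"user1_{k}" for k in range(5)]  # seen during fine-tuning
U2 = [f"user2_{k}" for k in range(5)]  # held out for probing

def build_finetune_set(samples, r):
    """Eq. (4): embed a U1 watermark into each input with probability r."""
    out = []
    for x, y in samples:
        u = random.choice(U1) if random.random() < r else None
        out.append(((x, u), y))
    return out

def build_probe_set(samples):
    """Eq. (5): embed a watermark from U1 or U2 with equal probability;
    the probe label records which pool it came from."""
    out = []
    for x, y in samples:
        seen = random.random() < 0.5
        u = random.choice(U1 if seen else U2)
        out.append(((x, u), y, int(seen)))
    return out

samples = [(f"img_{i}", f"ans_{i}") for i in range(100)]
d_f = build_finetune_set(samples, r=0.5)
d_p = build_probe_set(samples)
```

The probe classifier then only ever sees the `(representation, seen/unseen)` pairs derived from `d_p`; the VQA labels `y` are untouched by the embedding.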
+
+ # 4. Experiments
+
+ In this section, we present a comprehensive set of experiments aimed at verifying whether MLLMs inadvertently memorize task-irrelevant private content during fine-tuning. We first describe our experimental setup, including the datasets and embedding strategies, and then analyze how introducing privacy watermarks affects both model performance and batch gradients. Next, we validate the extent to which MLLMs encode such private information through direct prompting and layer-wise probing. Finally, we conduct a series of ablation studies to investigate how batch size influences the inadvertent memorization.
+
+ Table 1. Examples of generated task-irrelevant private content, where usernames (bold) are embedded in both fine-tuning and probing datasets, while user_ids are embedded only in fine-tuning sets.
+
+ <table><tr><td>Subsets</td><td>Content</td></tr><tr><td>U1</td><td>username: Carlos Diaz, user_id: 5374982160<br>username: Sophia Chen, user_id: 8250947613</td></tr><tr><td>U2</td><td>username: Maximilian Schmidt, user_id: 6473920581<br>username: Vijay Sharma, user_id: 9073264815</td></tr></table>
+
+ # 4.1. Setup
+
+ # 4.1.1. DATASETS
+
+ We conduct experiments on standard VQA tasks using the following datasets: COCO (Lin et al., 2014), GQA (Hudson & Manning, 2019), OCR-VQA (Mishra et al., 2019), TextVQA (Singh et al., 2019), and VisualGenome (Krishna et al., 2017). Each dataset is randomly split into two disjoint subsets $\mathcal{D}_f$ and $\mathcal{D}_p$ in a ratio of 6:4 to enable the controlled setup described in Section 3. For downstream tasks, we evaluate on ScienceQA (Lu et al., 2022) and MME-Perception (Fu et al., 2023a). For probing tasks, we further split $\mathcal{D}_p$ into training, validation, and test sets with a ratio of 6:2:2. More detailed statistics can be found in Appendix B.
+
+ Next, we generate two sets of synthetic task-irrelevant private content, $\mathcal{U}_1$ and $\mathcal{U}_2$, using GPT-4 (OpenAI, 2023). These pieces of private content are generated under identical generation settings, ensuring that $\mathcal{U}_1$ and $\mathcal{U}_2$ share the same distribution. Each subset contains 5 pieces of private content, each consisting of a randomly generated username and user_id. Examples are displayed in Table 1; the full generated private content is shown in Appendix C.
+
+ We then embed $\mathcal{U}_1$ and $\mathcal{U}_2$ into the image region of $\mathcal{D}_f$ and $\mathcal{D}_p$, respectively (as shown in Figure 2). Each image in $\mathcal{D}_f$ has probability $r$ of receiving one of these watermarks. Unless otherwise specified, the default value of $r$ is 0.5.
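The stamping step can be sketched in miniature. This is a hedged illustration only, treating an image as a plain pixel grid rather than using the paper's actual rendering code:

```python
def stamp_watermark(pixels, mark, top=0, left=0):
    """Overwrite a small region of a pixel grid with a watermark patch,
    leaving the original image (and its VQA label) untouched."""
    out = [row[:] for row in pixels]          # copy every row of the grid
    for i, mrow in enumerate(mark):
        for j, value in enumerate(mrow):
            out[top + i][left + j] = value
    return out

image = [[0] * 8 for _ in range(8)]           # blank 8x8 "image"
patch = [[255, 255], [255, 255]]              # tiny watermark patch
marked = stamp_watermark(image, patch, top=1, left=1)
```

Only the pixels are modified; because the question-answer pair is untouched, the watermark carries no information about the label, which is exactly what makes it task-irrelevant.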
+
+ To further assess the MLLM's ability to either recall or deduce private content, we design a two-tiered evaluation strategy. In the fine-tuning phase, both the username and user_id are embedded, allowing the MLLM to observe paired identifiers. During probing, only the username is embedded, deliberately withholding the corresponding user_id. This setup enables us to test two scenarios: (i) directly querying the MLLM about the username to see if it can recall the memorized content, and (ii) challenging the MLLM to infer the user_id based solely on its potential memorization.
+
+ Table 2. Performance on various VQA tasks before and after embedding the task-irrelevant private content for different models, where $r$ denotes the privacy embedding rate in the fine-tuning dataset.
+
+ <table><tr><td rowspan="3">Dataset</td><td colspan="6">LLaVA-1.5</td><td colspan="6">Qwen-VL</td></tr><tr><td colspan="3">ScienceQA</td><td colspan="3">MME-Perception</td><td colspan="3">ScienceQA</td><td colspan="3">MME-Perception</td></tr><tr><td>r=0</td><td>r=0.5</td><td>r=1.0</td><td>r=0</td><td>r=0.5</td><td>r=1.0</td><td>r=0</td><td>r=0.5</td><td>r=1.0</td><td>r=0</td><td>r=0.5</td><td>r=1.0</td></tr><tr><td>COCO</td><td>70.0</td><td>68.9↓1.1</td><td>68.5↓1.5</td><td>1333.4</td><td>1311.0↓22.4</td><td>1325.7↓7.7</td><td>69.1</td><td>70.2↑1.1</td><td>69.5↑0.4</td><td>1482.9</td><td>1492.0↑9.1</td><td>1503.1↑20.2</td></tr><tr><td>GQA</td><td>55.7</td><td>54.7↓1.0</td><td>49.5↓6.2</td><td>1272.7</td><td>1305.9↑33.2</td><td>1248.3↓24.4</td><td>65.4</td><td>65.4↑0.0</td><td>64.6↓0.8</td><td>1337.8</td><td>1344.4↑6.6</td><td>1337.1↓0.7</td></tr><tr><td>OCR-VQA</td><td>61.1</td><td>63.8↑2.7</td><td>61.5↑0.4</td><td>1142.4</td><td>1192.3↓49.4</td><td>909.3↓233.1</td><td>66.4</td><td>67.4↑1.0</td><td>67.4↑1.0</td><td>1524.8</td><td>1513.4↓11.4</td><td>1516.9↓7.9</td></tr><tr><td>TextVQA</td><td>30.5</td><td>30.1↓0.4</td><td>32.2↑1.7</td><td>17.4</td><td>28.9↑11.5</td><td>116.6↑99.2</td><td>62.3</td><td>62.8↑0.5</td><td>61.7↓0.6</td><td>1503.8</td><td>1506.8↑3.0</td><td>1502.5↓1.3</td></tr><tr><td>VisualGenome</td><td>34.4</td><td>28.5↓5.9</td><td>26.0↓8.4</td><td>945.6</td><td>966.8↑21.2</td><td>917.5↓28.1</td><td>68.0</td><td>68.2↑0.2</td><td>67.8↓0.2</td><td>1394.4</td><td>1399.1↑4.7</td><td>1413.9↑19.5</td></tr></table>
+
+ Table 3. Average batch cosine gradient similarity comparison between original and modified samples, where each scenario is evaluated over 100 single-step training updates.
+
+ <table><tr><td rowspan="2">Dataset</td><td colspan="4">LLaVA-1.5</td><td colspan="4">Qwen-VL</td></tr><tr><td>Origin</td><td>w/ Privacy</td><td>ImageTransf.</td><td>TextTransf.</td><td>Origin</td><td>w/ Privacy</td><td>ImageTransf.</td><td>TextTransf.</td></tr><tr><td>COCO</td><td>98.3±1.9</td><td>92.9±2.6</td><td>85.3±3.9</td><td>5.3±16.3</td><td>100.0±0.0</td><td>97.0±1.3</td><td>93.8±2.6</td><td>49.4±8.4</td></tr><tr><td>GQA</td><td>94.9±4.2</td><td>80.4±8.2</td><td>69.1±9.7</td><td>4.4±14.5</td><td>100.0±0.0</td><td>97.3±0.4</td><td>93.2±0.9</td><td>82.8±2.6</td></tr><tr><td>OCR-VQA</td><td>97.6±2.8</td><td>74.6±7.2</td><td>28.0±9.6</td><td>5.4±12.7</td><td>100.0±0.0</td><td>96.0±1.0</td><td>88.6±2.2</td><td>58.8±5.4</td></tr><tr><td>TextVQA</td><td>98.6±1.2</td><td>93.6±2.2</td><td>71.4±5.7</td><td>4.2±15.3</td><td>100.0±0.0</td><td>87.7±3.0</td><td>76.1±4.0</td><td>61.5±6.6</td></tr><tr><td>VisualGenome</td><td>93.4±6.2</td><td>78.9±11.1</td><td>73.6±9.4</td><td>5.5±15.8</td><td>100.0±0.0</td><td>93.5±2.2</td><td>89.7±1.3</td><td>69.7±2.3</td></tr></table>
166
+
167
+ # 4.1.2. TRAINING DETAILS
168
+
169
+ We choose two popular MLLMs for our main experiments: (i) LLaVA-1.5 (Liu et al., 2024a) whose base language model is Vicuna-1.5 (7B) and (ii) Qwen-VL Chat (7B) (Bai et al., 2023). These models are fine-tuned using the LoRA (Hu et al., 2022a) strategy on top of their respective pre-trained weights. Specifically, we set the LoRA rank to 128, the scaling factor $\alpha$ to 256, and the learning rate to $1 \times 10^{-4}$ . Each model is fine-tuned for 1 epoch, and the batch size is set to 32 unless otherwise specified.
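+ The LoRA configuration above can be illustrated with a minimal sketch of the low-rank update it induces. This is a toy NumPy illustration with a hypothetical layer size of 512 (the actual models adapt transformer projection matrices via trained $A$ and $B$), not the training code itself:

```python
import numpy as np

# Minimal sketch of a LoRA update with the paper's hyper-parameters:
# rank r=128 and scaling alpha=256. Shapes are hypothetical.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 512, 512, 128, 256

W = rng.normal(size=(d_out, d_in))      # frozen pre-trained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-init

# Effective weight seen at inference: W + (alpha / r) * B @ A
W_eff = W + (alpha / r) * B @ A

# With B zero-initialized, the adapted model starts identical to the base model.
assert np.allclose(W_eff, W)
print(W_eff.shape)  # (512, 512)
```

+ Because $B$ is zero-initialized, fine-tuning starts from the unmodified base model and only gradually injects the low-rank update, scaled here by $\alpha / r = 2$.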
170
+
171
+ For the probing experiments, we adopt a linear classifier as our probing model to reduce extraneous interference (Hewitt & Liang, 2019; Ju et al., 2024). We use a batch size of 16, learning rate of $1 \times 10^{-4}$ , Adam optimizer (Kingma & Ba, 2015), and 10 training epochs for all probing tasks.
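+ As a hedged sketch, the probing classifier amounts to a single linear layer trained with Adam on frozen representations. The data below is synthetic (real probes consume MLLM hidden states), and the learning rate is raised from the paper's $1 \times 10^{-4}$ so the toy example converges in a few steps:

```python
import numpy as np

# Toy linear probe: logistic regression trained with hand-rolled Adam updates.
rng = np.random.default_rng(1)
n, d = 256, 64
X = rng.normal(size=(n, d))                 # stand-in "hidden states"
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)          # synthetic binary labels

w, b = np.zeros(d), 0.0
m_w, v_w = np.zeros(d), np.zeros(d)
m_b = v_b = 0.0
lr, b1, b2, eps = 1e-1, 0.9, 0.999, 1e-8    # lr enlarged for the toy data

for t in range(1, 201):
    z = np.clip(X @ w + b, -30, 30)         # clip for numerical stability
    p = 1.0 / (1.0 + np.exp(-z))            # sigmoid
    gw = X.T @ (p - y) / n                  # gradient of BCE loss
    gb = float(np.mean(p - y))
    # Adam moment updates with bias correction
    m_w = b1 * m_w + (1 - b1) * gw; v_w = b2 * v_w + (1 - b2) * gw**2
    m_b = b1 * m_b + (1 - b1) * gb; v_b = b2 * v_b + (1 - b2) * gb**2
    w -= lr * (m_w / (1 - b1**t)) / (np.sqrt(v_w / (1 - b2**t)) + eps)
    b -= lr * (m_b / (1 - b1**t)) / (np.sqrt(v_b / (1 - b2**t)) + eps)

acc = float(np.mean(((X @ w + b) > 0) == (y > 0.5)))
print(acc)  # separable synthetic data, so accuracy is high
```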
172
+
173
+ # 4.2. How Does Task-Irrelevant Content Affect Fine-Tuning?
174
+
175
+ To explore how task-irrelevant content might affect the fine-tuning process, we first examine the performance of the MLLMs on standard VQA tasks before and after embedding the task-irrelevant private content. We present the evaluation performance in Table 2. Overall, the downstream VQA performance remains comparable after embedding, which indicates that our embeddings have negligible impact on the general capabilities of MLLMs.
178
+
179
+ Although MLLMs exhibit similar downstream task performance under varying settings of privacy embedding rate, this does not necessarily indicate that they follow the same training patterns. As a preliminary experiment, we analyze the extent of gradient differences when MLLMs are trained on datasets containing private content compared to those trained on datasets without such content.
180
+
181
+ Specifically, we first replicate the original MLLM into two independent copies in each iteration. Then we prepare two batches: one containing only the original data $\mathcal{B}_{\mathrm{orig}}$ , and one containing the same data but embedded with private content $\mathcal{B}_{\mathrm{priv}}$ . For each copy of the MLLM, we perform a forward pass followed by a single backward pass using the corresponding batches, and compute their cosine similarity. Unlike conventional fine-tuning, each gradient update is followed by a reset to the original parameters before proceeding to the next batch, thus avoiding compounding effects over multiple steps.
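+ The single-step comparison above can be sketched as follows: compute one gradient on the original batch and one on the modified batch from the same starting parameters, then take their cosine similarity. A toy quadratic loss stands in for the MLLM objective (the setup is hypothetical, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)

def grad(theta, batch):
    # gradient of the mean squared loss (1/|B|) * sum ||theta - x||^2
    return 2.0 * np.mean(theta[None, :] - batch, axis=0)

def cosine(g1, g2):
    return float(g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2)))

theta0 = rng.normal(size=32)                              # "pre-trained" weights
batch_orig = rng.normal(size=(8, 32))                     # B_orig
batch_priv = batch_orig + 0.1 * rng.normal(size=(8, 32))  # B_priv: same data + perturbation

# Parameters are never updated: each probe starts from theta0, mirroring the
# reset-after-each-step protocol that avoids compounding effects.
sim = cosine(grad(theta0, batch_orig), grad(theta0, batch_priv))
print(round(sim, 4))  # close to, but below, 1.0: the perturbation shifts the gradient
```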
182
+
183
+ We compare the above procedure against three baselines:
184
+
185
+ - Origin & Origin, where both batches are drawn from the original data in consecutive single-step updates, capturing the inherent noise during training;
186
+ - Origin & ImageTransf., which parallels the privacy-embedding comparison but employs image-level transformations: random rotation, flipping, brightness adjustment, and contrast adjustment;
+ 
+ Table 4. Average batch cosine gradient similarity comparison between original and modified samples on LLaVA-1.5 (7B) with multiple training updates.
+ 
+ <table><tr><td>Updates</td><td>Origin</td><td>w/ Privacy</td><td>ImageTransf.</td><td>TextTransf.</td></tr><tr><td>1</td><td>98.3</td><td>92.9</td><td>85.3</td><td>5.3</td></tr><tr><td>10</td><td>97.5</td><td>91.6</td><td>83.3</td><td>0.6</td></tr><tr><td>100</td><td>91.9</td><td>84.6</td><td>74.6</td><td>0.2</td></tr></table>
193
+
194
+ - Origin & TextTransf., where the text modality of the second batch is rephrased by GPT-4 to examine the effect of textual variation on gradients.
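+ The image-level transformations in the Origin & ImageTransf. baseline can be sketched as follows; this NumPy toy uses 90-degree rotations and illustrative brightness/contrast ranges, since the exact augmentation parameters are implementation details not fixed here:

```python
import numpy as np

rng = np.random.default_rng(3)

def transform(img):
    # random rotation in 90-degree steps
    img = np.rot90(img, k=int(rng.integers(0, 4)))
    # random horizontal flip
    if rng.random() < 0.5:
        img = img[:, ::-1]
    # brightness and contrast adjustment (illustrative ranges)
    brightness = rng.uniform(-0.1, 0.1)
    contrast = rng.uniform(0.8, 1.2)
    img = (img - img.mean()) * contrast + img.mean() + brightness
    return np.clip(img, 0.0, 1.0)

img = rng.uniform(size=(32, 32, 3))      # toy image with values in [0, 1]
out = transform(img)
print(out.shape)  # (32, 32, 3)
```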
195
+
196
+ We provide the average cosine gradient similarity on 100 separate batches in Table 3. Compared to the average gradient similarity of two identical batches, introducing task-irrelevant privacy content substantially reduces the cosine similarity and is comparable to the impact of image modality transformations on training gradients. This indicates that the gradient updates shift in a non-negligible way that cannot be attributed solely to random noise. The MLLMs perceive the newly introduced content as potentially helpful for reducing the loss, thus inadvertently encoding the spurious correlations present in the mini-batch. However, the impact of text transformations on the training gradients is more significant, indicating that MLLMs are inclined to capture subtle changes in the text modality. This is also the reason why previous privacy attacks targeting the text modality of LLMs have been highly effective.
197
+
198
+ We conduct additional experiments using LLaVA-1.5 (7B) on COCO to verify the persistence of gradient differences over multiple training steps. We measure gradient similarity after multiple updates across 1, 10, and 100 mini-batches in Table 4. The gradient similarity in all transformed scenarios gradually decreases as the number of mini-batch updates grows. Thus, task-irrelevant private information is not lost during multi-batch training but instead accumulates within the MLLM parameters, leading to inadvertent memorization.
199
+
200
+ # 4.3. Probing Experiments
201
+
202
+ Following the probing method introduced in Section 3, we first query the MLLM using the prompts "What is the username?" and "What is the user_id of the user?" on the probing dataset $\mathcal{D}_p$ . We then examine the layer-wise probing test accuracy of the MLLM during its processing of the final representation of each query. Since the probing set images only contain the username, the first query reflects the model's ability to recall task-irrelevant content encountered during training, while the second query requires the model to have a deeper understanding and memory to infer the user_id.
205
+
206
+ # 4.3.1. VISUALIZATION
207
+
208
+ To gain further insights into how the MLLM's representation space evolves under different privacy embedding rates, we project the final-layer hidden states corresponding to each query into two dimensions for visualization. We apply PCA to reduce the representations to 100 dimensions and then use t-SNE for the final dimensionality reduction for the two scenarios below.
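+ The two-stage projection can be sketched with scikit-learn; the hidden states below are synthetic stand-ins for the final-layer representations (two Gaussian clusters playing the role of seen vs. unseen content):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# PCA down to 100 dimensions, then t-SNE to 2-D, as in the visualization pipeline.
rng = np.random.default_rng(4)
seen = rng.normal(loc=0.0, size=(150, 128))
unseen = rng.normal(loc=2.0, size=(150, 128))
H = np.vstack([seen, unseen])            # stand-in final-layer representations

H_pca = PCA(n_components=100, random_state=0).fit_transform(H)
H_2d = TSNE(n_components=2, random_state=0, perplexity=30).fit_transform(H_pca)
print(H_2d.shape)  # (300, 2)
```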
209
+
210
+ Scenario I: Directly providing answer. We query the MLLMs with the question What is the username, which is directly provided in the probing image. Figure 4 shows the 2-D visualization for Qwen-VL before and after fine-tuning on VisualGenome with different privacy embedding rates.
211
+
212
+ Since the probing image explicitly contains the username, the MLLM can leverage the visually provided username to classify seen and unseen private content with an accuracy of $85.5\%$ . After fine-tuning on the dataset with embedded private content, the clusters corresponding to usernames in the seen and unseen subsets become more separable, with the accuracy increasing to over $90\%$ . Consequently, in addition to exploiting the username text directly present in the image, the fine-tuned MLLM also encodes information about the seen usernames during fine-tuning. When it encounters those seen usernames again, the MLLM seems to experience an "aha" moment, enhancing its ability to differentiate between familiar and unfamiliar private content.
213
+
214
+ Scenario II: Multi-hop reasoning for unseen user_id. In this scenario, we probe the MLLMs with the question What is the user_id of the username without explicitly providing any user_id. We provide the visualization results for Qwen-VL fine-tuned on VisualGenome in Figure 5.
215
+
216
+ Since the user_id does not appear in the probing image, the two-dimensional projection shows no strongly pronounced clusters separating seen and unseen user_ids; only a few loosely formed clusters emerge. Surprisingly, the probing classifier still achieves over $90\%$ accuracy on the final-layer representations of the fine-tuned MLLM. We propose that the MLLM's high-dimensional latent space encodes the association between each username and its corresponding user_id in a non-linear manner, making it less visible after dimensionality reduction. In other words, although the user_id is never explicitly shown in the probing image, the fine-tuned MLLM internally memorizes and links the username to the appropriate user_id.
217
+
218
+ # 4.3.2. LAYER-WISE CAPABILITIES
219
+
220
+ To gain deeper insights into how MLLMs encode task-irrelevant private content internally, we conduct fine-grained
221
+
222
+ ![](images/22df61d4090460645055b0c4ffa59aca26062142a973e333e110ef26de00797b.jpg)
223
+ (a) Before Fine-Tuning
224
+
225
+ ![](images/f55be044c0adffb73111d2c19cfea57ebcea4c79cdcd3b85e4c998be68217e20.jpg)
226
+ (b) Privacy Embedding Rate: $50\%$
227
+
228
+ ![](images/c4661033e1a282353501760fa0218ffc5caf57769ae08fc0421815a0e6ab4bc0.jpg)
229
+ (c) Privacy Embedding Rate: $100\%$
230
+
231
+ Figure 4. Visualization results for querying What is the username? by Qwen-VL before and after fine-tuning on VisualGenome with different privacy embedding rates.
+ 
+ ![](images/6496ee8bdfcf94544b36986ba11e9d66083631ed064d98c66c8409e6bdf86130.jpg)
+ (a) Before Fine-Tuning
+ 
+ ![](images/b3fcdde09b17273dee2590ee9ac3f49918693861667faa7044c8b6c9dc58a73c.jpg)
+ (b) Privacy Embedding Rate: $50\%$
+ 
+ ![](images/fe9a390fcc87687abf6d7c22a38c9fff488d30742bbe3341191ac22177599244.jpg)
+ (c) Privacy Embedding Rate: $100\%$
+ 
+ Figure 5. Visualization results for querying What is the user_id of the username? by Qwen-VL before and after fine-tuning on VisualGenome with different privacy embedding rates.
241
+
242
+ layer-wise probing under three privacy embedding rates: $0\%$ , $50\%$ , and $100\%$ . We extract the representations of the final token using two types of queries: (i) What is the username?, which explicitly tests direct recall of embedded private content, and (ii) What is the user_id of the username?, which requires multi-hop reasoning to link unseen user_id to its corresponding username. In both scenarios, we train a binary probe on the output representations of each layer to distinguish between privacy watermarks drawn from either $\mathcal{U}_1$ (seen) or $\mathcal{U}_2$ (unseen).
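+ A hedged sketch of the layer-wise probing loop: for each layer, fit a binary probe on that layer's final-token representations and record held-out accuracy. Here a closed-form least-squares classifier replaces the paper's Adam-trained probe, and the synthetic "layers" carry progressively stronger label signal:

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, n_layers = 400, 32, 6
y = rng.integers(0, 2, size=n)              # seen (1) vs. unseen (0) labels

def probe_accuracy(X, y):
    Xb = np.hstack([X, np.ones((n, 1))])    # add bias column
    tr, te = slice(0, 300), slice(300, 400)
    # least-squares probe against +/-1 targets
    w, *_ = np.linalg.lstsq(Xb[tr], 2.0 * y[tr] - 1.0, rcond=None)
    return float(np.mean((Xb[te] @ w > 0) == (y[te] == 1)))

accs = []
for layer in range(n_layers):
    signal = layer / (n_layers - 1)         # signal strength grows with depth
    X = rng.normal(size=(n, d))
    X[:, 0] += signal * (2.0 * y - 1.0)     # inject label signal into one dim
    accs.append(probe_accuracy(X, y))

print([round(a, 2) for a in accs])  # accuracy rises with layer depth
```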
243
+
244
+ Figure 6 shows the layer-wise probing accuracy for Qwen-VL fine-tuned on GQA. Compared to the original MLLM (without privacy embeddings), the fine-tuned MLLM exhibits substantially higher probing accuracy from the middle to upper layers, suggesting that the MLLM inadvertently encodes task-irrelevant knowledge during training. Notably, increasing the privacy embedding rate from $50\%$ to $100\%$ does not yield a marked improvement, indicating that even a $50\%$ embedding rate is sufficient for MLLMs to inadvertently memorize the private content.
245
+
246
+ Interestingly, for user_ids not present in the probing dataset, the fine-tuned MLLM also demonstrates higher probing accuracy in its middle and upper layers, suggesting an inadvertent acquisition of multi-hop reasoning linking usernames to user_ids. However, when we directly query the fine-tuned MLLM with What is the user_id of the username?, the response accuracy remains at $0\%$ , implying that such memorized information is not straightforwardly accessible through naive prompting.
249
+
250
+ # 4.4. MIAs for Task-Irrelevant Privacy
251
+
252
+ To further investigate whether task-irrelevant privacy can be easily exposed through MIA, we construct a dedicated MIA dataset by using GPT-4 to randomly generate 20 distinct samples embedding each piece of private information, yielding 100 member and 100 non-member instances.
253
+
254
+ We subsequently evaluate Qwen-VL Chat, comparing its behavior before and after fine-tuning on GQA with a privacy embedding rate of $100\%$ . We consider three popular MIA methods: LOSS (Yeom et al., 2018), Zlib Entropy (Carlini et al., 2021), and Min- $\mathbf{k}\%$ Prob (Shi et al., 2024). The results are presented in Table 5. They indicate only marginal changes in MIA performance after fine-tuning, which means that MIAs generally fail when facing such weak, task-irrelevant signals.
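+ The three MIA scores can be sketched from per-token log-probabilities; the values below are synthetic (a real attack would obtain them from the target MLLM), and the thresholds an attacker would apply are omitted:

```python
import zlib

# Toy per-token log-probs for a candidate member sequence (synthetic values).
text = "user_name: alice_w  user_id: 48213"
log_probs = [-0.2, -1.5, -0.4, -3.0, -0.1, -2.2, -0.3, -0.8]

# LOSS attack: average negative log-likelihood (lower => more likely a member)
loss = -sum(log_probs) / len(log_probs)

# Zlib entropy calibration: model loss divided by the text's zlib-compressed size
zlib_score = loss / len(zlib.compress(text.encode()))

# Min-k% Prob: average log-prob over the k% least-likely tokens (k = 25% here)
k = max(1, int(0.25 * len(log_probs)))
min_k = sum(sorted(log_probs)[:k]) / k

print(loss, zlib_score, min_k)
```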
255
+
256
+ ![](images/28828be4badab1d5e99b52e35a30bac0dffcb8355abbe82b96b57fd1a9ab64cc.jpg)
257
+ Figure 6. Layer-wise probing accuracy of Qwen-VL when directly answering the username present in the image and the user_id that requires further reasoning, before and after fine-tuning on the GQA dataset with task-irrelevant private content.
258
+
259
+ ![](images/1ddebd76888841a30fe38a5d6b328d62dfdbab42dc7f22d823fb2ec308cd6c2a.jpg)
260
+
261
+ Table 5. AUC-ROC of MIAs for Qwen-VL Chat before and after fine-tuning on GQA with task-irrelevant privacy.
262
+
263
+ <table><tr><td>Model</td><td>LOSS</td><td>Zlib Entropy</td><td>Min-k% Prob</td></tr><tr><td>Before Tuning</td><td>50.7</td><td>63.8</td><td>53.5</td></tr><tr><td>After Tuning</td><td>49.9</td><td>63.3</td><td>53.2</td></tr></table>
264
+
265
+ Table 6. Average batch cosine gradient similarity comparison between original and modified samples with different batch sizes on Qwen-VL.
266
+
267
+ <table><tr><td>Dataset</td><td>Batch Size = 1</td><td>Batch Size = 4</td><td>Batch Size = 8</td></tr><tr><td>COCO</td><td>92.0±4.2</td><td>95.2±1.7</td><td>96.7±1.5</td></tr><tr><td>GQA</td><td>91.4±8.3</td><td>96.2±1.1</td><td>97.0±0.4</td></tr><tr><td>OCR-VQA</td><td>88.8±7.0</td><td>93.4±2.4</td><td>95.1±1.7</td></tr><tr><td>TextVQA</td><td>78.4±6.2</td><td>81.1±6.7</td><td>84.6±3.9</td></tr><tr><td>VisualGenome</td><td>89.0±6.5</td><td>92.0±3.8</td><td>92.2±3.6</td></tr></table>
268
+
269
+ # 4.5. Ablation Study
270
+
271
+ # 4.5.1. IMPACT OF BATCH SIZE
272
+
273
+ To further verify that the spurious correlations we observe indeed stem from mini-batch training, we measure the average gradient difference between MLLMs trained with and without the embedded privacy content under different batch sizes in Table 6. Smaller batch sizes yield noticeably lower average cosine similarities and exhibit larger variance. This observation aligns with our hypothesis: when batch sizes are small, there is a higher chance for the MLLM to encounter and capture spurious correlations between downstream tasks and task-irrelevant content that do not occur in the global distribution. Since the MLLM updates its parameters based on these partial mini-batches, it may treat the spurious correlations as useful signals and encode them. Conversely, larger batch sizes reduce the chance of spurious alignments, resulting in gradients more consistent with fine-tuning on the original dataset.
274
+
275
+ Table 7. Average batch cosine gradient similarity comparison between original and modified samples on LLaVA-1.5 (13B).
276
+
277
+ <table><tr><td>Dataset</td><td>Origin</td><td>w/Privacy</td><td>ImageTransf.</td><td>TextTransf.</td></tr><tr><td>COCO</td><td>97.4</td><td>91.4</td><td>85.8</td><td>1.9</td></tr><tr><td>GQA</td><td>91.8</td><td>81.5</td><td>74.2</td><td>1.2</td></tr><tr><td>OCR-VQA</td><td>98.0</td><td>73.8</td><td>28.8</td><td>1.3</td></tr><tr><td>TextVQA</td><td>96.7</td><td>90.6</td><td>67.1</td><td>2.4</td></tr><tr><td>VisualGenome</td><td>89.1</td><td>78.8</td><td>73.5</td><td>2.9</td></tr></table>
278
+
279
+ Table 8. Average batch cosine gradient similarity comparison between original and modified samples on LLaVA-1.5 (7B) with LoRA rank set to 256.
280
+
281
+ <table><tr><td>Dataset</td><td>Origin</td><td>w/Privacy</td><td>ImageTransf.</td><td>TextTransf.</td></tr><tr><td>COCO</td><td>99.4</td><td>93.9</td><td>87.3</td><td>2.8</td></tr><tr><td>GQA</td><td>98.2</td><td>86.8</td><td>76.9</td><td>1.8</td></tr><tr><td>OCR-VQA</td><td>98.8</td><td>77.0</td><td>30.4</td><td>2.8</td></tr><tr><td>TextVQA</td><td>99.4</td><td>94.6</td><td>72.4</td><td>2.0</td></tr><tr><td>VisualGenome</td><td>97.6</td><td>87.0</td><td>75.6</td><td>2.6</td></tr></table>
282
+
283
+ # 4.5.2. IMPACT OF PARAMETER SCALES
284
+
285
+ We conduct additional experiments to investigate the impact of parameter scales. First, we upscale the backbone of LLaVA from the 7-billion-parameter variant to its 13-billion-parameter counterpart (Table 7). Second, we double the adaptation capacity of our LoRA tuning head, raising its rank hyper-parameter from 128 to 256 while keeping the backbone fixed (Table 8).
286
+
287
+ Our findings indicate that, across different parameter scales, the gradients obtained on privacy-embedded data diverge significantly from those of normal training. Notably, this divergence is amplified in the larger 13B model, suggesting that larger-scale MLLMs are more sensitive to subtle privacy signals and encode them more strongly into their parameters, thus exacerbating the privacy risk.
288
+
289
+ # 5. Related Work
290
+
291
+ # 5.1. Privacy Concerns in LLMs
292
+
293
+ Recent research has sought to understand the extent to which LLMs memorize and potentially leak sensitive training data (Li et al., 2023a; Satvaty et al., 2024; Ippolito et al., 2023). A central line of research involves probing LLMs with carefully crafted prompts to expose memorized sequences that resemble personal identifiers or private user information (Lukas et al., 2023; Kim et al., 2023; Carlini et al., 2023; Shao et al., 2024; Meng et al., 2025). Carlini et al. (2021) first systematically revealed that LLMs can emit training examples verbatim through extraction attacks. Subsequently, Tirumala et al. (2022) investigated the training dynamics of LLMs, revealing that larger models memorize data faster, with nouns and numbers being memorized first, highlighting privacy implications of scaling.
294
+
295
+ Building upon these findings, a growing body of work has focused on extracting private information with the help of model parameters and gradients. Among the most common methods are membership inference attacks (MIAs). Mireshghallah et al. (2022a) introduced the first application of MIA to explore privacy concerns encoded in Masked Language Models (MLMs) such as BERT (Devlin et al., 2019), demonstrating their susceptibility to privacy leakage through a novel likelihood ratio-based method. Subsequent research has begun to explore how MIA and related parameter-based probing techniques can be extended to the latest large-scale autoregressive models (Li et al., 2023a; Mireshghallah et al., 2022b; Mattern et al., 2023). Fu et al. (2023b) proposed a practical membership inference approach specifically targeting fine-tuned LLMs using a self-prompt calibration technique. Li et al. (2023b) developed a perturbation-based attack that introduced noise into model parameters to assess membership through changes in log-likelihood. However, recent studies have begun to critically examine the real-world effectiveness of MIA. Duan et al. (2024) systematically evaluated MIAs on LLMs and found that the attacks barely outperform random guessing.
296
+
297
+ # 5.2. Privacy Concerns in MLLMs
298
+
299
+ Compared to LLMs, privacy concerns in MLLMs remain less explored. Pinto et al. (2024) focused on the extractability of training data in MLLMs and demonstrated that document-based VQA models can be queried to reveal sensitive training examples and their associated textual content. Parallel to extraction-based methods, MIAs have begun to gain traction in the MLLM context, with Hu et al. (2022b) providing an early attempt. Following this line, Ko et al. (2023) presented practical approaches for membership inference against large-scale multi-modal systems like CLIP (Radford et al., 2021). Recently, Li et al. (2024) introduced the first systematic benchmarking of MIA for large vision-language models (VLLMs), unveiling new challenges specific to the multi-modal domain. Zharmagambetov et al. (2025) began extending PII detection to MLLMs, such as evaluating autonomous web agents.
302
+
303
+ However, these studies mainly focused on privacy leakage in scenarios where memorized information aligns to some extent with the training task. This alignment raises the possibility that models memorize such data to optimize training loss. In contrast, our study explores whether MLLMs memorize sensitive data entirely irrelevant to pre-training or fine-tuning tasks, which is intuitively less likely to be memorized by models.
304
+
305
+ # 6. Discussion and Future Directions
306
+
307
+ Despite our finding that MLLMs can inadvertently encode task-irrelevant content through spurious correlations in mini-batch training, existing attack methods are still unable to extract this information from such faint parameter signals. From the attacker's side, advanced methods could be explored to amplify these faint signals within the parameters.
308
+
309
+ From the defender's perspective, our paper suggests increasing batch sizes or using gradient accumulation to mitigate the inadvertent memorization of spurious correlations. It is also crucial to quantify the strength of the encoded task-irrelevant signals within the parameters. Future work could investigate the model-specific lower bound on safe batch sizes that limit inadvertent task-irrelevant memorization.
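+ The gradient-accumulation suggestion rests on the identity that averaging gradients over equally sized micro-batches reproduces the large-batch gradient exactly, which dilutes batch-local spurious correlations. A toy NumPy check of this identity:

```python
import numpy as np

rng = np.random.default_rng(6)
theta = rng.normal(size=16)
data = rng.normal(size=(32, 16))

def grad(theta, batch):
    # gradient of the mean squared loss (1/|B|) * sum ||theta - x||^2
    return 2.0 * np.mean(theta[None, :] - batch, axis=0)

# Large-batch gradient over all 32 samples
g_large = grad(theta, data)

# The same gradient accumulated over 4 micro-batches of 8
g_accum = np.mean([grad(theta, data[i:i + 8]) for i in range(0, 32, 8)], axis=0)

assert np.allclose(g_large, g_accum)
print("accumulated gradient matches large-batch gradient")
```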
310
+
311
+ # 7. Conclusion
312
+
313
+ In this paper, we investigate a critical yet underexplored question regarding whether MLLMs memorize private content that is entirely irrelevant to downstream tasks. We demonstrate that batch-wise training can induce inadvertent parameter updates correlated with randomly embedded private content, even when this content bears no direct relevance to the MLLM's primary training objective. Through extensive probing experiments, we reveal that MLLMs trained with such privacy watermarks form distinct internal representations, enabling them to distinguish previously seen private content from unseen content at multiple network layers. Notably, we find that while MLLMs do not necessarily reproduce the memorized knowledge through direct prompting or MIA, they nonetheless encode these task-irrelevant details in their parameter space. Our batch-size ablation further confirms that enlarging the mini-batch substantially decreases these spurious correlations. Together, our findings uncover a new dimension of privacy concerns in MLLMs, highlighting the importance of reevaluating training methodologies and developing robust privacy-preserving techniques that account for potential memorization of task-irrelevant private content.
314
+
315
+ # Acknowledgements
316
+
317
+ This work is partially supported by the Joint Funds of the National Natural Science Foundation of China (U21B2020), National Natural Science Foundation of China (62406188), and Natural Science Foundation of Shanghai (24ZR1440300).
318
+
319
+ # Impact Statement
320
+
321
+ This work explores how MLLMs can inadvertently memorize private information, even when such information is entirely irrelevant to the training objective. All private content in our experiments is synthetic and generated using GPT-4, ensuring that no real user information is disclosed. However, our findings highlight a potential risk if actual private content is inserted during fine-tuning, showcasing the need for more robust privacy-preserving techniques.
322
+
323
+ # References
324
+
325
+ Bai, J., Bai, S., Yang, S., Wang, S., Tan, S., Wang, P., Lin, J., Zhou, C., and Zhou, J. Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond. arXiv preprint arXiv:2308.12966, 2023.
326
+ Carlini, N., Tramér, F., Wallace, E., Jagielski, M., Herbert-Voss, A., Lee, K., Roberts, A., Brown, T. B., Song, D., Erlingsson, U., Oprea, A., and Raffel, C. Extracting training data from large language models. In Bailey, M. D. and Greenstadt, R. (eds.), 30th USENIX Security Symposium, USENIX Security 2021, August 11-13, 2021, pp. 2633-2650. USENIX Association, 2021.
327
+ Carlini, N., Ippolito, D., Jagielski, M., Lee, K., Tramér, F., and Zhang, C. Quantifying memorization across neural language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023.
328
+ Chen, Y., Mendes, E., Das, S., Xu, W., and Ritter, A. Can language models be instructed to protect personal information? CoRR, abs/2310.02224, 2023. doi: 10.48550/ARXIV.2310.02224.
329
+ de Faria, A. C. A. M., de Castro Bastos, F., da Silva, J. V. N. A., Fabris, V. L., de Sousa Uchoa, V., de Aguiar Neto, D. G., and dos Santos, C. F. G. Visual question answering: A survey on techniques and common trends in recent literature. CoRR, abs/2305.11033, 2023. doi: 10.48550/ ARXIV.2305.11033.
330
+ Devlin, J., Chang, M., Lee, K., and Toutanova, K. BERT: pre-training of deep bidirectional transformers for language understanding. In Burstein, J., Doran, C., and Solorio, T. (eds.), Proceedings of the 2019 Conference
331
+
332
+ of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pp. 4171-4186. Association for Computational Linguistics, 2019. doi: 10.18653/V1/N19-1423.
333
+ Duan, M., Suri, A., Mireshghallah, N., Min, S., Shi, W., Zettlemoyer, L., Tsvetkov, Y., Choi, Y., Evans, D., and Hajishirzi, H. Do membership inference attacks work on large language models? CoRR, abs/2402.07841, 2024. doi: 10.48550/ARXIV.2402.07841.
334
+ Fu, C., Chen, P., Shen, Y., Qin, Y., Zhang, M., Lin, X., Qiu, Z., Lin, W., Yang, J., Zheng, X., Li, K., Sun, X., and Ji, R. MME: A comprehensive evaluation benchmark for multimodal large language models. CoRR, abs/2306.13394, 2023a. doi: 10.48550/ARXIV.2306.13394.
335
+ Fu, W., Wang, H., Gao, C., Liu, G., Li, Y., and Jiang, T. Practical membership inference attacks against fine-tuned large language models via self-prompt calibration. CoRR, abs/2311.06062, 2023b. doi: 10.48550/ARXIV.2311.06062.
336
+ Hewitt, J. and Liang, P. Designing and interpreting probes with control tasks. In Inui, K., Jiang, J., Ng, V., and Wan, X. (eds.), Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pp. 2733-2743. Association for Computational Linguistics, 2019. doi: 10.18653/V1/D19-1275.
337
+ Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., and Chen, W. Lora: Low-rank adaptation of large language models. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022a.
338
+ Hu, P., Wang, Z., Sun, R., Wang, H., and Xue, M. M^4i: Multi-modal models membership inference. CoRR, abs/2209.06997, 2022b. doi: 10.48550/ARXIV.2209.06997.
339
+ Hudson, D. A. and Manning, C. D. GQA: A new dataset for real-world visual reasoning and compositional question answering. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pp. 6700-6709. Computer Vision Foundation / IEEE, 2019. doi: 10.1109/CVPR.2019.00686.
340
+ Ippolito, D., Tramér, F., Nasr, M., Zhang, C., Jagielski, M., Lee, K., Choquette-Choo, C. A., and Carlini, N. Preventing generation of verbatim memorization in language
341
+
342
+ models gives a false sense of privacy. In Keet, C. M., Lee, H., and Zarrieß, S. (eds.), Proceedings of the 16th International Natural Language Generation Conference, INLG 2023, Prague, Czechia, September 11 - 15, 2023, pp. 28-53. Association for Computational Linguistics, 2023. doi: 10.18653/V1/2023.INLG-MAIN.3.
343
+ Ju, T., Sun, W., Du, W., Yuan, X., Ren, Z., and Liu, G. How large language models encode context knowledge? A layer-wise probing study. In Calzolari, N., Kan, M., Hoste, V., Lenci, A., Sakti, S., and Xue, N. (eds.), Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, LREC/COLING 2024, 20-25 May, 2024, Torino, Italy, pp. 8235-8246. ELRA and ICCL, 2024.
344
+ Kim, S., Yun, S., Lee, H., Gubri, M., Yoon, S., and Oh, S. J. Propile: Probing privacy leakage in large language models. In Oh, A., Naumann, T., Globerson, A., Saenko, K., Hardt, M., and Levine, S. (eds.), Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023, 2023.
345
+ Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. In Bengio, Y. and LeCun, Y. (eds.), 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.
346
+ Ko, M., Jin, M., Wang, C., and Jia, R. Practical membership inference attacks against large-scale multi-modal models: A pilot study. In IEEE/CVF International Conference on Computer Vision, ICCV 2023, Paris, France, October 1-6, 2023, pp. 4848-4858. IEEE, 2023. doi: 10.1109/ICCV51070.2023.00449.
347
+ Krishna, R., Zhu, Y., Groth, O., Johnson, J., Hata, K., Kravitz, J., Chen, S., Kalantidis, Y., Li, L., Shamma, D. A., Bernstein, M. S., and Fei-Fei, L. Visual genome: Connecting language and vision using crowdsourced dense image annotations. Int. J. Comput. Vis., 123(1): 32-73, 2017. doi: 10.1007/S11263-016-0981-7.
348
+ Li, H., Chen, Y., Luo, J., Kang, Y., Zhang, X., Hu, Q., Chan, C., and Song, Y. Privacy in large language models: Attacks, defenses and future directions. CoRR, abs/2310.10383, 2023a. doi: 10.48550/ARXIV.2310.10383.
349
+ Li, M., Wang, J., Wang, J. G., and Neel, S. Mope: Model perturbation based privacy attacks on language models. In Bouamor, H., Pino, J., and Bali, K. (eds.), Proceedings of the 2023 Conference on Empirical Methods
350
+
351
+ in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pp. 13647-13660. Association for Computational Linguistics, 2023b. doi: 10.18653/V1/2023.EMNLP-MAIN.842.
352
+ Li, Z., Wu, Y., Chen, Y., Tonin, F., Rocamora, E. A., and Cevher, V. Membership inference attacks against large vision-language models, 2024.
353
+ Lin, T., Maire, M., Belongie, S. J., Hays, J., Perona, P., Ramanan, D., Dollar, P., and Zitnick, C. L. Microsoft COCO: common objects in context. In Fleet, D. J., Pajdla, T., Schiele, B., and Tuytelaars, T. (eds.), Computer Vision - ECCV 2014 - 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V, volume 8693 of Lecture Notes in Computer Science, pp. 740-755. Springer, 2014. doi: 10.1007/978-3-319-10602-1_48.
354
+ Liu, H., Li, C., Wu, Q., and Lee, Y. J. Visual instruction tuning. In Oh, A., Naumann, T., Globerson, A., Saenko, K., Hardt, M., and Levine, S. (eds.), Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023, 2023.
355
+ Liu, H., Li, C., Li, Y., and Lee, Y. J. Improved baselines with visual instruction tuning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024, Seattle, WA, USA, June 16-22, 2024, pp. 26286-26296. IEEE, 2024a. doi: 10.1109/CVPR52733.2024.02484.
356
+ Liu, Z., Dou, G., Jia, M., Tan, Z., Zeng, Q., Yuan, Y., and Jiang, M. Protecting privacy in multimodal large language models with mllmu-bench. CoRR, abs/2410.22108, 2024b. doi: 10.48550/ARXIV.2410.22108.
357
+ Lu, P., Mishra, S., Xia, T., Qiu, L., Chang, K., Zhu, S., Tafjord, O., Clark, P., and Kalyan, A. Learn to explain: Multimodal reasoning via thought chains for science question answering. In Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., and Oh, A. (eds.), Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022.
358
+ Lukas, N., Salem, A., Sim, R., Tople, S., Wutschitz, L., and Béguelin, S. Z. Analyzing leakage of personally identifiable information in language models. In 44th IEEE Symposium on Security and Privacy, SP 2023, San Francisco, CA, USA, May 21-25, 2023, pp. 346-363. IEEE, 2023. doi: 10.1109/SP46215.2023.10179300.
359
+ Ma, X., Zhang, Z., and Zhao, H. Coco-agent: A comprehensive cognitive MLLM agent for smartphone GUI automation. In Ku, L., Martins, A., and Srikumar, V. (eds.), Findings of the Association for Computational Linguistics, ACL 2024, Bangkok, Thailand and virtual meeting, August 11-16, 2024, pp. 9097-9110. Association for Computational Linguistics, 2024. doi: 10.18653/V1/2024.FINDINGS-ACL.539.
362
+ Mattern, J., Mireshghallah, F., Jin, Z., Schölkopf, B., Sachan, M., and Berg-Kirkpatrick, T. Membership inference attacks against language models via neighbourhood comparison. In Rogers, A., Boyd-Graber, J. L., and Okazaki, N. (eds.), Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, July 9-14, 2023, pp. 11330-11343. Association for Computational Linguistics, 2023. doi: 10.18653/V1/2023.FINDINGS-ACL.719.
363
+ Meng, W., Guo, Z., Wu, L., Gong, C., Liu, W., Li, W., Wei, C., and Chen, W. R.R.: unveiling LLM training privacy through recollection and ranking. CoRR, abs/2502.12658, 2025. doi: 10.48550/ARXIV.2502.12658. URL https://doi.org/10.48550/arXiv.2502.12658.
364
+ Mireshghallah, F., Goyal, K., Uniyal, A., Berg-Kirkpatrick, T., and Shokri, R. Quantifying privacy risks of masked language models using membership inference attacks. In Goldberg, Y., Kozareva, Z., and Zhang, Y. (eds.), Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pp. 8332-8347. Association for Computational Linguistics, 2022a. doi: 10.18653/V1/2022.EMNLP-MAIN.570.
365
+ Mireshghallah, F., Uniyal, A., Wang, T., Evans, D. E., and Berg-Kirkpatrick, T. An empirical analysis of memorization in fine-tuned autoregressive language models. In Goldberg, Y., Kozareva, Z., and Zhang, Y. (eds.), Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pp. 1816-1826. Association for Computational Linguistics, 2022b. doi: 10.18653/V1/2022.EMNLP-MAIN.119.
366
+ Mishra, A., Shekhar, S., Singh, A. K., and Chakraborty, A. OCR-VQA: visual question answering by reading text in images. In 2019 International Conference on Document Analysis and Recognition, ICDAR 2019, Sydney, Australia, September 20-25, 2019, pp. 947-952. IEEE, 2019. doi: 10.1109/ICDAR.2019.00156.
367
+ OpenAI. GPT-4 technical report. CoRR, abs/2303.08774, 2023. doi: 10.48550/ARXIV.2303.08774.
368
+ Pinto, F., Rauschmayr, N., Tramèr, F., Torr, P., and Tombari, F. Extracting training data from document-based VQA models. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024.
369
+
370
+ Qin, Y., Ye, Y., Fang, J., Wang, H., Liang, S., Tian, S., Zhang, J., Li, J., Li, Y., Huang, S., et al. Ui-tars: Pioneering automated gui interaction with native agents. arXiv preprint arXiv:2501.12326, 2025.
371
+ Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., and Sutskever, I. Learning transferable visual models from natural language supervision. In Meila, M. and Zhang, T. (eds.), Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pp. 8748-8763. PMLR, 2021.
372
+ Rawles, C., Li, A., Rodriguez, D., Riva, O., and Lillicrap, T. P. Android in the wild: A large-scale dataset for android device control. CoRR, abs/2307.10088, 2023. doi: 10.48550/ARXIV.2307.10088.
373
+ Satvaty, A., Verberne, S., and Turkmen, F. Undesirable memorization in large language models: A survey. CoRR, abs/2410.02650, 2024. doi: 10.48550/ARXIV.2410.02650.
374
+ Shao, H., Huang, J., Zheng, S., and Chang, K. Quantifying association capabilities of large language models and its implications on privacy leakage. In Graham, Y. and Purver, M. (eds.), Findings of the Association for Computational Linguistics: EACL 2024, St. Julian's, Malta, March 17-22, 2024, pp. 814-825. Association for Computational Linguistics, 2024. URL https://aclanthology.org/2024.findings-eacl.54.
375
+ Shi, W., Ajith, A., Xia, M., Huang, Y., Liu, D., Blevins, T., Chen, D., and Zettlemoyer, L. Detecting pretraining data from large language models. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024. URL https://openreview.net/forum?id=zWqr3MQuNs.
376
+ Singh, A., Natarajan, V., Shah, M., Jiang, Y., Chen, X., Batra, D., Parikh, D., and Rohrbach, M. Towards VQA models that can read. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pp. 8317-8326. Computer Vision Foundation / IEEE, 2019. doi: 10.1109/CVPR.2019.00851.
377
+ Smith, V., Shamsabadi, A. S., Ashurst, C., and Weller, A. Identifying and mitigating privacy risks stemming from language models: A survey. CoRR, abs/2310.01424, 2023. doi: 10.48550/ARXIV.2310.01424.
378
+
379
+ Tirumala, K., Markosyan, A. H., Zettlemoyer, L., and Aghajanyan, A. Memorization without overfitting: Analyzing the training dynamics of large language models. In Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., and Oh, A. (eds.), Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, 2022.
380
+ Wu, S., Fei, H., Qu, L., Ji, W., and Chua, T. Next-gpt: Any-to-any multimodal LLM. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024.
381
+ Xie, J., Chen, Z., Zhang, R., Wan, X., and Li, G. Large multimodal agents: A survey. CoRR, abs/2402.15116, 2024. doi: 10.48550/ARXIV.2402.15116.
382
+ Yeom, S., Giacomelli, I., Fredrikson, M., and Jha, S. Privacy risk in machine learning: Analyzing the connection to overfitting. In 31st IEEE Computer Security Foundations Symposium, CSF 2018, Oxford, United Kingdom, July 9-12, 2018, pp. 268-282. IEEE Computer Society, 2018. doi: 10.1109/CSF.2018.00027. URL https://doi.org/10.1109/CSF.2018.00027.
383
+ Yin, S., Fu, C., Zhao, S., Li, K., Sun, X., Xu, T., and Chen, E. A survey on multimodal large language models. CoRR, abs/2306.13549, 2023. doi: 10.48550/ARXIV.2306.13549.
384
+ Zhang, C., He, S., Qian, J., Li, B., Li, L., Qin, S., Kang, Y., Ma, M., Lin, Q., Rajmohan, S., et al. Large language model-brained gui agents: A survey. arXiv preprint arXiv:2411.18279, 2024.
385
+ Zhang, X., Quan, Y., Shen, C., Yuan, X., Yan, S., Xie, L., Wang, W., Gu, C., Tang, H., and Ye, J. From redundancy to relevance: Enhancing explainability in multimodal large language models. Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics, 2025.
386
+ Zharmagambetov, A., Guo, C., Evtimov, I., Pavlova, M., Salakhutdinov, R., and Chaudhuri, K. Agentdam: Privacy leakage evaluation for autonomous web agents, 2025. URL https://arxiv.org/abs/2503.09780.
387
+
388
+ # A. Mathematical Proof of Spurious Correlations in Mini-Batch Training
389
+
390
+ In this section, we formalize the theoretical foundation underlying the emergence of spurious correlations between task-irrelevant private content and downstream objectives during mini-batch training in MLLMs. Specifically, we model the MLLM output $\mathbf{y} \in \mathbb{R}^{d_1}$ and the task-irrelevant private content $\mathbf{u} \in \mathbb{R}^{d_2}$ as two independent high-dimensional random vectors, each following a multivariate normal distribution:
391
+
392
+ $$
393
+ \mathbf {y} \sim \mathcal {N} \left(\boldsymbol {\mu} _ {1}, \boldsymbol {\Sigma} _ {1}\right), \quad \mathbf {u} \sim \mathcal {N} \left(\boldsymbol {\mu} _ {2}, \boldsymbol {\Sigma} _ {2}\right). \tag {6}
394
+ $$
395
+
396
+ While the true probability distributions of $\mathbf{y}$ and $\mathbf{u}$ are unknown, since they correspond to natural language outputs and image watermarks, respectively, we assume both to follow multivariate normal distributions. This assumption is motivated by the Central Limit Theorem, which suggests that the aggregate of many independent factors tends toward a normal distribution in high-dimensional spaces.
397
+
398
+ Consider the vectors $\mathbf{y} \in \mathbb{R}^{d_1}$ and $\mathbf{u} \in \mathbb{R}^{d_2}$ , which are independently sampled $B$ times. Let us denote the sampled data as $\mathbf{y}_1, \mathbf{y}_2, \dots, \mathbf{y}_B$ and $\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_B$ . The sample covariance matrix between $\mathbf{y}$ and $\mathbf{u}$ is:
399
+
400
+ $$
401
+ \mathbf {C o v} (\mathbf {y}, \mathbf {u}) = \frac {1}{B - 1} \sum_ {i = 1} ^ {B} \left(\mathbf {y} _ {i} - \overline {{\mathbf {y}}}\right) \left(\mathbf {u} _ {i} - \overline {{\mathbf {u}}}\right) ^ {\top}, \tag {7}
402
+ $$
403
+
404
+ where $\overline{\mathbf{y}} = \frac{1}{B}\sum_{i=1}^{B}\mathbf{y}_i$ and $\overline{\mathbf{u}} = \frac{1}{B}\sum_{i=1}^{B}\mathbf{u}_i$ are the sample means of $\mathbf{y}$ and $\mathbf{u}$ , respectively. Owing to the independence of $\mathbf{y}$ and $\mathbf{u}$ and the linearity of expectation, the expectation of the sample covariance matrix is:
405
+
406
+ $$
407
+ \begin{aligned} \mathbb{E}[\mathbf{Cov}(\mathbf{y}, \mathbf{u})] &= \mathbb{E}\left[ \frac{1}{B-1} \sum_{i=1}^{B} \left(\mathbf{y}_i - \overline{\mathbf{y}}\right) \left(\mathbf{u}_i - \overline{\mathbf{u}}\right)^{\top} \right] \\ &= \frac{1}{B-1} \sum_{i=1}^{B} \mathbb{E}\left[ (\mathbf{y}_i - \overline{\mathbf{y}})(\mathbf{u}_i - \overline{\mathbf{u}})^{\top} \right] \\ &= \frac{1}{B-1} \sum_{i=1}^{B} \left( \mathbb{E}[\mathbf{y}_i \mathbf{u}_i^{\top}] - \mathbb{E}[\mathbf{y}_i]\, \mathbb{E}[\overline{\mathbf{u}}^{\top}] - \mathbb{E}[\overline{\mathbf{y}}]\, \mathbb{E}[\mathbf{u}_i^{\top}] + \mathbb{E}[\overline{\mathbf{y}}]\, \mathbb{E}[\overline{\mathbf{u}}^{\top}] \right) \\ &= \frac{1}{B-1} \sum_{i=1}^{B} \left( \boldsymbol{\mu}_1 \boldsymbol{\mu}_2^{\top} - \boldsymbol{\mu}_1 \boldsymbol{\mu}_2^{\top} - \boldsymbol{\mu}_1 \boldsymbol{\mu}_2^{\top} + \boldsymbol{\mu}_1 \boldsymbol{\mu}_2^{\top} \right) \\ &= \mathbf{0}. \tag{8} \end{aligned}
408
+ $$
409
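The estimator in Eq. (7) can be sketched directly in NumPy; the function name is ours, not from the paper, and the cross-block of `numpy.cov` serves as a cross-check:

```python
import numpy as np

def sample_cross_cov(Y, U):
    """Sample covariance matrix of Eq. (7): Y has shape (B, d1), U has shape (B, d2)."""
    B = Y.shape[0]
    Yc = Y - Y.mean(axis=0)   # center by the sample mean of y
    Uc = U - U.mean(axis=0)   # center by the sample mean of u
    return Yc.T @ Uc / (B - 1)  # (d1, d2) matrix of pairwise sample covariances
```

Its expectation is zero under independence, but any finite batch yields non-zero entries.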
+
410
+ Next, we analyze the variance of the sample covariance matrix.
411
+
412
+ $$
413
+ \begin{aligned} \mathrm{Var}\big(\mathbf{Cov}(\mathbf{y}, \mathbf{u})_{ij}\big) &= \mathrm{Var}\left( \frac{1}{B-1} \sum_{k=1}^{B} (y_{ik} - \overline{\mathbf{y}}_i)(u_{jk} - \overline{\mathbf{u}}_j) \right) \\ &= \frac{1}{(B-1)^2} \left[ \sum_{k=1}^{B} \mathrm{Var}(X_k) + 2 \sum_{1 \le k < \ell \le B} \mathrm{Cov}(X_k, X_{\ell}) \right] \\ &= \frac{1}{(B-1)^2} \left( B \sigma_{y_i}^2 \sigma_{u_j}^2 + B(B-1) \xi_{ij} \right) \\ &\le \frac{1}{(B-1)^2} \left( B \sigma_{y_i}^2 \sigma_{u_j}^2 + (B-1) \sigma_{y_i}^2 \sigma_{u_j}^2 \right) \\ &= \frac{(2B-1) \sigma_{y_i}^2 \sigma_{u_j}^2}{(B-1)^2} \\ &\le \frac{3 \sigma_{y_i}^2 \sigma_{u_j}^2}{B-1}, \tag{9} \end{aligned}
414
+ $$
415
+
416
+ where $X_k = (y_{ik} - \overline{\mathbf{y}}_i)(u_{jk} - \overline{\mathbf{u}}_j)$, and $\xi_{ij} = \mathrm{Cov}(X_1, X_2)$ captures the dependence introduced by the shared sample means and satisfies $0 \leq \xi_{ij} \leq \sigma_{y_i}^2 \sigma_{u_j}^2 / B$.
417
+
418
+ To formalize the presence of spurious correlations, we apply Chebyshev's inequality to each entry of the covariance matrix:
419
+
420
+ $$
421
+ \mathbb{P}\left( \left| \mathbf{Cov}(\mathbf{y}, \mathbf{u})_{ij} \right| \geq t \right) \leq \frac{3 \sigma_{y_i}^2 \sigma_{u_j}^2}{(B-1)\, t^2}. \tag{10}
422
+ $$
423
+
424
+ By selecting an appropriate threshold $t$, for instance $t = k\sqrt{\operatorname{Var}\big(\mathbf{Cov}(\mathbf{y},\mathbf{u})_{ij}\big)}$ for some constant $k > 0$, there remains a non-negligible probability that the sample covariance $\mathbf{Cov}(\mathbf{y},\mathbf{u})_{ij}$ exceeds $t$. Since this scale, of order $\sigma_{y_i}\sigma_{u_j}/\sqrt{B-1}$, grows as $B$ shrinks, significant spurious correlations emerge between the $i$-th dimension of $\mathbf{y}$ and the $j$-th dimension of $\mathbf{u}$.
425
+
426
+ Therefore, even though $\mathbf{y}$ and $\mathbf{u}$ are independent, the sample covariance matrix $\mathbf{Cov}(\mathbf{y},\mathbf{u})$ can exhibit significant non-zero entries with a probability bounded away from zero when the number of samples $B$ is small. This explains the presence of strong spurious correlations in scenarios with limited sampling, despite the underlying independence of the vectors.
427
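This small-batch effect is easy to check numerically. The dimensions, trial count, and seed below are illustrative choices of ours, not values from the paper:

```python
import numpy as np

def mean_peak_cov(B, d1=64, d2=16, trials=100, seed=0):
    """Average largest-magnitude entry of the Eq. (7) estimator Cov(y, u)
    over repeated draws of two independent Gaussian batches of size B."""
    rng = np.random.default_rng(seed)
    peaks = []
    for _ in range(trials):
        Y = rng.normal(size=(B, d1))
        U = rng.normal(size=(B, d2))
        C = (Y - Y.mean(0)).T @ (U - U.mean(0)) / (B - 1)
        peaks.append(np.abs(C).max())
    return float(np.mean(peaks))

# Shrinking the batch inflates the largest spurious entry, consistent with
# the O(1/(B-1)) variance bound of Eq. (9).
print(mean_peak_cov(B=8), mean_peak_cov(B=512))
```

The printed pair shows a markedly larger peak entry for the small batch, even though $\mathbf{y}$ and $\mathbf{u}$ are independent by construction.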
+
428
+ # B. Datasets
429
+
430
+ As described in the paper, we partition each dataset in a 6:4 ratio into a fine-tuning dataset $(\mathcal{D}_f)$ and a probing dataset $(\mathcal{D}_p)$ . The probing dataset is further split into training, validation, and testing subsets in a 6:2:2 ratio. Table 9 presents the statistics of these datasets.
431
+
432
+ Table 9. Statistics of fine-tuning and probing datasets used in experiments.
433
+
434
+ <table><tr><td rowspan="2">Dataset</td><td rowspan="2">Fine-Tuning (Df)</td><td colspan="3">Probing (Dp)</td></tr><tr><td>Training</td><td>Validation</td><td>Testing</td></tr><tr><td>COCO</td><td>70972</td><td>28389</td><td>9463</td><td>9463</td></tr><tr><td>GQA</td><td>89312</td><td>35725</td><td>11909</td><td>11908</td></tr><tr><td>OCR-VQA</td><td>124540</td><td>49817</td><td>16606</td><td>16605</td></tr><tr><td>TextVQA</td><td>15071</td><td>6029</td><td>2010</td><td>2009</td></tr><tr><td>VisualGenome</td><td>64948</td><td>25981</td><td>8660</td><td>8660</td></tr></table>
435
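The 6:4 and 6:2:2 partition above can be sketched as follows; the seeded shuffle is our assumption, since the paper does not state how samples are ordered before splitting:

```python
import random

def split_dataset(samples, seed=0):
    """6:4 split into fine-tuning (D_f) / probing (D_p), then 6:2:2
    within D_p into training / validation / testing (Appendix B)."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)  # assumed: shuffle before splitting
    n_f = int(0.6 * len(samples))
    d_f, d_p = samples[:n_f], samples[n_f:]
    n_tr, n_va = int(0.6 * len(d_p)), int(0.2 * len(d_p))
    return d_f, d_p[:n_tr], d_p[n_tr:n_tr + n_va], d_p[n_tr + n_va:]
```

With flooring on the 6:2:2 split, the validation and testing subsets can differ by one sample, matching e.g. the 11909/11908 GQA counts in Table 9.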
+
436
+ # C. Task-Irrelevant Private Content
437
+
438
+ We provide all the generated private content that we used in Table 10. For the fine-tuning dataset, each sample in $\mathcal{U}_1$ is selected for embedding with equal probability. For the probing dataset, all samples in $\mathcal{U}_1$ and $\mathcal{U}_2$ are equally likely to be selected for embedding.
439
+
440
+ Table 10. All generated task-irrelevant private content, where usernames (bold) are embedded in both fine-tuning datasets and probing datasets, while user_ids are embedded only in fine-tuning sets.
441
+
442
+ <table><tr><td>Subsets</td><td>Content</td></tr><tr><td>U1</td><td>username: Carlos Diaz, user_id: 5374982160
443
+ username: Sophia Chen, user_id: 8250947613
444
+ username: Ibrahim Al-Salem, user_id: 9823046571
445
+ username: Ava Murphy, user_id: 4147285690
446
+ username: Elena Mikhaylova, user_id: 3759408621</td></tr><tr><td>U2</td><td>username: Maximilian Schmidt, user_id: 6473920581
447
+ username: Vijay Sharma, user_id: 9073264815
448
+ username: Kim Jisoo, user_id: 7568210945
449
+ username: John Doe, user_id: 1234567890
450
+ username: Lucia Rodriguez, user_id: 8397162045</td></tr></table>
451
+
452
+ Table 11. Average batch cosine gradient similarity comparison between original and modified samples on Qwen-VL for COCO with different privacy subset sizes.
453
+
454
+ <table><tr><td>Origin</td><td>w/Privacy (Subset = 5)</td><td>w/Privacy (Subset = 100)</td><td>ImageTransf.</td><td>TextTransf.</td></tr><tr><td>100.0</td><td>97.0</td><td>93.2</td><td>93.8</td><td>49.4</td></tr></table>
455
+
456
+ # D. Data Transformation
457
+
458
+ In Table 3, we compared the changes in training gradients of LLMs after incorporating privacy with the changes in gradients resulting from performing data transformation separately in the text and image modalities. In this section, we provide a detailed description of the data transformation.
459
+
460
+ # D.1. Image Transformation
461
+
462
+ We adopt a simple rule-based image transformation pipeline. For each image in the batch:
463
+
464
+ - We randomly rotate it by an angle in the range of $[-30^{\circ}, +30^{\circ}]$ .
465
+ - With a probability of $50\%$ , we perform a horizontal flip.
466
+ - We randomly adjust brightness in the range [0.8, 1.2].
467
+ - We randomly adjust contrast in the range [0.8, 1.2].
468
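The four steps above can be sketched in NumPy alone; the paper does not name an implementation, and the nearest-neighbour rotation here is our simplification:

```python
import numpy as np

def rand_transform(img, rng):
    """Rule-based pipeline of App. D.1 on an HxWx3 float array in [0, 1]."""
    # 1) random rotation by an angle in [-30, +30] degrees (nearest-neighbour)
    theta = np.deg2rad(rng.uniform(-30.0, 30.0))
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    c, s = np.cos(theta), np.sin(theta)
    src_y = ((ys - cy) * c - (xs - cx) * s + cy).round().astype(int)
    src_x = ((ys - cy) * s + (xs - cx) * c + cx).round().astype(int)
    valid = (src_y >= 0) & (src_y < h) & (src_x >= 0) & (src_x < w)
    out = np.zeros_like(img)
    out[valid] = img[src_y[valid], src_x[valid]]
    img = out
    # 2) horizontal flip with probability 50%
    if rng.random() < 0.5:
        img = img[:, ::-1]
    # 3) brightness factor drawn from [0.8, 1.2]
    img = np.clip(img * rng.uniform(0.8, 1.2), 0.0, 1.0)
    # 4) contrast factor drawn from [0.8, 1.2], scaling around the mean
    mean = img.mean()
    return np.clip((img - mean) * rng.uniform(0.8, 1.2) + mean, 0.0, 1.0)
```

In practice the same ranges map onto standard library transforms (e.g. rotation and color-jitter utilities), but this sketch keeps the rules explicit.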
+
469
+ # D.2. Text Transformation
470
+
471
+ We employ GPT-4 to generate paraphrases of the existing question-answer pairs in our dataset. Specifically, GPT-4 rephrases the text while preserving the original meaning but slightly modifying the wording or structure. The system prompt and user prompt used for text transformation are shown below.
472
+
473
+ System Prompt: You are a helpful assistant that carefully modifies text while preserving the original meaning. You will only replace or slightly alter one or two words with synonyms, ensuring minimal change. Do not alter the text structure or meaning beyond this. If the text starts with `<image>`, keep that part exactly as is and do not remove or alter `<image>` in any way.
474
+
475
+ User Prompt: Original text: {rest_part} Rewrite it by changing only one or two words to synonyms without any other words. Do not add any unrelated content.
476
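Assembling the request from these prompts might look as follows; the helper name and the `<image>`-splitting logic are our assumptions about how `{rest_part}` is produced, not the paper's code:

```python
SYSTEM_PROMPT = (
    "You are a helpful assistant that carefully modifies text while preserving the "
    "original meaning. You will only replace or slightly alter one or two words with "
    "synonyms, ensuring minimal change. Do not alter the text structure or meaning "
    "beyond this. If the text starts with <image>, keep that part exactly as is and "
    "do not remove or alter <image> in any way."
)

def build_paraphrase_messages(text):
    """Split off a leading <image> token, then fill the user prompt template."""
    prefix, rest_part = "", text
    if text.startswith("<image>"):
        prefix, rest_part = "<image>", text[len("<image>"):].lstrip()
    user = (f"Original text: {rest_part} Rewrite it by changing only one or two words "
            "to synonyms without any other words. Do not add any unrelated content.")
    return prefix, [{"role": "system", "content": SYSTEM_PROMPT},
                    {"role": "user", "content": user}]
```

The returned messages list is what would be passed to a chat-completion API call, with the `<image>` prefix re-attached to the paraphrased output.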
+
477
+ # D.3. Examples
478
+
479
+ To offer a more intuitive illustration, we provide an example randomly selected from the COCO dataset in Figure 7. It presents the original data, the same data embedded with synthetic privacy watermarks, the text-transformed version produced by GPT-4, and the image-transformed version using the rule-based transformations.
480
+
481
+ # E. Impact of Privacy Subset Size
482
+
483
+ We perform an additional ablation study in which we increase the number of items within each subset from 5 to 100. Specifically, we ask GPT-4 to generate 100 distinct usernames and corresponding user_ids for each subset, requesting a duplicate check after each generation to avoid repetition. We use these 100 private items with Qwen-VL on COCO. Results are shown in Table 11. As the privacy subset size increases, the gradients of MLLMs deviate more strongly from the original gradient updates, indicating that MLLMs spend more of each gradient step learning the distinct pieces of private information.
484
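The similarity reported in Table 11 can be sketched as a cosine between flattened gradient sets; this is a hypothetical helper illustrating the metric, not the paper's code:

```python
import numpy as np

def batch_gradient_cosine(grads_a, grads_b):
    """Cosine similarity (in %) between two sets of per-parameter gradients,
    each flattened and concatenated into a single vector."""
    va = np.concatenate([np.ravel(g) for g in grads_a])
    vb = np.concatenate([np.ravel(g) for g in grads_b])
    return 100.0 * float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))
```

A batch whose gradients are unchanged scores 100.0; larger privacy subsets push the modified-batch gradients further from this baseline.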
+
485
+ ![](images/aed291c04b5b33788ab25440bda971da17d96e599a422cb6604b3a0103aae1d0.jpg)
486
+ Origin
487
+ Filename: 000000341431.jpg
488
+
489
+ Question: What time of day does the image capture?
490
+
491
+ Answer: The image captures a scene.
492
+
493
+ Question: How is the street illuminated in the image?
494
+
495
+ Answer: The street is illuminated by many street lights and the bright lights from the surrounding buildings.
496
+
497
+ Additionally, there is a time lapse effect causing blurred lights on the road.
498
+
499
+ Question: Is there any notable architecture in the image?
500
+
501
+ Answer: Yes, there is a notable piece of architecture in the image: a cathedral can be seen across the street, adding to the overall ambiance of the scene.
502
+
503
+ ![](images/a8e8ce07b5349f5cb87dfecb1eb2a9902e1d3526e606438069cf82cc4c91a1f5.jpg)
504
+ w/ Privacy
505
+ Filename: 000000341431.jpg
506
+
507
+ Question: What time of day does the image capture? Answer: The image captures a night scene.
508
+
509
+ Question: How is the street illuminated in the image?
510
+
511
+ Answer: The street is illuminated by many street lights and the bright lights from the surrounding buildings.
512
+
513
+ Additionally, there is a time lapse effect causing blurred lights on the road.
514
+
515
+ Question: Is there any notable architecture in the image?
516
+
517
+ Answer: Yes, there is a notable piece of architecture in the image: a cathedral can be seen across the street, adding to the overall ambience of the scene.
518
+
519
+ ![](images/ee9c323800fe12cf517b49077f09bc2c524a8d387d2f1a28c1a0787eb660cbe5.jpg)
520
+ Image Augmentation
521
+ Filename: 000000341431.jpg
522
+ Figure 7. Examples of data used for computing gradient similarity from COCO.
523
+
524
+ Question: What time of day does the image capture?
525
+
526
+ Answer: The image captures a night scene.
527
+
528
+ Question: How is the street illuminated in the image?
529
+
530
+ Answer: The street is illuminated by many street lights and the bright lights from the surrounding buildings.
531
+
532
+ Additionally, there is a time lapse effect causing blurred lights on the road.
533
+
534
+ Question: Is there any notable architecture in the image?
535
+
536
+ Answer: Yes, there is a notable piece of architecture in the image: a cathedral can be seen across the street, adding to the overall ambience of the scene.
537
+
538
+ ![](images/f169cab0bb19a015f5179cda41888fea58466ea272ab9ed713eaeac4922cc820.jpg)
539
+ Text Augmentation
540
+ Filename: 000000341431.jpg
541
+
542
+ Question: What moment of day does the image capture?
543
+
544
+ Answer: The image captures a dark scene.
545
+
546
+ Question: How is the road illuminated in the image?
547
+
548
+ Answer: The street is illuminated by numerous street lamps and the vibrant lights from the surrounding buildings.
549
+
550
+ Additionally, there is a time lapse effect causing blurred lights on the road.
551
+
552
+ Question: Is there any distinctive architecture in the image?
553
+
554
+ Answer: Yes, there is a notable piece of architecture in the image: a church can be seen across the street, adding to the overall atmosphere of the scene.
ICML/2025/Watch Out Your Album! On the Inadvertent Privacy Memorization in Multi-Modal Large Language Models/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9add2fbba90c93fbbae448c320c24d16992a4b7000987ba12e23aa179e4cb8e5
3
+ size 822194
ICML/2025/Watch Out Your Album! On the Inadvertent Privacy Memorization in Multi-Modal Large Language Models/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:639a14842bd5440b4adff17b7df23a2f91a2ca8e54cb8519bdaf5b8af21ea20b
3
+ size 655637
ICML/2025/WeGeFT_ Weight-Generative Fine-Tuning for Multi-Faceted Efficient Adaptation of Large Models/34e41b65-b794-40d0-b44d-e5278bd41627_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:76f6e2412e8d4ae91e533688db38a34bff08600afbb039a74ba6cea55e0d49fe
3
+ size 151333
ICML/2025/WeGeFT_ Weight-Generative Fine-Tuning for Multi-Faceted Efficient Adaptation of Large Models/34e41b65-b794-40d0-b44d-e5278bd41627_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cfb00d345d91741b8c9c99f2b481b780026a1de6bb452e73d4e3499307f1e890
3
+ size 186181
ICML/2025/WeGeFT_ Weight-Generative Fine-Tuning for Multi-Faceted Efficient Adaptation of Large Models/34e41b65-b794-40d0-b44d-e5278bd41627_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c6def9a7fc2ca0f4838ee2ac89ac5fb96e767f81adabe7c3ff59beba18bff645
3
+ size 5164356
ICML/2025/WeGeFT_ Weight-Generative Fine-Tuning for Multi-Faceted Efficient Adaptation of Large Models/full.md ADDED
The diff for this file is too large to render. See raw diff
 
ICML/2025/WeGeFT_ Weight-Generative Fine-Tuning for Multi-Faceted Efficient Adaptation of Large Models/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a875a01b5e13c4af841e96580cd4f31e50a66bdb6fb3829e9697d34ecc6f2e45
3
+ size 1401829
ICML/2025/WeGeFT_ Weight-Generative Fine-Tuning for Multi-Faceted Efficient Adaptation of Large Models/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ec17b4cec06920944a58b376db8a55d0b81436485eda2ca27120e3f6dd0b0cc2
3
+ size 684788
ICML/2025/Weak-to-Strong Generalization Even in Random Feature Networks, Provably/974297f8-fc9e-45fc-b451-0b90c141ab6d_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:915ff72ee84f5f0ccf89416cf1185b79a9d87b110f63d69bbe80b1c0e2d53eed
3
+ size 322184
ICML/2025/Weak-to-Strong Generalization Even in Random Feature Networks, Provably/974297f8-fc9e-45fc-b451-0b90c141ab6d_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:71de5b8d74e5e90d0c032077ef3a85792f9741f9838cb5ee5c0af70222d15693
3
+ size 365436
ICML/2025/Weak-to-Strong Generalization Even in Random Feature Networks, Provably/974297f8-fc9e-45fc-b451-0b90c141ab6d_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:74a9433409c6509936b4aa64634fa4ff51ed205df85350cc8134789896e5d37f
3
+ size 1105042
ICML/2025/Weak-to-Strong Generalization Even in Random Feature Networks, Provably/full.md ADDED
The diff for this file is too large to render. See raw diff
 
ICML/2025/Weak-to-Strong Generalization Even in Random Feature Networks, Provably/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7a5ddef469faa245f61fb3b3d9418280345c6ad5445657aeed10ba4b84756238
3
+ size 1901262
ICML/2025/Weak-to-Strong Generalization Even in Random Feature Networks, Provably/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fae7ca6a6321becb68dab7f9e3550132b986531d52f660bb4843c10fa1ad1f41
3
+ size 2245573
ICML/2025/Weak-to-Strong Jailbreaking on Large Language Models/26447cb1-6762-4fc7-8b21-dce243a2a53e_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6d327b39587551cba2e15a737679a3e52e08fa929057c6cc54d747631d0b252d
3
+ size 151040
ICML/2025/Weak-to-Strong Jailbreaking on Large Language Models/26447cb1-6762-4fc7-8b21-dce243a2a53e_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d8fa1543e58dbbffb5917d25b0234ffedad9b63a38fc034089882875ed7b2f4b
3
+ size 192305
ICML/2025/Weak-to-Strong Jailbreaking on Large Language Models/26447cb1-6762-4fc7-8b21-dce243a2a53e_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6943e2db225dd0d22167c8118f99d528a8b22c6fa992cff83bc2c6d6151379f3
3
+ size 774177
ICML/2025/Weak-to-Strong Jailbreaking on Large Language Models/full.md ADDED
The diff for this file is too large to render. See raw diff
 
ICML/2025/Weak-to-Strong Jailbreaking on Large Language Models/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:37188cb1d21e3a80df1e360496a0437cbe5a5e8f901aad308de65938118b5b93
3
+ size 1470174
ICML/2025/Weak-to-Strong Jailbreaking on Large Language Models/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5123f67f457310a377c6ac9d625d1692a452e15218cbc7d311a21e359d72bd63
3
+ size 662947
ICML/2025/Weakly Supervised Anomaly Detection via Dual-Tailed Kernel/7f19c4d1-6bde-4982-9ca4-2fda95c891a0_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d964dea62c6629b97b2a101ac6208c48319a2d22033fd495aafcab825e2686c5
3
+ size 231922
ICML/2025/Weakly Supervised Anomaly Detection via Dual-Tailed Kernel/7f19c4d1-6bde-4982-9ca4-2fda95c891a0_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:14a51520fc93a81223eb5f3a7cb880c7fe7cd47f34564888faf7d5c08dd4bc66
3
+ size 274420
ICML/2025/Weakly Supervised Anomaly Detection via Dual-Tailed Kernel/7f19c4d1-6bde-4982-9ca4-2fda95c891a0_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5d23fa8f78c16bb4b22b9043160550c927d9ad9130e22ad6c8f6b33384dbac92
3
+ size 3436757
ICML/2025/Weakly Supervised Anomaly Detection via Dual-Tailed Kernel/full.md ADDED
The diff for this file is too large to render. See raw diff
 
ICML/2025/Weakly Supervised Anomaly Detection via Dual-Tailed Kernel/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4dcb3607139f57337c6f5bfe96c3ff1ee390f7dd8fe30b056086490f8efe4fa9
3
+ size 2865047
ICML/2025/Weakly Supervised Anomaly Detection via Dual-Tailed Kernel/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2cdbb2b1bec8b174bb4f8c3b1b1dbb4416654a1cd464a24a5d793812b0ff491d
3
+ size 1218177
ICML/2025/Weakly-Supervised Contrastive Learning for Imprecise Class Labels/a626d3c5-8733-4aa8-8e8b-36758bb8d546_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:05a5f7c26a45374e234de97f7d16ec6891014df2fd3a906d9456115a3457cc09
3
+ size 311510
ICML/2025/Weakly-Supervised Contrastive Learning for Imprecise Class Labels/a626d3c5-8733-4aa8-8e8b-36758bb8d546_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a6f969809173a373073996627e002d6fdb2533199ec6ddedc82faa47d8767628
3
+ size 357211
ICML/2025/Weakly-Supervised Contrastive Learning for Imprecise Class Labels/a626d3c5-8733-4aa8-8e8b-36758bb8d546_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e45dd80e852ad1477d55387dabe22f9bb7d0d88ff38d79e9316452f59ad0e1c7
3
+ size 1410716
ICML/2025/Weakly-Supervised Contrastive Learning for Imprecise Class Labels/full.md ADDED
The diff for this file is too large to render. See raw diff
 
ICML/2025/Weakly-Supervised Contrastive Learning for Imprecise Class Labels/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:72656226fc609a916e847e092b0d20edc0c2f8258e788d75220d12b42e34ebc7
3
+ size 2879533
ICML/2025/Weakly-Supervised Contrastive Learning for Imprecise Class Labels/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8055ad45c01b2acff8f32b9b4e121a1999aea2c315d59fb12117dbb31e810223
3
+ size 1521384
ICML/2025/Weight matrices compression based on PDB model in deep neural networks/f40ea7c7-998f-45fa-94a0-efc5a977293a_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:17e9c9d163308b4379a1cc59f52d3432fc5b27ce5834ea5dfd57394ef125325c
3
+ size 127777
ICML/2025/Weight matrices compression based on PDB model in deep neural networks/f40ea7c7-998f-45fa-94a0-efc5a977293a_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9826efd85255ffb63123917d714724e1496ed58003d9add96eeec96b0b1743b9
3
+ size 154694
ICML/2025/Weight matrices compression based on PDB model in deep neural networks/f40ea7c7-998f-45fa-94a0-efc5a977293a_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2f306b566bd748c602dcc5c162291521b6108aa8dc1d816cbf26321f67f1a6de
3
+ size 4613144
ICML/2025/Weight matrices compression based on PDB model in deep neural networks/full.md ADDED
@@ -0,0 +1,717 @@
+ # Weight matrices compression based on PDB model in deep neural networks
+
+ Xiaoling Wu $^{*1}$ Junpeng Zhu $^{*1}$ Zeng Li
+
+ # Abstract
+
+ Weight matrix compression has been demonstrated to effectively reduce overfitting and improve the generalization performance of deep neural networks. Compression is primarily achieved by filtering out noisy eigenvalues of the weight matrix. In this work, a novel Population Double Bulk (PDB) model is proposed to characterize the eigenvalue behavior of the weight matrix, which is more general than the existing Population Unit Bulk (PUB) model. Based on the PDB model and Random Matrix Theory (RMT), we derive a new PDBLS algorithm for determining the boundary between noisy eigenvalues and information. A PDB Noise-Filtering algorithm is further introduced to reduce the rank of the weight matrix for compression. Experiments show that our PDB model fits the empirical distribution of eigenvalues of the weight matrix better than the PUB model, and our compressed weight matrices have lower rank at the same level of test accuracy. In some cases, our compression method can even improve generalization performance when labels contain noise. The code is available at https://github.com/xlwu571/PDBLS.
+
+ # 1. Introduction
+
+ Deep Neural Networks (DNN) have achieved outstanding performance in many fields such as computer vision (Leek et al., 2022), speech recognition (Mohanty et al., 2022) and recommendation systems (Da'u & Salim, 2020). Deeper and wider DNN frameworks have demonstrated superior learning performance (Yang et al., 2019), but many of these networks are extremely over-parameterized and prone to overfitting. Recently, some low-rank compression techniques have been applied to avoid overfitting by removing small singular values of the weight matrix. (Xu et al., 2019)
+
+ * Equal contribution ${}^{1}$ Department of Statistics and Data Science, Southern University of Science and Technology, Shenzhen, China. Correspondence to: Zeng Li <liz9@sustech.edu.cn>.
+
+ Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
+
+ propose a training scheme that promotes low-rank filters via nuclear-norm regularization and achieves a low-rank approximation of the original filters. (Idelbayev & Carreira-Perpinan, 2020) emphasize determining the ideal rank for each layer and approximating the weights by adding a penalty term on the rank. (Liebenwein et al., 2021) further propose a compression architecture that uses SVD for local layer compression and minimizes the maximum relative error across layers for global compression. However, these methods rely on hyperparameters and are generally difficult to optimize.
+
+ Recently, to understand the training mechanism and generalization performance of neural networks, RMT has succeeded in providing theoretical explanations and improving generalization ability by analyzing spectral properties of the weight matrices of neural networks, see (Berlyand et al., 2023; Martin & Mahoney, 2020). For a weight matrix $\mathbf{W}_{n\times p}$, RMT is mainly employed to study the eigenvalues $\{\lambda_1\geq \dots \geq \lambda_p\}$ of $\mathbf{W}^T\mathbf{W}$, in particular the limit of $F_{n}^{\mathbf{W}^{T}\mathbf{W}}(x) = \frac{1}{p}\sum_{j = 1}^{p}\mathbb{1}_{\{\lambda_{j}\leq x\}}$, called the Limiting Spectral Distribution (LSD). According to (Martin & Mahoney, 2021), the training process of weight matrices mainly goes through three phases: the initial phase, the bulk+spikes phase and the heavy-tailed phase (Fig. 1). The randomly initialized weight matrix $\mathbf{W}_0$ satisfies $\mathbb{E}\mathbf{W}_0^T\mathbf{W}_0 = \sigma_0^2\mathbf{I}_p$, where $\sigma_0^2$ depends on the initial distribution of the entries of $\mathbf{W}_0$. At this stage, the LSD of $\mathbf{W}_0^T\mathbf{W}_0$ matches the Marčenko–Pastur (MP) law. During the training process, the information learned by the weight matrix is mainly reflected in a few spiked eigenvalues exceeding the MP edge $\lambda_{+}$, and the initial variance also changes from $\sigma_0^2$ to $\sigma^2$. In this phase, the PUB model, which assumes $\mathbb{E}\mathbf{W}^T\mathbf{W} = \mathrm{diag}(\underbrace{\alpha_1,\ldots,\alpha_K}_{K\text{ spikes}},\underbrace{\sigma^2,\ldots,\sigma^2}_{\text{bulk}})\coloneqq \Sigma_{PUB}$, is often used to analyze the trained weight matrices. Correspondingly, $\mathbf{W}^T\mathbf{W}$ exhibits several comparatively large sample spiked eigenvalues, alongside clustered bulk eigenvalues. In the final training phase, the eigenvalues of $\mathbf{W}^T\mathbf{W}$ gradually follow a heavy-tailed distribution, but this phase is not very common. Thus, we mainly focus on the bulk+spikes phase.
+
+ Furthermore, (Staats et al., 2023) have verified that in the bulk+spikes phase, large spiked eigenvalues learn rules from the data while small bulk eigenvalues do not contribute to model learning. They propose to compress the network by removing
+
+ ![](images/8b8623063e58db38eb9703e1e5490bbc3378778f14dfb1d1299c2819540d12b0.jpg)
+ (a) Initial
+
+ ![](images/fd68328d3a06a64188bd5993c2302579b93a009b41ff31cd76272a9902eba754.jpg)
+ (b) Bulk+Spikes
+
+ ![](images/7be53230011a9990d82ce3e0fff795693728f423fe080026939f406b4d768292.jpg)
+ (c) Heavy-tailed
+ Figure 1. The three phases of the histogram (ESD) of eigenvalues of $\mathbf{W}^T\mathbf{W}$ during network training.
+
+ small bulk eigenvalues and recovering spiked eigenvalues of $\Sigma_{PUB}$ from $\mathbf{W}^T\mathbf{W}$, thereby improving the generalization performance. A follow-up question is how to determine the boundary between informative spikes and noisy bulk eigenvalues. (Staats et al., 2023) determine the boundary by cross-validation, which is purely data-driven without theoretical support and is quite computationally expensive. (Ke et al., 2023) propose the Bulk Eigenvalue Matching Analysis (BEMA) algorithm based on the PUB model. Using BEMA, a weight matrix pruning algorithm is further developed by (Shmalo et al., 2023). However, the existing compression algorithms rely either on the PUB model or on empirical adjustments without any theoretical support.
+
+ Regrettably, the assumption of the PUB model is too restrictive, as it requires $\mathbf{W}^T\mathbf{W}$ to have a homogeneous population variance $\sigma^2$. It also fails to accurately capture the eigenvalue distribution of $\mathbf{W}^T\mathbf{W}$ in many empirical studies. Therefore, we consider a more general PDB model to accommodate heterogeneous population variances. Specifically, we propose that during the training process, the initial $\mathbb{E}\mathbf{W}_0^T\mathbf{W}_0 = \sigma_0^2\mathbf{I}_p$ will evolve into $\mathbb{E}\mathbf{W}^T\mathbf{W} = \mathrm{diag}(\underbrace{\alpha_{1},\ldots,\alpha_{K}}_{K\text{ spikes}},\underbrace{\sigma_{1}^{2},\ldots,\sigma_{1}^{2}}_{\mathrm{bulk1}},\underbrace{\sigma_{2}^{2},\ldots,\sigma_{2}^{2}}_{\mathrm{bulk2}})$. Note that when
+
+ $\sigma_1^2 = \sigma_2^2$, our PDB model degenerates to the PUB model. We did not adopt more general $M$-bulk ($M \geq 3$) models because we observe that the proportion of the additional bulks is negligible and the PDB model already shows clearly superior performance over the PUB model in Fig. 2.
+
+ Based on the PDB model, we posit that both the spikes and bulk1 contain valuable information, whereas bulk2 predominantly represents noise. We further propose a Population Double Bulk Least Squares (PDBLS) algorithm to estimate the structure of $\mathbb{E}\mathbf{W}^T\mathbf{W}$, from which we can determine the boundary between noisy eigenvalues and information. Moreover, as shown in Fig. 5, we develop a new PDB Noise-Filtering algorithm to compress the weight matrix by removing only the smaller bulk2 eigenvalues while
+
+ keeping bulk1. The spiked information of $\mathbb{E}\mathbf{W}^T\mathbf{W}$ is also recovered. Experiments demonstrate that our proposed PDB model outperforms the PUB model in fitting the eigenvalues of $\mathbf{W}^T\mathbf{W}$, and our compressed weight matrices exhibit lower rank while preserving test accuracy, or even improving it when the labels contain noise. In summary, our contributions include:
+
+ 1. We propose a generalized PDB model to characterize the eigenvalue behavior of $\mathbf{W}^T\mathbf{W}$ in the bulk+spikes phase. This model accurately captures the empirical distribution of eigenvalues, thus confirming its validity.
+ 2. We propose an efficient hyperparameter-free algorithm, PDBLS, to estimate $\mathbb{E}\mathbf{W}^T\mathbf{W}$, which establishes the relationship between the eigenvalues of $\mathbf{W}^T\mathbf{W}$ and $\mathbb{E}\mathbf{W}^T\mathbf{W}$. This enables us to establish a boundary between noise and information and determine how much information of $\mathbf{W}^T\mathbf{W}$ needs to be retained.
+ 3. We introduce a novel PDB Noise-Filtering algorithm to compress the weight matrix by removing noisy eigenvalues and retrieving information from $\mathbb{E}\mathbf{W}^T\mathbf{W}$. Our algorithm also recommends the best compression ratio of the weight matrix, beyond which further compression leads to a significant loss of generalization ability. Experiments show that our approach can significantly reduce the rank of the weight matrix $\mathbf{W}$ while preserving network generalization performance, and even enhance it in the presence of noise.
+
+ # 2. Motivation: from PUB to PDB model
+
+ In this section, we explain our motivation for choosing the PDB model.
+
+ ![](images/48d21a0271c7d8e505663ca3ad12d0b5fc4a552d9b71ef333c3a1d14ecc1dc23.jpg)
+ (a) Initial matrix
+
+ ![](images/47278459ec1f1d058599ca0a024f0fa65b07b27ef9af90ae26b26df4d56b2d57.jpg)
+ (b) PUB model
+
+ ![](images/8bfbc240fefcacfa1f0087bdd81567db87c92c7def43f9eefc0f8a2fa12034cb.jpg)
+ (c) PDB model
+ Figure 2. The histogram (ESD) and LSD of eigenvalues of $\mathbf{W}^T\mathbf{W}$, where (a) shows an initial matrix, and (b) and (c) show a trained matrix under the PUB and PDB models. The vertical dashed lines represent the noise-information boundaries under the two models.
+
+ # 2.1. PUB model
+
+ Recently, for a weight matrix $\mathbf{W} \in \mathbb{R}^{n \times p}$, (Martin & Mahoney, 2021; Staats et al., 2023) use RMT to study the eigenvalue behavior of $\mathbf{W}^T\mathbf{W}$ before and after training. In the initialization phase, the entries of $\mathbf{W}_0$ are generated i.i.d., satisfying $\boldsymbol{\Sigma}_0 = \mathbb{E}\mathbf{W}_0^T\mathbf{W}_0 = \sigma_0^2\mathbf{I}_p$, and the LSD of $\mathbf{W}_0^T\mathbf{W}_0$ has density function (MP law):
+
+ $$
+ f(x; c, \sigma_0^2) = \frac{\sqrt{(\lambda_+ - x)(x - \lambda_-)}}{2\pi \sigma_0^2 c x} \mathbb{1}_{\{\lambda_- \leq x \leq \lambda_+\}}.
+ $$
+
+ Here the boundary points are $\lambda_{\pm}(\sigma_0^2) = \sigma_0^2 (1\pm \sqrt{c})^2$ with $c = \frac{p}{n}$.
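As an aside, the MP edges and density above are straightforward to evaluate numerically; a minimal NumPy sketch (our own illustration, not part of the paper's code; function names are ours):

```python
import numpy as np

def mp_edges(sigma0_sq, c):
    """Bulk edges lambda_± = sigma_0^2 (1 ± sqrt(c))^2 of the MP law."""
    return sigma0_sq * (1 - np.sqrt(c)) ** 2, sigma0_sq * (1 + np.sqrt(c)) ** 2

def mp_density(x, sigma0_sq, c):
    """MP density f(x; c, sigma_0^2); zero outside [lambda_-, lambda_+]."""
    lam_m, lam_p = mp_edges(sigma0_sq, c)
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = (x > lam_m) & (x < lam_p)
    out[inside] = np.sqrt((lam_p - x[inside]) * (x[inside] - lam_m)) / (
        2 * np.pi * sigma0_sq * c * x[inside])
    return out
```

For $c \leq 1$ the density integrates to one over its support, which provides a quick sanity check.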
+
+ (Martin & Mahoney, 2021) point out that after training, the eigenvalues of $\mathbf{W}^T\mathbf{W}$ can be divided into two categories: the bulk eigenvalues and the spiked eigenvalues. As shown in Fig. 2, the bulk eigenvalues are tightly clustered, with their histogram conforming to the LSD. On the contrary, the spiked eigenvalues are far from the bulk eigenvalues and lie outside the boundaries of the LSD. In response to this phenomenon, (Shmalo et al., 2023; Staats et al., 2023) explore a PUB model to characterize the eigenvalues of $\mathbf{W}^T\mathbf{W}$ in the bulk+spikes phase. They assume, after training,
+
+ $$
+ \boldsymbol{\Sigma}_{PUB} = \mathbb{E}\mathbf{W}^T\mathbf{W} = \operatorname{diag}(\underbrace{\alpha_1, \dots, \alpha_K}_{K}, \underbrace{\sigma^2, \dots, \sigma^2}_{p-K}). \tag{1}
+ $$
+
+ Here $\{\alpha_{1},\ldots ,\alpha_{K}\}$ are the $K$ population spiked eigenvalues and $\{\sigma^2,\dots,\sigma^2\}$ are the $p - K$ bulk eigenvalues of $\Sigma_{PUB}$. During the training process, the initial variance $\sigma_0^2$ changes to $\sigma^2$ and $K$ extra spikes appear. Correspondingly, the LSD of the sample bulk eigenvalues of $\mathbf{W}^T\mathbf{W}$ follows the MP law with parameter $\sigma^2$, while the $K$ sample spiked eigenvalues lie outside the boundary of the LSD, i.e., beyond $\sigma^2 (1 + \sqrt{c})^2$. The relationship between the sample eigenvalues of $\mathbf{W}^T\mathbf{W}$ and the population eigenvalues of $\Sigma_{PUB}$ is given below.
+
+ $$
+ \operatorname{Eig}(\boldsymbol{\Sigma}_{PUB}) = \{\underbrace{\alpha_1, \ldots, \alpha_K}_{K}, \underbrace{\sigma^2, \ldots, \sigma^2}_{p-K}\}
+ $$
+
+ $$
+ \operatorname{Eig}(\mathbf{W}^T\mathbf{W}) = \{\underbrace{\lambda_1, \ldots, \lambda_K}_{K}, \underbrace{\lambda_+ \geq \lambda_j \geq \lambda_-}_{p-K}\}.
+ $$
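Under this PUB correspondence, classifying sample eigenvalues reduces to a threshold test against the MP edge; a small illustrative sketch (the function name is ours, not from the paper):

```python
import numpy as np

def split_pub(eigs, sigma_sq, c):
    """Split eigenvalues of W^T W into spikes and bulk under the PUB model:
    spikes are those exceeding the MP edge sigma^2 (1 + sqrt(c))^2."""
    eigs = np.asarray(eigs, dtype=float)
    edge = sigma_sq * (1 + np.sqrt(c)) ** 2
    return eigs[eigs > edge], eigs[eigs <= edge]
```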
+
+ # 2.2. PDB model
+
+ However, the assumption of the PUB model is quite restrictive, as it requires $\mathbf{W}^T\mathbf{W}$ to have a homogeneous population variance $\sigma^2$. Additionally, the eigenvalue distribution of $\mathbf{W}^T\mathbf{W}$ does not perfectly align with the MP law with parameter $\sigma^2$ under the PUB model, as shown in Fig. 2(b). To address these limitations, we consider a more general model which can accommodate heterogeneous population variances. Specifically, we extend $\sigma^2$ to $M$ different positive constants $\{\sigma_i^2, 1 \leq i \leq M\}$ with proportions $\{t_i, 1 \leq i \leq M\}$ :
+
+ $$
+ \boldsymbol{\Sigma}_{PUB} = \mathbb{E}\mathbf{W}^{T}\mathbf{W} = \mathrm{diag}(\alpha_{1},\ldots ,\alpha_{K},\underbrace{\sigma^{2},\ldots,\sigma^{2}}_{p-K})
+ $$
+
+ $$
+ \Downarrow
+ $$
+
+ $$
+ \mathbb{E}\mathbf{W}^T\mathbf{W} = \operatorname{diag}(\underbrace{\alpha_1, \ldots, \alpha_K}_{K}, \underbrace{\sigma_1^2, \ldots, \sigma_1^2}_{(p-K)t_1}, \ldots, \underbrace{\sigma_M^2, \ldots, \sigma_M^2}_{(p-K)t_M}).
+ $$
+
+ In empirical studies, we observe that $M = 2$ is sufficient, because the proportion of the additional $\sigma_{i}$ terms is negligible. For example, Table 1 presents the estimated proportions for $M = 4$, where $\hat{t}_3, \hat{t}_4$ are very small. Therefore, in this study,
+
+ Table 1. The estimated proportion of each bulk when $M = 4$.
+
+ <table><tr><td></td><td>σ<sub>1</sub><sup>2</sup></td><td>σ<sub>2</sub><sup>2</sup></td><td>σ<sub>3</sub><sup>2</sup></td><td>σ<sub>4</sub><sup>2</sup></td></tr><tr><td>t̂<sub>i</sub></td><td>0.6803</td><td>0.2719</td><td>0.0474</td><td>0.0004</td></tr></table>
+
+ we propose a generalized Population Double Bulk (PDB) model for the trained weight matrix $\mathbf{W}$, satisfying
+
+ $$
+ \boldsymbol{\Sigma}_{PDB} := \mathbb{E}\mathbf{W}^T\mathbf{W} = \operatorname{diag}(\underbrace{\alpha_1, \dots, \alpha_K}_{K}, \underbrace{\sigma_1^2, \dots, \sigma_1^2}_{(p-K)t}, \underbrace{\sigma_2^2, \dots, \sigma_2^2}_{(p-K)(1-t)}), \tag{2}
+ $$
+
+ where $\{\alpha_{1},\dots ,\alpha_{K}\}$ are the $K$ population spikes of $\Sigma_{PDB}$ and $t$ is the proportion of $\sigma_1^2$ among all bulk eigenvalues. Similar to the PUB model, the PDB model also has a one-to-one correspondence between the sample eigenvalues of $\mathbf{W}^T\mathbf{W}$ and the population eigenvalues of $\Sigma_{PDB}$ :
+
+ $$
+ \operatorname{Eig}(\Sigma_{PDB}) = \{\underbrace{\alpha_1, \ldots, \alpha_K}_{K}, \underbrace{\sigma_1^2, \ldots, \sigma_1^2}_{(p-K)t \;(\text{bulk1})}, \underbrace{\sigma_2^2, \ldots, \sigma_2^2}_{(p-K)(1-t) \;(\text{bulk2})}\}
+ $$
+
+ $$
+ \operatorname{Eig}(\mathbf{W}^T\mathbf{W}) = \{\underbrace{\lambda_1, \ldots, \lambda_K}_{K}, \underbrace{\lambda_+ \geq \lambda_j \geq \beta}_{(p-K)t}, \underbrace{\beta > \lambda_j \geq \lambda_p}_{(p-K)(1-t)}\},
+ $$
+
+ where $\beta = \lambda_{K + (p - K)t}$ is the boundary point between sample bulk1 and bulk2 of $\mathbf{W}^T\mathbf{W}$. Note that the PUB model (1) is a special case of our PDB model when $t = 0$ or $t = 1$ (equivalently, when $\sigma_1^2 = \sigma_2^2$ ).
+
+ More importantly, the proposed PDB model demonstrates significantly better empirical performance than the PUB model. Firstly, the density curve of the LSD under the PDB model aligns more closely with the histogram of $\mathbf{W}^T\mathbf{W}$, see Fig. 3. The PDB model parameters for Fig. 3 are listed in Table 2.
+
+ Table 2. The results of ${\Theta }_{bulk} = \left\{ {{\sigma }_{1}^{2},{\sigma }_{2}^{2},t}\right\}$ for the DNNs.
+
+ <table><tr><td>Estimator</td><td>\( \hat{t} \)</td><td>\( {\widehat{\sigma}}_{1}^{2} \)</td><td>\( {\widehat{\sigma}}_{2}^{2} \)</td></tr><tr><td>FCNN: MNIST</td><td>0.17</td><td>3.37</td><td>1.36</td></tr><tr><td>VGG16: CIFAR10</td><td>0.25</td><td>1.61</td><td>0.75</td></tr><tr><td>ResNet18: ImageNet</td><td>0.35</td><td>7.54</td><td>0.96</td></tr><tr><td>ResNet18: CIFAR10</td><td>0.25</td><td>5.06</td><td>0.95</td></tr></table>
+
+ Secondly, our PDB model achieves superior alignment with the spectral moments of $\mathbf{W}^T\mathbf{W}$ compared to the PUB model. Specifically, we employ different methods to estimate model parameters and compare the first three theoretical moments with the empirical values, including the BEMA method (Ke et al., 2023) and the Kernel approach (Staats et al., 2023) for the PUB model, and the PDBLS algorithm for the PDB model (see Section 3.2). Detailed results are presented in Tables 3-5, where Tables 3-4 correspond to the FCNN and convolutional networks, and Table 5 presents the large language models. The theoretical spectral moment formulas can be found in the Appendix. It is clear that our PDB model matches the empirical moments much better than the PUB model, especially the higher-order moments.
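The empirical moments $\hat{\gamma}_j = \frac{1}{p}\operatorname{tr}(\mathbf{W}^T\mathbf{W})^j$ used in these comparisons can be computed directly from the eigenvalues; a minimal sketch (our own illustration):

```python
import numpy as np

def empirical_moments(W, orders=(1, 2, 3)):
    """Empirical spectral moments gamma_j = tr((W^T W)^j) / p,
    computed from the eigenvalues of W^T W."""
    p = W.shape[1]
    lam = np.linalg.eigvalsh(W.T @ W)  # eigenvalues of the symmetric matrix
    return [float(np.sum(lam ** j) / p) for j in orders]
```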
+
+ ![](images/126d14e9a07dece12cea5dc4fab2b8cb3a70e976c4748b3fa954035e2d5d7f58.jpg)
+ (a) FCNN on MNIST
+
+ ![](images/e2a8aa620916000b452cee4befdcb591018c6a091975d63012070ddde25f167c.jpg)
+ (b) VGG16 on CIFAR10
+
+ ![](images/fe03b9a5865ccb231535c164bdb7b99f8f2a9adedae2b05afc524f753bf96b13.jpg)
+ (c) ResNet18 on ImageNet
+ Figure 3. Histograms of the eigenvalues of $\mathbf{W}^T\mathbf{W}$ and density curves of LSDs based on different models. The solid blue line represents the PDB model, and the dashed red and green curves are for the PUB model.
+
+ ![](images/22a8e5ad0b187c0459e0f732e51da850e0166cc00f579e344beaaf4b2dcbcf3b.jpg)
+ (d) ResNet18 on CIFAR10
+
+ Table 3. Comparison of theoretical and empirical spectral moments for FCNN and VGG16, where $\hat{\gamma}_j = \frac{1}{p}\operatorname {tr}\left(\mathbf{W}^T\mathbf{W}\right)^j$, $j = 1,2,3$.
+
+ <table><tr><td colspan="2"></td><td colspan="3">FCNN: MNIST</td><td colspan="2">VGG16: CIFAR10</td></tr><tr><td>Model</td><td>Method</td><td>γ1</td><td>γ2</td><td>γ3</td><td>γ1</td><td>γ2</td></tr><tr><td>PUB</td><td>BEMA</td><td>2.27</td><td>7.74</td><td>32.23</td><td>0.92</td><td>1.26</td></tr><tr><td>PUB</td><td>Kernel</td><td>1.37</td><td>2.81</td><td>7.04</td><td>0.90</td><td>1.22</td></tr><tr><td>PDB</td><td>PDBLS</td><td>1.71</td><td>4.94</td><td>18.79</td><td>0.96</td><td>1.53</td></tr><tr><td colspan="2">empirical γj</td><td>1.74</td><td>5.50</td><td>24.17</td><td>0.97</td><td>1.59</td></tr></table>
+
+ Table 4. Comparison of theoretical and empirical spectral moments for ResNet18 on ImageNet and CIFAR10.
+
+ <table><tr><td rowspan="2">Model</td><td rowspan="2">Method</td><td colspan="3">ResNet18: ImageNet</td><td colspan="3">ResNet18: CIFAR10</td></tr><tr><td>γ1</td><td>γ2</td><td>γ3</td><td>γ1</td><td>γ2</td><td>γ3</td></tr><tr><td>PUB</td><td>BEMA</td><td>2.94</td><td>17.26</td><td>126.79</td><td>1.80</td><td>4.82</td><td>15.84</td></tr><tr><td>PUB</td><td>Kernel</td><td>1.11</td><td>2.45</td><td>6.77</td><td>1.83</td><td>5.02</td><td>16.84</td></tr><tr><td>PDB</td><td>PDBLS</td><td>3.23</td><td>30.62</td><td>377.50</td><td>2.27</td><td>9.14</td><td>49.08</td></tr><tr><td colspan="2">empirical γj</td><td>3.32</td><td>31.48</td><td>402.79</td><td>2.28</td><td>9.38</td><td>52.65</td></tr></table>
+
+ # 3. PDB estimation
+
+ In this section, we provide a detailed characterization of the asymptotic properties of the sample eigenvalues of $\mathbf{W}^T\mathbf{W}$, including the bulk eigenvalues, bulk boundaries, and spiked eigenvalues. Building on these properties, we propose the PDBLS algorithm to estimate the model parameters of
+
+ Table 5. Comparison of theoretical and empirical spectral moments for T5-base and BERT.
+
+ <table><tr><td rowspan="2">Model</td><td rowspan="2">Method</td><td colspan="3">T5-base: RTE</td><td colspan="3">BERT: SCITAIL</td></tr><tr><td>γ1</td><td>γ2</td><td>γ3</td><td>γ1</td><td>γ2</td><td>γ3</td></tr><tr><td>PUB</td><td>BEMA</td><td>0.67</td><td>0.90</td><td>1.51</td><td>0.59</td><td>0.69</td><td>1.02</td></tr><tr><td>PUB</td><td>Kernel</td><td>0.53</td><td>0.56</td><td>0.74</td><td>0.56</td><td>0.63</td><td>0.88</td></tr><tr><td>PDB</td><td>PDBLS</td><td>0.77</td><td>1.55</td><td>4.17</td><td>0.67</td><td>1.18</td><td>2.78</td></tr><tr><td colspan="2">empirical γj</td><td>0.72</td><td>1.83</td><td>5.35</td><td>0.71</td><td>1.38</td><td>3.95</td></tr></table>
+
+ $\Sigma_{PDB}$, which allows us to determine the boundary between noise and information.
+
+ # 3.1. Sample eigenvalue behavior of PDB model
+
+ The following three theorems illustrate the relationship between the sample eigenvalues of $\mathbf{W}^T\mathbf{W}$ and the population parameters $\{K, \alpha_1, \ldots, \alpha_K, \sigma_1^2, \sigma_2^2, t\}$ of $\boldsymbol{\Sigma}_{PDB}$.
+
+ Theorem 3.1. (sample bulk eigenvalues) Under PDB model (2), as $n \to \infty$, $p / n \to c$, the empirical spectral distribution of $\mathbf{W}^T\mathbf{W}$, $F_n^{\mathbf{W}^T\mathbf{W}}(x) = \frac{1}{p}\sum_{j=1}^{p}\mathbb{1}_{\{\lambda_j \leq x\}}$, a.s. converges to the LSD with density function:
+
+ $$
+ \rho(x) = \lim_{\eta \rightarrow 0} \frac{\operatorname{Im} \underline{m}(z)}{\pi c}, \quad z = x + i\eta, \quad \eta > 0, \tag{3}
+ $$
+
+ where $\operatorname{Im} \underline{m}(z)$ represents the imaginary part of $\underline{m}(z)$ and $\underline{m}(z)$ satisfies the following equation:
+
+ $$
+ z = -\frac{1}{\underline{m}(z)} + \frac{c t \sigma_1^2}{1 + \sigma_1^2 \underline{m}(z)} + \frac{c (1-t) \sigma_2^2}{1 + \sigma_2^2 \underline{m}(z)}. \tag{4}
+ $$
+
+ An example of $\rho(x)$ is shown in Fig. 2(c), where $\{\sigma_1^2 = 4.63, \sigma_2^2 = 1.67, t = 0.20\}$.
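Equation (4) can be solved numerically for $\underline{m}(z)$ at $z = x + i\eta$ with small $\eta$, which then yields the density in Eq. (3). A hedged sketch using damped fixed-point iteration (our own illustration, not the paper's implementation; `s1_sq`, `s2_sq` stand for $\sigma_1^2$, $\sigma_2^2$):

```python
import numpy as np

def pdb_density(x, s1_sq, s2_sq, t, c, eta=1e-3, iters=3000):
    """Density rho(x) of the PDB limiting spectral distribution, obtained by
    iterating the self-consistent equation (4) for m(z) at z = x + i*eta."""
    z = complex(x, eta)
    m = -1.0 / z  # initial guess from the large-z behavior m(z) ~ -1/z
    for _ in range(iters):
        rhs = (-z
               + c * t * s1_sq / (1 + s1_sq * m)
               + c * (1 - t) * s2_sq / (1 + s2_sq * m))
        m = 0.5 * m + 0.5 / rhs  # damping keeps the iteration stable
    return m.imag / (np.pi * c)
```

When $\sigma_1^2 = \sigma_2^2 = \sigma^2$ this reduces to the MP density, which gives a convenient sanity check.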
+
+ Theorem 3.2. (sample boundaries) The boundary between the sample spikes and bulk1 of $\mathbf{W}^T\mathbf{W}$ is
+
+ $$
+ \lambda_+ = g(y), \quad y = \max\{x \in \mathbb{R} : g'(x) = 0\}, \tag{5}
+ $$
+
+ $$
+ g(x) = x + c x \frac{t \sigma_1^2}{x - \sigma_1^2} + c x \frac{(1-t) \sigma_2^2}{x - \sigma_2^2}. \tag{6}
+ $$
+
+ The boundary between sample bulk1 and bulk2 of $\mathbf{W}^T\mathbf{W}$ is
+
+ $$
+ \beta = \lambda_{K + pt - Kt}.
+ $$
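Since $g'$ increases from $-\infty$ to $1$ on $(\max(\sigma_1^2, \sigma_2^2), \infty)$, the largest root $y$ of $g'(x) = 0$ can be bracketed and found by bisection. A minimal sketch (our own illustration; `s1`, `s2` stand for $\sigma_1^2$, $\sigma_2^2$):

```python
import numpy as np

def g(x, s1, s2, t, c):
    """g(x) from Eq. (6); s1, s2 denote sigma_1^2, sigma_2^2."""
    return x + c * x * t * s1 / (x - s1) + c * x * (1 - t) * s2 / (x - s2)

def g_prime(x, s1, s2, t, c):
    """g'(x), using d/dx [x / (x - a)] = -a / (x - a)^2."""
    return 1 - c * t * s1**2 / (x - s1)**2 - c * (1 - t) * s2**2 / (x - s2)**2

def bulk_edge(s1, s2, t, c):
    """lambda_+ = g(y), y the largest root of g'(x) = 0 (Eq. (5)).
    g' is strictly increasing on (max(s1, s2), inf), so bisection applies."""
    lo = max(s1, s2) * (1 + 1e-12)
    hi = max(s1, s2) + 1.0
    while g_prime(hi, s1, s2, t, c) < 0:  # expand until g'(hi) > 0
        hi *= 2
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g_prime(mid, s1, s2, t, c) < 0 else (lo, mid)
    return g(0.5 * (lo + hi), s1, s2, t, c)
```

In the degenerate case $\sigma_1^2 = \sigma_2^2 = \sigma^2$, this recovers the MP edge $\sigma^2 (1 + \sqrt{c})^2$.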
+
+ Theorem 3.3. (sample spiked eigenvalues) Under PDB model (2), as $n\to \infty$, $p / n\rightarrow c$, the sample spiked eigenvalues $\{\lambda_1,\dots ,\lambda_K\}$ of $\mathbf{W}^T\mathbf{W}$ converge to functions of the population spiked eigenvalues $\{\alpha_{1},\ldots ,\alpha_{K}\}$ of $\Sigma_{PDB}$ :
+
+ $$
+ \lambda_j \xrightarrow{a.s.} g(\alpha_j), \quad j \in \{1, \dots, K\}. \tag{7}
+ $$
+
+ ![](images/be5711f512971d97800abb89281e5d2d6a069ceb03e9321e6195d8bff184206b.jpg)
+ Figure 4. The histogram (ESD) of eigenvalues and the LSD of the PDB model. The estimation procedure (PDBLS Algorithm) for $\Theta_{bulk} = \{t, \sigma_1^2, \sigma_2^2\}$, $\Theta_{spike} = \{K, \alpha_1, \dots, \alpha_K\}$ and $\Theta_{bound} = \{\lambda_+, \beta\}$ is also illustrated in this figure.
+
+ # 3.2. PDBLS algorithm: Estimation of PDB model
+
+ Based on Theorems 3.1-3.3, we introduce an algorithm called Population Double Bulk Least Squares (PDBLS) to estimate $\Sigma_{PDB} = \mathbb{E}\mathbf{W}^T\mathbf{W}$ and two sample boundary points: one is the boundary between the sample spikes and bulk1, $\lambda_{+}$, and the other is the boundary between bulk1 and bulk2, $\beta$. Our goal is to estimate the bulk parameters $\Theta_{bulk} = \{\sigma_1^2,\sigma_2^2,t\}$, the spike parameters $\Theta_{spike} = \{K,\alpha_1,\dots ,\alpha_K\}$, and the boundary parameters $\Theta_{bound} = \{\lambda_{+},\beta \}$. We denote these estimators as $\{\hat{\sigma}_1^2,\hat{\sigma}_2^2,\hat{t}\}$, $\{\hat{K},\hat{\alpha}_1,\dots ,\hat{\alpha}_{\hat{K}}\}$ and $\{\hat{\lambda}_{+},\hat{\beta}\}$. We start with the estimation of $\Theta_{bulk}$ and $\Theta_{bound}$. Although spiked eigenvalues do not affect the LSD, some exceptionally large spikes can result in an overestimation of the bulk parameters in the finite sample case. As depicted in Fig. 4, there are several extremely large eigenvalues highlighted in the blue box, indicating the necessity of an initial estimate of the number of spikes $K_{0}$. By mitigating the influence of the top $K_{0}$ spikes, we provide estimates for $\Theta_{bulk}$ first, followed by $\Theta_{bound}$, and further refine the number and values of the spikes in $\Theta_{spike}$. In detail, there are four main steps to obtain these estimators.
+
+ # Step 1 Initially estimate the number of spikes $K_{0}$ .
+
+ Inspired by (Liu et al., 2023), the location that corresponds to the smallest distance between consecutive eigenvalues can be used to determine the number of spikes. The initial number of spikes based on this criterion in (Liu et al., 2023) can be calculated by:
+
+ $$
+ \hat{K}_0 = \arg\min_{k} \frac{1}{n}\left[-n(\lambda_1 - \lambda_{k+1}) + n(p - k - 1)\log\tilde{\theta}_{p,k} + 2pk\right], \tag{8}
+ $$
+
+ where $\tilde{\theta}_{p,k} = \frac{1}{p - k - 1}\sum_{i = k + 1}^{p - 1}\exp \{2(\lambda_i - \lambda_{i + 1})\}$ and $\omega = \lfloor 6n^{0.1}\rfloor$, with $\lfloor \cdot \rfloor$ denoting the floor function. Then we remove
+
+ # Algorithm 1 PDBLS algorithm (For estimation)
+
+ Input: Eigenvalues $\lambda_{j}$ of matrix $\mathbf{W}^T\mathbf{W}$ .
+
+ Output: $\widehat{\Theta}_{bulk} = \{\hat{\sigma}_1^2, \hat{\sigma}_2^2, \hat{t}\}$ , $\widehat{\Theta}_{spike} = \{\hat{K}, \hat{\alpha}_1, \dots, \hat{\alpha}_{\hat{K}}\}$ , $\widehat{\Theta}_{bound} = \{\hat{\lambda}_+, \hat{\beta}\}$ .
+
+ // Initially estimate $K_0$
+
+ 1: Roughly estimate the number of spikes $\hat{K}_0$ by Eq. (8);
+ 2: Remove the top $\hat{K}_0$ spikes;
+
+ // Estimate $\Theta_{bulk}$
+
+ 3: Calculate $\hat{u}_i(\Theta_{bulk})$ by Eq. (10);
+ 4: Minimize Eq. (9) to solve $(\sigma_1^2, \sigma_2^2, t)$ ;
+
+ // Estimate $\Theta_{\text{bound}}$ and refine $K$
+
+ 5: Calculate boundary point $\lambda_{+}$ by Eq. (5);
+ 6: Estimate the number of spikes $\hat{K}$ by Eq. (11);
+ 7: Estimate boundary point $\beta$ via Eq. (12).
+
+ // Estimate $\Theta_{\text{spike}}$
+
+ 8: Recover population spikes $\{\alpha_{j}\}_{j = 1}^{\hat{K}}$ from sample spikes by Eq. (6) and Eq. (7);
+
+ the top $\hat{K}_0$ spikes, which may affect the estimation of $\Theta_{bulk}$ .
+
+ # Step 2 Estimate $\Theta_{bulk} = \{\sigma_1^2, \sigma_2^2, t\}$ .
+
+ The bulk parameters can be estimated by solving the following least squares optimization problem (Li et al., 2013):
+
+ $$
+ \arg\min_{\Theta_{bulk}} \sum_{i = 1}^{q} \left(\hat{u}_i\left(\Theta_{bulk}\right) - u_i\right)^2, \tag{9}
+ $$
+
+ where $\{u_1,\dots,u_q\}$ are $j$ equally spaced points taken in each interval of $\mathcal{U}$ . Here
+
+ $$
+ \mathcal{U} = \begin{cases} (-10, 0) \cup (0, 0.5\lambda_{\min}) \cup (5\lambda_{\max}, 10\lambda_{\max}), & p \neq n, \\ (-10, 0) \cup (5\lambda_{\max}, 10\lambda_{\max}), & p = n. \end{cases}
+ $$
+
+ We set $j = 20$ in all experiments of Section 5.
+
+ According to (Yao et al., 2015) and Eq. (4), for each $u_{i}$ , we can calculate $\hat{u}_i(\Theta_{bulk})$ as:
+
+ $$
+ \hat{u}_i\left(\Theta_{bulk}\right) = -\frac{1}{\underline{m}_n\left(u_i\right)} + \frac{p - \hat{K}_0}{n} \frac{t\sigma_1^2}{1 + \sigma_1^2 \underline{m}_n\left(u_i\right)} + \frac{p - \hat{K}_0}{n} \frac{(1-t)\sigma_2^2}{1 + \sigma_2^2 \underline{m}_n(u_i)}, \tag{10}
+ $$
+
+ where $\underline{m}_n(u_i) = -\frac{1 - (p - \hat{K}_0) / n}{u_i} +\frac{1}{n}\sum_{l = 1}^{p - \hat{K}_0}\frac{1}{\lambda_l - u_i}$ .
+
+ By solving (9), we obtain the bulk estimators $\widehat{\Theta}_{bulk} = (\hat{\sigma}_1^2, \hat{\sigma}_2^2, \hat{t})$ . Subsequently, the density curve of the LSD of $\mathbf{W}^T\mathbf{W}$ follows naturally from Eq. (3)-(4).
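Steps 1-2 can be sketched end-to-end: $\underline{m}_n(u_i)$ is computed once from the (spike-removed) eigenvalues, after which Eq. (9) is minimized. The sketch below substitutes a coarse grid search for the paper's least-squares solver, and all helper names are our own:

```python
import numpy as np

def m_under(u, lam, n):
    """Companion Stieltjes transform m_n(u) at a real u outside the support,
    from eigenvalues lam of W^T W (spikes already removed)."""
    p = len(lam)
    return -(1 - p / n) / u + np.sum(1.0 / (lam - u)) / n

def u_hat(theta, m, p, n):
    """Inverse map of Eq. (10): predicted location u for a given m_n(u)."""
    s1, s2, t = theta
    return (-1.0 / m
            + (p / n) * t * s1 / (1 + s1 * m)
            + (p / n) * (1 - t) * s2 / (1 + s2 * m))

def fit_bulk(lam, n, grid_s, grid_t):
    """Estimate (sigma_1^2, sigma_2^2, t) by minimizing Eq. (9) over a grid."""
    p = len(lam)
    us = np.concatenate([np.linspace(-10, -1, 8),
                         np.linspace(0.1 * lam.min(), 0.5 * lam.min(), 6),
                         np.linspace(5 * lam.max(), 10 * lam.max(), 6)])
    ms = np.array([m_under(u, lam, n) for u in us])  # data-dependent only
    best, best_loss = None, np.inf
    for s1 in grid_s:
        for s2 in grid_s:
            if s2 > s1:
                continue  # enforce sigma_1^2 >= sigma_2^2 to fix the labeling
            for t in grid_t:
                loss = np.sum((u_hat((s1, s2, t), ms, p, n) - us) ** 2)
                if loss < best_loss:
                    best, best_loss = (s1, s2, t), loss
    return best
```

On synthetic data drawn from a spike-free PDB population, the fitted parameters land close to the truth.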
+
+ # Step 3 Estimate $\Theta_{bound} = \{\lambda_{+},\beta \}$ and refine the number of spikes $K$ .
+
+ From Fig. 2(c), we can see that the number of spikes is equal to
+
+ $$
+ K = \#\left\{\lambda_j \mid \lambda_j \in \left(\lambda_+, \lambda_{\max}\right]\right\}, \tag{11}
+ $$
+
+ where $\# \{\cdot\}$ represents the cardinality of the set. With the estimated $\widehat{\Theta}_{bulk}$ from Step 2, we can obtain the estimator of the boundary point $\hat{\lambda}_{+}$ via Eq. (5). Then we can further refine the estimate of the number of spikes $\hat{K}$ via Eq. (11). Subsequently, according to Theorem 3.2, we estimate $\beta$ by
+
+ $$
+ \hat{\beta} = \lambda_{\hat{K} + p\hat{t} - \hat{K}\hat{t}}. \tag{12}
+ $$
+
+ # Step 4 Estimate $\Theta_{spike} = \{\alpha_1, \dots, \alpha_{\hat{K}}\}$ .
+
+ The relationship between the spikes of $\mathbf{W}^T\mathbf{W}$ and $\boldsymbol{\Sigma}_{PDB}$ is established in Theorem 3.3. Thus we can obtain $\{\hat{\alpha}_1,\dots ,\hat{\alpha}_{\hat{K}}\}$ by Eq. (6) and (7).
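Concretely, this step inverts $\lambda_j = g(\alpha_j)$ for each sample spike above the bulk edge. Since $g(x) > x$ to the right of its largest critical point $y$ and $g$ is increasing there, each root can be bisected on $[y, \lambda_j]$; a minimal illustrative sketch (our own, with `s1`, `s2` denoting $\sigma_1^2$, $\sigma_2^2$):

```python
import numpy as np

def recover_spikes(sample_spikes, s1, s2, t, c):
    """Invert lambda_j = g(alpha_j) (Eqs. (6)-(7)) for each sample spike."""
    g = lambda x: x + c * x * t * s1 / (x - s1) + c * x * (1 - t) * s2 / (x - s2)
    gp = lambda x: 1 - c * t * s1**2 / (x - s1)**2 - c * (1 - t) * s2**2 / (x - s2)**2
    # largest critical point y of g: g' increases from -inf to 1 past max(s1, s2)
    lo, hi = max(s1, s2) * (1 + 1e-12), max(s1, s2) + 1.0
    while gp(hi) < 0:
        hi *= 2
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if gp(mid) < 0 else (lo, mid)
    y = hi
    alphas = []
    for lam in sample_spikes:
        a, b = y, float(lam)  # bracket: g(y) = lambda_+ < lam and g(lam) > lam
        for _ in range(200):
            mid = 0.5 * (a + b)
            a, b = (mid, b) if g(mid) < lam else (a, mid)
        alphas.append(0.5 * (a + b))
    return alphas
```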
304
+
305
+ In summary, the estimation algorithm is thoroughly detailed in Algorithm 1. Based on Theorems 3.1-3.3, we can show that PDBLS is a consistent estimator of $\Sigma_{PDB}$ .
306
+
307
+ Theorem 3.4. Under PDB model (2), as $n\to \infty$, $p/n\to c$,
308
+
309
+ $$
+ \left\{\widehat{\Theta}_{bulk}, \widehat{\Theta}_{spike}, \widehat{\Theta}_{bound}\right\} \xrightarrow{a.s.} \left\{\Theta_{bulk}, \Theta_{spike}, \Theta_{bound}\right\}.
+ $$
312
+
313
+ # 4. Matrix compression of PDB model
314
+
315
+ In this section, we determine the boundary between noise and information and introduce a novel PDB Noise-Filtering algorithm to compress weight matrices.
316
+
317
+ Now, with the estimated PDB model in hand, we have a clear picture of the two boundaries of $\mathbf{W}$: one is the boundary between the sample spikes and bulk1, $\lambda_{+}$, and the other is the boundary between bulk1 and bulk2, $\beta$. This sheds new light on determining the noise-information boundary, which facilitates the compression of the weight matrix.
318
+
319
+ In the current literature, existing methods to determine the boundary between noise and information are either purely data-driven or based on the PUB model. For the PUB model, (Shmalo et al., 2023) and (Staats et al., 2023) treat the MP edge $\lambda_{+}$ of the LSD, the vertical dashed line in Fig. 2(b), as the noise-information boundary. They assume that all the information is contained in the spikes of $\mathbf{W}^T\mathbf{W}$, while the bulk eigenvalues represent only noise. Accordingly, their compression algorithm removes all the bulk singular values and recovers the population spiked singular values from the sample spikes of the weight matrices. However, the boundary inferred from the PUB model is quite stringent. Consequently, the corresponding compression algorithm leads to a substantial loss of useful information, and the compressed network struggles to maintain the original generalization performance.
320
+
321
+ Therefore, to strike a balance between the amount of information retained and the degree of weight matrix rank
322
+
323
+ # Algorithm 2 PDB Noise-Filtering algorithm (For matrix compression)
324
+
325
+ 1: Perform the singular value decomposition of the weight matrix $\mathbf{W} = \mathbf{U}\mathbf{D}^{\frac{1}{2}}\mathbf{V}$, where $\mathbf{D} = \mathrm{diag}(\lambda_1,\dots ,\lambda_p)$.
+ 2: Apply the PDBLS algorithm to estimate the PDB model, including $\widehat{\Theta}_{bulk} = \{\hat{\sigma}_1^2,\hat{\sigma}_2^2,\hat{t}\}$, $\widehat{\Theta}_{spike} = \{\hat{K},\hat{\alpha}_1,\dots ,\hat{\alpha}_{\hat{K}}\}$, and $\widehat{\Theta}_{bound} = \{\hat{\lambda}_{+},\hat{\beta}\}$.
+ 3: Replace the sample spikes $\{\lambda_j\in (\hat{\lambda}_+, \lambda_{max}]\}$ with the population spikes $\{\hat{\alpha}_1, \dots, \hat{\alpha}_{\hat{K}}\}$, retain bulk1 $\{\lambda_j\in [\hat{\beta}, \hat{\lambda}_+]\}$ in $\mathbf{D}$, and replace bulk2 $\{\lambda_j\in [\lambda_{min}, \hat{\beta})\}$ with zeros. The new diagonal matrix is formulated as
328
+
329
+ $$
+ \mathbf{D}_{new} = \operatorname{diag}\left\{\left\{\hat{\alpha}_1, \dots, \hat{\alpha}_{\hat{K}}\right\}, \left\{\lambda_j \in [\hat{\beta}, \hat{\lambda}_+]\right\}, \mathbf{0}\right\}.
+ $$
332
+
333
+ 4: Obtain a new weight matrix $\mathbf{W}_{new} = \mathbf{U}\mathbf{D}_{new}^{\frac{1}{2}}\mathbf{V}$ , whose rank is $\hat{K} + (p - \hat{K})\hat{t}$ .
334
+
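A minimal sketch of Algorithm 2 in code, assuming the PDBLS estimates are already available (the function and variable names are illustrative; `alpha_hat` must contain one recovered population spike per sample spike, in descending order):

```python
import numpy as np

def pdb_noise_filter(W, alpha_hat, lambda_plus_hat, beta_hat):
    """Sketch of Algorithm 2: replace sample spikes with recovered population
    spikes, keep bulk1 eigenvalues in [beta_hat, lambda_plus_hat], zero bulk2."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)  # s**2 = eigenvalues of W^T W
    lam = s ** 2                                      # sorted in descending order
    lam_new = lam.copy()
    lam_new[lam > lambda_plus_hat] = alpha_hat        # step 3: replace spikes
    lam_new[lam < beta_hat] = 0.0                     # step 3: zero out bulk2
    return U @ np.diag(np.sqrt(lam_new)) @ Vt         # step 4: W_new = U D_new^{1/2} V
```

With a huge edge estimate and a zero lower boundary the filter is a no-op, which gives a quick sanity check.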
335
+ ![](images/ca95520b49dc8ba95bf16e408374a50a92b013d81f5fdce076a6382cf132bae2.jpg)
336
+ Figure 5. The process of matrix compression.
337
+
338
+ reduction, we extend the PUB model to the PDB model, in which only one bulk represents noise, while all the trained information is retained in the spikes and the other bulk. In other words, our proposed noise-information boundary is the boundary between sample bulk1 and bulk2, $\beta$, the vertical dashed line in Fig. 2(c). In this way, we retain more of the important eigenvalues of $\mathbf{W}^T\mathbf{W}$ and find a more suitable way to recover the important information in $\Sigma_{PDB}$. The detailed process of matrix compression is given in Fig. 5. Specifically, with the estimated $\Sigma_{PDB}$, we remove all the smaller eigenvalues in bulk2 of $\mathbf{W}^T\mathbf{W}$ while keeping bulk1 as a significant source of information. Simultaneously, we incorporate the recovered population spikes $\hat{\Theta}_{spike}$ as another valuable source of information learned from $\mathbf{W}$ during training. The detailed compression algorithm is outlined in Algorithm 2.
339
+
340
+ # 5. Experiments
341
+
342
+ In this section, we conduct numerical experiments to demonstrate the superiority of our PDB model and the effectiveness of the weight matrix compression algorithm.
343
+
344
+ # 5.1. Experimental setups
345
+
346
+ We evaluate generalization performance using test accuracy and employ three basic neural network architectures: a three-layer Fully Connected Neural Network (FCNN), Residual Network-18 (ResNet18) (He et al., 2015), and Visual Geometry Group-16 (VGG16) (Simonyan & Zisserman, 2015). The FCNN is trained on MNIST, while ResNet18 and VGG16 are evaluated on CIFAR10 (Krizhevsky et al., 2009) and ImageNet (Deng et al., 2009). The three architecture configurations are as follows:
347
+
348
+ # 1. FCNN trained on MNIST.
349
+
350
+ The FCNN for MNIST is $\mathrm{fc}784 \to \mathrm{fc}512 \to \mathrm{fc}512 \to \mathrm{fc}350 \to \mathrm{fc}10$, where fc denotes a fully connected layer.
351
+
352
+ # 2. ResNet18 trained on CIFAR10 and ImageNet.
353
+
354
+ We add a linear layer of size $512 \times 256$ before the final linear layer of the original ResNet18 when training on CIFAR10.
355
+
356
+ # 3. VGG16 trained on CIFAR10 and ImageNet.
357
+
358
+ The size of the last two linear layers is changed from 4096 to 256 when training on CIFAR10.
359
+
360
+ Additionally, we assess the generalization of three representative pre-trained architectures: BERT (Devlin et al., 2019) and T5-base (Raffel et al., 2020) for natural language processing, and ViT-L (Dosovitskiy, 2020) for computer vision. ViT-L is a variant of the CLIP vision encoder (Radford et al., 2021). In the CLIP model, the text encoder remains frozen, and text embeddings are generated by processing the class labels through it. For the language models, experiments are conducted on the RTE (Wang et al., 2018) and SciTail (Khot et al., 2018) datasets, while the vision model is tested on DTD (Cimpoi et al., 2014) and SUN397 (Xiao et al., 2016).
361
+
362
+ Each image is normalized to the range $[0,1]$, and the weight matrices of the networks are initialized by the Glorot uniform distribution (Glorot & Bengio, 2010). For the basic architectures, we employ SGD with an exponentially decaying learning rate during the training phase. The activation functions are $\mathrm{Relu}(\cdot)$ for the hidden layers and $\mathrm{Softmax}(\cdot)$ for the output layer. For the large-scale models, we use the AdamW (Loshchilov & Hutter, 2017) optimizer with a cosine learning rate scheduler. For convolutional layers, we follow the scheme in (Idelbayev & Carreira-Perpinan, 2020) for reshaping the convolution kernel into a 2D matrix. In particular, since a convolutional layer has $n$ groups of convolutional kernels with $c$ channels, each of size $d \times d$, we can reshape them into a 2D matrix of size $nd \times cd$ for compression. After compression, we reshape them back to their original 4D convolutional kernel form.
365
+
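One natural way to realize this $(n, c, d, d) \leftrightarrow (nd, cd)$ reshaping is sketched below; the exact permutation used in the paper is not spelled out, so this particular layout is an assumption, but any fixed invertible layout serves the same purpose:

```python
import numpy as np

def conv_to_matrix(kernel):
    """Reshape an (n, c, d, d) convolution kernel into an (n*d, c*d) matrix."""
    n, c, d, _ = kernel.shape
    return kernel.transpose(0, 2, 1, 3).reshape(n * d, c * d)

def matrix_to_conv(mat, n, c, d):
    """Inverse reshaping back to the original 4D kernel."""
    return mat.reshape(n, d, c, d).transpose(0, 2, 1, 3)
```

The round trip is exact, so compression can be applied to the 2D matrix and the result mapped back losslessly (apart from the compression itself).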
366
+ We compare two classes of matrix compression methods: the PUB-based methods and SVD-based methods.
367
+
368
+ PUB-based methods include:
369
+
370
+ 1. Bulk Eigenvalue Matching Analysis (BEMA) (Ke et al., 2023): find the boundary of the density curve by quantile fitting.
+ 2. Kernel Estimation (Kernel) (Staats et al., 2023): use Gaussian broadening to fit the MP distribution and find the boundary.
372
+
373
+ SVD-based methods include:
374
+
375
+ 1. Sparse low rank (SLR) (Swaminathan et al., 2020): obtain low-rank matrices by imposing rank-sparsity constraints.
376
+ 2. Naive SVD (Shmalo et al., 2023): set the smallest $45\%$ of singular values to zero.
377
+
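The Naive SVD baseline is simple enough to state in a few lines; this sketch (names illustrative) keeps the largest 55% of singular values, matching the 0.55 ratio reported for this method in Table 6:

```python
import numpy as np

def naive_svd_truncate(W, keep_ratio=0.55):
    """Naive SVD baseline: zero out the smallest singular values,
    keeping the largest keep_ratio fraction."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    k = int(np.ceil(keep_ratio * len(s)))
    s[k:] = 0.0          # s is sorted in descending order
    return U @ np.diag(s) @ Vt
```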
378
+ # 5.2. Generalization and compression performance
379
+
380
+ In this section, we train the networks on datasets with and without noisy labels, shuffling $60\%$ of the labels to introduce noise. We then compress the networks according to Algorithm 2 and Fig. 5 and compare the test accuracy before and after compression across methods to evaluate the generalization performance of the compressed networks. We denote the test accuracy before compression as the Base. All experiments are conducted on a server equipped with NVIDIA L40 GPUs running Ubuntu 22.04.
381
+
382
+ Our algorithm recommends the best compression ratio of the weight matrix, beyond which further compression leads to a significant loss of generalization ability. Table 6 lists the rank of the weight matrix before and after compression for various neural networks under the four algorithms across different datasets. Fig. 6 reports the test accuracy of the various compression algorithms on datasets with and without label noise. From these results, we draw the following conclusions:
383
+
384
+ 1. Compared with the original neural network, our compressed model significantly reduces the rank of the weight matrices while maintaining its test accuracy. In the case of contaminated labels, our compression method can even improve the test accuracy.
+ 2. Compared with other compression algorithms, our algorithm achieves better test accuracy at the same compression level. The boundary point obtained by the PDB model retains more important information.
388
+
389
+ 3. The empirical compression ratio 0.55 proposed in (Shmalo et al., 2023) does not fully compress the weight matrix. In contrast, the noise-information boundary identified by our method accurately captures the point at which test accuracy starts to decrease during compression.
390
+
391
+ Table 7 presents the test accuracy of various neural networks when compressed using the different algorithms. Our method achieves the best accuracy, with a maximum improvement of $5\%$ $(\frac{0.7536}{0.7174} - 1)$ for T5-base on RTE. Further comparative results are listed in the Appendix. These results demonstrate the effectiveness of our proposed method.
392
+
393
+ Table 8 compares the computational efficiency of the four methods, with time measured in seconds for the total execution and inference phases following data processing. The computational overhead of our PDB model is slightly higher than that of PUB. However, this additional cost is relatively modest and is outweighed by the improvements in compression performance and model accuracy.
394
+
395
+ Table 8. The computation time (in seconds) of different networks across the compared algorithms.
396
+
397
+ <table><tr><td>Network</td><td>PDB</td><td>PUB</td><td>Naive SVD</td><td>SLR</td></tr><tr><td>ResNet</td><td>19</td><td>15</td><td>18</td><td>23</td></tr><tr><td>VGG16</td><td>42</td><td>41</td><td>47</td><td>53</td></tr><tr><td>BERT</td><td>20</td><td>15</td><td>18</td><td>29</td></tr><tr><td>T5-base</td><td>47</td><td>44</td><td>45</td><td>42</td></tr><tr><td>VIT-L</td><td>236</td><td>231</td><td>243</td><td>256</td></tr><tr><td>Average</td><td>72.8</td><td>69.2</td><td>74.2</td><td>80.6</td></tr></table>
398
+
399
+ # 6. Conclusions
400
+
401
+ In this work, we propose a novel Population Double Bulk (PDB) model, which characterizes the singular value behavior of weight matrices more accurately than the existing PUB model. An effective PDBLS algorithm is further developed for model estimation and determination of the noise-information boundary. Subsequently, we propose a PDB noise-filtering algorithm to compress weight matrices. Our algorithm also recommends the best compression ratio of the weight matrix, beyond which further compression leads to a significant loss of generalization ability. Experiments demonstrate the superiority of our PDB model and the effectiveness of our compression method.
402
+
403
+ ![](images/824f3c6279eff02c71e9bd5472c0018ab5e7ed1fa1b709259abcf417f3a297bb.jpg)
404
+ (a) FCNN on MNIST
405
+
406
+ ![](images/7a686735ac1e00dffa8f89cc96150c37a6b7c1d6a190e0107868f4f72e10715f.jpg)
407
+ (b) ResNet18 on CIFAR10
408
+
409
+ ![](images/a5bc77d4104d3743401e77618fdc058bd2190646beb180680d1bd36f1194f99d.jpg)
410
+ (c) VGG16 on CIFAR10
411
+
412
+ ![](images/9b4f415172f5360ebf61359bbc1a456776eb85a7e5c63b1565c61c36a9ea7152.jpg)
413
+ (d) VGG16 on ImageNet
414
+
415
+ ![](images/b92d024a84d09f28295f1f9da735c3f88fc0312bd7831c1585896efcfa5e6511.jpg)
416
+ (e) FCNN on MNIST with noise
417
+
418
+ ![](images/1ed56cea7e09c8da2be5f33ee1100caaf165036e27c33d70ac0a7e0033a322ea.jpg)
419
+ (f) ResNet18 on CIFAR10 with noise
420
+
421
+ ![](images/23f46607cad6774fb3e079794f7afb64c0f40ed131368877cbfefe637e9e6938.jpg)
+ (g) VGG16 on CIFAR10 with noise
+
+ ![](images/7831c50b90032aab6117d4a0159befe30ec41d695294741a167ff4cd583a3364.jpg)
+ (h) ResNet18 on ImageNet
+
+ Figure 6. The vertical axis represents test accuracy and the horizontal axis represents the proportion of removed singular values of $\mathbf{W}$ in ascending order. The horizontal dashed black line reports the test accuracy before compression. The vertical dashed lines mark the compression ratios recommended by the different models: red for $\lambda_{+}$ of the PUB model and blue for $\beta$ of our PDB model.
427
+
428
+ Table 6. Comparison of compression ratios across the different compression algorithms, where the ratio is the rank of the matrix after compression divided by its rank before compression.
429
+
430
+ <table><tr><td>Model</td><td>FCNN</td><td colspan="2">ResNet18</td><td colspan="2">VGG16</td></tr><tr><td>Datasets (0% noise)</td><td>MNIST</td><td>CIFAR10</td><td>ImageNet</td><td>CIFAR10</td><td>ImageNet</td></tr><tr><td>PDB(β)</td><td>55/350 (15.72%)</td><td>165/768 (21.48%)</td><td>68/192 (35.42%)</td><td>208/1536 (13.54%)</td><td>200/768 (26.04%)</td></tr><tr><td>PUB(λ+)</td><td>30/350 (8.57%)</td><td>80/768 (10.42%)</td><td>25/192 (13.02%)</td><td>84/1536 (5.47%)</td><td>123/768 (16.02%)</td></tr><tr><td>SLR</td><td>80/350 (22.86%)</td><td>200/768 (26.04%)</td><td>100/192 (52.08%)</td><td>200/1536 (13.02%)</td><td>260/768 (33.85%)</td></tr><tr><td>Naive SVD</td><td>192/350 (55%)</td><td>422/768 (55%)</td><td>106/192 (55%)</td><td>845/1536 (55%)</td><td>422/768 (55%)</td></tr></table>
431
+
432
+ Table 7. The test accuracy obtained by training different network models on different datasets ($0\%$ noise) under the four compression methods. Base denotes the test accuracy before compression. SLR performs poorly on ViT-L (accuracy $\approx 0.1$) and is thus omitted.
433
+
434
+ <table><tr><td>Network</td><td>Datasets</td><td>Base</td><td>PDB</td><td>PUB</td><td>SLR</td><td>naive SVD</td></tr><tr><td>FCNN</td><td>MNIST</td><td>0.9799</td><td>0.9804</td><td>0.9791</td><td>0.9799</td><td>0.9799</td></tr><tr><td>ResNet18</td><td>CIFAR10</td><td>0.8349</td><td>0.8384</td><td>0.8338</td><td>0.8357</td><td>0.8354</td></tr><tr><td>VGG16</td><td>CIFAR10</td><td>0.8418</td><td>0.8422</td><td>0.8405</td><td>0.8415</td><td>0.8419</td></tr><tr><td rowspan="2">BERT</td><td>RTE</td><td>0.7029</td><td>0.7319</td><td>0.7174</td><td>0.7246</td><td>0.7029</td></tr><tr><td>SciTail</td><td>0.9055</td><td>0.9155</td><td>0.9130</td><td>0.9008</td><td>0.9055</td></tr><tr><td rowspan="2">T5-base</td><td>RTE</td><td>0.7174</td><td>0.7536</td><td>0.7319</td><td>0.7174</td><td>0.7174</td></tr><tr><td>SciTail</td><td>0.9025</td><td>0.9243</td><td>0.9167</td><td>0.9182</td><td>0.9196</td></tr><tr><td rowspan="2">VIT-L</td><td>DTD</td><td>0.7452</td><td>0.7533</td><td>0.7482</td><td>-</td><td>0.7405</td></tr><tr><td>SUN397</td><td>0.7680</td><td>0.7771</td><td>0.7720</td><td>-</td><td>0.7716</td></tr><tr><td colspan="2">Average</td><td>0.7783</td><td>0.7891</td><td>0.7827</td><td>0.7858</td><td>0.7789</td></tr></table>
435
+
436
+ # Acknowledgments
437
+
438
+ This research is supported by the National Natural Science Foundation of China (No. 12471258, No. 12031005) and the National Key R&D Program of China (2023YFA1011400). The authors are grateful to the reviewers for their thoughtful comments and suggestions.
441
+
442
+ # Impact Statement
443
+
444
+ This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
445
+
446
+ # References
447
+
448
+ Bai, Z. and Silverstein, J. W. Spectral analysis of large dimensional random matrices, volume 20. Springer, 2010.
449
+ Bai, Z. and Yao, J. On sample eigenvalues in a generalized spiked population model. Journal of Multivariate Analysis, 106:167-177, 2012.
450
+ Berlyand, L., Sandier, E., Shmalo, Y., and Zhang, L. Enhancing accuracy in deep learning using random matrix theory. arXiv preprint arXiv:2310.03165, 2023.
451
+ Cimpoi, M., Maji, S., Kokkinos, I., Mohamed, S., and Vedaldi, A. Describing textures in the wild. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3606-3613, 2014.
452
+ Da'u, A. and Salim, N. Recommendation system based on deep learning methods: a systematic review and new directions. Artificial Intelligence Review, 53(4):2709-2748, 2020.
453
+ Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248-255. IEEE, 2009.
454
+ Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 conference of the North American chapter of the association for computational linguistics: human language technologies, volume 1 (long and short papers), pp. 4171-4186, 2019.
455
+ Dosovitskiy, A. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
456
+ Glorot, X. and Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249-256. JMLR Workshop and Conference Proceedings, 2010.
457
+ He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition, 2015.
458
+ Idelbayev, Y. and Carreira-Perpinan, M. A. Low-rank compression of neural nets: Learning the rank of each layer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8049-8059, 2020.
461
+ Ke, Z. T., Ma, Y., and Lin, X. Estimation of the number of spiked eigenvalues in a covariance matrix by bulk eigenvalue matching analysis. Journal of the American Statistical Association, 118(541):374-392, 2023.
462
+ Khot, T., Sabharwal, A., and Clark, P. Scitail: A textual entailment dataset from science question answering. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI-18), AAAI'18/IAAI'18/EAAI'18. AAAI Press, 2018.
463
+ Krizhevsky, A., Hinton, G., et al. Learning multiple layers of features from tiny images. 2009.
464
+ Leek, E. C., Leonardis, A., and Heinke, D. Deep neural networks and image classification in biological vision. Vision Research, 197:108058, 2022.
465
+ Li, W., Chen, J., Qin, Y., Bai, Z., and Yao, J. Estimation of the population spectral distribution from a large dimensional sample covariance matrix. Journal of Statistical Planning and Inference, 143(11):1887-1897, 2013.
466
+ Liebenwein, L., Maalouf, A., Feldman, D., and Rus, D. Compressing neural networks: Towards determining the optimal layer-wise decomposition. Advances in Neural Information Processing Systems, 34:5328-5344, 2021.
467
+ Liu, X., Liu, Y., Pan, G., Zhang, L., and Zhang, Z. Asymptotic properties of spiked eigenvalues and eigenvectors of signal-plus-noise matrices with their applications. arXiv preprint arXiv:2310.13939, 2023.
468
+ Loshchilov, I. and Hutter, F. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
469
+ Martin, C. H. and Mahoney, M. W. Heavy-tailed universality predicts trends in test accuracies for very large pre-trained deep neural networks. In Proceedings of the 2020 SIAM International Conference on Data Mining, pp. 505-513. SIAM, 2020.
470
+ Martin, C. H. and Mahoney, M. W. Implicit self-regularization in deep neural networks: Evidence from random matrix theory and implications for learning. The Journal of Machine Learning Research, 22(1):7479-7551, 2021.
471
+ Mohanty, A., Cherukuri, R. C., and Prusty, A. R. Improvement of speech emotion recognition by deep convolutional neural network and speech features. In Congress on Intelligent Systems, pp. 117-129. Springer, 2022.
472
+
473
+ Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pp. 8748-8763. PMLR, 2021.
474
+ Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. J. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of machine learning research, 21 (140):1-67, 2020.
475
+ Shmalo, Y., Jenkins, J., and Krupchytskyi, O. Deep learning weight pruning with rmt-svd: Increasing accuracy and reducing overfitting. arXiv preprint arXiv:2303.08986, 2023.
476
+ Silverstein, J. W. and Choi, S.-I. Analysis of the limiting spectral distribution of large-dimensional random matrices. J. Multivariate Anal., 54(2):295-309, 1995. ISSN 0047-259X.
477
+ Simonyan, K. and Zisserman, A. Very deep convolutional networks for large-scale image recognition, 2015.
478
+ Staats, M., Thamm, M., and Rosenow, B. Boundary between noise and information applied to filtering neural network weight matrices. Physical Review E, 108(2):L022302, 2023.
479
+ Swaminathan, S., Garg, D., Kannan, R., and Andres, F. Sparse low rank factorization for deep neural network compression. Neurocomputing, 398:185-196, 2020.
480
+ Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. R. GLUE: A multi-task benchmark and analysis platform for natural language understanding. CoRR, abs/1804.07461, 2018. URL http://arxiv.org/abs/1804.07461.
481
+ Xiao, J., Ehinger, K. A., Hays, J., Torralba, A., and Oliva, A. Sun database: Exploring a large collection of scene categories. Int. J. Comput. Vision, 119(1):3-22, August 2016. ISSN 0920-5691. URL https://doi.org/10.1007/s11263-014-0748-y.
482
+ Xu, Y., Li, Y., Zhang, S., and et al. Trained rank pruning for efficient deep neural networks. In 2019 Fifth Workshop on Energy Efficient Machine Learning and Cognitive Computing-NeurIPS Edition (EMC2-NIPS), pp. 14-17. IEEE, 2019.
483
+ Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R. R., and Le, Q. V. Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems, 32, 2019.
484
+
485
+ Yao, J., Zheng, S., and Bai, Z. Sample covariance matrices and high-dimensional data analysis. Cambridge University Press, New York, 2015.
486
+
487
+ # A. Experimental results
488
+
489
+ Table 9 summarizes the test accuracy achieved by training various network models on various datasets using the four compression techniques. These outcomes further validate our compression algorithm.
490
+
491
+ Table 9. The test accuracy obtained by training different network models on different datasets under the four compression methods. Base denotes the test accuracy before compression.
492
+
493
+ <table><tr><td>network</td><td>dataset</td><td>noise</td><td>Base</td><td>PDB</td><td>PUB</td><td>SLR</td><td>naive SVD</td></tr><tr><td rowspan="7">fc784 → fc512 → fc512 → fc350 → fc10</td><td rowspan="7">MNIST</td><td>0%</td><td>0.9799</td><td>0.9804</td><td>0.9791</td><td>0.9799</td><td>0.9799</td></tr><tr><td>10%</td><td>0.9414</td><td>0.9454</td><td>0.9418</td><td>0.9439</td><td>0.9428</td></tr><tr><td>20%</td><td>0.8921</td><td>0.8968</td><td>0.8901</td><td>0.8918</td><td>0.8906</td></tr><tr><td>30%</td><td>0.8013</td><td>0.8078</td><td>0.7934</td><td>0.7893</td><td>0.8033</td></tr><tr><td>40%</td><td>0.7247</td><td>0.7319</td><td>0.7119</td><td>0.7304</td><td>0.7278</td></tr><tr><td>50%</td><td>0.6520</td><td>0.6551</td><td>0.6381</td><td>0.6531</td><td>0.6387</td></tr><tr><td>60%</td><td>0.5455</td><td>0.5544</td><td>0.5331</td><td>0.5536</td><td>0.5401</td></tr><tr><td rowspan="7">Resnet18</td><td rowspan="7">CIFAR10</td><td>0%</td><td>0.8349</td><td>0.8384</td><td>0.8338</td><td>0.8357</td><td>0.8354</td></tr><tr><td>10%</td><td>0.7360</td><td>0.7446</td><td>0.7439</td><td>0.7391</td><td>0.7382</td></tr><tr><td>20%</td><td>0.6701</td><td>0.6845</td><td>0.6831</td><td>0.6747</td><td>0.6709</td></tr><tr><td>30%</td><td>0.6417</td><td>0.6576</td><td>0.6399</td><td>0.6187</td><td>0.6466</td></tr><tr><td>40%</td><td>0.4995</td><td>0.5051</td><td>0.5019</td><td>0.5027</td><td>0.5033</td></tr><tr><td>50%</td><td>0.4533</td><td>0.4607</td><td>0.4597</td><td>0.4496</td><td>0.4523</td></tr><tr><td>60%</td><td>0.2960</td><td>0.2982</td><td>0.2942</td><td>0.2975</td><td>0.2960</td></tr><tr><td rowspan="7">VGG16</td><td rowspan="7">CIFAR10</td><td>0%</td><td>0.8418</td><td>0.8422</td><td>0.8405</td><td>0.8415</td><td>0.8419</td></tr><tr><td>10%</td><td>0.7882</td><td>0.7937</td><td>0.7931</td><td>0.7935</td><td>0.7901</td></tr><tr><td>20%</td><td>0.7474</td><td>0.7496</td><td>0.7452</td><td>0.7449</td><td>0.7478</td></tr><tr><td>30%</td><td>0.6992</td><td>0.7048</td><td>0.6862</td><td>0.7011</td><td>0.6998</td></tr><tr><td>40%</td><td>0.6276</td><td>0.6341</td><td>0.6255</td><td>0.6303</td><td>0.6298</td></tr><tr><td>50%</td><td>0.5986</td><td>0.6122</td><td>0.6025</td><td>0.6048</td><td>0.6024</td></tr><tr><td>60%</td><td>0.5570</td><td>0.5686</td><td>0.5498</td><td>0.5591</td><td>0.5576</td></tr><tr><td>Resnet18</td><td>Imagenet</td><td>0%</td><td>0.5473</td><td>0.5469</td><td>0.5440</td><td>0.5467</td><td>0.5452</td></tr><tr><td>VGG16</td><td>Imagenet</td><td>0%</td><td>0.6160</td><td>0.6164</td><td>0.6126</td><td>0.6078</td><td>0.6076</td></tr><tr><td rowspan="2">BERT</td><td>RTE</td><td>0%</td><td>0.7029</td><td>0.7319</td><td>0.7174</td><td>0.7246</td><td>0.7029</td></tr><tr><td>SciTail</td><td>0%</td><td>0.9055</td><td>0.9155</td><td>0.9130</td><td>0.9008</td><td>0.9055</td></tr><tr><td rowspan="2">T5-base</td><td>RTE</td><td>0%</td><td>0.7174</td><td>0.7536</td><td>0.7319</td><td>0.7174</td><td>0.7174</td></tr><tr><td>SciTail</td><td>0%</td><td>0.9025</td><td>0.9243</td><td>0.9167</td><td>0.9182</td><td>0.9196</td></tr><tr><td rowspan="2">VIT-L</td><td>DTD</td><td>0%</td><td>0.7452</td><td>0.7533</td><td>0.7482</td><td>-</td><td>0.7405</td></tr><tr><td>SUN397</td><td>0%</td><td>0.7680</td><td>0.7771</td><td>0.7720</td><td>-</td><td>0.7716</td></tr></table>
494
+
495
+ # B. Spectral moment of PDB model
496
+
497
+ The spectral moments of a matrix are important statistics related to its eigenvalues, often used to describe the spectral distribution of the matrix. Denote the empirical spectral moments of $\mathbf{W}^T\mathbf{W}$ by
498
+
499
+ $$
+ \hat{\gamma}_j = \frac{1}{p}\operatorname{tr}\left(\mathbf{W}^T\mathbf{W}\right)^j, \quad j = 1, 2, 3.
+ $$
502
+
503
+ Under our PDB model, we have the following theoretical spectral moments,
+
+ $$
+ \gamma_1 = \mu_1, \quad \gamma_2 = \mu_2 + c\mu_1^2, \quad \gamma_3 = \mu_3 + 3c\mu_1\mu_2 + c^2\mu_1^3,
+ $$
508
+
509
+ where
510
+
511
+ $$
+ \mu_j = \hat{t}\hat{\sigma}_1^{2j} + (1 - \hat{t})\hat{\sigma}_2^{2j}, \quad j = 1, 2, 3,
+ $$
514
+
515
+ and $(\hat{t},\hat{\sigma}_1,\hat{\sigma}_2)$ come from the PDBLS algorithm.
516
+
517
+ Theorem B.1. Under PDB model (2) with $p / n = c$ , as $n \to \infty$ , we have
518
+
519
+ $$
+ \hat{\gamma}_j - \gamma_j \xrightarrow{a.s.} 0, \quad j = 1, 2, 3. \tag{13}
+ $$
522
+
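The consistency in Theorem B.1 can be checked numerically. The sketch below simulates a no-spike weight matrix with a two-bulk population covariance (all parameter values are illustrative, not taken from the paper) and compares the empirical moments with the theoretical ones:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 400, 800
t, s1, s2 = 0.4, 2.0, 0.5          # illustrative bulk parameters
c = p / n

# Simulate W = Sigma^{1/2} X / sqrt(n), with Sigma diagonal: eigenvalue s1 on a
# fraction t of the coordinates and s2 on the remaining fraction 1 - t.
diag = np.where(np.arange(p) < int(t * p), s1, s2)
X = rng.standard_normal((p, n))
W = np.sqrt(diag)[:, None] * X / np.sqrt(n)

lam = np.linalg.eigvalsh(W @ W.T)   # nonzero spectrum of W^T W
gamma_emp = [np.mean(lam ** j) for j in (1, 2, 3)]  # (1/p) tr (W^T W)^j

mu = [t * s1 ** j + (1 - t) * s2 ** j for j in (1, 2, 3)]
gamma_th = [mu[0],
            mu[1] + c * mu[0] ** 2,
            mu[2] + 3 * c * mu[0] * mu[1] + c ** 2 * mu[0] ** 3]
```

At this matrix size the empirical and theoretical moments already agree to within a few percent.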
523
+ # C. The difference between PUB and PDB model
524
+
525
+ In this section, we compare the theoretical properties of our PDB model and the PUB model. Recall that the theory for the PDB model has been established in Theorems 3.1-3.3 and B.1. Next we list the corresponding results for the PUB model (1).
526
+
527
+ The LSD of $\mathbf{W}^T\mathbf{W}$ under PUB model has density function (Martin & Mahoney, 2021; Staats et al., 2023):
528
+
529
+ $$
+ f\left(x; c, \sigma^2\right) = \frac{\sqrt{\left(\lambda_+ - x\right)\left(x - \lambda_-\right)}}{2\pi\sigma^2 c x} \mathbb{1}_{\left\{\lambda_- \leq x \leq \lambda_+\right\}}. \tag{14}
+ $$
532
+
533
+ Here the boundary points are $\lambda_{\pm}(\sigma^2) = \sigma^2(1\pm\sqrt{c})^2$ with $c = \frac{p}{n}$. The sample spikes of $\mathbf{W}^T\mathbf{W}$ have the following relationship with the population spikes $\alpha_{j}$ of $\mathbb{E}\mathbf{W}^T\mathbf{W}$ (Bai & Yao, 2012):
534
+
535
+ $$
+ \lambda_j = \alpha_j + c\alpha_j\frac{\sigma^2}{\alpha_j - \sigma^2}, \quad j \in \{1, \dots, K\}. \tag{15}
+ $$
538
+
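Eq. (15) and its inverse (the larger root of the resulting quadratic, used to recover a population spike from an observed sample spike) are easy to state in code; this is an illustrative sketch, not the paper's implementation:

```python
import math

def pub_sample_spike(alpha, c, sigma2):
    """Eq. (15): asymptotic sample-spike location for a population spike alpha."""
    return alpha + c * alpha * sigma2 / (alpha - sigma2)

def pub_population_spike(lam, c, sigma2):
    """Inverse of Eq. (15): lam = alpha + c*alpha*sigma2/(alpha - sigma2)
    rearranges to alpha^2 - alpha*(lam + sigma2*(1 - c)) + lam*sigma2 = 0;
    take the larger root."""
    b = lam + sigma2 * (1.0 - c)
    return (b + math.sqrt(b * b - 4.0 * lam * sigma2)) / 2.0
```

For instance, with $\alpha = 4$, $c = 0.5$, $\sigma^2 = 1$, the forward map gives $\lambda = 14/3$, and the inverse recovers $\alpha = 4$.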
539
+ Moreover, the relationship between the moments of $\mathbf{W}^T\mathbf{W}$ and $\Sigma_{PUB}$ , mainly determined by the bulk eigenvalues, is established in (Yao et al., 2015):
540
+
541
+ $$
+ \frac{1}{p}\operatorname{tr}\left(\mathbf{W}^T\mathbf{W}\right)^j - \gamma_j \xrightarrow{a.s.} 0, \quad j = 1, 2, 3,
+ $$
544
+
545
+ where
546
+
547
+ $$
+ \gamma_1 = \mu_1, \quad \gamma_2 = \mu_2 + c\mu_1^2, \quad \gamma_3 = \mu_3 + 3c\mu_1\mu_2 + c^2\mu_1^3, \quad \mu_j = \hat{\sigma}^{2j}, \quad j = 1, 2, 3. \tag{16}
+ $$
550
+
551
+ Here $\hat{\sigma}^{2j}$ can be obtained by the BEMA (Ke et al., 2023) or Kernel (Staats et al., 2023) method.
552
+
553
+ The differences between the PUB and PDB models are summarized in Table 10.
554
+
555
+ Table 10. The differences between the PUB and PDB models.
556
+
557
+ <table><tr><td></td><td>PUB model</td><td>PDB model</td></tr><tr><td>LSD</td><td>(14)</td><td>(3)-(4)</td></tr><tr><td>spiked eigenvalues</td><td>(15)</td><td>(6)-(7)</td></tr><tr><td>spectral moment</td><td>(16)</td><td>(13)</td></tr></table>
558
+
559
+ # D. Proof of Theorem 3.1
560
+
561
+ First, we give some preliminary knowledge about the Stieltjes transform, which plays a key role in our proofs. For any distribution function $G(x)$, its Stieltjes transform $m_G(z)$ is defined as
562
+
563
+ $$
+ m_G(z) = \int \frac{1}{x - z}\, dG(x), \quad z \in \mathbb{C}^+,
+ $$
566
+
567
+ where $\mathbb{C}^{+} = \{z\in \mathbb{C}:\operatorname{Im}(z) > 0\}$ denotes the upper half of the complex plane and $\operatorname{Im}(z)$ represents the imaginary part of $z$. Moreover, $G(x)$ and $m_{G}(z)$ are in one-to-one correspondence. The density function $G^{\prime}(x)$ of $G(x)$ is given by Theorem 2.1 in (Silverstein & Choi, 1995):
568
+
569
+ $$
+ G^{\prime}(x) = \lim_{\eta \to 0} \frac{\operatorname{Im}\left(m_G(z)\right)}{\pi}, \quad z = x + i\eta. \tag{17}
+ $$
572
+
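Eq. (17) can be illustrated numerically: evaluate the empirical Stieltjes transform of a simulated spectrum slightly above the real axis and compare with a known density. The sketch below does this for the standard MP law ($\sigma^2 = 1$), whose density is Eq. (14); all names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 500, 1000
c = p / n
X = rng.standard_normal((p, n)) / np.sqrt(n)
lam = np.linalg.eigvalsh(X @ X.T)          # spectrum of W^T W with sigma^2 = 1

def density_via_stieltjes(x, eta=0.05):
    """Eq. (17): recover the density from Im m(x + i*eta) / pi."""
    z = x + 1j * eta
    m = np.mean(1.0 / (lam - z))           # empirical Stieltjes transform
    return m.imag / np.pi

def mp_density(x, c=c, s2=1.0):
    """Closed-form MP density, Eq. (14)."""
    lp, lm = s2 * (1 + np.sqrt(c)) ** 2, s2 * (1 - np.sqrt(c)) ** 2
    return np.sqrt(max(lp - x, 0.0) * max(x - lm, 0.0)) / (2 * np.pi * s2 * c * x)
```

Inside the bulk the two values agree up to the smoothing induced by the finite imaginary part $\eta$; outside the support the recovered density is close to zero.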
573
+ According to the main results in (Bai & Silverstein, 2010), the Stieltjes transform $m(z)$ of the LSD of $\mathbf{W}^T\mathbf{W}$ satisfies the following equations:
574
+
575
+ $$
+ z = -\frac{1}{\underline{m}(z)} + c\int \frac{v}{1 + v\underline{m}(z)}\, dH(v), \tag{18}
+ $$
+
+ $$
+ \underline{m}(z) = -\frac{1 - c}{z} + cm(z), \tag{19}
+ $$
582
+
583
+ where $H(v)$ is the LSD of $\Sigma_{PDB}$. By the relation $\underline{m}(z) = -(1 - c)/z + cm(z)$, along with Eq. (17), the density function of the LSD of $\mathbf{W}^T\mathbf{W}$ is
584
+
585
+ $$
586
+ \rho(x) = \lim_{\eta \to 0} \frac{\operatorname{Im} m(z)}{\pi} = \lim_{\eta \to 0} \frac{\operatorname{Im} \underline{m}(z)}{\pi c}, \quad z = x + i\eta, \ \eta > 0,
587
+ $$
588
+
589
+ from which we get Eq. (3).
590
+
591
+ Since the number of spiked eigenvalues $K$ is fixed, we have
592
+
593
+ $$
594
+ \begin{array}{l} H (v) = \lim _ {p \rightarrow \infty} \frac {1}{p} \left[ \sum_ {j = 1} ^ {K} I (\alpha_ {j} \leq v) + (p - K) t I \left(\sigma_ {1} ^ {2} \leq v\right) + (p - K) (1 - t) I \left(\sigma_ {2} ^ {2} \leq v\right)\right] \\ = t I \left(\sigma_ {1} ^ {2} \leq v\right) + (1 - t) I \left(\sigma_ {2} ^ {2} \leq v\right), \tag {20} \\ \end{array}
595
+ $$
596
+
597
+ which implies
598
+
599
+ $$
600
+ \begin{array}{l} z = - \frac {1}{\underline {{m}} (z)} + c \int \frac {v}{1 + v \underline {{m}} (z)} d H (v) \\ = - \frac {1}{\underline {{m}} (z)} + c t \frac {\sigma_ {1} ^ {2}}{1 + \sigma_ {1} ^ {2} \underline {{m}} (z)} + c (1 - t) \frac {\sigma_ {2} ^ {2}}{1 + \sigma_ {2} ^ {2} \underline {{m}} (z)}. \\ \end{array}
601
+ $$
602
+
603
+ Thus we obtain Eq. (4). The proof of Theorem 3.1 is complete.
604
+
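A useful special case to check Eq. (4) (our own sanity check): setting $t = 1$ , i.e. a single bulk variance $\sigma_1^2$ , collapses the two-point mixture $H$ to $\delta_{\sigma_1^2}$ , and the equation reduces to

$$
z = -\frac{1}{\underline{m}(z)} + \frac{c\,\sigma_1^2}{1 + \sigma_1^2\,\underline{m}(z)},
$$

which is exactly the Stieltjes equation of the classical Marchenko-Pastur law with variance $\sigma_1^2$ , as expected.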
605
+ # E. Proof of Theorem 3.2
606
+
607
+ By Eq. (4), the inverse function of $z \mapsto -1 / \underline{m}(z)$ is
608
+
609
+ $$
610
+ g (x) = x + c x \frac {t \sigma_ {1} ^ {2}}{x - \sigma_ {1} ^ {2}} + c x \frac {(1 - t) \sigma_ {2} ^ {2}}{x - \sigma_ {2} ^ {2}}.
611
+ $$
612
+
613
+ Here $g(x) = z$ and $x = -1 / \underline{m}(z)$ in Eq. (4). Then, according to Proposition 2.17 in (Yao et al., 2015), we obtain Theorem 3.2.
614
+
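As a sanity check of this construction (ours), take $t = 1$ , so $g(x) = x + c x \sigma_1^2/(x - \sigma_1^2)$ . Its critical points satisfy

$$
g'(x) = 1 - \frac{c\,\sigma_1^4}{(x - \sigma_1^2)^2} = 0 \quad\Longrightarrow\quad x = \sigma_1^2\left(1 + \sqrt{c}\right)
$$

for the larger root, and substituting back gives

$$
\lambda_+ = g\!\left(\sigma_1^2(1+\sqrt{c})\right) = \sigma_1^2\left(1 + \sqrt{c}\right)^2,
$$

the classical Marchenko-Pastur right edge, consistent with Theorem 3.2.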
615
+ # F. Proof of Theorem 3.3
616
+
617
+ By Theorem 4.1 in (Bai & Yao, 2012), the relationship between sample spikes $\lambda_{j}$ and population spikes $\alpha_{j}$ is established by
618
+
619
+ $$
620
+ \lambda_{j} \xrightarrow{\text{a.s.}} \alpha_{j} + c \alpha_{j} \int \frac{v}{\alpha_{j} - v} \, dH(v), \quad j \in \{1, \dots, K\}.
621
+ $$
622
+
623
+ Moreover, according to Eq. (20), we have
624
+
625
+ $$
626
+ \int \frac {v}{\alpha_ {j} - v} d H (v) = \frac {t \sigma_ {1} ^ {2}}{\alpha_ {j} - \sigma_ {1} ^ {2}} + \frac {(1 - t) \sigma_ {2} ^ {2}}{\alpha_ {j} - \sigma_ {2} ^ {2}},
627
+ $$
628
+
629
+ from which for $j \in \{1, \dots, K\}$ , we obtain
630
+
631
+ $$
632
+ \lambda_{j} \xrightarrow{\text{a.s.}} \alpha_{j} + c \alpha_{j} \frac{t \sigma_{1}^{2}}{\alpha_{j} - \sigma_{1}^{2}} + c \alpha_{j} \frac{(1 - t) \sigma_{2}^{2}}{\alpha_{j} - \sigma_{2}^{2}} = g(\alpha_{j}).
633
+ $$
634
+
635
+ This completes the proof of Theorem 3.3.
636
+
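The almost-sure limit $\lambda_j \to g(\alpha_j)$ is easy to probe numerically. The sketch below is our own illustration, not part of the paper's pipeline; the sizes, the single-variance bulk ($t = 1$ for simplicity), and the spike value $\alpha = 4$ are arbitrary choices. It simulates a Gaussian sample covariance with one planted spike and compares its top eigenvalue to $g(\alpha)$:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 400, 1600                 # dimension and sample size, c = p/n = 0.25
c = p / n
sigma2 = 1.0                     # bulk variance (t = 1 case for simplicity)
alpha = 4.0                      # one population spike, above the BBP threshold

# Population covariance: one spiked eigenvalue alpha, remaining p-1 at sigma2
d = np.full(p, sigma2)
d[0] = alpha

# Sample covariance of n Gaussian observations with covariance diag(d)
X = rng.standard_normal((p, n)) * np.sqrt(d)[:, None]
S = X @ X.T / n

lam_max = np.linalg.eigvalsh(S)[-1]
g_alpha = alpha + c * alpha * sigma2 / (alpha - sigma2)  # g(alpha) with t = 1

print(f"top sample eigenvalue: {lam_max:.3f}, g(alpha): {g_alpha:.3f}")
```

For $\alpha$ below the threshold $\sigma^2(1+\sqrt{c})$ , the top sample eigenvalue instead sticks to the bulk edge and this comparison no longer holds.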
637
+ # G. Proof of Theorem 3.4
638
+
639
+ Recall the estimators
640
+
641
+ $$
642
+ \widehat{\Theta}_{bulk} = \left\{\hat{\sigma}_{1}^{2}, \hat{\sigma}_{2}^{2}, \hat{t}\right\}, \quad \widehat{\Theta}_{bound} = \left\{\hat{\lambda}_{+}, \hat{\beta}\right\}, \quad \widehat{\Theta}_{spike} = \left\{\hat{K}, \hat{\alpha}_{1}, \dots, \hat{\alpha}_{\hat{K}}\right\},
643
+ $$
644
+
645
+ along with their corresponding population parameters
646
+
647
+ $$
648
+ \Theta_{bulk} = \left\{\sigma_{1}^{2}, \sigma_{2}^{2}, t\right\}, \quad \Theta_{bound} = \left\{\lambda_{+}, \beta\right\}, \quad \Theta_{spike} = \left\{K, \alpha_{1}, \dots, \alpha_{K}\right\}.
649
+ $$
650
+
651
+ We now proceed to establish the consistency of these estimators.
652
+
653
+ For $\widehat{\Theta}_{bulk}$ , its consistency relies on the convergence of the empirical spectral distribution (ESD)
654
+
655
+ $$
656
+ F _ {n} ^ {\mathbf {W} ^ {T} \mathbf {W}} (x) = \frac {1}{p} \sum_ {j = 1} ^ {p} \mathbb {1} _ {\{\lambda_ {j} \leq x \}}
657
+ $$
658
+
659
+ to the limiting spectral distribution (LSD) characterized in Theorem 3.1. Based on the ESD, we obtain $\widehat{\Theta}_{bulk}$ via the procedures defined in Eq. (9)-(10). Moreover, Eq. (4) ensures a one-to-one correspondence between the population parameters $\Theta_{bulk} = \{\sigma_1^2,\sigma_2^2,t\}$ and the LSD. Therefore, by Theorem 3.1, the ESD converges almost surely to the LSD, and applying Theorem 3.1 in (Li et al., 2013), we conclude that
660
+
661
+ $$
662
+ \widehat{\Theta}_{bulk} \xrightarrow{\text{a.s.}} \Theta_{bulk}. \tag{21}
663
+ $$
664
+
665
+ For $\widehat{\Theta}_{bound}$ , the consistency of $\hat{\lambda}_{+}$ relies on the convergence of $\widehat{\Theta}_{bulk}$ and the results in Theorem 3.2, while the consistency of $\hat{\beta}$ depends on the convergence of $\hat{K}$ . Based on $\widehat{\Theta}_{bulk}$ , we compute $\hat{\lambda}_{+}$ through Eq. (5)-(6), i.e.,
666
+
667
+ $$
668
+ \hat {\lambda} _ {+} = \hat {g} (\hat {y}), \hat {y} = \arg \max _ {x \in \mathbb {R}} \left\{\hat {g} ^ {\prime} (x) = 0 \right\}, \hat {g} (x) = x + c _ {n} x \frac {\hat {t} \hat {\sigma} _ {1} ^ {2}}{x - \hat {\sigma} _ {1} ^ {2}} + c _ {n} x \frac {(1 - \hat {t}) \hat {\sigma} _ {2} ^ {2}}{x - \hat {\sigma} _ {2} ^ {2}}, c _ {n} = p / n.
669
+ $$
670
+
671
+ Moreover, Theorem 3.2 establishes a one-to-one correspondence between $\lambda_{+}$ and $\Theta_{bulk}$ . Therefore, by Eq. (21) and the condition $c_{n} \to c$ , we have $\hat{\lambda}_{+} \xrightarrow{\text{a.s.}} \lambda_{+}$ . Then by Eq. (11)-(12), we have $\hat{K} = \# \left\{\lambda_{j} \mid \lambda_{j} \in \left(\hat{\lambda}_{+}, \lambda_{\max}\right]\right\} \xrightarrow{\text{a.s.}} K$ and hence $\hat{\beta} = \lambda_{\hat{K} + p\hat{t} - \hat{K}\hat{t}} \xrightarrow{\text{a.s.}} \beta$ . Thus we conclude that
672
+
673
+ $$
674
+ \widehat{\Theta}_{bound} \xrightarrow{\text{a.s.}} \Theta_{bound}. \tag{22}
675
+ $$
676
+
677
+ For $\widehat{\Theta}_{spike}$ , its consistency relies on the convergence of $\widehat{\Theta}_{bulk}$ , $\widehat{\Theta}_{bound}$ and the results in Theorem 3.3. By the convergence $\hat{\lambda}_{+} \xrightarrow{\mathrm{a.s.}} \lambda_{+}$ in $\widehat{\Theta}_{bound}$ , we have $\hat{K} \xrightarrow{\mathrm{a.s.}} K$ . Given $\widehat{\Theta}_{bulk}$ , according to Eq. (7), we compute $\hat{\alpha}_j$ by solving the equation
678
+
679
+ $$
680
+ \hat{g}(\hat{\alpha}_{j}) = \lambda_{j}, \quad \text{s.t.} \quad \hat{g}^{\prime}(\hat{\alpha}_{j}) > 0, \quad j \in \{1, \ldots, \hat{K}\}.
681
+ $$
682
+
683
+ Moreover, from Theorem 4.1 in (Bai & Yao, 2012) and Theorem 3.3, we have $\lambda_{j} \xrightarrow{\mathrm{a.s.}} g(\alpha_{j})$ , $g'(\alpha_{j}) > 0$ , $j \in \{1, \dots, K\}$ . Therefore by Eq. (21), $\hat{K} \xrightarrow{\mathrm{a.s.}} K$ and $c_{n} \to c$ , we obtain
684
+
685
+ $$
686
+ \widehat{\Theta}_{spike} \xrightarrow{\text{a.s.}} \Theta_{spike}. \tag{23}
687
+ $$
688
+
689
+ Finally, by Eq. (21), (22), (23), we complete the proof of Theorem 3.4.
690
+
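The chain of consistency arguments above can also be probed numerically. The sketch below is our own illustration with arbitrary parameter choices; it computes the theoretical right edge $\lambda_+$ by locating the minimum of $g$ on a grid, in the spirit of Eq. (5)-(6), simulates a spike-free PDB-type sample covariance, and checks that its largest eigenvalue matches $\lambda_+$:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 500, 2000                     # c = p/n = 0.25
c = p / n
s1, s2, t = 1.0, 3.0, 0.5            # two bulk variances and mixing weight

# g(x) from Theorem 3.2 with the two-point H(v) of Eq. (20)
def g(x):
    return x + c * x * t * s1 / (x - s1) + c * x * (1 - t) * s2 / (x - s2)

# Right bulk edge lambda_+: g attains a local minimum at its largest
# critical point, located to the right of the largest variance s2
xs = np.linspace(s2 + 1e-3, 20.0, 200_000)
lam_plus = g(xs).min()

# Spike-free PDB-type sample covariance: t*p variances s1, the rest s2
d = np.where(np.arange(p) < t * p, s1, s2)
X = rng.standard_normal((p, n)) * np.sqrt(d)[:, None]
lam_max = np.linalg.eigvalsh(X @ X.T / n)[-1]

print(f"theoretical edge: {lam_plus:.3f}, largest sample eigenvalue: {lam_max:.3f}")
```

Because there are no spikes here, the largest sample eigenvalue converges to the bulk edge $\lambda_+$ rather than to any $g(\alpha_j)$ .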
691
+ # H. Proof of Theorem B.1
692
+
693
+ According to Lemma 2.16 in (Yao et al., 2015), we have the following relationship:
694
+
695
+ $$
696
+ \gamma_{1} = \mu_{1}, \quad \gamma_{2} = \mu_{2} + c \mu_{1}^{2}, \quad \gamma_{3} = \mu_{3} + 3 c \mu_{1} \mu_{2} + c^{2} \mu_{1}^{3},
697
+ $$
698
+
699
+ where
700
+
701
+ $$
702
+ \gamma_ {j} = \lim _ {p \rightarrow \infty} \frac {1}{p} \operatorname {t r} \left(\mathbf {W} ^ {T} \mathbf {W}\right) ^ {j}, \quad \mu_ {j} = \int v ^ {j} d H (v), \quad j = 1, 2, 3.
703
+ $$
704
+
705
+ Moreover, together with Eq. (20), we have
706
+
707
+ $$
708
+ \mu_ {j} = t \sigma_ {1} ^ {2 j} + (1 - t) \sigma_ {2} ^ {2 j}, \quad j = 1, 2, 3.
709
+ $$
710
+
711
+ Moreover, by the definition of $\Sigma_{PDB}$ ,
712
+
713
+ $$
714
+ \lim_{p \to \infty} \frac{1}{p} \operatorname{tr}\left(\Sigma_{PDB}\right)^{j} = t \sigma_{1}^{2j} + (1 - t) \sigma_{2}^{2j}, \quad j = 1, 2, 3,
715
+ $$
716
+
717
+ we obtain Eq. (13). This completes the proof of Theorem B.1.
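As a check of these moment relations (ours), take $t = 1$ , so $H = \delta_{\sigma_1^2}$ and $\mu_j = \sigma_1^{2j}$ . Then

$$
\gamma_1 = \sigma_1^2, \qquad \gamma_2 = \sigma_1^4(1 + c),
$$

which are the well-known first two moments of the Marchenko-Pastur law with variance $\sigma_1^2$ and ratio $c$ .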
ICML/2025/Weight matrices compression based on PDB model in deep neural networks/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0ac106e53104854b566072f9eb0f8aa5c2eb04d430fd3bd12affa03a8f6dc779
3
+ size 1040050
ICML/2025/Weight matrices compression based on PDB model in deep neural networks/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cc491e2832600d0c57a1d12045aedeb93b39b12486456371819c1bf1e531fb6b
3
+ size 785369
ICML/2025/Weisfeiler and Leman Go Gambling_ Why Expressive Lottery Tickets Win/e3568e41-5ce9-4034-816d-086c234fff18_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9529c4556cb3487abb7c0f18519edbc04c797d34f55332da8a71b5a28861d639
3
+ size 156896
ICML/2025/Weisfeiler and Leman Go Gambling_ Why Expressive Lottery Tickets Win/e3568e41-5ce9-4034-816d-086c234fff18_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e02c03e5203c18dc653f678bc3eaf8454506a842207c3582e277a15c2afbb537
3
+ size 183961
ICML/2025/Weisfeiler and Leman Go Gambling_ Why Expressive Lottery Tickets Win/e3568e41-5ce9-4034-816d-086c234fff18_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f7094b7ad6f3111a19643f159830bd843faaccd158ba070650af736930e69488
3
+ size 1282816
ICML/2025/Weisfeiler and Leman Go Gambling_ Why Expressive Lottery Tickets Win/full.md ADDED
The diff for this file is too large to render. See raw diff
 
ICML/2025/Weisfeiler and Leman Go Gambling_ Why Expressive Lottery Tickets Win/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:92de6f7a4812478537302a989a06093f55ccc8d995e53b6aef1a219ff5a782bf
3
+ size 514643
ICML/2025/Weisfeiler and Leman Go Gambling_ Why Expressive Lottery Tickets Win/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3cc7afaf3b3d7d5558256928b23c82ca7ad62fadc5b6d18c636323022bb43f1e
3
+ size 1076732
ICML/2025/What Do Learning Dynamics Reveal About Generalization in LLM Mathematical Reasoning_/583da79b-2835-44cc-858f-01a9c108372d_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a12e0e6aacfc63fa54cdeec647b3c9b174713337cea13091fafb28951ce2c786
3
+ size 81491
ICML/2025/What Do Learning Dynamics Reveal About Generalization in LLM Mathematical Reasoning_/583da79b-2835-44cc-858f-01a9c108372d_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5c6b4d55d001d1fc578b80e55ba31b0b5760cf49c86c576a096eadbaea67ca76
3
+ size 98149