SlowGuess committed
Commit 43c669e · verified · 1 Parent(s): 9deda4d

Add Batch 2d229c79-e997-4718-b34f-5a6a908dc52e

geniegenerativeinteractiveenvironments/6cea4c71-bf12-4baf-99bf-5864cc0f055e_content_list.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c6bf7e89ee8a254bdae9c9405e78f09e8988f43c5d48a55f975159d819944e85
3
+ size 137425
geniegenerativeinteractiveenvironments/6cea4c71-bf12-4baf-99bf-5864cc0f055e_model.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c581cf412857405d0cb4dd8d37cc7903c2bc4af8db63bdc59351058f0f30641c
3
+ size 172835
geniegenerativeinteractiveenvironments/6cea4c71-bf12-4baf-99bf-5864cc0f055e_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:84031578fd88f09cc46bbb3c3ba18ec352e49d092683ac136dec0e2e8d0b8cd1
3
+ size 3324745
geniegenerativeinteractiveenvironments/full.md ADDED
@@ -0,0 +1,602 @@
1
+ ![](images/67c08179a5c6ff3deb58a7c75b3c97242d4b466496995eedd48ee5d84b75de0e.jpg)
2
+
3
+ # Genie: Generative Interactive Environments
4
+
5
+ Jake Bruce\*¹ Michael Dennis\*¹ Ashley Edwards\*¹ Jack Parker-Holder\*¹ Yuge (Jimmy) Shi\*¹ Edward Hughes¹ Matthew Lai\*¹ Aditi Mavalankar¹ Ritchie Steigerwald¹ Chris Apps¹ Yusuf Aytar¹ Sarah Bechtle¹ Feryal Behbahani¹ Stephanie Chan¹ Nicolas Heess¹ Lucy Gonzalez¹ Simon Osindero¹ Sherjil Ozair¹ Scott Reed¹ Jingwei Zhang¹ Konrad Zolna¹ Jeff Clune¹² Nando de Freitas¹ Satinder Singh¹ Tim Rocktäschel¹
6
+
7
+ # Abstract
8
+
9
+ We introduce Genie, the first generative interactive environment trained in an unsupervised manner from unlabelled Internet videos. The model can be prompted to generate an endless variety of action-controllable virtual worlds described through text, synthetic images, photographs, and even sketches. At 11B parameters, Genie can be considered a foundation world model. It comprises a spatiotemporal video tokenizer, an autoregressive dynamics model, and a simple and scalable latent action model. Genie enables users to act in the generated environments on a frame-by-frame basis despite training without any ground-truth action labels or other domain-specific requirements typically found in the world model literature. Further, the resulting learned latent action space facilitates training agents to imitate behaviors from unseen videos, opening the path for training generalist agents of the future.
10
+
11
+ # 1. Introduction
12
+
13
+ The last few years have seen an emergence of generative AI, with models capable of generating novel and creative content. Driven by breakthroughs in architectures such as transformers (Vaswani et al., 2017), advances in hardware, and a recent focus on scaling models and datasets, we can now generate coherent, conversational language (Brown et al., 2020; Radford et al., 2018; 2019), as well as crisp and aesthetically pleasing images from a text prompt (Ramesh et al., 2021; 2022; Rombach et al., 2022; Saharia et al.,
14
+
15
+ *Equal contribution, ¹Google DeepMind, ²University of British Columbia. Correspondence to: Project Leads: Ashley Edwards <edwardsashley@google.com>, Jack Parker-Holder <jparkerholder@google.com>, Tech Lead: Jake Bruce <jacobbruce@google.com>.
16
+
17
+ Proceedings of the $41^{st}$ International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s).
18
+
19
+ ![](images/1aa8d8673c1ed04696e8399d5d1f38c0ac2c8ec758a80790aa3f453f20bb79a2.jpg)
20
+ Figure 1: Diverse trajectories: Genie is a generative model that can be used as an interactive environment. The model can be prompted in various ways, either with a generated image (top) or a hand-drawn sketch (bottom). At each time step, the model takes a user-provided latent action to generate the next frame, producing trajectories with interesting and diverse character actions.
21
+
22
+ 2022). Early signs indicate video generation will be yet another frontier, with recent results suggesting that such models may also benefit from scale (Blattmann et al., 2023a; Esser et al., 2023; Ho et al., 2022a; Hong et al., 2023). Still, there remains a gulf between the level of interactions and engagement of video generative models and language tools such as ChatGPT, let alone more immersive experiences.
23
+
24
+ What if, given a large corpus of videos from the Internet, we could not only train models capable of generating novel images or videos, but entire interactive experiences? In this work we propose generative interactive environments, a new paradigm for generative AI whereby interactive environments can be generated from a single text or image prompt. Our approach, Genie, is trained from a large dataset of over 200,000 hours of publicly available Internet gaming videos and, despite training without action or text annotations, is controllable on a frame-by-frame basis via a learned
25
+
26
+ ![](images/b10a1bedc30b5d2ed252dabdfa147719928d8422fc6c3f41b5ca302f86888ab9.jpg)
27
+ Figure 2: A whole new world: Genie is capable of converting a variety of different prompts into interactive, playable environments that can be easily created, stepped into, and explored. This is made possible via a latent action interface, learned fully unsupervised from Internet videos. On the right we see a few generated steps for taking two latent actions. See more examples on our website.
28
+
29
+ latent action space (see Table 1 for a comparison to other approaches). At 11B parameters, Genie exhibits the properties typically seen in foundation models—it can take an unseen image as a prompt, making it possible to create and play entirely imagined virtual worlds (Figures 1 and 2).
30
+
31
+ Genie builds upon ideas from state-of-the-art video generation models (Gupta et al., 2023; Villegas et al., 2023), with a core design choice being spatiotemporal (ST) transformers (Xu et al., 2020) which are used in all of our model components. Genie utilizes a novel video tokenizer, and extracts latent actions via a causal action model. Both the video tokens and latent actions are passed to the dynamics model, which autoregressively predicts the next frame using MaskGIT (Chang et al., 2022). We provide a rigorous scaling analysis of our architecture with respect to both batch size and model size, which we vary from 40M to 2.7B parameters. The results show that our architecture scales gracefully with additional computational resources, leading to our final 11B parameter model. We train Genie on a filtered set of 30,000 hours of Internet gameplay videos from hundreds of 2D platformer games, producing a foundation world model for this setting.
32
+
33
+ To demonstrate the generality of our approach, we also train a separate model on action-free robot videos from the RT1 dataset (Brohan et al., 2023), learning a generative environment with consistent latent actions. Finally, we show that latent actions learned from Internet videos can be used for inferring policies from unseen action-free videos of simulated reinforcement learning (RL) environments, indicating that Genie may hold the key to unlocking unlimited data for training the next generation of generalist agents (Bauer et al., 2023; Clune, 2019; Open Ended Learning Team et al., 2021; Reed et al., 2022).
34
+
35
+ Table 1: A new class of generative model: Genie is a novel video and world model that is controllable on a frame-by-frame basis while requiring only video data at train time.
36
+
37
+ <table><tr><td>Model Class</td><td>Training Data</td><td>Controllability</td></tr><tr><td>World Models</td><td>Video + Actions</td><td>Frame-level</td></tr><tr><td>Video Models</td><td>Video + Text</td><td>Video-level</td></tr><tr><td>Genie</td><td>Video</td><td>Frame-level</td></tr></table>
38
+
39
+ # 2. Methodology
40
+
41
+ Genie is a generative interactive environment trained from video-only data. In this section we begin with preliminaries before explaining the main components of our model.
42
+
43
+ Several components in the Genie architecture are based on the Vision Transformer (ViT) (Dosovitskiy et al., 2021; Vaswani et al., 2017). Notably, the quadratic memory cost of transformers poses challenges for videos, which can contain up to $O(10^{4})$ tokens. We thus adopt a memory efficient ST-transformer architecture (inspired by Xu et al. (2020), see Figure 4) across all model components, balancing model capacity with computational constraints.
44
+
45
+ Unlike a traditional transformer where every token attends to all others, an ST-transformer contains $L$ spatiotemporal blocks with interleaved spatial and temporal attention layers, followed by a feed-forward layer (FFW) as standard attention blocks. The self-attention in the spatial layer attends over the $1 \times H \times W$ tokens within each time step, and in the temporal layer attends over $T \times 1 \times 1$ tokens across the $T$ time steps. Similar to sequence transformers, the temporal layer assumes a causal structure with a causal mask. Crucially, the dominating factor of computation complexity (i.e. the spatial attention layer) in our architecture scales linearly with the number of frames rather than quadratically, making
46
+
47
+ ![](images/8cb4028e54b58a454b81f0869f274e3f16bac6bd894eb5b514405bc9bcf2b5a6.jpg)
48
+ Figure 3: Genie model training: Genie takes in $T$ frames of video as input, tokenizes them into discrete tokens $z$ via the video tokenizer, and infers the latent actions $a$ between each pair of frames with the latent action model. Both are then passed to the dynamics model to generate predictions for the next frames in an iterative manner.
49
+
50
+ it much more efficient for video generation with consistent dynamics over extended interactions. Further, note that in the ST block, we include only one FFW after both spatial and temporal components, omitting the post-spatial FFW to allow for scaling up other components of the model, which we observe to improve results significantly.
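To make the interleaved attention pattern concrete, here is a minimal PyTorch-style sketch of a single ST block, written for this description rather than taken from the authors' code: spatial self-attention within each frame, causally masked temporal self-attention for each spatial position, and a single FFW at the end. Shapes, module names and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class STBlock(nn.Module):
    """One spatiotemporal block: spatial attention, causal temporal attention, one FFW.

    Illustrative sketch of the factorization described in the text (not the authors'
    code). Input shape: (B, T, S, D), where S = H * W spatial tokens per frame.
    """

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffw = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1, self.norm2, self.norm3 = nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, S, D = x.shape

        # Spatial layer: each frame attends over its own S tokens.
        h = x.reshape(B * T, S, D)
        q = self.norm1(h)
        h = h + self.spatial_attn(q, q, q)[0]

        # Temporal layer: each spatial position attends over the T time steps,
        # with a causal mask so a frame only sees its past.
        h = h.reshape(B, T, S, D).permute(0, 2, 1, 3).reshape(B * S, T, D)
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool, device=x.device), diagonal=1)
        q = self.norm2(h)
        h = h + self.temporal_attn(q, q, q, attn_mask=causal)[0]

        # A single FFW after both attention layers (no post-spatial FFW).
        h = h + self.ffw(self.norm3(h))
        return h.reshape(B, S, T, D).permute(0, 2, 1, 3)
```

Stacking $L$ such blocks gives the backbone used across the tokenizer, latent action model and dynamics model; only the spatial layer's cost grows with $H \times W$, and it is linear in the number of frames.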
51
+
52
+ ![](images/f10416ec0418fc036554eab1a2ac046ff27a43c2632e16ff0ac29df32d667967.jpg)
53
+ Figure 4: ST-transformer architecture. The architecture is composed of $L$ spatiotemporal blocks, each containing a spatial layer, temporal layer and feed-forward layer. Each color represents a single self-attention map, with the spatial layer attending over the $H \times W$ tokens within a single time step, and the temporal layer attending over the same token across the $T$ time steps.
54
+
55
+ # 2.1. Model Components
56
+
57
+ As shown in Figure 3, our model contains three key components: 1) a latent action model that infers the latent action $\mathbf{a}$ between each pair of frames; 2) a video tokenizer that converts raw video frames into discrete tokens $\mathbf{z}$; and 3) a dynamics model that, given a latent action and past frame tokens, predicts the next frame of the video. The model is trained in two phases following a standard autoregressive video generation pipeline: we train the video tokenizer first, which is used for the dynamics model. We then co-train the latent action model (directly from pixels) and the dynamics model (on video tokens).
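The two-phase schedule can be summarized in pseudocode. All method names below (`train_step`, `encode`, `freeze`, `sample`) are hypothetical placeholders used only to illustrate the ordering described above, not a released API.

```python
def train_genie(videos, tokenizer, lam, dynamics, steps_phase1, steps_phase2):
    """Two-phase training outline as described in the text (illustrative pseudocode)."""
    # Phase 1: train the video tokenizer alone with its VQ-VAE objective.
    for _ in range(steps_phase1):
        batch = videos.sample()
        tokenizer.train_step(batch)

    # Phase 2: co-train the latent action model (on raw pixels) and the
    # dynamics model (on video tokens), with the tokenizer frozen.
    tokenizer.freeze()
    for _ in range(steps_phase2):
        batch = videos.sample()
        actions, lam_loss = lam.train_step(batch)          # latent actions from pixels
        tokens = tokenizer.encode(batch)
        dynamics.train_step(tokens, actions.detach())      # stop-gradient on the actions
    return tokenizer, lam, dynamics
```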
58
+
59
+ Latent Action Model (LAM) To achieve controllable video generation, we condition each future frame prediction on the action taken at the previous frame. However, such action labels are rarely available in videos from the Internet and
60
+
61
+ action annotation can be costly to obtain. Instead, we learn latent actions in a fully unsupervised manner (see Figure 5).
62
+
63
+ ![](images/84bececb607a7884435df943dae6753b38bedbfc0dc8a0c706217b7135e63137.jpg)
64
+ Figure 5: Latent action model: learns actions $a_{t}$ unsupervised from unlabelled video frames.
65
+
66
+ First, an encoder takes as inputs all previous frames $x_{1:t} = (x_1,\dots x_t)$ as well as the next frame $x_{t + 1}$ , and outputs a corresponding set of continuous latent actions $\tilde{a}_{1:t} = (\tilde{a}_1,\dots \tilde{a}_t)$ . A decoder then takes all previous frames and latent actions as input and predicts the next frame $\hat{x}_{t + 1}$ .
67
+
68
+ To train the model, we leverage a VQ-VAE-based objective (van den Oord et al., 2017), which enables us to limit the number of predicted actions to a small discrete set of codes. We limit the vocabulary size $|A|$ of the VQ codebook, i.e. the maximum number of possible latent actions, to a small value to permit human playability and further enforce controllability (we use $|A| = 8$ in our experiments). As the decoder only has access to the history and latent action, $\tilde{a}_t$ should encode the most meaningful changes between the past and the future for the decoder to successfully reconstruct the future frame.
69
+
70
+ We found it to be beneficial to use a lightweight decoder to learn the latent actions rather than directly using the dynamics model. Thus the LAM decoder exists only to give the LAM training signal and is not subsequently used at inference time. Indeed, apart from the VQ codebook, the entire LAM is discarded at inference time and replaced with actions from the user.
71
+
72
+ We utilize our ST-transformer architecture for the latent action model. The causal mask in the temporal layer allows us to take the entire video $\pmb{x}_{1:T}$ as input and generate all latent actions between each frame $\tilde{a}_{1:T-1}$ .
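The VQ bottleneck on the latent actions can be sketched as follows. This is an illustrative implementation of a standard VQ-VAE quantization step with a straight-through estimator, assuming an encoder output of shape (B, T-1, D_a) and a codebook of $|A| = 8$ codes; it is not the authors' code.

```python
import torch
import torch.nn.functional as F

def quantize_latent_actions(cont_actions: torch.Tensor, codebook: torch.Tensor, beta: float = 0.25):
    """VQ-VAE style quantization of continuous latent actions into |A| discrete codes.

    cont_actions: LAM encoder output, shape (B, T-1, D_a)
    codebook:     |A| = 8 learnable code vectors, shape (8, D_a)
    """
    # Squared Euclidean distance to every code, then pick the nearest one.
    dists = (cont_actions.unsqueeze(-2) - codebook).pow(2).sum(-1)   # (B, T-1, |A|)
    indices = dists.argmin(dim=-1)                                   # discrete ids in [0, |A|)
    quantized = codebook[indices]                                    # (B, T-1, D_a)

    # VQ losses: move the codes toward the encoder outputs and vice versa.
    codebook_loss = F.mse_loss(quantized, cont_actions.detach())
    commitment_loss = F.mse_loss(cont_actions, quantized.detach())

    # Straight-through estimator so gradients still reach the encoder.
    quantized = cont_actions + (quantized - cont_actions).detach()
    return quantized, indices, codebook_loss + beta * commitment_loss
```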
73
+
74
+ Video Tokenizer Following prior work (Gupta et al., 2023; Villegas et al., 2023; Yan et al., 2023), we compress videos into discrete tokens to reduce dimensionality and enable higher quality video generation (see Figure 6). We again make use of VQ-VAE, which takes in $T$ frames of video $\pmb{x}_{1:T} = (x_1,x_2,\dots ,x_T)\in \mathbb{R}^{T\times H\times W\times C}$ as input, generating discrete representations for each frame $\pmb{z}_{1:T} = (z_{1},z_{2},\dots ,z_{T})\in \mathbb{I}^{T\times D}$ , where $D$ is the size of the discrete latent space. The tokenizer is trained using a standard VQ-VAE objective over the entire video sequence.
75
+
76
+ ![](images/d490f450d531c5ebe5d8912a7c565d9e6653e5b0a5e2773359f93f1e297f990f.jpg)
77
+ Figure 6: Video tokenizer: a VQ-VAE with ST-transformer.
78
+
79
+ Unlike prior works that focus on spatial-only compression in the tokenization phase (Gupta et al., 2023; Hong et al., 2022; Wu et al., 2022), we utilize the ST-transformer in both the encoder and decoder to incorporate temporal dynamics in the encodings, which improves the video generation quality. By the causal nature of the ST-transformer, each discrete encoding $z_{t}$ contains information from all previously seen frames of the video $x_{1:t}$ . Phenaki (Villegas et al., 2023) also uses a temporal-aware tokenizer, C-ViViT, but this architecture is compute intensive, as the cost grows quadratically with the number of frames—in comparison, our ST-transformer based tokenizer (ST-ViViT) is much more compute efficient with the dominating factor in its cost increasing linearly with the number of frames.
80
+
81
+ ![](images/5f0e3d998331545966c6f417ac56f00850060a15e8d06acadb7ff4b414675c89.jpg)
82
+ Figure 7: Dynamics model: takes in video tokens and action embeddings, and predicts future masked video tokens.
83
+
84
+ Dynamics Model The dynamics model is a decoder-only MaskGIT (Chang et al., 2022) transformer (Figure 7). At each time step $t \in [1, T]$ , it takes in the tokenized video $z_{1:t-1}$ and stopgrad latent actions $\tilde{a}_{1:t-1}$ and predicts the next frame tokens $\hat{z}_t$ . We again utilize an ST-transformer, whose causal structure enables us to use tokens from all $(T-1)$ frames $z_{1:T-1}$ and latent actions $\tilde{a}_{1:T-1}$ as input, and generate predictions for all next frames $\hat{z}_{2:T}$ . The model is trained with a cross-entropy loss between the predicted tokens $\hat{z}_{2:T}$ and ground-truth tokens $z_{2:T}$ . At train time we randomly mask the input tokens $z_{2:T-1}$ according to a Bernoulli distribution masking rate sampled uniformly between 0.5 and 1. Note that a common practice for training
85
+
86
+ world-models, including transformer-based models, is to concatenate the action at time $t$ to the corresponding frame (Micheli et al., 2023; Robine et al., 2023). However, we found that treating the latent actions as additive embeddings for both the latent action and dynamics models helped to improve the controllability of the generations.
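A minimal sketch of this training step, under assumed names (`token_embed`, `action_embed`, `dynamics_model`, `mask_token_id`): input tokens are randomly masked with a rate sampled uniformly in [0.5, 1], latent actions are added to the frame-token embeddings rather than concatenated, and the model is trained with a cross-entropy loss against the ground-truth tokens.

```python
import torch
import torch.nn.functional as F

def dynamics_training_step(dynamics_model, token_embed, action_embed,
                           z, actions, mask_token_id):
    """One illustrative MaskGIT-style training step for the dynamics model.

    z:       (B, T, S) discrete video tokens from the tokenizer
    actions: (B, T-1)  discrete latent action indices (stop-gradient from the LAM)
    """
    B, T, S = z.shape

    # Randomly mask input tokens with a rate sampled uniformly in [0.5, 1].
    rate = torch.empty(B, 1, 1).uniform_(0.5, 1.0)
    masked = torch.where(torch.rand(B, T, S) < rate,
                         torch.full_like(z, mask_token_id), z)

    # Additive conditioning: broadcast the action embedding over spatial tokens
    # and add it to the frame-token embeddings (instead of concatenating).
    x = token_embed(masked[:, :-1])                    # (B, T-1, S, D)
    x = x + action_embed(actions).unsqueeze(2)         # (B, T-1, 1, D), broadcast over S

    logits = dynamics_model(x)                         # (B, T-1, S, vocab)
    loss = F.cross_entropy(logits.flatten(0, 2), z[:, 1:].flatten())
    return loss
```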
87
+
88
+ # 2.2. Inference: Action-Controllable Video Generation
89
+
90
+ ![](images/bca8bc2411d43bc65c4c77f8091efcc7b11d0a7d4bec49ea8b74afa606b2c580.jpg)
91
+ Figure 8: Genie Inference: the prompt frame is tokenized, then combined with the latent action for the corresponding step taken from the user, and passed to the dynamics model for iterative generation. The predicted frame tokens are then decoded back to image space via the tokenizer's decoder.
92
+
93
+ We now describe how to use Genie for action-controllable video generation at inference time (see Figure 8). A player first prompts the model with an image $x_{1}$ that serves as the initial frame<sup>1</sup>. The image is tokenized using the video encoder, yielding $z_{1}$ . The player then specifies a discrete latent action $a_{1}$ to take by choosing any integer value within $[0, |A|)$ . Note that when first interacting with the model, it is unclear how each latent action will impact the next frame generation. However, we found that the meaning of each action remained consistent across different inputs. Hence, interpreting the mapping of latent actions is akin to learning the buttons on a new controller. The dynamics model takes the frame tokens $z_{1}$ and corresponding latent action $\tilde{a}_{1}$ , which is obtained by indexing into the VQ codebook with the discrete input $a_{1}$ , to predict the next frame tokens $z_{2}$ . This process is repeated to generate the rest of the sequence $\hat{z}_{2:T}$ in an autoregressive manner as actions continue to be passed to the model, while tokens are decoded into video frames $\hat{x}_{2:T}$ with the tokenizer's decoder. Note that we can re-generate ground truth videos from the dataset by passing the model the starting frame and inferred actions from the video, while also generating completely new videos (or trajectories) by changing the actions.
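The interaction loop can be summarized with the sketch below. The callables passed in (`tokenizer`, `dynamics`, `codebook`, `maskgit_decode`) stand in for the components described above; they are illustrative placeholders, not a released interface.

```python
def play(tokenizer, dynamics, codebook, maskgit_decode, prompt_image, user_actions):
    """Illustrative inference loop: one generated frame per user-supplied latent action."""
    tokens = [tokenizer.encode(prompt_image)]          # z_1 from the prompt frame x_1
    action_embs, frames = [], [prompt_image]

    for a in user_actions:                             # each a is an integer in [0, |A|)
        action_embs.append(codebook[a])                # look up the latent action embedding
        # MaskGIT-style iterative decoding (25 steps, temperature 2 in this paper),
        # conditioned on all previous frame tokens and latent actions.
        next_tokens = maskgit_decode(dynamics, tokens, action_embs,
                                     steps=25, temperature=2.0)
        tokens.append(next_tokens)
        frames.append(tokenizer.decode(next_tokens))   # tokens back to pixel space
    return frames
```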
94
+
95
+ # 3. Experimental Results
96
+
97
+ Datasets We train Genie on a large-scale dataset collected from publicly available Internet videos of 2D Platformer games (referred to from here on as "Platformers"). We construct the Platformers dataset by filtering publicly available videos for keywords relating to platformers, yielding 55M 16s video clips at 10FPS, with 160x90 resolution. The final
98
+
99
+ ![](images/9470a29161902cb5a10ae936fe36a2dc3a4f78b130ab44ab21281f2a54d0ed59.jpg)
100
+ Figure 9: Scaling results. Left: Training curves for different model sizes, Middle: Final training loss for each model size, averaged over the last 300 updates, Right: Final training loss for a 2.3B model with different batch sizes.
101
+
102
+ ![](images/760a3652ef005056054df4f8809f4699b8747a1d09ab675c6ae562b6f40a1c41.jpg)
103
+
104
+ ![](images/97d7efca33f52a459dd498dcd5d7d50ba5d867bfbb1b6a5d2df0c53411229207.jpg)
105
+
106
+ dataset contains 6.8M 16s video clips (30k hours), within an order of magnitude of other popular Internet video datasets (Bain et al., 2021; Wang et al., 2023). More details can be found in Appendix B.1. Unless otherwise specified, results are with an 11B-parameter model trained on this dataset.
107
+
108
+ To verify the generality of our method, we also consider the robotics datasets used to train RT1 (Brohan et al., 2023), combining their dataset of $\sim 130k$ robot demonstrations with a separate dataset of simulation data and the 209k episodes of real robot data from prior work (Kalashnikov et al., 2018). Note that we do not use actions from any of these datasets, and simply treat them as videos. For simplicity, from here on we refer to this dataset as "Robotics".
109
+
110
+ Metrics We examine the video generation performance of Genie via two factors, namely video fidelity, i.e. the quality of video generation, and controllability, i.e. how much impact the latent actions have in video generation. For video fidelity we use the Frechet Video Distance (FVD), a video-level metric, which has been shown to have a high level of alignment to human evaluation on video quality (Unterthiner et al., 2019). For controllability, we devise a metric based on peak signal-to-noise ratio (PSNR) which we call $\Delta_t$ PSNR, that measures how much the video generations differ when conditioned on latent actions inferred from ground-truth $(\hat{x}_t)$ vs. sampled from a random distribution $(\hat{x}_t')$ :
111
+
112
+ $$
113
+ \Delta_{t}\mathrm{PSNR} = \mathrm{PSNR}(x_{t},\hat{x}_{t}) - \mathrm{PSNR}(x_{t},\hat{x}_{t}^{\prime}),
114
+ $$
115
+
116
+ where $x_{t}$ denotes the ground-truth frame at time $t$ , $\hat{x}_{t}$ denotes the frame from latent actions $\tilde{a}_{1:t}$ inferred from ground-truth frames, and $\hat{x}_{t}'$ the same frame generated from a sequence of latent actions randomly sampled from a categorical distribution. As such, the greater $\Delta_{t}$ PSNR is, the more the video generated from random latent actions differs from ground-truth, which indicates a higher level of controllability from the latent actions. For all experiments we report $\Delta_{t}$ PSNR with $t = 4$ .
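For concreteness, a direct way to compute the metric from its definition is sketched below, assuming frames are arrays of pixel values in [0, 1]; this is our own restatement of the formula above, not the paper's evaluation code.

```python
import numpy as np

def psnr(x: np.ndarray, y: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio between two frames with pixel values in [0, max_val]."""
    mse = np.mean((x - y) ** 2)
    return float(10.0 * np.log10(max_val ** 2 / mse))

def delta_t_psnr(x_t: np.ndarray, x_hat_t: np.ndarray, x_hat_t_rand: np.ndarray) -> float:
    """Delta_t PSNR = PSNR(x_t, x_hat_t) - PSNR(x_t, x_hat_t').

    x_hat_t      : frame generated with latent actions inferred from ground truth
    x_hat_t_rand : frame generated with randomly sampled latent actions
    Larger values mean the latent actions have more influence (higher controllability).
    """
    return psnr(x_t, x_hat_t) - psnr(x_t, x_hat_t_rand)
```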
117
+
118
+ Training Details Our video tokenizer uses 200M parameters, a patch size of 4 and a codebook with embedding size 32 and 1024 unique codes, which we found to be the most
119
+
120
+ effective given the trade-off between reconstruction quality of the tokenizer and downstream performance of video prediction. The latent action model has 300M parameters, uses a patch size of 16 and a codebook with embedding size 32 and 8 unique codes (latent actions). For all modelling components we use a sequence length of 16 frames with an FPS of 10. Further, we employ bfloat16 and QK norm for training our dynamics model, which has been shown to stabilize training at large scale (Dehghani et al., 2023; Henry et al., 2020). At inference time, we perform 25 MaskGIT steps for the sampling of each frame with a temperature of 2 using random sampling. See Appendix C for more details.
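For reference, the hyperparameters quoted in this paragraph can be collected in one place. The config object below simply restates those values under an assumed naming scheme; it is not a released configuration file.

```python
from dataclasses import dataclass

@dataclass
class GenieConfig:
    # Video tokenizer (ST-ViViT)
    tokenizer_params: str = "200M"
    tokenizer_patch_size: int = 4
    tokenizer_codebook_codes: int = 1024
    tokenizer_embedding_dim: int = 32
    # Latent action model
    lam_params: str = "300M"
    lam_patch_size: int = 16
    lam_codebook_codes: int = 8          # |A| = 8 latent actions
    lam_embedding_dim: int = 32
    # Shared training settings
    sequence_length_frames: int = 16
    fps: int = 10
    # Inference
    maskgit_steps: int = 25
    sampling_temperature: float = 2.0
```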
121
+
122
+ # 3.1. Scaling Results
123
+
124
+ In this section, we investigate the scaling behavior of our model. To this end, we conduct studies that explore the impact of both model size and batch size. See Appendix D for more details on architecture and compute usage.
125
+
126
+ Scaling Model Size Given a fixed video tokenizer and action model architecture, we train a series of dynamics models ranging from 40M to 2.7B parameters. Figure 9 shows our architecture scales gracefully with model parameters, with each increase in size corresponding to a consistent decrease in the final training loss. This is a strong indication that our approach benefits from scaling, which we exploit with our main Genie model.
127
+
128
+ Scaling Batch Size We also investigate the effect of scaling the batch size, considering a 2.3B model with batch sizes of 128, 256, and 448, equating to 1.9M, 3.8M and 6.6M tokens. As shown in Figure 9, increasing the batch size leads to a similarly favorable gain in terms of model performance.
129
+
130
+ Genie Model It is clear that increasing both model size and batch size helps improve model performance. As a result, for our final model, we train a 10.1B dynamics model with a batch size of 512, for a total of 125k steps, using 256 TPUv5p. When combined with the tokenizer and action model this brings the total to 10.7B parameters, trained on 942B tokens, which we refer to as the Genie model.
131
+
132
+ ![](images/85e93f804fb0a331b04269d1cd95a3d3ed5025f31804553419dc5479f2cc9cef.jpg)
133
+ Figure 10: Playing from Image Prompts: We can prompt Genie with images generated by text-to-image models, hand-drawn sketches or real-world photos. In each case we show the prompt frame and a second frame after taking one of the latent actions four consecutive times. In each case we see clear character movement, despite some of the images being visually distinct from the dataset.
134
+
135
+ # 3.2. Qualitative Results
136
+
137
+ We now present qualitative results from the Genie model. We showcase an 11B parameter model trained on the Platformers dataset and a smaller 1.3B model trained on the Robotics dataset. Our model generates high-quality, controllable videos across diverse domains. Notably, we qualitatively evaluate our Platformers-trained model using only out-of-distribution (OOD) image prompts, including those generated from text-to-image models, hand-drawn sketches, and even real-world photos. The ability to generalize to such significantly OOD inputs underscores the robustness of our approach and the value of training on large-scale data, which would not have been feasible with real actions as input.
138
+
139
+ Platformers-trained model Figure 10 showcases examples of our model's generations prompted from OOD images, including (top row) images generated from Imagen2 (Ho et al., 2022a; van den Oord et al.), (second row) hand-drawn sketches and (bottom row) real-world photos. Genie enables bringing these imagined worlds to life, as we see game-like behaviour when interacting with each example. We showcase more generations in Appendix A, additionally highlighting the consistency of the latent actions.
140
+
141
+ Another emergent capability of our model is its ability to understand 3D scenes and emulate parallax, which is commonly seen in platformer games. In Figure 11 we show an image generated by Imagen2, where taking a latent action
142
+
143
+ ![](images/3bbee164610a5d343cc61cd83403d9921f8e72664c30a2ef9ab323ac4a900c1d.jpg)
144
+ Figure 11: Emulating parallax, a common feature in platformer games. From this initial frame generated by a text-to-image model, the foreground moves more than the near and far middle ground, while the background moves only slightly.
145
+
146
+ ![](images/2acd98ff6133ff7f7338f8adfc3049238c31992f7f8af6420c6a8dd77a73c09f.jpg)
147
+
148
+ moves the foreground at a different rate to the background (as indicated by the length of different colored arrows).
149
+
150
+ Robotics-trained model We trained a 2.5B-parameter model on the Robotics dataset using the same hyperparameters found to be best on Platformers, achieving an FVD of 82.7 on the test split. As shown in Figure 17, this model successfully learns distinct and consistent actions from video data, requiring neither text nor action labels (as in e.g. Yang et al. (2023)). Notably, our model learns not only the controls of the robotic arm but also the interactions and deformations of various objects (Figure 12). We believe this shows our approach presents a path to using larger video datasets from the Internet to create a foundational world model for robotics, with low-level controllable simulation that could be used for a variety of applications.
151
+
152
+ Prompt
153
+
154
+ Generated
155
+
156
+ ![](images/92e60980acc41e7dd5e46be87ce8aacbc6e22fba597cf8797b9165d5fd20b85e.jpg)
157
+ Figure 12: Learning to simulate deformable objects: we show frames from a ten step trajectory in the model, taking the same action. Genie is capable of learning the physical properties of objects such as bags of chips.
158
+
159
+ ![](images/c18bb7d7508b7fdd19aa9a29dbc161c547f9042a20e8104e6d6bd528f23be988.jpg)
160
+
161
+ ![](images/3608b7edb95d766840cd54ece3a252280b2dc013156723086ff1e1800ba98fca.jpg)
162
+
163
+ ![](images/5fc429dc382dbfe7631707cd71be31cbef9d060221fdfbe9967108ccbbd4b105.jpg)
164
+
165
+ ![](images/16a391219ed70e908eccec15234032b047a9a5e0f55827e53e9bab55eaa784c6.jpg)
166
+
167
+ ![](images/c385c1cb801cb7110d770de8c4b76c2eb1b1d51a58999900af2e586b30e2db3b.jpg)
168
+
169
+ ![](images/a578f286e2e566eddffab5391e231ea9d7f27532a0024fc50f8016a6ff434ffc.jpg)
170
+
171
+ ![](images/aa0881cdcab98c2c75fcfc67190b20d53e15483dfc1e917816f4508f46de5a0b.jpg)
172
+ Prompt
173
+
174
+ ![](images/823a22bfce53f583aa02dbe31ecee0315bf6c03426d99ebc224adf266749cfe1.jpg)
175
+ down
176
+
177
+ ![](images/25eb191edf78008f87b93bf49530f14a577b2b4d988e4c0e86cf5ec66711dc94.jpg)
178
+ up
179
+
180
+ ![](images/f17b6f1c282850c68a9570cffa5795e6f85139bd659719e6e62f85ea5782faa7.jpg)
181
+ left
182
+
183
+ ![](images/3f1768ff04a9bb2a2691b22c465eeb054369703e329b88528feb3780fcfb8f49.jpg)
184
+
185
+ ![](images/154d03e8c84964e0592539960b59d60f7f8c04b0d59e548881240a6cf8c08986.jpg)
186
+
187
+ ![](images/022f1a52411f99d2eaf3b304b12773395e4d0d984a4af6a04b09c6569fb0231e.jpg)
188
+
189
+ ![](images/63e4c63202209403fd1203c619078002e55b7ea211986f2cdd0b934605105196.jpg)
190
+
191
+ ![](images/5f0ab35c4e45025d7540c171007ec99027ac999e7644737cd65317eabe48af37.jpg)
192
+ Figure 13: Controllable, consistent latent actions in Robotics: trajectories beginning from three different starting frames from our Robotics dataset. Each column shows the resulting frame from taking the same latent action five times. Despite training without action labels, the same actions are not only consistent across varied prompt frames but also have semantic meaning: down, up and left.
193
+
194
+ ![](images/2eaa6aa4559068d75c6847ea3f622a73d28fb2aba5ea802e4a1ae42303a3a95c.jpg)
195
+
196
+ ![](images/e2d1419354060abb5fab281ae46d4d64cda7634ec1b1bff3c639a4a9bae6851a.jpg)
197
+
198
+ ![](images/f040565edbc3e8a2e48adff72b8d7a294dd727fea41df9b5a0035f1d1c580826.jpg)
199
+
200
+ # 3.3. Training Agents
201
+
202
+ ![](images/50ff2707b6688709b5b9f410bc2575f7bb24dba6be827c5ae19b6d2231f7a0f9.jpg)
203
+ Figure 14: Playing from RL environments: Genie can generate diverse trajectories given an image of an unseen RL environment.
204
+
205
+ We believe Genie could one day be used as a foundation world model for training generalist agents. In fact, in Figure 14 we show that the model can be used for generating diverse trajectories in unseen RL environments. We further investigate if latent actions learnt from Internet videos can be used for imitating behaviors from unseen videos. We use a frozen LAM to label a sequence of expert videos from a target environment with discrete latent actions and then train a policy that predicts the likelihood of the expert taking
206
+
207
+ a latent action given an observation. We then use a small dataset with expert ground-truth actions for mapping latent to real actions (see Appendix E for more details).
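This pipeline can be sketched in a few functions. The `lam.infer_actions` call and the simple frequency-based latent-to-real mapping below are assumptions about one straightforward way to realize the described procedure, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def label_videos_with_latent_actions(lam, videos):
    """Use a frozen LAM to assign a discrete latent action to each frame transition."""
    with torch.no_grad():
        return [lam.infer_actions(video) for video in videos]   # one (T-1,) index tensor per video

def train_latent_policy(policy, optimizer, observations, latent_actions):
    """Behavioral cloning: predict the expert's LAM-inferred latent action from the observation."""
    logits = policy(observations)                    # (N, |A|) action logits
    loss = F.cross_entropy(logits, latent_actions)   # latent_actions: (N,) indices in [0, |A|)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def fit_latent_to_real_mapping(latent_actions, real_actions, num_latent, num_real):
    """Estimate a latent-to-real action mapping from a small ground-truth-labelled set.

    Simple co-occurrence count: each latent action maps to the real action it appears
    with most often; the mapping uses no information about the current observation.
    """
    counts = torch.zeros(num_latent, num_real)
    for la, ra in zip(latent_actions, real_actions):
        counts[la, ra] += 1
    return counts.argmax(dim=1)      # entry i = real action assigned to latent action i
```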
208
+
209
+ We evaluate in both hard and easy settings of a procedurally generated 2D-platformer environment, CoinRun (Cobbe et al., 2020), and compare against an oracle behavioral cloning (BC) model that has access to expert actions as an upper bound, and a random agent as a lower bound (Figure 15). Notably, the LAM-based policy achieves the same score as the oracle and adapts given as few as 200 expert samples, despite almost certainly never seeing CoinRun before. This provides evidence that the learnt latent actions are consistent, as the mapping from latent to real contains no information about the current observation.
210
+
211
+ ![](images/28d629e165849717cb1a88708f0a5e016f0667e7df61660634aae448af804ed9.jpg)
212
+ Figure 15: BC results. Mean percentage of levels solved out of 100 samples, averaged over 5 seeds with $95\%$ confidence intervals.
213
+
214
+ ![](images/e1499c4bd7d77b67f062e2527ca4b6344789cb165c40362cd1933545023db606.jpg)
215
+
216
+ # 3.4. Ablation Studies
217
+
218
+ Design choices for latent action model In designing our latent action model, we carefully considered the type of input to use. While we ultimately chose to use the original images (pixels), we thoroughly evaluated this choice against the alternative of using tokenized images (replacing x with z in Figure 5). We refer to this alternative approach as the "token-input" model (see Table 2). While this model achieved a slightly lower FVD score on the Platformers dataset, it did not maintain this advantage on the Robotics dataset. More importantly, in both environments, the token-input model exhibited worse controllability (as measured by $\Delta_t$ PSNR). This suggests that some information about video dynamics and movement might have been lost during tokenization, and as a result it is beneficial for the latent
219
+
220
+ action model to take in raw videos as input.
221
+
222
+ Table 2: Latent action model input ablation. We see that Genie achieves higher controllability.
223
+
224
+ <table><tr><td></td><td>Dataset</td><td>#Params</td><td>FVD (↓)</td><td>ΔtPSNR(↑)</td></tr><tr><td>Token-input</td><td>Platformers</td><td>2.3B</td><td>38.8</td><td>1.33</td></tr><tr><td>Pixel-input (Genie)</td><td>Platformers</td><td>2.5B</td><td>40.1</td><td>1.91</td></tr><tr><td>Token-input</td><td>Robotics</td><td>1B</td><td>257.8</td><td>1.65</td></tr><tr><td>Pixel-input (Genie)</td><td>Robotics</td><td>1B</td><td>136.4</td><td>2.07</td></tr></table>
225
+
226
+ Tokenizer architecture ablations We compare the performance of three choices of tokenizers, including 1) (spatial-only) ViT, 2) (spatial-temporal) ST-ViViT and 3) (spatial-temporal) C-ViViT (Table 3). For comparison we use a similar number of parameters for all tokenizers, with patch size 10, batch size 128 and sequence length 16. We then train the same dynamics and latent action model on these three different tokenizers, and report their FVD as well as $\Delta_t$ PSNR.
227
+
228
+ Table 3: Tokenizer architecture ablation: Our ST-ViViT architecture results in the best performing tokenizer.
229
+
230
+ <table><tr><td></td><td>#Params</td><td>Memory</td><td>FVD (↓)</td><td>ΔtPSNR(↑)</td></tr><tr><td>ViT</td><td>230M</td><td>0.3GB</td><td>114.5</td><td>1.39</td></tr><tr><td>C-ViViT (Villegas et al., 2023)</td><td>225M</td><td>1.6GB</td><td>272.7</td><td>1.37</td></tr><tr><td>ST-ViViT (ours)</td><td>205M</td><td>0.9GB</td><td>81.4</td><td>1.66</td></tr></table>
231
+
232
+ Our proposed ST-ViViT architecture provides both improved video generation (FVD) and $\Delta_t$ PSNR, for a reasonable trade-off in memory, as compared to C-ViViT and the spatial-only ViT. This demonstrates its ability to generate videos of high fidelity and controllability, respectively. While C-ViViT employs a full space-time attention mechanism, resulting in significantly higher memory consumption compared to the other two architectures at the same parameter count, this does not translate to improved performance. In fact, C-ViViT exhibits a tendency towards overfitting, necessitating strong regularization during training, which might explain its considerably lower performance.
233
+
234
+ # 4. Related Work
235
+
236
+ World models Generative interactive environments can be considered a class of World Models (Ha & Schmidhuber, 2018; Oh et al., 2015), which enable next-frame prediction that is conditioned on action inputs (Bamford & Lucas, 2020; Chiappa et al., 2017; Hafner et al., 2020; 2021; Kim et al., 2020; 2021; Micheli et al., 2023; Nunes et al., 2020; Pan et al., 2022; Robine et al., 2023). Such models can be useful for training agents, as they can be used for learning policies without direct environment experience at agent training time. However, learning the models themselves typically requires action-conditioned data obtained directly from the environment. In contrast, our approach seeks to learn a world model in an unsupervised fashion from videos alone. Recently, there has been renewed emphasis on scaling
237
+
238
+ world models. GAIA-1 (Hu et al., 2023) and UniSim (Yang et al., 2023) learn world models for autonomous driving and robotic manipulation respectively. These approaches require both text and action labels, while we focus on training from video-only data from publicly available Internet videos.
239
+
240
+ Video models Our work is related to video models, which typically condition on initial frames (or text) and predict the remaining frames in a video (Blattmann et al., 2023b; Clark et al., 2019; Finn et al., 2016; Ho et al., 2022a;b; Hoppe et al., 2022; Kalchbrenner et al., 2017; Le Moing et al., 2021; Lotter et al., 2017; Luc et al., 2020; Singer et al., 2023; Walker et al., 2021; Yan et al., 2021). Our approach most resembles recent transformer based models such as Phenaki (Villegas et al., 2023), TECO (Yan et al., 2023) and MaskViT (Gupta et al., 2023), as we use MaskGIT (Chang et al., 2022) and an ST-Transformer (Xu et al., 2020) over tokenized images. While video models are becoming increasingly controllable (e.g. (Huang et al., 2022)), we seek a more agentic goal and explicitly learn a latent action space directly from data, allowing users or agents to "play" the model using latent action-conditioned predictions.
241
+
242
+ Playable Video Generation Genie generalizes beyond Playable Video Generation (PVG) (Menapace et al., 2021), where latent actions are used for controlling world models learnt directly from videos (Menapace et al., 2021; 2022). In contrast to Genie, SVG and related works (Davtyan & Favaro, 2022) consider domain-specific static examples, rather than generating entirely new environments via prompting. Thus, scaling beyond this setting required nontrivial architectural changes, dropping inductive biases in exchange for a general method.
243
+
244
+ Environment generation Our work is also related to Procedural Content Generation (PCG, see for example Risi & Togelius, 2020a;b) where machine learning has proven highly effective for generating game levels (Summerville et al., 2018), recently via language models that directly write game code (Sudhakaran et al., 2023; Todd et al., 2023). Language models themselves can also be considered to be interactive environments (Wong et al., 2023), albeit lacking a visual component. By contrast in our setting the levels can be learnt and generated directly from pixels, which enables us to utilize the diversity of Internet video data. Other related works have aimed to learn game level components from videos, but require domain-specific knowledge and thus could be difficult to scale (Guzdial & Riedl, 2016; Guzdial et al., 2017).
245
+
246
+ Training agents with latent actions Prior works have used latent actions for imitation from observation (Edwards et al., 2019), planning (Rybkin* et al., 2019) and pre-training RL agents (Schmidt & Jiang, 2024; Ye et al., 2022). These approaches have similar objectives to our latent action model, though have not been applied at scale. VPT (Baker et al.,
247
+
248
+ 2022) is a recent approach that uses an inverse dynamics model learnt from human-provided action labeled data, to label Internet-scale videos with actions that can then be used for training a policy. We showed, in contrast, that we can use latent actions learnt from Internet videos to infer policies for arbitrary environments, avoiding the need for ground-truth actions that are costly and may not generalize.
249
+
250
+ # 5. Conclusion and Future Work
251
+
252
+ We proposed Genie, a new form of generative AI that enables anyone, even children, to dream up, create, and step into generated worlds, much as we can with human-designed simulated environments. We demonstrated that Genie can be prompted to generate a diverse set of interactive and controllable environments despite training from video-only data.
253
+
254
+ There are clear improvements that can be made to the model. Genie inherits some of the weaknesses of other autoregressive transformer models, and can hallucinate unrealistic futures. And while we have made progress with spatiotemporal representations, we are still limited to 16 frames of memory which makes it challenging to get consistent environments over long horizons. Finally, Genie currently operates around 1FPS and requires future advances to achieve an efficient frame rate for interaction.
255
+
256
+ Still, we believe Genie opens up vast potential for future research. Given its generality, the model could be trained from an even larger proportion of Internet videos to simulate diverse, realistic, and imagined environments. Furthermore, we only briefly touched upon the capabilities of using Genie for training agents, but given that the lack of rich and diverse environments is one of the key limitations in RL, we could unlock new paths to creating more generally capable agents.
257
+
258
+ # Impact Statement
259
+
260
+ Societal Impact Genie could enable a large number of people to generate their own game-like experiences. This could be positive for those who wish to express their creativity in a new way, for example children who could design and step into their own imagined worlds. We also recognize that with significant advances, it will be critical to explore the possibilities of using this technology to amplify existing human game generation and creativity, and to empower relevant industries to utilize Genie to enable their next generation of playable world development.
261
+
262
+ Training Data and Weights: We have chosen not to release the trained model checkpoints, the model's training dataset, or examples from that data to accompany this paper or the website. We would like to have the opportunity to further engage with the research (and video game) community and to ensure that any future such releases are respectful, safe
263
+
264
+ and responsible.
265
+
266
+ Reproducibility: We understand that it may be challenging for researchers with fewer computational resources to reproduce our main results. In order to mitigate this issue, we describe a smaller scale, fully reproducible example in Appendix F that can run on a single mid-range TPU (or GPU). Given that many design choices translate between the two settings, we believe this will make it possible for the broader community to investigate future architectural improvements as well as additional research directions resulting from our work.
267
+
268
+ # Acknowledgements
269
+
270
+ We thank Mateusz Malinowski, Philip Ball and Louis Kirsch for reviewing a draft of our paper; Cassidy Hardin, David Bridson, Eric Lau, Lars Lowe Sjoesund, Lucas Smaira and Bernardo Avila Pires for help with our Platformers dataset; Ruben Villegas for valuable discussions on our video model training and evaluation; and Adrian Bolton, Rushil Mistry, Hannah Openshaw, Zoubin Ghahramani, Raia Hadsell, Koray Kavukcuoglu, Daan Wierstra, Doina Precup and Ed Hirst for strategic advice and guidance. We make use of the DeepMind Jax ecosystem (Babuschkin et al., 2020) and specifically thank Andy Brock for building the internal framework we used for our model training and Arthur Brussee who provided an initial interface that enabled us to "play" our models. Finally, thank you to Seneca and Caspian Clune for their creative sketches, potentially making them the youngest ever game designers.
271
+
272
+ # References
273
+
274
+ Babuschkin, I., Baumli, K., Bell, A., Bhupatiraju, S., Bruce, J., Buchlovsky, P., Budden, D., Cai, T., Clark, A., Danihelka, I., et al. The DeepMind JAX Ecosystem, 2020. URL http://github.com/deepmind.
275
+ Bain, M., Nagrani, A., Varol, G., and Zisserman, A. Frozen in time: A joint video and image encoder for end-to-end retrieval. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 1708-1718, Los Alamitos, CA, USA, oct 2021. IEEE Computer Society. doi: 10.1109/ICCV48922.2021.00175.
276
+ Baker, B., Akkaya, I., Zhokov, P., Huizinga, J., Tang, J., Ecoffet, A., Houghton, B., Sampedro, R., and Clune, J. Video pretraining (vpt): Learning to act by watching unlabeled online videos. Advances in Neural Information Processing Systems, 35:24639-24654, 2022.
277
+ Bamford, C. and Lucas, S. M. Neural game engine: Accurate learning of generalizable forward models from pixels. In Conference on Games, 2020.
278
+ Bauer, J., Baumli, K., Behbahani, F., Bhoopchand, A., Bradley-Schmieg, N., Chang, M., Clay, N., Collister, A., Dasagi, V., Gonzalez, L., Gregor, K., Hughes, E., Kashem, S., Loks-Thompson, M., Openshaw, H., Parker-Holder, J., Pathak, S., Perez-Nieves, N., Rakicevic, N., Rocktäschel, T., Schroecker, Y., Singh, S., Sygnowski, J., Tuyls, K., York, S., Zacherl, A., and Zhang,
279
+
280
+ L. M. Human-timescale adaptation in an open-ended task space. In Krause, A., Brunskill, E., Cho, K., Engelhardt, B., Sabato, S., and Scarlett, J. (eds.), Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pp. 1887-1935. PMLR, 23-29 Jul 2023.
281
+ Blattmann, A., Dockhorn, T., Kulal, S., Mendelevitch, D., Kilian, M., Lorenz, D., Levi, Y., English, Z., Voleti, V., Letts, A., Jampani, V., and Rombach, R. Stable video diffusion: Scaling latent video diffusion models to large datasets, 2023a.
282
+ Blattmann, A., Rombach, R., Ling, H., Dockhorn, T., Kim, S. W., Fidler, S., and Kreis, K. Align your latents: High-resolution video synthesis with latent diffusion models. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 22563-22575, 2023b.
283
+ Brohan, A., Brown, N., Carbajal, J., Chebotar, Y., Dabis, J., Finn, C., Gopalakrishnan, K., Hausman, K., Herzog, A., Hsu, J., Ibarz, J., Ichter, B., Irpan, A., Jackson, T., Jesmonth, S., Joshi, N. J., Julian, R., Kalashnikov, D., Kuang, Y., Leal, I., Lee, K.-H., Levine, S., Lu, Y., Malla, U., Manjunath, D., Mordatch, I., Nachum, O., Parada, C., Peralta, J., Perez, E., Pertsch, K., Quiambao, J., Rao, K., Ryoo, M., Salazar, G., Sanketi, P., Sayed, K., Singh, J., Sontakke, S., Stone, A., Tan, C., Tran, H., Vanhoucke, V., Vega, S., Vuong, Q., Xia, F., Xiao, T., Xu, P., Xu, S., Yu, T., and Zitkovich, B. Rt-1: Robotics transformer for real-world control at scale. In Robotics: Science and Systems, 2023.
284
+ Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901, 2020.
285
+ Chang, H., Zhang, H., Jiang, L., Liu, C., and Freeman, W. T. Maskgit: Masked generative image transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11315-11325, June 2022.
286
+ Chiappa, S., Racaniere, S., Wierstra, D., and Mohamed, S. Recurrent environment simulators. In International Conference on Learning Representations, 2017.
287
+ Clark, A., Donahue, J., and Simonyan, K. Efficient video generation on complex datasets. CoRR, abs/1907.06571, 2019. URL http://arxiv.org/abs/1907.06571.
288
+ Clune, J. Ai-gas: Ai-generating algorithms, an alternate paradigm for producing general artificial intelligence. arXiv preprint arXiv:1905.10985, 2019.
289
+ Cobbe, K., Hesse, C., Hilton, J., and Schulman, J. Leveraging procedural generation to benchmark reinforcement learning. In Proceedings of the 37th International Conference on Machine Learning, pp. 2048-2056, 2020.
290
+ Davtyan, A. and Favaro, P. Controllable video generation through global and local motion dynamics. In Avidan, S., Brostow, G., Cisse, M., Farinella, G. M., and Hassner, T. (eds.), Computer Vision - ECCV 2022, pp. 68-84, Cham, 2022. Springer Nature Switzerland. ISBN 978-3-031-19790-1.
291
+ Dehghani, M., Djolonga, J., Mustafa, B., Padlewski, P., Heek, J., Gilmer, J., Steiner, A. P., Caron, M., Geirhos, R., Alabdul-mohsin, I., Jenatton, R., Beyer, L., Tschannen, M., Arnab, A.,
292
+
293
+ Wang, X., Riquelme Ruiz, C., Minderer, M., Puigcerver, J., Evci, U., Kumar, M., Steenkiste, S. V., Elsayed, G. F., Mahendran, A., Yu, F., Oliver, A., Huot, F., Bastings, J., Collier, M., Gritsenko, A. A., Birodkar, V., Vasconcelos, C. N., Tay, Y., Mensink, T., Kolesnikov, A., Pavetic, F., Tran, D., Kipf, T., Lucic, M., Zhai, X., Keysers, D., Harmsen, J. J., and Houlsby, N. Scaling vision transformers to 22 billion parameters. In Krause, A., Brunskill, E., Cho, K., Engelhardt, B., Sabato, S., and Scarlett, J. (eds.), Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pp. 7480-7512. PMLR, 23-29 Jul 2023.
294
+ Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum? id=YicbFdNTTy.
295
+ Edwards, A., Sahni, H., Schroecker, Y., and Isbell, C. Imitating latent policies from observation. In International conference on machine learning, pp. 1755-1763. PMLR, 2019.
296
+ Esser, P., Chiu, J., Atighechian, P., Granskog, J., and Germanidis, A. Structure and content-guided video synthesis with diffusion models. In 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 2023.
297
+ Finn, C., Goodfellow, I., and Levine, S. Unsupervised learning for physical interaction through video prediction. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS'16, pp. 64-72, Red Hook, NY, USA, 2016. Curran Associates Inc. ISBN 9781510838819.
298
+ Gupta, A., Tian, S., Zhang, Y., Wu, J., Martin-Martín, R., and Fei-Fei, L. Maskvit: Masked visual pre-training for video prediction. In The Eleventh International Conference on Learning Representations, 2023.
299
+ Guzdial, M. and Riedl, M. Game level generation from gameplay videos. In Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, volume 12, pp. 44-50, 2016.
300
+ Guzdial, M., Li, B., and Riedl, M. O. Game engine learning from video. In *IJCAI*, pp. 3707-3713, 2017.
301
+ Ha, D. and Schmidhuber, J. Recurrent world models facilitate policy evolution. In Proceedings of the 32Nd International Conference on Neural Information Processing Systems, NeurIPS'18, pp. 2455-2467, 2018.
302
+ Hafner, D., Lillicrap, T., Ba, J., and Norouzi, M. Dream to control: Learning behaviors by latent imagination. In International Conference on Learning Representations, 2020.
303
+ Hafner, D., Lillicrap, T. P., Norouzi, M., and Ba, J. Mastering atari with discrete world models. In International Conference on Learning Representations, 2021.
304
+ He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, 2016. doi: 10.1109/CVPR.2016.90.
305
+
306
+ Henry, A., Dachapally, P. R., Pawar, S. S., and Chen, Y. Query-key normalization for transformers. In Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 4246-4253, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.379.
307
+ Ho, J., Chan, W., Saharia, C., Whang, J., Gao, R., Gritsenko, A., Kingma, D. P., Poole, B., Norouzi, M., Fleet, D. J., and Salimans, T. Imagen video: High definition video generation with diffusion models, 2022a.
308
+ Ho, J., Salimans, T., Gritsenko, A., Chan, W., Norouzi, M., and Fleet, D. J. Video diffusion models. In Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., and Oh, A. (eds.), Advances in Neural Information Processing Systems, volume 35, pp. 8633-8646. Curran Associates, Inc., 2022b.
309
+ Hong, W., Ding, M., Zheng, W., Liu, X., and Tang, J. Cogvideo: Large-scale pretraining for text-to-video generation via transformers. arXiv preprint arXiv:2205.15868, 2022.
310
+ Hong, W., Ding, M., Zheng, W., Liu, X., and Tang, J. Cogvideo: Large-scale pretraining for text-to-video generation via transformers. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=rB6TpjAuSRy.
311
+ Höppe, T., Mehrjou, A., Bauer, S., Nielsen, D., and Dittadi, A. Diffusion models for video prediction and infilling. Transactions on Machine Learning Research, 2022. ISSN 2835-8856.
312
+ Hu, A., Russell, L., Yeo, H., Murez, Z., Fedoseev, G., Kendall, A., Shotton, J., and Corrado, G. Gaia-1: A generative world model for autonomous driving, 2023.
313
+ Huang, J., Jin, Y., Yi, K. M., and Sigal, L. Layered controllable video generation. In Computer Vision - ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XVI, pp. 546-564, Berlin, Heidelberg, 2022. Springer-Verlag. ISBN 978-3-031-19786-4.
314
+ Jouppi, N. P., Yoon, D. H., Kurian, G., Li, S., Patil, N., Laudon, J., Young, C., and Patterson, D. A domain-specific supercomputer for training deep neural networks. Communications of the ACM, 63(7):67-78, 2020.
315
+ Kalashnikov, D., Irpan, A., Pastor, P., Ibarz, J., Herzog, A., Jang, E., Quillen, D., Holly, E., Kalakrishnan, M., Vanhoucke, V., et al. Qt-opt: Scalable deep reinforcement learning for vision-based robotic manipulation. arXiv preprint arXiv:1806.10293, 2018.
316
+ Kalchbrenner, N., van den Oord, A., Simonyan, K., Danihelka, I., Vinyals, O., Graves, A., and Kavukcuoglu, K. Video pixel networks. In Precup, D. and Teh, Y. W. (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 1771-1779. PMLR, 06-11 Aug 2017. URL https://proceedings.mlr.press/v70/kalchbrenner17a.html.
317
+ Kapturowski, S., Ostrovski, G., Quan, J., Munos, R., and Dabney, W. Recurrent experience replay in distributed reinforcement learning. In International conference on learning representations, 2018.
318
+ Kim, S. W., Zhou, Y., Philion, J., Torralba, A., and Fidler, S. Learning to simulate dynamic environments with GameGAN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
319
+
320
+ Kim, S. W., Philion, J., Torralba, A., and Fidler, S. Drivegan: Towards a controllable high-quality neural simulation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5820-5829, June 2021.
321
+ Le Moing, G., Ponce, J., and Schmid, C. Ccvs: Context-aware controllable video synthesis. In Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P., and Vaughan, J. W. (eds.), Advances in Neural Information Processing Systems, volume 34, pp. 14042-14055. Curran Associates, Inc., 2021.
322
+ Lotter, W., Kreiman, G., and Cox, D. Deep predictive coding networks for video prediction and unsupervised learning. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=Blewdt9xe.
323
+ Luc, P., Clark, A., Dieleman, S., de Las Casas, D., Doron, Y., Cassirer, A., and Simonyan, K. Transformation-based adversarial video prediction on large-scale data. CoRR, abs/2003.04035, 2020.
324
+ Menapace, W., Lathuilière, S., Tulyakov, S., Siarohin, A., and Ricci, E. Playable video generation. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, pp. 10061-10070. Computer Vision Foundation / IEEE, 2021.
325
+ Menapace, W., Lathuilière, S., Siarohin, A., Theobalt, C., Tulyakov, S., Golyanik, V., and Ricci, E. Playable environments: Video manipulation in space and time. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022.
326
+ Micheli, V., Alonso, E., and Fleuret, F. Transformers are sample-efficient world models. In The Eleventh International Conference on Learning Representations, 2023.
327
+ Nunes, M. S., Dehban, A., Moreno, P., and Santos-Victor, J. Action-conditioned benchmarking of robotic video prediction models: a comparative study. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pp. 8316-8322, 2020. doi: 10.1109/ICRA40945.2020.9196839.
328
+ Oh, J., Guo, X., Lee, H., Lewis, R., and Singh, S. Action-conditional video prediction using deep networks in atari games. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2, NIPS'15, pp. 2863-2871, Cambridge, MA, USA, 2015. MIT Press.
329
+ Open Ended Learning Team, Stooke, A., Mahajan, A., Barros, C., Deck, C., Bauer, J., Sygnowski, J., Trebacz, M., Jaderberg, M., Mathieu, M., McAleese, N., Bradley-Schmieg, N., Wong, N., Porcel, N., Raileanu, R., Hughes-Fitt, S., Dalibard, V., and Czarnecki, W. M. Open-ended learning leads to generally capable agents. CoRR, abs/2107.12808, 2021.
330
+ Oquab, M., Darcet, T., Moutakanni, T., Vo, H., Szafraniec, M., Khalidov, V., Fernandez, P., Haziza, D., Massa, F., El-Nouby, A., et al. Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193, 2023.
331
+ Pan, M., Zhu, X., Wang, Y., and Yang, X. Iso-dream: Isolating and leveraging noncontrollable visual dynamics in world models. In Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., and Oh, A. (eds.), Advances in Neural Information Processing Systems, volume 35, pp. 23178-23191. Curran Associates, Inc., 2022.
332
+
333
+ Radford, A., Narasimhan, K., Salimans, T., and Sutskever, I. Improving language understanding by generative pre-training. 2018.
334
+ Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
335
+ Rajbhandari, S., Rasley, J., Ruwase, O., and He, Y. Zero: Memory optimizations toward training trillion parameter models. In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1-16. IEEE, 2020.
336
+ Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., Chen, M., and Sutskever, I. Zero-shot text-to-image generation. In Meila, M. and Zhang, T. (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 8821-8831. PMLR, 18-24 Jul 2021.
337
+ Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., and Chen, M. Hierarchical text-conditional image generation with clip latents, 2022.
338
+ Reed, S., Zolna, K., Parisotto, E., Colmenarejo, S. G., Novikov, A., Barth-maron, G., Giménez, M., Sulsky, Y., Kay, J., Springenberg, J. T., Eccles, T., Bruce, J., Razavi, A., Edwards, A., Heess, N., Chen, Y., Hadsell, R., Vinyals, O., Bordbar, M., and de Freitas, N. A generalist agent. Transactions on Machine Learning Research, 2022. ISSN 2835-8856. Featured Certification, Outstanding Certification.
339
+ Risi, S. and Togelius, J. Increasing generality in machine learning through procedural content generation. Nature Machine Intelligence, 2, 08 2020a. doi: 10.1038/s42256-020-0208-z.
340
+ Risi, S. and Togelius, J. Procedural content generation: From automatically generating game levels to increasing generality in machine learning. Nature, 2020b.
341
+ Robine, J., Höftmann, M., Uelwer, T., and Harmeling, S. Transformer-based world models are happy with 100k interactions. In The Eleventh International Conference on Learning Representations, 2023.
342
+ Rombach, R., Blattmann, A., Lorenz, D., Esser, P., and Ommer, B. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10684-10695, June 2022.
343
+ Rybkin*, O., Pertsch*, K., Derpanis, K. G., Daniilidis, K., and Jaegle, A. Learning what you can do before doing anything. In International Conference on Learning Representations, 2019.
344
+ Saharia, C., Chan, W., Saxena, S., Li, L., Whang, J., Denton, E., Ghasemipour, S. K. S., Gontijo-Lopes, R., Ayan, B. K., Salimans, T., Ho, J., Fleet, D. J., and Norouzi, M. Photorealistic text-to-image diffusion models with deep language understanding. In Oh, A. H., Agarwal, A., Belgrave, D., and Cho, K. (eds.), Advances in Neural Information Processing Systems, 2022.
345
+ Schmidt, D. and Jiang, M. Learning to act without actions. In The Twelfth International Conference on Learning Representations, 2024.
346
+
347
+ Shoeybi, M., Patwary, M., Puri, R., LeGresley, P., Casper, J., and Catanzaro, B. Megatron-lm: Training multi-billion parameter language models using model parallelism. CoRR, abs/1909.08053, 2019. URL http://arxiv.org/abs/1909.08053.
348
+ Singer, U., Polyak, A., Hayes, T., Yin, X., An, J., Zhang, S., Hu, Q., Yang, H., Ashual, O., Gafni, O., Parikh, D., Gupta, S., and Taigman, Y. Make-a-video: Text-to-video generation without text-video data. In The Eleventh International Conference on Learning Representations, 2023.
349
+ Sudhakaran, S., González-Duque, M., Glanois, C., Freiberger, M., Najarro, E., and Risi, S. Prompt-guided level generation. In Proceedings of the Companion Conference on Genetic and Evolutionary Computation, pp. 179-182, 2023.
350
+ Summerville, A., Snodgrass, S., Guzdial, M., Holmgård, C., Hoover, A. K., Isaksen, A., Nealen, A., and Togelius, J. Procedural content generation via machine learning (PCGML). IEEE Trans. Games, 10(3):257-270, 2018.
351
+ Todd, G., Earle, S., Nasir, M. U., Green, M. C., and Togelius, J. Level generation through large language models. In Proceedings of the 18th International Conference on the Foundations of Digital Games, pp. 1-8, 2023.
352
+ Torabi, F., Warnell, G., and Stone, P. Behavioral cloning from observation. arXiv preprint arXiv:1805.01954, 2018.
353
+ Unterthiner, T., van Steenkiste, S., Kurach, K., Marinier, R., Michalski, M., and Gelly, S. FVD: A new metric for video generation, 2019.
354
+ van den Oord, A., Razavi, A., Uria, B., Ünlü, C., Nash, C., Wolff, C., Durkan, C., Ding, D., Górnny, D., Gladchenko, E., Riedel, F., Qi, H., Kelly, J., Bauer, J., Donahue, J., Zhang, J., Malinowski, M., Binkowski, M., Luc, P., Riachi, R., Strudel, R., Sander Dieleman, T. P. I., Ganin, Y., and Eaton-Rosen, Z. Imagen 2. URL https://deepmind.google/technologies/imagen-2/.
355
+ van den Oord, A., Vinyals, O., and Kavukcuoglu, K. Neural discrete representation learning. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, pp. 6309-6318, Red Hook, NY, USA, 2017. Curran Associates Inc. ISBN 9781510860964.
356
+ Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998-6008, 2017.
357
+ Villegas, R., Babaeizadeh, M., Kindermans, P.-J., Moraldo, H., Zhang, H., Saffar, M. T., Castro, S., Kunze, J., and Erhan, D. Phenaki: Variable length video generation from open domain textual descriptions. In International Conference on Learning Representations, 2023.
358
+ Walker, J. C., Razavi, A., and van den Oord, A. Predicting video with VQVAE, 2021.
359
+ Wang, Y., He, Y., Li, Y., Li, K., Yu, J., Ma, X., Chen, X., Wang, Y., Luo, P., Liu, Z., Wang, Y., Wang, L., and Qiao, Y. Internvid: A large-scale video-text dataset for multimodal understanding and generation, 2023.
360
+
361
+ Wong, L., Grand, G., Lew, A. K., Goodman, N. D., Mansinghka, V. K., Andreas, J., and Tenenbaum, J. B. From word models to world models: Translating from natural language to the probabilistic language of thought, 2023.
362
+ Wu, C., Liang, J., Ji, L., Yang, F., Fang, Y., Jiang, D., and Duan, N. Nüwa: Visual synthesis pre-training for neural visual world creation. In European conference on computer vision, pp. 720-736. Springer, 2022.
363
+ Xu, M., Dai, W., Liu, C., Gao, X., Lin, W., Qi, G.-J., and Xiong, H. Spatial-temporal transformer networks for traffic flow forecasting. arXiv preprint arXiv:2001.02908, 2020.
364
+ Yan, W., Zhang, Y., Abbeel, P., and Srinivas, A. Videogpt: Video generation using vq-vae and transformers, 2021.
365
+ Yan, W., Hafner, D., James, S., and Abbeel, P. Temporally consistent transformers for video generation. In Krause, A., Brunskill, E., Cho, K., Engelhardt, B., Sabato, S., and Scarlett, J. (eds.), Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pp. 39062-39098. PMLR, 23-29 Jul 2023.
366
+ Yang, M., Du, Y., Ghasemipour, K., Tompson, J., Schuurmans, D., and Abbeel, P. Learning interactive real-world simulators. arXiv preprint arXiv:2310.06114, 2023.
367
+ Ye, W., Zhang, Y., Abbeel, P., and Gao, Y. Become a proficient player with limited data through watching pure videos. In The Eleventh International Conference on Learning Representations, 2022.
368
+
369
+ # Author Contributions
370
+
371
+ We list authors alphabetically by last name. Please direct all correspondence to the project leads, Ashley Edwards (edwardsashley@google.com) and Jack Parker-Holder (jparkerholder@google.com), and the tech lead, Jake Bruce (jacobbruce@google.com).
372
+
373
+ # Core Contributors
374
+
375
+ - Jake Bruce: project leadership, video tokenizer research, action model research, dynamics model research, scaling, model demo, infrastructure
376
+ - Michael Dennis: dynamics model research, scaling, metrics, model demo, infrastructure
377
+ - Ashley Edwards: genie concept, project leadership, action model research, agent training, model demo
378
+ - Edward Hughes: dynamics model research, infrastructure
379
+ - Matthew Lai: dataset curation, infrastructure
380
+ - Aditi Mavalankar: action model research, metrics, agent training
381
+ - Jack Parker-Holder: genie concept, project leadership, dynamics model research, scaling, dataset curation
382
+ - Yuge (Jimmy) Shi: video tokenizer research, dynamics model research, dataset curation, metrics
383
+ - Richie Steigerwald: dataset curation, metrics
384
+
385
+ # Partial Contributors and Advisors
386
+
387
+ - Chris Apps: project management
388
+ - Yusuf Aytar: technical advice
389
+ - Sarah Bechtle: technical advice
390
+ - Feryal Behbahani: strategic advice
391
+ - Stephanie Chan: technical advice
392
+ - Jeff Clune: technical advice, strategic advice
393
+ - Lucy Gonzalez: project management
394
+ - Nicolas Heess: strategic advice
395
+ - Simon Osindero: technical advice
396
+ - Sherjil Ozair: technical advice
397
+ - Scott Reed: technical advice
398
+ - Jingwei Zhang: technical advice
399
+ - Konrad Zolna: scaling, technical advice
400
+
401
+ # Sponsors
402
+
403
+ - Nando de Freitas: strategic advice
404
+ - Tim Rocktäschel: genie concept, project leadership
405
+ - Satinder Singh: strategic advice
406
+
407
+ # A. More Example Trajectories
408
+
409
+ ![](images/eeec716652025967ded05b154a4b7fb8cf2d97352a8566c28e283f47f4bf6fa0.jpg)
410
+ Prompt
411
+ Generated
412
+ Figure 16: More example trajectories: the model is prompted with either hand-drawn sketches, images generated from text-to-image generative models or realistic photos. Actions that drive the dynamics of the trajectory are provided by human input.
413
+
414
+ Column labels for Figure 17: Prompt, followed by the four latent actions: left, right, jump, no-op.
424
+ ![](images/c0f3458050e25070913f4decbeb9787df55252065b78fd98460d780b743bb1e6.jpg)
425
+
426
+ ![](images/b112a6fcb48a113fe4da18240f82b84bb636c8fbd65d904229dbb41181b15c11.jpg)
427
+
428
+ ![](images/3545cce195212077c57402ff9d564951116f7a56d88b8f17348facb9427a3592.jpg)
429
+
430
+ ![](images/48ef9e1ff2429e3f09a5edcd4584f61ee88718b945d549dab47f687994b84198.jpg)
431
+
432
+ ![](images/5595ded151ed414b0824620a7233bae1688bb40748787446697756e8329112a7.jpg)
433
+
434
+ ![](images/79bee118180d26e2e3d43d72327aa60ef8e87f322148084348622d282876aeb7.jpg)
435
+
436
+ ![](images/00f38691f5f01dc43b5448d719ec35b317d5ea8fc28b3a5918c21db9ab5d2cbc.jpg)
437
+
438
+ ![](images/4d8efed594367c8a63c32b5b296efedf19d09f66993d0a238fcb697eee944efc.jpg)
439
+
440
+ ![](images/99c90b6c9989a0893b52e42e367a84a0c6e85288a058b48268e7b3cd21bcf20a.jpg)
441
+
442
+ ![](images/b38a12f762f3816cbc16c57da9d3b9a0afab9530071401d8bd8fd36074339806.jpg)
443
+
444
+ ![](images/61c082f1a3562a425084c2971073904c2fd1e6b52a5c2f575a650a322bdc8c06.jpg)
445
+
446
+ ![](images/bd622f4acf66db4b9661204389abd3516254ed61607afd49123480ba1695a2f1.jpg)
447
+
448
+ ![](images/36e5d6cd464c538250ae65cc9c0cdacac33768602811fd6d5b89236589065e76.jpg)
449
+
450
+ ![](images/2c0085e0b6667bf9d1185fc7a2e72bc7c60177ad7d431fae841159ded2874c13.jpg)
451
+
452
+ ![](images/a287f1511a0ada5f338adb804cb0c824ba14ae0a262f499a80a58fad0e4c9c04.jpg)
453
+
454
+ ![](images/278b801af7ef4410327c62e31bd1669299518bbf8b38b09033c5e0ecf05a2197.jpg)
455
+ Figure 17: Controllable, consistent latent actions in Platformers: trajectories beginning from four different starting frames from our Platformers dataset. Each column shows the resulting frame from taking the same latent action five times. Despite training without action labels, not only are the same actions consistent across varied prompt frames, but they also have semantic meaning: left, right, jump, and no-op.
456
+
457
+ ![](images/7a81d4f9c85af6290009d0c562ba0423c17732ba1ee456e0983e5528e7324fd1.jpg)
458
+
459
+ ![](images/ab9020e63f9dbc2d810db4dcb5b87681a4b8377e3030a8e05128463450c510d1.jpg)
460
+
461
+ ![](images/fd635e850add442d8685d5ff5068e18ac304b5ab61cf318be45b2f55c82976f6.jpg)
462
+
463
+ ![](images/b4f9499b971fb5d94e4a2f95c32264076e6f3930296406fe5616cff782f613d0.jpg)
464
+
465
+ # B. Dataset
466
+
467
+ # B.1. Platformers Dataset
468
+
469
+ Initial Dataset We generated a dataset by filtering publicly available Internet videos, using the following criteria:
470
+
471
+ - The title contains keywords relating to 2D platformer games.
472
+ - The title or description must contain an action word, such as "speedrun" or "playthrough".
473
+ - The title must not contain negating words such as "movie" or "unboxing".
474
+
475
+ We then split each video into 16s clips at 10 FPS, which corresponds to 160 frames per clip. Our resulting dataset contains 55M clips, totalling around 244k hours. When selecting keywords, we manually spot-checked the results to verify that they typically produced 2D platformer gameplay videos that were not outnumbered by other kinds of videos sharing similar keywords.
476
+
477
+ Filter Pipeline We noticed that many of the videos in the dataset were of poor quality, which impacted our model performance. We propose a scalable approach to systematically filter the data, using a learned classifier as in Baker et al. (2022). First, we define high quality videos as those that display clear gameplay and do not contain distractor elements such as menu screens or streamer faces. We then filter the data as follows:
478
+
479
+ 1. Our team hand labelled 10k videos, with roughly ten hours of total human effort. The labels ranged from 5 (best) to 1 (worst) quality.
480
+
481
+ 2. We trained an 11M-parameter ResNet18 (He et al., 2016) as a binary classifier, discarding all entries rated 2-4 and treating 5 as good and 1 as bad.
482
+ 3. We then applied a decision rule based on the classifier's prediction and confidence to determine whether to keep each video (a sketch of such a rule is shown below).
483
+
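The decision rule itself is not spelled out in detail; the following is a minimal sketch of a confidence-based keep/drop rule, assuming a binary quality classifier that outputs the probability of a clip being good. The thresholds and function names are illustrative, not taken from the paper.

```python
import numpy as np

def keep_clip(p_good: float, keep_thresh: float = 0.9, drop_thresh: float = 0.1) -> bool:
    """Confidence-based filtering rule (illustrative thresholds).

    p_good is the classifier's probability that a clip shows clean 2D
    platformer gameplay. Keep clips the classifier is confident are good,
    drop clips it is confident are bad, and conservatively drop the rest.
    """
    if p_good >= keep_thresh:
        return True
    if p_good <= drop_thresh:
        return False
    return False  # uncertain clips are dropped in this sketch

# Example: filtering a small batch of clips by classifier score.
scores = np.array([0.97, 0.42, 0.05, 0.93])
kept_indices = [i for i, p in enumerate(scores) if keep_clip(float(p))]
print(kept_indices)  # -> [0, 3]
```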
484
+ Consistent with findings in prior work (Baker et al., 2022; Oquab et al., 2023), data quality outweighs data quantity: even though the curated dataset is only just over $10\%$ the size of the original dataset, the model trained on the curated dataset achieves a better FVD (see Table 4). Our final dataset contains 6.8M videos for a total of over 30k hours.
485
+
486
+ Table 4: Effect of dataset curation.
487
+
488
+ <table><tr><td></td><td>#Params</td><td>FVD (↓)</td></tr><tr><td>Original dataset (55M videos)</td><td>580M</td><td>61.4</td></tr><tr><td>Curated dataset (6.8M videos)</td><td>580M</td><td>54.8</td></tr></table>
489
+
490
+ # C. Training details
491
+
492
+ # C.1. Latent Action Model Training
493
+
494
+ We found that increasing the number of codes (i.e., the number of latent actions) was beneficial, but it came at the cost of reduced playability for both human and AI agents.
495
+
496
+ Table 5: Platformers action model hyperparameters
497
+
498
+ <table><tr><td>Component</td><td>Parameter</td><td>Value</td></tr><tr><td rowspan="3">Encoder</td><td>num_layers</td><td>20</td></tr><tr><td>d_model</td><td>1024</td></tr><tr><td>num_heads</td><td>16</td></tr><tr><td rowspan="3">Decoder</td><td>num_layers</td><td>20</td></tr><tr><td>d_model</td><td>1024</td></tr><tr><td>num_heads</td><td>16</td></tr><tr><td rowspan="3">Codebook</td><td>num_codes</td><td>8</td></tr><tr><td>patch_size</td><td>16</td></tr><tr><td>latent_dim</td><td>32</td></tr></table>
499
+
500
+ Note that the model inputs are normalized between 0 and 1, and the final outputs of the decoder are passed through a sigmoid.
501
+
502
+ # C.2. Video Tokenizer Training
503
+
504
+ Here we describe our video tokenizer training. We found it more effective to scale the decoder than the encoder, and observed only a marginal gain from increasing the batch size (see Table 6).
505
+
506
+ Table 6: Tokenizer batch size scaling hyperparameters.
507
+
508
+ <table><tr><td>batch_size</td><td>training hardware</td><td>FLOPs</td><td>PSNR</td></tr><tr><td>64</td><td>64 TPUv2</td><td>4.22 × 10<sup>20</sup></td><td>35.7</td></tr><tr><td>384</td><td>64 TPUv3</td><td>2.57 × 10<sup>21</sup></td><td>36.5</td></tr></table>
509
+
510
+ Table 7: Platformers video tokenizer hyperparameters.
511
+
512
+ <table><tr><td>Component</td><td>Parameter</td><td>Value</td></tr><tr><td rowspan="4">Encoder</td><td>num_layers</td><td>12</td></tr><tr><td>d_model</td><td>512</td></tr><tr><td>num_heads</td><td>8</td></tr><tr><td>k/q_size</td><td>64</td></tr><tr><td rowspan="4">Decoder</td><td>num_layers</td><td>20</td></tr><tr><td>d_model</td><td>1024</td></tr><tr><td>num_heads</td><td>16</td></tr><tr><td>k/q_size</td><td>64</td></tr><tr><td rowspan="3">Codebook</td><td>num_codes</td><td>1024</td></tr><tr><td>patch_size</td><td>4</td></tr><tr><td>latent_dim</td><td>32</td></tr></table>
513
+
514
+ We train our video tokenizer for 300k steps using the AdamW optimizer with a cosine learning rate decay, using the hyperparameters in Table 8.
515
+
516
+ Table 8: Video tokenizer optimizer hyperparameters
517
+
518
+ <table><tr><td>Parameter</td><td>Value</td></tr><tr><td>max_lr</td><td>3e-4</td></tr><tr><td>min_lr</td><td>3e-4</td></tr><tr><td>β1</td><td>0.9</td></tr><tr><td>β2</td><td>0.9</td></tr><tr><td>weight_decay</td><td>1e-4</td></tr><tr><td>warmup_steps</td><td>10k</td></tr></table>
519
+
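As a concrete illustration only, the sketch below builds an AdamW optimizer with a linear warmup followed by cosine decay in PyTorch, using the Table 8 values as defaults. The exact schedule shape and framework used for Genie are not specified here, so treat the warmup-then-cosine form as an assumption.

```python
import math
import torch

def make_optimizer(params, max_lr=3e-4, min_lr=3e-4, betas=(0.9, 0.9),
                   weight_decay=1e-4, warmup_steps=10_000, total_steps=300_000):
    """AdamW with linear warmup to max_lr, then cosine decay towards min_lr."""
    opt = torch.optim.AdamW(params, lr=max_lr, betas=betas, weight_decay=weight_decay)

    def lr_scale(step):
        if step < warmup_steps:
            return step / max(1, warmup_steps)
        progress = min(1.0, (step - warmup_steps) / max(1, total_steps - warmup_steps))
        cosine = 0.5 * (1.0 + math.cos(math.pi * progress))
        return (min_lr + (max_lr - min_lr) * cosine) / max_lr  # LambdaLR multiplies the base lr

    sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_scale)
    return opt, sched

# Toy usage:
model = torch.nn.Linear(32, 32)
optimizer, scheduler = make_optimizer(model.parameters())
```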
520
+ # C.3. Dynamics Model Training
521
+
522
+ Table 9: Dynamics model optimizer hyperparameters
523
+
524
+ <table><tr><td>Parameter</td><td>Value</td></tr><tr><td>max_lr</td><td>3e-5</td></tr><tr><td>min_lr</td><td>3e-6</td></tr><tr><td>β1</td><td>0.9</td></tr><tr><td>β2</td><td>0.9</td></tr><tr><td>weight_decay</td><td>1e-4</td></tr><tr><td>warmup_steps</td><td>5k</td></tr></table>
525
+
526
+ # D. Scaling Experiments Details
527
+
528
+ In this section we provide more details on the architecture as well as compute budget for the scaling experiments.
529
+
530
+ Scaling model size For all models we use a batch size of 256. We train all models for 200k steps, thus using a total of 750B training tokens for each run. All runs use batch parallelism and stage-3 ZeRO sharding (Rajbhandari et al., 2020), while our larger models also use tensor parallelism (Shoeybi et al., 2019). For this experiment we use TPUv2 and TPUv3 (Jouppi et al., 2020). See Table 10 for more details.
531
+
532
+ Table 10: Model size scaling architectures and compute usage. All models were trained for 200k steps with a batch size of 256, equating to 750B tokens.
533
+
534
+ <table><tr><td>Parameters</td><td>num_layers</td><td>num_heads</td><td>d_model</td><td>k/q size</td><td>training hardware</td><td>training time</td><td>FLOPs</td></tr><tr><td>41M</td><td>18</td><td>8</td><td>512</td><td>64</td><td>64 TPUv2</td><td>3 days</td><td>2.05 × 10<sup>20</sup></td></tr><tr><td>96M</td><td>16</td><td>16</td><td>768</td><td>64</td><td>64 TPUv2</td><td>6 days</td><td>3.58 × 10<sup>20</sup></td></tr><tr><td>192M</td><td>20</td><td>18</td><td>1024</td><td>64</td><td>64 TPUv2</td><td>9 days</td><td>6.4 × 10<sup>20</sup></td></tr><tr><td>404M</td><td>21</td><td>12</td><td>1536</td><td>128</td><td>64 TPUv2</td><td>18 days</td><td>1.2 × 10<sup>21</sup></td></tr><tr><td>811M</td><td>20</td><td>20</td><td>2048</td><td>128</td><td>128 TPUv3</td><td>7 days</td><td>2.2 × 10<sup>21</sup></td></tr><tr><td>1.6B</td><td>28</td><td>22</td><td>2560</td><td>128</td><td>128 TPUv3</td><td>12 days</td><td>4.04 × 10<sup>21</sup></td></tr><tr><td>2.7B</td><td>36</td><td>22</td><td>3072</td><td>128</td><td>256 TPUv3</td><td>16 days</td><td>6.91 × 10<sup>21</sup></td></tr></table>
535
+
536
+ Scaling batch size All models use the same 2.3B-parameter architecture, shown in Table 11, and train for 200k steps. The three runs differ only in batch size and the hardware used: the batch size 128, 256 and 448 models train on 64 TPUv3, 128 TPUv3 and 64 TPUv5p chips respectively.
537
+
538
+ Table 11: Batch size scaling hyperparameters. All models use the following architecture for 200k steps, differing only in batch size.
539
+
540
+ <table><tr><td>Parameters</td><td>num_layers</td><td>num_heads</td><td>d_model</td><td>k/q size</td></tr><tr><td>2.3B</td><td>34</td><td>20</td><td>2560</td><td>128</td></tr></table>
541
+
542
+ Genie Model The parameter count, model architecture, and compute usage of the dynamics model for the final Genie model are listed in Table 12. We train a 10.1B-parameter dynamics model with a batch size of 512 for a total of 125k steps, using 256 TPUv5 chips.
543
+
544
+ Table 12: Genie dynamics model hyperparameters.
545
+
546
+ <table><tr><td>Parameters</td><td>num_layers</td><td>num_heads</td><td>d_model</td><td>k/q size</td><td>FLOPs</td></tr><tr><td>10.1B</td><td>48</td><td>36</td><td>5120</td><td>128</td><td>6.6 × 10<sup>22</sup></td></tr></table>
547
+
548
+ # E. Behavioral Cloning Details
549
+
550
+ In this section we provide more details about our behavioral cloning experiments. We train within the Procgen CoinRun environment (Cobbe et al., 2020) and evaluate on a held-out test set. We assume we have a dataset of expert sequences in this environment from an agent trained with R2D2 (Kapturowski et al., 2018). We then train an agent to imitate from this data. Notably, the oracle agent has access to the corresponding ground-truth expert actions. We now discuss how we can instead utilize a pre-trained LAM to infer the actions taken.
551
+
552
+ # E.1. Genie LAM
553
+
554
+ In order to train an agent to imitate from unseen videos, we can use a frozen LAM from a Genie model trained on Internet videos. Given an expert sequence $\langle x_{t},x_{t + 1}\rangle$ we extract the corresponding latent action label $a_{t}\gets LAM(x_{t},x_{t + 1})$ . We then train a policy $\pi (a_t|x_t)$ to predict the likelihood of the expert taking latent action $a_{t}$ given observation $x_{t}$ . Note that this procedure is similar to prior works that learn from videos (Baker et al., 2022; Torabi et al., 2018). However, these approaches use ground-truth actions for labeling videos whereas we utilize latent actions learnt completely offline.
555
+
556
+ During inference, we must map latent actions emitted by the policy to real actions. To do this, we utilize a small set of action-labeled expert sequences. Given an expert sequence $\langle x_{t},u_{t},x_{t + 1}\rangle$ (we write $u_{t}$ for ground-truth actions to avoid confusion with predicted latent actions), we use the LAM to obtain a latent action $a_{t}$ and fill a dictionary $D$ mapping each latent action to a list of corresponding real actions. In summary, given an observation $x_{t}$ from the environment, we obtain a latent action as $a_{t}\sim \pi (a_t|x_{t})$ , and then take a corresponding real action as $u_{t}\sim D[a_{t}]$ . A minimal sketch of this procedure is given below.
557
+
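The sketch below illustrates the labeling and mapping procedure. The `lam` and `policy` callables are stand-ins for the frozen Genie LAM and the BC policy described above; their names and signatures are assumptions for illustration, not interfaces from the paper.

```python
import numpy as np
from collections import defaultdict

def label_video(frames, lam):
    """Label consecutive frame pairs of an action-free video with latent actions."""
    return [lam(frames[t], frames[t + 1]) for t in range(len(frames) - 1)]

def build_latent_to_real(labeled_sequences, lam):
    """Map each latent action to the real actions it co-occurs with,
    using a small set of (frame_t, real_action, frame_t1) expert tuples."""
    mapping = defaultdict(list)
    for x_t, u_t, x_t1 in labeled_sequences:
        a_t = lam(x_t, x_t1)
        mapping[a_t].append(u_t)
    return mapping

def act(frame, policy, mapping, rng=np.random.default_rng()):
    """Sample a latent action from the policy, then a real action from D[a_t].

    Assumes every latent action appears at least once in the labeled set.
    """
    logits = policy(frame)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    a_t = rng.choice(len(probs), p=probs)
    return rng.choice(mapping[a_t])
```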
558
+ Note that other works have used data extracted from the agent's policy to obtain a mapping from latent to real actions (Edwards et al., 2019; Ye et al., 2022), but we found using expert data enabled us to better evaluate the quality of the learnt policy. As shown in the main text, the agent was capable of adapting with as few as 200 expert labels.
559
+
560
+ # E.2. Architecture
561
+
562
+ We train a transformer as the policy for both the oracle and latent BC agents. We utilize our proposed ST-ViViT architecture for encoding the frames $\boldsymbol{x}_{1:t} = (x_1, \dots, x_t)$ . All previous actions are one-hot encoded and then combined with the corresponding frame encoding as an additive embedding. We use a sequence length of 4 during both training and inference and a batch size of 16.
563
+
564
+ Both the oracle and Genie LAM policies are trained with a cross-entropy loss whose targets are real and latent actions, respectively. During inference, we obtain the final prediction by sampling from the predicted logits. Note that we found the oracle agent performed better when we randomly sampled actions $10\%$ of the time.
565
+
566
+ Table 13: BC model optimizer hyperparameters
567
+
568
+ <table><tr><td>Parameter</td><td>Value</td></tr><tr><td>max_lr</td><td>3e-5</td></tr><tr><td>min_lr</td><td>3e-6</td></tr><tr><td>β1</td><td>0.9</td></tr><tr><td>β2</td><td>0.96</td></tr><tr><td>weight_decay</td><td>1e-4</td></tr><tr><td>warmup_steps</td><td>5k</td></tr></table>
569
+
570
+ Table 14: BC policy hyperparameters
571
+
572
+ <table><tr><td>Component</td><td>Parameter</td><td>Value</td></tr><tr><td rowspan="3">Encoder</td><td>num_layers</td><td>12</td></tr><tr><td>d_model</td><td>512</td></tr><tr><td>patch_size</td><td>4</td></tr><tr><td>Policy</td><td>linear_layer</td><td>512</td></tr></table>
573
+
574
+ # F. Reproducible Case Study
575
+
576
+ In this section we describe a self-contained, fully reproducible case study that can be trained with a single mid-range TPU/GPU in under a week.
577
+
578
+ # F.1. Data Collection
579
+
580
+ First we need to collect the data to train our model. We use the CoinRun environment from the Procgen benchmark (Cobbe et al., 2020), since it has thousands of visually diverse levels with fairly simple platformer-like dynamics. Using the "hard" mode, we collect data with a random policy and no action repeats. We sample level seeds between zero and 10,000 and collect 1,000 timesteps for each level, for a total of 10M transitions (a sketch is shown below).
581
+
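A minimal sketch of this data collection, assuming the open-source `procgen` package and its Gym interface (`procgen:procgen-coinrun-v0`); the exact collection code used here is not provided, so details such as the environment wrapper are assumptions.

```python
import gym  # requires the `procgen` package, which registers the env id below
import numpy as np

def collect_level(seed: int, num_steps: int = 1000):
    """Roll out a uniform random policy on one CoinRun level and return its frames."""
    env = gym.make(
        "procgen:procgen-coinrun-v0",
        start_level=seed,
        num_levels=1,                 # stay on a single level per rollout
        distribution_mode="hard",
    )
    obs = env.reset()
    frames = [obs]
    for _ in range(num_steps):
        action = env.action_space.sample()   # random policy, no action repeats
        obs, _, done, _ = env.step(action)   # old-style Gym step API
        if done:
            obs = env.reset()
        frames.append(obs)
    env.close()
    return np.stack(frames)

# 10,000 levels x 1,000 steps = 10M transitions in total.
dataset = [collect_level(seed) for seed in range(2)]  # small demo; use range(10_000) for the full set
```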
582
+ # F.2. Video Tokenizer Training
583
+
584
+ Our video tokenizer for CoinRun follows the same setup as described in Section 2.1, trained with the optimizer configuration of Section C.2. The primary difference in this example is that we use smaller models (see Table 15) and a batch size of 48 sequences of length 16, for a total of 768 images per batch. This is sufficient to fit on a single TPU with 16GB of memory. Training for 300k steps takes roughly three days on a single such TPU.
585
+
586
+ Table 15: CoinRun video tokenizer hyperparameters
587
+
588
+ <table><tr><td>Component</td><td>Parameter</td><td>Value</td></tr><tr><td rowspan="3">Encoder</td><td>num_layers</td><td>8</td></tr><tr><td>d_model</td><td>512</td></tr><tr><td>num_heads</td><td>8</td></tr><tr><td rowspan="3">Decoder</td><td>num_layers</td><td>8</td></tr><tr><td>d_model</td><td>512</td></tr><tr><td>num_heads</td><td>8</td></tr><tr><td rowspan="3">Codebook</td><td>num_codes</td><td>1024</td></tr><tr><td>patch_size</td><td>4</td></tr><tr><td>latent_dim</td><td>32</td></tr></table>
589
+
590
+ # F.3. Dynamics + Latent Action Model Training
591
+
592
+ Once we have trained the video tokenizer, we can jointly train the latent action and dynamics models. Once again we seek to fit training within 16GB of memory, so we use a batch size of 36 sequences of 16 frames each, for a total of 576 images. We train the latent action model and the dynamics model in parallel, using the setup described above (see Section C.1 for the latent action model and Section C.3 for the dynamics model).
593
+
594
+ Both models are trained for 200k steps, using the optimizer hyperparameters in Table 9.
595
+
596
+ Table 16: CoinRun action model hyperparameters
597
+
598
+ <table><tr><td>Component</td><td>Parameter</td><td>Value</td></tr><tr><td rowspan="3">Encoder</td><td>num_layers</td><td>8</td></tr><tr><td>d_model</td><td>512</td></tr><tr><td>num_heads</td><td>8</td></tr><tr><td rowspan="3">Decoder</td><td>num_layers</td><td>8</td></tr><tr><td>d_model</td><td>512</td></tr><tr><td>num_heads</td><td>8</td></tr><tr><td rowspan="2">Codebook</td><td>num_codes</td><td>6</td></tr><tr><td>latent_dim</td><td>32</td></tr></table>
599
+
600
+ Table 17: CoinRun dynamics model hyperparameters
601
+
602
+ <table><tr><td>Component</td><td>Parameter</td><td>Value</td></tr><tr><td rowspan="3">Architecture</td><td>num_layers</td><td>12</td></tr><tr><td>d_model</td><td>512</td></tr><tr><td>num_heads</td><td>8</td></tr><tr><td rowspan="2">Sampling</td><td>temperature</td><td>1.0</td></tr><tr><td>maskgit_steps</td><td>25</td></tr></table>
geniegenerativeinteractiveenvironments/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:22e8c79de5b16fac6bf23c9e011642b89024a755d5d26f55c6ae2a7a50691f39
3
+ size 1188439
geniegenerativeinteractiveenvironments/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:603dcea1b67cf85fb67698e7ce4e87a3b7365ede483a3bd976a7768af9fcc876
3
+ size 711392
unveilingthedynamicsofinformationinterplayinsupervisedlearning/82880792-22f3-498c-90ba-281a4b485bd3_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a718913406b15ae6f9a48250291b1d44a74bea0eab941458a23cf7b5381954fd
3
+ size 99769
unveilingthedynamicsofinformationinterplayinsupervisedlearning/82880792-22f3-498c-90ba-281a4b485bd3_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2cbd248f0f0d19442bb96aa552a7f9bf14e86302966684debd047d65f61263b3
3
+ size 120903
unveilingthedynamicsofinformationinterplayinsupervisedlearning/82880792-22f3-498c-90ba-281a4b485bd3_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:335ab8ac83a859fa1f848b6740336249cace12607a8f86435cf9ea76f1877e0e
3
+ size 587811
unveilingthedynamicsofinformationinterplayinsupervisedlearning/full.md ADDED
@@ -0,0 +1,432 @@
1
+ # Unveiling the Dynamics of Information Interplay in Supervised Learning
2
+
3
+ Kun Song $^{*1}$ Zhiquan Tan $^{*2}$ Bochao Zou $^{1}$ Huimin Ma $^{†1}$ Weiran Huang $^{†34}$
4
+
5
+ # Abstract
6
+
7
+ In this paper, we use matrix information theory as an analytical tool to analyze the dynamics of the information interplay between data representations and classification head vectors in the supervised learning process. Specifically, inspired by the theory of Neural Collapse, we introduce matrix mutual information ratio (MIR) and matrix entropy difference ratio (HDR) to assess the interactions between data representations and class classification heads in supervised learning, and we determine the theoretical optimal values for MIR and HDR when Neural Collapse happens. Our experiments show that MIR and HDR can effectively explain many phenomena occurring in neural networks, for example, the standard supervised training dynamics, linear mode connectivity, and the performance of label smoothing and pruning. Additionally, we use MIR and HDR to gain insights into the dynamics of grokking, which is an intriguing phenomenon observed in supervised training, where the model demonstrates generalization capabilities long after it has learned to fit the training data. Furthermore, we introduce MIR and HDR as loss terms in supervised and semi-supervised learning to optimize the information interactions among samples and classification heads. The empirical results provide evidence of the method's effectiveness, demonstrating that the utilization of MIR and HDR not only aids in comprehending the dynamics throughout the training process but can also enhance the training procedure itself.
8
+
9
+ *Equal contribution 1University of Science and Technology Beijing 2Department of Mathematical Sciences, Tsinghua University 3MIFA Lab, Qing Yuan Research Institute, SEIEE, Shanghai Jiao Tong University 4Shanghai AI Laboratory. †Correspondence to: Weiran Huang <weiran.huang@outlook.com>, Huimin Ma <mhmpub@ustb.edu.cn>.
10
+
11
+ Proceedings of the $41^{st}$ International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s).
12
+
13
+ # 1. Introduction
14
+
15
+ Supervised learning is a significant part of machine learning, tracing its development back to the early days of artificial intelligence. Leveraging ample annotated data from largescale datasets like ImageNet (Krizhevsky et al., 2012) and COCO (Lin et al., 2014), supervised learning has achieved outstanding performance in tasks such as image recognition (He et al., 2016; Girshick, 2015; Ronneberger et al., 2015), speech recognition (Hinton et al., 2012; Chan et al., 2016), and natural language processing (Vaswani et al., 2017), thereby advancing the development of artificial intelligence. Concurrently, with its enhanced performance in real-world applications, some interesting phenomena in supervised learning, such as Neural Collapse (Papyan et al., 2020), linear mode connectivity (Frankle et al., 2020), and grokking (Power et al., 2022) have emerged. More and more work is beginning to explore the reasons behind these phenomena.
16
+
17
+ Neural Collapse (Papyan et al., 2020) is an interesting phenomenon observed during the training process of supervised learning. Over the course of network training, features of samples within the same class become more similar in the feature space, meaning that intra-class differences decrease. At the same time, feature vectors of different classes become more distinct in feature space, leading to more significant inter-class differences. In supervised classification tasks, after prolonged training, a special alignment forms between the weights of the network's final fully connected layer and the class feature vectors: for each class, the centroid of its feature vectors almost coincides with the weight vector of its corresponding classifier (classification head vector).
18
+
19
+ Existing work on the theory of Neural Collapse primarily focuses on feature or classification-head similarity, with few studies exploring the information between features and class classification heads. We introduce matrix information theory as an analytical tool, using similarity matrices constructed from sample features and class classification heads to analyze the information dynamics along the training process. According to Neural Collapse, for a well-trained model at its terminal phase, sample features and the corresponding class classification heads align well. Thus, at the stage of Neural Collapse, the similarity matrix of sample
20
+
21
+ features aligns with the similarity matrix constructed from the corresponding class classification heads. We therefore first theoretically calculate the matrix mutual information ratio and the matrix entropy difference ratio at the point of Neural Collapse, and find that the theoretical MIR nearly reaches its maximum while the HDR reaches its theoretical minimum. Consequently, we expect MIR to increase and HDR to decrease over the course of training. Experiments demonstrate that MIR and HDR effectively describe these dynamics. Motivated by this success in standard training, we also explore their use in other settings such as linear mode connectivity, label smoothing, pruning, and grokking. Compared to accuracy, MIR and HDR not only describe the above phenomena but also carry distinct analytical meaning. We further explore imposing information constraints during training by adding MIR and HDR as loss terms in supervised and semi-supervised learning. Experiments show that these information constraints effectively enhance model performance, especially in semi-supervised learning with limited labeled samples, where they help the model better exploit the information in unlabeled samples.
22
+
23
+ Our contributions are as follows:
24
+
25
+ 1. Motivated by Neural Collapse and matrix information theory, we introduce two new metrics: Matrix Mutual Information Ratio (MIR) and Matrix Entropy Difference Ratio (HDR), for which we also deduce their theoretical values when Neural Collapse happens.
26
+ 2. Through rigorous experiments, we find that MIR and HDR are capable of explaining various phenomena, such as the standard training of supervised learning, linear mode connectivity, pruning, label smoothing, and grokking.
27
+ 3. We integrate matrix mutual information and information entropy differences as a loss term in both supervised and semi-supervised learning. Experiments demonstrate that these information metrics can effectively improve model performance.
28
+
29
+ # 2. Related Work
30
+
31
+ Neural network training phenomenon. Recent research has revealed several interesting phenomena that are significant for understanding the behavior and learning dynamics of neural networks. Firstly, Papyan et al. (2020) observe that in the final stages of deep neural network training, the feature vectors of the last layer tend to converge to their class centroids, and these class centroids align with the weights of the corresponding class in the final fully connected layer. This phenomenon is known as Neural Collapse. Neural Collapse occurs in both MSE loss and cross-entropy loss (Han et al., 2021; Zhou et al., 2022). Secondly, Frankle
32
+
33
+ et al. (2020) find that models trained from the same starting point, even when changing the order of the training data and the data augmentation, eventually converge to the same local area. This phenomenon is termed Linear Mode Connectivity, which is influenced by architecture, training strategy, and dataset (Altintas et al., 2023). Lastly, Power et al. (2022) discover that after prolonged training, models can transition from simply memorizing data to inductively processing it. This phenomenon is known as grokking. Nanda et al. (2022) connect grokking on the modular addition task with trigonometric functions.
34
+
35
+ Information theory. Traditional information theory provides a universally applicable set of fundamental concepts and metrics to understand the relationship between probability distributions and information (Wang et al., 2021). However, when dealing with high-dimensional data and complex data structures, traditional information theory tools struggle to analyze higher-order relationships within the data. As an extension and advancement of traditional information theory, matrix information theory broadens the scope of information theory to encompass the analysis of intermatrix relationships. This enables a better understanding of the latent structures in data and more effective handling of complex relationships in high-dimensional data (Bach, 2022). There have been works that utilize matrix mutual information to analyze neural networks. Tan et al. (2023b) use matrix mutual information to study the Siamese architecture self-supervised learning methods. Zhang et al. (2023a) point out the relationship between effective rank, matrix entropy, and equiangular tight frame.
36
+
37
+ Semi-supervised learning. Semi-supervised learning focuses on how to train a better model using a small amount of labeled data and a large amount of unlabeled data (Sohn et al., 2020; Zhang et al., 2021; Chen et al., 2023; Tan et al., 2023c; Wang et al., 2023; Tan et al., 2023a; Zhang et al., 2023b). FixMatch (Sohn et al., 2020) ingeniously integrates consistency regularization with pseudo-labeling techniques. MixMatch (Berthelot et al., 2019b) amalgamates leading SSL methodologies, achieving a substantial reduction in error rates and bolstering privacy protection. FlexMatch (Zhang et al., 2021) introduces curriculum pseudo-labeling to improve semi-supervised learning by dynamically adapting to the model's learning status, showing notable efficacy in challenging scenarios with limited labeled data. SoftMatch (Chen et al., 2023) efficiently balances the quantity and quality of pseudo-labels, demonstrating significant performance improvements in diverse applications including image and text classification. FreeMatch (Wang et al., 2023) self-adaptively adjusts confidence thresholds and incorporates class fairness regularization, significantly outperforming existing methods in scenarios with scarce labeled data.
38
+
39
+ How to more accurately utilize the information in unlabeled data remains an important problem in the field of semi-supervised learning.
40
+
41
+ # 3. Preliminaries
42
+
43
+ # 3.1. Supervised classification problem
44
+
45
+ We are given a labeled dataset $\{(\mathbf{x}_i, y_i)\}_{i=1}^n$ , where $y_i \in \{1, 2, \dots, C\}$ is the class label. In this paper, we mainly consider training an image classifier formed by composing a deep neural network $h$ with a linear classifier. The linear classifier consists of a weight matrix $\mathbf{W} \in \mathbb{R}^{C \times d}$ and a bias $\mathbf{b} \in \mathbb{R}^{C \times 1}$ . Denote $\mathbf{W}^T = [w_1 \dots w_C]$ . The training loss is the cross-entropy loss.
46
+
47
+ $$
48
+ \mathcal{H}(p, q) = -\sum_{i=1}^{n} p(x_i) \log q(x_i),
49
+ $$
50
+
51
+ where $p$ is the true probability distribution, and $q$ is the predicted probability distribution.
52
+
53
+ # 3.2. Matrix entropy and mutual information
54
+
55
+ The following definitions of matrix entropy and matrix mutual information are taken from Skean et al. (2023).
56
+
57
+ Definition 3.1 (Matrix entropy). Suppose $\mathbf{K} \in \mathbb{R}^{d \times d}$ is a positive-definite matrix with $\mathbf{K}(i, i) = 1$ ( $1 \leq i \leq d$ ). The matrix entropy is defined as follows:
58
+
59
+ $$
60
+ \mathrm {H} (\mathbf {K}) = - \operatorname {t r} \left(\frac {1}{d} \mathbf {K} \log \frac {1}{d} \mathbf {K}\right).
61
+ $$
62
+
63
+ In the following we assume that $\mathbf{K}_j\in \mathbb{R}^{d\times d}$ with $\mathbf{K}_j(i,i) = 1$ $(1\leq i\leq d,\ j = 1,2)$ .
64
+
65
+ Definition 3.2 (Matrix mutual information). The matrix mutual information is defined as follows:
66
+
67
+ $$
68
+ \operatorname {M I} \left(\mathbf {K} _ {1}, \mathbf {K} _ {2}\right) = \mathrm {H} \left(\mathbf {K} _ {1}\right) + \mathrm {H} \left(\mathbf {K} _ {2}\right) - \mathrm {H} \left(\mathbf {K} _ {1} \odot \mathbf {K} _ {2}\right),
69
+ $$
70
+
71
+ where $\odot$ is the Hadamard product.
72
+
73
+ Based on the two definitions above, we can introduce the following concepts, which measure the normalized information interactions between matrices.
74
+
75
+ Definition 3.3 (Matrix mutual information ratio (MIR)). The matrix mutual information ratio is defined as follows:
76
+
77
+ $$
78
+ \operatorname {M I R} \left(\mathbf {K} _ {1}, \mathbf {K} _ {2}\right) = \frac {\operatorname {M I} \left(\mathbf {K} _ {1} , \mathbf {K} _ {2}\right)}{\operatorname* {m i n} \left\{\mathrm {H} \left(\mathbf {K} _ {1}\right) , \mathrm {H} \left(\mathbf {K} _ {2}\right) \right\}}.
79
+ $$
80
+
81
+ Definition 3.4 (Matrix entropy difference ratio (HDR)). The matrix entropy difference ratio is defined as follows:
82
+
83
+ $$
84
+ \operatorname {H D R} \left(\mathbf {K} _ {1}, \mathbf {K} _ {2}\right) = \frac {\left| \mathrm {H} \left(\mathbf {K} _ {1}\right) - \mathrm {H} \left(\mathbf {K} _ {2}\right) \right|}{\max \left\{\mathrm {H} \left(\mathbf {K} _ {1}\right) , \mathrm {H} \left(\mathbf {K} _ {2}\right) \right\}}.
85
+ $$
86
+
87
+ # 4. Theoretic Insights in Supervised Learning
88
+
89
+ # 4.1. Neural collapse
90
+
91
+ Neural Collapse (NC) is an interesting phenomenon (Papyan et al., 2020) that appears at the terminal phase of training in classification problems. We briefly summarize the three NC conditions most important for this paper as follows.
92
+
93
+ Let $\mu_{G} = \frac{\sum_{i=1}^{n} h(\mathbf{x}_{i})}{n}$ and $\mu_{c} = \frac{\sum_{y_{i}=c} h(\mathbf{x}_{i})}{\# \{y_{i}=c\}}$ be the global mean and class-wise means respectively. Then we can define $\tilde{\mu}_{c} = \mu_{c} - \mu_{G}$ .
94
+
95
+ $$
96
+ (\mathrm {N C} 1) h (\mathbf {x} _ {i}) = \mu_ {y _ {i}} (i = 1, 2, \dots , n).
97
+ $$
98
+
99
+ (NC 2) $\cos (\tilde{\mu}_i,\tilde{\mu}_j) = \frac{C}{C - 1}\delta_j^i -\frac{1}{C - 1}$ , where $\cos$ is the cosine similarity and $\delta_j^i$ is Kronecker symbol.
100
+
101
+ $$
102
+ \left(\mathrm{NC}3\right)\ \frac{\mathbf{W}^{T}}{\|\mathbf{W}\|_{F}} = \frac{\mathbf{M}}{\|\mathbf{M}\|_{F}}, \text{ where } \mathbf{M} = \left[\tilde{\mu}_{1} \dots \tilde{\mu}_{C}\right].
103
+ $$
104
+
105
+ In this paper, the matrices used in the matrix information quantities are usually similarity (gram) matrices. For ease of exposition, we introduce a standard way of constructing a similarity (gram) matrix as follows.
106
+
107
+ Definition 4.1 (Construction of similarity (gram) matrix). Given a set of representations $\mathbf{Z} = [\mathbf{z}_1\cdot \cdot \cdot \mathbf{z}_N]\in \mathbb{R}^{d\times N}$ , denote the $l_{2}$ -normalized features $\hat{\mathbf{z}}_i = \frac{\mathbf{z}_i}{\|\mathbf{z}_i\|},\hat{\mathbf{Z}} = [\hat{\mathbf{z}}_1\dots \hat{\mathbf{z}}_N]$ . Then the gram matrix is defined as $\mathbf{G}(\mathbf{Z}) = \hat{\mathbf{Z}}^T\hat{\mathbf{Z}}$ .
108
+
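For concreteness, a minimal NumPy sketch (ours, not from the paper) of these quantities, combining the gram-matrix construction of Definition 4.1 with the matrix entropy, mutual information, MIR, and HDR of Section 3.2.

```python
import numpy as np

def matrix_entropy(K: np.ndarray) -> float:
    """H(K) = -tr((K/d) log(K/d)), computed from the eigenvalues of K/d."""
    d = K.shape[0]
    eig = np.clip(np.linalg.eigvalsh(K / d), 0.0, None)   # numerical safety
    eig = eig[eig > 1e-12]                                 # convention: 0 log 0 = 0
    return float(-(eig * np.log(eig)).sum())

def matrix_mutual_information(K1, K2) -> float:
    """MI(K1, K2) = H(K1) + H(K2) - H(K1 ⊙ K2), with ⊙ the Hadamard product."""
    return matrix_entropy(K1) + matrix_entropy(K2) - matrix_entropy(K1 * K2)

def mir(K1, K2) -> float:
    """Matrix mutual information ratio (Definition 3.3)."""
    return matrix_mutual_information(K1, K2) / min(matrix_entropy(K1), matrix_entropy(K2))

def hdr(K1, K2) -> float:
    """Matrix entropy difference ratio (Definition 3.4)."""
    h1, h2 = matrix_entropy(K1), matrix_entropy(K2)
    return abs(h1 - h2) / max(h1, h2)

def gram(Z: np.ndarray) -> np.ndarray:
    """Gram matrix of l2-normalized columns of Z (unit diagonal by construction)."""
    Zn = Z / np.linalg.norm(Z, axis=0, keepdims=True)
    return Zn.T @ Zn

# Toy usage: random features and a perturbed copy standing in for class-head embeddings.
rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 64))            # d x N representations
heads = feats + 0.1 * rng.normal(size=(16, 64))
print(mir(gram(feats), gram(heads)), hdr(gram(feats), gram(heads)))
```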
109
+ Note that Neural Collapse conditions impose structural information on the weight matrix and class means, we provide the matrix mutual information ratio and matrix entropy difference ratio of this imposed structure in Theorem 4.2.
110
+
111
+ Theorem 4.2. Suppose Neural collapse happens. Then $\mathrm{HDR}(\mathbf{G}(\mathbf{W}^T),\mathbf{G}(\mathbf{M})) = 0$ and $\mathrm{MIR}(\mathbf{G}(\mathbf{W}^T),\mathbf{G}(\mathbf{M})) = \frac{1}{C - 1} +\frac{(C - 2)\log(C - 2)}{(C - 1)\log(C - 1)}.$
112
+
113
+ The proof can be seen in Appendix A.1. As the linear weight matrix $\mathbf{W}$ can be seen as a (prototype) embedding for each class, it is natural to consider the mutual information and entropy difference between sample embeddings and label embeddings. We discuss this in the following Corollary 4.3.
114
+
115
+ Corollary 4.3. Suppose the dataset is class-balanced, $\mu_G = 0$ and Neural collapse happens. Denote $\mathbf{Z}_1 = [h(\mathbf{x}_1)\dots h(\mathbf{x}_n)]\in \mathbb{R}^{d\times n}$ and $\mathbf{Z}_2 = [w_{y_1}\dots w_{y_n}]\in \mathbb{R}^{d\times n}$ . Then $\mathrm{HDR}(\mathbf{Z}_1,\mathbf{Z}_2) = 0$ and $\mathrm{MIR}(\mathbf{Z}_1,\mathbf{Z}_2) = \frac{1}{C - 1} +\frac{(C - 2)\log(C - 2)}{(C - 1)\log(C - 1)}$ .
116
+
117
+ Remark: Note $\frac{1}{C - 1} + \frac{(C - 2) \log(C - 2)}{(C - 1) \log(C - 1)} \approx \frac{1}{C - 1} + \frac{(C - 2) \log(C - 1)}{(C - 1) \log(C - 1)} = 1$ and MIR, HDR $\in [0,1]$ . These facts make the quantities obtained in Theorem 4.2 and Corollary 4.3 very interesting, as HDR reaches the minimum possible value and MIR approximately reaches the highest possible value.
118
+
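As a quick numerical illustration (our own, not reported in the paper), the closed-form MIR value from Theorem 4.2 can be evaluated directly and is already close to 1 for moderate numbers of classes:

```python
import math

def nc_mir(C: int) -> float:
    """Theoretical MIR at Neural Collapse (Theorem 4.2); the log base cancels."""
    return 1 / (C - 1) + (C - 2) * math.log(C - 2) / ((C - 1) * math.log(C - 1))

print(round(nc_mir(10), 3), round(nc_mir(100), 3))  # approximately 0.952 and 0.998
```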
119
+ # 4.2. Some theoretical insights for HDR
120
+
121
+ Mutual information is a very intuitive quantity in information theory. On the other hand, it seems weird to consider the difference of entropy, but we will show that this quantity is closely linked with comparing the approximation ability of different representations on the same target.
122
+
123
+ For ease of theoretical analysis, in this section, we consider the MSE regression loss.
124
+
125
+ The following Lemma 4.4 shows that the regressions of two sets of representations $\mathbf{Z}_1$ and $\mathbf{Z}_2$ onto the same target $\mathbf{Y}$ are closely related, and that the gap between the two approximation errors is controlled by the regression error of $\mathbf{Z}_1$ onto $\mathbf{Z}_2$ .
126
+
127
+ Lemma 4.4. Suppose $\mathbf{W}_1^*$ , $\mathbf{b}_1^* = \arg \min_{\mathbf{W},\mathbf{b}}\|\mathbf{Y} - (\mathbf{W}\mathbf{Z}_1 + \mathbf{b}\mathbf{1}_N)\|_F$ . Then $\min_{\mathbf{W},\mathbf{b}}\|\mathbf{Y} - (\mathbf{W}\mathbf{Z}_2 + \mathbf{b}\mathbf{1}_N)\|_F \leq \min_{\mathbf{W},\mathbf{b}}\|\mathbf{Y} - (\mathbf{W}\mathbf{Z}_1 + \mathbf{b}\mathbf{1}_N)\|_F + \|\mathbf{W}_1^*\|_F \min_{\mathbf{H},\eta} \|\mathbf{Z}_1 - (\mathbf{H}\mathbf{Z}_2 + \eta \mathbf{1}_N)\|_F$ .
128
+
129
+ The proof can be found in Appendix B.1. From Lemma 4.4, we know that the regression error of $\mathbf{Z}_1$ to $\mathbf{Z}_2$ is crucial for understanding the differences of representations. We further bound the regression error with rank and singular values in the following Lemma 4.5.
130
+
131
+ Lemma 4.5. Suppose $\mathbf{Z}_1 = [\mathbf{z}_1^{(1)}\dots \mathbf{z}_N^{(1)}]\in \mathbb{R}^{d'\times N}$ and $\mathbf{Z}_2 = [\mathbf{z}_1^{(2)}\dots \mathbf{z}_N^{(2)}]\in \mathbb{R}^{d\times N}$ and rank $(\mathbf{Z}_1) > \mathrm{rank}(\mathbf{Z}_2)$ . Denote the singular value of $\frac{\mathbf{Z}_1}{\sqrt{N}}$ as $\sigma_1\geq \dots \geq \sigma_N$ . Then $\min_{\mathbf{H},\eta}\frac{1}{N}\| \mathbf{Z}_1 - (\mathbf{H}\mathbf{Z}_2 + \eta \mathbf{1}_N)\| _F^2\geq \sum_{j = \mathrm{rank}(\mathbf{Z}_2) + 2}^{\mathrm{rank}(\mathbf{Z}_1)}(\sigma_j)^2.$
132
+
133
+ The proof can be found in Appendix B.2. The bound given by Lemma 4.5 is not straightforward to interpret. Assuming the features are normalized, we derive a connection between the regression error and the ratio of ranks in Theorem 4.6.
134
+
135
+ Theorem 4.6. Suppose $\| \mathbf{z}_j^{(1)}\| _2 = 1$ for $1\leq j\leq N$ . Then the lower bound on the approximation error can itself be upper-bounded as follows: $\sum_{j = rank(\mathbf{Z}_2) + 2}^{rank(\mathbf{Z}_1)}(\sigma_j)^2\leq$ $\frac{\text{rank}(\mathbf{Z}_1) - \text{rank}(\mathbf{Z}_2) - 1}{\text{rank}(\mathbf{Z}_1)}\leq 1 - \frac{\text{rank}(\mathbf{Z}_2)}{\text{rank}(\mathbf{Z}_1)}.$
136
+
137
+ The proof can be found in Appendix B.3. From (Wei et al., 2024; Zhang et al., 2023a), $\exp (\mathrm{H}(\mathbf{G}(\mathbf{Z})))$ is a good approximation of $\mathrm{rank}(\mathbf{Z})$ . Then we can see that $\frac{\mathrm{rank}(\mathbf{Z}_2)}{\mathrm{rank}(\mathbf{Z}_1)}\approx$ $\exp (\mathrm{H}(\mathbf{G}(\mathbf{Z}_2)) - \mathrm{H}(\mathbf{G}(\mathbf{Z}_1)))$ , making the difference of entropies a good surrogate bound for the approximation error.
138
+
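A small self-contained check of this approximation (our own illustration): for a gram matrix built from low-rank, column-normalized features, $\exp(\mathrm{H}(\mathbf{G}(\mathbf{Z})))$ is on the order of $\mathrm{rank}(\mathbf{Z})$.

```python
import numpy as np

def matrix_entropy(K):
    """H(K) = -tr((K/N) log(K/N)), computed from the eigenvalues of K/N."""
    lam = np.clip(np.linalg.eigvalsh(K / K.shape[0]), 0.0, None)
    lam = lam[lam > 1e-12]
    return float(-(lam * np.log(lam)).sum())

rng = np.random.default_rng(0)
r, d, N = 8, 64, 256                                   # true rank r much smaller than N
Z = rng.normal(size=(d, r)) @ rng.normal(size=(r, N))  # rank-r features
Z /= np.linalg.norm(Z, axis=0, keepdims=True)          # normalize each column
G = Z.T @ Z
print(np.linalg.matrix_rank(G), np.exp(matrix_entropy(G)))  # rank 8 vs. exp-entropy of comparable size
```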
139
+ # 5. Information Interplay in Supervised Learning
140
+
141
+ Inspired by matrix information theory and Neural Collapse theory, we focus more on the consistency between sample representations and class classification heads. We determine the relationships among samples by constructing a similarity matrix of the representations of dataset samples.
142
+
143
+ According to NC1 and NC3, the similarity matrix between samples approximates the similarity matrix of the corresponding class centers, which is also the similarity matrix of the corresponding weights in the fully connected layer. Therefore, under Neural Collapse, the similarity relationship among samples is equivalent to the similarity relationship of the corresponding category weights in the fully connected layer. Our analysis, grounded in matrix information theory, primarily concentrates on the relationship between the representations of samples and the weights in the fully connected layer. Due to constraints in computational resources, we approximate the dataset's matrix entropy using batch matrix entropy.
144
+
145
+ Our models are trained on CIFAR-10 and CIFAR-100. The default experimental configuration comprises training the models with an SGD optimizer (momentum of 0.9, weight decay of $5e^{-4}$ ), an initial learning rate of 0.03 with cosine annealing, a batch size of 64, and a total of $2^{20}$ training iterations. The backbone architecture is WideResNet-28-2 for CIFAR-10 and WideResNet-28-8 for CIFAR-100.
146
+
147
+ # 5.1. Information interplay during standard supervised learning process
148
+
149
+ According to Neural Collapse, during the terminal stages of training, sample features align with the weights of the fully connected layer. Theorem 4.2 thus indicates that, as training approaches Neural Collapse, MIR should increase toward its theoretical upper limit while HDR should decrease to 0. We plot the model's accuracy on the test set during training, as well as the MIR and HDR between data representations and the corresponding classification heads. As shown in Figure 1, on CIFAR-10 and CIFAR-100 the accuracy and MIR exhibit almost identical trends: in most cases both increase or decrease simultaneously, and MIR consistently trends upward toward its theoretical maximum value. As shown in Figure 2, accuracy and HDR mostly show opposite trends during training, with HDR continually decreasing and even nearing its theoretical minimum
150
+
151
+ ![](images/e01cbe3b95ef9562f1251e8a7f19e645cf2d19ce7597077edd037a4254bf697f.jpg)
152
+ (a) CIFAR-10
153
+ Figure 1. Accuracy and MIR on the test set during training.
154
+
155
+ ![](images/7209c343c1dff7b285ca4d6bcf39f9cfc4e71ad2134f6dce524ae1a69ddb7b12.jpg)
156
+ (b) CIFAR-100
157
+
158
+ value of 0 on CIFAR-100. In summary, MIR and HDR effectively describe the process of training towards Neural Collapse.
159
+
160
+ ![](images/d556a873341d19af2933ec673ceb5ff10049ab84b924f889514f93cfa2ebd058.jpg)
161
+ (a) CIFAR-10
162
+ Figure 2. Accuracy and HDR on the test set during training.
163
+
164
+ ![](images/5a92b9e1631c06a79be9a5f9a82481f5fdd8a2725cf825928c4e8f877dea55fc.jpg)
165
+ (b) CIFAR-100
166
+ Figure 4. Accuracy, MIR, and HDR of models interpolated with different weights on CIFAR-10 test set.
167
+
168
+ # 5.2. Information interplay in linear mode connectivity
169
+
170
+ Linear mode connectivity (Frankle et al., 2020) suggests that under specific datasets and experimental setups, models initialized with the same random parameters will be optimized near the same local optimal basin, even if the order of training data and data augmentation differs. We investigate the behaviors of MIR and HDR under the setting of linear mode connectivity. We initialize models with the same random parameters and train them using different data sequences and random augmentations. Subsequently, we linearly interpolate these two checkpoints and obtain a new model $h = (1 - \omega) \cdot h_1 + \omega \cdot h_2$ , where $h_1$ and $h_2$ are the two checkpoints and $\omega$ is the interpolation weight. Then we test these models on a test set for accuracy, MIR, and HDR.
171
+
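A minimal PyTorch sketch of this interpolation (our own illustration). Note that for networks with batch normalization, integer buffers and running statistics may need extra care, for example recomputing BN statistics after merging; the sketch below only handles the plain parameter average.

```python
import copy
import torch

def interpolate_state_dicts(model_a, model_b, omega: float):
    """Return a model whose parameters are (1 - omega) * A + omega * B."""
    sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
    merged = {k: (1 - omega) * sd_a[k] + omega * sd_b[k] for k in sd_a}
    model = copy.deepcopy(model_a)
    model.load_state_dict(merged)
    return model

# Toy usage; in the setting above, model_a and model_b are the two checkpoints h1 and h2
# trained from the same random initialization with different data orders/augmentations.
a, b = torch.nn.Linear(8, 2), torch.nn.Linear(8, 2)
models = [interpolate_state_dicts(a, b, w) for w in torch.linspace(0, 1, 11)]
```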
172
+ We conduct experiments on CIFAR-10 and CIFAR-100. As shown in Figures 3a and 3b, on CIFAR-100 the performance of the models obtained along the interpolation line is close, consistent with linear mode connectivity. At this point, MIR and HDR remain almost unchanged. However, on CIFAR-10, the models do not exhibit linear mode connectivity. When the value of the interpolation weight is between
173
+
174
+ ![](images/6a4169b6106a80f884533a500fdfbcccde305d87f05f310b1acc019d69d6728b.jpg)
175
+ (a) CIFAR-10
176
+ Figure 3. Accuracy, MIR, and HDR of models interpolated with different weights on the test set.
177
+
178
+ ![](images/6f0564a9bdff242c4224dc88ecea627931b3643ae93995af5b5bb03bea2ea906.jpg)
179
+ (b) CIFAR-100
180
+
181
+ ![](images/10538ff8b5bdb2a2daae775e038393b5d132f40ace9db571eae1fe9201cbb586.jpg)
182
+ (a) $\mathrm{lr}:3e^{-2}$
183
+
184
+ ![](images/eae4470577d9543ec8b01488493398dbf92aa4f34a8e8d34da8d511bf642981f.jpg)
185
+ (b) $\operatorname{lr}: 3e^{-3}$
186
+
187
+ ![](images/2d4d7341c2a50ac761bbe81c4e0bc4a8e76e69f2a2104126d1ef28a897ac61c0.jpg)
188
+ (c) $\operatorname{lr}: 3e^{-4}$
189
+
190
+ 0.4 and 0.6, the performance of the interpolated models even drops to that of random guessing. Surprisingly, at this point, MIR shows an additional upward trend. Moreover, when the interpolation weight is close to 0 or 1, despite a slight decrease in performance, HDR also decreases. Although we find it difficult to explain this anomaly, it does demonstrate that HDR and MIR have distinctive attributes compared to the accuracy metric, presenting an intriguing avenue for further exploration.
191
+
192
Altintas et al. (2023) point out that linear mode connectivity depends on the experimental configuration. We therefore posit that the performance decline of the interpolated model on CIFAR-10 is associated with an excessively high learning rate. During training, models navigate the loss landscape in search of minima, and two models with linear mode connectivity are optimized near the same local optimum. When the learning rate is too high, different training-sample orderings and data augmentations can steer the optimization towards distinct regions of the loss landscape. We experiment with different learning rates on CIFAR-10 and test for linear mode connectivity, as shown in Figure 4. As the learning rate decreases, fluctuations in accuracy, MIR, and HDR also diminish. When the learning rate is lowered to $3e^{-4}$, the model demonstrates linear mode connectivity on CIFAR-10. This suggests that HDR and MIR are also effective in describing linear mode connectivity when it exists.
193
+
194
+ # 5.3. Information interplay in label smoothing
195
+
196
Label smoothing (Szegedy et al., 2016) is a widely used technique in deep learning. It improves model generalization by softening the labels: $y' = (1 - \epsilon) \cdot y + \frac{\epsilon}{C}$, where $\epsilon$ is the smoothness, $C$ is the number of classes, and $y$ is the one-hot label. We train models with various smoothness levels to explore their impact on accuracy, HDR, and MIR. As shown in Figure 5, the variations in accuracy, MIR, and HDR are minimal, indicating that HDR and MIR can effectively describe the behavior of the label smoothing technique.
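As a reference, a minimal sketch of the smoothing rule above and the resulting cross-entropy; the swept smoothness values are placeholders, not necessarily those used in the experiments.

```python
import torch
import torch.nn.functional as F

def smooth_labels(y: torch.Tensor, num_classes: int, eps: float) -> torch.Tensor:
    """y' = (1 - eps) * one_hot(y) + eps / C."""
    one_hot = F.one_hot(y, num_classes).float()
    return (1.0 - eps) * one_hot + eps / num_classes

def smoothed_cross_entropy(logits: torch.Tensor, y: torch.Tensor, eps: float) -> torch.Tensor:
    target = smooth_labels(y, logits.size(-1), eps)
    return -(target * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()

# Example sweep over smoothness levels (placeholder values):
# for eps in (0.0, 0.05, 0.1, 0.2):
#     loss = smoothed_cross_entropy(model(x), y, eps)
```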
197
+
198
+ # 5.4. Information interplay in model pruning
199
+
200
We would like to use MIR and HDR to understand why pruning is effective at maintaining relatively high accuracy. We apply standard unstructured pruning: for a well-trained model, we determine the number of parameters to prune, denoted as $k$, sort the parameters by absolute value, and remove the $k$ smallest. The remaining parameters are then fine-tuned on the dataset (Han et al., 2015).
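A sketch of this global magnitude-based pruning, assuming a PyTorch model; the returned masks would be re-applied (or used to mask gradients) during the subsequent fine-tuning step.

```python
import torch

@torch.no_grad()
def magnitude_prune(model: torch.nn.Module, sparsity: float) -> dict:
    """Zero out the fraction `sparsity` of weights with the smallest absolute value.

    Returns binary masks so pruned weights can be kept at zero while the
    remaining parameters are fine-tuned.
    """
    weights = [(n, p) for n, p in model.named_parameters() if p.dim() > 1]  # weight matrices only
    all_abs = torch.cat([p.detach().abs().flatten() for _, p in weights])
    k = int(sparsity * all_abs.numel())
    threshold = all_abs.kthvalue(k).values.item() if k > 0 else -1.0

    masks = {}
    for name, p in weights:
        mask = (p.abs() > threshold).float()
        p.mul_(mask)              # remove the k smallest-magnitude weights
        masks[name] = mask
    return masks

# After pruning, fine-tune the remaining parameters on the training set,
# re-applying the masks so pruned weights stay at zero.
```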
201
+
202
We prune the model at various sparsity levels and extract features with the model before and after pruning, then compute the MIR and HDR between the two sets of features. As shown in Figure 6, even at $90\%$ sparsity, the features extracted by the pruned model maintain a high MIR with those extracted before pruning. Across sparsity ratios, the changes in MIR and HDR after pruning are small relative to before pruning, and the performance differences between the models before and after pruning are also not significant. This indicates that fine-tuning the pruned subnetwork can restore adequate information-extraction capability.
203
+
204
+ # 5.5. Information interplay in grokking
205
+
206
In supervised learning, training models on certain datasets can produce an anomalous situation: initially, models quickly fit the patterns of the training set, yet their performance on the test set remains very poor. As training continues, the models learn representations that generalize to the test set, a phenomenon referred to as grokking (Nanda et al., 2022). We aim to explore the information interplay in grokking. Following Nanda et al. (2022) and Tan & Huang (2023), we train a transformer to learn modular addition $c \equiv (a + b) \pmod{p}$ with $p = 113$. The model input is "$a b =$", where $a$ and $b$ are encoded as $p$-dimensional one-hot vectors and "$=$" prompts the model to output the value $c$. Our model is a single-layer ReLU transformer with a token embedding dimension of 128.
207
+
208
+ ![](images/fc30bc2c92e783dec5f61eb44ae59d98ad70e96d2ff1245e4ef12510b526b76b.jpg)
209
+ (a) CIFAR-10
210
+
211
+ ![](images/bc900dac4dbec2db185e9957eb8c0824dea62c2cb91a4b8779b6f70fa073e294.jpg)
212
+ (b) CIFAR-100
213
+
214
+ ![](images/6315aec85c44a8a1a516fb209b61229fe02103e5b41e9e575764a86183ecec36.jpg)
215
+ (a) CIFAR-10
216
+
217
+ ![](images/6f31ddf16656304e3cd8d7c1ff7ae114dd8fb421ddc50e555117d072e9a8adc2.jpg)
218
+ (b) CIFAR-100
219
+ Figure 6. Accuracy, MIR, and HDR of the features extracted by the model before and after pruning.
220
+
221
It uses learned positional encodings, four attention heads each of dimension 32, and an MLP with a hidden layer of dimension 512. We train the model with full-batch gradient descent using the AdamW optimizer, a learning rate of 0.001, and a weight decay of 1. We use $30\%$ of all possible inputs ($113 \times 113$ pairs) as training data and test on the remaining $70\%$.
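A sketch of how the modular-addition dataset and split described above could be constructed; the exact tokenization of "=" and the random seed are assumptions of this sketch.

```python
import torch

def modular_addition_split(p: int = 113, train_frac: float = 0.3, seed: int = 0):
    """Enumerate all (a, b) pairs with target c = (a + b) mod p and split them 30/70."""
    a = torch.arange(p).repeat_interleave(p)           # p*p values of a
    b = torch.arange(p).repeat(p)                       # p*p values of b
    eq_token = p                                        # extra token id standing for "="
    inputs = torch.stack([a, b, torch.full_like(a, eq_token)], dim=1)  # sequence "a b ="
    targets = (a + b) % p

    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(p * p, generator=g)
    n_train = int(train_frac * p * p)
    train_idx, test_idx = perm[:n_train], perm[n_train:]
    return (inputs[train_idx], targets[train_idx]), (inputs[test_idx], targets[test_idx])

(train_x, train_y), (test_x, test_y) = modular_addition_split()
```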
222
+
223
As shown in Figure 7, we plot the training and test accuracy during the grokking process, as well as the variation in MIR and HDR between the representations and the fully connected layer. In the early stages of training, the model quickly fits the training data and achieves $100\%$ training accuracy, while its performance on the test set is nearly equivalent to random guessing. As training continues, the model gradually generalizes to the test set and ultimately reaches $100\%$ test accuracy, which is the hallmark of grokking. Figure 7 also reveals a clear two-phase variation in both MIR and HDR between the data representations and the weights of the fully connected layer. Initially, as in standard supervised learning, MIR increases while HDR decreases. As training proceeds, however, MIR begins to decrease and HDR starts to increase, indicating that the model is seeking new optima. After the model achieves grokking, MIR reaches its lowest point and HDR rapidly declines from its highest point. The experiments demonstrate that HDR and MIR exhibit distinct behavior in the two stages.
224
+
225
+ ![](images/28d4d0ee7d939f1c4147563ed73f47c2e3e40391754d01bcecd249aea90ed8c3.jpg)
226
+ Figure 5. Accuracy, MIR, and HDR under different smoothness levels.
227
+ Figure 7. Accuracy, MIR, and HDR during Grokking.
228
+
229
+ ![](images/3ee9e39eaca47a8a1f8c811838d7c1b1c176c86bc68a099167c43241af14f04c.jpg)
230
+
231
This suggests that information metrics can describe the grokking phenomenon, providing a basis for further research.
232
+
233
+ # 6. Improving Supervised and Semi-Supervised Learning With Information Interplay
234
+
235
+ # 6.1. Pipeline of supervised and semi-supervised learning
236
+
237
In this section, we describe how to apply matrix information entropy to supervised and semi-supervised learning. In supervised learning, we train a neural network $h$ and a classifier $\mathbf{W} \in \mathbb{R}^{C \times d}$ on a dataset $\mathcal{D}_L = \{(x_i, y_i)\}_{i=1}^{N_L}$ of $N_L$ labeled samples. $h$ extracts data features $f \in \mathbb{R}^{d}$, and $\mathbf{W}$ classifies the extracted features. The model is optimized with the following cross-entropy loss.
238
+
239
+ $$
240
\mathcal{L}_{s} = \frac{1}{B} \sum_{i=1}^{B} \mathcal{H}\left(y_{i}, p(\omega(x_{i}))\right),
241
+ $$
242
+
243
where $B$ is the batch size, $\mathcal{H}$ denotes the cross-entropy loss, $p(\cdot)$ is the model's predicted probability distribution for a sample, and $\omega$ denotes random data augmentation.
244
+
245
Compared to supervised learning, semi-supervised learning additionally uses an unlabeled dataset $\mathcal{D}_U = \{u_i\}_{i=1}^{N_U}$ containing $N_U$ unlabeled samples to assist in optimizing the model. For the unlabeled data, we adopt the approach of FreeMatch (Wang et al., 2023): pseudo-labels are generated from weakly augmented inputs and samples are selected using a probability threshold, and the cross-entropy loss is then computed between these pseudo-labels and the model's predictions on strongly augmented inputs. The training objective for the unlabeled data is:
246
+
247
+ $$
248
\mathcal{L}_{u} = \frac{1}{\mu B} \sum_{i=1}^{\mu B} \mathbb{I}\left(\max(q_{i}) > \tau\right) \cdot \mathcal{H}\left(\hat{q}_{i}, Q_{i}\right),
249
+ $$
250
+
251
where $q_{i}$ and $Q_{i}$ denote $p(y|\omega(u_{i}))$ and $p(y|\Omega(u_{i}))$, respectively, $\hat{q}_{i}$ is the one-hot pseudo-label generated from $q_{i}$, $\mathbb{I}(\cdot > \tau)$ is the indicator function for values exceeding the threshold $\tau$, and $\omega$ and $\Omega$ denote weak and strong data augmentation, respectively.
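A sketch of this thresholded pseudo-labeling objective, assuming a fixed confidence threshold for readability (FreeMatch itself adapts the threshold during training):

```python
import torch
import torch.nn.functional as F

def unlabeled_loss(model, u_weak: torch.Tensor, u_strong: torch.Tensor, tau: float = 0.95):
    """L_u: cross-entropy between confident pseudo-labels (weak aug) and strong-aug predictions."""
    with torch.no_grad():
        q = F.softmax(model(u_weak), dim=-1)           # q_i = p(y | weak aug)
        conf, pseudo = q.max(dim=-1)                    # max(q_i) and its one-hot index
        mask = (conf > tau).float()                     # I(max(q_i) > tau)
    logits_strong = model(u_strong)                     # predictions Q_i on strong aug
    per_sample = F.cross_entropy(logits_strong, pseudo, reduction="none")
    return (mask * per_sample).mean()                   # average over the mu*B unlabeled samples
```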
252
+
253
In addition, FreeMatch incorporates a fairness objective that encourages the model to predict each class with uniform frequency:
254
+
255
+ $$
256
\mathcal{L}_{f} = - H\left(\operatorname{SumNorm}\left(\frac{p_{1}}{hist_{1}}\right), \operatorname{SumNorm}\left(\frac{p_{2}}{hist_{2}}\right)\right).
257
+ $$
258
+
259
Here $\operatorname{SumNorm}(\cdot) = (\cdot) / \sum(\cdot)$, and $p_1$ and $p_2$ are the model's average predictions under weak and strong augmentation, respectively. Likewise, $hist_{1}$ and $hist_{2}$ denote the histogram distributions resulting from weak and strong augmentation, respectively.
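A sketch of the fairness term, assuming `p1` and `p2` are batch-averaged predicted distributions and `hist1`, `hist2` the corresponding histograms (all of shape `[C]`); `H` is the cross-entropy between the two normalized ratios.

```python
import torch

def fairness_loss(p1, p2, hist1, hist2, eps: float = 1e-8):
    """L_f = -H(SumNorm(p1 / hist1), SumNorm(p2 / hist2))."""
    def sum_norm(x):
        return x / x.sum().clamp(min=eps)
    a = sum_norm(p1 / hist1.clamp(min=eps))
    b = sum_norm(p2 / hist2.clamp(min=eps))
    cross_entropy = -(a * torch.log(b.clamp(min=eps))).sum()   # H(a, b)
    return -cross_entropy
```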
262
+
263
+ The overall objective is
264
+
265
+ $$
266
\mathcal{L}_{ssl} = \mathcal{L}_{s} + \lambda_{u} \mathcal{L}_{u} + \lambda_{f} \mathcal{L}_{f},
267
+ $$
268
+
269
where $\lambda_{u}$ and $\lambda_{f}$ are the weights for $\mathcal{L}_u$ and $\mathcal{L}_f$, respectively.
270
+
271
+ # 6.2. Insights from information interplay
272
+
273
For a batch of labeled data $\{(x_i, y_i)\}_{i=1}^B \subset \mathcal{D}_L$, $h$ extracts feature representations $f \in \mathbb{R}^{B \times d}$. In Neural Collapse theory, the class center of each sample's representation aligns with the classifier weight of the corresponding category, i.e., $V_i = W_{y_i}$. For unlabeled data $\{u_i\}_{i=1}^{\mu B} \subset \mathcal{D}_U$, we select the features $f'$ of the samples whose pseudo-label probabilities exceed $\tau$, i.e., $f' = \{f_i \in f \mid \max(q_i) > \tau\}$, and obtain the corresponding class centers $V'_i = W_{y'_i}$, where $y'_i$ is the pseudo-label of $f'_i$.
274
+
275
+ Maximizing mutual information. As shown in Figure 1, during the model training process, the mutual information between a batch's data features $f$ and the corresponding class weights $V$ increases. Therefore, we add an additional loss term to increase the mutual information between them. For supervised learning, the final optimization objective is
276
+
277
+ $$
278
\mathcal{L} = \mathcal{L}_{s} - \lambda_{mi} \cdot \operatorname{MI}(\mathbf{G}(f), \mathbf{G}(V)).
279
+ $$
280
+
281
+ For semi-supervised learning, the final optimization objective is
282
+
283
+ $$
284
\mathcal{L} = \mathcal{L}_{ssl} - \lambda_{mi} \cdot \operatorname{MI}\left(\mathbf{G}(f^{\prime}), \mathbf{G}(V^{\prime})\right),
285
+ $$
286
+
287
+ where $\lambda_{mi}$ is the weight for the mutual information.
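A sketch of how this regularizer could be added on top of the semi-supervised objective. The confident-sample selection follows the description above, the Gram-matrix entropy follows the earlier sketch, and `lambda_mi` and `tau` are placeholder hyperparameter values.

```python
import torch
import torch.nn.functional as F

def matrix_entropy(g, eps=1e-12):
    lam = torch.linalg.eigvalsh(g).clamp(min=0)
    p = lam / lam.sum().clamp(min=eps)
    return -(p * torch.log(p.clamp(min=eps))).sum()

def matrix_mi(z1, z2):
    """MI(G(z1), G(z2)) with G the Gram matrix of L2-normalized rows."""
    g1 = F.normalize(z1, dim=1) @ F.normalize(z1, dim=1).T
    g2 = F.normalize(z2, dim=1) @ F.normalize(z2, dim=1).T
    return matrix_entropy(g1) + matrix_entropy(g2) - matrix_entropy(g1 * g2)

def mi_regularized_loss(loss_ssl, feats_u, logits_u_weak, classifier_w, tau=0.95, lambda_mi=1.0):
    """L = L_ssl - lambda_mi * MI(G(f'), G(V')) over confident unlabeled samples."""
    q = F.softmax(logits_u_weak, dim=-1)
    conf, pseudo = q.max(dim=-1)
    keep = conf > tau
    if keep.sum() < 2:                       # need at least two samples for a meaningful Gram matrix
        return loss_ssl
    f_prime = feats_u[keep]                  # f'
    v_prime = classifier_w[pseudo[keep]]     # V'_i = W_{y'_i}
    return loss_ssl - lambda_mi * matrix_mi(f_prime, v_prime)
```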
288
+
289
Minimizing entropy difference. As shown in Figure 2, during training the difference in information entropy between a batch's data features $f$ and the associated class weights $V$ shrinks as accuracy increases. We therefore introduce an auxiliary loss term to further reduce this entropy difference. For supervised learning, the final optimization objective is
290
+
291
+ $$
292
\mathcal{L} = \mathcal{L}_{s} + \lambda_{id} \cdot \left| \mathrm{H}(\mathbf{G}(f)) - \mathrm{H}(\mathbf{G}(V)) \right|.
293
+ $$
294
+
295
For semi-supervised learning, the objective becomes
296
+
297
+ $$
298
\mathcal{L} = \mathcal{L}_{ssl} + \lambda_{id} \cdot \left| \mathrm{H}(\mathbf{G}(f^{\prime})) - \mathrm{H}(\mathbf{G}(V^{\prime})) \right|,
299
+ $$
300
+
301
where $\lambda_{id}$ is the weight for the entropy difference term.
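The corresponding sketch for the entropy-difference regularizer, using the same Gram-matrix entropy and the same confident-sample selection as above; `lambda_id` is again a placeholder hyperparameter value.

```python
import torch
import torch.nn.functional as F

def matrix_entropy(g, eps=1e-12):
    lam = torch.linalg.eigvalsh(g).clamp(min=0)
    p = lam / lam.sum().clamp(min=eps)
    return -(p * torch.log(p.clamp(min=eps))).sum()

def entropy_difference_loss(loss_ssl, f_prime, v_prime, lambda_id=1.0):
    """L = L_ssl + lambda_id * |H(G(f')) - H(G(V'))|."""
    g_f = F.normalize(f_prime, dim=1) @ F.normalize(f_prime, dim=1).T
    g_v = F.normalize(v_prime, dim=1) @ F.normalize(v_prime, dim=1).T
    return loss_ssl + lambda_id * (matrix_entropy(g_f) - matrix_entropy(g_v)).abs()
```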
302
+
303
+ Table 1. Error rates (100% - accuracy) on CIFAR-10/100, and STL-10 datasets for state-of-the-art methods in semi-supervised learning. Bold indicates the best performance, and underline indicates the second best.
304
+
305
+ <table><tr><td>Dataset</td><td colspan="3">CIFAR-10</td><td colspan="2">CIFAR-100</td><td colspan="2">STL-10</td></tr><tr><td># Label</td><td>10</td><td>40</td><td>250</td><td>400</td><td>2500</td><td>40</td><td>1000</td></tr><tr><td>II Model (Rasmus et al., 2015)</td><td>79.18±1.11</td><td>74.34±1.76</td><td>46.24±1.29</td><td>86.96±0.80</td><td>58.80±0.66</td><td>74.31±0.85</td><td>32.78±0.40</td></tr><tr><td>Pseudo Label (Lee et al., 2013)</td><td>80.21±0.55</td><td>74.61±0.26</td><td>46.49±2.20</td><td>87.45±0.85</td><td>57.74±0.28</td><td>74.68±0.99</td><td>32.64±0.71</td></tr><tr><td>VAT (Miyato et al., 2018)</td><td>79.81±1.17</td><td>74.66±2.12</td><td>41.03±1.79</td><td>85.20±1.40</td><td>48.84±0.79</td><td>74.74±0.38</td><td>37.95±1.12</td></tr><tr><td>MeanTeacher (Tarvainen &amp; Valpola, 2017)</td><td>76.37±0.44</td><td>70.09±1.60</td><td>37.46±3.30</td><td>81.11±1.44</td><td>45.17±1.06</td><td>71.72±1.45</td><td>33.90±1.37</td></tr><tr><td>MixMatch (Berthelot et al., 2019b)</td><td>65.76±7.06</td><td>36.19±6.48</td><td>13.63±0.59</td><td>67.59±0.66</td><td>39.76±0.48</td><td>54.93±0.96</td><td>21.70±0.68</td></tr><tr><td>ReMixMatch (Berthelot et al., 2019a)</td><td>20.77±7.48</td><td>9.88±1.03</td><td>6.30±0.05</td><td>42.75±1.05</td><td>26.03±0.35</td><td>32.12±6.24</td><td>6.74±0.17</td></tr><tr><td>UDA (Xie et al., 2020)</td><td>34.53±10.69</td><td>10.62±3.75</td><td>5.16±0.06</td><td>46.39±1.59</td><td>27.73±0.21</td><td>37.42±8.44</td><td>6.64±0.17</td></tr><tr><td>FixMatch (Sohn et al., 2020)</td><td>24.79±7.65</td><td>7.47±0.28</td><td>5.07±0.05</td><td>46.42±0.82</td><td>28.03±0.16</td><td>35.97±4.14</td><td>6.25±0.33</td></tr><tr><td>Dash (Xu et al., 2021)</td><td>27.28±14.09</td><td>8.93±3.11</td><td>5.16±0.23</td><td>44.82±0.96</td><td>27.15±0.22</td><td>34.52±4.30</td><td>6.39±0.56</td></tr><tr><td>MPL (Pham et al., 2021)</td><td>23.55±6.01</td><td>6.93±0.17</td><td>5.76±0.24</td><td>46.26±1.84</td><td>27.71±0.19</td><td>35.76±4.83</td><td>6.66±0.00</td></tr><tr><td>FlexMatch (Zhang et al., 2021)</td><td>13.85±12.04</td><td>4.97±0.06</td><td>4.98±0.09</td><td>39.94±1.62</td><td>26.49±0.20</td><td>29.15±4.16</td><td>5.77±0.18</td></tr><tr><td>FreeMatch (Wang et al., 2023)</td><td>8.07±4.24</td><td>4.90±0.04</td><td>4.88±0.18</td><td>37.98±0.42</td><td>26.47±0.20</td><td>15.56±0.55</td><td>5.63±0.15</td></tr><tr><td>OTMatch (Tan et al., 2023c)</td><td>4.89±0.76</td><td>4.72±0.08</td><td>4.60±0.15</td><td>37.29±0.76</td><td>26.04±0.21</td><td>12.10±0.72</td><td>5.60±0.14</td></tr><tr><td>SoftMatch (Chen et al., 2023)</td><td>4.91±0.12</td><td>4.82±0.09</td><td>4.04±0.02</td><td>37.10±0.07</td><td>26.66±0.25</td><td>21.42±3.48</td><td>5.73±0.24</td></tr><tr><td>FreeMatch + Maximizing Mutual Information (Ours)</td><td>4.87±0.66</td><td>4.66±0.13</td><td>4.56±0.15</td><td>36.41±1.91</td><td>25.77±0.35</td><td>16.61±1.19</td><td>5.24±0.17</td></tr><tr><td>FreeMatch + Minimizing Entropy Difference (Ours)</td><td>4.69±0.16</td><td>4.63±0.25</td><td>4.60±0.15</td><td>37.31±1.96</td><td>25.79±0.41</td><td>14.93±3.28</td><td>5.30±0.18</td></tr></table>
306
+
307
+ # 6.3. Performances on supervised and semi-supervised learning
308
+
309
To ensure a fair comparison between our method and existing approaches, we design our experiments on top of prior work. We use TorchSSL (Zhang et al., 2021), a codebase covering a wide range of semi-supervised learning techniques as well as supervised learning implementations, and evaluate on the well-established CIFAR-10, CIFAR-100, and STL-10 datasets. For supervised learning, our additional loss components (mutual information and entropy difference) are computed on labeled data; for semi-supervised learning, they are extended to unlabeled data. We use an SGD optimizer with momentum 0.9 and weight decay $5e^{-4}$. The learning rate is initially set to 0.03 with cosine annealing. We report performance over several random seeds. The batch size is 64 throughout a training regimen of 1,048,000 iterations. For model architecture, WideResNet-28-2, WideResNet-28-8, and WideResNet-37-2 are used for CIFAR-10, CIFAR-100, and STL-10, respectively.
310
+
311
We train supervised and semi-supervised models using mutual information and entropy difference as additional constraints in the loss function. Tables 1 and 2 present the performance for semi-supervised and supervised learning, respectively.
312
+
313
+ Table 2. Results for fully supervised learning
314
+
315
+ <table><tr><td>Datasets</td><td>CIFAR-10</td><td>CIFAR-100</td></tr><tr><td>Fully supervised</td><td>95.35</td><td>80.77</td></tr><tr><td>Ours (MIR)</td><td>95.52</td><td>80.81</td></tr><tr><td>Ours (HDR)</td><td>95.57</td><td>80.96</td></tr></table>
316
+
317
Applying the mutual information and entropy difference constraints leads to a slight improvement in supervised learning performance. We believe this is because sufficient labeled data already provides adequate information constraints, leaving only modest room for improvement. In semi-supervised learning, however, maximizing mutual information and minimizing the entropy difference achieve the best or second-best performance in most settings. Moreover, our method consistently outperforms our baseline, FreeMatch, across various settings. This suggests that when labeled samples are scarce, additional information constraints can more effectively improve model performance.
318
+
319
+ # 7. Conclusion
320
+
321
In conclusion, we have advanced the understanding of the dynamics of supervised learning by leveraging matrix information theory and Neural Collapse theory. Our introduction of the matrix mutual information ratio (MIR) and the matrix entropy difference ratio (HDR) provides novel insights into the interplay between data representations and classification head vectors, serving as new tools for understanding the dynamics of neural networks.
322
+
323
+ Through a series of rigorous theoretical and empirical analyses, we demonstrate the effectiveness of MIR and HDR in elucidating various neural network phenomena, such as
324
+
325
+ grokking, and their utility in improving training dynamics. The incorporation of these metrics as loss functions in supervised and semi-supervised learning shows promising results, indicating their potential to enhance model performance and training efficiency. This study not only contributes to the field of machine learning by offering new analytical tools but also applies matrix information to optimize supervised learning algorithms.
326
+
327
+ # Acknowledgment
328
+
329
Huimin Ma and Bochao Zou are supported by the National Natural Science Foundation of China (No.U20B2062, No.62227801, No.62206015) and the National Science and Technology Major Project (2022ZD0117900).
330
+
331
+ Weiran Huang is supported by 2023 CCF-Baidu Open Fund and Microsoft Research Asia.
332
+
333
+ We would also like to express our sincere gratitude to the reviewers of ICML 2024 for their insightful and constructive feedback. Their valuable comments have greatly contributed to improving the quality of our work.
334
+
335
+ # Impact Statement
336
+
337
This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
338
+
339
+ # References
340
+
341
+ Altintas, G. S., Bachmann, G., Noci, L., and Hofmann, T. Disentangling linear mode connectivity. In UniReps: the First Workshop on Unifying Representations in Neural Models, 2023. 2, 5
342
+ Bach, F. Information theory with kernel methods. IEEE Transactions on Information Theory, 69(2):752-775, 2022. 2
343
+ Berthelot, D., Carlini, N., Cubuk, E. D., Kurakin, A., Sohn, K., Zhang, H., and Raffel, C. Remixmatch: Semi-supervised learning with distribution alignment and augmentation anchoring. arXiv preprint arXiv:1911.09785, 2019a. 8
344
+ Berthelot, D., Carlini, N., Goodfellow, I., Papernot, N., Oliver, A., and Raffel, C. A. Mixmatch: A holistic approach to semi-supervised learning. Advances in neural information processing systems, 32, 2019b. 2, 8
345
+ Chan, W., Jaitly, N., Le, Q., and Vinyals, O. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In 2016 IEEE international conference on acoustics, speech and signal processing (ICASSP), pp. 4960-4964. IEEE, 2016. 1
346
+ Chen, H., Tao, R., Fan, Y., Wang, Y., Wang, J., Schiele, B., Xie, X., Raj, B., and Savvides, M. Softmatch: Addressing the quantity-quality trade-off in semi-supervised learning. International Conference on Learning Representations (ICLR), 2023. 2, 8
347
+
348
+ Frankle, J., Dziugaite, G. K., Roy, D., and Carbin, M. Linear mode connectivity and the lottery ticket hypothesis. In International Conference on Machine Learning, pp. 3259-3269. PMLR, 2020. 1, 2, 5
349
+ Garrido, Q., Balestriero, R., Najman, L., and Lecun, Y. Rankme: Assessing the downstream performance of pretrained self-supervised representations by their rank. In International Conference on Machine Learning, pp. 10929-10974. PMLR, 2023. 12
350
+ Girshick, R. Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pp. 1440-1448, 2015. 1
351
+ Han, S., Pool, J., Tran, J., and Dally, W. Learning both weights and connections for efficient neural network. Advances in neural information processing systems, 28, 2015. 6
352
+ Han, X., Papyan, V., and Donoho, D. L. Neural collapse under MSE loss: Proximity to and dynamics on the central path. In ICLR, 2021. 2
353
+ He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016. 1
354
+ Hinton, G., Deng, L., Yu, D., Dahl, G. E., Mohamed, A.-r., Jaitly, N., Senior, A., Vanhoucke, V., Nguyen, P., Sainath, T. N., et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal processing magazine, 29(6):82-97, 2012. 1
355
+ Krizhevsky, A., Sutskever, I., and Hinton, G. E. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097-1105, 2012. 1
356
+ Lee, D.-H. et al. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on challenges in representation learning, ICML, volume 3, pp. 896, 2013. 8
357
+ Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., and Zitnick, C. L. Microsoft coco: Common objects in context. In European conference on computer vision, pp. 740-755. Springer, 2014. 1
358
+ Miyato, T., Maeda, S.-i., Koyama, M., and Ishii, S. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE transactions on pattern analysis and machine intelligence, 41(8):1979-1993, 2018. 8
359
+ Nanda, N., Chan, L., Lieberum, T., Smith, J., and Steinhardt, J. Progress measures for grokking via mechanistic interpretability. In The Eleventh International Conference on Learning Representations, 2022. 2, 6
360
+ Papyan, V., Han, X., and Donoho, D. L. Prevalence of neural collapse during the terminal phase of deep learning training. Proceedings of the National Academy of Sciences, 117(40): 24652-24663, 2020. 1, 2, 3
361
+ Pham, H., Dai, Z., Xie, Q., and Le, Q. V. Meta pseudo labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11557-11568, 2021. 8
362
+ Power, A., Burda, Y., Edwards, H., Babuschkin, I., and Misra, V. Grokking: Generalization beyond overfitting on small algorithmic datasets. arXiv preprint arXiv:2201.02177, 2022. 1, 2
363
+
364
+ Rasmus, A., Berglund, M., Honkala, M., Valpola, H., and Raiko, T. Semi-supervised learning with ladder networks. Advances in Neural Information Processing Systems, 28:3546-3554, 2015. 8
365
+ Ronneberger, O., Fischer, P., and Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention-MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, pp. 234-241. Springer, 2015. 1
366
+ Skean, O., Osorio, J. K. H., Brockmeier, A. J., and Giraldo, L. G. S. Dime: Maximizing mutual information by a difference of matrix-based entropies. arXiv preprint arXiv:2301.08164, 2023. 3
367
+ Sohn, K., Berthelot, D., Carlini, N., Zhang, Z., Zhang, H., Raffel, C. A., Cubuk, E. D., Kurakin, A., and Li, C.-L. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. Advances in neural information processing systems, 33:596-608, 2020. 2, 8
368
+ Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2818-2826, 2016. 5
369
+ Tan, Z. and Huang, W. Understanding grokking through a robustness viewpoint. arXiv preprint arXiv:2311.06597, 2023. 6
370
+ Tan, Z., Wang, Z., and Zhang, Y. Seal: Simultaneous label hierarchy exploration and learning. arXiv preprint arXiv:2304.13374, 2023a. 2
371
+ Tan, Z., Yang, J., Huang, W., Yuan, Y., and Zhang, Y. Information flow in self-supervised learning. arXiv e-prints, pp. arXiv-2309, 2023b. 2
372
Tan, Z., Zheng, K., and Huang, W. OTMatch: Improving semi-supervised learning with optimal transport. arXiv preprint arXiv:2310.17455, 2023c. 2, 8
373
+ Tarvainen, A. and Valpola, H. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. Advances in neural information processing systems, 30, 2017. 8
374
+ Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention is all you need. Advances in neural information processing systems, 30, 2017. 1
375
+ Wang, X., Al-Bashabsheh, A., Zhao, C., and Chan, C. Adaptive label smoothing for classifier-based mutual information neural estimation. In 2021 IEEE International Symposium on Information Theory (ISIT), pp. 1035-1040. IEEE, 2021. 2
376
+ Wang, Y., Chen, H., Heng, Q., Hou, W., Fan, Y., Wu, Z., Wang, J., Savvides, M., Shinozaki, T., Raj, B., et al. Freematch: Self-adaptive thresholding for semi-supervised learning. In Eleventh International Conference on Learning Representations, 2023. 2, 7, 8
377
+ Wei, L., Tan, Z., Li, C., Wang, J., and Huang, W. Large language model evaluation via matrix entropy. arXiv preprint arXiv:2401.17139, 2024. 4
378
+
379
+ Xie, Q., Dai, Z., Hovy, E., Luong, T., and Le, Q. Unsupervised data augmentation for consistency training. Advances in Neural Information Processing Systems, 33:6256-6268, 2020. 8
380
+ Xu, Y., Shang, L., Ye, J., Qian, Q., Li, Y.-F., Sun, B., Li, H., and Jin, R. Dash: Semi-supervised learning with dynamic thresholding. In International Conference on Machine Learning, pp. 11525-11536. PMLR, 2021. 8
381
+ Zhang, B., Wang, Y., Hou, W., Wu, H., Wang, J., Okumura, M., and Shinozaki, T. Flexmatch: Boosting semi-supervised learning with curriculum pseudo labeling. Advances in Neural Information Processing Systems, 34:18408-18419, 2021. 2, 8
382
+ Zhang, Y., Tan, Z., Yang, J., Huang, W., and Yuan, Y. Matrix information theory for self-supervised learning. arXiv preprint arXiv:2305.17326, 2023a. 2, 4
383
+ Zhang, Y., Yang, J., Tan, Z., and Yuan, Y. Relationmatch: Matching in-batch relationships for semi-supervised learning. arXiv preprint arXiv:2305.10397, 2023b. 2
384
+ Zhou, J., You, C., Li, X., Liu, K., Liu, S., Qu, Q., and Zhu, Z. Are all losses created equal: A neural collapse perspective. NeurIPS, 35:31697-31710, 2022. 2
385
+
386
+ # Appendix
387
+
388
+ # A. Detailed proofs for Neural Collapse related Theorems
389
+
390
+ Theorem A.1. Suppose Neural collapse happens. Then $\mathrm{HDR}(\mathbf{G}(\mathbf{W}^T),\mathbf{G}(\mathbf{M})) = 0$ and $\mathrm{MIR}(\mathbf{G}(\mathbf{W}^T),\mathbf{G}(\mathbf{M})) = \frac{1}{C - 1} +\frac{(C - 2)\log(C - 2)}{(C - 1)\log(C - 1)}.$
391
+
392
+ Proof. By (NC 3), we know that $\mathbf{W}^T = \frac{\|\mathbf{W}\|_F}{\|\mathbf{M}\|_F} \mathbf{M}$ . Noticing that $\frac{\|\mathbf{W}\|_F}{\|\mathbf{M}\|_F} > 0$ , we know that $\frac{w_i}{\|w_i\|} = \frac{\tilde{\mu}_i}{\|\tilde{\mu}_i\|}$ . It is then very clear that $\mathbf{G}(\mathbf{W}^T) = \mathbf{G}(\mathbf{M})$ . Therefore from definition 4.1 and 3.4, it is clear that $\mathrm{HDR}(\mathbf{G}(\mathbf{W}^T), \mathbf{G}(\mathbf{M})) = 0$ .
393
+
394
Define $\mathcal{E}(\alpha) = \begin{bmatrix} 1 & \alpha & \dots & \alpha \\ \alpha & 1 & \dots & \alpha \\ \vdots & \vdots & \ddots & \vdots \\ \alpha & \alpha & \dots & 1 \end{bmatrix}$. From (NC 2), we know that $\mathbf{G}(\mathbf{W}^T) = \mathbf{G}(\mathbf{M}) = \mathcal{E}\left(\frac{-1}{C - 1}\right)$ and $\mathbf{G}(\mathbf{W}^T) \odot \mathbf{G}(\mathbf{M}) = \mathcal{E}\left(\frac{1}{(C - 1)^2}\right)$. Noticing that $\mathcal{E}(\alpha) = (1 - \alpha)\mathbf{I}_C + \alpha \mathbf{1}_C^T\mathbf{1}_C$, its spectrum is $1 - \alpha$ (with multiplicity $C - 1$) and $1 + (C - 1)\alpha$ (with multiplicity $1$). Therefore, we obtain $\mathrm{H}(\mathbf{G}(\mathbf{W}^T)) = \mathrm{H}(\mathbf{G}(\mathbf{M})) = \log(C - 1)$, and $\mathrm{H}(\mathbf{G}(\mathbf{W}^T) \odot \mathbf{G}(\mathbf{M})) = -\frac{1}{C - 1}\log\frac{1}{C - 1} - (C - 1)\frac{C - 2}{(C - 1)^2}\log\frac{C - 2}{(C - 1)^2} = \frac{1}{C - 1}\log(C - 1) - \frac{C - 2}{C - 1}\log(C - 2) + \frac{2(C - 2)}{C - 1}\log(C - 1) = \left(2 - \frac{1}{C - 1}\right)\log(C - 1) - \frac{C - 2}{C - 1}\log(C - 2)$. Then the conclusion follows from Definition 3.3. $\square$
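As a quick numerical sanity check, the entropy values above can be reproduced by constructing a simplex ETF, whose Gram matrix is exactly $\mathcal{E}(\frac{-1}{C-1})$. The script assumes the trace-normalized eigenvalue definition of matrix entropy and, since both entropies coincide at collapse, normalizes the mutual information by $\log(C-1)$ to recover the MIR value in Theorem A.1; the paper's Definition 3.3 fixes the exact normalization.

```python
import numpy as np

def matrix_entropy(g):
    lam = np.clip(np.linalg.eigvalsh(g), 0, None)
    p = lam / lam.sum()
    p = p[p > 1e-12]
    return float(-(p * np.log(p)).sum())

C = 10
# Simplex ETF: unit vectors with pairwise inner product -1/(C-1).
E = np.eye(C) - np.ones((C, C)) / C
M = E / np.linalg.norm(E, axis=0, keepdims=True)       # columns are the class means
G = M.T @ M                                             # equals E(-1/(C-1))

h = matrix_entropy(G)                                   # should equal log(C-1)
mi = 2 * h - matrix_entropy(G * G)                      # H(G1)+H(G2)-H(G1∘G2) with G1=G2=G
mir_theory = 1/(C-1) + (C-2)*np.log(C-2)/((C-1)*np.log(C-1))

print(np.isclose(h, np.log(C-1)))                       # True
print(np.isclose(mi / np.log(C-1), mir_theory))         # True
```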
397
+
398
As the linear weight matrix $\mathbf{W}$ can be seen as a (prototype) embedding for each class, it is natural to consider the mutual information and entropy difference between sample embeddings and label embeddings. We discuss this in the following corollary.
399
+
400
+ Corollary A.2. Suppose the dataset is class-balanced, $\mu_G = 0$ and Neural collapse happens. Denote $\mathbf{Z}_1 = [h(\mathbf{x}_1)\dots h(\mathbf{x}_n)]\in \mathbb{R}^{d\times n}$ and $\mathbf{Z}_2 = [w_{y_1}\dots w_{y_n}]\in \mathbb{R}^{d\times n}$ . Then $\mathrm{HDR}(\mathbf{Z}_1,\mathbf{Z}_2) = 0$ and $\mathrm{MIR}(\mathbf{Z}_1,\mathbf{Z}_2) = \frac{1}{C - 1} +\frac{(C - 2)\log(C - 2)}{(C - 1)\log(C - 1)}$ .
401
+
402
+ Proof. Denote $n_1 = \frac{n}{C}$ the number of samples in each class. Without loss of generality, assume samples are arranged as $C$ consecutive groups, each group has samples from the same class.
403
+
404
Define $\mathcal{E}(\alpha) = \begin{bmatrix} 1 & \alpha & \dots & \alpha \\ \alpha & 1 & \dots & \alpha \\ \vdots & \vdots & \ddots & \vdots \\ \alpha & \alpha & \dots & 1 \end{bmatrix}$, and let $\mathbf{1}_{n_1,n_1}$ be the $n_1\times n_1$ matrix with all elements equal to 1.
405
+
406
From (NC 1), we know that $\mathbf{G}(\mathbf{Z}_1) = \mathbf{G}(\mathbf{Z}_2) = \mathcal{E}\left(\frac{-1}{C - 1}\right)\otimes \mathbf{1}_{n_1,n_1}$ and $\mathbf{G}(\mathbf{Z}_1)\odot \mathbf{G}(\mathbf{Z}_2) = \mathcal{E}\left(\frac{1}{(C - 1)^2}\right)\otimes \mathbf{1}_{n_1,n_1}$, where $\otimes$ is the Kronecker product. By the spectral property of the Kronecker product, the non-zero spectrum of $\mathbf{G}(\mathbf{Z}_1)$ is $n_1$ times the spectrum of $\mathcal{E}\left(\frac{-1}{C - 1}\right)$. The corollary then follows by the same argument as in the proof of Theorem 4.2. $\square$
407
+
408
+ # B. Some theoretical guarantees for HDR
409
+
410
Mutual information is a very intuitive quantity in information theory. The difference of entropies, on the other hand, may seem less natural to consider, but we show that this quantity is closely linked to comparing the approximation ability of different representations for the same target.
411
+
412
+ For ease of theoretical analysis, in this section, we consider the MSE regression loss.
413
+
414
The following lemma 4.4 shows that the regressions of two sets of representations $\mathbf{Z}_1$ and $\mathbf{Z}_2$ onto the same target $\mathbf{Y}$ are closely related, and that the two approximation errors are linked through the regression error of $\mathbf{Z}_1$ onto $\mathbf{Z}_2$.
415
+
416
+ Lemma B.1. Suppose $\mathbf{W}_1^*$ , $\mathbf{b}_1^* = \arg \min_{\mathbf{W},\mathbf{b}}\| \mathbf{Y} - (\mathbf{W}\mathbf{Z}_1 + \mathbf{b}\mathbf{1}_N)\| _F$ . Then $\min_{\mathbf{W},\mathbf{b}}\| \mathbf{Y} - (\mathbf{W}\mathbf{Z}_2 + \mathbf{b}\mathbf{1}_N)\| _F \leq \min_{\mathbf{W},\mathbf{b}}\| \mathbf{Y} - (\mathbf{W}\mathbf{Z}_1 + \mathbf{b}\mathbf{1}_N)\| _F + \| \mathbf{W}_1^*\| _F\min_{\mathbf{H},\eta}\| \mathbf{Z}_1 - (\mathbf{H}\mathbf{Z}_2 + \eta \mathbf{1}_N)\| _F$ .
417
+
418
Proof. Suppose $\mathbf{H}^*$, $\eta^* = \arg \min_{\mathbf{H},\eta}\| \mathbf{Z}_1 - (\mathbf{H}\mathbf{Z}_2 + \eta \mathbf{1}_N)\|_F$. Then

$$
\begin{aligned}
\min_{\mathbf{W},\mathbf{b}}\| \mathbf{Y} - (\mathbf{W}\mathbf{Z}_2 + \mathbf{b}\mathbf{1}_N)\|_F
&\leq \| \mathbf{Y} - (\mathbf{W}_1^*\mathbf{H}^*\mathbf{Z}_2 + (\mathbf{b}_1^* + \mathbf{W}_1^*\eta^*)\mathbf{1}_N)\|_F \\
&\leq \| \mathbf{Y} - (\mathbf{W}_1^*\mathbf{Z}_1 + \mathbf{b}_1^*\mathbf{1}_N)\|_F + \| \mathbf{W}_1^*(\mathbf{Z}_1 - (\mathbf{H}^*\mathbf{Z}_2 + \eta^*\mathbf{1}_N))\|_F \\
&\leq \min_{\mathbf{W},\mathbf{b}}\| \mathbf{Y} - (\mathbf{W}\mathbf{Z}_1 + \mathbf{b}\mathbf{1}_N)\|_F + \| \mathbf{W}_1^*\|_F \min_{\mathbf{H},\eta}\| \mathbf{Z}_1 - (\mathbf{H}\mathbf{Z}_2 + \eta\mathbf{1}_N)\|_F,
\end{aligned}
$$

where the second inequality is the triangle inequality and the last uses the submultiplicativity of the Frobenius norm together with the definitions of $\mathbf{W}_1^*, \mathbf{b}_1^*$ and $\mathbf{H}^*, \eta^*$. $\square$
419
+
420
+ From lemma 4.4, we know that the regression error of $\mathbf{Z}_1$ to $\mathbf{Z}_2$ is crucial for understanding the differences of representations. We further bound the regression error with rank and singular values in the following lemma 4.5.
421
+
422
Lemma B.2. Suppose $\mathbf{Z}_1 = [\mathbf{z}_1^{(1)}\dots \mathbf{z}_N^{(1)}]\in \mathbb{R}^{d'\times N}$ and $\mathbf{Z}_2 = [\mathbf{z}_1^{(2)}\dots \mathbf{z}_N^{(2)}]\in \mathbb{R}^{d\times N}$ with $\mathrm{rank}(\mathbf{Z}_1) > \mathrm{rank}(\mathbf{Z}_2)$. Denote the singular values of $\frac{\mathbf{Z}_1}{\sqrt{N}}$ as $\sigma_1\geq \dots \geq \sigma_N$. Then $\min_{\mathbf{H},\eta}\frac{1}{N}\| \mathbf{Z}_1 - (\mathbf{H}\mathbf{Z}_2 + \eta \mathbf{1}_N)\| _F^2\geq \sum_{j = \mathrm{rank}(\mathbf{Z}_2) + 2}^{\mathrm{rank}(\mathbf{Z}_1)}\sigma_j^2$.
423
+
424
+ Proof. The proof idea is similar to (Garrido et al., 2023). Suppose $\mathbf{H}^*$ , $\eta^* = \arg \min_{\mathbf{H},\eta}\frac{1}{N}\|\mathbf{Z}_1 - (\mathbf{H}\mathbf{Z}_2 + \eta\mathbf{1}_N)\|_F^2$ and $r = \mathrm{rank}(\mathbf{H}^*\mathbf{Z}_2 + \eta^*\mathbf{1}_N)$ .
425
+
426
Then, by the Eckart-Young-Mirsky theorem, $\frac{1}{N}\| \mathbf{Z}_1 - (\mathbf{H}^*\mathbf{Z}_2 + \eta^*\mathbf{1}_N)\| _F^2\geq \sum_{j = r + 1}^N \sigma_j^2$. Noting that $r\leq \mathrm{rank}(\mathbf{Z}_2) + 1$ and that singular values with index larger than $\mathrm{rank}(\mathbf{Z}_1)$ are 0, the conclusion follows. $\square$
427
+
428
The bound given by lemma 4.5 is not immediately intuitive. Assuming the features are normalized, we derive a connection between the regression error and the ratio of ranks in theorem 4.6.
429
+
430
Theorem B.3. Suppose $\| \mathbf{z}_j^{(1)}\| _2 = 1$ for $1\leq j\leq N$. Then the lower bound of the approximation error can be upper-bounded as follows: $\sum_{j = \mathrm{rank}(\mathbf{Z}_2) + 2}^{\mathrm{rank}(\mathbf{Z}_1)}\sigma_j^2\leq \frac{\mathrm{rank}(\mathbf{Z}_1) - \mathrm{rank}(\mathbf{Z}_2) - 1}{\mathrm{rank}(\mathbf{Z}_1)}\leq 1 - \frac{\mathrm{rank}(\mathbf{Z}_2)}{\mathrm{rank}(\mathbf{Z}_1)}.$
431
+
432
Proof. The proof follows directly by noticing that the squared singular values of $\frac{\mathbf{Z}_1}{\sqrt{N}}$ sum to 1 (since the columns have unit norm) and that the singular values are sorted in decreasing order. $\square$
unveilingthedynamicsofinformationinterplayinsupervisedlearning/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f68253b68cda7b3d9cab39c2fcafc6cf427c1dee8b6de44c56d824945dc597ec
3
+ size 411694
unveilingthedynamicsofinformationinterplayinsupervisedlearning/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:43f31ac8a0e99fc0260ba1e9b7cfee0f0dd4ae2a015511ac758b7cc0a5e9840d
3
+ size 557195