Add files using upload-large-folder tool
- 2002.06753/main_diagram/main_diagram.drawio +1 -0
- 2002.06753/main_diagram/main_diagram.pdf +0 -0
- 2002.06753/paper_text/intro_method.md +38 -0
- 2007.13078/main_diagram/main_diagram.drawio +0 -0
- 2007.13078/paper_text/intro_method.md +73 -0
- 2009.05166/main_diagram/main_diagram.drawio +1 -0
- 2009.05166/main_diagram/main_diagram.pdf +0 -0
- 2009.05166/paper_text/intro_method.md +37 -0
- 2011.12663/main_diagram/main_diagram.drawio +0 -0
- 2011.12663/paper_text/intro_method.md +29 -0
- 2103.16429/main_diagram/main_diagram.pdf +0 -0
- 2210.05577/main_diagram/main_diagram.drawio +0 -0
- 2210.05577/paper_text/intro_method.md +181 -0
- 2304.01042/main_diagram/main_diagram.drawio +0 -0
- 2304.01042/main_diagram/main_diagram.pdf +0 -0
- 2304.01042/paper_text/intro_method.md +165 -0
- 2406.03919/main_diagram/main_diagram.drawio +0 -0
- 2406.03919/paper_text/intro_method.md +159 -0
- 2411.03387/main_diagram/main_diagram.drawio +143 -0
- 2411.03387/main_diagram/main_diagram.pdf +0 -0
- 2411.03387/paper_text/intro_method.md +59 -0
- 2505.02537/main_diagram/main_diagram.drawio +265 -0
- 2505.02537/main_diagram/main_diagram.pdf +0 -0
- 2505.02537/paper_text/intro_method.md +63 -0
- 2506.06898/main_diagram/main_diagram.drawio +0 -0
- 2506.06898/paper_text/intro_method.md +43 -0
2002.06753/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="www.draw.io" modified="2020-02-04T15:36:49.732Z" agent="Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36" etag="Xfq5qaY-vBZMkZ7mJZ55" version="12.6.4" type="google"><diagram id="e245msCFES8Ne5Zk6oXc" name="Page-1">7VjbjtsgEP2aPFayIdfHNrvpRVqpVR7aV2QmNl1sLIzXzn59cQBf4mSTNMlmV9o8ROYwDMOcM2A8wPO4/CpJGj0ICnyAPFoO8N0AoYnv6/8KWBtgOEIGCCWjBvIbYMmewYKeRXNGIesYKiG4YmkXDESSQKA6GJFSFF2zleDdWVMSQg9YBoT30d+Mqsig05HX4N+AhZGb2fdsT0ycsQWyiFBRtCB8P8BzKYQyT3E5B17lzuXFjFvs6a0Dk5CoYwb8Ffmfhx+/CiGfvyPwkjTH+JP18kR4bhdsg1VrlwFI6OcqkbqViESDXyIVc93y9eOKcT4XXMiNLV5NAwgCjWdKikdo9SwWnv7VPS6XPtaQFHlCgVqXJgCgPV6ahVooE7kM4IXVTes0a3mCiEHJtR5XNEQ6HqMWhw6TwIliT90wiNVTWLurZ/gpmA4QeVb6yCnBKt8fe10XJnw7qk3clqPxIUeKyBBUz5F+aC27gTa6OEEj6EyN9LRQKWGx2K2FC7I/fkvso+GF2O85ujL74yPY51xvxxXrRcQULFOy4aXQB0JXCSRLzR69YmVV70dLY9bfamrLUBLKtDhcnxWgDRukgvJlKfUl4jicbaUe23ZLQmiHhJC3Xy0dek7lYvI2K9Eo8KMSr1yJ01etxD0H9o5KrC2vV4nDydYROLpxJc7OrMSbvjcdrNeP96ZL1Ku7wpwmEkqyqOb00N69SxknSOFWFGN/z8F6KsW9fWHyyhSfe386ltXhO2B1iLtkYO8/WcXjLVb9S7Gqm82F25g3Xy3w/T8=</diagram></mxfile>
2002.06753/main_diagram/main_diagram.pdf
ADDED
Binary file (3.71 kB).
2002.06753/paper_text/intro_method.md
ADDED
@@ -0,0 +1,38 @@
# Method

In the context of few-shot learning, the objective of meta-learning algorithms is to produce a network that quickly adapts to new classes using little data. Concretely stated, meta-learning algorithms find parameters that can be fine-tuned in few optimization steps and on few data points in order to achieve good generalization on a task $\mathcal{T}_i$, consisting of a small number of data samples from a distribution and label space that was not seen during training. The task is characterized as *n-way*, *k-shot* if the meta-learning algorithm must adapt to classify data from $\mathcal{T}_i$ after seeing $k$ examples from each of the $n$ classes in $\mathcal{T}_i$.

Meta-learning schemes typically rely on bi-level optimization problems with an *inner loop* and an *outer loop*. An iteration of the outer loop involves first sampling a "task," which comprises two sets of labeled data: the support data, $\mathcal{T}_i^s$, and the query data, $\mathcal{T}_i^q$. Then, in the inner loop, the model being trained is fine-tuned using the support data. Finally, the routine moves back to the outer loop, where the meta-learning algorithm minimizes loss on the query data with respect to the pre-fine-tuned weights. This minimization is executed by differentiating through the inner loop computation and updating the network parameters to make the inner loop fine-tuning as effective as possible. Note that, in contrast to standard transfer learning (which uses classical training and simple first-order gradient information to update parameters), meta-learning algorithms differentiate through the entire fine-tuning loop. A formal description of this process can be found in Algorithm [\[alg:MetaAlgorithm\]](#alg:MetaAlgorithm){reference-type="ref" reference="alg:MetaAlgorithm"}, as seen in [@goldblum2019robust].

:::: algorithm
::: algorithmic
Base model, $F_\theta$, fine-tuning algorithm, $A$, learning rate, $\gamma$, and distribution over tasks, $p(\mathcal{T})$. Initialize $\theta$, the weights of $F$;\
Sample batch of tasks, $\{\mathcal{T}_i\}_{i=1}^n$, where $\mathcal{T}_i \sim p(\mathcal{T})$ and $\mathcal{T}_i = (\mathcal{T}_i^s, \mathcal{T}_i^q)$.\
Fine-tune model on $\mathcal{T}_i$ (inner loop). New network parameters are written $\theta_{i} = A(\theta, \mathcal{T}_i^s)$.\
Compute gradient $g_i = \nabla_{\theta} \mathcal{L}(F_{\theta_{i}}, \mathcal{T}_i^{q})$.\
Update base model parameters (outer loop):\
$\theta \leftarrow \theta - \frac{\gamma}{n} \sum_i g_i$
:::
::::
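
To make the bi-level structure concrete, the following is a minimal sketch of the algorithm above with a MAML-style inner loop on a toy linear model; it is not the authors' implementation, and the task sampler, dimensions, and learning rates are illustrative assumptions.

```python
import torch

def sample_task(n_way=3, k_shot=5, d=5):
    # Hypothetical task sampler: Gaussian clusters around random class means.
    means = torch.randn(n_way, d)
    ys = torch.arange(n_way).repeat_interleave(k_shot)
    xs = means[ys] + 0.1 * torch.randn(n_way * k_shot, d)
    xq = means[ys] + 0.1 * torch.randn(n_way * k_shot, d)
    return xs, ys, xq, ys.clone()

def loss(w, x, y):
    return torch.nn.functional.cross_entropy(x @ w, y)

w = torch.zeros(5, 3, requires_grad=True)  # base parameters theta
alpha, gamma = 0.4, 0.1                    # inner / outer learning rates

for outer_step in range(100):
    meta_grad = torch.zeros_like(w)
    tasks = [sample_task() for _ in range(4)]
    for xs, ys, xq, yq in tasks:
        # Inner loop: one differentiable SGD step on the support set.
        g = torch.autograd.grad(loss(w, xs, ys), w, create_graph=True)[0]
        w_i = w - alpha * g                      # theta_i = A(theta, T_i^s)
        # Outer loop: query loss, differentiated w.r.t. the pre-fine-tuned w.
        meta_grad += torch.autograd.grad(loss(w_i, xq, yq), w)[0]
    with torch.no_grad():
        w -= (gamma / len(tasks)) * meta_grad    # theta <- theta - (gamma/n) sum g_i
```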

A variety of meta-learning algorithms exist, mostly differing in how they fine-tune on support data during the inner loop. Some meta-learning approaches, such as MAML, update all network parameters using gradient descent during fine-tuning [@finn2017model]. Because differentiating through the inner loop is memory- and compute-intensive, the fine-tuning process consists of only a few (sometimes just one) SGD steps.

Reptile, which functions as a zeroth-order approximation to MAML, avoids unrolling the inner loop and differentiating through the SGD steps. Instead, after fine-tuning on support data, Reptile moves the central parameter vector in the direction of the fine-tuned parameters during the outer loop [@nichol2018reptile]. In many cases, Reptile achieves better performance than MAML without having to differentiate through the fine-tuning process.

Another class of algorithms freezes the feature extraction layers during the inner loop; only the linear classifier layer is trained during fine-tuning. Such methods include R2-D2 and MetaOptNet [@bertinetto2018meta; @lee2019meta]. The advantage of this approach is that the fine-tuning problem is now a convex optimization problem. Unlike MAML, which simulates the fine-tuning process using only a few gradient updates, last-layer meta-learning methods can use differentiable optimizers to exactly minimize the fine-tuning objective and then differentiate the solution with respect to feature inputs. Moreover, differentiating through these solvers is computationally cheap compared to MAML's differentiation through SGD steps on the whole network. While MetaOptNet relies on an SVM loss, R2-D2 simplifies the process even further by using a quadratic objective with a closed-form solution. R2-D2 and MetaOptNet achieve stronger performance than MAML and are able to harness larger architectures without overfitting.
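
As an illustration of such a closed-form inner loop, here is a hedged sketch of an R2-D2-style ridge-regression head; the regularization constant and shapes are assumptions, and practical implementations often solve in the sample space via the Woodbury identity when the feature dimension exceeds the shot count.

```python
import torch

def ridge_head(support_feats, support_onehot, query_feats, lam=1.0):
    # Closed-form ridge-regression classifier on frozen features.
    # support_feats: (n*k, d), support_onehot: (n*k, n), query_feats: (q, d).
    d = support_feats.shape[1]
    G = support_feats.T @ support_feats + lam * torch.eye(d)
    W = torch.linalg.solve(G, support_feats.T @ support_onehot)  # (d, n)
    # The solve is differentiable, so the outer loop can backpropagate
    # through the exact minimizer into the feature extractor.
    return query_feats @ W                                       # query logits
```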

:::: table*
::: center

| Model        | SVM                    | RR                     | ProtoNet               | MAML                   |
|--------------|------------------------|------------------------|------------------------|------------------------|
| MetaOptNet-M | **62.64** $\pm$ 0.31 % | **60.50** $\pm$ 0.30 % | **51.99** $\pm$ 0.33 % | **55.77** $\pm$ 0.32 % |
| MetaOptNet-C | 56.18 $\pm$ 0.31 %     | 55.09 $\pm$ 0.30 %     | 41.89 $\pm$ 0.32 %     | 46.39 $\pm$ 0.28 %     |
| R2-D2-M      | **51.80** $\pm$ 0.20 % | **55.89** $\pm$ 0.31 % | **47.89** $\pm$ 0.32 % | **53.72** $\pm$ 0.33 % |
| R2-D2-C      | 48.39 $\pm$ 0.29 %     | 48.29 $\pm$ 0.29 %     | 28.77 $\pm$ 0.24 %     | 44.31 $\pm$ 0.28 %     |

:::
::::

Another last-layer method, ProtoNet, classifies examples by the proximity of their features to those of class centroids (a metric learning approach) in its inner loop [@snell2017prototypical]. Again, the feature extractor's parameters are frozen in the inner loop, and the extracted features are used to create class centroids which then determine the network's class boundaries. Because calculating class centroids is mathematically simple, this algorithm can efficiently backpropagate through the centroid computation to adjust the feature extractor.
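
A prototypical-network inner loop is similarly compact; the sketch below is an assumption-level illustration, not the reference implementation. It computes class centroids from support features and scores queries by negative squared distance.

```python
import torch

def protonet_logits(support_feats, support_labels, query_feats, n_way):
    # Class centroids from the frozen feature extractor's support embeddings.
    protos = torch.stack([support_feats[support_labels == c].mean(dim=0)
                          for c in range(n_way)])
    # Negative squared Euclidean distance to each centroid as query logits;
    # everything here is differentiable w.r.t. the features.
    return -torch.cdist(query_feats, protos) ** 2
```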

In this work, "classically trained" models are trained, using cross-entropy loss and SGD, on all classes simultaneously, and the feature extractors are adapted to new tasks using the same fine-tuning procedures as the meta-learned models for fair comparison. This approach represents the industry-standard method of transfer learning using pre-trained feature extractors.

Several datasets have been developed for few-shot learning. We focus our attention on two datasets: mini-ImageNet and CIFAR-FS. Mini-ImageNet is a pruned and downsized version of the ImageNet classification dataset, consisting of 60,000 $84 \times 84$ RGB color images from $100$ classes [@vinyals2016matching]. These 100 classes are split into $64, 16,$ and $20$ classes for training, validation, and testing sets, respectively. The CIFAR-FS dataset samples images from CIFAR-100 [@bertinetto2018meta]. CIFAR-FS is split in the same way as mini-ImageNet, with 60,000 $32 \times 32$ RGB color images from $100$ classes divided into $64, 16,$ and $20$ classes for training, validation, and testing sets, respectively.
2007.13078/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render.
2007.13078/paper_text/intro_method.md
ADDED
@@ -0,0 +1,73 @@
# Introduction

The ability to reason about the future states of multiple agents in a scene is an important task for applications that seek vehicle autonomy. Ideally, a prediction framework should have three properties. First, it must be able to predict multiple plausible trajectories in the dominant modes of motion. Second, these trajectories should be consistent with the scene semantics. Third, it is attractive for several applications if constant-time prediction can be achieved regardless of the number of agents in the scene. In this paper, we propose dataset creation and future prediction methods that help achieve the above three properties (Figure 1).

A fundamental limitation for multimodal trajectory prediction is the lack of training data with a diverse enough coverage of the possible motion modes in a scene.



Fig. 1: Left: Given the map and tracklets, we propose to reconstruct the real-world scene in top view and simulate diverse behaviors for multiple agents w.r.t. scene context. Right: The proposed SMART algorithm, which is able to generate context-aware and multimodal trajectories for multiple agents.

Our first main contribution is a simulation strategy to recreate driving scenarios from real-world data, which generates multiple driving behaviors to obtain diverse trajectories for a given scene. We construct a graph-based simulation environment that leverages scene semantics and maps to execute realistic vehicle behaviors in the top view. We sample reference velocity profiles from trajectories executing similar maneuvers in the real-world data. Then we use a variant of the Intelligent Driver Model [37,24] to model the dynamics of vehicle driving patterns and introduce lane-change decisions for simulated vehicles based on MOBIL [38]. We show that training with our simulated datasets leads to large improvements in prediction outcomes compared to their real-data counterparts, which are comparatively limited in both scale and diversity, as reflected by a Wasserstein metric.
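
For reference, the textbook Intelligent Driver Model acceleration rule can be sketched as below; the parameter values are generic defaults for illustration, not the ones used in our simulator's IDM variant.

```python
import numpy as np

def idm_accel(v, dv, s, v0=30.0, T=1.5, a_max=1.0, b=2.0, s0=2.0, delta=4):
    # v: ego speed, dv: approach rate to the leader, s: gap to the leader.
    s_star = s0 + v * T + v * dv / (2 * np.sqrt(a_max * b))  # desired gap
    return a_max * (1 - (v / v0) ** delta - (s_star / s) ** 2)
```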

Several recent works consider deep networks for trajectory prediction for humans [14,4,21,2,31] and vehicles [22,34,15,6,40,28]. Usually, they consider interactions among multiple agents, but still operate on a single-agent basis at inference time, requiring one forward pass for each agent in the scene. Vehicle motions are stochastic and depend on their goals; obtaining multimodal predictions for individual vehicles that are consistent with the scene significantly increases the time complexity. Our second main contribution addresses this through a novel approach, Simultaneous Multi-Agent Recurrent Trajectory (SMART) prediction. To the best of our knowledge, it is the first method to achieve multimodal, scene-consistent prediction of multiple agents in constant time.

Specifically, we propose a novel architecture based on Convolutional LSTMs (ConvLSTMs) [39] and conditional variational autoencoders (CVAEs) [33], where agent states and scene context are represented in the bird's-eye view. Our method predicts trajectories for n agents with a time complexity of O(1) (Table 1). To realize this, we use a single top-view grid map representation of all agents in the scene and utilize fully-convolutional operations to model the output predictions. Our ConvLSTM models the states of multiple agents, with novel state-pooling operations, to implicitly account for interactions among objects and handle dynamically changing scenarios. To obtain multimodal predictions, we assign labels to trajectories based on the type of maneuver they execute and query for trajectories executing specific behaviors at test time. Our variational generative model is conditioned on this label to capture diversity in executing maneuvers of various types.

Table 1: Comparison of our method with existing works in terms of complexity, scene context and interactions. n and K are the number of agents and iterations, respectively.

| Method              | Social GAN [14] | Desire [22] | SoPhie [31] | INFER [34] | MATF GAN [40] | Ours |
|---------------------|-----------------|-------------|-------------|------------|---------------|------|
| Complexity          | O(n)            | O(nK)       | $O(n^2)$    | O(n)       | O(n)          | O(1) |
| Scene Context       | ✗               | ✓           | ✓           | ✓          | ✓             | ✓    |
| Social Interactions | ✓               | ✓           | ✓           | ✓          | ✓             | ✓    |

We validate our ideas on both real and simulated datasets and demonstrate state-of-the-art prediction numbers on both. We evaluate the network performance based on average displacement error (ADE), final displacement error (FDE) and likelihood (NLL) of the predictions with respect to the ground truth. Our experiments are designed to highlight the importance of methods to simulate datasets with sufficient realism at larger scales and diversity, as well as a prediction method that accounts for multimodality while achieving constant-time outputs independent of the number of agents in the scene.

To summarize, our key contributions are:

- A method to achieve constant-time trajectory prediction independent of the number of agents in the scene, while accounting for multimodality and scene consistency.
- A method to simulate datasets in the top view that imbibe the realism of real-world data, while augmenting them with trajectories that cover diverse scene-consistent motion modes.

# Method

Given the lane centerline information $L^{1...m}$ for a scene, we render them in a top-view representation such that our scene context map $\mathcal{I}$ is of size $H \times W \times 3$, where the channel dimension represents one-hot information of each pixel corresponding to a $\{road, lane, unknown\}$ road element. Let $\mathbf{X}_i = \{X_i^1, X_i^2, ..., X_i^T\}$ denote the trajectory of the $i^{th}$ vehicle over timesteps $1 \ldots T$, where each $X_i^t = (x_i, y_i)^t$ represents the spatial location of the agent in the scene. Our network takes input in the form of relative coordinates ${}^{R}\mathbf{X}_{i}$ with respect to the agent's starting location. For the $i^{th}$ agent in the scene, we project ${}^R\mathbf{X}_i$ at the corresponding $\mathbf{X}_i$ locations to construct a spatial location map of states $\mathcal{S}^{1...T}$ such that $\mathcal{S}^t[X_i^t]$ contains the relative coordinates of the $i^{th}$ agent at timestep $t$. ${}^R\mathbf{Y}_i = {}^R\mathbf{X}_i^{t_{obs}...T}$ represents the ground truth trajectory, and we further denote by $\mathcal{M}^t$ the location mask representing the configuration of agents in the scene. To keep track of vehicles across timesteps, we construct a vehicle ID map $\mathcal{V}^{1...T}$ where $\mathcal{V}^t[X_i^t]=i$. Furthermore, we associate each trajectory $X_i^{t_{obs},...T}$ with a label $c_i$ that represents the behavioral type of the trajectory, one of {straight, left, right}; lane-change trajectories likewise fall into one of these three categories. Let $\mathcal{C}$ encode the grid map representation of $c_i$ such that $\mathcal{C}^t[X_i^t] = c_i$. Note that vehicle trajectories are less random than human motion; instead, they depend on the behaviors of other vehicles on the road, which motivates us to classify trajectories based on different maneuvers.
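
As a rough illustration of this grid-map representation (grid size, coordinate convention, and integer quantization are assumptions), the state map $\mathcal{S}^t$ and location mask $\mathcal{M}^t$ can be built by scattering per-agent values into a top-view grid:

```python
import numpy as np

def build_state_map(abs_xy, rel_xy, H=128, W=128):
    # abs_xy: (N, 2) integer pixel locations X_i^t; rel_xy: (N, 2) relative
    # coordinates of each agent w.r.t. its starting location.
    S = np.zeros((H, W, 2), dtype=np.float32)  # state map S^t
    M = np.zeros((H, W, 1), dtype=np.float32)  # location mask M^t
    for (x, y), r in zip(abs_xy, rel_xy):
        S[y, x] = r                            # row = y, column = x
        M[y, x] = 1.0
    return S, M
```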

We follow the formulation proposed in [40,15,34], where the network takes the previous states $\mathcal{S}^{1..t_{obs}}$ as input along with the scene context map $\mathcal{I}$, trajectory label map $\mathcal{C}$, location mask $\mathcal{M}$ and a noise map $\mathcal{Z}$ to predict the future trajectories ${}^{R}\hat{\mathbf{Y}}_{i}$ for every agent at its corresponding grid map location $X_{i}^{t}$ in the scene. Note that we do not have a separate head for each agent. Instead, our network predicts a single future state map $\hat{S}^t$ where each individual agent tries to match ${}^R\mathbf{Y}_i^t$ at $t$.

We illustrate our pipeline in Figure 4. Our network architecture comprises two major parts, a latent encoder and a conditional generator. We model the temporal information with the agents' previous locations using ConvLSTMs. We further introduce a state-pooling operation to feed each agent's state information at its respective location into the consecutive timestep. While we provide trajectory-specific labels to capture diverse predictions, we leverage conditional variational generative models (CVAE [33]) to model diversity in the data for each type of label.

Latent Encoder: It acts as a recognition module $Q_{\phi}$ for our CVAE framework and is only used during the training phase. Specifically, it takes in both the past and future trajectory information ${}^R\mathbf{X}_i$ and passes it through an embedding layer. The embedded vectors are then passed on to an LSTM network to output an encoding at every timestep. The outputs across all the timesteps are concatenated together into a single vector along with the one-hot trajectory label $c_i$ to produce $V_{enc}(i)$. This vector is then passed through an MLP to obtain $\mu$ and $\sigma$, yielding a distribution $Q_{\phi}(z_i|^R\mathbf{X}_i, c_i)$. Formally,

$$\begin{aligned}
{}^{o}h_{i}^{t} &= \mathrm{LSTM}(h_{i}^{t-1}, {}^{R}X_{i}^{t}) \\
V_{enc}(i) &= [{}^{o}h_{i}^{1}, ..., {}^{o}h_{i}^{T}, c_{i}] \\
\mu, \sigma &= \mathrm{MLP}(V_{enc}(i)).
\end{aligned} \tag{4}$$
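
A minimal sketch of the recognition module in Eq. (4); hidden sizes, sequence length, and the use of a log-variance parameterization are assumptions:

```python
import torch
import torch.nn as nn

class LatentEncoder(nn.Module):
    # Recognition network Q_phi(z_i | X_i, c_i): trajectory -> (mu, sigma).
    def __init__(self, d_in=2, d_h=32, n_labels=3, d_z=16, T=20):
        super().__init__()
        self.emb = nn.Linear(d_in, d_h)
        self.lstm = nn.LSTM(d_h, d_h, batch_first=True)
        self.mlp = nn.Linear(T * d_h + n_labels, 2 * d_z)

    def forward(self, traj, c_onehot):           # traj: (B, T, 2)
        h, _ = self.lstm(torch.relu(self.emb(traj)))
        v_enc = torch.cat([h.flatten(1), c_onehot], dim=1)   # V_enc(i)
        mu, log_sigma = self.mlp(v_enc).chunk(2, dim=1)
        return mu, log_sigma.exp()
```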

Conditional Generator: We adopt a U-Net-like architecture for the generator. At any timestep $t$, the inputs to the conditional generator are the following: a scene context map $\mathcal{I}$ ($H \times W \times 3$), a single representation of all agents' current states $\mathcal{S}^t$ ($H \times W \times 2$), a location mask $\mathcal{M}^t$ ($H \times W \times 1$), a one-hot trajectory-specific label for each agent projected at agent-specific locations in a grid from $\mathcal{C}^t$ ($H \times W \times 3$), and a latent vector map $\mathcal{Z}^t$ ($H \times W \times 16$) containing $z_i$ obtained from $Q_{\phi}(z_i|^R\mathbf{X}_i,c_i)$ during the training phase or sampled from the prior distribution $P_v(z_i|^R\mathbf{X}_i,c_i)$ at test time. Formally, the network input $E^t$ is given by:

$$E^{t} = [\mathcal{I}, \mathcal{S}^{t}, \mathcal{M}^{t}, \mathcal{C}^{t}, \mathcal{Z}^{t}], \tag{5}$$

which is of size $H \times W \times 25$ for any timestep $t$. Note that our representation is not entity-centric, i.e., we do not have one target entity for which we want to predict trajectories, but rather a global one for all agents.

At each timestep from $1, ..., t_{obs}$, we pass the above inputs through the encoder module. This module is composed of strided convolutions, which encode the information at a small spatial dimension and pass it to the decoder. The decoder includes ConvLSTMs and transposed convolutions with skip connections from the encoder module, and outputs an $H \times W$ map. It is then passed on to another ConvLSTM layer with state-pooling operations. The same network is shared during the observation and prediction phases. A final $1 \times 1$ convolution layer is added to output a 2-channel map containing the relative predicted coordinates ${}^R\hat{X}_i^t$ for the agents in the next timestep.

We use the ground truth agent locations for the observed trajectory and unroll our ConvLSTM based on the predictions of our network. During the prediction phase $(t_{obs},...,T)$, the outputs are not directly fed back as inputs to the network; rather, the agent's state is updated to the next location in the scene based on the predictions. The relative predicted location $\hat{X}_i^{t-1}$ is converted to the absolute predicted location $\hat{X}_i^t$ to obtain an updated scene state map $\hat{S}^t$ containing the updated locations of all the agents in the scene. Note that such a scene representation is agnostic to the number of agents, and since each agent's next state is predicted at its respective pixel location, it can handle dynamic entry and exit of agents from the scene.

State-Pooled ConvLSTMs: Simultaneous multi-agent predictions are realized through state-pooling in ConvLSTMs. Using standard ConvLSTMs for multi-agent trajectory prediction usually produces semantically aligned trajectories, but the trajectories occasionally contain erratic maneuvers. We solve this issue via state-pooling, which ensures the availability of previous state information when predicting the next location. We pool the previous state information from the final ConvLSTM layer for all the agents, ${}^{sp}\mathbf{H}_i^{t-1}$, and initialize the next state with ${}^{sp}\mathbf{H}_i^{t-1}$ (for both hidden and cell state) at the agents' updated locations and zero vectors at all other locations for timestep $t$.
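
A sketch of the state-pooling step (tensor layout and per-agent index bookkeeping are assumptions): each agent's final-layer ConvLSTM state is carried to its predicted next pixel and zeroed everywhere else.

```python
import torch

def state_pool(h_prev, c_prev, old_idx, new_idx):
    # h_prev, c_prev: (C, H, W) hidden/cell states of the last ConvLSTM layer;
    # old_idx / new_idx: per-agent (row, col) pixel locations at t-1 and t.
    h_new, c_new = torch.zeros_like(h_prev), torch.zeros_like(c_prev)
    for (oy, ox), (ny, nx) in zip(old_idx, new_idx):
        h_new[:, ny, nx] = h_prev[:, oy, ox]
        c_new[:, ny, nx] = c_prev[:, oy, ox]
    return h_new, c_new
```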

**Learning:** We train both the recognition network $Q_{\phi}(z_i|^R\mathbf{X}_i, c_i)$ and the conditional generator $P_{\theta}(Y|E)$ concurrently. We obtain the predicted trajectory $^R\hat{\mathbf{Y}}$ by pooling values from the indices that agents visited at every timestep. We use two loss functions in training our CVAE-based ConvLSTM network (sketched after the list below):

- Reconstruction Loss: $\mathcal{L}_R = \frac{1}{N} \sum_i^N \|{}^R \mathbf{Y}_i - {}^R \hat{\mathbf{Y}}_i\|$, which penalizes predictions that fail to reconstruct the ground truth accurately.
- KL Divergence Loss: $\mathcal{L}_{KLD} = D_{KL}(Q_{\phi}(z_i|^R\mathbf{X}_i, c_i)\,\|\,P_v(z_i|^R\mathbf{X}_i, c_i))$, which regularizes the output distribution from $Q_{\phi}$ to match the sampling distribution $P_v$ used at test time.
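
A compact sketch of the two training terms; the closed-form KL assumes the relaxed prior $P_v(z) = \mathcal{N}(0, 1)$ discussed below, and the loss weighting is an assumption.

```python
import torch

def cvae_loss(pred, target, mu, sigma, beta=1.0):
    # Reconstruction term L_R: distance between predicted and ground-truth
    # relative trajectories, averaged over agents/timesteps.
    rec = (target - pred).norm(dim=-1).mean()
    # KL(Q_phi(z|X, c) || N(0, I)) in closed form for a diagonal Gaussian.
    kld = (-torch.log(sigma) + 0.5 * (sigma ** 2 + mu ** 2) - 0.5).sum(-1).mean()
    return rec + beta * kld
```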

**Test phase:** At inference time, we do not have access to trajectory-specific labels $c_i$; rather, we query for a specific behavior by sampling these labels randomly. Along with $c_i$ for each agent, we also sample $z_i$ from $P_v(z_i|^R\mathbf{X}_i, c_i)$. However, $P_v$ can be relaxed to be independent of the input [33], so that the prior becomes $P_v(z_i)$; we set $P_v(z_i) := \mathcal{N}(0, 1)$ at test time.
2009.05166/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1 @@
<mxfile host="app.diagrams.net" modified="2020-09-10T04:46:42.648Z" agent="5.0 (Macintosh; Intel Mac OS X 10_13_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.83 Safari/537.36" etag="SYJRx4jKL3rRqdNGmLFp" version="13.6.9" type="google"><diagram id="wFmjj2M9l914DBba86-q" name="Page-1">7V1bj5s4FP41kWYfMuJmLo+dS3dXarvtTqVtHwk4CSqBlDCdpL9+bcCAbTIhTGITQjVS4djYcL7j7xwbczLR71fbPxN3vfwY+zCcaIq/negPE01THUdD/2HJLpeYhp0LFkngF5UqwVPwGxZCpZA+Bz7cUBXTOA7TYE0LvTiKoJdSMjdJ4he62jwO6V7X7gJygifPDXnpf4GfLnOprVmV/C8YLJakZ9V08pKVSyoXT7JZun78UhPpjxP9PonjND9abe9hiJVH9JJf935PaXljCYzSNhf8u/yyWX2ef/k5/7l9/vINfPrkf5yqRt7MLzd8Lp74a+JGm3mcrGAy/fbh4wQ3a4aoi7tZgo4W+OhmhcShu4PJ5o/i6dIdUVkSP0c+xL0qqOrLMkjh09r1cOkLMhIkW6arEJ2p6HAehOF9HMZJdq0+n0PT85B8kybxD1gr8S1npuAGixuGSQq3e1WhlgpGlgnjFUyTHapSXDAFBSaFUeqKnp+/VBCrBLdlDV6zkLmFVS3KpivFo4NC98fgALrh8OOicXBoGBAQsmEwX4WBALBZuxGla/PnMx7Gd/M4SqdzdxWE6JHe5ZC5q3WmLV03sMph+AumgedyJVUrBN12oy8vRY+b31Ve5fKNoQdj0mo2htBNkSKRH4LblNNzJqSUSSstiiPIaLgQuWGwiNCph/QFkfwOaxMZSviuKFgFvo+7aUSPxvcUHGkweFgWj4fSgId2NjxsDo/HaBEGm+XgwVBVFgzAgWELxcLhsOC0jwKeNT7cBAgl+A4HY0gbfpCgOC2Io0zZCb77Q3SUxXEk+lJuAREV0WIueYuWif+xecppYpyzaZU0fGwUoBnTH9NhhWSqIZv+yTO8BkbhwqPZJvfkQ0TG6R0w2pC4h1zAuFv5XKQPSctEqw1BpVit8vPtC9bqPtuVruWOs+lzEbXvQnveSNSmZ8PZ/DQwaGzE3gSDKpSpX59OC1/VkIND07KGYBz4mdMVzWQ1ErG8MnkSO5PV+NnTtcxkdWAdBEPoTFbn51yX75HL6LEv0aTeYjIlfGYrxx00TaDEugN9UKE9IRabJhb5Jj/EUJ8jFtmhvs6H+pevZc6WpWt5z6uQYb+eZNck5b+fNHjmvnGRapUynJ+miRtEnLKHEz/SkBhNbwmFxvIGT/M3MwzJU7BaI71kW1M8hEzkZmQzWGQ0Esm/Bo0hEhprz+sUMu0N3RnerTRUQKa6TQGiKhoHiGOJxIOfBPyTBIsgcsPho8G+wpWPhnY4coKRX4RLD17objaBx2BB9rRhJcFtkH4rwyB89r2oho8ftrVqDztyEqFHqV2ET7/Xy6rLsrMdBQj0uc17DBzoaeLnxIMtWCJ1kwVMX6lY27VHAVwDEDTQG5ElEPFO8Iu+4SZUix4+xwF6lBq9GnQwotMt5M9ZXFSZBtdOuSjLTkdJQ7keuIYyGyuf+g1m12Ly2dHsDFWrG96tAs5ie7iRzzAJkCYwr5zcHsk214P2uIdwxNij7txqtMNXdaaRtiapMqYt3CT3bIK9kjiBW6CS7pn4Of31xAm6YfYMDf4l3qkIW+sUKGhniRSk+XUHcETKttKWSFFbcomUXyaS79stYc5dniu2bh1Td8p/jDslm06ONSfdsaWak900QzmwA9/LyR3vvU8WM/cGaRj9of6VxiO8UJMhrHBb91dxFG8ysqeqbLK1XlxBWW/Lopnr/VhkvmBK38GNhj94KrrTDKc6Brhzdsv/RAPsx1QAaQxLsy+FyjOiQZDpEEke8DFuHGAtAYTFobpqWZeYcqdmtKqZHKiypCrIcSoLSg7AEsIC6DjnASxUs9OCC7BAyQQ5H2TnKGYnku9MDTS0GcGuLsipgWokZ4eqY8IQzFVUO9XTZczQRq1lUanRiisAYYuyNqhQIKzRWFayR9MtoKFaihe1zqi7KEYwwCO/rG3XWtlVYt02m1unWKXJeo7qBU8Tm3qhKIfvJTul1EoXFBZcF9LjijTADsA9X9mMbDSy0chGIxv1g41Ov3qrtI2yqSBbFbd6Sz7SO7haln/pLCtEn2pkmZnse7PNbkE535IlOCpv2nEx+sHRD45+cPSD/fCDIxuNbDSy0chG/WAjiVG5JSssJ1+4HH6JDWSG5cybT0MB3YJy9ss7R3BI3rTVfHSCoxMcneDoBPvhBEc2GtloZKORjfrBRte4UG63XSg3pe5lYSPpjhE5244tOCJv2uU8+sDRB44+cPSB/fCBIxuNbDSy0chG/WAjiRG5Jm2RvG1ILneRnHxZUobSHUNyth3B+1Y6ZiC67OStBpOvRVeKVwXysrd2g+GyUwKyMGgaD0OpciEwvJ4hkwCAAzUcr7keHRzWf8XgbT9akPdwsT9aYGp9G197Um5eR6pHw2HgkP6jBfwXcdeS6dHk0m6aHBZCMz3ySVA55V9cpjCgM47FbnAsIpXsdHPvA8iTz/p41ZDtCkjGmWv/BQPWSfcAGX4ycvlcZFo94yKVn2tcvpY5xm+IOMVqeYjZZDlblq1lnY/rbzy86P7+7w9fH//lND6YMBIwSBgNSAgNI40DM6yB56cxbId2prLz0xh8XH892YJMk8n1QSa39dGhNqBhngkNwE8AOO0PIFtQi9V9whMHV/edZoTFLO7bKuPoOuaKAcqeGFvQ6r7Jz3VOZXfHph46b3rAnO5kmQuwbD7tn9bNZEwuvRAQazJtMlEiyJ6KUxjO4pfHSnCXCVDBMk6C33GUIoeD+eTt5Nbx+x6B5EaWFnpOboAhJb0juzkMu9ntyA0ZgrurVVvjCpv992syZEwyd1WGn7d42mHQJmlbm2FAIqGjB0FlzyZt0FPlVlGtA1adnXUn3H5bMNi3dHW0f7a6+edjTRiwQ06ICfNTgCcYzqcpdL1lEC04ex5M/A8s+pdCpM/G7PPFYRe13b5t8C81cSTrH8tVrrf6RxWIjf7tFuvp12B1bZMhSd1PxrqI8teOjvZpbEPWqaxuUuxmrFWv9jHqj/8D</diagram></mxfile>
2009.05166/main_diagram/main_diagram.pdf
ADDED
Binary file (13.4 kB).
2009.05166/paper_text/intro_method.md
ADDED
@@ -0,0 +1,37 @@
# Introduction

Large-scale cross-lingual language models (LMs), such as mBERT, Unicoder and XLM, have achieved great success in cross-lingual representation learning. However, when applied to zero-shot cross-lingual transfer tasks, most existing methods use only single-language input for LM finetuning, without leveraging the intrinsic cross-lingual alignment between different languages that proves essential for multilingual tasks. In this paper, we propose FILTER, an enhanced fusion method that takes cross-lingual data as input for XLM finetuning. Specifically, FILTER first encodes text input in the source language and its translation in the target language independently in the shallow layers, then performs cross-language fusion to extract multilingual knowledge in the intermediate layers, and finally performs further language-specific encoding. During inference, the model makes predictions based on the text input in the target language and its translation in the source language. For simple tasks such as classification, translated text in the target language shares the same label as the source language. However, this shared label becomes less accurate or even unavailable for more complex tasks such as question answering, NER and POS tagging. To tackle this issue, we further propose an additional KL-divergence *self-teaching* loss for model training, based on auto-generated *soft* pseudo-labels for translated text in the target language. Extensive experiments demonstrate that FILTER achieves new state of the art on two challenging multilingual multi-task benchmarks, XTREME and XGLUE.[1](#page-0-0)

Cross-lingual low-resource adaptation has been a critical and exigent problem in the NLP field, despite recent success in large-scale language models (mostly trained on English with abundant training corpora). How to adapt models trained in high-resource languages (*e.g.*, English) to low-resource ones (most of the 6,900 languages in the world) still remains challenging. To address the proverbial domain gap between languages, three schools of approach have been widely studied. (i) *Unsupervised pre-training*: to learn a universal encoder (cross-lingual language model) for different languages. For example, mBERT [\(Devlin et al. 2019\)](#page-7-0), Unicoder [\(Huang et al. 2019\)](#page-7-1) and XLM [\(Lample and Conneau](#page-7-2) [2019\)](#page-7-2) have achieved strong performance on many cross-lingual tasks by successfully transferring knowledge from a source language to a target one. (ii) *Supervised training*: to enforce models insensitive to labeled data across different languages, through teacher forcing [\(Wu et al. 2020\)](#page-8-0) or adversarial learning [\(Cao, Liu, and Wan 2020\)](#page-7-3). (iii) *Translation*: to translate either the source language to the target one, or vice versa [\(Cui et al. 2019;](#page-7-4) [Hu et al. 2020;](#page-7-5) [Liang et al.](#page-7-6) [2020\)](#page-7-6), so that training and inference can be performed in the same language.

The translation approach has proven highly effective on recent multilingual benchmarks. For example, the *translate-train* method has achieved state of the art on XTREME [\(Hu](#page-7-5) [et al. 2020\)](#page-7-5) and XGLUE [\(Liang et al. 2020\)](#page-7-6). However, translate-train is simple data augmentation, which doubles training data by translating source text into target languages. Thus, only single-language input is considered for finetuning with augmented data, leaving cross-lingual alignment between languages unexplored. Dual BERT [\(Cui et al.](#page-7-4) [2019\)](#page-7-4) was recently proposed to make use of the representations learned from the source language to help target language understanding. However, it only injects information from the source language into the decoder of the target language, without examining the intrinsic relations between languages.

Motivated by this, we propose FILTER, [2](#page-0-1) a generic and flexible framework that leverages translated data to enforce fusion between languages for better cross-lingual language understanding. As illustrated in Figure [2](#page-2-0)(c), FILTER first (i) encodes a translated language pair separately in shallow layers; then (ii) performs cross-lingual fusion between languages in the intermediate layers; and finally (iii) encodes language-specific representations in deeper layers. Compared to the translate-train baseline (Figure [2](#page-2-0)(a)), FILTER learns additional cross-lingual alignment that is instrumental to cross-lingual representations. Furthermore, compared to simply concatenating the language pair as the input of XLM (Figure [2](#page-2-0)(b)), FILTER strikes a well-measured balance between cross-lingual fusion and individual language representation learning.
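
A schematic sketch of the three-stage encoding; the split points `k_shallow`/`k_fusion` and the treatment of the transformer blocks as a flat list are assumptions for illustration, not the released implementation.

```python
import torch
import torch.nn as nn

class FilterEncoder(nn.Module):
    # Shallow blocks: per-language encoding; middle blocks: fusion over the
    # concatenated pair; deep blocks: language-specific encoding again.
    def __init__(self, blocks, k_shallow=4, k_fusion=4):
        super().__init__()
        self.shallow = nn.ModuleList(blocks[:k_shallow])
        self.fusion = nn.ModuleList(blocks[k_shallow:k_shallow + k_fusion])
        self.deep = nn.ModuleList(blocks[k_shallow + k_fusion:])

    @staticmethod
    def run(stack, x):
        for block in stack:
            x = block(x)
        return x

    def forward(self, src, tgt):                 # (B, L, d) hidden states
        src, tgt = self.run(self.shallow, src), self.run(self.shallow, tgt)
        fused = self.run(self.fusion, torch.cat([src, tgt], dim=1))
        src, tgt = fused.split([src.size(1), tgt.size(1)], dim=1)
        return self.run(self.deep, src), self.run(self.deep, tgt)
```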

<span id="page-0-0"></span><sup>1</sup>Our code is released at [https://github.com/yuwfan/FILTER.](https://github.com/yuwfan/FILTER)

<span id="page-0-1"></span><sup>2</sup>Fusion in the Intermediate Layers of TransformER

<span id="page-1-0"></span>

| Source language (train) | Target languages (test) |
|-----------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------|
| 1. She was still sweet as they make them. / She was as bitter as they get => contradiction | 1. 我大约一小时后再打给你, 他说。/ 他说他会回电话。 => entailment |
| 2. Now you can tour Chernobyl and write your own story => ADV PRON AUX VERB PROPN CCONJ VERB PRON ADJ NOUN | 2. Безгачиха деревня в Бабушкинском районе Вологодской => PROPN NOUN ADP NOUN ADJ NOUN |
| 3. Q: What book did John Zahm write in 1896? A: Evolution and Dogma | 3. Q: Cuántas especies se encontraron en el esquisto de Burgess? A: Tres |

Figure 1: Examples from XTREME for cross-lingual natural language inference, part-of-speech tagging, and question answering tasks. The source language is English; the target language can be any other language.

For classification tasks such as natural language inference, translated text in the target language shares the same label as the source language. However, for question answering (QA) tasks, the answer span in the translated text of the target language generally differs from that in the source language. For sequential labeling tasks such as NER (Named Entity Recognition) and POS (Part-of-Speech) tagging, the sequence of labels in the target language becomes unavailable, as the linguistic structure of sentences varies greatly across languages. To bridge the gap, we propose to generate *soft* pseudo-labels for translated text, and use an additional KL-divergence *self-teaching* loss for model training. Specifically, we first train a teacher FILTER model to collect the inference probabilities for the translated text of all training samples, which are then used as soft pseudo-labels to train a student FILTER as the final prediction model. For QA, POS and NER tasks, this self-training process generates more reliable and accurate labels than hard label assignment on translated text, leading to better model performance. For classification tasks where the target label is identical to the source, the self-teaching loss proves to also improve performance, by serving as an effective regularizer.
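
The self-teaching objective can be sketched as a standard KL distillation term; the temperature and reduction are assumptions.

```python
import torch.nn.functional as F

def self_teaching_loss(student_logits, teacher_logits, T=1.0):
    # Teacher probabilities on the translated text act as soft pseudo-labels.
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean")
```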

The main contributions are summarized as follows. (i) We propose FILTER, a new approach to cross-lingual language understanding that leverages intrinsic linguistic alignment between languages for XLM finetuning. (ii) We propose a self-teaching loss to address the unreliable/unavailable label issue in the target language, boosting model performance across diverse NLP tasks. (iii) We achieve Top-1 performance on both the XTREME and XGLUE benchmarks, outperforming the previous state of the art by an absolute 8.8 and 2.2 points (over published and unpublished results, respectively) on XTREME, and by 4.0 points on XGLUE.
2011.12663/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render.
2011.12663/paper_text/intro_method.md
ADDED
@@ -0,0 +1,29 @@
# Introduction

Image-based retrieval systems show impressive performance in challenging tasks such as face verificationΒ [@DBLP:journals/corr/SchroffKP15; @DBLP:journals/corr/abs-1804-06655; @6909616], instance retrievalΒ [@zheng2017sift], landmark retrievalΒ [@DBLP:journals/corr/abs-1906-04087] and place recognitionΒ [@DBLP:journals/corr/abs-1711-02512; @DBLP:journals/corr/ArandjelovicGTP15]. These systems typically embed images into high-level features and retrieve with a nearest neighbor search. While this is efficient, the retrieval comes with no notion of confidence, which is particularly problematic in safety-critical applications. For instance, a self-driving car relying on visual place recognition should be able to filter out place estimates drawn from uninformative images. In a less critical but still relevant application, quantifying the retrieval uncertainty can significantly improve the user experience in human-computer interfaces by not showing low-confidence results for a query.

Practical retrieval systems do not have a small set of predefined classes as output targets, but rather need high-level features that generalize to unseen classes. For instance, a visual place recognition system may be deployed in a city in which it has not been trainedΒ [@Warburg_CVPR_2020]. This is achieved by keeping the encoder fixed and relying on nearest neighbor searches. This pipeline does not easily match current methods for posterior inference, and current uncertainty estimators for retrieval are often impractical and heuristic in nature. To construct a fully Bayesian retrieval system that fits with existing computational pipelines, we first recall the elementary equation $$\begin{align}
\mathbb{E}\left[ \| \Delta \|^2 \right] = \left\| \mathbb{E}\left[\Delta\right] \right\|^2 + \mathrm{trace}\left( \mathrm{cov}[\Delta] \right), \qquad \Delta \in \mathbb{R}^D,
\end{align}$$ which follows directly from the definition of variance. From this we see that the expected squared distance between two random features, $\mathbb{E}\left[ \| \Delta \|^2 \right]$, grows with the covariance of the feature difference, $\mathrm{cov}[\Delta]$, which in turn depends on the uncertainty of the features (Figure [1](#fig:teaserfigure){reference-type="ref" reference="fig:teaserfigure"}). This intuition forms the basis of this paper.
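
The identity is easy to verify numerically; here is a quick Monte Carlo check (illustrative only, with an arbitrary Gaussian distribution for $\Delta$):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8
mu = rng.normal(size=D)
A = rng.normal(size=(D, D))                      # Delta ~ N(mu, A A^T)
delta = mu + rng.normal(size=(100_000, D)) @ A.T
lhs = (delta ** 2).sum(axis=1).mean()            # E[ ||Delta||^2 ]
rhs = (mu ** 2).sum() + np.trace(A @ A.T)        # ||E[Delta]||^2 + trace(cov)
print(lhs, rhs)                                  # approximately equal
```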

{#fig:teaserfigure width="85%"}

**In this paper** we propose to use stochastic image embeddings instead of the usual deterministic ones. Given an image $X$, we consider the posterior distribution over possible features $P(F|X)$. From this distribution we get direct uncertainty estimates and can assign probabilities to events such as 'two images belonging to the same place'. To realize this, we derive a likelihood corresponding to the probability that the conventional triplet constraint is satisfied, and a prior over the feature space that mimics conventional $l_2$ normalization. To build a system that is computationally efficient at both train and test time, we derive a variational approximation to the posterior $P(F|X)$, such that in practice, we encode an image to a distribution in feature space. Across several datasets, we show that the proposed model matches the state-of-the-art in predictive performance, while attaining state-of-the-art uncertainty estimates.

# Method

For each image we learn an isotropic distribution rather than a point embedding. We treat both Gaussian and von Mises-Fisher embeddings identically, and here only describe the Gaussian setup. Similar to @DBLP:journals/corr/abs-1902-02586, we use a shared backbone network followed by a mean and a variance head (see Fig.Β [4](#fig:encoder){reference-type="ref" reference="fig:encoder"}). The mean head is a generalized mean (GeM)Β [@DBLP:journals/corr/abs-1711-02512] aggregation layer followed by a fully connected layer that outputs $\mu \in \mathbb{R}^D$. The variance head consists of a GeM layer followed by two fully connected layers with a ReLU activation function. We found it was advantageous to estimate $\sigma^2$ with a softplus activation rather than estimating $\log \sigma^2$. We have separate GeM layers for the variance and mean heads, as we found it beneficial to learn different $p$-norms for the variance head and mean head.
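
A minimal sketch of this two-head architecture; channel counts, hidden widths, and the $D_\mu = D - 1$ split (consistent with the output-budget convention described below) are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeM(nn.Module):
    # Generalized-mean pooling with a learnable p-norm.
    def __init__(self, p=3.0):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(p))

    def forward(self, x):                        # x: (B, C, H, W)
        x = x.clamp(min=1e-6).pow(self.p)
        return x.mean(dim=(2, 3)).pow(1.0 / self.p)

class ProbHead(nn.Module):
    # Separate GeM layers feed the mean head (mu) and the variance head
    # (sigma^2 via softplus), as described above.
    def __init__(self, c=2048, d_mu=511):
        super().__init__()
        self.gem_mu, self.gem_var = GeM(), GeM()
        self.fc_mu = nn.Linear(c, d_mu)
        self.fc_var = nn.Sequential(nn.Linear(c, 256), nn.ReLU(),
                                    nn.Linear(256, 1))

    def forward(self, feats):                    # backbone feature maps
        mu = self.fc_mu(self.gem_mu(feats))
        var = F.softplus(self.fc_var(self.gem_var(feats)))
        return mu, var
```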

<figure id="fig:encoder" data-latex-placement="ht">
<embed src="images/encoder2.pdf" />
<figcaption>Overview of our network architecture.</figcaption>
</figure>

In real-world applications this trade-off between predictive performance and uncertainty quantification is important. Therefore, we ensure that the number of output parameters is the same for probabilistic and non-probabilistic models, such that $D_{\mu}\!+\!D_{\sigma}\!=\!D$. We focus on isotropic distributions and set $D_{\sigma}\!=\!1$. For the triplet loss, we follow common practice and $l_2$-normalize the point estimates, that is, $x / \|x\|_2 \in \mathbb{R}^{D}$. For the Bayesian triplet loss, we $l_2$-normalize the mean embedding, $\mu / \|\mu\|_2 \in \mathbb{R}^{D_{\mu}}$, for the uniform prior, and scale with a single positive trainable parameter for the Gaussian prior.

We use a hard negative mining strategy similar to @DBLP:journals/corr/ArandjelovicGTP15. Given a query image, we find the closest negative images in a cache. We only present the model with the triplets that violate the triplet constraint [\[eq:tripletloss\]](#eq:tripletloss){reference-type="eqref" reference="eq:tripletloss"}. We update the cache with $5000$ new images every $1000$ iterations. @DBLP:journals/corr/ArandjelovicGTP15 and @Warburg_CVPR_2020 report the importance of updating the cache regularly to avoid overfitting. This speeds up learning by reducing the number of trivial examples presented to the model.
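
A sketch of the mining step under these conventions; the cache layout, margin, and number of kept negatives are assumptions.

```python
import torch

def violating_negatives(q, p, cache, margin=0.1, k=10):
    # q: (D,) query embedding, p: (D,) positive embedding,
    # cache: (N, D) candidate negatives. Keep the k closest negatives that
    # violate the triplet constraint d(q, n) < d(q, p) + margin.
    d_neg = (cache - q).norm(dim=1)
    d_pos = (p - q).norm()
    violators = torch.nonzero(d_neg < d_pos + margin).squeeze(1)
    order = d_neg[violators].argsort()
    return violators[order[:k]]                  # indices into the cache
```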

<figure id="fig:most_certain_bird" data-latex-placement="htp!">
<embed src="images/cub_resnet50_prob/query-examples.pdf" style="width:90.0%" />
<figcaption>Query images for which our Bayesian embedding gives the highest (first two rows) and lowest (last two rows) uncertainty. Scenes associated with high uncertainty mostly correspond to scenes where birds blend in with the background and are hardly discernible. The two most certain ones correspond to Cliff Swallows, easily discernible by the characteristic mud nests that they cement to walls or cliffs. In all images, birds stand out from the background and have unique patterns.</figcaption>
</figure>
2103.16429/main_diagram/main_diagram.pdf
ADDED
Binary file (44.1 kB).
2210.05577/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render.
2210.05577/paper_text/intro_method.md
ADDED
@@ -0,0 +1,181 @@
# Introduction

Despite the tremendous success of deep neural networks in many computer vision and language modeling tasks, as well as in scientific discoveries, their properties and the reasons for their success are still poorly understood. Focusing on computer vision, a particularly surprising phenomenon evidencing that those machines drift away from how humans perform image recognition is the presence of *adversarial examples*, images that are almost identical to the original ones, yet are misclassified by otherwise accurate models.

Since their discovery [@Sze+14], a vast amount of work has been devoted to understanding the sources of adversarial examples, and explanations include, but are not limited to, the close-to-linear operating mode of neural nets [@GSS15], the curse of dimensionality carried by the input space [@GSS15; @Gab+19], insufficient model capacity [@Tsi+19; @Nakk19] or spurious correlations found in common datasets [@Ily+19]. In particular, one widespread viewpoint is that adversarial vulnerability is the result of a model's sensitivity to imperceptible yet well-generalizing features in the data, so called *useful non-robust* features, giving rise to a trade-off between accuracy and robustness [@Tsi+19; @Zha+19]. This gradual understanding has enabled the design of training algorithms that provide convincing, yet partial, remedies to the problem; the most prominent of them being adversarial training and its many variants [@GSS15; @Mad+18; @robustbench20]. Yet we are far from a mature, unified theory of robustness that is powerful enough to universally guide engineering choices or defense mechanisms.

In this work, we aim to get a deeper understanding of adversarial robustness (or lack thereof) by focusing on the recently established connection of neural networks with kernel machines. Infinitely wide neural networks, trained via gradient descent with infinitesimal learning rate, provably become kernel machines with a data-independent but architecture-dependent kernel, its Neural Tangent Kernel (NTK), that remains constant during training [@JHG18; @Lee+19; @Aro+19b; @Liu+20]. The analytical tools afforded by the rich theory of kernels have resulted in progress in understanding the optimization landscape and generalization capabilities of neural networks [@Du+19b; @Aro+19a], together with the discovery of interesting deep learning phenomena [@Fort+20; @Jim+21], while also inspiring practical advances in diverse areas of applications such as the design of better classifiers [@Sha+20], efficient neural architecture search [@Chen+21], low-dimensional tasks in graphics [@Tan+20] and dataset distillation [@Ngu+21]. While the NTK approximation is increasingly utilized, even for finite-width neural nets, little is known about the adversarial robustness properties of these infinitely wide models.

**Our contribution:** Our work inscribes itself into the quest to leverage analytical tools afforded by kernel methods, in particular spectral analysis, to track properties of interest in the associated neural nets, in this case as they pertain to robustness. To this end, we first demonstrate that adversarial perturbations generated *analytically* with the NTK can successfully lead the associated trained wide neural networks (in the kernel regime) to misclassify, thus allowing kernels to faithfully predict the lack of robustness of those trained neural networks. In other words, adversarial (non-)robustness transfers from kernels to networks, and adversarial perturbations generated via kernels resemble those generated by the corresponding trained networks. One implication of this transferability is that we can analytically devise adversarial examples that do not require access to the trained model and in particular its weights; instead these "blind spots" may be calculated a priori, before training starts.

{#fig:grad_decompose}

A perhaps even more crucial implication of the NTK approach to robustness relates to the *understanding* of adversarial examples. Indeed, we show how the spectrum of the NTK provides an alternative way to define *features* of the model, to classify them according to their robustness and usefulness for correct predictions, and to visually inspect them via their contribution to the adversarial perturbation (see Fig. [1](#fig:grad_decompose){reference-type="ref" reference="fig:grad_decompose"}). This in turn allows us to verify previously conjectured properties of standard classifiers: dependence on both *robust* and *non-robust* features in the data [@Tsi+19], and the trade-off of accuracy and robustness during training. In particular, we observe that features tend to be rather invariable across architectures, and that robust features tend to correspond to the *top* of the eigenspectrum (see Fig. [2](#fig:features){reference-type="ref" reference="fig:features"}), and as such are learned first by the corresponding wide nets [@Aro+19a; @JHG18]. Moreover, we are able to visualize useful non-robust features of standard models (Fig. [\[fig:non_rob_feats\]](#fig:non_rob_feats){reference-type="ref" reference="fig:non_rob_feats"}). While this conceptual feature distinction has been highly influential in recent works that study the robustness of deep neural networks (see for example [@ZhLi20; @KLR21; @SMK21]), to the best of our knowledge, none of them has explicitly demonstrated the dependence of networks on such feature functions (except for simple linear models [@Goh19]). Rather, these works either reveal such features in some indirect fashion, or accept their existence as an assumption. Here, we show that Neural Tangent Kernel theory endows us with a natural definition of features through its eigen-decomposition and provides a way to *visualise and inspect robust and non-robust features directly* on the function space of trained neural networks.

{#fig:features}

Interestingly, this connection also enables us to empirically demonstrate that robust features of standard models alone are not enough for robust classification. Aiming to understand, then, what makes robust models robust, we track the *evolution* of the data-dependent *empirical* NTK during *adversarial training* of neural networks used in practice. Prior experimental work has found that networks with a non-trivial width-to-depth ratio that are trained with large learning rates depart from the NTK regime and fall into the so-called "rich feature" regime, where the NTK changes substantially during training [@Gei+19; @Fort+20; @Bar+21; @Jim+21]. In our work, which to the best of our knowledge is the first to provide insights on how the kernel behaves during adversarial training, we find that the NTK evolves much faster compared to standard training, simultaneously both changing its features and assigning more importance to the more robust ones, giving direct insight into the mechanism at play during adversarial training (see Fig. [7](#fig:polarrotation){reference-type="ref" reference="fig:polarrotation"}). In summary, the contributions of our work are the following:

- We discuss how to generate adversarial examples for infinitely wide neural networks via the NTK, and show that they transfer to fool their associated (finite-width) nets in the appropriate regime, yielding a "training-free" attack with no need to access model weights (Sec. [3](#blackbox_attack){reference-type="ref" reference="blackbox_attack"}).

- Using the spectrum of the NTK, we give an alternative definition of features, providing a natural decomposition of perturbations into robust and non-robust parts [@Tsi+19; @Ily+19] (Fig. [1](#fig:grad_decompose){reference-type="ref" reference="fig:grad_decompose"}). We confirm that robust features overwhelmingly correspond to the top part of the eigenspectrum; hence they are learned early in training. We bolster previously conjectured hypotheses that prediction relies on both robust and non-robust features and that robustness is traded for accuracy during standard training. Further, we show that only utilizing the robust features of standard models is not sufficient for robust classification (Sec. [4](#ntk_feats){reference-type="ref" reference="ntk_feats"}).

- We turn to finite-width neural nets with standard parameters to study the *dynamics* of their empirical NTK during *adversarial training*. We show that the kernel rotates in a way that both enables new (robust) feature learning and drastically increases the importance (relative weight) of the robust features over the non-robust ones. We further highlight the structural differences of the kernel change during adversarial training versus standard training and observe that the kernel seems to enter the "lazy" regime much faster (Sec. [5](#sec_dynamics){reference-type="ref" reference="sec_dynamics"}).

Collectively, our findings may help explain many phenomena present in the adversarial ML literature and further elucidate both the vulnerability of standard models and the robustness of adversarially trained ones. We provide code to visualize features induced by kernels, giving a unique and principled way to inspect features induced by standardly trained nets (available at <https://github.com/Tsili42/adv-ntk>).

**Related work:** To the best of our knowledge, the only prior work that leverages NTK theory to derive perturbations in an adversarial setting is due to [@YuWu21], yet with an entirely different focus. It deals with what is coined *generalization attacks*: the process of altering the training data distribution to prevent models from generalizing on clean data. [@Bai+21] study aspects of robust models through their linearized sub-networks, but do not leverage NTKs.

# Method

We introduce background material and definitions important to our analysis. Here, we restrict ourselves to binary classification to keep notation light. We defer the multiclass case, complete definitions and a more detailed discussion of prior work to the Appendix.

Let $f$ be a classifier, $\mathbf{x}$ be an input (e.g. a natural image) and $y$ its label (e.g. the image class). Then, given that $f$ is an accurate classifier on $\mathbf{x}$, $\tilde{\mathbf{x}}$ is an adversarial example [@Sze+14] for $f$ if

1. the distance $d(\mathbf{x},\tilde{\mathbf{x}})$ is small. Common choices in computer vision are the $\ell_p$ norms, especially the $\ell_\infty$ norm on which we focus henceforth, and

2. $f(\tilde{\mathbf{x}}) \neq y$. That is, the perturbed input is misclassified.

Given a loss function $\mathcal{L}$, such as cross-entropy, one can construct an adversarial example $\tilde{\mathbf{x}} = \mathbf{x} + \bm{\eta}$ by finding the perturbation $\bm{\eta}$ that produces the maximal increase of the loss, solving $$\begin{equation}
\label{eq:adv1}
\bm{\eta} = \arg \max_{\| \bm{\eta} \|_\infty \leq \epsilon} \mathcal{L}(f(\mathbf{x} + \bm{\eta}), y),
\end{equation}$$ for some $\epsilon > 0$ that quantifies the dissimilarity between the two examples. In general, this is a non-convex problem and one can resort to first-order methods [@GSS15] $$\begin{equation}
\label{eq:fgsm}
\tilde{\mathbf{x}} = \mathbf{x} + \epsilon \cdot \mathrm{sign} \left( \nabla_{\mathbf{x}} \mathcal{L}(f(\mathbf{x}), y) \right),
\end{equation}$$ or iterative versions for solving it [@KGB17; @Mad+18]. The former method is usually called the *Fast Gradient Sign Method (FGSM)* and the latter *Projected Gradient Descent (PGD)*. These methods are able to produce examples that are misclassified by common neural networks with a probability that approaches 1 [@CaWa17]. Even more surprisingly, it has been observed that adversarial examples crafted to "fool" one machine learning model are consistently capable of "fooling" others [@PMG16; @Pap+17], a phenomenon known as the *transferability* of adversarial examples. Finally, *adversarial training* refers to the alteration of the training procedure to include adversarial samples, teaching the model to be robust [@GSS15; @Mad+18]; it empirically holds as the strongest defense against adversarial examples [@Mad+18; @Zha+19].
|
| 46 |
+
|
| 47 |
+
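To make these attack constructions concrete, here is a minimal NumPy sketch of FGSM and its iterative PGD variant. It is a sketch only, not this paper's implementation: `loss_grad_x`, a function returning $\nabla_{\mathbf{x}} \mathcal{L}(f(\mathbf{x}), y)$, is a hypothetical stand-in for one's autodiff machinery, and the PGD step size is a common heuristic rather than something prescribed in the text.

```python
import numpy as np

def fgsm(x, y, loss_grad_x, eps):
    # One-step attack (Eq. fgsm): move every coordinate by eps in the
    # direction that increases the loss.
    return x + eps * np.sign(loss_grad_x(x, y))

def pgd(x, y, loss_grad_x, eps, steps=10, alpha=None):
    # Iterative variant: repeated signed-gradient steps, each followed by
    # a projection back onto the l_inf ball of radius eps around x.
    # (Clipping to the valid pixel range is omitted for brevity.)
    alpha = 2.5 * eps / steps if alpha is None else alpha  # common heuristic
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(loss_grad_x(x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection step
    return x_adv
```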
Despite a vast amount of research, the reasons behind the existence of adversarial examples are not perfectly clear. A line of work has argued that a central reason is the presence of robust and non-robust features in the data that standard models learn to rely upon [@Tsi+19; @Ily+19]. In particular, it is conjectured that reliance on *useful but non-robust* features during training is responsible for the brittleness of neural nets. Here, we slightly adapt the feature definitions of [@Ily+19][^1], and extend them to multi-class problems (see Appendix [7](#App:feats){reference-type="ref" reference="App:feats"}).
Let $\mathcal{D}$ be the data generating distribution with $x \in \mathcal{X}$ and $y \in \{\pm 1\}$. We define a *feature* as a function $\phi: \mathcal{X} \to \mathbb{R}$ and distinguish how they perform as classifiers. Fix $\rho, \gamma \geq 0$:

1. **$\rho$-Useful** feature: A feature $\phi$ is called *$\rho$-useful* if $$\begin{equation}
   \label{eq:useful}
   \mathbb{E}_{x, y \sim \mathcal{D}} \big[\mathds{1}_{\{\textrm{sign}[\phi(x)] = y \}}\big] = \rho
   \end{equation}$$

2. **$\gamma$-Robust** feature: A feature $\phi$ is called *$\gamma$-robust* if it remains useful under any perturbation inside a bounded "ball" $\mathcal{B}$, that is if $$\begin{equation}
   \label{eq:robust}
   \mathbb{E}_{x, y \sim \mathcal{D}} \big[\inf_{\delta \in \mathcal{B}}\mathds{1}_{\{ \textrm{sign}[\phi(x+\delta)] = y \}}\big] = \gamma
   \end{equation}$$
In general, a feature adds predictive value if it gives an advantage over guessing the most likely label, i.e. $\rho > \max_{y' \in \{\pm1\}} \mathbb{E}_{x, y \sim \mathcal{D}} \big[\mathds{1}_{\{y'=y\}}\big]$, and we will speak of "useful" features in this case, omitting the $\rho$. We will call such a feature **useful, non-robust** if it is useful, but $\gamma$-robust only for $\gamma=0$ or very close to $0$, depending on context.
The vast majority of works imagines features as being induced by the *activations* of neurons in the net, most commonly those of the penultimate layer (*representation-layer* features), but the previous formal definitions are in no way restricted to activations, and we will show how to exploit them using the eigenspectrum of the NTK. In Sec. [4](#ntk_feats){reference-type="ref" reference="ntk_feats"}, we demonstrate that the above framework agrees perfectly with features induced by the eigenspectrum of the NTK of a network, providing a natural way to decompose the predictions of the NTK into such feature functions. Specifically, we can identify robust, useful, and, indeed, useful non-robust features.
Let $f: \mathbb{R}^d \to \mathbb{R}$ be a (scalar) neural network with a linear final layer parameterized by a set of weights $\mathbf{w} \in \mathbb{R}^p$ and $\{ \mathcal{X}, \mathcal{Y} \}$ be a dataset of size $n$, with $\mathcal{X} \in \mathbb{R}^{n \times d}$ and $\mathcal{Y} \in \{\pm 1\}^n$. Linearized training methods study the first order approximation $$\begin{equation}
\label{eq:first_order_NTK}
f(\mathbf{x}; \mathbf{w}_{t+1}) = f(\mathbf{x}; \mathbf{w}_t) + \nabla_{\mathbf{w}} f(\mathbf{x}; \mathbf{w}_t)^\top (\mathbf{w}_{t+1} - \mathbf{w}_t).
\end{equation}$$ The network gradient $\nabla_{\mathbf{w}} f(\mathbf{x}; \mathbf{w}_t)$ induces a kernel function $\Theta_t: \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$, usually referred to as the *Neural Tangent Kernel (NTK)* of the model $$\begin{equation}
\label{eq:Gram}
\Theta_t(\mathbf{x}, \mathbf{x}^\prime) = \nabla_{\mathbf{w}} f(\mathbf{x}; \mathbf{w}_t)^\top \nabla_{\mathbf{w}} f(\mathbf{x}^\prime; \mathbf{w}_t).
\end{equation}$$ This kernel describes the dynamics with infinitesimal learning rate (gradient flow). In general, the tangent space spanned by the $\nabla_{\mathbf{w}} f(\mathbf{x}; \mathbf{w}_t)$ twists substantially during training, and learning with the Gram matrix of Eq. [\[eq:Gram\]](#eq:Gram){reference-type="eqref" reference="eq:Gram"} (empirical NTK) corresponds to training along an intermediate tangent plane. Remarkably, however, in the infinite width limit with appropriate initialization and low learning rate, it has been shown that $f$ becomes a *linear* function of the parameters [@JHG18; @Liu+20], and the NTK remains *constant* ($\Theta_t =\Theta_0=:\Theta$). Then, for learning with $\ell_2$ loss, the training dynamics of infinitely wide networks admit a closed-form solution corresponding to kernel regression [@JHG18; @Lee+19; @Aro+19b] $$\begin{equation}
\label{eq:kernel_prediction}
f_t(\mathbf{x}) = \Theta(\mathbf{x}, \mathcal{X})^\top \Theta^{-1}(\mathcal{X}, \mathcal{X}) (I - e^{-\lambda \Theta(\mathcal{X}, \mathcal{X}) t}) \mathcal{Y},
\end{equation}$$ where $\mathbf{x} \in \mathbb{R}^d$ is any input (training or testing), $t$ denotes the time evolution of gradient descent, $\lambda$ is the (small) learning rate and, slightly abusing notation, $\Theta(\mathcal{X}, \mathcal{X}) \in \mathbb{R}^{n \times n}$ denotes the matrix containing the pairwise training values of the NTK, $\Theta(\mathcal{X}, \mathcal{X})_{ij} = \Theta(\mathbf{x}_i, \mathbf{x}_j)$, and similarly for $\Theta(\mathbf{x}, \mathcal{X}) \in \mathbb{R}^{n}$. To be precise, Eq. [\[eq:kernel_prediction\]](#eq:kernel_prediction){reference-type="eqref" reference="eq:kernel_prediction"} gives the *mean* output of the network using a weight-independent kernel with variance depending on the initialization[^2].
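Since Eq. [\[eq:kernel_prediction\]](#eq:kernel_prediction){reference-type="eqref" reference="eq:kernel_prediction"} involves both a matrix inverse and a matrix exponential of the same Gram matrix, it is convenient to evaluate it through a single eigendecomposition. The following NumPy sketch (our naming, assuming a positive-definite Gram matrix; it is not code from the released repository) computes the mean output $f_t(\mathbf{x})$:

```python
import numpy as np

def ntk_prediction(k_xX, K_XX, Y, t, lr):
    # Mean NTK-regression output of Eq. (kernel_prediction) for one input x.
    # k_xX: (n,) values Theta(x, X); K_XX: (n, n) Gram matrix Theta(X, X);
    # Y: (n,) labels in {-1, +1}. Assumes K_XX is positive definite.
    lam, V = np.linalg.eigh(K_XX)            # K_XX = V diag(lam) V^T
    decay = 1.0 - np.exp(-lr * lam * t)      # spectral (I - e^{-lr K t})
    coeffs = V @ (decay / lam * (V.T @ Y))   # K^{-1} (I - e^{-lr K t}) Y
    return k_xX @ coeffs
```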
In this section, we show how to generate adversarial examples from NTKs and discuss their similarity to the ones generated by the actual networks. Note that for network results, we restrict ourselves to wide networks initialized in the "lazy" regime with small learning rates (the "kernel regime").
Adversarial examples arise in the context of *classification*, while the NTK learning process is described by a regression as in Eq. [\[eq:kernel_prediction\]](#eq:kernel_prediction){reference-type="eqref" reference="eq:kernel_prediction"}. Arguably the simplest way to align with the framework presented in Eq. [\[eq:adv1\]](#eq:adv1){reference-type="eqref" reference="eq:adv1"} is to treat the outputs of the kernel similarly to the logits of a neural net, mapping them to a probability distribution via the sigmoid/softmax function and applying the cross-entropy loss.
A simple calculation (see Appendix [8](#App:fgsm_ntk){reference-type="ref" reference="App:fgsm_ntk"}, together with the generalization to the multi-class case) gives:
*The optimal one-step adversarial example of a scalar, infinitely wide, neural network is given by* $$\begin{equation}
\label{eq:ntk-fgsm-binary}
\begin{split}
\tilde{\mathbf{x}} & = \mathbf{x} - y \epsilon \cdot \mathrm{sign} \left( \nabla_{\mathbf{x}} f_t(\mathbf{x}) \right),
\end{split}
\end{equation}$$ for $\| \tilde{\mathbf{x}} - \mathbf{x} \|_\infty \leq \epsilon$, where $\nabla_{\mathbf{x}} f_t(\mathbf{x}) = \nabla_\mathbf{x} \Theta(\mathbf{x}, \mathcal{X})^\top \Theta^{-1}(\mathcal{X}, \mathcal{X}) (I - e^{-\lambda \Theta(\mathcal{X}, \mathcal{X}) t}) \mathcal{Y}$.
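A sketch of the resulting attack follows, reusing the spectral evaluation above. Here `grad_k_xX` is a hypothetical function returning the Jacobian $\nabla_\mathbf{x} \Theta(\mathbf{x}, \mathcal{X})$, which in practice would come from automatic differentiation (e.g. via a library such as Neural Tangents); the names are ours, for illustration only.

```python
import numpy as np

def ntk_fgsm(x, y, grad_k_xX, K_XX, Y, t, lr, eps):
    # One-step NTK attack of Eq. (ntk-fgsm-binary). grad_k_xX(x) must
    # return the (d, n) Jacobian nabla_x Theta(x, X); in practice this
    # is obtained by automatic differentiation.
    lam, V = np.linalg.eigh(K_XX)
    decay = 1.0 - np.exp(-lr * lam * t)
    coeffs = V @ (decay / lam * (V.T @ Y))   # same vector as in the prediction
    grad_f = grad_k_xX(x) @ coeffs           # nabla_x f_t(x), shape (d,)
    return x - y * eps * np.sign(grad_f)
```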
One can conceive other ways to generate adversarial perturbations for the kernel, either by changing the loss function (as previously done for neural networks, e.g. [@CaWa17]) or through a Taylor expansion around the test input, and we present such alternative derivations in Appendix [8](#App:fgsm_ntk){reference-type="ref" reference="App:fgsm_ntk"}. In practice, however, we observe little difference between these alternatives and the approach presented here.
Predictions from NTK theory for infinitely wide neural networks have been used successfully for their large finite width counterparts, so it seems reasonable to conjecture that adversarial perturbations generated via the kernel as in Eq. [\[eq:ntk-fgsm-binary\]](#eq:ntk-fgsm-binary){reference-type="eqref" reference="eq:ntk-fgsm-binary"} resemble those directly computed for the corresponding neural net as per Eq. [\[eq:fgsm\]](#eq:fgsm){reference-type="eqref" reference="eq:fgsm"}. In particular, this would imply that adversarial perturbations derived from the NTK should not only fool the kernel machine itself, but also lead wide neural nets to misclassify.
While similar transfer results in different contexts have been observed indirectly, via the *effects* of the perturbation on metrics like accuracy [@YuWu21; @Ngu+21], we aim to look deeper and compare perturbations *directly*. High similarity would imply that *any* gradient-based white-box attack on the neural net can be successfully mimicked by a "black-box" kernel-derived attack.
**Setting**. To this end, we train multiple two-layer neural networks on image classification tasks extracted from MNIST and CIFAR-10 and compare adversarial examples generated by Eqs. [\[eq:fgsm\]](#eq:fgsm){reference-type="eqref" reference="eq:fgsm"} (attacking the neural network) and [\[eq:ntk-fgsm-binary\]](#eq:ntk-fgsm-binary){reference-type="eqref" reference="eq:ntk-fgsm-binary"} (attacking the kernel). The networks are trained with a small learning rate and are sufficiently large, so they lie close to the NTK regime.
We track cosine similarity between the gradients of the loss from the NTK predictions and the gradients from the actual neural net as training evolves. Then, we generate adversarial perturbations from both the neural net and the kernel machine, and test whether those produced by the latter can fool the former. Full experimental details can be found in Appendix [\[App:bbox\]](#App:bbox){reference-type="ref" reference="App:bbox"}.
**Results**. Our experiments confirm a very strong alignment of loss gradients from the neural nets and the NTK across the whole duration of training, as can be seen in Fig. [\[fig:finite_width_gradients\]](#fig:finite_width_gradients){reference-type="ref" reference="fig:finite_width_gradients"} (top). Then, as expected, kernel-generated attacks produce a similar drop in accuracy throughout training as the networks' own white-box attacks, eventually driving robust accuracy to $0\%$, as seen in Fig. [\[fig:finite_width_gradients\]](#fig:finite_width_gradients){reference-type="ref" reference="fig:finite_width_gradients"} (bottom). We reproduce these plots for MNIST in Appendix [\[App:bbox\]](#App:bbox){reference-type="ref" reference="App:bbox"}, leading to similar conclusions.
When concerned with security aspects of neural nets, adversarial attacks are mainly characterized as either *white-box* or *black-box* attacks [@Pap+17]. White-box attacks assume full access to the neural network and in particular its weights; prominent examples include FGSM/PGD attacks. Black-box attacks, on the other hand, can only *query* the model to try to infer the loss gradient, either through training separate surrogate models [@PMG16] or through carefully crafted input-output pairs fed to the target model [@Che+17; @Ily+18; @And+20]. NTK theory and the experiments of this section suggest a threat model in which the attacker requires neither access to the model or its weights, nor training of a substitute model. For fixed architecture and training data, all the information required for the computation of Eq. [\[eq:ntk-fgsm-binary\]](#eq:ntk-fgsm-binary){reference-type="eqref" reference="eq:ntk-fgsm-binary"} is available at initialization, making the "NTK-attack" akin to a "training free" substitution attack and, at least in the kernel regime for wide nets considered here, as effective as white-box attacks.
This close connection between adversarial perturbations from the kernel and the corresponding neural net gives us the opportunity to bring to bear kernel tools on the study of adversarial robustness and its relation to features in a more direct fashion. Several recent works leverage properties of the NTK, and specifically its spectrum, to study aspects of approximation and generalization in neural networks [@Aro+19a; @Bas+19; @BiMa19; @Bas+20]. Here we show how the spectrum relates to robustness and helps to clarify the notion of robust/non-robust features.
We define *features* induced by the eigendecomposition of the Gram matrix $\Theta(\mathcal{X}, \mathcal{X}) = \sum_{i = 1}^n \lambda_i \mathbf{v}_i \mathbf{v}_i^\top$. We will be most interested in the *end* of training, when the model has access to all the features it can extract from the training data $\mathcal{X}$. As $t \to \infty$, Eq. [\[eq:kernel_prediction\]](#eq:kernel_prediction){reference-type="eqref" reference="eq:kernel_prediction"} becomes $f_{\infty} (\mathbf{x}) = \Theta(\mathbf{x}, \mathcal{X})^\top \Theta(\mathcal{X}, \mathcal{X})^{-1} \mathcal{Y}$ and can be decomposed as $f_{\infty}(\mathbf{x}) = \Theta(\mathbf{x}, \mathcal{X})^\top \sum_{i = 1}^n \lambda_i^{-1} \mathbf{v}_i \mathbf{v}_i^\top \mathcal{Y} = \sum_{i = 1}^n f^{(i)} (\mathbf{x})$, where $$\begin{equation}
\label{eq:ntkfeat_definition}
f^{(i)}: \mathbb{R}^d \to \mathbb{R}, \; f^{(i)} (\mathbf{x}) := \lambda_i^{-1} \Theta(\mathbf{x}, \mathcal{X})^\top \mathbf{v}_i \mathbf{v}_i^\top \mathcal{Y}.
\end{equation}$$
Each $f^{(i)}$ can be seen as a *unique feature* captured from the (training) data. Note that these functions map the input to the output space, thus matching the definitions of Sec. [2.2](#ssec:feats){reference-type="ref" reference="ssec:feats"}. Also observe that all $f^{(i)}$'s jointly recover the original prediction of the model, while each one, intuitively, should contribute something different to it.
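In code, the whole feature bank can be read off from one eigendecomposition of the Gram matrix. The sketch below is illustrative (our naming, assuming a positive-definite $\Theta(\mathcal{X}, \mathcal{X})$); truncating the returned vector to its first entries corresponds to keeping only the top-spectrum features discussed in the remainder of this section.

```python
import numpy as np

def ntk_features(k_xX, K_XX, Y):
    # Values of the feature functions f^{(i)}(x) of Eq. (ntkfeat_definition),
    # ordered from the largest eigenvalue down; by construction they sum to
    # the end-of-training prediction f_inf(x).
    lam, V = np.linalg.eigh(K_XX)
    order = np.argsort(lam)[::-1]        # sort the spectrum: top features first
    lam, V = lam[order], V[:, order]
    # i-th entry: Theta(x, X)^T v_i v_i^T Y / lam_i
    return (k_xX @ V) * (V.T @ Y) / lam
```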
Importantly, these features induce a decomposition of the gradient of the loss into parts, each representing gradients of a unique feature, as already advertised in Fig. [1](#fig:grad_decompose){reference-type="ref" reference="fig:grad_decompose"}. The binary case is particularly elegant as it gives rise to a linear decomposition of the gradient as $$\begin{equation}
\nabla_\mathbf{x} \mathcal{L} (f_{\infty}(\mathbf{x}), y) = \sum_{i = 1}^n \alpha_i \nabla_\mathbf{x} \mathcal{L} (f^{(i)}(\mathbf{x}), y),
\end{equation}$$ for some $\alpha_i$ depending on $\mathbf{x}$ and $y$ (see Appendix [10](#App:ntk_feats){reference-type="ref" reference="App:ntk_feats"}). But if the $f^{(i)}$'s are features, what do they look like?
With these definitions in place, we can now analyze the characteristics of features for commonly used architectures, leveraging their associated NTK. To be consistent with the previous section, we consider classification problems from MNIST (10 classes) and CIFAR-10 (car vs airplane). We compose the Gram matrices from the whole training dataset (50000 and 10000 samples, respectively), and compute the different feature functions $f^{(i)}$ using the eigendecomposition of the matrix. We estimate the **usefulness** of a feature $f^{(i)}$ by measuring its accuracy on a hold-out validation set, and its **robustness** by perturbing each input of this set, using an FGSM attack on feature $f^{(i)}$. We consider several different fully-connected and convolutional kernels, whose expressions are available through the Neural Tangents library [@Nov+20], built on top of JAX [@Brad+18]. We summarize our findings on how these features behave:
*Functions $f^{(i)}$ represent visually distinct features.* We visualize each feature $f^{(i)}$ by plotting its gradient with respect to $\mathbf{x}$. Fig. [2](#fig:features){reference-type="ref" reference="fig:features"} shows the gradients of the first 5 features for various architectures for a specific image from the CIFAR-10 dataset. We observe that features are fairly consistent across models, and they are interpretable: for example, the 4th feature seems to represent the dominant color of an image, while the 5th one seems to be capturing horizontal edges.
*Networks use both robust and non-robust features for prediction.* It has been speculated that neural networks trained in a standard (non-adversarial) fashion rely on both robust and non-robust features. Our feature definition in Eq. [\[eq:ntkfeat_definition\]](#eq:ntkfeat_definition){reference-type="eqref" reference="eq:ntkfeat_definition"} shows that this is indeed the case. The NTK of common neural networks contains both robust features that match human expectations, such as the ones depicted in Fig. [2](#fig:features){reference-type="ref" reference="fig:features"}, and features that are predictive of the true label while not being robust to adversarial perturbations of the input (Fig. [\[fig:non_rob_feats\]](#fig:non_rob_feats){reference-type="ref" reference="fig:non_rob_feats"}). Fig. [2](#fig:features){reference-type="ref" reference="fig:features"} depicts the first 100 features of a fully-connected and a convolutional tangent kernel in usefulness-robustness space. The upper left region of the plots shows a large number of useful, yet non-robust features. These features seem random to human observers.
*Robustness lies at the top.* We observe in Fig. [2](#fig:features){reference-type="ref" reference="fig:features"} that features corresponding to the top eigenvectors tend to be robust. This is consistent among different models and between the two datasets (see Appendix [10](#App:ntk_feats){reference-type="ref" reference="App:ntk_feats"}). Since these eigenvectors are the ones fitted first during training [@Aro+19a; @JHG18], it is no wonder that the loss gradient evolves from coherence to noise, as observed in Fig. [9](#fig:grads){reference-type="ref" reference="fig:grads"}. This also explains the apparent trade-off between robustness and accuracy of neural networks as training progresses: useful, robust features are fitted first, followed by useful, but non-robust ones. This ties in well with both empirical findings [@Rah+19] and theoretical case studies [@Bas+19; @BiMa19; @Bas+20] demonstrating that low frequency *functions* are fitted first during training and provide favorable generalization properties; we would associate robust features with these low-frequency parts (in function space).
*Robust features alone are not enough.* In light of these findings, it might be reasonable to conjecture that we could obtain robust models by retaining the robust features of the prediction, while discarding the non-robust ones. The spectral approach gives a principled way to disentangle features and create kernel machines keeping only the robust ones. Our results show that in general it is not possible to obtain non-trivial performance without compromising robustness in this fashion, strengthening the case for the necessity of data augmentation in the form of adversarial training (see Appendix [10.3](#ssec:notenough){reference-type="ref" reference="ssec:notenough"}).
Given the apparent necessity of adversarial training for producing robust models, how does it achieve this goal? To shed some light on this fundamental question, we depart from the "lazy" NTK regime and study the evolution of the NTK of adversarially trained models. For a neural network trained with gradient descent, as the learning rate $\eta \rightarrow 0$, the continuous time dynamics can be written as $$\begin{equation}
\frac{\partial w}{\partial t} = - \eta \nabla_w \mathcal{L} = -\eta \nabla_w f^\top \frac{\partial \mathcal{L}}{\partial f} \,\,\,\,\,\textnormal{and}\,\,\,\,\, \frac{\partial f}{\partial t} = - \eta \underbrace{\nabla_w f \nabla_w f^\top}_{\Theta_t} \frac{\partial \mathcal{L}}{\partial f}.
\end{equation}$$ In the NTK regime, this kernel $\Theta_t$ remains fixed at its initial value. Outside this regime, however, it has been demonstrated, both empirically [@Gei+19; @Fort+20; @Bar+21; @Jim+21] and theoretically [@ABP21], that $\Theta_t$ is not constant but changes as the weights move. In adversarial training, moreover, there is the additional effect that at each weight update the data changes as well. For that reason, understanding the dynamics of adversarial training requires tracking the evolution of a kernel $\Theta_t (\mathcal{X}_t, \mathcal{X}_t)$, where $\mathcal{X}_t$ denotes the current (mini) batch of training data. Notice that the tangent vector $\nabla_w f(\mathcal{X}_t)$ still describes the instantaneous change of $f$ on the current batch of data; thus $\Theta_t (\mathcal{X}_t, \mathcal{X}_t)$ is informative of the local geometry of the function space, justifying its value as a quantity to be measured during adversarial training.
We train a deep convolutional architecture on CIFAR-10 (multiclass) with standard (SGD) and adversarial training using PGD with an $\ell_\infty$ constraint. Full implementation details and accuracy curves can be found in Appendix [11](#App:empirical_ntk){reference-type="ref" reference="App:empirical_ntk"}, together with the reproduction of the same experiment on MNIST, where the observations are similar. We track the following quantities during training:
<figure id="fig:kernels">
|
| 141 |
+
<figure id="fig:kernel_screenshot">
|
| 142 |
+
<embed src="NTK_dynamics/kernel_images_3tight_1.pdf" style="width:99.0%" />
|
| 143 |
+
</figure>
|
| 144 |
+
<figure id="fig:norm_and_mass">
|
| 145 |
+
<p><embed src="NTK_dynamics/norm_and_mass_cifar10_1.pdf" style="width:99.0%" /> <span id="fig:norm_and_mass" data-label="fig:norm_and_mass"></span></p>
|
| 146 |
+
</figure>
|
| 147 |
+
<figcaption><strong>Left</strong>: Kernel Matrices for a mini batch of size 256. Left to Right: Kernel at initialization, Kernel after standard training, Kernel after adversarial training (20 PGD steps). The standard kernel grows significantly more than the adversarial one. <strong>Right</strong>: (a) Kernel Frobenius norm evolution, and (b) concentration on the top 20 eigenvalues during standard and adversarial training. Setting: CIFAR10, <span class="math inline"><em>β</em><sub>β</sub>β=β8/255</span>.</figcaption>
|
| 148 |
+
</figure>
|
| 149 |
+
|
| 150 |
+
**Kernel distance.** We compare two kernels using a *scale invariant distance*, which quantifies the relative rotation between them, as used in other works studying NTK dynamics (e.g. [@Fort+20]): $$\begin{equation}
\label{eq:rot}
d(\Theta_i, \Theta_j) = 1 - \frac{\mathrm{Tr}(\Theta_i \Theta_j^\top)}{\sqrt{\mathrm{Tr}(\Theta_i \Theta_i^\top)} \sqrt{\mathrm{Tr}(\Theta_j \Theta_j^\top)}}.
\end{equation}$$ **Polar dynamics**. Zooming in on the change that the initial kernel undergoes, we define a *polar space* on which we measure the movement of the kernel: $$\begin{equation}
\label{eq:polar}
\begin{split}
r_t = \frac{\| \Theta_t - \Theta_0 \|_F}{\| \Theta_f - \Theta_0 \|_F}, \; \; \;
\theta_t = \arccos\left(1 - d(\Theta_t, \Theta_0)\right),
\end{split}
\end{equation}$$ where $\Theta_0, \Theta_f$ are the initial and final kernel, respectively. Fig. [7](#fig:polarrotation){reference-type="ref" reference="fig:polarrotation"} presents a heatmap of kernel distances at different time steps for both standard and adversarial training, as well as both training trajectories in polar space.
<figure id="fig:polarrotation">
|
| 162 |
+
<figure id="fig:kernelrotations">
|
| 163 |
+
<p><embed src="NTK_dynamics/kernel_distance_combined_2.pdf" /> <span id="fig:kernelrotations" data-label="fig:kernelrotations"></span></p>
|
| 164 |
+
</figure>
|
| 165 |
+
<figure>
|
| 166 |
+
<embed src="NTK_dynamics/polars_combined_hor_tight_1.pdf" />
|
| 167 |
+
</figure>
|
| 168 |
+
<figcaption><strong>Left:</strong> Rotation (Eq.Β <a href="#eq:rot" data-reference-type="eqref" data-reference="eq:rot">[eq:rot]</a>) of the empirical NTK during standard, and adversarial training. Left to right: MNIST, standard, MNIST adversarial, CIFAR standard, CIFAR adversarial. <strong>Right:</strong> Kernel trajectories in polar space (Eq.Β <a href="#eq:polar" data-reference-type="eqref" data-reference="eq:polar">[eq:polar]</a>) for MNIST (left) and CIFAR10 (right). Darker colors indicate earlier epochs.</figcaption>
|
| 169 |
+
</figure>
|
| 170 |
+
|
| 171 |
+
**Concentration on subspaces.** To quantify weight concentration on the top region of the spectrum, we track the (normalized) Frobenius norm of subspaces as $\sum_{i = 1}^p \lambda_i^2 /\sum_{i = 1}^n \lambda_i^2$, for various cut-offs $p$, where we have indexed the eigenvalues from largest to smallest. Fig. [5](#fig:kernels){reference-type="ref" reference="fig:kernels"} depicts concentration on the top 20 eigenvalues during training.
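For reference, the three diagnostics tracked in this section amount to a few lines of NumPy. This is a sketch under our own naming, not code from the released repository:

```python
import numpy as np

def kernel_distance(K1, K2):
    # Scale-invariant rotation distance of Eq. (rot).
    return 1.0 - np.trace(K1 @ K2.T) / (
        np.sqrt(np.trace(K1 @ K1.T)) * np.sqrt(np.trace(K2 @ K2.T)))

def polar_coords(K_t, K_0, K_f):
    # Polar coordinates of Eq. (polar): normalized distance travelled from
    # the initial kernel, and the accumulated rotation angle.
    r = np.linalg.norm(K_t - K_0, "fro") / np.linalg.norm(K_f - K_0, "fro")
    theta = np.arccos(1.0 - kernel_distance(K_t, K_0))
    return r, theta

def top_concentration(K, p=20):
    # Normalized Frobenius mass of the top-p eigenvalues,
    # i.e. sum_{i<=p} lam_i^2 / sum_i lam_i^2.
    lam = np.sort(np.linalg.eigvalsh(K))[::-1]
    return np.sum(lam[:p] ** 2) / np.sum(lam ** 2)
```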
Our findings show that, similar to what has been reported in prior work [@Fort+20], the kernel rotates significantly at the beginning of training and then slows down, for both standard and adversarial training. However, in the latter case, this second phase begins a lot earlier. As Fig. [7](#fig:polarrotation){reference-type="ref" reference="fig:polarrotation"} illuminates, the kernel moves a greater distance than during standard training, but after a few epochs it stops both rotating and expanding; note that this is not the case for standard training, where the kernel increases its magnitude substantially later in training and in fact grows to have a norm orders of magnitude larger than during adversarial training (see Fig. [5](#fig:kernels){reference-type="ref" reference="fig:kernels"}). In hindsight, this behavior is perhaps not surprising, as each element of the kernel measures similarity between data points, and a robust machine should be more conservative when estimating similarity. The observation that during adversarial training the kernel becomes static relatively quickly might indicate that *linear* dynamics govern the later phase of adversarial training. It has been observed in previous works [@Gei+19; @Fort+20; @Jim+21] that linearization after a few initial epochs of rapid rotation often closely matches the performance of full network training. Our results indicate that a similar phenomenon occurs even under the data shift of adversarial training (see Appendix [11.1](#App:linear_advtrain){reference-type="ref" reference="App:linear_advtrain"} for a study of linearized adversarial training), opening avenues to design robust machines more efficiently.
Moreover, endowed with the knowledge that, at least for kernels trained on static data, robust features lie at the top, we study the polar dynamics of the top subspace only (see Fig. [17](#fig:top_space_dynamics){reference-type="ref" reference="fig:top_space_dynamics"}) and observe substantial rotation in this space, suggesting that robust features are learned early on not only during standard, but in particular during adversarial training. Even more interestingly, Fig. [5](#fig:kernels){reference-type="ref" reference="fig:kernels"} demonstrates that not only do the robust features change, but their relative weight, as measured by the concentration on the top-20 space, simultaneously increases and remains large; in fact, it becomes significantly larger than during standard training. As each eigenvalue weights the importance of the corresponding feature in the final prediction, this implies that the kernel "learns" to depend more on the most robust features.
Put together, these findings reveal different kernel dynamics during standard and adversarial training: the kernel rotates much faster, expands much less and becomes "lazy" much earlier than during standard training. Fully understanding the properties of converged adversarial kernels remains an important avenue for future work, that might allow to design faster algorithms for robust classification.
We have studied adversarial robustness through the lens of the NTK across multiple architectures and data sets, both in the idealized NTK regime and the "rich feature" regime. Connecting the spectrum of the kernel with fundamental properties characterizing robustness, our phenomenological study reveals a universal picture of the emergence of robust and non-robust features and their role during training. There are certain limitations and unexplored themes in our work; Sec. [3](#blackbox_attack){reference-type="ref" reference="blackbox_attack"} argues that transferable attacks from the NTK may be as effective as white-box attacks, but this warrants an in-depth study across architectures, kernels and data sets (which has not been the main focus of this work). Sec. [4](#ntk_feats){reference-type="ref" reference="ntk_feats"} visualizes features for fairly simple models, since the computation of kernel derivatives is a costly procedure. It would be interesting to use our framework to visualize features from more complicated architectures. Finally, our work in Sec. [5](#sec_dynamics){reference-type="ref" reference="sec_dynamics"} invites more research on the kernel at the end of adversarial training, similar to what has been done for standard models [@Long21].
We hope that our viewpoint can motivate further theoretical understanding of adversarial phenomena (such as transferability) and, through further analysis of the kernels of robust deep neural networks, the design of better and/or faster adversarial learning algorithms.
2304.01042/main_diagram/main_diagram.drawio
ADDED

The diff for this file is too large to render. See raw diff

2304.01042/main_diagram/main_diagram.pdf
ADDED

Binary file (71.7 kB). View file

2304.01042/paper_text/intro_method.md
ADDED

@@ -0,0 +1,165 @@
# Introduction
<figure id="fig:overview" data-latex-placement="t">
|
| 4 |
+
<div class="center">
|
| 5 |
+
<img src="figures/overview.png" />
|
| 6 |
+
</div>
|
| 7 |
+
<figcaption>Overview of DivClust. Assuming clusterings <span class="math inline"><em>A</em></span> and <span class="math inline"><em>B</em></span>, the proposed diversity loss <span class="math inline"><em>L</em><sub><em>d</em><em>i</em><em>v</em></sub></span> calculates their similarity matrix <span class="math inline"><em>S</em><sub><em>A</em><em>B</em></sub></span> and restricts the similarity between cluster pairs to be lower than a similarity upper bound <span class="math inline"><em>d</em></span>. In the figure, this is represented by the model adjusting the cluster boundaries to produce more diverse clusterings. Best seen in color.</figcaption>
|
| 8 |
+
</figure>
|
| 9 |
+
|
| 10 |
+
The exponentially increasing volume of visual data, along with advances in computing power and the development of powerful deep neural network architectures, has revived interest in unsupervised learning with visual data. Deep clustering in particular has been an area where significant progress has been made in recent years. Existing works focus on producing a single clustering, which is evaluated in terms of how well that clustering matches the ground truth labels of the dataset in question. However, consensus, or ensemble, clustering remains under-studied in the context of deep clustering, despite the fact that it has been found to consistently improve performance over single clustering outcomes [@ensemble_survey; @zhou2021self; @ghaemi2009survey; @liu2019consensus].
Consensus clustering consists of two stages, specifically generating a set of base clusterings, and then applying a consensus algorithm to aggregate them. Identifying what properties ensembles should have in order to produce better outcomes in each setting has been an open problem [@golalipour2021clustering]. However, research has found that inter-clustering diversity within the ensemble is an important, desirable factor [@pividori2016diversity; @hamidi2022impact; @gullo2009diversity; @fern2003random; @iam2011link], along with individual clustering quality, and that diversity should be moderated [@moderate; @moderate2; @pividori2016diversity]. Furthermore, several works suggest that controlling diversity in ensembles is important toward studying its impact and determining its optimal level in each setting [@moderate; @pividori2016diversity].
The typical way to produce diverse clusterings is to promote diversity by clustering the data multiple times with different initializations/hyperparameters or subsets of the data [@ghaemi2009survey; @ensemble_survey]. This approach, however, does not guarantee or control the degree of diversity, and is computationally costly, particularly in the context of deep clustering, where it would require the training of multiple models. Some methods have been proposed that find diverse clusterings by including diversity-related objectives in the clustering process, but those methods have only been applied to clustering precomputed features and cannot be trivially incorporated into deep learning frameworks. Other methods tackle diverse clustering by creating and clustering diverse feature subspaces, including some that apply this approach in the context of deep clustering [@DiMVMC; @enrc]. Those methods, however, do not control inter-clustering diversity. Rather, they influence it indirectly through the properties of the subspaces they create. Furthermore, existing methods have typically focused on producing orthogonal clusterings or identifying clusterings based on independent attributes of relatively simple visual data (e.g. color/shape). Consequently, they are oriented toward *maximizing* inter-clustering diversity, which is not appropriate for consensus clustering [@moderate; @moderate2; @pividori2016diversity].
To tackle this gap, namely generating multiple clusterings with deep clustering frameworks efficiently and with the desired degree of diversity, we propose DivClust. Our method can be straightforwardly incorporated into existing deep clustering frameworks to learn multiple clusterings whose diversity is *explicitly controlled*. Specifically, the proposed method uses a single backbone for feature extraction, followed by multiple projection heads, each producing cluster assignments for a corresponding clustering. Given a user-defined diversity target, in this work expressed in terms of the average NMI between clusterings, DivClust restricts inter-clustering similarity to be below an appropriate, dynamically estimated threshold. This is achieved with a novel loss component, which estimates inter-clustering similarity based on soft cluster assignments produced by the model, and penalizes values exceeding the threshold. Importantly, DivClust introduces minimal computational cost and requires no hyperparameter tuning with respect to the base deep clustering framework, which makes its use simple and computationally efficient.
Experiments on four datasets (CIFAR10, CIFAR100, Imagenet-10, Imagenet-Dogs) with three recent deep clustering methods (IIC [@IIC], PICA [@PICA], CC [@li2021contrastive]) show that DivClust can effectively control inter-clustering diversity without reducing the quality of the clusterings. Furthermore, we demonstrate that, with the use of an off-the-shelf consensus clustering algorithm, the diverse base clusterings learned by DivClust produce consensus clustering solutions that outperform the base frameworks, effectively improving them with minimal computational cost. Notably, despite the sensitivity of consensus clustering to the properties of the ensemble, our method is robust across various diversity levels, outperforming baselines in most settings, often by large margins. Our work then provides a straightforward way for improving the performance of deep clustering frameworks, as well as a new tool for studying the impact of diversity in deep clustering ensembles [@pividori2016diversity].
In summary, DivClust: a) can be incorporated in existing deep clustering frameworks in a plug-and-play way with very small computational cost, b) can explicitly and effectively control inter-clustering diversity to satisfy user-defined targets, and c) learns clusterings that can improve the performance of deep clustering frameworks via consensus clustering.
# Method

**Overview:** Our method consists of two components: a) a novel loss function that can be incorporated in deep clustering frameworks to control inter-clustering diversity by applying a threshold to cluster-wise similarities, and b) a method for dynamically estimating that threshold so that the clusterings learned by the model are sufficiently diverse, according to a user-defined metric.
More concretely, we assume a deep clustering model that learns $K$ clusterings (typically a backbone encoder followed by $K$ projection heads), a deep clustering framework and its loss function $L_{main}$, and a diversity target $D^T$ set by the user, expressed as an upper bound to inter-clustering similarity[^2] (i.e. the maximum acceptable similarity). In order to control the inter-clustering similarity $D^R$ of the learned clusterings so that $D^R\leq D^T$, we propose a complementary loss $L_{div}$. Specifically, given soft cluster assignments for a pair of clusterings $A,B \in K$, we define the inter-clustering similarity matrix $S_{AB}\in \mathbb{R}^{C_A\times C_B}$, where $C_A$ and $C_B$ are the numbers of clusters in the two clusterings, and $S_{AB}(i,j)\in [0,1]$ measures the similarity between clusters $i\in C_A$ and $j\in C_B$. It follows that decreasing the values of $S_{AB}$ reduces the similarity between the clusters of $A$ and $B$, and therefore increases their diversity. Accordingly, $L_{div}$ utilizes $S_{AB}$ in order to restrict inter-clustering similarity to be under an upper similarity bound $d$. The value of $d$ is dynamically adjusted during training, decreasing when $D^R>D^T$ and increasing when $D^R\leq D^T$, thereby tightening and relaxing the loss function so that, overall and throughout training, inter-clustering similarity $D^R$ remains at or under the desired level $D^T$.
<figure id="fig:illustrations">
|
| 29 |
+
<figure id="fig:PA">
|
| 30 |
+
<div class="center">
|
| 31 |
+
<img src="figures/pa.png" style="width:80.0%" />
|
| 32 |
+
</div>
|
| 33 |
+
<figcaption>Cluster assignments <span class="math inline"><em>P</em><sub><em>A</em></sub></span> for clustering A</figcaption>
|
| 34 |
+
</figure>
|
| 35 |
+
<figure id="fig:PB">
|
| 36 |
+
<div class="center">
|
| 37 |
+
<img src="figures/pb.png" style="width:80.0%" />
|
| 38 |
+
</div>
|
| 39 |
+
<figcaption>Cluster assignments <span class="math inline"><em>P</em><sub><em>B</em></sub></span> for clustering B</figcaption>
|
| 40 |
+
</figure>
|
| 41 |
+
<figure id="fig:SAB">
|
| 42 |
+
<div class="center">
|
| 43 |
+
<img src="figures/sab.png" style="width:80.0%" />
|
| 44 |
+
</div>
|
| 45 |
+
<figcaption>Similarity matrix <span class="math inline"><em>S</em><sub><em>A</em><em>B</em></sub></span></figcaption>
|
| 46 |
+
</figure>
|
| 47 |
+
<figcaption>Examples of synthetic cluster assignments <span class="math inline"><em>P</em><sub><em>A</em></sub></span>, <span class="math inline"><em>P</em><sub><em>B</em></sub></span> and similarity matrix <span class="math inline"><em>S</em><sub><em>A</em><em>B</em></sub></span>. Note that clusters <span class="math inline"><em>i</em>βββ<em>A</em></span> and <span class="math inline"><em>j</em>βββ<em>B</em></span> are softly assigned the same samples. Correspondingly, their similarity score <span class="math inline"><em>S</em><sub><em>A</em><em>B</em></sub>(<em>i</em>,β<em>j</em>)</span> is high (highlighted with red inΒ <a href="#fig:SAB" data-reference-type="ref+label" data-reference="fig:SAB">4</a>). Best seen in color.</figcaption>
|
| 48 |
+
</figure>
|
| 49 |
+
|
| 50 |
+
**Defining the inter-clustering similarity matrix:** Our method assumes a standard deep clustering architecture, consisting of an encoder $f$, followed by $K$ projection heads $h_1,..., h_K$, each of which produces assignments for a clustering $k$. Specifically, let $X$ be a set of $N$ unlabeled samples. The encoder maps each sample $x \in X$ to a representation $f(x)$, and each projection head $h_k$ maps $f(x)$ to $C_k$ clusters, so that $p_k(x)=h_k(f(x)) \in \mathbb{R}^{C_k\times 1}$ represents a probability assignment vector mapping sample $x \in X$ to the $C_k$ clusters of clustering $k$. Without loss of generality, we assume that $C=C_k \; \forall k\in K$. Each clustering can then be represented by a cluster assignment matrix $P_k(X)=[p_k(x_1), p_k(x_2), ..., p_k(x_N)]\in \mathbb{R}^{C\times N}$. The column $p_k(n)$, that is, the probability assignment vector for the $n$-th sample, encodes the degrees to which sample $x_n$ is assigned to different clusters. The row vector $q_k(i)~\in \mathbb{R}^{N}$ shows which samples are softly assigned to cluster $i\in C$. We refer to $q_k(i)$ as the cluster membership vector.
To quantify the similarity between clusterings $A$ and $B$ we define the inter-clustering similarity matrix $S_{AB} \in \mathbb{R}^{C\times C}$. We define each element $S_{AB}(i,j)$ as the cosine similarity between the cluster membership vector $q_A(i)$ of cluster $i\in A$ and the cluster membership vector $q_B(j)$ of cluster $j\in B$: $$\begin{align}
S_{AB}(i,j)=\frac{q_A(i)\cdot q_B(j)}{||q_A(i)||_2||q_B(j)||_2}
\label{eq:sab}
\end{align}$$ This measure expresses the degree to which samples in the dataset are assigned similarly to clusters $i$ and $j$. Specifically, $S_{AB}(i,j)=0$ if $q_A(i)\perp q_B(j)$ and $S_{AB}(i,j)=1$ if $q_A(i)=q_B(j)$. It is, therefore, a differentiable measure of the similarity of clusters $i$ and $j$.
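In a typical PyTorch implementation, Eq. [\[eq:sab\]](#eq:sab){reference-type="ref+label" reference="eq:sab"} reduces to a row-normalized matrix product. The sketch below is illustrative only (the names are ours, not the official implementation's):

```python
import torch
import torch.nn.functional as F

def similarity_matrix(P_A, P_B):
    # S_AB of Eq. (sab). P_A: (C_A, N) and P_B: (C_B, N) soft assignment
    # matrices, whose rows are the cluster membership vectors q(i).
    Q_A = F.normalize(P_A, dim=1)  # scale each membership vector to unit l2 norm
    Q_B = F.normalize(P_B, dim=1)
    return Q_A @ Q_B.T             # (C_A, C_B) matrix of cosine similarities
```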
**Defining the loss function:** Based on the inter-clustering similarity matrix $S_{AB}$, we define DivClust's loss to softly enforce that a clustering $A$ does not have an *aggregate* cluster similarity with a clustering $B$ greater than a similarity upper bound $d$. The aggregate similarity $S_{AB}^{aggr}$ is defined as the average similarity of clustering $A$'s clusters with their most similar cluster of clustering $B$ ([\[eq:s_aggr\]](#eq:s_aggr){reference-type="ref+label" reference="eq:s_aggr"}). Using this metric, we propose $L_{div}$ ([\[eq:global_cluster\]](#eq:global_cluster){reference-type="ref+label" reference="eq:global_cluster"}), a loss that regulates diversity between clusterings $A$ and $B$ by enforcing that $S_{AB}^{aggr}<d$, for $d\in [0,1]$. It is clear from [\[eq:global_cluster\]](#eq:global_cluster){reference-type="ref+label" reference="eq:global_cluster"} that $S_{AB}^{aggr}<d\Rightarrow L_{div}(A,B)=0$, in which case the diversity requirement is satisfied and the loss has no impact. Conversely, $S_{AB}^{aggr}\geq d\Rightarrow L_{div}(A,B)>0$, in which case the loss requires that inter-clustering similarity decreases.

$$\begin{equation}
S_{AB}^{aggr}=\frac{1}{C}\sum_{i=1}^{C}{\underset{j}{max}(S_{AB}(i,j))}
\label{eq:s_aggr}
\end{equation}$$

$$\begin{equation}
L_{div}(A,B)=\left[S_{AB}^{aggr}-d\right]_{+}
\label{eq:global_cluster}
\end{equation}$$
Having defined the diversity loss $L_{div}$ between two clusterings, we extend it to $K$ clusterings and combine it with the base deep clustering framework's objective. For a clustering $k\in K$, we denote with $L_{main}(k)$ the loss of the base deep clustering framework for that clustering, and with $L_{div}(k,k')$ the diversity controlling loss between clusterings $k$ and $k'$. We present the joint loss $L_{joint}(k)$ for each clustering $k$ in [\[eq:total_loss_k\]](#eq:total_loss_k){reference-type="ref+label" reference="eq:total_loss_k"}, where $L_{main}(k)$ depends on the cluster assignment matrix $P_k$, while $L_{div}(k,k')$ depends on $P_k$ and $P_{k'}$. Accordingly, the model's training loss $L_{total}$, seen in [\[eq:total_loss\]](#eq:total_loss){reference-type="ref+label" reference="eq:total_loss"}, is the average of $L_{joint}$ over all clusterings.

$$\begin{equation}
L_{joint}(k)=L_{main}(k) + \frac{1}{K-1}\sum_{k'=1,k'\neq k}^{K}{L_{div}(k,k')}
\label{eq:total_loss_k}
\end{equation}$$

$$\begin{equation}
L_{total}=\frac{1}{K}\sum_{k=1}^{K}{L_{joint}(k)}
\label{eq:total_loss}
\end{equation}$$
The loss $L_{total}$ is therefore a combination of the base deep clustering framework's loss $L_{main}$ for each clustering $k\in K$ and the loss $L_{div}$, which is used to control inter-clustering diversity. The proposed loss formulation is applicable to any deep clustering framework that produces cluster assignments through the model (as opposed to frameworks using offline methods such as MIX'EM [@MIXEM]), which covers the majority of deep clustering frameworks outlined in [2](#sec:related_works){reference-type="ref+label" reference="sec:related_works"}.
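A compact PyTorch sketch of Eqs. [\[eq:s_aggr\]](#eq:s_aggr){reference-type="ref+label" reference="eq:s_aggr"}-[\[eq:total_loss\]](#eq:total_loss){reference-type="ref+label" reference="eq:total_loss"} follows. It assumes the base framework's losses have already been computed, and all names are ours rather than the official implementation's:

```python
import torch
import torch.nn.functional as F

def div_loss(P_A, P_B, d):
    # L_div of Eq. (global_cluster): hinge on the aggregate similarity of
    # Eq. (s_aggr), i.e. the mean best-match similarity of A's clusters in B.
    S = F.normalize(P_A, dim=1) @ F.normalize(P_B, dim=1).T
    s_aggr = S.max(dim=1).values.mean()
    return torch.relu(s_aggr - d)  # zero whenever S_AB^aggr < d

def total_loss(main_losses, P, d):
    # Eqs. (total_loss_k)/(total_loss): average the joint loss over the K
    # clusterings; P is a list of K soft assignment matrices and
    # main_losses holds the corresponding base-framework losses.
    K = len(P)
    joint = [
        main_losses[k]
        + sum(div_loss(P[k], P[j], d) for j in range(K) if j != k) / (K - 1)
        for k in range(K)
    ]
    return sum(joint) / K
```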
**Dynamic upper bound *d*:** The proposed loss $L_{div}$ controls inter-clustering diversity by restricting the values of $S_{AB}$ according to the similarity upper bound $d$. However, the values of $S_{AB}$ are calculated based on the cosine similarity of *soft* cluster assignments. This means that pairs of cluster assignment vectors $i$, $j$ will have different similarity values $S_{AB}(i,j)$ depending on their sharpness, even if they point to the same cluster in terms of their corresponding hard assignment. It follows that $S_{AB}$ and, accordingly, the impact of $d$, are dependent on the confidence of cluster assignments and vary throughout training and between experiments (as factors like the number of clusters and model capacity influence the confidence of cluster assignments). Therefore, $d$ is an ambiguous and unintuitive metric for users to define diversity targets with.
To tackle this issue and to provide a reliable and intuitive method for defining diversity objectives, we propose dynamically determining the value of the threshold $d$ during training. Concretely, let $D$ be an inter-clustering similarity metric chosen by the user. In this work, we use the average Normalized Mutual Information (NMI), a well-established metric for estimating inter-clustering similarity.
$$\begin{equation}
D = \frac{1}{K(K-1)/2}\sum_{k=1}^{K-1}{\sum_{k'=k+1}^{K}{NMI(P_k^h,P_{k'}^h)}}
\label{eq:aggr_nmi}
\end{equation}$$ where $P_k^h\in \mathbb{Z}^{N}$ is the hard cluster assignment vector for the $N$ samples in clustering $k\in K$ and $NMI(P_k^h,P_{k'}^h)$ represents the NMI between $k$ and $k'$. $D\in [0,1]$, with higher values indicating more similar clusterings.
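In practice, $D$ can be computed with an off-the-shelf NMI implementation; a minimal sketch, assuming scikit-learn is available:

```python
from itertools import combinations
from sklearn.metrics import normalized_mutual_info_score

def avg_nmi(hard_assignments):
    # D of Eq. (aggr_nmi): mean pairwise NMI over all K(K-1)/2 pairs of
    # clusterings, each given as a length-N vector of hard cluster indices.
    pairs = list(combinations(hard_assignments, 2))
    return sum(normalized_mutual_info_score(a, b) for a, b in pairs) / len(pairs)
```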
Assuming a user-defined similarity target $D^T$, expressed as a value of metric $D$, we denote with $D^R$ the measured inter-clustering similarity of the clusterings learned by the model, expressed in the same metric. DivClust's objective is to control inter-clustering diversity, which translates to learning clusterings such that $D^R\leq D^T$. Accordingly, appropriate thresholds $d$ must be used during training. Under the assumption that $D^R$ decreases monotonically w.r.t. $d$, we propose the following update rule for $d$: $$\begin{equation}
d_{s+1}=
\begin{cases}
max(d_s(1-m),0), & \text{if}\ D^R> D^T \\
min(d_s(1+m),1), & \text{if}\ D^R\leq D^T
\end{cases} ,
\label{eq:calc_d}
\end{equation}$$ where $d_s$ and $d_{s+1}$ are the values of the threshold $d$ for the current and the next steps, and $m\in (0,1)$ regulates the magnitude of the update steps. Following this update rule, we decrease $d$ when the measured inter-clustering similarity $D^R$ needs to decrease, and increase it otherwise. For computational efficiency, instead of calculating $D^R$ over the entire dataset in every training step, we do so every 20 iterations on a memory bank of $M=10,000$ cluster assignments -- the latter is updated at every step in a FIFO manner. We set the hyperparameter $m$ to $m=0.01$ in all experiments.
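The update rule itself is a two-line multiplicative scheme; a sketch with our own naming:

```python
def update_threshold(d, D_real, D_target, m=0.01):
    # Multiplicative update of Eq. (calc_d): tighten the similarity bound
    # when the measured similarity D^R exceeds the target D^T, relax it
    # otherwise, keeping d inside [0, 1] throughout.
    if D_real > D_target:
        return max(d * (1.0 - m), 0.0)
    return min(d * (1.0 + m), 1.0)
```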
:::: table*
::: small
+-----------+-------------+-------+-------+-----------+-----------+-----------+---------------+
| Framework | Clusterings | $D^T$ | $D^R$ | CNF       | Mean Acc. | Max. Acc. | DivClust Acc. |
+:=========:+:===========:+:=====:+:=====:+:=========:+:=========:+:=========:+:=============:+
| IIC       | 1           | \-    | \-    | 0.997     | 0.442     | 0.442     | 0.442         |
|           +-------------+-------+-------+-----------+-----------+-----------+---------------+
|           | 20          | 1\.   | 0.983 | 0.996     | 0.526     | 0.526     | 0.526         |
|           +-------------+-------+-------+-----------+-----------+-----------+---------------+
|           | 20          | 0.95  | 0.939 | **0.998** | 0.531     | 0.537     | 0.533         |
|           +-------------+-------+-------+-----------+-----------+-----------+---------------+
|           | 20          | 0.9   | 0.888 | 0.997     | 0.568     | 0.59      | 0.578         |
|           +-------------+-------+-------+-----------+-----------+-----------+---------------+
|           | 20          | 0.8   | 0.8   | 0.997     | **0.611** | **0.678** | 0.653         |
|           +-------------+-------+-------+-----------+-----------+-----------+---------------+
|           | 20          | 0.7   | 0.694 | 0.996     | 0.566     | 0.637     | **0.685**     |
+-----------+-------------+-------+-------+-----------+-----------+-----------+---------------+
| PICA      | 1           | \-    | \-    | **0.906** | 0.533     | 0.533     | 0.533         |
|           +-------------+-------+-------+-----------+-----------+-----------+---------------+
|           | 20          | 1\.   | 0.991 | 0.814     | 0.597     | 0.597     | 0.596         |
|           +-------------+-------+-------+-----------+-----------+-----------+---------------+
|           | 20          | 0.95  | 0.931 | 0.826     | 0.624     | 0.631     | 0.625         |
|           +-------------+-------+-------+-----------+-----------+-----------+---------------+
|           | 20          | 0.9   | 0.891 | 0.841     | **0.648** | 0.665     | 0.652         |
|           +-------------+-------+-------+-----------+-----------+-----------+---------------+
|           | 20          | 0.8   | 0.817 | 0.828     | 0.598     | 0.635     | 0.595         |
|           +-------------+-------+-------+-----------+-----------+-----------+---------------+
|           | 20          | 0.7   | 0.703 | 0.824     | 0.625     | **0.691** | **0.671**     |
+-----------+-------------+-------+-------+-----------+-----------+-----------+---------------+
| CC        | 1           | \-    | \-    | **0.936** | 0.764     | 0.764     | 0.764         |
|           +-------------+-------+-------+-----------+-----------+-----------+---------------+
|           | 20          | 1\.   | 0.976 | 0.934     | 0.763     | 0.763     | 0.763         |
|           +-------------+-------+-------+-----------+-----------+-----------+---------------+
|           | 20          | 0.95  | 0.946 | 0.934     | 0.762     | 0.773     | 0.76          |
|           +-------------+-------+-------+-----------+-----------+-----------+---------------+
|           | 20          | 0.9   | 0.9   | 0.931     | **0.794** | 0.818     | 0.789         |
|           +-------------+-------+-------+-----------+-----------+-----------+---------------+
|           | 20          | 0.8   | 0.814 | 0.93      | 0.762     | **0.847** | **0.819**     |
|           +-------------+-------+-------+-----------+-----------+-----------+---------------+
|           | 20          | 0.7   | 0.699 | 0.927     | 0.703     | 0.818     | 0.815         |
+-----------+-------------+-------+-------+-----------+-----------+-----------+---------------+
:::
::::
:::: small
::: {#tab:cc_diversity}
+-------+---------+----------+-------------+---------------+
| $D^T$ | CIFAR10 | CIFAR100 | ImageNet-10 | ImageNet-Dogs |
+:=====:+:=======:+:========:+:===========:+:=============:+
| 1\.   | 0.976   | 0.939    | 0.987       | 0.941         |
+-------+---------+----------+-------------+---------------+
| 0.95  | 0.946   | 0.926    | 0.948       | 0.945         |
+-------+---------+----------+-------------+---------------+
| 0.9   | 0.9     | 0.848    | 0.897       | 0.87          |
+-------+---------+----------+-------------+---------------+
| 0.8   | 0.814   | 0.806    | 0.807       | 0.795         |
+-------+---------+----------+-------------+---------------+
| 0.7   | 0.699   | 0.705    | 0.696       | 0.702         |
+-------+---------+----------+-------------+---------------+

: Avg. inter-clustering similarity scores $D^R$ (one column per dataset) for clustering sets produced by DivClust combined with CC for various diversity targets $D^T$. The objective of DivClust is that $D^R\leq D^T$.
:::
::::
2406.03919/main_diagram/main_diagram.drawio
ADDED

The diff for this file is too large to render. See raw diff

2406.03919/paper_text/intro_method.md
ADDED

@@ -0,0 +1,159 @@
# Introduction
|
| 2 |
+
|
| 3 |
+
The simulation of physical systems often involves solving Partial Differential Equations (PDEs), and machine learning-based surrogate models are increasingly used to address this challenging task [@deeponet-lu:2019; @graph-neural-operator-li; @galerkin-cao]. Utilizing ML for solving PDEs has several advantages, such as faster simulation time than classical numerical PDE solvers, differentiability of the surrogate models [@pdebench-takamoto:2022], and their applicability even when the underlying PDEs are not known exactly [@fno-li]. However, if knowledge about the PDEs is available, it can be incorporated into the model, as in Physics-Informed Neural Networks (PINNs; @pinn-raissi).
|
| 4 |
+
|
| 5 |
+
<figure id="fig:vcnf-teaser" data-latex-placement="t!bp">
|
| 6 |
+
<div class="center">
|
| 7 |
+
<embed src="figures/vcnef_teaser.pdf" />
|
| 8 |
+
</div>
|
| 9 |
+
<figcaption>Conditional Neural Field (CNeF) vs. the proposed Vectorized CNeF (VCNeF) for solving parameterized PDEs.</figcaption>
|
| 10 |
+
</figure>
|
| 11 |
+
|
| 12 |
+
Transformers [@transformers-vaswani] and their numerous variants are successfully used in natural language processing [@transfromers-bert-devlin], speech processing [@transformer-conformer-gulati], and computer vision [@transformers-images-dosovitskiy]. Due to their remarkable ability to model long-range dependencies in sequential data and their favorable scaling behavior, Transformers are used in an increasing number of additional applications. Transformer models have been gaining traction in Scientific Machine Learning (SciML) to model physical systems [@transformers-physical-systems-geneva:2020], solve PDEs [@galerkin-cao; @oformer-pde-li:2023; @scalable-transformer-pde-li:2023; @gnot-operator-learn-hao:2023], and pre-train multiphysics SciML foundation models [@multi-physics-pretrain-avit-mccabe:2023]. Meanwhile, recent advances in neural networks for computer graphics have introduced Neural Fields [@neural-fields-xie], which have proven to be an efficient method to solve PDEs [@siren-inr-sitzmann:2020; @crom-pde-yichen-chen:2023; @implicit-neural-spatial-repr-pde-chen:2023; @dynamics-aware-implicit-neural-repr-dino-yin:2023; @operator-learning-neural-fields-general-geometries-serrano:2023].
|
| 13 |
+
|
| 14 |
+
Despite these recent advances in neural architectures for PDE solving, current methods lack several of the characteristics of an ideal PDE solver: generalization to (i) different Initial Conditions (ICs) and (ii) PDE parameters, (iii) support for 1D, 2D, and 3D PDEs, (iv) stability over long rollouts, (v) temporal extrapolation, and (vi) spatial and temporal super-resolution capabilities, all with affordable cost, high speed, and accuracy.
|
| 15 |
+
|
| 16 |
+
Towards developing a model that encompasses these ideal characteristics, we propose the *Vectorized Conditional Neural Field* (VCNeF), a linear transformer-based conditional neural field that solves PDEs continuously in time, endowing the model with temporal as well as spatial Zero-Shot Super-Resolution (ZSSR) capabilities. The model introduces a new mechanism to condition a neural field on Initial Conditions (ICs) and PDE parameters to achieve generalization to both ICs and PDE parameter values not seen during training. While modeling the solution with neural fields such as PINNs naturally provides temporal and spatial ZSSR, these methods are inefficient since they must be queried separately for every temporal and spatial location in the domain. We achieve faster training and inference by vectorizing these computations on GPUs. Moreover, the proposed method explicitly models dependencies between multiple simultaneous spatio-temporal queries to the model.
|
| 17 |
+
|
| 18 |
+
Concretely, we focus on training and evaluating VCNeF on 1D, 2D, and 3D Initial Value Problems (IVPs) where an IC is given and one predicts multiple future timesteps, as this setting is best suited for real-world applications. The IC could be data from measurements, and longer rollouts are required to simulate the system under consideration. Additionally, we train our model on multiple PDE parameter values to evaluate its capability to generalize to unseen PDE parameter values.
|
| 19 |
+
|
| 20 |
+
In summary, we make the following contributions:
|
| 21 |
+
|
| 22 |
+
- A time-continuous transformer-based architecture that represents the solutions to PDEs as neural fields at any point in time, including times not encountered during training, which is accomplished by explicitly incorporating the query time.
|
| 23 |
+
|
| 24 |
+
- We empirically verify that VCNeFs generalize robustly to PDE parameter values not seen during training through effective parameter conditioning, while also possessing intrinsic capabilities for spatial and temporal zero-shot super-resolution.
|
| 25 |
+
|
| 26 |
+
- A model that naturally provides an implicit vectorization of the spatial coordinates, which allows for faster training and inference. It also allows computing the solution at multiple spatial points in one forward pass and exploits spatial dependencies instead of processing the points independently.
|
| 27 |
+
|
| 28 |
+
# Method
|
| 29 |
+
|
| 30 |
+
In this section, we formally introduce the problem of solving parametric PDEs using neural surrogate models.
|
| 31 |
+
|
| 32 |
+
Following @mp-pde-brandstetter, PDEs over the time dimension, denoted as $t \in [0, T]$, and over multiple spatial dimensions, indicated by $\boldsymbol{x} = (x_x, x_y, x_z, \hdots)^{\top} \in \mathbb{X} \subseteq \mathbb{R}^D$, where $D$ is the number of spatial dimensions of the PDE, can be expressed as $$\begin{equation}
|
| 33 |
+
\begin{gathered}
|
| 34 |
+
\partial_tu = F(t, \boldsymbol{x}, u, \partial_{\boldsymbol{x}} u, \partial_{\boldsymbol{xx}} u, \hdots) \text{ with } (t, \boldsymbol{x}) \in [0, T] \times \mathbb{X}\\
|
| 35 |
+
u(0, \boldsymbol{x}) = u(0, \cdot) = u^0(\boldsymbol{x}) = u^0 \text{ with } \boldsymbol{x} \in \mathbb{X}\\
|
| 36 |
+
B[u](t, \boldsymbol{x}) = 0 \text{ with } (t, \boldsymbol{x}) \in [0, T] \times \partial \mathbb{X}
|
| 37 |
+
\end{gathered}
|
| 38 |
+
\label{eq:pde}
|
| 39 |
+
\end{equation}$$ where $u: [0, T] \times \mathbb{X} \rightarrow \mathbb{R}^c$ represents the solution function of the PDE that satisfies the IC $u(0, \bm{x})$ for time $t = 0$ and the Boundary Conditions (BCs) $B[u](t, \bm{x})$ if $\bm{x}$ is on the boundary $\partial \mathbb{X}$ of the domain $\mathbb{X}$. $c$ denotes the number of output channels or field variables of the PDE. Solving a PDE means determining (an approximation of) the function $u$ that satisfies [\[eq:pde\]](#eq:pde){reference-type="ref+label" reference="eq:pde"}. PDEs often contain a parameter, such as the diffusion or viscosity coefficient, which influences their dynamics. We denote the vector of PDE parameter(s) as $\bm{p}$[^1]. The notation $\partial_{\boldsymbol{x}} u, \partial_{\boldsymbol{xx}} u, \hdots$ denotes the partial derivatives $\frac{\partial u}{\partial \boldsymbol{x}}, \frac{\partial^2 u}{\partial \boldsymbol{x}^2}, \hdots, \frac{\partial^n u}{\partial \boldsymbol{x}^n}$ up to order $n$.
|
| 40 |
+
|
| 41 |
+
One has to use discretized data generated by a numerical solver to train surrogate models. The temporal domain $[0, T]$ is discretized into $N_t$ timesteps yielding a sequence $(u(t_0, \cdot), u(t_1, \cdot), \hdots, u(t_{N_t-1}, \cdot))$ which describes the evolution of a PDE. $\Delta t = t_{i+1}-t_i$ denotes the temporal step size or resolution. We denote the number of timesteps used for the IC as $N_i$. Since we focus on initial value problems with one timestep as the IC, we have $N_i = 1$. The spatial domain $\mathbb{X}$ is also transformed into a grid $\boldsymbol{X}$ by discretizing each spatial dimension. Each grid element localizes a point in the spatial domain of the PDE. For 1D PDEs, the grid $\boldsymbol{X} = (\bigl( \boldsymbol{x_i} = (x_{x_i}) \bigr)_{i=1}^{s_x})^{\top} \in \mathbb{R}^{s_x}$ and $s_x$ denotes the spatial resolution (i.e., number of spatial points) of the x-axis. Similarly, for 2D PDEs the grid $\boldsymbol{X} = (\bigl( \boldsymbol{x_i} = (x_{x_i}, x_{y_i}) \bigr)_{i=1}^{s_x \cdot s_y})^{\top} \in \mathbb{R}^{(s_x \cdot s_y) \times 2}$ and $s_x$, $s_y$ denote the spatial resolutions of the x and y axis, respectively. $u(t_i, \boldsymbol{X}) = (u(t_i, \boldsymbol{x_1}), u(t_i, \boldsymbol{x_2}), \hdots, u(t_i, \boldsymbol{x_s}))^{\top} \in \mathbb{R}^{s \times c}$ with $s = s_x \cdot s_y \cdot s_z \cdot \hdots$ contains the solutions at different spatial locations on the grid $\boldsymbol{X}$. The PDE parameters are stacked as a vector $\boldsymbol{p} = (p_1, \hdots, p_j)^{\top} \in \mathbb{R}^j$ where $p_i$ represents the value of a PDE parameter.
|
| 42 |
+
|
| 43 |
+
A dataset $\mathcal{D} = \{(\boldsymbol{I_1}, \boldsymbol{Y_1}), \hdots, (\boldsymbol{I_N}, \boldsymbol{Y_N})\}$ for each PDE consists of $N$ samples. $\boldsymbol{I_j} = (u(t_0, \boldsymbol{X}), \hdots, u(t_{N_i}, \boldsymbol{X}))$ denotes the solution timesteps given as the IC and $\boldsymbol{Y_j} = (u(t_0, \boldsymbol{X}), \hdots, u(t_{N_t}, \boldsymbol{X}))$ denotes the target sequence of timesteps, which represents the trajectory of the PDE.
|
| 44 |
+
|
| 45 |
+
The training objective aims to optimize the parameters $\theta$ (i.e., weights and biases) of the model $f_\theta$ that best approximate the true function $u$ by minimizing the empirical risk over the dataset $\mathcal{D}$ $$\begin{equation}
|
| 46 |
+
\argmin_{\theta \in \Theta} \sum_{i=1}^N \sum_{j=1}^{N_t} \mathcal{L}\left(f_\theta\left(t_j, \boldsymbol{X} \mid \boldsymbol{I_i}\right), \boldsymbol{Y}_{\boldsymbol{i}, j}\right),
|
| 47 |
+
\label{eq:empirical-risk}
|
| 48 |
+
\end{equation}$$ where $\mathcal{L}$ denotes a suitable loss function such as the Mean Squared Error (MSE). $f_\theta(t_j, \boldsymbol{X} | \boldsymbol{I_i})$ represents the prediction of the neural network for timestep $t_j$ and a grid $\boldsymbol{X}$ given the initial condition $\boldsymbol{I_i}$.
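To make the objective concrete, the following is a minimal PyTorch sketch of this empirical-risk minimization; `model`, `loader`, and all tensor shapes are illustrative assumptions rather than the released implementation.

```python
import torch

def train_epoch(model, loader, X, optimizer):
    """One epoch of empirical-risk minimization over a PDE dataset.

    Assumptions: `model(t, X, ic)` returns u(t, X | ic) of shape (b, s, c);
    `loader` yields (ic, targets, times) with ic: (b, s, c) and
    targets: (b, N_t, s, c); `times` holds the N_t query times.
    """
    mse = torch.nn.MSELoss()
    for ic, targets, times in loader:
        optimizer.zero_grad()
        loss = 0.0
        for j, t_j in enumerate(times):      # inner sum over timesteps t_j
            pred = model(t_j, X, ic)         # f_theta(t_j, X | I_i)
            loss = loss + mse(pred, targets[:, j])
        loss.backward()                      # outer sum over samples via minibatches
        optimizer.step()
```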
|
| 49 |
+
|
| 50 |
+
We briefly recall (conditional) neural fields and relate them to solving parametric PDEs.
|
| 51 |
+
|
| 52 |
+
In physics, a *field* is a quantity that is defined for all spatial and temporal coordinates. Neural Fields (NeFs; @neural-fields-xie) learn a function $f$ which maps the spatial and temporal coordinates (i.e., $\boldsymbol{x} \in \mathbb{R}^{D}, t \in \mathbb{R}_{+}$ respectively) to a quantity $\boldsymbol{q} \in \mathbb{R}^{c}$. Mathematically, a neural field can be expressed as a function $$\begin{equation}
|
| 53 |
+
f_\theta: (\mathbb{R}_+ \times \mathbb{R}^{D}) \rightarrow \mathbb{R}^{c} \text{ with } (t, \boldsymbol{x}) \mapsto \boldsymbol{q}
|
| 54 |
+
\label{eq:neural-field}
|
| 55 |
+
\end{equation}$$ that is parametrized by a neural network with parameters $\theta$. For solving PDEs, the function $f_\theta$ models the solution function $u$, and the quantity $\boldsymbol{q}$ represents the solution's value for the different channels, each representing a physical quantity (e.g., density, velocity, etc.). This architectural design takes inspiration from the Eulerian specification of the flow field from classical field theory, where a field of interest is prescribed both by spatial and temporal coordinates. PINNs [@pinn-raissi] are a special case of neural fields with a physics-aware loss function, modeling the solution $u$ as $$\begin{equation}
|
| 56 |
+
f_\theta: (\mathbb{R}_{+} \times \mathbb{R}^{D}) \rightarrow \mathbb{R}^{c} \text{ with } (t, \boldsymbol{x}) \mapsto u(t, \boldsymbol{x})
|
| 57 |
+
\label{eq:neural-field-pinn}
|
| 58 |
+
\end{equation}$$ where $f_\theta$ denotes the neural field that maps the input spatial and temporal locations to the solution of the PDE.
|
| 59 |
+
|
| 60 |
+
Conditional Neural Fields (CNeFs; @neural-fields-xie) extend NeFs with a conditioning factor $\boldsymbol{z}$ to influence the output of the neural field. The conditioning factor was originally introduced for computer vision to control the colors or shapes of objects that are being modeled. In contrast, we condition the neural field, which models the solution of the PDE, on the initial value or IC and the PDE parameters (cf. FigureΒ [1](#fig:vcnf-teaser){reference-type="ref" reference="fig:vcnf-teaser"}). Thus, the conditioning factor influences the entire field.
|
| 61 |
+
|
| 62 |
+
In this section, we propose *Vectorized Conditional Neural Fields* by explaining the transition from (conditional) neural fields to vectorized (conditional) neural fields. We also introduce our transformer-based architecture.
|
| 63 |
+
|
| 64 |
+
Typically, a (conditional) neural field generates the output quantities for all input spatial and temporal coordinates in multiple, independent forward passes. The training and inference times can be improved by processing multiple inputs in parallel on the GPU, which is possible since all forward passes are independent. However, particularly for solving PDEs, there are spatial dependencies between different input spatial coordinates that will not be exploited with CNeFs or by processing multiple inputs of CNeFs in parallel. Consequently, we propose extending CNeFs to
|
| 65 |
+
|
| 66 |
+
- take a vector with *arbitrary* spatial coordinates of *variable size* (a set of query points) as input,
|
| 67 |
+
|
| 68 |
+
- exploit the dependencies of the input coordinates when generating the outputs,
|
| 69 |
+
|
| 70 |
+
- generate all outputs for the inputs in one forward pass.
|
| 71 |
+
|
| 72 |
+
Hence, we name our proposed model *Vectorized Conditional Neural Field* since it implicitly generates a vectorization of the input spatial coordinates for a given time $t$. The VCNeFΒ model represents a function $$\begin{equation}
|
| 73 |
+
\begin{gathered}
|
| 74 |
+
f_\theta: (\mathbb{R}_{+} \times \mathbb{R}^{s \times D}) \rightarrow \mathbb{R}^{s \times c} \\
|
| 75 |
+
\text{ with } (t, \boldsymbol{X}) \mapsto u(t, \boldsymbol{X}) = \begin{pmatrix}
|
| 76 |
+
u(t, \boldsymbol{x_1}) \\
|
| 77 |
+
\vdots \\
|
| 78 |
+
u(t, \boldsymbol{x_s})
|
| 79 |
+
\end{pmatrix}
|
| 80 |
+
\end{gathered}
|
| 81 |
+
\label{eq:vcnef}
|
| 82 |
+
\end{equation}$$ where $u(t, \boldsymbol{x_i})$ denotes the PDE solution for the spatial coordinates $\boldsymbol{x_i}$. Note that we do not impose a structure on the spatial coordinates $\boldsymbol{x_i}$ and that the number of spatial points (i.e., $s$) can be arbitrary. The model can process multiple timesteps $t$ in parallel on the GPU to further improve the training and inference time since VCNeF does not exploit dependencies between the temporal coordinates.
|
| 83 |
+
|
| 84 |
+
<figure id="fig:vcnef-architecture" data-latex-placement="t!hb">
|
| 85 |
+
<div class="center">
|
| 86 |
+
<embed src="figures/vcnef_architecture.pdf" style="width:92.5%" />
|
| 87 |
+
</div>
|
| 88 |
+
<figcaption>An illustration of the VCNeF architecture for solving parametric time-dependent 2D PDEs. Latent representations of ICs are generated with a multi-scale patching mechanism [@multi-scale-vit-chen]. A modulation block consists of self-attention, an activation function $\sigma$, and a modulated neural field that uses the scaling of FiLM [@film-conditioning-perez:2018] to condition the spatio-temporal coordinates on ICs.</figcaption>
|
| 89 |
+
</figure>
|
| 90 |
+
|
| 91 |
+
VCNeFs directly learn the solution function $u$ of a PDE rather than mapping a timestep $t_n$ to the subsequent timestep $t_{n+1}$. Hence, VCNeFs are not autoregressive by design. The model is conditioned on the IC to allow for generalization to different ICs and on the PDE parameters $\boldsymbol{p}$ to generalize to PDE parameter values not seen during training. The VCNeF model can be expressed as a function $$\begin{equation}
|
| 92 |
+
\begin{gathered}
|
| 93 |
+
f_\theta: (\mathbb{R}_+ \times \mathbb{R}^{s \times D} \times \mathbb{R}^{s \times c} \times \mathbb{R}^j) \rightarrow \mathbb{R}^{s \times c} \\
|
| 94 |
+
\text{ with } (t, \boldsymbol{X}, u(0, \boldsymbol{X}), \boldsymbol{p}) \mapsto u(t, \boldsymbol{X} | u(0, \boldsymbol{X}), \boldsymbol{p}),
|
| 95 |
+
\end{gathered}
|
| 96 |
+
\label{eq:vcnef-for-pde}
|
| 97 |
+
\end{equation}$$ where $\theta$ represents the parameters of the neural network, $\boldsymbol{X} \in \mathbb{R}^{s \times D}$ the grid with query spatial coordinates, $t$ the query time, $u(0, \boldsymbol{X})$ the IC, and $\boldsymbol{p}$ the vector of PDE parameters. $u(t, \boldsymbol{X} | u(0, \boldsymbol{X}), \boldsymbol{p})$ denotes the solution function, which depends on the given IC and PDE parameters, that is directly regressed by VCNeF. The shape of the grid $\boldsymbol{X}$ depends on the dimensionality of the PDE.
|
| 98 |
+
|
| 99 |
+
As a consequence, VCNeFs do not have to generate the PDE trajectory autoregressively. If the complete trajectory is needed, VCNeF can be queried with the desired times $t$ along the trajectory. Furthermore, the model is continuous in time and can do temporal ZSSR (i.e., changing the temporal discretization $\Delta t$ after training) as well as spatial ZSSR (i.e., changing spatial resolution or grid at inference time). Thus, the model can be queried with arbitrary $t \in (0, T]$ and a finer grid $\boldsymbol{\mathcal{X}}$ which is different from the grid $\boldsymbol{X}$ seen during training. $$\begin{equation}
|
| 100 |
+
\begin{gathered}
|
| 101 |
+
f_\theta(t, \boldsymbol{\mathcal{X}}, u(0, \boldsymbol{\mathcal{X}}), \boldsymbol{p}) \approx u(t, \boldsymbol{\mathcal{X}} | u(0, \boldsymbol{\mathcal{X}}), \boldsymbol{p}) \quad \forall t \in (0, T]
|
| 102 |
+
\end{gathered}
|
| 103 |
+
\label{eq:vcnef-zssr}
|
| 104 |
+
\end{equation}$$ Alternatively, VCNeFs can be seen as an implementation of a neural operator [@neural-operators-kovachki] that is time-continuous and maps the input function (i.e., IC) to an output function that depends on both the IC and time $t$. However, prior work about neural operators [@graph-neural-operator-li; @fno-li; @galerkin-cao; @oformer-pde-li:2023] usually does not focus on time continuity. See [11.1](#app:vcnef-as-no){reference-type="ref+label" reference="app:vcnef-as-no"} for more details.
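To illustrate the zero-shot super-resolution queries described above, a hedged usage sketch follows; `model` reuses the hypothetical signature from the training sketch, and `initial_condition` is an assumed helper that samples the IC on a given grid.

```python
import torch

# Query a trained 1D model on a finer grid and at intermediate times,
# neither of which were seen during training (no retraining involved).
X_fine = torch.linspace(0.0, 1.0, 1024).unsqueeze(-1)   # finer spatial grid
ic_fine = initial_condition(X_fine)                     # assumed IC sampler
solutions = [model(t, X_fine, ic_fine)                  # u(t, X_fine | ic)
             for t in torch.arange(0.05, 2.0, 0.05)]    # finer time resolution
```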
|
| 105 |
+
|
| 106 |
+
We propose a transformer-based VCNeF that applies self-attention to the spatial domain to capture dependencies between the spatial coordinates. The input spatio-temporal coordinates and the physical representation of ICs are represented in a latent space. Both latent representations are fed into modulation blocks that capture spatial dependencies and condition the coordinates on the IC. The output of the modulation blocks, which represents the solution, is then decoded to obtain the representation in the physical space.
|
| 107 |
+
|
| 108 |
+
The input coordinates, consisting of the query time $t \in \mathbb{R}_+$ that determines the time for which the model's prediction is sought and the spatial coordinates $\boldsymbol{x_i} \in \mathbb{R}^D$, are represented in a latent space. For 1D PDEs, a linear layer is used for encoding, whereas for 2D and 3D PDEs, time $t$ is encoded with the absolute positional encoding (PE; @transformers-vaswani), similar to @ditto-diffusion-temporal-transformer-ovadia:2023, and the spatial coordinates are encoded with learnable Fourier features (LFF; @fourier-features-li). $$\begin{equation}
|
| 109 |
+
\begin{aligned}
|
| 110 |
+
\text{1D: } \boldsymbol{c_i} &= (t \mathbin\Vert \boldsymbol{x_i})\boldsymbol{W} + \boldsymbol{b} \\
|
| 111 |
+
\text{2D: } \boldsymbol{c_i} &= (\mathtt{PE}(t) \mathbin\Vert \mathtt{LFF}(\boldsymbol{x_i}) \mathbin\Vert \hdots \mathbin\Vert \mathtt{LFF}(\boldsymbol{x_{i+15}}))\boldsymbol{W} + \boldsymbol{b} \\
|
| 112 |
+
\text{3D: } \boldsymbol{c_i} &= (\mathtt{PE}(t) \mathbin\Vert \mathtt{LFF}(\boldsymbol{x_i}) \mathbin\Vert \hdots \mathbin\Vert \mathtt{LFF}(\boldsymbol{x_{i+63}}))\boldsymbol{W} + \boldsymbol{b} \\
|
| 113 |
+
\mathtt{LFF}(\boldsymbol{x}) &= \mathtt{MLP}(\frac{1}{\sqrt{d}}(\cos(\boldsymbol{x}\boldsymbol{W_r}) \mathbin\Vert \sin(\boldsymbol{x}\boldsymbol{W_r}))^\top) \\
|
| 114 |
+
\boldsymbol{C} &= (\boldsymbol{c_1} \mathbin\Vert \hdots \mathbin\Vert \boldsymbol{c_s})^{\top}
|
| 115 |
+
\end{aligned}
|
| 116 |
+
\label{eq:vcnef-latent-coords}
|
| 117 |
+
\end{equation}$$ where $\mathbin\Vert$ stands for the concatenation of two vectors and $\mathtt{MLP}$ denotes a Multi-Layer Perceptron (MLP).
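A minimal sketch of the learnable Fourier features used above might look as follows; the layer sizes, the GELU nonlinearity, and the normalization by $1/\sqrt{2 n_{\text{freq}}}$ (standing in for the $1/\sqrt{d}$ factor) are illustrative assumptions.

```python
import math
import torch
import torch.nn as nn

class LearnableFourierFeatures(nn.Module):
    """Sketch of LFF(x) = MLP((1/sqrt(d)) [cos(x W_r) || sin(x W_r)])."""

    def __init__(self, in_dim=2, n_freq=64, out_dim=96):
        super().__init__()
        self.W_r = nn.Parameter(torch.randn(in_dim, n_freq))  # learnable frequencies
        self.scale = 1.0 / math.sqrt(2 * n_freq)
        self.mlp = nn.Sequential(
            nn.Linear(2 * n_freq, out_dim), nn.GELU(), nn.Linear(out_dim, out_dim)
        )

    def forward(self, x):                 # x: (..., in_dim) spatial coordinates
        proj = x @ self.W_r               # (..., n_freq)
        feats = torch.cat([torch.cos(proj), torch.sin(proj)], dim=-1)
        return self.mlp(self.scale * feats)
```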
|
| 118 |
+
|
| 119 |
+
The input IC is mapped to a latent representation by either applying a shared linear layer to each solution point $u(t, \boldsymbol{x_i})$ or by dividing the spatial domain into non-overlapping patches and applying a linear layer to the patches, akin to Vision Transformers (ViTs; @transformers-images-dosovitskiy). We divide the spatial domain into patches for 2D and 3D PDEs to reduce the computational costs. However, unlike a traditional ViT, our patch generation has two branches: patches of a smaller size ($p_S = 4$ or $4 \times 4$) and of a larger size ($p_L = 16$ or $16 \times 16$) as proposed in @multi-scale-vit-chen since we aim to capture the dynamics accurately at multiple scales.
|
| 120 |
+
|
| 121 |
+
$$\begin{equation}
|
| 122 |
+
\begin{aligned}
|
| 123 |
+
\text{1D: } \boldsymbol{z^{(0)}_i} = &(u(t, \boldsymbol{x_i}) \mathbin\Vert \boldsymbol{x_i} \mathbin\Vert \boldsymbol{p})\boldsymbol{W} + \boldsymbol{b} \\
|
| 124 |
+
\text{2D: } \boldsymbol{z^{(0)}_i} = &(u(t, \boldsymbol{x_i}) \mathbin\Vert \hdots \mathbin\Vert u(t, \boldsymbol{x_{i+15}}) \mathbin\Vert \\
|
| 125 |
+
& \boldsymbol{x_i} \mathbin\Vert \hdots \mathbin\Vert \boldsymbol{x_{i+15}} \mathbin\Vert \boldsymbol{p})\boldsymbol{W} + \boldsymbol{b} \\
|
| 126 |
+
\text{3D: } \boldsymbol{z^{(0)}_i} = &(u(t, \boldsymbol{x_i}) \mathbin\Vert \hdots \mathbin\Vert u(t, \boldsymbol{x_{i+63}}) \mathbin\Vert \\
|
| 127 |
+
& \boldsymbol{x_i} \mathbin\Vert \hdots \mathbin\Vert \boldsymbol{x_{i+63}} \mathbin\Vert \boldsymbol{p})\boldsymbol{W} + \boldsymbol{b} \\
|
| 128 |
+
\boldsymbol{Z^{(0)}} = &(\boldsymbol{z^{(0)}_1} \mathbin\Vert \hdots \mathbin\Vert \boldsymbol{z^{(0)}_s})^{\top}
|
| 129 |
+
\end{aligned}
|
| 130 |
+
\label{eq:vcnef-latent-ic}
|
| 131 |
+
\end{equation}$$ A vector in the latent space (i.e., a token) either represents the solution at a single spatial point (for 1D) or the solution on a patch of spatial points (for 2D and 3D). The grid contains the coordinates where the solutions are sampled in the spatial domain. This information is used when generating the latent representations of the IC to ensure that each latent representation carries positional information. The PDE parameters $\boldsymbol{p}$ are also added to the latent representation. We omit additional positional encodings to avoid length-generalization problems [@rope-ruoss] that could prevent changing the spatial resolution after training.
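A hedged sketch of this patch-based encoding of the IC for the 2D case is shown below; channel counts, the patch size, and the module names are assumptions.

```python
import torch
import torch.nn as nn

class PatchEmbed2D(nn.Module):
    """Flatten non-overlapping patches of the IC together with their
    coordinates, append the PDE parameters p, and project linearly."""

    def __init__(self, patch=4, c=1, n_params=1, d_model=96):
        super().__init__()
        self.patch = patch
        in_dim = patch * patch * (c + 2) + n_params  # u-values + (x, y) + p
        self.proj = nn.Linear(in_dim, d_model)

    def forward(self, u, xy, p):
        # u: (b, H, W, c) IC values, xy: (b, H, W, 2) grid, p: (b, n_params)
        b, H, W, _ = u.shape
        feats = torch.cat([u, xy], dim=-1)                    # (b, H, W, c + 2)
        ph, pw = H // self.patch, W // self.patch
        feats = feats.reshape(b, ph, self.patch, pw, self.patch, -1)
        feats = feats.permute(0, 1, 3, 2, 4, 5).reshape(b, ph * pw, -1)
        p_rep = p[:, None, :].expand(-1, ph * pw, -1)         # broadcast p
        return self.proj(torch.cat([feats, p_rep], dim=-1))   # Z^(0): (b, s, d)
```

The multi-scale variant would run two such branches with $p_S = 4$ and $p_L = 16$ and combine their outputs.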
|
| 132 |
+
|
| 133 |
+
We utilize a Linear Transformer [@linear-transformers-katharopoulos:2020] with self-attention in our VCNeF architecture to generate an attention-refined latent representation of the IC $\boldsymbol{Z^{(0)}}$. The global receptive field of the Transformer allows the proposed architecture to capture global spatial dependencies in the IC, although each token contains only local spatial information. Intuitively, the Transformer outputs latent representations that incorporate the entire spatial solution rather than only a single spatial point or a subset of spatial points. We expect this to yield a better representation of the IC for conditioning the input coordinates accordingly. $$\begin{equation}
|
| 134 |
+
\bm{Z^{(n+1)}} = \mathtt{Transformer\_Block}\left(\bm{Z^{(n)}}\right)
|
| 135 |
+
\label{eq:transformer-encoder}
|
| 136 |
+
\end{equation}$$ where $\mathtt{Transformer\_Block}(\cdot)$ is a Linear Transformer block with self-attention and $n$ denotes the $n^{th}$ block.
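For readers unfamiliar with linear attention, a single-head sketch with the $\phi(x) = \mathtt{elu}(x) + 1$ feature map of [@linear-transformers-katharopoulos:2020] is given below; the multi-head projections and the surrounding block structure are omitted.

```python
import torch
import torch.nn.functional as F

def linear_attention(Q, K, V, eps=1e-6):
    """O(s) self-attention with feature map phi(x) = elu(x) + 1.

    Q, K: (b, s, d); V: (b, s, e). Replaces the softmax kernel
    exp(q^T k / sqrt(d)) with the product phi(q)^T phi(k).
    """
    Q, K = F.elu(Q) + 1, F.elu(K) + 1
    KV = torch.einsum("bsd,bse->bde", K, V)                # sum_j phi(k_j) v_j^T
    Z = 1.0 / (torch.einsum("bsd,bd->bs", Q, K.sum(dim=1)) + eps)
    return torch.einsum("bsd,bde,bs->bse", Q, KV, Z)       # normalized outputs
```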
|
| 137 |
+
|
| 138 |
+
The modulation blocks condition the input coordinates on the input IC $\boldsymbol{Z^{(3)}}$ by modulating the latent representation $\boldsymbol{C}$ of the coordinates. The block contains self-attention, a non-linearity $\sigma$, a modulation mechanism similar to Feature-wise Linear Modulation (FiLM; @film-conditioning-perez:2018), layer normalization, residual connections, and an MLP. However, the conditioning mechanism uses only the scaling (i.e., pointwise multiplication) of FiLM and omits the shift (i.e., pointwise addition). A modulation block is expressed as $$\begin{equation}
|
| 139 |
+
\begin{aligned}
|
| 140 |
+
\bm{Z^{(m+1)}} &= \mathtt{Modulation\_Block}\left(\bm{C}, \bm{Z^{(m)}}\right) \\
|
| 141 |
+
&= \mathtt{MLP}\left(\sigma\left(\mathtt{Self\_Attn}\left(\bm{Z^{(m)}}\right)\right) \circ \mathtt{MLP}(\bm{C})\right) \\
|
| 142 |
+
\sigma(\bm{X}) &= \mathtt{ELU}(\bm{X}) + 1
|
| 143 |
+
\end{aligned}
|
| 144 |
+
\label{eq:vcnef-modulation-block}
|
| 145 |
+
\end{equation}$$
|
| 146 |
+
|
| 147 |
+
where $\boldsymbol{Z^{(3)}} \in \mathbb{R}^{s \times d}$ represents the IC, $\boldsymbol{C} \in \mathbb{R}^{s \times d}$ denotes the latent representation of the input coordinates, $\circ$ represents the Hadamard product, and $m$ indexes the modulation blocks. The residual connections and layer normalization are omitted in [\[eq:vcnef-modulation-block\]](#eq:vcnef-modulation-block){reference-type="ref+label" reference="eq:vcnef-modulation-block"} for the sake of simplicity. The modulation blocks condition the spatio-temporal coordinates on the IC and PDE parameter values, and spatial self-attention incorporates dependencies between the queried spatial coordinates.
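A hedged sketch of one modulation block is shown below; like the equation, it omits residual connections and layer normalization, and it substitutes standard softmax self-attention for the linear attention used in the actual architecture.

```python
import torch.nn as nn
import torch.nn.functional as F

class ModulationBlock(nn.Module):
    """Sketch of MLP(sigma(SelfAttn(Z)) o MLP(C)) with sigma(x) = ELU(x) + 1."""

    def __init__(self, d_model=96, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp_c = nn.Sequential(nn.Linear(d_model, d_model), nn.GELU(),
                                   nn.Linear(d_model, d_model))
        self.mlp_out = nn.Sequential(nn.Linear(d_model, d_model), nn.GELU(),
                                     nn.Linear(d_model, d_model))

    def forward(self, C, Z):              # C, Z: (b, s, d_model)
        A, _ = self.attn(Z, Z, Z)         # spatial self-attention on the IC path
        return self.mlp_out((F.elu(A) + 1) * self.mlp_c(C))  # FiLM-style scaling
```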
|
| 148 |
+
|
| 149 |
+
The solution's latent representation $\boldsymbol{Z^{(6)}}$ is mapped back to the physical space by either applying an MLP for 1D or by mapping the latent representations to small and large patches and outputting the weighted sum of the small and large patches for 2D and 3D.
|
| 150 |
+
|
| 151 |
+
The proposed VCNeF model has the following properties.
|
| 152 |
+
|
| 153 |
+
VCNeF can be trained on lower spatial and temporal resolutions and used for high-resolution spatial and temporal inference since the model is continuous in space and time. To do so, the model can be queried with finer coordinates (i.e., intermediate spatial and temporal coordinates). Training on low-resolution data requires less computational resources and saves computing time, while inference on high-resolution data minimizes the risk of missing crucial dynamics.
|
| 154 |
+
|
| 155 |
+
The training and inference of VCNeF are accelerated by processing multiple temporal coordinates in parallel on the GPU. If the solution at multiple timesteps (e.g., $t \in \{t_1, t_2, \hdots, t_{N_t} \}$) is to be predicted, VCNeF can calculate the solutions of the timesteps in parallel because the predictions of $u(t, \cdot)$ are independent of each other. The proposed architecture uses linear attention. Nonetheless, linear attention can be replaced with an arbitrary attention mechanism. The runtime and memory consumption are influenced by the spatial resolution $s = s_x \cdot s_y \cdot s_z \cdot \hdots$ and the cardinality $N_t$ of the queried timesteps. For 1D, the model has a time and space complexity of $\mathcal{O}\left(s_x \cdot N_t\right)$. For 2D, the complexity is $\mathcal{O}\left(\left(\frac{s_x \cdot s_y}{{p_S}^2} + \frac{s_x \cdot s_y}{{p_L}^2}\right) \cdot N_t\right)$ where $s_x$ and $s_y$ denote the spatial resolutions of the x and y-axis, respectively, and $p_S, p_L$ denote the patch sizes. We omit encoding and decoding, which involve the channels $c$, for the sake of simplicity.
|
| 156 |
+
|
| 157 |
+
The loss function of VCNeF can be easily extended with a physics-informed loss as in PINNs [@pinn-raissi] since VCNeF directly models the solution function $u$ and therefore, the derivatives can be computed with automatic differentiation [@hips-autograd-maclaurin:2015; @autodiff-pytorch-paszke:2017].
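As an illustration of such an extension (not part of the original training objective), a physics-informed residual for a 1D heat equation $\partial_t u = \nu \, \partial_{xx} u$ could be computed with automatic differentiation as sketched below; `u_fn` is an assumed pointwise wrapper around the model that maps per-point coordinates to the predicted solution.

```python
import torch

def heat_residual(u_fn, t, x, nu=0.1):
    """Mean squared PDE residual r = u_t - nu * u_xx at collocation points.

    t, x: tensors of shape (s, 1) with one (t_i, x_i) pair per row;
    `u_fn(t, x)` is assumed to return u of shape (s, 1), evaluated pointwise.
    """
    t = t.detach().clone().requires_grad_(True)
    x = x.detach().clone().requires_grad_(True)
    u = u_fn(t, x)
    ones = torch.ones_like(u)
    u_t = torch.autograd.grad(u, t, grad_outputs=ones, create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, grad_outputs=ones, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, grad_outputs=torch.ones_like(u_x),
                               create_graph=True)[0]
    return (u_t - nu * u_xx).pow(2).mean()
```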
|
| 158 |
+
|
| 159 |
+
We suggest conditioning the model not only on the IC but also on randomly sampled timesteps along the trajectory in the training phase as data augmentation to improve the model's performance further.
|
2411.03387/main_diagram/main_diagram.drawio
ADDED
|
@@ -0,0 +1,143 @@
|
| 1 |
+
<mxfile host="app.diagrams.net" agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36" version="24.8.3" scale="10" border="0">
|
| 2 |
+
<diagram name="Page-1" id="RzaDwIenC73cHk6F7PvT">
|
| 3 |
+
<mxGraphModel dx="424" dy="220" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="827" pageHeight="1169" math="1" shadow="0">
|
| 4 |
+
<root>
|
| 5 |
+
<mxCell id="0" />
|
| 6 |
+
<mxCell id="1" parent="0" />
|
| 7 |
+
<mxCell id="2" value="" style="endArrow=none;html=1;rounded=0;strokeColor=#808080;strokeWidth=1;" edge="1" parent="1">
|
| 8 |
+
<mxGeometry width="50" height="50" relative="1" as="geometry">
|
| 9 |
+
<mxPoint x="1215" y="190" as="sourcePoint" />
|
| 10 |
+
<mxPoint x="1215" y="60" as="targetPoint" />
|
| 11 |
+
</mxGeometry>
|
| 12 |
+
</mxCell>
|
| 13 |
+
<mxCell id="3" value="" style="endArrow=none;html=1;rounded=0;strokeColor=#808080;strokeWidth=1;" edge="1" parent="1">
|
| 14 |
+
<mxGeometry width="50" height="50" relative="1" as="geometry">
|
| 15 |
+
<mxPoint x="1245" y="190" as="sourcePoint" />
|
| 16 |
+
<mxPoint x="1245" y="60" as="targetPoint" />
|
| 17 |
+
</mxGeometry>
|
| 18 |
+
</mxCell>
|
| 19 |
+
<mxCell id="4" value="" style="endArrow=none;html=1;rounded=0;strokeColor=#808080;strokeWidth=1;" edge="1" parent="1">
|
| 20 |
+
<mxGeometry width="50" height="50" relative="1" as="geometry">
|
| 21 |
+
<mxPoint x="1275" y="190" as="sourcePoint" />
|
| 22 |
+
<mxPoint x="1275" y="60" as="targetPoint" />
|
| 23 |
+
</mxGeometry>
|
| 24 |
+
</mxCell>
|
| 25 |
+
<mxCell id="5" value="" style="endArrow=none;html=1;rounded=0;strokeColor=#808080;strokeWidth=1;" edge="1" parent="1">
|
| 26 |
+
<mxGeometry width="50" height="50" relative="1" as="geometry">
|
| 27 |
+
<mxPoint x="1185" y="190" as="sourcePoint" />
|
| 28 |
+
<mxPoint x="1185" y="60" as="targetPoint" />
|
| 29 |
+
</mxGeometry>
|
| 30 |
+
</mxCell>
|
| 31 |
+
<mxCell id="6" value="<p style="line-height: 110%; font-size: 16px;"><br></p>" style="rounded=0;whiteSpace=wrap;html=1;spacingTop=-1;fillColor=#d5e8d4;strokeColor=#82b366;shadow=1;" vertex="1" parent="1">
|
| 32 |
+
<mxGeometry x="1170" y="40" width="120" height="45" as="geometry" />
|
| 33 |
+
</mxCell>
|
| 34 |
+
<mxCell id="7" value="" style="shape=table;startSize=0;container=1;collapsible=0;childLayout=tableLayout;fontSize=16;strokeColor=none;fillColor=none;" vertex="1" parent="1">
|
| 35 |
+
<mxGeometry x="1170" y="40" width="120" height="44.83" as="geometry" />
|
| 36 |
+
</mxCell>
|
| 37 |
+
<mxCell id="8" style="shape=tableRow;horizontal=0;startSize=0;swimlaneHead=0;swimlaneBody=0;strokeColor=inherit;top=0;left=0;bottom=0;right=0;collapsible=0;dropTarget=0;fillColor=none;points=[[0,0.5],[1,0.5]];portConstraint=eastwest;fontSize=16;" vertex="1" parent="7">
|
| 38 |
+
<mxGeometry width="120" height="15" as="geometry" />
|
| 39 |
+
</mxCell>
|
| 40 |
+
<mxCell id="9" value="<font style="font-size: 5.5px;">Single-stage learners</font>" style="shape=partialRectangle;html=1;whiteSpace=wrap;connectable=0;strokeColor=inherit;overflow=hidden;fillColor=none;top=0;left=0;bottom=0;right=0;pointerEvents=1;fontSize=8;rowspan=1;colspan=2;" vertex="1" parent="8">
|
| 41 |
+
<mxGeometry width="60" height="15" as="geometry">
|
| 42 |
+
<mxRectangle width="30" height="15" as="alternateBounds" />
|
| 43 |
+
</mxGeometry>
|
| 44 |
+
</mxCell>
|
| 45 |
+
<mxCell id="10" style="shape=partialRectangle;html=1;whiteSpace=wrap;connectable=0;strokeColor=inherit;overflow=hidden;fillColor=none;top=0;left=0;bottom=0;right=0;pointerEvents=1;fontSize=8;" visible="0" vertex="1" parent="8">
|
| 46 |
+
<mxGeometry x="30" width="30" height="15" as="geometry">
|
| 47 |
+
<mxRectangle width="30" height="15" as="alternateBounds" />
|
| 48 |
+
</mxGeometry>
|
| 49 |
+
</mxCell>
|
| 50 |
+
<mxCell id="11" value="<font style="font-size: 5.5px;">Two-stage learners</font>" style="shape=partialRectangle;html=1;whiteSpace=wrap;connectable=0;strokeColor=inherit;overflow=hidden;fillColor=none;top=0;left=0;bottom=0;right=0;pointerEvents=1;fontSize=8;rowspan=1;colspan=2;" vertex="1" parent="8">
|
| 51 |
+
<mxGeometry x="60" width="60" height="15" as="geometry">
|
| 52 |
+
<mxRectangle width="30" height="15" as="alternateBounds" />
|
| 53 |
+
</mxGeometry>
|
| 54 |
+
</mxCell>
|
| 55 |
+
<mxCell id="12" style="shape=partialRectangle;html=1;whiteSpace=wrap;connectable=0;strokeColor=inherit;overflow=hidden;fillColor=none;top=0;left=0;bottom=0;right=0;pointerEvents=1;fontSize=8;" visible="0" vertex="1" parent="8">
|
| 56 |
+
<mxGeometry x="90" width="30" height="15" as="geometry">
|
| 57 |
+
<mxRectangle width="30" height="15" as="alternateBounds" />
|
| 58 |
+
</mxGeometry>
|
| 59 |
+
</mxCell>
|
| 60 |
+
<mxCell id="13" value="" style="shape=tableRow;horizontal=0;startSize=0;swimlaneHead=0;swimlaneBody=0;strokeColor=inherit;top=0;left=0;bottom=0;right=0;collapsible=0;dropTarget=0;fillColor=none;points=[[0,0.5],[1,0.5]];portConstraint=eastwest;fontSize=16;" vertex="1" parent="7">
|
| 61 |
+
<mxGeometry y="15" width="120" height="30" as="geometry" />
|
| 62 |
+
</mxCell>
|
| 63 |
+
<mxCell id="14" value="<p style="line-height: 90%;"><font style="font-size: 7px;">Plug-in learner </font><font style="font-size: 4px;">(existing work)</font></p>" style="shape=partialRectangle;html=1;whiteSpace=wrap;connectable=0;strokeColor=inherit;overflow=hidden;fillColor=none;top=0;left=0;bottom=0;right=0;pointerEvents=1;fontSize=8;spacingTop=-6;" vertex="1" parent="13">
|
| 64 |
+
<mxGeometry width="30" height="30" as="geometry">
|
| 65 |
+
<mxRectangle width="30" height="30" as="alternateBounds" />
|
| 66 |
+
</mxGeometry>
|
| 67 |
+
</mxCell>
|
| 68 |
+
<mxCell id="15" value="<p style="line-height: 90%;"><font style=""><span style="font-size: 7px;">IPTW-learner</span><br><span style="font-size: 4px;">(existing work)</span><br></font></p>" style="shape=partialRectangle;html=1;whiteSpace=wrap;connectable=0;strokeColor=inherit;overflow=hidden;fillColor=none;top=0;left=0;bottom=0;right=0;pointerEvents=1;fontSize=8;spacingTop=-6;" vertex="1" parent="13">
|
| 69 |
+
<mxGeometry x="30" width="30" height="30" as="geometry">
|
| 70 |
+
<mxRectangle width="30" height="30" as="alternateBounds" />
|
| 71 |
+
</mxGeometry>
|
| 72 |
+
</mxCell>
|
| 73 |
+
<mxCell id="16" value="<p style="line-height: 90%;"><font style="font-size: 7px;">CA-<br>learner</font><br><font style="font-size: 5px;">(our paper)</font></p>" style="shape=partialRectangle;html=1;whiteSpace=wrap;connectable=0;strokeColor=inherit;overflow=hidden;fillColor=none;top=0;left=0;bottom=0;right=0;pointerEvents=1;fontSize=8;spacingTop=-6;" vertex="1" parent="13">
|
| 74 |
+
<mxGeometry x="60" width="30" height="30" as="geometry">
|
| 75 |
+
<mxRectangle width="30" height="30" as="alternateBounds" />
|
| 76 |
+
</mxGeometry>
|
| 77 |
+
</mxCell>
|
| 78 |
+
<mxCell id="17" value="<p style="line-height: 90%;"><font style=""><span style="font-size: 7px;">AU-<br>learner</span><br><font style="font-size: 5px;">(our paper)</font></font></p>" style="shape=partialRectangle;html=1;whiteSpace=wrap;connectable=0;strokeColor=inherit;overflow=hidden;fillColor=none;top=0;left=0;bottom=0;right=0;pointerEvents=1;fontSize=8;spacingTop=-6;" vertex="1" parent="13">
|
| 79 |
+
<mxGeometry x="90" width="30" height="30" as="geometry">
|
| 80 |
+
<mxRectangle width="30" height="30" as="alternateBounds" />
|
| 81 |
+
</mxGeometry>
|
| 82 |
+
</mxCell>
|
| 83 |
+
<mxCell id="18" value="<p style="line-height: 50%;"><font style=""><font style="font-size: 7px;"><b>(a)</b> Addressing the selection bias?&nbsp;</font><br></font></p>" style="rounded=1;whiteSpace=wrap;html=1;strokeWidth=0.5;fillColor=#f5f5f5;fontColor=#333333;strokeColor=#666666;align=left;spacingLeft=2;" vertex="1" parent="1">
|
| 84 |
+
<mxGeometry x="1100" y="90" width="70" height="30" as="geometry" />
|
| 85 |
+
</mxCell>
|
| 86 |
+
<mxCell id="19" value="<p style="line-height: 50%;"><font style="font-size: 7px;"><b>(c)</b> Orthogonality of the loss wrt. to the&nbsp; nuisance functions?</font></p>" style="rounded=1;whiteSpace=wrap;html=1;strokeWidth=0.5;fillColor=#f5f5f5;fontColor=#333333;strokeColor=#666666;align=left;spacingLeft=2;spacingTop=-2;" vertex="1" parent="1">
|
| 87 |
+
<mxGeometry x="1100" y="160" width="70" height="30" as="geometry" />
|
| 88 |
+
</mxCell>
|
| 89 |
+
<mxCell id="20" value="<p style="line-height: 50%;"><font style="font-size: 7px;"><b>(b)</b> Targeting at Makarov bounds?</font></p>" style="rounded=1;whiteSpace=wrap;html=1;strokeWidth=0.5;fillColor=#f5f5f5;fontColor=#333333;strokeColor=#666666;align=left;spacingLeft=2;" vertex="1" parent="1">
|
| 90 |
+
<mxGeometry x="1100" y="125" width="70" height="30" as="geometry" />
|
| 91 |
+
</mxCell>
|
| 92 |
+
<mxCell id="21" value="<font style=""><font size="1" color="#009053" style="" face="sans-serif"><span style="caret-color: rgba(0, 0, 0, 0); white-space: pre; font-size: 12px;">β</span></font></font>" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fillColor=#FFFFFF;strokeColor=#808080;spacingTop=0;strokeWidth=1;" vertex="1" parent="1">
|
| 93 |
+
<mxGeometry x="1205" y="95" width="20" height="20" as="geometry" />
|
| 94 |
+
</mxCell>
|
| 95 |
+
<mxCell id="22" value="<span style="color: rgb(0, 144, 83); font-family: sans-serif; caret-color: rgba(0, 0, 0, 0); white-space: pre;">β</span>" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fillColor=#FFFFFF;strokeColor=#808080;spacingTop=0;strokeWidth=1;" vertex="1" parent="1">
|
| 96 |
+
<mxGeometry x="1265" y="130" width="20" height="20" as="geometry" />
|
| 97 |
+
</mxCell>
|
| 98 |
+
<mxCell id="23" value="<font style="font-size: 12px;"><font color="#a13228" style="font-size: 12px;" face="sans-serif"><span style="caret-color: rgba(0, 0, 0, 0); white-space: pre;">β</span></font></font>" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fillColor=#FFFFFF;strokeColor=#808080;spacingTop=-2;strokeWidth=1;" vertex="1" parent="1">
|
| 99 |
+
<mxGeometry x="1235" y="95" width="20" height="20" as="geometry" />
|
| 100 |
+
</mxCell>
|
| 101 |
+
<mxCell id="24" value="<span style="color: rgb(0, 144, 83); font-family: sans-serif; caret-color: rgba(0, 0, 0, 0); white-space: pre;">β</span>" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fillColor=#FFFFFF;strokeColor=#808080;spacingTop=0;strokeWidth=1;" vertex="1" parent="1">
|
| 102 |
+
<mxGeometry x="1265" y="95" width="20" height="20" as="geometry" />
|
| 103 |
+
</mxCell>
|
| 104 |
+
<mxCell id="25" value="<span style="color: rgb(0, 144, 83); font-family: sans-serif; caret-color: rgba(0, 0, 0, 0); white-space: pre;">β</span>" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fillColor=#FFFFFF;strokeColor=#808080;spacingTop=0;strokeWidth=1;" vertex="1" parent="1">
|
| 105 |
+
<mxGeometry x="1265" y="165" width="20" height="20" as="geometry" />
|
| 106 |
+
</mxCell>
|
| 107 |
+
<mxCell id="26" value="<span style="color: rgb(0, 144, 83); font-family: sans-serif; caret-color: rgba(0, 0, 0, 0); white-space: pre;">β</span>" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fillColor=#FFFFFF;strokeColor=#808080;spacingTop=0;strokeWidth=1;" vertex="1" parent="1">
|
| 108 |
+
<mxGeometry x="1205" y="165" width="20" height="20" as="geometry" />
|
| 109 |
+
</mxCell>
|
| 110 |
+
<mxCell id="27" value="<span style="color: rgb(0, 144, 83); font-family: sans-serif; caret-color: rgba(0, 0, 0, 0); white-space: pre;">β</span>" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fillColor=#FFFFFF;strokeColor=#808080;spacingTop=0;strokeWidth=1;" vertex="1" parent="1">
|
| 111 |
+
<mxGeometry x="1235" y="130" width="20" height="20" as="geometry" />
|
| 112 |
+
</mxCell>
|
| 113 |
+
<mxCell id="28" value="<font style="font-size: 12px;"><font color="#a13228" style="font-size: 12px;" face="sans-serif"><span style="caret-color: rgba(0, 0, 0, 0); white-space: pre;">β</span></font></font>" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fillColor=#FFFFFF;strokeColor=#808080;spacingTop=-2;strokeWidth=1;" vertex="1" parent="1">
|
| 114 |
+
<mxGeometry x="1175" y="95" width="20" height="20" as="geometry" />
|
| 115 |
+
</mxCell>
|
| 116 |
+
<mxCell id="29" value="<font style="font-size: 12px;"><font color="#a13228" style="font-size: 12px;" face="sans-serif"><span style="caret-color: rgba(0, 0, 0, 0); white-space: pre;">β</span></font></font>" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fillColor=#FFFFFF;strokeColor=#808080;spacingTop=-2;strokeWidth=1;" vertex="1" parent="1">
|
| 117 |
+
<mxGeometry x="1205" y="130" width="20" height="20" as="geometry" />
|
| 118 |
+
</mxCell>
|
| 119 |
+
<mxCell id="30" value="<font style="font-size: 12px;"><font color="#a13228" style="font-size: 12px;" face="sans-serif"><span style="caret-color: rgba(0, 0, 0, 0); white-space: pre;">β</span></font></font>" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fillColor=#FFFFFF;strokeColor=#808080;spacingTop=-2;strokeWidth=1;" vertex="1" parent="1">
|
| 120 |
+
<mxGeometry x="1175" y="130" width="20" height="20" as="geometry" />
|
| 121 |
+
</mxCell>
|
| 122 |
+
<mxCell id="31" value="<font style="font-size: 12px;"><font color="#a13228" style="font-size: 12px;" face="sans-serif"><span style="caret-color: rgba(0, 0, 0, 0); white-space: pre;">β</span></font></font>" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fillColor=#FFFFFF;strokeColor=#808080;spacingTop=-2;strokeWidth=1;" vertex="1" parent="1">
|
| 123 |
+
<mxGeometry x="1175" y="165" width="20" height="20" as="geometry" />
|
| 124 |
+
</mxCell>
|
| 125 |
+
<mxCell id="32" value="<font style="font-size: 12px;"><font color="#a13228" style="font-size: 12px;" face="sans-serif"><span style="caret-color: rgba(0, 0, 0, 0); white-space: pre;">β</span></font></font>" style="ellipse;whiteSpace=wrap;html=1;aspect=fixed;fillColor=#FFFFFF;strokeColor=#808080;spacingTop=-2;strokeWidth=1;" vertex="1" parent="1">
|
| 126 |
+
<mxGeometry x="1235" y="165" width="20" height="20" as="geometry" />
|
| 127 |
+
</mxCell>
|
| 128 |
+
<mxCell id="33" value="" style="endArrow=none;html=1;rounded=0;" edge="1" parent="1">
|
| 129 |
+
<mxGeometry width="50" height="50" relative="1" as="geometry">
|
| 130 |
+
<mxPoint x="1287" y="55" as="sourcePoint" />
|
| 131 |
+
<mxPoint x="1233" y="55" as="targetPoint" />
|
| 132 |
+
</mxGeometry>
|
| 133 |
+
</mxCell>
|
| 134 |
+
<mxCell id="34" value="" style="endArrow=none;html=1;rounded=0;" edge="1" parent="1">
|
| 135 |
+
<mxGeometry width="50" height="50" relative="1" as="geometry">
|
| 136 |
+
<mxPoint x="1227" y="55" as="sourcePoint" />
|
| 137 |
+
<mxPoint x="1173" y="55" as="targetPoint" />
|
| 138 |
+
</mxGeometry>
|
| 139 |
+
</mxCell>
|
| 140 |
+
</root>
|
| 141 |
+
</mxGraphModel>
|
| 142 |
+
</diagram>
|
| 143 |
+
</mxfile>
|
2411.03387/main_diagram/main_diagram.pdf
ADDED
|
Binary file (37.4 kB). View file
|
|
|
2411.03387/paper_text/intro_method.md
ADDED
|
@@ -0,0 +1,59 @@
|
| 1 |
+
# Introduction
|
| 2 |
+
|
| 3 |
+
Estimating causal quantities from observational data is crucial for decision-making in medicine [@bica2021real; @buell2024individualized; @curth2024using; @feuerriegel2024causal; @kuzmanovic2024causal]. For example, medical practitioners are interested in estimating the effect of chemotherapy vs. immunotherapy on patient survival from electronic health records to understand the best treatment strategies in cancer care. Here, common estimation targets are *averaged* causal quantities such as the average treatment effect (ATE) and the conditional average treatment effect (CATE), yet averaged causal quantities do not allow for understanding the variability of the treatment effect.
|
| 4 |
+
|
| 5 |
+
What is needed for the reliability of causal quantities in medicine? To obtain *reliable* causal quantities, one often needs to "move beyond the mean" [@heckman1997making; @kneib2023rage] and consider the inherent randomness in the treatment effect as a random variable. This randomness is referred to as *aleatoric uncertainty* [@kallus2022what; @ruiz2022non; @cui2024policy]. Quantifying the aleatoric uncertainty of the treatment effect is relevant in medical practice to understand the probability of benefit from treatment [@fan2010sharp; @kallus2022what] and the quantiles and variance of the treatment effect [@fan2010sharp; @aronow2014sharp; @firpo2019partial; @kaji2023assessing; @cui2024policy]. As an example, averaged quantities such as the CATE would simply suggest a positive effect for some patients, while the probability of benefit from treatment can inform patients about the odds of being negatively affected by the treatment. Hence, aleatoric uncertainty of the treatment effect promises additional, fine-grained insights beyond simple averages.
|
| 6 |
+
|
| 7 |
+
Methods for quantifying the aleatoric uncertainty of the treatment effect have gained surprisingly little attention in the causal machine learning community. So far, machine learning for treatment effect estimation has primarily focused on estimating averaged causal quantities [@kunzel2019metalearners; @kennedy2023towards; @nie2021quasi; @curth2021nonparametric; @vansteelandt2023orthogonal; @morzywolek2023general; @shalit2017estimating; @johansson2022generalization; @zhang2020learning]. Some research aims to quantify the epistemic uncertainty in treatment effect estimation [@jesson2020identifying] or the total uncertainty (but without distinguishing the types of uncertainty) [@kivaranovic2020conformal; @lee2020robust; @alaa2024conformal]. Other works have focused on the aleatoric uncertainty of the potential outcomes [@firpo2007efficient; @chernozhukov2013inference; @kim2018causal; @melnychuk2023normalizing; @kennedy2023semiparametric][^1] or on contrasts between distributions of potential outcomes (also known as distributional treatment effects) [@park2021conditional; @chikahara2022feature; @fawkes2024doubly; @kallus2023robust; @martinez2023efficient].[^2] However, to the best of our knowledge, there is no comprehensive meta-learning theory for the estimation of *the aleatoric uncertainty in the treatment effect*.
|
| 8 |
+
|
| 9 |
+
In this paper, we aim to quantify the aleatoric uncertainty of the treatment effect at the covariate-conditional level in the form of a conditional distribution of the treatment effect (CDTE). Knowing the CDTE would automatically allow one to compute the above-mentioned quantities of aleatoric uncertainty, namely, the probability of benefit from treatment and the quantiles and variance of the treatment effect, at both population and covariate-conditional levels.
|
| 10 |
+
|
| 11 |
+
Yet, the identification and estimation of the CDTE, in contrast to the CATE, come with three **challenges** (see Fig. [1](#fig:ident-est-challenges){reference-type="ref" reference="fig:ident-est-challenges"}):
|
| 12 |
+
|
| 13 |
+
Challenge (1) is that the CDTE is **not** *point identifiable*, neither in the potential outcomes framework nor in randomized controlled trials, due to the fundamental problem of causal inference: counterfactual outcomes cannot be observed [@manski1997monotone; @fan2010sharp]. We thus employ *partial identification* [@ho2017partial] to obtain bounds on the CDTE and thereby quantify the aleatoric uncertainty of the treatment effect. Specifically, we focus on Makarov bounds [@makarov1982estimates; @williamson1990probabilistic; @zhang2024bounds], which give sharp bounds on both the cumulative distribution function (CDF) and the quantiles of the CDTE.
|
| 14 |
+
|
| 15 |
+
![Identification and estimation of the conditional distribution of the treatment effect (CDTE) ($=$ our setting) compared to the (well-studied) identification and estimation of the CATE. In this paper, we focus specifically on the CDF of the CDTE, $\mathbb{P}(Y[1] - Y[0] \le \delta \mid x)$, shown in [orange]{style="background-color: orange"}. Our main contribution relates to the estimation, shown in [yellow]{style="background-color: yellow"}. However, moving from CATE identification and estimation to our setting comes with important challenges: (1) CATE (shown in **[green]{style="color: darkgreen"}**) is point identifiable but the CDTE is *not* (shown in **[blue]{style="color: darkblue"}**); (2) there is *no* closed-form expression of the target estimand in terms of nuisance functions and, because of that, CATE learners cannot be directly adapted for estimation; and (3) CATE is an unconstrained target estimand whereas Makarov bounds (shown in [gray]{style="background-color: darkgray"}) are monotone and contained in the interval $[0, 1]$.](figures/contributions.pdf){#fig:ident-est-challenges width="\\textwidth"}
|
| 25 |
+
|
| 26 |
+
Challenge (2) is that there is **no** closed-form expression of the target estimand in terms of nuisance functions. Because of this, existing CATE learners cannot be directly adapted to our task of estimating Makarov bounds. For example, there are *no* orthogonal learners in the general setting, and existing approaches only use naïve plug-in estimators/learners. Furthermore, even the derivation of the orthogonal loss is non-trivial, as there is no efficient influence function at hand for the Makarov bounds.
|
| 27 |
+
|
| 28 |
+
Challenge (3) is that CATE is an unconstrained target estimand, whereas Makarov bounds are **monotone** and **contained** in the interval $[0, 1]$. Notably, any constraints on the target estimand could be violated by orthogonal learners [@vansteelandt2023orthogonal; @van2024combining]. Therefore, an orthogonal learner for Makarov bounds needs to be carefully adapted, especially to perform well in low-sample settings.
|
| 29 |
+
|
| 30 |
+
|
| 33 |
+
|
| 34 |
+
In this paper, we develop a novel orthogonal learner for estimating Makarov bounds, which we call the *AU-learner* and which allows us to quantify the **a**leatoric **u**ncertainty of the treatment effect. Our *AU-learner* addresses all three of the above-mentioned challenges. Further, our *AU-learner* has several useful theoretical properties, such as satisfying Neyman-orthogonality and, thus, quasi-oracle efficiency [@nie2021quasi]. Finally, we propose a flexible, fully-parametric deep learning instantiation of our *AU-learner*. For this, we make use of conditional normalizing flows and call our method AU-CNFs.
|
| 35 |
+
|
| 36 |
+
To summarize, our contributions are as follows: [^3]
|
| 37 |
+
|
| 38 |
+
1. We derive a novel, orthogonal learner called the *AU-learner* to quantify the aleatoric uncertainty of the treatment effect. For this, we estimate Makarov bounds on the CDF/quantiles of the conditional distribution of the treatment effect (CDTE).
|
| 39 |
+
|
| 40 |
+
2. We prove several favorable theoretical properties of our *AU-learner*, such as Neyman-orthogonality and, thus, quasi-oracle efficiency.
|
| 41 |
+
|
| 42 |
+
3. We propose a flexible deep learning instantiation of our *AU-learner* based on conditional normalizing flows, which we call AU-CNFs, and demonstrate its effectiveness over several benchmarks.
|
| 43 |
+
|
| 44 |
+
# Method
|
| 45 |
+
|
| 46 |
+
<figure id="fig:architecture" data-latex-placement="th">
|
| 47 |
+
<embed src="figures/architecture.pdf" />
|
| 48 |
+
<figcaption>Overview of our AU-CNFs. AU-CNFs combine several conditional normalizing flows (CNFs), which we call a nuisance CNF and upper/lower target CNFs. The nuisance CNF is a first-stage model and aims at estimating the nuisance functions, i.e., the propensity score, $\hat{\pi}_a(x) = a \, \hat{\pi}(x) + (1 - a)(1 - \hat{\pi}(x))$, and the conditional outcome CDFs, $\widehat{\mathbb{F}}_a(y \mid x)$. Upper/lower target CNFs are the second-stage working models, $\overline{{\mathcal{G}}}$ and ${\underline{\mathcal{G}}}$, respectively. They aim at minimizing one of the losses of the <em>AU-learner</em>, $\widehat{\underline{\overline{\mathcal{L}}}}_{\text{AU, CRPS} / W_2^2}$.</figcaption>
|
| 49 |
+
</figure>
|
| 50 |
+
|
| 51 |
+
Our AU-CNFs implement Algorithm [\[alg:au-crps\]](#alg:au-crps){reference-type="ref" reference="alg:au-crps"} of our *AU-learner* (see Fig. [7](#fig:architecture){reference-type="ref" reference="fig:architecture"}) by combining several conditional normalizing flows (CNFs) [@rezende2015variational; @trippe2018conditional]. The architecture consists of (i) a nuisance CNF and (ii) two target CNFs (upper and lower). (i) The nuisance CNF aims to fit the nuisance functions, $(\hat{\pi}, \widehat{\mathbb{F}}_0, \widehat{\mathbb{F}}_1)$ or, equivalently, $(\hat{\pi}, \widehat{\mathbb{F}}_0^{-1}, \widehat{\mathbb{F}}_1^{-1})$. (ii) The upper and lower target CNFs constitute the second-stage working models, namely, $\overline{{\mathcal{G}}}$ and ${\underline{\mathcal{G}}}$, and minimize the loss of our *AU-learner*.
|
| 52 |
+
|
| 53 |
+
**(1) Nuisance CNF.** The nuisance CNF has three components, similar to [@melnychuk2023normalizing]: two fully-connected subnetworks (FC$_1$ and FC$_2$) and a CNF parametrized by $\theta$. The two subnetworks FC$_1$ and FC$_2$ form a hypernetwork, which outputs the conditional parameters, $\theta = \theta(X, A)$. This allows us to flexibly model the conditional outcome distribution.
|
| 54 |
+
|
| 55 |
+
The nuisance CNF has the following joint loss for the nuisance functions: $\mathcal{L}_\text{N} = \mathcal{L}_{\text{NLL}} + \alpha \mathcal{L}_\pi$. Here, $\mathcal{L}_{\text{NLL}}$ is a conditional negative log-likelihood loss, $\mathcal{L}_\pi$ is a binary cross-entropy, and $\alpha > 0$ is a hyperparameter. We additionally employ noise regularization to regularize the conditional negative log-likelihood loss [@rothfuss2019noise].
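A minimal sketch of this joint loss in PyTorch is given below; the `log_prob` interface of the CNF and the module names are assumptions, not the released code.

```python
import torch
import torch.nn.functional as F

def nuisance_loss(cnf, propensity_net, x, a, y, alpha=1.0):
    """First-stage loss L_N = L_NLL + alpha * L_pi.

    Assumptions: `cnf.log_prob(y, context=(x, a))` returns the conditional
    log-density of the outcome; `propensity_net(x)` returns logits for pi(x).
    """
    nll = -cnf.log_prob(y, context=(x, a)).mean()                  # L_NLL
    logits = propensity_net(x).squeeze(-1)
    bce = F.binary_cross_entropy_with_logits(logits, a.float())    # L_pi
    return nll + alpha * bce
```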
|
| 56 |
+
|
| 57 |
+
**(2) Upper and lower target CNFs.** The upper and lower target CNFs use the pseudo-CDFs / pseudo-quantiles generated by the nuisance CNF and then implement a second-stage loss of our *AU-learner*. Both target CNFs have the same structure. Specifically, they have a fully-connected subnetwork, $\underline{\overline{\text{FC}}}_3$, and a CNF parametrized by $\underline{\overline{\beta}}$. Analogously, $\underline{\overline{\text{FC}}}_3$ serves as a hypernetwork so that the parameters can be conditioned on $X$: $\underline{\overline{\beta}} = \underline{\overline{\beta}}(X)$.

To fit the target CNFs, we use a second-stage loss of our *AU-learner*, namely, Eq. [\[eq:au-crps\]](#eq:au-crps){reference-type="eqref" reference="eq:au-crps"} or Eq. [\[eq:au-w22\]](#eq:au-w22){reference-type="eqref" reference="eq:au-w22"}. For that, we discretize the $\mathcal{Y}$-space or the $[0, 1]$-interval of $u$ into $n_d$ values and infer argmin/argmax values on this grid. Then, to approximate the integrals, we do the same for the $\mathit{\Delta}$-space and the $[0, 1]$-interval of $\alpha$. The latter creates a $\delta$-/$\alpha$-grid with $n_\delta$/$n_\alpha$ points. These grids are then used for rectangle-rule quadrature. Furthermore, we also regularize the target CNFs with noise regularization [@rothfuss2019noise].
2505.02537/main_diagram/main_diagram.drawio
ADDED
@@ -0,0 +1,265 @@
(draw.io XML source, 265 lines: the main diagram for 2505.02537, a computational graph with labeled nodes `x`, `W`, `b`, `W^+ = \max(0, W)`, `W^- = \min(0, W)`, and output `y`.)
2505.02537/main_diagram/main_diagram.pdf
ADDED
Binary file (14 kB).
2505.02537/paper_text/intro_method.md
ADDED
@@ -0,0 +1,63 @@
# Introduction

Monotonic neural networks bridge the gap between high-capacity non-linear models and the need for interpretable, consistent outputs in various applications. Monotonic MLPs preserve monotonic input-output relationships, making them particularly suitable for domains that require justified and transparent decisions [@gupta2016monotonic; @nguyen2019mononet]. Furthermore, monotonic MLPs have been exploited to build novel architectures for density estimation [@chilinski2020neural; @omi2019fully; @tagasovska2019single], survival analysis [@jeanselme2023neural], and remaining-useful-life prediction [@sanchez2023simplified]. In general, enforcing constraints on the model architecture guarantees certain desired properties, such as fairness or robustness, and explicitly designing the model with inductive biases that exploit prior knowledge has been shown to be fundamental for efficient generalization [@dugas2000incorporating; @milani2016fast; @you2017deep]. For this reason, monotonic networks can improve both performance [@mitchell1980need] and data efficiency [@velivckovic2019resurgence].

Recent works in this field usually fall into one of two categories: 'soft monotonicity' and 'hard monotonicity'. Soft monotonicity employs optimization constraints [@gupta2019incorporate; @sill1996monotonicity], usually as additional penalty terms in the loss. This class of approaches benefits from simple implementation and inexpensive computation, and it retains the power of Multi-Layer Perceptrons (MLPs) to approximate arbitrary functions. However, since the penalties are applied only to dataset samples, monotonicity is enforced only *in-distribution*, and the constraint struggles to generalize out-of-distribution.

Hard monotonicity instead imposes constraints on the model architecture to ensure monotonicity by design [@wehenkel2019unconstrained; @kitouni2023expressive]. The simplest way to do so is to constrain the MLP weights to be non-negative and to use monotonic activations [@daniels2010monotone]. The methods in the literature that exploit this parametrization [@daniels2010monotone; @wehenkel2019unconstrained] require bounded activations, such as the sigmoid and the hyperbolic tangent, which introduce well-known optimization challenges due to vanishing gradients [@dubey2022activation; @ravikumar2023mitigating; @szandala2021review; @nair2010rectified; @glorot2010understanding; @goodfellow2013maxout]. This shortcoming is even more evident in monotonic NNs with non-negative weights, where bounded activations make initialization even more crucial for optimization. As discussed in [8.2](#sec:vanishing_gradient){reference-type="ref+label" reference="sec:vanishing_gradient"}, poor initialization can saturate the activations at the beginning of training, thus significantly slowing it down. Furthermore, bounded activations yield MLPs that can only represent bounded functions, which may hinder generalization, as shown in [1](#fig:frontpage){reference-type="ref+label" reference="fig:frontpage"}.

Indeed, most recent advances in NNs use activations in the family of rectified linear functions, such as the popular ReLU activation [@vaswani2017attention; @he2016resnet]. However, the use of such activations in MLPs with non-negative weights is problematic. In fact, an MLP that uses convex activations (such as ReLU) in conjunction with non-negative weights can only approximate convex functions, which severely limits its applications [@daniels2010monotone; @mikulincer2022size]. For this reason, many approaches in the literature still rely on bounded activations in the architecture in order to ensure the universal approximation abilities of the network.

The primary aim of this work is to extend the theoretical basis of monotonicity-constrained MLPs by showing that it is still possible to achieve universal approximation using activations that saturate on one side, such as ReLU. To show that these findings are not just theoretical tools, we create a new architecture that only uses saturating activations and performs comparably to the state of the art. Our contributions can be summarized as follows:

1. We show that constrained MLPs that alternate left-saturating and right-saturating monotonic activations can approximate any monotonic function. We also demonstrate that this can be achieved with a constant number of layers, which matches the best-known bound for threshold-activated networks.

2. Contrary to the non-negative-constrained formulation, we prove that an MLP with at least 4 layers, non-positive-constrained weights, and ReLU activation is a universal approximator. More generally, this holds true for any saturating monotonic activation.

3. We propose a simple parametrization scheme for monotonic MLPs that (i) can be used with saturating activations; (ii) does not require constrained parameters, which makes optimization more stable and less sensitive to initialization; (iii) does not require multiple activations; and (iv) does not require any a-priori choice of how to alternate an activation and its point reflection.

Our discussion will primarily focus on ReLU activations, which are widely used in the latest advancements in deep learning. However, the results apply to the broader family of monotonic activations that saturate on at least one side. This includes most members of the family of ReLU-like activations, such as the exponential linear unit (ELU) [@clevert2015fast], SELU [@klambauer2017self], CELU [@barron2017continuously], SReLU [@jin2016deep], and many more.

# Method

In [4](#sec:relax_weight_constraint){reference-type="ref+label" reference="sec:relax_weight_constraint"}, we show how the activation switch can be parametrized using the sign of the weights. For this mechanism, we propose two different parametrizations: one where the switch is applied post-activation and one where it is applied pre-activation. In [6](#fig:side_by_side_comparison){reference-type="ref+label" reference="fig:side_by_side_comparison"}, we report both the pseudocode and the computational graph for the post-activation formulation. For completeness, and to ease the comparison, this section also reports the pre-activation variant side by side with the post-activation formulation from the main text: [11](#fig:computational_graphs){reference-type="ref+label" reference="fig:computational_graphs"} shows the two computational graphs, and [12](#alg:psudocodes){reference-type="ref+label" reference="alg:psudocodes"} the corresponding pseudocode.

<figure id="fig:computational_graphs" data-latex-placement="ht">
<div class="center">
<p><embed src="images/graph_dav.pdf" /> <embed src="images/graph_alb.pdf" /></p>
</div>
<figcaption>Computational graph of a single layer of a monotonic NN with the proposed activation switch via the weight sign. The left plot shows the post-activation switch, and the right plot the pre-activation switch.</figcaption>
</figure>

<figure id="alg:psudocodes" data-latex-placement="ht">
<div class="minipage">
<div class="algorithm">
<div class="algorithmic">
<p>data $x \in \mathbb{R}^n$, weight matrix $W \in \mathbb{R}^{h_l \times h_{l-1}}$, bias vectors $b \in \mathbb{R}^{h_l}$, activation function $\sigma$; prediction $\hat{y} \in \mathbb{R}^{h_L}$</p>
<p>$W^+ := \max(W, 0)$; $W^- := \min(W, 0)$; $z^+ := W^+ x + b$; $z^- := W^- x + b$; $\hat{y} := \sigma(z^+) - \sigma(z^-)$</p>
</div>
</div>
</div>
<div class="minipage">
<div class="algorithm">
<div class="algorithmic">
<p>data $x \in \mathbb{R}^n$, weight matrix $W \in \mathbb{R}^{h_l \times h_{l-1}}$, bias vectors $b \in \mathbb{R}^{h_l}$, activation function $\sigma$; prediction $\hat{y} \in \mathbb{R}^{h_L}$</p>
<p>$W^+ := \max(W, 0)$; $W^- := \min(W, 0)$; $z^+ := W^+ \sigma(x)$; $z^- := W^- \sigma(-x)$; $\hat{y} := z^+ + z^- + b$</p>
</div>
</div>
<p><span id="alg:psudocodes" data-label="alg:psudocodes"></span></p>
</div>
<figcaption>Forward pass of a monotonic MLP with the post-activation (left) and pre-activation (right) switch.</figcaption>
</figure>
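
Read literally, the two variants above translate into a few lines of PyTorch (a sketch of the forward pass only, with unconstrained parameters; `SignSwitchLayer` and its arguments are our hypothetical names):

```python
import torch
import torch.nn as nn

class SignSwitchLayer(nn.Module):
    """Monotone layer with the activation switch driven by the weight sign.
    post=True:  y = sigma(W+ x + b) - sigma(W- x + b)   (post-activation)
    post=False: y = W+ sigma(x) + W- sigma(-x) + b      (pre-activation)"""

    def __init__(self, d_in, d_out, act=torch.relu, post=True):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)   # unconstrained parameters
        self.act, self.post = act, post

    def forward(self, x):
        W, b = self.lin.weight, self.lin.bias
        W_pos, W_neg = W.clamp(min=0.0), W.clamp(max=0.0)
        if self.post:
            return self.act(x @ W_pos.t() + b) - self.act(x @ W_neg.t() + b)
        return self.act(x) @ W_pos.t() + self.act(-x) @ W_neg.t() + b

net = nn.Sequential(SignSwitchLayer(3, 32), SignSwitchLayer(32, 1))
y = net(torch.randn(16, 3))
```

In both branches every term is non-decreasing in $x$, so each layer, and therefore any stack of such layers, is monotone while the raw weights remain free.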

For this work, our code builds heavily on that provided by @runje2023constrained, in order to ensure that the datasets used match exactly. For this reason, we give only a short description of the employed datasets; for a more detailed description, refer to the original work [@runje2023constrained].

- **COMPAS**: A binary classification dataset of criminal records, comprising 13 features, 4 of which are monotonic.

- **Blog Feedback**: A regression dataset comprising 280 features, 8 of which are monotonic, aimed at predicting the number of comments within 24 hours.

- **Auto MPG**: A regression dataset aimed at predicting miles-per-gallon fuel consumption, comprising 7 features, 3 of which are monotonic.

- **Heart Disease**: A classification dataset comprising 13 features, 2 of which are monotonic, aimed at predicting the presence of heart disease.

- **Loan Defaulter**: A classification dataset comprising 28 features, 5 of which are monotonic, aimed at predicting loan defaults.
2506.06898/main_diagram/main_diagram.drawio
ADDED
The diff for this file is too large to render. See raw diff.
2506.06898/paper_text/intro_method.md
ADDED
@@ -0,0 +1,43 @@
# Introduction

The ability to decode mental states from brain activity is a longstanding goal of neuroscience. Mental images, visual representations not driven by retinal input, are an especially appealing target for decoding since an externalized mental image could, in principle, depict information stored in brain activity patterns that is difficult or impossible to read out by other means (e.g., by speaking). Access to this information could help researchers better understand cognitive processes; it could also be useful in a clinical setting [\[42\]](#page-9-3), where millions of patients are left unable to communicate through conventional means as a result of traumatic brain injuries, and many common afflictions manifest as a profound dysregulation of unwanted or confusing mental imagery [\[22\]](#page-8-3).

Most previous attempts to classify, retrieve, or reconstruct mental images have used a cross-decoding approach in which an encoding or decoding model is trained on brain activity evoked by seen images and then tested on brain activity recorded during mental imagery [\[28,](#page-8-1) [55,](#page-10-0) [67\]](#page-10-1). This approach is strongly justified by basic neuroscience that has demonstrated the extensive overlap in the representation of seen and mental images [\[20,](#page-8-2) [29,](#page-9-4) [37,](#page-9-5) [46,](#page-9-6) [61\]](#page-10-2). Excitement about the prospects offered by cross-decoding approaches has increased significantly of late, due to the rapid and dramatic improvement in vision decoding methods (e.g., [\[50,](#page-9-7) [51\]](#page-9-8)).

It has been unclear whether recently developed methods for decoding seen images might generalize to mental imagery, which is encoded in brain activity with much lower signal-to-noise ratios than vision [\[48\]](#page-9-9) and at a lower spatial resolution in the early visual areas that represent much of the structural detail of seen images [\[3\]](#page-8-4). In particular, it is not known whether modern vision decoding methods can yield reconstructions that naive human observers would accurately identify as corresponding to the target images, a benchmark recently achieved by reconstructions of seen images [\[51\]](#page-9-8) and a minimum requirement for practical applications.

It has also been an open question whether the complexity of the imagined target stimuli might limit the quality of reconstructions of mental images. Although previous works (see Figure [1\)](#page-0-0) have demonstrated reconstructions of simple imagined stimuli such as blobs [\[67\]](#page-10-1), letters [\[20,](#page-8-2) [53\]](#page-9-2), and single natural objects [\[28,](#page-8-1) [31,](#page-9-1) [55\]](#page-10-0), there is no precedent for reconstructing complex natural scenes with multiple objects. To address these gaps, we make three main contributions:

- 1. We release the benchmark dataset NSD-Imagery. This dataset is an extension to the Natural Scenes Dataset (NSD) [\[2\]](#page-8-0), which is a large-scale dataset of fMRI activity paired with seen images that is the current standard used to train brain-to-image reconstruction models [\[50,](#page-9-7) [51\]](#page-9-8). NSD-Imagery provides held-out test trials where the same NSD participants performed a mental imagery task across varying stimulus complexity, thereby allowing researchers to test the generalization capabilities of vision decoding methods trained on the core NSD data.
- 2. We demonstrate that while the reconstruction performance of *individual stimuli* is correlated between vision and mental imagery, *methods* with improved performance on vision decoding do not necessarily produce improved performance for mental imagery decoding (at least for the handful of high-performing vision reconstruction methods tested here), a finding with critical importance to efforts developing methods that perform well on downstream tasks requiring mental images.
- 3. We conduct extensive analyses using human raters and demonstrate that, despite the nuance in the previous point, some recent vision decoding methods do generalize to mental images, a promising finding that demonstrates the utility of using contemporary vision decoding models to improve reconstructions of mental images.

# Method

The NSD-Imagery dataset extends NSD by incorporating a small set of 7T functional magnetic resonance imaging (fMRI) responses collected from the same subjects during mental imagery tasks. This enables the evaluation of vision decoding models on internally generated visual representations that are more relevant to future downstream use cases of brain decoding models. For this extension, all 8 participants from the original NSD study underwent an additional scanning session, following the same high-resolution fMRI acquisition protocols as NSD.

<span id="page-2-0"></span>

Figure 2. Overview of the tasks utilized for the NSD-Imagery benchmark.

Each participant completed runs of three task types (vision, imagery, and attention) performed with three sets of target stimuli (simple, complex, and conceptual), resulting in 9 distinct run types, of which the imagery runs were completed twice, for a total of 12 runs. Each run contained 8 repetitions of the 6 stimuli in each set, for a total of 576 trials per subject. (These counts are tallied in a short sketch after the stimulus list below.) The vision and imagery tasks (Figure [2\)](#page-2-0) are the primary tasks utilized for the NSD-Imagery benchmark. The target stimuli were carefully designed to encompass varying levels of complexity:

- (A) *Simple stimuli*: Six simple geometric shapes: four oriented bars (0°, 45°, 90°, 135°) and two crosses ("+" and "×"), all constructed from black bars on a gray background.
- (B) *Complex stimuli*: Five natural scenes selected from the NSD shared1000 and one artwork ("The Two Sisters" by Kehinde Wiley). The natural scene images were chosen based on a recognizability score derived from participants' performance in the original NSD sessions, ensuring a range of familiarity and visual content.
- (C) *Conceptual stimuli*: Six single-word concepts describing abstract visual features or objects (e.g., stripes, banana, mammal), rather than specific images. The *vision* trials for conceptual stimuli showed varying images of natural scenes corresponding to the target concept for each trial. Since the value of decoding a trial-averaged response to multiple visual stimuli is not clear, we recommend excluding the vision trials of conceptual stimuli from evaluations on the benchmark, and we present results here only on the imagery trials of conceptual stimuli.
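
The run and trial counts referenced above work out as follows:

```python
# 3 tasks x 3 stimulus sets = 9 run types; the imagery runs were
# completed twice, adding one extra run per stimulus set.
run_types = 3 * 3
runs = run_types + 3
trials_per_run = 8 * 6        # 8 repetitions of 6 stimuli
assert runs == 12 and runs * trials_per_run == 576
```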

Each stimulus was associated with a unique single-letter cue, and participants memorized all 18 cue-stimulus pairs prior to scanning. Pre-scan practice sessions were conducted to ensure familiarity with the cues and stimuli, involving both visual presentations and verbal recall to reinforce memory.

In the *vision runs*, participants were presented with images accompanied by the corresponding letter cue displayed at the central fixation point. Each trial lasted 4 seconds, consisting of 3 seconds of image and cue presentation, followed by a 1-second rest period with only the fixation cross displayed. Participants indicated via button press whether the presented image matched the letter cue. Images were displayed within a square frame occupying 8.4° × 8.4° of the subject's field of view, consistent with NSD stimulus presentations.

In the *imagery runs*, participants were shown only the cue letter at the fixation point, within the same frame used in the vision runs, and were instructed to imagine the corresponding stimulus. Each trial also lasted 4 seconds, with 3 seconds allocated to vividly imagining the cued target stimulus, projecting the mental image onto the space outlined by the frame. This was followed by a 1-second rest period. <span id="page-3-1"></span>Participants rated the vividness of their mental image via button press, pressing one of two buttons to indicate "vivid" or "not vivid."

In the *attention runs*, participants were given a letter cue and asked to detect whether the cued stimulus was present in a series of rapidly presented images. While these runs are released as part of the NSD-Imagery dataset, their value for assessing vision reconstruction methods is unclear, and we recommend excluding them from evaluations on the NSD-Imagery benchmark. More information on NSD-Imagery and details on data preprocessing are in Appendix [A.1](#page-11-0).

Instructions for accessing the NSD-Imagery dataset can be found at [www.naturalscenesdataset.org](www.naturalscenesdataset.org).