| prompt | description |
|---|---|
Given the following machine learning model name: Self-Attention Guidance, provide a description of the model | |
Given the following machine learning model name: Support-set Based Cross-Supervision, provide a description of the model | **Sscs**, or **Support-set Based Cross-Supervision**, is a module for video grounding which consists of two main components: a discriminative contrastive objective and a generative caption objective. The contrastive objective aims to learn effective representations by contrastive learning, while the caption objective can train a powerful video encoder supervised by texts. Due to the co-existence of some visual entities in both ground-truth and background intervals, i.e., mutual exclusion, naive contrastive learning is unsuitable for video grounding. This problem is addressed by boosting the cross-supervision with the support-set concept, which collects visual information from the whole video and eliminates the mutual exclusion of entities.
Specifically, consider two video-text pairs {$V\_{i}, L\_{i}$}, {$V\_{j}, L\_{j}$} in the batch. After feeding them into a video encoder and a text encoder, the clip-level and sentence-level embeddings ({$X\_{i}, Y\_{i}$} and {$X\_{j}, Y\_{j}$}) in a shared space are acquired. Based on the support-set module, the weighted averages of $X\_{i}$ and $X\_{j}$ are computed to obtain $\bar{X}\_{i}$ and $\bar{X}\_{j}$ respectively. Finally, the contrastive and caption objectives are combined to pull close the representations of the clips and text from the same samples and push away those from other pairs. |
Given the following machine learning model name: Dorylus, provide a description of the model | **Dorylus** is a distributed system for training graph neural networks which uses cheap CPU servers and Lambda threads. It scales to
large billion-edge graphs with low-cost cloud resources. |
Given the following machine learning model name: Graphic Mutual Information, provide a description of the model | **Graphic Mutual Information**, or **GMI**, measures the correlation between input graphs and high-level hidden representations. GMI generalizes the idea of conventional mutual information computation from vector space to the graph domain, where measuring mutual information from the two aspects of node features and topological structure is indispensable. GMI exhibits several benefits: first, it is invariant to isomorphic transformations of input graphs, an inevitable constraint in many existing graph representation learning algorithms; second, it can be efficiently estimated and maximized by current mutual information estimation methods such as MINE. |
Given the following machine learning model name: Dual Graph Convolutional Networks, provide a description of the model | A dual graph convolutional neural network jointly considers the two essential assumptions of semi-supervised learning: (1) local consistency and (2) global consistency. Accordingly, two convolutional neural networks are devised to embed the local-consistency-based and global-consistency-based knowledge, respectively.
Description and image from: [Dual Graph Convolutional Networks for Graph-Based Semi-Supervised Classification](https://persagen.com/files/misc/zhuang2018dual.pdf) |
Given the following machine learning model name: ShakeDrop, provide a description of the model | **ShakeDrop regularization** extends [Shake-Shake regularization](https://paperswithcode.com/method/shake-shake-regularization) and can be applied not only to [ResNeXt](https://paperswithcode.com/method/resnext) but also [ResNet](https://paperswithcode.com/method/resnet), [WideResNet](https://paperswithcode.com/method/wideresnet), and [PyramidNet](https://paperswithcode.com/method/pyramidnet). The proposed ShakeDrop is given as
$$G\left(x\right) = x + \left(b\_{l} + \alpha − b\_{l}\alpha\right)F\left(x\right), \text{ in train-fwd} $$
$$G\left(x\right) = x + \left(b\_{l} + \beta − b\_{l}\beta\right)F\left(x\right), \text{ in train-bwd} $$
$$G\left(x\right) = x + E\left[b\_{l} + \alpha − b\_{l}\alpha\right]F\left(x\right), \text{ in test} $$
where $b\_{l}$ is a Bernoulli random variable with probability $P\left(b\_{l} = 1\right) = E\left[b\_{l}\right] = p\_{l}$ given by the linear decay rule in each layer, and $\alpha$ and $\beta$ are independent uniform random variables in each element.
The most effective ranges of $\alpha$ and $\beta$ were experimentally found to differ from those of Shake-Shake, and are $\alpha = 0$, $\beta \in \left[0, 1\right]$ and $\alpha \in \left[-1, 1\right]$, $\beta \in \left[0, 1\right]$. |
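The train-forward and test equations above can be sketched in a few lines of NumPy (the function name `shake_drop` and the flag-based handling are illustrative assumptions; a real implementation would inject the independent backward coefficient $\beta$ via autograd):

```python
import numpy as np

def shake_drop(x, F_x, p_l, alpha_range=(-1.0, 1.0), training=True, rng=None):
    """Sketch of ShakeDrop's forward rule: G(x) = x + (b + a - b*a) * F(x).

    b ~ Bernoulli(p_l); a is drawn uniformly per element at train time.
    At test time the expectation E[b + a - b*a] = p_l + E[a] * (1 - p_l)
    is used instead (the train-bwd rule, which swaps a for an independent
    beta, is handled by autograd in a real framework).
    """
    rng = np.random.default_rng() if rng is None else rng
    if training:
        b = float(rng.random() < p_l)                      # Bernoulli gate
        alpha = rng.uniform(*alpha_range, size=x.shape)    # per-element noise
        gate = b + alpha - b * alpha
    else:
        e_alpha = 0.5 * (alpha_range[0] + alpha_range[1])  # E[alpha]
        gate = p_l + e_alpha * (1.0 - p_l)
    return x + gate * F_x
```

Note that with $b\_{l} = 1$ the gate collapses to $1$ (a plain residual forward pass), while $b\_{l} = 0$ leaves the perturbed branch $\alpha F(x)$.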
Given the following machine learning model name: Recursive Feature Pyramid, provide a description of the model | A **Recursive Feature Pyramid (RFP)** builds on top of Feature Pyramid Networks ([FPN](https://paperswithcode.com/method/fpn)) by incorporating extra feedback connections from the FPN layers into the bottom-up backbone layers. Unrolling the recursive structure into a sequential implementation yields an object detector backbone that looks at the images twice or more. Similar to the cascaded detector heads in [Cascade R-CNN](https://paperswithcode.com/method/cascade-r-cnn) trained with more selective examples, an RFP recursively enhances FPN to generate increasingly powerful representations. Resembling Deeply-Supervised Nets, the feedback connections bring the features that directly receive gradients from the detector heads back to the low levels of the bottom-up backbone to speed up training and boost performance. |
Given the following machine learning model name: Cross-Scale Non-Local Attention, provide a description of the model | **Cross-Scale Non-Local Attention**, or **CS-NL**, is a non-local attention module for image super-resolution deep networks. It learns to mine long-range dependencies between LR features and larger-scale HR patches within the same feature map. Specifically, suppose we are conducting an $s$-scale super-resolution with the module: given a feature map $X$ of spatial size $(W, H)$, we first bilinearly downsample it to $Y$ with scale $s$, and match the $p\times p$ patches in $X$ with the downsampled $p \times p$ candidates in $Y$ to obtain the [softmax](https://paperswithcode.com/method/softmax) matching score. Finally, we conduct deconvolution on the score by weighted addition of the patches of size $\left(sp, sp\right)$ extracted from $X$. The obtained $Z$ of size $(sW, sH)$ will be $s$-times super-resolved relative to $X$. |
Given the following machine learning model name: EvoNorms, provide a description of the model | **EvoNorms** are a set of normalization-activation layers that go beyond existing design patterns. Normalization and activation are unified into a single computation graph whose structure is evolved starting from low-level primitives. EvoNorms consist of two series: the B series and the S series. The B series are batch-dependent and were discovered without any constraint. The S series work on individual samples and were discovered by rejecting any batch-dependent operations. |
Given the following machine learning model name: ESPNetv2, provide a description of the model | **ESPNetv2** is a convolutional neural network that utilises group point-wise and depth-wise dilated separable convolutions to learn representations from a large effective receptive field with fewer FLOPs and parameters. |
Given the following machine learning model name: FlexFlow, provide a description of the model | **FlexFlow** is a deep learning engine that uses guided randomized search of the SOAP (Sample, Operator, Attribute, and Parameter) space to find a fast parallelization strategy for a specific parallel machine. To accelerate this search, FlexFlow introduces a novel execution simulator that can accurately predict a parallelization strategy’s performance and is three orders of magnitude faster than prior approaches that execute each strategy.
FlexFlow uses two main components: a fast, incremental execution simulator to evaluate different parallelization strategies, and a Markov Chain Monte Carlo (MCMC) search algorithm that takes advantage of the incremental simulator to rapidly explore the large search space. |
Given the following machine learning model name: Clipped Double Q-learning, provide a description of the model | **Clipped Double Q-learning** is a variant on [Double Q-learning](https://paperswithcode.com/method/double-q-learning) that upper-bounds the less biased Q estimate $Q\_{\theta\_{2}}$ by the biased estimate $Q\_{\theta\_{1}}$. This is equivalent to taking the minimum of the two estimates, resulting in the following target update:
$$ y\_{1} = r + \gamma\min\_{i=1,2}Q\_{\theta'\_{i}}\left(s', \pi\_{\phi\_{1}}\left(s'\right)\right) $$
The motivation for this extension is that vanilla double [Q-learning](https://paperswithcode.com/method/q-learning) is sometimes ineffective if the target and current networks are too similar, e.g. with a slow-changing policy in an actor-critic framework. |
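The target update above is essentially a one-liner; a minimal NumPy sketch, with a helper name and `done` flag that are our own additions:

```python
import numpy as np

def clipped_double_q_target(r, gamma, q1_next, q2_next, done=0.0):
    """y = r + gamma * min(Q_theta'_1, Q_theta'_2) evaluated at
    (s', pi_phi_1(s')); q1_next / q2_next are the two target critics'
    estimates for that next state-action pair, and the elementwise
    minimum upper-bounds the less biased estimate by the biased one."""
    return r + gamma * (1.0 - done) * np.minimum(q1_next, q2_next)
```

Both critics are then regressed toward this single shared target.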
Given the following machine learning model name: Pseudoinverse Graph Convolutional Network, provide a description of the model | A [GCN](https://paperswithcode.com/method/gcn) method targeted at the unique spectral properties of dense graphs and hypergraphs, enabled by efficient numerical linear algebra. |
Given the following machine learning model name: Adaptive Masking, provide a description of the model | **Adaptive Masking** is a type of attention mechanism that allows a model to learn its own context size to attend over. For each head in [Multi-Head Attention](https://paperswithcode.com/method/multi-head-attention), a masking function is added to control for the span of the attention. A masking function is a non-increasing function that maps a
distance to a value in $\left[0, 1\right]$. Adaptive masking takes the following soft masking function $m\_{z}$ parametrized by a real value $z$ in $\left[0, S\right]$:
$$ m\_{z}\left(x\right) = \min\left[\max\left[\frac{1}{R}\left(R+z-x\right), 0\right], 1\right] $$
where $R$ is a hyper-parameter that controls its softness: the mask stays at $1$ up to distance $z$ and then decays linearly to $0$ over the next $R$ positions. This soft masking function is inspired by [Jernite et al. (2017)](https://arxiv.org/abs/1611.06188). The attention weights are then computed on the masked span:
$$ a\_{tr} = \frac{m\_{z}\left(t-r\right)\exp\left(s\_{tr}\right)}{\sum^{t-1}\_{q=t-S}m\_{z}\left(t-q\right)\exp\left(s\_{tq}\right)}$$
An $\ell\_{1}$ penalization is added on the parameters $z\_{i}$ for each attention head $i$ of the model to the loss function:
$$ L = - \log{P}\left(w\_{1}, \dots, w\_{T}\right) + \frac{\lambda}{M}\sum\_{i}z\_{i} $$
where $\lambda > 0$ is the regularization hyperparameter, and $M$ is the number of heads in each
layer. This formulation is differentiable in the parameters $z\_{i}$, and learnt jointly with the rest of the model. |
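The mask and the reweighted attention can be sketched in NumPy (function names and the distance layout are illustrative assumptions, not the authors' code):

```python
import numpy as np

def soft_mask(x, z, R):
    """m_z(x) = min(max((R + z - x)/R, 0), 1): equal to 1 up to distance z,
    then linearly decaying to 0 over the next R positions."""
    return np.clip((R + z - x) / R, 0.0, 1.0)

def masked_attention_weights(scores, z, R):
    """Attention weights over past positions, reweighted by the soft mask.

    scores[r] is the raw score s_{tr} for a position at distance
    len(scores) - r from the current position (a toy layout)."""
    d = np.arange(len(scores), 0, -1)        # distances t - r
    m = soft_mask(d, z, R)
    w = m * np.exp(scores)
    return w / w.sum()
```

Because `soft_mask` is piecewise linear in $z$, the span parameter receives a gradient and can be learned alongside the other weights.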
Given the following machine learning model name: Approximating Spatiotemporal Representations Using a 2DCNN, provide a description of the model | Approximating Spatiotemporal Representations Using a 2DCNN |
Given the following machine learning model name: Sandwich Batch Normalization, provide a description of the model | Sandwich Batch Normalization (**SaBN**) is a frustratingly easy improvement of [Batch Normalization](https://paperswithcode.com/method/batch-normalization) (BN) with only a few lines of code changes. SaBN is motivated by addressing the inherent *feature distribution heterogeneity* that can be identified in many tasks, arising from data heterogeneity (multiple input domains) or model heterogeneity (dynamic architectures, model conditioning, etc.). Our SaBN factorizes the BN affine layer into one shared *sandwich affine* layer, cascaded by several parallel *independent affine* layers. We demonstrate the prevailing effectiveness of SaBN as a **drop-in replacement in four tasks**: *conditional image generation*, *[neural architecture search](https://paperswithcode.com/method/neural-architecture-search)* (NAS), *adversarial training*, and *arbitrary style transfer*. Leveraging SaBN immediately achieves better Inception Score and FID on CIFAR-10 and ImageNet conditional image generation with three state-of-the-art GANs; boosts the performance of a state-of-the-art weight-sharing NAS algorithm significantly on NAS-Bench-201; substantially improves the robust and standard accuracies for adversarial defense; and produces superior arbitrary stylized results. |
Given the following machine learning model name: Orientation Regularized Network, provide a description of the model | **Orientation Regularized Network** (ORN) is a multi-view image fusion technique for pose estimation. It uses IMU orientations as a structural prior to mutually fuse the image features of each pair of joints linked by IMUs. For example, it uses the features of the elbow to reinforce those of the wrist based on the IMU at the lower-arm. |
Given the following machine learning model name: CrossViT, provide a description of the model | **CrossViT** is a type of [vision transformer](https://paperswithcode.com/method/vision-transformer) that uses a dual-branch architecture to extract multi-scale feature representations for image classification. The architecture combines image patches (i.e. tokens in a [transformer](https://paperswithcode.com/method/transformer)) of different sizes to produce stronger visual features for image classification. It processes small and large patch tokens with two separate branches of different computational complexities and these tokens are fused together multiple times to complement each other.
Fusion is achieved by an efficient [cross-attention module](https://paperswithcode.com/method/cross-attention-module), in which each transformer branch creates a non-patch token as an agent to exchange information with the other branch by attention. This allows for linear-time generation of the attention map in fusion instead of quadratic time otherwise. |
Given the following machine learning model name: ShuffleNet, provide a description of the model | **ShuffleNet** is a convolutional neural network designed specially for mobile devices with very limited computing power. The architecture utilizes two new operations, pointwise group [convolution](https://paperswithcode.com/method/convolution) and [channel shuffle](https://paperswithcode.com/method/channel-shuffle), to reduce computation cost while maintaining accuracy. |
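The channel shuffle operation at ShuffleNet's core is a simple reshape, transpose, reshape sequence; a NumPy sketch (illustrative, not the reference implementation):

```python
import numpy as np

def channel_shuffle(x, groups):
    """Shuffle the channel axis of an (N, C, H, W) tensor: split C into
    (groups, C // groups), swap those two sub-axes, and flatten back, so
    channels from different groups are interleaved and information can
    flow across groups in the following grouped convolution."""
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(n, c, h, w)
```

For example, with 6 channels and 2 groups, the channel order `[0, 1, 2, 3, 4, 5]` becomes `[0, 3, 1, 4, 2, 5]`.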
Given the following machine learning model name: Dot-Product Attention, provide a description of the model | **Dot-Product Attention** is an attention mechanism where the alignment score function is calculated as:
$$f_{att}\left(\textbf{h}_{i}, \textbf{s}\_{j}\right) = h\_{i}^{T}s\_{j}$$
It is equivalent to [multiplicative attention](https://paperswithcode.com/method/multiplicative-attention) (without a trainable weight matrix, assuming this is instead an identity matrix). Here $\textbf{h}$ refers to the hidden states for the encoder, and $\textbf{s}$ is the hidden states for the decoder. The function above is thus a type of alignment score function.
Within a neural network, once we have the alignment scores, we calculate the final scores/weights using a [softmax](https://paperswithcode.com/method/softmax) function of these alignment scores (ensuring it sums to 1). |
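A minimal NumPy sketch for a single decoder state (function name ours; real implementations batch this over all decoder positions):

```python
import numpy as np

def dot_product_attention(h_enc, s_dec):
    """Alignment scores f_att(h_i, s_j) = h_i^T s_j against every encoder
    state, softmax-normalized into weights, then used to form a context
    vector as the weighted sum of encoder states."""
    scores = h_enc @ s_dec                     # (T_enc,) alignment scores
    scores = scores - scores.max()             # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    context = weights @ h_enc                  # weighted sum of encoder states
    return weights, context
```

When all encoder states score equally, the softmax yields uniform weights, as expected.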
Given the following machine learning model name: Graph Self-Attention, provide a description of the model | **Graph Self-Attention (GSA)** is a self-attention module used in the [BP-Transformer](https://paperswithcode.com/method/bp-transformer) architecture, and is based on the [graph attentional layer](https://paperswithcode.com/method/graph-attentional-layer).
For a given node $u$, we update its representation according to its neighbour nodes, formulated as $\mathbf{h}\_{u} \leftarrow \text{GSA}\left(\mathcal{G}, \mathbf{h}^{u}\right)$.
Let $\mathcal{A}\left(u\right)$ denote the set of the neighbour nodes of $u$ in $\mathcal{G}$; $\text{GSA}\left(\mathcal{G}, \mathbf{h}^{u}\right)$ is detailed as follows:
$$ \mathbf{A}^{u} = \text{concat}\left(\{\mathbf{h}\_{v} | v \in \mathcal{A}\left(u\right)\}\right) $$
$$ \mathbf{Q}^{u}\_{i} = \mathbf{h}\_{u}\mathbf{W}^{Q}\_{i},\mathbf{K}\_{i}^{u} = \mathbf{A}^{u}\mathbf{W}^{K}\_{i},\mathbf{V}^{u}\_{i} = \mathbf{A}^{u}\mathbf{W}\_{i}^{V} $$
$$ \text{head}^{u}\_{i} = \text{softmax}\left(\frac{\mathbf{Q}^{u}\_{i}\mathbf{K}\_{i}^{uT}}{\sqrt{d}}\right)\mathbf{V}\_{i}^{u} $$
$$ \text{GSA}\left(\mathcal{G}, \mathbf{h}^{u}\right) = \left[\text{head}^{u}\_{1}, \dots, \text{head}^{u}\_{h}\right]\mathbf{W}^{O}$$
where $d$ is the dimension of $\mathbf{h}$, and $\mathbf{W}^{Q}\_{i}$, $\mathbf{W}^{K}\_{i}$ and $\mathbf{W}^{V}\_{i}$ are trainable parameters of the $i$-th attention head. |
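A single-head NumPy sketch of the node update above (the projection matrices, neighbour list, and function name are toy inputs of our own; the real module uses $h$ heads and concatenates them):

```python
import numpy as np

def graph_self_attention(h, neighbours, u, Wq, Wk, Wv, Wo):
    """One-head sketch of GSA: node u attends over its neighbour set.

    h: (N, d) node representations; neighbours: indices of A(u);
    Wq/Wk/Wv/Wo: (d, d) projection matrices (single head)."""
    d = h.shape[1]
    A = h[neighbours]                    # stacked neighbour representations
    q = h[u] @ Wq                        # query comes from u itself
    K = A @ Wk
    V = A @ Wv
    logits = (K @ q) / np.sqrt(d)        # scaled dot-product scores
    w = np.exp(logits - logits.max())
    w = w / w.sum()                      # softmax over neighbours
    return (w @ V) @ Wo                  # updated representation of u
```

With identity projections and identical neighbours, the update simply returns the shared neighbour representation, which is a quick sanity check.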
Given the following machine learning model name: self-DIstillation with NO labels, provide a description of the model | **DINO** (self-distillation with no labels) is a self-supervised learning method that directly predicts the output of a teacher network - built with a momentum encoder - using a standard cross-entropy loss.
For a single pair of views $\left(x\_{1}, x\_{2}\right)$, the scheme works as follows.
The model passes two different random transformations of an input image to the student and teacher networks. Both networks have the same architecture but different parameters.
The output of the teacher network is centered with a mean computed over the batch. Each network outputs a $K$ dimensional feature normalized with a temperature [softmax](https://paperswithcode.com/method/softmax) over the feature dimension.
Their similarity is then measured with a cross-entropy loss.
A stop-gradient (sg) operator is applied to the teacher to propagate gradients only through the student.
The teacher parameters are updated with the student parameters' exponential moving average (ema). |
Given the following machine learning model name: Non-Local Operation, provide a description of the model | A **Non-Local Operation** is a component for capturing long-range dependencies with deep neural networks. It is a generalization of the classical non-local mean operation in computer vision. Intuitively a non-local operation computes the response at a position as a weighted sum of the features at all positions in the input feature maps. The set of positions can be in space, time, or spacetime, implying that these operations are applicable for image, sequence, and video problems.
Following the non-local mean operation, a generic non-local operation for deep neural networks is defined as:
$$ y\_{i} = \frac{1}{\mathcal{C}\left(x\right)}\sum\_{\forall{j}}f\left(x\_{i}, x\_{j}\right)g\left(x\_{j}\right) $$
Here $i$ is the index of an output position (in space, time, or spacetime) whose response is to be computed and $j$ is the index that enumerates all possible positions. $x$ is the input signal (image, sequence, video; often their features) and $y$ is the output signal of the same size as $x$. A pairwise function $f$ computes a scalar (representing a relationship such as affinity) between $i$ and all $j$. The unary function $g$ computes a representation of the input signal at the position $j$. The response is normalized by a factor $\mathcal{C}\left(x\right)$.
The non-local behavior is due to the fact that all positions ($\forall{j}$) are considered in the operation. As a comparison, a convolutional operation sums up the weighted input in a local neighborhood (e.g., $i − 1 \leq j \leq i + 1$ in a 1D case with kernel size 3), and a recurrent operation at time $i$ is often based only on the current and the latest time steps (e.g., $j = i$ or $i − 1$).
The non-local operation is also different from a fully-connected (fc) layer. The equation above computes responses based on relationships between different locations, whereas fc uses learned weights. In other words, the relationship between $x\_{j}$ and $x\_{i}$ is not a function of the input data in fc, unlike in non-local layers. Furthermore, the formulation in the equation above supports inputs of variable sizes, and maintains the corresponding size in the output. On the contrary, an fc layer requires a fixed-size input/output and loses positional correspondence (e.g., that from $x\_{i}$ to $y\_{i}$ at the position $i$).
A non-local operation is a flexible building block and can be easily used together with convolutional/recurrent layers. It can be added into the earlier part of deep neural networks, unlike fc layers that are often used in the end. This allows us to build a richer hierarchy that combines both non-local and local information.
In terms of parameterisation, we usually parameterise $g$ as a linear embedding of the form $g\left(x\_{j}\right) = W\_{g}\mathbb{x}\_{j}$ , where $W\_{g}$ is a weight matrix to be learned. This is implemented as, e.g., 1×1 [convolution](https://paperswithcode.com/method/convolution) in space or 1×1×1 convolution in spacetime. For $f$ we use an affinity function, a list of which can be found [here](https://paperswithcode.com/methods/category/affinity-functions). |
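As a concrete instance, here is a NumPy sketch of the dot-product-affinity variant over a flattened signal (the function name is ours; the choice $\mathcal{C}(x) = N$ follows the dot-product formulation, and $W\_{g}$ plays the role of the 1×1 convolution):

```python
import numpy as np

def non_local(x, Wg):
    """Generic non-local operation on a flattened signal x of shape (T, d):
    y_i = (1/C(x)) * sum_j f(x_i, x_j) g(x_j), with g(x_j) = x_j @ Wg
    (a learned linear embedding) and f the dot-product affinity, for
    which C(x) is the number of positions."""
    g = x @ Wg            # unary embedding g at every position, (T, d')
    aff = x @ x.T         # pairwise affinities f(x_i, x_j), (T, T)
    C = x.shape[0]        # dot-product variant: normalize by N positions
    return (aff @ g) / C
```

Every output position mixes information from every input position, which is exactly the long-range behavior described above.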
Given the following machine learning model name: AutoGAN, provide a description of the model | [Neural architecture search](https://paperswithcode.com/method/neural-architecture-search) (NAS) has witnessed prevailing success in image classification and (very recently) segmentation tasks. In this paper, we present the first preliminary study on introducing the NAS algorithm to generative adversarial networks (GANs), dubbed AutoGAN. The marriage of NAS and GANs faces its unique challenges. We define the search space for the generator architectural variations and use an RNN controller to guide the search, with parameter sharing and dynamic-resetting to accelerate the process. Inception score is adopted as the reward, and a multi-level search strategy is introduced to perform NAS in a progressive way. |
Given the following machine learning model name: AlphaZero, provide a description of the model | **AlphaZero** is a reinforcement learning agent for playing board games such as Go, chess, and shogi. |
Given the following machine learning model name: wav2vec Unsupervised, provide a description of the model | **wav2vec-U** is an unsupervised method to train speech recognition models without any labeled data. It leverages self-supervised speech representations to segment unlabeled language and learn a mapping from these representations to phonemes via adversarial training.
Specifically, we learn self-supervised representations with wav2vec 2.0 on unlabeled speech audio, then identify clusters in the representations with k-means to segment the audio data. Next, we build segment representations by mean pooling the wav2vec 2.0 representations, performing [PCA](https://paperswithcode.com/method/pca) and a second mean pooling step between adjacent segments. This is input to the generator, which outputs a phoneme sequence that is fed to the discriminator, alongside phonemized unlabeled text, to perform adversarial training. |
Given the following machine learning model name: Slime Mould Algorithm, provide a description of the model | **Slime Mould Algorithm** (**SMA**) is a stochastic optimizer based on the oscillation mode of slime mould in nature. SMA uses a unique mathematical model with adaptive weights to simulate the positive and negative feedback of the propagation wave of slime mould, based on a bio-oscillator, forming the optimal path for connecting food, with excellent exploratory ability and exploitation propensity.
🔗 The source codes of SMA are publicly available at [https://aliasgharheidari.com/SMA.html](https://aliasgharheidari.com/SMA.html) |
Given the following machine learning model name: Laplacian Pyramid, provide a description of the model | A **Laplacian Pyramid** is a linear invertible image representation consisting of a set of band-pass
images spaced an octave apart, plus a low-frequency residual. Formally, let $d\left(.\right)$ be a downsampling operation that blurs and decimates a $j \times j$ image $I$ so that $d\left(I\right)$ is a new image of size $\frac{j}{2} \times \frac{j}{2}$. Also, let $u\left(.\right)$ be an upsampling operator which smooths and expands $I$ to be twice the size, so $u\left(I\right)$ is a new image of size $2j \times 2j$. We first build a Gaussian pyramid $G\left(I\right) = \left[I\_{0}, I\_{1}, \dots, I\_{K}\right]$, where
$I\_{0} = I$ and $I\_{k}$ is $k$ repeated application of $d\left(.\right)$ to $I$. $K$ is the number of levels in the pyramid selected so that the final level has a minimal spatial extent ($\leq 8 \times 8$ pixels).
The coefficients $h\_{k}$ at each level $k$ of the Laplacian pyramid $L\left(I\right)$ are constructed by taking the difference between adjacent levels in the Gaussian pyramid, upsampling the smaller one with $u\left(.\right)$ so that the sizes are compatible:
$$ h\_{k} = \mathcal{L}\_{k}\left(I\right) = G\_{k}\left(I\right) − u\left(G\_{k+1}\left(I\right)\right) = I\_{k} − u\left(I\_{k+1}\right) $$
Intuitively, each level captures the image structure present at a particular scale. The final level of the
Laplacian pyramid $h\_{K}$ is not a difference image, but a low-frequency residual equal to the final
Gaussian pyramid level, i.e. $h\_{K} = I\_{K}$. Reconstruction from a Laplacian pyramid coefficients
$\left[h\_{1}, \dots, h\_{K}\right]$ is performed using the backward recurrence:
$$ I\_{k} = u\left(I\_{k+1}\right) + h\_{k} $$
which is started with $I\_{K} = h\_{K}$ and the reconstructed image being $I = I\_{0}$. In other words, starting at the coarsest level, we repeatedly upsample and add the difference image $h$ at the next finer level until we return to the full-resolution image.
Source: [LAPGAN](https://paperswithcode.com/method/lapgan)
Image : [Design of FIR Filters for Fast Multiscale Directional Filter Banks](https://www.researchgate.net/figure/Relationship-between-Gaussian-and-Laplacian-Pyramids_fig2_275038450) |
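The build and reconstruct recurrences can be sketched with toy $d(\cdot)$ and $u(\cdot)$ operators: below, 2×2 average pooling and nearest-neighbour expansion stand in for the Gaussian blur-decimate and smooth-expand (reconstruction is exact regardless of that choice, since each level stores exactly the residual the recurrence needs):

```python
import numpy as np

def downsample(img):
    """2x2 average pooling as a stand-in for blur + decimate d(.)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    """Nearest-neighbour expansion as a stand-in for smooth + expand u(.)."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    """h_k = I_k - u(I_{k+1}); the final entry is the Gaussian residual I_K."""
    pyramid, cur = [], img
    for _ in range(levels):
        nxt = downsample(cur)
        pyramid.append(cur - upsample(nxt))  # band-pass coefficients h_k
        cur = nxt
    pyramid.append(cur)                      # low-frequency residual h_K = I_K
    return pyramid

def reconstruct(pyramid):
    """Backward recurrence I_k = u(I_{k+1}) + h_k, starting from I_K = h_K."""
    cur = pyramid[-1]
    for h in reversed(pyramid[:-1]):
        cur = upsample(cur) + h
    return cur
```

Round-tripping an image through `laplacian_pyramid` and `reconstruct` returns it exactly, which is the invertibility property stated above.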
Given the following machine learning model name: V-trace, provide a description of the model | **V-trace** is an off-policy actor-critic reinforcement learning algorithm that helps tackle the lag between when actions are generated by the actors and when the learner estimates the gradient. Consider a trajectory $\left(x\_{t}, a\_{t}, r\_{t}\right)^{t=s+n}\_{t=s}$ generated by the actor following some policy $\mu$. We can define the $n$-steps V-trace target for $V\left(x\_{s}\right)$, our value approximation at state $x\_{s}$ as:
$$ v\_{s} = V\left(x\_{s}\right) + \sum^{s+n-1}\_{t=s}\gamma^{t-s}\left(\prod^{t-1}\_{i=s}c\_{i}\right)\delta\_{t}V $$
Where $\delta\_{t}V = \rho\_{t}\left(r\_{t} + \gamma{V}\left(x\_{t+1}\right) - V\left(x\_{t}\right)\right)$ is a temporal difference for $V$, and $\rho\_{t} = \text{min}\left(\bar{\rho}, \frac{\pi\left(a\_{t}\mid{x\_{t}}\right)}{\mu\left(a\_{t}\mid{x\_{t}}\right)}\right)$ and $c\_{i} = \text{min}\left(\bar{c}, \frac{\pi\left(a\_{i}\mid{x\_{i}}\right)}{\mu\left(a\_{i}\mid{x\_{i}}\right)}\right)$ are truncated importance sampling weights. We assume that the truncation levels are such that $\bar{\rho} \geq \bar{c}$. |
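A NumPy sketch of the $n$-step target for a single start state (function name ours; `rho` and `c` are passed as the pre-truncation ratios $\pi/\mu$ per step):

```python
import numpy as np

def vtrace_target(values, rewards, rho, c, gamma, rho_bar=1.0, c_bar=1.0):
    """n-step V-trace target v_s for s = 0.

    values:  V(x_t) for t = 0..n   (length n+1)
    rewards: r_t     for t = 0..n-1 (length n)
    rho, c:  per-step importance ratios pi/mu, truncated at rho_bar, c_bar.
    """
    rho_t = np.minimum(rho_bar, rho)
    c_t = np.minimum(c_bar, c)
    v = values[0]
    for t in range(len(rewards)):
        delta = rho_t[t] * (rewards[t] + gamma * values[t + 1] - values[t])
        v += gamma ** t * np.prod(c_t[:t]) * delta   # prod over empty = 1
    return v
```

In the on-policy case ($\rho\_{t} = c\_{t} = 1$) the deltas telescope, so the target reduces to the standard $n$-step return $\sum\_{t} \gamma^{t} r\_{t} + \gamma^{n} V(x\_{n})$.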
Given the following machine learning model name: H3DNet, provide a description of the model | Code for paper: H3DNet: 3D Object Detection Using Hybrid Geometric Primitives (ECCV 2020) |
Given the following machine learning model name: Demon, provide a description of the model | **Decaying Momentum**, or **Demon**, is a stochastic optimizer motivated by decaying the total contribution of a gradient to all future updates. By decaying the momentum parameter, the total contribution of a gradient to all future updates is decayed. A particular gradient term $g\_{t}$ contributes a total of $\eta\sum\_{i}\beta^{i}$ of its "energy" to all future gradient updates, and this results in the geometric sum, $\sum^{\infty}\_{i=1}\beta^{i} = \beta\sum^{\infty}\_{i=0}\beta^{i} = \frac{\beta}{\left(1-\beta\right)}$. Decaying this sum results in the Demon algorithm. Letting $\beta\_{init}$ be the initial $\beta$; then at the current step $t$ with total $T$ steps, the decay routine is given by solving the below for $\beta\_{t}$:
$$ \frac{\beta\_{t}}{\left(1-\beta\_{t}\right)} = \left(1-t/T\right)\beta\_{init}/\left(1-\beta\_{init}\right)$$
Where $\left(1-t/T\right)$ refers to the proportion of iterations remaining. Note that Demon typically requires no hyperparameter tuning as it is usually decayed to $0$ or a small negative value at time
$T$. Improved performance is observed by delaying the decaying. Demon can be applied to any gradient descent algorithm with a momentum parameter. |
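Solving the decay rule for $\beta\_{t}$ in closed form gives $\beta\_{t} = \frac{c}{1+c}$ with $c = \left(1-t/T\right)\beta\_{init}/\left(1-\beta\_{init}\right)$; a tiny sketch (function name ours):

```python
def demon_momentum(t, T, beta_init=0.9):
    """Closed-form solution of beta_t/(1-beta_t) =
    (1 - t/T) * beta_init/(1 - beta_init): the momentum starts at
    beta_init and decays to 0 as t approaches T."""
    rhs = (1.0 - t / T) * beta_init / (1.0 - beta_init)
    return rhs / (1.0 + rhs)
```

This schedule can then be plugged into any momentum-based optimizer in place of a fixed $\beta$.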
Given the following machine learning model name: Shapley Additive Explanations, provide a description of the model | **SHAP**, or **SHapley Additive exPlanations**, is a game theoretic approach to explain the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions. Shapley values are approximated using Kernel SHAP, which uses a weighting kernel for the approximation, and DeepSHAP, which uses DeepLIFT to approximate them. |
Given the following machine learning model name: Spectral-Normalized Identity Priors, provide a description of the model | **Spectral-Normalized Identity Priors**, or **SNIP**, is a structured pruning approach that penalizes an entire [residual module](https://paperswithcode.com/method/residual-connection) in a [Transformer model](https://paperswithcode.com/method/transformer) toward an identity mapping. It is applicable to any structured module, including a single [attention head](https://paperswithcode.com/method/scaled), an [entire attention block](https://paperswithcode.com/method/multi-head-attention), or a [feed-forward subnetwork](https://paperswithcode.com/method/position-wise-feed-forward-layer). The method identifies and discards unimportant non-linear mappings in the [residual connections](https://paperswithcode.com/method/residual-connection) by applying a thresholding operator on the function norm. Furthermore, [spectral normalization](https://paperswithcode.com/method/spectral-normalization) is applied to stabilize the distribution of the post-activation values of the [Transformer](https://paperswithcode.com/method/transformer) layers, further improving the pruning effectiveness of the method. |
Given the following machine learning model name: Chained-Tracker, provide a description of the model | **Chained-Tracker**, or **CTracker**, is an online model for multiple-object tracking. It chains paired bounding boxes regression results estimated from overlapping nodes, of which each node covers two adjacent frames. The paired regression is made attentive by object-attention (brought by a detection module) and identity-attention (ensured by an ID verification module).
The joint attention module guides the paired boxes regression branch to focus on informative spatial regions via two other branches. One is the object classification branch, which predicts the confidence scores for the first box in the detected box pairs, and such scores are used to guide the regression branch to focus on the foreground regions. The other one is the ID verification branch, whose prediction facilitates the regression branch to focus on regions corresponding to the same target. Finally, the bounding box pairs are filtered according to the classification confidence. Then, the generated box pairs belonging to the adjacent frame pairs can be associated using simple methods like IoU (Intersection over Union) matching according to their boxes in the common frame. In this way, tracking is achieved by chaining all the adjacent frame pairs (i.e. chain nodes) sequentially. |
Given the following machine learning model name: Vulnerability-constrained Decoding, provide a description of the model | **Vulnerability-constrained Decoding** is a sequence decoding approach that aims to avoid generating vulnerabilities in generated code. |
Given the following machine learning model name: Structurally Regularized Deep Clustering, provide a description of the model | **Structurally Regularized Deep Clustering**, or **SRDC**, is a deep network based discriminative clustering method for domain adaptation that minimizes the KL divergence between predictive label distribution of the network and an introduced auxiliary one. Replacing the auxiliary distribution with that formed by ground-truth labels of source data implements the structural source regularization via a simple strategy of joint network training. |
Given the following machine learning model name: Performer, provide a description of the model | **Performer** is a [Transformer](https://paperswithcode.com/methods/category/transformers) architecture which can estimate regular ([softmax](https://paperswithcode.com/method/softmax)) full-rank-attention Transformers with provable accuracy, but using only linear (as opposed to quadratic) space and time complexity, without relying on any priors such as sparsity or low-rankness. Performers are linear architectures fully compatible with regular Transformers and with strong theoretical guarantees: unbiased or nearly-unbiased estimation of the attention matrix, uniform convergence and low estimation variance. To approximate softmax attention kernels, Performers use a Fast Attention Via positive Orthogonal Random features approach (FAVOR+), leveraging new methods for approximating softmax and Gaussian kernels. |
Given the following machine learning model name: ScheduledDropPath, provide a description of the model | **ScheduledDropPath** is a modified version of [DropPath](https://paperswithcode.com/method/droppath). In DropPath, each path in the cell is stochastically dropped with some fixed probability during training. In ScheduledDropPath, each path in the cell is dropped out with a probability that is linearly increased over the course of training. |
Given the following machine learning model name: Channel Squeeze and Spatial Excitation (sSE), provide a description of the model | Inspired by the widely known [spatial squeeze and channel excitation (SE)](https://paperswithcode.com/method/squeeze-and-excitation-block) block, the sSE block performs channel squeeze and spatial excitation to recalibrate the feature maps spatially and achieve more fine-grained image segmentation. |
Given the following machine learning model name: RPM-Net, provide a description of the model | **RPM-Net** is an end-to-end differentiable deep network for robust point matching that uses learned features. It preserves the robustness of RPM against noisy/outlier points while desensitizing initialization by using point correspondences from learned feature distances instead of spatial distances. The network uses a differentiable Sinkhorn layer and annealing to get soft assignments of point correspondences from hybrid features learned from both spatial coordinates and local geometry. To further improve registration performance, the authors introduce a secondary network to predict optimal annealing parameters. |
Given the following machine learning model name: Unitary RNN, provide a description of the model | A **Unitary RNN** is a recurrent neural network architecture that uses a unitary hidden-to-hidden matrix. Specifically, it concerns dynamics of the form:
$$ h\_{t} = f\left(Wh\_{t−1} + Vx\_{t}\right) $$
where $W$ is a unitary matrix $\left(W^{†}W = I\right)$. The product of unitary matrices is a unitary matrix, so $W$ can be parameterised as a product of simpler unitary matrices:
$$ h\_{t} = f\left(D\_{3}R\_{2}F^{−1}D\_{2}PR\_{1}FD\_{1}h\_{t−1} + Vx\_{t}\right) $$
where $D\_{3}$, $D\_{2}$, $D\_{1}$ are learned diagonal complex matrices, and $R\_{2}$, $R\_{1}$ are learned reflection matrices. Matrices $F$ and $F^{−1}$ are the discrete Fourier transformation and its inverse. $P$ is any constant random permutation. The activation function $f\left(h\right)$ applies a rectified linear unit with a learned bias to the modulus of each complex number. Only
the diagonal and reflection matrices, $D$ and $R$, are learned, so Unitary RNNs have fewer parameters than [LSTMs](https://paperswithcode.com/method/lstm) with comparable numbers of hidden units.
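This factorisation can be sketched in NumPy (the hidden size and random initialisation below are illustrative, not from the paper) to check that the composed matrix is indeed unitary:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # hidden size (illustrative)

def diag_unitary(n):
    # diagonal matrix with unit-modulus complex entries (learned in practice)
    theta = rng.uniform(-np.pi, np.pi, n)
    return np.diag(np.exp(1j * theta))

def reflection(n):
    # complex Householder reflection: R = I - 2 v v* / ||v||^2
    v = rng.normal(size=n) + 1j * rng.normal(size=n)
    return np.eye(n) - 2.0 * np.outer(v, v.conj()) / np.vdot(v, v).real

F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary DFT matrix
F_inv = F.conj().T                        # its inverse
P = np.eye(n)[rng.permutation(n)]         # fixed random permutation
D1, D2, D3 = diag_unitary(n), diag_unitary(n), diag_unitary(n)
R1, R2 = reflection(n), reflection(n)

# a product of unitary matrices is unitary
W = D3 @ R2 @ F_inv @ D2 @ P @ R1 @ F @ D1
```

Because $W$ is unitary it preserves the norm of the hidden state, which is what gives these networks their stable gradient dynamics.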
Source: [Associative LSTMs](https://arxiv.org/pdf/1602.03032.pdf) |
Given the following machine learning model name: Deep Convolutional GAN, provide a description of the model | **DCGAN**, or **Deep Convolutional GAN**, is a generative adversarial network architecture. It uses a couple of guidelines, in particular:
- Replacing any pooling layers with strided convolutions (discriminator) and fractional-strided convolutions (generator).
- Using batchnorm in both the generator and the discriminator.
- Removing fully connected hidden layers for deeper architectures.
- Using [ReLU](https://paperswithcode.com/method/relu) activation in generator for all layers except for the output, which uses tanh.
- Using LeakyReLU activation in the discriminator for all layers. |
Given the following machine learning model name: Phase Shuffle, provide a description of the model | **Phase Shuffle** is a technique for removing pitched noise artifacts that come from using transposed convolutions in audio generation models. Phase shuffle is an operation with hyperparameter $n$. It randomly perturbs the phase of each layer’s activations by −$n$ to $n$ samples before input to the next layer.
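A minimal NumPy sketch of the operation (the `(batch, time, channels)` layout and the reflection padding at the exposed boundary are assumptions, following the WaveGAN description):

```python
import numpy as np

def phase_shuffle(x, n, rng):
    """Randomly shift activations by r in [-n, n] samples along the
    time axis, filling the exposed boundary by reflection.
    x: array of shape (batch, time, channels)."""
    r = rng.integers(-n, n + 1)
    if r == 0:
        return x
    if r > 0:   # shift right: reflect-pad on the left, crop on the right
        pad = x[:, 1:r + 1][:, ::-1]
        return np.concatenate([pad, x[:, :-r]], axis=1)
    r = -r      # shift left: reflect-pad on the right, crop on the left
    pad = x[:, -r - 1:-1][:, ::-1]
    return np.concatenate([x[:, r:], pad], axis=1)
```

The output keeps the input shape, so the layer can be dropped between any two convolutional layers without changing the architecture.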
In the original application in [WaveGAN](https://paperswithcode.com/method/wavegan), the authors only apply phase shuffle to the discriminator, as the latent vector already provides the generator a mechanism to manipulate the phase
of a resultant waveform. Intuitively speaking, phase shuffle makes the discriminator’s job more challenging by requiring invariance to the phase of the input waveform. |
Given the following machine learning model name: Meena, provide a description of the model | **Meena** is a multi-turn open-domain chatbot trained end-to-end on data mined and filtered from public domain social media conversations. This 2.6B parameter neural network is simply trained to minimize perplexity of the next token. A seq2seq model is used with the Evolved [Transformer](https://paperswithcode.com/method/transformer) as the main architecture. The model is trained on multi-turn conversations where the input sequence is all turns of the context and the output sequence is the response. |
Given the following machine learning model name: LSGAN, provide a description of the model | **LSGAN**, or **Least Squares GAN**, is a type of generative adversarial network that adopts the least squares loss function for the discriminator. Minimizing the objective function of LSGAN yields minimizing the Pearson $\chi^{2}$ divergence. The objective function can be defined as:
$$ \min\_{D}V\_{LSGAN}\left(D\right) = \frac{1}{2}\mathbb{E}\_{\mathbf{x} \sim p\_{data}\left(\mathbf{x}\right)}\left[\left(D\left(\mathbf{x}\right) - b\right)^{2}\right] + \frac{1}{2}\mathbb{E}\_{\mathbf{z}\sim p\_{\mathbf{z}}\left(\mathbf{z}\right)}\left[\left(D\left(G\left(\mathbf{z}\right)\right) - a\right)^{2}\right] $$
$$ \min\_{G}V\_{LSGAN}\left(G\right) = \frac{1}{2}\mathbb{E}\_{\mathbf{z} \sim p\_{\mathbf{z}}\left(\mathbf{z}\right)}\left[\left(D\left(G\left(\mathbf{z}\right)\right) - c\right)^{2}\right] $$
where $a$ and $b$ are the labels for fake data and real data and $c$ denotes the value that $G$ wants $D$ to believe for fake data. |
Given the following machine learning model name: MaskFlownet, provide a description of the model | **MaskFlownet** is an asymmetric occlusion-aware feature matching module, which can learn a rough occlusion mask that filters useless (occluded) areas immediately after feature warping without any explicit supervision. The learned occlusion mask can be further fed into a subsequent network cascade with dual feature pyramids. |
Given the following machine learning model name: ERNIE, provide a description of the model | ERNIE is a transformer-based model consisting of two stacked modules: 1) a textual encoder and 2) a knowledgeable encoder, which is responsible for integrating extra token-oriented knowledge information into the textual information. This layer consists of stacked aggregators, designed for encoding both tokens and entities as well as fusing their heterogeneous features. To integrate this layer of enhancing representations via knowledge, a special pre-training task is adopted for ERNIE: it involves randomly masking token-entity alignments and training the model to predict all corresponding entities based on aligned tokens (aka denoising entity auto-encoder). |
Given the following machine learning model name: Precise RoI Pooling, provide a description of the model | **Precise RoI Pooling**, or **PrRoI Pooling**, is a region of interest feature extractor that avoids any quantization of coordinates and has a continuous gradient on bounding box coordinates. Given the feature map $\mathcal{F}$ before RoI/PrRoI Pooling (eg from Conv4 in [ResNet](https://paperswithcode.com/method/resnet)-50), let $w_{i,j}$ be the feature at one discrete location $(i,j)$ on the feature map. Using bilinear interpolation, the discrete feature map can be considered continuous at any continuous coordinates $(x,y)$:
$$
f(x,y) = \sum_{i,j}IC(x,y,i,j) \times w_{i,j},
$$
where $IC(x,y,i,j) = max(0,1-|x-i|)\times max(0,1-|y-j|)$ is the interpolation coefficient. Then denote a bin of a RoI as $bin=\{(x_1,y_1),(x_2,y_2)\}$, where $(x_1,y_1)$ and $(x_2,y_2)$ are the continuous coordinates of the top-left and bottom-right points, respectively. We perform pooling (e.g. [average pooling](https://paperswithcode.com/method/average-pooling)) given $bin$ and feature map $\mathcal{F}$ by computing a two-order integral:
$$
\text{PrPool}(bin, \mathcal{F}) = \frac{\int_{y_1}^{y_2}\int_{x_1}^{x_2} f(x,y) \, dx \, dy}{(x_2-x_1)\times(y_2-y_1)}
$$ |
Given the following machine learning model name: Lovasz-Softmax, provide a description of the model | The **Lovasz-Softmax loss** is a loss function for multiclass semantic segmentation that incorporates the [softmax](https://paperswithcode.com/method/softmax) operation in the Lovasz extension. The Lovasz extension is a means by which we can achieve direct optimization of the mean intersection-over-union loss in neural networks. |
Given the following machine learning model name: CornerNet-Squeeze Hourglass Module, provide a description of the model | **CornerNet-Squeeze Hourglass Module** is an image model block used in [CornerNet](https://paperswithcode.com/method/cornernet)-Lite that is based on an [hourglass module](https://paperswithcode.com/method/hourglass-module), but uses modified fire modules instead of residual blocks. Other than replacing the residual blocks, further modifications include: reducing the maximum feature map resolution of the hourglass modules by adding one more downsampling layer before the hourglass modules, removing one downsampling layer in each hourglass module, replacing the 3 × 3 filters with 1 × 1 filters in the prediction modules of CornerNet, and finally replacing the nearest neighbor upsampling in the hourglass network with transpose [convolution](https://paperswithcode.com/method/convolution) with a 4 × 4 kernel. |
Given the following machine learning model name: Tofu, provide a description of the model | **Tofu** is an intra-layer model parallel system that partitions very large DNN models across multiple GPU devices to reduce per-GPU memory footprint. Tofu is designed to partition a dataflow graph of fine-grained tensor operators used by platforms like MXNet and TensorFlow. To optimally partition different operators in a dataflow graph, Tofu uses a recursive search algorithm that minimizes the total communication cost. |
Given the following machine learning model name: Symbolic rule learning, provide a description of the model | Symbolic rule learning methods find regularities in data that can be expressed in the form of 'if-then' rules based on symbolic representations of the data. |
Given the following machine learning model name: Parameterized ReLU, provide a description of the model | A **Parametric Rectified Linear Unit**, or **PReLU**, is an activation function that generalizes the traditional rectified unit with a slope for negative values. Formally:
$$f\left(y\_{i}\right) = y\_{i} \text{ if } y\_{i} \ge 0$$
$$f\left(y\_{i}\right) = a\_{i}y\_{i} \text{ if } y\_{i} \leq 0$$
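Element-wise, the two cases above collapse to a single `where` (a minimal NumPy sketch; in practice $a\_{i}$ is a learned per-channel parameter):

```python
import numpy as np

def prelu(y, a):
    # a is the learned slope for negative inputs; y passes through unchanged otherwise
    return np.where(y >= 0, y, a * y)
```

With $a = 0.25$ this recovers the initialization used in the original paper; $a = 0$ recovers the plain ReLU.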
The intuition is that different layers may require different types of nonlinearity. Indeed, the authors find in experiments with convolutional neural networks that PReLUs for the initial layer have more positive slopes, i.e. closer to linear. Since the filters of the first layers are Gabor-like filters such as edge or texture detectors, this shows a circumstance where positive and negative responses of filters are respected. In contrast, the authors find deeper layers have smaller coefficients, suggesting the model becomes more discriminative at later layers (while retaining more information at earlier layers). |
Given the following machine learning model name: Mirror-BERT, provide a description of the model | Mirror-BERT converts pretrained language models into effective universal text encoders without any supervision, in 20-30 seconds. It is an extremely simple, fast, and effective contrastive learning technique. It relies on fully identical *or* slightly modified string pairs as positive (i.e., synonymous) fine-tuning examples, and aims to maximise their similarity during identity fine-tuning. |
Given the following machine learning model name: NetAdapt, provide a description of the model | **NetAdapt** is a network shrinking algorithm to adapt a pretrained network to a mobile platform given a real resource budget. NetAdapt can incorporate direct metrics, such as latency and energy, into the optimization to maximize the adaptation performance based on the characteristics of the platform. By using empirical measurements, NetAdapt can be applied to any platform as long as we can measure the desired metrics, without any knowledge of the underlying implementation of the platform.
While many existing algorithms simplify networks based on the number of MACs or weights, optimizing those indirect metrics may not necessarily reduce the direct metrics, such as latency and energy consumption. To solve this problem, NetAdapt incorporates direct metrics into its adaptation algorithm. These direct metrics are evaluated using *empirical measurements*, so that detailed knowledge of the platform and toolchain is not required. NetAdapt automatically and progressively simplifies a pre-trained network until the resource budget is met while maximizing the accuracy. |
Given the following machine learning model name: Cycle Consistency Loss, provide a description of the model | **Cycle Consistency Loss** is a type of loss used for generative adversarial networks that performs unpaired image-to-image translation. It was introduced with the [CycleGAN](https://paperswithcode.com/method/cyclegan) architecture. For two domains $X$ and $Y$, we want to learn a mapping $G : X \rightarrow Y$ and $F: Y \rightarrow X$. We want to enforce the intuition that these mappings should be reverses of each other and that both mappings should be bijections. Cycle Consistency Loss encourages $F\left(G\left(x\right)\right) \approx x$ and $G\left(F\left(y\right)\right) \approx y$. It reduces the space of possible mapping functions by enforcing forward and backwards consistency:
$$ \mathcal{L}\_{cyc}\left(G, F\right) = \mathbb{E}\_{x \sim p\_{data}\left(x\right)}\left[||F\left(G\left(x\right)\right) - x||\_{1}\right] + \mathbb{E}\_{y \sim p\_{data}\left(y\right)}\left[||G\left(F\left(y\right)\right) - y||\_{1}\right] $$ |
Given the following machine learning model name: MATE, provide a description of the model | **MATE** is a [Transformer](https://paperswithcode.com/method/transformer) architecture designed to model the structure of web tables. It uses sparse attention in a way that allows heads to efficiently attend to either rows or columns in a table. Each attention head reorders the tokens by either column or row index and then applies a windowed attention mechanism. Unlike traditional self-attention, MATE scales linearly in the sequence length. |
Given the following machine learning model name: Automated Graph Learning, provide a description of the model | Automated graph learning is a method that aims at discovering the best hyper-parameter and neural architecture configuration for different graph tasks/data without manual design. |
Given the following machine learning model name: InceptionTime, provide a description of the model | |
Given the following machine learning model name: Extreme Value Machine, provide a description of the model | |
Given the following machine learning model name: Difference of Gaussian Random Forest, provide a description of the model | |
Given the following machine learning model name: Motion-Encoded Particle Swarm Optimization, provide a description of the model | |
Given the following machine learning model name: SM3, provide a description of the model | # Memory-Efficient Adaptive Optimization
Source: https://arxiv.org/abs/1901.11150
Adaptive gradient-based optimizers such as [AdaGrad](https://paperswithcode.com/method/adagrad) and [Adam](https://paperswithcode.com/method/adam) are among the
de facto methods of choice in modern machine learning. These methods tune the learning rate for each parameter during the optimization process using cumulative second-order statistics. They provide superior convergence properties and are very attractive in large-scale applications due to their moderate time and space requirements, which are linear in the number of parameters.
However, the recent advances in natural language processing such as [BERT](https://paperswithcode.com/method/bert) and GPT2 show that models with 10<sup>8</sup> to 10<sup>10</sup> parameters, trained with adaptive optimization methods, achieve state-of-the-art results. In such cases, the memory overhead of the optimizer can restrict the size of the model that can be used as well as the batch size, both of which can have a dramatic effect on the quality of the final model.
Here we construct a new adaptive optimization method that retains most of the benefits of standard per-parameter adaptivity while significantly reducing memory overhead.
We observe that in standard neural networks, certain entries of the stochastic gradients have (on average) similar values, and exhibit what we refer to as an activation pattern. For example, in gradients of embedding layers of deep networks, an entire row (or column) is either zero or non-zero. Similarly, in intermediate layers we often observe that gradients associated with the same unit are of similar order of magnitude. In these cases, a similar phenomenon is observed in the second-order statistics maintained by adaptive methods.

With this key observation, to reduce the memory overhead of the optimizer our method takes in a cover set of the parameters. Cover sets are typically selected in practice such that parameters in each of the sets have second-order statistics of similar magnitude. Our method is general enough that it can easily be extended to arbitrary cover sets. For parameters of deep networks that are organized as a collection of tensors, we form a cover consisting of slices of codimension one for each tensor. Thus, for an m × n parameter matrix, the cover consists of the rows and columns of the matrix. The memory requirement therefore drops from m × n to merely m + n. For a parameter tensor of rank p, with dimensions n<sub>1</sub> ... n<sub>p</sub>, the reduction in memory consumption is even more pronounced, dropping from the product of all the dimensions to their sum. This virtually eliminates the memory overhead associated with maintaining the adaptive learning rates!
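For the matrix case, one step of this idea can be sketched in NumPy as follows (a simplified SM3-I-style update, not the released implementation; the fixed learning rate and epsilon are illustrative):

```python
import numpy as np

def sm3_step(w, g, row_acc, col_acc, lr=0.1, eps=1e-8):
    """One SM3-style update for an m x n parameter matrix.
    Instead of an m x n accumulator, only a row accumulator (m,)
    and a column accumulator (n,) are kept."""
    # per-entry second-moment estimate recovered from the cover sets
    nu = np.minimum(row_acc[:, None], col_acc[None, :]) + g * g
    # refresh the per-row and per-column maxima
    row_acc[:] = nu.max(axis=1)
    col_acc[:] = nu.max(axis=0)
    w -= lr * g / (np.sqrt(nu) + eps)
    return w, row_acc, col_acc
```

Note that `nu` is materialized only transiently during the step; the persistent optimizer state is just the two vectors, which is where the memory saving comes from.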
Another practical aspect worthy of note is that our method does not require an external hand engineered learning rate decay schedule but instead relies on the per parameter adaptivity that is natural to its update rule which makes it easier to tune. We provide details in the supplementary section of the paper.
## Advice on using SM3 on your model
### Learning rate warm-up:
```python
learning_rate = lr_constant * tf.minimum(1.0, (global_step / warm_up_step) ** p)
```
* p = 1, linear ramp up of learning rate.
* p = 2, quadratic ramp up of learning rate [preferred].
We typically set `warm_up_step` as 5% of overall steps. Initially, the norm of the preconditioned gradient is much larger than norm of the weights. Learning rate warmup allows us to heuristically fix this scale mismatch.
### Learning rate decay:
We make use of accumulated gradient squares for the decay. This means that each coordinate gets its own natural decay based on the scales of the gradients over time. Hence, users need not put in an external learning rate decay schedule. Moreover, we found in our experiments with translation and language models that this approach is superior to hand-tuned learning rate decay schedules, which are typically combined with exponential moving averages of the gradient squares.
Having said that, if users want to use exponential moving averages instead of the standard accumulated gradient squares, it's easy to modify the optimizer implementation to switch:
For rank > 1:
| from | to |
|-------------------------------------|-------------------------------------|
| current_accumulator += grad * grad | current_accumulator = beta * current_accumulator + (1-beta) * grad * grad |
For rank <= 1:
| from | to |
|-------------------------------------|-------------------------------------|
| current_accumulator = tf.assign_add(accumulator, grad * grad) | current_accumulator = tf.assign(accumulator, beta * accumulator + (1-beta) * (grad * grad)) |
### [Polyak averaging](https://paperswithcode.com/method/polyak-averaging) of parameters:
It's useful to run [polyak averaging](https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage) of the parameters. These parameters are then used in inference / serving. Using the averaged parameters instead of the last iterate typically improves the overall performance of the model.
An **alternative** to polyak averaging which does not make use of extra memory is to decay the learning rate from the constant to zero for the last 10% of the steps of the training run; we term this the **cool-down** phase. As training makes smaller and smaller steps, the final iterate can be thought of as an average iterate. |
Given the following machine learning model name: ShuffleNet Block, provide a description of the model | A **ShuffleNet Block** is an image model block that utilises a [channel shuffle](https://paperswithcode.com/method/channel-shuffle) operation, along with depthwise convolutions, for an efficient architectural design. It was proposed as part of the [ShuffleNet](https://paperswithcode.com/method/shufflenet) architecture. The starting point is the [Residual Block](https://paperswithcode.com/method/residual-block) unit from [ResNets](https://paperswithcode.com/method/resnet), which is then modified with a pointwise group [convolution](https://paperswithcode.com/method/convolution) and a channel shuffle operation. |
Given the following machine learning model name: Hermite Polynomial Activation, provide a description of the model | A **Hermite Activation** is a type of activation function which uses a smooth finite Hermite polynomial basis as a substitute for non-smooth [ReLUs](https://paperswithcode.com/method/relu).
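A small NumPy sketch of the idea (the probabilists' Hermite basis and the three-term recurrence are standard; the coefficients, fixed here, would be learned in practice):

```python
import numpy as np

def hermite_basis(x, k):
    """First k probabilists' Hermite polynomials, via the recurrence
    He_{n+1}(x) = x * He_n(x) - n * He_{n-1}(x)."""
    hs = [np.ones_like(x), x]
    for n in range(1, k - 1):
        hs.append(x * hs[-1] - n * hs[-2])
    return hs[:k]

def hermite_activation(x, coeffs):
    # smooth activation as a (learned) combination of Hermite bases
    return sum(c * h for c, h in zip(coeffs, hermite_basis(x, len(coeffs))))
```

With coefficients fit to approximate a ReLU, the resulting activation is smooth everywhere, which is the property the method exploits.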
Relevant Paper: [Lokhande et al](https://arxiv.org/pdf/1909.05479.pdf) |
Given the following machine learning model name: DeepLabv3, provide a description of the model | **DeepLabv3** is a semantic segmentation architecture that improves upon [DeepLabv2](https://paperswithcode.com/method/deeplabv2) with several modifications. To handle the problem of segmenting objects at multiple scales, modules are designed which employ atrous [convolution](https://paperswithcode.com/method/convolution) in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, the Atrous [Spatial Pyramid Pooling](https://paperswithcode.com/method/spatial-pyramid-pooling) module from DeepLabv2 is augmented with image-level features encoding global context, further boosting performance.
The changes to the ASPP module are that the authors apply [global average pooling](https://paperswithcode.com/method/global-average-pooling) on the last feature map of the model, feed the resulting image-level features to a 1 × 1 convolution with 256 filters (and [batch normalization](https://paperswithcode.com/method/batch-normalization)), and then bilinearly upsample the features to the desired spatial dimension. In the
end, the improved [ASPP](https://paperswithcode.com/method/aspp) consists of (a) one 1×1 convolution and three 3 × 3 convolutions with rates = (6, 12, 18) when output stride = 16 (all with 256 filters and batch normalization), and (b) the image-level features.
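For intuition, the span covered by an atrous convolution can be computed with a small helper (illustrative only, not part of the method):

```python
def effective_kernel(k, rate):
    # an atrous convolution with kernel size k and dilation `rate` covers the
    # same span as a dense kernel of size k + (k - 1) * (rate - 1)
    return k + (k - 1) * (rate - 1)
```

So the three 3 × 3 branches with rates 6, 12 and 18 cover spans of 13, 25 and 37 pixels respectively, which is how the parallel branches capture context at multiple scales.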
Another interesting difference is that DenseCRF post-processing from DeepLabv2 is no longer needed. |
Given the following machine learning model name: Computation Redistribution, provide a description of the model | **Computation Redistribution** is a [neural architecture search](https://paperswithcode.com/task/architecture-search) method for [face detection](https://paperswithcode.com/task/face-detection), which reallocates the computation between the backbone, neck and head of the model based on a predefined search methodology. Directly utilising the backbone of a classification network for scale-specific face detection can be sub-optimal. Therefore, [network structure search](https://paperswithcode.com/method/regnety) is used to reallocate the computation on the backbone, neck and head, under a wide range of flop regimes. The search method is applied to [RetinaNet](https://paperswithcode.com/method/retinanet), with [ResNet](https://paperswithcode.com/method/resnet) as backbone, [Path Aggregation Feature Pyramid Network](https://paperswithcode.com/method/pafpn) (PAFPN) as the neck and stacked 3 × 3 [convolutional layers](https://paperswithcode.com/method/convolution) for the head. While the general structure is simple, the total number of possible networks in the search space is unwieldy. In the first step, the authors explore the reallocation of the computation within the backbone parts (i.e. stem, C2, C3, C4, and C5), while fixing the neck and head components. Based on the optimised computation distribution on the backbone they find, they further explore the reallocation of the computation across the backbone, neck and head. |
Given the following machine learning model name: Cross-Attention Module, provide a description of the model | The **Cross-Attention** module is an attention module used in [CrossViT](https://paperswithcode.com/method/crossvit) for fusion of multi-scale features. The CLS token of the large branch serves as a query token to interact with the patch tokens from the small branch through attention. $f\left(·\right)$ and $g\left(·\right)$ are projections to align dimensions. The small branch follows the same procedure but swaps CLS and patch tokens from the other branch. |
Given the following machine learning model name: FT-Transformer, provide a description of the model | FT-Transformer (Feature Tokenizer + Transformer) is a simple adaptation of the [Transformer](/method/transformer) architecture for the tabular domain. The model (Feature Tokenizer component) transforms all features (categorical and numerical) to tokens and runs a stack of Transformer layers over the tokens, so every Transformer layer operates on the feature level of one object. (This model is similar to [AutoInt](/method/autoint)). In the Transformer component, the `[CLS]` token is appended to $T$. Then $L$ Transformer layers are applied. PreNorm is used for easier optimization and good performance. The final representation of the `[CLS]` token is used for prediction. |
Given the following machine learning model name: Syntax Heat Parse Tree, provide a description of the model | Syntax Heat Parse Trees are heatmaps over parse trees, similar to ["heat trees"](https://doi.org/10.1371/journal.pcbi.1005404) in biology. |
Given the following machine learning model name: Colorization Transformer, provide a description of the model | **Colorization Transformer** is a probabilistic [colorization](https://paperswithcode.com/method/colorization) model composed only of [axial self-attention blocks](https://paperswithcode.com/method/axial). The main advantages of these blocks are the ability to capture a global receptive field with only two layers and $\mathcal{O}(D\sqrt{D})$ instead of $\mathcal{O}(D^{2})$ complexity. In order to enable colorization of high-resolution grayscale images, the task is decomposed into three simpler sequential subtasks: coarse low resolution autoregressive colorization, parallel color and spatial super-resolution.
For coarse low resolution colorization, a conditional variant of [Axial Transformer](https://paperswithcode.com/method/axial) is applied. The authors leverage the semi-parallel sampling mechanism of Axial Transformers. Finally, fast parallel deterministic upsampling models are employed to super-resolve the coarsely colorized image into the final high resolution output. |
Given the following machine learning model name: RepPoints, provide a description of the model | **RepPoints** is a representation for object detection that consists of a set of points which indicate the spatial extent of an object and semantically significant local areas. This representation is learned via weak localization supervision from rectangular ground-truth boxes and implicit recognition feedback. Based on the richer RepPoints representation, the authors develop an anchor-free object detector that yields improved performance compared to using bounding boxes. |
Given the following machine learning model name: BasicVSR, provide a description of the model | **BasicVSR** is a video super-resolution pipeline including optical flow and [residual blocks](https://paperswithcode.com/method/residual-connection). It adopts a typical bidirectional recurrent network. The upsampling module $U$ contains multiple [pixel-shuffle](https://paperswithcode.com/method/pixelshuffle) and convolutions. In the Figure, red and blue colors represent the backward and forward propagations, respectively. The propagation branches contain only generic components. $S, W$, and $R$ refer to the flow estimation module, spatial warping module, and residual blocks, respectively. |
Given the following machine learning model name: Mixture model network, provide a description of the model | Mixture model network (MoNet) is a general framework for designing convolutional deep architectures on non-Euclidean domains such as graphs and manifolds.
Image and description from: [Geometric deep learning on graphs and manifolds using mixture model CNNs](https://arxiv.org/pdf/1611.08402.pdf) |
Given the following machine learning model name: ZCA Whitening, provide a description of the model | **ZCA Whitening** is an image preprocessing method that leads to a transformation of data such that the covariance matrix $\Sigma$ is the identity matrix, leading to decorrelated features.
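A minimal NumPy sketch of the transformation (the epsilon regularizer and the SVD-based construction are common conventions, not prescribed by any single source):

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """X: (num_samples, num_features). Returns whitened data whose
    covariance is (approximately) the identity."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / Xc.shape[0]
    U, S, _ = np.linalg.svd(cov)
    # W = U diag(1/sqrt(S + eps)) U^T is the ZCA (symmetric) whitening matrix
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T
    return Xc @ W
```

Unlike PCA whitening, the symmetric form $U \Lambda^{-1/2} U^{T}$ keeps the whitened data as close as possible to the original, which is why ZCA-whitened images still look like images.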
Image Source: [Alex Krizhevsky](http://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf) |
Given the following machine learning model name: Template based Graph Neural Network with Optimal Transport Distances, provide a description of the model | |
Given the following machine learning model name: LFPNet with test time augmentation, provide a description of the model | |
Given the following machine learning model name: Child-Tuning, provide a description of the model | **Child-Tuning** is a fine-tuning technique that updates a subset of parameters (called child network) of large pretrained models via strategically masking out the gradients of the non-child network during the backward process. It decreases the hypothesis space of the model via a task-specific mask applied to the full gradients, helping to effectively adapt the large-scale pretrained model to various tasks and meanwhile aiming to maintain its original generalization ability. |
Given the following machine learning model name: Shake-Shake Regularization, provide a description of the model | **Shake-Shake Regularization** aims to improve the generalization ability of multi-branch networks by replacing the standard summation of parallel branches with a stochastic affine combination. A typical pre-activation [ResNet](https://paperswithcode.com/method/resnet) with 2 residual branches would follow this equation:
$$x\_{i+1} = x\_{i} + \mathcal{F}\left(x\_{i}, \mathcal{W}\_{i}^{\left(1\right)}\right) + \mathcal{F}\left(x\_{i}, \mathcal{W}\_{i}^{\left(2\right)}\right) $$
Shake-shake regularization introduces a random variable $\alpha\_{i}$ following a uniform distribution between 0 and 1 during training:
$$x\_{i+1} = x\_{i} + \alpha\_{i}\mathcal{F}\left(x\_{i}, \mathcal{W}\_{i}^{\left(1\right)}\right) + \left(1-\alpha\_{i}\right)\mathcal{F}\left(x\_{i}, \mathcal{W}\_{i}^{\left(2\right)}\right) $$
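As an illustration, here is a minimal numpy sketch of this stochastic combination (the two branches are toy stand-ins for the residual functions $\mathcal{F}$, not real convolutional blocks):

```python
import numpy as np

def shake_shake_forward(x, branch1, branch2, training=True, rng=None):
    """Combine two residual branches with a random convex weight alpha.

    At training time alpha ~ U(0, 1); at test time alpha is fixed to
    its expected value 0.5.
    """
    rng = rng or np.random.default_rng(0)
    alpha = rng.uniform(0.0, 1.0) if training else 0.5
    return x + alpha * branch1(x) + (1.0 - alpha) * branch2(x)

# Toy branches standing in for F(x, W^(1)) and F(x, W^(2)):
branch1 = lambda x: 0.1 * x
branch2 = lambda x: -0.1 * x

x = np.ones(4)
y_test = shake_shake_forward(x, branch1, branch2, training=False)
```

With `training=False` the two toy branch contributions cancel and `y_test` equals `x`, matching the expected-value argument used at test time.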
Following the same logic as for [dropout](https://paperswithcode.com/method/dropout), all $\alpha\_{i}$ are set to the expected value of $0.5$ at test time. |
Given the following machine learning model name: Inception-ResNet-v2-A, provide a description of the model | **Inception-ResNet-v2-A** is an image model block for a 35 x 35 grid used in the [Inception-ResNet-v2](https://paperswithcode.com/method/inception-resnet-v2) architecture. |
Given the following machine learning model name: Spectral Clustering, provide a description of the model | Spectral clustering has attracted increasing attention due to its promising ability to deal with nonlinearly separable datasets. In spectral clustering, the spectrum of the graph Laplacian is used to reveal the cluster structure. The spectral clustering algorithm mainly consists of two steps: 1) construct the low-dimensional embedded representation of the data based on the eigenvectors of the graph Laplacian; 2) apply k-means on the constructed low-dimensional data to obtain the clustering result. |
Given the following machine learning model name: Softmax, provide a description of the model | The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:
$$ P(y=j \mid{x}) = \frac{e^{x^{T}w_{j}}}{\sum^{K}_{k=1}e^{x^{T}w_{k}}} $$ |
Given the following machine learning model name: Pruning, provide a description of the model | |
Given the following machine learning model name: Feature Pyramid Grid, provide a description of the model | **Feature Pyramid Grids**, or **FPG**, is a deep multi-pathway feature pyramid, that represents the feature scale-space as a regular grid of parallel bottom-up pathways which are fused by multi-directional lateral connections. It connects the backbone features, $C$, of a ConvNet with a regular structure of $p$ parallel top-down pyramid pathways which are fused by multi-directional lateral connections, AcrossSame, AcrossUp, AcrossDown, and AcrossSkip. AcrossSkip are direct connections while all other types use [convolutional](https://paperswithcode.com/method/convolution) and [ReLU](https://paperswithcode.com/method/relu) layers.
On a high-level, FPG is a deep generalization of [FPN](https://paperswithcode.com/method/fpn) from one to $p$ pathways under a dense lateral connectivity structure. |
Given the following machine learning model name: Spatial Pyramid Pooling, provide a description of the model | **Spatial Pyramid Pooling (SPP)** is a pooling layer that removes the fixed-size constraint of the network, i.e. a CNN does not require a fixed-size input image. Specifically, we add an SPP layer on top of the last convolutional layer. The SPP layer pools the features and generates fixed-length outputs, which are then fed into the fully-connected layers (or other classifiers). In other words, we perform some information aggregation at a deeper stage of the network hierarchy (between convolutional layers and fully-connected layers) to avoid the need for cropping or warping at the beginning. |
Given the following machine learning model name: ReLIC, provide a description of the model | **ReLIC**, or **Representation Learning via Invariant Causal Mechanisms**, is a self-supervised learning objective that enforces invariant prediction of proxy targets across augmentations through an invariance regularizer which yields improved generalization guarantees.
We can write the objective as:
$$
\underset{X}{\mathbb{E}} \underset{a\_{lk}, a\_{qt} \sim \mathcal{A}}{\mathbb{E}} \sum\_{b \in \left\{a\_{lk}, a\_{qt}\right\}} \mathcal{L}\_{b}\left(Y^{R}, f(X)\right) \text{ s.t. } KL\left(p^{do\left(a\_{lk}\right)}\left(Y^{R} \mid f(X)\right), p^{do\left(a\_{qt}\right)}\left(Y^{R} \mid f(X)\right)\right) \leq \rho
$$
where $\mathcal{L}$ is the proxy task loss and $KL$ is the Kullback-Leibler (KL) divergence. Note that any distance measure on distributions can be used in place of the KL divergence.
Concretely, as proxy task we associate to every datapoint $x\_{i}$ the label $y\_{i}^{R}=i$. This corresponds to the instance discrimination task, commonly used in contrastive learning. We take pairs of points $\left(x\_{i}, x\_{j}\right)$ to compute similarity scores and use pairs of augmentations $a\_{lk}=\left(a\_{l}, a\_{k}\right) \in \mathcal{A} \times \mathcal{A}$ to perform a style intervention. Given a batch of samples $\left\{x\_{i}\right\}\_{i=1}^{N} \sim \mathcal{D}$, we use
$$
p^{do\left(a\_{lk}\right)}\left(Y^{R}=j \mid f\left(x\_{i}\right)\right) \propto \exp \left(\phi\left(f\left(x\_{i}^{a\_{l}}\right), h\left(x\_{j}^{a\_{k}}\right)\right) / \tau\right)
$$
with $x^{a}$ data augmented with $a$ and $\tau$ a softmax temperature parameter. We encode $f$ using a neural network and choose $h$ to be related to $f$, e.g. $h=f$ or as a network with an exponential moving average of the weights of $f$ (e.g. target networks). To compare representations we use the function $\phi\left(f\left(x\_{i}\right), h\left(x\_{j}\right)\right)=\left\langle g\left(f\left(x\_{i}\right)\right), g\left(h\left(x\_{j}\right)\right)\right\rangle$ where $g$ is a fully-connected neural network often called the critic.
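To make the scoring concrete, here is a small numpy sketch of the critic $\phi$ and the resulting softmax distribution over candidates (the normalizing $g$ is a toy stand-in for the critic network, and all embeddings are hypothetical):

```python
import numpy as np

def critic_score(fi, hj, g):
    """phi(f(x_i), h(x_j)) = <g(f(x_i)), g(h(x_j))>."""
    return np.dot(g(fi), g(hj))

def relic_probs(f_anchor, h_candidates, g, tau=0.1):
    """Softmax over candidate embeddings: p^{do(a)}(Y^R = j | f(x_i))."""
    logits = np.array([critic_score(f_anchor, h, g) / tau for h in h_candidates])
    logits -= logits.max()           # numerical stability
    exp = np.exp(logits)
    return exp / exp.sum()

# Toy critic head g; a real one would be a small fully-connected network.
g = lambda v: v / (np.linalg.norm(v) + 1e-8)

f_anchor = np.array([1.0, 0.0])
candidates = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
p = relic_probs(f_anchor, candidates, g)
```

The candidate matching the anchor receives almost all of the probability mass, which is what the contrastive part of the objective encourages.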
Combining these pieces, we learn representations by minimizing the following objective over the full set of data $x\_{i} \in \mathcal{D}$ and augmentations $a\_{lk} \in \mathcal{A} \times \mathcal{A}$
$$
-\sum\_{i=1}^{N} \sum\_{a\_{lk}} \log \frac{\exp \left(\phi\left(f\left(x\_{i}^{a\_{l}}\right), h\left(x\_{i}^{a\_{k}}\right)\right) / \tau\right)}{\sum\_{m=1}^{M} \exp \left(\phi\left(f\left(x\_{i}^{a\_{l}}\right), h\left(x\_{m}^{a\_{k}}\right)\right) / \tau\right)}+\alpha \sum\_{a\_{lk}, a\_{qt}} KL\left(p^{do\left(a\_{lk}\right)}, p^{do\left(a\_{qt}\right)}\right)
$$
with $M$ the number of points we use to construct the contrast set and $\alpha$ the weighting of the invariance penalty. The shorthand $p^{do(a)}$ is used for $p^{do(a)}\left(Y^{R}=j \mid f\left(x\_{i}\right)\right)$. The Figure shows a schematic of the ReLIC objective. |
Given the following machine learning model name: Masked autoencoder, provide a description of the model | |
Given the following machine learning model name: Bottleneck Residual Block, provide a description of the model | A **Bottleneck Residual Block** is a variant of the [residual block](https://paperswithcode.com/method/residual-block) that utilises 1x1 convolutions to create a bottleneck. The use of a bottleneck reduces the number of parameters and matrix multiplications. The idea is to make residual blocks as thin as possible to increase depth and have less parameters. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture, and are used as part of deeper ResNets such as ResNet-50 and ResNet-101. |
Given the following machine learning model name: Random Mix-up, provide a description of the model | R-Mix (Random Mix-up) is a member of the Mix-up family of data augmentation methods. It combines random Mix-up with saliency-guided Mix-up, producing a procedure that is fast and performant while preserving good characteristics of saliency-guided Mix-up, such as low Expected Calibration Error and high weakly-supervised object localization accuracy. |
Given the following machine learning model name: Dynamic SmoothL1 Loss, provide a description of the model | **Dynamic SmoothL1 Loss (DSL)** is a loss function in object detection where we change the shape of loss function to gradually focus on high quality samples:
$$\text{DSL}\left(x, \beta\_{now}\right) = 0.5|{x}|^{2}/\beta\_{now}, \text{ if } |x| < \beta\_{now}\text{,} $$
$$\text{DSL}\left(x, \beta\_{now}\right) = |{x}| - 0.5\beta\_{now}\text{, otherwise} $$
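A direct numpy transcription of this piecewise loss (the value of $\beta\_{now}$ and the sample errors are illustrative only):

```python
import numpy as np

def dynamic_smooth_l1(x, beta_now):
    """Piecewise DSL: quadratic for |x| < beta_now, linear otherwise."""
    ax = np.abs(x)
    return np.where(ax < beta_now, 0.5 * ax ** 2 / beta_now, ax - 0.5 * beta_now)

errors = np.array([0.05, 0.5, 2.0])      # hypothetical regression errors
loss = dynamic_smooth_l1(errors, beta_now=1.0)
```

Shrinking `beta_now` as training progresses narrows the quadratic region, which increases the gradient contribution of small (high-quality) errors.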
DSL will change the value of $\beta\_{now}$ according to the statistics of regression errors which can reflect the localization accuracy. It was introduced as part of the [Dynamic R-CNN](https://paperswithcode.com/method/dynamic-r-cnn) model. |
Given the following machine learning model name: style-based recalibration module, provide a description of the model | **SRM**, or the **Style-based Recalibration Module**, combines style transfer with an attention mechanism. Its main contribution is style pooling, which utilizes both the mean and standard deviation of the input features to improve its capability to capture global information. It also adopts a lightweight channel-wise fully-connected (CFC) layer, in place of the original fully-connected layer, to reduce the computational requirements.
Given an input feature map $X \in \mathbb{R}^{C \times H \times W}$, SRM first collects global information by using style pooling ($\text{SP}(\cdot)$) which combines global average pooling and global standard deviation pooling.
Then a channel-wise fully connected ($\text{CFC}(\cdot)$) layer (i.e. fully connected per channel), batch normalization $\text{BN}$ and sigmoid function $\sigma$ are used to provide the attention vector. Finally, as in an SE block, the input features are multiplied by the attention vector. Overall, an SRM can be written as:
\begin{align}
s = F_\text{srm}(X, \theta) & = \sigma (\text{BN}(\text{CFC}(\text{SP}(X))))
\end{align}
\begin{align}
Y & = s X
\end{align}
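A minimal single-example numpy sketch of this pipeline (batch normalization is reduced to an affine transform for one sample, and the CFC weights are toy values):

```python
import numpy as np

def srm_attention(X, w, gamma=1.0, beta=0.0):
    """Style pooling -> channel-wise FC -> BN (affine stand-in) -> sigmoid.

    X: (C, H, W) feature map; w: (C, 2) per-channel CFC weights.
    """
    mu = X.mean(axis=(1, 2))             # global average pooling
    sd = X.std(axis=(1, 2))              # global standard deviation pooling
    style = np.stack([mu, sd], axis=1)   # SP(X), shape (C, 2)
    z = (style * w).sum(axis=1)          # CFC: one dot product per channel
    z = gamma * z + beta                 # affine stand-in for BN
    s = 1.0 / (1.0 + np.exp(-z))         # sigmoid attention vector
    return X * s[:, None, None]          # recalibrate channels

X = np.ones((3, 4, 4))
w = np.zeros((3, 2))
Y = srm_attention(X, w)                  # zero weights give s = 0.5 everywhere
```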
The SRM block improves both squeeze and excitation modules, yet can be added after each residual unit like an SE block. |
Given the following machine learning model name: Patch AutoAugment, provide a description of the model | **Patch AutoAugment** is a patch-level automatic data augmentation algorithm that automatically searches for the optimal augmentation policies for the patches of an image. Specifically, PAA allows each patch DA operation to be controlled by an agent and models it as a Multi-Agent Reinforcement Learning (MARL) problem. At each step, PAA samples the most effective operation for each patch based on its content and the semantics of the whole image. The agents cooperate as a team and share a unified team reward for achieving the joint optimal DA policy of the whole image. PAA is co-trained with a target network through adversarial training. |
Given the following machine learning model name: Entropy Minimized Ensemble of Adapters, provide a description of the model | **Entropy Minimized Ensemble of Adapters**, or **EMEA**, is a method that optimizes the ensemble weights of the pretrained language adapters for each test sentence by minimizing the entropy of its predictions. The intuition behind the method is that a good [adapter](https://paperswithcode.com/method/adapter) weight $\alpha$ for a test input $x$ should make the model more confident in its prediction for $x$; that is, it should lead to lower model entropy over the input. |
Given the following machine learning model name: SEER, provide a description of the model | **SEER** is a self-supervised learning approach for training large models on random, uncurated images with no supervision. It trains [RegNet-Y](https://paperswithcode.com/method/regnet-y) architectures with the [SwAV](https://paperswithcode.com/method/swav) objective. Several adjustments are made to self-supervised training to make it work at a larger scale, including the use of a [cosine learning schedule](https://paperswithcode.com/method/cosine-annealing). |
Given the following machine learning model name: DeepMind AlphaStar, provide a description of the model | **AlphaStar** is a reinforcement learning agent for tackling the game of Starcraft II. It learns a policy $\pi\_{\theta}\left(a\_{t}\mid{s\_{t}}, z\right) = P\left[a\_{t}\mid{s\_{t}}, z\right]$ using a neural network for parameters $\theta$ that receives observations $s\_{t} = \left(o\_{1:t}, a\_{1:t-1}\right)$ as inputs and chooses actions as outputs. Additionally, the policy conditions on a statistic $z$ that summarizes a strategy sampled from human data such as a build order [1].
AlphaStar uses numerous types of architecture to incorporate different types of features. Observations of player and enemy units are processed with a [Transformer](https://paperswithcode.com/method/transformer). Scatter connections are used to integrate spatial and non-spatial information. The temporal sequence of observations is processed by a core [LSTM](https://paperswithcode.com/method/lstm). Minimap features are extracted with a Residual Network. To manage the combinatorial action space, the agent uses an autoregressive policy and a recurrent [pointer network](https://paperswithcode.com/method/pointer-net).
The agent is trained first with supervised learning from human replays. Parameters are subsequently trained using reinforcement learning that maximizes the win rate against opponents. The RL algorithm is based on a policy-gradient algorithm similar to actor-critic. Updates are performed asynchronously and off-policy. To deal with this, a combination of $TD\left(\lambda\right)$ and [V-trace](https://paperswithcode.com/method/v-trace) are used, as well as a new self-imitation algorithm (UPGO).
Lastly, to address game-theoretic challenges, AlphaStar is trained with league training to try to approximate a fictitious self-play (FSP) setting which avoids cycles by computing a best response against a uniform mixture of all previous policies. The league of potential opponents includes a diverse range of agents, including policies from current and previous agents.
Image Credit: [Yekun Chai](https://ychai.uk/notes/2019/07/21/RL/DRL/Decipher-AlphaStar-on-StarCraft-II/)
#### References
1. Chai, Yekun. "AlphaStar: Grandmaster level in StarCraft II Explained." (2019). [https://ychai.uk/notes/2019/07/21/RL/DRL/Decipher-AlphaStar-on-StarCraft-II/](https://ychai.uk/notes/2019/07/21/RL/DRL/Decipher-AlphaStar-on-StarCraft-II/)
#### Code Implementation
1. https://github.com/opendilab/DI-star |
Given the following machine learning model name: CANINE, provide a description of the model | **CANINE** is a pre-trained encoder for language understanding that operates directly on character sequences—without explicit tokenization or vocabulary—and a pre-training strategy with soft inductive biases in place of hard token boundaries. To use its finer-grained input effectively and efficiently, Canine combines downsampling, which reduces the input sequence length, with a deep [transformer](https://paperswithcode.com/method/transformer) stack, which encodes context. |
Given the following machine learning model name: Bootstrap Your Own Latent, provide a description of the model | BYOL (Bootstrap Your Own Latent) is a new approach to self-supervised learning. BYOL’s goal is to learn a representation $y_θ$ which can then be used for downstream tasks. BYOL uses two neural networks to learn: the online and target networks. The online network is defined by a set of weights $θ$ and is comprised of three stages: an encoder $f_θ$, a projector $g_θ$ and a predictor $q_θ$. The target network has the same architecture
as the online network, but uses a different set of weights $ξ$. The target network provides the regression
targets to train the online network, and its parameters $ξ$ are an exponential moving average of the
online parameters $θ$.
Given the architecture diagram on the right, BYOL minimizes a similarity loss between $q_θ(z_θ)$ and $sg(z'_{ξ})$, where $θ$ are the trained weights, $ξ$ are an exponential moving average of $θ$ and $sg$ means stop-gradient. At the end of training, everything but $f_θ$ is discarded, and $y_θ$ is used as the image representation.
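A small numpy sketch of the two mechanisms described above, the normalized similarity loss and the EMA target update (the vectors and decay rate are toy values; the stop-gradient is implicit here because no gradients flow in numpy):

```python
import numpy as np

def byol_loss(q_online, z_target):
    """Mean squared error between l2-normalized prediction and target;
    algebraically equal to 2 - 2 * cosine_similarity."""
    q = q_online / np.linalg.norm(q_online)
    z = z_target / np.linalg.norm(z_target)
    return float(np.sum((q - z) ** 2))

def ema_update(xi, theta, tau=0.99):
    """Target weights xi track the online weights theta by EMA."""
    return tau * xi + (1.0 - tau) * theta

loss_aligned = byol_loss(np.array([1.0, 0.0]), np.array([2.0, 0.0]))
```

Aligned representations give zero loss regardless of scale, since both vectors are normalized before comparison.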
Source: [Bootstrap Your Own Latent - A New Approach to Self-Supervised Learning](https://paperswithcode.com/paper/bootstrap-your-own-latent-a-new-approach-to-1)
Image credit: [Bootstrap Your Own Latent - A New Approach to Self-Supervised Learning](https://paperswithcode.com/paper/bootstrap-your-own-latent-a-new-approach-to-1) |
Given the following machine learning model name: CRISS, provide a description of the model | **CRISS**, or **Cross-lingual Retrieval for Iterative Self-Supervised Training**, is a self-supervised learning method for multilingual sequence generation. CRISS is developed based on the finding that the encoder outputs of a multilingual denoising autoencoder can be used as language-agnostic representations to retrieve parallel sentence pairs, and training the model on these retrieved sentence pairs can further improve its sentence retrieval and translation capabilities in an iterative manner. Using only unlabeled data from many different languages, CRISS iteratively mines for parallel sentences across languages, trains a new better multilingual model using these mined sentence pairs, mines again for better parallel sentences, and repeats. |
Given the following machine learning model name: BAGUA, provide a description of the model | **BAGUA** is a communication framework whose design goal is to provide a system abstraction that is both flexible and modular to support state-of-the-art system relaxation techniques of distributed training. The abstraction goes beyond parameter server and Allreduce paradigms, and provides a collection of MPI-style collective operations to facilitate communications with different precision and centralization strategies. |
Given the following machine learning model name: Spectral Gap Rewiring Layer, provide a description of the model | **TL;DR: GAP-Layer is a GNN layer which is able to rewire a graph in an inductive and parameter-free way, optimizing the spectral gap (minimizing or maximizing the bottleneck size) by learning a differentiable way to compute the Fiedler vector and the Fiedler value of the graph.**
## Summary
**GAP-Layer** is a rewiring layer based on minimizing or maximizing the spectral gap (or graph bottleneck size) in an inductive way. Depending on the mining task we want to perform on our graph, we may want to maximize or minimize the size of the bottleneck, aiming for more connected or more separated communities.
## GAP-Layer: Spectral Gap Rewiring
#### Loss and derivatives using $\mathbf{L}$ or $\mathbf{\cal L}$
For this explanation, we are going to suppose we want to minimize the spectral gap, i.e. make the graph bottleneck size smaller. For minimizing the spectral GAP we minimize this loss:
$$
L\_{Fiedler} = \|\tilde{\mathbf{A}}-\mathbf{A}\| \_F + \alpha(\lambda\_2)^2
$$
The gradients of this cost function w.r.t. each element of $\mathbf{A}$ are not trivial. Depending on whether we use the Laplacian, $\mathbf{L}$, or the normalized Laplacian, $\cal L$, the derivatives are going to be different. For the former case ($\mathbf{L}$), we will use the derivatives presented in Kang et al. 2019. In the latter scenario ($\cal L$), we present the **Spectral Gradients**: derivatives of the spectral gap w.r.t. the normalized Laplacian. However, whichever option we choose, $\lambda\_2$ can be seen as a function of $\tilde{\mathbf{A}}$ and, hence, $\nabla\_{\tilde{\mathbf{A}}}\lambda\_2$, the gradient of $\lambda\_2$ w.r.t. each component of $\tilde{\mathbf{A}}$ (*how does the bottleneck change with each change in our graph?*), comes from the chain rule of the matrix derivative $Tr\left[\left(\nabla\_{\tilde{\mathbf{L}}}\lambda\_2\right)^T\cdot\nabla\_{\tilde{\mathbf{A}}}\tilde{\mathbf{L}}\right]$ if using the Laplacian, or $Tr\left[\left(\nabla\_{\tilde{\mathbf{\cal L}}}\lambda\_2\right)^T\cdot\nabla\_{\tilde{\mathbf{A}}}\tilde{\mathbf{\cal L}}\right]$ if using the normalized Laplacian. Both of these derivatives rely on the Fiedler vector (the 2nd eigenvector: $\mathbf{f}\_2$ if we use $\mathbf{L}$, and $\mathbf{g}\_2$ if using $\mathbf{\cal L}$ instead). For more details on those derivatives, and for the sake of simplicity in this blog explanation, we suggest going to the original paper.
#### Differentiable approximation of $\mathbf{f}_2$ and $\lambda_2$
Once we have those derivatives, the problem is still not trivial. Note that our cost function $L\_{Fiedler}$ relies on an eigenvalue, $\lambda\_2$. In addition, the derivatives also depend on the Fiedler vector, $\mathbf{f}\_2$ or $\mathbf{g}\_2$, which is the eigenvector corresponding to the aforementioned eigenvalue. However, we **DO NOT COMPUTE IT SPECTRALLY**, as its computation has a complexity of $O(n^3)$ and would need to be repeated in every learning iteration. Instead, **we learn an approximation of $\mathbf{f}\_2$ and use its Dirichlet energy ${\cal E}(\mathbf{f}\_2)$ to approximate $\lambda\_2$**.
$$
\mathbf{f}\_2(u) = \begin{cases}
+1/\sqrt{n} & \text{if}\;\; u\;\; \text{belongs to the first cluster} \\
-1/\sqrt{n} & \text{if}\;\; u\;\; \text{belongs to the second cluster}
\end{cases}
$$
In addition, if using $\mathbf{\cal L}$, since $\mathbf{g}\_2=\mathbf{D}^{1/2}\mathbf{f}\_2$, we first approximate $\mathbf{g}\_2$ and then approximate $\lambda\_2$ from ${\cal E}(\mathbf{g}\_2)$. With this approximation, we can easily compute the nodes belonging to each cluster with a simple MLP. In addition, since the Fiedler vector must satisfy orthogonality and normality, those restrictions must be added to that MLP clustering.
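To illustrate the approximation with $\mathbf{L}$, here is a numpy sketch on a toy bottleneck graph (two triangles joined by a single bridge edge; the hard cluster assignment stands in for what the MLP would learn):

```python
import numpy as np

def fiedler_approx(assignment, n):
    """Piecewise-constant approximation of f_2 from a 2-way node assignment."""
    return np.where(assignment == 0, 1.0, -1.0) / np.sqrt(n)

def dirichlet_energy(f, A):
    """E(f) = f^T L f with L = D - A; approximates lambda_2 for unit-norm f."""
    L = np.diag(A.sum(axis=1)) - A
    return float(f @ L @ f)

# Two triangles (0,1,2) and (3,4,5) joined by the bridge edge (2,3):
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1.0

f2 = fiedler_approx(np.array([0, 0, 0, 1, 1, 1]), 6)
lam2 = dirichlet_energy(f2, A)
```

Only the bridge edge contributes to the energy, so the estimate is small, as expected for a graph with a narrow bottleneck.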
### GAP-Layer
To sum up, **GAP-Layer** can be defined as the following. Given the matrix $\mathbf{X}\_{n\times F}$ encoding the features of the nodes after any message passing (MP) layer, $\mathbf{S}\_{n\times 2}=\textrm{Softmax}(\textrm{MLP}(\mathbf{X}))$ learns the association $\mathbf{X}\rightarrow \mathbf{S}$ while $\mathbf{S}$ is optimized according to the loss:
$$
L\_{Cut} = -\frac{Tr[\mathbf{S}^T\mathbf{A}\mathbf{S}]}{Tr[\mathbf{S}^T\mathbf{D}\mathbf{S}]} + \left\|\frac{\mathbf{S}^T\mathbf{S}}{\|\mathbf{S}^T\mathbf{S}\|\_F} - \frac{\mathbf{I}\_n}{\sqrt{2}}\right\|\_F
$$
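As a sanity check, $L\_{Cut}$ can be evaluated directly with numpy; the toy graph and hard one-hot $\mathbf{S}$ below are illustrative (a real $\mathbf{S}$ would come from the softmax over the MLP output):

```python
import numpy as np

def cut_loss(S, A):
    """Normalized-cut term plus the orthogonality/normality penalty on S."""
    D = np.diag(A.sum(axis=1))
    ncut = -np.trace(S.T @ A @ S) / np.trace(S.T @ D @ S)
    StS = S.T @ S
    ortho = np.linalg.norm(StS / np.linalg.norm(StS) - np.eye(2) / np.sqrt(2))
    return float(ncut + ortho)

# Two triangles joined by a single bridge edge:
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1.0

# Perfect hard assignment of the two communities:
S = np.array([[1.0, 0.0]] * 3 + [[0.0, 1.0]] * 3)
loss = cut_loss(S, A)
```

With a perfect balanced assignment the orthogonality penalty vanishes and the loss reduces to the negated normalized association, here $-12/14$.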
Then, the $\mathbf{f}\_2$ is approximated from $\mathbf{S}$ using $\mathbf{f}\_2(u)$ equation. Once calculated $\mathbf{f}\_2$ and $\lambda\_2$ we consider the loss:
$$
L\_{Fiedler} = \|\tilde{\mathbf{A}}-\mathbf{A}\|\_F + \alpha(\lambda\_2)^2
$$
$$\tilde{\mathbf{A}} = \mathbf{A} - \mu \nabla\_{\tilde{\mathbf{A}}}\lambda\_2$$
returning $\tilde{\mathbf{A}}$. Then the GAP diffusion $\mathbf{T}^{GAP} = \tilde{\mathbf{A}}(\mathbf{S}) \odot \mathbf{A}$ results from minimizing
$$L_{GAP}= L\_{Cut} + L\_{Fiedler}$$
**References**
(Kang et al. 2019) Kang, J., & Tong, H. (2019, November). N2n: Network derivative mining. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management (pp. 861-870). |