| prompts | description |
|---|---|
Given the following machine learning model name: squeeze-and-excitation networks, provide a description of the model | SENet pioneered channel attention. The core of SENet is a squeeze-and-excitation (SE) block which is used to collect global information, capture channel-wise relationships and improve representation ability.
SE blocks are divided into two parts, a squeeze module and an excitation module. Global spatial information is collected in the squeeze module by global average pooling. The excitation module captures channel-wise relationships and outputs an attention vector by using fully-connected layers and non-linear layers (ReLU and sigmoid). Then, each channel of the input feature is scaled by multiplying the corresponding element in the attention vector. Overall, a squeeze-and-excitation block $F_\text{se}$ (with parameter $\theta$) which takes $X$ as input and outputs $Y$ can be formulated as:
\begin{align}
s &= F_\text{se}(X, \theta) = \sigma (W_{2} \delta (W_{1}\text{GAP}(X))) \\
Y &= sX
\end{align} |
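A minimal NumPy sketch of the SE block described above; the weight shapes and the reduction ratio `r` are illustrative choices, not values from the paper:

```python
import numpy as np

def se_block(X, W1, W2):
    """Squeeze-and-excitation on a single feature map X of shape (C, H, W)."""
    z = X.mean(axis=(1, 2))                 # squeeze: global average pooling -> (C,)
    h = np.maximum(W1 @ z, 0.0)             # excitation: FC layer + ReLU (the delta above)
    s = 1.0 / (1.0 + np.exp(-(W2 @ h)))     # FC layer + sigmoid -> attention vector (C,)
    return s, X * s[:, None, None]          # rescale each channel of X by s

rng = np.random.default_rng(0)
C, r = 8, 2                                  # channels and reduction ratio (illustrative)
X = rng.standard_normal((C, 4, 4))
W1 = rng.standard_normal((C // r, C)) * 0.1  # reduce C -> C/r
W2 = rng.standard_normal((C, C // r)) * 0.1  # expand C/r -> C
s, Y = se_block(X, W1, W2)
```

Note how each output channel is the corresponding input channel scaled by a single scalar in $(0, 1)$.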
Given the following machine learning model name: Gated Transformer-XL, provide a description of the model | **Gated Transformer-XL**, or **GTrXL**, is a [Transformer](https://paperswithcode.com/methods/category/transformers)-based architecture for reinforcement learning. It introduces architectural modifications that improve the stability and learning speed of the original Transformer and XL variant. Changes include:
- Placing the [layer normalization](https://paperswithcode.com/method/layer-normalization) on only the input stream of the submodules. A key benefit to this reordering is that it now enables an identity map from the input of the transformer at the first layer to the output of the transformer after the last layer. This is in contrast to the canonical transformer, where there are a series of layer normalization operations that non-linearly transform the state encoding.
- Replacing [residual connections](https://paperswithcode.com/method/residual-connection) with gating layers. The authors' experiments found that [GRUs](https://www.paperswithcode.com/method/gru) were the most effective form of gating. |
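The GRU-style gating above can be sketched as follows. The gate equations follow the form reported in the GTrXL paper ($x$ is the skip stream, $y$ the submodule output, and a bias $b_g > 0$ biases the gate toward the identity map at initialization), but the weight shapes, initialization scale, and the particular bias value here are illustrative:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def gru_gate(x, y, W, U, b_g=2.0):
    """GRU-style gating g(x, y) replacing a residual connection (sketch)."""
    r = sigmoid(W["r"] @ y + U["r"] @ x)            # reset gate
    z = sigmoid(W["z"] @ y + U["z"] @ x - b_g)      # update gate, biased toward 0 at init
    h = np.tanh(W["g"] @ y + U["g"] @ (r * x))      # candidate activation
    return (1.0 - z) * x + z * h                    # mix skip stream and candidate

rng = np.random.default_rng(1)
d = 6
W = {k: rng.standard_normal((d, d)) * 0.1 for k in ("r", "z", "g")}
U = {k: rng.standard_normal((d, d)) * 0.1 for k in ("r", "z", "g")}
x = rng.standard_normal(d)
y = rng.standard_normal(d)
g = gru_gate(x, y, W, U)
```

With a large bias the gate shuts and the layer reduces to the identity on $x$, which is what stabilizes early training.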
Given the following machine learning model name: Demon ADAM, provide a description of the model | **Demon Adam** is a stochastic optimizer where the [Demon](https://paperswithcode.com/method/demon) momentum rule is applied to the [Adam](https://paperswithcode.com/method/adam) optimizer.
$$ \beta\_{t} = \beta\_{init}\cdot\frac{\left(1-\frac{t}{T}\right)}{\left(1-\beta\_{init}\right) + \beta\_{init}\left(1-\frac{t}{T}\right)} $$
$$ m\_{t} = g\_{t} + \beta\_{t}m\_{t-1} $$
$$ v\_{t} = \beta\_{2}v\_{t-1} + \left(1-\beta\_{2}\right)g^{2}\_{t} $$
$$ \theta\_{t} = \theta\_{t-1} - \eta\frac{\hat{m}\_{t}}{\sqrt{\hat{v}\_{t}} + \epsilon} $$
where $\hat{m}\_{t}$ and $\hat{v}\_{t}$ are the bias-corrected estimates of $m\_{t}$ and $v\_{t}$, as in Adam. |
Given the following machine learning model name: Instances-Pixels Balance Index, provide a description of the model | In a given dataset for semantic image segmentation, the number of samples per class should ideally be the same, so that no classifier is biased towards the majority class (including the background). It is very difficult, if not impossible, to achieve a perfect balance between the several object classes of a dataset. Because segmentation is accomplished at the pixel level, the number of pixels in each class must also be taken into account: in image semantic segmentation, different classes and the background may have quite different sizes, so the segmentation problem is naturally unbalanced. The IPBI is based on the concept of entropy, a common measure used in many fields of science that, in a general sense, quantifies the amount of disorder in a system. For semantic image segmentation, the ideal dataset would have the same number of instances per class as well as the same number of pixels in every class. Applying the same reasoning to the number of pixels of all samples in a class yields a pixel balance measure for the dataset. Overall, IPBI evaluates the balance of pixels and of the number of instances in an image semantic segmentation dataset and is therefore useful for comparing different datasets. |
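The description above does not give the exact formula for the IPBI. As an illustration only, here is one way an entropy-based balance measure in this spirit could look: the normalized entropy of per-class instance counts (1 for a perfectly balanced dataset), and likewise for per-class pixel counts. The counts and the normalization are assumptions of this sketch, not the published index:

```python
import math

def normalized_entropy(counts):
    """Entropy of the class distribution divided by its maximum, log(K).

    Returns 1.0 for a perfectly balanced dataset, lower values for imbalance.
    """
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(len(counts))

instances_per_class = [100, 100, 100, 100]   # hypothetical instance counts
pixels_per_class = [50_000, 5_000, 500, 50]  # hypothetical pixel totals
balance_instances = normalized_entropy(instances_per_class)
balance_pixels = normalized_entropy(pixels_per_class)
```

Even a dataset with perfectly balanced instance counts can score poorly on pixel balance, which is exactly the situation the IPBI is designed to expose.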
Given the following machine learning model name: NeuralRecon: Real-Time Coherent 3D Reconstruction from Monocular Video, provide a description of the model | **NeuralRecon** is a framework for real-time 3D scene reconstruction from a monocular video. Unlike previous methods that estimate single-view depth maps separately on each key-frame and fuse them later, NeuralRecon proposes to directly reconstruct local surfaces represented as sparse TSDF volumes for each video fragment sequentially by a neural network. A learning-based TSDF fusion module based on gated recurrent units is used to guide the network to fuse features from previous fragments. This design allows the network to capture local smoothness prior and global shape prior of 3D surfaces. |
Given the following machine learning model name: Chimera, provide a description of the model | **Chimera** is a pipeline model parallelism scheme which combines bidirectional pipelines for efficiently training large-scale models. The key idea of Chimera is to combine two pipelines in different directions (down and up pipelines).
Denote $N$ as the number of micro-batches executed by each worker within a training iteration, and $D$ the number of pipeline stages (depth), and $P$ the number of workers.
The Figure shows an example with four pipeline stages (i.e. $D=4$). Here we assume there are $D$ micro-batches executed by each worker within a training iteration, namely $N=D$, which is the minimum to keep all the stages active.
In the down pipeline, stage$\_{0}$∼stage$\_{3}$ are mapped to $P\_{0}$∼$P\_{3}$ linearly, while in the up pipeline the stages are mapped in the opposite order. The $N$ (assuming an even number) micro-batches are equally partitioned between the two pipelines. Each pipeline schedules $N/2$ micro-batches using the 1F1B strategy, as shown in the left part of the Figure. Then, by merging these two pipelines together, we obtain the pipeline schedule of Chimera. Given an even number of stages $D$ (which is easily satisfied in practice), it is guaranteed that there is no conflict during merging (i.e., at most one micro-batch occupies each time slot on each worker). |
Given the following machine learning model name: WGAN-GP Loss, provide a description of the model | **Wasserstein Gradient Penalty Loss**, or **WGAN-GP Loss**, is a loss used for generative adversarial networks that augments the Wasserstein loss with a gradient norm penalty for random samples $\mathbf{\hat{x}} \sim \mathbb{P}\_{\hat{\mathbf{x}}}$ to achieve Lipschitz continuity:
$$ L = \mathbb{E}\_{\tilde{\mathbf{x}} \sim \mathbb{P}\_{g}}\left[D\left(\tilde{\mathbf{x}}\right)\right] - \mathbb{E}\_{\mathbf{x} \sim \mathbb{P}\_{r}}\left[D\left(\mathbf{x}\right)\right] + \lambda\mathbb{E}\_{\hat{\mathbf{x}} \sim \mathbb{P}\_{\hat{\mathbf{x}}}}\left[\left(||\nabla\_{\hat{\mathbf{x}}}D\left(\hat{\mathbf{x}}\right)||\_{2}-1\right)^{2}\right]$$
It was introduced as part of the [WGAN-GP](https://paperswithcode.com/method/wgan-gp) overall model. |
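Given precomputed critic scores on generated and real samples, and the gradient norms at interpolated points (the gradient computation itself requires an autodiff framework and is omitted here), the WGAN-GP critic loss reduces to a short calculation. The penalty coefficient `lam=10.0` follows the value commonly used with WGAN-GP; all inputs below are illustrative:

```python
import numpy as np

def wgan_gp_loss(d_fake, d_real, grad_norms, lam=10.0):
    """Critic loss: Wasserstein term plus gradient penalty.

    d_fake, d_real: critic scores D(x~) and D(x).
    grad_norms: ||grad_x^ D(x^)||_2 at interpolated samples, assumed
    precomputed by an autodiff framework.
    """
    wasserstein = d_fake.mean() - d_real.mean()
    penalty = ((grad_norms - 1.0) ** 2).mean()
    return wasserstein + lam * penalty

d_fake = np.array([0.2, -0.1, 0.3])
d_real = np.array([1.0, 0.8, 1.2])
grad_norms = np.array([1.0, 1.0, 1.0])   # exactly unit gradients -> zero penalty
loss = wgan_gp_loss(d_fake, d_real, grad_norms)
```

When the critic is exactly 1-Lipschitz at the sampled points, the penalty term vanishes and only the Wasserstein estimate remains.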
Given the following machine learning model name: Wide Residual Block, provide a description of the model | A **Wide Residual Block** is a type of [residual block](https://paperswithcode.com/method/residual-block) that utilises two conv 3x3 layers (with [dropout](https://paperswithcode.com/method/dropout)). This is wider than other variants of residual blocks (for instance [bottleneck residual blocks](https://paperswithcode.com/method/bottleneck-residual-block)). It was proposed as part of the [WideResNet](https://paperswithcode.com/method/wideresnet) CNN architecture. |
Given the following machine learning model name: ProphetNet, provide a description of the model | **ProphetNet** is a sequence-to-sequence pre-training model that introduces a novel self-supervised objective named future n-gram prediction, together with a proposed n-stream self-attention mechanism. Instead of optimizing one-step-ahead prediction as in the traditional sequence-to-sequence model, ProphetNet is optimized by $n$-step-ahead prediction that predicts the next $n$ tokens simultaneously based on previous context tokens at each time step. The future n-gram prediction objective explicitly encourages the model to plan for future tokens and helps it predict multiple future tokens. |
Given the following machine learning model name: Noisy Linear Layer, provide a description of the model | A **Noisy Linear Layer** is a [linear layer](https://paperswithcode.com/method/linear-layer) with parametric noise added to the weights. This induced stochasticity can be used in reinforcement learning networks for the agent's policy to aid efficient exploration. The parameters of the noise are learned with gradient descent along with any other remaining network weights. Factorized Gaussian noise is the type of noise usually employed.
The noisy linear layer takes the form:
$$y = \left(b + Wx\right) + \left(b\_{noisy}\odot\epsilon^{b}+\left(W\_{noisy}\odot\epsilon^{w}\right)x\right) $$
where $\epsilon^{b}$ and $\epsilon^{w}$ are random variables. |
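A NumPy sketch of the noisy linear layer above, using factorized Gaussian noise where $\epsilon^{w}_{i,j} = f(\epsilon_i)f(\epsilon_j)$ and $f(x) = \text{sgn}(x)\sqrt{|x|}$, following the NoisyNet formulation; the dimensions and the initialization of the noise scales are illustrative:

```python
import numpy as np

def f(x):
    """Scaling used for factorized Gaussian noise: sign(x) * sqrt(|x|)."""
    return np.sign(x) * np.sqrt(np.abs(x))

def noisy_linear(x, W, b, W_noisy, b_noisy, rng):
    """y = (b + Wx) + (b_noisy . eps_b + (W_noisy . eps_w) x) (sketch)."""
    out_dim, in_dim = W.shape
    eps_in = f(rng.standard_normal(in_dim))    # one noise vector per input unit
    eps_out = f(rng.standard_normal(out_dim))  # one noise vector per output unit
    eps_w = np.outer(eps_out, eps_in)          # factorized weight noise
    return (b + W @ x) + (b_noisy * eps_out + (W_noisy * eps_w) @ x)

rng = np.random.default_rng(2)
in_dim, out_dim = 4, 3
x = rng.standard_normal(in_dim)
W = rng.standard_normal((out_dim, in_dim))
b = rng.standard_normal(out_dim)
W_noisy = np.full((out_dim, in_dim), 0.1)   # learnable noise scales (illustrative init)
b_noisy = np.full(out_dim, 0.1)
y = noisy_linear(x, W, b, W_noisy, b_noisy, rng)
```

Setting the noise scales to zero recovers an ordinary linear layer, which is how the layer can learn to anneal its own exploration.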
Given the following machine learning model name: PAUSE, provide a description of the model | **PAUSE**, or **Positive and Annealed Unlabeled Sentence Embedding**, is an approach for learning sentence embeddings from a partially labeled dataset. It is based on a dual encoder schema that is widely adopted in supervised sentence embedding training. Each individual sample contains a pair of hypothesis and premise sentences $(x\_{i}, x^{\prime}\_{i})$, each of which is fed into a pretrained encoder (e.g. [BERT](https://paperswithcode.com/method/bert)). As shown in the Figure, the two encoders are identical during training, sharing their weights. |
Given the following machine learning model name: Self-Adversarial Negative Sampling, provide a description of the model | **Self-Adversarial Negative Sampling** is a negative sampling technique used for methods like [word embeddings](https://paperswithcode.com/methods/category/word-embeddings) and [knowledge graph embeddings](https://paperswithcode.com/methods/category/graph-embeddings). The traditional negative sampling loss from word2vec for optimizing distance-based models can be written as:
$$ L = -\log\sigma\left(\gamma - d\_{r}\left(\mathbf{h}, \mathbf{t}\right)\right) - \sum^{n}\_{i=1}\frac{1}{k}\log\sigma\left(d\_{r}\left(\mathbf{h}^{'}\_{i}, \mathbf{t}^{'}\_{i}\right) - \gamma\right) $$
where $\gamma$ is a fixed margin, $\sigma$ is the sigmoid function, and $\left(\mathbf{h}^{'}\_{i}, r, \mathbf{t}^{'}\_{i}\right)$ is the $i$-th negative triplet.
The negative sampling loss samples the negative triplets in a uniform way. Such uniform negative sampling suffers from inefficiency: as training goes on, many samples are obviously false and provide no meaningful information. Therefore, the authors propose an approach called self-adversarial negative sampling, which samples negative triples according to the current embedding model. Specifically, negative triples are sampled from the following distribution:
$$ p\left(h^{'}\_{j}, r, t^{'}\_{j} \mid \left\{\left(h\_{i}, r\_{i}, t\_{i}\right)\right\} \right) = \frac{\exp\alpha{f}\_{r}\left(\mathbf{h}^{'}\_{j}, \mathbf{t}^{'}\_{j}\right)}{\sum\_{i}\exp\alpha{f}\_{r}\left(\mathbf{h}^{'}\_{i}, \mathbf{t}^{'}\_{i}\right)} $$
where $\alpha$ is the temperature of sampling. Moreover, since the sampling procedure may be costly, the authors treat the above probability as the weight of the negative sample. Therefore, the final negative sampling loss with self-adversarial training takes the following form:
$$ L = -\log\sigma\left(\gamma - d\_{r}\left(\mathbf{h}, \mathbf{t}\right)\right) - \sum^{n}\_{i=1}p\left(h^{'}\_{i}, r, t^{'}\_{i}\right)\log\sigma\left(d\_{r}\left(\mathbf{h}^{'}\_{i}, \mathbf{t}^{'}\_{i}\right) - \gamma\right) $$ |
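A NumPy sketch of the self-adversarial weighting and the final loss above. Treating the score $f_r$ as the negative distance $-d_r$ is an illustrative choice for distance-based models, and the margin $\gamma$ and temperature $\alpha$ values are arbitrary:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def self_adversarial_weights(scores, alpha=1.0):
    """Softmax over negative-sample scores f_r with temperature alpha.

    Higher-scoring (harder) negatives receive larger weights; the paper
    treats these weights as fixed, with no gradient flowing through them.
    """
    z = alpha * scores
    z = z - z.max()                    # stabilize the softmax
    e = np.exp(z)
    return e / e.sum()

def self_adv_loss(d_pos, d_negs, scores, gamma=12.0, alpha=1.0):
    """Negative sampling loss with self-adversarial weighting (sketch)."""
    p = self_adversarial_weights(scores, alpha)
    return (-np.log(sigmoid(gamma - d_pos))
            - np.sum(p * np.log(sigmoid(d_negs - gamma))))

d_negs = np.array([14.0, 10.0, 9.0])   # distances of three negative triples
scores = -d_negs                        # illustrative score: f_r = -d_r
p = self_adversarial_weights(scores, alpha=0.5)
loss = self_adv_loss(d_pos=2.0, d_negs=d_negs, scores=scores)
```

The hardest negative (smallest distance, here $d = 9$) receives the largest weight, which is the point of the scheme.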
Given the following machine learning model name: Contrastive Video Representation Learning, provide a description of the model | **Contrastive Video Representation Learning**, or **CVRL**, is a self-supervised contrastive learning framework for learning spatiotemporal visual representations from unlabeled videos. Representations are learned using a contrastive loss, where two clips from the same short video are pulled together in the embedding space, while clips from different videos are pushed away. Data augmentations involving both spatial and temporal cues are designed. Concretely, a [temporally consistent spatial augmentation](https://paperswithcode.com/method/temporally-consistent-spatial-augmentation) method is used to impose strong spatial augmentations on each frame of the video while maintaining the temporal consistency across frames. A sampling-based temporal augmentation method is also used to avoid overly enforcing invariance on clips that are distant in time.
End-to-end, from a raw video, we first sample a temporal interval from a monotonically decreasing distribution. The temporal interval represents the number of frames between the start points of two clips, and we sample two clips from a video according to this interval. Afterwards we apply a [temporally consistent spatial augmentation](https://paperswithcode.com/method/temporally-consistent-spatial-augmentation) to each of the clips and feed them into a 3D backbone with an MLP head. The contrastive loss is used to train the network to attract the clips from the same video and repel the clips from different videos in the embedding space. |
Given the following machine learning model name: Routing Transformer, provide a description of the model | The **Routing Transformer** is a [Transformer](https://paperswithcode.com/method/transformer) that endows self-attention with a sparse routing module based on online k-means. Each attention module considers a clustering of the space: the current timestep only attends to context belonging to the same cluster. In other words, the query at the current time-step is routed to a limited number of context elements through its cluster assignment. |
Given the following machine learning model name: Efficient Spatial Pyramid, provide a description of the model | An **Efficient Spatial Pyramid (ESP)** is an image model block based on a factorization principle that decomposes a standard [convolution](https://paperswithcode.com/method/convolution) into two steps: (1) point-wise convolutions and (2) a spatial pyramid of dilated convolutions. The point-wise convolutions help in reducing the computation, while the spatial pyramid of dilated convolutions re-samples the feature maps to learn representations from a large effective receptive field. This allows for increased efficiency compared to other image blocks like [ResNeXt](https://paperswithcode.com/method/resnext) blocks and Inception modules. |
Given the following machine learning model name: MLFPN, provide a description of the model | **Multi-Level Feature Pyramid Network**, or **MLFPN**, is a feature pyramid block used in object detection models, notably [M2Det](https://paperswithcode.com/method/m2det). We first fuse multi-level features (i.e. multiple layers) extracted by a backbone as a base feature, and then feed it into a block of alternating joint Thinned U-shape Modules ([TUM](https://paperswithcode.com/method/tum)) and Feature Fusion Modules (FFM) to extract more representative, multi-level multi-scale features. Finally, we gather up the feature maps with equivalent scales to construct the final feature pyramid for object detection. Decoder layers that form the final feature pyramid are much deeper than the layers in the backbone, namely, they are more representative. Moreover, each feature map in the final feature pyramid consists of the decoder layers from multiple levels. Hence, the feature pyramid block is called Multi-Level Feature Pyramid Network (MLFPN). |
Given the following machine learning model name: ADAHESSIAN, provide a description of the model | **AdaHessian** is a second-order stochastic optimizer that incorporates the curvature of the loss function via an adaptive estimate of the Hessian diagonal. It achieves new state-of-the-art results by a large margin as compared to other adaptive optimization methods, including variants of [ADAM](https://paperswithcode.com/method/adam). In particular, the authors perform extensive tests on CV, NLP, and recommendation system tasks and find that AdaHessian: (i) achieves 1.80%/1.45% higher accuracy on ResNet20/ResNet32 on CIFAR-10, and 5.55% higher accuracy on ImageNet as compared to ADAM; (ii) outperforms ADAMW for transformers by 0.27/0.33 BLEU score on IWSLT14/WMT14 and 1.8/1.0 PPL on PTB/Wikitext-103; and (iii) achieves a 0.032% better score than [AdaGrad](https://paperswithcode.com/method/adagrad) for DLRM on the Criteo Ad Kaggle dataset. Importantly, the cost per iteration of AdaHessian is comparable to first-order methods, and it exhibits robustness towards its hyperparameters. |
Given the following machine learning model name: Heterogeneous Molecular Graph Neural Network, provide a description of the model | As they carry great potential for modeling complex interactions, graph neural network (GNN)-based methods have been widely used to predict quantum mechanical properties of molecules. Most existing methods treat molecules as molecular graphs in which atoms are modeled as nodes, and characterize each atom's chemical environment by modeling its pairwise interactions with other atoms in the molecule. Although these methods have achieved great success, a limited number of works explicitly take many-body interactions, i.e., interactions between three or more atoms, into consideration. The authors introduce a novel graph representation of molecules, the heterogeneous molecular graph (HMG), in which nodes and edges are of various types, to model many-body interactions. HMGs have the potential to carry complex geometric information. To leverage the rich information stored in HMGs for chemical prediction problems, the authors build **Heterogeneous Molecular Graph Neural Networks (HMGNN)** on the basis of a neural message passing scheme. HMGNN incorporates global molecule representations and an attention mechanism into the prediction process. The predictions of HMGNN are invariant to translation and rotation of atom coordinates, and to permutation of atom indices. The model achieves state-of-the-art performance in 9 out of 12 tasks on the QM9 dataset. |
Given the following machine learning model name: DVD-GAN, provide a description of the model | **DVD-GAN** is a generative adversarial network for video generation built upon the [BigGAN](https://paperswithcode.com/method/biggan) architecture.
DVD-GAN uses two discriminators: a Spatial Discriminator $\mathcal{D}\_{S}$ and a Temporal Discriminator $\mathcal{D}\_{T}$. $\mathcal{D}\_{S}$ critiques single-frame content and structure by randomly sampling $k$ full-resolution frames and judging them individually. The temporal discriminator $\mathcal{D}\_{T}$ must provide $G$ with the learning signal to generate movement (not evaluated by $\mathcal{D}\_{S}$).
The input to $G$ consists of a Gaussian latent noise $z \sim N\left(0, I\right)$ and a learned linear embedding $e\left(y\right)$ of the desired class $y$. Both inputs are 120-dimensional vectors. $G$ starts by computing an affine transformation of $\left[z; e\left(y\right)\right]$ to a $\left[4, 4, ch\_{0}\right]$-shaped tensor. $\left[z; e\left(y\right)\right]$ is used as the input to all class-[conditional Batch Normalization](https://paperswithcode.com/method/conditional-batch-normalization) layers throughout $G$. This is then treated as the input (at each frame we would like to generate) to a Convolutional [GRU](https://paperswithcode.com/method/gru). This RNN is unrolled once per frame. The output of this RNN is processed by two residual blocks. The time dimension is combined with the batch dimension here, so each frame proceeds through the blocks independently. The output of these blocks has width and height dimensions which are doubled (upsampling is skipped in the first block). This is repeated a number of times, with the output of one RNN + residual group fed as the input to the next group, until the output tensors have the desired spatial dimensions.
The spatial discriminator $\mathcal{D}\_{S}$ functions almost identically to BigGAN’s discriminator. A score is calculated for each of the uniformly sampled $k$ frames (default $k = 8$) and the $\mathcal{D}\_{S}$ output is the sum over per-frame scores. The temporal discriminator $\mathcal{D}\_{T}$ has a similar architecture, but pre-processes the real or generated video with a $2 \times 2$ average-pooling downsampling function $\phi$. Furthermore, the first two residual blocks of $\mathcal{D}\_{T}$ are 3-D, where every [convolution](https://paperswithcode.com/method/convolution) is replaced with a 3-D convolution with a kernel size of $3 \times 3 \times 3$. The rest of the architecture follows BigGAN. |
Given the following machine learning model name: InstaBoost, provide a description of the model | **InstaBoost** is a data augmentation technique for instance segmentation that utilises existing instance mask annotations.
Intuitively, in a small neighborhood of $(x_0, y_0, 1, 0)$, the probability map $P(x, y, s, r)$ should be high-valued, since images are usually continuous and redundant at the pixel level. Based on this, InstaBoost applies object jittering: it randomly samples transformation tuples from the neighboring space of the identity transform $(x_0, y_0, 1, 0)$ and pastes the cropped object following the affine transform $\mathbf{H}$. |
Given the following machine learning model name: Source Hypothesis Transfer, provide a description of the model | **Source Hypothesis Transfer**, or **SHOT**, is a representation learning framework for unsupervised domain adaptation. SHOT freezes the classifier module (hypothesis) of the source model and learns the target-specific feature extraction module by exploiting both information maximization and self-supervised pseudo-labeling to implicitly align representations from the target domains to the source hypothesis. |
Given the following machine learning model name: Memory Network, provide a description of the model | A **Memory Network** provides a memory component that can be read from and written to with the inference capabilities of a neural network model. The motivation is that many neural networks lack a long-term memory component, and their existing memory component, encoded by states and weights, is too small and not compartmentalized enough to accurately remember facts from the past (RNNs, for example, have difficulty memorizing and performing tasks like copying).
A memory network consists of a memory $\textbf{m}$ (an array of objects indexed by $\textbf{m}\_{i}$) and four potentially learned components:
- Input feature map $I$ - feature representation of the data input.
- Generalization $G$ - updates old memories given the new input.
- Output feature map $O$ - produces output features given the new input and the memory.
- Response $R$ - converts output into the desired response.
Given an input $x$ (e.g., an input character, word or sentence depending on the granularity chosen, an image or an audio signal) the flow of the model is as follows:
1. Convert $x$ to an internal feature representation $I\left(x\right)$.
2. Update memories $m\_{i}$ given the new input: $m\_{i} = G\left(m\_{i}, I\left(x\right), m\right)$, $\forall{i}$.
3. Compute output features $o$ given the new input and the memory: $o = O\left(I\left(x\right), m\right)$.
4. Finally, decode output features $o$ to give the final response: $r = R\left(o\right)$.
This process is applied at both train and test time, if there is a distinction between such phases; that is, memories are also stored at test time, but the model parameters of $I$, $G$, $O$ and $R$ are not updated. Memory networks cover a wide class of possible implementations. The components $I$, $G$, $O$ and $R$ can potentially use any existing ideas from the machine learning literature.
Image Source: [Adrian Colyer](https://blog.acolyer.org/2016/03/10/memory-networks/) |
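The four-step I/G/O/R flow of the Memory Network above can be sketched with deliberately simple choices: a bag-of-words feature map for $I$, append-only memory for $G$, dot-product retrieval for $O$, and verbatim response for $R$. Real memory networks learn these components; everything below is illustrative:

```python
import numpy as np

VOCAB = {}  # toy word -> index vocabulary, grown on the fly

def embed(text, dim=64):
    """Toy input feature map I: a bag-of-words vector (illustrative only)."""
    v = np.zeros(dim)
    for w in text.split():
        v[VOCAB.setdefault(w, len(VOCAB))] += 1.0
    return v

class MemoryNetwork:
    """A minimal sketch of the I/G/O/R flow described above."""

    def __init__(self):
        self.texts, self.memory = [], []

    def write(self, x):
        """Steps 1-2: compute I(x), then G appends it to the memory array."""
        self.texts.append(x)
        self.memory.append(embed(x))

    def answer(self, x):
        """Steps 3-4: O scores memories against I(x); R returns the best one."""
        q = embed(x)
        scores = [q @ m for m in self.memory]
        return self.texts[int(np.argmax(scores))]

net = MemoryNetwork()
net.write("sam walked to the kitchen")
net.write("mary picked up the football")
reply = net.answer("who picked up the football")
```

Here retrieval picks the stored sentence sharing the most words with the query, a stand-in for the learned scoring function of the actual model.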
Given the following machine learning model name: PULSE, provide a description of the model | **PULSE** is a self-supervised photo upsampling algorithm. Instead of starting with the LR image and slowly adding detail, PULSE traverses the high-resolution natural image manifold, searching for images that downscale to the original LR image. This is formalized through the downscaling loss, which guides exploration through the latent space of a generative model. By leveraging properties of high-dimensional Gaussians, the authors aim to restrict the search space to guarantee realistic outputs. |
Given the following machine learning model name: Conditional Batch Normalization, provide a description of the model | **Conditional Batch Normalization (CBN)** is a class-conditional variant of [batch normalization](https://paperswithcode.com/method/batch-normalization). The key idea is to predict the $\gamma$ and $\beta$ of the batch normalization from an embedding - e.g. a language embedding in VQA. CBN enables the linguistic embedding to manipulate entire feature maps by scaling them up or down, negating them, or shutting them off. CBN has also been used in [GANs](https://paperswithcode.com/methods/category/generative-adversarial-networks) to allow class information to affect the batch normalization parameters.
Consider a single convolutional layer with batch normalization module $\text{BN}\left(F\_{i,c,h,w}|\gamma\_{c}, \beta\_{c}\right)$ for which pretrained scalars $\gamma\_{c}$ and $\beta\_{c}$ are available. We would like to directly predict these affine scaling parameters from, e.g., a language embedding $\mathbf{e\_{q}}$. When starting the training procedure, these parameters must be close to the pretrained values to recover the original [ResNet](https://paperswithcode.com/method/resnet) model as a poor initialization could significantly deteriorate performance. Unfortunately, it is difficult to initialize a network to output the pretrained $\gamma$ and $\beta$. For these reasons, the authors propose to predict a change $\delta\beta\_{c}$ and $\delta\gamma\_{c}$ on the frozen original scalars, for which it is straightforward to initialize a neural network to produce an output with zero-mean and small variance.
The authors use a one-hidden-layer MLP to predict these deltas from a question embedding $\mathbf{e\_{q}}$ for all feature maps within the layer:
$$\Delta\beta = \text{MLP}\left(\mathbf{e\_{q}}\right)$$
$$\Delta\gamma = \text{MLP}\left(\mathbf{e\_{q}}\right)$$
So, given a feature map with $C$ channels, these MLPs output a vector of size $C$. We then add these predictions to the $\beta$ and $\gamma$ parameters:
$$ \hat{\beta}\_{c} = \beta\_{c} + \Delta\beta\_{c} $$
$$ \hat{\gamma}\_{c} = \gamma\_{c} + \Delta\gamma\_{c} $$
Finally, these updated $\hat{\beta}$ and $\hat{\gamma}$ are used as parameters for the batch normalization: $\text{BN}\left(F\_{i,c,h,w}|\hat{\gamma}\_{c}, \hat{\beta}\_{c}\right)$. The authors freeze all ResNet parameters, including $\gamma$ and $\beta$, during training. A ResNet consists of four stages of computation, each subdivided in several residual blocks. In each block, the authors apply CBN to the three convolutional layers. |
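A NumPy sketch of the CBN computation above. The hidden size of the MLP and the zero initialization of its output layer (which makes the deltas zero at init, recovering plain batch normalization) are the only assumptions beyond the equations; all shapes are illustrative:

```python
import numpy as np

def mlp(e, W1, b1, W2, b2):
    """One-hidden-layer MLP predicting a per-channel delta from embedding e."""
    return W2 @ np.maximum(W1 @ e + b1, 0.0) + b2

def conditional_bn(F, e, gamma, beta, params, eps=1e-5):
    """Batch-norm F of shape (N, C, H, W) with MLP-shifted gamma and beta."""
    d_gamma = mlp(e, *params["gamma"])           # delta gamma, shape (C,)
    d_beta = mlp(e, *params["beta"])             # delta beta, shape (C,)
    g = (gamma + d_gamma)[None, :, None, None]
    b = (beta + d_beta)[None, :, None, None]
    mean = F.mean(axis=(0, 2, 3), keepdims=True)
    var = F.var(axis=(0, 2, 3), keepdims=True)
    return g * (F - mean) / np.sqrt(var + eps) + b

rng = np.random.default_rng(3)
N, C, H, W, E, Hid = 2, 4, 3, 3, 5, 8
F = rng.standard_normal((N, C, H, W))
e = rng.standard_normal(E)                       # e.g. a language embedding e_q
gamma, beta = np.ones(C), np.zeros(C)            # stand-ins for pretrained scalars
zero_out = lambda: (rng.standard_normal((Hid, E)), np.zeros(Hid),
                    np.zeros((C, Hid)), np.zeros(C))   # W2 = 0 -> zero deltas at init
params = {"gamma": zero_out(), "beta": zero_out()}
out = conditional_bn(F, e, gamma, beta, params)
```

With the zero-initialized output layer, the deltas vanish and the result is ordinary batch normalization, matching the initialization argument in the text.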
Given the following machine learning model name: Energy Based Process, provide a description of the model | **Energy Based Processes** extend energy based models to exchangeable data while allowing neural network parameterizations of the energy function. They extend the previously separate stochastic process and latent variable model perspectives in a common framework. The result is a generalization of [Gaussian processes](https://paperswithcode.com/method/gaussian-process) and Student-t processes that exploits EBMs for greater flexibility. |
Given the following machine learning model name: Principal Components Analysis, provide a description of the model | **Principal Components Analysis (PCA)** is an unsupervised method primarily used for dimensionality reduction within machine learning. PCA is calculated via a singular value decomposition (SVD) of the design matrix, or alternatively, by calculating the covariance matrix of the data and performing eigenvalue decomposition on the covariance matrix. The results of PCA provide a low-dimensional picture of the structure of the data and the leading (uncorrelated) latent factors determining variation in the data.
Image Source: [Wikipedia](https://en.wikipedia.org/wiki/Principal_component_analysis#/media/File:GaussianScatterPCA.svg) |
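The SVD route described above, in a short NumPy sketch on synthetic nearly one-dimensional data:

```python
import numpy as np

def pca(X, k):
    """PCA via SVD of the centered design matrix X (n_samples, n_features).

    Returns the top-k principal directions, the projected data, and the
    fraction of variance explained by each of the k components.
    """
    Xc = X - X.mean(axis=0)                 # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                     # leading right singular vectors
    projected = Xc @ components.T           # low-dimensional representation
    explained = (S ** 2) / (S ** 2).sum()   # variance ratios, descending
    return components, projected, explained[:k]

rng = np.random.default_rng(4)
t = rng.standard_normal(200)
X = np.column_stack([t,
                     2.0 * t + 0.01 * rng.standard_normal(200),
                     0.01 * rng.standard_normal(200)])  # ~1-dimensional data
components, projected, explained = pca(X, k=1)
```

Because the first two features are almost perfectly correlated, a single component captures nearly all of the variance.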
Given the following machine learning model name: Fast-OCR, provide a description of the model | **Fast-OCR** is a lightweight detection network that incorporates features from existing models focused on the speed/accuracy trade-off, such as [YOLOv2](https://paperswithcode.com/method/yolov2), [CR-NET](https://paperswithcode.com/method/cr-net), and Fast-[YOLOv4](https://paperswithcode.com/method/yolov4). |
Given the following machine learning model name: Latent Diffusion Model, provide a description of the model | **Latent Diffusion Models** are diffusion models applied in the latent space of a pretrained (variational) autoencoder rather than directly in pixel space, which reduces the computational cost of training and sampling. |
Given the following machine learning model name: Protagonist Antagonist Induced Regret Environment Design, provide a description of the model | **Protagonist Antagonist Induced Regret Environment Design**, or **PAIRED**, is an adversarial method for approximate minimax regret to generate environments for reinforcement learning. It introduces an antagonist which is allied with the environment-generating adversary. The primary agent we are trying to train is the protagonist. The environment adversary's goal is to design environments in which the antagonist achieves high reward and the protagonist receives low reward. If the adversary generates unsolvable environments, the antagonist and protagonist would perform the same and the adversary would get a score of zero, but if the adversary finds environments the antagonist solves and the protagonist does not, the adversary achieves a positive score. Thus, the environment adversary is incentivized to create challenging but feasible environments, in which the antagonist can outperform the protagonist. Moreover, as the protagonist learns to solve the simple environments, the antagonist must generate more complex environments to make the protagonist fail, increasing the complexity of the generated tasks and leading to automatic curriculum generation. |
Given the following machine learning model name: DV3 Convolution Block, provide a description of the model | **DV3 Convolution Block** is a convolutional block used for the [Deep Voice 3](https://paperswithcode.com/method/deep-voice-3) text-to-speech architecture. It consists of a 1-D [convolution](https://paperswithcode.com/method/convolution) with a gated linear unit and a [residual connection](https://paperswithcode.com/method/residual-connection). In the Figure, $c$ denotes the dimensionality of the input. The convolution output of size $2 \cdot c$ is split into equal-sized portions: the gate vector and the input vector. A scaling factor $\sqrt{0.5}$ is used to ensure that we preserve the input variance early in training. The gated linear unit provides a linear path for the gradient flow, which alleviates the vanishing gradient issue for stacked convolution blocks while retaining non-linearity. To introduce speaker-dependent control, a speaker-dependent embedding is added as a bias to the convolution filter output, after a softsign function. The authors use the softsign nonlinearity because it limits the range of the output while also avoiding the saturation problem that exponential based nonlinearities sometimes exhibit. Convolution filter weights are initialized with zero-mean and unit-variance activations throughout the entire network. |
Given the following machine learning model name: k-Sparse Autoencoder, provide a description of the model | **k-Sparse Autoencoders** are autoencoders with linear activation function, where in hidden layers only the $k$ highest activities are kept. This achieves exact sparsity in the hidden representation. Backpropagation only goes through the top $k$ activated units. This can be achieved with a [ReLU](https://paperswithcode.com/method/relu) layer with an adjustable threshold. |
Given the following machine learning model name: Self-Attention Network, provide a description of the model | **Self-Attention Network** (**SANet**) proposes two variations of self-attention used for image recognition: 1) pairwise self-attention which generalizes standard [dot-product attention](https://paperswithcode.com/method/dot-product-attention) and is fundamentally a set operator, and 2) patchwise self-attention which is strictly more powerful than [convolution](https://paperswithcode.com/method/convolution). |
Given the following machine learning model name: PP-YOLOv2, provide a description of the model | **PP-YOLOv2** is an object detector that extends upon [PP-YOLO](https://www.paperswithcode.com/method/pp-yolo) with several refinements:
- A [Path Aggregation Network](https://paperswithcode.com/method/pafpn) is included for the FPN to compose bottom-up paths.
- [Mish Activation functions](https://paperswithcode.com/method/mish) are used.
- The input size is expanded.
- An IoU aware branch is calculated with a soft label format. |
Given the following machine learning model name: Local SGD, provide a description of the model | **Local SGD** is a distributed training technique that runs [SGD](https://paperswithcode.com/method/sgd) independently in parallel on different workers and averages the sequences only once in a while. |
Given the following machine learning model name: Adversarial Color Enhancement, provide a description of the model | **Adversarial Color Enhancement** is an approach to generating unrestricted adversarial images by optimizing a color filter via gradient descent. |
Given the following machine learning model name: Deep Extreme Cut, provide a description of the model | **DEXTR**, or **Deep Extreme Cut**, obtains an object segmentation from its four extreme points: the left-most, right-most, top, and bottom pixels. The annotated extreme points are given as a guiding signal to the input of the network. To this end, we create a [heatmap](https://paperswithcode.com/method/heatmap) with activations in the regions of extreme points. We center a 2D Gaussian around each of the points, in order to create a single heatmap. The heatmap is concatenated with the RGB channels of the input image, to form a 4-channel input for the CNN. In order to focus on the object of interest, the input is cropped by the bounding box, formed from the extreme point annotations. To include context on the resulting crop, we relax the tight bounding box by several pixels. After the pre-processing step that comes exclusively from the extreme clicks, the input consists of an RGB crop including an object, plus its extreme points.
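As a minimal sketch of this pre-processing (the Gaussian width `sigma` and the element-wise max used to combine overlapping Gaussians are illustrative assumptions, not values from the paper):

```python
import numpy as np

def extreme_point_heatmap(h, w, points, sigma=10.0):
    """Build a single-channel heatmap with a 2D Gaussian centered on each
    extreme point (left-most, right-most, top, bottom), taking the
    element-wise maximum where Gaussians overlap."""
    ys, xs = np.mgrid[0:h, 0:w]
    heatmap = np.zeros((h, w), dtype=np.float32)
    for (px, py) in points:
        g = np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2.0 * sigma ** 2))
        heatmap = np.maximum(heatmap, g)
    return heatmap

def dextr_input(rgb_crop, points, sigma=10.0):
    """Concatenate the extreme-point heatmap as a fourth channel."""
    h, w, _ = rgb_crop.shape
    hm = extreme_point_heatmap(h, w, points, sigma)
    return np.concatenate([rgb_crop, hm[..., None]], axis=-1)
```

The resulting 4-channel crop is what the CNN consumes.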
[ResNet](https://paperswithcode.com/method/resnet)-101 is chosen as backbone of the architecture. We remove the fully connected layers as well as the [max pooling](https://paperswithcode.com/method/max-pooling) layers in the last two stages to preserve acceptable output resolution for dense prediction, and we introduce atrous convolutions in the last two stages to maintain the same receptive field. After the last ResNet-101 stage, we introduce a pyramid scene parsing module to aggregate global context to the final feature map. The output of the CNN is a probability map representing whether a pixel belongs to the object that we want to segment or not. The CNN is trained to minimize the standard cross entropy loss, which takes into account that different classes occur with different frequency in a dataset. |
Given the following machine learning model name: Network On Network, provide a description of the model | **Network On Network** (NON) is a practical tabular data classification model based on deep neural networks. Most existing deep methods use operations like neural networks and factorization machines to fuse the embeddings of different features directly, and linearly combine the outputs of those operations to get the final prediction. As a result, the intra-field information and the non-linear interactions between those operations (e.g. neural network and factorization machines) are ignored. Intra-field information is the information that features inside each field belong to the same field. NON is proposed to take full advantage of intra-field information and non-linear interactions. It consists of three components: a field-wise network at the bottom to capture the intra-field information, an across-field network in the middle to choose suitable operations in a data-driven way, and an operation fusion network on the top to deeply fuse the outputs of the chosen operations. |
Given the following machine learning model name: Center Pooling, provide a description of the model | **Center Pooling** is a pooling technique for object detection that aims to capture richer and more recognizable visual patterns. The geometric centers of objects do not necessarily convey very recognizable visual patterns (e.g., the human head contains strong visual patterns, but the center keypoint is often in the middle of the human body).
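Concretely, for a single-channel feature map this amounts to adding each position's row maximum and column maximum; a minimal NumPy sketch (the per-channel application and the conv layers of the real module are omitted):

```python
import numpy as np

def center_pooling(feature):
    """Add, at every position, the maximum response along its horizontal
    direction (its row) and its vertical direction (its column)."""
    row_max = feature.max(axis=1, keepdims=True)  # (H, 1): max over each row
    col_max = feature.max(axis=0, keepdims=True)  # (1, W): max over each column
    return row_max + col_max                      # broadcasts back to (H, W)
```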
The detailed process of center pooling is as follows: the backbone outputs a feature map, and to determine whether a pixel in the feature map is a center keypoint, we find the maximum value in both its horizontal and vertical directions and add them together. By doing this, center pooling helps with the detection of center keypoints. |
Given the following machine learning model name: TernaryBERT, provide a description of the model | **TernaryBERT** is a [Transformer](https://paperswithcode.com/methods/category/transformers)-based model which ternarizes the weights of a pretrained [BERT](https://paperswithcode.com/method/bert) model to $\{-1,0,+1\}$, with different granularities for word embedding and weights in the Transformer layer. Instead of directly using knowledge distillation to compress a model, it is used to improve the performance of ternarized student model with the same size as the teacher model. In this way, we transfer the knowledge from the highly-accurate teacher model to the ternarized student model with smaller capacity. |
Given the following machine learning model name: AdaSqrt, provide a description of the model | **AdaSqrt** is a stochastic optimization technique that is motivated by the observation that methods like [Adagrad](https://paperswithcode.com/method/adagrad) and [Adam](https://paperswithcode.com/method/adam) can be viewed as relaxations of [Natural Gradient Descent](https://paperswithcode.com/method/natural-gradient-descent).
The updates are performed as follows:
$$ t \leftarrow t + 1 $$
$$ \alpha\_{t} \leftarrow \sqrt{t} $$
$$ g\_{t} \leftarrow \nabla\_{\theta}f\left(\theta\_{t-1}\right) $$
$$ S\_{t} \leftarrow S\_{t-1} + g\_{t}^{2} $$
$$ \theta\_{t+1} \leftarrow \theta\_{t} + \eta\frac{\alpha\_{t}g\_{t}}{S\_{t} + \epsilon} $$ |
Given the following machine learning model name: Hierarchical Style Disentanglement, provide a description of the model | **Hierarchical Style Disentanglement**, or **HiSD**, aims to disentangle different styles in image-to-image translation models. It organizes the labels into a hierarchical structure, where independent tags, exclusive attributes, and disentangled styles are allocated from top to bottom. To make the styles identified to the tags and attributes, the authors carefully redesign the modules, phases, and objectives. |
Given the following machine learning model name: Mixup, provide a description of the model | **Mixup** is a data augmentation technique that generates a weighted combination of random image pairs from the training data. Given two images and their ground truth labels: $\left(x\_{i}, y\_{i}\right), \left(x\_{j}, y\_{j}\right)$, a synthetic training example $\left(\hat{x}, \hat{y}\right)$ is generated as:
$$ \hat{x} = \lambda{x\_{i}} + \left(1 − \lambda\right){x\_{j}} $$
$$ \hat{y} = \lambda{y\_{i}} + \left(1 − \lambda\right){y\_{j}} $$
where $\lambda \sim \text{Beta}\left(\alpha, \alpha\right)$ with $\alpha = 0.2$ is independently sampled for each augmented example. |
Given the following machine learning model name: Semantic Cross Attention, provide a description of the model | Semantic Cross Attention (SCA) is based on cross attention, which we restrict with respect to a semantic mask.
The goal of SCA is two-fold depending on what is the query and what is the key. Either it allows to give the feature map information from a semantically restricted set of latents or, respectively, it allows a set of latents to retrieve information in a semantically restricted region of the feature map.
SCA is defined as:
\begin{equation}
\text{SCA}(I_{1}, I_{2}, I_{3}) = \sigma\left(\frac{QK^T\odot I_{3} +\tau \left(1-I_{3}\right)}{\sqrt{d_{in}}}\right)V \quad ,
\end{equation}
where $I_{1},I_{2},I_{3}$ are the inputs, with $I_{1}$ attending $I_{2}$, and $I_{3}$ the mask that forces tokens from $I_1$ to attend only specific tokens from $I_2$. The attention values requiring masking are filled with $-\infty$ (in practice $\tau{=}-10^9$) before the softmax. $Q {=} W_QI_{1}$, $K {=} W_KI_{2}$ and $V {=} W_VI_{2}$ are the queries, keys and values, and $d_{in}$ is the internal attention dimension. $\sigma(.)$ is the softmax operation.
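A minimal NumPy sketch of this operation (single head, no output projection; the weight handling is a simplification of the full module):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sca(i1, i2, i3, w_q, w_k, w_v, tau=-1e9):
    """Semantic cross attention: tokens of i1 attend tokens of i2, with
    attention logits outside the semantic mask i3 filled with tau (~ -inf)."""
    q, k, v = i1 @ w_q, i2 @ w_k, i2 @ w_v
    d_in = q.shape[-1]
    logits = (q @ k.T * i3 + tau * (1.0 - i3)) / np.sqrt(d_in)
    return softmax(logits) @ v
```

With the first query masked to see only the first key token, its output reduces to that token's value vector.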
Let $X\in\mathbb{R}^{n\times C}$ be the feature map with n the number of pixels, and C the number of channels. Let $Z\in\mathbb{R}^{m\times d}$ be a set of $m$ latents of dimension $d$ and $s$ the number of semantic labels. Each semantic label is attributed $k$ latents, such that $m=k\times s$. Each semantic label mask is assigned $k$ copies in $S{\in}\{0;1\}^{n \times m}$.
We can differentiate 3 types of SCA:
(a) SCA with pixels $X$ attending latents $Z$: $\text{SCA}(X, Z, S)$, where $W_{Q} {\in} \mathbb{R}^{n\times d_{in}}$ and $W_{K}, W_{V} {\in} \mathbb{R}^{m\times d_{in}}$.
The idea is to force the pixels from a semantic region to attend latents that are associated with the same label.
(b) SCA with latents $Z$ attending pixels $X$: $\text{SCA}(Z, X, S)$, where $W_{Q}{\in} \mathbb{R}^{m\times d_{in}}$, $W_{K}, W_{V} {\in} \mathbb{R}^{n\times d_{in}}$.
The idea is to semantically mask attention values to enforce latents to attend semantically corresponding pixels.
(c) SCA with latents $Z$ attending themselves: $\text{SCA}(Z, Z, M)$, where $W_{Q}, W_{K}, W_{V} {\in} \mathbb{R}^{m\times d_{in}}$. We denote by $M\in\mathbb{N}^{m\times m}$ this mask, with $M(i,j) {=} 1$ if the semantic label of latent $i$ is the same as that of latent $j$; $0$ otherwise.
The idea is to let the latents only attend latents that share the same semantic label. |
Given the following machine learning model name: Grid Sensitive, provide a description of the model | **Grid Sensitive** is a trick for object detection introduced by [YOLOv4](https://paperswithcode.com/method/yolov4). When we decode the coordinate of the bounding box center $x$ and $y$, in original [YOLOv3](https://paperswithcode.com/method/yolov3), we can get them by
$$
\begin{aligned}
&x=s \cdot\left(g\_{x}+\sigma\left(p\_{x}\right)\right) \\
&y=s \cdot\left(g\_{y}+\sigma\left(p\_{y}\right)\right)
\end{aligned}
$$
where $\sigma$ is the sigmoid function, $g\_{x}$ and $g\_{y}$ are integers and $s$ is a scale factor. Obviously, $x$ and $y$ cannot be exactly equal to $s \cdot g\_{x}$ or $s \cdot\left(g\_{x}+1\right)$. This makes it difficult to predict the centers of bounding boxes located exactly on the grid boundary. We can address this problem by changing the equation to
$$
\begin{aligned}
&x=s \cdot\left(g\_{x}+\alpha \cdot \sigma\left(p\_{x}\right)-(\alpha-1) / 2\right) \\
&y=s \cdot\left(g\_{y}+\alpha \cdot \sigma\left(p\_{y}\right)-(\alpha-1) / 2\right)
\end{aligned}
$$
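A scalar sketch of the modified decoding ($\alpha$ is a hyperparameter slightly above 1; the value 1.05 below is an assumed choice in the spirit of PP-YOLO, not taken from this text):

```python
import math

def decode_center(g, p, s, alpha=1.05):
    """Grid Sensitive decoding of one box-center coordinate.
    alpha > 1 stretches the sigmoid output so the center can reach the
    grid boundaries; alpha = 1 recovers the original YOLOv3 decoding."""
    sigma = 1.0 / (1.0 + math.exp(-p))
    return s * (g + alpha * sigma - (alpha - 1) / 2)
```

With `alpha = 1.05` the decoded center spans $s\cdot(g-0.025)$ to $s\cdot(g+1.025)$, so the grid boundaries become reachable.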
This makes it easier for the model to predict bounding box centers located exactly on the grid boundary. The FLOPs added by Grid Sensitive are negligible. |
Given the following machine learning model name: YOLOv1, provide a description of the model | **YOLOv1** is a single-stage object detection model. Object detection is framed as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance.
The network uses features from the entire image to predict each bounding box. It also predicts all bounding boxes across all classes for an image simultaneously. This means the network reasons globally about the full image and all the objects in the image. |
Given the following machine learning model name: Point-wise Spatial Attention, provide a description of the model | **Point-wise Spatial Attention (PSA)** is a [semantic segmentation](https://paperswithcode.com/task/semantic-segmentation) module. The goal is to capture contextual information, especially over long ranges, by aggregating information. Through the PSA module, information aggregation is performed as a kind of information flow, where we adaptively learn a pixel-wise global attention map for each position from two perspectives to aggregate contextual information over the entire feature map.
The PSA module takes a spatial feature map $\mathbf{X}$ as input. We denote the spatial size of $\mathbf{X}$ as $H \times W$. Through the two branches as illustrated, we generate pixel-wise global attention maps for each position in feature map $\mathbf{X}$ through several convolutional layers.
We aggregate input feature maps based on attention maps to generate new feature representations with the long-range contextual information incorporated, i.e., $\mathbf{Z}\_{c}$ from the ‘collect’ branch and $\mathbf{Z}\_{d}$ from the ‘distribute’ branch.
We concatenate the new representations $\mathbf{Z}\_{c}$ and $\mathbf{Z}\_{d}$ and apply a convolutional layer with [batch normalization](https://paperswithcode.com/method/batch-normalization) and activation layers for dimension reduction and feature fusion. Then we concatenate the new global contextual feature with the local representation feature $\mathbf{X}$. It is followed by one or several convolutional layers with batch normalization and activation layers to generate the final feature map for following subnetworks. |
Given the following machine learning model name: U-Net Generative Adversarial Network, provide a description of the model | In contrast to typical GANs, a U-Net GAN uses a segmentation network as the discriminator. This segmentation network predicts two classes: real and fake. In doing so, the discriminator gives the generator region-specific feedback. This discriminator design also enables a [CutMix](https://paperswithcode.com/method/cutmix)-based consistency regularization on the two-dimensional output of the U-Net GAN discriminator, which further improves image synthesis quality. |
Given the following machine learning model name: Probabilistic Anchor Assignment, provide a description of the model | **Probabilistic anchor assignment (PAA)** adaptively separates a set of anchors into positive and negative samples for a GT box according to the learning status of the model associated with it. To do so we first define a score of a detected bounding box that reflects both the classification and localization qualities. We then identify the connection between this score and the training objectives and represent the score as the combination of two loss objectives. Based on this scoring scheme, we calculate the scores of individual anchors that reflect how the model finds useful cues to detect a target object in each anchor. With these anchor scores, we aim to find a probability distribution of two modalities that best represents the scores as positive or negative samples as in the Figure.
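As a rough sketch, the two-modality separation can be done by fitting a two-component 1-D Gaussian mixture to the anchor scores with EM (the mixture family, the initialization, and the 0.5 posterior cut-off below are illustrative assumptions, not the paper's exact recipe):

```python
import numpy as np

def positive_posterior(scores, iters=50):
    """Fit a two-component 1-D Gaussian mixture to anchor scores with EM
    and return each anchor's posterior under the higher-mean component."""
    scores = np.asarray(scores, dtype=float)
    mu = np.array([scores.min(), scores.max()])
    var = np.full(2, scores.var() + 1e-6)
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each score
        dens = pi * np.exp(-(scores[:, None] - mu) ** 2 / (2 * var))
        dens /= np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate means, variances and mixing weights
        nk = resp.sum(axis=0)
        mu = (resp * scores[:, None]).sum(axis=0) / nk
        var = (resp * (scores[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        pi = nk / len(scores)
    return resp[:, int(np.argmax(mu))]

def assign_positives(scores, threshold=0.5):
    """Anchors whose posterior under the positive component is high."""
    return positive_posterior(scores) > threshold
```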
Under the found probability distribution, anchors with high probabilities under the positive component are selected as positive samples. This transforms the anchor assignment problem into a maximum likelihood estimation for a probability distribution whose parameters are determined by anchor scores. Based on the assumption that anchor scores calculated by the model are samples drawn from a probability distribution, it is expected that the model can infer the sample separation in a probabilistic way, leading to easier training compared to non-probabilistic assignments. Moreover, since positive samples are adaptively selected based on the anchor score distribution, it does not require a pre-defined number of positive samples nor an IoU threshold. |
Given the following machine learning model name: Region-based Fully Convolutional Network, provide a description of the model | **Region-based Fully Convolutional Networks**, or **R-FCNs**, are a type of region-based object detector. In contrast to previous region-based object detectors such as Fast/[Faster R-CNN](https://paperswithcode.com/method/faster-r-cnn) that apply a costly per-region subnetwork hundreds of times, R-FCN is fully convolutional with almost all computation shared on the entire image.
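The shared computation hinges on position-sensitive RoI pooling: with a $k \times k$ bin grid, each bin pools only from its dedicated group of score maps. A single-class NumPy sketch (integer bin edges are a simplification; the real operator handles sub-pixel bins):

```python
import numpy as np

def ps_roi_pool(score_maps, roi, k=3):
    """Position-sensitive RoI pooling for one class.
    score_maps: (k*k, H, W), one map per relative position (bin).
    roi: (x0, y0, x1, y1) in pixel coordinates."""
    x0, y0, x1, y1 = roi
    xs = np.linspace(x0, x1, k + 1).astype(int)
    ys = np.linspace(y0, y1, k + 1).astype(int)
    bins = np.empty((k, k))
    for i in range(k):          # bin row
        for j in range(k):      # bin column
            m = score_maps[i * k + j]   # dedicated map for this bin
            region = m[ys[i]:max(ys[i + 1], ys[i] + 1),
                       xs[j]:max(xs[j + 1], xs[j] + 1)]
            bins[i, j] = region.mean()  # average-pool inside the bin
    return bins.mean()  # vote over bins -> class score for this RoI
```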
To achieve this, R-FCN utilises position-sensitive score maps to address a dilemma between translation-invariance in image classification and translation-variance in object detection. |
Given the following machine learning model name: Cascade R-CNN, provide a description of the model | **Cascade R-CNN** is an object detection architecture that seeks to address problems with degrading performance with increased IoU thresholds (due to overfitting during training and inference-time mismatch between IoUs for which detector is optimal and the inputs). It is a multi-stage extension of the [R-CNN](https://paperswithcode.com/method/r-cnn), where detector stages deeper into the cascade are sequentially more selective against close false positives. The cascade of R-CNN stages are trained sequentially, using the output of one stage to train the next. This is motivated by the observation that the output IoU of a regressor is almost invariably better than the input IoU.
Cascade R-CNN does not aim to mine hard negatives. Instead, by adjusting bounding boxes, each stage aims to find a good set of close false positives for training the next stage. When operating in this manner, a sequence of detectors adapted to increasingly higher IoUs can beat the overfitting problem, and thus be effectively trained. At inference, the same cascade procedure is applied. The progressively improved hypotheses are better matched to the increasing detector quality at each stage. |
Given the following machine learning model name: Packed Levitated Markers, provide a description of the model | **Packed Levitated Markers**, or **PL-Marker**, is a span representation approach for [named entity recognition](https://paperswithcode.com/task/named-entity-recognition-ner) that considers the dependencies between spans (pairs) by strategically packing the markers in the encoder. A pair of Levitated Markers, emphasizing a span, consists of a start marker and an end marker which share the same position embeddings with the span's start and end tokens respectively. In addition, both levitated markers adopt a restricted attention, that is, they are visible to each other, but not to the text tokens and other pairs of markers. Based on the above features, the levitated marker would not affect the attended context of the original text tokens, which allows us to flexibly pack a series of related spans with their levitated markers in the encoding phase and thus model their dependencies. |
Given the following machine learning model name: Beta-VAE, provide a description of the model | **Beta-VAE** is a type of variational autoencoder that seeks to discover disentangled latent factors. It modifies [VAEs](https://paperswithcode.com/method/vae) with an adjustable hyperparameter $\beta$ that balances latent channel capacity and independence constraints with reconstruction accuracy. The idea is to maximize the probability of generating the real data while keeping the distance between the real and estimated distributions small, under a threshold $\epsilon$. We can use the Kuhn-Tucker conditions to write this as a single equation:
$$ \mathcal{F}\left(\theta, \phi, \beta; \mathbf{x}, \mathbf{z}\right) = \mathbb{E}\_{q\_{\phi}\left(\mathbf{z}|\mathbf{x}\right)}\left[\log{p}\_{\theta}\left(\mathbf{x}\mid\mathbf{z}\right)\right] - \beta\left[D\_{KL}\left(q\_{\phi}\left(\mathbf{z}\mid\mathbf{x}\right)||p\left(\mathbf{z}\right)\right) - \epsilon\right]$$
where the KKT multiplier $\beta$ is the regularization coefficient that constrains the capacity of the latent channel $\mathbf{z}$ and puts implicit independence pressure on the learnt posterior due to the isotropic nature of the Gaussian prior $p\left(\mathbf{z}\right)$.
We write this again using the complementary slackness assumption to get the Beta-VAE formulation:
$$ \mathcal{F}\left(\theta, \phi, \beta; \mathbf{x}, \mathbf{z}\right) \geq \mathcal{L}\left(\theta, \phi, \beta; \mathbf{x}, \mathbf{z}\right) = \mathbb{E}\_{q\_{\phi}\left(\mathbf{z}|\mathbf{x}\right)}\left[\log{p}\_{\theta}\left(\mathbf{x}\mid\mathbf{z}\right)\right] - \beta{D}\_{KL}\left(q\_{\phi}\left(\mathbf{z}\mid\mathbf{x}\right)||p\left(\mathbf{z}\right)\right)$$ |
Given the following machine learning model name: Local Mixup, provide a description of the model | |
Given the following machine learning model name: Gated Recurrent Unit, provide a description of the model | A **Gated Recurrent Unit**, or **GRU**, is a type of recurrent neural network. It is similar to an [LSTM](https://paperswithcode.com/method/lstm), but only has two gates - a reset gate and an update gate - and notably lacks an output gate. Fewer parameters means GRUs are generally easier/faster to train than their LSTM counterparts.
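A single GRU step, as a minimal NumPy sketch (the gate formulation follows the common convention; the weight names, biases and shapes are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, params):
    """One GRU step. params holds input weights W_*, recurrent weights U_*
    and biases b_* for the update gate (z), reset gate (r) and candidate."""
    z = sigmoid(x @ params["W_z"] + h @ params["U_z"] + params["b_z"])
    r = sigmoid(x @ params["W_r"] + h @ params["U_r"] + params["b_r"])
    h_tilde = np.tanh(x @ params["W_h"] + (r * h) @ params["U_h"] + params["b_h"])
    return (1.0 - z) * h + z * h_tilde  # interpolate old state and candidate
```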
Image Source: [here](https://commons.wikimedia.org/wiki/File:Gated_Recurrent_Unit,_type_1.svg) |
Given the following machine learning model name: GA-PID/NN-PID, provide a description of the model | **GA-PID/NN-PID** addresses longitudinal (speed) control in autonomous vehicles, where the main control tasks are steering (lateral) and speed (longitudinal) control. PID controllers are widely used in industry because of their simplicity and good performance, but they are difficult to tune and need additional adaptation to control nonlinear systems with varying parameters. Adaptive PID control is therefore implemented with two different approaches: Genetic Algorithms (GA-PID) and Neural Networks (NN-PID). The vehicle's nonlinear longitudinal dynamics are modeled using the Powertrain Blockset library, and simulations are performed to assess and compare the performance of the two controllers subject to external disturbances. |
Given the following machine learning model name: VisTR, provide a description of the model | **VisTR** is a [Transformer](https://paperswithcode.com/method/transformer) based video instance segmentation model. It views video instance segmentation as a direct end-to-end parallel sequence decoding/prediction problem. Given a video clip consisting of multiple image frames as input, VisTR outputs the sequence of masks for each instance in the video in order directly. At the core is a new, effective instance sequence matching and segmentation strategy, which supervises and segments instances at the sequence level as a whole. VisTR frames the instance segmentation and tracking in the same perspective of similarity learning, thus considerably simplifying the overall pipeline and is significantly different from existing approaches. |
Given the following machine learning model name: Compact Global Descriptor, provide a description of the model | A **Compact Global Descriptor** is an image model block for modelling interactions between positions across different dimensions (e.g., channels, frames). This descriptor enables subsequent convolutions to access the informative global features. It is a form of attention. |
Given the following machine learning model name: Multi-scale Progressive Fusion Network, provide a description of the model | **Multi-scale Progressive Fusion Network** (MSPFN) is a neural network for single image deraining. It aims to exploit the correlated information of rain streaks across scales.
Specifically, we first generate the Gaussian pyramid rain images using Gaussian kernels to down-sample the original rain image in sequence. A coarse-fusion module (CFM) is designed to capture the global texture information from these multi-scale rain images through recurrent calculation (Conv-[LSTM](https://paperswithcode.com/method/lstm)), thus enabling the network to cooperatively represent the target rain streak using similar counterparts from global feature space. Meanwhile, the representation of the high-resolution pyramid layer is guided by previous outputs as well as all low-resolution pyramid layers. A fine-fusion module (FFM) follows to further integrate the correlated information from different scales. By using the channel attention mechanism, the network not only discriminatively learns the scale-specific knowledge from all preceding pyramid layers, but also reduces the feature redundancy effectively. Moreover, multiple FFMs can be cascaded to form a progressive multi-scale fusion. Finally, a reconstruction module (RM) is appended to aggregate the coarse and fine rain information extracted respectively from CFM and FFM for learning the residual rain image, which is the approximation of the real rain streak distribution. |
Given the following machine learning model name: VoiceFilter-Lite, provide a description of the model | **VoiceFilter-Lite** is a single-channel source separation model that runs on the device to preserve only the speech signals from a target user, as part of a streaming speech recognition system. In this architecture, the voice filtering model operates as a frame-by-frame frontend signal processor to enhance the features consumed by the speech recognizer, without reconstructing audio signals from the features. The key contributions are (1) A system to perform speech separation directly on ASR input features; (2) An asymmetric loss function to penalize oversuppression during training, to make the model harmless under various acoustic environments, (3) An adaptive suppression strength mechanism to adapt to different noise conditions. |
Given the following machine learning model name: Modularity preserving NMF, provide a description of the model | |
Given the following machine learning model name: Inception Module, provide a description of the model | An **Inception Module** is an image model block that aims to approximate an optimal local sparse structure in a CNN. Put simply, it allows for us to use multiple types of filter size, instead of being restricted to a single filter size, in a single image block, which we then concatenate and pass onto the next layer. |
Given the following machine learning model name: PLATO-2, provide a description of the model | |
Given the following machine learning model name: Neural Tangent Kernel, provide a description of the model | |
Given the following machine learning model name: Manifold Mixup, provide a description of the model | **Manifold Mixup** is a regularization method that encourages neural networks to predict less confidently on interpolations of hidden representations. It leverages semantic interpolations as an additional training signal, obtaining neural networks with smoother decision boundaries at multiple levels of representation. As a result, neural networks trained with Manifold Mixup learn class-representations with fewer directions of variance.
Consider training a deep neural network $f\left(x\right) = f\_{k}\left(g\_{k}\left(x\right)\right)$, where $g\_{k}$ denotes the part of the neural network mapping the input data to the hidden representation at layer $k$, and $f\_{k}$ denotes the part mapping such hidden representation to the output $f\left(x\right)$. Training $f$ using Manifold Mixup is performed in five steps:
(1) Select a random layer $k$ from a set of eligible layers $S$ in the neural network. This set may include the input layer $g\_{0}\left(x\right)$.
(2) Process two random data minibatches $\left(x, y\right)$ and $\left(x', y'\right)$ as usual, until reaching layer $k$. This provides us with two intermediate minibatches $\left(g\_{k}\left(x\right), y\right)$ and $\left(g\_{k}\left(x'\right), y'\right)$.
(3) Perform Input [Mixup](https://paperswithcode.com/method/mixup) on these intermediate minibatches. This produces the mixed minibatch:
$$
\left(\tilde{g}\_{k}, \tilde{y}\right) = \left(\text{Mix}\_{\lambda}\left(g\_{k}\left(x\right), g\_{k}\left(x'\right)\right), \text{Mix}\_{\lambda}\left(y, y'\right)\right),
$$
where $\text{Mix}\_{\lambda}\left(a, b\right) = \lambda \cdot a + \left(1 − \lambda\right) \cdot b$. Here, $\left(y, y'\right)$ are one-hot labels, and the mixing coefficient $\lambda \sim \text{Beta}\left(\alpha, \alpha\right)$ as in mixup. For instance, $\alpha = 1.0$ is equivalent to sampling $\lambda \sim U\left(0, 1\right)$.
(4) Continue the forward pass in the network from layer $k$ until the output using the mixed minibatch $\left(\tilde{g}\_{k}, \tilde{y}\right)$.
(5) This output is used to compute the loss value and gradients that update all the parameters of the neural network. |
Given the following machine learning model name: node2vec, provide a description of the model | **node2vec** is a framework for learning graph embeddings for nodes in graphs. Node2vec maximizes a likelihood objective over mappings which preserve neighbourhood distances in higher dimensional spaces. From an algorithm design perspective, node2vec exploits the freedom to define neighbourhoods for nodes and provide an explanation for the effect of the choice of neighborhood on the learned representations.
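The biased walk is governed by a return parameter $p$ and an in-out parameter $q$ (a second-order bias on the previous step); a simplified sketch over an adjacency dict (the weighting scheme below follows the standard node2vec formulation, but alias sampling and other efficiency tricks are omitted):

```python
import random

def node2vec_walk(adj, start, length, p=1.0, q=1.0, rng=random):
    """Second-order biased random walk. adj: dict node -> set of neighbors.
    Bias for stepping from v to x, having arrived at v from t:
    1/p if x == t (return), 1 if x neighbors t, 1/q otherwise."""
    walk = [start]
    while len(walk) < length:
        v = walk[-1]
        nbrs = sorted(adj[v])
        if not nbrs:
            break
        if len(walk) == 1:                       # first step: uniform
            walk.append(rng.choice(nbrs))
            continue
        t = walk[-2]
        weights = []
        for x in nbrs:
            if x == t:
                weights.append(1.0 / p)          # go back to previous node
            elif x in adj[t]:
                weights.append(1.0)              # stay close (BFS-like)
            else:
                weights.append(1.0 / q)          # move outward (DFS-like)
        walk.append(rng.choices(nbrs, weights=weights, k=1)[0])
    return walk
```

The nodes visited by such walks define the neighbourhoods fed to the skip-gram objective.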
For each node, node2vec simulates biased random walks based on an efficient network-aware search strategy and the nodes appearing in the random walk define neighbourhoods. The search strategy accounts for the relative influence nodes exert in a network. It also generalizes prior work alluding to naive search strategies by providing flexibility in exploring neighborhoods. |
Given the following machine learning model name: OSA (identity mapping + eSE), provide a description of the model | **One-Shot Aggregation with an Identity Mapping and eSE** is an image model block that extends [one-shot aggregation](https://paperswithcode.com/method/one-shot-aggregation) with a [residual connection](https://paperswithcode.com/method/residual-connection) and [effective squeeze-and-excitation block](https://paperswithcode.com/method/effective-squeeze-and-excitation-block). It is proposed as part of the [VoVNetV2](https://paperswithcode.com/method/vovnetv2) CNN architecture.
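A minimal NumPy sketch of the eSE recalibration plus identity mapping (the hard-sigmoid gate follows the VoVNetV2 formulation; treating the FC layer as a plain $C \times C$ matrix is a simplification):

```python
import numpy as np

def hsigmoid(x):
    # hard sigmoid: ReLU6(x + 3) / 6
    return np.clip(x + 3.0, 0.0, 6.0) / 6.0

def ese_with_identity(osa_out, identity, w, b):
    """effective Squeeze-Excitation on an OSA output, plus the identity
    mapping. osa_out, identity: (C, H, W); w: (C, C) single FC layer,
    i.e. no channel-dimension reduction."""
    s = osa_out.mean(axis=(1, 2))          # squeeze: global average pooling
    a = hsigmoid(w @ s + b)                # excitation: one FC keeps C channels
    return osa_out * a[:, None, None] + identity  # rescale + residual input
```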
The module adds an identity mapping to the OSA module - the input path is connected to the end of an OSA module that is able to backpropagate the gradients of every OSA module in an end-to-end manner on each stage like a [ResNet](https://paperswithcode.com/method/resnet). Additionally, a [channel attention module](https://paperswithcode.com/method/channel-attention-module) - effective Squeeze-Excitation - is used which is like regular [squeeze-and-excitation](https://paperswithcode.com/method/squeeze-and-excitation-block) but uses only one FC layer with $C$ channels instead of two FCs without a channel dimension reduction, which maintains channel information. |
Given the following machine learning model name: Teacher-Tutor-Student Knowledge Distillation, provide a description of the model | **Teacher-Tutor-Student Knowledge Distillation** is a method for image virtual try-on models. It treats fake images produced by the parser-based method as "tutor knowledge", where the artifacts can be corrected by real "teacher knowledge", which is extracted from the real person images in a self-supervised way. Other than using real images as supervisions, knowledge distillation is formulated in the try-on problem as distilling the appearance flows between the person image and the garment image, enabling the finding of dense correspondences between them to produce high-quality results. |
Given the following machine learning model name: mT0, provide a description of the model | **mT0** is a multitask prompted finetuning (MTF) variant of mT5. |
Given the following machine learning model name: Dynamic Graph Event Detection, provide a description of the model | |
Given the following machine learning model name: CPM-2, provide a description of the model | **CPM-2** is an 11-billion-parameter pre-trained language model based on a standard Transformer architecture consisting of a bidirectional encoder and a unidirectional decoder. The model is pre-trained on WuDaoCorpus, which contains 2.3TB of cleaned Chinese data as well as 300GB of cleaned English data. The pre-training process of CPM-2 is divided into three stages: Chinese pre-training, bilingual pre-training, and MoE pre-training. Multi-stage training with knowledge inheritance can significantly reduce the computation cost. |
Given the following machine learning model name: Noise2Fast, provide a description of the model | **Noise2Fast** is a model for single image blind denoising. It is similar to masking-based methods -- filling in the pixel gaps -- in that the network is blind to many of the input pixels during training. The method is inspired by Neighbor2Neighbor, where the neural network learns a mapping between adjacent pixels. Noise2Fast is tuned for speed by using a discrete four-image training set obtained by a form of downsampling called “checkerboard downsampling”. |
Given the following machine learning model name: Context Aggregated Bi-lateral Network for Semantic Segmentation, provide a description of the model | With the increasing demand of autonomous systems, pixelwise semantic segmentation for visual scene understanding needs to be not only accurate but also efficient for potential real-time applications. In this paper, we propose Context Aggregation Network, a dual branch convolutional neural network, with significantly lower computational costs as compared to the state-of-the-art, while maintaining a competitive prediction accuracy. Building upon the existing dual branch architectures for high-speed semantic segmentation, we design a high resolution branch for effective spatial detailing and a context branch with light-weight versions of global aggregation and local distribution blocks, potent to capture both long-range and local contextual dependencies required for accurate semantic segmentation, with low computational overheads. We evaluate our method on two semantic segmentation datasets, namely Cityscapes dataset and UAVid dataset. For Cityscapes test set, our model achieves state-of-the-art results with mIOU of 75.9%, at 76 FPS on an NVIDIA RTX 2080Ti and 8 FPS on a Jetson Xavier NX. With regards to UAVid dataset, our proposed network achieves mIOU score of 63.5% with high execution speed (15 FPS). |
Given the following machine learning model name: Adaptive Content Generating and Preserving Network, provide a description of the model | **ACGPN**, or **Adaptive Content Generating and Preserving Network**, is a [generative adversarial network](https://www.paperswithcode.com/method/category/generative-adversarial-network) for virtual try-on clothing applications.
In Step I, the Semantic Generation Module (SGM) takes the target clothing image $\mathcal{T}\_{c}$, the pose map $\mathcal{M}\_{p}$, and the fused body part mask $\mathcal{M}^{F}$ as the input to predict the semantic layout and to output the synthesized body part mask $\mathcal{M}^{S}\_{\omega}$ and the target clothing mask $\mathcal{M}^{S}\_{c}$.
In Step II, the Clothes Warping Module (CWM) warps the target clothing image to $\mathcal{T}^{R}\_{c}$ according to the predicted semantic layout, where a second-order difference constraint is introduced to stabilize the warping process.
In Steps III and IV, the Content Fusion Module (CFM) first produces the composited body part mask $\mathcal{M}^{C}\_{\omega}$ using the original clothing mask $\mathcal{M}\_{c}$, the synthesized clothing mask $\mathcal{M}^{S}\_{c}$, the body part mask $\mathcal{M}\_{\omega}$, and the synthesized body part mask $\mathcal{M}\_{\omega}^{S}$, and then exploits a fusion network to generate the try-on images $\mathcal{I}^{S}$ by utilizing the information $\mathcal{T}^{R}\_{c}$, $\mathcal{M}^{S}\_{c}$, and the body part image $I\_{\omega}$ from previous steps. |
Given the following machine learning model name: Self-Supervised Temporal Domain Adaptation, provide a description of the model | **Self-Supervised Temporal Domain Adaptation (SSTDA)** is a method for action segmentation with self-supervised temporal domain adaptation. It contains two self-supervised auxiliary tasks (binary and sequential domain prediction) to jointly align cross-domain feature spaces embedded with local and global temporal dynamics. |
Given the following machine learning model name: Simulation as Augmentation, provide a description of the model | **SimAug**, or **Simulation as Augmentation**, is a data augmentation method for trajectory prediction. It augments the representation such that it is robust to the variances in semantic scenes and camera views. First, to deal with the gap between real and synthetic semantic scenes, it represents each training trajectory by high-level scene semantic segmentation features, and defends the model from adversarial examples generated by whitebox attack methods. Second, to overcome the changes in camera views, it generates multiple views for the same trajectory, and encourages the model to focus on the “hardest” of these views. The classification loss is adopted and the view with the highest loss is favored during training. Finally, the augmented trajectory is computed as a convex combination of the trajectories generated in previous steps. The trajectory prediction model is built on a multi-scale representation and the final model is trained to minimize the empirical vicinal risk over the distribution of augmented trajectories. |
Given the following machine learning model name: Parsing Incrementally for Constrained Auto-Regressive Decoding, provide a description of the model | |
Given the following machine learning model name: HetPipe, provide a description of the model | **HetPipe** is a hybrid parallel method that integrates pipelined model parallelism (PMP) with data parallelism (DP). In HetPipe, a group of multiple GPUs, called a virtual worker, processes minibatches in a pipelined manner, and multiple such virtual workers employ data parallelism for higher performance. |
Given the following machine learning model name: RoBERTa, provide a description of the model | **RoBERTa** is an extension of [BERT](https://paperswithcode.com/method/bert) with changes to the pretraining procedure. The modifications include:
- training the model longer, with bigger batches, over more data
- removing the next sentence prediction objective
- training on longer sequences
- dynamically changing the masking pattern applied to the training data. The authors also collect a large new dataset ($\text{CC-News}$) of comparable size to other privately used datasets, to better control for training set size effects |
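The dynamic-masking change can be illustrated with a toy masker that re-samples a fresh mask every time a sequence is fed to the model, rather than fixing one mask at preprocessing time (the 80/10/10 replacement split follows BERT; the token names and vocabulary here are illustrative):

```python
import random

MASK, VOCAB = "[MASK]", ["the", "cat", "sat", "on", "mat"]

def dynamic_mask(tokens, rng, mask_prob=0.15):
    """Re-sample a BERT-style mask on every call, so each epoch sees a
    different masking pattern for the same sequence."""
    out, targets = list(tokens), {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            targets[i] = tok              # position to predict
            r = rng.random()
            if r < 0.8:
                out[i] = MASK             # 80%: replace with [MASK]
            elif r < 0.9:
                out[i] = rng.choice(VOCAB)  # 10%: random token
            # remaining 10%: keep the original token unchanged
    return out, targets
```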
Given the following machine learning model name: Pointwise Convolution, provide a description of the model | **Pointwise Convolution** is a type of [convolution](https://paperswithcode.com/method/convolution) that uses a 1x1 kernel: a kernel that iterates through every single point. This kernel has a depth of however many channels the input image has. It can be used in conjunction with [depthwise convolutions](https://paperswithcode.com/method/depthwise-convolution) to produce an efficient class of convolutions known as [depthwise-separable convolutions](https://paperswithcode.com/method/depthwise-separable-convolution).
Image Credit: [Chi-Feng Wang](https://towardsdatascience.com/a-basic-introduction-to-separable-convolutions-b99ec3102728) |
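A 1x1 convolution is just a per-pixel channel mix. The sketch below (plain NumPy, with a loop-based depthwise step for clarity) shows the pointwise step and how it composes with a depthwise step into a depthwise-separable convolution; the function names are illustrative:

```python
import numpy as np

def pointwise_conv(x, w):
    """1x1 convolution: x is (C_in, H, W), w is (C_out, C_in).
    Each spatial position is mixed across channels independently."""
    c_in, h, wd = x.shape
    return (w @ x.reshape(c_in, -1)).reshape(w.shape[0], h, wd)

def depthwise_conv3x3(x, k):
    """Depthwise 3x3 convolution (stride 1, no padding): one kernel per
    channel, no cross-channel mixing. k is (C, 3, 3)."""
    c, h, wd = x.shape
    out = np.zeros((c, h - 2, wd - 2))
    for ch in range(c):
        for i in range(h - 2):
            for j in range(wd - 2):
                out[ch, i, j] = (x[ch, i:i+3, j:j+3] * k[ch]).sum()
    return out
```

A depthwise-separable convolution is then `pointwise_conv(depthwise_conv3x3(x, k), w)`: spatial filtering per channel first, channel mixing second.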
Given the following machine learning model name: GAN Least Squares Loss, provide a description of the model | **GAN Least Squares Loss** is a least squares loss function for generative adversarial networks. Minimizing this objective function is equivalent to minimizing the Pearson $\chi^{2}$ divergence. The objective function (here for [LSGAN](https://paperswithcode.com/method/lsgan)) can be defined as:
$$ \min\_{D}V\_{LS}\left(D\right) = \frac{1}{2}\mathbb{E}\_{\mathbf{x} \sim p\_{data}\left(\mathbf{x}\right)}\left[\left(D\left(\mathbf{x}\right) - b\right)^{2}\right] + \frac{1}{2}\mathbb{E}\_{\mathbf{z}\sim p\_{\mathbf{z}}\left(\mathbf{z}\right)}\left[\left(D\left(G\left(\mathbf{z}\right)\right) - a\right)^{2}\right] $$
$$ \min\_{G}V\_{LS}\left(G\right) = \frac{1}{2}\mathbb{E}\_{\mathbf{z} \sim p\_{\mathbf{z}}\left(\mathbf{z}\right)}\left[\left(D\left(G\left(\mathbf{z}\right)\right) - c\right)^{2}\right] $$
where $a$ and $b$ are the labels for fake data and real data and $c$ denotes the value that $G$ wants $D$ to believe for fake data. |
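The two objectives translate directly into code. A NumPy sketch over batches of discriminator outputs, using the common choice $a=0$, $b=c=1$ as defaults (function names are illustrative):

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake, a=0.0, b=1.0):
    """Discriminator least-squares loss: push D(x) toward b (real label)
    and D(G(z)) toward a (fake label)."""
    return 0.5 * np.mean((d_real - b) ** 2) + 0.5 * np.mean((d_fake - a) ** 2)

def lsgan_g_loss(d_fake, c=1.0):
    """Generator least-squares loss: push D(G(z)) toward c."""
    return 0.5 * np.mean((d_fake - c) ** 2)
```

Unlike the sigmoid cross-entropy GAN loss, the quadratic penalty keeps gradients non-vanishing for samples that are classified correctly but still far from the decision boundary.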
Given the following machine learning model name: TimeSformer, provide a description of the model | **TimeSformer** is a [convolution](https://paperswithcode.com/method/convolution)-free approach to video classification built exclusively on self-attention over space and time. It adapts the standard [Transformer](https://paperswithcode.com/method/transformer) architecture to video by enabling spatiotemporal feature learning directly from a sequence of frame-level patches. Specifically, the method adapts the image model [Vision Transformer](https://paperswithcode.com/method/vision-transformer) (ViT) to video by extending the self-attention mechanism from the image space to the space-time 3D volume. As in ViT, each patch is linearly mapped into an embedding and augmented with positional information. This makes it possible to interpret the resulting sequence of vectors as token embeddings which can be fed to a Transformer encoder, analogously to token embeddings computed from words in NLP. |
Given the following machine learning model name: SimCSE, provide a description of the model | **SimCSE** is a contrastive learning framework for generating sentence embeddings. It utilizes an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective, with only standard [dropout](https://paperswithcode.com/method/dropout) used as noise. The authors find that dropout acts as minimal “data augmentation” of hidden representations, while removing it leads to a representation collapse. Afterwards a supervised approach is used, which incorporates annotated pairs from natural language inference datasets into the contrastive framework, by using “entailment” pairs as positives and “contradiction” pairs as hard negatives. |
Given the following machine learning model name: 2D Discrete Wavelet Transform, provide a description of the model | |
Given the following machine learning model name: NesT, provide a description of the model | **NesT** stacks canonical transformer layers to conduct local self-attention on every image block independently, and then "nests" them hierarchically. Coupling of processed information between spatially adjacent blocks is achieved through a proposed block aggregation between every two hierarchies. The overall hierarchical structure can be determined by two key hyper-parameters: patch size $S \times S$ and number of block hierarchies $T_d$. All blocks inside each hierarchy share one set of parameters. Given an input image, it is linearly projected to embeddings. All embeddings are partitioned into blocks and flattened to generate the final input. Each transformer layer is composed of a multi-head self-attention (MSA) layer followed by a feed-forward fully-connected network (FFN) with skip-connections and [layer normalization](https://paperswithcode.com/method/layer-normalization). Positional embeddings are added to encode spatial information before feeding into the block. Lastly, a nested hierarchy with block aggregation is built -- every four spatially connected blocks are merged into one. |
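The block partitioning and the merge of every 2x2 group of adjacent blocks can be sketched in NumPy as below. This is only the re-partitioning step: in the paper, block aggregation additionally applies a convolution and pooling, which is omitted here:

```python
import numpy as np

def blockify(x, block):
    """Partition a (H, W, D) embedding map into non-overlapping
    block x block windows -> (num_blocks, block*block, D), so local
    self-attention can run on each block independently."""
    h, w, d = x.shape
    x = x.reshape(h // block, block, w // block, block, d)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, block * block, d)

def aggregate_2x2(blocks, grid_h, grid_w, block, d):
    """Merge every 2x2 group of spatially adjacent blocks into one, by
    unblockifying and re-partitioning at twice the window size."""
    h, w = grid_h * block, grid_w * block
    x = blocks.reshape(grid_h, grid_w, block, block, d)
    x = x.transpose(0, 2, 1, 3, 4).reshape(h, w, d)
    return blockify(x, 2 * block)
```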
Given the following machine learning model name: DeepViT, provide a description of the model | **DeepViT** is a type of [vision transformer](https://paperswithcode.com/method/vision-transformer) that replaces the self-attention layer within the [transformer](https://paperswithcode.com/method/transformer) block with a [Re-attention module](https://paperswithcode.com/method/re-attention-module) to address the issue of attention collapse and enables training deeper ViTs. |
Given the following machine learning model name: UCTransNet, provide a description of the model | **UCTransNet** is an end-to-end deep learning network for semantic segmentation that takes [U-Net](https://paperswithcode.com/method/u-net) as the main structure of the network. The original skip connections of U-Net are replaced by CTrans consisting of two components: [Channel-wise Cross fusion Transformer](https://paperswithcode.com/method/channel-wise-cross-fusion-transformer) ([CCT](https://paperswithcode.com/method/cct)) and [Channel-wise Cross Attention](https://paperswithcode.com/method/channel-wise-cross-attention) (CCA) to guide the fused multi-Scale channel-wise information to effectively connect to the decoder features for eliminating the ambiguity. |
Given the following machine learning model name: One-Shot Aggregation, provide a description of the model | **One-Shot Aggregation** is an image model block that is an alternative to [Dense Blocks](https://paperswithcode.com/method/dense-block), by aggregating intermediate features. It is proposed as part of the [VoVNet](https://paperswithcode.com/method/vovnet) architecture. Each [convolution](https://paperswithcode.com/method/convolution) layer is connected by two-way connection. One way is connected to the subsequent layer to produce the feature with a larger receptive field while the other way is aggregated only once into the final output feature map. The difference with [DenseNet](https://paperswithcode.com/method/densenet) is that the output of each layer is not routed to all subsequent intermediate layers which makes the input size of intermediate layers constant. |
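A minimal sketch of the aggregation pattern, with `convs` standing in for the block's convolution layers and `agg` for the final aggregation convolution (both names are illustrative):

```python
import numpy as np

def one_shot_aggregation(x, convs, agg):
    """One-Shot Aggregation: each conv feeds only the next conv, and all
    intermediate features are concatenated once at the end.

    convs: list of functions mapping (C, H, W) -> (C, H, W);
    agg: aggregation function (a 1x1 conv in VoVNet) applied to the
    channel-wise concatenation of all features.
    """
    feats = [x]
    for conv in convs:
        feats.append(conv(feats[-1]))          # one way: to the next layer
    return agg(np.concatenate(feats, axis=0))  # other way: aggregated once
```

Because intermediate outputs are not routed to every subsequent layer (as in DenseNet), each conv sees a constant input size.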
Given the following machine learning model name: True Online TD Lambda, provide a description of the model | **True Online $TD\left(\lambda\right)$** seeks to approximate the ideal online $\lambda$-return algorithm. It inverts this ideal forward-view algorithm to produce an efficient backward-view algorithm based on eligibility traces, using dutch traces rather than accumulating traces.
Source: [van Seijen and Sutton](http://proceedings.mlr.press/v32/seijen14.pdf) |
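With linear function approximation, the per-step update with a dutch trace can be sketched as follows (a minimal NumPy version of the standard algorithm; `env_step` is an illustrative stand-in returning a reward and the next feature vector):

```python
import numpy as np

def true_online_td(env_step, x0, n_steps, alpha=0.05, gamma=0.9, lam=0.8):
    """True Online TD(lambda) with linear function approximation and a
    dutch eligibility trace."""
    w = np.zeros_like(x0)          # weight vector
    z = np.zeros_like(x0)          # dutch trace
    x, v_old = x0, 0.0
    for _ in range(n_steps):
        r, x_next = env_step(x)
        v = w @ x
        v_next = w @ x_next
        delta = r + gamma * v_next - v
        # dutch trace update: differs from the accumulating trace by the
        # correction term -alpha*gamma*lam*(z.x)*x
        z = gamma * lam * z + (1.0 - alpha * gamma * lam * (z @ x)) * x
        w = w + alpha * (delta + v - v_old) * z - alpha * (v - v_old) * x
        v_old = v_next
        x = x_next
    return w
```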
Given the following machine learning model name: UNiversal Image-TExt Representation Learning, provide a description of the model | **UNITER**, or **UNiversal Image-TExt Representation**, is a large-scale pre-trained model for joint multimodal embedding. It is pre-trained using four image-text datasets: COCO, Visual Genome, Conceptual Captions, and SBU Captions. It can power heterogeneous downstream V+L tasks with joint multimodal embeddings.
UNITER takes the visual regions of the image and the textual tokens of the sentence as inputs. A Faster R-CNN is used in the Image Embedder to extract the visual features of each region, and a Text Embedder is used to tokenize the input sentence into WordPieces.
It proposes Word-Region Alignment (WRA) via Optimal Transport to provide more fine-grained alignment between word tokens and image regions, which is effective in calculating the minimum cost of transporting the contextualized image embeddings to word embeddings and vice versa.
Four pretraining tasks were designed for this model. They are Masked Language Modeling (MLM), Masked Region Modeling (MRM, with three variants), Image-Text Matching (ITM), and Word-Region Alignment (WRA). This model is different from the previous models because it uses conditional masking on pre-training tasks. |
Given the following machine learning model name: Stochastic Weight Averaging, provide a description of the model | **Stochastic Weight Averaging** is an optimization procedure that averages multiple points along the trajectory of [SGD](https://paperswithcode.com/method/sgd), with a cyclical or constant learning rate. On the one hand it averages weights, but it also has the property that, with a cyclical or constant learning rate, SGD proposals are approximately sampling from the loss surface of the network, leading to stochastic weights and helping to discover broader optima. |
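The core averaging step is simple: maintain a running mean of the weights collected along the SGD trajectory, typically once per learning-rate cycle. A minimal sketch (the function name is illustrative):

```python
def swa_update(swa_weights, new_weights, n_averaged):
    """Running average of SGD iterates.

    swa_weights: current averages (one entry per parameter tensor),
    already covering `n_averaged` collected points; new_weights: the
    latest SGD iterate, e.g. at the end of a learning-rate cycle.
    """
    return [
        (sw * n_averaged + w) / (n_averaged + 1)
        for sw, w in zip(swa_weights, new_weights)
    ]
```

After training, batch-norm statistics are recomputed with the averaged weights before evaluation, since the average does not correspond to any single point visited during training.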
Given the following machine learning model name: efficient channel attention, provide a description of the model | An ECA block has a similar formulation to an SE block, including a squeeze module for aggregating global spatial information and an efficient excitation module for modeling cross-channel interaction. Instead of indirect correspondence, an ECA block only considers direct interaction between each channel and its $k$ nearest neighbors to control model complexity. Overall, the formulation of an ECA block is:
\begin{align}
s = F_\text{eca}(X, \theta) & = \sigma (\text{Conv1D}(\text{GAP}(X)))
\end{align}
\begin{align}
Y & = s X
\end{align}
where $\text{Conv1D}(\cdot)$ denotes 1D convolution with a kernel of size $k$ across the channel domain, to model local cross-channel interaction. The parameter $k$ decides the coverage of interaction, and in ECA the kernel size $k$ is adaptively determined from the channel dimensionality $C$ instead of by manual tuning via cross-validation:
\begin{equation}
k = \psi(C) = \left | \frac{\log_2(C)}{\gamma}+\frac{b}{\gamma}\right |_\text{odd}
\end{equation}
where $\gamma$ and $b$ are hyperparameters. $|x|_\text{odd}$ indicates the nearest odd number to $x$.
Compared to SENet, ECANet has an improved excitation module, and provides an efficient and effective block which can readily be incorporated into various CNNs. |
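A minimal NumPy sketch of the block on a (C, H, W) feature map. The adaptive kernel-size rule follows the formula above; the fixed averaging kernel stands in for the learned 1D convolution weights:

```python
import numpy as np

def eca_kernel_size(channels, gamma=2, b=1):
    """Adaptive kernel size: k = |log2(C)/gamma + b/gamma| rounded to
    the nearest odd number (rounded up when the truncation is even)."""
    k = int(abs(np.log2(channels) / gamma + b / gamma))
    return k if k % 2 == 1 else k + 1

def eca_block(x, gamma=2, b=1):
    """Efficient Channel Attention: GAP -> 1D conv of size k across the
    channel dimension -> sigmoid -> channel-wise rescale."""
    c = x.shape[0]
    k = eca_kernel_size(c, gamma, b)
    g = x.mean(axis=(1, 2))                      # squeeze -> (C,)
    kernel = np.full(k, 1.0 / k)                 # toy weights; learned in practice
    conv = np.convolve(g, kernel, mode="same")   # local cross-channel interaction
    s = 1.0 / (1.0 + np.exp(-conv))              # sigmoid attention vector
    return s[:, None, None] * x
```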
Given the following machine learning model name: 3DSSD, provide a description of the model | **3DSSD** is a point-based 3D single-stage object detector. In this paradigm, all upsampling layers and the refinement stage, which are indispensable in existing point-based methods, are abandoned to reduce the large computation cost. The authors propose a fusion sampling strategy in the downsampling process to make detection on less representative points feasible. A delicate box prediction network, including a candidate generation layer and an anchor-free regression head with a 3D center-ness assignment strategy, is designed to meet the demands of both accuracy and speed. |
Given the following machine learning model name: Distance Shrinking with Angular Marginalizing Loss, provide a description of the model | |
Given the following machine learning model name: Locally-Grouped Self-Attention, provide a description of the model | **Locally-Grouped Self-Attention**, or **LSA**, is a local attention mechanism used in the [Twins-SVT](https://paperswithcode.com/method/twins-svt) architecture. Motivated by the group design in depthwise convolutions for efficient inference, we first equally divide the 2D feature maps into sub-windows, making self-attention communications only happen within each sub-window. This design also resonates with the multi-head design in self-attention, where the communications only occur within the channels of the same head. To be specific, the feature maps are divided into $m \times n$ sub-windows. Without loss of generality, we assume $H \% m=0$ and $W \% n=0$. Each group contains $\frac{H W}{m n}$ elements, and thus the computation cost of the self-attention in this window is $\mathcal{O}\left(\frac{H^{2} W^{2}}{m^{2} n^{2}} d\right)$, and the total cost is $\mathcal{O}\left(\frac{H^{2} W^{2}}{m n} d\right)$. If we let $k\_{1}=\frac{H}{m}$ and $k\_{2}=\frac{W}{n}$, the cost can be computed as $\mathcal{O}\left(k\_{1} k\_{2} H W d\right)$, which is significantly more efficient when $k\_{1} \ll H$ and $k\_{2} \ll W$ and grows linearly with $H W$ if $k\_{1}$ and $k\_{2}$ are fixed.
Although the locally-grouped self-attention mechanism is computation friendly, the image is divided into non-overlapping sub-windows. Thus, we need a mechanism to communicate between different sub-windows, as in Swin. Otherwise, the information would be limited to be processed locally, which makes the receptive field small and significantly degrades the performance as shown in our experiments. This resembles the fact that we cannot replace all standard convolutions by depth-wise convolutions in CNNs. |
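The windowed attention and its locality can be sketched in NumPy as below (a single head, with projection matrices omitted so queries, keys, and values are the raw features; names are illustrative):

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def locally_grouped_attention(x, m, n):
    """Self-attention restricted to m x n sub-windows of a (H, W, d)
    feature map; no communication crosses window boundaries."""
    h, w, d = x.shape
    k1, k2 = h // m, w // n                  # sub-window height and width
    # Group tokens per sub-window: (m, n, k1*k2, d)
    g = (x.reshape(m, k1, n, k2, d)
          .transpose(0, 2, 1, 3, 4)
          .reshape(m, n, k1 * k2, d))
    attn = softmax(g @ g.transpose(0, 1, 3, 2) / np.sqrt(d))
    out = attn @ g                           # attention within each window only
    # Scatter windows back to the (H, W, d) layout
    return (out.reshape(m, n, k1, k2, d)
               .transpose(0, 2, 1, 3, 4)
               .reshape(h, w, d))
```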
Given the following machine learning model name: GreedyNAS-B, provide a description of the model | **GreedyNAS-B** is a convolutional neural network discovered using the [GreedyNAS](https://paperswithcode.com/method/greedynas) [neural architecture search](https://paperswithcode.com/method/neural-architecture-search) method. The basic building blocks used are inverted residual blocks (from [MobileNetV2](https://paperswithcode.com/method/mobilenetv2)) and squeeze-and-excitation blocks. |
Given the following machine learning model name: Fast Minimum-Norm Attack, provide a description of the model | **Fast Minimum-Norm Attack**, or **FMN**, is a type of adversarial attack that works with different $\ell_{p}$-norm perturbation models ($p=0,1,2,\infty$), is robust to hyperparameter choices, does not require adversarial starting points, and converges within few lightweight steps. It works by iteratively finding the sample misclassified with maximum confidence within an $\ell_{p}$-norm constraint of size $\epsilon$, while adapting $\epsilon$ to minimize the distance of the current sample to the decision boundary. |
Given the following machine learning model name: PowerSGD, provide a description of the model | **PowerSGD** is a distributed optimization technique that computes a low-rank approximation of the gradient using a generalized power iteration (known as subspace iteration). The approximation is computationally light-weight, avoiding any prohibitively expensive Singular Value Decomposition. To improve the quality of the efficient approximation, the authors warm-start the power iteration by reusing the approximation from the previous optimization step. |
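One compression step can be sketched as follows: a NumPy illustration of rank-$r$ subspace iteration with a warm start, not the reference implementation (error feedback and the all-reduce steps between workers are omitted):

```python
import numpy as np

def powersgd_compress(grad, q_prev):
    """One step of low-rank gradient compression via subspace iteration.

    grad: (n, m) gradient matrix; q_prev: (m, r) factor reused from the
    previous optimizer step (warm start). Returns factors p, q with
    p @ q.T approximating grad, plus q for reuse at the next step.
    """
    p = grad @ q_prev             # (n, r) projection onto the warm-started subspace
    p, _ = np.linalg.qr(p)        # cheap orthonormalisation, no SVD needed
    q = grad.T @ p                # (m, r) updated factor
    return p, q
```

Decompression is simply `p @ q.T`; in distributed training the small factors, rather than the full gradient, are exchanged between workers.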
Given the following machine learning model name: Bridge-net, provide a description of the model | **Bridge-net** is an audio model block used in the [ClariNet](https://paperswithcode.com/method/clarinet) text-to-speech architecture. Bridge-net maps frame-level hidden representation to sample-level through several [convolution](https://paperswithcode.com/method/convolution) blocks and [transposed convolution](https://paperswithcode.com/method/transposed-convolution) layers interleaved with softsign non-linearities. |
Given the following machine learning model name: Noisy Student, provide a description of the model | **Noisy Student Training** is a semi-supervised learning approach. It extends the idea of self-training and distillation with the use of equal-or-larger student models and noise added to the student during learning. It has three main steps:
1. train a teacher model on labeled images
2. use the teacher to generate pseudo labels on unlabeled images
3. train a student model on the combination of labeled images and pseudo labeled images.
The algorithm is iterated a few times by treating the student as a teacher to relabel the unlabeled data and training a new student.
Noisy Student Training seeks to improve on self-training and distillation in two ways. First, it makes the student larger than, or at least equal to, the teacher so the student can better learn from a larger dataset. Second, it adds noise to the student so the noised student is forced to learn harder from the pseudo labels. To noise the student, it uses input noise such as [RandAugment](https://paperswithcode.com/method/randaugment) data augmentation, and model noise such as [dropout](https://paperswithcode.com/method/dropout) and [stochastic depth](https://paperswithcode.com/method/stochastic-depth) during training. |
Given the following machine learning model name: Distributed Distributional DDPG, provide a description of the model | **D4PG**, or **Distributed Distributional DDPG**, is a policy gradient algorithm that extends [DDPG](https://paperswithcode.com/method/ddpg). The improvements include distributional updates to the DDPG algorithm, combined with the use of multiple distributed workers all writing into the same replay table. Among the simpler changes, the biggest performance gain was the use of $N$-step returns. The authors found that the use of [prioritized experience replay](https://paperswithcode.com/method/prioritized-experience-replay) was less crucial to the overall D4PG algorithm, especially on harder problems. |
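The $N$-step return used as the critic target can be sketched as follows (a plain-Python illustration; in D4PG `bootstrap_value` would come from the target critic evaluated at the state reached after $N$ steps):

```python
def n_step_return(rewards, bootstrap_value, gamma, n):
    """N-step return from time 0: the sum of n discounted rewards plus
    a discounted bootstrap value from step n."""
    assert len(rewards) >= n
    g = sum(gamma ** i * rewards[i] for i in range(n))
    return g + gamma ** n * bootstrap_value
```

Larger $n$ trades off bias (from the bootstrap) against variance (from the summed rewards).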