Given the following machine learning model name: Inception-C, provide a description of the model
**Inception-C** is an image model block used in the [Inception-v4](https://paperswithcode.com/method/inception-v4) architecture.
Given the following machine learning model name: Location-based Attention, provide a description of the model
**Location-based Attention** is an attention mechanism in which the alignment scores are computed from solely the target hidden state $\mathbf{h}\_{t}$ as follows: $$ \mathbf{a}\_{t} = \text{softmax}(\mathbf{W}\_{a}\mathbf{h}_{t}) $$
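A minimal numpy sketch of this scoring rule (the sizes, and the fixed source length $T$ implied by the shape of $\mathbf{W}\_{a}$, are illustrative):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d, T = 8, 5                        # hidden size and source length (illustrative)
W_a = rng.standard_normal((T, d))  # learned projection from h_t to T scores
h_t = rng.standard_normal(d)       # target hidden state at step t

a_t = softmax(W_a @ h_t)           # alignment weights over source positions
```

Note that the scores depend only on the target state $\mathbf{h}\_{t}$, not on the source hidden states.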
Given the following machine learning model name: Normalizing Flows, provide a description of the model
**Normalizing Flows** are a method for constructing complex distributions by transforming a probability density through a series of invertible mappings. By repeatedly applying the rule for change of variables, the initial density ‘flows’ through the sequence of invertible mappings. At the end of this sequence we obtain a valid probability distribution and hence this type of flow is referred to as a normalizing flow. In the case of finite flows, the basic rule for the transformation of densities considers an invertible, smooth mapping $f : \mathbb{R}^{d} \rightarrow \mathbb{R}^{d}$ with inverse $f^{-1} = g$, i.e. the composition $g \circ f\left(z\right) = z$. If we use this mapping to transform a random variable $z$ with distribution $q\left(z\right)$, the resulting random variable $z' = f\left(z\right)$ has a distribution: $$ q\left(\mathbf{z}'\right) = q\left(\mathbf{z}\right)\bigl\vert\det\frac{\partial f^{-1}}{\partial \mathbf{z}'}\bigr\vert = q\left(\mathbf{z}\right)\bigl\vert\det\frac{\partial f}{\partial \mathbf{z}}\bigr\vert^{-1} $$ where the last equality can be seen by applying the chain rule (inverse function theorem) and is a property of Jacobians of invertible functions. We can construct arbitrarily complex densities by composing several simple maps and successively applying the above equation. The density $q\_{K}\left(\mathbf{z}\right)$ obtained by successively transforming a random variable $z\_{0}$ with distribution $q\_{0}$ through a chain of $K$ transformations $f\_{k}$ is: $$ z\_{K} = f\_{K} \circ \dots \circ f\_{2} \circ f\_{1}\left(z\_{0}\right) $$ $$ \ln q\_{K}\left(z\_{K}\right) = \ln q\_{0}\left(z\_{0}\right) - \sum^{K}\_{k=1}\ln\bigl\vert\det\frac{\partial f\_{k}}{\partial \mathbf{z}\_{k-1}}\bigr\vert $$ The path traversed by the random variables $z\_{k} = f\_{k}\left(z\_{k-1}\right)$ with initial distribution $q\_{0}\left(z\_{0}\right)$ is called the flow and the path formed by the successive distributions $q\_{k}$ is a normalizing flow.
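A numpy sketch of the log-density bookkeeping, using elementwise affine maps as toy flows (all parameters here are made up, not from any trained model):

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 2, 3
# each flow k is an elementwise affine map f_k(z) = exp(s_k) * z + b_k,
# whose log|det Jacobian| is simply s_k.sum()
S = 0.1 * rng.standard_normal((K, d))
B = 0.1 * rng.standard_normal((K, d))

z = rng.standard_normal(d)                            # z_0 ~ q_0 = N(0, I)
log_q = -0.5 * (z @ z) - 0.5 * d * np.log(2 * np.pi)  # ln q_0(z_0)
for s, b in zip(S, B):
    z = np.exp(s) * z + b     # z_k = f_k(z_{k-1})
    log_q -= s.sum()          # ln q_k = ln q_{k-1} - ln|det J_{f_k}|
# (z, log_q) is now a sample from q_K together with its exact log-density
```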
Given the following machine learning model name: Cross-Covariance Attention, provide a description of the model
**Cross-Covariance Attention**, or **XCA**, is an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) which operates along the feature dimension instead of the token dimension as in [conventional transformers](https://paperswithcode.com/methods/category/transformers). Using the definitions of queries, keys and values from conventional attention, the cross-covariance attention function is defined as: $$ \text { XC-Attention }(Q, K, V)=V \mathcal{A}_{\mathrm{XC}}(K, Q), \quad \mathcal{A}\_{\mathrm{XC}}(K, Q)=\operatorname{Softmax}\left(\hat{K}^{\top} \hat{Q} / \tau\right) $$ where each output token embedding is a convex combination of the $d\_{v}$ features of its corresponding token embedding in $V$. The attention weights $\mathcal{A}$ are computed based on the cross-covariance matrix.
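A rough numpy sketch (the $\ell\_{2}$-normalization of $Q$ and $K$ along the token axis, the softmax axis, and all sizes are assumptions made for illustration):

```python
import numpy as np

def l2_normalize(M, axis=0):
    return M / (np.linalg.norm(M, axis=axis, keepdims=True) + 1e-8)

def softmax(M, axis=0):
    e = np.exp(M - M.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n, d = 6, 4                      # tokens, feature dimension (illustrative)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
tau = 1.0                        # temperature (learned in the paper)

# d x d attention over features, built from the cross-covariance of K and Q
A_xc = softmax(l2_normalize(K).T @ l2_normalize(Q) / tau, axis=0)
out = V @ A_xc                   # mixes features, not tokens
```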
Given the following machine learning model name: IFNet, provide a description of the model
**IFNet** is an architecture for video frame interpolation that adopts a coarse-to-fine strategy with progressively increased resolutions: it iteratively updates intermediate flows and a soft fusion mask via successive [IFBlocks](https://paperswithcode.com/method/ifblock). Conceptually, according to the iteratively updated flow fields, we can move corresponding pixels from the two input frames to the same location in a latent intermediate frame and use a fusion mask to combine pixels from the two input frames. Unlike most previous optical flow models, IFBlocks do not contain expensive operators like cost volume or forward warping and use 3 × 3 [convolution](https://paperswithcode.com/method/convolution) and deconvolution as building blocks.
Given the following machine learning model name: PatchAugment: Local Neighborhood Augmentation in Point Cloud Classification, provide a description of the model
Recent deep neural network models trained on smaller and less diverse datasets use data augmentation to alleviate limitations such as overfitting, reduced robustness, and lower generalization. Methods using 3D datasets are among the most common to use data augmentation techniques such as random point drop, scaling, translation, rotations, and jittering. However, these data augmentation techniques are fixed and are often applied to the entire object, ignoring the object’s local geometry. Different local neighborhoods on the object surface hold a different amount of geometric complexity. Applying the same data augmentation techniques at the object level is less effective in augmenting local neighborhoods with complex structures. This paper presents PatchAugment, a data augmentation framework to apply different augmentation techniques to the local neighborhoods. Our experimental studies on PointNet++ and DGCNN models demonstrate the effectiveness of PatchAugment on the task of 3D Point Cloud Classification. We evaluated our technique against these models using four benchmark datasets, ModelNet40 (synthetic), ModelNet10 (synthetic), SHREC’16 (synthetic), and ScanObjectNN (real-world). [[ICCVW 2021]](https://openaccess.thecvf.com/content/ICCV2021W/DLGC/papers/Sheshappanavar_PatchAugment_Local_Neighborhood_Augmentation_in_Point_Cloud_Classification_ICCVW_2021_paper.pdf) PatchAugment: Local Neighborhood Augmentation in Point Cloud Classification. [[Code]](https://github.com/VimsLab/PatchAugment)
Given the following machine learning model name: PAR Transformer, provide a description of the model
**PAR Transformer** is a [Transformer](https://paperswithcode.com/methods/category/transformers) model that uses 63% fewer [self-attention blocks](https://paperswithcode.com/method/scaled), replacing them with [feed-forward blocks](https://paperswithcode.com/method/position-wise-feed-forward-layer), while retaining test accuracies. It is based on the [Transformer-XL](https://paperswithcode.com/method/transformer-xl) architecture and uses [neural architecture search](https://paperswithcode.com/task/architecture-search) to find an efficient pattern of blocks in the transformer architecture.
Given the following machine learning model name: Lbl2TransformerVec, provide a description of the model
Given the following machine learning model name: TSRUp, provide a description of the model
**TSRUp**, or **Transformation-based Spatial Recurrent Unit p**, is a modification of a [ConvGRU](https://paperswithcode.com/method/cgru) used in the [TriVD-GAN](https://paperswithcode.com/method/trivd-gan) architecture for video generation. It largely follows [TSRUc](https://paperswithcode.com/method/tsruc), but computes $\theta$, $u$ and $c$ in parallel given $x\_{t}$ and $h\_{t-1}$, yielding the following replacement for the $c$ update equation: $$ c = \rho\left(W\_{c} \star\_{n}\left[h\_{t-1}; x\_{t}\right] + b\_{c} \right) $$ In these equations $\sigma$ and $\rho$ are the elementwise sigmoid and [ReLU](https://paperswithcode.com/method/relu) functions respectively and the $\star\_{n}$ represents a [convolution](https://paperswithcode.com/method/convolution) with a kernel of size $n \times n$. Brackets are used to represent a feature concatenation.
Given the following machine learning model name: Class Activation Guided Attention Mechanism (CAGAM), provide a description of the model
CAGAM is a form of spatial attention mechanism that propagates attention from known context features to unknown context features, thereby enhancing the unknown context for relevant pattern discovery. Usually the known context feature is a class activation map ([CAM](https://paperswithcode.com/method/cam)).
Given the following machine learning model name: Attention Gate, provide a description of the model
Attention gate focuses on targeted regions while suppressing feature activations in irrelevant regions. Given the input feature map $X$ and the gating signal $G\in \mathbb{R}^{C'\times H\times W}$ which is collected at a coarse scale and contains contextual information, the attention gate uses additive attention to obtain the gating coefficient. Both the input $X$ and the gating signal are first linearly mapped to an $\mathbb{R}^{F\times H\times W}$ dimensional space, and then the output is squeezed in the channel domain to produce a spatial attention weight map $ S \in \mathbb{R}^{1\times H\times W}$. The overall process can be written as \begin{align} S &= \sigma(\varphi(\delta(\phi_x(X)+\phi_g(G)))) \end{align} \begin{align} Y &= S X \end{align} where $\varphi$, $\phi_x$ and $\phi_g$ are linear transformations implemented as $1\times 1$ convolutions. The attention gate guides the model's attention to important regions while suppressing feature activation in unrelated areas. It substantially enhances the representational power of the model without a significant increase in computing cost or number of model parameters due to its lightweight design. It is general and modular, making it simple to use in various CNN models.
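A numpy sketch of the gating computation, with the $1\times 1$ convolutions written as per-pixel linear maps and $\delta$ taken as ReLU (channel sizes here are made up):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
C, Cg, F, H, W = 4, 6, 8, 5, 5            # illustrative channel/spatial sizes
X = rng.standard_normal((C, H, W))        # input feature map
G = rng.standard_normal((Cg, H, W))       # gating signal (coarse-scale context)

phi_x = rng.standard_normal((F, C))       # 1x1 convs == per-pixel linear maps
phi_g = rng.standard_normal((F, Cg))
phi_s = rng.standard_normal((1, F))       # squeeze channels down to 1

mix = np.maximum(np.einsum('fc,chw->fhw', phi_x, X) +
                 np.einsum('fc,chw->fhw', phi_g, G), 0.0)  # delta = ReLU
S = sigmoid(np.einsum('of,fhw->ohw', phi_s, mix))          # 1 x H x W weights
Y = S * X                                                  # gated feature map
```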
Given the following machine learning model name: CenterNet, provide a description of the model
**CenterNet** is a one-stage object detector that detects each object as a triplet, rather than a pair, of keypoints. It utilizes two customized modules named [cascade corner pooling](https://paperswithcode.com/method/cascade-corner-pooling) and [center pooling](https://paperswithcode.com/method/center-pooling), which play the roles of enriching information collected by both top-left and bottom-right corners and providing more recognizable information at the central regions, respectively. The intuition is that, if a predicted bounding box has a high IoU with the ground-truth box, then the probability that the center keypoint in its central region is predicted as the same class is high, and vice versa. Thus, during inference, after a proposal is generated as a pair of corner keypoints, we determine if the proposal is indeed an object by checking if there is a center keypoint of the same class falling within its central region.
Given the following machine learning model name: Neural Additive Model, provide a description of the model
**Neural Additive Models (NAMs)** make restrictions on the structure of neural networks, which yields a family of models that are inherently interpretable while suffering little loss in prediction accuracy when applied to tabular data. Methodologically, NAMs belong to a larger model family called Generalized Additive Models (GAMs). NAMs learn a linear combination of networks that each attend to a single input feature: each $f\_{i}$ in the traditional GAM formulation is parametrized by a neural network. These networks are trained jointly using backpropagation and can learn arbitrarily complex shape functions. Interpreting NAMs is easy as the impact of a feature on the prediction does not rely on the other features and can be understood by visualizing its corresponding shape function (e.g., plotting $f\_{i}\left(x\_{i}\right)$ vs. $x\_{i}$).
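A toy numpy sketch of the additive structure (the per-feature networks here are tiny illustrative MLPs with random weights, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def shape_fn(params, x):
    """A tiny one-hidden-layer net f_i(x_i) for a single scalar feature."""
    W1, b1, W2 = params
    return np.maximum(x * W1 + b1, 0.0) @ W2   # ReLU hidden layer -> scalar

n_features, hidden = 3, 4
nets = [(rng.standard_normal(hidden), rng.standard_normal(hidden),
         rng.standard_normal(hidden)) for _ in range(n_features)]
bias = 0.1

x = rng.standard_normal(n_features)
# NAM prediction: bias + sum of independent per-feature shape functions;
# each term shape_fn(p, x[i]) is exactly what gets plotted for interpretation
y = bias + sum(shape_fn(p, x[i]) for i, p in enumerate(nets))
```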
Given the following machine learning model name: Agglomerative Contextual Decomposition, provide a description of the model
**Agglomerative Contextual Decomposition (ACD)** is an interpretability method that produces hierarchical interpretations for a single prediction made by a neural network, by scoring interactions and building them into a tree. Given a prediction from a trained neural network, ACD produces a hierarchical clustering of the input features, along with the contribution of each cluster to the final prediction. This hierarchy is optimized to identify clusters of features that the DNN learned are predictive.
Given the following machine learning model name: Gather-Excite Networks, provide a description of the model
GENet combines gathering and excitation operations. In the first step, it aggregates input features over large neighborhoods and models the relationship between different spatial locations. In the second step, it generates an attention map of the same size as the input feature map using interpolation, and each position in the input feature map is then scaled by the corresponding element of the attention map.
Given the following machine learning model name: Ontology, provide a description of the model
Given the following machine learning model name: Griffin-Lim Algorithm, provide a description of the model
The **Griffin-Lim Algorithm (GLA)** is a phase reconstruction method based on the redundancy of the short-time Fourier transform. It promotes the consistency of a spectrogram by iterating two projections, where a spectrogram is said to be consistent when its inter-bin dependency owing to the redundancy of STFT is retained. GLA is based only on the consistency and does not take any prior knowledge about the target signal into account. This algorithm expects to recover a complex-valued spectrogram, which is consistent and maintains the given amplitude $\mathbf{A}$, by the following alternative projection procedure: $$ \mathbf{X}^{[m+1]} = P\_{\mathcal{C}}\left(P\_{\mathcal{A}}\left(\mathbf{X}^{[m]}\right)\right) $$ where $\mathbf{X}$ is a complex-valued spectrogram updated through the iteration, $P\_{\mathcal{S}}$ is the metric projection onto a set $\mathcal{S}$, and $m$ is the iteration index. Here, $\mathcal{C}$ is the set of consistent spectrograms, and $\mathcal{A}$ is the set of spectrograms whose amplitude is the same as the given one. The metric projections onto these sets $\mathcal{C}$ and $\mathcal{A}$ are given by: $$ P\_{\mathcal{C}}(\mathbf{X}) = \mathcal{GG}^{†}\mathbf{X} $$ $$ P\_{\mathcal{A}}(\mathbf{X}) = \mathbf{A} \odot \mathbf{X} \oslash |\mathbf{X}| $$ where $\mathcal{G}$ represents STFT, $\mathcal{G}^{†}$ is the pseudo inverse of STFT (iSTFT), $\odot$ and $\oslash$ are element-wise multiplication and division, respectively, and division by zero is replaced by zero. GLA is obtained as an algorithm for the following optimization problem: $$ \min\_{\mathbf{X}} || \mathbf{X} - P\_{\mathcal{C}}\left(\mathbf{X}\right) ||^{2}\_{\text{Fro}} \text{ s.t. } \mathbf{X} \in \mathcal{A} $$ where $ || · ||\_{\text{Fro}}$ is the Frobenius norm. This equation minimizes the energy of the inconsistent components under the constraint on amplitude which must be equal to the given one. 
Although GLA has been widely utilized because of its simplicity, GLA often involves many iterations until it converges to a certain spectrogram and results in low reconstruction quality. This is because the cost function only requires the consistency, and the characteristics of the target signal are not taken into account.
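A minimal numpy sketch of the alternating projections (the frame parameters, iteration count, and the simple least-squares overlap-add inverse standing in for $\mathcal{G}^{†}$ are all illustrative choices):

```python
import numpy as np

N, H = 256, 128                   # frame length and hop (50% overlap)
win = np.hanning(N)

def stft(x):
    frames = [x[i:i + N] * win for i in range(0, len(x) - N + 1, H)]
    return np.fft.rfft(np.array(frames), axis=1)

def istft(X, length):
    """Least-squares overlap-add inverse: a stand-in for the pseudo inverse."""
    frames = np.fft.irfft(X, n=N, axis=1)
    x, norm = np.zeros(length), np.zeros(length)
    for k, f in enumerate(frames):
        x[k * H:k * H + N] += f * win
        norm[k * H:k * H + N] += win ** 2
    return x / np.maximum(norm, 1e-12)

rng = np.random.default_rng(0)
target = rng.standard_normal(2048)
A = np.abs(stft(target))                            # the given amplitude

X = A * np.exp(2j * np.pi * rng.random(A.shape))    # random initial phase
for _ in range(100):
    X = stft(istft(X, 2048))                        # P_C: consistency projection
    X = A * X / np.maximum(np.abs(X), 1e-12)        # P_A: restore amplitudes
```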
Given the following machine learning model name: Pyramidal Residual Unit, provide a description of the model
A **Pyramidal Residual Unit** is a type of residual unit where the number of channels gradually increases as a function of the depth at which the layer occurs, resembling a pyramid whose shape gradually widens from the top downwards. It was introduced as part of the [PyramidNet](https://paperswithcode.com/method/pyramidnet) architecture.
Given the following machine learning model name: LayerDrop, provide a description of the model
**LayerDrop** is a form of structured [dropout](https://paperswithcode.com/method/dropout) for [Transformer](https://paperswithcode.com/method/transformer) models which has a regularization effect during training and allows for efficient pruning at inference time. It randomly drops layers from the Transformer according to an "every other" strategy, where pruning with a rate $p$ means dropping the layers at depth $d$ such that $d \equiv 0 \pmod{\left\lfloor \frac{1}{p} \right\rfloor}$.
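The pruning rule can be sketched as follows (a hypothetical helper, not the authors' implementation):

```python
import math

def layers_to_prune(num_layers, p):
    """Depths d to drop under the 'every other' strategy: d ≡ 0 (mod ⌊1/p⌋)."""
    step = math.floor(1 / p)
    return [d for d in range(num_layers) if d % step == 0]

# p = 0.5 -> floor(1/p) = 2 -> drop every other layer
assert layers_to_prune(8, 0.5) == [0, 2, 4, 6]
assert layers_to_prune(8, 0.25) == [0, 4]
```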
Given the following machine learning model name: STAC, provide a description of the model
**STAC** is a semi-supervised framework for visual object detection along with a data augmentation strategy. STAC deploys highly confident pseudo labels of localized objects from an unlabeled image and updates the model by enforcing consistency via strong augmentations. It generates pseudo labels (i.e., bounding boxes and their class labels) for unlabeled data using test-time inference, including NMS, of the teacher model trained with labeled data. It then computes an unsupervised loss with respect to pseudo labels whose confidence scores are above a threshold $\tau$. The strong augmentations are applied for augmentation consistency during model training. Target boxes are augmented when global geometric transformations are used.
Given the following machine learning model name: FastSGT, provide a description of the model
**Fast Schema Guided Tracker**, or **FastSGT**, is a fast and robust [BERT](https://paperswithcode.com/method/bert)-based model for state tracking in goal-oriented dialogue systems. The model employs carry-over mechanisms for transferring the values between slots, enabling switching between services and accepting the values offered by the system during dialogue. It also uses [multi-head attention](https://paperswithcode.com/method/multi-head-attention) projections in some of the decoders to have a better modelling of the encoder outputs. The model architecture is illustrated in the Figure. It consists of four main modules: 1-Utterance Encoder, 2-Schema Encoder, 3-State Decoder, and 4-State Tracker. The first three modules constitute the NLU component and are based on neural networks, whereas the state tracker is a rule-based module. [BERT](https://paperswithcode.com/method/bert) was used for both encoders in the model. The Utterance Encoder is a BERT model which encodes the user and system utterances at each turn. The Schema Encoder is also a BERT model which encodes the schema descriptions of intents, slots, and values into schema embeddings. These schema embeddings help the decoders to transfer or share knowledge between different services by having some language understanding of each slot, intent, or value. The schema and utterance embeddings are passed to the State Decoder - a multi-task module. This module consists of five sub-modules producing the information necessary to track the state of the dialogue. Finally, the State Tracker module takes the previous state along with the current outputs of the State Decoder and predicts the current state of the dialogue by aggregating and summarizing the information across turns.
Given the following machine learning model name: Spectral Normalization, provide a description of the model
**Spectral Normalization** is a normalization technique for generative adversarial networks, used to stabilize training of the discriminator. Spectral normalization has the convenient property that the Lipschitz constant is the only hyper-parameter to be tuned. It controls the Lipschitz constant of the discriminator $f$ by constraining the spectral norm of each layer $g : \textbf{h}\_{in} \rightarrow \textbf{h}_{out}$. The Lipschitz norm $\Vert{g}\Vert\_{\text{Lip}}$ is equal to $\sup\_{\textbf{h}}\sigma\left(\nabla{g}\left(\textbf{h}\right)\right)$, where $\sigma\left(A\right)$ is the spectral norm of the matrix $A$ ($L\_{2}$ matrix norm of $A$): $$ \sigma\left(A\right) = \max\_{\textbf{h}:\textbf{h}\neq{0}}\frac{\Vert{A\textbf{h}}\Vert\_{2}}{\Vert\textbf{h}\Vert\_{2}} = \max\_{\Vert\textbf{h}\Vert\_{2}\leq{1}}{\Vert{A\textbf{h}}\Vert\_{2}} $$ which is equivalent to the largest singular value of $A$. Therefore for a [linear layer](https://paperswithcode.com/method/linear-layer) $g\left(\textbf{h}\right) = W\textbf{h}$ the norm is given by $\Vert{g}\Vert\_{\text{Lip}} = \sup\_{\textbf{h}}\sigma\left(\nabla{g}\left(\textbf{h}\right)\right) = \sup\_{\textbf{h}}\sigma\left(W\right) = \sigma\left(W\right) $. Spectral normalization normalizes the spectral norm of the weight matrix $W$ so it satisfies the Lipschitz constraint $\sigma\left(W\right) = 1$: $$ \bar{W}\_{\text{SN}}\left(W\right) = W / \sigma\left(W\right) $$
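A numpy sketch that estimates $\sigma\left(W\right)$ by power iteration, as is commonly done in practice (the iteration count is arbitrary; `np.linalg.svd` would be exact):

```python
import numpy as np

def spectral_normalize(W, n_iter=30):
    """Return W / sigma(W), with sigma estimated by power iteration."""
    u = np.random.default_rng(0).standard_normal(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v           # largest singular value of W
    return W / sigma

W = np.random.default_rng(1).standard_normal((5, 3))
W_sn = spectral_normalize(W)
# the normalized matrix now has spectral norm approximately 1
assert np.isclose(np.linalg.svd(W_sn, compute_uv=False)[0], 1.0, atol=1e-3)
```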
Given the following machine learning model name: DiffAugment, provide a description of the model
**Differentiable Augmentation (DiffAugment)** is a set of differentiable image transformations used to augment data during [GAN](https://paperswithcode.com/method/gan) training. The transformations are applied to the real and generated images. It enables the gradients to be propagated through the augmentation back to the generator, regularizes the discriminator without manipulating the target distribution, and maintains the balance of training dynamics. Three choices of transformation are preferred by the authors in their experiments: Translation, [CutOut](https://paperswithcode.com/method/cutout), and Color.
Given the following machine learning model name: Composite Fields, provide a description of the model
Represent and associate with a composite of primitive fields.
Given the following machine learning model name: Dense Connections, provide a description of the model
**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\_{\text{inputs}} \times n\_{\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network. $$h\_{l} = g\left(\textbf{W}^{T}h\_{l-1}\right)$$ where $g$ is an activation function. Image Source: Deep Learning by Goodfellow, Bengio and Courville
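As a minimal numpy example of the layer equation (sizes and the choice of $g$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 4, 3
W = rng.standard_normal((n_in, n_out))   # n_inputs * n_outputs parameters
h_prev = rng.standard_normal(n_in)       # h_{l-1}

g = np.tanh                              # any activation function
h = g(W.T @ h_prev)                      # h_l = g(W^T h_{l-1})
```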
Given the following machine learning model name: DistDGL, provide a description of the model
**DistDGL** is a system for training GNNs in a mini-batch fashion on a cluster of machines. It is based on the Deep Graph Library (DGL), a popular GNN development framework. DistDGL distributes the graph and its associated data (initial features and embeddings) across the machines and uses this distribution to derive a computational decomposition by following an owner-compute rule. DistDGL follows a synchronous training approach and allows ego-networks forming the mini-batches to include non-local nodes. To minimize the overheads associated with distributed computations, DistDGL uses a high-quality and light-weight mincut graph partitioning algorithm along with multiple balancing constraints. This allows it to reduce communication overheads and statically balance the computations. It further reduces the communication by replicating halo nodes and by using sparse embedding updates. The combination of these design choices allows DistDGL to train high-quality models while achieving high parallel efficiency and memory scalability.
Given the following machine learning model name: Kalman Optimization for Value Approximation, provide a description of the model
**Kalman Optimization for Value Approximation**, or **KOVA**, is a general framework for addressing uncertainties while approximating value-based functions in deep RL domains. KOVA minimizes a regularized objective function that concerns both parameter and noisy-return uncertainties. It is feasible when using non-linear approximation functions such as DNNs and can estimate the value in both on-policy and off-policy settings. It can be incorporated as a policy evaluation component in policy optimization algorithms.
Given the following machine learning model name: SENet, provide a description of the model
A **SENet** is a convolutional neural network architecture that employs squeeze-and-excitation blocks to enable the network to perform dynamic channel-wise feature recalibration.
Given the following machine learning model name: Test-time Local Converter, provide a description of the model
TLC converts the global operation into a local one so that it extracts representations based on a local spatial region of features, as in the training phase.
Given the following machine learning model name: RotatE, provide a description of the model
**RotatE** is a method for generating graph embeddings which is able to model and infer various relation patterns including: symmetry/antisymmetry, inversion, and composition. Specifically, the RotatE model defines each relation as a rotation from the source entity to the target entity in the complex vector space. The RotatE model is trained using a [self-adversarial negative sampling](https://paperswithcode.com/method/self-adversarial-negative-sampling) technique.
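A toy numpy sketch of the rotation idea (the distance form and embedding size here are for illustration only):

```python
import numpy as np

def rotate_distance(h, r_phase, t):
    """Small when the relation's rotation carries h onto t: h ∘ r ≈ t."""
    r = np.exp(1j * r_phase)            # unit-modulus complex rotation
    return np.linalg.norm(h * r - t)

rng = np.random.default_rng(0)
k = 8                                   # embedding dimension (illustrative)
h = rng.standard_normal(k) + 1j * rng.standard_normal(k)
phase = rng.uniform(-np.pi, np.pi, k)

t_true = h * np.exp(1j * phase)         # tail that exactly satisfies h ∘ r = t
t_rand = rng.standard_normal(k) + 1j * rng.standard_normal(k)

assert rotate_distance(h, phase, t_true) < rotate_distance(h, phase, t_rand)
```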
Given the following machine learning model name: BoundaryNet, provide a description of the model
**BoundaryNet** is a resizing-free approach for layout annotation. The variable-sized user selected region of interest is first processed by an attention-guided skip network. The network optimization is guided via Fast Marching distance maps to obtain a good quality initial boundary estimate and an associated feature representation. These outputs are processed by a Residual Graph [Convolution](https://paperswithcode.com/method/convolution) Network optimized using Hausdorff loss to obtain the final region boundary.
Given the following machine learning model name: TransE, provide a description of the model
**TransE** is an energy-based model that produces knowledge base embeddings. It models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. Relationships are represented as translations in the embedding space: if $\left(h, \mathcal{l}, t\right)$ holds, the embedding of the tail entity $t$ should be close to the embedding of the head entity $h$ plus some vector that depends on the relationship $\mathcal{l}$.
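A toy numpy example of the translation principle (vectors and the plain $L\_{2}$ distance are illustrative):

```python
import numpy as np

def transe_energy(h, l, t):
    """TransE energy: small when t is close to h + l."""
    return np.linalg.norm(h + l - t)

h = np.array([1.0, 0.0])      # toy 2-d head entity embedding
l = np.array([0.5, 0.5])      # the relation acts as a translation
t = np.array([1.5, 0.5])      # tail entity embedding

assert transe_energy(h, l, t) == 0.0           # the triple (h, l, t) holds
assert transe_energy(h, l, np.zeros(2)) > 1.0  # an implausible tail scores worse
```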
Given the following machine learning model name: Spatial Gating Unit, provide a description of the model
**Spatial Gating Unit**, or **SGU**, is a gating unit used in the [gMLP](https://paperswithcode.com/method/gmlp) architecture to capture spatial interactions. To enable cross-token interactions, it is necessary for the layer $s(\cdot)$ to contain a contraction operation over the spatial dimension. The layer $s(\cdot)$ is formulated as the output of linear gating: $$ s(Z)=Z \odot f\_{W, b}(Z) $$ where $\odot$ denotes element-wise multiplication. For training stability, the authors find it critical to initialize $W$ as near-zero values and $b$ as ones, meaning that $f\_{W, b}(Z) \approx 1$ and therefore $s(Z) \approx Z$ at the beginning of training. This initialization ensures each [gMLP](https://paperswithcode.com/method/gmlp) block behaves like a regular [FFN](https://paperswithcode.com/method/gmlp) at the early stage of training, where each token is processed independently, and only gradually injects spatial information across tokens during the course of learning. The authors find it further effective to split $Z$ into two independent parts $\left(Z\_{1}, Z\_{2}\right)$ along the channel dimension for the gating function and for the multiplicative bypass: $$ s(Z)=Z\_{1} \odot f\_{W, b}\left(Z\_{2}\right) $$ They also normalize the input to $f\_{W, b}$ which empirically improved the stability of large NLP models.
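A numpy sketch of the split gating variant and its near-identity initialization (sizes and the init scale are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 8                               # tokens, channels (illustrative)
Z = rng.standard_normal((n, d))

# split along channels: Z1 is the multiplicative bypass, Z2 feeds the gate
Z1, Z2 = Z[:, : d // 2], Z[:, d // 2 :]
W = 1e-3 * rng.standard_normal((n, n))    # initialized near zero
b = np.ones(n)                            # initialized at one -> f(Z2) ~ 1

f = W @ Z2 + b[:, None]                   # contraction over the token dimension
s = Z1 * f                                # s(Z) = Z1 ⊙ f_{W,b}(Z2)
# at this initialization the unit is close to the identity on Z1
assert np.allclose(s, Z1, atol=0.1)
```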
Given the following machine learning model name: nlogistic-sigmoid function, provide a description of the model
Nlogistic-sigmoid function (NLSIG) is a modern logistic-sigmoid function definition for modelling growth (or decay) processes. It features two logistic metrics (YIR and XIR) for monitoring growth from a two-dimensional (x-y axis) perspective.
Given the following machine learning model name: DenseNAS-B, provide a description of the model
**DenseNAS-B** is a mobile convolutional neural network discovered through the [DenseNAS](https://paperswithcode.com/method/densenas) [neural architecture search](https://paperswithcode.com/method/neural-architecture-search) method. The basic building blocks are MBConvs, or inverted bottleneck residual blocks, from the [MobileNet](https://paperswithcode.com/method/mobilenetv2) architectures.
Given the following machine learning model name: Sentence-BERT, provide a description of the model
Given the following machine learning model name: SAINT, provide a description of the model
**SAINT** is a hybrid deep learning approach to solving tabular data problems. SAINT performs attention over both rows and columns, and it includes an enhanced embedding method. The architecture, pre-training and training pipeline are as follows: - $L$ layers with 2 attention blocks each are used: one self-attention block and a novel intersample attention block that computes attention across samples. - For pre-training, this involves minimizing the contrastive and denoising losses between a given data point and its views generated by [CutMix](https://paperswithcode.com/method/cutmix) and [mixup](https://paperswithcode.com/method/mixup). During finetuning/regular training, data passes through an embedding layer and then the SAINT model. Lastly, the contextual embeddings from SAINT are used to pass only the embedding corresponding to the CLS token through an [MLP](https://paperswithcode.com/method/feedforward-network) to obtain the final prediction.
Given the following machine learning model name: 1-Dimensional Convolutional Neural Networks, provide a description of the model
1D Convolutional Neural Networks are similar to the well-known and more established 2D Convolutional Neural Networks. 1D Convolutional Neural Networks are mainly used on text and 1D signals.
Given the following machine learning model name: Early exiting using confidence measures, provide a description of the model
Exit whenever the model is confident enough, allowing early exiting from hidden layers.
Given the following machine learning model name: Self-Cure Network, provide a description of the model
**Self-Cure Network**, or **SCN**, is a method for suppressing uncertainties for large-scale facial expression recognition, preventing deep networks from overfitting uncertain facial images. Specifically, SCN suppresses the uncertainty from two different aspects: 1) a self-attention mechanism over mini-batch to weight each training sample with a ranking regularization, and 2) a careful relabeling mechanism to modify the labels of these samples in the lowest-ranked group.
Given the following machine learning model name: Inception-A, provide a description of the model
**Inception-A** is an image model block used in the [Inception-v4](https://paperswithcode.com/method/inception-v4) architecture.
Given the following machine learning model name: ASLFeat, provide a description of the model
**ASLFeat** is a convolutional neural network for learning local features that uses deformable convolutional networks to densely estimate and apply local transformation. It also takes advantage of the inherent feature hierarchy to restore spatial resolution and low-level details for accurate keypoint localization. Finally, it uses a peakiness measurement to relate feature responses and derive more indicative detection scores.
Given the following machine learning model name: PixLoc, provide a description of the model
**PixLoc** is a scene-agnostic neural network that estimates an accurate 6-DoF pose from an image and a 3D model. It is based on the direct alignment of multiscale deep features, casting camera localization as metric learning. PixLoc learns strong data priors by end-to-end training from pixels to pose and exhibits exceptional generalization to new scenes by separating model parameters and scene geometry. As the CNN never sees 3D points, PixLoc can generalize to any 3D structure available. This includes sparse SfM point clouds, dense depth maps from stereo or RGBD sensors, meshes, Lidar scans, but also lines and other primitives.
Given the following machine learning model name: Tree-structured Parzen Estimator Approach (TPE), provide a description of the model
The **Tree-structured Parzen Estimator (TPE)** is a sequential model-based (Bayesian) optimization approach for hyperparameter search. Rather than modeling $p(y\mid{x})$ directly, TPE models $p(x\mid{y})$ with two densities: $l(x)$, fit to the configurations whose objective values were better than a chosen quantile, and $g(x)$, fit to the rest. New candidates are drawn from $l(x)$ and ranked by the ratio $l(x)/g(x)$, which is equivalent to maximizing Expected Improvement.
Given the following machine learning model name: Scattering Transform, provide a description of the model
A wavelet **scattering transform** computes a translation invariant representation, which is stable to deformation, using a deep [convolution](https://paperswithcode.com/method/convolution) network architecture. It computes non-linear invariants with modulus and averaging pooling functions. It helps to eliminate the image variability due to translation and is stable to deformations. Image source: [Bruna and Mallat](https://arxiv.org/pdf/1203.1513v2.pdf)
Given the following machine learning model name: Spectral Tensor Train Parameterization, provide a description of the model
Given the following machine learning model name: Dilated Sliding Window Attention, provide a description of the model
**Dilated Sliding Window Attention** is an attention pattern for attention-based models. It was proposed as part of the [Longformer](https://paperswithcode.com/method/longformer) architecture. It is motivated by the fact that non-sparse attention in the original [Transformer](https://paperswithcode.com/method/transformer) formulation has a [self-attention component](https://paperswithcode.com/method/scaled) with $O\left(n^{2}\right)$ time and memory complexity where $n$ is the input sequence length and thus, is not efficient to scale to long inputs. Compared to a [Sliding Window Attention](https://paperswithcode.com/method/sliding-window-attention) pattern, we can further increase the receptive field without increasing computation by making the sliding window "dilated". This is analogous to [dilated CNNs](https://paperswithcode.com/method/dilated-convolution) where the window has gaps of size dilation $d$. Assuming a fixed $d$ and $w$ for all layers, the receptive field is $l × d × w$, which can reach tens of thousands of tokens even for small values of $d$.
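The attention pattern described above can be made concrete as a boolean mask. The following is a minimal pure-Python sketch (not the Longformer implementation): token $i$ may attend to token $j$ if $j$ lies within $w/2$ dilated steps on each side and the offset is a multiple of the dilation $d$.

```python
def dilated_window_mask(n, w, d):
    """n: sequence length, w: window size (w // 2 attended positions per side),
    d: dilation (gap size). Returns an n x n list of booleans."""
    half = w // 2
    mask = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            offset = j - i
            # Within the dilated span and aligned to the dilation grid.
            if abs(offset) <= half * d and offset % d == 0:
                mask[i][j] = True
    return mask

mask = dilated_window_mask(n=8, w=4, d=2)
# With d=2, token 4 attends to positions {0, 2, 4, 6}: the same number of
# attended tokens as d=1, but spread over a receptive field twice as wide.
```

Setting `d=1` recovers the plain sliding window pattern over contiguous neighbours.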
Given the following machine learning model name: GPT-3, provide a description of the model
**GPT-3** is an autoregressive [transformer](https://paperswithcode.com/methods/category/transformers) model with 175 billion parameters. It uses the same architecture/model as [GPT-2](https://paperswithcode.com/method/gpt-2), including the modified initialization, pre-normalization, and reversible tokenization, with the exception that GPT-3 uses alternating dense and locally banded sparse attention patterns in the layers of the [transformer](https://paperswithcode.com/method/transformer), similar to the [Sparse Transformer](https://paperswithcode.com/method/sparse-transformer).
Given the following machine learning model name: MPNet, provide a description of the model
**MPNet** is a pre-training method for language models that combines masked language modeling (MLM) and permuted language modeling (PLM) in one view. It takes the dependency among the predicted tokens into consideration through permuted language modeling and thus avoids the issue of [BERT](https://paperswithcode.com/method/bert). On the other hand, it takes the position information of all tokens as input, so the model sees the positions of the full sentence, alleviating the position discrepancy of [XLNet](https://paperswithcode.com/method/xlnet). The training objective of MPNet is: $$ \mathbb{E}\_{z\in{\mathcal{Z}\_{n}}} \sum^{n}\_{t=c+1}\log{P}\left(x\_{z\_{t}}\mid{x\_{z\_{<t}}}, M\_{z\_{{>}{c}}}; \theta\right) $$ As can be seen, MPNet conditions on ${x\_{z\_{<t}}}$ (the tokens preceding the current predicted token $x\_{z\_{t}}$) rather than only the non-predicted tokens $x\_{z\_{\leq c}}$ as in MLM; compared with PLM, MPNet takes more information (i.e., the mask symbol $[M]$ in positions $z\_{>c}$) as input. Although the objective seems simple, it is challenging to implement the model efficiently. For details, see the paper.
Given the following machine learning model name: Base Boosting, provide a description of the model
In the setting of multi-target regression, base boosting permits us to incorporate prior knowledge into the learning mechanism of gradient boosting (or Newton boosting, etc.). Namely, from the vantage of statistics, base boosting is a way of building the following additive expansion in a set of elementary basis functions: \begin{equation} h_{j}(X ; \{ \alpha_{j}, \theta_{j} \}) = X_{j} + \sum_{k=1}^{K_{j}} \alpha_{j,k} b(X ; \theta_{j,k}), \end{equation} where $X$ is an example from the domain $\mathcal{X},$ $\{\alpha_{j}, \theta_{j}\} = \{\alpha_{j,1},\dots, \alpha_{j,K_{j}},\theta_{j,1},\dots,\theta_{j,K_{j}}\}$ collects the expansion coefficients and parameter sets, $X_{j}$ is the image of $X$ under the $j$th coordinate function (a prediction from a user-specified model), $K_{j}$ is the number of basis functions in the linear sum, $b(X; \theta_{j,k})$ is a real-valued function of the example $X,$ characterized by a parameter set $\theta_{j,k}.$ The aforementioned additive expansion differs from the [standard additive expansion](https://projecteuclid.org/download/pdf_1/euclid.aos/1013203451): \begin{equation} h_{j}(X ; \{ \alpha_{j}, \theta_{j}\}) = \alpha_{j, 0} + \sum_{k=1}^{K_{j}} \alpha_{j,k} b(X ; \theta_{j,k}), \end{equation} as it replaces the constant offset value $\alpha_{j, 0}$ with a prediction from a user-specified model. In essence, this modification permits us to incorporate prior knowledge into the for loop of gradient boosting, as the for loop proceeds to build the linear sum by computing residuals that depend upon predictions from the user-specified model instead of the optimal constant model: $\mbox{argmin} \sum_{i=1}^{m_{train}} \ell_{j}(Y_{j}^{(i)}, c),$ where $m_{train}$ denotes the number of training examples, $\ell_{j}$ denotes a single-target loss function, and $c \in \mathbb{R}$ denotes a real number, e.g, $\mbox{argmin} \sum_{i=1}^{m_{train}} (Y_{j}^{(i)} - c)^{2} = \frac{\sum_{i=1}^{m_{train}} Y_{j}^{(i)}}{m_{train}}.$
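The difference between the two expansions can be illustrated with squared loss, where the optimal constant is the target mean. The toy sketch below (illustrative helper names, single target, no basis functions fitted) shows the only change base boosting makes to the start of the for loop: the first residuals are computed against a user-specified model's predictions instead of the constant offset.

```python
def fit_constant(y):
    # argmin_c sum_i (y_i - c)^2  ->  the mean of y, as in the last equation above.
    return sum(y) / len(y)

def boosting_residuals(y, offset):
    # Negative gradient of squared loss at the current fit: y_i - offset_i.
    return [yi - oi for yi, oi in zip(y, offset)]

y = [3.0, 5.0, 7.0]
prior = [2.5, 5.5, 6.0]  # predictions X_j from a user-specified model

standard_start = [fit_constant(y)] * len(y)   # optimal constant model: 5.0
base_start = prior                            # base boosting: prior knowledge

print(boosting_residuals(y, standard_start))  # [-2.0, 0.0, 2.0]
print(boosting_residuals(y, base_start))      # [0.5, -0.5, 1.0]
```

The subsequent rounds of gradient boosting proceed identically in both cases; only the offset that seeds the additive expansion differs.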
Given the following machine learning model name: Playstyle Distance, provide a description of the model
**Playstyle Distance** first discretizes observations and then computes the distance between action distributions under comparable cases (intersection states).
Given the following machine learning model name: Fawkes, provide a description of the model
**Fawkes** is an image cloaking system that helps individuals inoculate their images against unauthorized facial recognition models. Fawkes achieves this by helping users add imperceptible pixel-level changes ("cloaks") to their own photos before releasing them. When used to train facial recognition models, these "cloaked" images produce functional models that consistently cause normal images of the user to be misidentified.
Given the following machine learning model name: Grid R-CNN, provide a description of the model
**Grid R-CNN** is an object detection framework, where the traditional regression formulation is replaced by a grid point guided localization mechanism. Grid R-CNN divides the object bounding box region into grids and employs a fully convolutional network ([FCN](https://paperswithcode.com/method/fcn)) to predict the locations of grid points. Owing to the position sensitive property of fully convolutional architecture, Grid R-CNN maintains the explicit spatial information and grid points locations can be obtained in pixel level. When a certain number of grid points at specified location are known, the corresponding bounding box is definitely determined. Guided by the grid points, Grid R-CNN can determine more accurate object bounding box than regression method which lacks the guidance of explicit spatial information.
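The final "grid points determine the box" step can be sketched directly. (This is only the determination step; the actual Grid R-CNN predicts the grid points with FCN heatmaps and fuses information across neighbouring points, which is omitted here.)

```python
def box_from_grid_points(points):
    """Given predicted (x, y) pixel locations of the grid points, the
    bounding box is determined by the extreme coordinates."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

# Hypothetical 3x3 grid of predicted point locations.
grid = [(10, 20), (30, 20), (50, 20),
        (10, 40), (30, 40), (50, 40),
        (10, 60), (30, 60), (50, 60)]
print(box_from_grid_points(grid))  # (10, 20, 50, 60)
```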
Given the following machine learning model name: Universal Transformer, provide a description of the model
The **Universal Transformer** is a generalization of the [Transformer](https://paperswithcode.com/method/transformer) architecture. Universal Transformers combine the parallelizability and global receptive field of feed-forward sequence models like the Transformer with the recurrent inductive bias of [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks). They also utilise a dynamic per-position halting mechanism.
Given the following machine learning model name: QHAdam, provide a description of the model
The **Quasi-Hyperbolic Momentum Algorithm (QHM)** is a simple alteration of [momentum SGD](https://paperswithcode.com/method/sgd-with-momentum), averaging a plain [SGD](https://paperswithcode.com/method/sgd) step with a momentum step. **QHAdam** is a QH augmented version of [Adam](https://paperswithcode.com/method/adam), where we replace both of Adam's moment estimators with quasi-hyperbolic terms. QHAdam decouples the momentum term from the current gradient when updating the weights, and decouples the mean squared gradients term from the current squared gradient when updating the weights. In essence, it is a weighted average of the momentum and plain SGD, weighting the current gradient with an immediate discount factor $v\_{1}$ divided by a weighted average of the mean squared gradients and the current squared gradient, weighting the current squared gradient with an immediate discount factor $v\_{2}$. $$ \theta\_{t+1, i} = \theta\_{t, i} - \eta\left[\frac{\left(1-v\_{1}\right)\cdot{g\_{t}} + v\_{1}\cdot\hat{m}\_{t}}{\sqrt{\left(1-v\_{2}\right)g^{2}\_{t} + v\_{2}\cdot{\hat{v}\_{t}}} + \epsilon}\right], \forall{t} $$ It is recommended to set $v\_{2} = 1$ and $\beta\_{2}$ same as in Adam.
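The update rule above can be transcribed as a scalar sketch (variable names mirror the formula; `m_hat` and `v_hat` are Adam's bias-corrected first- and second-moment estimates, assumed already computed):

```python
import math

def qhadam_step(theta, g, m_hat, v_hat, lr=1e-3, v1=0.7, v2=1.0, eps=1e-8):
    # Numerator: weighted average of the current gradient and the momentum term.
    num = (1 - v1) * g + v1 * m_hat
    # Denominator: weighted average of the current squared gradient and the
    # mean-squared-gradient estimate.
    den = math.sqrt((1 - v2) * g * g + v2 * v_hat) + eps
    return theta - lr * num / den
```

Setting $v\_{1} = v\_{2} = 1$ makes both weighted averages collapse onto $\hat{m}\_{t}$ and $\hat{v}\_{t}$, recovering the plain Adam update.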
Given the following machine learning model name: Vokenization, provide a description of the model
**Vokenization** is an approach for extrapolating multimodal alignments to language-only data by contextually mapping language tokens to their related images ("vokens") by retrieval. Instead of directly supervising the language model with visually grounded language datasets (e.g., MS COCO), these relatively small datasets are used to train the vokenization processor (i.e. the vokenizer). Vokens are generated for large language corpora (e.g., English Wikipedia), and the visually-supervised language model takes the input supervision from these large datasets, thus bridging the gap between different data sources.
Given the following machine learning model name: ACER, provide a description of the model
**ACER**, or **Actor Critic with Experience Replay**, is an actor-critic deep reinforcement learning agent with [experience replay](https://paperswithcode.com/method/experience-replay). It can be seen as an off-policy extension of [A3C](https://paperswithcode.com/method/a3c), where the off-policy estimator is made feasible by: - Using [Retrace](https://paperswithcode.com/method/retrace) Q-value estimation. - Using truncated importance sampling with bias correction. - Using a trust region policy optimization method. - Using a [stochastic dueling network](https://paperswithcode.com/method/stochastic-dueling-network) architecture.
Given the following machine learning model name: Visual Geometry Group 19 Layer CNN, provide a description of the model
**VGG-19** is a 19-layer convolutional neural network from the Visual Geometry Group (VGG) at the University of Oxford. Like the other [VGG](https://paperswithcode.com/method/vgg) variants, it stacks small $3\times3$ convolution filters in blocks of increasing channel width, interleaved with max pooling; VGG-19 has 16 convolutional layers followed by 3 fully connected layers.
Given the following machine learning model name: Side-Aware Boundary Localization, provide a description of the model
**Side-Aware Boundary Localization (SABL)** is a methodology for precise localization in object detection where each side of the bounding box is respectively localized with a dedicated network branch. Empirically, the authors observe that when they manually annotate a bounding box for an object, it is often much easier to align each side of the box to the object boundary than to move the box as a whole while tuning the size. Inspired by this observation, in SABL each side of the bounding box is respectively positioned based on its surrounding context. As shown in the Figure, the authors devise a bucketing scheme to improve the localization precision. For each side of a bounding box, this scheme divides the target space into multiple buckets, then determines the bounding box via two steps. Specifically, it first searches for the correct bucket, i.e., the one in which the boundary resides. Leveraging the centerline of the selected buckets as a coarse estimate, fine regression is then performed by predicting the offsets. This scheme allows very precise localization even in the presence of displacements with large variance. Moreover, to preserve precisely localized bounding boxes in the non-maximal suppression procedure, the authors also propose to adjust the classification score based on the bucketing confidences, which leads to further performance gains.
Given the following machine learning model name: Adaptively Sparse Transformer, provide a description of the model
The **Adaptively Sparse Transformer** is a type of [Transformer](https://paperswithcode.com/method/transformer) in which the softmax normalization in the attention heads is replaced with $\alpha$-entmax, a sparse mapping whose exponent $\alpha$ is learned per head. This lets each attention head adapt its own degree of sparsity, ranging from dense softmax-like behaviour to assigning exactly zero weight to irrelevant tokens.
Given the following machine learning model name: EfficientDet, provide a description of the model
**EfficientDet** is a type of object detection model, which utilizes several optimization and backbone tweaks, such as the use of a [BiFPN](https://paperswithcode.com/method/bifpn), and a compound scaling method that uniformly scales the resolution, depth, and width for all backbones, feature networks, and box/class prediction networks at the same time.
Given the following machine learning model name: Attention Feature Filters, provide a description of the model
An attention mechanism for content-based filtering of multi-level features. For example, recurrent features obtained by forward and backward passes of a bidirectional RNN block can be combined using attention feature filters, with unprocessed input features/embeddings as queries and recurrent features as keys/values.
Given the following machine learning model name: StyleALAE, provide a description of the model
**StyleALAE** is a type of [adversarial latent autoencoder](https://paperswithcode.com/method/alae) that uses a [StyleGAN](https://paperswithcode.com/method/stylegan) based generator. For this the latent space $\mathcal{W}$ plays the same role as the intermediate latent space in [StyleGAN](https://paperswithcode.com/method/stylegan). Therefore, the $G$ network becomes the part of StyleGAN depicted on the right side of the Figure. The left side is a novel architecture that we designed to be the encoder $E$. The StyleALAE encoder has [Instance Normalization](https://paperswithcode.com/method/instance-normalization) (IN) layers to extract multiscale style information that is combined into a latent code $w$ via a learnable multilinear map.
Given the following machine learning model name: Momentumized, adaptive, dual averaged gradient, provide a description of the model
The MADGRAD method contains a series of modifications to the [AdaGrad](https://paperswithcode.com/method/adagrad)-DA method to improve its performance on deep learning optimization problems. It gives state-of-the-art generalization performance across a diverse set of problems, including those that [Adam](https://paperswithcode.com/method/adam) normally under-performs on.
Given the following machine learning model name: Deep Graph Convolutional Neural Network, provide a description of the model
**DGCNN** involves neural networks that read graphs directly and learn a classification function. There are two main challenges: 1) how to extract useful features characterizing the rich information encoded in a graph for classification purposes, and 2) how to sequentially read a graph in a meaningful and consistent order. To address the first challenge, we design a localized graph convolution model and show its connection with two graph kernels. To address the second challenge, we design a novel SortPooling layer which sorts graph vertices in a consistent order so that traditional neural networks can be trained on the graphs. Description and image from: [An End-to-End Deep Learning Architecture for Graph Classification](https://muhanzhang.github.io/papers/AAAI_2018_DGCNN.pdf)
Given the following machine learning model name: ViP-DeepLab, provide a description of the model
**ViP-DeepLab** is a model for depth-aware video panoptic segmentation. It extends Panoptic-[DeepLab](https://paperswithcode.com/method/deeplab) by adding a depth prediction head to perform monocular depth estimation and a next-frame instance branch which regresses to the object centers in frame $t$ for frame $t + 1$. This allows the model to jointly perform video panoptic segmentation and monocular depth estimation.
Given the following machine learning model name: FCOS, provide a description of the model
**FCOS** is an anchor-box free, proposal free, single-stage object detection model. By eliminating the predefined set of anchor boxes, FCOS avoids computation related to anchor boxes such as calculating overlapping during training. It also avoids all hyper-parameters related to anchor boxes, which are often very sensitive to the final detection performance.
Given the following machine learning model name: Boom Layer, provide a description of the model
A **Boom Layer** is a type of feedforward layer that is closely related to the feedforward layers used in Transformers. The layer takes a vector of the form $v \in \mathbb{R}^{H}$ and uses a matrix multiplication with a GeLU activation to produce a vector $u \in \mathbb{R}^{N\times{H}}$. We then break $u$ into $N$ vectors and sum those together, producing $w \in \mathbb{R}^{H}$. This minimizes computation and removes an entire matrix of parameters compared to traditional down-projection layers. The Figure to the right shows the Boom Layer used in the context of [SHA-RNN](https://paperswithcode.com/method/sha-rnn) from the original paper.
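The shape bookkeeping of the split-and-sum step can be sketched in pure Python (the up-projection matrix is omitted; we simply apply a GeLU to an illustrative pre-activation vector of length $N \times H$):

```python
import math

def gelu(x):
    # Exact GeLU via the Gaussian CDF.
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def boom_split_sum(u, H):
    """u: list of length N*H (output of the up-projection + GeLU).
    Splits u into N chunks of size H and sums them into w of length H."""
    N = len(u) // H
    w = [0.0] * H
    for n in range(N):
        for i in range(H):
            w[i] += u[n * H + i]
    return w

u = [gelu(x) for x in [1.0, -1.0, 2.0, 0.5, 0.0, -2.0]]  # N*H = 6, H = 2
w = boom_split_sum(u, H=2)  # length-2 output
```

The sum replaces the $NH \times H$ down-projection matrix a conventional feedforward layer would use, which is the parameter saving the description refers to.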
Given the following machine learning model name: The Ikshana Hypothesis of Human Scene Understanding Mechanism, provide a description of the model
Given the following machine learning model name: Object Dropout, provide a description of the model
**Object Dropout** is a technique that perturbs object features in an image for [noisy student](https://paperswithcode.com/method/noisy-student) training. It performs on par with standard data augmentation techniques while being significantly faster than the latter to implement.
Given the following machine learning model name: Diffusion-Convolutional Neural Networks, provide a description of the model
The **diffusion-convolutional neural network (DCNN)** is a model for graph-structured data. Through the introduction of a diffusion-convolution operation, diffusion-based representations can be learned from graph-structured data and used as an effective basis for node classification. Description and image from: [Diffusion-Convolutional Neural Networks](https://arxiv.org/pdf/1511.02136.pdf)
Given the following machine learning model name: Stacked Denoising Autoencoder, provide a description of the model
The Stacked Denoising Autoencoder (SdA) is an extension of the stacked autoencoder [Bengio07] and it was introduced in [Vincent08]. Denoising autoencoders can be stacked to form a deep network by feeding the latent representation (output code) of the [denoising autoencoder](https://paperswithcode.com/method/denoising-autoencoder) found on the layer below as input to the current layer. The unsupervised pre-training of such an architecture is done one layer at a time. Each layer is trained as a denoising autoencoder by minimizing the error in reconstructing its input (which is the output code of the previous layer). Once the first k layers are trained, we can train the k+1-th layer because we can now compute the code or latent representation from the layer below. Once all layers are pre-trained, the network goes through a second stage of training called fine-tuning. Here we consider supervised fine-tuning where we want to minimize prediction error on a supervised task. For this, we first add a [logistic regression](https://paperswithcode.com/method/logistic-regression) layer on top of the network (more precisely on the output code of the output layer). We then train the entire network as we would train a multilayer perceptron. At this point, we only consider the encoding parts of each auto-encoder. This stage is supervised, since now we use the target class during training. (See the Multilayer Perceptron for details on the multilayer perceptron.) This can be easily implemented in Theano, using the class defined previously for a denoising autoencoder. We can see the stacked denoising autoencoder as having two facades: a list of autoencoders, and an MLP. During pre-training we use the first facade, i.e., we treat our model as a list of autoencoders, and train each autoencoder separately. In the second stage of training, we use the second facade.
These two facades are linked because: * the autoencoders and the sigmoid layers of the MLP share parameters, and * the latent representations computed by intermediate layers of the MLP are fed as input to the autoencoders. Extracted from [webpage](http://deeplearning.net/tutorial/SdA.html) Image: [Jigar Bandaria](https://miro.medium.com/max/701/1*wbaL5CvUkVkZxlSUsRS5IQ.png) Webpage: [www.iro.umontreal.ca](http://www.iro.umontreal.ca/~pift6266/H10/notes/SdA.html) Paper: [Vincent, Larochelle, Bengio and Manzagol, Extracting and Composing Robust Features with Denoising Autoencoders](https://doi.org/10.1145/1390156.1390294)
Given the following machine learning model name: Meta Face Recognition, provide a description of the model
**Meta Face Recognition** (MFR) is a meta-learning face recognition method. MFR synthesizes the source/target domain shift with a meta-optimization objective, which requires the model to learn effective representations not only on synthesized source domains but also on synthesized target domains. Specifically, domain-shift batches are built through a domain-level sampling strategy and back-propagated gradients/meta-gradients are obtained on synthesized source/target domains by optimizing multi-domain distributions. The gradients and meta-gradients are further combined to update the model to improve generalization.
Given the following machine learning model name: Sinkhorn Transformer, provide a description of the model
The **Sinkhorn Transformer** is a type of [transformer](https://paperswithcode.com/method/transformer) that uses [Sparse Sinkhorn Attention](https://paperswithcode.com/method/sparse-sinkhorn-attention) as a building block. This component is a plug-in replacement for dense fully-connected attention (as well as local attention, and sparse attention alternatives), and allows for reduced memory complexity as well as sparse attention.
Given the following machine learning model name: Window-based Discriminator, provide a description of the model
A **Window-based Discriminator** is a type of discriminator for generative adversarial networks. It is analogous to a [PatchGAN](https://paperswithcode.com/method/patchgan) but designed for audio. While a standard [GAN](https://paperswithcode.com/method/gan) discriminator learns to classify between distributions of entire audio sequences, a window-based discriminator learns to classify between distributions of small audio chunks. Since the discriminator loss is computed over overlapping windows where each window is very large (equal to the receptive field of the discriminator), the model learns to maintain coherence across patches.
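The windowing itself is simple to sketch: instead of scoring the entire sequence, the discriminator would be applied to each overlapping chunk. The snippet below only extracts the windows (hop smaller than the window size gives the overlap); the discriminator network itself is out of scope here.

```python
def overlapping_windows(audio, window, hop):
    """Return all full-length chunks of size `window`, advancing by `hop`."""
    return [audio[i:i + window]
            for i in range(0, len(audio) - window + 1, hop)]

chunks = overlapping_windows(list(range(10)), window=4, hop=2)
# [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```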
Given the following machine learning model name: Powerpropagation, provide a description of the model
**Powerpropagation** is a weight-parameterisation for neural networks that leads to inherently sparse models. Exploiting the behaviour of gradient descent, it gives rise to weight updates exhibiting a “rich get richer” dynamic, leaving low-magnitude parameters largely unaffected by learning. In other words, parameters with larger magnitudes are allowed to adapt faster in order to represent the required features to solve the task, while smaller magnitude parameters are restricted, making it more likely that they will be irrelevant in representing the learned solution. Models trained in this manner exhibit similar performance, but have a distribution with markedly higher density at zero, allowing more parameters to be pruned safely.
Given the following machine learning model name: Scaled Dot-Product Attention, provide a description of the model
**Scaled dot-product attention** is an attention mechanism where the dot products are scaled down by $\sqrt{d_k}$. Formally, we have a query $Q$, a key $K$ and a value $V$, and calculate the attention as: $$ {\text{Attention}}(Q, K, V) = \text{softmax}\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V $$ If we assume that $q$ and $k$ are $d_k$-dimensional vectors whose components are independent random variables with mean $0$ and variance $1$, then their dot product, $q \cdot k = \sum_{i=1}^{d_k} q_ik_i$, has mean $0$ and variance $d_k$. Since we would prefer these values to have variance $1$, we divide by $\sqrt{d_k}$.
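The formula can be transcribed directly; a minimal pure-Python sketch for a single query (toy values, no batching or multiple heads):

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def scaled_dot_product_attention(q, keys, values):
    d_k = len(q)
    # Dot products q . k_i, scaled down by sqrt(d_k).
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
              for k in keys]
    weights = softmax(scores)
    # Convex combination of the value vectors.
    d_v = len(values[0])
    return [sum(w * v[j] for w, v in zip(weights, values))
            for j in range(d_v)]

q = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
out = scaled_dot_product_attention(q, keys, values)
# The first key matches the query, so the output leans toward [10, 0].
```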
Given the following machine learning model name: DVD-GAN DBlock, provide a description of the model
**DVD-GAN DBlock** is a residual block for the discriminator used in the [DVD-GAN](https://paperswithcode.com/method/dvd-gan) architecture for video generation. Unlike regular [residual blocks](https://paperswithcode.com/method/residual-block), [3D convolutions](https://paperswithcode.com/method/3d-convolution) are employed due to the application to multiple frames in a video.
Given the following machine learning model name: DeltaConv, provide a description of the model
Anisotropic convolution is a central building block of CNNs but challenging to transfer to surfaces. DeltaConv learns combinations and compositions of operators from vector calculus, which are a natural fit for curved surfaces. The result is a simple and robust anisotropic convolution operator for point clouds with state-of-the-art results.
Given the following machine learning model name: Temporally Consistent Spatial Augmentation, provide a description of the model
**Temporally Consistent Spatial Augmentation** is a video data augmentation technique used for contrastive learning in the [Contrastive Video Representation Learning](https://paperswithcode.com/method/cvrl) framework. It fixes the randomness of spatial augmentation across frames: applied independently to each frame, spatial augmentation breaks the natural motion and hurts learning, whereas applying the same augmentation to every frame leaves the natural motion intact.
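The core idea is to sample the augmentation parameters once per clip and reuse them for every frame. A minimal sketch with 1-D "frames" and a random crop as the spatial augmentation (illustrative names, not the CVRL implementation):

```python
import random

def consistent_random_crop(frames, crop_w, rng=random):
    """frames: list of equal-length 1-D 'frames'. One crop offset is drawn
    for the whole clip and applied to every frame, preserving motion."""
    offset = rng.randrange(len(frames[0]) - crop_w + 1)
    return [f[offset:offset + crop_w] for f in frames]

clip = [[10 * t + i for i in range(6)] for t in range(3)]  # 3 frames, width 6
cropped = consistent_random_crop(clip, crop_w=4)
# Every frame is cropped at the same offset, so relative motion is preserved;
# re-sampling `offset` inside the list comprehension would break it.
```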
Given the following machine learning model name: Recurrent Replay Distributed DQN, provide a description of the model
Building on the recent successes of distributed training of RL agents, **R2D2** (Recurrent Replay Distributed DQN) is an RL approach that trains RNN-based RL agents from distributed prioritized experience replay. Using a single network architecture and a fixed set of hyperparameters, R2D2 quadrupled the previous state of the art on Atari-57 and matches the state of the art on DMLab-30. It was the first agent to exceed human-level performance in 52 of the 57 Atari games.
Given the following machine learning model name: DropAttack, provide a description of the model
**DropAttack** is an adversarial training method that adds intentionally worst-case adversarial perturbations to both the input and hidden layers in different dimensions and minimizes the adversarial risks generated by each layer.
Given the following machine learning model name: CSPDenseNet-Elastic, provide a description of the model
**CSPDenseNet-Elastic** is a convolutional neural network and object detection backbone where we apply the Cross Stage Partial Network (CSPNet) approach to [DenseNet-Elastic](https://paperswithcode.com/method/densenet-elastic). The CSPNet partitions the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy. The use of a split and merge strategy allows for more gradient flow through the network.
Given the following machine learning model name: Deep Stereo Geometry Network, provide a description of the model
**Deep Stereo Geometry Network** is a 3D object detection pipeline that relies on space transformation from 2D features to an effective 3D structure, called the 3D geometric volume (3DGV). The whole neural network consists of four components: (a) a 2D image feature extractor that captures both pixel-level and high-level features; (b) construction of the plane-sweep volume and the 3D geometric volume; (c) depth estimation on the plane-sweep volume; (d) 3D object detection on the 3D geometric volume.
Given the following machine learning model name: Cross-resolution features, provide a description of the model
Given the following machine learning model name: Dimension-wise Fusion, provide a description of the model
**Dimension-wise Fusion** is an image model block that attempts to capture global information by combining features globally. It is an alternative to point-wise [convolution](https://paperswithcode.com/method/convolution). A point-wise convolutional layer applies $D$ point-wise kernels $\mathbf{k}\_p \in \mathbb{R}^{3D \times 1 \times 1}$ and performs $3D^2HW$ operations to combine dimension-wise representations of $\mathbf{Y_{Dim}} \in \mathbb{R}^{3D \times H \times W}$ and produce an output $\mathbf{Y} \in \mathbb{R}^{D \times H \times W}$. This is computationally expensive. Dimension-wise fusion is an alternative that can allow us to combine representations of $\mathbf{Y\_{Dim}}$ efficiently. As illustrated in the Figure to the right, it factorizes the point-wise convolution in two steps: (1) local fusion and (2) global fusion.
Given the following machine learning model name: DELU, provide a description of the model
The **DELU** is a type of activation function that has trainable parameters, uses the complex linear and exponential functions in the positive dimension and uses the **[SiLU](https://paperswithcode.com/method/silu)** in the negative dimension. $$DELU(x) = SiLU(x), x \leqslant 0$$ $$DELU(x) = (n + 0.5)x + |e^{-x} - 1|, x > 0$$
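The two branches above transcribe directly to code; $n$ is the trainable parameter, and a value of $n = 1.0$ is assumed here purely for illustration:

```python
import math

def silu(x):
    # SiLU(x) = x * sigmoid(x)
    return x / (1.0 + math.exp(-x))

def delu(x, n=1.0):
    if x <= 0:
        return silu(x)                               # negative branch
    return (n + 0.5) * x + abs(math.exp(-x) - 1.0)   # positive branch

delu(0.0)   # 0.0: both branches vanish at the origin, so DELU is continuous
delu(-1.0)  # SiLU branch, slightly negative
```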
Given the following machine learning model name: Semi-Pseudo-Label, provide a description of the model
Given the following machine learning model name: LAPGAN, provide a description of the model
A **LAPGAN**, or **Laplacian Generative Adversarial Network**, is a type of generative adversarial network that has a [Laplacian pyramid](https://paperswithcode.com/method/laplacian-pyramid) representation. In the sampling procedure following training, we have a set of generative convnet models {$G\_{0}, \dots , G\_{K}$}, each of which captures the distribution of coefficients $h\_{k}$ for natural images at a different level of the Laplacian pyramid. Sampling an image is akin to a reconstruction procedure, except that the generative models are used to produce the $h\_{k}$’s: $$ \tilde{I}\_{k} = u\left(\tilde{I}\_{k+1}\right) + \tilde{h}\_{k} = u\left(\tilde{I}\_{k+1}\right) + G\_{k}\left(z\_{k}, u\left(\tilde{I}\_{k+1}\right)\right)$$ The recurrence starts by setting $\tilde{I}\_{K+1} = 0$ and using the model at the final level $G\_{K}$ to generate a residual image $\tilde{I}\_{K}$ using noise vector $z\_{K}$: $\tilde{I}\_{K} = G\_{K}\left(z\_{K}\right)$. Models at all levels except the final are conditional generative models that take an upsampled version of the current image $\tilde{I}\_{k+1}$ as a conditioning variable, in addition to the noise vector $z\_{k}$. The generative models {$G\_{0}, \dots, G\_{K}$} are trained using the CGAN approach at each level of the pyramid. Specifically, we construct a Laplacian pyramid from each training image $I$. At each level we make a stochastic choice (with equal probability) to either (i) construct the coefficients $h\_{k}$ using the standard Laplacian pyramid coefficient generation procedure, or (ii) generate them using $G\_{k}$: $$ \tilde{h}\_{k} = G\_{k}\left(z\_{k}, u\left(I\_{k+1}\right)\right) $$ Here $G\_{k}$ is a convnet which uses a coarse-scale version of the image $l\_{k} = u\left(I\_{k+1}\right)$ as an input, as well as the noise vector $z\_{k}$. 
$D\_{k}$ takes as input $h\_{k}$ or $\tilde{h}\_{k}$, along with the low-pass image $l\_{k}$ (which is explicitly added to $h\_{k}$ or $\tilde{h}\_{k}$ before the first [convolution](https://paperswithcode.com/method/convolution) layer), and predicts if the image was real or generated. At the final scale of the pyramid, the low frequency residual is sufficiently small that it can be directly modeled with a standard [GAN](https://paperswithcode.com/method/gan): $\tilde{h}\_{K} = G\_{K}\left(z\_{K}\right)$ and $D\_{K}$ only has $h\_{K}$ or $\tilde{h}\_{K}$ as input. Breaking the generation into successive refinements is the key idea. We give up any “global” notion of fidelity; an attempt is never made to train a network to discriminate between the output of a cascade and a real image and instead the focus is on making each step plausible.
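The coarse-to-fine sampling recurrence above can be sketched in a few lines of numpy. The generators here are stand-in lambdas, not trained convnets, and the nearest-neighbour `upsample` is one possible choice for $u\left(\cdot\right)$ (the function names and shapes are illustrative assumptions):

```python
import numpy as np

def upsample(img):
    # Nearest-neighbour upsampling u(.) that doubles each spatial dimension.
    return img.repeat(2, axis=0).repeat(2, axis=1)

def sample_lapgan(generators, final_generator, noise_dim, rng):
    """Run the LAPGAN sampling recurrence from coarse to fine.

    `final_generator` plays the role of G_K (unconditional); `generators`
    are the conditional models G_{K-1}, ..., G_0, coarsest first.
    """
    # Start of the recurrence: I_K = G_K(z_K), an unconditional sample.
    z = rng.standard_normal(noise_dim)
    img = final_generator(z)
    for g in generators:
        cond = upsample(img)                    # u(I_{k+1})
        z = rng.standard_normal(noise_dim)
        img = cond + g(z, cond)                 # I_k = u(I_{k+1}) + G_k(z_k, u(I_{k+1}))
    return img

# Stub "generators" standing in for trained convnets (hypothetical).
rng = np.random.default_rng(0)
final_g = lambda z: np.zeros((4, 4)) + z[0] * 0.01   # G_K: noise -> 4x4 sample
cond_g = lambda z, c: np.zeros_like(c)               # G_k: returns a zero residual
sample = sample_lapgan([cond_g, cond_g], final_g, 8, rng)
```

Each pass through the loop doubles the resolution, so two conditional levels turn the $4 \times 4$ coarse sample into a $16 \times 16$ image.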
Given the following machine learning model name: NVAE Encoder Residual Cell, provide a description of the model
The **NVAE Encoder Residual Cell** is a [residual connection](https://paperswithcode.com/method/residual-connection) block used in the [NVAE](https://paperswithcode.com/method/nvae) architecture for the encoder. It applies two series of BN-[Swish](https://paperswithcode.com/method/swish)-Conv layers without changing the number of channels.
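A minimal numpy sketch of the cell's dataflow, assuming pointwise (kernel-size-1) convolutions and a parameter-free batch norm for brevity, with the real cell using learned $3 \times 3$ convolutions and BN affine parameters:

```python
import numpy as np

def swish(x):
    # Swish activation: x * sigmoid(x).
    return x / (1.0 + np.exp(-x))

def batch_norm(x, eps=1e-5):
    # Normalize each channel over the batch and spatial axes (sketch, no affine).
    mean = x.mean(axis=(0, 2), keepdims=True)
    var = x.var(axis=(0, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def encoder_residual_cell(x, w1, w2):
    """BN-Swish-Conv applied twice, then added back to the input;
    the number of channels is unchanged throughout.

    x: (batch, channels, length); w1, w2: (channels, channels) pointwise weights.
    """
    h = np.einsum('oc,bcl->bol', w1, swish(batch_norm(x)))
    h = np.einsum('oc,bcl->bol', w2, swish(batch_norm(h)))
    return x + h   # residual connection

rng = np.random.default_rng(1)
x = rng.standard_normal((2, 3, 5))
w = np.eye(3)
out = encoder_residual_cell(x, w, w)
```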
Given the following machine learning model name: AlphaFold, provide a description of the model
AlphaFold is a deep-learning-based algorithm for accurate protein structure prediction. AlphaFold incorporates physical and biological knowledge about protein structure, leveraging multiple sequence alignments, into the design of the deep learning algorithm. Description from: [Highly accurate protein structure prediction with AlphaFold](https://paperswithcode.com/paper/highly-accurate-protein-structure-prediction) Image credit: [DeepMind](https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology)
Given the following machine learning model name: CutBlur, provide a description of the model
**CutBlur** is a data augmentation method that is specifically designed for low-level vision tasks. It cuts a low-resolution patch and pastes it to the corresponding high-resolution image region and vice versa. The key intuition of CutBlur is to enable a model to learn not only "how" but also "where" to super-resolve an image. By doing so, the model can understand "how much" instead of blindly learning to apply super-resolution to every given pixel.
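The cut-and-paste step is simple to sketch in numpy. This assumes the LR image has already been upsampled to the HR resolution; the patch size ratio `alpha` and the random 50/50 direction choice are illustrative defaults, not the paper's exact settings:

```python
import numpy as np

def cutblur(hr, lr_up, rng, alpha=0.7):
    """Cut a patch from the upsampled LR image and paste it onto the HR image,
    or vice versa (chosen at random), in the spirit of CutBlur.

    hr, lr_up: (H, W, C) arrays of the same resolution (LR already upsampled).
    """
    h, w = hr.shape[:2]
    ch, cw = int(h * alpha), int(w * alpha)
    cy = int(rng.integers(0, h - ch + 1))
    cx = int(rng.integers(0, w - cw + 1))
    out, src = hr.copy(), lr_up
    if rng.random() < 0.5:          # "vice versa": paste an HR patch into the LR input
        out, src = lr_up.copy(), hr
    out[cy:cy + ch, cx:cx + cw] = src[cy:cy + ch, cx:cx + cw]
    return out

rng = np.random.default_rng(0)
hr = np.ones((8, 8, 3))
lr = np.zeros((8, 8, 3))
mixed = cutblur(hr, lr, rng)
```

Because the patch keeps its spatial position, the network sees a mixed input whose resolution varies by region, which is what forces it to learn "where" to super-resolve.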
Given the following machine learning model name: RandAugment, provide a description of the model
**RandAugment** is an automated data augmentation method. The search space for data augmentation has two interpretable hyperparameters, $N$ and $M$. $N$ is the number of augmentation transformations to apply sequentially, and $M$ is the magnitude for all the transformations. To reduce the parameter space but still maintain image diversity, learned policies and probabilities for applying each transformation are replaced with a parameter-free procedure of always selecting a transformation with uniform probability $\frac{1}{K}$. Here $K$ is the number of transformation options. So given $N$ transformations for a training image, RandAugment may thus express $K^{N}$ potential policies. Transformations applied include identity transformation, auto-contrast, equalize, rotation, solarization, color jittering, posterizing, changing contrast, changing brightness, changing sharpness, shear-x, shear-y, translate-x, translate-y.
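The parameter-free selection procedure is tiny; only the sampling is shown here, with the transformations themselves reduced to placeholder names (applying the shared magnitude $M$ to each op is left out):

```python
import random

# Placeholder names standing in for the K = 14 real image operations.
TRANSFORMS = ["identity", "auto-contrast", "equalize", "rotate", "solarize",
              "color", "posterize", "contrast", "brightness", "sharpness",
              "shear-x", "shear-y", "translate-x", "translate-y"]

def randaugment_policy(n, rng):
    """Sample a RandAugment policy: N transforms drawn uniformly (with
    replacement) from the K options, each used at the shared magnitude M."""
    return [rng.choice(TRANSFORMS) for _ in range(n)]

rng = random.Random(0)
policy = randaugment_policy(2, rng)
```

With $N = 2$ and $K = 14$ options this procedure can express $14^{2} = 196$ distinct policies, matching the $K^{N}$ count in the description.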
Given the following machine learning model name: Gated Convolution Network, provide a description of the model
A **Gated Convolutional Network** is a type of language model that combines convolutional networks with a gating mechanism. Zero padding is used to ensure future context cannot be seen. Gated convolutional layers can be stacked on top of one another hierarchically. Model predictions are then obtained with an [adaptive softmax](https://paperswithcode.com/method/adaptive-softmax) layer.
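The gating in this architecture is a gated linear unit, $h = \left(X * W\_{a} + b\_{a}\right) \otimes \sigma\left(X * W\_{b} + b\_{b}\right)$. A numpy sketch of one layer, simplified to a pointwise (kernel-size-1) projection so no causal padding is needed (wider kernels would use zero padding on the left to hide future tokens):

```python
import numpy as np

def gated_conv_layer(x, w_a, w_b, b_a=0.0, b_b=0.0):
    """One gated convolutional layer: the first projection carries the content,
    the second (squashed by a sigmoid) gates it elementwise."""
    a = x @ w_a + b_a
    b = x @ w_b + b_b
    return a * (1.0 / (1.0 + np.exp(-b)))   # elementwise sigmoid gate

rng = np.random.default_rng(0)
x = rng.standard_normal((7, 4))             # (sequence length, embedding dim)
w_a, w_b = rng.standard_normal((2, 4, 4))
h = gated_conv_layer(x, w_a, w_b)
```

The sigmoid gate lets each position decide how much of the convolved content to pass upward, playing a role similar to the gates in recurrent networks but without any recurrence.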
Given the following machine learning model name: Darknet-19, provide a description of the model
**Darknet-19** is a convolutional neural network that is used as the backbone of [YOLOv2](https://paperswithcode.com/method/yolov2). Similar to the [VGG](https://paperswithcode.com/method/vgg) models it mostly uses $3 \times 3$ filters and doubles the number of channels after every pooling step. Following the work on Network in Network (NIN) it uses [global average pooling](https://paperswithcode.com/method/global-average-pooling) to make predictions as well as $1 \times 1$ filters to compress the feature representation between $3 \times 3$ convolutions. [Batch Normalization](https://paperswithcode.com/method/batch-normalization) is used to stabilize training, speed up convergence, and regularize the model.
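The two NIN-derived ingredients are easy to show in isolation. A $1 \times 1$ convolution is just a linear map across channels applied at every pixel, and global average pooling collapses each feature map to one number; the shapes below are illustrative, not Darknet-19's actual layer sizes:

```python
import numpy as np

def conv1x1(features, w):
    """A 1x1 convolution: a per-pixel linear map across channels, here used to
    compress 8 channels down to 2 (as between the 3x3 convolutions)."""
    return np.einsum('oc,chw->ohw', w, features)

def global_average_pool(features):
    """Collapse each feature map to a single value by averaging over space."""
    return features.mean(axis=(1, 2))       # (channels, H, W) -> (channels,)

feats = np.ones((8, 4, 4))
compressed = conv1x1(feats, np.full((2, 8), 0.125))   # 8 -> 2 channels
pooled = global_average_pool(compressed)
```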
Given the following machine learning model name: DistanceNet, provide a description of the model
**DistanceNet** is a learning algorithm for multi-source domain adaptation that uses various distance measures, or a mixture of these distance measures, as an additional loss function to be minimized jointly with the task's loss function, so as to achieve better unsupervised domain adaptation.
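One of the simplest distance measures that could fill this role is a linear-kernel MMD estimate between source and target feature batches; the sketch below shows the "task loss plus weighted distance" objective with that single measure standing in for DistanceNet's full set of measures and mixtures (the function names and the weight `lam` are illustrative assumptions):

```python
import numpy as np

def mmd_linear(xs, xt):
    """Linear-kernel MMD estimate: squared distance between the feature means
    of the source batch xs and the target batch xt."""
    return float(np.sum((xs.mean(axis=0) - xt.mean(axis=0)) ** 2))

def joint_loss(task_loss, source_feats, target_feats, lam=0.1):
    # Minimize the task loss jointly with a weighted domain-distance term.
    return task_loss + lam * mmd_linear(source_feats, target_feats)

src = np.zeros((4, 3))
tgt = np.ones((4, 3))
loss = joint_loss(2.0, src, tgt, lam=0.5)
```

Driving the distance term toward zero pulls the source and target feature distributions together, which is what makes the adaptation unsupervised: no target labels enter the objective.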
Given the following machine learning model name: Restricted Boltzmann Machine, provide a description of the model
**Restricted Boltzmann Machines**, or **RBMs**, are two-layer generative neural networks that learn a probability distribution over the inputs. They are a special class of Boltzmann Machine in that their connections are restricted to a bipartite graph between visible and hidden units. Every node in the visible layer is connected to every node in the hidden layer, but no nodes in the same layer are connected. RBMs are usually trained using the contrastive divergence learning procedure. Image Source: [here](https://medium.com/datatype/restricted-boltzmann-machine-a-complete-analysis-part-1-introduction-model-formulation-1a4404873b3)
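A single CD-1 weight update for a binary RBM can be sketched in numpy; biases and learned momentum are omitted for brevity, so this is a minimal illustration rather than a full training loop:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, rng, lr=0.1):
    """One contrastive divergence (CD-1) step for a binary RBM (biases omitted):
    sample the hiddens, reconstruct the visibles, and move the weights toward
    the data statistics and away from the reconstruction's."""
    h0_prob = sigmoid(v0 @ W)                       # P(h=1 | v0)
    h0 = (rng.random(h0_prob.shape) < h0_prob) * 1.0
    v1_prob = sigmoid(h0 @ W.T)                     # reconstruction P(v=1 | h0)
    h1_prob = sigmoid(v1_prob @ W)
    grad = v0.T @ h0_prob - v1_prob.T @ h1_prob     # positive - negative phase
    return W + lr * grad / v0.shape[0]

rng = np.random.default_rng(0)
W = rng.standard_normal((6, 3)) * 0.01              # 6 visible, 3 hidden units
v = (rng.random((4, 6)) < 0.5) * 1.0                # a batch of binary inputs
W_new = cd1_update(v, W, rng)
```

Because the graph is bipartite, all hidden units are conditionally independent given the visibles (and vice versa), which is exactly what makes these block-wise Gibbs samples cheap.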
Given the following machine learning model name: AdvProp, provide a description of the model
**AdvProp** is an adversarial training scheme which treats adversarial examples as additional examples, to prevent overfitting. Key to the method is the usage of a separate auxiliary batch norm for adversarial examples, as they have different underlying distributions to normal examples.
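The core idea, separate normalization statistics for the two example streams, can be shown in miniature. This toy version has no learned parameters at all and fakes the adversarial batch with a random perturbation rather than a real attack:

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # Plain batch normalization over the batch axis (no learned affine).
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

def advprop_forward(clean, adversarial):
    """AdvProp in miniature: clean and adversarial mini-batches pass through
    *separate* batch-norm statistics (main vs auxiliary BN), since their
    feature distributions differ, while all other parameters are shared."""
    return batch_norm(clean), batch_norm(adversarial)

rng = np.random.default_rng(0)
clean = rng.standard_normal((32, 4))
adv = clean + 0.3 * np.sign(rng.standard_normal((32, 4)))  # stand-in for an attack
out_clean, out_adv = advprop_forward(clean, adv)
```

At test time only the main batch norm is used, so the auxiliary statistics never contaminate inference on clean images.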
Given the following machine learning model name: L1 Regularization, provide a description of the model
**$L_{1}$ Regularization** is a regularization technique applied to the weights of a neural network. We minimize a loss function comprising both the primary loss function and a penalty on the $L\_{1}$ Norm of the weights: $$L\_{new}\left(w\right) = L\_{original}\left(w\right) + \lambda{||w||}\_{1}$$ where $\lambda$ is a value determining the strength of the penalty. In contrast to [weight decay](https://paperswithcode.com/method/weight-decay), $L_{1}$ regularization promotes sparsity; i.e. some parameters have an optimal value of zero. Image Source: [Wikipedia](https://en.wikipedia.org/wiki/Regularization_(mathematics)#/media/File:Sparsityl1.png)
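The penalized loss is a one-liner; the example values below are arbitrary:

```python
import numpy as np

def l1_penalized_loss(original_loss, w, lam):
    """L_new(w) = L_original(w) + lambda * ||w||_1, the penalty above."""
    return original_loss + lam * np.sum(np.abs(w))

w = np.array([0.5, -2.0, 0.0, 1.5])
loss = l1_penalized_loss(1.0, w, lam=0.1)   # 1.0 + 0.1 * 4.0 = 1.4
```

The penalty's gradient has constant magnitude $\lambda$ whatever the weight's size, so small weights are pushed all the way to zero rather than merely shrunk, which is the source of the sparsity mentioned above.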