Given the following machine learning model name: Polynomial, provide a description of the model
Given the following machine learning model name: Unigram Segmentation, provide a description of the model
**Unigram Segmentation** is a subword segmentation algorithm based on a unigram language model. It provides multiple segmentations with probabilities. The language model allows for emulating the noise generated during the segmentation of actual data. The unigram language model makes the assumption that each subword occurs independently, and consequently, the probability of a subword sequence $\mathbf{x} = (x_1,\ldots,x_M)$ is formulated as the product of the subword occurrence probabilities $p(x_i)$: $$ P(\mathbf{x}) = \prod_{i=1}^{M} p(x_i), \quad \forall i\; x_i \in \mathcal{V}, \quad \sum_{x \in \mathcal{V}} p(x) = 1, $$ where $\mathcal{V}$ is a pre-determined vocabulary. The most probable segmentation $\mathbf{x}^*$ for the input sentence $X$ is then given by: $$ \mathbf{x}^{*} = \text{argmax}_{\mathbf{x} \in \mathcal{S}(X)} P(\mathbf{x}), $$ where $\mathcal{S}(X)$ is a set of segmentation candidates built from the input sentence $X$. $\mathbf{x}^*$ is obtained with the Viterbi algorithm.
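As a toy illustration of the Viterbi search described above, the following sketch (the vocabulary and function name are ours, not from the original method) finds the most probable segmentation under a hand-specified unigram vocabulary:

```python
import math

# Hypothetical toy vocabulary with unigram probabilities (sums to 1).
vocab = {"un": 0.2, "i": 0.1, "gram": 0.2, "unigram": 0.4, "g": 0.05, "ram": 0.05}

def viterbi_segment(text, vocab):
    """Return the most probable segmentation of `text` under the unigram LM."""
    n = len(text)
    # best[i] = (log-probability of the best segmentation of text[:i], backpointer)
    best = [(-math.inf, 0)] * (n + 1)
    best[0] = (0.0, 0)
    for end in range(1, n + 1):
        for start in range(end):
            piece = text[start:end]
            if piece in vocab and best[start][0] > -math.inf:
                score = best[start][0] + math.log(vocab[piece])
                if score > best[end][0]:
                    best[end] = (score, start)
    # Follow backpointers to recover the best segmentation.
    pieces, i = [], n
    while i > 0:
        start = best[i][1]
        pieces.append(text[start:i])
        i = start
    return list(reversed(pieces))

print(viterbi_segment("unigram", vocab))  # → ['unigram']
```

The whole-word segmentation wins here because $\log 0.4$ exceeds the summed log-probabilities of any multi-piece split.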
Given the following machine learning model name: Fast Focal Detection Network, provide a description of the model
**F2DNet** (Fast Focal Detection Network) is a novel two-stage object detection architecture that eliminates the redundancy of classical two-stage detectors by replacing the region proposal network with a focal detection network and the bounding box head with a fast suppression head.
Given the following machine learning model name: SwiGLU, provide a description of the model
**SwiGLU** is an activation function which is a variant of [GLU](https://paperswithcode.com/method/glu). The definition is as follows: $$ \text{SwiGLU}\left(x, W, V, b, c, \beta\right) = \text{Swish}\_{\beta}\left(xW + b\right) \otimes \left(xV + c\right) $$
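A minimal NumPy sketch of this definition (function names and weight shapes are our assumptions):

```python
import numpy as np

def swish(x, beta=1.0):
    # Swish_beta(x) = x * sigmoid(beta * x)
    return x / (1.0 + np.exp(-beta * x))

def swiglu(x, W, V, b, c, beta=1.0):
    # SwiGLU(x, W, V, b, c, beta) = Swish_beta(xW + b) ⊗ (xV + c),
    # where ⊗ is elementwise multiplication.
    return swish(x @ W + b, beta) * (x @ V + c)
```

In practice $W$ and $V$ are two independent learnable projections of the same input, so the Swish branch acts as a gate on the linear branch.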
Given the following machine learning model name: Channel & Spatial attention, provide a description of the model
**Channel & spatial attention** combines the advantages of channel attention and spatial attention: it adaptively selects both important objects and regions.
Given the following machine learning model name: R-CNN, provide a description of the model
**R-CNN**, or **Regions with CNN Features**, is an object detection model that applies high-capacity CNNs to bottom-up region proposals in order to localize and segment objects. It uses [selective search](https://paperswithcode.com/method/selective-search) to identify a number of bounding-box object region candidates (“regions of interest”), and then extracts features from each region independently for classification.
Given the following machine learning model name: Keypoint Pose Encoding, provide a description of the model
Given the following machine learning model name: Skip-gram Word2Vec, provide a description of the model
**Skip-gram Word2Vec** is an architecture for computing word embeddings. Instead of using surrounding words to predict the center word, as with CBOW Word2Vec, Skip-gram Word2Vec uses the central word to predict the surrounding words. The skip-gram objective function sums the log probabilities of the surrounding $n$ words to the left and right of the target word $w\_{t}$ to produce the following objective: $$J\_\theta = \frac{1}{T}\sum^{T}\_{t=1}\sum\_{-n\leq{j}\leq{n}, j\neq{0}}\log{p}\left(w\_{t+j}\mid{w\_{t}}\right)$$
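The objective can be sketched numerically. This toy version (names and the full-softmax choice are ours; practical implementations use negative sampling or hierarchical softmax) averages the window log-probabilities over a corpus of word ids:

```python
import numpy as np

def skipgram_objective(corpus_ids, W_in, W_out, n=2):
    """Average over positions t of sum_{-n<=j<=n, j!=0} log p(w_{t+j} | w_t),
    with p modeled as a full softmax over output embeddings."""
    T = len(corpus_ids)
    total = 0.0
    for t, wt in enumerate(corpus_ids):
        scores = W_out @ W_in[wt]                            # one score per vocab word
        log_probs = scores - np.log(np.sum(np.exp(scores)))  # log-softmax
        for j in range(-n, n + 1):
            if j != 0 and 0 <= t + j < T:
                total += log_probs[corpus_ids[t + j]]
    return total / T
```

Since every log-probability is negative, the objective is bounded above by zero; training raises it by making true context words more probable given the center word.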
Given the following machine learning model name: Unbiased Online Recurrent Optimization, provide a description of the model
Given the following machine learning model name: Transformer-XL, provide a description of the model
**Transformer-XL** (meaning extra long) is a [Transformer](https://paperswithcode.com/method/transformer) architecture that introduces the notion of recurrence to the deep self-attention network. Instead of computing the hidden states from scratch for each new segment, Transformer-XL reuses the hidden states obtained in previous segments. The reused hidden states serve as memory for the current segment, which builds up a recurrent connection between the segments. As a result, modeling very long-term dependency becomes possible because information can be propagated through the recurrent connections. As an additional contribution, the Transformer-XL uses a new relative positional encoding formulation that generalizes to attention lengths longer than the one observed during training.
Given the following machine learning model name: Submanifold Convolution, provide a description of the model
**Submanifold Convolution (SC)** is a spatially sparse [convolution](https://paperswithcode.com/method/convolution) operation used for tasks with sparse data like semantic segmentation of 3D point clouds. An SC convolution computes the set of active sites in the same way as a regular convolution: it looks for the presence of any active sites in its receptive field of size $f^{d}$. If the input has size $l$ then the output will have size $\left(l - f + s\right)/s$. Unlike a regular convolution, an SC convolution discards the ground state for non-active sites by assuming that the input from those sites is zero. For more details see the [paper](https://paperswithcode.com/paper/3d-semantic-segmentation-with-submanifold), or the official code [here](https://github.com/facebookresearch/SparseConvNet).
Given the following machine learning model name: Group Normalization, provide a description of the model
**Group Normalization** is a normalization layer that divides channels into groups and normalizes the features within each group. GN does not exploit the batch dimension, and its computation is independent of batch sizes. In the case where the group size is 1, it is equivalent to [Instance Normalization](https://paperswithcode.com/method/instance-normalization). As motivation for the method, many classical features like SIFT and HOG had *group-wise* features and involved *group-wise normalization*. For example, a HOG vector is the outcome of several spatial cells where each cell is represented by a normalized orientation histogram. Formally, Group Normalization is defined as: $$ \mu\_{i} = \frac{1}{m}\sum\_{k\in\mathcal{S}\_{i}}x\_{k} $$ $$ \sigma^{2}\_{i} = \frac{1}{m}\sum\_{k\in\mathcal{S}\_{i}}\left(x\_{k}-\mu\_{i}\right)^{2} $$ $$ \hat{x}\_{i} = \frac{x\_{i} - \mu\_{i}}{\sqrt{\sigma^{2}\_{i}+\epsilon}} $$ Here $x$ is the feature computed by a layer, $i$ is an index, and $m$ is the size of the set $\mathcal{S}\_{i}$ over which the mean and variance are computed. A Group Norm layer computes $\mu$ and $\sigma$ over a set $\mathcal{S}\_{i}$ defined as: $$ \mathcal{S}\_{i} = \left\\{k \mid k\_{N} = i\_{N},\, \left\lfloor\frac{k\_{C}}{C/G}\right\rfloor = \left\lfloor\frac{i\_{C}}{C/G}\right\rfloor\right\\} $$ Here $G$ is the number of groups, which is a pre-defined hyper-parameter ($G = 32$ by default), and $C/G$ is the number of channels per group. $\lfloor\cdot\rfloor$ is the floor operation, and the condition means that the indexes $i$ and $k$ are in the same group of channels, assuming each group of channels is stored in sequential order along the $C$ axis.
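A minimal NumPy sketch of the computation for `(N, C, H, W)` inputs (the learnable per-channel scale and shift that usually follow are omitted):

```python
import numpy as np

def group_norm(x, G=2, eps=1e-5):
    """Group-normalize x of shape (N, C, H, W): channels are split into G
    groups, and each group is normalized with its own mean and variance."""
    N, C, H, W = x.shape
    assert C % G == 0, "channel count must be divisible by the group count"
    xg = x.reshape(N, G, C // G, H, W)
    mu = xg.mean(axis=(2, 3, 4), keepdims=True)    # per-sample, per-group mean
    var = xg.var(axis=(2, 3, 4), keepdims=True)    # per-sample, per-group variance
    xg = (xg - mu) / np.sqrt(var + eps)
    return xg.reshape(N, C, H, W)
```

Note that the statistics are computed per sample, so the result is identical for any batch size.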
Given the following machine learning model name: Deep Graph Infomax, provide a description of the model
**Deep Graph Infomax** (DGI) is a general approach for learning node representations within graph-structured data in an unsupervised manner. DGI relies on maximizing mutual information between patch representations and corresponding high-level summaries of graphs, both derived using established graph convolutional network architectures. The learnt patch representations summarize subgraphs centered around nodes of interest, and can thus be reused for downstream node-wise learning tasks. In contrast to most prior approaches to unsupervised learning with GCNs, DGI does not rely on random walk objectives, and is readily applicable to both transductive and inductive learning setups. Description from: [Deep Graph Infomax](https://arxiv.org/pdf/1809.10341.pdf)
Given the following machine learning model name: Ghost Module, provide a description of the model
A **Ghost Module** is an image model block for convolutional neural networks that aims to generate more features by using fewer parameters. Specifically, an ordinary convolutional layer in deep neural networks is split into two parts. The first part involves ordinary convolutions, but their total number is controlled. Given the intrinsic feature maps from the first part, a series of simple linear operations are applied to generate more feature maps. Given the widely existing redundancy in intermediate feature maps calculated by mainstream CNNs, ghost modules aim to reduce it. In practice, given the input data $X\in\mathbb{R}^{c\times h\times w}$, where $c$ is the number of input channels and $h$ and $w$ are the height and width of the input data, respectively, the operation of an arbitrary convolutional layer for producing $n$ feature maps can be formulated as $$ Y = X*f+b, $$ where $*$ is the [convolution](https://paperswithcode.com/method/convolution) operation, $b$ is the bias term, $Y\in\mathbb{R}^{h'\times w'\times n}$ is the output feature map with $n$ channels, and $f\in\mathbb{R}^{c\times k\times k \times n}$ denotes the convolution filters in this layer. In addition, $h'$ and $w'$ are the height and width of the output data, and $k\times k$ is the kernel size of the convolution filters $f$. During this convolution procedure, the required number of FLOPs can be calculated as $n\cdot h'\cdot w'\cdot c\cdot k\cdot k$, which is often as large as hundreds of thousands since the number of filters $n$ and the channel number $c$ are generally very large (e.g. 256 or 512). Here, the number of parameters (in $f$ and $b$) to be optimized is explicitly determined by the dimensions of the input and output feature maps. The output feature maps of convolutional layers often contain much redundancy, and some of them could be similar to each other.
It is unnecessary to generate these redundant feature maps one by one with a large number of FLOPs and parameters. Suppose that the output feature maps are *ghosts* of a handful of intrinsic feature maps obtained with some cheap transformations. These intrinsic feature maps are often of smaller size and produced by ordinary convolution filters. Specifically, $m$ intrinsic feature maps $Y'\in\mathbb{R}^{h'\times w'\times m}$ are generated using a primary convolution: $$ Y' = X*f', $$ where $f'\in\mathbb{R}^{c\times k\times k \times m}$ are the utilized filters, $m\leq n$, and the bias term is omitted for simplicity. The hyper-parameters, such as filter size, stride, and padding, are the same as those in the ordinary convolution to keep the spatial size (i.e. $h'$ and $w'$) of the output feature maps consistent. To further obtain the desired $n$ feature maps, a series of cheap linear operations are applied on each intrinsic feature in $Y'$ to generate $s$ ghost features according to the following function: $$ y_{ij} = \Phi_{i,j}(y'_i),\quad \forall\; i = 1,...,m,\;\; j = 1,...,s, $$ where $y'\_i$ is the $i$-th intrinsic feature map in $Y'$ and $\Phi\_{i,j}$ is the $j$-th linear operation for generating the $j$-th ghost feature map $y\_{ij}$; that is to say, $y'\_i$ can have one or more ghost feature maps $\{y\_{ij}\}\_{j=1}^{s}$. The last $\Phi\_{i,s}$ is the identity mapping, which preserves the intrinsic feature maps. In this way we obtain $n=m\cdot s$ feature maps $Y=[y\_{11},y\_{12},\cdots,y\_{ms}]$ as the output data of a Ghost module. Note that the linear operations $\Phi$ operate on each channel, so their computational cost is much less than that of the ordinary convolution. In practice, there could be several different linear operations in a Ghost module, e.g. $3\times 3$ and $5\times5$ linear kernels, as analyzed in the original paper.
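The FLOP comparison above can be made concrete. A small sketch (function names are ours), assuming the cheap linear operations are $d \times d$ per-channel kernels and the identity $\Phi$ costs nothing:

```python
def conv_flops(n, hp, wp, c, k):
    # Ordinary convolution: n * h' * w' * c * k * k multiply-accumulates.
    return n * hp * wp * c * k * k

def ghost_flops(n, hp, wp, c, k, s, d):
    # m = n/s intrinsic maps from a primary convolution, plus (s - 1) cheap
    # d x d per-channel linear ops per intrinsic map (the identity Phi is free).
    m = n // s
    return conv_flops(m, hp, wp, c, k) + (s - 1) * m * hp * wp * d * d

# With d = k, the speed-up ratio is s*c*k*k / (c*k*k + (s-1)*d*d) = s*c/(c+s-1),
# which approaches s for large channel counts c.
ratio = conv_flops(128, 16, 16, 256, 3) / ghost_flops(128, 16, 16, 256, 3, s=2, d=3)
```

For $c = 256$ and $s = 2$ the ratio is roughly 1.99, i.e. the Ghost module halves the FLOP count while producing the same number of output maps.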
Given the following machine learning model name: GreedyNAS, provide a description of the model
**GreedyNAS** is a one-shot [neural architecture search](https://paperswithcode.com/method/neural-architecture-search) method. Previous methods held the assumption that a supernet should give a reasonable ranking over all paths. They thus treat all paths equally and spend much effort training paths. However, it is harsh for a single supernet to evaluate accurately on such a huge-scale search space (e.g., $7^{21}$). GreedyNAS eases the burden of the supernet by encouraging it to focus more on evaluation of potentially-good candidates, which are identified using a surrogate portion of validation data. Concretely, during training, GreedyNAS utilizes a multi-path sampling strategy with rejection, and greedily filters the weak paths. The training efficiency is thus boosted since the training space has been greedily shrunk from all paths to those potentially-good ones. An exploration and exploitation policy is adopted by introducing an empirical candidate path pool.
Given the following machine learning model name: Gated Graph Sequence Neural Networks, provide a description of the model
**Gated Graph Sequence Neural Networks** (GGS-NNs) is a graph-based neural network model. GGS-NNs modify Graph Neural Networks (Scarselli et al., 2009) to use gated recurrent units and modern optimization techniques, and then extend them to output sequences. Source: [Li et al.](https://arxiv.org/pdf/1511.05493v4.pdf)
Given the following machine learning model name: Tacotron, provide a description of the model
**Tacotron** is an end-to-end generative text-to-speech model that takes a character sequence as input and outputs the corresponding spectrogram. The backbone of Tacotron is a seq2seq model with attention. The Figure depicts the model, which includes an encoder, an attention-based decoder, and a post-processing net. At a high-level, the model takes characters as input and produces spectrogram frames, which are then converted to waveforms.
Given the following machine learning model name: Minibatch Discrimination, provide a description of the model
**Minibatch Discrimination** is a discriminative technique for generative adversarial networks where we discriminate between whole minibatches of samples rather than between individual samples. This is intended to avoid collapse of the generator.
Given the following machine learning model name: Multi-Head Attention, provide a description of the model
**Multi-head Attention** is a module for attention mechanisms which runs through an attention mechanism several times in parallel. The independent attention outputs are then concatenated and linearly transformed into the expected dimension. Intuitively, multiple attention heads allow for attending to parts of the sequence differently (e.g. longer-term dependencies versus shorter-term dependencies). $$ \text{MultiHead}\left(\textbf{Q}, \textbf{K}, \textbf{V}\right) = \left[\text{head}\_{1},\dots,\text{head}\_{h}\right]\textbf{W}_{0}$$ $$\text{where head}\_{i} = \text{Attention} \left(\textbf{Q}\textbf{W}\_{i}^{Q}, \textbf{K}\textbf{W}\_{i}^{K}, \textbf{V}\textbf{W}\_{i}^{V} \right) $$ Above, the $\textbf{W}$ are all learnable parameter matrices. Note that [scaled dot-product attention](https://paperswithcode.com/method/scaled) is most commonly used in this module, although in principle it can be swapped out for other types of attention mechanism. Source: [Lilian Weng](https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html#a-family-of-attention-mechanisms)
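A compact NumPy sketch of the two formulas (names and shapes are our assumptions; real implementations batch all heads into a single matrix multiply):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_attention(Q, K, V, Wq, Wk, Wv, Wo):
    """Runs scaled dot-product attention once per head, concatenates the
    head outputs, and applies the output projection Wo."""
    heads = []
    for Wq_i, Wk_i, Wv_i in zip(Wq, Wk, Wv):       # one (d_model, d_k) matrix per head
        q, k, v = Q @ Wq_i, K @ Wk_i, V @ Wv_i
        d_k = q.shape[-1]
        attn = softmax(q @ k.T / np.sqrt(d_k))     # attention weights, rows sum to 1
        heads.append(attn @ v)
    return np.concatenate(heads, axis=-1) @ Wo
```

Each head projects queries, keys, and values into its own subspace, so different heads can specialize in different dependency patterns.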
Given the following machine learning model name: Targeted Dropout, provide a description of the model
Given the following machine learning model name: Path Length Regularization, provide a description of the model
**Path Length Regularization** is a type of regularization for [generative adversarial networks](https://paperswithcode.com/methods/category/generative-adversarial-networks) that encourages good conditioning in the mapping from latent codes to images. The idea is to encourage that a fixed-size step in the latent space $\mathcal{W}$ results in a non-zero, fixed-magnitude change in the image. We can measure the deviation from this ideal empirically by stepping into random directions in the image space and observing the corresponding $\mathbf{w}$ gradients. These gradients should have close to an equal length regardless of $\mathbf{w}$ or the image-space direction, indicating that the mapping from the latent space to image space is well-conditioned. At a single $\mathbf{w} \in \mathcal{W}$ the local metric scaling properties of the generator mapping $g\left(\mathbf{w}\right) : \mathcal{W} \rightarrow \mathcal{Y}$ are captured by the Jacobian matrix $\mathbf{J\_{w}} = \delta{g}\left(\mathbf{w}\right)/\delta{\mathbf{w}}$. Motivated by the desire to preserve the expected lengths of vectors regardless of the direction, we formulate the regularizer as: $$ \mathbb{E}\_{\mathbf{w},\mathbf{y} \sim \mathcal{N}\left(0, \mathbf{I}\right)} \left(||\mathbf{J}^{\mathbf{T}}\_{\mathbf{w}}\mathbf{y}||\_{2} - a\right)^{2} $$ where $\mathbf{y}$ are random images with normally distributed pixel intensities, and $\mathbf{w} \sim f\left(\mathbf{z}\right)$, where $\mathbf{z}$ are normally distributed. To avoid explicit computation of the Jacobian matrix, we use the identity $\mathbf{J}^{\mathbf{T}}\_{\mathbf{w}}\mathbf{y} = \nabla\_{\mathbf{w}}\left(g\left(\mathbf{w}\right)\cdot\mathbf{y}\right)$, which is efficiently computable using standard backpropagation. The constant $a$ is set dynamically during optimization as the long-running exponential moving average of the lengths $||\mathbf{J}^{\mathbf{T}}\_{\mathbf{w}}\mathbf{y}||\_{2}$, allowing the optimization to find a suitable global scale by itself. 
The authors note that they find that path length regularization leads to more reliable and consistently behaving models, making architecture exploration easier. They also observe that the smoother generator is significantly easier to invert.
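To make the regularizer concrete, here is a toy sketch (names ours) for a *linear* generator, where the Jacobian is available in closed form rather than via backpropagation:

```python
import numpy as np

def path_length_penalty(A, a, rng):
    """For a toy linear generator g(w) = A @ w, the Jacobian is J_w = A for
    every w, so J^T y can be formed directly; real GANs obtain it by
    backpropagating through g(w) . y."""
    y = rng.standard_normal(A.shape[0])       # random "image" with N(0, I) pixels
    length = np.linalg.norm(A.T @ y)          # ||J^T y||_2
    return (length - a) ** 2, length

def update_a(a, length, decay=0.99):
    # The target a tracks an exponential moving average of observed lengths.
    return decay * a + (1 - decay) * length
```

When $A$ is orthogonal the lengths $||A^\mathsf{T}\mathbf{y}||$ concentrate around a constant, so the penalty is small, matching the intuition that a well-conditioned generator changes the image by a fixed magnitude per latent step.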
Given the following machine learning model name: Triplet Attention, provide a description of the model
**Triplet attention** comprises three branches, each responsible for capturing cross-dimension interaction between the spatial dimensions and the channel dimension of the input. Given an input tensor of shape (C × H × W), each branch aggregates cross-dimensional interactive features between either the spatial dimension H or W and the channel dimension C.
Given the following machine learning model name: Deep-MAC, provide a description of the model
**Deep-MAC**, or **Deep Mask-heads Above CenterNet**, is a type of anchor-free instance segmentation model based on [CenterNet](https://paperswithcode.com/method/centernet). The motivation for this new architecture is that boxes are much cheaper to annotate than masks, so the authors address the “partially supervised” instance segmentation problem, where all classes have bounding box annotations but only a subset of classes have mask annotations. For predicting bounding boxes, CenterNet outputs 3 tensors: (1) a class-specific [heatmap](https://paperswithcode.com/method/heatmap) which indicates the probability of the center of a bounding box being present at each location, (2) a class-agnostic 2-channel tensor indicating the height and width of the bounding box at each center pixel, and (3) since the output feature map is typically smaller than the image (stride 4 or 8), CenterNet also predicts an x and y direction offset to recover this discretization error at each center pixel. For Deep-MAC, in parallel to the box-related prediction heads, we add a fourth pixel embedding branch $P$. For each bounding box $b$, we crop a region $P\_{b}$ from $P$ corresponding to $b$ via [ROIAlign](https://paperswithcode.com/method/roi-align) which results in a 32 × 32 tensor. We then feed each $P\_{b}$ to a mask-head. The final prediction at the end is a class-agnostic 32 × 32 tensor which we pass through a sigmoid to get per-pixel probabilities. We train this mask-head via a per-pixel cross-entropy loss averaged over all pixels and instances. During post-processing, the predicted mask is re-aligned according to the predicted box and resized to the resolution of the image. In addition to this 32 × 32 cropped feature map, we add two inputs for improved stability of some mask-heads: (1) Instance embedding: an additional head is added to the backbone that predicts a per-pixel embedding. For each bounding box $b$ we extract its embedding from the center pixel. 
This embedding is tiled to a size of 32 × 32 and concatenated to the pixel embedding crop. This helps condition the mask-head on a particular instance and disambiguate it from others. (2) Coordinate Embedding: Inspired by [CoordConv](https://paperswithcode.com/method/coordconv), the authors add a 32 × 32 × 2 tensor holding normalized $\left(x, y\right)$ coordinates relative to the bounding box $b$.
Given the following machine learning model name: Funnel Transformer, provide a description of the model
**Funnel Transformer** is a type of [Transformer](https://paperswithcode.com/methods/category/transformers) that gradually compresses the sequence of hidden states to a shorter one and hence reduces the computation cost. By re-investing the saved FLOPs from length reduction in constructing a deeper or wider model, the model capacity is further improved. In addition, to perform token-level predictions as required by common pretraining objectives, Funnel-[Transformer](https://paperswithcode.com/method/transformer) is able to recover a deep representation for each token from the reduced hidden sequence via a decoder. The model keeps the same overall skeleton of interleaved S-[Attn](https://paperswithcode.com/method/scaled) and P-[FFN](https://paperswithcode.com/method/dense-connections) sub-modules wrapped by [residual connection](https://paperswithcode.com/method/residual-connection) and [layer normalization](https://paperswithcode.com/method/layer-normalization). But differently, to achieve representation compression and computation reduction, the model employs an encoder that gradually reduces the sequence length of the hidden states as the layer gets deeper, with compression achieved via a pooling operation. In addition, for tasks involving per-token predictions like pretraining, a simple decoder is used to reconstruct a full sequence of token-level representations from the compressed encoder output.
Given the following machine learning model name: Feedback Alignment, provide a description of the model
Given the following machine learning model name: Deactivable Skip Connection, provide a description of the model
A **Deactivable Skip Connection** is a type of skip connection which, instead of concatenating the encoder features and decoder features as with [standard skip connections](https://paperswithcode.com/methods/category/skip-connections), fuses the encoder features with part of the decoder features, so that the operation can be deactivated when needed.
Given the following machine learning model name: VQSVD, provide a description of the model
**Variational Quantum Singular Value Decomposition (VQSVD)** is a variational quantum algorithm for singular value decomposition. By exploiting the variational principles for singular values and the Ky Fan Theorem, a novel loss function is designed such that two quantum neural networks (or parameterized quantum circuits) can be trained to learn the singular vectors and output the corresponding singular values.
Given the following machine learning model name: Position-Wise Feed-Forward Layer, provide a description of the model
**Position-Wise Feed-Forward Layer** is a type of [feedforward layer](https://www.paperswithcode.com/method/category/feedforwad-networks) consisting of two [dense layers](https://www.paperswithcode.com/method/dense-connections) applied to the last dimension, which means the same dense layers are used for each position item in the sequence, hence the name position-wise.
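A minimal NumPy sketch (names ours, with ReLU assumed as the inner activation, as in the original Transformer):

```python
import numpy as np

def position_wise_ffn(x, W1, b1, W2, b2):
    """Two dense layers applied to the last dimension of x (seq_len, d_model);
    every position in the sequence shares the same weights."""
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2   # ReLU between the two layers
```

Because the weights act only on the last dimension, permuting the positions of the input simply permutes the positions of the output.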
Given the following machine learning model name: Progressively Growing GAN, provide a description of the model
**ProGAN**, or **Progressively Growing GAN**, is a generative adversarial network that utilises a progressively growing training approach. The idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses.
Given the following machine learning model name: Height-driven Attention Network, provide a description of the model
**Height-driven Attention Network**, or **HANet**, is a general add-on module for improving semantic segmentation of urban-scene images. It selectively emphasizes informative features or classes according to the vertical position of a pixel. In urban-scene images, the pixel-wise class distributions differ significantly among horizontally segmented sections. Urban-scene images thus have their own distinct characteristics, but most semantic segmentation networks do not reflect such unique attributes in their architecture. The proposed network architecture incorporates the capability of exploiting these attributes to handle urban-scene datasets effectively.
Given the following machine learning model name: Neural Probabilistic Language Model, provide a description of the model
A **Neural Probabilistic Language Model** is an early language modelling architecture. It involves a feedforward architecture that takes in input vector representations (i.e. word embeddings) of the previous $n$ words, which are looked up in a table $C$. The word embeddings are concatenated and fed into a hidden layer which then feeds into a [softmax](https://paperswithcode.com/method/softmax) layer to estimate the probability of the word given the context.
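A minimal NumPy sketch of the forward pass (names are ours; the optional direct input-to-output connections of the original model are omitted):

```python
import numpy as np

def nplm_probs(context_ids, C, H, b_h, U, b_o):
    """Look up the previous-n word embeddings in table C, concatenate them,
    pass through a tanh hidden layer, and softmax into word probabilities."""
    x = C[context_ids].reshape(-1)        # concatenated embeddings, shape (n*d,)
    h = np.tanh(H @ x + b_h)              # hidden layer
    logits = U @ h + b_o                  # one score per vocabulary word
    e = np.exp(logits - logits.max())
    return e / e.sum()                    # p(w_t | previous n words)
```

Training learns the table $C$ jointly with the network weights, which is what yields the word embeddings.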
Given the following machine learning model name: Multiscale Vision Transformer, provide a description of the model
**Multiscale Vision Transformer**, or **MViT**, is a [transformer](https://paperswithcode.com/method/transformer) architecture for modeling visual data such as images and videos. Unlike conventional transformers, which maintain a constant channel capacity and resolution throughout the network, Multiscale Transformers have several channel-resolution scale stages. Starting from the input resolution and a small channel dimension, the stages hierarchically expand the channel capacity while reducing the spatial resolution. This creates a multiscale pyramid of features, with early layers operating at high spatial resolution to model simple low-level visual information, and deeper layers operating at spatially coarse resolution on complex, high-dimensional features.
Given the following machine learning model name: FiLM Module, provide a description of the model
The **Feature-wise linear modulation** (**FiLM**) module combines information from both noisy waveform and input mel-spectrogram. It is used in the [WaveGrad](https://paperswithcode.com/method/wavegrad) model. The authors also added iteration index $n$ which indicates the noise level of the input waveform by using the [Transformer](https://paperswithcode.com/method/transformer) sinusoidal positional embedding. To condition on the noise level directly, $n$ is replaced by $\sqrt{\bar{\alpha}}$ and a linear scale $C = 5000$ is applied. The FiLM module produces both scale and bias vectors given inputs, which are used in a UBlock for feature-wise affine transformation as: $$ \gamma\left(D, \sqrt{\bar{\alpha}}\right) \odot U + \zeta\left(D, \sqrt{\bar{\alpha}}\right) $$ where $\gamma$ and $\zeta$ correspond to the scaling and shift vectors from the FiLM module, $D$ is the output from corresponding [DBlock](https://paperswithcode.com/method/dblock), $U$ is an intermediate output in the UBlock.
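The feature-wise affine step can be sketched directly (the function name and the `(channels, time)` layout are our assumptions; computing $\gamma$ and $\zeta$ from $D$ and $\sqrt{\bar{\alpha}}$ is omitted):

```python
import numpy as np

def film(U, gamma, zeta):
    """Feature-wise affine transformation gamma ⊙ U + zeta: the FiLM scale
    and shift (one value per channel) are broadcast over the time axis."""
    return gamma[:, None] * U + zeta[:, None]     # U has shape (channels, time)
```

With $\gamma = \mathbf{1}$ and $\zeta = \mathbf{0}$ the block is the identity, so FiLM can smoothly interpolate between passing features through and rescaling or shifting them per channel.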
Given the following machine learning model name: FastGCN, provide a description of the model
**FastGCN** is a fast improvement of the GCN model proposed by Kipf & Welling (2016a) for learning graph embeddings. It generalizes transductive training to an inductive manner and also addresses the memory bottleneck issue of GCN caused by recursive expansion of neighborhoods. The crucial ingredient is a sampling scheme in the reformulation of the loss and the gradient, well justified through an alternative view of graph convolutions in the form of integral transforms of embedding functions. Description from: [FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling](https://arxiv.org/pdf/1801.10247.pdf)
Given the following machine learning model name: Global and Sliding Window Attention, provide a description of the model
**Global and Sliding Window Attention** is an attention pattern for attention-based models. It is motivated by the fact that non-sparse attention in the original [Transformer](https://paperswithcode.com/method/transformer) formulation has a [self-attention component](https://paperswithcode.com/method/scaled) with $O\left(n^{2}\right)$ time and memory complexity, where $n$ is the input sequence length, and thus is not efficient to scale to long inputs. Since [windowed](https://paperswithcode.com/method/sliding-window-attention) and [dilated](https://paperswithcode.com/method/dilated-sliding-window-attention) attention patterns are not flexible enough to learn task-specific representations, the authors of the [Longformer](https://paperswithcode.com/method/longformer) add “global attention” on a few pre-selected input locations. This attention operation is symmetric: that is, a token with global attention attends to all tokens across the sequence, and all tokens in the sequence attend to it. The Figure to the right shows an example of a sliding window attention with global attention at a few tokens at custom locations. For the example of classification, global attention is used for the [CLS] token, while in the example of Question Answering, global attention is provided on all question tokens.
Given the following machine learning model name: CodeGen, provide a description of the model
**CodeGen** is an autoregressive transformer trained with next-token prediction language modeling as the learning objective on a natural language corpus and programming language data curated from GitHub.
Given the following machine learning model name: AdaGPR, provide a description of the model
**AdaGPR** is an adaptive, layer-wise graph [convolution](https://paperswithcode.com/method/convolution) model. AdaGPR applies adaptive generalized Pageranks at each layer of a [GCNII](https://paperswithcode.com/method/gcnii) model by learning to predict the coefficients of generalized Pageranks using sparse solvers.
Given the following machine learning model name: Context-aware Visual Attention-based (CoVA) webpage object detection pipeline, provide a description of the model
Context-Aware Visual Attention-based end-to-end pipeline for Webpage Object Detection (_CoVA_) aims to learn a function _f_ to predict labels _y = [$y_1, y_2, ..., y_N$]_ for a webpage containing _N_ elements. The input to CoVA consists of: 1. a screenshot of a webpage, 2. a list of bounding boxes _[x, y, w, h]_ of the web elements, and 3. neighborhood information for each element obtained from the DOM tree. This information is processed in four stages: 1. the graph representation extraction for the webpage, 2. the Representation Network (_RN_), 3. the Graph Attention Network (_GAT_), and 4. a fully connected (_FC_) layer. The graph representation extraction computes for every web element _i_ its set of _K_ neighboring web elements _$N_i$_. The _RN_ consists of a Convolutional Neural Net (_CNN_) and a positional encoder aimed to learn a visual representation _$v_i$_ for each web element _i ∈ {1, ..., N}_. The _GAT_ combines the visual representation _$v_i$_ of the web element _i_ to be classified and those of its neighbors, i.e., _$v_k$ ∀k ∈ $N_i$_, to compute the contextual representation _$c_i$_ for web element _i_. Finally, the visual and contextual representations of the web element are concatenated and passed through the _FC_ layer to obtain the classification output.
Given the following machine learning model name: BS-Net, provide a description of the model
**BS-Net** is an architecture for COVID-19 severity prediction based on clinical data from different modalities. The architecture comprises 1) a shared multi-task feature extraction backbone, 2) a lung segmentation branch, 3) an original registration mechanism that acts as a "multi-resolution feature alignment" block operating on the encoding backbone, and 4) a multi-regional classification part for the final six-valued score estimation. All these blocks act together in the final training thanks to a loss specifically created for this task. This loss also guarantees performance robustness, comprising a differentiable version of the target discrete metric. The learning phase operates in a weakly-supervised fashion. This is due to the fact that difficulties and pitfalls in the visual interpretation of the disease signs on CXRs (spanning from subtle findings to heavy lung impairment), and the lack of detailed localization information, produce unavoidable inter-rater variability among radiologists in assigning scores. Specifically, the architectural details are: - The input image is processed with a convolutional backbone; the authors opt for a [ResNet](https://paperswithcode.com/method/resnet)-18. - Segmentation is performed by a nested version of [U-Net](https://paperswithcode.com/method/u-net) (U-Net++). - Alignment is estimated through the segmentation probability map produced by the U-Net++ decoder, which is achieved through a [spatial transformer network](https://paperswithcode.com/method/spatial-transformer) able to estimate the spatial transform matrix in order to center, rotate, and correctly zoom the lungs. After alignment at various scales, features are forwarded to a [ROIPool](https://paperswithcode.com/method/roi-pooling). - The alignment block is pre-trained on the synthetic alignment dataset in a weakly-supervised setting, using a Dice loss. 
- The scoring head uses [FPNs](https://paperswithcode.com/method/fpn) for the combination of multi-scale feature maps. The multi-resolution feature aligner produces input feature maps that are well focused on the specific area of interest. Eventually, the output of the FPN layer flows into a series of convolutional blocks to retrieve the output map. The classification is performed by a final [Global Average Pooling](https://paperswithcode.com/method/global-average-pooling) layer and a [SoftMax](https://paperswithcode.com/method/softmax) activation. - The loss function used for training is a sparse categorical cross-entropy (SCCE) with a (differentiable) mean absolute error contribution.
Given the following machine learning model name: Review-guided Answer Helpfulness Prediction, provide a description of the model
**Review-guided Answer Helpfulness Prediction** (RAHP) is a textual inference model for identifying helpful answers in e-commerce. It not only considers the interactions between QA pairs, but also investigates the opinion coherence between the answer and crowds' opinions reflected in the reviews, which is another important factor to identify helpful answers.
Given the following machine learning model name: Radial Basis Function, provide a description of the model
Given the following machine learning model name: AggMo, provide a description of the model
**Aggregated Momentum (AggMo)** is a variant of the [classical momentum](https://paperswithcode.com/method/sgd-with-momentum) stochastic optimizer which maintains several velocity vectors with different $\beta$ parameters and averages them when updating the parameters. It resolves the problem of choosing a momentum parameter by taking a linear combination of multiple momentum buffers. Each of the $K$ momentum buffers has its own discount factor $\beta^{(i)}$, and the buffers are averaged for the update. The update rule is: $$ \textbf{v}\_{t}^{\left(i\right)} = \beta^{(i)}\textbf{v}\_{t-1}^{\left(i\right)} - \nabla\_{\theta}f\left(\mathbf{\theta}\_{t-1}\right) $$ $$ \mathbf{\theta\_{t}} = \mathbf{\theta\_{t-1}} + \frac{\gamma\_{t}}{K}\sum^{K}\_{i=1}\textbf{v}\_{t}^{\left(i\right)} $$ where $\textbf{v}\_{0}^{\left(i\right)} = \mathbf{0}$ for each $i$. The vector $\mathcal{\beta} = \left[\beta^{(1)}, \ldots, \beta^{(K)}\right] \in \mathbb{R}^{K}$ collects the damping factors.
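The update rule can be sketched in plain NumPy. The quadratic toy objective, step size, and damping factors below are illustrative assumptions, not values prescribed by the method:

```python
import numpy as np

def aggmo_step(theta, velocities, grad, betas, lr):
    """One AggMo update: each of the K velocity buffers uses its own
    damping factor beta^(i); the parameter step averages the buffers."""
    velocities = [beta * v - grad for beta, v in zip(betas, velocities)]
    theta = theta + (lr / len(betas)) * sum(velocities)
    return theta, velocities

# minimize f(theta) = theta^2 (gradient 2*theta) with K = 2 buffers
theta = np.array([1.0])
betas = [0.0, 0.9]
velocities = [np.zeros_like(theta) for _ in betas]
for _ in range(50):
    grad = 2.0 * theta
    theta, velocities = aggmo_step(theta, velocities, grad, betas, lr=0.05)
```

With $\beta^{(i)} = 0$ a buffer reduces to plain gradient descent, so AggMo interpolates between non-momentum and high-momentum behaviour.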
Given the following machine learning model name: Dilated Bottleneck Block, provide a description of the model
**Dilated Bottleneck Block** is an image model block used in the [DetNet](https://paperswithcode.com/method/detnet) convolutional neural network architecture. It employs a bottleneck structure with dilated convolutions to efficiently enlarge the receptive field.
Given the following machine learning model name: GraphSAGE, provide a description of the model
GraphSAGE is a general inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data. Image from: [Inductive Representation Learning on Large Graphs](https://arxiv.org/pdf/1706.02216v4.pdf)
Given the following machine learning model name: DistilBERT, provide a description of the model
**DistilBERT** is a small, fast, cheap and light [Transformer](https://paperswithcode.com/method/transformer) model based on the [BERT](https://paperswithcode.com/method/bert) architecture. Knowledge distillation is performed during the pre-training phase to reduce the size of a BERT model by 40%. To leverage the inductive biases learned by larger models during pre-training, the authors introduce a triple loss combining language modeling, distillation and cosine-distance losses.
Given the following machine learning model name: nnFormer, provide a description of the model
**nnFormer**, or **not-another transFormer**, is a semantic segmentation model with an interleaved architecture based on an empirical combination of self-attention and [convolution](https://paperswithcode.com/method/convolution). Firstly, a light-weight convolutional embedding layer is used ahead of the [transformer](https://paperswithcode.com/method/transformer) blocks. In comparison to directly flattening raw pixels and applying 1D pre-processing, the convolutional embedding layer encodes precise (i.e., pixel-level) spatial information and provides low-level yet high-resolution 3D features. After the embedding block, transformer and convolutional down-sampling blocks are interleaved to fully entangle long-term dependencies with high-level and hierarchical object concepts at various scales, which helps improve the generalization ability and robustness of the learned representations.
Given the following machine learning model name: Counterfactuals Explanations, provide a description of the model
Given the following machine learning model name: Strip Pooling Network, provide a description of the model
Spatial pooling usually operates on a small region, which limits its capability to capture long-range dependencies and focus on distant regions. To overcome this, Hou et al. proposed strip pooling, a novel pooling method capable of encoding long-range context in either the horizontal or the vertical spatial domain. Strip pooling has two branches, for horizontal and vertical strip pooling. The horizontal strip pooling part first pools the input feature map $X \in \mathcal{R}^{C \times H \times W}$ in the horizontal direction: \begin{align} y^1 = \text{GAP}^w (X) \end{align} Then a 1D convolution with kernel size 3 is applied to $y^1$ to capture the relationship between different rows and channels. The result is expanded $W$ times to make the output $y_h$ consistent with the input shape: \begin{align} y_h = \text{Expand}(\text{Conv1D}(y^1)) \end{align} Vertical strip pooling is performed in a similar way and produces $y_v$. Finally, the outputs of the two branches are fused using element-wise summation to produce the attention map: \begin{align} s &= \sigma(Conv^{1\times 1}(y_{v} + y_{h})) \end{align} \begin{align} Y &= s X \end{align} The strip pooling module (SPM) is further developed in the mixed pooling module (MPM). Both consider spatial and channel relationships to overcome the locality of convolutional neural networks. SPNet achieves state-of-the-art results for several complex semantic segmentation benchmarks.
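A minimal NumPy sketch of the two-branch attention map, under simplifying assumptions: the size-3 kernels are passed in explicitly and the final $1 \times 1$ convolution is folded into an identity, so kernels and shapes here are illustrative only:

```python
import numpy as np

def conv1d_k3(y, k):
    # size-3 1-D convolution with zero padding, applied per channel
    p = np.pad(y, ((0, 0), (1, 1)))
    return k[0] * p[:, :-2] + k[1] * p[:, 1:-1] + k[2] * p[:, 2:]

def strip_pool_attention(X, k_h, k_v):
    """X: (C, H, W). The horizontal branch averages over W, the vertical
    branch over H; each result is filtered, broadcast back to (C, H, W),
    summed, squashed with a sigmoid, and used to rescale X."""
    y_h = conv1d_k3(X.mean(axis=2), k_h)   # (C, H)
    y_v = conv1d_k3(X.mean(axis=1), k_v)   # (C, W)
    s = 1.0 / (1.0 + np.exp(-(y_h[:, :, None] + y_v[:, None, :])))
    return s * X

X = np.ones((1, 2, 3))
out = strip_pool_attention(X, k_h=[0.0, 1.0, 0.0], k_v=[0.0, 1.0, 0.0])
```

With identity kernels on a constant input, every position receives the same attention weight $\sigma(2)$, which makes the broadcasting easy to verify.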
Given the following machine learning model name: Wasserstein GAN (Gradient Penalty), provide a description of the model
**Wasserstein GAN + Gradient Penalty**, or **WGAN-GP**, is a generative adversarial network that uses the Wasserstein loss formulation plus a gradient norm penalty to achieve Lipschitz continuity. The original [WGAN](https://paperswithcode.com/method/wgan) uses weight clipping to achieve 1-Lipschitz functions, but this can lead to undesirable behaviour by creating pathological value surfaces and capacity underuse, as well as gradient explosion/vanishing without careful tuning of the weight clipping parameter $c$. A Gradient Penalty is a soft version of the Lipschitz constraint, which follows from the fact that functions are 1-Lipschitz iff the gradients are of norm at most 1 everywhere. The squared difference from norm 1 is used as the gradient penalty.
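Since the penalty only needs the critic's gradient at interpolated points, it can be sketched framework-free in NumPy; the toy critics below (one with gradient norm exactly 1, one with zero gradient) are illustrative assumptions:

```python
import numpy as np

def gradient_penalty(critic_grad, x_real, x_fake, rng, lam=10.0):
    """lam * E[(||grad_x D(x_hat)||_2 - 1)^2], where x_hat is sampled
    uniformly on straight lines between real and fake samples."""
    eps = rng.uniform(size=(x_real.shape[0], 1))
    x_hat = eps * x_real + (1.0 - eps) * x_fake
    norms = np.linalg.norm(critic_grad(x_hat), axis=1)
    return lam * np.mean((norms - 1.0) ** 2)

rng = np.random.default_rng(0)
x_real = rng.normal(size=(8, 4))
x_fake = rng.normal(size=(8, 4))
# a linear critic D(x) = 0.5 * sum(x) over 4 dims has gradient norm exactly 1
gp_unit = gradient_penalty(lambda x: np.full_like(x, 0.5), x_real, x_fake, rng)
# a constant critic has zero gradient, so the penalty is lam * 1
gp_flat = gradient_penalty(lambda x: np.zeros_like(x), x_real, x_fake, rng)
```

In a real training loop the gradient would come from the framework's autodiff rather than a hand-supplied function.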
Given the following machine learning model name: Generalized additive models, provide a description of the model
Given the following machine learning model name: Varifocal Loss, provide a description of the model
**Varifocal Loss** is a loss function for training a dense object detector to predict the IACS (IoU-aware classification score), inspired by [focal loss](https://paperswithcode.com/method/focal-loss). Unlike the focal loss, which treats positives and negatives equally, the Varifocal Loss treats them asymmetrically: $$ VFL\left(p, q\right) = -q\left(q\log\left(p\right) + \left(1 - q\right)\log\left(1 - p\right)\right) \text{ if } q > 0 $$ $$ VFL\left(p, q\right) = -\alpha{p^{\gamma}}\log\left(1-p\right) \text{ otherwise} $$ where $p$ is the predicted IACS and $q$ is the target IoU score. For a positive training example, $q$ is set to the IoU between the generated bounding box and the ground-truth one (gt IoU), whereas for a negative training example, the training target $q$ for all classes is $0$.
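A direct NumPy transcription of the two cases; the $\alpha$ and $\gamma$ defaults below are illustrative assumptions:

```python
import numpy as np

def varifocal_loss(p, q, alpha=0.75, gamma=2.0):
    """p: predicted IoU-aware classification score in (0, 1);
    q: target score (gt IoU for positives, 0 for negatives)."""
    p = np.clip(p, 1e-7, 1.0 - 1e-7)
    positive = -q * (q * np.log(p) + (1.0 - q) * np.log(1.0 - p))  # q-weighted BCE
    negative = -alpha * p ** gamma * np.log(1.0 - p)               # down-weighted
    return np.where(q > 0, positive, negative)

p = np.array([0.5, 0.5])
q = np.array([1.0, 0.0])   # one positive (gt IoU 1), one negative
loss = varifocal_loss(p, q)
```

At the same prediction $p = 0.5$, the positive example incurs a larger loss than the negative one, which is exactly the asymmetry described above.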
Given the following machine learning model name: Learnable graph convolutional layer, provide a description of the model
Learnable graph convolutional layer (LGCL) automatically selects a fixed number of neighboring nodes for each feature based on value ranking in order to transform graph data into grid-like structures in 1-D format, thereby enabling the use of regular convolutional operations on generic graphs. Description and image from: [Large-Scale Learnable Graph Convolutional Networks](https://arxiv.org/pdf/1808.03965.pdf)
Given the following machine learning model name: SCNet, provide a description of the model
**Sample Consistency Network (SCNet)** is a method for instance segmentation which ensures that the IoU distribution of the samples at training time is close to that at inference time. To this end, only the outputs of the last box stage are used for mask predictions at both training and inference. The Figure shows the IoU distribution of the samples going to the mask branch at training time, with and without sample consistency, compared to that at inference time.
Given the following machine learning model name: Euclidean Norm Regularization, provide a description of the model
**Euclidean Norm Regularization** is a regularization step used in [generative adversarial networks](https://paperswithcode.com/methods/category/generative-adversarial-networks), and is typically added to both the generator and discriminator losses: $$ R\_{z} = w\_{r} \cdot ||\Delta{z}||^{2}\_{2} $$ where the scalar weight $w\_{r}$ is a parameter. Image: [LOGAN](https://paperswithcode.com/method/logan)
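As a NumPy sketch (the weight value in the example is arbitrary):

```python
import numpy as np

def euclidean_norm_penalty(delta_z, w_r):
    # R_z = w_r * ||delta_z||_2^2
    return w_r * np.sum(delta_z ** 2)

penalty = euclidean_norm_penalty(np.array([3.0, 4.0]), w_r=1.0)  # 3^2 + 4^2
```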
Given the following machine learning model name: T5, provide a description of the model
**T5**, or **Text-to-Text Transfer Transformer**, is a [Transformer](https://paperswithcode.com/method/transformer) based architecture that uses a text-to-text approach. Every task, including translation, question answering, and classification, is cast as feeding the model text as input and training it to generate some target text. This allows for the use of the same model, loss function, hyperparameters, etc. across a diverse set of tasks. The changes compared to [BERT](https://paperswithcode.com/method/bert) include: - adding a *causal* decoder to the bidirectional architecture. - replacing the fill-in-the-blank cloze task with a mix of alternative pre-training tasks.
Given the following machine learning model name: Rectified Linear Unit N, provide a description of the model
The **Rectified Linear Unit N**, or **ReLUN**, is a modification of the **[ReLU6](https://paperswithcode.com/method/relu6)** activation function with a trainable parameter **n**. $$\text{ReLUN}(x) = \min(\max(0, x), n)$$
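A NumPy sketch; here the cap $n$ is treated as a fixed scalar, whereas in the method itself it is trainable:

```python
import numpy as np

def relun(x, n):
    # ReLUN(x) = min(max(0, x), n)
    return np.minimum(np.maximum(x, 0.0), n)

y = relun(np.array([-1.0, 2.0, 10.0]), n=6.0)  # clipped below at 0, above at n
```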
Given the following machine learning model name: TuckER with Relation Prediction, provide a description of the model
TuckER model trained with a relation prediction objective on top of the 1vsAll loss
Given the following machine learning model name: Hyper-parameter optimization, provide a description of the model
In machine learning, a hyperparameter is a parameter whose value is used to control the learning process, and hyper-parameter optimization (HPO) is the problem of choosing a set of optimal hyperparameters for a learning algorithm.
Given the following machine learning model name: Random Search, provide a description of the model
**Random Search** replaces the exhaustive enumeration of all combinations by selecting them randomly. This can be simply applied to the discrete setting described above, but also generalizes to continuous and mixed spaces. It can outperform Grid search, especially when only a small number of hyperparameters affects the final performance of the machine learning algorithm. In this case, the optimization problem is said to have a low intrinsic dimensionality. Random Search is also embarrassingly parallel, and additionally allows the inclusion of prior knowledge by specifying the distribution from which to sample. Extracted from [Wikipedia](https://en.wikipedia.org/wiki/Hyperparameter_optimization#Random_search) Source [Paper](https://dl.acm.org/doi/10.5555/2188385.2188395) Image Source: [BERGSTRA AND BENGIO](https://dl.acm.org/doi/pdf/10.5555/2188385.2188395)
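A minimal Python sketch of the procedure, with a made-up objective in which only one of two hyperparameters matters (the low-intrinsic-dimensionality case described above). All names and priors are illustrative assumptions:

```python
import numpy as np

def random_search(objective, space, n_trials, rng):
    """Sample each hyperparameter independently from its prior and keep
    the best configuration found."""
    best_cfg, best_score = None, np.inf
    for _ in range(n_trials):
        cfg = {name: sample(rng) for name, sample in space.items()}
        score = objective(cfg)
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

space = {
    "lr": lambda rng: 10.0 ** rng.uniform(-4, 0),  # log-uniform prior
    "dropout": lambda rng: rng.uniform(0.0, 0.5),  # has no effect below
}
objective = lambda cfg: (np.log10(cfg["lr"]) + 2.0) ** 2  # best near lr = 1e-2
best_cfg, best_score = random_search(objective, space, 100, np.random.default_rng(0))
```

Encoding prior knowledge amounts to choosing the sampling distributions in `space`, e.g. the log-uniform prior over the learning rate.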
Given the following machine learning model name: Spatial CNN with UNet based Encoder-decoder and ConvLSTM, provide a description of the model
Spatial CNN with UNet based Encoder-decoder and ConvLSTM
Given the following machine learning model name: Soft Nearest Neighbor Loss with Annealing Temperature, provide a description of the model
Given the following machine learning model name: ClipBERT, provide a description of the model
**ClipBERT** is a framework for end-to-end learning for video-and-language tasks that employs sparse sampling, where only a single or a few sparsely sampled short clips from a video are used at each training step. Two aspects distinguish ClipBERT from previous work. First, in contrast to densely extracting video features (adopted by most existing methods), ClipBERT sparsely samples only a single or a few short clips from the full-length videos at each training step. The hypothesis is that visual features from sparse clips already capture key visual and semantic information in the video, as consecutive clips usually contain similar semantics from a continuous scene. Thus, a handful of clips are sufficient for training, instead of using the full video. Then, predictions from multiple densely-sampled clips are aggregated to obtain the final video-level prediction during inference, which is less computationally demanding. The second differentiating aspect concerns the initialization of model weights (i.e., transfer through pre-training). The authors use 2D architectures (e.g., [ResNet](https://paperswithcode.com/method/resnet)-50) instead of 3D features as the visual backbone for video encoding, allowing them to harness the power of image-text pretraining for video-text understanding along with the advantages of low memory cost and runtime efficiency.
Given the following machine learning model name: Neighborhood Contrastive Learning, provide a description of the model
Given the following machine learning model name: Barlow Twins, provide a description of the model
**Barlow Twins** is a self-supervised learning method that applies redundancy-reduction — a principle first proposed in neuroscience — to self supervised learning. The objective function measures the cross-correlation matrix between the embeddings of two identical networks fed with distorted versions of a batch of samples, and tries to make this matrix close to the identity. This causes the embedding vectors of distorted version of a sample to be similar, while minimizing the redundancy between the components of these vectors. Barlow Twins does not require large batches nor asymmetry between the network twins such as a predictor network, gradient stopping, or a moving average on the weight updates. Intriguingly it benefits from very high-dimensional output vectors.
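The objective itself is compact enough to sketch in NumPy; batch size, embedding dimensionality, and the trade-off weight below are illustrative assumptions:

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """Standardize each embedding dimension over the batch, form the d x d
    cross-correlation matrix, then penalize its distance to the identity:
    diagonal terms -> 1 (invariance), off-diagonal -> 0 (redundancy)."""
    n = z_a.shape[0]
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-9)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-9)
    c = z_a.T @ z_b / n
    on_diag = np.sum((np.diag(c) - 1.0) ** 2)
    off_diag = np.sum(c ** 2) - np.sum(np.diag(c) ** 2)
    return on_diag + lam * off_diag

rng = np.random.default_rng(0)
z = rng.normal(size=(64, 8))
loss_same = barlow_twins_loss(z, z)                         # identical "views"
loss_diff = barlow_twins_loss(z, rng.normal(size=(64, 8)))  # unrelated "views"
```

Identical views give a near-identity cross-correlation matrix and hence a near-zero loss, while unrelated views are heavily penalized on the diagonal.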
Given the following machine learning model name: Symbolic Deep Learning, provide a description of the model
This is a general approach to convert a neural network into an analytic equation. The technique works as follows: 1. Encourage sparse latent representations. 2. Apply symbolic regression to approximate the transformations between input, latent, and output layers. 3. Compose the symbolic expressions. In the [paper](https://arxiv.org/abs/2006.11287), we show that correct known equations, including force laws and Hamiltonians, can be extracted from the neural network. We then apply our method to a non-trivial cosmology example (a detailed dark matter simulation) and discover a new analytic formula which can predict the concentration of dark matter from the mass distribution of nearby cosmic structures. The symbolic expressions extracted from the GNN using our technique also generalized to out-of-distribution data better than the GNN itself. Our approach offers alternative directions for interpreting neural networks and discovering novel physical principles from the representations they learn.
Given the following machine learning model name: OASIS, provide a description of the model
OASIS is a [GAN](https://paperswithcode.com/method/gan)-based model to translate semantic label maps into realistic-looking images. The model builds on preceding work such as [Pix2Pix](https://paperswithcode.com/method/pix2pix) and SPADE. OASIS introduces the following innovations: 1. The method is not dependent on the perceptual loss, which is commonly used for the semantic image synthesis task. A [VGG](https://paperswithcode.com/method/vgg) network trained on ImageNet is routinely employed as the perceptual loss to strongly improve the synthesis quality. The authors show that this perceptual loss also has negative effects: First, it reduces the diversity of the generated images. Second, it negatively influences the color distribution to be more biased towards ImageNet. OASIS eliminates the dependence on the perceptual loss by changing the common discriminator design: The OASIS discriminator segments an image into one of the real classes or an additional fake class. In doing so, it makes more efficient use of the label maps that the discriminator normally receives. This distinguishes the discriminator from the commonly used encoder-shaped discriminators, which concatenate the label maps to the input image and predict a single score per image. With the more fine-grained supervision through the loss of the OASIS discriminator, the perceptual loss is shown to become unnecessary. 2. A user can generate a diverse set of images per label map by simply resampling noise. This is achieved by conditioning the [spatially-adaptive denormalization](https://arxiv.org/abs/1903.07291) module in each layer of the GAN generator directly on spatially replicated input noise. A side effect of this conditioning is that at inference time an image can be resampled either globally or locally (either the complete image changes or a restricted region in the image).
Given the following machine learning model name: None, provide a description of the model
Given the following machine learning model name: RESCAL, provide a description of the model
Given the following machine learning model name: Res2Net Block, provide a description of the model
A **Res2Net Block** is an image model block that constructs hierarchical residual-like connections within one single [residual block](https://paperswithcode.com/method/residual-block). It was proposed as part of the [Res2Net](https://paperswithcode.com/method/res2net) CNN architecture. The block represents multi-scale features at a granular level and increases the range of receptive fields for each network layer. The $3 \times 3$ filters of $n$ channels is replaced with a set of smaller filter groups, each with $w$ channels. These smaller filter groups are connected in a hierarchical residual-like style to increase the number of scales that the output features can represent. Specifically, we divide input feature maps into several groups. A group of filters first extracts features from a group of input feature maps. Output features of the previous group are then sent to the next group of filters along with another group of input feature maps. This process repeats several times until all input feature maps are processed. Finally, feature maps from all groups are concatenated and sent to another group of $1 \times 1$ filters to fuse information altogether. Along with any possible path in which input features are transformed to output features, the equivalent receptive field increases whenever it passes a $3 \times 3$ filter, resulting in many equivalent feature scales due to combination effects. One way of thinking of these blocks is that they expose a new dimension, **scale**, alongside the existing dimensions of depth, width, and cardinality.
Given the following machine learning model name: Multi-Attention Network, provide a description of the model
Given the following machine learning model name: Cosine Normalization, provide a description of the model
Multi-layer neural networks traditionally use dot products between the output vector of previous layer and the incoming weight vector as the input to activation function. The result of dot product is unbounded. To bound dot product and decrease the variance, **Cosine Normalization** uses cosine similarity or centered cosine similarity (Pearson Correlation Coefficient) instead of dot products in neural networks. Using cosine normalization, the output of a hidden unit is computed by: $$o = f(net_{norm})= f(\cos \theta) = f(\frac{\vec{w} \cdot \vec{x}} {\left|\vec{w}\right| \left|\vec{x}\right|})$$ where $net_{norm}$ is the normalized pre-activation, $\vec{w}$ is the incoming weight vector and $\vec{x}$ is the input vector, ($\cdot$) indicates dot product, $f$ is nonlinear activation function. Cosine normalization bounds the pre-activation between -1 and 1.
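The bounded pre-activation is easy to check numerically; the small epsilon guarding against zero-norm inputs is an assumption of this sketch:

```python
import numpy as np

def cosine_norm(w, x, eps=1e-12):
    # net_norm = cos(theta) = (w . x) / (|w| |x|), bounded in [-1, 1]
    return np.dot(w, x) / (np.linalg.norm(w) * np.linalg.norm(x) + eps)

w = np.array([1.0, 0.0])
aligned = cosine_norm(w, np.array([100.0, 0.0]))   # scale-invariant: ~1
orthogonal = cosine_norm(w, np.array([0.0, 5.0]))  # ~0
```

Unlike a raw dot product, scaling the input by 100 leaves the pre-activation unchanged, which is the variance-reduction property motivating the method.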
Given the following machine learning model name: Spatially-Adaptive Normalization, provide a description of the model
**SPADE**, or **Spatially-Adaptive Normalization**, is a conditional normalization method for semantic image synthesis. Similar to [Batch Normalization](https://www.paperswithcode.com/method/batch-normalization), the activation is normalized in a channel-wise manner and then modulated with a learned scale and bias. In SPADE, the mask is first projected onto an embedding space and then convolved to produce the modulation parameters $\gamma$ and $\beta$. Unlike prior conditional normalization methods, $\gamma$ and $\beta$ are not vectors but tensors with spatial dimensions. The produced $\gamma$ and $\beta$ are multiplied with and added to the normalized activation element-wise.
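The normalize-then-modulate step can be sketched in NumPy; producing $\gamma$ and $\beta$ from the mask with a small conv net is omitted here, so they are passed in directly as illustrative tensors:

```python
import numpy as np

def spade(x, gamma, beta, eps=1e-5):
    """x: (N, C, H, W). Channel-wise normalization over batch and spatial
    dims, then element-wise modulation with spatial gamma/beta tensors
    (in SPADE these come from convolving the segmentation mask)."""
    mu = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_norm = (x - mu) / np.sqrt(var + eps)
    return gamma * x_norm + beta

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 3, 4, 4))
out = spade(x, gamma=np.ones_like(x), beta=np.zeros_like(x))
```

With unit $\gamma$ and zero $\beta$ this reduces to plain channel-wise normalization, so each channel of the output has zero mean.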
Given the following machine learning model name: RoIAlign, provide a description of the model
**Region of Interest Align**, or **RoIAlign**, is an operation for extracting a small feature map from each RoI in detection and segmentation based tasks. It removes the harsh quantization of [RoI Pool](https://paperswithcode.com/method/roi-pooling), properly *aligning* the extracted features with the input. To avoid any quantization of the RoI boundaries or bins (using $x/16$ instead of $[x/16]$), RoIAlign uses bilinear interpolation to compute the exact values of the input features at four regularly sampled locations in each RoI bin, and the result is then aggregated (using max or average).
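The bilinear interpolation that RoIAlign evaluates at each of its regularly sampled locations can be sketched as follows; the feature map and sample coordinates are illustrative:

```python
import numpy as np

def bilinear_sample(fmap, y, x):
    """Evaluate fmap (H, W) at a continuous location (y, x) by bilinear
    interpolation of its four surrounding grid values."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    dy, dx = y - y0, x - x0
    return (fmap[y0, x0] * (1 - dy) * (1 - dx)
            + fmap[y0, x0 + 1] * (1 - dy) * dx
            + fmap[y0 + 1, x0] * dy * (1 - dx)
            + fmap[y0 + 1, x0 + 1] * dy * dx)

fmap = np.array([[0.0, 1.0], [2.0, 3.0]])
center = bilinear_sample(fmap, 0.5, 0.5)  # average of all four values
```

RoIAlign applies this at four regularly spaced points per bin and then aggregates them with max or average, instead of snapping coordinates to the grid as RoI Pool does.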
Given the following machine learning model name: Robust Predictable Control, provide a description of the model
**Robust Predictable Control**, or **RPC**, is an RL algorithm for learning policies that uses only a few bits of information. RPC brings together ideas from information bottlenecks, model-based RL, and bits-back coding. The main idea of RPC is that if the agent can accurately predict the future, then the agent will not need to observe as many bits from future observations. Precisely, the agent will learn a latent dynamics model that predicts the next representation using the current representation and action. In addition to predicting the future, the agent can also decrease the number of bits by changing its behavior. States where the dynamics are hard to predict will require more bits, so the agent will prefer visiting states where its learned model can accurately predict the next state.
Given the following machine learning model name: MDTVSFA, provide a description of the model
Given the following machine learning model name: MoGA-C, provide a description of the model
**MoGA-C** is a convolutional neural network optimized for mobile latency and discovered via Mobile GPU-Aware (MoGA) [neural architecture search](https://paperswithcode.com/method/neural-architecture-search). The basic building block is MBConvs (inverted residual blocks) from [MobileNetV2](https://paperswithcode.com/method/mobilenetv2). Squeeze-and-excitation layers are also experimented with.
Given the following machine learning model name: Graph InfoClust, provide a description of the model
Given the following machine learning model name: Poly-CAM, provide a description of the model
Given the following machine learning model name: Attention Model, provide a description of the model
Given the following machine learning model name: Seq2Edits, provide a description of the model
**Seq2Edits** is an open-vocabulary approach to sequence editing for natural language processing (NLP) tasks with a high degree of overlap between input and output texts. In this approach, each sequence-to-sequence transduction is represented as a sequence of edit operations, where each operation either replaces an entire source span with target tokens or keeps it unchanged. For text normalization, sentence fusion, sentence splitting & rephrasing, text simplification, and grammatical error correction, the approach improves explainability by associating each edit operation with a human-readable tag. Rather than generating the target sentence as a series of tokens, the model predicts a sequence of edit operations that, when applied to the source sentence, yields the target sentence. Each edit operates on a span in the source sentence and either copies, deletes, or replaces it with one or more target tokens. Edits are generated auto-regressively from left to right using a modified [Transformer](https://paperswithcode.com/method/transformer) architecture to facilitate learning of long-range dependencies.
Given the following machine learning model name: EMQAP, provide a description of the model
**EMQAP**, or **E-Manual Question Answering Pipeline**, is an approach for answering questions pertaining to electronics devices. Built upon the pretrained [RoBERTa](https://paperswithcode.com/method/roberta), it harbors a supervised multi-task learning framework which efficiently performs the dual tasks of identifying the section in the E-manual where the answer can be found and the exact answer span within that section.
Given the following machine learning model name: Concurrent Spatial and Channel Squeeze & Excitation (scSE), provide a description of the model
Combines the channel attention of the widely known [spatial squeeze and channel excitation (SE)](https://paperswithcode.com/method/squeeze-and-excitation-block) block and the spatial attention of the [channel squeeze and spatial excitation (sSE)](https://paperswithcode.com/method/channel-squeeze-and-spatial-excitation#) block to build a spatial and channel attention mechanism for image segmentation tasks.
Given the following machine learning model name: ARMA GNN, provide a description of the model
The ARMA GNN layer implements a rational graph filter with a recursive approximation.
Given the following machine learning model name: Bilateral Guided Aggregation Layer, provide a description of the model
**Bilateral Guided Aggregation Layer** is a feature fusion layer for semantic segmentation that aims to enhance mutual connections and fuse different types of feature representation. It was used in the [BiSeNet V2](https://paperswithcode.com/method/bisenet-v2) architecture, where it employs the contextual information of the Semantic Branch to guide the feature response of the Detail Branch. With different scale guidance, different scale feature representations can be captured, which inherently encodes the multi-scale information.
Given the following machine learning model name: Learning Cross-Modality Encoder Representations from Transformers, provide a description of the model
LXMERT is a model for learning vision-and-language cross-modality representations. It consists of a Transformer model with three encoders: an object relationship encoder, a language encoder, and a cross-modality encoder. The model takes two inputs: an image and its related sentence. The image is represented as a sequence of objects, whereas the sentence is represented as a sequence of words. By combining the self-attention and cross-attention layers, the model is able to generate language representations, image representations, and cross-modality representations from the input. The model is pre-trained with image-sentence pairs via five pre-training tasks: masked language modeling, masked object prediction via RoI-feature regression, masked object prediction via detected-label classification, cross-modality matching, and image question answering. These tasks help the model to learn both intra-modality and cross-modality relationships.
Given the following machine learning model name: FastPitch, provide a description of the model
**FastPitch** is a fully-parallel text-to-speech model based on FastSpeech, conditioned on fundamental frequency contours. The architecture of FastPitch is shown in the Figure. It is based on FastSpeech and composed mainly of two feed-forward [Transformer](https://paperswithcode.com/method/transformer) (FFTr) stacks. The first one operates in the resolution of input tokens, the second one in the resolution of the output frames. Let $x=\left(x\_{1}, \ldots, x\_{n}\right)$ be the sequence of input lexical units, and $\mathbf{y}=\left(y\_{1}, \ldots, y\_{t}\right)$ be the sequence of target mel-scale spectrogram frames. The first FFTr stack produces the hidden representation $\mathbf{h}=\operatorname{FFTr}(\mathbf{x})$. The hidden representation $h$ is used to make predictions about the duration and average pitch of every character with a 1-D CNN $$ \hat{\mathbf{d}}=\text { DurationPredictor }(\mathbf{h}), \quad \hat{\mathbf{p}}=\operatorname{PitchPredictor}(\mathbf{h}) $$ where $\hat{\mathbf{d}} \in \mathbb{N}^{n}$ and $\hat{\mathbf{p}} \in \mathbb{R}^{n}$. Next, the pitch is projected to match the dimensionality of the hidden representation $h \in$ $\mathbb{R}^{n \times d}$ and added to $\mathbf{h}$. The resulting sum $\mathbf{g}$ is discretely upsampled and passed to the output FFTr, which produces the output mel-spectrogram sequence $$ \mathbf{g}=\mathbf{h}+\operatorname{PitchEmbedding}(\mathbf{p}) $$ $$ \hat{\mathbf{y}}=\operatorname{FFTr}\left([\underbrace{g\_{1}, \ldots, g\_{1}}\_{d\_{1}}, \ldots \underbrace{g\_{n}, \ldots, g\_{n}}_{d\_{n}}]\right) $$ Ground truth $\mathbf{p}$ and $\mathbf{d}$ are used during training, and predicted $\hat{\mathbf{p}}$ and $\hat{\mathbf{d}}$ are used during inference. The model optimizes mean-squared error (MSE) between the predicted and ground-truth modalities $$ \mathcal{L}=\|\hat{\mathbf{y}}-\mathbf{y}\|\_{2}^{2}+\alpha\|\hat{\mathbf{p}}-\mathbf{p}\|\_{2}^{2}+\gamma\|\hat{\mathbf{d}}-\mathbf{d}\|\_{2}^{2} $$
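The discrete upsampling step, where each token representation $g\_i$ is repeated for $d\_i$ output frames, can be sketched in NumPy with illustrative durations:

```python
import numpy as np

def upsample_by_duration(g, d):
    """g: (n, hidden) token-level representations; d: (n,) integer
    durations. Each g_i is repeated d_i times along the time axis."""
    return np.repeat(g, d, axis=0)

g = np.array([[1.0, 1.0], [2.0, 2.0]])
frames = upsample_by_duration(g, np.array([2, 3]))  # 2 + 3 = 5 output frames
```

The resulting frame-rate sequence is what the second FFTr stack consumes to predict the mel-spectrogram.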
Given the following machine learning model name: DouZero, provide a description of the model
**DouZero** is an AI system for the card game DouDizhu that enhances traditional Monte-Carlo methods with deep neural networks, action encoding, and parallel actors. The [Q-network](https://paperswithcode.com/method/dqn) of DouZero consists of an [LSTM](https://paperswithcode.com/method/lstm) to encode historical actions and six layers of [MLP](https://paperswithcode.com/method/feedforward-network) with a hidden dimension of 512. The network predicts a value for a given state-action pair from the concatenated representation of the action and state.
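As a toy illustration of scoring a concatenated state-action representation with an MLP head (the real network prepends an LSTM over the action history and uses six 512-unit layers; the weights and dimensions here are hypothetical):

```python
import numpy as np

def mlp_q_value(state, action, weights):
    """Toy Q-function: feed the concatenated state-action vector through a
    small ReLU MLP and return a scalar value for the pair.
    `weights` is a list of (W, b) layer parameters; the last layer is linear."""
    x = np.concatenate([state, action])
    for W, b in weights[:-1]:
        x = np.maximum(0.0, W @ x + b)  # hidden layers with ReLU
    W, b = weights[-1]
    return float(W @ x + b)             # scalar Q(s, a)
```

At decision time one would evaluate this value for every legal action and pick the argmax, which is how a value network of this shape is typically used.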
Given the following machine learning model name: Local Importance-based Pooling, provide a description of the model
**Local Importance-based Pooling (LIP)** is a pooling layer that can enhance discriminative features during the downsampling procedure by learning adaptive importance weights based on inputs. By using a learnable network $G$ within the pooling operation $F$, the importance function is no longer limited to hand-crafted forms and is able to learn the criterion for the discriminativeness of features. Also, the window size of LIP is restricted to be no less than the stride, to fully utilize the feature map and avoid the fixed-interval sampling issue. More specifically, the importance function in LIP is implemented by a tiny fully convolutional network, which learns to produce the importance map from inputs in an end-to-end manner.
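A minimal 1-D sketch of the idea, assuming the importance logits produced by the learned network $G$ are supplied directly (real LIP operates on 2-D feature maps with a tiny fully convolutional $G$): each window's output is an average of the inputs weighted by the exponentiated importance logits.

```python
import numpy as np

def lip_pool(x, logits, window=2, stride=2):
    """Local importance-based pooling on a 1-D feature map.
    x, logits: equal-length arrays; logits would come from a learned network G.
    Per window: out = sum(exp(logit) * x) / sum(exp(logit))."""
    out = []
    for start in range(0, len(x) - window + 1, stride):
        xs = x[start:start + window]
        w = np.exp(logits[start:start + window])
        out.append(float(np.sum(w * xs) / np.sum(w)))
    return out
```

With uniform logits this reduces to average pooling; as one logit dominates, the output approaches max pooling on that element, which is the sense in which the learned weights interpolate between hand-crafted schemes.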
Given the following machine learning model name: Primal Wasserstein Imitation Learning, provide a description of the model
**Primal Wasserstein Imitation Learning**, or **PWIL**, is a method for imitation learning that relates to the primal form of the Wasserstein distance between the expert and the agent state-action distributions. The reward function is derived offline, as opposed to recent adversarial IL algorithms that learn a reward function through interactions with the environment, and it requires little fine-tuning.
Given the following machine learning model name: DAMO-YOLO, provide a description of the model
Given the following machine learning model name: ReInfoSelect, provide a description of the model
**ReInfoSelect** is a reinforcement weak supervision selection method for information retrieval. It learns to select the anchor-document pairs that best weakly supervise the neural ranker (action), using the ranking performance on a handful of relevance labels as the reward. Iteratively, for a batch of anchor-document pairs, ReInfoSelect backpropagates the gradients through the neural ranker, gathers its NDCG reward, and optimizes the data selection network using policy gradients, until the neural ranker's performance peaks on the target relevance metrics (convergence).
Given the following machine learning model name: TD-VAE, provide a description of the model
**TD-VAE**, or **Temporal Difference VAE**, is a generative sequence model that learns representations containing explicit beliefs about states several steps into the future, and that can be rolled out directly without single-step transitions. TD-VAE is trained on pairs of temporally separated time points, using an analogue of [temporal difference learning](https://paperswithcode.com/method/td-lambda) used in reinforcement learning.
Given the following machine learning model name: Balanced Feature Pyramid, provide a description of the model
**Balanced Feature Pyramid** is a feature pyramid module. It differs from approaches like [FPNs](https://paperswithcode.com/method/fpn) that integrate multi-level features using lateral connections. Instead, the BFP strengthens the multi-level features using the same deeply integrated balanced semantic features. The pipeline is shown in the Figure to the right. It consists of four steps: rescaling, integrating, refining and strengthening. Features at resolution level $l$ are denoted as $C\_{l}$. The number of multi-level features is denoted as $L$. The indexes of the involved lowest and highest levels are denoted as $l\_{min}$ and $l\_{max}$. In the Figure, $C\_{2}$ has the highest resolution. To integrate multi-level features and preserve their semantic hierarchy at the same time, the multi-level features {$C\_{2}, C\_{3}, C\_{4}, C\_{5}$} are first resized to an intermediate size, i.e., the same size as $C\_{4}$, with interpolation and max-pooling respectively. Once the features are rescaled, the balanced semantic features are obtained by simple averaging: $$ C = \frac{1}{L}\sum^{l\_{max}}\_{l=l\_{min}}C\_{l} $$ The obtained features are then rescaled using the same but reverse procedure to strengthen the original features. Each resolution obtains equal information from the others in this procedure. Note that this procedure does not contain any parameters; the authors observe improvement with this nonparametric method, proving the effectiveness of the information flow. The balanced semantic features can be further refined to be more discriminative. The authors found that both refinement with convolutions directly and the non-local module work well, but the non-local module works in a more stable way; therefore, embedded Gaussian non-local attention is utilized by default. The refining step helps enhance the integrated features and further improve the results. With this method, features from low-level to high-level are aggregated at the same time. The outputs {$P\_{2}, P\_{3}, P\_{4}, P\_{5}$} are used for object detection following the same pipeline as in FPN.
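The rescale-and-average step can be sketched on 1-D toy feature maps. Nearest-neighbour resizing stands in for the interpolation and max-pooling used in the actual module, purely for brevity:

```python
import numpy as np

def balance_features(feats, target_len):
    """Rescale each feature map to a common intermediate size, then average:
    C = (1/L) * sum_l C_l. Nearest-neighbour resize replaces the real
    interpolation (upsampling) / max-pooling (downsampling)."""
    rescaled = []
    for f in feats:
        f = np.asarray(f, dtype=float)
        idx = np.round(np.linspace(0, len(f) - 1, target_len)).astype(int)
        rescaled.append(f[idx])
    return np.mean(rescaled, axis=0)  # parameter-free integration
```

Note the step has no learned parameters, matching the description: every level contributes equally to the balanced semantic features.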
Given the following machine learning model name: Skim and Intensive Reading Model, provide a description of the model
**Skim and Intensive Reading Model**, or **SIRM**, is a deep neural network for figuring out implied textual meaning. It consists of two main components, namely the skim reading component and intensive reading component. N-gram features are quickly extracted from the skim reading component, which is a combination of several convolutional neural networks, as skim (entire) information. An intensive reading component enables a hierarchical investigation for both local (sentence) and global (paragraph) representation, which encapsulates the current embedding and the contextual information with a dense connection.
Given the following machine learning model name: SpineNet, provide a description of the model
**SpineNet** is a convolutional neural network backbone with scale-permuted intermediate features and cross-scale connections that is learned on an object detection task by [Neural Architecture Search](https://paperswithcode.com/method/neural-architecture-search).
Given the following machine learning model name: Bottleneck Transformer Block, provide a description of the model
A **Bottleneck Transformer Block** is a block used in [Bottleneck Transformers](https://www.paperswithcode.com/method/bottleneck-transformer) that replaces the spatial 3 × 3 [convolution](https://paperswithcode.com/method/convolution) layer in a [Residual Block](https://paperswithcode.com/method/residual-block) with Multi-Head Self-Attention (MHSA).
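For intuition, here is a single-head, projection-free self-attention over flattened spatial positions; the actual block uses multiple heads with learned query/key/value projections and relative position encodings:

```python
import numpy as np

def self_attention(x):
    """Toy self-attention: x has shape (positions, channels); queries, keys
    and values are all x itself (no learned projections)."""
    scores = x @ x.T / np.sqrt(x.shape[1])        # scaled dot-product scores
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)       # softmax over positions
    return attn @ x                               # attention-weighted values
```

Unlike a 3 × 3 convolution, every output position here aggregates information from all positions, which is the global-context property the MHSA replacement buys.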
Given the following machine learning model name: Contextual Word Vectors, provide a description of the model
**CoVe**, or **Contextualized Word Vectors**, uses a deep [LSTM](https://paperswithcode.com/method/lstm) encoder from an attentional sequence-to-sequence model trained for machine translation to contextualize word vectors. $\text{CoVe}$ word embeddings are therefore a function of the entire input sequence. These word embeddings can then be used in downstream tasks by concatenating them with $\text{GloVe}$ embeddings: $$ v = \left[\text{GloVe}\left(x\right), \text{CoVe}\left(x\right)\right]$$ and then feeding these in as features for the task-specific models.
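The feature construction in the formula above is a plain concatenation; a sketch with toy vectors standing in for real GloVe/CoVe embeddings:

```python
import numpy as np

def contextualized_vector(glove_vec, cove_vec):
    """v = [GloVe(x); CoVe(x)]: concatenate the static embedding with the
    context-dependent one produced by the MT-trained LSTM encoder."""
    return np.concatenate([glove_vec, cove_vec])
```

The resulting vector's dimensionality is simply the sum of the two embedding sizes, and it is fed as input features to the downstream task-specific model.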
Given the following machine learning model name: RegionViT, provide a description of the model
**RegionViT** consists of two tokenization processes that convert an image into regional (upper path) and local tokens (lower path). Each tokenization is a convolution with a different patch size: the patch size of regional tokens is $28^2$ while $4^2$ is used for local tokens, with dimensions projected to $C$. This means that one regional token covers $7^2$ local tokens based on spatial locality, making the window size of a local region $7^2$. At stage 1, the two sets of tokens are passed through the proposed regional-to-local transformer encoders. For the later stages, to balance the computational load and to obtain feature maps at different resolutions, the approach uses a downsampling process that halves the spatial resolution while doubling the channel dimension, as in CNNs, on both regional and local tokens before going to the next stage. Finally, at the end of the network, it simply averages the remaining regional tokens as the final embedding for classification, while detection uses all local tokens at each stage since they provide more fine-grained location information. With this pyramid structure, the ViT can generate multi-scale features and hence can be easily extended to more vision applications, e.g., object detection, rather than image classification only.
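The patch-size arithmetic can be checked directly; the 224 × 224 input size used here is an assumed example, not stated above:

```python
def token_counts(img_size=224, regional_patch=28, local_patch=4):
    """Number of regional tokens, local tokens, and local tokens per region
    for a square input tokenized with the two patch sizes."""
    n_regional = (img_size // regional_patch) ** 2
    n_local = (img_size // local_patch) ** 2
    per_region = (regional_patch // local_patch) ** 2  # (28/4)^2 = 7^2 = 49
    return n_regional, n_local, per_region
```

The third value is independent of the image size: a regional token always covers $(28/4)^2 = 7^2$ local tokens, which fixes the local window size at $7^2$.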
Given the following machine learning model name: Visual-Spatial-Graph Network, provide a description of the model
**Visual-Spatial-Graph Network** (VSGNet) is a network for human-object interaction detection. It extracts visual features from the image representing the human-object pair, refines the features with spatial configurations of the pair, and utilizes the structural connections between the pair via graph convolutions.