| prompts | description |
|---|---|
Given the following machine learning model name: O-Net, provide a description of the model | |
Given the following machine learning model name: Non-Local Block, provide a description of the model | A **Non-Local Block** is an image block module used in neural networks that wraps a [non-local operation](https://paperswithcode.com/method/non-local-operation). We can define a non-local block as:
$$ \mathbf{z}\_{i} = W\_{z}\mathbf{y}\_{i} + \mathbf{x}\_{i} $$
where $\mathbf{y}\_{i}$ is the output from the non-local operation and $+\mathbf{x}\_{i}$ is a [residual connection](https://paperswithcode.com/method/residual-connection). |
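A minimal NumPy sketch of the formula, assuming positions are flattened into rows of a `(num_positions, channels)` matrix (a layout choice not fixed by the equation itself):

```python
import numpy as np

def non_local_block(x, y, W_z):
    """Residual wrapper around a non-local operation: z_i = W_z y_i + x_i.

    x: block input, y: non-local operation's output, both (positions, channels);
    W_z: (channels, channels) learned projection.
    """
    return y @ W_z.T + x
```

Initializing `W_z` to zero makes the block an identity mapping, which is why it can be inserted into a pretrained network without disturbing its initial behavior.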
Given the following machine learning model name: TransferQA, provide a description of the model | **TransferQA** is a transferable generative QA model, built upon [T5](https://paperswithcode.com/method/t5), that combines extractive QA and multi-choice QA via a text-to-text [transformer](https://paperswithcode.com/method/transformer) framework, and tracks both categorical slots and non-categorical slots in dialogue state tracking (DST). In addition, it introduces two effective ways to construct unanswerable questions, namely negative question sampling and context truncation, which enable the model to handle “none” value slots in the zero-shot DST setting. |
Given the following machine learning model name: Model-Agnostic Meta-Learning, provide a description of the model | **MAML**, or **Model-Agnostic Meta-Learning**, is a model and task-agnostic algorithm for meta-learning that trains a model’s parameters such that a small number of gradient updates will lead to fast learning on a new task.
Consider a model represented by a parametrized function $f\_{\theta}$ with parameters $\theta$. When adapting to a new task $\mathcal{T}\_{i}$, the model’s parameters $\theta$ become $\theta'\_{i}$. With MAML, the updated parameter vector $\theta'\_{i}$ is computed using one or more gradient descent updates on task $\mathcal{T}\_{i}$. For example, when using one gradient update,
$$ \theta'\_{i} = \theta - \alpha\nabla\_{\theta}\mathcal{L}\_{\mathcal{T}\_{i}}\left(f\_{\theta}\right) $$
The step size $\alpha$ may be fixed as a hyperparameter or meta-learned. The model parameters are trained by optimizing for the performance of $f\_{\theta'\_{i}}$ with respect to $\theta$ across tasks sampled from $p\left(\mathcal{T}\right)$. More concretely, the meta-objective is as follows:
$$ \min\_{\theta} \sum\_{\mathcal{T}\_{i} \sim p\left(\mathcal{T}\right)} \mathcal{L}\_{\mathcal{T}\_{i}}\left(f\_{\theta'\_{i}}\right) = \sum\_{\mathcal{T}\_{i} \sim p\left(\mathcal{T}\right)} \mathcal{L}\_{\mathcal{T}\_{i}}\left(f\_{\theta - \alpha\nabla\_{\theta}\mathcal{L}\_{\mathcal{T}\_{i}}\left(f\_{\theta}\right)}\right) $$
Note that the meta-optimization is performed over the model parameters $\theta$, whereas the objective is computed using the updated model parameters $\theta'$. In effect MAML aims to optimize the model parameters such that one or a small number of gradient steps on a new task will produce maximally effective behavior on that task. The meta-optimization across tasks is performed via stochastic gradient descent ([SGD](https://paperswithcode.com/method/sgd)), such that the model parameters $\theta$ are updated as follows:
$$ \theta \leftarrow \theta - \beta\nabla\_{\theta} \sum\_{\mathcal{T}\_{i} \sim p\left(\mathcal{T}\right)} \mathcal{L}\_{\mathcal{T}\_{i}}\left(f\_{\theta'\_{i}}\right) $$
where $\beta$ is the meta step size. |
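The two-level update can be sketched on a toy 1-D problem where task $\mathcal{T}\_{i}$ has loss $\left(\theta - c\_{i}\right)^{2}$. For brevity the meta-gradient is taken numerically here — an illustrative shortcut, since MAML actually backpropagates through the inner update:

```python
def maml_step(theta, tasks, alpha, beta, eps=1e-5):
    """One MAML meta-update. tasks: list of (loss_fn, grad_fn) pairs."""
    def adapted(th, grad_fn):
        return th - alpha * grad_fn(th)          # inner gradient step

    def meta_loss(th):                           # sum of post-adaptation losses
        return sum(loss(adapted(th, grad)) for loss, grad in tasks)

    # finite-difference stand-in for the second-order meta-gradient
    meta_grad = (meta_loss(theta + eps) - meta_loss(theta - eps)) / (2 * eps)
    return theta - beta * meta_grad              # outer (meta) update

# Two tasks whose optima sit at c = +1 and c = -1; the meta-optimum is 0.
tasks = [(lambda t, c=c: (t - c) ** 2, lambda t, c=c: 2 * (t - c))
         for c in (1.0, -1.0)]
```

Each meta-step pulls $\theta$ toward the point from which a single inner gradient step adapts best to either task.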
Given the following machine learning model name: Graph sampling based inductive learning method, provide a description of the model | Scalable method to train large scale GNN models via sampling small subgraphs. |
Given the following machine learning model name: Reformer, provide a description of the model | **Reformer** is a [Transformer](https://paperswithcode.com/method/transformer) based architecture that seeks to make efficiency improvements. [Dot-product attention](https://paperswithcode.com/method/dot-product-attention) is replaced by one that uses locality-sensitive hashing, changing its complexity
from $O\left(L^{2}\right)$ to $O\left(L\log{L}\right)$, where $L$ is the length of the sequence. Furthermore, the Reformer uses reversible residual layers instead of the standard residuals, which allows activations to be stored only once during training instead of $N$ times, where $N$ is the number of layers. |
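The reversible-residual trick can be sketched generically: the input is split into two halves, and because each output half determines the other's input, activations can be recomputed in the backward pass instead of being stored per layer. Here `F` and `G` are arbitrary stand-ins for the attention and feed-forward sublayers:

```python
def rev_forward(x1, x2, F, G):
    """Reversible residual layer: (x1, x2) -> (y1, y2)."""
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def rev_inverse(y1, y2, F, G):
    """Reconstruct the layer inputs from its outputs by running it backwards."""
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2
```

Because `rev_inverse` recovers the inputs exactly (up to floating-point error), only the final layer's activations need to be kept in memory.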
Given the following machine learning model name: Surface Normal-based Spatial Propagation, provide a description of the model | Inspired by the spatial propagation mechanism used in the depth completion task \cite{NLSPN}, we introduce a normal-incorporated non-local disparity propagation module, in which NDP serves as a hub to generate non-local affinities and offsets for spatial propagation at the disparity level. The motivation is that the sampled pixels should be selected from edges and occluded regions. The propagation process aggregates disparities via plane-affinity relations, which alleviates disparity blurring at object edges caused by fronto-parallel windows. The disparities in occluded areas are also optimized at the same time, being propagated from non-occluded areas where the predicted disparities have high confidence. |
Given the following machine learning model name: Generalized Mean Pooling, provide a description of the model | **Generalized Mean Pooling (GeM)** computes the generalized mean of each channel in a tensor. Formally:
$$ \textbf{e} = \left[\left(\frac{1}{|\Omega|}\sum\_{u\in{\Omega}}x^{p}\_{cu}\right)^{\frac{1}{p}}\right]\_{c=1,\cdots,C} $$
where $p > 0$ is a parameter. Setting this exponent as $p > 1$ increases the contrast of the pooled feature map and focuses on the salient features of the image. GeM is a generalization of the [average pooling](https://paperswithcode.com/method/average-pooling) commonly used in classification networks ($p = 1$) and of the spatial max-pooling layer ($p = \infty$).
Source: [MultiGrain](https://paperswithcode.com/method/multigrain)
Image Source: [Eva Mohedano](https://www.google.com/url?sa=i&url=https%3A%2F%2Fwww.slideshare.net%2Fxavigiro%2Fd1l5-contentbased-image-retrieval-upc-2018-deep-learning-for-computer-vision&psig=AOvVaw2-9Hx23FNGFDe4GHU22Oo5&ust=1591798200590000&source=images&cd=vfe&ved=0CA0QjhxqFwoTCOiP-9P09OkCFQAAAAAdAAAAABAD) |
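A minimal NumPy sketch of the formula, assuming a single `(C, H, W)` tensor with non-negative activations (as after a ReLU):

```python
import numpy as np

def gem_pool(x, p):
    """Generalized mean over the spatial positions of each channel.

    x: (C, H, W) non-negative feature tensor; returns a (C,) descriptor.
    """
    return np.mean(x ** p, axis=(1, 2)) ** (1.0 / p)
```

With `p = 1` this is exactly average pooling; as `p` grows it approaches max pooling over each channel.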
Given the following machine learning model name: Watch Your Step, provide a description of the model | |
Given the following machine learning model name: ProxylessNet-Mobile, provide a description of the model | **ProxylessNet-Mobile** is a convolutional neural architecture learnt with the [ProxylessNAS](https://paperswithcode.com/method/proxylessnas) [neural architecture search](https://paperswithcode.com/method/neural-architecture-search) algorithm that is optimized for mobile devices. It uses inverted residual blocks (MBConvs) from [MobileNetV2](https://paperswithcode.com/method/mobilenetv2) as its basic building block. |
Given the following machine learning model name: MobileNetV1, provide a description of the model | **MobileNet** is a type of convolutional neural network designed for mobile and embedded vision applications. They are based on a streamlined architecture that uses depthwise separable convolutions to build lightweight deep neural networks that can have low latency for mobile and embedded devices. |
Given the following machine learning model name: Recurrent Back Projection Network, provide a description of the model | |
Given the following machine learning model name: Bidirectional GRU, provide a description of the model | A **Bidirectional GRU**, or **BiGRU**, is a sequence processing model that consists of two [GRUs](https://paperswithcode.com/method/gru): one taking the input in a forward direction, and the other in a backward direction. It is a bidirectional recurrent neural network with only the input and forget gates.
Image Source: *Rana R (2016). Gated Recurrent Unit (GRU) for Emotion Classification from Noisy Speech.* |
Given the following machine learning model name: Global-Local Attention, provide a description of the model | **Global-Local Attention** is a type of attention mechanism used in the [ETC](https://paperswithcode.com/method/etc) architecture. ETC receives two separate input sequences: the global input $x^{g} = (x^{g}\_{1}, \dots, x^{g}\_{n\_{g}})$ and the long input $x^{l} = (x^{l}\_{1}, \dots x^{l}\_{n\_{l}})$. Typically, the long input contains the input a [standard Transformer](https://paperswithcode.com/method/transformer) would receive, while the global input contains a much smaller number of auxiliary tokens ($n\_{g} \ll n\_{l}$). Attention is then split into four separate pieces: global-to-global (g2g), global-to-long (g2l), long-to-global (l2g), and long-to-long (l2l). Attention in the l2l piece (the most computationally expensive piece) is restricted to a fixed radius $r \ll n\_{l}$. To compensate for this limited attention span, the tokens in the global input have unrestricted attention, and thus long input tokens can transfer information to each other through global input tokens. Accordingly, g2g, g2l, and l2g pieces of attention are unrestricted. |
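The fixed-radius restriction on the l2l piece amounts to a banded boolean attention mask, sketched here in NumPy (the g2g, g2l and l2g pieces would simply use all-True masks):

```python
import numpy as np

def local_attention_mask(n_l, radius):
    """l2l mask: long token i may attend to long token j iff |i - j| <= radius."""
    idx = np.arange(n_l)
    return np.abs(idx[:, None] - idx[None, :]) <= radius
```

Masking attention scores with this band reduces the l2l cost from $O(n\_{l}^{2})$ to $O(n\_{l} \cdot r)$.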
Given the following machine learning model name: End-To-End Memory Network, provide a description of the model | An **End-to-End Memory Network** is a neural network with a recurrent attention model over a possibly large external memory. The architecture is a form of [Memory Network](https://paperswithcode.com/method/memory-network), but unlike the model in that work, it is trained end-to-end, and hence requires significantly less supervision during training. It can also be seen as an extension of RNNsearch to the case where multiple computational steps (hops) are performed per output symbol.
The model takes a discrete set of inputs $x\_{1}, \dots, x\_{n}$ that are to be stored in the memory, a query $q$, and outputs an answer $a$. Each of the $x\_{i}$, $q$, and $a$ contains symbols coming from a dictionary with $V$ words. The model writes all $x$ to the memory up to a fixed buffer size, and then finds a continuous representation for the $x$ and $q$. The continuous representation is then processed via multiple hops to output $a$. |
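A single hop of the read operation can be sketched as attention over memory slots. Here `memory_in` and `memory_out` stand for the input and output embeddings of the stored $x\_{i}$; embedding matrices, multiple hops, and the final softmax over answers are omitted:

```python
import numpy as np

def memory_hop(q, memory_in, memory_out):
    """One hop: match the query against input memories, read output memories."""
    scores = memory_in @ q
    p = np.exp(scores - scores.max())
    p /= p.sum()                      # softmax attention over memory slots
    return memory_out.T @ p           # weighted sum of output embeddings
```

In the full model the hop output is added to the query embedding and the process repeats for each additional hop.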
Given the following machine learning model name: SuperpixelGridCut, SuperpixelGridMean, SuperpixelGridMix, provide a description of the model | Karim Hammoudi, Adnane Cabani, Bouthaina Slika, Halim Benhabiles, Fadi Dornaika and Mahmoud Melkemi. SuperpixelGridCut, SuperpixelGridMean and SuperpixelGridMix Data Augmentation, arXiv:2204.08458, 2022. https://doi.org/10.48550/arxiv.2204.08458 |
Given the following machine learning model name: Separate And Diffuse, provide a description of the model | |
Given the following machine learning model name: Canonical Tensor Decomposition with N3 Regularizer, provide a description of the model | Canonical Tensor Decomposition, trained with N3 regularizer |
Given the following machine learning model name: Canvas Method, provide a description of the model | **Canvas Method** is a method for inference attacks on object detection models. It draws a predicted bounding box distribution on an empty canvas for an attack model input. The canvas is initially set to an image of 300$\times$300 pixels in size, where every pixel has a value of zero and the boxes drawn on the canvas have the same center as the predicted boxes and the same intensity as the prediction scores. |
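A sketch of the drawing step in NumPy. Taking the maximum where boxes overlap is an assumption here, since the description does not specify how overlaps are handled:

```python
import numpy as np

def draw_canvas(boxes, scores, size=300):
    """Fill each predicted box on a zero canvas with its prediction score.

    boxes: iterable of integer (x1, y1, x2, y2); scores: matching confidences.
    """
    canvas = np.zeros((size, size))
    for (x1, y1, x2, y2), s in zip(boxes, scores):
        canvas[y1:y2, x1:x2] = np.maximum(canvas[y1:y2, x1:x2], s)
    return canvas
```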
Given the following machine learning model name: Good Feature Matching, provide a description of the model | **Good Feature Matching** is an active map-to-frame feature matching method. Feature matching effort is tied to submatrix selection, which has combinatorial time complexity and requires choosing a scoring metric. Via simulation, the Max-logDet matrix revealing metric is shown to perform best. |
Given the following machine learning model name: Sparsemax, provide a description of the model | **Sparsemax** is a type of activation/output function similar to the traditional [softmax](https://paperswithcode.com/method/softmax), but able to output sparse probabilities.
$$ \text{sparsemax}\left(\mathbf{z}\right) = \underset{\mathbf{p} \in \Delta^{K-1}}{\arg\min} ||\mathbf{p} - \mathbf{z}||^{2} $$ |
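This Euclidean projection onto the probability simplex has a closed-form solution via sorting; a NumPy sketch:

```python
import numpy as np

def sparsemax(z):
    """Project z onto the probability simplex, yielding sparse probabilities."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]                    # descending
    k = np.arange(1, z.size + 1)
    cssv = np.cumsum(z_sorted)
    support = z_sorted + (1.0 - cssv) / k > 0      # coordinates kept nonzero
    k_z = k[support][-1]
    tau = (cssv[k_z - 1] - 1.0) / k_z              # threshold
    return np.maximum(z - tau, 0.0)
```

Unlike softmax, coordinates below the threshold receive exactly zero probability; ties and near-uniform inputs still yield a valid distribution.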
Given the following machine learning model name: Batchboost, provide a description of the model | **Batchboost** is a variation on [MixUp](https://paperswithcode.com/method/mixup) that instead of mixing just two images, mixes many images together. |
Given the following machine learning model name: GCNet, provide a description of the model | A **Global Context Network**, or **GCNet**, utilises global context blocks to model long-range dependencies in images. It is based on the [Non-Local Network](https://paperswithcode.com/method/non-local-block), but it modifies the architecture so less computation is required. Global context blocks are applied to multiple layers in a backbone network to construct the GCNet. |
Given the following machine learning model name: DenseNAS, provide a description of the model | **DenseNAS** is a [neural architecture search](https://paperswithcode.com/method/neural-architecture-search) method that utilises a densely connected search space. The search space is represented as a dense super network, which is built upon designed routing blocks. In the super network, routing blocks are densely connected and we search for the best path between them to derive the final architecture. A chained cost estimation algorithm is used to approximate the model cost during the search. |
Given the following machine learning model name: Laplacian EigenMap, provide a description of the model | |
Given the following machine learning model name: Receptive Field Block, provide a description of the model | **Receptive Field Block (RFB)** is a module for strengthening the deep features learned from lightweight CNN models so that they can contribute to fast and accurate detectors. Specifically, RFB makes use of multi-branch pooling with varying kernels corresponding to RFs of different sizes, applies [dilated convolution](https://paperswithcode.com/method/dilated-convolution) layers to control their eccentricities, and reshapes them to generate
the final representation. |
Given the following machine learning model name: Self-critical Sequence Training, provide a description of the model | |
Given the following machine learning model name: Distribution-induced Bidirectional Generative Adversarial Network for Graph Representation Learning, provide a description of the model | **DBGAN** is a method for graph representation learning. Instead of the widely used normal distribution assumption, the prior distribution of latent representation in DBGAN is estimated in a structure-aware way, which implicitly bridges the graph and feature spaces by prototype learning.
Source: [Distribution-induced Bidirectional Generative Adversarial Network for Graph Representation Learning](https://arxiv.org/abs/1912.01899) |
Given the following machine learning model name: Greedy Policy Search, provide a description of the model | **Greedy Policy Search** (GPS) is a simple algorithm that learns a policy for test-time data augmentation based on the predictive performance on a validation set. GPS starts with an empty policy and builds it in an iterative fashion. Each step selects a sub-policy that provides the largest improvement in calibrated log-likelihood of ensemble predictions and adds it to the current policy. |
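The greedy loop can be sketched generically. The candidate pool, sub-policy sampling, and scoring by calibrated log-likelihood on a validation set are all abstracted behind `score_fn`, so treat this as a schematic rather than the authors' implementation:

```python
def greedy_policy_search(candidates, score_fn, steps):
    """Grow a policy greedily: each step appends the candidate sub-policy
    that maximizes the score of the resulting policy (repeats allowed)."""
    policy = []
    for _ in range(steps):
        best = max(candidates, key=lambda c: score_fn(policy + [c]))
        policy.append(best)
    return policy
```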
Given the following machine learning model name: Dueling Network, provide a description of the model | A **Dueling Network** is a type of Q-Network that has two streams to separately estimate (scalar) state-value and the advantages for each action. Both streams share a common convolutional feature learning module. The two streams are combined via a special aggregating layer to produce an
estimate of the state-action value function Q as shown in the figure to the right.
The last module uses the following mapping:
$$ Q\left(s, a; \theta, \alpha, \beta\right) = V\left(s; \theta, \beta\right) + \left(A\left(s, a; \theta, \alpha\right) - \frac{1}{|\mathcal{A}|}\sum\_{a'}A\left(s, a'; \theta, \alpha\right)\right) $$
This formulation is chosen for identifiability so that the advantage function has zero advantage for the chosen action, but instead of a maximum we use an average operator to increase the stability of the optimization. |
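The aggregating layer reduces to a few lines once the two stream outputs are available (NumPy; scalar state-value, one advantage per action):

```python
import numpy as np

def dueling_q(value, advantages):
    """Combine V(s) and A(s, a) into Q(s, a) via mean-subtracted aggregation."""
    advantages = np.asarray(advantages, dtype=float)
    return value + (advantages - advantages.mean())
```

A useful consequence of the mean subtraction is that the Q-values average exactly to the state value `V(s)`.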
Given the following machine learning model name: Mogrifier LSTM, provide a description of the model | The **Mogrifier LSTM** is an extension to the [LSTM](https://paperswithcode.com/method/lstm) where the LSTM’s input $\mathbf{x}$ is gated conditioned on the output of the previous step $\mathbf{h}\_{prev}$. Next, the gated input is used in a similar manner to gate the output of the
previous time step. After a couple of rounds of this mutual gating, the last updated $\mathbf{x}$ and $\mathbf{h}\_{prev}$ are fed to an LSTM.
In detail, the Mogrifier is an LSTM where two inputs $\mathbf{x}$ and $\mathbf{h}\_{prev}$ modulate one another in an alternating fashion before the usual LSTM computation takes place. That is: $ \text{Mogrify}\left(\mathbf{x}, \mathbf{c}\_{prev}, \mathbf{h}\_{prev}\right) = \text{LSTM}\left(\mathbf{x}^{↑}, \mathbf{c}\_{prev}, \mathbf{h}^{↑}\_{prev}\right)$ where the modulated inputs $\mathbf{x}^{↑}$ and $\mathbf{h}^{↑}\_{prev}$ are defined as the highest indexed $\mathbf{x}^{i}$ and $\mathbf{h}^{i}\_{prev}$, respectively, from the interleaved sequences:
$$ \mathbf{x}^{i} = 2\sigma\left(\mathbf{Q}^{i}\mathbf{h}^{i-1}\_{prev}\right) \odot \mathbf{x}^{i-2} \text{ for odd } i \in \left[1 \dots r\right] $$
$$ \mathbf{h}^{i}\_{prev} = 2\sigma\left(\mathbf{R}^{i}\mathbf{x}^{i-1}\right) \odot \mathbf{h}^{i-2}\_{prev} \text{ for even } i \in \left[1 \dots r\right] $$
with $\mathbf{x}^{-1} = \mathbf{x}$ and $\mathbf{h}^{0}\_{prev} = \mathbf{h}\_{prev}$. The number of "rounds", $r \in \mathbb{N}$, is a hyperparameter; $r = 0$ recovers the LSTM. Multiplication with the constant 2 ensures that randomly initialized $\mathbf{Q}^{i}$, $\mathbf{R}^{i}$ matrices result in transformations close to identity. To reduce the number of additional model parameters, the $\mathbf{Q}^{i}$, $\mathbf{R}^{i}$ matrices are typically factorized as products of low-rank matrices: $\mathbf{Q}^{i} = \mathbf{Q}^{i}\_{left}\mathbf{Q}^{i}\_{right}$ with $\mathbf{Q}^{i} \in \mathbb{R}^{m\times{n}}$, $\mathbf{Q}^{i}\_{left} \in \mathbb{R}^{m\times{k}}$, $\mathbf{Q}^{i}\_{right} \in \mathbb{R}^{k\times{n}}$, where $k < \min\left(m, n\right)$ is the rank. |
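The alternating gating rounds can be sketched directly from the two update equations (NumPy, full-rank `Q`/`R` matrices for simplicity; the LSTM call itself is omitted):

```python
import numpy as np

def mogrify(x, h_prev, Q, R, r):
    """r rounds of mutual gating before the LSTM call.

    Q[i] (used on odd rounds) and R[i] (even rounds) map between the two
    vectors' dimensions; r = 0 returns the inputs unchanged (plain LSTM).
    """
    sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
    for i in range(1, r + 1):
        if i % 2 == 1:
            x = 2 * sigmoid(Q[i] @ h_prev) * x        # odd round: gate x
        else:
            h_prev = 2 * sigmoid(R[i] @ x) * h_prev   # even round: gate h_prev
    return x, h_prev
```

With zero matrices, `2 * sigmoid(0) = 1` and both inputs pass through unchanged, matching the "close to identity" initialization remark above.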
Given the following machine learning model name: Problem Agnostic Speech Encoder +, provide a description of the model | **PASE+** is a problem-agnostic speech encoder that combines a convolutional encoder followed by multiple neural networks, called workers, tasked to solve self-supervised problems (i.e., ones that do not require manual annotations as ground truth). An online speech distortion module is employed that contaminates the input signals with a variety of random disturbances. A revised encoder is also proposed that better learns short- and long-term speech dynamics with an efficient combination of recurrent and convolutional networks. Finally, the authors refine the set of workers used in self-supervision to encourage better cooperation. |
Given the following machine learning model name: 3-dimensional interaction space, provide a description of the model | A **trainable 3D interaction space** aims to capture the associations between the triplet components and helps the model recognize multiple triplets in the same frame.
Source: [Nwoye et al.](https://arxiv.org/pdf/2007.05405v1.pdf)
Image source: [Nwoye et al.](https://arxiv.org/pdf/2007.05405v1.pdf) |
Given the following machine learning model name: DExTra, provide a description of the model | **DExTra**, or **Deep and Light-weight Expand-reduce Transformation**, is a light-weight expand-reduce transformation that enables learning wider representations efficiently.
DExTra maps a $d\_{m}$ dimensional input vector into a high dimensional space (expansion) and then
reduces it down to a $d\_{o}$ dimensional output vector (reduction) using $N$ layers of group transformations. During these expansion and reduction phases, DExTra uses group linear transformations because they learn local representations by deriving the output from a specific part of the input and are more efficient than linear transformations. To learn global representations, DExTra shares information between different groups in the group linear transformation using feature shuffling.
Formally, the DExTra transformation is controlled by five configuration parameters: (1) depth $N$, (2)
width multiplier $m\_{w}$, (3) input dimension $d\_{m}$, (4) output dimension $d\_{o}$, and (5) maximum groups $g\_{max}$ in a group linear transformation. In the expansion phase, DExTra projects the $d\_{m}$-dimensional input to a high-dimensional space, $d\_{max} = m\_{w}d\_{m}$, linearly using $\text{ceil}\left(\frac{N}{2}\right)$ layers. In the reduction phase, DExTra projects the $d\_{max}$-dimensional vector to a $d\_{o}$-dimensional space using the remaining $N -\text{ceil}\left(\frac{N}{2}\right)$ layers. Mathematically, we define the output $Y$ at each layer $l$ as:
$$ \mathbf{Y}\_{l} = \mathcal{F}\left(\mathbf{X}, \mathbf{W}^{l}, \mathbf{b}^{l}, g^{l}\right) \text{ if } l=1 $$
$$ \mathbf{Y}\_{l} = \mathcal{F}\left(\mathcal{H}\left(\mathbf{X}, \mathbf{Y}^{l-1}\right), \mathbf{W}^{l}, \mathbf{b}^{l}, g^{l}\right) \text{ Otherwise } $$
where the number of groups at each layer $l$ is computed as:
$$ g^{l} = \min\left(2^{l-1}, g\_{max}\right) \text{ for } 1 \leq l \leq \text{ceil}\left(N/2\right) $$
$$ g^{l} = g^{N-l} \text{ otherwise} $$
In the above equations, $\mathcal{F}$ is a group linear transformation function. The function $\mathcal{F}$ takes the input $\left(\mathbf{X} \text{ or } \mathcal{H}\left(\mathbf{X}, \mathbf{Y}^{l-1}\right) \right)$, splits it into $g^{l}$ groups, and then applies a linear transformation with learnable parameters $\mathbf{W}^{l}$ and bias $\mathbf{b}^{l}$ to each group independently. The outputs of each group are then concatenated to produce the final output $\mathbf{Y}^{l}$. The function $\mathcal{H}$ first shuffles the output of each group in $\mathbf{Y}^{l−1}$ and then combines it with the input $\mathbf{X}$ using an input mixer connection.
In the authors' experiments, they use $g\_{max} = \text{ceil}\left(\frac{d\_{m}}{32}\right)$ so that each group has at least 32 input elements. Note that (i) group linear transformations reduce to linear transformations when $g^{l} = 1$, and (ii) DExTra is equivalent to a multi-layer perceptron when $g\_{max} = 1$. |
Given the following machine learning model name: Color Jitter, provide a description of the model | **ColorJitter** is a type of image data augmentation where we randomly change the brightness, contrast and saturation of an image.
Image Credit: [Apache MXNet](https://mxnet.apache.org/versions/1.5.0/tutorials/gluon/data_augmentation.html) |
Given the following machine learning model name: StruBERT: Structure-aware BERT for Table Search and Matching, provide a description of the model | A large amount of information is stored in data tables. Users can search for data tables using a keyword-based query. A table is composed primarily of data values that are organized in rows and columns providing implicit structural information. A table is usually accompanied by secondary information such as the caption, page title, etc., that form the textual information. Understanding the connection between the textual and structural information is an important yet neglected aspect in table retrieval as previous methods treat each source of information independently. In addition, users can search for data tables that are similar to an existing table, and this setting can be seen as a content-based table retrieval. In this paper, we propose StruBERT, a structure-aware BERT model that fuses the textual and structural information of a data table to produce context-aware representations for both textual and tabular content of a data table. StruBERT features are integrated in a new end-to-end neural ranking model to solve three table-related downstream tasks: keyword- and content-based table retrieval, and table similarity. We evaluate our approach using three datasets, and we demonstrate substantial improvements in terms of retrieval and classification metrics over state-of-the-art methods. |
Given the following machine learning model name: G-GLN Neuron, provide a description of the model | A **G-GLN Neuron** is a type of neuron used in the [G-GLN](https://paperswithcode.com/method/g-gln) architecture. The key idea is that further representational power can be added to a weighted product of Gaussians via a contextual gating procedure. This is achieved by extending a weighted product of Gaussians model with an additional type of input called side information. The side information will be used by a neuron to select a weight vector to apply for a given example from a table of weight vectors. In typical applications to regression, the side information is defined as the (normalized) input features for an input example: i.e. $z=(x-\bar{x}) / \sigma\_{x}$.
More formally, associated with each neuron is a context function $c: \mathcal{Z} \rightarrow \mathcal{C}$, where $\mathcal{Z}$ is the set of possible side information and $\mathcal{C}=\{0, \ldots, k-1\}$ for some $k \in \mathbb{N}$ is the context space. Each neuron $i$ is now parameterized by a weight matrix $W\_{i}=\left[w\_{i, 0} \ldots w\_{i, k-1}\right]^{\top}$ with each row vector $w\_{i j} \in \mathcal{W}$ for $0 \leq j<k$. The context function $c$ is responsible for mapping side information $z \in \mathcal{Z}$ to a particular row $w\_{i, c(z)}$ of $W\_{i}$, which we then use to weight the Product of Gaussians. In other words, a G-GLN neuron can be defined by:
$$
\operatorname{PoG}\_{W}^{c}\left(y ; f\_{1}(\cdot), \ldots, f\_{m}(\cdot), z\right):=\operatorname{PoG}\_{w\_{c(z)}}\left(y ; f\_{1}(\cdot), \ldots, f\_{m}(\cdot)\right)
$$
with the associated loss function $-\log \left(\operatorname{PoG}\_{W}^{c}\left(y ; f\_{1}(y), \ldots, f\_{m}(y), z\right)\right)$ inheriting all the properties needed to apply Online Convex Programming. |
Given the following machine learning model name: Estimation Statistics, provide a description of the model | Estimation statistics is a data analysis framework that uses a combination of effect sizes, confidence intervals, precision planning, and meta-analysis to plan experiments, analyze data and interpret results. It is distinct from null hypothesis significance testing (NHST), which is considered to be less informative. The primary aim of estimation methods is to report an effect size (a point estimate) along with its confidence interval, the latter of which is related to the precision of the estimate. The confidence interval summarizes a range of likely values of the underlying population effect. Proponents of estimation see reporting a P value as an unhelpful distraction from the important business of reporting an effect size with its confidence intervals, and believe that estimation should replace significance testing for data analysis. |
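As a minimal illustration of the point-estimate-plus-interval style, here is a normal-approximation confidence interval for a difference in means; the 1.96 multiplier and unpooled sample variances are simplifying assumptions:

```python
import math

def mean_diff_ci(a, b, z=1.96):
    """Effect size (difference in means) with an approximate 95% CI."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((v - ma) ** 2 for v in a) / (len(a) - 1)   # sample variances
    vb = sum((v - mb) ** 2 for v in b) / (len(b) - 1)
    se = math.sqrt(va / len(a) + vb / len(b))           # standard error
    d = ma - mb
    return d, (d - z * se, d + z * se)
```

Reporting `d` together with its interval conveys both the magnitude of the effect and the precision of the estimate, which is the heart of the estimation approach.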
Given the following machine learning model name: Contrastive Multiview Coding, provide a description of the model | **Contrastive Multiview Coding (CMC)** is a self-supervised learning approach, based on [CPC](https://paperswithcode.com/method/contrastive-predictive-coding), that learns representations that capture information shared between multiple sensory views. The core idea is to set an anchor view and the sample positive and negative data points from the other view and maximise agreement between positive pairs in learning from two views. Contrastive learning is used to build the embedding. |
Given the following machine learning model name: Asynchronous Interaction Aggregation, provide a description of the model | **Asynchronous Interaction Aggregation**, or **AIA**, is a network that leverages different interactions to boost action detection. There are two key designs in it: one is the Interaction Aggregation structure (IA) adopting a uniform paradigm to model and integrate multiple types of interaction; the other is the Asynchronous Memory Update algorithm (AMU) that enables us to achieve better performance by modeling very long-term interaction dynamically. |
Given the following machine learning model name: PixelShuffle, provide a description of the model | **PixelShuffle** is an operation used in super-resolution models to implement efficient sub-pixel convolutions with a stride of $1/r$. Specifically it rearranges elements in a tensor of shape $(\*, C \times r^2, H, W)$ to a tensor of shape $(\*, C, H \times r, W \times r)$.
Image Source: [Remote Sensing Single-Image Resolution Improvement Using A Deep Gradient-Aware Network with Image-Specific Enhancement](https://www.researchgate.net/figure/The-pixel-shuffle-layer-transforms-feature-maps-from-the-LR-domain-to-the-HR-image_fig3_339531308) |
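The rearrangement is a pure reshape/transpose, sketched in NumPy following the channel-ordering convention used by e.g. PyTorch's `nn.PixelShuffle` (other orderings exist):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange (N, C*r^2, H, W) -> (N, C, H*r, W*r)."""
    n, c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(n, c, r, r, h, w)
    x = x.transpose(0, 1, 4, 2, 5, 3)   # interleave the r x r sub-pixels
    return x.reshape(n, c, h * r, w * r)
```

No values are computed or discarded — the operation is a fixed permutation of the tensor's elements, which is why it is cheap.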
Given the following machine learning model name: Linear Warmup With Linear Decay, provide a description of the model | **Linear Warmup With Linear Decay** is a learning rate schedule in which we increase the learning rate linearly for $n$ updates and then linearly decay afterwards. |
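The schedule is a piecewise-linear function of the step count; a sketch (decaying to zero at `total_steps` is an assumption — some implementations decay to a small floor instead):

```python
def lr_at_step(step, warmup_steps, total_steps, peak_lr):
    """Linear warmup to peak_lr over warmup_steps, then linear decay to 0."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```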
Given the following machine learning model name: Adaptive Feature Pooling, provide a description of the model | **Adaptive Feature Pooling** pools features from all levels for each proposal in object detection and fuses them for the following prediction. Each proposal is mapped to the different feature levels. Following the idea of [Mask R-CNN](https://paperswithcode.com/method/mask-r-cnn), [RoIAlign](https://paperswithcode.com/method/roi-align) is used to pool feature grids from each level. Then a fusion operation (element-wise max or sum) is utilized to fuse feature grids from different levels.
The motivation for this technique is that in an [FPN](https://paperswithcode.com/method/fpn) proposals are assigned to different feature levels based on their size, which could be suboptimal if proposals with small size differences are assigned to different levels, or if the importance of features is not strongly correlated with the level to which they belong. |
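The fusion step is element-wise across levels; a sketch assuming the RoIAlign outputs for one proposal have already been pooled to a common grid shape:

```python
import numpy as np

def fuse_levels(pooled, op="max"):
    """Fuse per-level feature grids for one proposal (element-wise max or sum).

    pooled: list of (C, H, W) arrays, one per pyramid level.
    """
    stacked = np.stack(pooled)          # (num_levels, C, H, W)
    return stacked.max(axis=0) if op == "max" else stacked.sum(axis=0)
```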
Given the following machine learning model name: Region Proposal Network, provide a description of the model | A **Region Proposal Network**, or **RPN**, is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals. RPN and algorithms like [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) can be merged into a single network by sharing their convolutional features - using the recently popular terminology of neural networks with attention mechanisms, the RPN component tells the unified network where to look.
RPNs are designed to efficiently predict region proposals with a wide range of scales and aspect ratios. RPNs use anchor boxes that serve as references at multiple scales and aspect ratios. The scheme can be thought of as a pyramid of regression references, which avoids enumerating images or filters of multiple scales or aspect ratios. |
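Anchor generation at a single position can be sketched as follows. Treating the ratio as height/width and keeping each anchor's area equal to `scale**2` is a common but not universal convention, so treat it as an assumption:

```python
import math

def make_anchors(cx, cy, scales, ratios):
    """One (x1, y1, x2, y2) anchor per (scale, ratio) pair, centred at (cx, cy)."""
    anchors = []
    for s in scales:
        for r in ratios:
            w, h = s / math.sqrt(r), s * math.sqrt(r)   # area stays s**2
            anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return anchors
```

Sliding this set over every position of the feature map yields the dense grid of reference boxes that the RPN regresses and scores.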
Given the following machine learning model name: Compressive Transformer, provide a description of the model | The **Compressive Transformer** is an extension to the [Transformer](https://paperswithcode.com/method/transformer) which maps past hidden activations (memories) to a smaller set of compressed representations (compressed memories). The Compressive Transformer uses the same attention mechanism over its set of memories and compressed memories, learning to query both its short-term granular memory and longer-term coarse memory. It builds on the ideas of [Transformer-XL](https://paperswithcode.com/method/transformer-xl) which maintains a memory of past activations at each layer to preserve a longer history of context. The Transformer-XL discards past activations when they become sufficiently old (controlled by the size of the memory). The key principle of the Compressive Transformer is to compress these old memories, instead of discarding them, and store them in an additional [compressed memory](https://paperswithcode.com/method/compressed-memory).
At each time step $t$, we discard the oldest compressed memories (FIFO) and then the oldest $n$ states from ordinary memory are compressed and shifted to the new slot in compressed memory. During training, the compressive memory component is optimized separately from the main language model (separate training loop). |
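The FIFO update described above can be sketched in a few lines. This is a hypothetical illustration (not the authors' implementation) using mean pooling as the compression function, one of the compression variants discussed in the paper; the function name `update_memories` is ours, and we assume the number of evicted states divides the compression rate evenly.

```python
import numpy as np

def update_memories(memory, comp_memory, new_states, mem_size, comp_mem_size, rate=2):
    """One Compressive Transformer memory update (sketch): evicted ordinary
    memories are compressed by mean pooling instead of being discarded."""
    # Append new hidden states to ordinary memory.
    memory = np.concatenate([memory, new_states], axis=0)
    # States that overflow the ordinary memory are the oldest ones.
    n_overflow = max(0, len(memory) - mem_size)
    old, memory = memory[:n_overflow], memory[n_overflow:]
    if n_overflow:
        # Compress the evicted states by a factor of `rate` (mean pooling).
        compressed = old.reshape(-1, rate, old.shape[-1]).mean(axis=1)
        comp_memory = np.concatenate([comp_memory, compressed], axis=0)
    # Discard the oldest compressed memories (FIFO).
    comp_memory = comp_memory[-comp_mem_size:]
    return memory, comp_memory
```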
Given the following machine learning model name: Social-STGCNN, provide a description of the model | **Social-STGCNN** (Social Spatio-Temporal Graph Convolutional Neural Network) is a method for human trajectory prediction. Pedestrian trajectories are influenced not only by the pedestrian itself but also by interactions with surrounding agents, which Social-STGCNN models as a spatio-temporal graph. |
Given the following machine learning model name: PolarNet, provide a description of the model | **PolarNet** is an improved grid representation for online, single-scan LiDAR point clouds. Instead of using common spherical or bird's-eye-view projection, the polar bird's-eye-view representation balances the points across grid cells in a polar coordinate system, indirectly aligning a segmentation network's attention with the long-tailed distribution of the points along the radial axis. |
Given the following machine learning model name: double-stage parameter tuning, provide a description of the model | Parameter tuning method for neural network models with adaptive activation functions. |
Given the following machine learning model name: FixMatch, provide a description of the model | FixMatch is an algorithm that first generates pseudo-labels using the model's predictions on weakly-augmented unlabeled images. For a given image, the pseudo-label is only retained if the model produces a high-confidence prediction. The model is then trained to predict the pseudo-label when fed a strongly-augmented version of the same image.
Description from: [FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence](https://paperswithcode.com/paper/fixmatch-simplifying-semi-supervised-learning)
Image credit: [FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence](https://paperswithcode.com/paper/fixmatch-simplifying-semi-supervised-learning) |
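The confidence-thresholding step above can be sketched in NumPy; this is a rough illustration, not the authors' implementation, and the function name `fixmatch_pseudo_label_loss` is ours:

```python
import numpy as np

def fixmatch_pseudo_label_loss(probs_weak, logits_strong, tau=0.95):
    """Sketch of FixMatch's unlabeled loss: keep a pseudo-label only when the
    prediction on the weakly-augmented view is confident (max prob >= tau),
    then apply cross-entropy against the strongly-augmented view's prediction."""
    pseudo = probs_weak.argmax(axis=1)        # hard pseudo-labels
    mask = probs_weak.max(axis=1) >= tau      # confidence threshold
    # log-softmax of the strong-view logits
    log_p = logits_strong - logits_strong.max(axis=1, keepdims=True)
    log_p = log_p - np.log(np.exp(log_p).sum(axis=1, keepdims=True))
    per_example = -log_p[np.arange(len(pseudo)), pseudo]
    return (per_example * mask).mean(), mask
```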
Given the following machine learning model name: Categorical Modularity, provide a description of the model | A novel low-resource intrinsic metric to evaluate word embedding quality based on graph modularity. |
Given the following machine learning model name: GPT-Neo, provide a description of the model | An implementation of model & data parallel [GPT3-like](https://paperswithcode.com/method/gpt-3) models using the [mesh-tensorflow](https://github.com/tensorflow/mesh) library.
Source: [EleutherAI/GPT-Neo](https://github.com/EleutherAI/gpt-neo) |
Given the following machine learning model name: Pose-Appearance Disentangling, provide a description of the model | A method to disentangle pose from other factors in a scene. |
Given the following machine learning model name: TorchBeast, provide a description of the model | **TorchBeast** is a platform for reinforcement learning (RL) research in PyTorch. It implements a version of the popular [IMPALA](https://paperswithcode.com/method/impala) algorithm for fast, asynchronous, parallel training of RL agents. |
Given the following machine learning model name: Mixture of Softmaxes, provide a description of the model | **Mixture of Softmaxes** performs $K$ different softmaxes and mixes them. The motivation is that the traditional [softmax](https://paperswithcode.com/method/softmax) suffers from a softmax bottleneck, i.e. the expressiveness of the conditional probability we can model is constrained by the combination of a dot product and the softmax. By using a mixture of softmaxes, we can model the conditional probability more expressively. |
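A minimal NumPy sketch of the idea, with hypothetical shapes and parameter names (in the original formulation the $K$ context vectors are derived from an RNN state; here a simple linear projection stands in):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mixture_of_softmaxes(h, W_prior, W_proj, W_out):
    """K component softmaxes mixed by prior weights pi (illustrative only)."""
    K = W_prior.shape[1]
    pi = softmax(h @ W_prior)                    # (B, K) mixture weights
    ctx = np.tanh(h @ W_proj)                    # (B, K*d) projected contexts
    ctx = ctx.reshape(len(h), K, -1)             # (B, K, d)
    comps = softmax(ctx @ W_out, axis=-1)        # (B, K, V) component softmaxes
    return (pi[:, :, None] * comps).sum(axis=1)  # (B, V) mixed distribution
```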
Given the following machine learning model name: Swish, provide a description of the model | **Swish** is an activation function, $f(x) = x \cdot \text{sigmoid}(\beta x)$, where $\beta$ is a learnable parameter. Nearly all implementations do not use the learnable parameter $\beta$, in which case the activation function is $x\sigma(x)$ ("Swish-1").
The function $x\sigma(x)$ is exactly the [SiLU](https://paperswithcode.com/method/silu), which was introduced by other authors before the swish.
See [Gaussian Error Linear Units](https://arxiv.org/abs/1606.08415) ([GELUs](https://paperswithcode.com/method/gelu)) where the SiLU (Sigmoid Linear Unit) was originally coined, and see [Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning](https://arxiv.org/abs/1702.03118) and [Swish: a Self-Gated Activation Function](https://arxiv.org/abs/1710.05941v1) where the same activation function was experimented with later. |
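The definition above translates directly into a one-line NumPy implementation (a sketch; `beta=1` recovers the SiLU / "Swish-1" case):

```python
import numpy as np

def swish(x, beta=1.0):
    """Swish activation f(x) = x * sigmoid(beta * x)."""
    return x / (1.0 + np.exp(-beta * x))
```

For large positive inputs the function approaches the identity, while for large negative inputs it approaches zero, giving a smooth, non-monotonic alternative to ReLU.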
Given the following machine learning model name: Cosine Linear Unit, provide a description of the model | The **Cosine Linear Unit**, or **CosLU**, is a type of activation function that has trainable parameters and uses the cosine function.
$$CosLU(x) = (x + \alpha \cos(\beta x))\sigma(x)$$ |
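The formula above can be sketched in NumPy; in practice $\alpha$ and $\beta$ are trainable, but they are fixed arguments in this illustration:

```python
import numpy as np

def coslu(x, alpha=1.0, beta=1.0):
    """CosLU(x) = (x + alpha * cos(beta * x)) * sigmoid(x)."""
    return (x + alpha * np.cos(beta * x)) / (1.0 + np.exp(-x))
```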
Given the following machine learning model name: MACEst, provide a description of the model | **Model Agnostic Confidence Estimator**, or **MACEst**, is a model-agnostic confidence estimator. Using a set of nearest neighbours, the algorithm differs from other methods by estimating confidence independently as a local quantity which explicitly accounts for both aleatoric and epistemic uncertainty. This approach differs from standard calibration methods that use a global point prediction model as a starting point for the confidence estimate. |
Given the following machine learning model name: SpecGAN, provide a description of the model | **SpecGAN** is a generative adversarial network method for spectrogram-based, frequency-domain audio generation. Because spectrograms are image-like, the problem is well suited to GAN architectures designed for image generation, and the generated spectrograms can be approximately inverted back to audio.
To process audio into suitable spectrograms, the authors perform the short-time Fourier transform with 16 ms windows and 8 ms stride, resulting in 128 frequency bins linearly spaced from 0 to 8 kHz. They take the magnitude of the resultant spectra and scale amplitude values logarithmically to better align with human perception. They then normalize each frequency bin to have zero mean and unit variance, clip the spectra to $3$ standard deviations, and rescale to $\left[−1, 1\right]$.
They then apply the [DCGAN](https://paperswithcode.com/method/dcgan) approach to the resulting spectra. |
Given the following machine learning model name: InfoGAN, provide a description of the model | **InfoGAN** is a type of generative adversarial network that modifies the [GAN](https://paperswithcode.com/method/gan) objective to
encourage it to learn interpretable and meaningful representations. This is done by maximizing the
mutual information between a fixed small subset of the GAN’s noise variables and the observations.
Formally, InfoGAN is defined as a minimax game with a variational regularization of mutual information and the hyperparameter $\lambda$:
$$ \min\_{G, Q}\max\_{D}V\_{INFOGAN}\left(D, G, Q\right) = V\left(D, G\right) - \lambda{L}\_{I}\left(G, Q\right) $$
Where $Q$ is an auxiliary distribution that approximates the posterior $P\left(c\mid{x}\right)$ - the probability of the latent code $c$ given the data $x$ - and $L\_{I}$ is the variational lower bound of the mutual information between the latent code and the observations.
In the practical implementation, there is another fully-connected layer to output parameters for the conditional distribution $Q$ (negligible computation on top of regular GAN structures). $Q$ is represented with a [softmax](https://paperswithcode.com/method/softmax) non-linearity for a categorical latent code. For a continuous latent code, the authors assume a factored Gaussian. |
Given the following machine learning model name: Two-Way Dense Layer, provide a description of the model | **Two-Way Dense Layer** is an image model block used in the [PeleeNet](https://paperswithcode.com/method/peleenet) architectures. Motivated by [GoogLeNet](https://paperswithcode.com/method/googlenet), the 2-way dense layer is used to get different scales of receptive fields. One branch of the layer uses a 3x3 kernel size. The other branch uses two stacked 3x3 [convolution](https://paperswithcode.com/method/convolution) layers to learn visual patterns for large objects. |
Given the following machine learning model name: AlterNet, provide a description of the model | |
Given the following machine learning model name: OpenPose, provide a description of the model | |
Given the following machine learning model name: Voxel RoI Pooling, provide a description of the model | **Voxel RoI Pooling** is an RoI feature extractor that extracts RoI features directly from voxel features for further refinement. It starts by dividing a region proposal into $G \times G \times G$ regular sub-voxels. The center point is taken as the grid point of the corresponding sub-voxel. Since 3D feature volumes are extremely sparse (non-empty voxels account for $<3\%$ of the space), we cannot directly apply max pooling over the features of each sub-voxel. Instead, features are integrated from neighboring voxels into the grid points for feature extraction. Specifically, given a grid point $g\_{i}$, we first exploit voxel query to group a set of neighboring voxels $\Gamma\_{i}=\left\{\mathbf{v}\_{i}^{1}, \mathbf{v}\_{i}^{2}, \cdots, \mathbf{v}\_{i}^{K}\right\}$. Then, we aggregate the neighboring voxel features with a [PointNet](https://paperswithcode.com/method/pointnet) module as:
$$
\mathbf{\eta}\_{i}=\max\_{k=1,2, \cdots, K}\left\{\Psi\left(\left[\mathbf{v}\_{i}^{k}-\mathbf{g}\_{i} ; \mathbf{\phi}\_{i}^{k}\right]\right)\right\}
$$
where $\mathbf{v}\_{i}^{k}-\mathbf{g}\_{i}$ represents the relative coordinates, $\mathbf{\phi}\_{i}^{k}$ is the voxel feature of $\mathbf{v}\_{i}^{k}$, and $\Psi(\cdot)$ indicates an MLP. The [max pooling](https://paperswithcode.com/method/max-pooling) operation $\max(\cdot)$ is performed along the channels to obtain the aggregated feature vector $\eta\_{i}$. In particular, Voxel RoI pooling is used to extract voxel features from the 3D feature volumes of the last two stages of the 3D backbone network. For each stage, two Manhattan distance thresholds are set to group voxels at multiple scales. Then, we concatenate the aggregated features pooled from different stages and scales to obtain the RoI features. |
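The aggregation at a single grid point can be sketched in NumPy. This is an illustrative simplification: a single linear layer with ReLU stands in for the MLP $\Psi$, the neighbors are assumed already grouped (the voxel-query step is omitted), and the function name is ours.

```python
import numpy as np

def voxel_roi_pool_point(grid_point, neighbor_voxels, neighbor_feats, mlp_weight):
    """Aggregate features at one grid point: concatenate relative coordinates
    with voxel features, apply an MLP, and max-pool channel-wise over K neighbors."""
    rel = neighbor_voxels - grid_point                   # (K, 3) relative coords
    inp = np.concatenate([rel, neighbor_feats], axis=1)  # (K, 3 + C)
    out = np.maximum(inp @ mlp_weight, 0.0)              # linear + ReLU as the MLP
    return out.max(axis=0)                               # channel-wise max pooling
```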
Given the following machine learning model name: TridentNet Block, provide a description of the model | A **TridentNet Block** is a feature extractor used in object detection models. Instead of feeding in multi-scale inputs like the image pyramid, in a [TridentNet](https://paperswithcode.com/method/tridentnet) block we adapt the backbone network for different scales. These blocks create multiple scale-specific feature maps. With the help of dilated convolutions, different branches of trident blocks have the same network structure and share the
same parameters yet have different receptive fields. Furthermore, to avoid training objects with extreme scales, a scale-aware training scheme is employed to make each branch specific to a given scale range matching its receptive field. Weight sharing is used to prevent overfitting. |
Given the following machine learning model name: Mask Scoring R-CNN, provide a description of the model | **Mask Scoring R-CNN** is a [Mask R-CNN](https://paperswithcode.com/method/mask-r-cnn) with a MaskIoU head, which takes the instance feature and the predicted mask together as input, and predicts the IoU between the predicted mask and the ground-truth mask. |
Given the following machine learning model name: Atrous Spatial Pyramid Pooling, provide a description of the model | **Atrous Spatial Pyramid Pooling (ASPP)** is a semantic segmentation module for resampling a given feature layer at multiple rates prior to [convolution](https://paperswithcode.com/method/convolution). This amounts to probing the original image with multiple filters that have complementary effective fields of view, thus capturing objects as well as useful image context at multiple scales. Rather than actually resampling features, the mapping is implemented using multiple parallel atrous convolutional layers with different sampling rates. |
Given the following machine learning model name: Neo-fuzzy-neuron, provide a description of the model | **Neo-fuzzy-neuron** is a type of artificial neural network that combines the characteristics of both fuzzy logic and neural networks. It uses a fuzzy inference system to model non-linear relationships between inputs and outputs, and a feedforward neural network to learn the parameters of the fuzzy system. The combination of these two approaches provides a flexible and powerful tool for solving a wide range of problems in areas such as pattern recognition, control, and prediction. |
Given the following machine learning model name: VocGAN, provide a description of the model | **VocGAN** is a GAN-based neural vocoder that synthesizes speech waveforms from mel-spectrograms. |
Given the following machine learning model name: PCA Whitening, provide a description of the model | **PCA Whitening** is a processing step for image based data that makes input less redundant. Adjacent pixel or feature values can be highly correlated, and whitening through the use of [PCA](https://paperswithcode.com/method/pca) reduces this degree of correlation.
Image Source: [Wikipedia](https://en.wikipedia.org/wiki/Principal_component_analysis#/media/File:GaussianScatterPCA.svg) |
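The procedure described above amounts to centering, rotating onto the principal axes, and rescaling each axis to unit variance; a minimal NumPy sketch:

```python
import numpy as np

def pca_whiten(X, eps=1e-5):
    """PCA-whiten the rows of X: decorrelate features and give them unit variance."""
    Xc = X - X.mean(axis=0)                       # center
    cov = Xc.T @ Xc / len(Xc)                     # feature covariance
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigendecomposition of covariance
    return Xc @ eigvecs / np.sqrt(eigvals + eps)  # rotate, then rescale each axis
```

After whitening, the empirical covariance of the output is approximately the identity matrix, i.e. the redundancy between adjacent features is removed.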
Given the following machine learning model name: CSPResNeXt, provide a description of the model | **CSPResNeXt** is a convolutional neural network where we apply the Cross Stage Partial Network (CSPNet) approach to [ResNeXt](https://paperswithcode.com/method/resnext). The CSPNet partitions the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy. The use of a split and merge strategy allows for more gradient flow through the network. |
Given the following machine learning model name: Filter Response Normalization, provide a description of the model | **Filter Response Normalization (FRN)** is a type of normalization that combines normalization and an activation function, which can be used as a replacement for other normalizations and activations. It operates on each activation channel of each batch element independently, eliminating the dependency on other batch elements.
To demonstrate, assume we are dealing with the feed-forward convolutional neural network. We follow the usual convention that the filter responses (activation maps) produced after a [convolution](https://paperswithcode.com/method/convolution) operation are a 4D tensor $X$ with shape $[B, W, H, C]$, where $B$ is the mini-batch size, $W, H$ are the spatial extents of the map, and $C$ is the number of filters used in convolution. $C$ is also referred to as output channels. Let $x = X_{b,:,:,c} \in \mathcal{R}^{N}$, where $N = W \times H$, be the vector of filter responses for the $c^{th}$ filter for the $b^{th}$ batch point.
Let $\nu^2 = \sum\_i x_i^2/N$ be the mean squared norm of $x$.
Then Filter Response Normalization is defined as the following:
$$
\hat{x} = \frac{x}{\sqrt{\nu^2 + \epsilon}},
$$
where $\epsilon$ is a small positive constant to prevent division by zero.
A lack of mean centering in FRN can lead to activations having an arbitrary bias away from zero. Such a bias in conjunction with [ReLU](https://paperswithcode.com/method/relu) can have a detrimental effect on learning and lead to poor performance and dead units. The normalized output is passed through a learned affine transform $y = \gamma\hat{x} + \beta$; to address the bias issue, the authors then augment ReLU with a learned threshold $\tau$ to yield:
$$
z = \max(y, \tau)
$$
Since $\max(y, \tau){=}\max(y-\tau,0){+}\tau{=}\text{ReLU}{(y{-}\tau)}{+}\tau$, the effect of this activation is the same as having a shared bias before and after ReLU. |
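Putting the normalization, affine transform, and thresholded activation together, a NumPy sketch of FRN followed by the TLU might look like this (per-channel parameters are passed in as scalars here for brevity):

```python
import numpy as np

def frn_tlu(X, gamma, beta, tau, eps=1e-6):
    """Filter Response Normalization followed by the Thresholded Linear Unit.
    X has shape [B, W, H, C]; gamma, beta, tau are (per-channel) parameters."""
    nu2 = (X ** 2).mean(axis=(1, 2), keepdims=True)  # mean squared norm per (b, c)
    x_hat = X / np.sqrt(nu2 + eps)                   # normalize filter responses
    y = gamma * x_hat + beta                         # learned affine transform
    return np.maximum(y, tau)                        # TLU: max(y, tau)
```

Note that the statistics are computed over the spatial dimensions of each (batch element, channel) pair independently, so no batch statistics are involved.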
Given the following machine learning model name: Graph Attention Network v2, provide a description of the model | The __GATv2__ operator from the [“How Attentive are Graph Attention Networks?”](https://arxiv.org/abs/2105.14491) paper, which fixes the static attention problem of the standard [GAT](https://paperswithcode.com/method/gat) layer: since the linear layers in the standard GAT are applied right after each other, the ranking of attended nodes is unconditioned on the query node. In contrast, in GATv2, every node can attend to any other node.
GATv2 scoring function:
$e_{i,j} =\mathbf{a}^{\top}\mathrm{LeakyReLU}\left(\mathbf{W}[\mathbf{h}_i \, \Vert \,\mathbf{h}_j]\right)$ |
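The scoring function above can be sketched as a dense all-pairs computation in NumPy. This is illustrative only: it omits neighborhood masking and the softmax normalization of the scores, and the function name is ours.

```python
import numpy as np

def gatv2_scores(h, W, a, slope=0.2):
    """GATv2 scores e_{i,j} = a^T LeakyReLU(W [h_i || h_j]) for all node pairs.
    Unlike GAT, the attention vector a is applied *after* the nonlinearity."""
    n = len(h)
    e = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            z = W @ np.concatenate([h[i], h[j]])          # W [h_i || h_j]
            e[i, j] = a @ np.where(z > 0, z, slope * z)   # LeakyReLU, then a^T
    return e
```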
Given the following machine learning model name: Make-A-Scene, provide a description of the model | Make-A-Scene is a text-to-image method that (i) enables a simple control mechanism complementary to text in the form of a scene, (ii) introduces elements that improve the tokenization process by employing domain-specific knowledge over key image regions (faces and salient objects), and (iii) adapts classifier-free guidance for the transformer use case. |
Given the following machine learning model name: energy-based model, provide a description of the model | |
Given the following machine learning model name: Distance to Modelled Embedding, provide a description of the model | **DIME**, or **Distance to Modelled Embedding**, is a method for detecting out-of-distribution examples during prediction time. Given a trained neural network, the training data drawn from some high-dimensional distribution in data space $X$ is transformed into the model’s intermediate feature vector space $\mathbb{R}^{p}$. The training set embedding is linearly approximated as a hyperplane. When we then receive new observations it is difficult to assess if observations are out-of-distribution directly in data space, so we transform them into the same intermediate feature space. Finally, the Distance-to-Modelled-Embedding (DIME) can be used to assess whether new observations fit into the expected embedding covariance structure. |
Given the following machine learning model name: Experience Replay, provide a description of the model | **Experience Replay** is a replay memory technique used in reinforcement learning where we store the agent’s experiences at each time-step, $e\_{t} = \left(s\_{t}, a\_{t}, r\_{t}, s\_{t+1}\right)$ in a data-set $D = e\_{1}, \cdots, e\_{N}$ , pooled over many episodes into a replay memory. We then usually sample the memory randomly for a minibatch of experience, and use this to learn off-policy, as with Deep Q-Networks. This tackles the problem of autocorrelation leading to unstable training, by making the problem more like a supervised learning problem.
Image Credit: [Hands-On Reinforcement Learning with Python, Sudharsan Ravichandiran](https://subscription.packtpub.com/book/big_data_and_business_intelligence/9781788836524) |
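The mechanism described above reduces to a fixed-capacity buffer with uniform random sampling; a minimal sketch (the class name is ours):

```python
import random
from collections import deque

class ReplayMemory:
    """Fixed-capacity replay memory: store transitions, sample random minibatches."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest experiences are dropped

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Uniform random sampling breaks the temporal correlation between samples.
        return random.sample(self.buffer, batch_size)
```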
Given the following machine learning model name: Bayesian Reward Extrapolation, provide a description of the model | **Bayesian Reward Extrapolation** is a Bayesian reward learning algorithm that scales to high-dimensional imitation learning problems by pre-training a low-dimensional feature encoding via self-supervised tasks and then leveraging preferences over demonstrations to perform fast Bayesian inference. |
Given the following machine learning model name: simple Copy-Paste, provide a description of the model | |
Given the following machine learning model name: Wide&Deep, provide a description of the model | **Wide&Deep** jointly trains wide linear models and deep neural networks to combine the benefits of memorization and generalization for real-world recommender systems. In summary, the wide component is a generalized linear model. The deep component is a feed-forward neural network. The deep and wide components are combined using a weighted sum of their output log odds as the prediction. This is then fed to a logistic loss function for joint training, which is done by back-propagating the gradients from the output to both the wide and deep part of the model simultaneously using mini-batch stochastic optimization. In the original paper, the wide part is optimized with FTRL with L1 regularization, while the deep part uses the AdaGrad optimizer. The combined model is illustrated in the figure (center). |
Given the following machine learning model name: SCARLET, provide a description of the model | **SCARLET** is a type of convolutional neural architecture learnt by the [SCARLET-NAS](https://paperswithcode.com/method/scarlet-nas) [neural architecture search](https://paperswithcode.com/method/neural-architecture-search) method. The three variants are SCARLET-A, SCARLET-B and SCARLET-C. The basic building block is MBConvs from [MobileNetV2](https://paperswithcode.com/method/mobilenetv2). Squeeze-and-excitation layers are also experimented with. |
Given the following machine learning model name: Deep Boltzmann Machine, provide a description of the model | A **Deep Boltzmann Machine (DBM)** is a three-layer generative model. It is similar to a [Deep Belief Network](https://paperswithcode.com/method/deep-belief-network), but instead allows bidirectional connections in the bottom layers. Its energy function is as an extension of the energy function of the RBM:
$$ E\left(v, h\right) = -\sum\_{i}v\_{i}b\_{i} - \sum^{N}\_{n=1}\sum\_{k}h\_{n,k}b\_{n,k}-\sum\_{i, k}v\_{i}w\_{ik}h\_{k} - \sum^{N-1}\_{n=1}\sum\_{k,l}h\_{n,k}w\_{n, k, l}h\_{n+1, l}$$
for a DBM with $N$ hidden layers.
Source: [On the Origin of Deep Learning](https://arxiv.org/pdf/1702.07800.pdf) |
Given the following machine learning model name: Depthwise Separable Convolution, provide a description of the model | While [standard convolution](https://paperswithcode.com/method/convolution) performs the channelwise and spatial-wise computation in one step, **Depthwise Separable Convolution** splits the computation into two steps: [depthwise convolution](https://paperswithcode.com/method/depthwise-convolution) applies a single convolutional filter per each input channel and [pointwise convolution](https://paperswithcode.com/method/pointwise-convolution) is used to create a linear combination of the output of the depthwise convolution. The comparison of standard convolution and depthwise separable convolution is shown to the right.
Credit: [Depthwise Convolution Is All You Need for Learning Multiple Visual Domains](https://paperswithcode.com/paper/depthwise-convolution-is-all-you-need-for) |
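The main practical benefit of the split is a large reduction in parameters (and multiply-adds). A quick arithmetic sketch, counting weights of a standard $k \times k$ convolution against its depthwise separable counterpart (bias terms omitted):

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution (no bias)."""
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise (one k x k filter per input channel) plus pointwise (1 x 1)."""
    return k * k * c_in + c_in * c_out
```

For example, with 64 input channels, 128 output channels, and 3x3 kernels, the standard convolution needs 73,728 weights while the depthwise separable version needs only 8,768, roughly an 8x reduction.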
Given the following machine learning model name: Probabilistic Continuously Indexed Domain Adaptation, provide a description of the model | **Probabilistic Continuously Indexed Domain Adaptation** (**PCIDA**) enjoys stronger theoretical guarantees, matching both the mean and the variance of the distribution $p(u|z)$. PCIDA can also be extended to match higher-order moments. |
Given the following machine learning model name: Blended Diffusion, provide a description of the model | Blended Diffusion enables a zero-shot local text-guided image editing of natural images.
Given an input image $x$, an input mask $m$, and a target guiding text $t$, the method changes the masked area within the image to correspond to the guiding text, such that the unmasked area is left unchanged. |
Given the following machine learning model name: Orthogonal Regularization, provide a description of the model | **Orthogonal Regularization** is a regularization technique for convolutional neural networks, introduced with generative modelling as the task in mind. Orthogonality is argued to be a desirable quality in ConvNet filters, partially because multiplication by an orthogonal matrix leaves the norm of the original matrix unchanged. This property is valuable in deep or recurrent networks, where repeated matrix multiplication can result in signals vanishing or exploding. To try to maintain orthogonality throughout training, Orthogonal Regularization encourages weights to be orthogonal by pushing them towards the nearest orthogonal manifold. The objective function is augmented with the cost:
$$ \mathcal{L}\_{ortho} = \sum\left(|WW^{T} − I|\right) $$
Where $\sum$ indicates a sum across all filter banks, $W$ is a filter bank, and $I$ is the identity matrix. |
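The cost above for a single filter bank can be computed in a few lines of NumPy (a sketch; in practice the penalty is summed over all filter banks and added to the training loss):

```python
import numpy as np

def ortho_penalty(W):
    """Orthogonal Regularization cost |W W^T - I| for one filter bank,
    with W flattened to shape (out_channels, -1)."""
    W = W.reshape(W.shape[0], -1)
    gram = W @ W.T                                 # pairwise filter inner products
    return np.abs(gram - np.eye(len(gram))).sum()  # distance from orthogonality
```

The penalty is exactly zero when the (flattened) filters form an orthonormal set, and grows as filters become correlated or change norm.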
Given the following machine learning model name: Auxiliary Classifier, provide a description of the model | **Auxiliary Classifiers** are type of architectural component that seek to improve the convergence of very deep networks. They are classifier heads we attach to layers before the end of the network. The motivation is to push useful gradients to the lower layers to make them immediately useful and improve the convergence during training by combatting the vanishing gradient problem. They are notably used in the Inception family of convolutional neural networks. |
Given the following machine learning model name: FoveaBox, provide a description of the model | **FoveaBox** is an anchor-free framework for object detection. Instead of using predefined anchors to enumerate possible locations, scales and aspect ratios for the search of the objects, FoveaBox directly learns the object existing possibility and the bounding box coordinates without anchor reference. This is achieved by: (a) predicting category-sensitive semantic maps for the object existing possibility, and (b) producing a category-agnostic bounding box for each position that potentially contains an object. The scales of target boxes are naturally associated with feature pyramid representations for each input image.
It is a single, unified network composed of a backbone network and two task-specific subnetworks. The backbone is responsible for computing a convolutional feature map over an entire input image and is an off-the-shelf convolutional network. The first subnet performs per pixel classification on the backbone’s output; the second subnet performs bounding box prediction for the corresponding
position. |
Given the following machine learning model name: Contour Proposal Network, provide a description of the model | The Contour Proposal Network (CPN) detects possibly overlapping objects in an image while simultaneously fitting pixel-precise closed object contours. The CPN can incorporate state of the art object detection architectures as backbone networks into a fast single-stage instance segmentation model that can be trained end-to-end. |
Given the following machine learning model name: Part Affinity Fields, provide a description of the model | |
Given the following machine learning model name: Adam, provide a description of the model | **Adam** is an adaptive learning rate optimization algorithm that utilises both momentum and scaling, combining the benefits of [RMSProp](https://paperswithcode.com/method/rmsprop) and [SGD with Momentum](https://paperswithcode.com/method/sgd-with-momentum). The optimizer is designed to be appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients.
The weight updates are performed as:
$$ w_{t} = w_{t-1} - \eta\frac{\hat{m}\_{t}}{\sqrt{\hat{v}\_{t}} + \epsilon} $$
with
$$ \hat{m}\_{t} = \frac{m_{t}}{1-\beta^{t}_{1}} $$
$$ \hat{v}\_{t} = \frac{v_{t}}{1-\beta^{t}_{2}} $$
$$ m_{t} = \beta_{1}m_{t-1} + (1-\beta_{1})g_{t} $$
$$ v_{t} = \beta_{2}v_{t-1} + (1-\beta_{2})g_{t}^{2} $$
$ \eta $ is the step size/learning rate, around 1e-3 in the original paper. $ \epsilon $ is a small number, typically 1e-8 or 1e-10, to prevent dividing by zero. $ \beta_{1} $ and $ \beta_{2} $ are forgetting parameters, with typical values 0.9 and 0.999, respectively. |
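The update equations above can be written as a single NumPy step function (a sketch; the caller carries the moment estimates $m$, $v$ and the step counter $t$ between calls):

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: update the biased moments, bias-correct, scale the step."""
    m = b1 * m + (1 - b1) * g          # first moment (momentum)
    v = b2 * v + (1 - b2) * g ** 2     # second moment (scaling)
    m_hat = m / (1 - b1 ** t)          # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```

On the first step the bias correction makes $\hat{m} = g$ and $\hat{v} = g^2$, so the initial update has magnitude close to the learning rate regardless of the gradient scale.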
Given the following machine learning model name: Path Planning and Motion Control, provide a description of the model | **Path Planning and Motion Control**, or **PPMC RL**, is a training algorithm that teaches path planning and motion control to robots using reinforcement learning in a simulated environment. The focus is on promoting generalization in the presence of environmental uncertainties, such as rough terrain like the lunar surface. The algorithm is coupled with any generic reinforcement learning algorithm to teach robots how to respond to user commands and to travel to designated locations with a single neural network. The algorithm works independently of the robot structure, demonstrating that it works on a wheeled rover in addition to the past results on a quadruped walking robot. |
Given the following machine learning model name: Joint Learning Architecture, provide a description of the model | **JLA**, or **Joint Learning Architecture**, is an approach for multiple object tracking and trajectory forecasting. It jointly trains a tracking and trajectory forecasting model, and the trajectory forecasts are used for short-term motion estimates in lieu of linear motion prediction methods such as the Kalman filter. It uses a [FairMOT](https://paperswithcode.com/method/fairmot) model as the base model because this architecture already performs detection and tracking. A forecasting branch is added to the network and is trained end-to-end. [FairMOT](https://paperswithcode.com/method/fairmot) consists of a backbone network utilizing [Deep Layer Aggregation](https://www.paperswithcode.com/method/dla), an object detection head, and a reID head. |
Given the following machine learning model name: ComiRec, provide a description of the model | **ComiRec** is a multi-interest framework for sequential recommendation. The multi-interest module captures multiple interests from user behavior sequences, which can be exploited for retrieving candidate items from the large-scale item pool. These items are then fed into an aggregation module to obtain the overall recommendation. The aggregation module leverages a controllable factor to balance the recommendation accuracy and diversity. |
Given the following machine learning model name: FLAVR, provide a description of the model | **FLAVR** is an architecture for video frame interpolation. It uses 3D space-time convolutions to enable end-to-end learning and inference for video frame interpolation. Overall, it consists of a [U-Net](https://paperswithcode.com/method/u-net) style architecture with 3D space-time convolutions and
deconvolutions (yellow blocks). Channel gating is used after all (de-)[convolution](https://paperswithcode.com/method/convolution) layers (blue blocks). The final prediction layer (the purple block) is implemented as a convolution layer to project the 3D feature maps into $(k−1)$ frame predictions. This design allows FLAVR to predict multiple frames in one inference forward pass. |
Given the following machine learning model name: Weight excitation, provide a description of the model | A novel built-in attention mechanism that is complementary to all prior attention mechanisms (e.g., squeeze-and-excitation, transformers), which are external, i.e. not built-in (see the paper for more details). |
Given the following machine learning model name: Graph Path Feature Learning, provide a description of the model | **Graph Path Feature Learning** is a probabilistic rule learner optimized to mine instantiated first-order logic rules from knowledge graphs. Instantiated rules contain constants extracted from KGs. Compared to abstract rules that contain no constants, instantiated rules are capable of explaining and expressing concepts in more detail. GPFL utilizes a novel two-stage rule generation mechanism that first generalizes extracted paths into templates that are acyclic abstract rules until a certain degree of template saturation is achieved, then specializes the generated templates into instantiated rules. |
Given the following machine learning model name: RealFormer, provide a description of the model | **RealFormer** is a type of [Transformer](https://paperswithcode.com/methods/category/transformers) based on the idea of [residual](https://paperswithcode.com/method/residual-connection) attention. It adds skip edges to the backbone [Transformer](https://paperswithcode.com/method/transformer) to create multiple direct paths, one for each type of attention module. It adds no parameters or hyper-parameters. Specifically, RealFormer uses a Post-[LN](https://paperswithcode.com/method/layer-normalization) style Transformer as backbone and adds skip edges to connect [Multi-Head Attention](https://paperswithcode.com/method/multi-head-attention) modules in adjacent layers. |
Given the following machine learning model name: EdgeBoxes, provide a description of the model | **EdgeBoxes** is an approach for generating object bounding box proposals directly from edges. Similar to segments, edges provide a simplified but informative representation of an image. In fact, line drawings of an image can accurately convey the high-level information contained in an image using only a small fraction of the information.
The main insight behind the method is that the number of contours wholly enclosed by a bounding box is indicative of the likelihood of the box containing an object. We say a contour is wholly enclosed by a box if all edge pixels belonging to the contour lie within the interior of the box. Edges tend to correspond to object boundaries, and as such boxes that tightly enclose a set of edges are likely to contain an object. However, some edges that lie within an object’s bounding box may not be part of the contained object. Specifically, edge pixels that belong to contours straddling the box’s boundaries are likely to correspond to objects or structures that lie outside the box.
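A minimal sketch of the enclosure test described above (the helper name `wholly_enclosed_contours` and the data layout are illustrative; this is the counting insight only, not the paper's actual scoring function):

```python
def wholly_enclosed_contours(contours, box):
    """Count contours whose edge pixels all lie strictly inside the box.

    `contours` maps a contour id to a list of (x, y) edge-pixel
    coordinates; `box` is (x0, y0, x1, y1). A contour straddling the
    box boundary is not counted, mirroring the enclosure criterion.
    """
    x0, y0, x1, y1 = box
    inside = lambda x, y: x0 < x < x1 and y0 < y < y1
    return sum(all(inside(x, y) for x, y in pts)
               for pts in contours.values())
```

Boxes with a high count of wholly enclosed contours would then be favored as object proposals under this insight.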
Source: [Zitnick and Dollar](https://pdollar.github.io/files/papers/ZitnickDollarECCV14edgeBoxes.pdf) |
Given the following machine learning model name: Human Robot Interaction Pipeline, provide a description of the model | The pipeline we propose consists of three parts: 1) recognizing the interaction type; 2) detecting the object that the interaction is targeting; and 3) learning incrementally the models from data recorded by the robot sensors. Our main contributions lie in the target object detection, guided by the recognized interaction, and in the incremental object learning. The novelty of our approach is the focus on natural, heterogeneous, and multimodal HRIs to incrementally learn new object models. |
Given the following machine learning model name: AdaGrad, provide a description of the model | **AdaGrad** is a stochastic optimization method that adapts the learning rate to the parameters. It performs smaller updates for parameters associated with frequently occurring features, and larger updates for parameters associated with infrequently occurring features. In its update rule, Adagrad modifies the general learning rate $\eta$ at each time step $t$ for every parameter $\theta\_{i}$ based on the past gradients for $\theta\_{i}$:
$$ \theta\_{t+1, i} = \theta\_{t, i} - \frac{\eta}{\sqrt{G\_{t, ii} + \epsilon}}g\_{t, i} $$
Here $G\_{t} \in \mathbb{R}^{d \times d}$ is a diagonal matrix where each diagonal element $i, i$ is the sum of the squares of the gradients w.r.t. $\theta\_{i}$ up to time step $t$, and $g\_{t, i}$ is the gradient of the objective w.r.t. $\theta\_{i}$ at time step $t$. The benefit of AdaGrad is that it eliminates the need to manually tune the learning rate; most implementations leave it at a default value of $0.01$. Its main weakness is the accumulation of the squared gradients in the denominator: since every added term is positive, the accumulated sum keeps growing during training, causing the learning rate to shrink and eventually become infinitesimally small.
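A minimal NumPy sketch of the update rule above (the function name `adagrad_step` is illustrative; $G$ is stored as a vector of the diagonal entries):

```python
import numpy as np

def adagrad_step(theta, grad, G, lr=0.01, eps=1e-8):
    """One AdaGrad update: accumulate squared gradients per parameter,
    then scale each parameter's step by 1 / sqrt(G + eps)."""
    G = G + grad ** 2
    theta = theta - lr * grad / np.sqrt(G + eps)
    return theta, G
```

Running repeated steps on a toy objective such as $f(\theta) = \theta^{2}$ shows the effective step size shrinking as $G$ grows, which is exactly the weakness noted above.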
Image: [Alec Radford](https://twitter.com/alecrad) |