Given the following machine learning model name: Fastformer, provide a description of the model
**Fastformer** is a type of [Transformer](https://paperswithcode.com/method/transformer) which uses [additive attention](https://www.paperswithcode.com/method/additive-attention) as a building block. Instead of modeling the pair-wise interactions between tokens, [additive attention](https://www.paperswithcode.com/method/additive-attention) is used to model global contexts, and then each token representation is further transformed based on its interaction with global context representations.
Given the following machine learning model name: Thinned U-shape Module, provide a description of the model
**Thinned U-shape Module**, or **TUM**, is a feature extraction block used for object detection models. It was introduced as part of the [M2Det](https://paperswithcode.com/method/m2det) architecture. Different from [FPN](https://paperswithcode.com/method/fpn) and [RetinaNet](https://paperswithcode.com/method/retinanet), TUM adopts a thinner U-shape structure as illustrated in the Figure to the right. The encoder is a series of 3x3 [convolution](https://paperswithcode.com/method/convolution) layers with stride 2, and the decoder takes the outputs of these layers as its reference set of feature maps, whereas the original FPN chooses the output of the last layer of each stage in the [ResNet](https://paperswithcode.com/method/resnet) backbone. In addition, TUM adds [1x1 convolution](https://paperswithcode.com/method/1x1-convolution) layers after the upsample and element-wise sum operations in the decoder branch to enhance learning ability and keep the features smooth. In the context of M2Det, all of the outputs in the decoder of each TUM form the multi-scale features of the current level. As a whole, the outputs of stacked TUMs form the multi-level multi-scale features: the front TUM mainly provides shallow-level features, the middle TUM provides medium-level features, and the back TUM provides deep-level features.
Given the following machine learning model name: Knowledge Enhanced Masked Language Model, provide a description of the model
Given the following machine learning model name: VEGA, provide a description of the model
**VEGA** is an AutoML framework that is compatible and optimized for multiple hardware platforms. It integrates various modules of AutoML, including [Neural Architecture Search](https://paperswithcode.com/method/neural-architecture-search) (NAS), Hyperparameter Optimization (HPO), Auto Data Augmentation, Model Compression, and Fully Train. To support a variety of search algorithms and tasks, it involves a fine-grained search space and a description language to enable easy adaptation to different search algorithms and tasks.
Given the following machine learning model name: Detection Transformer, provide a description of the model
**DETR**, or **Detection Transformer**, is a set-based object detector using a [Transformer](https://paperswithcode.com/method/transformer) on top of a convolutional backbone. It uses a conventional CNN backbone to learn a 2D representation of an input image. The model flattens it and supplements it with a positional encoding before passing it into a transformer encoder. A transformer decoder then takes as input a small fixed number of learned positional embeddings, which we call object queries, and additionally attends to the encoder output. We pass each output embedding of the decoder to a shared feed-forward network (FFN) that predicts either a detection (class and bounding box) or a “no object” class.
Given the following machine learning model name: Early Dropout, provide a description of the model
Introduced by Hinton et al. in 2012, dropout has stood the test of time as a regularizer for preventing overfitting in neural networks. In this study, we demonstrate that dropout can also mitigate underfitting when used at the start of training. During the early phase, we find dropout reduces the directional variance of gradients across mini-batches and helps align the mini-batch gradients with the entire dataset's gradient. This helps counteract the stochasticity of SGD and limit the influence of individual batches on model training. Our findings lead us to a solution for improving performance in underfitting models - early dropout: dropout is applied only during the initial phases of training, and turned off afterwards. Models equipped with early dropout achieve lower final training loss compared to their counterparts without dropout. Additionally, we explore a symmetric technique for regularizing overfitting models - late dropout, where dropout is not used in the early iterations and is only activated later in training. Experiments on ImageNet and various vision tasks demonstrate that our methods consistently improve generalization accuracy. Our results encourage more research on understanding regularization in deep learning and our methods can be useful tools for future neural network training, especially in the era of large data. Code is available at https://github.com/facebookresearch/dropout.
Given the following machine learning model name: Inception v2, provide a description of the model
**Inception v2** is the second generation of Inception convolutional neural network architectures which notably uses [batch normalization](https://paperswithcode.com/method/batch-normalization). Other changes include dropping [dropout](https://paperswithcode.com/method/dropout) and removing [local response normalization](https://paperswithcode.com/method/local-response-normalization), due to the benefits of batch normalization.
Given the following machine learning model name: Temporal attention, provide a description of the model
Temporal attention can be seen as a dynamic time selection mechanism determining when to pay attention, and is thus usually used for video processing.
Given the following machine learning model name: Neural Cache, provide a description of the model
A **Neural Cache**, or a **Continuous Cache**, is a module for language modelling which stores previous hidden states in memory cells. These states are then used as keys to retrieve the word that followed them, i.e. the next word. No transformation is applied to the storage during writing and reading. More formally, it exploits the hidden representations $h\_{t}$ to define a probability distribution over the words in the cache. As illustrated in the Figure, the cache stores pairs $\left(h\_{i}, x\_{i+1}\right)$ of a hidden representation and the word which was generated based on this representation (the vector $h\_{i}$ encodes the history $x\_{i}, \dots, x\_{1}$). At time $t$, we then define a probability distribution over words stored in the cache based on the stored hidden representations and the current one $h\_{t}$ as: $$ p\_{cache}\left(w | h\_{1\dots{t}}, x\_{1\dots{t}}\right) \propto \sum^{t-1}\_{i=1}\mathbb{1}\_{\left\{w=x\_{i+1}\right\}} \exp\left(\theta h\_{t}^{T}h\_{i}\right) $$ where the scalar $\theta$ is a parameter which controls the flatness of the distribution. When $\theta$ is equal to zero, the probability distribution over the history is uniform, and the model is equivalent to a unigram cache model.
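The cache distribution above can be sketched in a few lines of NumPy. This is a toy illustration with made-up hidden states and variable names, not the authors' implementation:

```python
import numpy as np

def cache_distribution(hiddens, next_words, h_t, theta, vocab_size):
    """Neural-cache probability over words from stored (h_i, x_{i+1}) pairs.

    hiddens:    (t-1, d) past hidden states h_1 .. h_{t-1}
    next_words: (t-1,) word ids x_2 .. x_t generated after each state
    h_t:        (d,) current hidden state
    theta:      flatness parameter
    """
    scores = np.exp(theta * (hiddens @ h_t))   # exp(theta * h_t^T h_i)
    p = np.zeros(vocab_size)
    for w, s in zip(next_words, scores):       # indicator sums scores per word id
        p[w] += s
    return p / p.sum()                         # normalise to a distribution

# toy example: 3 stored pairs, vocabulary of 5 words
rng = np.random.default_rng(0)
H = rng.normal(size=(3, 4))
words = np.array([1, 3, 1])
p = cache_distribution(H, words, rng.normal(size=4), theta=1.0, vocab_size=5)
```

With `theta=0` the scores are all equal, so the distribution reduces to a unigram cache over the history, exactly as described above.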
Given the following machine learning model name: MagFace, provide a description of the model
**MagFace** is a category of losses for face recognition that learns a universal feature embedding whose magnitude can measure the quality of a given face. Under this loss, it can be proven that the magnitude of the feature embedding monotonically increases if the subject is more likely to be recognized. In addition, MagFace introduces an adaptive mechanism that learns well-structured within-class feature distributions by pulling easy samples to class centers while pushing hard samples away. This mechanism also helps prevent the model from overfitting on noisy and low-quality samples.
Given the following machine learning model name: ParaNet Convolution Block, provide a description of the model
A **ParaNet Convolution Block** is a convolutional block that appears in the encoder and decoder of the [ParaNet](https://paperswithcode.com/method/paranet) text-to-speech architecture. It consists of a 1-D [convolution](https://paperswithcode.com/method/convolution) with a gated linear unit ([GLU](https://paperswithcode.com/method/glu)) and a [residual connection](https://paperswithcode.com/method/residual-connection). It is similar to the [DV3 Convolution Block](https://paperswithcode.com/method/dv3-convolution-block).
Given the following machine learning model name: ProxylessNAS, provide a description of the model
**ProxylessNAS** directly learns neural network architectures on the target task and target hardware without any proxy task. Additional contributions include: (1) a new path-level pruning perspective for [neural architecture search](https://paperswithcode.com/method/neural-architecture-search), showing a close connection between NAS and model compression, with memory consumption reduced by an order of magnitude through path-level binarization; (2) a novel gradient-based approach (latency regularization loss) for handling hardware objectives (e.g. latency). Given different hardware platforms (CPU/GPU/mobile), ProxylessNAS enables hardware-aware neural network specialization that is exactly optimized for the target hardware.
Given the following machine learning model name: MoGA-B, provide a description of the model
**MoGA-B** is a convolutional neural network optimized for mobile latency and discovered via Mobile GPU-Aware (MoGA) [neural architecture search](https://paperswithcode.com/method/neural-architecture-search). The basic building blocks are MBConvs (inverted residual blocks) from [MobileNetV2](https://paperswithcode.com/method/mobilenetv2). Squeeze-and-excitation layers are also experimented with.
Given the following machine learning model name: Graph Neural Networks with Continual Learning, provide a description of the model
Although significant effort has been applied to fact-checking, the prevalence of fake news over social media, which has profound impact on justice, public trust and our society, remains a serious problem. In this work, we focus on propagation-based fake news detection, as recent studies have demonstrated that fake news and real news spread differently online. Specifically, considering the capability of graph neural networks (GNNs) in dealing with non-Euclidean data, we use GNNs to differentiate between the propagation patterns of fake and real news on social media. In particular, we concentrate on two questions: (1) Without relying on any text information, e.g., tweet content, replies and user descriptions, how accurately can GNNs identify fake news? Machine learning models are known to be vulnerable to adversarial attacks, and avoiding the dependence on text-based features can make the model less susceptible to the manipulation of advanced fake news fabricators. (2) How to deal with new, unseen data? In other words, how does a GNN trained on a given dataset perform on a new and potentially vastly different dataset? If it achieves unsatisfactory performance, how do we solve the problem without re-training the model on the entire data from scratch? We study the above questions on two datasets with thousands of labelled news items, and our results show that: (1) GNNs can achieve comparable or superior performance without any text information to state-of-the-art methods. (2) GNNs trained on a given dataset may perform poorly on new, unseen data, and direct incremental training cannot solve the problem---this issue has not been addressed in the previous work that applies GNNs for fake news detection. In order to solve the problem, we propose a method that achieves balanced performance on both existing and new datasets, by using techniques from continual learning to train GNNs incrementally.
Given the following machine learning model name: Embedding Dropout, provide a description of the model
**Embedding Dropout** is equivalent to performing [dropout](https://paperswithcode.com/method/dropout) on the embedding matrix at a word level, where the dropout is broadcast across all the word vector’s embedding. The remaining non-dropped-out word embeddings are scaled by $\frac{1}{1-p\_{e}}$ where $p\_{e}$ is the probability of embedding dropout. As the dropout occurs on the embedding matrix that is used for a full forward and backward pass, this means that all occurrences of a specific word will disappear within that pass, equivalent to performing [variational dropout](https://paperswithcode.com/method/variational-dropout) on the connection between the one-hot embedding and the embedding lookup. Source: Merity et al, Regularizing and Optimizing [LSTM](https://paperswithcode.com/method/lstm) Language Models
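The mechanism above is simple to sketch in NumPy: draw one Bernoulli bit per word, zero out the whole row of the embedding matrix, and rescale the survivors. A minimal sketch with a toy embedding matrix (names are illustrative):

```python
import numpy as np

def embedding_dropout(embed, p_e, rng):
    """Drop entire word vectors from the embedding matrix and rescale.

    embed: (vocab, dim) embedding matrix
    p_e:   probability of dropping a word's embedding
    """
    mask = (rng.random(embed.shape[0]) >= p_e).astype(embed.dtype)  # one bit per word
    return embed * mask[:, None] / (1.0 - p_e)  # broadcast across the word vector, rescale

rng = np.random.default_rng(0)
E = np.ones((10, 4))
E_drop = embedding_dropout(E, p_e=0.5, rng=rng)
```

Because the mask is applied to the embedding matrix itself, every occurrence of a dropped word disappears for the whole forward/backward pass, matching the variational-dropout interpretation above.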
Given the following machine learning model name: Neural Turing Machine, provide a description of the model
A **Neural Turing Machine** is a working memory neural network model. It couples a neural network architecture with external memory resources. The whole architecture is differentiable end-to-end and trainable with gradient descent. The model can infer simple tasks such as copying, sorting and associative recall. A Neural Turing Machine (NTM) architecture contains two basic components: a neural network controller and a memory bank. The Figure presents a high-level diagram of the NTM architecture. Like most neural networks, the controller interacts with the external world via input and output vectors. Unlike a standard network, it also interacts with a memory matrix using selective read and write operations. By analogy to the Turing machine, we refer to the network outputs that parameterise these operations as “heads.” Every component of the architecture is differentiable. This is achieved by defining 'blurry' read and write operations that interact to a greater or lesser degree with all the elements in memory (rather than addressing a single element, as in a normal Turing machine or digital computer). The degree of blurriness is determined by an attentional “focus” mechanism that constrains each read and write operation to interact with a small portion of the memory, while ignoring the rest. Because interaction with the memory is highly sparse, the NTM is biased towards storing data without interference. The memory location brought into attentional focus is determined by specialised outputs emitted by the heads. These outputs define a normalised weighting over the rows in the memory matrix (referred to as memory “locations”). Each weighting, one per read or write head, defines the degree to which the head reads or writes at each location. A head can thereby attend sharply to the memory at a single location or weakly to the memory at many locations.
Given the following machine learning model name: MT-PET, provide a description of the model
**MT-PET** is a multi-task version of [Pattern Exploiting Training](https://arxiv.org/abs/2001.07676) (PET) for exaggeration detection, which leverages knowledge from complementary cloze-style QA tasks to improve few-shot learning. It defines pairs of complementary pattern-verbalizer pairs for a main task and auxiliary task. These PVPs are then used to train PET on data from both tasks. PET uses the masked language modeling objective of pretrained language models to transform a task into one or more cloze-style question answering tasks. In the original PET implementation, PVPs are defined for a single target task. MT-PET extends this by allowing for auxiliary PVPs from related tasks, adding complementary cloze-style QA tasks during training. The motivation for the multi-task approach is two-fold: 1) complementary cloze-style tasks can potentially help the model to learn different aspects of the main task, i.e. the similar tasks of exaggeration detection and claim strength prediction; 2) data on related tasks can be utilized during training, which is important in situations where data for the main task is limited.
Given the following machine learning model name: Kaleido-BERT, provide a description of the model
**Kaleido-BERT** (CVPR 2021) is a pioneering pre-trained model (PTM) for the e-commerce domain. It achieves state-of-the-art performance compared with many models published in the general domain.
Given the following machine learning model name: ShuffleNet V2 Block, provide a description of the model
**ShuffleNet V2 Block** is an image model block used in the [ShuffleNet V2](https://paperswithcode.com/method/shufflenet-v2) architecture, where speed is the metric optimized for (instead of indirect ones like FLOPs). It utilizes a simple operator called channel split. At the beginning of each unit, the input of $c$ feature channels is split into two branches with $c - c'$ and $c'$ channels, respectively. Following **G3**, one branch remains as identity. The other branch consists of three convolutions with the same input and output channels to satisfy **G1**. The two $1\times1$ convolutions are no longer group-wise, unlike the original [ShuffleNet](https://paperswithcode.com/method/shufflenet). This is partially to follow **G2**, and partially because the split operation already produces two groups. After [convolution](https://paperswithcode.com/method/convolution), the two branches are concatenated, so the number of channels remains the same (**G1**). The same “[channel shuffle](https://paperswithcode.com/method/channel-shuffle)” operation as in ShuffleNet is then used to enable information communication between the two branches. The motivation behind channel split is that alternative architectures, where pointwise group convolutions and bottleneck structures are used, lead to increased memory access cost. Additionally, network fragmentation from group convolutions reduces parallelism (less friendly for GPUs), and element-wise addition operations, while low in FLOPs, have a high memory access cost. Channel split is an alternative where we can maintain a large number of equally wide channels (equal width minimizes memory access cost) without dense convolutions or too many groups.
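The channel split and channel shuffle operations are easy to illustrate in isolation. Below is a NumPy sketch of just these two steps on an NCHW tensor (the convolutions of the block are omitted; shapes are illustrative):

```python
import numpy as np

def channel_split(x, c_prime):
    """Split an NCHW tensor into two branches with c - c' and c' channels."""
    return x[:, :x.shape[1] - c_prime], x[:, x.shape[1] - c_prime:]

def channel_shuffle(x, groups):
    """ShuffleNet channel shuffle: reshape (g, c/g), transpose, flatten back."""
    n, c, h, w = x.shape
    return (x.reshape(n, groups, c // groups, h, w)
             .transpose(0, 2, 1, 3, 4)
             .reshape(n, c, h, w))

x = np.arange(2 * 8 * 1 * 1, dtype=float).reshape(2, 8, 1, 1)
left, right = channel_split(x, 4)                 # identity branch / conv branch
merged = np.concatenate([left, right], axis=1)    # concat keeps channel count (G1)
shuffled = channel_shuffle(merged, groups=2)      # mix information across branches
```

With 8 channels and 2 groups, the shuffle interleaves the two halves: channel order `[0,1,...,7]` becomes `[0,4,1,5,2,6,3,7]`, which is how information flows between the two branches.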
Given the following machine learning model name: Multi-source Sentiment Generative Adversarial Network, provide a description of the model
**Multi-source Sentiment Generative Adversarial Network** is a multi-source domain adaptation (MDA) method for visual sentiment classification. It is composed of three pipelines, i.e., image reconstruction, image translation, and cycle-reconstruction. To handle data from multiple source domains, it learns to find a unified sentiment latent space where data from both the source and target domains share a similar distribution. This is achieved via cycle consistent adversarial learning in an end-to-end manner. Notably, thanks to the unified sentiment latent space, MSGAN requires a single classification network to handle data from different source domains.
Given the following machine learning model name: online deep learning, provide a description of the model
Deep Neural Networks (DNNs) are typically trained by backpropagation in a batch learning setting, which requires the entire training data to be made available prior to the learning task. This is not scalable for many real-world scenarios where new data arrives sequentially in a stream form. We aim to address an open challenge of "Online Deep Learning" (ODL) for learning DNNs on the fly in an online setting. Unlike traditional online learning that often optimizes some convex objective function with respect to a shallow model (e.g., a linear/kernel-based hypothesis), ODL is significantly more challenging since the optimization of the DNN objective function is non-convex, and regular backpropagation does not work well in practice, especially for online learning settings.
Given the following machine learning model name: DynaBERT, provide a description of the model
**DynaBERT** is a [BERT](https://paperswithcode.com/method/bert)-variant which can flexibly adjust its size and latency by selecting adaptive width and depth. The training process of DynaBERT includes first training a width-adaptive BERT and then allowing both adaptive width and depth, by distilling knowledge from the full-sized model to small sub-networks. Network rewiring is also used to keep the more important attention heads and neurons shared by more sub-networks. A two-stage procedure is used to train DynaBERT. First, knowledge distillation is used to transfer the knowledge from a fixed teacher model to student sub-networks with adaptive width in DynaBERT$\_{W}$. Then, knowledge distillation is used to transfer the knowledge from a trained DynaBERT$\_{W}$ to student sub-networks with adaptive width and depth in DynaBERT.
Given the following machine learning model name: AdaShift, provide a description of the model
**AdaShift** is a type of adaptive stochastic optimizer that decorrelates $v\_{t}$ and $g\_{t}$ in [Adam](https://paperswithcode.com/method/adam) by temporal shifting, i.e., using temporally shifted gradient $g\_{t−n}$ to calculate $v\_{t}$. The authors argue that an inappropriate correlation between gradient $g\_{t}$ and the second-moment term $v\_{t}$ exists in Adam, which results in a large gradient being likely to have a small step size while a small gradient may have a large step size. The authors argue that such biased step sizes are the fundamental cause of non-convergence of Adam. The AdaShift updates, based on the idea of temporal independence between gradients, are as follows: $$ g\_{t} = \nabla{f\_{t}}\left(\theta\_{t}\right) $$ $$ m\_{t} = \sum^{n-1}\_{i=0}\beta^{i}\_{1}g\_{t-i}/\sum^{n-1}\_{i=0}\beta^{i}\_{1} $$ Then for $i=1$ to $M$: $$ v\_{t}\left[i\right] = \beta\_{2}v\_{t-1}\left[i\right] + \left(1-\beta\_{2}\right)\phi\left(g^{2}\_{t-n}\left[i\right]\right) $$ $$ \theta\_{t}\left[i\right] = \theta\_{t-1}\left[i\right] - \alpha\_{t}/\sqrt{v\_{t}\left[i\right]}\cdot{m\_{t}\left[i\right]} $$
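A simplified sketch of the update in NumPy, taking $n=1$, $\phi$ as the identity, and $m\_{t} = g\_{t}$ (no first-moment averaging): the point is only that $v\_{t}$ is built from the gradient $n$ steps ago, decorrelating the step size from the current gradient. A toy illustration, not the authors' implementation:

```python
import numpy as np

def adashift(grad_fn, theta, alpha=0.1, beta2=0.999, n=1, steps=200, eps=1e-8):
    """Simplified AdaShift: v_t uses the temporally shifted gradient g_{t-n}."""
    history = []                        # buffer of the last n gradients
    v = np.zeros_like(theta)
    for _ in range(steps):
        g = grad_fn(theta)
        history.append(g)
        if len(history) > n:            # shifted gradient g_{t-n} is available
            g_shift = history.pop(0)
            v = beta2 * v + (1 - beta2) * g_shift ** 2   # phi = identity
            theta = theta - alpha / (np.sqrt(v) + eps) * g
    return theta

# minimise f(x) = x^2 starting from x0 = 3
theta = adashift(lambda x: 2 * x, np.array([3.0]))
```

Because `v` is computed from `g_shift` rather than the current `g`, a large current gradient no longer automatically shrinks its own step size, which is the bias the method targets.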
Given the following machine learning model name: Rotary Position Embedding, provide a description of the model
**Rotary Position Embedding**, or **RoPE**, is a type of position embedding which encodes absolute positional information with a rotation matrix and naturally incorporates explicit relative position dependency into the self-attention formulation. Notably, RoPE comes with valuable properties such as the flexibility to be expanded to any sequence length, decaying inter-token dependency with increasing relative distances, and the capability of equipping linear self-attention with relative position encoding.
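The rotation can be sketched in NumPy as a 2-D rotation applied to each consecutive pair of dimensions; the frequencies below follow the usual $\theta\_{i} = 10000^{-2i/d}$ choice, and the function names are illustrative:

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Apply rotary position embedding.

    x:   (seq, dim) with even dim; each pair (x[2i], x[2i+1]) is rotated
    pos: (seq,) integer positions
    """
    d = x.shape[-1]
    freqs = base ** (-np.arange(0, d, 2) / d)   # theta_i per dimension pair
    angles = pos[:, None] * freqs[None, :]      # (seq, d/2) rotation angles
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin          # 2-D rotation per pair
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

q = np.random.default_rng(0).normal(size=(5, 8))
q_rot = rope(q, np.arange(5))
```

Since rotations preserve norms, and the dot product of two rotated vectors depends only on the difference of their rotation angles, attention scores depend only on the relative position, which is the key property claimed above.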
Given the following machine learning model name: Inception-ResNet-v2 Reduction-B, provide a description of the model
**Inception-ResNet-v2 Reduction-B** is an image model block used in the [Inception-ResNet-v2](https://paperswithcode.com/method/inception-resnet-v2) architecture.
Given the following machine learning model name: Directed Acyclic Graph Neural Network, provide a description of the model
A graph neural network for directed acyclic graphs (DAGs), which injects their topological order as an inductive bias via asynchronous message passing.
Given the following machine learning model name: Linear Regression, provide a description of the model
**Linear Regression** is a method for modelling the relationship between a dependent variable and one or more independent variables. These models can be fit with numerous approaches. The most common is *least squares*, where we minimize the sum of squared errors between the predicted values $\hat{y} = \textbf{X}\hat{\beta}$ and the actual values $y$: $\left|\left|y-\textbf{X}\beta\right|\right|^{2}$. We can also define the problem in probabilistic terms as a generalized linear model (GLM) where the pdf is a Gaussian distribution, and then perform maximum likelihood estimation to estimate $\hat{\beta}$.
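A minimal NumPy example of the least-squares fit on synthetic data (noiseless here, so the estimate recovers the true coefficients exactly):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.uniform(0, 10, 50)])  # intercept + one feature
true_beta = np.array([2.0, 0.5])
y = X @ true_beta                                           # noiseless response

# solve min ||y - X beta||^2
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
y_pred = X @ beta_hat
```

With noisy data the same call returns the least-squares estimate, which coincides with the maximum-likelihood estimate under the Gaussian GLM view described above.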
Given the following machine learning model name: An Easier Data Augmentation, provide a description of the model
**AEDA**, or **An Easier Data Augmentation**, is a data augmentation technique for text classification which consists only of inserting various punctuation marks into the input sequence. AEDA preserves all of the input information and does not mislead the network, since it keeps the word order intact: the inserted punctuation marks merely shift the words to the right.
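A rough sketch of the augmentation in Python. The punctuation set matches the description above, while the number and placement of insertions here are illustrative choices rather than the paper's exact procedure:

```python
import random

PUNCT = [".", ";", "?", ":", "!", ","]

def aeda(sentence, ratio=0.3, rng=None):
    """Insert random punctuation into the gaps between words; word order is unchanged."""
    rng = rng or random.Random(0)
    words = sentence.split()
    n_insert = max(1, int(ratio * len(words)))
    positions = set(rng.sample(range(len(words) + 1), n_insert))  # gaps, incl. the end
    out = []
    for i, w in enumerate(words):
        if i in positions:
            out.append(rng.choice(PUNCT))
        out.append(w)
    if len(words) in positions:
        out.append(rng.choice(PUNCT))
    return " ".join(out)

augmented = aeda("the quick brown fox jumps over the lazy dog")
```

Filtering the punctuation tokens back out of the augmented sentence recovers the original word sequence, which is exactly the information-preservation property claimed above.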
Given the following machine learning model name: Reversible Residual Block, provide a description of the model
**Reversible Residual Blocks** are skip-connection blocks that learn *reversible* residual functions with reference to the layer inputs. It is proposed as part of the [RevNet](https://paperswithcode.com/method/revnet) CNN architecture. Units in each layer are partitioned into two groups, denoted $x\_{1}$ and $x\_{2}$; the authors find what works best is partitioning the channels. Each reversible block takes inputs $\left(x\_{1}, x\_{2}\right)$ and produces outputs $\left(y\_{1}, y\_{2}\right)$ according to the following additive coupling rules – inspired by the transformation in [NICE](https://paperswithcode.com/method/nice) (nonlinear independent components estimation) – and residual functions $F$ and $G$ analogous to those in standard [ResNets](https://paperswithcode.com/method/resnet): $$y\_{1} = x\_{1} + F\left(x\_{2}\right)$$ $$y\_{2} = x\_{2} + G\left(y\_{1}\right)$$ Each layer’s activations can be reconstructed from the next layer’s activations as follows: $$ x\_{2} = y\_{2} − G\left(y\_{1}\right)$$ $$ x\_{1} = y\_{1} − F\left(x\_{2}\right)$$
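The coupling rules and their inversion can be checked with a toy NumPy sketch, using arbitrary elementwise stand-ins for the residual sub-networks $F$ and $G$:

```python
import numpy as np

def rev_forward(x1, x2, F, G):
    """Additive coupling: y1 = x1 + F(x2); y2 = x2 + G(y1)."""
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def rev_inverse(y1, y2, F, G):
    """Reconstruct the inputs from the outputs, without stored activations."""
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

# arbitrary nonlinear stand-ins for the conv sub-networks F and G
F = lambda z: np.tanh(z)
G = lambda z: 0.5 * z ** 2

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=3), rng.normal(size=3)
y1, y2 = rev_forward(x1, x2, F, G)
r1, r2 = rev_inverse(y1, y2, F, G)
```

Note the inversion works for any $F$ and $G$, invertible or not, because each coupling step only ever adds a function of the *other* partition.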
Given the following machine learning model name: BP-Transformer, provide a description of the model
The **BP-Transformer (BPT)** is a type of [Transformer](https://paperswithcode.com/method/transformer) that is motivated by the need to find a better balance between capability and computational complexity for self-attention. The architecture partitions the input sequence into different multi-scale spans via binary partitioning (BP). It incorporates an inductive bias of attending to context information from fine-grain to coarse-grain as the relative distance increases: the farther away the context information is, the coarser its representation is. BPT can be regarded as a graph neural network whose nodes are the multi-scale spans. A token node can attend to a smaller-scale span for closer context and a larger-scale span for longer-distance context. The representations of nodes are updated with [Graph Self-Attention](https://paperswithcode.com/method/graph-self-attention).
Given the following machine learning model name: Coresets, provide a description of the model
Given the following machine learning model name: Global Context Block, provide a description of the model
A **Global Context Block** is an image model block for global context modeling. The aim is to have both the benefits of the simplified [non-local block](https://paperswithcode.com/method/non-local-block) with effective modeling of long-range dependencies, and the [squeeze-excitation block](https://paperswithcode.com/method/squeeze-and-excitation-block) with lightweight computation. In the Global Context framework, we have (a) global attention pooling, which adopts a [1x1 convolution](https://paperswithcode.com/method/1x1-convolution) $W_{k}$ and [softmax](https://paperswithcode.com/method/softmax) function to obtain the attention weights, and then performs the attention pooling to obtain the global context features, (b) feature transform via a 1x1 [convolution](https://paperswithcode.com/method/convolution) $W\_{v}$; (c) feature aggregation, which employs addition to aggregate the global context features to the features of each position. Taken as a whole, the GC block is proposed as a lightweight way to achieve global context modeling.
Given the following machine learning model name: Holographic Reduced Representation, provide a description of the model
**Holographic Reduced Representations** are a simple mechanism to represent an associative array of key-value pairs in a fixed-size vector. Each individual key-value pair is the same size as the entire associative array; the array is represented by the sum of the pairs. Concretely, consider a complex vector key $r = (a\_{r}[1]e^{iφ\_{r}[1]}, a\_{r}[2]e^{iφ\_{r}[2]}, \dots)$, which is the same size as the complex vector value $x$. The pair is "bound" together by element-wise complex multiplication, which multiplies the moduli and adds the phases of the elements: $$ y = r \otimes x $$ $$ y = \left(a\_{r}[1]a\_{x}[1]e^{i(φ\_{r}[1]+φ\_{x}[1])}, a\_{r}[2]a\_{x}[2]e^{i(φ\_{r}[2]+φ\_{x}[2])}, \dots\right) $$ Given keys $r\_{1}$, $r\_{2}$, $r\_{3}$ and input vectors $x\_{1}$, $x\_{2}$, $x\_{3}$, the associative array is: $$c = r\_{1} \otimes x\_{1} + r\_{2} \otimes x\_{2} + r\_{3} \otimes x\_{3} $$ where we call $c$ a memory trace. Define the key inverse: $$ r^{-1} = \left(a\_{r}[1]^{−1}e^{−iφ\_{r}[1]}, a\_{r}[2]^{−1}e^{−iφ\_{r}[2]}, \dots\right) $$ To retrieve the item associated with key $r\_{k}$, we multiply the memory trace element-wise by the vector $r^{-1}\_{k}$. For example: $$ r\_{2}^{−1} \otimes c = r\_{2}^{-1} \otimes \left(r\_{1} \otimes x\_{1} + r\_{2} \otimes x\_{2} + r\_{3} \otimes x\_{3}\right) $$ $$ r\_{2}^{−1} \otimes c = x\_{2} + r^{-1}\_{2} \otimes \left(r\_{1} \otimes x\_{1} + r\_{3} \otimes x\_{3}\right) $$ $$ r\_{2}^{−1} \otimes c = x\_{2} + noise $$ The product is exactly $x\_{2}$ together with a noise term. If the phases of the elements of the key vector are randomly distributed, the noise term has zero mean. Source: [Associative LSTMs](https://arxiv.org/pdf/1602.03032.pdf)
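The binding and retrieval steps can be sketched in NumPy. Unit-modulus keys are used here (an illustrative simplification) so that the key inverse is simply the complex conjugate:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 512

def random_key(d, rng):
    """Unit-modulus complex key: its inverse is just the complex conjugate."""
    return np.exp(1j * rng.uniform(-np.pi, np.pi, d))

keys = [random_key(d, rng) for _ in range(3)]
values = [rng.normal(size=d) + 1j * rng.normal(size=d) for _ in range(3)]

# bind each pair by elementwise complex multiplication, superpose into one trace
trace = sum(k * v for k, v in zip(keys, values))

# unbind with the key inverse: r2^{-1} (x) c = x2 + noise
retrieved = np.conj(keys[1]) * trace
```

The retrieved vector is the stored value plus a zero-mean noise term, so it is far more similar to the correct value than to the other stored values, and the similarity improves with dimension $d$.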
Given the following machine learning model name: Random Synthesized Attention, provide a description of the model
**Random Synthesized Attention** is a form of synthesized attention where the attention weights are not conditioned on any input tokens. Instead, the attention weights are initialized to random values. It was introduced with the [Synthesizer](https://paperswithcode.com/method/synthesizer) architecture. Random Synthesized Attention contrasts with [Dense Synthesized Attention](https://paperswithcode.com/method/dense-synthesized-attention) which conditions on each token independently, as opposed to pairwise token interactions in the vanilla [Transformer](https://paperswithcode.com/method/transformer) model. Let $R$ be a randomly initialized matrix. Random Synthesized Attention is defined as: $$Y = \text{Softmax}\left(R\right)G\left(X\right) $$ where $R \in \mathbb{R}^{l \text{ x } l}$. Notably, each head adds $l^{2}$ parameters to the overall network. The basic idea of the Random Synthesizer is to not rely on pairwise token interactions or any information from individual tokens but rather to learn a task-specific alignment that works well globally across many samples. This is a direct generalization of the recently proposed fixed self-attention patterns of [Raganato et al (2020)](https://arxiv.org/abs/2002.10260).
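A toy NumPy sketch of the computation $Y = \text{Softmax}(R)\,G(X)$, taking $G$ to be a simple linear projection (an assumption for illustration; names are not from the paper):

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def random_synthesized_attention(X, R, W_v):
    """Y = Softmax(R) G(X). The weights R are a trainable l x l matrix,
    independent of the input tokens; G is taken to be a linear map here."""
    return softmax(R) @ (X @ W_v)

l, d = 6, 4                        # sequence length, model dim
rng = np.random.default_rng(0)
X = rng.normal(size=(l, d))
R = rng.normal(size=(l, l))        # input-independent attention logits
W_v = rng.normal(size=(d, d))
Y = random_synthesized_attention(X, R, W_v)
```

Note the attention matrix `softmax(R)` is computed without any query-key products: it is the same learned alignment for every input, which is the whole point of the Random Synthesizer.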
Given the following machine learning model name: CORAD: Correlation-Aware Compression of Massive Time Series using Sparse Dictionary Coding, provide a description of the model
Given the following machine learning model name: BezierAlign, provide a description of the model
**BezierAlign** is a feature sampling method for arbitrarily-shaped scene text recognition that exploits the parameterization nature of a compact Bezier curve bounding box. Unlike RoIAlign, the sampling grid of BezierAlign is not rectangular. Instead, each column of the arbitrarily-shaped grid is orthogonal to the Bezier curve boundary of the text. The sampling points have equidistant intervals in width and height, respectively, and are bilinearly interpolated with respect to the coordinates. Formally, given an input feature map and Bezier curve control points, we concurrently process all the output pixels of the rectangular output feature map with size $h\_{\text{out}} \times w\_{\text{out}}$. Taking pixel $g\_{i}$ with position $\left(g\_{iw}, g\_{ih}\right)$ (from the output feature map) as an example, we calculate $t$ by: $$ t=\frac{g\_{iw}}{w\_{\text{out}}} $$ We then calculate the point of the upper Bezier curve boundary $tp$ and the lower Bezier curve boundary $bp$. Using $tp$ and $bp$, we can linearly index the sampling point $op$ by: $$ op=bp \cdot \frac{g\_{ih}}{h\_{\text{out}}}+tp \cdot\left(1-\frac{g\_{ih}}{h\_{\text{out}}}\right) $$ With the position of $op$, we can easily apply bilinear interpolation to calculate the result. Comparisons among previous sampling methods and BezierAlign are shown in the Figure.
Given the following machine learning model name: Highway Network, provide a description of the model
A **Highway Network** is an architecture designed to ease gradient-based training of very deep networks. They allow unimpeded information flow across several layers on "information highways". The architecture is characterized by the use of gating units which learn to regulate the flow of information through a network. Highway networks with hundreds of layers can be trained directly using stochastic gradient descent and with a variety of activation functions.
Given the following machine learning model name: Spatial Attention Module (ThunderNet), provide a description of the model
**Spatial Attention Module (SAM)** is a feature extraction module for object detection used in [ThunderNet](https://paperswithcode.com/method/thundernet). The ThunderNet SAM explicitly re-weights the feature map before RoI warping over the spatial dimensions. The key idea of SAM is to use the knowledge from [RPN](https://paperswithcode.com/method/rpn) to refine the feature distribution of the feature map. RPN is trained to recognize foreground regions under the supervision of ground truths. Therefore, the intermediate features in RPN can be used to distinguish foreground features from background features. SAM accepts two inputs: the intermediate feature map from RPN $\mathcal{F}^{RPN}$ and the thin feature map from the [Context Enhancement Module](https://paperswithcode.com/method/context-enhancement-module) $\mathcal{F}^{CEM}$. The output of SAM $\mathcal{F}^{SAM}$ is defined as: $$ \mathcal{F}^{SAM} = \mathcal{F}^{CEM} * \text{sigmoid}\left(\theta\left(\mathcal{F}^{RPN}\right)\right) $$ Here $\theta\left(·\right)$ is a dimension transformation to match the number of channels in both feature maps. The sigmoid function is used to constrain the values within $\left[0, 1\right]$. At last, $\mathcal{F}^{CEM}$ is re-weighted by the generated feature map for better feature distribution. For computational efficiency, we simply apply a 1×1 [convolution](https://paperswithcode.com/method/convolution) as $\theta\left(·\right)$, so the computational cost of SAM is negligible. The Figure to the right shows the structure of SAM. SAM has two functions. The first one is to refine the feature distribution by strengthening foreground features and suppressing background features. The second one is to stabilize the training of RPN, as SAM enables extra gradient flow from the [R-CNN](https://paperswithcode.com/method/r-cnn) subnet to RPN. As a result, RPN receives additional supervision from the R-CNN subnet, which helps the training of RPN.
Given the following machine learning model name: Conditional DBlock, provide a description of the model
**Conditional DBlock** is a residual based block used in the discriminator of the [GAN-TTS](https://paperswithcode.com/method/gan-tts) architecture. They are similar to the [GBlocks](https://paperswithcode.com/method/gblock) used in the generator, but without [batch normalization](https://paperswithcode.com/method/batch-normalization). Unlike the [DBlock](https://paperswithcode.com/method/dblock), the Conditional DBlock adds the embedding of the linguistic features after the first [convolution](https://paperswithcode.com/method/convolution).
Given the following machine learning model name: Adversarial Soft Advantage Fitting (ASAF), provide a description of the model
**Adversarial Soft Advantage Fitting (ASAF)** is an adversarial imitation learning method that removes the policy optimization step of approaches such as GAIL. By giving the discriminator a particular structure, the policy that best fools the discriminator can be recovered in closed form from the discriminator itself, so imitation reduces to iteratively fitting the discriminator on expert and generated trajectories, with no reinforcement learning inner loop.
Given the following machine learning model name: ChebNet, provide a description of the model
**ChebNet** is a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs: spectral filters are approximated by Chebyshev polynomials of the graph Laplacian, avoiding an explicit eigendecomposition. Description from: [Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering](https://arxiv.org/pdf/1606.09375.pdf)
Given the following machine learning model name: MacBERT, provide a description of the model
**MacBERT** is a [Transformer](https://paperswithcode.com/methods/category/transformers)-based model for Chinese NLP that alters [RoBERTa](https://paperswithcode.com/method/roberta) in several ways, including a modified masking strategy. Instead of masking with the [MASK] token, which never appears in the fine-tuning stage, MacBERT masks each word with a similar word. Specifically, MacBERT shares the same pre-training tasks as [BERT](https://paperswithcode.com/method/bert) with several modifications. For the MLM task, the following modifications are performed: - Whole word masking is used, as well as N-gram masking strategies for selecting candidate tokens for masking, with percentages of 40%, 30%, 20%, and 10% for word-level unigrams up to 4-grams. - Instead of masking with the [MASK] token, which never appears in the fine-tuning stage, similar words are used for masking. A similar word is obtained using the Synonyms toolkit, which is based on word2vec similarity calculations. If an N-gram is selected for masking, similar words are found individually. In rare cases, when there is no similar word, random word replacement is used instead. - 15% of the input words are selected for masking: 80% are replaced with similar words, 10% with a random word, and the remaining 10% are kept unchanged.
Given the following machine learning model name: Causal Convolution, provide a description of the model
**Causal convolutions** are a type of [convolution](https://paperswithcode.com/method/convolution) used for temporal data which ensures the model cannot violate the ordering in which we model the data: the prediction $p(x_{t+1} | x_{1}, \ldots, x_{t})$ emitted by the model at timestep $t$ cannot depend on any of the future timesteps $x_{t+1}, x_{t+2}, \ldots, x_{T}$. For images, the equivalent of a causal convolution is a [masked convolution](https://paperswithcode.com/method/masked-convolution) which can be implemented by constructing a mask tensor and doing an element-wise multiplication of this mask with the convolution kernel before applying it. For 1-D data such as audio, one can more easily implement this by shifting the output of a normal convolution by a few timesteps.
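A minimal sketch of a 1-D causal convolution, implemented here by left-padding the input with zeros (equivalent to shifting a normal convolution's output):

```python
def causal_conv1d(x, kernel):
    """1-D causal convolution: output[t] depends only on x[:t+1].
    Left-padding with k-1 zeros keeps the output length equal to the
    input length while preserving causality."""
    k = len(kernel)
    padded = [0.0] * (k - 1) + list(x)
    return [sum(kernel[j] * padded[t + j] for j in range(k))
            for t in range(len(x))]

x = [1.0, 2.0, 3.0, 4.0]
y = causal_conv1d(x, kernel=[0.5, 0.5])  # a causal moving average
# y == [0.5, 1.5, 2.5, 3.5]: each output mixes the current and previous step
```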
Given the following machine learning model name: DeeBERT, provide a description of the model
**DeeBERT** is a method for accelerating [BERT](https://paperswithcode.com/method/bert) inference. It inserts extra classification layers (which are referred to as off-ramps) between each [transformer](https://paperswithcode.com/method/transformer) layer of BERT. All transformer layers and off-ramps are jointly fine-tuned on a given downstream dataset. At inference time, after a sample goes through a transformer layer, it is passed to the following off-ramp. If the off-ramp is confident of the prediction, the result is returned; otherwise, the sample is sent to the next transformer layer.
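A toy sketch of the early-exit loop; confidence is assumed to be measured by the entropy of the off-ramp's output distribution (the criterion used in the paper), and the "layers" and "off-ramps" below are illustrative stand-ins, not real transformer components:

```python
import math

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def deebert_style_inference(layers, off_ramps, x, threshold):
    """Run layers one by one; after each, an off-ramp classifier
    produces a distribution.  Exit early when the off-ramp is
    confident, i.e. the prediction entropy falls below `threshold`."""
    for i, (layer, ramp) in enumerate(zip(layers, off_ramps)):
        x = layer(x)
        probs = ramp(x)
        if entropy(probs) < threshold:
            return probs, i          # early exit at layer i
    return probs, len(layers) - 1    # fell through to the last layer

# Toy stand-ins: each "layer" nudges a score, each "ramp" reads it out.
layers = [lambda x: x + 1.0] * 4
def make_ramp():
    def ramp(x):
        p = 1.0 / (1.0 + math.exp(-x))  # confidence grows with depth
        return [p, 1.0 - p]
    return ramp
ramps = [make_ramp() for _ in range(4)]
probs, exit_layer = deebert_style_inference(layers, ramps, 0.0, threshold=0.4)
```

Lowering the threshold trades accuracy for speed: confident samples leave through early off-ramps, hard samples use the full network.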
Given the following machine learning model name: AutoDropout, provide a description of the model
**AutoDropout** automates the process of designing [dropout](https://paperswithcode.com/method/dropout) patterns using a [Transformer](https://paperswithcode.com/method/transformer)-based controller. In this method, a controller learns to generate a dropout pattern at every channel and layer of a target network, such as a [ConvNet](https://paperswithcode.com/methods/category/convolutional-neural-networks) or a Transformer. The target network is then trained with the dropped-out pattern, and its resulting validation performance is used as a signal for the controller to learn from. The resulting pattern is applied to a convolutional output channel, which is a common building block of image recognition models. The controller network generates the tokens that describe the configuration of the dropout pattern. The tokens are generated like words in a language model. For every layer in a ConvNet, a group of 8 tokens needs to be generated to create a dropout pattern. These 8 tokens are generated sequentially. In the figure above, size, stride, and repeat indicate the size and the tiling of the pattern; rotate, shear_x, and shear_y specify the geometric transformations of the pattern; share_c is a binary decision on whether a pattern is applied to all $C$ channels; and residual is a binary decision on whether the pattern is applied to the residual branch as well. If we need $L$ dropout patterns, the controller will generate $8L$ decisions.
Given the following machine learning model name: SimpleNet, provide a description of the model
**SimpleNet** is a convolutional neural network with 13 layers. The network employs a homogeneous design utilizing 3 × 3 kernels for convolutional layers and 2 × 2 kernels for pooling operations. The only layers that do not use 3 × 3 kernels are the 11th and 12th layers, which utilize 1 × 1 convolutional kernels. Feature-map down-sampling is carried out using non-overlapping 2 × 2 max-pooling. To cope with the problems of vanishing gradients and over-fitting, SimpleNet also uses batch-normalization with a moving average fraction of 0.95 before any [ReLU](https://paperswithcode.com/method/relu) non-linearity.
Given the following machine learning model name: HyperGraph Self-Attention, provide a description of the model
An extension of Self-Attention to hypergraphs. Skeleton-based action recognition aims to recognize human actions given human joint coordinates with skeletal interconnections. By defining a graph with joints as vertices and their natural connections as edges, previous works successfully adopted Graph Convolutional Networks (GCNs) to model joint co-occurrences and achieved superior performance. More recently, a limitation of GCNs was identified, i.e., the topology is fixed after training. To relax this restriction, Self-Attention (SA) mechanisms have been adopted to make the topology of GCNs adaptive to the input, resulting in state-of-the-art hybrid models. Concurrently, attempts with plain Transformers have also been made, but they still lag behind state-of-the-art GCN-based methods due to the lack of a structural prior. Unlike hybrid models, we propose a more elegant solution to incorporate bone connectivity into the Transformer via a graph distance embedding. Our embedding retains the information of the skeletal structure during training, whereas GCNs merely use it for initialization. More importantly, we reveal an underlying issue of graph models in general, i.e., pairwise aggregation essentially ignores the high-order kinematic dependencies between body joints. To fill this gap, we propose a new self-attention (SA) mechanism on the hypergraph, termed Hypergraph Self-Attention (HyperSA), to incorporate intrinsic higher-order relations into the model. We name the resulting model Hyperformer, and it beats state-of-the-art graph models w.r.t. accuracy and efficiency on the NTU RGB+D, NTU RGB+D 120, and Northwestern-UCLA datasets.
Given the following machine learning model name: Non Maximum Suppression, provide a description of the model
**Non Maximum Suppression** is a computer vision method that selects a single entity out of many overlapping entities (for example bounding boxes in object detection). The criterion is usually to discard entities that are below a given probability bound. With the remaining entities we repeatedly pick the entity with the highest probability, output that as the prediction, and discard any remaining box with an $\text{IoU} \geq 0.5$ relative to the box output in the previous step. Image Credit: [Martin Kersner](https://github.com/martinkersner/non-maximum-suppression-cpp)
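A minimal sketch of the greedy procedure described above (the score and IoU thresholds are illustrative defaults):

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, score_thresh=0.5, iou_thresh=0.5):
    # Discard low-probability boxes, then greedily keep the
    # highest-scoring box and suppress overlapping ones.
    order = sorted((i for i, s in enumerate(scores) if s >= score_thresh),
                   key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
keep_idx = nms(boxes, scores)
# The second box heavily overlaps the first and is suppressed.
```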
Given the following machine learning model name: Unsupervised Feature Loss, provide a description of the model
**UFLoss**, or **Unsupervised Feature Loss**, is a patch-based unsupervised learned feature loss for deep learning (DL) based reconstructions. The UFLoss provides instance-level discrimination by mapping similar instances to similar low-dimensional feature vectors using a pre-trained mapping network (UFLoss Network). The rationale for using features from large patches (typically 40×40 pixels for a 300×300 pixel image) is that we want the UFLoss to capture mid-level structural and semantic features, instead of using small patches (typically around 10×10 pixels), which only contain local edge information. On the other hand, the authors avoid using global features due to the fact that the training set (typically around 5000 slices) is usually not large enough to capture common and general features at a large-image scale.
Given the following machine learning model name: TabTransformer, provide a description of the model
**TabTransformer** is a deep tabular data modeling architecture for supervised and semi-supervised learning. The TabTransformer is built upon self-attention based Transformers. The Transformer layers transform the embeddings of categorical features into robust contextual embeddings to achieve higher prediction accuracy. As an overview, the architecture comprises a column embedding layer, a stack of $N$ [Transformer](/method/transformer) layers, and a multi-layer perceptron (MLP). The contextual embeddings (output by the Transformer layers) are concatenated along with the continuous features and input to the MLP. The loss function is then minimized to learn all the parameters in end-to-end learning.
Given the following machine learning model name: Gait Emotion Recognition, provide a description of the model
We present a novel classifier network called STEP, to classify perceived human emotion from gaits, based on a Spatial Temporal Graph Convolutional Network (ST-[GCN](https://paperswithcode.com/method/gcn)) architecture. Given an RGB video of an individual walking, our formulation implicitly exploits the gait features to classify the perceived emotion of the human into one of four emotions: happy, sad, angry, or neutral. We train STEP on annotated real-world gait videos, augmented with annotated synthetic gaits generated using a novel generative network called STEP-Gen, built on an ST-GCN based Conditional Variational Autoencoder (CVAE). We incorporate a novel push-pull regularization loss in the CVAE formulation of STEP-Gen to generate realistic gaits and improve the classification accuracy of STEP. We also release a novel dataset (E-Gait), which consists of 4,227 human gaits annotated with perceived emotions along with thousands of synthetic gaits. In practice, STEP can learn the affective features and exhibits a classification accuracy of 88% on E-Gait, which is 14–30% more accurate than prior methods.
Given the following machine learning model name: Denoising Score Matching, provide a description of the model
**Denoising Score Matching** trains a denoiser on noise-corrupted signals; the learned denoiser implicitly estimates the score of the data distribution (the gradient of the log density), giving a powerful prior over the signal that you can then use to sample examples of that signal, e.g. via Langevin dynamics.
Given the following machine learning model name: Model-Free Episodic Control, provide a description of the model
**Model-Free Episodic Control** is a non-parametric approximation of Q-values: every visited state is stored in a table, and inference is performed through a k-Nearest Neighbors lookup over the stored states.
Given the following machine learning model name: Revision Network, provide a description of the model
**Revision Network** is a style transfer module that aims to revise the rough stylized image by generating a residual details image $r\_{c s}$, while the final stylized image is generated by combining $r\_{c s}$ and the rough stylized image $\bar{x}\_{c s}$. This procedure ensures that the distribution of the global style pattern in $\bar{x}\_{c s}$ is properly kept. Meanwhile, learning to revise local style patterns with a residual details image is easier for the Revision Network. As shown in the Figure, the Revision Network is designed as a simple yet effective encoder-decoder architecture, with only one down-sampling and one up-sampling layer. Further, a [patch discriminator](https://paperswithcode.com/method/patchgan) is used to help the Revision Network capture fine patch textures in an adversarial learning setting. The patch discriminator $D$ is defined following SinGAN, where $D$ has 5 convolution layers and 32 hidden channels. A relatively shallow $D$ is chosen to (1) avoid overfitting, since we only have one style image, and (2) control the receptive field to ensure $D$ can only capture local patterns.
Given the following machine learning model name: Local Interpretable Model-Agnostic Explanations, provide a description of the model
**LIME**, or **Local Interpretable Model-Agnostic Explanations**, is an algorithm that can explain the predictions of any classifier or regressor in a faithful way, by approximating it locally with an interpretable model. It modifies a single data sample by tweaking the feature values and observes the resulting impact on the output. It performs the role of an "explainer" to explain predictions from each data sample. The output of LIME is a set of explanations representing the contribution of each feature to a prediction for a single sample, which is a form of local interpretability. Interpretable models in LIME can be, for instance, [linear regression](https://paperswithcode.com/method/linear-regression) or decision trees, which are trained on small perturbations (e.g. adding noise, removing words, hiding parts of the image) of the original input to provide a good local approximation.
Given the following machine learning model name: LayerScale, provide a description of the model
**LayerScale** is a method used for [vision transformer](https://paperswithcode.com/methods/category/vision-transformer) architectures to help improve training dynamics. It adds a learnable diagonal matrix on the output of each residual block, initialized close to (but not at) 0. Adding this simple layer after each residual block improves the training dynamic, allowing for the training of deeper high-capacity image transformers that benefit from depth. Specifically, LayerScale is a per-channel multiplication of the vector produced by each residual block, as opposed to a single scalar, see Figure (d). The objective is to group the updates of the weights associated with the same output channel. Formally, LayerScale is a multiplication by a diagonal matrix on the output of each residual block. In other words: $$ x\_{l}^{\prime} =x\_{l}+\operatorname{diag}\left(\lambda\_{l, 1}, \ldots, \lambda\_{l, d}\right) \times \operatorname{SA}\left(\eta\left(x\_{l}\right)\right) $$ $$ x\_{l+1} =x\_{l}^{\prime}+\operatorname{diag}\left(\lambda\_{l, 1}^{\prime}, \ldots, \lambda\_{l, d}^{\prime}\right) \times \operatorname{FFN}\left(\eta\left(x\_{l}^{\prime}\right)\right) $$ where the parameters $\lambda\_{l, i}$ and $\lambda\_{l, i}^{\prime}$ are learnable weights. The diagonal values are all initialized to a fixed small value $\varepsilon$: we set it to $\varepsilon=0.1$ until depth 18, $\varepsilon=10^{-5}$ for depth 24 and $\varepsilon=10^{-6}$ for deeper networks. This formula is akin to other [normalization](https://paperswithcode.com/methods/category/normalization) strategies such as [ActNorm](https://paperswithcode.com/method/activation-normalization) or [LayerNorm](https://paperswithcode.com/method/layer-normalization), but executed on the output of the residual block.
Yet LayerScale seeks a different effect: [ActNorm](https://paperswithcode.com/method/activation-normalization) is a data-dependent initialization that calibrates activations so that they have zero-mean and unit variance, like [BatchNorm](https://paperswithcode.com/method/batch-normalization). In contrast, in LayerScale, we initialize the diagonal with small values so that the initial contribution of the residual branches to the function implemented by the transformer is small. In that respect the motivation is therefore closer to that of [ReZero](https://paperswithcode.com/method/rezero), [SkipInit](https://paperswithcode.com/method/skipinit), [Fixup](https://paperswithcode.com/method/fixup-initialization) and [T-Fixup](https://paperswithcode.com/method/t-fixup): to train closer to the identity function and let the network integrate the additional parameters progressively during the training. LayerScale offers more diversity in the optimization than just adjusting the whole layer by a single learnable scalar as in [ReZero](https://paperswithcode.com/method/rezero)/[SkipInit](https://paperswithcode.com/method/skipinit), [Fixup](https://paperswithcode.com/method/fixup-initialization) and [T-Fixup](https://paperswithcode.com/method/t-fixup).
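The two LayerScale equations can be sketched directly, using identity stand-ins for $\eta$, SA and FFN (illustrative only, not real attention or feed-forward layers):

```python
def layer_scale_block(x, lam, lam_prime, sa, ffn, norm):
    """One residual block with LayerScale: each residual branch is
    scaled per-channel by a learnable vector, initialized to a small
    epsilon so the block starts close to the identity."""
    d = len(x)
    h = norm(x)
    # x' = x + diag(lam) * SA(eta(x))
    x = [x[i] + lam[i] * sa(h)[i] for i in range(d)]
    h = norm(x)
    # x_{l+1} = x' + diag(lam') * FFN(eta(x'))
    x = [x[i] + lam_prime[i] * ffn(h)[i] for i in range(d)]
    return x

eps = 0.1                       # initialization used up to depth 18
d = 4
lam = [eps] * d
lam_prime = [eps] * d
identity = lambda v: list(v)    # stand-ins for eta, SA and FFN
x = layer_scale_block([1.0, 2.0, 3.0, 4.0], lam, lam_prime,
                      sa=identity, ffn=identity, norm=identity)
```

With identity branches and $\varepsilon = 0.1$, each sub-block multiplies its input by 1.1, showing how small initial $\lambda$ keeps the block near the identity at the start of training.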
Given the following machine learning model name: Adaptive Robust Loss, provide a description of the model
The Robust Loss is a generalization of the Cauchy/Lorentzian, Geman-McClure, Welsch/Leclerc, generalized Charbonnier, Charbonnier/pseudo-Huber/L1-L2, and L2 loss functions. By introducing robustness as a continuous parameter, the loss function allows algorithms built around robust loss minimization to be generalized, which improves performance on basic vision tasks such as registration and clustering. Interpreting the loss as the negative log of a univariate density yields a general probability distribution that includes normal and Cauchy distributions as special cases. This probabilistic interpretation enables the training of neural networks in which the robustness of the loss automatically adapts itself during training, which improves performance on learning-based tasks such as generative image synthesis and unsupervised monocular depth estimation, without requiring any manual parameter tuning.
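The general loss itself is not written out above; a sketch assuming the form from Barron (2019), valid for $\alpha \notin \{0, 2\}$ (those special cases are handled as limits in the paper):

```python
import math

def robust_loss(x, alpha, c):
    """General robust loss rho(x, alpha, c): alpha controls the
    robustness (shape) and c the scale.  Valid for alpha not in {0, 2};
    alpha -> 2 approaches L2, alpha = 1 gives a Charbonnier-style loss,
    alpha -> 0 approaches Cauchy/Lorentzian."""
    z = (x / c) ** 2
    a = abs(alpha - 2)
    return (a / alpha) * ((z / a + 1) ** (alpha / 2) - 1)

# alpha = 1 recovers sqrt((x/c)^2 + 1) - 1 (Charbonnier / pseudo-Huber).
val = robust_loss(1.0, alpha=1.0, c=1.0)
```

Treating `alpha` as a learnable parameter, as the paper does, lets the robustness of the loss adapt during training.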
Given the following machine learning model name: Polynomial Rate Decay, provide a description of the model
**Polynomial Rate Decay** is a learning rate schedule in which the learning rate is decayed from its initial value to a final (often zero) value following a polynomial of a chosen power over a fixed number of steps; a power of 1 gives linear decay.
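A sketch of one common parameterization of this schedule (the exact form varies between frameworks; this one is an assumption):

```python
def polynomial_decay(step, total_steps, lr_init, lr_end=0.0, power=1.0):
    """lr(t) = (lr_init - lr_end) * (1 - t/T)^power + lr_end,
    with the step clamped to the decay horizon T."""
    t = min(step, total_steps)
    return (lr_init - lr_end) * (1 - t / total_steps) ** power + lr_end

# Quadratic decay from 0.1 to 0 over 10 steps.
lrs = [polynomial_decay(s, total_steps=10, lr_init=0.1, power=2.0)
       for s in (0, 5, 10)]
```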
Given the following machine learning model name: Unsupervised Deep Manifold Attributed Graph Embedding, provide a description of the model
Unsupervised attributed graph representation learning is challenging since both structural and feature information are required to be represented in the latent space. Existing methods concentrate on learning latent representations via reconstruction tasks, but cannot directly optimize the representation and are prone to oversmoothing, thus limiting the applications on downstream tasks. To alleviate these issues, we propose a novel graph embedding framework named Deep Manifold Attributed Graph Embedding (DMAGE). A node-to-node geodesic similarity is proposed to compute the inter-node similarity between the data space and the latent space, and then Bregman divergence is used as the loss function to minimize the difference between them. We then design a new network structure with fewer aggregation operations to alleviate the oversmoothing problem and incorporate graph structure augmentation to improve the representation's stability. Our proposed DMAGE surpasses state-of-the-art methods by a significant margin on three downstream tasks: unsupervised visualization, node clustering, and link prediction across four popular datasets.
Given the following machine learning model name: Dropout, provide a description of the model
**Dropout** is a regularization technique for neural networks that randomly drops units (along with their connections) at training time, retaining each unit with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$). The idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.
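A minimal sketch of the scheme described above; for simplicity the mask and the test-time scaling are applied to activations rather than weights, which is equivalent for the expected output:

```python
import random

def dropout_train(x, p, rng):
    # Retain each unit with probability p, zeroing it otherwise.
    return [v if rng.random() < p else 0.0 for v in x]

def dropout_test(x, p):
    # At test time all units are present, scaled by p so the expected
    # pre-activation matches what the next layer saw during training.
    return [p * v for v in x]

rng = random.Random(0)
train_out = dropout_train([1.0, 2.0, 3.0, 4.0], p=0.5, rng=rng)
test_out = dropout_test([1.0, 2.0, 3.0, 4.0], p=0.5)
```

Many modern frameworks instead use "inverted" dropout, which divides by $p$ at training time so that no scaling is needed at test time.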
Given the following machine learning model name: ASGD Weight-Dropped LSTM, provide a description of the model
**ASGD Weight-Dropped LSTM**, or **AWD-LSTM**, is a type of recurrent neural network that employs [DropConnect](https://paperswithcode.com/method/dropconnect) for regularization, as well as [NT-ASGD](https://paperswithcode.com/method/nt-asgd) for optimization - non-monotonically triggered averaged [SGD](https://paperswithcode.com/method/sgd) - which returns an average of the weights from the most recent iterations. Additional regularization techniques employed include variable length backpropagation sequences, [variational dropout](https://paperswithcode.com/method/variational-dropout), [embedding dropout](https://paperswithcode.com/method/embedding-dropout), [weight tying](https://paperswithcode.com/method/weight-tying), independent embedding/hidden size, [activation regularization](https://paperswithcode.com/method/activation-regularization) and [temporal activation regularization](https://paperswithcode.com/method/temporal-activation-regularization).
Given the following machine learning model name: Gumbel Cross Entropy, provide a description of the model
The **Gumbel activation function** is defined using the cumulative Gumbel distribution and can be used to perform Gumbel regression. Gumbel activation is an alternative to the sigmoid or softmax activation functions and can be used to transform the unnormalized output of a model into a probability. Gumbel activation $\eta_{Gumbel}$ is defined as follows: $\eta_{Gumbel}(q_i) = \exp(-\exp(-q_i))$ It can be combined with the cross-entropy loss function to solve long-tailed classification problems. **Gumbel Cross Entropy (GCE)** is defined as follows: $GCE(\eta_{Gumbel}(q_i),y_i) = -y_i \log(\eta_{Gumbel}(q_i)) - (1-y_i) \log(1-\eta_{Gumbel}(q_i))$
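A direct sketch of the two formulas; the small epsilon for numerical safety is our addition:

```python
import math

def gumbel_activation(q):
    # CDF of the standard Gumbel distribution: exp(-exp(-q)).
    return math.exp(-math.exp(-q))

def gumbel_cross_entropy(q, y, eps=1e-12):
    # Binary cross-entropy with the Gumbel CDF as the link function.
    p = gumbel_activation(q)
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

p0 = gumbel_activation(0.0)  # exp(-1): the curve is asymmetric,
                             # unlike the sigmoid's 0.5 at the origin
```

The asymmetry around the origin is what makes the Gumbel link attractive for long-tailed (imbalanced) classification.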
Given the following machine learning model name: Deformable Convolutional Networks, provide a description of the model
Deformable ConvNets do not learn an affine transformation. They divide convolution into two steps, firstly sampling features on a regular grid $ \mathcal{R} $ from the input feature map, then aggregating sampled features by weighted summation using a convolution kernel. The process can be written as: \begin{align} Y(p_{0}) &= \sum_{p_i \in \mathcal{R}} w(p_{i}) X(p_{0} + p_{i}) \end{align} \begin{align} \mathcal{R} &= \{(-1,-1), (-1, 0), \dots, (1, 1)\} \end{align} The deformable convolution augments the sampling process by introducing a group of learnable offsets $\Delta p_{i}$ which can be generated by a lightweight CNN. Using the offsets $\Delta p_{i}$, the deformable convolution can be formulated as: \begin{align} Y(p_{0}) &= \sum_{p_i \in \mathcal{R}} w(p_{i}) X(p_{0} + p_{i} + \Delta p_{i}). \end{align} Through the above method, adaptive sampling is achieved. However, $\Delta p_{i}$ is a floating point value unsuited to grid sampling. To address this problem, bilinear interpolation is used. Deformable RoI pooling is also used, which greatly improves object detection. Deformable ConvNets adaptively select the important regions and enlarge the valid receptive field of convolutional neural networks; this is important in object detection and semantic segmentation tasks.
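The bilinear interpolation step used for the floating-point offsets can be sketched as follows (pure Python; zero padding outside the feature map is an assumption for the sketch):

```python
def bilinear_sample(X, y, x):
    """Sample feature map X (list of rows) at fractional position (y, x)
    by bilinear interpolation, as deformable convolution does for its
    learned floating-point offsets.  Out-of-range reads return 0."""
    h, w = len(X), len(X[0])
    def at(i, j):
        return X[i][j] if 0 <= i < h and 0 <= j < w else 0.0
    y0, x0 = int(y // 1), int(x // 1)   # top-left integer neighbour
    dy, dx = y - y0, x - x0             # fractional parts
    return ((1 - dy) * (1 - dx) * at(y0, x0) +
            (1 - dy) * dx * at(y0, x0 + 1) +
            dy * (1 - dx) * at(y0 + 1, x0) +
            dy * dx * at(y0 + 1, x0 + 1))

X = [[0.0, 1.0],
     [2.0, 3.0]]
v = bilinear_sample(X, 0.5, 0.5)  # average of the four neighbours
```

With this primitive, the deformable convolution sums `w(p_i) * bilinear_sample(X, p0 + p_i + dp_i)` over the kernel grid.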
Given the following machine learning model name: UL2, provide a description of the model
**UL2** is a unified framework for pretraining models that are universally effective across datasets and setups. UL2 uses Mixture-of-Denoisers (MoD), a pre-training objective that combines diverse pre-training paradigms together. UL2 introduces a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training schemes.
Given the following machine learning model name: Hard Swish, provide a description of the model
**Hard Swish** is a type of activation function based on [Swish](https://paperswithcode.com/method/swish), but replaces the computationally expensive sigmoid with a piecewise linear analogue: $$\text{h-swish}\left(x\right) = x\frac{\text{ReLU6}\left(x+3\right)}{6} $$
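A direct sketch of the formula:

```python
def relu6(x):
    return min(max(0.0, x), 6.0)

def hard_swish(x):
    # x * ReLU6(x + 3) / 6 -- a piecewise-linear stand-in for
    # the x * sigmoid(x) used in Swish.
    return x * relu6(x + 3.0) / 6.0

vals = [hard_swish(x) for x in (-4.0, -3.0, 0.0, 3.0)]
# vals == [0.0, 0.0, 0.0, 3.0]: zero below -3, identity above +3
```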
Given the following machine learning model name: Short-Term Dense Concatenate, provide a description of the model
**STDC**, or **Short-Term Dense Concatenate**, is a module for semantic segmentation that extracts deep features with a scalable receptive field and multi-scale information. It aims to remove structural redundancy in the BiSeNet architecture; specifically, BiSeNet adds an extra path to encode spatial information, which can be time-consuming. Instead, STDC gradually reduces the dimension of the feature maps and uses their aggregation for image representation. We concatenate response maps from multiple continuous layers, each of which encodes the input image/feature at different scales and receptive fields, leading to a multi-scale feature representation. To speed up, the filter size of the layers is gradually reduced with negligible loss in segmentation performance.
Given the following machine learning model name: SAGA, provide a description of the model
SAGA is a method in the spirit of SAG, SDCA, MISO and SVRG, a set of recently proposed incremental gradient algorithms with fast linear convergence rates. SAGA improves on the theory behind SAG and SVRG, with better theoretical convergence rates, and has support for composite objectives where a proximal operator is used on the regulariser. Unlike SDCA, SAGA supports non-strongly convex problems directly, and is adaptive to any inherent strong convexity of the problem.
Given the following machine learning model name: Focus, provide a description of the model
**Focus** is a downsampling module used at the start of YOLOv5. It slices the input image into four patches by taking every second pixel, concatenates them along the channel dimension (halving the spatial resolution while quadrupling the channels), and applies a convolution, so that spatial information is folded into channels without loss before the backbone.
Given the following machine learning model name: Batch Nuclear-norm Maximization, provide a description of the model
**Batch Nuclear-norm Maximization** is an approach for aiding classification in label insufficient situations. It involves maximizing the nuclear-norm of the batch output matrix. The nuclear-norm of a matrix is an upper bound of the Frobenius-norm of the matrix. Maximizing nuclear-norm ensures large Frobenius-norm of the batch matrix, which leads to increased discriminability. The nuclear-norm of the batch matrix is also a convex approximation of the matrix rank, which refers to the prediction diversity.
Given the following machine learning model name: Low-Rank Factorization-based Multi-Head Attention, provide a description of the model
**Low-Rank Factorization-based Multi-head Attention Mechanism**, or **LAMA**, is a type of attention module that uses low-rank factorization to reduce computational complexity. It uses low-rank bilinear pooling to construct a structured sentence representation that attends to multiple aspects of a sentence.
Given the following machine learning model name: TaxoExpan, provide a description of the model
**TaxoExpan** is a self-supervised taxonomy expansion framework. It automatically generates a set of <query concept, anchor concept> pairs from the existing taxonomy as training data. Using such self-supervision data, TaxoExpan learns a model to predict whether a query concept is the direct hyponym of an anchor concept. TaxoExpan features: (1) a position-enhanced graph neural network that encodes the local structure of an anchor concept in the existing taxonomy, and (2) a noise-robust training objective that enables the learned model to be insensitive to the label noise in the self-supervision data.
Given the following machine learning model name: Attention-augmented Convolution, provide a description of the model
**Attention-augmented Convolution** is a type of [convolution](https://paperswithcode.com/method/convolution) with a two-dimensional relative self-attention mechanism that can replace convolutions as a stand-alone computational primitive for image classification. It employs [scaled-dot product attention](https://paperswithcode.com/method/scaled) and [multi-head attention](https://paperswithcode.com/method/multi-head-attention) as with [Transformers](https://paperswithcode.com/method/transformer). It works by concatenating the convolutional and attentional feature maps. To see this, consider an original convolution operator with kernel size $k$, $F\_{in}$ input filters and $F\_{out}$ output filters. The corresponding attention augmented convolution can be written as: $$\text{AAConv}\left(X\right) = \text{Concat}\left[\text{Conv}(X), \text{MHA}(X)\right] $$ $X$ originates from an input tensor of shape $\left(H, W, F\_{in}\right)$. This is flattened to become $X \in \mathbb{R}^{HW \times F\_{in}}$ which is passed into a multi-head attention module, as well as a convolution (see above). Similarly to the convolution, the attention augmented convolution 1) is equivariant to translation and 2) can readily operate on inputs of different spatial dimensions.
Given the following machine learning model name: OFA, provide a description of the model
In this work, we pursue a unified paradigm for multimodal pretraining to break the scaffolds of complex task/modality-specific customization. We propose OFA, a Task-Agnostic and Modality-Agnostic framework that supports Task Comprehensiveness. OFA unifies a diverse set of cross-modal and unimodal tasks, including image generation, visual grounding, image captioning, image classification, language modeling, etc., in a simple sequence-to-sequence learning framework. OFA follows the instruction-based learning in both pretraining and finetuning stages, requiring no extra task-specific layers for downstream tasks. In comparison with the recent state-of-the-art vision & language models that rely on extremely large cross-modal datasets, OFA is pretrained on only 20M publicly available image-text pairs. Despite its simplicity and relatively small-scale training data, OFA achieves new SOTAs in a series of cross-modal tasks while attaining highly competitive performances on uni-modal tasks. Our further analysis indicates that OFA can also effectively transfer to unseen tasks and unseen domains. Our code and models are publicly available at https://github.com/OFA-Sys/OFA.
Given the following machine learning model name: Fire Module, provide a description of the model
A **Fire Module** is a building block for convolutional neural networks, notably used as part of [SqueezeNet](https://paperswithcode.com/method/squeezenet). A Fire module is comprised of: a squeeze [convolution](https://paperswithcode.com/method/convolution) layer (which has only 1x1 filters), feeding into an expand layer that has a mix of 1x1 and 3x3 convolution filters. We expose three tunable dimensions (hyperparameters) in a Fire module: $s\_{1x1}$, $e\_{1x1}$, and $e\_{3x3}$. In a Fire module, $s\_{1x1}$ is the number of filters in the squeeze layer (all 1x1), $e\_{1x1}$ is the number of 1x1 filters in the expand layer, and $e\_{3x3}$ is the number of 3x3 filters in the expand layer. When we use Fire modules we set $s\_{1x1}$ to be less than ($e\_{1x1}$ + $e\_{3x3}$), so the squeeze layer helps to limit the number of input channels to the 3x3 filters.
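As a concrete illustration of how these hyperparameters control model size, here is a small sketch that counts weights in a Fire module (ignoring biases; the configuration $s\_{1x1}=16$, $e\_{1x1}=64$, $e\_{3x3}=64$ on 96 input channels follows the fire2 module of the SqueezeNet paper):

```python
def fire_module_params(in_channels, s1x1, e1x1, e3x3):
    """Count weights (ignoring biases) in a Fire module.

    squeeze: 1x1 convs mapping in_channels -> s1x1
    expand:  1x1 convs (s1x1 -> e1x1) and 3x3 convs (s1x1 -> e3x3),
             whose outputs are concatenated to e1x1 + e3x3 channels.
    """
    squeeze = in_channels * s1x1 * 1 * 1
    expand_1x1 = s1x1 * e1x1 * 1 * 1
    expand_3x3 = s1x1 * e3x3 * 3 * 3
    return squeeze + expand_1x1 + expand_3x3

# fire2-style configuration: the squeeze layer (16 < 64 + 64) keeps the
# number of input channels to the 3x3 filters small
print(fire_module_params(96, 16, 64, 64))
```

Because the squeeze layer shrinks the channel count before the 3x3 filters, the 3x3 term dominates but stays far cheaper than a plain 3x3 convolution on all 96 input channels would be.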
Given the following machine learning model name: RoIPool, provide a description of the model
**Region of Interest Pooling**, or **RoIPool**, is an operation for extracting a small feature map (e.g., $7×7$) from each RoI in detection and segmentation based tasks. Features are extracted from each candidate box, and thereafter in models like [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn), are then classified and bounding box regression performed. The actual scaling to, e.g., $7×7$, occurs by dividing the region proposal into equally sized sections, finding the largest value in each section, and then copying these max values to the output buffer. In essence, **RoIPool** is [max pooling](https://paperswithcode.com/method/max-pooling) on a discrete grid based on a box. Image Source: [Joyce Xu](https://towardsdatascience.com/deep-learning-for-object-detection-a-comprehensive-review-73930816d8d9)
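The divide-and-max procedure can be sketched in a few lines of numpy (single channel and an integer-aligned RoI for simplicity; a simplified illustration rather than the Fast R-CNN implementation):

```python
import numpy as np

def roi_pool(feature_map, roi, output_size):
    """Naive RoIPool: split the RoI into an output_size x output_size grid
    of roughly equal sections and take the max of each section.

    feature_map: 2-D array (single channel)
    roi: (x1, y1, x2, y2) integer box on the feature map
    """
    x1, y1, x2, y2 = roi
    region = feature_map[y1:y2, x1:x2]
    h, w = region.shape
    # integer section boundaries, as in the original (quantised) RoIPool
    ys = np.linspace(0, h, output_size + 1).astype(int)
    xs = np.linspace(0, w, output_size + 1).astype(int)
    out = np.zeros((output_size, output_size))
    for i in range(output_size):
        for j in range(output_size):
            out[i, j] = region[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].max()
    return out

fm = np.arange(64, dtype=float).reshape(8, 8)
pooled = roi_pool(fm, (0, 0, 8, 8), 2)   # 8x8 RoI -> 2x2 max-pooled output
```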
Given the following machine learning model name: TD Lambda, provide a description of the model
**TD($\lambda$)** is a generalisation of **TD($n$)** reinforcement learning algorithms, employing an [eligibility trace](https://paperswithcode.com/method/eligibility-trace) and $\lambda$-weighted returns. The eligibility trace vector is initialized to zero at the beginning of the episode, and it is incremented on each time step by the value gradient, and then fades away by $\gamma\lambda$: $$ \textbf{z}\_{-1} = \mathbf{0} $$ $$ \textbf{z}\_{t} = \gamma\lambda\textbf{z}\_{t-1} + \nabla\hat{v}\left(S\_{t}, \mathbf{w}\_{t}\right), 0 \leq t \leq T$$ The eligibility trace keeps track of which components of the weight vector contribute to recent state valuations. Here $\nabla\hat{v}\left(S\_{t}, \mathbf{w}\_{t}\right)$ is the gradient of the value estimate, which for linear function approximation is simply the feature vector. The TD error for state-value prediction is: $$ \delta\_{t} = R\_{t+1} + \gamma\hat{v}\left(S\_{t+1}, \mathbf{w}\_{t}\right) - \hat{v}\left(S\_{t}, \mathbf{w}\_{t}\right) $$ In **TD($\lambda$)**, the weight vector is updated on each step proportional to the scalar TD error and the vector eligibility trace: $$ \mathbf{w}\_{t+1} = \mathbf{w}\_{t} + \alpha\delta\_{t}\mathbf{z}\_{t} $$ Source: Sutton and Barto, Reinforcement Learning, 2nd Edition
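A minimal sketch of this update loop for one episode, assuming a linear value function so the gradient is just the feature vector (the function name and episode format are illustrative, not from the source):

```python
def td_lambda_episode(episode, w, alpha, gamma, lam):
    """One episode of semi-gradient TD(lambda) with a linear value
    function v(s) = w . x(s), so grad v(s) = x(s), the feature vector.

    episode: list of (x_t, r_{t+1}, x_{t+1}) triples; x_{t+1} is None
             at the terminal state, whose value is defined to be 0.
    """
    z = [0.0] * len(w)                        # eligibility trace z_{-1} = 0
    for x, r, x_next in episode:
        v = sum(wi * xi for wi, xi in zip(w, x))
        v_next = 0.0 if x_next is None else sum(
            wi * xi for wi, xi in zip(w, x_next))
        delta = r + gamma * v_next - v        # TD error delta_t
        # decay the trace, then add the current gradient (= features)
        z = [gamma * lam * zi + xi for zi, xi in zip(z, x)]
        w = [wi + alpha * delta * zi for wi, zi in zip(w, z)]
    return w

# single transition to a terminal state with reward 1
w = td_lambda_episode([([1.0, 0.0], 1.0, None)], [0.0, 0.0],
                      alpha=0.5, gamma=0.9, lam=0.8)
```

With $\lambda = 0$ the trace reduces to the current feature vector and the update becomes ordinary one-step semi-gradient TD.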
Given the following machine learning model name: Boost-GNN, provide a description of the model
**Boost-GNN** is an architecture that trains GBDT and GNN jointly to get the best of both worlds: the GBDT model deals with heterogeneous features, while GNN accounts for the graph structure. The model benefits from end-to-end optimization by allowing new trees to fit the gradient updates of GNN.
Given the following machine learning model name: Matrix Non-Maximum Suppression, provide a description of the model
**Matrix NMS**, or **Matrix Non-Maximum Suppression**, performs [non-maximum suppression](https://paperswithcode.com/method/non-maximum-suppression) with parallel matrix operations in one shot. It is motivated by [Soft-NMS](https://paperswithcode.com/method/soft-nms). Soft-NMS decays the other detection scores as a monotonic decreasing function $f(\text{iou})$ of their overlaps. By decaying the scores according to IoUs recursively, higher IoU detections are eliminated with a minimum score threshold. However, such a process is sequential, like traditional greedy NMS, and cannot be implemented in parallel. Matrix NMS views this process from another perspective by considering how a predicted mask $m\_{j}$ is suppressed. For $m\_{j}$, its decay factor is affected by: (a) the penalty of each prediction $m\_{i}$ on $m\_{j}$ $\left(s\_{i}>s\_{j}\right)$, where $s\_{i}$ and $s\_{j}$ are the confidence scores; and (b) the probability of $m\_{i}$ being suppressed. For (a), the penalty of each prediction $m\_{i}$ on $m\_{j}$ is easily computed by $f\left(\text{iou}\_{i, j}\right)$. For (b), the probability of $m\_{i}$ being suppressed is harder to compute. However, the probability usually has positive correlation with the IoUs, so it is directly approximated by the most overlapped higher-scored prediction on $m\_{i}$: $$ f\left(\text{iou}\_{\cdot, i}\right)=\min\_{\forall s\_{k}>s\_{i}} f\left(\text{iou}\_{k, i}\right) $$ To this end, the final decay factor becomes $$ \operatorname{decay}\_{j}=\min\_{\forall s\_{i}>s\_{j}} \frac{f\left(\text{iou}\_{i, j}\right)}{f\left(\text{iou}\_{\cdot, i}\right)} $$ and the updated score is computed by $s\_{j}=s\_{j} \cdot \operatorname{decay}\_{j}$. The authors consider the two simplest decreasing functions: linear, $f\left(\text{iou}\_{i, j}\right)=1-\text{iou}\_{i, j}$, and Gaussian, $f\left(\text{iou}\_{i, j}\right)=\exp \left(-\frac{\text{iou}\_{i, j}^{2}}{\sigma}\right)$.
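The decay computation above can be sketched as one pass of matrix operations (linear kernel only; assumes detections are pre-sorted by descending score, so "higher-scored than $j$" simply means a smaller index):

```python
import numpy as np

def matrix_nms_decay(ious, scores):
    """Matrix NMS decay factors with the linear kernel f(iou) = 1 - iou.
    ious: symmetric N x N IoU matrix; scores are assumed sorted
    descending, so row i outscores column j exactly when i < j."""
    n = len(scores)
    f = 1.0 - ious                                 # f(iou) = 1 - iou
    # keep penalties only where i outscores j (strict upper triangle);
    # every other entry is 1, i.e. no penalty
    f_upper = np.triu(f, k=1) + np.tril(np.ones((n, n)))
    # f(iou_{.,i}): suppression of i by its most-overlapped
    # higher-scored prediction (min over rows of each column)
    compensate = f_upper.min(axis=0)
    # decay_j = min_i f(iou_{i,j}) / f(iou_{.,i})
    decay = (f_upper / compensate[:, None]).min(axis=0)
    return np.minimum(decay, 1.0)

ious = np.array([[1.0, 0.5, 0.0],
                 [0.5, 1.0, 0.5],
                 [0.0, 0.5, 1.0]])
decay = matrix_nms_decay(ious, scores=np.array([0.9, 0.8, 0.7]))
```

In this toy example the third mask overlaps only the second, which is itself half-suppressed, so its own decay is fully compensated back to 1.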
Given the following machine learning model name: Voxel Transformer, provide a description of the model
**VoTr** is a [Transformer](https://paperswithcode.com/method/transformer)-based 3D backbone for 3D object detection from point clouds. It contains a series of sparse and submanifold voxel modules. Submanifold voxel modules perform multi-head self-attention strictly on the non-empty voxels, while sparse voxel modules can extract voxel features at empty locations. Long-range relationships between voxels are captured via self-attention. Given the fact that non-empty voxels are naturally sparse but numerous, directly applying standard Transformer on voxels is non-trivial. To this end, VoTr uses a sparse voxel module and a submanifold voxel module, which can operate on the empty and non-empty voxel positions effectively. To further enlarge the attention range while maintaining comparable computational overhead to the convolutional counterparts, two attention mechanisms are used for [multi-head attention](https://paperswithcode.com/method/multi-head-attention) in those two modules: Local Attention and Dilated Attention. Furthermore [Fast Voxel Query](https://paperswithcode.com/method/fast-voxel-query) is used to accelerate the querying process in multi-head attention.
Given the following machine learning model name: Gumbel Softmax, provide a description of the model
**Gumbel-Softmax** is a continuous distribution that has the property that it can be smoothly annealed into a categorical distribution, and whose parameter gradients can be easily computed via the reparameterization trick.
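A minimal numpy sketch of drawing a Gumbel-Softmax sample via the reparameterization trick: add Gumbel(0, 1) noise to the logits and apply a temperature-$\tau$ softmax (as $\tau \to 0$ the sample approaches a one-hot categorical draw):

```python
import numpy as np

def gumbel_softmax(logits, tau, rng):
    """One Gumbel-Softmax sample: perturb logits with Gumbel(0, 1)
    noise (via -log(-log(U)) with U ~ Uniform(0, 1)), then push
    through a temperature-controlled softmax."""
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel) / tau
    y = y - y.max()                       # subtract max for stability
    e = np.exp(y)
    return e / e.sum()

rng = np.random.default_rng(0)
sample = gumbel_softmax(np.array([1.0, 2.0, 0.5]), tau=0.5, rng=rng)
```

The sample is a point on the probability simplex, so gradients with respect to the logits are well-defined, unlike a hard categorical draw.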
Given the following machine learning model name: Harm-Net, provide a description of the model
A **Harmonic Network**, or **Harm-Net**, is a type of convolutional neural network that replaces convolutional layers with "harmonic blocks" that use [Discrete Cosine Transform](https://paperswithcode.com/method/discrete-cosine-transform) (DCT) filters. These blocks can be useful in truncating high-frequency information (possible due to the redundancies in the spectral domain).
Given the following machine learning model name: Location Sensitive Attention, provide a description of the model
**Location Sensitive Attention** is an attention mechanism that extends the [additive attention mechanism](https://paperswithcode.com/method/additive-attention) to use cumulative attention weights from previous decoder time steps as an additional feature. This encourages the model to move forward consistently through the input, mitigating potential failure modes where some subsequences are repeated or ignored by the decoder. Starting with additive attention where $h$ is a sequential representation from a BiRNN encoder and ${s}\_{i-1}$ is the $(i − 1)$-th state of a recurrent neural network (e.g. a [LSTM](https://paperswithcode.com/method/lstm) or [GRU](https://paperswithcode.com/method/gru)): $$ e\_{i, j} = w^{T}\tanh\left(W{s}\_{i-1} + Vh\_{j} + b\right) $$ where $w$ and $b$ are vectors, $W$ and $V$ are matrices. We extend this to be location-aware by making it take into account the alignment produced at the previous step. First, we extract $k$ vectors $f\_{i,j} \in \mathbb{R}^{k}$ for every position $j$ of the previous alignment $\alpha\_{i−1}$ by convolving it with a matrix $F \in R^{k\times{r}}$: $$ f\_{i} = F ∗ \alpha\_{i−1} $$ These additional vectors $f\_{i,j}$ are then used by the scoring mechanism $e\_{i,j}$: $$ e\_{i,j} = w^{T}\tanh\left(Ws\_{i−1} + Vh\_{j} + Uf\_{i,j} + b\right) $$
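A shape-level numpy sketch of the location-aware scoring above (the weight values and the zero-padding of the alignment convolution are illustrative simplifications):

```python
import numpy as np

def location_sensitive_scores(s_prev, h, prev_align, W, V, U, F, w_vec, b):
    """Scores e_{i,j} = w^T tanh(W s_{i-1} + V h_j + U f_{i,j} + b),
    where f_i = F * alpha_{i-1} is a 1-D convolution of the previous
    alignment.  Shapes: h is (T, dh), prev_align is (T,), F is (k, r),
    so each location feature f_{i,j} lies in R^k."""
    T, r = len(prev_align), F.shape[1]
    pad = r // 2
    padded = np.concatenate([np.zeros(pad), prev_align, np.zeros(pad)])
    # f[j]: F applied to a length-r window of the previous alignment
    f = np.stack([F @ padded[j:j + r] for j in range(T)])
    e = np.array([
        w_vec @ np.tanh(W @ s_prev + V @ h[j] + U @ f[j] + b)
        for j in range(T)
    ])
    return e

# tiny example: all projections zero except the bias, so every score
# collapses to w^T tanh(b)
e = location_sensitive_scores(
    s_prev=np.zeros(2), h=np.zeros((3, 2)),
    prev_align=np.array([1.0, 0.0, 0.0]),
    W=np.zeros((2, 2)), V=np.zeros((2, 2)), U=np.zeros((2, 1)),
    F=np.ones((1, 1)), w_vec=np.ones(2), b=np.array([1.0, 0.0]))
```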
Given the following machine learning model name: Strain Elevation Tension Spring embedding, provide a description of the model
SETSe is a deterministic physics-based graph embedding algorithm that embeds weighted, feature-rich networks. It treats each edge as a spring and each node as a bead whose movement is constrained by the graph adjacency matrix, so that the nodes move in parallel planes enforcing a minimum distance between neighboring nodes. The node features act as forces moving the nodes up and down. The network converges to the embedded state when the force produced by each node is equal and opposite to the sum of the forces exerted by its edges, creating a net force of 0. SETSe has no conventional loss function and does not attempt to place similar nodes close to each other.
Given the following machine learning model name: DELG, provide a description of the model
**DELG** is a convolutional neural network for image retrieval that combines generalized mean pooling for global features and attentive selection for local features. The entire network can be learned end-to-end by carefully balancing the gradient flow between two heads – requiring only image-level labels. This allows for efficient inference by extracting an image’s global feature, detected keypoints and local descriptors within a single model. The model is enabled by leveraging hierarchical image representations that arise in [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks), which are coupled to [generalized mean pooling](https://paperswithcode.com/method/generalized-mean-pooling) and attentive local feature detection. Secondly, a convolutional autoencoder module is adopted that can successfully learn low-dimensional local descriptors. This can be readily integrated into the unified model, and avoids the need of post-processing learning steps, such as [PCA](https://paperswithcode.com/method/pca), that are commonly used. Finally, a procedure is used that enables end-to-end training of the proposed model using only image-level supervision. This requires carefully controlling the gradient flow between the global and local network heads during backpropagation, to avoid disrupting the desired representations.
Given the following machine learning model name: Edge-augmented Graph Transformer, provide a description of the model
Transformer neural networks have achieved state-of-the-art results for unstructured data such as text and images but their adoption for graph-structured data has been limited. This is partly due to the difficulty of incorporating complex structural information in the basic transformer framework. We propose a simple yet powerful extension to the transformer - residual edge channels. The resultant framework, which we call Edge-augmented Graph Transformer (EGT), can directly accept, process and output structural information as well as node information. It allows us to use global self-attention, the key element of transformers, directly for graphs and comes with the benefit of long-range interaction among nodes. Moreover, the edge channels allow the structural information to evolve from layer to layer, and prediction tasks on edges/links can be performed directly from the output embeddings of these channels. In addition, we introduce a generalized positional encoding scheme for graphs based on Singular Value Decomposition which can improve the performance of EGT. Our framework, which relies on global node feature aggregation, achieves better performance compared to Convolutional/Message-Passing Graph Neural Networks, which rely on local feature aggregation within a neighborhood. We verify the performance of EGT in a supervised learning setting on a wide range of experiments on benchmark datasets. Our findings indicate that convolutional aggregation is not an essential inductive bias for graphs and global self-attention can serve as a flexible and adaptive alternative.
Given the following machine learning model name: Batch Normalization, provide a description of the model
**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout). We apply a batch normalization layer as follows for a minibatch $\mathcal{B}$: $$ \mu\_{\mathcal{B}} = \frac{1}{m}\sum^{m}\_{i=1}x\_{i} $$ $$ \sigma^{2}\_{\mathcal{B}} = \frac{1}{m}\sum^{m}\_{i=1}\left(x\_{i}-\mu\_{\mathcal{B}}\right)^{2} $$ $$ \hat{x}\_{i} = \frac{x\_{i} - \mu\_{\mathcal{B}}}{\sqrt{\sigma^{2}\_{\mathcal{B}}+\epsilon}} $$ $$ y\_{i} = \gamma\hat{x}\_{i} + \beta = \text{BN}\_{\gamma, \beta}\left(x\_{i}\right) $$ Where $\gamma$ and $\beta$ are learnable parameters.
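The four equations above translate almost line-for-line into numpy (training-mode statistics only; the running averages used at inference time are omitted):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Batch-normalise a minibatch x of shape (m, features): per-feature
    mean and (biased) variance over the batch, normalise, then scale
    and shift with the learnable gamma and beta."""
    mu = x.mean(axis=0)                      # mu_B
    var = x.var(axis=0)                      # sigma^2_B (biased, as in BN)
    x_hat = (x - mu) / np.sqrt(var + eps)    # normalised activations
    return gamma * x_hat + beta              # y_i = BN_{gamma,beta}(x_i)

x = np.array([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])
y = batch_norm(x, gamma=np.ones(2), beta=np.zeros(2))
```

With $\gamma = 1$ and $\beta = 0$, each output feature has (approximately, up to $\epsilon$) zero mean and unit standard deviation over the batch.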
Given the following machine learning model name: Capsule Network, provide a description of the model
**Capsule Network** is a machine learning system that is a type of artificial neural network that can be used to better model hierarchical relationships. The approach is an attempt to more closely mimic biological neural organization.
Given the following machine learning model name: Quick Attention, provide a description of the model
**Quick Attention** takes a feature map of size $W \times H \times C$ (width $\times$ height $\times$ channels) as input and creates two instances of it. A $1 \times 1 \times C$ [convolution](https://paperswithcode.com/method/convolution) is applied to the first instance, followed by a sigmoid activation; the result is then added element-wise to the second instance to produce the final attention map, which has the same dimensions as the input: \begin{equation} QA\left( x \right) = \sigma\left( f\left( x \right)^{1x1} \right) + x \end{equation}
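A minimal numpy sketch of this operation, assuming the $1 \times 1$ convolution is an arbitrary $C \times C$ weight matrix applied along the channel axis (the identity weight below is a placeholder, not from the source):

```python
import numpy as np

def quick_attention(x, w):
    """Quick Attention sketch: a 1x1 convolution (a C x C weight matrix
    w applied per pixel along the channel axis), a sigmoid, then a
    residual addition of the input.  x has shape (H, W, C)."""
    conv1x1 = x @ w                         # 1x1 conv == per-pixel matmul
    attn = 1.0 / (1.0 + np.exp(-conv1x1))   # sigmoid activation
    return attn + x                         # same (H, W, C) shape as input

x = np.zeros((4, 4, 3))
out = quick_attention(x, np.eye(3))         # identity weight, illustrative
```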
Given the following machine learning model name: PipeTransformer, provide a description of the model
**PipeTransformer** is a method for automated elastic pipelining for efficient distributed training of [Transformer](https://paperswithcode.com/method/transformer) models. PipeTransformer uses an adaptive on-the-fly freeze algorithm that can identify and gradually freeze some layers during training, together with an elastic pipelining system that dynamically allocates resources to train the remaining active layers. More specifically, PipeTransformer automatically excludes frozen layers from the pipeline, packs active layers into fewer GPUs, and forks more replicas to increase data-parallel width.
Given the following machine learning model name: ELMo, provide a description of the model
**Embeddings from Language Models**, or **ELMo**, is a type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. A biLM combines both a forward and backward LM. ELMo jointly maximizes the log likelihood of the forward and backward directions. To add ELMo to a supervised model, we freeze the weights of the biLM and then concatenate the ELMo vector $\textbf{ELMO}^{task}_k$ with $\textbf{x}_k$ and pass the ELMO enhanced representation $[\textbf{x}_k; \textbf{ELMO}^{task}_k]$ into the task RNN. Here $\textbf{x}_k$ is a context-independent token representation for each token position. Image Source: [here](https://medium.com/@duyanhnguyen_38925/create-a-strong-text-classification-with-the-help-from-elmo-e90809ba29da)
Given the following machine learning model name: You Only Hypothesize Once, provide a description of the model
**You Only Hypothesize Once**, or **YOHO**, is a local descriptor-based framework for the registration of two unaligned point clouds. The proposed descriptor achieves rotation invariance through recent techniques for group equivariant feature learning, which brings more robustness to point density variations and noise. The descriptor in YOHO also has a rotation-equivariant part, which enables estimating the registration from just one correspondence hypothesis.
Given the following machine learning model name: Field Embedded Factorization Machine, provide a description of the model
**Field Embedded Factorization Machine**, or **FEFM**, is a factorization machine variant. For each field pair, FEFM introduces symmetric matrix embeddings along with the usual feature vector embeddings that are present in FM. Like FM, $v\_{i}$ is the vector embedding of the $i^{th}$ feature. However, unlike Field-Aware Factorization Machines (FFMs), FEFM doesn't explicitly learn field-specific feature embeddings. The learnable symmetric matrix $W\_{F(i), F(j)}$ is the embedding for the field pair $F(i)$ and $F(j)$. The interaction between the $i^{th}$ feature and the $j^{th}$ feature is mediated through $W\_{F(i), F(j)}$. $$ \phi(\theta, x)=\phi\_{FEFM}((w, v, W), x)=w\_{0}+\sum\_{i=1}^{m} w\_{i} x\_{i}+\sum\_{i=1}^{m} \sum\_{j=i+1}^{m} v\_{i}^{T} W\_{F(i), F(j)} v\_{j} x\_{i} x\_{j} $$ where $W\_{F(i), F(j)}$ is a $k \times k$ symmetric matrix ($k$ is the dimension of the feature vector embedding space containing the feature vectors $v\_{i}$ and $v\_{j}$). The symmetric property of the learnable matrix $W\_{F(i), F(j)}$ is ensured by reparameterizing it as $U\_{F(i), F(j)} + U\_{F(i), F(j)}^{T}$, where $U\_{F(i), F(j)}^{T}$ is the transpose of the learnable matrix $U\_{F(i), F(j)}$. Note that $W\_{F(i), F(j)}$ can also be interpreted as a vector transformation matrix which transforms a feature embedding when interacting with a specific field.
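A small numpy sketch of the FEFM model equation, including the symmetric reparameterisation $W = U + U^{T}$ (the data-structure choices, such as a dict keyed by sorted field pairs, are illustrative, not from the source):

```python
import numpy as np

def fefm_score(w0, w, v, x, field_of, U):
    """FEFM score for one example.  v[i] is the k-dim embedding of
    feature i, field_of[i] its field, and U[(f, g)] a learnable k x k
    matrix per field pair; W = U + U^T keeps each interaction symmetric."""
    m = len(x)
    score = w0 + sum(w[i] * x[i] for i in range(m))   # linear terms
    for i in range(m):
        for j in range(i + 1, m):
            f, g = sorted((field_of[i], field_of[j]))
            W = U[(f, g)] + U[(f, g)].T               # symmetric W_{F(i),F(j)}
            score += float(v[i] @ W @ v[j]) * x[i] * x[j]
    return score

# two features in two fields, k = 2
v = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
U = {(0, 1): np.array([[0.0, 1.0], [0.0, 0.0]])}      # W becomes [[0,1],[1,0]]
score = fefm_score(w0=0.5, w=[1.0, 2.0], v=v, x=[1.0, 1.0],
                   field_of=[0, 1], U=U)
```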
Given the following machine learning model name: Dilated Causal Convolution, provide a description of the model
A **Dilated Causal Convolution** is a [causal convolution](https://paperswithcode.com/method/causal-convolution) where the filter is applied over an area larger than its length by skipping input values with a certain step. A dilated causal [convolution](https://paperswithcode.com/method/convolution) effectively allows the network to have very large receptive fields with just a few layers.
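A minimal numpy sketch of a 1-D dilated causal convolution (naive loops for clarity; left zero-padding keeps the output causal):

```python
import numpy as np

def dilated_causal_conv1d(x, kernel, dilation):
    """1-D dilated causal convolution: output t sees only inputs at
    t, t - d, t - 2d, ... (left-padded with zeros), never the future."""
    k = len(kernel)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])   # zeros on the left: causal
    out = np.zeros(len(x))
    for t in range(len(x)):
        for i in range(k):
            # tap i reaches back i * dilation steps into the past
            out[t] += kernel[k - 1 - i] * xp[pad + t - i * dilation]
    return out

# kernel [1, 1] with dilation 2: out[t] = x[t] + x[t - 2]
out = dilated_causal_conv1d(np.array([1.0, 2, 3, 4, 5, 6]),
                            np.array([1.0, 1.0]), dilation=2)
```

Stacking layers with dilations $1, 2, 4, \dots$ makes the receptive field grow exponentially with depth while the parameter count grows only linearly.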
Given the following machine learning model name: WaveNet, provide a description of the model
**WaveNet** is an audio generative model based on the [PixelCNN](https://paperswithcode.com/method/pixelcnn) architecture. In order to deal with long-range temporal dependencies needed for raw audio generation, architectures are developed based on dilated causal convolutions, which exhibit very large receptive fields. The joint probability of a waveform $\vec{x} = \{ x_1, \dots, x_T \}$ is factorised as a product of conditional probabilities as follows: $$p\left(\vec{x}\right) = \prod_{t=1}^{T} p\left(x_t \mid x_1, \dots ,x_{t-1}\right)$$ Each audio sample $x_t$ is therefore conditioned on the samples at all previous timesteps.
Given the following machine learning model name: GPT-4, provide a description of the model
**GPT-4** is a transformer based model pre-trained to predict the next token in a document.
Given the following machine learning model name: Double Q-learning, provide a description of the model
**Double Q-learning** is an off-policy reinforcement learning algorithm that utilises double estimation to counteract overestimation problems with traditional Q-learning. The max operator in standard [Q-learning](https://paperswithcode.com/method/q-learning) and [DQN](https://paperswithcode.com/method/dqn) uses the same values both to select and to evaluate an action. This makes it more likely to select overestimated values, resulting in overoptimistic value estimates. To prevent this, we can decouple the selection from the evaluation, which is the idea behind Double Q-learning: $$ Y^{Q}\_{t} = R\_{t+1} + \gamma{Q}\left(S\_{t+1}, \arg\max\_{a}Q\left(S\_{t+1}, a; \mathbb{\theta}\_{t}\right);\mathbb{\theta}\_{t}\right) $$ The Double Q-learning error can then be written as: $$ Y^{DoubleQ}\_{t} = R\_{t+1} + \gamma{Q}\left(S\_{t+1}, \arg\max\_{a}Q\left(S\_{t+1}, a; \mathbb{\theta}\_{t}\right);\mathbb{\theta}^{'}\_{t}\right) $$ Here the selection of the action in the $\arg\max$ is still due to the online weights $\theta\_{t}$. But we use a second set of weights $\mathbb{\theta}^{'}\_{t}$ to fairly evaluate the value of this policy. Source: [Deep Reinforcement Learning with Double Q-learning](https://paperswithcode.com/paper/deep-reinforcement-learning-with-double-q)
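A tabular sketch of one Double Q-learning update (terminal-state handling omitted; the fair coin that chooses which estimator to update is passed in by the caller rather than drawn inside, purely for illustration):

```python
def double_q_update(QA, QB, s, a, r, s_next, alpha, gamma, update_a):
    """One tabular Double Q-learning step.  If update_a is True, Q^A
    selects the greedy next action and Q^B evaluates it (then Q^A is
    updated); otherwise the roles are swapped.  QA, QB: dicts mapping
    state -> list of per-action values."""
    select, evaluate = (QA, QB) if update_a else (QB, QA)
    q_next = select[s_next]
    a_star = max(range(len(q_next)), key=lambda i: q_next[i])  # argmax
    target = r + gamma * evaluate[s_next][a_star]              # decoupled eval
    select[s][a] += alpha * (target - select[s][a])

QA = {'s': [0.0, 0.0], 't': [1.0, 2.0]}
QB = {'s': [0.0, 0.0], 't': [0.5, 0.0]}
# update Q^B: Q^B picks action 0 at 't', Q^A evaluates it as 1.0
double_q_update(QA, QB, 's', 0, 1.0, 't', alpha=0.5, gamma=0.9,
                update_a=False)
```

Decoupling selection from evaluation this way is exactly what removes the systematic overestimation of the single-estimator max.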
Given the following machine learning model name: NAS-FCOS, provide a description of the model
**NAS-FCOS** consists of two sub networks, an [FPN](https://paperswithcode.com/method/fpn) $f$ and a set of prediction heads $h$ which have shared structures. One notable difference with other FPN-based one-stage detectors is that our heads have partially shared weights. Only the last several layers of the predictions heads (marked as yellow) are tied by their weights. The number of layers to share is decided automatically by the search algorithm. Note that both FPN and head are in our actual search space; and have more layers than shown in this figure.
Given the following machine learning model name: Adaptive Early-Learning Correction, provide a description of the model
**Adaptive Early-Learning Correction** is a method for learning semantic segmentation from noisy annotations. It exploits the observation that deep networks fit the clean annotations during an early-learning phase before eventually memorizing the label noise, and uses the model's own predictions from that phase to adaptively correct the noisy annotations.