| prompts | description |
|---|---|
Given the following machine learning model name: building to building transfer learning, provide a description of the model | using transfer learning to transfer knowledge from one building to predict the energy consumption of another building with scarce data |
Given the following machine learning model name: TABBIE, provide a description of the model | **TABBIE** is a pretraining objective (*corrupt cell detection*) that learns exclusively from tabular data. Unlike other approaches, TABBIE provides embeddings of all table substructures (cells, rows, and columns). TABBIE can be seen as a table embedding model trained to detect corrupted cells, inspired by the [ELECTRA](https://www.paperswithcode.com/method/electra) objective function. |
Given the following machine learning model name: Affine Coupling, provide a description of the model | **Affine Coupling** is a method for implementing a normalizing flow (where we stack a sequence of invertible bijective transformation functions). Affine coupling is one of these bijective transformation functions. Specifically, it is an example of a reversible transformation where the forward function, the reverse function and the log-determinant are computationally efficient. For the forward function, we split the input dimension into two parts:
$$ \mathbf{x}\_{a}, \mathbf{x}\_{b} = \text{split}\left(\mathbf{x}\right) $$
The second part stays the same, $\mathbf{y}\_{b} = \mathbf{x}\_{b}$, while the first part $\mathbf{x}\_{a}$ undergoes an affine transformation, whose parameters are learnt by putting the second part $\mathbf{x}\_{b}$ through a neural network. Together we have:
$$ \left(\log{\mathbf{s}}, \mathbf{t}\right) = \text{NN}\left(\mathbf{x}\_{b}\right) $$
$$ \mathbf{s} = \exp\left(\log{\mathbf{s}}\right) $$
$$ \mathbf{y}\_{a} = \mathbf{s} \odot \mathbf{x}\_{a} + \mathbf{t} $$
$$ \mathbf{y}\_{b} = \mathbf{x}\_{b} $$
$$ \mathbf{y} = \text{concat}\left(\mathbf{y}\_{a}, \mathbf{y}\_{b}\right) $$
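A minimal plain-Python sketch of the forward pass above (the `nn` function here is a hypothetical stand-in for the learned network):

```python
import math

def nn(x_b):
    # Hypothetical stand-in for the learned network: maps x_b to
    # (log_s, t), the scale (in log space) and translation parameters.
    log_s = [0.5 * v for v in x_b]
    t = [v + 1.0 for v in x_b]
    return log_s, t

def affine_coupling_forward(x):
    # Split the input dimensions into two parts.
    half = len(x) // 2
    x_a, x_b = x[:half], x[half:]
    # The parameters of the affine transform come from x_b alone.
    log_s, t = nn(x_b)
    s = [math.exp(v) for v in log_s]
    # y_a = s * x_a + t (elementwise); y_b passes through unchanged.
    y_a = [si * xi + ti for si, xi, ti in zip(s, x_a, t)]
    y_b = x_b
    # The log-determinant of the Jacobian is just the sum of log_s.
    log_det = sum(log_s)
    return y_a + y_b, log_det

y, log_det = affine_coupling_forward([1.0, 2.0, 0.0, 0.0])
```

Because $\mathbf{y}\_{b} = \mathbf{x}\_{b}$ and the scale was computed from $\mathbf{x}\_{b}$, the reverse pass can recompute $\mathbf{s}$ and $\mathbf{t}$ exactly and invert the transform cheaply, which is what makes the coupling layer efficient in both directions.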
Image: [GLOW](https://paperswithcode.com/method/glow) |
Given the following machine learning model name: Shrink and Fine-Tune, provide a description of the model | **Shrink and Fine-Tune**, or **SFT**, is a type of distillation that avoids explicit distillation by copying parameters to a student model and then fine-tuning. Specifically, it extracts a student model from the maximally spaced layers of a fine-tuned teacher. Each layer $l \in L'$ is copied fully from $L$. For example, when creating a [BART](https://paperswithcode.com/method/bart) student with 3 decoder layers from the 12 encoder layer, 12 decoder layer teacher, we copy the teacher’s full $Enc^{L}$ and decoder layers 0, 6, and 11 to the student. When deciding which layers to copy, we break ties arbitrarily; copying layers 0, 5, and 11 might work just as well. When copying only 1 decoder layer, we copy layer 0. This was found to work better than copying layer 11. The impact of initialization on performance is measured experimentally in Section 6.1. After initialization, the student model continues to fine-tune on the summarization dataset, with the objective of minimizing $\mathcal{L}\_{Data}$. |
Given the following machine learning model name: AdaMax, provide a description of the model | **AdaMax** is a generalisation of [Adam](https://paperswithcode.com/method/adam) from the $l\_{2}$ norm to the $l\_{\infty}$ norm. Define:
$$ u\_{t} = \beta^{\infty}\_{2}v\_{t-1} + \left(1-\beta^{\infty}\_{2}\right)|g\_{t}|^{\infty}$$
$$ = \max\left(\beta\_{2}\cdot{v}\_{t-1}, |g\_{t}|\right)$$
We can plug into the Adam update equation by replacing $\sqrt{\hat{v}\_{t}} + \epsilon$ with $u\_{t}$ to obtain the AdaMax update rule:
$$ \theta\_{t+1} = \theta\_{t} - \frac{\eta}{u\_{t}}\hat{m}\_{t} $$
Common default values are $\eta = 0.002$ and $\beta\_{1}=0.9$ and $\beta\_{2}=0.999$. |
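A minimal scalar sketch of one AdaMax step under these definitions (the quadratic toy objective driving it is an arbitrary choice for illustration):

```python
import math

def adamax_step(theta, m, u, g, t, eta=0.002, beta1=0.9, beta2=0.999):
    # One AdaMax update for a single scalar parameter.
    m = beta1 * m + (1 - beta1) * g    # biased first-moment estimate
    u = max(beta2 * u, abs(g))         # infinity-norm second-moment estimate
    m_hat = m / (1 - beta1 ** t)       # bias correction (first moment only)
    theta = theta - eta / u * m_hat
    return theta, m, u

# Minimize f(theta) = theta^2, whose gradient is 2 * theta.
theta, m, u = 1.0, 0.0, 0.0
for t in range(1, 4):
    theta, m, u = adamax_step(theta, m, u, g=2.0 * theta, t=t)
```

Note that, unlike $\hat{m}\_{t}$, $u\_{t}$ needs no bias correction: the max-based recursion is not biased towards zero.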
Given the following machine learning model name: Random Gaussian Blur, provide a description of the model | **Random Gaussian Blur** is an image data augmentation technique where we blur an image using a Gaussian kernel whose strength (standard deviation) is sampled at random.
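A minimal pure-Python sketch of the idea (the sigma range and kernel radius here are hypothetical choices; image libraries such as Pillow expose the blur itself via `ImageFilter.GaussianBlur`):

```python
import math, random

def gaussian_kernel(sigma, radius):
    # Discrete 1D Gaussian kernel, normalized to sum to 1.
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur_row(row, kernel, radius):
    # 1D convolution with edge clamping.
    out = []
    for i in range(len(row)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - radius, 0), len(row) - 1)
            acc += w * row[idx]
        out.append(acc)
    return out

def random_gaussian_blur(image, sigma_range=(0.1, 2.0), radius=2):
    # Sample the blur strength at random, then blur rows and columns
    # separately (a 2D Gaussian kernel is separable).
    sigma = random.uniform(*sigma_range)
    kernel = gaussian_kernel(sigma, radius)
    rows = [blur_row(r, kernel, radius) for r in image]
    cols = [blur_row(list(c), kernel, radius) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

image = [[0.0] * 5 for _ in range(5)]
image[2][2] = 1.0  # a single bright pixel to be spread out
blurred = random_gaussian_blur(image)
```

The blur preserves total intensity while spreading the bright pixel over its neighbourhood; each augmentation call samples a fresh sigma, which is what makes the transform "random".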
Image Source: [Wikipedia](https://en.wikipedia.org/wiki/Gaussian_blur) |
Given the following machine learning model name: DALL·E 2, provide a description of the model | **DALL·E 2** is a generative text-to-image model made up of two main components: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. |
Given the following machine learning model name: Sigmoid Linear Unit, provide a description of the model | **Sigmoid Linear Units**, or **SiLUs**, are activation functions for
neural networks. The activation of the SiLU is its input multiplied by the sigmoid of that input, or $$ x\sigma(x).$$
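In code, the definition is a one-liner (a minimal sketch):

```python
import math

def silu(x):
    # SiLU: the input multiplied by its sigmoid, x * sigma(x).
    return x * (1.0 / (1.0 + math.exp(-x)))

# The unit is smooth and non-monotonic near zero; for large positive
# inputs it approaches the identity, and for large negative inputs it
# approaches zero.
values = [silu(x) for x in (-5.0, -1.0, 0.0, 1.0, 5.0)]
```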
See [Gaussian Error Linear Units](https://arxiv.org/abs/1606.08415) ([GELUs](https://paperswithcode.com/method/gelu)) where the SiLU was originally coined, and see [Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning](https://arxiv.org/abs/1702.03118) and [Swish: a Self-Gated Activation Function](https://arxiv.org/abs/1710.05941v1) where the SiLU was experimented with later. |
Given the following machine learning model name: CARLA: An Open Urban Driving Simulator, provide a description of the model | CARLA is an open-source simulator for autonomous driving research. CARLA has been developed from the ground up to support development, training, and validation of autonomous urban driving systems. In addition to open-source code and protocols, CARLA provides open digital assets (urban layouts, buildings, vehicles) that were created for this purpose and can be used freely.
Source: [Dosovitskiy et al.](https://arxiv.org/pdf/1711.03938v1.pdf)
Image source: [Dosovitskiy et al.](https://arxiv.org/pdf/1711.03938v1.pdf) |
Given the following machine learning model name: Fully Convolutional Network, provide a description of the model | **Fully Convolutional Networks**, or **FCNs**, are an architecture used mainly for semantic segmentation. They employ solely locally connected layers, such as [convolution](https://paperswithcode.com/method/convolution), pooling and upsampling. Avoiding the use of dense layers means fewer parameters, making the networks faster to train. It also means an FCN can work with variable image sizes, given that all connections are local.
The network consists of a downsampling path, used to extract and interpret the context, and an upsampling path, which allows for localization.
FCNs also employ skip connections to recover the fine-grained spatial information lost in the downsampling path. |
Given the following machine learning model name: Variational Dropout, provide a description of the model | **Variational Dropout** is a regularization technique based on [dropout](https://paperswithcode.com/method/dropout), but uses a variational inference grounded approach. In Variational Dropout, we repeat the same dropout mask at each time step for inputs, outputs, and recurrent layers (dropping the same network units at each time step). This is in contrast to ordinary Dropout, where different dropout masks are sampled at each time step for the inputs and outputs alone. |
Given the following machine learning model name: Hi-LANDER, provide a description of the model | **Hi-LANDER** is a hierarchical [graph neural network](https://paperswithcode.com/methods/category/graph-models) (GNN) model that learns how to cluster a set of images into an unknown number of identities using a training set of images annotated with labels belonging to a disjoint set of identities. The hierarchical GNN uses an approach to merge connected components predicted at each level of the hierarchy to form a new graph at the next level. Unlike fully unsupervised hierarchical clustering, the choice of grouping and complexity criteria stems naturally from supervision in the training set. |
Given the following machine learning model name: Fast-YOLOv2, provide a description of the model | |
Given the following machine learning model name: TSRUs, provide a description of the model | **TSRUs**, or **Transformation-based Spatial Recurrent Units**, are a modification of a [ConvGRU](https://paperswithcode.com/method/cgru) used in the [TriVD-GAN](https://paperswithcode.com/method/trivd-gan) architecture for video generation.
It largely follows [TSRUc](https://paperswithcode.com/method/tsruc), but computes each intermediate output in a fully sequential manner: like in TSRUc, $c$ is given access to $\hat{h}\_{t-1}$, but additionally, $u$ is given access to both outputs $\hat{h}\_{t-1}$ and $c$, so as to make an informed decision prior to mixing. This yields the following replacement for $u$:
$$ u = \sigma\left(W\_{u} \star\_{n}\left[\hat{h}\_{t-1};c\right] + b\_{u} \right) $$
In these equations $\sigma$ and $\rho$ are the elementwise sigmoid and [ReLU](https://paperswithcode.com/method/relu) functions respectively and the $\star\_{n}$ represents a [convolution](https://paperswithcode.com/method/convolution) with a kernel of size $n \times n$. Brackets are used to represent a feature concatenation. |
Given the following machine learning model name: Conditional Random Field, provide a description of the model | **Conditional Random Fields**, or **CRFs**, are a type of probabilistic graphical model that takes neighboring sample context into account for tasks like classification. Prediction is modeled as a graphical model that implements dependencies between the predictions. The choice of graph depends on the application; for example, linear chain CRFs are popular in natural language processing, whereas in image-based tasks the graph connects neighboring locations in an image to enforce that they have similar predictions.
Image Credit: [Charles Sutton and Andrew McCallum, An Introduction to Conditional Random Fields](https://homepages.inf.ed.ac.uk/csutton/publications/crftut-fnt.pdf) |
Given the following machine learning model name: Generative Adversarial Transformer, provide a description of the model | GANformer is a novel and efficient type of [transformer](https://paperswithcode.com/method/transformer) which can be used for visual generative modeling. The network employs a bipartite structure that enables long-range interactions across an image while maintaining linear computational efficiency, and can readily scale to high-resolution synthesis. It iteratively propagates information from a set of latent variables to the evolving visual features and vice versa, to support the refinement of each in light of the other and encourage the emergence of compositional representations of objects and scenes.
Source: [Generative Adversarial Transformers](https://arxiv.org/pdf/2103.01209v2.pdf)
Image source: [Generative Adversarial Transformers](https://arxiv.org/pdf/2103.01209v2.pdf) |
Given the following machine learning model name: YOLOX, provide a description of the model | **YOLOX** is a single-stage object detector that makes several modifications to [YOLOv3](https://paperswithcode.com/method/yolov3) with a [DarkNet53](https://www.paperswithcode.com/method/darknet53) backbone. Specifically, YOLO’s head is replaced with a decoupled one. For each level of [FPN](https://paperswithcode.com/method/fpn) feature, we first adopt a 1 × 1 conv layer to reduce the feature channel to 256 and then add two parallel branches with two 3 × 3 conv layers each for classification and regression tasks respectively.
Additional changes include adding Mosaic and [MixUp](https://paperswithcode.com/method/mixup) into the augmentation strategies to boost YOLOX’s performance. The anchor mechanism is also removed, so YOLOX is anchor-free. Lastly, SimOTA is used for label assignment, formulating label assignment as an optimal transport problem solved approximately via a top-k strategy. |
Given the following machine learning model name: Double DQN, provide a description of the model | A **Double Deep Q-Network**, or **Double DQN** utilises [Double Q-learning](https://paperswithcode.com/method/double-q-learning) to reduce overestimation by decomposing the max operation in the target into action selection and action evaluation. We evaluate the greedy policy according to the online network, but we use the target network to estimate its value. The update is the same as for [DQN](https://paperswithcode.com/method/dqn), but replacing the target $Y^{DQN}\_{t}$ with:
$$ Y^{DoubleDQN}\_{t} = R\_{t+1}+\gamma{Q}\left(S\_{t+1}, \arg\max\_{a}Q\left(S\_{t+1}, a; \theta\_{t}\right);\theta\_{t}^{-}\right) $$
Compared to the original formulation of Double [Q-Learning](https://paperswithcode.com/method/q-learning), in Double DQN the weights of the second network $\theta^{'}\_{t}$ are replaced with the weights of the target network $\theta\_{t}^{-}$ for the evaluation of the current greedy policy. |
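The target computation can be sketched with toy Q-value lists standing in for the two networks (all numbers are hypothetical):

```python
def double_dqn_target(reward, q_online_next, q_target_next, gamma=0.99, done=False):
    # Action selection uses the online network...
    best_action = max(range(len(q_online_next)), key=q_online_next.__getitem__)
    if done:
        return reward
    # ...but action evaluation uses the target network.
    return reward + gamma * q_target_next[best_action]

# Toy Q-values for the next state under both networks.
q_online_next = [1.0, 3.0, 2.0]   # the online net picks action 1...
q_target_next = [0.5, 2.0, 4.0]   # ...but its value comes from the target net
y = double_dqn_target(reward=1.0, q_online_next=q_online_next,
                      q_target_next=q_target_next)
```

A plain DQN target would use $\max\_{a} Q\left(S\_{t+1}, a; \theta\_{t}^{-}\right) = 4.0$ here; decoupling selection from evaluation is what curbs the overestimation.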
Given the following machine learning model name: Graph Echo State Network, provide a description of the model | **Graph Echo State Network** (**GraphESN**) model is a generalization of the Echo State Network (ESN) approach to graph domains. GraphESNs allow for an efficient approach to Recursive Neural Networks (RecNNs) modeling extended to deal with cyclic/acyclic, directed/undirected, labeled graphs. The recurrent reservoir of the network computes a fixed contractive encoding function over graphs and is left untrained after initialization, while a feed-forward readout implements an adaptive linear output function. Contractivity of the state transition function implies a Markovian characterization of state dynamics and stability of the state computation in the presence of cycles. Due to the use of fixed (untrained) encoding, the model represents both an extremely efficient version and a baseline for the performance of recursive models with trained connections.
Description from: [Graph Echo State Networks](https://ieeexplore.ieee.org/document/5596796) |
Given the following machine learning model name: AutoSync, provide a description of the model | **AutoSync** is a pipeline for automatically optimizing synchronization strategies, given model structures and resource specifications, in data-parallel distributed machine learning. By factorizing the synchronization strategy with respect to each trainable building block of a DL model, we can construct a valid and large strategy space spanned by multiple factors. AutoSync efficiently navigates the space and locates the optimal strategy. AutoSync leverages domain knowledge about synchronization systems to reduce the search space, and is equipped with a domain adaptive simulator, which combines principled communication modeling and data-driven ML models, to estimate the runtime of strategy proposals without launching real distributed execution. |
Given the following machine learning model name: ALBEF, provide a description of the model | ALBEF introduces a contrastive loss to align the image and text representations before fusing them through cross-modal attention. This enables more grounded vision and language representation learning. ALBEF also doesn't require bounding box annotations. The model consists of an image encoder, a text encoder, and a multimodal encoder. The image-text contrastive loss helps to align the unimodal representations of an image-text pair before fusion. The image-text matching loss and a masked language modeling loss are applied to learn multimodal interactions between image and text. In addition, momentum distillation is used to generate pseudo-targets. This improves learning with noisy data. |
Given the following machine learning model name: PipeMare, provide a description of the model | **PipeMare** is an asynchronous (bubble-free) pipeline parallel method for training large neural networks. It involves two main techniques: learning rate rescheduling and discrepancy correction. |
Given the following machine learning model name: Multi-partition Embedding Interaction, provide a description of the model | **MEI** introduces the *multi-partition embedding interaction* technique with block term tensor format to systematically address the efficiency--expressiveness trade-off in knowledge graph embedding. It divides the embedding vector into multiple partitions and learns the local interaction patterns from data instead of using fixed special patterns as in ComplEx or SimplE models. This enables MEI to achieve optimal efficiency--expressiveness trade-off, not just being fully expressive. Previous methods such as TuckER, RESCAL, DistMult, ComplEx, and SimplE are suboptimal restricted special cases of MEI. |
Given the following machine learning model name: Online Multi-granularity Distillation, provide a description of the model | **OMGD**, or **Online Multi-Granularity Distillation** is a framework for learning efficient [GANs](https://paperswithcode.com/methods/category/generative-adversarial-networks). The student generator is optimized in a discriminator-free and ground-truth-free setting. The scheme trains the teacher and student alternatively, promoting these two generators iteratively and progressively. The progressively optimized teacher generator helps to warm up the student and guide the optimization direction step by step.
Specifically, the student generator $G\_{S}$ only leverages the complementary teacher generators $G^{W}\_{T}$ and $G^{D}\_{T}$ for optimization and can be trained in the discriminator-free and ground-truth-free setting. This framework transfers concepts at different levels from the intermediate layers and the output layer to perform the knowledge distillation. The whole optimization is conducted on an online distillation scheme. Namely, $G^{W}\_{T}$, $G^{D}\_{T}$ and $G\_{S}$ are optimized simultaneously and progressively. |
Given the following machine learning model name: Multiscale Dilated Convolution Block, provide a description of the model | A **Multiscale Dilated Convolution Block** is an Inception-style convolutional block motivated by the ideas that image features naturally occur at multiple scales, that a network’s expressivity is proportional to the range of functions it can represent divided by its total number of parameters, and by the desire to efficiently expand a network’s receptive field. The Multiscale [Dilated Convolution](https://paperswithcode.com/method/dilated-convolution) (MDC) block applies a single $F\times{F}$ filter at multiple dilation factors, then performs a weighted elementwise sum of each dilated filter’s output, allowing the network to simultaneously learn a set of features and the relevant scales at which those features occur with a minimal increase in parameters. This also rapidly expands the network’s receptive field without requiring an increase in depth or the number of parameters. |
Given the following machine learning model name: Set Transformer, provide a description of the model | Many machine learning tasks such as multiple instance learning, 3D shape recognition, and few-shot image classification are defined on sets of instances. Since solutions to such problems do not depend on the order of elements of the set, models used to address them should be permutation invariant. We present an attention-based neural network module, the Set Transformer, specifically designed to model interactions among elements in the input set. The model consists of an encoder and a decoder, both of which rely on attention mechanisms. In an effort to reduce computational complexity, we introduce an attention scheme inspired by inducing point methods from the sparse Gaussian process literature. It reduces the computation time of self-attention from quadratic to linear in the number of elements in the set. We show that our model is theoretically attractive and we evaluate it on a range of tasks, demonstrating state-of-the-art performance compared to recent methods for set-structured data. |
Given the following machine learning model name: Contrastive Language-Image Pre-training, provide a description of the model | **Contrastive Language-Image Pre-training** (**CLIP**), consisting of a simplified version of ConVIRT trained from scratch, is an efficient method of image representation learning from natural language supervision. CLIP jointly trains an image encoder and a text encoder to predict the correct pairings of a batch of (image, text) training examples. At test time the learned text encoder synthesizes a zero-shot linear classifier by embedding the names or descriptions of the target dataset’s classes.
For pre-training, CLIP is trained to predict which of the $N \times N$ possible (image, text) pairings across a batch actually occurred. CLIP learns a multi-modal embedding space by jointly training an image encoder and text encoder to maximize the cosine similarity of the image and text embeddings of the $N$ real pairs in the batch while minimizing the cosine similarity of the embeddings of the $N^2 - N$ incorrect pairings. A symmetric cross entropy loss is optimized over these similarity scores.
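A minimal pure-Python sketch of this symmetric contrastive loss for a toy batch of $N = 2$ pairs (the embeddings and temperature value are hypothetical):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def clip_loss(img_emb, txt_emb, temperature=0.07):
    # Pairwise cosine similarities, scaled by the temperature.
    logits = [[cosine(i, t) / temperature for t in txt_emb] for i in img_emb]

    def xent(rows):
        # Cross entropy with the diagonal (the true pairings) as targets.
        total = 0.0
        for k, row in enumerate(rows):
            z = sum(math.exp(v) for v in row)
            total += -math.log(math.exp(row[k]) / z)
        return total / len(rows)

    # Symmetric loss: average of image->text and text->image directions.
    loss_i2t = xent(logits)
    loss_t2i = xent([list(col) for col in zip(*logits)])
    return (loss_i2t + loss_t2i) / 2

# Two toy (image, text) pairs; matching pairs point in similar directions.
img = [[1.0, 0.0], [0.0, 1.0]]
txt = [[0.9, 0.1], [0.1, 0.9]]
loss = clip_loss(img, txt)
```

Because the matching pairs already have much higher cosine similarity than the mismatched ones, the loss is close to zero for this toy batch; mismatched embeddings would drive it up.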
Image credit: [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/pdf/2103.00020.pdf) |
Given the following machine learning model name: Pixel Recurrent Neural Network, provide a description of the model | **PixelRNNs** are generative neural networks that sequentially predict the pixels in an image along the two spatial dimensions. They model the discrete probability of the raw pixel values and encode the complete set of dependencies in the image. Variants include the Row [LSTM](https://paperswithcode.com/method/lstm) and the Diagonal [BiLSTM](https://paperswithcode.com/method/bilstm), which scale more easily to larger datasets. Pixel values are treated as discrete random variables by using a [softmax](https://paperswithcode.com/method/softmax) layer in the conditional distributions. Masked convolutions are employed to allow PixelRNNs to model full dependencies between the color channels. |
Given the following machine learning model name: Xception, provide a description of the model | **Xception** is a convolutional neural network architecture that relies solely on [depthwise separable convolution](https://paperswithcode.com/method/depthwise-separable-convolution) layers. |
Given the following machine learning model name: PrIme Sample Attention, provide a description of the model | **PrIme Sample Attention (PISA)** directs the training of object detection frameworks towards prime samples. These are samples that play a key role in driving the detection performance. The authors define Hierarchical Local Rank (HLR) as a metric of importance. Specifically, they use IoU-HLR to rank positive samples and ScoreHLR to rank negative samples in each mini-batch. This ranking strategy places the positive samples with highest IoUs around each object and the negative samples with highest scores in each cluster to the top of the ranked list and directs the focus of the training process to them via a simple re-weighting scheme. The authors also devise a classification-aware regression loss to jointly optimize the classification and regression branches. Particularly, this loss would suppress those samples with large regression loss, thus reinforcing the attention to prime samples. |
Given the following machine learning model name: Global Average Pooling, provide a description of the model | **Global Average Pooling** is a pooling operation designed to replace fully connected layers in classical CNNs. The idea is to generate one feature map for each corresponding category of the classification task in the last mlpconv layer. Instead of adding fully connected layers on top of the feature maps, we take the average of each feature map, and the resulting vector is fed directly into the [softmax](https://paperswithcode.com/method/softmax) layer.
One advantage of global [average pooling](https://paperswithcode.com/method/average-pooling) over the fully connected layers is that it is more native to the [convolution](https://paperswithcode.com/method/convolution) structure by enforcing correspondences between feature maps and categories. Thus the feature maps can be easily interpreted as categories confidence maps. Another advantage is that there is no parameter to optimize in the global average pooling thus overfitting is avoided at this layer. Furthermore, global average pooling sums out the spatial information, thus it is more robust to spatial translations of the input. |
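A minimal sketch of the operation on toy feature maps, one map per class:

```python
def global_average_pool(feature_maps):
    # Collapse each H x W feature map to a single scalar: its mean.
    return [sum(map(sum, fm)) / (len(fm) * len(fm[0])) for fm in feature_maps]

# Three 2x2 feature maps, one per category (toy values).
maps = [
    [[1.0, 3.0], [5.0, 7.0]],
    [[0.0, 0.0], [0.0, 4.0]],
    [[2.0, 2.0], [2.0, 2.0]],
]
pooled = global_average_pool(maps)  # this vector is fed to the softmax
```

Note the layer itself has no trainable parameters and the output length equals the number of feature maps, regardless of their spatial size.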
Given the following machine learning model name: Prediction-aware One-To-One, provide a description of the model | **Prediction-aware One-To-One**, or **POTO**, is an assignment rule for object detection which dynamically assigns the foreground samples according to the quality of classification and regression simultaneously. |
Given the following machine learning model name: ALDA, provide a description of the model | **Adversarial-Learned Loss for Domain Adaptation** is a method for domain adaptation that combines adversarial learning with self-training. Specifically, the domain discriminator has to produce different corrected labels for different domains, while the feature generator aims to confuse the domain discriminator. The adversarial process finally leads to a proper confusion matrix on the target domain. In this way, ALDA takes the strengths of domain-adversarial learning and self-training based methods. |
Given the following machine learning model name: Firefly algorithm, provide a description of the model | Metaheuristic algorithm |
Given the following machine learning model name: Graph Convolutional Networks for Fake News Detection, provide a description of the model | Social media are nowadays one of the main news sources for millions of people around the globe due to their low cost, easy access and rapid dissemination. This however comes at the cost of dubious trustworthiness and significant risk of exposure to 'fake news', intentionally written to mislead the readers. Automatically detecting fake news poses challenges that defy existing content-based analysis approaches. One of the main reasons is that often the interpretation of the news requires the knowledge of political or social context or 'common sense', which current NLP algorithms are still missing. Recent studies have shown that fake and real news spread differently on social media, forming propagation patterns that could be harnessed for automatic fake news detection. Propagation-based approaches have multiple advantages compared to their content-based counterparts, among which are language independence and better resilience to adversarial attacks. In this paper we present a novel automatic fake news detection model based on geometric deep learning. The underlying core algorithms are a generalization of classical CNNs to graphs, allowing the fusion of heterogeneous data such as content, user profile and activity, social graph, and news propagation. Our model was trained and tested on news stories, verified by professional fact-checking organizations, that were spread on Twitter. Our experiments indicate that social network structure and propagation are important features allowing highly accurate (92.7% ROC AUC) fake news detection. Second, we observe that fake news can be reliably detected at an early stage, after just a few hours of propagation. Third, we test the aging of our model on training and testing data separated in time. 
Our results point to the promise of propagation-based approaches for fake news detection as an alternative or complementary strategy to content-based approaches. |
Given the following machine learning model name: GAN Hinge Loss, provide a description of the model | The **GAN Hinge Loss** is a hinge loss based loss function for [generative adversarial networks](https://paperswithcode.com/methods/category/generative-adversarial-networks):
$$ L\_{D} = -\mathbb{E}\_{\left(x, y\right)\sim{p}\_{data}}\left[\min\left(0, -1 + D\left(x, y\right)\right)\right] -\mathbb{E}\_{z\sim{p\_{z}}, y\sim{p\_{data}}}\left[\min\left(0, -1 - D\left(G\left(z\right), y\right)\right)\right] $$
$$ L\_{G} = -\mathbb{E}\_{z\sim{p\_{z}}, y\sim{p\_{data}}}D\left(G\left(z\right), y\right) $$ |
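A minimal sketch of both losses over toy batches of discriminator scores (the scores are hypothetical numbers):

```python
def d_hinge_loss(d_real, d_fake):
    # Discriminator: penalize real scores below +1 and fake scores above -1.
    real_term = sum(min(0.0, -1.0 + d) for d in d_real) / len(d_real)
    fake_term = sum(min(0.0, -1.0 - d) for d in d_fake) / len(d_fake)
    return -real_term - fake_term

def g_hinge_loss(d_fake):
    # Generator: push the discriminator's score on fakes up.
    return -sum(d_fake) / len(d_fake)

# Toy discriminator outputs on a batch of two real and two fake samples.
loss_d = d_hinge_loss(d_real=[1.5, 0.5], d_fake=[-2.0, 0.0])
loss_g = g_hinge_loss(d_fake=[-2.0, 0.0])
```

Real samples already scored above the $+1$ margin (and fakes below $-1$) contribute nothing to the discriminator loss, which is the defining property of the hinge formulation.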
Given the following machine learning model name: GoogLeNet, provide a description of the model | **GoogLeNet** is a type of convolutional neural network based on the [Inception](https://paperswithcode.com/method/inception-module) architecture. It utilises Inception modules, which allow the network to choose between multiple convolutional filter sizes in each block. An Inception network stacks these modules on top of each other, with occasional max-pooling layers with stride 2 to halve the resolution of the grid. |
Given the following machine learning model name: Differentiable Neural Architecture Search, provide a description of the model | |
Given the following machine learning model name: Variational Graph Auto Encoder, provide a description of the model | |
Given the following machine learning model name: Diffusion, provide a description of the model | Diffusion models generate samples by gradually removing noise from a signal, and their training objective can be expressed as a reweighted variational lower bound ([Ho et al., 2020](https://arxiv.org/abs/2006.11239)). |
Given the following machine learning model name: COCO-FUNIT, provide a description of the model | **COCO-FUNIT** is few-shot image translation model which computes the style embedding of the example images conditioned on the input image and a new module called the constant style bias. It builds on top of [FUNIT](https://arxiv.org/abs/1905.01723) by identifying the content loss problem and then addressing it with a novel content-conditioned style encoder architecture.
The FUNIT method suffers from the content loss problem: the translation result is not well-aligned with the input image. While a direct theoretical analysis is likely elusive, the authors conduct an empirical study aiming to identify the cause of the content loss problem. In their analysis, the authors show that the FUNIT style encoder produces very different style codes for different crops, suggesting the style code contains other information about the style image, such as the object pose.
To make the style embedding more robust to small variations in the style image, a new style encoder architecture, the Content-Conditioned style encoder (COCO), is introduced. The most distinctive feature of this new encoder is the conditioning on the content image, as illustrated in the top-right of the Figure. Unlike the style encoder in FUNIT, COCO takes both the content and style images as input. With this content-conditioning scheme, a direct feedback path is created during learning to let the content image influence how the style code is computed. It also helps reduce the direct influence of the style image on the extracted style code. |
Given the following machine learning model name: Lookahead, provide a description of the model | **Lookahead** is a type of stochastic optimizer that iteratively updates two sets of weights: "fast" and "slow". Intuitively, the algorithm chooses a search direction by looking ahead at the sequence of *fast weights* generated by another optimizer.
**Algorithm 1** Lookahead Optimizer
**Require** Initial parameters $\phi_0$, objective function $L$
**Require** Synchronization period $k$, slow weights step size $\alpha$, optimizer $A$
**for** $t=1, 2, \dots$
Synchronize parameters $\theta_{t,0} \gets \phi_{t-1}$
**for** $i=1, 2, \dots, k$
sample minibatch of data $d \sim \mathcal{D}$
$\theta_{t,i} \gets \theta_{t,i-1} + A(L, \theta_{t,i-1}, d)$
**endfor**
Perform outer update $\phi_t \gets \phi_{t-1} + \alpha (\theta_{t,k} - \phi_{t-1})$
**endfor**
**return** parameters $\phi$ |
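A minimal scalar sketch of Algorithm 1, with plain SGD standing in for the inner optimizer $A$ and a toy quadratic objective:

```python
def lookahead(phi, grad, inner_lr=0.1, k=5, alpha=0.5, outer_steps=3):
    # Lookahead with SGD as the inner optimizer A (scalar parameter).
    for _ in range(outer_steps):
        theta = phi                        # synchronize fast weights
        for _ in range(k):                 # k fast-weight updates
            theta = theta - inner_lr * grad(theta)
        # Slow-weight (outer) update: step toward the final fast weights.
        phi = phi + alpha * (theta - phi)
    return phi

# Minimize f(x) = x^2, so grad(x) = 2x; the minimum is at 0.
phi = lookahead(phi=1.0, grad=lambda x: 2.0 * x)
```

Each outer step interpolates the slow weights toward wherever the $k$ fast steps ended up, which damps the oscillations of the inner optimizer.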
Given the following machine learning model name: REINFORCE, provide a description of the model | **REINFORCE** is a Monte Carlo variant of a policy gradient algorithm in reinforcement learning. The agent collects samples of an episode using its current policy, and uses them to update the policy parameter $\theta$. Since one full trajectory must be completed before the update, REINFORCE is an on-policy Monte Carlo method.
$$ \nabla\_{\theta}J\left(\theta\right) = \mathbb{E}\_{\pi}\left[G\_{t}\nabla\_{\theta}\ln\pi\_{\theta}\left(A\_{t}\mid{S\_{t}}\right)\right]$$
Image Credit: [Tingwu Wang](http://www.cs.toronto.edu/~tingwuwang/REINFORCE.pdf) |
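As a minimal illustration of the update $\theta \leftarrow \theta + \alpha\, G\_{t} \nabla\_{\theta}\ln\pi\_{\theta}\left(A\_{t}\mid{S\_{t}}\right)$, here is a sketch on a hypothetical two-armed bandit with a softmax policy; the bandit rewards, learning rate, and iteration count are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)                    # action preferences (single state)
true_rewards = np.array([0.0, 1.0])    # hypothetical bandit: arm 1 is better
lr = 0.1

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

for _ in range(2000):
    pi = softmax(theta)
    a = rng.choice(2, p=pi)            # sample an action from the current policy
    G = true_rewards[a]                # return of this one-step "episode"
    grad_log_pi = -pi                  # grad of log pi(a): onehot(a) - pi
    grad_log_pi[a] += 1.0
    theta += lr * G * grad_log_pi      # REINFORCE update

final_pi = softmax(theta)
```

After training, the policy should place most of its probability on the higher-reward arm.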
Given the following machine learning model name: DNN2LR, provide a description of the model | **DNN2LR** is an automatic feature crossing method to find feature interactions in a deep neural network, and use them as cross features in logistic regression. In general, DNN2LR consists of two steps: (1) generating a compact and accurate candidate set of cross feature fields; (2) searching in the candidate set for the final cross feature fields. |
Given the following machine learning model name: Synthesizer, provide a description of the model | The **Synthesizer** is a model that learns synthetic attention weights without token-token interactions. Unlike [Transformers](https://paperswithcode.com/method/transformer), the model eschews not only dot product self-attention but also content-based self-attention altogether. Synthesizer learns to synthesize the self-alignment matrix instead of manually computing pairwise dot products. It is transformation-based, only relies on simple feed-forward layers, and completely dispenses with dot products and explicit token-token interactions.
This new module employed by the Synthesizer is called "Synthetic Attention": a new way of learning to attend without explicitly attending (i.e., without dot product attention or [content-based attention](https://paperswithcode.com/method/content-based-attention)). Instead, the Synthesizer generates the alignment matrix independently of token-token dependencies. |
Given the following machine learning model name: Auxiliary Batch Normalization, provide a description of the model | **Auxiliary Batch Normalization** is a type of regularization used in adversarial training schemes. The idea is that adversarial examples should have separate [batch normalization](https://paperswithcode.com/method/batch-normalization) components from the clean examples, as they have different underlying statistics. |
Given the following machine learning model name: Universal Language Model Fine-tuning, provide a description of the model | **Universal Language Model Fine-tuning**, or **ULMFiT**, is an architecture and transfer learning method that can be applied to NLP tasks. It involves a 3-layer [AWD-LSTM](https://paperswithcode.com/method/awd-lstm) architecture for its representations. The training consists of three steps: 1) general language model pre-training on a Wikipedia-based text, 2) fine-tuning the language model on a target task, and 3) fine-tuning the classifier on the target task.
As different layers capture different types of information, they are fine-tuned to different extents using [discriminative fine-tuning](https://paperswithcode.com/method/discriminative-fine-tuning). Training is performed using [Slanted triangular learning rates](https://paperswithcode.com/method/slanted-triangular-learning-rates) (STLR), a learning rate scheduling strategy that first linearly increases the learning rate and then linearly decays it.
Fine-tuning the target classifier is achieved in ULMFiT using gradual unfreezing. Rather than fine-tuning all layers at once, which risks catastrophic forgetting, ULMFiT gradually unfreezes the model starting from the last layer (i.e., closest to the output) as this contains the least general knowledge. First the last layer is unfrozen and all unfrozen layers are fine-tuned for one epoch. Then the next group of frozen layers is unfrozen and fine-tuned, and this process is repeated until all layers are unfrozen and trained until convergence at the last iteration. |
Given the following machine learning model name: Trust Region Policy Optimization, provide a description of the model | **Trust Region Policy Optimization**, or **TRPO**, is a policy gradient method in reinforcement learning that avoids parameter updates that change the policy too much, enforced with a KL divergence constraint on the size of the policy update at each iteration.
Take the case of off-policy reinforcement learning, where the policy $\beta$ for collecting trajectories on rollout workers is different from the policy $\pi$ to optimize for. The objective function in an off-policy model measures the total advantage over the state visitation distribution and actions, while the mismatch between the training data distribution and the true policy state distribution is compensated with an importance sampling estimator:
$$ J\left(\theta\right) = \sum\_{s\in{S}}p^{\pi\_{\theta\_{old}}}\sum\_{a\in\mathcal{A}}\left(\pi\_{\theta}\left(a\mid{s}\right)\hat{A}\_{\theta\_{old}}\left(s, a\right)\right) $$
$$ J\left(\theta\right) = \sum\_{s\in{S}}p^{\pi\_{\theta\_{old}}}\sum\_{a\in\mathcal{A}}\left(\beta\left(a\mid{s}\right)\frac{\pi\_{\theta}\left(a\mid{s}\right)}{\beta\left(a\mid{s}\right)}\hat{A}\_{\theta\_{old}}\left(s, a\right)\right) $$
$$ J\left(\theta\right) = \mathbb{E}\_{s\sim{p}^{\pi\_{\theta\_{old}}}, a\sim{\beta}} \left(\frac{\pi\_{\theta}\left(a\mid{s}\right)}{\beta\left(a\mid{s}\right)}\hat{A}\_{\theta\_{old}}\left(s, a\right)\right)$$
When training on-policy, theoretically the policy for collecting data is the same as the policy that we want to optimize. However, when rollout workers and optimizers are running in parallel asynchronously, the behavior policy can get stale. TRPO accounts for this subtle difference: it labels the behavior policy as $\pi\_{\theta\_{old}}\left(a\mid{s}\right)$ and thus the objective function becomes:
$$ J\left(\theta\right) = \mathbb{E}\_{s\sim{p}^{\pi\_{\theta\_{old}}}, a\sim{\pi\_{\theta\_{old}}}} \left(\frac{\pi\_{\theta}\left(a\mid{s}\right)}{\pi\_{\theta\_{old}}\left(a\mid{s}\right)}\hat{A}\_{\theta\_{old}}\left(s, a\right)\right)$$
TRPO aims to maximize the objective function $J\left(\theta\right)$ subject to a trust region constraint which enforces the distance between old and new policies measured by KL-divergence to be small enough, within a parameter $\delta$:
$$ \mathbb{E}\_{s\sim{p}^{\pi\_{\theta\_{old}}}} \left[D\_{KL}\left(\pi\_{\theta\_{old}}\left(.\mid{s}\right)\mid\mid\pi\_{\theta}\left(.\mid{s}\right)\right)\right] \leq \delta$$ |
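The surrogate objective and trust-region check can be illustrated numerically. The categorical policies and advantage values below are made-up numbers for a single state, not learned quantities:

```python
import numpy as np

# Hypothetical categorical policies over 3 actions in a single state.
pi_old = np.array([0.5, 0.3, 0.2])
pi_new = np.array([0.4, 0.4, 0.2])
advantages = np.array([1.0, -0.5, 0.2])   # A_hat under the old policy

# Surrogate objective: E_{a ~ pi_old}[ pi_new(a)/pi_old(a) * A_hat(a) ]
ratio = pi_new / pi_old
surrogate = np.sum(pi_old * ratio * advantages)

# Trust-region constraint: KL(pi_old || pi_new) <= delta
kl = np.sum(pi_old * np.log(pi_old / pi_new))
delta = 0.01
within_trust_region = kl <= delta
```

Here the candidate update would be rejected (or scaled back via line search) because the KL divergence exceeds the trust-region radius $\delta$.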
Given the following machine learning model name: Gated Convolution, provide a description of the model | A **Gated Convolution** is a type of temporal [convolution](https://paperswithcode.com/method/convolution) with a gating mechanism. Zero-padding is used to ensure that future context cannot be seen. |
Given the following machine learning model name: Medical Entity Disambiguation using Graph Neural Networks, provide a description of the model | |
Given the following machine learning model name: Faster R-CNN, provide a description of the model | **Faster R-CNN** is an object detection model that improves on [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) by utilising a region proposal network ([RPN](https://paperswithcode.com/method/rpn)) with the CNN model. The RPN shares full-image convolutional features with the detection network, enabling nearly cost-free region proposals. It is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) for detection. RPN and Fast [R-CNN](https://paperswithcode.com/method/r-cnn) are merged into a single network by sharing their convolutional features: the RPN component tells the unified network where to look.
As a whole, Faster R-CNN consists of two modules. The first module is a deep fully convolutional network that proposes regions, and the second module is the Fast R-CNN detector that uses the proposed regions. |
Given the following machine learning model name: Collaborative Distillation, provide a description of the model | **Collaborative Distillation** is a knowledge distillation method for encoder-decoder based neural style transfer that reduces the number of convolutional filters. The main idea is underpinned by the finding that the encoder-decoder pairs construct an exclusive collaborative relationship, which is regarded as a new kind of knowledge for style transfer models. |
Given the following machine learning model name: Activation Normalization, provide a description of the model | **Activation Normalization** is a type of normalization used for flow-based generative models; specifically it was introduced in the [GLOW](https://paperswithcode.com/method/glow) architecture. An ActNorm layer performs an affine transformation of the activations using a scale and bias parameter per channel, similar to [batch normalization](https://paperswithcode.com/method/batch-normalization). These parameters are initialized such that the post-actnorm activations per-channel have zero mean and unit variance given an initial minibatch of data. This is a form of data-dependent initialization. After initialization, the scale and bias are treated as regular trainable parameters that are independent of the data. |
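A minimal numpy sketch of the data-dependent initialization; the `ActNorm` class and the stabilizing epsilon are illustrative, not the GLOW implementation:

```python
import numpy as np

class ActNorm:
    """Per-channel affine with data-dependent initialization (a sketch)."""
    def __init__(self):
        self.scale = None
        self.bias = None

    def __call__(self, x):
        # x: (batch, channels). Initialize from the first minibatch so that
        # post-actnorm activations have zero mean and unit variance.
        if self.scale is None:
            self.bias = -x.mean(axis=0)
            self.scale = 1.0 / (x.std(axis=0) + 1e-6)
        return self.scale * (x + self.bias)

rng = np.random.default_rng(0)
layer = ActNorm()
first_batch = rng.normal(loc=3.0, scale=2.0, size=(256, 4))
out = layer(first_batch)   # first call triggers the initialization
```

After this first call, `scale` and `bias` would simply become trainable parameters, decoupled from the data statistics.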
Given the following machine learning model name: Gaussian Mixture Variational Autoencoder, provide a description of the model | **GMVAE**, or **Gaussian Mixture Variational Autoencoder**, is a stochastic regularization layer for [transformers](https://paperswithcode.com/methods/category/transformers). A GMVAE layer is trained using a 700-dimensional internal representation of the first MLP layer. For every output from the first MLP layer, the GMVAE layer first computes a latent low-dimensional representation by sampling from the GMVAE posterior distribution, and then provides at the output a reconstruction sampled from a generative model. |
Given the following machine learning model name: Vision Transformer, provide a description of the model | The **Vision Transformer**, or **ViT**, is a model for image classification that employs a [Transformer](https://paperswithcode.com/method/transformer)-like architecture over patches of the image. An image is split into fixed-size patches, each of them are then linearly embedded, position embeddings are added, and the resulting sequence of vectors is fed to a standard [Transformer](https://paperswithcode.com/method/transformer) encoder. In order to perform classification, the standard approach of adding an extra learnable “classification token” to the sequence is used. |
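The patch-splitting and linear-embedding step can be sketched with numpy. The image size, patch size, and embedding dimension below are arbitrary, and position embeddings and the classification token are omitted:

```python
import numpy as np

def image_to_patch_embeddings(img, patch, w_embed):
    """Split an (H, W, C) image into fixed-size patches and embed each one."""
    h, w, c = img.shape
    assert h % patch == 0 and w % patch == 0
    patches = (img.reshape(h // patch, patch, w // patch, patch, c)
                  .transpose(0, 2, 1, 3, 4)            # group by patch location
                  .reshape(-1, patch * patch * c))     # flatten each patch
    return patches @ w_embed                           # (num_patches, d_model)

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8, 3))           # toy "image"
w_embed = rng.normal(size=(4 * 4 * 3, 16)) # linear projection to d_model = 16
tokens = image_to_patch_embeddings(img, 4, w_embed)
```

The resulting token sequence (plus position embeddings and a class token) is what ViT feeds to a standard Transformer encoder.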
Given the following machine learning model name: Differentiable Architecture Search Max-W, provide a description of the model | Like [DARTS](https://paperswithcode.com/method/darts), except subtract the max weight gradients.
Max-W Weighting:
$$ \text{output}\_{i} = \left(1 - \max\left(\mathbf{w}\right) + w\_{i}\right) \cdot \text{op}\_{i}\left(\text{input}\_{i}\right) $$ |
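A small numpy sketch of the Max-W weighting; the weights and op outputs are placeholders. Note the highest-weight op receives coefficient exactly 1:

```python
import numpy as np

def max_w_outputs(w, op_outputs):
    """Max-W mixing: output_i = (1 - max(w) + w_i) * op_i(input_i)."""
    w = np.asarray(w, dtype=float)
    coeffs = 1.0 - w.max() + w                 # max-weight op gets coefficient 1
    return coeffs[:, None] * np.asarray(op_outputs, dtype=float)

w = np.array([0.5, 0.3, 0.2])                  # architecture weights per op
op_outputs = np.ones((3, 4))                   # placeholder op_i(input_i) values
mixed = max_w_outputs(w, op_outputs)
```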
Given the following machine learning model name: LeVIT, provide a description of the model | **LeViT** is a hybrid neural network for fast-inference image classification. LeViT is a stack of [transformer blocks](https://paperswithcode.com/method/transformer), with [pooling steps](https://paperswithcode.com/methods/category/pooling-operation) to reduce the resolution of the activation maps as in classical [convolutional architectures](https://paperswithcode.com/methods/category/convolutional-neural-networks). This replaces the uniform structure of a Transformer by a pyramid with pooling, similar to the [LeNet](https://paperswithcode.com/method/lenet) architecture. |
Given the following machine learning model name: Randomized Leaky Rectified Linear Units, provide a description of the model | **Randomized Leaky Rectified Linear Units**, or **RReLU**, are an activation function that randomly samples the negative slope for activation values. It was first proposed and used in the Kaggle NDSB Competition. During training, $a\_{ji}$ is a random number sampled from a uniform distribution $U\left(l, u\right)$. Formally:
$$ y\_{ji} = x\_{ji} \text{ if } x\_{ji} \geq{0} $$
$$ y\_{ji} = a\_{ji}x\_{ji} \text{ if } x\_{ji} < 0 $$
where
$$a\_{ji} \sim U\left(l, u\right), l < u \text{ and } l, u \in \left[0,1\right)$$
In the test phase, we take the average of all the $a\_{ji}$ seen in training, similar to [dropout](https://paperswithcode.com/method/dropout), and thus set $a\_{ji}$ to $\frac{l+u}{2}$ to get a deterministic result. In the NDSB competition winner's formulation, the negative slope is instead $\frac{1}{a\_{ji}}$, with $a\_{ji}$ sampled from $U\left(3, 8\right)$; in that convention, at test time we use:
$$ y\_{ji} = \frac{x\_{ji}}{\frac{l+u}{2}} $$ |
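A sketch of RReLU in the NDSB convention, where the negative slope is $1/a\_{ji}$ and the test-time rule is $y\_{ji} = x\_{ji} / \frac{l+u}{2}$; the function name and defaults are illustrative:

```python
import numpy as np

def rrelu(x, l=3.0, u=8.0, training=True, rng=None):
    """RReLU, NDSB convention: negative values are scaled by 1/a, a ~ U(l, u)."""
    x = np.asarray(x, dtype=float)
    if training:
        rng = rng or np.random.default_rng()
        a = rng.uniform(l, u, size=x.shape)      # fresh random slope per unit
    else:
        a = (l + u) / 2.0                        # deterministic test-time divisor
    return np.where(x >= 0, x, x / a)

out = rrelu(np.array([-2.0, 4.0]), training=False)
```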
Given the following machine learning model name: Pathways Language Model, provide a description of the model | **PaLM** (**Pathways Language Model**) uses a standard Transformer model architecture (Vaswani et al., 2017) in a decoder-only setup (i.e., each timestep can only attend to itself and past timesteps), with several modifications. PaLM is trained as a 540 billion parameter, densely activated, autoregressive Transformer on 780 billion tokens. PaLM leverages Pathways (Barham et al., 2022), which enables highly efficient training of very large neural networks across thousands of accelerator chips.
Image credit: [PaLM: Scaling Language Modeling with Pathways](https://paperswithcode.com/paper/palm-scaling-language-modeling-with-pathways-1) |
Given the following machine learning model name: TraDeS, provide a description of the model | **TraDeS** is an online joint detection and tracking model, coined as TRAck to DEtect and Segment, exploiting tracking clues to assist detection end-to-end. TraDeS infers object tracking offset by a cost volume, which is used to propagate previous object features for improving current object detection and segmentation. |
Given the following machine learning model name: Policy Similarity Metric, provide a description of the model | **Policy Similarity Metric**, or **PSM**, is a similarity metric for measuring behavioral similarity between states in reinforcement learning. It assigns high similarity to states for which the optimal policies in those states as well as in future states are similar. PSM is reward-agnostic, making it more robust for generalization compared to approaches that rely on reward information. |
Given the following machine learning model name: E-swish, provide a description of the model | |
Given the following machine learning model name: Geometric Manifold Component Estimator, provide a description of the model | **Geomancer** is a nonparametric algorithm for symmetry-based disentangling of data manifolds. It learns a set of subspaces to assign to each point in the dataset, where each subspace is the tangent space of one disentangled submanifold. This means that Geomancer can be used to disentangle manifolds for which there may not be a global axis-aligned coordinate system. |
Given the following machine learning model name: SKNet, provide a description of the model | **SKNet** is a type of convolutional neural network that employs [selective kernel](https://paperswithcode.com/method/selective-kernel) units, with selective kernel convolutions, in its architecture. This allows for a type of attention where the network can learn to attend to different receptive fields. |
Given the following machine learning model name: Vision-aided GAN, provide a description of the model | Vision-aided GAN training involves using pretrained computer vision models in an ensemble of discriminators to improve GAN performance. Linear separability between real and fake samples in pretrained model embeddings is used as a measure to choose the most accurate pretrained models for a dataset. |
Given the following machine learning model name: SNIP, provide a description of the model | **SNIP**, or **Scale Normalization for Image Pyramids**, is a multi-scale training scheme that selectively back-propagates the gradients of object instances of different sizes as a function of the image scale. SNIP is a modified version of multi-scale training (MST) where only the object instances whose resolution is close to that of the pre-training dataset, typically 224x224, are used for training the detector. In MST, each image is observed at different resolutions; therefore, at a high resolution (like 1400x2000) large objects are hard to classify and at a low resolution (like 480x800) small objects are hard to classify. Fortunately, each object instance appears at several different scales, and some of those appearances fall in the desired scale range. In order to eliminate extreme-scale objects, either too large or too small, training is only performed on objects that fall in the desired scale range, and the remainder are simply ignored during back-propagation. Effectively, SNIP uses all the object instances during training, which helps capture all the variations in appearance and pose, while reducing the domain shift in the scale-space for the pre-trained network. |
Given the following machine learning model name: LocalViT, provide a description of the model | **LocalViT** introduces depthwise convolutions to enhance the local feature modeling capability of ViTs. The network, as shown in Figure (c), brings a locality mechanism into transformers through depth-wise convolution (denoted by "DW"). To accommodate the convolution operation, conversion between the token sequence and the image feature map is added via "Seq2Img" and "Img2Seq". The computation is as follows:
$$
\mathbf{Y}^{r}=f\left(f\left(\mathbf{Z}^{r} \circledast \mathbf{W}_{1}^{r} \right) \circledast \mathbf{W}_d \right) \circledast \mathbf{W}_2^{r}
$$
where $\mathbf{W}_{d} \in \mathbb{R}^{\gamma d \times 1 \times k \times k}$ is the kernel of the depth-wise convolution.
The input (a sequence of tokens) is first reshaped to a feature map arranged on a 2D lattice. Two pointwise convolutions along with a depth-wise convolution are applied to the feature map. The feature map is then reshaped back to a sequence of tokens, which is used by the self-attention of the next transformer layer. |
Given the following machine learning model name: TILDEv2, provide a description of the model | **TILDEv2** is a [BERT](https://paperswithcode.com/method/bert)-based re-ranking method that stems from [TILDE](https://dl.acm.org/doi/abs/10.1145/3404835.3462922) but addresses its limitations. It relies on contextualized exact term matching with expanded passages. This requires storing in the index only the scores of tokens that appear in the expanded passages (rather than the whole vocabulary), thus producing indexes that are 99% smaller than those of the original TILDE.
Specifically, TILDE is modified in the following aspects:
- **Exact Term Matching**. The query likelihood matching originally employed in TILDE expands passages into the BERT vocabulary, resulting in large indexes. To overcome this issue, relevance scores are estimated with contextualized exact term matching. This allows the model to index only tokens present in the passage, thus reducing the index size. In addition, the query likelihood loss function is replaced with the noise contrastive estimation (NCE) loss, which allows the model to better leverage negative training samples.
- **Passage Expansion**. To overcome the vocabulary mismatch problem that affects exact term matching methods, passage expansion is used to expand the original passage collection. Passages in the collection are expanded using deep LMs with a limited number of tokens. This requires TILDEv2 to only index a few extra tokens in addition to those in the original passages. |
Given the following machine learning model name: Population Based Training, provide a description of the model | **Population Based Training**, or **PBT**, is an optimization method for finding parameters and hyperparameters, and extends upon parallel search methods and sequential optimisation methods.
It leverages information sharing across a population of concurrently running optimisation processes, and allows for online propagation/transfer of parameters and hyperparameters between members of the population based on their performance. Furthermore, unlike most other adaptation schemes, the method is capable of performing online adaptation of hyperparameters -- which can be particularly important in problems with highly non-stationary learning dynamics, such as reinforcement learning settings. PBT is decentralised and asynchronous, although it could also be executed semi-serially or with partial synchrony if there is a binding budget constraint. |
Given the following machine learning model name: Scatter Connection, provide a description of the model | A **Scatter Connection** is a type of connection that allows a vector to be "scattered" onto a layer representing a map, so that a vector at a specific location corresponds to objects of interest at that location (e.g. units in Starcraft II). This allows for the integration of spatial and non-spatial features. |
Given the following machine learning model name: Encoder-Decoder model with local and pairwise loss along with shared encoder and discriminator network (EDLPS), provide a description of the model | In this paper, we propose a novel method for obtaining sentence-level embeddings; while the problem of obtaining word-level embeddings is very well studied, sentence-level embeddings are less explored. The embeddings are obtained by a simple method in the context of solving the paraphrase generation task. If we use a sequential encoder-decoder model for generating paraphrases, we would like the generated paraphrase to be semantically close to the original sentence. One way to ensure this is by adding constraints for true paraphrase embeddings to be close and unrelated paraphrase candidate sentence embeddings to be far. This is ensured by using a sequential pair-wise discriminator that shares weights with the encoder and is trained with a suitable loss function. Our loss function penalizes paraphrase sentence embedding distances from being too large. This loss is used in combination with a sequential encoder-decoder network. We also validate our method by evaluating the obtained embeddings for a sentiment analysis task. The proposed method results in semantic embeddings and provides competitive results on the paraphrase generation and sentiment analysis tasks on standard datasets. These results are also shown to be statistically significant.
Github Link: https://github.com/dev-chauhan/PQG-pytorch
The PQG dataset is available at: https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs
Sentiment analysis dataset: www.kaggle.com/c/sentiment-analysis-on-movie-reviews |
Given the following machine learning model name: ReZero, provide a description of the model | **ReZero** is a [normalization](https://paperswithcode.com/methods/category/normalization) approach that dynamically facilitates well-behaved gradients and arbitrarily deep signal propagation. The idea is simple: ReZero initializes each layer to perform the identity operation. For each layer, a [residual connection](https://paperswithcode.com/method/residual-connection) is introduced for the input signal $\mathbf{x}$ along with one trainable parameter $\alpha$ that modulates the non-trivial transformation of a layer $F(\mathbf{x})$:
$$
\mathbf{x}\_{i+1}=\mathbf{x}\_{i}+\alpha_{i} F\left(\mathbf{x}\_{i}\right)
$$
where $\alpha=0$ at the beginning of training. Initially the gradients for all parameters defining $F$ vanish, but dynamically evolve to suitable values during initial stages of training. The architecture is illustrated in the Figure. |
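A minimal sketch of the ReZero residual update; the wrapper class and the `tanh` transformation standing in for $F$ are illustrative:

```python
import numpy as np

class ReZeroBlock:
    """Residual block computing x + alpha * F(x), with alpha initialized to 0."""
    def __init__(self, f):
        self.f = f
        self.alpha = 0.0          # trainable scalar; zero => identity at init

    def __call__(self, x):
        return x + self.alpha * self.f(x)

block = ReZeroBlock(np.tanh)      # any layer transformation F works here
x = np.array([0.5, -1.0, 2.0])
y0 = block(x)                     # identity mapping at initialization
block.alpha = 0.1                 # alpha would grow during training
y1 = block(x)
```

Because the block is exactly the identity at initialization, gradients propagate unimpeded through arbitrarily many stacked blocks at the start of training.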
Given the following machine learning model name: BERT, provide a description of the model | **BERT**, or Bidirectional Encoder Representations from Transformers, improves upon standard [Transformers](http://paperswithcode.com/method/transformer) by removing the unidirectionality constraint by using a *masked language model* (MLM) pre-training objective. The masked language model randomly masks some of the tokens from the input, and the objective is to predict the original vocabulary id of the masked word based only on its context. Unlike left-to-right language model pre-training, the MLM objective enables the representation to fuse the left and the right context, which allows us to pre-train a deep bidirectional Transformer. In addition to the masked language model, BERT uses a *next sentence prediction* task that jointly pre-trains text-pair representations.
There are two steps in BERT: *pre-training* and *fine-tuning*. During pre-training, the model is trained on unlabeled data over different pre-training tasks. For fine-tuning, the BERT model is first initialized with the pre-trained parameters, and all of the parameters are fine-tuned using labeled data from the downstream tasks. Each downstream task has separate fine-tuned models, even though they
are initialized with the same pre-trained parameters. |
Given the following machine learning model name: Disp R-CNN, provide a description of the model | **Disp R-CNN** is a 3D object detection system for stereo images. It utilizes an instance disparity estimation network (iDispNet) that predicts disparity only for pixels on objects of interest and learns a category-specific shape prior for more accurate disparity estimation. To address the challenge from scarcity of disparity annotation in training, a statistical shape model is used to generate dense disparity pseudo-ground-truth without the need of LiDAR point clouds. |
Given the following machine learning model name: Dutch Eligibility Trace, provide a description of the model | A **Dutch Eligibility Trace** is a type of [eligibility trace](https://paperswithcode.com/method/eligibility-trace) whose trace increments grow less quickly than those of the accumulating eligibility trace (helping to avoid large-variance updates). For the memory vector $\textbf{e}\_{t} \in \mathbb{R}^{b}$, $\textbf{e}\_{t} \geq \textbf{0}$:
$$\mathbf{e\_{0}} = \textbf{0}$$
$$\textbf{e}\_{t} = \gamma\lambda\textbf{e}\_{t-1} + \left(1-\alpha\gamma\lambda\textbf{e}\_{t-1}^{T}\phi\_{t}\right)\phi\_{t}$$ |
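The update above can be sketched directly in numpy; the discount, trace-decay, and step-size values are illustrative:

```python
import numpy as np

def dutch_trace_update(e, phi, gamma=0.99, lam=0.9, alpha=0.1):
    """One dutch eligibility-trace update for feature vector phi."""
    decayed = gamma * lam * e
    # The (1 - alpha * decayed.phi) factor shrinks the increment relative
    # to the accumulating trace, which would simply add phi.
    return decayed + (1.0 - alpha * decayed @ phi) * phi

e = np.zeros(3)
phi = np.array([1.0, 0.0, 1.0])
e = dutch_trace_update(e, phi)    # first visit: e becomes exactly phi
e = dutch_trace_update(e, phi)    # second visit: increment is shrunk
```

After the second update the active components are about 1.713, versus 1.891 for an accumulating trace with the same decay, showing the slower growth.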
Given the following machine learning model name: ALBERT, provide a description of the model | **ALBERT** is a [Transformer](https://paperswithcode.com/method/transformer) architecture based on [BERT](https://paperswithcode.com/method/bert) but with much fewer parameters. It achieves this through two parameter reduction techniques. The first is a factorized embedding parameterization. By decomposing the large vocabulary embedding matrix into two small matrices, the size of the hidden layers is separated from the size of the vocabulary embedding. This makes it easier to grow the hidden size without significantly increasing the parameter size of the vocabulary embeddings. The second technique is cross-layer parameter sharing. This technique prevents the number of parameters from growing with the depth of the network.
Additionally, ALBERT utilises a self-supervised loss for sentence-order prediction (SOP). SOP primarily focuses on inter-sentence coherence and is designed to address the ineffectiveness of the next sentence prediction (NSP) loss proposed in the original BERT. |
Given the following machine learning model name: GhostNet, provide a description of the model | A **GhostNet** is a type of convolutional neural network that is built using Ghost modules, which aim to generate more features by using fewer parameters (allowing for greater efficiency).
GhostNet mainly consists of a stack of Ghost bottlenecks with the Ghost modules as the building block. The first layer is a standard convolutional layer with 16 filters, then a series of Ghost bottlenecks with gradually increased channels follow. These Ghost bottlenecks are grouped into different stages according to the sizes of their input feature maps. All the Ghost bottlenecks are applied with stride=1 except that the last one in each stage is with stride=2. At last a [global average pooling](https://paperswithcode.com/method/global-average-pooling) and a convolutional layer are utilized to transform the feature maps to a 1280-dimensional feature vector for final classification. The squeeze and excite (SE) module is also applied to the residual layer in some ghost bottlenecks.
In contrast to [MobileNetV3](https://paperswithcode.com/method/mobilenetv3), GhostNet does not use [hard-swish](https://paperswithcode.com/method/hard-swish) nonlinearity function due to its large latency. |
Given the following machine learning model name: Target Policy Smoothing, provide a description of the model | **Target Policy Smoothing** is a regularization strategy for the value function in reinforcement learning. Deterministic policies can overfit to narrow peaks in the value estimate, making them highly susceptible to functional approximation error, increasing the variance of the target. To reduce this variance, target policy smoothing adds a small amount of random noise to the target policy and averages over mini-batches - approximating a [SARSA](https://paperswithcode.com/method/sarsa)-like expectation/integral.
The modified target update is:
$$ y = r + \gamma{Q}\_{\theta'}\left(s', \pi\_{\theta'}\left(s'\right) + \epsilon \right) $$
$$ \epsilon \sim \text{clip}\left(\mathcal{N}\left(0, \sigma\right), -c, c \right) $$
where the added noise is clipped to keep the target close to the original action. The outcome is an algorithm reminiscent of [Expected SARSA](https://paperswithcode.com/method/expected-sarsa), where the value estimate is instead learned off-policy and the noise added to the target policy is chosen independently of the exploration policy. The value estimate learned is with respect to a noisy policy defined by the parameter $\sigma$. |
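A sketch of the smoothed target computation; the stand-in target networks `pi` and `q` below are hypothetical deterministic functions, not learned models:

```python
import numpy as np

def smoothed_target(r, s_next, q_target, pi_target, gamma=0.99,
                    sigma=0.2, c=0.5, rng=None):
    """TD3-style smoothed target: y = r + gamma * Q'(s', pi'(s') + eps)."""
    rng = rng or np.random.default_rng()
    eps = np.clip(rng.normal(0.0, sigma), -c, c)   # clipped Gaussian noise
    a_next = pi_target(s_next) + eps               # perturbed target action
    return r + gamma * q_target(s_next, a_next)

# Hypothetical stand-ins for the target policy and target value function.
pi = lambda s: 0.5 * s
q = lambda s, a: s + a
y = smoothed_target(1.0, 2.0, q, pi, rng=np.random.default_rng(0))
```

Averaging such targets over a mini-batch approximates the expectation over the noise distribution, smoothing the value estimate around the target action.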
Given the following machine learning model name: Mixed Attention Block, provide a description of the model | **Mixed Attention Block** is an attention module used in the [ConvBERT](https://paperswithcode.com/method/convbert) architecture. It is a mixture of [self-attention](https://paperswithcode.com/method/scaled) and [span-based dynamic convolution](https://paperswithcode.com/method/span-based-dynamic-convolution) (highlighted in pink). They share the same Query but use different Keys to generate the attention map and [convolution](https://paperswithcode.com/method/convolution) kernel respectively. The number of attention heads is reduced by directly projecting the input to a smaller embedding space to form a bottleneck structure for self-attention and span-based dynamic convolution. Dimensions of the input and output of some blocks are labeled on the top left corner to illustrate the overall framework, where $d$ is the embedding size of the input and $\gamma$ is the reduction ratio. |
Given the following machine learning model name: IMPALA, provide a description of the model | **IMPALA**, or the **Importance Weighted Actor Learner Architecture**, is an off-policy actor-critic framework that decouples acting from learning and learns from experience trajectories using [V-trace](https://paperswithcode.com/method/v-trace). Unlike the popular [A3C](https://paperswithcode.com/method/a3c)-based agents, in which workers communicate gradients with respect to the parameters of the policy to a central parameter server, IMPALA actors communicate trajectories of experience (sequences of states, actions, and rewards) to a centralized learner. Since the learner in IMPALA has access to full trajectories of experience we use a GPU to perform updates on mini-batches of trajectories while aggressively parallelising all time independent operations.
This type of decoupled architecture can achieve very high throughput. However, because the policy used to generate a trajectory can lag behind the policy on the learner by several updates at the time of gradient calculation, learning becomes off-policy. The V-trace off-policy actor-critic algorithm is used to correct for this harmful discrepancy. |
Given the following machine learning model name: Conditional / Rectified flow matching, provide a description of the model | **Conditional Flow Matching (CFM)** is a simulation-free training objective for continuous normalizing flows (CNFs) that enables conditional generative modelling and speeds up both training and inference. |
Given the following machine learning model name: Adaptive Locally Connected Neuron, provide a description of the model | The **Adaptive Locally Connected Neuron (ALCN)** is a topology-aware, locally adaptive neuron:
$$a = f\Bigg( \sum_{i=1}^{m} w_{i}\phi\left( \tau\left(i\right),\Theta\right) x_{i} + b \Bigg)$$ |
Given the following machine learning model name: Visual Commonsense Region-based Convolutional Neural Network, provide a description of the model | **VC R-CNN** is an unsupervised feature representation learning method, which uses a Region-based Convolutional Neural Network ([R-CNN](https://paperswithcode.com/method/r-cnn)) as the visual backbone, and causal intervention as the training objective. Given a set of detected object regions in an image (e.g., using [Faster R-CNN](https://paperswithcode.com/method/faster-r-cnn)), like any other unsupervised feature learning method (e.g., word2vec), the proxy training objective of VC R-CNN is to predict the contextual objects of a region. However, they are fundamentally different: VC R-CNN predicts using causal intervention, P(Y|do(X)), while others use the conventional likelihood, P(Y|X). This is also the core reason why VC R-CNN can learn "sense-making" knowledge, like a chair can be sat on, and not just "common" co-occurrences, such as a chair being likely to exist if a table is observed. |
Given the following machine learning model name: Strip Pooling, provide a description of the model | **Strip Pooling** is a pooling strategy for scene parsing which considers a long but narrow kernel, i.e., $1\times{N}$ or $N\times{1}$. As an alternative to global pooling, strip pooling offers two advantages. First, it deploys a long kernel shape along one spatial dimension and hence enables capturing long-range relations of isolated regions. Second, it keeps a narrow kernel shape along the other spatial dimension, which facilitates capturing local context and prevents irrelevant regions from interfering with the label prediction. Integrating such long but narrow pooling kernels enables scene parsing networks to simultaneously aggregate both global and local context. This is essentially different from traditional spatial pooling, which collects context from a fixed square region. |
Given the following machine learning model name: Linear Combination of Activations, provide a description of the model | The **Linear Combination of Activations**, or **LinComb**, is a type of activation function that has trainable parameters and uses the linear combination of other activation functions.
$$LinComb(x) = \sum\limits_{i=0}^{n} w_i \mathcal{F}_i(x)$$ |
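As an illustrative sketch of the formula above (the base activations $\mathcal{F}\_i$ and weight values below are hypothetical choices; in practice the weights $w\_i$ are trainable parameters learned by gradient descent):

```python
import math

def lincomb(x, weights, activations):
    """Linear combination of base activations: sum_i w_i * F_i(x)."""
    return sum(w * f(x) for w, f in zip(weights, activations))

# Hypothetical base activations F_i and illustrative weight values w_i.
relu = lambda x: max(0.0, x)
tanh = math.tanh
identity = lambda x: x

weights = [0.5, 0.3, 0.2]  # would normally be learned, not fixed
print(lincomb(2.0, weights, [relu, tanh, identity]))
```

Because the weights are differentiable parameters, the network can learn which base activation shape fits each layer.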
Given the following machine learning model name: NoisyNet-Dueling, provide a description of the model | **NoisyNet-Dueling** is a modification of a [Dueling Network](https://paperswithcode.com/method/dueling-network) that utilises noisy linear layers for exploration instead of $\epsilon$-greedy exploration as in the original Dueling formulation. |
Given the following machine learning model name: Temporal Graph Network, provide a description of the model | **Temporal Graph Network**, or **TGN**, is a framework for deep learning on dynamic graphs represented as sequences of timed events. The memory (state) of the model at time $t$ consists of a vector $\mathbf{s}_i(t)$ for each node $i$ the model has seen so far. The memory of a node is updated after an event (e.g. interaction with another node or node-wise change), and its purpose is to represent the node's history in a compressed format. Thanks to this specific module, TGNs have the capability to memorize long term dependencies for each node in the graph. When a new node is encountered, its memory is initialized as the zero vector, and it is then updated for each event involving the node, even after the model has finished training. |
Given the following machine learning model name: RAG, provide a description of the model | **Retriever-Augmented Generation**, or **RAG**, is a type of language generation model that combines pre-trained parametric and non-parametric memory for language generation. Specifically, the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. For query $x$, Maximum Inner Product Search (MIPS) is used to find the top-K documents $z\_{i}$. For final prediction $y$, we treat $z$ as a latent variable and marginalize over seq2seq predictions given different documents. |
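The marginalization over the top-K retrieved documents can be sketched numerically (the retrieval probabilities $p(z\_i|x)$, the per-document generator probabilities $p(y|x, z\_i)$, and the candidate outputs below are made-up toy values, not from any real retriever or seq2seq model):

```python
# Toy marginalization: p(y | x) = sum_i p(z_i | x) * p(y | x, z_i)
retrieval_scores = [0.6, 0.3, 0.1]  # p(z_i | x) for the top-3 documents (illustrative)
per_doc_gen = [                     # p(y | x, z_i) from the generator (illustrative)
    {"paris": 0.9, "lyon": 0.1},
    {"paris": 0.7, "lyon": 0.3},
    {"paris": 0.2, "lyon": 0.8},
]

p_y = {}
for p_z, gen in zip(retrieval_scores, per_doc_gen):
    for y, p in gen.items():
        p_y[y] = p_y.get(y, 0.0) + p_z * p

print(p_y["paris"])  # 0.6*0.9 + 0.3*0.7 + 0.1*0.2 = 0.77
```

Treating the document as a latent variable in this way lets a confidently retrieved document dominate the final prediction while still letting other documents contribute.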
Given the following machine learning model name: Exponential Decay, provide a description of the model | **Exponential Decay** is a learning rate schedule where we decay the learning rate with more iterations using an exponential function:
$$ \text{lr} = \text{lr}\_{0}\exp\left(-kt\right) $$
Image Credit: [Suki Lau](https://towardsdatascience.com/learning-rate-schedules-and-adaptive-learning-rate-methods-for-deep-learning-2c8f433990d1) |
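The schedule above can be evaluated directly (the initial rate `lr0 = 0.1` and decay constant `k = 0.01` below are illustrative values, not from the source):

```python
import math

def exp_decay_lr(lr0, k, t):
    """lr = lr0 * exp(-k * t): learning rate after t iterations."""
    return lr0 * math.exp(-k * t)

# Illustrative: the rate shrinks by a factor of e every 1/k = 100 iterations.
for t in (0, 100, 200):
    print(t, exp_decay_lr(0.1, 0.01, t))
```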
Given the following machine learning model name: CTAL, provide a description of the model | **CTAL** is a pre-training framework for strong audio-and-language representations with a [Transformer](https://paperswithcode.com/method/transformer), which aims to learn the intra-modality and inter-modality connections between audio and language through two proxy tasks on a large number of audio-and-language pairs: masked language modeling and masked cross-modal acoustic modeling. The pre-trained model is a Transformer for audio and language, i.e., CTAL, which consists of two modules: a language-stream encoding module that takes words as input elements, and a text-referred audio-stream encoder module that accepts both frame-level Mel-spectrograms and token-level output embeddings from the language stream. |
Given the following machine learning model name: Neural Image Assessment, provide a description of the model | **Neural Image Assessment (NIMA)** is a convolutional neural network trained to predict the distribution of human opinion scores for the aesthetic and technical quality of an image. In the context of image enhancement, maximizing the NIMA score as a prior can increase the likelihood of enhancing the perceptual quality of an image. |
Given the following machine learning model name: Randomized Smoothing, provide a description of the model | **Randomized Smoothing** is a certified defense that converts any given classifier $f$ into a new smoothed classifier $g$. When queried at a point $x$, the smoothed classifier $g$ outputs the class that is most likely to be returned by $f$ under isotropic Gaussian perturbations of its input, which yields a provable robustness guarantee against $\ell\_{2}$ adversarial perturbations. |
Given the following machine learning model name: DenseNAS-C, provide a description of the model | **DenseNAS-C** is a mobile convolutional neural network discovered through the [DenseNAS](https://paperswithcode.com/method/densenas) [neural architecture search](https://paperswithcode.com/method/neural-architecture-search) method. The basic building blocks are MBConvs, or inverted bottleneck residuals, from the [MobileNet](https://paperswithcode.com/method/mobilenetv2) architectures. |
Given the following machine learning model name: Sliding Window Attention, provide a description of the model | **Sliding Window Attention** is an attention pattern for attention-based models. It was proposed as part of the [Longformer](https://paperswithcode.com/method/longformer) architecture. It is motivated by the fact that non-sparse attention in the original [Transformer](https://paperswithcode.com/method/transformer) formulation has a [self-attention component](https://paperswithcode.com/method/scaled) with $O\left(n^{2}\right)$ time and memory complexity where $n$ is the input sequence length and thus, is not efficient to scale to long inputs. Given the importance of local context, the sliding window attention pattern employs a fixed-size window attention surrounding each token. Using multiple stacked layers of such windowed attention results in a large receptive field, where top layers have access to all input locations and have the capacity to build representations that incorporate information across the entire input.
More formally, in this attention pattern, given a fixed window size $w$, each token attends to $\frac{1}{2}w$ tokens on each side. The computational complexity of this pattern is $O\left(n \times w\right)$, which scales linearly with the input sequence length $n$. To make this attention pattern efficient, $w$ should be small compared with $n$, but a model with multiple stacked transformer layers still has a large receptive field. This is analogous to CNNs, where stacking layers of small kernels leads to high-level features built from a large portion of the input (the receptive field). In this case, with a transformer of $l$ layers, the receptive field size is $l \times w$ (assuming $w$ is fixed for all layers). Depending on the application, it might be helpful to use different values of $w$ for each layer to balance between efficiency and model representation capacity. |
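A minimal sketch of the windowed pattern in plain Python (the toy 2-d query/key/value vectors and window size are hypothetical; a real implementation would use batched tensor operations and a custom kernel):

```python
import math

def sliding_window_attention(q, k, v, w):
    """Each query i attends only to keys within w // 2 positions on each
    side, so the total work is O(n * w) rather than O(n^2)."""
    n, half = len(q), w // 2
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        # Dot-product scores restricted to the local window.
        scores = [sum(qe * ke for qe, ke in zip(q[i], k[j])) for j in range(lo, hi)]
        m = max(scores)  # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        attn = [e / z for e in exps]
        out.append([sum(a * v[j][d] for a, j in zip(attn, range(lo, hi)))
                    for d in range(len(v[0]))])
    return out

# Toy sequence of 5 tokens with 2-d vectors; w = 2 means one neighbour per side.
q = k = v = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0], [1.0, 0.0]]
out = sliding_window_attention(q, k, v, 2)
```

Each output row is a convex combination of the value vectors inside its window, so information still propagates across the full sequence once several such layers are stacked.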
Given the following machine learning model name: Denoised Smoothing, provide a description of the model | **Denoised Smoothing** is a method for obtaining a provably robust classifier from a fixed pretrained one, without any additional training or fine-tuning of the latter. The basic idea is to prepend a custom-trained denoiser before the pretrained classifier, and then apply randomized smoothing. Randomized smoothing is a certified defense that converts any given classifier $f$ into a new smoothed classifier $g$ that is characterized by a non-linear Lipschitz property. When queried at a point $x$, the smoothed classifier $g$ outputs the class that is most likely to be returned by $f$ under isotropic Gaussian perturbations of its inputs. Unfortunately, randomized smoothing requires that the underlying classifier $f$ is robust to relatively large random Gaussian perturbations of the input, which is not the case for off-the-shelf pretrained models. By applying the custom-trained denoiser to the classifier $f$, we can effectively make $f$ robust to such Gaussian perturbations, thereby making it "suitable" for randomized smoothing. |
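The randomized-smoothing step itself can be sketched with a toy 1-d example (the `base_classifier`, noise level `sigma`, and sample count below are hypothetical, and the denoiser stage is omitted for brevity):

```python
import random

def base_classifier(x):
    # Stand-in for a pretrained classifier on 1-d inputs (hypothetical).
    return 1 if x > 0.0 else 0

def smoothed_classifier(f, x, sigma=0.5, n_samples=1000, seed=0):
    """Return the class most often predicted by f under isotropic
    Gaussian perturbations of x (a Monte Carlo estimate of g(x))."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_samples):
        c = f(x + rng.gauss(0.0, sigma))
        counts[c] = counts.get(c, 0) + 1
    return max(counts, key=counts.get)

print(smoothed_classifier(base_classifier, 1.0))   # far from the boundary
print(smoothed_classifier(base_classifier, -1.0))  # other side of the boundary
```

In Denoised Smoothing, `f` would be the composition of the custom-trained denoiser and the fixed pretrained classifier, so that the Gaussian-perturbed inputs are cleaned before classification.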
Given the following machine learning model name: Dilated Bottleneck with Projection Block, provide a description of the model | **Dilated Bottleneck with Projection Block** is an image model block used in the [DetNet](https://paperswithcode.com/method/detnet) convolutional neural network architecture. It employs a bottleneck structure with dilated convolutions to efficiently enlarge the receptive field. It uses a [1x1 convolution](https://paperswithcode.com/method/1x1-convolution) to ensure the spatial size stays fixed. |
Given the following machine learning model name: VoVNet, provide a description of the model | **VoVNet** is a convolutional neural network that seeks to make [DenseNet](https://paperswithcode.com/method/densenet) more efficient by concatenating all features only once, in the last feature map, which keeps the input size constant and enables enlarging the number of output channels. In the Figure to the right, $F$ represents a [convolution](https://paperswithcode.com/method/convolution) layer and $\otimes$ indicates concatenation. |
Given the following machine learning model name: 1x1 Convolution, provide a description of the model | A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.
Image Credit: [http://deeplearning.ai](http://deeplearning.ai) |
Given the following machine learning model name: Guided Anchoring, provide a description of the model | **Guided Anchoring** is an anchoring scheme for object detection which leverages semantic features to guide the anchoring. The method is motivated by the observation that objects are not distributed evenly over the image. The scale of an object is also closely related to the imagery content, its location and geometry of the scene. Following this intuition, the method generates sparse anchors in two steps: first identifying sub-regions that may contain objects and then determining the shapes at different locations. |
Given the following machine learning model name: Self-Attention Guidance, provide a description of the model |