| prompts | description |
|---|---|
Given the following machine learning model name: MCKERNEL, provide a description of the model | McKernel introduces a framework to use kernel approximations in the mini-batch setting with Stochastic Gradient Descent ([SGD](https://paperswithcode.com/method/sgd)) as an alternative to Deep Learning.
The core library was developed in 2014 as an integral part of a Master of Science thesis [1,2] at Carnegie Mellon and City University of Hong Kong. The original intent was to speed up Random Kitchen Sinks (Rahimi and Recht, 2007) by writing a very efficient Hadamard transform, which was the main bottleneck of the construction. The code was later expanded at ETH Zürich (in McKernel by Curtó et al. 2017) into a framework that could explain both Kernel Methods and Neural Networks. This manuscript and the corresponding theses constitute one of the first uses (if not the first) of Fourier features together with Deep Learning in the literature, a combination that later gained considerable research traction and interest in the community.
More information can be found in this presentation that the first author gave at ICLR 2020 [iclr2020_DeCurto](https://www.decurto.tw/c/iclr2020_DeCurto.pdf).
[1] [https://www.curto.hk/c/decurto.pdf](https://www.curto.hk/c/decurto.pdf)
[2] [https://www.zarza.hk/z/dezarza.pdf](https://www.zarza.hk/z/dezarza.pdf) |
Given the following machine learning model name: A2C, provide a description of the model | **A2C**, or **Advantage Actor Critic**, is a synchronous version of the [A3C](https://paperswithcode.com/method/a3c) policy gradient method. As an alternative to the asynchronous implementation of A3C, A2C is a synchronous, deterministic implementation that waits for each actor to finish its segment of experience before updating, averaging over all of the actors. This more effectively uses GPUs due to larger batch sizes.
Image Credit: [OpenAI Baselines](https://openai.com/blog/baselines-acktr-a2c/) |
Given the following machine learning model name: Inverted Residual Block, provide a description of the model | An **Inverted Residual Block**, sometimes called an **MBConv Block**, is a type of residual block used for image models that uses an inverted structure for efficiency reasons. It was originally proposed for the [MobileNetV2](https://paperswithcode.com/method/mobilenetv2) CNN architecture. It has since been reused for several mobile-optimized CNNs.
A traditional [Residual Block](https://paperswithcode.com/method/residual-block) has a wide -> narrow -> wide structure with the number of channels. The input has a high number of channels, which are compressed with a [1x1 convolution](https://paperswithcode.com/method/1x1-convolution). The number of channels is then increased again with a 1x1 [convolution](https://paperswithcode.com/method/convolution) so input and output can be added.
In contrast, an Inverted Residual Block follows a narrow -> wide -> narrow approach, hence the inversion. We first widen with a 1x1 convolution, then use a 3x3 [depthwise convolution](https://paperswithcode.com/method/depthwise-convolution) (which greatly reduces the number of parameters), then we use a 1x1 convolution to reduce the number of channels so input and output can be added. |
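The narrow -> wide -> narrow pattern can be sketched in NumPy. This is an illustrative toy (stride 1, equal input/output channels, random weights), not the MobileNetV2 implementation:

```python
import numpy as np

def conv1x1(x, w):
    # x: (C_in, H, W), w: (C_out, C_in); a 1x1 convolution is per-pixel channel mixing
    return np.einsum('oc,chw->ohw', w, x)

def depthwise3x3(x, w):
    # x: (C, H, W), w: (C, 3, 3); 'same' padding, one filter per channel
    C, H, W = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(xp[c, i:i + 3, j:j + 3] * w[c])
    return out

def inverted_residual(x, expand=6):
    # narrow -> wide -> narrow, with a residual add (stride 1, equal channels)
    C = x.shape[0]
    rng = np.random.default_rng(0)
    w_expand = rng.standard_normal((C * expand, C)) * 0.1   # 1x1 expansion
    w_dw = rng.standard_normal((C * expand, 3, 3)) * 0.1    # 3x3 depthwise
    w_project = rng.standard_normal((C, C * expand)) * 0.1  # 1x1 projection
    h = np.maximum(conv1x1(x, w_expand), 0)    # widen + ReLU
    h = np.maximum(depthwise3x3(h, w_dw), 0)   # cheap spatial filtering
    h = conv1x1(h, w_project)                  # narrow again (linear)
    return x + h                               # input and output widths match

x = np.random.default_rng(1).standard_normal((8, 6, 6))
y = inverted_residual(x)
```

The depthwise stage uses one 3x3 filter per channel, so its parameter count grows linearly rather than quadratically in the expanded width, which is where the efficiency comes from.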
Given the following machine learning model name: WaveGlow, provide a description of the model | **WaveGlow** is a flow-based generative model that generates audio by sampling from a distribution. Specifically, samples are taken from a zero-mean spherical Gaussian with the same number of dimensions as the desired output, and those samples are put through a series of layers that transforms this simple distribution into one with the desired distribution. |
Given the following machine learning model name: Dreamix: video diffusion models are general video editors, provide a description of the model | |
Given the following machine learning model name: Sliced Iterative Generator, provide a description of the model | The **Sliced Iterative Generator (SIG)** is an iterative generative model that is a Normalizing Flow (NF) but shares the advantages of Generative Adversarial Networks (GANs). The model is based on iterative Optimal Transport of a series of 1D slices through the data space, matching on each slice the probability distribution function (PDF) of the samples to that of the data. To improve efficiency, the directions of the orthogonal slices are chosen to maximize the PDF difference between the generated samples and the data using the Wasserstein distance at each iteration. A patch-based approach is adopted to model the images in a hierarchical way, enabling the model to scale well to high dimensions.
Unlike GANs, SIG has a NF structure and allows efficient likelihood evaluations that can be used in downstream tasks. While SIG has a deep neural network architecture, the approach deviates significantly from the current deep learning paradigm, as it does not use concepts such as mini-batching, stochastic gradient descent, gradient back-propagation through deep layers, or non-convex loss function optimization. SIG is very insensitive to hyper-parameter tuning, making it a useful generator tool for ML experts and non-experts alike. |
Given the following machine learning model name: Virtual Data Augmentation, provide a description of the model | **Virtual Data Augmentation**, or **VDA**, is a framework for robustly fine-tuning pre-trained language models. Based on the original token embeddings, a multinomial mixture for augmenting virtual data is constructed, where a masked language model guarantees semantic relevance and Gaussian noise provides augmentation diversity. Furthermore, a regularized training strategy is proposed to balance the two aspects. |
Given the following machine learning model name: Bilateral Grid, provide a description of the model | The bilateral grid is a data structure that enables fast edge-aware image processing, supporting manipulations such as local tone mapping on high-resolution images in real time.
Source: [Chen et al.](https://people.csail.mit.edu/sparis/publi/2007/siggraph/Chen_07_Bilateral_Grid.pdf)
Image source: [Chen et al.](https://people.csail.mit.edu/sparis/publi/2007/siggraph/Chen_07_Bilateral_Grid.pdf) |
Given the following machine learning model name: Attentional Liquid Warping GAN, provide a description of the model | **Attentional Liquid Warping GAN** is a type of generative adversarial network for human image synthesis that utilizes an [AttLWB](https://paperswithcode.com/method/attlwb) block and a 3D body mesh recovery module that disentangles pose and shape. To preserve source information, such as texture, style, color, and face identity, the Attentional Liquid Warping GAN with AttLWB propagates the source information in both image and feature spaces to the synthesized reference. |
Given the following machine learning model name: Inception-ResNet-v2, provide a description of the model | **Inception-ResNet-v2** is a convolutional neural architecture that builds on the Inception family of architectures but incorporates [residual connections](https://paperswithcode.com/method/residual-connection) (replacing the filter concatenation stage of the Inception architecture). |
Given the following machine learning model name: Switchable Normalization, provide a description of the model | **Switchable Normalization** combines three types of statistics estimated channel-wise, layer-wise, and minibatch-wise by using [instance normalization](https://paperswithcode.com/method/instance-normalization), [layer normalization](https://paperswithcode.com/method/layer-normalization), and [batch normalization](https://paperswithcode.com/method/batch-normalization) respectively. [Switchable Normalization](https://paperswithcode.com/method/switchable-normalization) switches among them by learning their importance weights. |
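A minimal NumPy sketch of the idea, assuming softmax-normalized importance weights shared across channels (the `w_mean`/`w_var` logits below are hypothetical stand-ins for the learned parameters, and the learnable scale/shift is omitted):

```python
import numpy as np

def switchable_norm(x, w_mean, w_var, eps=1e-5):
    # x: (N, C, H, W); w_mean, w_var: raw logits of length 3 for (IN, LN, BN)
    mu_in = x.mean(axis=(2, 3), keepdims=True)      # instance norm statistics
    var_in = x.var(axis=(2, 3), keepdims=True)
    mu_ln = x.mean(axis=(1, 2, 3), keepdims=True)   # layer norm statistics
    var_ln = x.var(axis=(1, 2, 3), keepdims=True)
    mu_bn = x.mean(axis=(0, 2, 3), keepdims=True)   # batch norm statistics
    var_bn = x.var(axis=(0, 2, 3), keepdims=True)
    wm = np.exp(w_mean) / np.exp(w_mean).sum()      # learned importance weights
    wv = np.exp(w_var) / np.exp(w_var).sum()
    mu = wm[0] * mu_in + wm[1] * mu_ln + wm[2] * mu_bn
    var = wv[0] * var_in + wv[1] * var_ln + wv[2] * var_bn
    return (x - mu) / np.sqrt(var + eps)

x = np.random.default_rng(0).standard_normal((4, 3, 5, 5))
y = switchable_norm(x, np.zeros(3), np.zeros(3))  # equal weights at init
```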
Given the following machine learning model name: MPRNet, provide a description of the model | **MPRNet** is a multi-stage progressive image restoration architecture that progressively learns restoration functions for the degraded inputs, thereby breaking down the overall recovery process into more manageable steps. Specifically, the model first learns the contextualized features using encoder-decoder architectures and later combines them with a high-resolution branch that retains local information. At each stage, a per-pixel adaptive design is introduced that leverages in-situ supervised attention to reweight the local features. |
Given the following machine learning model name: FMix, provide a description of the model | A variant of [CutMix](https://paperswithcode.com/method/cutmix) which randomly samples masks from Fourier space. |
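A rough sketch of the mask-sampling idea, assuming a 1/f-style frequency decay and a quantile threshold that keeps a fraction `lam` of the pixels; the details differ from the paper's exact procedure:

```python
import numpy as np

def fmix_mask(h, w, decay=3.0, lam=0.5, seed=0):
    # Sample a random complex spectrum, attenuate high frequencies, invert,
    # then binarize by keeping the top `lam` fraction of the grey-scale image.
    rng = np.random.default_rng(seed)
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.rfftfreq(w)[None, :]
    freq = np.sqrt(fy ** 2 + fx ** 2)
    spectrum = (rng.standard_normal((h, w // 2 + 1))
                + 1j * rng.standard_normal((h, w // 2 + 1)))
    spectrum /= np.maximum(freq, 1.0 / max(h, w)) ** decay  # low-pass decay
    grey = np.fft.irfft2(spectrum, s=(h, w))
    threshold = np.quantile(grey, 1 - lam)
    return (grey > threshold).astype(float)

mask = fmix_mask(16, 16, lam=0.5)
```

Because the spectrum is dominated by low frequencies, the thresholded mask forms large contiguous blobs rather than the axis-aligned rectangles of CutMix.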
Given the following machine learning model name: ARM-Net, provide a description of the model | ARM-Net is an adaptive relation modeling network tailored for structured data; ARMOR is a lightweight framework for relational data analytics built on ARM-Net. The key idea is to model feature interactions with cross features selectively and dynamically, by first transforming the input features into exponential space and then determining the interaction order and interaction weights adaptively for each cross feature. The authors propose a novel sparse attention mechanism that dynamically generates the interaction weights given the input tuple, so that cross features of arbitrary orders can be modeled explicitly while noisy features are filtered out selectively. During inference, ARM-Net can then specify the cross features used for each prediction, for higher accuracy and better interpretability. |
Given the following machine learning model name: Swapping Assignments between Views, provide a description of the model | **SwaV**, or **Swapping Assignments Between Views**, is a self-supervised learning approach that takes advantage of contrastive methods without requiring to compute pairwise comparisons. Specifically, it simultaneously clusters the data while enforcing consistency between cluster assignments produced for different augmentations (or views) of the same image, instead of comparing features directly as in contrastive learning. Simply put, SwaV uses a swapped prediction mechanism where we predict the cluster assignment of a view from the representation of another view. |
Given the following machine learning model name: ENet, provide a description of the model | **ENet** is a semantic segmentation architecture which utilises a compact encoder-decoder architecture. Some design choices include:
1. Using the [SegNet](https://paperswithcode.com/method/segnet) approach to downsampling by saving the indices of elements chosen in max pooling layers, and using them to produce sparse upsampled maps in the decoder.
2. Early downsampling to optimize the early stages of the network and reduce the cost of processing large input frames. The first two blocks of ENet heavily reduce the input size, and use only a small set of feature maps.
3. Using PReLUs as the activation function.
4. Using dilated convolutions.
5. Using Spatial [Dropout](https://paperswithcode.com/method/dropout) |
Given the following machine learning model name: Hybrid Firefly and Particle Swarm Optimization, provide a description of the model | **Hybrid Firefly and Particle Swarm Optimization (HFPSO)** is a metaheuristic optimization algorithm that combines strong points of firefly and particle swarm optimization. HFPSO tries to determine the start of the local search process properly by checking the previous global best fitness values.
[Click Here for the Paper](https://www.sciencedirect.com/science/article/abs/pii/S156849461830084X)
[Codes (MATLAB)](https://www.mathworks.com/matlabcentral/fileexchange/67768-a-hybrid-firefly-and-particle-swarm-optimization-hfpso) |
Given the following machine learning model name: Multiplicative LSTM, provide a description of the model | A **Multiplicative LSTM (mLSTM)** is a recurrent neural network architecture for sequence modelling that combines the long short-term memory ([LSTM](https://paperswithcode.com/method/lstm)) and multiplicative recurrent neural network ([mRNN](https://paperswithcode.com/method/mrnn)) architectures. The mRNN and LSTM architectures can be combined by adding connections from the mRNN's intermediate state $m\_{t}$ to each gating unit in the LSTM. |
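A single step of this combination can be sketched as follows, with the multiplicative intermediate state $m\_{t}$ feeding every gate; the weight names are illustrative and biases are omitted:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlstm_step(x, h, c, P):
    # Multiplicative intermediate state m_t = (Wmx x_t) * (Wmh h_{t-1});
    # m_t then replaces h_{t-1} in each LSTM gate.
    m = (P['Wmx'] @ x) * (P['Wmh'] @ h)
    i = sigmoid(P['Wix'] @ x + P['Wim'] @ m)              # input gate
    f = sigmoid(P['Wfx'] @ x + P['Wfm'] @ m)              # forget gate
    o = sigmoid(P['Wox'] @ x + P['Wom'] @ m)              # output gate
    c_new = f * c + i * np.tanh(P['Wcx'] @ x + P['Wcm'] @ m)
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
d_in, d_h = 3, 4
P = {n: rng.standard_normal((d_h, d_in)) * 0.3
     for n in ['Wmx', 'Wix', 'Wfx', 'Wox', 'Wcx']}
P.update({n: rng.standard_normal((d_h, d_h)) * 0.3
          for n in ['Wmh', 'Wim', 'Wfm', 'Wom', 'Wcm']})
h, c = np.zeros(d_h), np.zeros(d_h)
h, c = mlstm_step(rng.standard_normal(d_in), h, c, P)
```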
Given the following machine learning model name: CSPDarknet53, provide a description of the model | **CSPDarknet53** is a convolutional neural network and backbone for object detection that uses [DarkNet-53](https://paperswithcode.com/method/darknet-53). It employs a CSPNet strategy to partition the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy. The use of a split and merge strategy allows for more gradient flow through the network.
This CNN is used as the backbone for [YOLOv4](https://paperswithcode.com/method/yolov4). |
Given the following machine learning model name: Composite Backbone Network, provide a description of the model | **CBNet** is a backbone architecture that consists of multiple identical backbones (called Assistant Backbones and a Lead Backbone) and composite connections between neighboring backbones. From left to right, the output of each stage in an Assistant Backbone, namely its higher-level features, flows to the parallel stage of the succeeding backbone as part of its inputs through composite connections. Finally, the feature maps of the last backbone, named the Lead Backbone, are used for object detection. The features extracted by CBNet for object detection fuse the high-level and low-level features of multiple backbones, hence improving detection performance. |
Given the following machine learning model name: VisualBERT, provide a description of the model | VisualBERT aims to reuse self-attention to implicitly align elements of the input text and regions in the input image. Visual embeddings are used to model images, where each representation corresponds to a bounding region in the image obtained from an object detector. These visual embeddings are constructed by summing three embeddings: 1) a visual feature representation, 2) a segment embedding indicating that it is an image embedding, and 3) a position embedding. Essentially, image regions and language are combined with a Transformer so that self-attention can discover implicit alignments between language and vision. VisualBERT is trained on COCO, which consists of images paired with captions. It is pre-trained with two objectives: a masked language modeling objective and a sentence-image prediction task. It can then be fine-tuned on different downstream tasks. |
Given the following machine learning model name: OverFeat, provide a description of the model | **OverFeat** is a classic type of convolutional neural network architecture, employing [convolution](https://paperswithcode.com/method/convolution), pooling and fully connected layers. The Figure to the right shows the architectural details. |
Given the following machine learning model name: Dilated Convolution, provide a description of the model | **Dilated Convolutions** are a type of [convolution](https://paperswithcode.com/method/convolution) that “inflate” the kernel by inserting holes between the kernel elements. An additional parameter $l$ (dilation rate) indicates how much the kernel is widened. There are usually $l-1$ spaces inserted between kernel elements.
Note that this concept has existed in past literature under different names, for instance the *algorithme à trous*, an algorithm for wavelet decomposition (Holschneider et al., 1987; Shensa, 1992). |
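A minimal 1D illustration of the idea: with $l-1$ implicit zeros between taps, a 3-tap kernel at dilation $l=2$ covers a window of 5 inputs while keeping only 3 parameters.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    # 'valid' cross-correlation with kernel taps spaced `dilation` apart,
    # i.e. dilation - 1 implicit zeros ("holes") between kernel elements.
    k = len(kernel)
    span = (k - 1) * dilation + 1            # effective (inflated) kernel size
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out

x = np.arange(10, dtype=float)
same = dilated_conv1d(x, [1.0, 1.0, 1.0], dilation=1)  # ordinary 3-tap filter
wide = dilated_conv1d(x, [1.0, 1.0, 1.0], dilation=2)  # 3 taps, receptive field 5
```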
Given the following machine learning model name: Residual Multi-Layer Perceptrons, provide a description of the model | **Residual Multi-Layer Perceptrons**, or **ResMLP**, is an architecture built entirely upon [multi-layer perceptrons](https://paperswithcode.com/methods/category/feedforward-networks) for image classification. It is a simple [residual network](https://paperswithcode.com/method/residual-connection) that alternates (i) a [linear layer](https://paperswithcode.com/method/linear-layer) in which image patches interact, independently and identically across channels, and (ii) a two-layer [feed-forward network](https://paperswithcode.com/method/feedforward-network) in which channels interact independently per patch. At the end of the network, the patch representations are average pooled, and fed to a linear classifier.
[Layer normalization](https://paperswithcode.com/method/layer-normalization) is replaced with a simpler [affine transformation](https://paperswithcode.com/method/affine-operator); the absence of self-attention layers makes training more stable. The affine operator is applied at the beginning ("pre-normalization") and end ("post-normalization") of each residual block. As a pre-normalization, Aff replaces LayerNorm without using channel-wise statistics and is initialized with $\mathbf{\alpha}=\mathbf{1}$ and $\mathbf{\beta}=\mathbf{0}$. As a post-normalization, Aff is similar to [LayerScale](https://paperswithcode.com/method/layerscale), and $\mathbf{\alpha}$ is initialized with the same small value. |
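The Aff operator is simple enough to state directly; a sketch with the pre-normalization initialization, at which Aff is the identity:

```python
import numpy as np

def aff(x, alpha, beta):
    # Aff(x) = alpha * x + beta, channel-wise; unlike LayerNorm it uses no
    # statistics of x, only a learned per-channel scale and shift.
    return alpha * x + beta

d = 4
x = np.random.default_rng(0).standard_normal((3, d))  # 3 patch tokens, d channels
alpha, beta = np.ones(d), np.zeros(d)                 # pre-normalization init
y = aff(x, alpha, beta)
```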
Given the following machine learning model name: Temporal Distribution Matching, provide a description of the model | **Temporal Distribution Matching**, or **TDM**, is a module used in the [AdaRNN](https://paperswithcode.com/method/adarnn) architecture to match the distributions of the discovered periods to build a time series prediction model $\mathcal{M}$. Given the learned time periods, the TDM module is designed to learn the common knowledge shared by different periods by matching their distributions. Thus, the learned model $\mathcal{M}$ is expected to generalize well on unseen test data compared with methods that rely only on local or statistical information.
Within the context of AdaRNN, Temporal Distribution Matching aims to adaptively match the distributions between the [RNN](https://paperswithcode.com/methods/category/recurrent-neural-networks) cells of two periods while capturing the temporal dependencies. TDM introduces an importance vector $\mathbf{\alpha} \in \mathbb{R}^{V}$ to learn the relative importance of the $V$ hidden states inside the RNN, where all the hidden states are weighted with a normalized $\mathbf{\alpha}$. Note that for each pair of periods there is an $\mathbf{\alpha}$, and we omit the subscript when there is no confusion. In this way, we can dynamically reduce the cross-period distribution divergence.
Given a period-pair $\left(\mathcal{D}\_{i}, \mathcal{D}\_{j}\right)$, the loss of temporal distribution matching is formulated as:
$$
\mathcal{L}\_{t d m}\left(\mathcal{D}\_{i}, \mathcal{D}\_{j} ; \theta\right)=\sum_{t=1}^{V} \alpha\_{i, j}^{t} d\left(\mathbf{h}\_{i}^{t}, \mathbf{h}\_{j}^{t} ; \theta\right)
$$
where $\alpha\_{i, j}^{t}$ denotes the distribution importance between the periods $\mathcal{D}\_{i}$ and $\mathcal{D}\_{j}$ at state $t$.
All the hidden states of the RNN can be easily computed by following the standard RNN computation. Denote by $\delta(\cdot)$ the computation of a next hidden state based on a previous state. The state computation can be formulated as
$$
\mathbf{h}\_{i}^{t}=\delta\left(\mathbf{x}\_{i}^{t}, \mathbf{h}\_{i}^{t-1}\right)
$$
The final objective of temporal distribution matching (one RNN layer) is:
$$
\mathcal{L}(\theta, \mathbf{\alpha})=\mathcal{L}\_{\text {pred }}(\theta)+\lambda \frac{2}{K(K-1)} \sum\_{i, j}^{i \neq j} \mathcal{L}\_{t d m}\left(\mathcal{D}\_{i}, \mathcal{D}\_{j} ; \theta, \mathbf{\alpha}\right)
$$
where $\lambda$ is a trade-off hyper-parameter. Note that in the second term, we compute the average of the distribution distances of all pairwise periods. For computation, we take a mini-batch of $\mathcal{D}_{i}$ and $\mathcal{D}\_{j}$ to perform forward operation in RNN layers and concatenate all hidden features. Then, we can perform TDM using the above equation. |
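A toy NumPy sketch of $\mathcal{L}\_{tdm}$ for one period pair, substituting a simple squared-mean discrepancy for the paper's distribution distance $d$ (illustrative only; weight scales and dimensions are made up):

```python
import numpy as np

def rnn_states(X, Wx, Wh):
    # X: (T, d_in); returns hidden states h^1..h^T via h^t = tanh(Wx x^t + Wh h^{t-1})
    H, h = [], np.zeros(Wh.shape[0])
    for x in X:
        h = np.tanh(Wx @ x + Wh @ h)
        H.append(h)
    return np.array(H)

def tdm_loss(Hi, Hj, alpha):
    # L_tdm = sum_t alpha^t * d(h_i^t, h_j^t); here d is a squared-mean
    # discrepancy standing in for the paper's distribution distance.
    d = ((Hi - Hj) ** 2).mean(axis=1)
    return float(np.dot(alpha, d))

rng = np.random.default_rng(0)
T, d_in, d_h = 6, 3, 5
Wx = rng.standard_normal((d_h, d_in)) * 0.3
Wh = rng.standard_normal((d_h, d_h)) * 0.3
Hi = rnn_states(rng.standard_normal((T, d_in)), Wx, Wh)  # period D_i
Hj = rnn_states(rng.standard_normal((T, d_in)), Wx, Wh)  # period D_j
alpha = np.full(T, 1.0 / T)  # normalized importance weights over V = T states
loss = tdm_loss(Hi, Hj, alpha)
```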
Given the following machine learning model name: CoordConv, provide a description of the model | A **CoordConv** layer is a simple extension to the standard convolutional layer. It has the same functional signature as a convolutional layer, but accomplishes the mapping by first concatenating extra channels to the incoming representation. These channels contain hard-coded coordinates, the most basic version of which is one channel for the $i$ coordinate and one for the $j$ coordinate.
The CoordConv layer keeps the properties of few parameters and efficient computation from convolutions, but allows the network to learn to keep or to discard translation invariance as is needed for the task being learned. This is useful for coordinate transform based tasks where regular convolutions can fail. |
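The coordinate-concatenation step is easy to sketch; a minimal version that appends $i$ and $j$ channels scaled to $[-1, 1]$, after which any ordinary convolution can be applied:

```python
import numpy as np

def add_coord_channels(x):
    # x: (N, C, H, W) -> (N, C+2, H, W); appends hard-coded i and j
    # coordinate channels scaled to [-1, 1].
    N, C, H, W = x.shape
    i = np.linspace(-1.0, 1.0, H).reshape(1, 1, H, 1)
    j = np.linspace(-1.0, 1.0, W).reshape(1, 1, 1, W)
    i_chan = np.broadcast_to(i, (N, 1, H, W))
    j_chan = np.broadcast_to(j, (N, 1, H, W))
    return np.concatenate([x, i_chan, j_chan], axis=1)

x = np.zeros((2, 3, 4, 5))
y = add_coord_channels(x)
```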
Given the following machine learning model name: Temporal Word Embeddings with a Compass, provide a description of the model | TWEC is an efficient method for generating temporal word embeddings, based on a simple heuristic: first train an atemporal word embedding, the compass, and then use this embedding to freeze one of the layers of the CBOW architecture. The frozen architecture is then used to train time-specific slices that are all comparable after training. |
Given the following machine learning model name: Embedded Gaussian Affinity, provide a description of the model | **Embedded Gaussian Affinity** is a type of affinity or self-similarity function between two points $\mathbf{x\_{i}}$ and $\mathbf{x\_{j}}$ that uses a Gaussian function in an embedding space:
$$ f\left(\mathbf{x\_{i}}, \mathbf{x\_{j}}\right) = e^{\theta\left(\mathbf{x\_{i}}\right)^{T}\phi\left(\mathbf{x\_{j}}\right)} $$
Here $\theta\left(x\_{i}\right) = W\_{θ}x\_{i}$ and $\phi\left(x\_{j}\right) = W\_{φ}x\_{j}$ are two embeddings.
Note that the self-attention module used in the original [Transformer](https://paperswithcode.com/method/transformer) model is a special case of non-local operations in the embedded Gaussian version. This can be seen from the fact that for a given $i$, $\frac{1}{\mathcal{C}\left(\mathbf{x}\right)}\sum\_{\forall{j}}f\left(\mathbf{x}\_{i}, \mathbf{x}\_{j}\right)g\left(\mathbf{x}\_{j}\right)$ becomes the [softmax](https://paperswithcode.com/method/softmax) computation along the dimension $j$. So we have $\mathbf{y} = \text{softmax}\left(\mathbf{x}^{T}W^{T}\_{\theta}W\_{\phi}\mathbf{x}\right)g\left(\mathbf{x}\right)$, which is the self-attention form in the Transformer model. This shows how we can relate this recent self-attention model to the classic computer vision method of non-local means. |
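A small NumPy sketch of the normalized non-local operation, showing that the embedded Gaussian affinity with normalization $\mathcal{C}\left(\mathbf{x}\right)$ is exactly a row-wise softmax (toy dimensions and random embedding matrices):

```python
import numpy as np

def embedded_gaussian_nonlocal(X, Wt, Wp, Wg):
    # X: (N, d) feature vectors; f(xi, xj) = exp(theta(xi)^T phi(xj)).
    # Dividing by C(x) = sum_j f(xi, xj) makes each row a softmax,
    # i.e. the self-attention form y = softmax(theta phi^T) g.
    theta, phi, g = X @ Wt.T, X @ Wp.T, X @ Wg.T
    logits = theta @ phi.T
    f = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable exp
    attn = f / f.sum(axis=1, keepdims=True)                 # softmax over j
    return attn @ g

rng = np.random.default_rng(0)
N, d = 5, 4
X = rng.standard_normal((N, d))
Wt, Wp, Wg = (rng.standard_normal((d, d)) * 0.5 for _ in range(3))
Y = embedded_gaussian_nonlocal(X, Wt, Wp, Wg)
```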
Given the following machine learning model name: Multi-Head Linear Attention, provide a description of the model | **Multi-Head Linear Attention** is a type of linear multi-head self-attention module, proposed with the [Linformer](https://paperswithcode.com/method/linformer) architecture. The main idea is to add two linear projection matrices $E\_{i}, F\_{i} \in \mathbb{R}^{n\times{k}}$ when computing key and value. We first project the original $\left(n \times d\right)$-dimensional key and value layers $KW\_{i}^{K}$ and $VW\_{i}^{V}$ into $\left(k\times{d}\right)$-dimensional projected key and value layers. We then compute a $\left(n\times{k}\right)$-dimensional context mapping $\bar{P}$ using scaled dot-product attention:
$$ \bar{\text{head}\_{i}} = \text{Attention}\left(QW^{Q}\_{i}, E\_{i}KW\_{i}^{K}, F\_{i}VW\_{i}^{V}\right) $$
$$ \bar{\text{head}\_{i}} = \text{softmax}\left(\frac{QW^{Q}\_{i}\left(E\_{i}KW\_{i}^{K}\right)^{T}}{\sqrt{d\_{k}}}\right) \cdot F\_{i}VW\_{i}^{V} $$
Finally, we compute context embeddings for each head using $\bar{P} \cdot \left(F\_{i}{V}W\_{i}^{V}\right)$. |
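A single-head sketch in NumPy; here the projections are written as $\left(k \times n\right)$ matrices applied on the left, which is the shape that makes the products in the equations above well-defined (per-head weight matrices are folded into Q, K, V for brevity):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def linear_attention_head(Q, K, V, E, F, dk):
    # Q, K, V: (n, d); E, F: (k, n) project sequence length n down to k,
    # so the attention matrix is (n, k) instead of (n, n).
    Kp, Vp = E @ K, F @ V                   # (k, d) projected key / value
    P = softmax(Q @ Kp.T / np.sqrt(dk))     # (n, k) context mapping P-bar
    return P @ Vp                           # (n, d) head output

rng = np.random.default_rng(0)
n, k, d = 8, 3, 4
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
E, F = (rng.standard_normal((k, n)) / np.sqrt(n) for _ in range(2))
head = linear_attention_head(Q, K, V, E, F, dk=d)
```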
Given the following machine learning model name: Temporal ROIAlign, provide a description of the model | **Temporal ROI Align** is an operator that extracts features from other frames' feature maps for current-frame proposals by exploiting feature similarity. Since the features of the same object instance are highly similar across the frames of a video, the operator implicitly extracts the most similar RoI features from support-frame feature maps for target-frame proposals based on feature similarity. |
Given the following machine learning model name: CornerNet-Squeeze Hourglass, provide a description of the model | **CornerNet-Squeeze Hourglass** is a convolutional neural network and object detection backbone used in the [CornerNet-Squeeze](https://paperswithcode.com/method/cornernet-squeeze) object detector. It uses a modified [hourglass module](https://paperswithcode.com/method/hourglass-module) that makes use of a [fire module](https://paperswithcode.com/method/fire-module): containing 1x1 convolutions and depthwise convolutions. |
Given the following machine learning model name: Self-Organizing Map, provide a description of the model | The **Self-Organizing Map (SOM)**, commonly also known as the Kohonen network (Kohonen 1982, Kohonen 2001), is a computational method for the visualization and analysis of high-dimensional data, especially experimentally acquired information.
Extracted from [scholarpedia](http://www.scholarpedia.org/article/Self-organizing_map)
**Sources**:
Image: [scholarpedia](http://www.scholarpedia.org/article/File:Somnbc.png)
Paper: [Kohonen, T. Self-organized formation of topologically correct feature maps. Biol. Cybern. 43, 59–69 (1982)](https://doi.org/10.1007/BF00337288)
Book: [Self-Organizing Maps](https://doi.org/10.1007/978-3-642-56927-2) |
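A compact sketch of the classic online SOM training loop, with hypothetical hyper-parameters and simple linear decay schedules:

```python
import numpy as np

def train_som(data, grid_h, grid_w, steps=200, lr0=0.5, sigma0=1.5, seed=0):
    # Online SOM: find the best-matching unit (BMU) for a random sample,
    # then pull it and its grid neighbours toward the sample; the learning
    # rate and neighbourhood radius decay over time.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((grid_h, grid_w, data.shape[1])) * 0.1
    gi, gj = np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing='ij')
    for t in range(steps):
        x = data[rng.integers(len(data))]
        d = np.linalg.norm(W - x, axis=2)
        bi, bj = np.unravel_index(np.argmin(d), d.shape)  # BMU coordinates
        frac = t / steps
        lr = lr0 * (1 - frac)                   # decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.5       # decaying neighbourhood radius
        grid_dist2 = (gi - bi) ** 2 + (gj - bj) ** 2
        h = np.exp(-grid_dist2 / (2 * sigma ** 2))  # neighbourhood kernel
        W += lr * h[..., None] * (x - W)
    return W

data = np.random.default_rng(1).standard_normal((100, 3))
W = train_som(data, 4, 4)
```

The neighbourhood update is what gives the map its topology-preserving character: nearby grid units end up representing nearby regions of the data space.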
Given the following machine learning model name: OPT, provide a description of the model | **OPT** is a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters. The models use the AdamW optimizer with a weight decay of 0.1. They follow a linear learning rate schedule, warming up from 0 to the maximum learning rate over the first 2000 steps in OPT-175B, or over 375M tokens in the smaller models, and decaying down to 10% of the maximum LR over 300B tokens. The batch sizes range from 0.5M to 4M tokens, depending on the model size, and are kept constant throughout training. |
Given the following machine learning model name: Wasserstein Embedding for Graph Learning, provide a description of the model | |
Given the following machine learning model name: Hyper HyperNetwork, provide a description of the model | |
Given the following machine learning model name: Dynamic Memory Network, provide a description of the model | A **Dynamic Memory Network** is a neural network architecture which processes input sequences and questions, forms episodic memories, and generates relevant answers. Questions trigger an iterative attention process which allows the model to condition its attention on the inputs and the result of previous iterations. These results are then reasoned over in a hierarchical recurrent sequence model to generate answers.
The DMN consists of a number of modules:
- Input Module: The input module encodes raw text inputs from the task into distributed vector representations. The input takes forms like a sentence, a long story, a movie review and so on.
- Question Module: The question module encodes the question of the task into a distributed
vector representation. For question answering, the question may be a sentence such as "Where did the author first fly?". The representation is fed into the episodic memory module, and forms the basis, or initial state, upon which the episodic memory module iterates.
- Episodic Memory Module: Given a collection of input representations, the episodic memory module chooses which parts of the inputs to focus on through the attention mechanism. It then produces a "memory" vector representation taking into account the question as well as the previous memory. Each iteration provides the module with newly relevant information about the input. In other words, the module has the ability to retrieve new information, in the form of input representations, that was thought to be irrelevant in previous iterations.
- Answer Module: The answer module generates an answer from the final memory vector of the memory module. |
Given the following machine learning model name: Learning From Multiple Experts, provide a description of the model | **Learning From Multiple Experts** is a self-paced knowledge distillation framework that aggregates the knowledge from multiple 'Experts' to learn a unified student model. Specifically, the proposed framework involves two levels of adaptive learning schedules: Self-paced Expert Selection and Curriculum Instance Selection, so that the knowledge is adaptively transferred to the 'Student'. The self-paced expert selection automatically controls the impact of knowledge distillation from each expert, so that the learned student model will gradually acquire the knowledge from the experts, and finally exceed the expert. The curriculum instance selection, on the other hand, designs a curriculum for the unified model where the training samples are organized from easy to hard, so that the unified student model will receive a less challenging learning schedule, and gradually learns from easy to hard samples. |
Given the following machine learning model name: Harris Hawks optimization, provide a description of the model | [HHO](https://aliasgharheidari.com/HHO.html) is a popular swarm-based, gradient-free optimization algorithm with several active and time-varying phases of exploration and exploitation. The algorithm was first published in the Journal of Future Generation Computer Systems (FGCS) in 2019, and from the first day it has gained increasing attention among researchers due to its flexible structure, high performance, and high-quality results. The main logic of the HHO method is based on the cooperative behaviour and chasing styles of Harris' hawks in nature, called the "surprise pounce". Currently, there are many suggestions on how to enhance the functionality of HHO, and several enhanced variants of HHO have appeared in leading Elsevier and IEEE Transactions journals.
From an algorithmic behaviour viewpoint, HHO has several effective features:
- The escaping energy parameter has a dynamic, randomized, time-varying nature, which can further improve and harmonize the exploratory and exploitative patterns of HHO. This factor also helps HHO conduct a smooth transition between exploration and exploitation.
- Different exploration mechanisms with respect to the average location of hawks can increase the exploratory trends of HHO throughout the initial iterations.
- Diverse LF-based patterns with short-length jumps enrich the exploitative behaviours of HHO when directing a local search.
- The progressive selection scheme lets search agents progressively advance their position and only select a better position, which can improve the quality of solutions and the intensification powers of HHO throughout the optimization procedure.
- HHO evaluates a series of searching strategies and then selects the best movement step. This feature also has a constructive influence on the exploitation inclinations of HHO.
- The randomized jump strength can assist candidate solutions in harmonizing the exploration and exploitation leanings.
- The application of adaptive and time-varying components allows HHO to handle difficulties of a feature space, including locally optimal solutions, multi-modality, and deceptive optima.
The source code of HHO is publicly available at [https://aliasgharheidari.com/HHO.html](https://aliasgharheidari.com/HHO.html) |
Given the following machine learning model name: FairMOT, provide a description of the model | **FairMOT** is a model for multi-object tracking which consists of two homogeneous branches to predict pixel-wise objectness scores and re-ID features. The achieved fairness between the tasks is used to achieve high levels of detection and tracking accuracy. The detection branch is implemented in an anchor-free style which estimates object centers and sizes represented as position-aware measurement maps. Similarly, the re-ID branch estimates a re-ID feature for each pixel to characterize the object centered at the pixel. Note that the two branches are completely homogeneous, which essentially differs from the previous methods which perform detection and re-ID in a cascaded style. It is also worth noting that FairMOT operates on high-resolution feature maps of stride 4, while the previous anchor-based methods operate on feature maps of stride 32. The elimination of anchors as well as the use of high-resolution feature maps better aligns re-ID features to object centers, which significantly improves the tracking accuracy. |
Given the following machine learning model name: Monocular Real-Time Volumetric Performance Capture, provide a description of the model | |
Given the following machine learning model name: Self-supervised Equivariant Attention Mechanism, provide a description of the model | **Self-supervised Equivariant Attention Mechanism**, or **SEAM**, is an attention mechanism for weakly supervised semantic segmentation. The SEAM applies consistency regularization on CAMs from various transformed images to provide self-supervision for network learning. To further improve the network prediction consistency, SEAM introduces the pixel correlation module (PCM), which captures context appearance information for each pixel and revises original CAMs by learned affinity attention maps. The SEAM is implemented by a [siamese network](https://paperswithcode.com/method/siamese-network) with equivariant cross regularization (ECR) loss, which regularizes the original CAMs and the revised CAMs on different branches. |
Given the following machine learning model name: Ensemble Clustering, provide a description of the model | Ensemble clustering, also called consensus clustering, has been attracting much attention in recent years, aiming to combine multiple base clusterings into a better and more robust consensus clustering. Due to its good performance, ensemble clustering plays a vital role in many research areas, such as community detection and bioinformatics. |
Given the following machine learning model name: Position-Sensitive RoI Pooling, provide a description of the model | **Position-Sensitive RoI Pooling layer** aggregates the outputs of the last convolutional layer and generates scores for each RoI. Unlike [RoI Pooling](https://paperswithcode.com/method/roi-pooling), PS RoI Pooling conducts selective pooling, and each of the $k \times k$ bins aggregates responses from only one score map out of the bank of $k \times k$ score maps. With end-to-end training, this RoI layer shepherds the last convolutional layer to learn specialized position-sensitive score maps. |
Given the following machine learning model name: PatchGAN, provide a description of the model | **PatchGAN** is a type of discriminator for generative adversarial networks which only penalizes structure at the scale of local image patches. The PatchGAN discriminator tries to classify if each $N \times N$ patch in an image is real or fake. This discriminator is run convolutionally across the image, averaging all responses to provide the ultimate output of $D$. Such a discriminator effectively models the image as a Markov random field, assuming independence between pixels separated by more than a patch diameter. It can be understood as a type of texture/style loss. |
Given the following machine learning model name: SEED RL, provide a description of the model | **SEED** (Scalable, Efficient, Deep-RL) is a scalable reinforcement learning agent. It utilizes an architecture that features centralized inference and an optimized communication layer. SEED adopts two state of the art distributed algorithms, [IMPALA](https://paperswithcode.com/method/impala)/[V-trace](https://paperswithcode.com/method/v-trace) (policy gradients) and R2D2 ([Q-learning](https://paperswithcode.com/method/q-learning)). |
Given the following machine learning model name: Metric mixup, provide a description of the model | A generic way of representing and interpolating labels, which allows straightforward extension of any kind of [mixup](https://paperswithcode.com/method/mixup) to deep metric learning for a large class of loss functions. |
Given the following machine learning model name: mBARTHez, provide a description of the model | **BARThez** is a self-supervised transfer learning model for the French language based on [BART](https://paperswithcode.com/method/bart). Compared to existing [BERT](https://paperswithcode.com/method/bert)-based French language models such as [CamemBERT](https://paperswithcode.com/paper/camembert-a-tasty-french-language-model) and [FlauBERT](https://paperswithcode.com/paper/flaubert-unsupervised-language-model-pre), BARThez is well-suited for generative tasks, since not only its encoder but also its decoder is pretrained. **mBARThez** is the variant obtained by continuing the pretraining of multilingual BART (mBART) on BARThez's French corpus, which further boosts generative performance. |
Given the following machine learning model name: SqueezeBERT, provide a description of the model | **SqueezeBERT** is an efficient architectural variant of [BERT](https://paperswithcode.com/method/bert) for natural language processing that uses [grouped convolutions](https://paperswithcode.com/method/grouped-convolution). It is much like BERT-base, but with positional feedforward connection layers implemented as convolutions, and grouped [convolution](https://paperswithcode.com/method/convolution) for many of the layers. |
Given the following machine learning model name: Viewmaker Network, provide a description of the model | **Viewmaker Network** is a type of generative model that learns to produce input-dependent views for contrastive learning. This network is trained jointly with an encoder network. The viewmaker network is trained adversarially to create views which increase the contrastive loss of the encoder network. Rather than directly outputting views for an image, the viewmaker instead outputs a stochastic perturbation that is added to the input. This perturbation is projected onto an $\ell\_{p}$ sphere, controlling the effective strength of the view, similar to methods in adversarial robustness. This constrained adversarial training method enables the model to reduce the mutual information between different views while preserving useful input features for the encoder to learn from.
Specifically, the encoder and viewmaker are optimized in alternating steps to minimize and maximize $\mathcal{L}$, respectively. An image-to-image neural network is used as the viewmaker network, with an architecture adapted from work on style transfer. This network ingests the input image and outputs a perturbation that is constrained to an $\ell_{1}$ sphere. The sphere's radius is determined by the volume of the input tensor times a hyperparameter $\epsilon$, the distortion budget, which determines the strength of the applied perturbation. This perturbation is added to the input image and optionally clamped in the case of images to ensure all pixels are in $[0,1]$. |
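The distortion-budget step described above can be sketched in numpy. This is a minimal illustration, not the reference implementation: the function names are invented here, and the projection is done by simple rescaling onto an $\ell_1$ sphere whose radius is the input volume times $\epsilon$.

```python
import numpy as np

def project_l1(delta, eps):
    """Scale a perturbation so its L1 norm equals eps * (number of elements)."""
    radius = eps * delta.size          # budget grows with input volume
    norm = np.abs(delta).sum()
    return delta * (radius / max(norm, 1e-12))

def apply_view(x, delta, eps):
    """Add the budgeted perturbation and clamp pixels to [0, 1]."""
    return np.clip(x + project_l1(delta, eps), 0.0, 1.0)

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=(3, 8, 8))      # a toy "image"
delta = rng.normal(size=x.shape)           # raw viewmaker output
view = apply_view(x, delta, eps=0.05)
```

The budget $\epsilon$ directly caps how far any view can drift from its input, which is what keeps the adversarial viewmaker from destroying the features the encoder needs.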
Given the following machine learning model name: ConViT, provide a description of the model | **ConViT** is a type of [vision transformer](https://paperswithcode.com/method/vision-transformer) that uses a gated positional self-attention module ([GPSA](https://paperswithcode.com/method/gpsa)), a form of positional self-attention which can be equipped with a “soft” convolutional inductive bias. The GPSA layers are initialized to mimic the locality of convolutional layers, then each attention head is given the freedom to escape locality by adjusting a gating parameter regulating the attention paid to position versus content information. |
Given the following machine learning model name: Spatiotemporal Point Inference Network, provide a description of the model | |
Given the following machine learning model name: Chain-of-thought prompting, provide a description of the model | Chain-of-thought prompts contain a series of intermediate reasoning steps, and they are shown to significantly improve the ability of large language models to perform certain tasks that involve complex reasoning (e.g., arithmetic, commonsense reasoning, symbolic reasoning, etc.) |
Given the following machine learning model name: BlendMask, provide a description of the model | **BlendMask** is an [instance segmentation framework](https://paperswithcode.com/methods/category/instance-segmentation-models) built on top of the [FCOS](https://paperswithcode.com/method/fcos) object detector. The bottom module uses either backbone or [FPN](https://paperswithcode.com/method/fpn) features to predict a set of bases. A single [convolution](https://paperswithcode.com/methods/category/convolutions) layer is added on top of the detection towers to produce attention masks along with each bounding box prediction. For each predicted instance, the [blender](https://paperswithcode.com/method/blender) crops the bases with its bounding box and linearly combines them according to the learned attention maps. Note that the Bottom Module can take features either from ‘C’, or ‘P’ as the input. |
Given the following machine learning model name: EmbraceNet: A robust deep learning architecture for multimodal classification, provide a description of the model | |
Given the following machine learning model name: Recurrent Trend Predictive Neural Network, provide a description of the model | A neural network model to automatically capture trends in time-series data for improved prediction/forecasting performance |
Given the following machine learning model name: Residual Block, provide a description of the model | **Residual Blocks** are skip-connection blocks that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture.
Formally, denoting the desired underlying mapping as $\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\mathcal{F}({x}):=\mathcal{H}({x})-{x}$. The original mapping is recast into $\mathcal{F}({x})+{x}$. The $\mathcal{F}({x})$ acts like a residual, hence the name 'residual block'.
The intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers. Having skip connections allows the network to more easily learn identity-like mappings.
Note that in practice, [Bottleneck Residual Blocks](https://paperswithcode.com/method/bottleneck-residual-block) are used for deeper ResNets, such as ResNet-50 and ResNet-101, as these bottleneck blocks are less computationally intensive. |
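The identity-shortcut intuition above can be sketched in a few lines of numpy. This is a toy fully connected block (the original uses convolutions, batch normalization, and a final ReLU ordering that varies by variant); the point is only that zero residual weights leave the block as an identity mapping.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = relu(F(x) + x), where F is a small stack of nonlinear layers."""
    f = relu(x @ w1) @ w2      # the residual function F(x)
    return relu(f + x)         # skip connection adds the input back

rng = np.random.default_rng(0)
x = np.abs(rng.normal(size=(4, 16)))       # non-negative toy activations
# Zero weights make F(x) = 0, so the block reduces to the identity:
w_zero = np.zeros((16, 16))
y_id = residual_block(x, w_zero, w_zero)
```

Pushing `w1`/`w2` toward zero recovers the identity, which is exactly the "easier to push the residual to zero" argument.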
Given the following machine learning model name: ENet Bottleneck, provide a description of the model | **ENet Bottleneck** is an image model block used in the [ENet](https://paperswithcode.com/method/enet) semantic segmentation architecture. Each block consists of three convolutional layers: a 1 × 1 projection that reduces the dimensionality, a main convolutional layer, and a 1 × 1 expansion. [Batch Normalization](https://paperswithcode.com/method/batch-normalization) and [PReLU](https://paperswithcode.com/method/prelu) are placed between all convolutions. If the bottleneck is downsampling, a [max pooling](https://paperswithcode.com/method/max-pooling) layer is added to the main branch.
In the downsampling case, the first 1 × 1 projection is also replaced with a 2 × 2 [convolution](https://paperswithcode.com/method/convolution) with stride 2 in both dimensions, and the activations are zero-padded to match the number of feature maps. |
Given the following machine learning model name: Fast Sample Re-Weighting, provide a description of the model | **Fast Sample Re-Weighting**, or **FSR**, is a sample re-weighting strategy to tackle problems such as dataset biases, noisy labels and imbalanced classes. It leverages a dictionary (essentially an extra buffer) to monitor the training history reflected by the model updates during meta optimization periodically, and utilises a valuation function to discover meaningful samples from training data as the proxy of reward data. The unbiased dictionary keeps being updated and provides reward signals to optimize sample weights. Additionally, instead of maintaining model states for both model and sample weight updates separately, feature sharing is enabled for saving the computation cost used for maintaining respective states. |
Given the following machine learning model name: DropPath, provide a description of the model | Just as [dropout](https://paperswithcode.com/method/dropout) prevents co-adaptation of activations, **DropPath** prevents co-adaptation of parallel paths in networks such as [FractalNets](https://paperswithcode.com/method/fractalnet) by randomly dropping operands of the join layers. This discourages the network from using one input path as an anchor and another as a corrective term (a configuration that, if not prevented, is prone to overfitting). Two sampling strategies are:
- **Local**: a join drops each input with fixed probability, but we make sure at least one survives.
- **Global**: a single path is selected for the entire network. We restrict this path to be a single column, thereby promoting individual columns as independently strong predictors. |
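The local sampling strategy above can be sketched in numpy. This is an illustrative helper (the function name and the choice to average survivors at the join are assumptions drawn from the FractalNet-style mean-join, not a reference implementation):

```python
import numpy as np

def local_drop_path(inputs, p, rng):
    """Join layer that drops each input path with probability p,
    but always keeps at least one path alive, then averages survivors."""
    keep = rng.random(len(inputs)) >= p
    if not keep.any():                         # guarantee one survivor
        keep[rng.integers(len(inputs))] = True
    alive = [x for x, k in zip(inputs, keep) if k]
    return sum(alive) / len(alive)

rng = np.random.default_rng(0)
paths = [np.full((2, 2), float(i)) for i in range(3)]
out = local_drop_path(paths, p=0.5, rng=rng)
```

Because the join averages only the surviving subset, no single path can act as a permanent anchor across training iterations.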
Given the following machine learning model name: One Representation, provide a description of the model | In the OneR method, the model input can be image, text, or image+text, and the CMC objective is combined with the traditional image-text contrastive (ITC) loss. Masked modeling is also carried out for all three input types (i.e., image, text, and multi-modal). This framework employs no modality-specific architectural component except for the initial token embedding layer, making the model generic and modality-agnostic with minimal inductive bias. |
Given the following machine learning model name: LipGAN, provide a description of the model | **LipGAN** is a generative adversarial network for generating realistic talking faces conditioned on translated speech. It employs an adversary that measures the extent of lip synchronization in the frames generated by the generator. The system is capable of handling faces in random poses without the need for realignment to a template pose. LipGAN is a fully self-supervised approach that learns a phoneme-viseme mapping, making it language independent. |
Given the following machine learning model name: DropPathway, provide a description of the model | **DropPathway** randomly drops an audio pathway during training as a regularization technique for audiovisual recognition models. Specifically, at each training iteration, we drop the audio pathway altogether with probability $P\_{d}$. This way, we slow down the learning of the audio pathway and make its learning dynamics more compatible with its visual counterpart. When the audio pathway is dropped, zero tensors are summed with the visual pathway in its place.
Note that DropPathway is different from simply setting different learning rates for the audio/visual pathways in that it 1) ensures the audio pathway has fewer parameter updates, 2) hinders the visual pathway to 'shortcut' training by memorizing audio information, and 3) provides extra regularization as different audio clips are dropped in each epoch. |
Given the following machine learning model name: Soft Actor-Critic (Autotuned Temperature), provide a description of the model | **Soft Actor-Critic (Autotuned Temperature)** is a modification of the [SAC](https://paperswithcode.com/method/soft-actor-critic) reinforcement learning algorithm. [SAC](https://paperswithcode.com/method/sac) can suffer from brittleness to the temperature hyperparameter. Unlike in conventional reinforcement learning, where the optimal policy is independent of scaling of the reward function, in maximum entropy reinforcement learning the scaling factor has to be compensated by the choice of a suitable temperature, and a sub-optimal temperature can drastically degrade performance. To resolve this issue, SAC with Autotuned Temperature has an automatic gradient-based temperature tuning method that adjusts the expected entropy over the visited states to match a target value. |
Given the following machine learning model name: Spatial Attention Module, provide a description of the model | A **Spatial Attention Module** is a module for spatial attention in convolutional neural networks. It generates a spatial attention map by utilizing the inter-spatial relationship of features. Different from the [channel attention](https://paperswithcode.com/method/channel-attention-module), which focuses on "what" is meaningful, the spatial attention focuses on "where" the informative parts are, making the two complementary. To compute the spatial attention, we first apply average-pooling and max-pooling operations along the channel axis and concatenate them to generate an efficient feature descriptor. On the concatenated feature descriptor, we apply a [convolution](https://paperswithcode.com/method/convolution) layer to generate a spatial attention map $\textbf{M}\_{s}\left(F\right) \in \mathbb{R}^{H\times{W}}$ which encodes where to emphasize or suppress.
We aggregate channel information of a feature map by using two pooling operations, generating two 2D maps: $\mathbf{F}^{s}\_{avg} \in \mathbb{R}^{1\times{H}\times{W}}$ and $\mathbf{F}^{s}\_{max} \in \mathbb{R}^{1\times{H}\times{W}}$. Each denotes average-pooled features and max-pooled features across the channel. Those are then concatenated and convolved by a standard convolution layer, producing the 2D spatial attention map. In short, the spatial attention is computed as:
$$ \textbf{M}\_{s}\left(F\right) = \sigma\left(f^{7\times{7}}\left(\left[\text{AvgPool}\left(F\right);\text{MaxPool}\left(F\right)\right]\right)\right) $$
$$ \textbf{M}\_{s}\left(F\right) = \sigma\left(f^{7\times{7}}\left(\left[\mathbf{F}^{s}\_{avg};\mathbf{F}^{s}\_{max} \right]\right)\right) $$
where $\sigma$ denotes the sigmoid function and $f^{7\times{7}}$ represents a convolution operation with a filter size of 7 × 7. |
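The pooling-and-squash pipeline above can be sketched in numpy. For brevity this sketch replaces the learned 7 × 7 convolution with a per-pixel weighted combination of the two pooled maps (an assumed simplification; the weights `w_avg`, `w_max`, `b` stand in for the convolution's parameters):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def spatial_attention(F, w_avg, w_max, b):
    """Pool along the channel axis, combine the two maps (here with a
    simplified 1x1-style mix instead of a 7x7 conv), squash with sigmoid."""
    f_avg = F.mean(axis=0)                 # (H, W) average-pooled features
    f_max = F.max(axis=0)                  # (H, W) max-pooled features
    return sigmoid(w_avg * f_avg + w_max * f_max + b)

rng = np.random.default_rng(0)
F = rng.normal(size=(8, 5, 5))             # (C, H, W) feature map
M_s = spatial_attention(F, w_avg=1.0, w_max=1.0, b=0.0)
```

The resulting map has one value in $(0, 1)$ per spatial position, which is broadcast-multiplied against the feature map to emphasize or suppress locations.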
Given the following machine learning model name: VGG and variational Model Decomposition, provide a description of the model | |
Given the following machine learning model name: Auto-Classifier, provide a description of the model | |
Given the following machine learning model name: AMSBound, provide a description of the model | **AMSBound** is a variant of the [AMSGrad](https://paperswithcode.com/method/amsgrad) stochastic optimizer which is designed to be more robust to extreme learning rates. Dynamic bounds are employed on learning rates, where the lower and upper bound are initialized as zero and infinity respectively, and they both smoothly converge to a constant final step size. AMSBound can be regarded as an adaptive method at the beginning of training, and it gradually and smoothly transforms to [SGD](https://paperswithcode.com/method/sgd) (or SGD with momentum) as the time step increases.
$$ g\_{t} = \nabla{f}\_{t}\left(x\_{t}\right) $$
$$ m\_{t} = \beta\_{1t}m\_{t-1} + \left(1-\beta\_{1t}\right)g\_{t} $$
$$ v\_{t} = \beta\_{2}v\_{t-1} + \left(1-\beta\_{2}\right)g\_{t}^{2}$$
$$ \hat{v}\_{t} = \max\left(\hat{v}\_{t-1}, v\_{t}\right) \text{ and } V\_{t} = \text{diag}\left(\hat{v}\_{t}\right) $$
$$ \eta = \text{Clip}\left(\alpha/\sqrt{V\_{t}}, \eta\_{l}\left(t\right), \eta\_{u}\left(t\right)\right) \text{ and } \eta\_{t} = \eta/\sqrt{t} $$
$$ x\_{t+1} = \Pi\_{\mathcal{F}, \text{diag}\left(\eta\_{t}^{-1}\right)}\left(x\_{t} - \eta\_{t} \odot m\_{t} \right) $$
Where $\alpha$ is the initial step size, and $\eta_{l}$ and $\eta_{u}$ are the lower and upper bound functions respectively. |
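The update equations above can be transcribed into a numpy sketch. The feasible-set projection $\Pi$ is omitted, and the concrete bound schedules for $\eta_l(t)$ and $\eta_u(t)$ follow the commonly used `final_lr`-based form, which is an assumption here since the text leaves them abstract:

```python
import numpy as np

def amsbound_step(x, m, v, vhat, g, t, alpha=1e-3, beta1=0.9, beta2=0.999,
                  final_lr=0.1, gamma=1e-3):
    """One AMSBound update per the equations above (projection omitted)."""
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g**2
    vhat = np.maximum(vhat, v)             # AMSGrad-style max of second moments
    # bounds converge from (0, inf) toward the final step size
    eta_l = final_lr * (1 - 1 / (gamma * t + 1))
    eta_u = final_lr * (1 + 1 / (gamma * t))
    eta = np.clip(alpha / np.sqrt(vhat), eta_l, eta_u) / np.sqrt(t)
    return x - eta * m, m, v, vhat

# minimize f(x) = ||x||^2, whose gradient is 2x
x = np.array([1.0, -2.0])
m = v = vhat = np.zeros(2)
for t in range(1, 101):
    x, m, v, vhat = amsbound_step(x, m, v, vhat, 2 * x, t)
```

Early in training `alpha / sqrt(vhat)` dominates (adaptive behaviour); as the bounds tighten, every coordinate's step size is squeezed toward the same constant, recovering SGD-like updates.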
Given the following machine learning model name: Multi-Query Attention, provide a description of the model | Multi-head attention consists of multiple attention layers (heads) in parallel with different linear transformations on the queries, keys, values, and outputs. **Multi-query attention** is identical except that the different heads share a single set of keys and values. |
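The key/value sharing can be made concrete with a small numpy sketch (single sequence, output projection omitted; all names are illustrative):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_query_attention(X, Wq, Wk, Wv):
    """Wq holds one projection per head; Wk and Wv are shared by all heads."""
    K = X @ Wk                              # (T, d_head): single set of keys
    V = X @ Wv                              # (T, d_head): single set of values
    heads = []
    for Wq_h in Wq:                         # per-head queries only
        Q = X @ Wq_h                        # (T, d_head)
        A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
        heads.append(A @ V)
    return np.concatenate(heads, axis=-1)   # (T, n_heads * d_head)

rng = np.random.default_rng(0)
T, d, h, dk = 4, 8, 2, 4
X = rng.normal(size=(T, d))
Wq = [rng.normal(size=(d, dk)) for _ in range(h)]
Wk, Wv = rng.normal(size=(d, dk)), rng.normal(size=(d, dk))
Y = multi_query_attention(X, Wq, Wk, Wv)
```

During incremental decoding this sharing shrinks the key/value cache by the number of heads, which is the main practical motivation for the variant.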
Given the following machine learning model name: Local Relation Network, provide a description of the model | The **Local Relation Network** (**LR-Net**) is a network built with local relation layers which represent a feature image extractor. This feature extractor adaptively determines aggregation weights based on the compositional relationship of local pixel pairs. |
Given the following machine learning model name: Linear Layer, provide a description of the model | A **Linear Layer** is a projection $\mathbf{XW + b}$. |
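In numpy the projection above is a one-liner:

```python
import numpy as np

def linear(X, W, b):
    """Affine projection XW + b."""
    return X @ W + b

X = np.array([[1.0, 2.0]])
W = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([0.5, 0.5, 0.5])
Y = linear(X, W, b)       # -> [[1.5, 2.5, 3.5]]
```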
Given the following machine learning model name: MLP-Mixer, provide a description of the model | The **MLP-Mixer** architecture (or “Mixer” for short) is an image architecture that doesn't use convolutions or self-attention. Instead, Mixer’s architecture is based entirely on multi-layer perceptrons (MLPs) that are repeatedly applied across either spatial locations or feature channels. Mixer relies only on basic matrix multiplication routines, changes to data layout (reshapes and transpositions), and scalar nonlinearities.
It accepts a sequence of linearly projected image patches (also referred to as tokens) shaped as a “patches × channels” table as an input, and maintains this dimensionality. Mixer makes use of two types of MLP layers: channel-mixing MLPs and token-mixing MLPs. The channel-mixing MLPs allow communication between different channels; they operate on each token independently and take individual rows of the table as inputs. The token-mixing MLPs allow communication between different spatial locations (tokens); they operate on each channel independently and take individual columns of the table as inputs. These two types of layers are interleaved to enable interaction of both input dimensions. |
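The interleaved token-mixing and channel-mixing above can be sketched in numpy. This toy block omits the LayerNorms and substitutes ReLU for GELU (both assumed simplifications); the transposes are what switch the MLP between columns (tokens) and rows (channels) of the table:

```python
import numpy as np

def mlp(x, w1, w2):
    return np.maximum(x @ w1, 0.0) @ w2     # GELU replaced by ReLU for brevity

def mixer_block(X, token_w1, token_w2, chan_w1, chan_w2):
    """X is a (patches x channels) table. Token-mixing operates on columns
    (via a transpose), channel-mixing on rows; both have skip connections."""
    X = X + mlp(X.T, token_w1, token_w2).T  # mix across patches per channel
    X = X + mlp(X, chan_w1, chan_w2)        # mix across channels per token
    return X

rng = np.random.default_rng(0)
P, C, H = 6, 8, 16                          # patches, channels, hidden width
X = rng.normal(size=(P, C))
Y = mixer_block(X,
                rng.normal(size=(P, H)) * 0.1, rng.normal(size=(H, P)) * 0.1,
                rng.normal(size=(C, H)) * 0.1, rng.normal(size=(H, C)) * 0.1)
```

Note the input and output keep the same "patches × channels" shape, so blocks stack without any reshaping between them.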
Given the following machine learning model name: Poisson Flow Generative Models, provide a description of the model | |
Given the following machine learning model name: HyperTree MetaModel, provide a description of the model | Optimize combinations of various neural network models for multimodal data with bayseian optimization. |
Given the following machine learning model name: Tanh Exponential Activation Function, provide a description of the model | Lightweight or mobile neural networks used for real-time computer vision tasks contain fewer parameters than normal networks, which leads to constrained performance. The **Tanh Exponential Activation Function (TanhExp)** is a novel activation function that can significantly improve the performance of these networks on image classification tasks. TanhExp is defined as $f(x) = x\tanh(e^{x})$. The authors demonstrate the simplicity, efficiency, and robustness of TanhExp on various datasets and network models, showing that it outperforms its counterparts in both convergence speed and accuracy. Its behaviour also remains stable even with noise added and the dataset altered. Without increasing the size of the network, the capacity of lightweight neural networks can be enhanced by TanhExp with only a few training epochs and no extra parameters added. |
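The definition $f(x) = x\tanh(e^{x})$ translates directly into numpy:

```python
import numpy as np

def tanhexp(x):
    """TanhExp activation: f(x) = x * tanh(exp(x))."""
    return x * np.tanh(np.exp(x))

xs = np.array([-2.0, 0.0, 2.0])
ys = tanhexp(xs)
```

For large positive inputs `tanh(exp(x))` saturates at 1, so the function approaches the identity; for large negative inputs it decays toward zero, giving a bounded negative tail similar to other smooth ReLU-like activations.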
Given the following machine learning model name: VoVNetV2, provide a description of the model | **VoVNetV2** is a convolutional neural network that improves upon [VoVNet](https://paperswithcode.com/method/vovnet) with two effective strategies: (1) [residual connection](https://paperswithcode.com/method/residual-connection) for alleviating the optimization problem of larger VoVNets and (2) effective Squeeze-Excitation (eSE) dealing with the channel information loss problem of the original squeeze-and-excitation module. |
Given the following machine learning model name: Part-based Convolutional Baseline, provide a description of the model | |
Given the following machine learning model name: Attention Free Transformer, provide a description of the model | **Attention Free Transformer**, or **AFT**, is an efficient variant of a [multi-head attention module](https://paperswithcode.com/method/multi-head-attention) that eschews [dot product self attention](https://paperswithcode.com/method/scaled). In an AFT layer, the key and value are first combined with a set of learned position biases, the result of which is multiplied with the query in an element-wise fashion. This new operation has a memory complexity linear w.r.t. both the context size and the dimension of features, making it compatible with both large input and model sizes.
Given the input $X$, AFT first linearly transforms them into $Q=X W^{Q}, K=X W^{K}, V=X W^{V}$, then performs following operation:
$$
Y=f(X) ; Y\_{t}=\sigma\_{q}\left(Q\_{t}\right) \odot \frac{\sum\_{t^{\prime}=1}^{T} \exp \left(K\_{t^{\prime}}+w\_{t, t^{\prime}}\right) \odot V\_{t^{\prime}}}{\sum\_{t^{\prime}=1}^{T} \exp \left(K\_{t^{\prime}}+w\_{t, t^{\prime}}\right)}
$$
where $\odot$ is the element-wise product; $\sigma\_{q}$ is the nonlinearity applied to the query with default being sigmoid; $w \in R^{T \times T}$ is the learned pair-wise position biases.
Explained in words, for each target position $t$, AFT performs a weighted average of values, the result of which is combined with the query with element-wise multiplication. In particular, the weighting is simply composed of the keys and a set of learned pair-wise position biases. This provides the immediate advantage of not needing to compute and store the expensive attention matrix, while maintaining the global interactions between query and values as MHA does. |
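The operation above can be transcribed into numpy almost term by term (single sequence, no batching; the log-sum-exp shift is a standard numerical-stability detail not stated in the formula):

```python
import numpy as np

def aft_full(Q, K, V, w):
    """AFT-full: for each position t, a weighted average of the values using
    keys plus learned pair-wise position biases, gated by sigmoid(Q_t)."""
    T = Q.shape[0]
    out = np.empty_like(V)
    for t in range(T):
        logits = K + w[t][:, None]               # (T, d): K_{t'} + w_{t,t'}
        weights = np.exp(logits - logits.max(axis=0, keepdims=True))
        weights = weights / weights.sum(axis=0, keepdims=True)
        out[t] = (1 / (1 + np.exp(-Q[t]))) * (weights * V).sum(axis=0)
    return out

rng = np.random.default_rng(0)
T, d = 5, 4
Q, K, V = (rng.normal(size=(T, d)) for _ in range(3))
w = rng.normal(size=(T, T)) * 0.1
Y = aft_full(Q, K, V, w)
```

Because each output position only needs the (T, d) keys/values and one row of the bias table, nothing the size of a T × T attention matrix per feature dimension is ever materialized.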
Given the following machine learning model name: Assemble-ResNet, provide a description of the model | **Assemble-ResNet** is a modification to the [ResNet](https://paperswithcode.com/method/resnet) architecture with several tweaks including using [ResNet-D](https://paperswithcode.com/method/resnet-d), channel attention, [anti-alias downsampling](https://paperswithcode.com/method/anti-alias-downsampling), and Big Little Networks. |
Given the following machine learning model name: 3D ResNet-RS, provide a description of the model | **3D ResNet-RS** is an architecture and scaling strategy for 3D ResNets for video recognition. The key additions are:
- **3D ResNet-D stem**: The [ResNet-D](https://paperswithcode.com/method/resnet-d) stem is adapted to 3D inputs by using three consecutive [3D convolutional layers](https://paperswithcode.com/method/3d-convolution). The first convolutional layer employs a temporal kernel size of 5 while the remaining two convolutional layers employ a temporal kernel size of 1.
- **3D Squeeze-and-Excitation**: [Squeeze-and-Excite](https://paperswithcode.com/method/squeeze-and-excitation-block) is adapted to spatio-temporal inputs by using a 3D [global average pooling](https://paperswithcode.com/method/global-average-pooling) operation for the squeeze operation. A SE ratio of 0.25 is applied in each 3D bottleneck block for all experiments.
- **Self-gating**: A self-gating module is used in each 3D bottleneck block after the SE module. |
Given the following machine learning model name: Parallel Layers, provide a description of the model | **Parallel Layers** use a “parallel” formulation in each Transformer block (Wang & Komatsuzaki, 2021), rather than the standard “serialized” formulation. Specifically, the standard formulation can be written as:
y = x + MLP(LayerNorm(x + Attention(LayerNorm(x))))
Whereas the parallel formulation can be written as:
y = x + MLP(LayerNorm(x)) + Attention(LayerNorm(x))
The parallel formulation results in roughly 15% faster training speed at large scales, since the MLP and Attention input matrix multiplications can be fused. Ablation experiments showed a small quality degradation at 8B scale but no quality degradation at 62B scale, so we extrapolated that the effect of parallel layers should be quality neutral at the 540B scale. |
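The two formulations can be contrasted in numpy with shape-preserving stand-ins for the sublayers (the real MLP and Attention are learned modules; these toy functions only illustrate the dataflow difference):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def mlp(x):                                 # stand-in for the MLP sublayer
    return np.tanh(x) * 0.5

def attention(x):                           # stand-in for the attention sublayer
    return x.mean(axis=0, keepdims=True) * np.ones_like(x) * 0.5

def serialized_block(x):
    return x + mlp(layer_norm(x + attention(layer_norm(x))))

def parallel_block(x):
    return x + mlp(layer_norm(x)) + attention(layer_norm(x))

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
y_serial, y_parallel = serialized_block(x), parallel_block(x)
```

In the serialized block, the MLP's input depends on the attention output; in the parallel block both sublayers read the same LayerNorm(x), so their input projections can be fused into one matrix multiplication.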
Given the following machine learning model name: ResNeXt Block, provide a description of the model | A **ResNeXt Block** is a type of [residual block](https://paperswithcode.com/method/residual-block) used as part of the [ResNeXt](https://paperswithcode.com/method/resnext) CNN architecture. It uses a "split-transform-merge" strategy (branched paths within a single module) similar to an [Inception module](https://paperswithcode.com/method/inception-module), i.e. it aggregates a set of transformations. Compared to a Residual Block, it exposes a new dimension, *cardinality* (size of set of transformations) $C$, as an essential factor in addition to depth and width.
Formally, a set of aggregated transformations can be represented as: $\mathcal{F}(x)=\sum_{i=1}^{C}\mathcal{T}_i(x)$, where $\mathcal{T}_i(x)$ can be an arbitrary function. Analogous to a simple neuron, $\mathcal{T}_i$ should project $x$ into an (optionally low-dimensional) embedding and then transform it. |
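The aggregated-transformation formula can be sketched in numpy with $\mathcal{T}_i$ as a small bottleneck MLP per branch (a fully connected stand-in for the grouped convolutions of the real block):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def resnext_block(x, branches):
    """Sum C branch transformations T_i(x), then add the shortcut."""
    agg = sum(relu(x @ w_in) @ w_out for w_in, w_out in branches)
    return relu(agg + x)

rng = np.random.default_rng(0)
d, bottleneck, C = 16, 4, 8                 # cardinality C = 8 branches
x = rng.normal(size=(2, d))
branches = [(rng.normal(size=(d, bottleneck)) * 0.1,
             rng.normal(size=(bottleneck, d)) * 0.1) for _ in range(C)]
y = resnext_block(x, branches)
```

Cardinality here is simply the length of `branches`: widening the set of low-dimensional transformations rather than making any single one deeper or wider.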
Given the following machine learning model name: Singular Value Clipping, provide a description of the model | **Singular Value Clipping (SVC)** is an adversarial training technique used by [TGAN](https://paperswithcode.com/method/tgan) to enforce the 1-Lipschitz constraint of the [WGAN](https://paperswithcode.com/method/wgan) objective. It is a constraint on all linear layers in the discriminator requiring that the spectral norm of the weight parameter $W$ be equal to or less than one, meaning that the singular values of the weight matrix are all one or less. Therefore, singular value decomposition (SVD) is performed after each parameter update, all singular values larger than one are replaced with one, and the parameters are reconstructed from them. The same operation is applied to convolutional layers by interpreting the higher-order weight tensor as a matrix $\hat{W}$. |
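The clipping step described above is a direct SVD round-trip in numpy:

```python
import numpy as np

def singular_value_clip(W):
    """Clip all singular values of W to at most 1, enforcing a
    spectral norm <= 1 (the 1-Lipschitz constraint on the layer)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.minimum(s, 1.0)) @ Vt

rng = np.random.default_rng(0)
W = rng.normal(size=(6, 4)) * 3.0           # spectral norm well above 1
W_clipped = singular_value_clip(W)
```

Unlike a gradient penalty, this enforces the constraint exactly, at the cost of an SVD per clipped layer per update (TGAN amortizes this by clipping only every few iterations).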
Given the following machine learning model name: Triplet Entropy Loss, provide a description of the model | The Triplet Entropy Loss (TEL) training method aims to leverage the strengths of both Cross Entropy Loss (CEL) and [Triplet loss](https://paperswithcode.com/method/triplet-loss) during the training process, on the assumption that this leads to better generalization. Unlike typical two-stage approaches, TEL does not include a pre-training step; instead it trains with both the CEL and Triplet losses simultaneously. |
Given the following machine learning model name: Semi-Supervised Knowledge Distillation, provide a description of the model | **Semi-Supervised Knowledge Distillation** is a type of knowledge distillation for person re-identification that exploits weakly annotated data by assigning soft pseudo labels to YouTube-Human to improve models' generalization ability. SSKD first trains a student model (e.g. [ResNet](https://paperswithcode.com/method/resnet)-50) and a teacher model (e.g. ResNet-101) using labeled data from multi-source domain datasets. Then, SSKD develops an [auxiliary classifier](https://paperswithcode.com/method/auxiliary-classifier) to imitate the soft predictions of unlabeled data generated by the teacher model. Meanwhile, the student model is also supervised by hard labels and predicted soft labels by the teacher model for labeled data. |
Given the following machine learning model name: MEUZZ, provide a description of the model | **MEUZZ** is a machine learning-based hybrid fuzzer which employs supervised machine learning for adaptive and generalizable seed scheduling -- a prominent factor in determining the yields of hybrid fuzzing. MEUZZ determines which new seeds are expected to produce better fuzzing yields based on the knowledge learned from past seed scheduling decisions made on the same or similar programs. MEUZZ's learning is based on a series of features extracted via code reachability and dynamic analysis, which incurs negligible runtime overhead (in microseconds). Moreover, MEUZZ automatically infers the data labels by evaluating the fuzzing performance of each selected seed. |
Given the following machine learning model name: Online Normalization, provide a description of the model | **Online Normalization** is a normalization technique for training deep neural networks. To define Online Normalization, we replace arithmetic averages over the full dataset with exponentially decaying averages of online samples. The decay factors $\alpha\_{f}$ and $\alpha\_{b}$ for the forward and backward passes respectively are hyperparameters of the technique.
We allow incoming samples $x\_{t}$, such as images, to have multiple scalar components and denote the feature-wide mean and variance by $\mu\left(x\_{t}\right)$ and $\sigma^{2}\left(x\_{t}\right)$. The algorithm also applies to outputs of fully connected layers with only one scalar output per feature; this case simplifies to $\mu\left(x\_{t}\right) = x\_{t}$ and $\sigma\left(x\_{t}\right) = 0$. The scalars $\mu\_{t}$ and $\sigma\_{t}$ denote running estimates of the mean and variance across all samples, where the subscript $t$ denotes time steps corresponding to processing new incoming samples.
Online Normalization uses an ongoing process during the forward pass to estimate activation means and variances. It implements the standard online computation of mean and variance, generalized to processing multi-value samples and exponential averaging of sample statistics. The resulting estimates directly lead to an affine normalization transform:
$$ y\_{t} = \frac{x\_{t} - \mu\_{t-1}}{\sigma\_{t-1}} $$
$$ \mu\_{t} = \alpha\_{f}\mu\_{t-1} + \left(1-\alpha\_{f}\right)\mu\left(x\_{t}\right) $$
$$ \sigma^{2}\_{t} = \alpha\_{f}\sigma^{2}\_{t-1} + \left(1-\alpha\_{f}\right)\sigma^{2}\left(x\_{t}\right) + \alpha\_{f}\left(1-\alpha\_{f}\right)\left(\mu\left(x\_{t}\right) - \mu\_{t-1}\right)^{2} $$ |
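As an illustration of these update rules, a minimal NumPy sketch of the forward pass (not the authors' reference code; the backward-pass decay $\alpha\_{b}$ and its gradient corrections are omitted, and the small `eps` is an added numerical-stability assumption):

```python
import numpy as np

# Minimal sketch of the Online Normalization forward pass (illustrative only).
class OnlineNormForward:
    def __init__(self, num_features, alpha_f=0.999, eps=1e-5):
        self.alpha_f = alpha_f
        self.eps = eps
        self.mu = np.zeros(num_features)   # running mean estimate mu_t
        self.var = np.ones(num_features)   # running variance estimate sigma^2_t

    def __call__(self, x):
        # x: one incoming sample of shape (num_features, n_components),
        # e.g. an image with n_components spatial positions per feature.
        mu_x = x.mean(axis=1)              # feature-wide sample mean
        var_x = x.var(axis=1)              # feature-wide sample variance
        # Normalize with the running estimates from the previous step.
        y = (x - self.mu[:, None]) / np.sqrt(self.var[:, None] + self.eps)
        # Exponentially decaying updates (variance first: it uses the old mean).
        a = self.alpha_f
        self.var = a * self.var + (1 - a) * var_x + a * (1 - a) * (mu_x - self.mu) ** 2
        self.mu = a * self.mu + (1 - a) * mu_x
        return y
```

A fully connected layer with one scalar output per feature corresponds to `n_components = 1`, the simplified case above where $\mu\left(x\_{t}\right) = x\_{t}$ and $\sigma\left(x\_{t}\right) = 0$.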
Given the following machine learning model name: GAN-TTS, provide a description of the model | **GAN-TTS** is a generative adversarial network for text-to-speech synthesis. The architecture is composed of a conditional feed-forward generator producing raw speech audio, and an ensemble of discriminators which operate on random windows of different sizes. The discriminators analyze the audio in terms of both general realism and how well the audio corresponds to the utterance that should be pronounced.
The generator architecture consists of several GBlocks, which are residual-based (dilated) [convolution](https://paperswithcode.com/method/convolution) blocks. GBlocks 3–7 gradually upsample the temporal dimension of hidden representations by factors of 2, 2, 2, 3, 5, while the number of channels is reduced by GBlocks 3, 6 and 7 (by a factor of 2 each). The final convolutional layer with [Tanh activation](https://paperswithcode.com/method/tanh-activation) produces a single-channel audio waveform.
Instead of a single discriminator, GAN-TTS uses an ensemble of Random Window Discriminators (RWDs) which operate on randomly sub-sampled fragments of the real or generated samples. The ensemble allows for the evaluation of audio in different complementary ways. |
Given the following machine learning model name: CornerNet-Squeeze, provide a description of the model | **CornerNet-Squeeze** is an object detector that extends [CornerNet](https://paperswithcode.com/method/cornernet) with a new compact hourglass architecture that makes use of fire modules with depthwise separable convolutions. |
Given the following machine learning model name: Pansharpening by convolutional neural networks in the full resolution framework, provide a description of the model | In recent years, there has been a growing interest in deep learning-based pansharpening.
Research has mainly focused on architectures.
However, lacking a ground truth, model training is also a major issue.
A popular approach is to train networks in a reduced resolution domain, using the original data as ground truths.
The trained networks are then used on full resolution data, relying on an implicit scale invariance hypothesis.
Results are generally good at reduced resolution, but more questionable at full resolution.
Here, we propose a full-resolution training framework for deep learning-based pansharpening.
Training takes place in the high resolution domain, relying only on the original data, with no loss of information.
To ensure spectral and spatial fidelity, suitable losses are defined,
which force the pansharpened output to be consistent with the available panchromatic and multispectral input.
Experiments carried out on WorldView-3, WorldView-2, and GeoEye-1 images show that methods trained with the proposed framework
achieve excellent performance in terms of both full-resolution numerical indexes and visual quality.
The framework is fully general, and can be used to train and fine-tune any deep learning-based pansharpening network. |
Given the following machine learning model name: Additive Angular Margin Loss, provide a description of the model | **ArcFace**, or **Additive Angular Margin Loss**, is a loss function used in face recognition tasks. The [softmax](https://paperswithcode.com/method/softmax) is traditionally used in these tasks. However, the softmax loss function does not explicitly optimise the feature embedding to enforce higher similarity for intra-class samples and diversity for inter-class samples, which results in a performance gap for deep face recognition under large intra-class appearance variations.
The ArcFace loss transforms the logits $W^{T}\_{j}x\_{i} = || W\_{j} || \text{ } || x\_{i} || \cos\theta\_{j}$,
where $\theta\_{j}$ is the angle between the weight $W\_{j}$ and the feature $x\_{i}$. The individual weight $ || W\_{j} || = 1$ is fixed by $l\_{2}$ normalization. The embedding feature $ ||x\_{i} ||$ is fixed by $l\_{2}$ normalization and re-scaled to $s$. The normalisation step on features and weights makes the predictions only depend on the angle between the feature and the weight. The learned embedding
features are thus distributed on a hypersphere with a radius of $s$. Finally, an additive angular margin penalty $m$ is added between $x\_{i}$ and $W\_{y\_{i}}$ to simultaneously enhance the intra-class compactness and inter-class discrepancy. Since the proposed additive angular margin penalty is
equal to the geodesic distance margin penalty in the normalised hypersphere, the method is named ArcFace:
$$ L\_{3} = -\frac{1}{N}\sum^{N}\_{i=1}\log\frac{e^{s\left(\cos\left(\theta\_{y\_{i}} + m\right)\right)}}{e^{s\left(\cos\left(\theta\_{y\_{i}} + m\right)\right)} + \sum^{n}\_{j=1, j \neq y\_{i}}e^{s\cos\theta\_{j}}} $$
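As a concrete illustration, a minimal NumPy sketch of the ArcFace logit transform (not the authors' code; the clipping and the defaults are assumptions, though $s = 64$ and $m = 0.5$ are typical choices):

```python
import numpy as np

# Minimal sketch of the ArcFace logit transform (illustration only).
def arcface_logits(features, weights, labels, s=64.0, m=0.5):
    # L2-normalize embeddings and class weights so logits are pure cosines.
    x = features / np.linalg.norm(features, axis=1, keepdims=True)
    W = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos_theta = x @ W                                  # (batch, n_classes)
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))   # angles theta_j
    # Add the angular margin m only to the target-class angle theta_{y_i}.
    target = np.zeros(cos_theta.shape, dtype=bool)
    target[np.arange(len(labels)), labels] = True
    cos_theta = np.where(target, np.cos(theta + m), cos_theta)
    return s * cos_theta  # feed into a standard softmax cross-entropy
```

The margin shrinks the target-class logit, forcing the network to pull each embedding closer to its class weight to compensate.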
The authors select face images from 8 different identities containing enough samples (around 1,500 images/class) to train 2-D feature embedding networks with the softmax and ArcFace loss, respectively. As the Figure shows, the softmax loss provides roughly separable feature embedding
but produces noticeable ambiguity in decision boundaries, while the proposed ArcFace loss can obviously enforce a more evident gap between the nearest classes.
Other alternatives to enforce intra-class compactness and inter-class distance include [Supervised Contrastive Learning](https://arxiv.org/abs/2004.11362). |
Given the following machine learning model name: Reduction-B, provide a description of the model | **Reduction-B** is an image model block used in the [Inception-v4](https://paperswithcode.com/method/inception-v4) architecture. |
Given the following machine learning model name: GloVe Embeddings, provide a description of the model | **GloVe Embeddings** are a type of word embedding that encode the co-occurrence probability ratio between two words as vector differences. GloVe uses a weighted least squares objective $J$ that minimizes the difference between the dot product of the vectors of two words and the logarithm of their number of co-occurrences:
$$ J=\sum\_{i, j=1}^{V}f\left(X\_{ij}\right)\left(w^{T}\_{i}\tilde{w}\_{j} + b\_{i} + \tilde{b}\_{j} - \log{X}\_{ij}\right)^{2} $$
where $w\_{i}$ and $b\_{i}$ are the word vector and bias respectively of word $i$, $\tilde{w}\_{j}$ and $\tilde{b}\_{j}$ are the context word vector and bias respectively of word $j$, $X\_{ij}$ is the number of times word $i$ occurs in the context of word $j$, and $f$ is a weighting function that assigns lower weights to rare and frequent co-occurrences. |
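A toy NumPy sketch of the per-pair GloVe term (illustrative; `x_max = 100` and `alpha = 0.75` are the weighting-function constants reported in the paper):

```python
import numpy as np

# Toy sketch of the per-pair GloVe objective (illustration only).
def weight(x, x_max=100.0, alpha=0.75):
    # Down-weights rare pairs and caps the influence of very frequent ones.
    return min((x / x_max) ** alpha, 1.0)

def glove_pair_loss(w_i, w_tilde_j, b_i, b_tilde_j, x_ij):
    # Squared difference between the model score and the log co-occurrence count.
    inner = w_i @ w_tilde_j + b_i + b_tilde_j - np.log(x_ij)
    return weight(x_ij) * inner ** 2
```

The full objective $J$ sums this term over all co-occurring word pairs in the vocabulary.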
Given the following machine learning model name: Attention Mesh, provide a description of the model | **Attention Mesh** is a neural network architecture for 3D face mesh prediction that uses attention to semantically meaningful regions. Specifically region-specific heads are employed that transform the feature maps with spatial transformers. |
Given the following machine learning model name: Relation-aware Global Attention, provide a description of the model | Relation-aware global attention (RGA) stresses the importance of global structural information provided by pairwise relations, and uses it to produce attention maps.
RGA comes in two forms, spatial RGA (RGA-S) and channel RGA (RGA-C). RGA-S first reshapes the input feature map $X$ to $C\times (H\times W)$ and the pairwise relation matrix $R \in \mathbb{R}^{(H\times W)\times (H\times W)}$ is computed using
\begin{align}
Q &= \delta(W^QX)
\end{align}
\begin{align}
K &= \delta(W^KX)
\end{align}
\begin{align}
R &= Q^TK
\end{align}
The relation vector $r_i$ at position $i$ is defined by stacking pairwise relations at all positions:
\begin{align}
r_i = [R(i, :); R(:,i)]
\end{align}
and the spatial relation-aware feature $y_i$ can be written as
\begin{align}
y_i = [g^c_\text{avg}(\delta(W^\varphi x_i)); \delta(W^\phi r_i)]
\end{align}
where $g^c_\text{avg}$ denotes global average pooling in the channel domain. Finally, the spatial attention score at position $i$ is given by
\begin{align}
a_i = \sigma(W_2\delta(W_1y_i))
\end{align}
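A toy NumPy sketch of RGA-S following the equations above (all projection matrices are small random placeholders for what a real implementation learns as 1×1 convolutions, and ReLU stands in for $\delta$):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy sketch of spatial RGA (RGA-S); illustration only.
def rga_s(X, d=8, seed=0):
    rng = np.random.default_rng(seed)
    C, N = X.shape                           # N = H * W spatial positions
    Wq = rng.normal(scale=0.1, size=(d, C))
    Wk = rng.normal(scale=0.1, size=(d, C))
    Q, K = relu(Wq @ X), relu(Wk @ X)
    R = Q.T @ K                              # pairwise relation matrix, (N, N)
    # Column i stacks R(:, i) and R(i, :), i.e. the relation vector r_i.
    r = np.concatenate([R, R.T], axis=0)     # (2N, N)
    # One-channel stand-in for embed-then-channel-pool of the original feature,
    # concatenated with the embedded relation vector.
    Wphi = rng.normal(scale=0.1, size=(1, C))
    Wpsi = rng.normal(scale=0.1, size=(d, 2 * N))
    y = np.concatenate([relu(Wphi @ X), relu(Wpsi @ r)], axis=0)  # (1 + d, N)
    # Two-layer bottleneck producing one attention score per position.
    W1 = rng.normal(scale=0.1, size=(d, 1 + d))
    W2 = rng.normal(scale=0.1, size=(1, d))
    return sigmoid(W2 @ relu(W1 @ y)).ravel()  # (N,) scores in (0, 1)
```

Each position's score thus depends on its relations to every other position, which is what distinguishes RGA from purely local attention.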
RGA-C has the same form as RGA-S, except for taking the input feature map as a set of $H\times W$-dimensional features.
RGA uses global relations to generate the attention score for each feature node, so provides valuable structural information and significantly enhances the representational power. RGA-S and RGA-C are flexible enough to be used in any CNN network; Zhang et al. propose using them jointly in sequence to better capture both spatial and cross-channel relationships. |
Given the following machine learning model name: Progressive Growing Channel Attentive Non-Local Network, provide a description of the model | Lung cancer classification in screening computed tomography (CT) scans is one of the most crucial tasks for early detection of this disease. Many lives can be saved if we are able to accurately classify malignant/cancerous lung nodules. Consequently, several deep learning based models have been proposed recently to classify lung nodules as malignant or benign. Nevertheless, the large variation in the size and heterogeneous appearance of the nodules makes this task an extremely challenging one. We propose a new Progressive Growing Channel Attentive Non-Local (ProCAN) network for lung nodule classification. The proposed method addresses this challenge from three different aspects. First, we enrich the Non-Local network by adding channel-wise attention capability to it. Second, we apply Curriculum Learning principles, whereby we first train our model on easy examples before hard ones. Third, as the classification task gets harder during the Curriculum learning, our model is progressively grown to increase its capability of handling the task at hand. We examined our proposed method on two different public datasets and compared its performance with state-of-the-art methods in the literature. The results show that the ProCAN model outperforms state-of-the-art methods and achieves an AUC of 98.05% and an accuracy of 95.28% on the LIDC-IDRI dataset. Moreover, we conducted extensive ablation studies to analyze the contribution and effects of each new component of our proposed method. |
Given the following machine learning model name: Graph Finite-State Automaton, provide a description of the model | **Graph Finite-State Automaton**, or **GFSA**, is a differentiable layer for learning graph structure that adds a new edge type (expressed as a weighted adjacency matrix) to a base graph. This layer can be trained end-to-end to add derived relationships (edges) to arbitrary graph-structured data based on performance on a downstream task. |
Given the following machine learning model name: BLOOMZ, provide a description of the model | **BLOOMZ** is a multitask prompted finetuning (MTF) variant of BLOOM. |
Given the following machine learning model name: CPC v2, provide a description of the model | **Contrastive Predictive Coding v2 (CPC v2)** is a self-supervised learning approach that builds upon the original [CPC](https://paperswithcode.com/method/contrastive-predictive-coding) with several improvements. These improvements include:
- **Model capacity** - The third residual stack of [ResNet](https://paperswithcode.com/method/resnet)-101 (originally containing 23 blocks, 1024-dimensional feature maps, and 256-dimensional bottleneck layers), is converted to use 46 blocks, with 4096-dimensional feature maps and 512-dimensional bottleneck layers: ResNet-161.
- **Layer Normalization** - The authors find CPC with [batch normalization](https://paperswithcode.com/method/batch-normalization) harms downstream performance. They hypothesize this is due to batch normalization allowing large models to find a trivial solution to CPC: it introduces a dependency between patches (through the batch statistics) that can be exploited to bypass the constraints on the receptive field. They replace batch normalization with [layer normalization](https://paperswithcode.com/method/layer-normalization).
- **Predicting lengths and directions** - patches are predicted with contexts from both directions rather than just spatially underneath.
- **Patch-based Augmentation** - Utilising "color dropping" which randomly drops two of the three color channels in each patch, as well as random horizontal flips.
Consistent with prior results, this new architecture delivers better performance. |
Given the following machine learning model name: Linformer, provide a description of the model | **Linformer** is a linear [Transformer](https://paperswithcode.com/method/transformer) that utilises a linear self-attention mechanism to tackle the self-attention bottleneck with [Transformer models](https://paperswithcode.com/methods/category/transformers). The original [scaled dot-product attention](https://paperswithcode.com/method/scaled) is decomposed into multiple smaller attentions through linear projections, such that the combination of these operations forms a low-rank factorization of the original attention. |
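A toy NumPy sketch of the Linformer idea (the sequence-length projections `E` and `F` are random placeholders for learned matrices; illustration only):

```python
import numpy as np

# Toy sketch of Linformer's linear self-attention: keys and values are
# projected from sequence length n down to k << n, so the attention map is
# (n x k) rather than (n x n).
def linformer_attention(Q, K, V, E, F):
    n, d = Q.shape
    K_proj = E @ K                              # (k, d): keys projected along sequence dim
    V_proj = F @ V                              # (k, d): values projected along sequence dim
    scores = Q @ K_proj.T / np.sqrt(d)          # (n, k) instead of (n, n)
    P = np.exp(scores - scores.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)           # row-wise softmax
    return P @ V_proj                           # (n, d) attention output
```

With fixed $k$, both memory and compute scale linearly in the sequence length $n$.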
Given the following machine learning model name: Decentralized Distributed Proximal Policy Optimization, provide a description of the model | **Decentralized Distributed Proximal Policy Optimization (DD-PPO)** is a method for distributed reinforcement learning in resource-intensive simulated environments. DD-PPO is distributed (uses multiple machines), decentralized (lacks a centralized server), and synchronous (no computation is ever "stale"), making it conceptually simple and easy to implement.
Proximal Policy Optimization, or [PPO](https://paperswithcode.com/method/ppo), is a policy gradient method for reinforcement learning. The motivation was to have an algorithm with the data efficiency and reliable performance of [TRPO](https://paperswithcode.com/method/trpo), while using only first-order optimization.
Let $r\_{t}\left(\theta\right)$ denote the probability ratio $r\_{t}\left(\theta\right) = \frac{\pi\_{\theta}\left(a\_{t}\mid{s\_{t}}\right)}{\pi\_{\theta\_{old}}\left(a\_{t}\mid{s\_{t}}\right)}$, so $r\_{t}\left(\theta\_{old}\right) = 1$. TRPO maximizes a “surrogate” objective:
$$ L^{v}\left({\theta}\right) = \hat{\mathbb{E}}\_{t}\left[\frac{\pi\_{\theta}\left(a\_{t}\mid{s\_{t}}\right)}{\pi\_{\theta\_{old}}\left(a\_{t}\mid{s\_{t}}\right)}\hat{A}\_{t}\right] = \hat{\mathbb{E}}\_{t}\left[r\_{t}\left(\theta\right)\hat{A}\_{t}\right] $$
As a general abstraction, DD-PPO implements the following:
at step $k$, worker $n$ has a copy of the parameters, $\theta^k_n$, calculates the gradient, $\delta \theta^k_n$, and updates $\theta$ via
$$ \theta^{k+1}\_n = \text{ParamUpdate}\Big(\theta^{k}\_n, \text{AllReduce}\big(\delta \theta^k\_1, \ldots, \delta \theta^k\_N\big)\Big) = \text{ParamUpdate}\Big(\theta^{k}\_n, \frac{1}{N} \sum_{i=1}^{N} { \delta \theta^k_i} \Big) $$
where $\text{ParamUpdate}$ is any first-order optimization technique (e.g. gradient descent) and $\text{AllReduce}$ performs a reduction (e.g. mean) over all copies of a variable and returns the result to all workers.
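A toy in-process simulation of this synchronous step (real DD-PPO runs one worker per GPU/machine and uses NCCL or Gloo collectives; `grad_fn` here stands in for the PPO gradient computed on each worker's rollouts):

```python
import numpy as np

# Toy simulation of the decentralized synchronous update (illustration only).
def all_reduce_mean(grads):
    return sum(grads) / len(grads)

def ddppo_step(thetas, grad_fn, lr=0.1):
    grads = [grad_fn(theta) for theta in thetas]    # each worker's local gradient
    avg = all_reduce_mean(grads)                    # AllReduce: mean over workers
    # Identical ParamUpdate (plain gradient descent) applied by every worker,
    # so all copies of theta stay synchronized without a central server.
    return [theta - lr * avg for theta in thetas]
```

Because every worker applies the same averaged gradient, the parameter copies remain bitwise-identical across steps with no parameter server.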
Distributed DataParallel scales very well (near-linear scaling up to 32,000 GPUs), and is reasonably simple to implement (all workers synchronously running identical code). |