Dataset schema (column: type, min to max value/length):

uid: int64 (4 to 318k)
paper_url: string (length 39 to 81)
arxiv_id: string (length 9 to 16)
title: string (length 6 to 365)
abstract: string (length 0 to 7.27k)
url_abs: string (length 17 to 601)
url_pdf: string (length 21 to 819)
proceeding: string (length 7 to 1.03k)
authors: list
tasks: list
date: float64 (422B to 1,672B)
methods: list
__index_level_0__: int64 (1 to 197k)
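Assuming this dump is a Papers with Code-style export following the column schema above, a single record can be sketched as a plain Python dataclass. This is a minimal illustration, not an official loader: the class name is invented, the abstract is truncated for brevity, and only the first two of the record's ten authors are shown.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PaperRecord:
    uid: int
    paper_url: str
    arxiv_id: Optional[str]      # null for non-arXiv entries
    title: str
    abstract: str
    url_abs: str
    url_pdf: str
    proceeding: Optional[str]    # e.g. "ICLR 2021 1"; often null
    authors: list
    tasks: list
    date: float                  # Unix epoch, in milliseconds
    methods: list
    index_level_0: int           # "__index_level_0__" in the raw dump

# First record of the dump, abridged for illustration.
rec = PaperRecord(
    uid=172_339,
    paper_url="https://paperswithcode.com/paper/maximum-a-posteriori-signal-recovery-for",
    arxiv_id="2010.15682",
    title="Maximum a posteriori signal recovery for optical coherence tomography "
          "angiography image generation and denoising",
    abstract="Optical coherence tomography angiography (OCTA) is ...",  # truncated
    url_abs="https://arxiv.org/abs/2010.15682v1",
    url_pdf="https://arxiv.org/pdf/2010.15682v1.pdf",
    proceeding=None,
    authors=["Lennart Husvogt", "Stefan B. Ploner"],  # first two of ten
    tasks=["Denoising", "Image Generation"],
    date=1_603_929_600_000.0,
    methods=[],
    index_level_0=25_324,
)
print(rec.tasks)  # ['Denoising', 'Image Generation']
```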

uid: 172,339
paper_url: https://paperswithcode.com/paper/maximum-a-posteriori-signal-recovery-for
arxiv_id: 2010.15682
title: Maximum a posteriori signal recovery for optical coherence tomography angiography image generation and denoising
Optical coherence tomography angiography (OCTA) is a novel and clinically promising imaging modality to image retinal and sub-retinal vasculature. Based on repeated optical coherence tomography (OCT) scans, intensity changes are observed over time and used to compute OCTA image data. OCTA data are prone to noise and artifacts caused by variations in flow speed and patient movement. We propose a novel iterative maximum a posteriori signal recovery algorithm in order to generate OCTA volumes with reduced noise and increased image quality. This algorithm is based on previous work on probabilistic OCTA signal models and maximum likelihood estimates. Reconstruction results using total variation minimization and wavelet shrinkage for regularization were compared against an OCTA ground truth volume, merged from six co-registered single OCTA volumes. The results show a significant improvement in peak signal-to-noise ratio and structural similarity. The presented algorithm brings together OCTA image generation and Bayesian statistics and can be developed into new OCTA image generation and denoising algorithms.
url_abs: https://arxiv.org/abs/2010.15682v1
url_pdf: https://arxiv.org/pdf/2010.15682v1.pdf
proceeding: null
authors: [ "Lennart Husvogt", "Stefan B. Ploner", "Siyu Chen", "Daniel Stromer", "Julia Schottenhamml", "A. Yasin Alibhai", "Eric Moult", "Nadia K. Waheed", "James G. Fujimoto", "Andreas Maier" ]
tasks: [ "Denoising", "Image Generation" ]
date: 1,603,929,600,000
methods: []
__index_level_0__: 25,324
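The date value 1,603,929,600,000 a few lines above is a Unix timestamp in milliseconds, which is why the schema types it as float64. A quick sketch of decoding it (assuming UTC, which matches the midnight-aligned values throughout the dump):

```python
from datetime import datetime, timezone

DATE_MS = 1_603_929_600_000  # "date" field of the record above

# Convert milliseconds to seconds, then to an aware UTC datetime.
dt = datetime.fromtimestamp(DATE_MS / 1000, tz=timezone.utc)
print(dt.date())  # 2020-10-29, consistent with arXiv id 2010.15682 (October 2020)
```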

uid: 3,841
paper_url: https://paperswithcode.com/paper/code-completion-with-neural-attention-and
arxiv_id: 1711.09573
title: Code Completion with Neural Attention and Pointer Networks
Intelligent code completion has become an essential research task to accelerate modern software development. To facilitate effective code completion for dynamically-typed programming languages, we apply neural language models by learning from large codebases, and develop a tailored attention mechanism for code completion. However, standard neural language models even with attention mechanism cannot correctly predict the out-of-vocabulary (OoV) words that restrict the code completion performance. In this paper, inspired by the prevalence of locally repeated terms in program source code, and the recently proposed pointer copy mechanism, we propose a pointer mixture network for better predicting OoV words in code completion. Based on the context, the pointer mixture network learns to either generate a within-vocabulary word through an RNN component, or regenerate an OoV word from local context through a pointer component. Experiments on two benchmarked datasets demonstrate the effectiveness of our attention mechanism and pointer mixture network on the code completion task.
url_abs: http://arxiv.org/abs/1711.09573v2
url_pdf: http://arxiv.org/pdf/1711.09573v2.pdf
proceeding: null
authors: [ "Jian Li", "Yue Wang", "Michael R. Lyu", "Irwin King" ]
tasks: [ "Code Completion" ]
date: 1,511,740,800,000
methods: []
__index_level_0__: 140,067

uid: 151,672
paper_url: https://paperswithcode.com/paper/naist-s-machine-translation-systems-for-iwslt
arxiv_id: null
title: NAIST's Machine Translation Systems for IWSLT 2020 Conversational Speech Translation Task
This paper describes NAIST's NMT system submitted to the IWSLT 2020 conversational speech translation task. We focus on the translation of disfluent speech transcripts that include ASR errors and non-grammatical utterances. We tried a domain adaptation method by transferring the styles of out-of-domain data (United Nations Parallel Corpus) to be like in-domain data (Fisher transcripts). Our results showed that the NMT model with domain adaptation outperformed a baseline. In addition, a slight improvement from the style transfer was observed.
url_abs: https://aclanthology.org/2020.iwslt-1.21
url_pdf: https://aclanthology.org/2020.iwslt-1.21.pdf
proceeding: WS 2020 7
authors: [ "Ryo Fukuda", "Katsuhito Sudoh", "Satoshi Nakamura" ]
tasks: [ "Domain Adaptation", "Machine Translation", "Style Transfer" ]
date: 1,593,561,600,000
methods: []
__index_level_0__: 124,264

uid: 124,349
paper_url: https://paperswithcode.com/paper/influence-aware-memory-for-deep-reinforcement-1
arxiv_id: 1911.07643
title: Influence-aware Memory Architectures for Deep Reinforcement Learning
Due to its perceptual limitations, an agent may have too little information about the state of the environment to act optimally. In such cases, it is important to keep track of the observation history to uncover hidden state. Recent deep reinforcement learning methods use recurrent neural networks (RNN) to memorize past observations. However, these models are expensive to train and have convergence difficulties, especially when dealing with high dimensional input spaces. In this paper, we propose influence-aware memory (IAM), a theoretically inspired memory architecture that tries to alleviate the training difficulties by restricting the input of the recurrent layers to those variables that influence the hidden state information. Moreover, as opposed to standard RNNs, in which every piece of information used for estimating Q values is inevitably fed back into the network for the next prediction, our model allows information to flow without being necessarily stored in the RNN's internal memory. Results indicate that, by letting the recurrent layers focus on a small fraction of the observation variables while processing the rest of the information with a feedforward neural network, we can outperform standard recurrent architectures both in training speed and policy performance. This approach also reduces runtime and obtains better scores than methods that stack multiple observations to remove partial observability.
url_abs: https://arxiv.org/abs/1911.07643v4
url_pdf: https://arxiv.org/pdf/1911.07643v4.pdf
proceeding: null
authors: [ "Miguel Suau", "Jinke He", "Elena Congeduti", "Rolf A. N. Starre", "Aleksander Czechowski", "Frans A. Oliehoek" ]
tasks: [ "reinforcement-learning" ]
date: 1,574,035,200,000
methods: []
__index_level_0__: 166,238

uid: 101,001
paper_url: https://paperswithcode.com/paper/deep-unified-multimodal-embeddings-for
arxiv_id: 1905.07075
title: Deep Unified Multimodal Embeddings for Understanding both Content and Users in Social Media Networks
There has been an explosion of multimodal content generated on social media networks in the last few years, which has necessitated a deeper understanding of social media content and user behavior. We present a novel content-independent content-user-reaction model for social multimedia content analysis. Compared to prior works that generally tackle semantic content understanding and user behavior modeling in isolation, we propose a generalized solution to these problems within a unified framework. We embed users, images and text drawn from open social media in a common multimodal geometric space, using a novel loss function designed to cope with distant and disparate modalities, and thereby enable seamless three-way retrieval. Our model not only outperforms unimodal embedding based methods on cross-modal retrieval tasks but also shows improvements stemming from jointly solving the two tasks on Twitter data. We also show that the user embeddings learned within our joint multimodal embedding model are better at predicting user interests compared to those learned with unimodal content on Instagram data. Our framework thus goes beyond the prior practice of using explicit leader-follower link information to establish affiliations by extracting implicit content-centric affiliations from isolated users. We provide qualitative results to show that the user clusters emerging from learned embeddings have consistent semantics and the ability of our model to discover fine-grained semantics from noisy and unstructured data. Our work reveals that social multimodal content is inherently multimodal and possesses a consistent structure because in social networks meaning is created through interactions between users and content.
url_abs: https://arxiv.org/abs/1905.07075v3
url_pdf: https://arxiv.org/pdf/1905.07075v3.pdf
proceeding: null
authors: [ "Karan Sikka", "Lucas Van Bramer", "Ajay Divakaran" ]
tasks: [ "Cross-Modal Retrieval" ]
date: 1,558,051,200,000
methods: []
__index_level_0__: 108,730

uid: 105,815
paper_url: https://paperswithcode.com/paper/few-shot-learning-with-per-sample-rich
arxiv_id: 1906.03859
title: Few-Shot Learning with Per-Sample Rich Supervision
Learning with few samples is a major challenge for parameter-rich models like deep networks. In contrast, people learn complex new concepts even from very few examples, suggesting that the sample complexity of learning can often be reduced. Many approaches to few-shot learning build on transferring a representation from well-sampled classes, or using meta learning to favor architectures that can learn with few samples. Unfortunately, such approaches often struggle when learning in an online way or with non-stationary data streams. Here we describe a new approach to learn with fewer samples, by using additional information that is provided per sample. Specifically, we show how the sample complexity can be reduced by providing semantic information about the relevance of features per sample, like information about the presence of objects in a scene or confidence of detecting attributes in an image. We provide an improved generalization error bound for this case. We cast the problem of using per-sample feature relevance by using a new ellipsoid-margin loss, and develop an online algorithm that minimizes this loss effectively. Empirical evaluation on two machine vision benchmarks for scene classification and fine-grain bird classification demonstrate the benefits of this approach for few-shot learning.
url_abs: https://arxiv.org/abs/1906.03859v1
url_pdf: https://arxiv.org/pdf/1906.03859v1.pdf
proceeding: null
authors: [ "Roman Visotsky", "Yuval Atzmon", "Gal Chechik" ]
tasks: [ "Few-Shot Learning", "Classification", "Meta-Learning", "Scene Classification" ]
date: 1,560,124,800,000
methods: []
__index_level_0__: 81,212

uid: 9,528
paper_url: https://paperswithcode.com/paper/constrained-image-generation-using-binarized
arxiv_id: 1802.08795
title: Constrained Image Generation Using Binarized Neural Networks with Decision Procedures
We consider the problem of binary image generation with given properties. This problem arises in a number of practical applications, including generation of artificial porous medium for an electrode of lithium-ion batteries, for composed materials, etc. A generated image represents a porous medium and, as such, it is subject to two sets of constraints: topological constraints on the structure and process constraints on the physical process over this structure. To perform image generation we need to define a mapping from a porous medium to its physical process parameters. For a given geometry of a porous medium, this mapping can be done by solving a partial differential equation (PDE). However, embedding a PDE solver into the search procedure is computationally expensive. We use a binarized neural network to approximate a PDE solver. This allows us to encode the entire problem as a logical formula. Our main contribution is that, for the first time, we show that this problem can be tackled using decision procedures. Our experiments show that our model is able to produce random constrained images that satisfy both topological and process constraints.
url_abs: http://arxiv.org/abs/1802.08795v1
url_pdf: http://arxiv.org/pdf/1802.08795v1.pdf
proceeding: null
authors: [ "Svyatoslav Korneev", "Nina Narodytska", "Luca Pulina", "Armando Tacchella", "Nikolaj Bjorner", "Mooly Sagiv" ]
tasks: [ "Image Generation" ]
date: 1,519,430,400,000
methods: []
__index_level_0__: 121,913

uid: 63,916
paper_url: https://paperswithcode.com/paper/learning-to-predict-denotational
arxiv_id: null
title: Learning to Predict Denotational Probabilities For Modeling Entailment
We propose a framework that captures the denotational probabilities of words and phrases by embedding them in a vector space, and present a method to induce such an embedding from a dataset of denotational probabilities. We show that our model successfully predicts denotational probabilities for unseen phrases, and that its predictions are useful for textual entailment datasets such as SICK and SNLI.
url_abs: https://aclanthology.org/E17-1068
url_pdf: https://aclanthology.org/E17-1068.pdf
proceeding: EACL 2017 4
authors: [ "Alice Lai", "Julia Hockenmaier" ]
tasks: [ "Coreference Resolution", "Natural Language Inference" ]
date: 1,491,004,800,000
methods: []
__index_level_0__: 74,007

uid: 201,003
paper_url: https://paperswithcode.com/paper/adversarially-guided-actor-critic-1
arxiv_id: 2102.04376
title: Adversarially Guided Actor-Critic
Despite definite success in deep reinforcement learning problems, actor-critic algorithms are still confronted with sample inefficiency in complex environments, particularly in tasks where efficient exploration is a bottleneck. These methods consider a policy (the actor) and a value function (the critic) whose respective losses are built using different motivations and approaches. This paper introduces a third protagonist: the adversary. While the adversary mimics the actor by minimizing the KL-divergence between their respective action distributions, the actor, in addition to learning to solve the task, tries to differentiate itself from the adversary predictions. This novel objective stimulates the actor to follow strategies that could not have been correctly predicted from previous trajectories, making its behavior innovative in tasks where the reward is extremely rare. Our experimental analysis shows that the resulting Adversarially Guided Actor-Critic (AGAC) algorithm leads to more exhaustive exploration. Notably, AGAC outperforms current state-of-the-art methods on a set of various hard-exploration and procedurally-generated tasks.
url_abs: https://arxiv.org/abs/2102.04376v1
url_pdf: https://arxiv.org/pdf/2102.04376v1.pdf
proceeding: ICLR 2021 1
authors: [ "Yannis Flet-Berliac", "Johan Ferret", "Olivier Pietquin", "Philippe Preux", "Matthieu Geist" ]
tasks: [ "Efficient Exploration" ]
date: 1,612,742,400,000
methods: []
__index_level_0__: 50,348

uid: 75,241
paper_url: https://paperswithcode.com/paper/generative-entity-networks-disentangling
arxiv_id: null
title: Generative Entity Networks: Disentangling Entities and Attributes in Visual Scenes using Partial Natural Language Descriptions
Generative image models have made significant progress in the last few years, and are now able to generate low-resolution images which sometimes look realistic. However, the state-of-the-art models utilize fully entangled latent representations where small changes to a single neuron can affect every output pixel in relatively arbitrary ways, and different neurons have possibly arbitrary relationships with each other. This limits the ability of such models to generalize to new combinations or orientations of objects, as well as their ability to connect with more structured representations such as natural language, without explicit strong supervision. In this work we explore the synergistic effect of using partial natural language scene descriptions to help disentangle the latent entities visible in an image. We present a novel neural network architecture called Generative Entity Networks, which jointly generates both the natural language descriptions and the images from a set of latent entities. Our model is based on the variational autoencoder framework and makes use of visual attention to identify and characterise the visual attributes of each entity. Using the Shapeworld dataset, we show that our representation both enables a better generative model of images, leading to higher quality image samples, and creates more semantically useful representations that improve performance over purely discriminative models on a simple natural language yes/no question answering task.
url_abs: https://openreview.net/forum?id=BJInMmWC-
url_pdf: https://openreview.net/pdf?id=BJInMmWC-
proceeding: ICLR 2018 1
authors: [ "Charlie Nash", "Sebastian Nowozin", "Nate Kushman" ]
tasks: [ "Question Answering" ]
date: 1,514,764,800,000
methods: [ { "code_snippet_url": "https://github.com/L1aoXingyu/pytorch-beginner/blob/9c86be785c7c318a09cf29112dd1f1a58613239b/08-AutoEncoder/simple_autoencoder.py#L38", "description": "An **Autoencoder** is a bottleneck architecture that turns a high-dimensional input into a latent low-dimensional code (encoder), and...
__index_level_0__: 5,299

uid: 298,219
paper_url: https://paperswithcode.com/paper/where-are-my-neighbors-exploiting-patches
arxiv_id: 2206.00481
title: Where are my Neighbors? Exploiting Patches Relations in Self-Supervised Vision Transformer
Vision Transformers (ViTs) enabled the use of transformer architecture on vision tasks showing impressive performances when trained on big datasets. However, on relatively small datasets, ViTs are less accurate given their lack of inductive bias. To this end, we propose a simple but still effective self-supervised learning (SSL) strategy to train ViTs, that without any external annotation, can significantly improve the results. Specifically, we define a set of SSL tasks based on relations of image patches that the model has to solve before or jointly during the downstream training. Differently from ViT, our RelViT model optimizes all the output tokens of the transformer encoder that are related to the image patches, thus exploiting more training signal at each training step. We investigated our proposed methods on several image benchmarks finding that RelViT improves the SSL state-of-the-art methods by a large margin, especially on small datasets.
url_abs: https://arxiv.org/abs/2206.00481v1
url_pdf: https://arxiv.org/pdf/2206.00481v1.pdf
proceeding: null
authors: [ "Guglielmo Camporese", "Elena Izzo", "Lamberto Ballan" ]
tasks: [ "Inductive Bias", "Self-Supervised Learning" ]
date: 1,654,041,600,000
methods: []
__index_level_0__: 192,503

uid: 197,581
paper_url: https://paperswithcode.com/paper/fakebuster-a-deepfakes-detection-tool-for
arxiv_id: 2101.03321
title: FakeBuster: A DeepFakes Detection Tool for Video Conferencing Scenarios
This paper proposes a new DeepFake detector FakeBuster for detecting impostors during video conferencing and manipulated faces on social media. FakeBuster is a standalone deep learning based solution, which enables a user to detect if another person's video is manipulated or spoofed during a video conferencing based meeting. This tool is independent of video conferencing solutions and has been tested with Zoom and Skype applications. It uses a 3D convolutional neural network for predicting video segment-wise fakeness scores. The network is trained on a combination of datasets such as Deeperforensics, DFDC, VoxCeleb, and deepfake videos created using locally captured (for video conferencing scenarios) images. This leads to different environments and perturbations in the dataset, which improves the generalization of the deepfake network.
url_abs: https://arxiv.org/abs/2101.03321v1
url_pdf: https://arxiv.org/pdf/2101.03321v1.pdf
proceeding: null
authors: [ "Vineet Mehta", "Parul Gupta", "Ramanathan Subramanian", "Abhinav Dhall" ]
tasks: [ "Face Swapping" ]
date: 1,610,150,400,000
methods: []
__index_level_0__: 5,388

uid: 168,778
paper_url: https://paperswithcode.com/paper/a-deep-learning-based-interactive-sketching
arxiv_id: 2010.04413
title: A deep learning based interactive sketching system for fashion images design
In this work, we propose an interactive system to design diverse high-quality garment images from fashion sketches and the texture information. The major challenge behind this system is to generate high-quality and detailed texture according to the user-provided texture information. Prior works mainly use the texture patch representation and try to map a small texture patch to a whole garment image, hence unable to generate high-quality details. In contrast, inspired by intrinsic image decomposition, we decompose this task into texture synthesis and shading enhancement. In particular, we propose a novel bi-colored edge texture representation to synthesize textured garment images and a shading enhancer to render shading based on the grayscale edges. The bi-colored edge representation provides simple but effective texture cues and color constraints, so that the details can be better reconstructed. Moreover, with the rendered shading, the synthesized garment image becomes more vivid.
url_abs: https://arxiv.org/abs/2010.04413v1
url_pdf: https://arxiv.org/pdf/2010.04413v1.pdf
proceeding: null
authors: [ "Yao Li", "Xianggang Yu", "Xiaoguang Han", "Nianjuan Jiang", "Kui Jia", "Jiangbo Lu" ]
tasks: [ "Intrinsic Image Decomposition", "Texture Synthesis" ]
date: 1,602,201,600,000
methods: []
__index_level_0__: 17,119

uid: 227,557
paper_url: https://paperswithcode.com/paper/reinforcement-learning-based-dialogue-guided
arxiv_id: 2106.12384
title: Reinforcement Learning-based Dialogue Guided Event Extraction to Exploit Argument Relations
Event extraction is a fundamental task for natural language processing. Finding the roles of event arguments like event participants is essential for event extraction. However, doing so for real-life event descriptions is challenging because an argument's role often varies in different contexts. While the relationship and interactions between multiple arguments are useful for settling the argument roles, such information is largely ignored by existing approaches. This paper presents a better approach for event extraction by explicitly utilizing the relationships of event arguments. We achieve this through a carefully designed task-oriented dialogue system. To model the argument relation, we employ reinforcement learning and incremental learning to extract multiple arguments via a multi-turned, iterative process. Our approach leverages knowledge of the already extracted arguments of the same sentence to determine the role of arguments that would be difficult to decide individually. It then uses the newly obtained information to improve the decisions of previously extracted arguments. This two-way feedback process allows us to exploit the argument relations to effectively settle argument roles, leading to better sentence understanding and event extraction. Experimental results show that our approach consistently outperforms seven state-of-the-art event extraction methods for the classification of events and argument role and argument identification.
url_abs: https://arxiv.org/abs/2106.12384v2
url_pdf: https://arxiv.org/pdf/2106.12384v2.pdf
proceeding: null
authors: [ "Qian Li", "Hao Peng", "JianXin Li", "Jia Wu", "Yuanxing Ning", "Lihong Wang", "Philip S. Yu", "Zheng Wang" ]
tasks: [ "Event Extraction", "Incremental Learning", "reinforcement-learning" ]
date: 1,624,406,400,000
methods: []
__index_level_0__: 134,800

uid: 26,039
paper_url: https://paperswithcode.com/paper/adversarial-examples-for-generative-models
arxiv_id: 1702.06832
title: Adversarial examples for generative models
We explore methods of producing adversarial examples on deep generative models such as the variational autoencoder (VAE) and the VAE-GAN. Deep learning architectures are known to be vulnerable to adversarial examples, but previous work has focused on the application of adversarial examples to classification tasks. Deep generative models have recently become popular due to their ability to model input data distributions and generate realistic examples from those distributions. We present three classes of attacks on the VAE and VAE-GAN architectures and demonstrate them against networks trained on MNIST, SVHN and CelebA. Our first attack leverages classification-based adversaries by attaching a classifier to the trained encoder of the target generative model, which can then be used to indirectly manipulate the latent representation. Our second attack directly uses the VAE loss function to generate a target reconstruction image from the adversarial example. Our third attack moves beyond relying on classification or the standard loss for the gradient and directly optimizes against differences in source and target latent representations. We also motivate why an attacker might be interested in deploying such techniques against a target generative network.
url_abs: http://arxiv.org/abs/1702.06832v1
url_pdf: http://arxiv.org/pdf/1702.06832v1.pdf
proceeding: null
authors: [ "Jernej Kos", "Ian Fischer", "Dawn Song" ]
tasks: [ "Classification", "Classification" ]
date: 1,487,721,600,000
methods: [ { "code_snippet_url": "https://github.com/L1aoXingyu/pytorch-beginner/blob/9c86be785c7c318a09cf29112dd1f1a58613239b/08-AutoEncoder/simple_autoencoder.py#L38", "description": "An **Autoencoder** is a bottleneck architecture that turns a high-dimensional input into a latent low-dimensional code (encoder), and...
__index_level_0__: 153,759

uid: 279,975
paper_url: https://paperswithcode.com/paper/cake-a-scalable-commonsense-aware-framework
arxiv_id: 2202.13785
title: CAKE: A Scalable Commonsense-Aware Framework For Multi-View Knowledge Graph Completion
Knowledge graphs store a large number of factual triples while they are still incomplete, inevitably. The previous knowledge graph completion (KGC) models predict missing links between entities merely relying on fact-view data, ignoring the valuable commonsense knowledge. The previous knowledge graph embedding (KGE) techniques suffer from invalid negative sampling and the uncertainty of fact-view link prediction, limiting KGC's performance. To address the above challenges, we propose a novel and scalable Commonsense-Aware Knowledge Embedding (CAKE) framework to automatically extract commonsense from factual triples with entity concepts. The generated commonsense augments effective self-supervision to facilitate both high-quality negative sampling (NS) and joint commonsense and fact-view link prediction. Experimental results on the KGC task demonstrate that assembling our framework could enhance the performance of the original KGE models, and the proposed commonsense-aware NS module is superior to other NS techniques. Besides, our proposed framework could be easily adaptive to various KGE models and explain the predicted results.
url_abs: https://arxiv.org/abs/2202.13785v3
url_pdf: https://arxiv.org/pdf/2202.13785v3.pdf
proceeding: ACL 2022 5
authors: [ "Guanglin Niu", "Bo Li", "Yongfei Zhang", "ShiLiang Pu" ]
tasks: [ "Graph Embedding", "Knowledge Graph Completion", "Knowledge Graph Embedding", "Knowledge Graphs", "Link Prediction" ]
date: 1,645,747,200,000
methods: []
__index_level_0__: 53,744

uid: 184,651
paper_url: https://paperswithcode.com/paper/mufold-betaturn-a-deep-dense-inception
arxiv_id: 1808.04322
title: MUFold-BetaTurn: A Deep Dense Inception Network for Protein Beta-Turn Prediction
Beta-turn prediction is useful in protein function studies and experimental design. Although recent approaches using machine-learning techniques such as SVM, neural networks, and K-NN have achieved good results for beta-turn prediction, there is still significant room for improvement. As previous predictors utilized features in a sliding window of 4-20 residues to capture interactions among sequentially neighboring residues, such feature engineering may result in incomplete or biased features, and neglect interactions among long-range residues. Deep neural networks provide a new opportunity to address these issues. Here, we propose a deep dense inception network (DeepDIN) for beta-turn prediction, which takes advantage of the state-of-the-art deep neural network design of the DenseNet and the inception network. A test on a recent BT6376 benchmark shows that DeepDIN outperformed the previous best BetaTPred3 significantly in both the overall prediction accuracy and the nine-type beta-turn classification. A tool, called MUFold-BetaTurn, was developed, which is the first beta-turn prediction tool utilizing deep neural networks. The tool can be downloaded at http://dslsrv8.cs.missouri.edu/~cf797/MUFoldBetaTurn/download.html.
url_abs: http://arxiv.org/abs/1808.04322v1
url_pdf: http://arxiv.org/pdf/1808.04322v1.pdf
proceeding: null
authors: []
tasks: [ "Experimental Design", "Feature Engineering" ]
date: 1,534,118,400,000
methods: []
__index_level_0__: 97,061

uid: 137,241
paper_url: https://paperswithcode.com/paper/pool-based-unsupervised-active-learning-for
arxiv_id: 2003.07658
title: Pool-Based Unsupervised Active Learning for Regression Using Iterative Representativeness-Diversity Maximization (iRDM)
Active learning (AL) selects the most beneficial unlabeled samples to label, and hence a better machine learning model can be trained from the same number of labeled samples. Most existing active learning for regression (ALR) approaches are supervised, which means the sampling process must use some label information, or an existing regression model. This paper considers completely unsupervised ALR, i.e., how to select the samples to label without knowing any true label information. We propose a novel unsupervised ALR approach, iterative representativeness-diversity maximization (iRDM), to optimally balance the representativeness and the diversity of the selected samples. Experiments on 12 datasets from various domains demonstrated its effectiveness. Our iRDM can be applied to both linear regression and kernel regression, and it even significantly outperforms supervised ALR when the number of labeled samples is small.
url_abs: https://arxiv.org/abs/2003.07658v2
url_pdf: https://arxiv.org/pdf/2003.07658v2.pdf
proceeding: null
authors: [ "Ziang Liu", "Xue Jiang", "Hanbin Luo", "Weili Fang", "Jiajing Liu", "Dongrui Wu" ]
tasks: [ "Active Learning" ]
date: 1,584,403,200,000
methods: [ { "code_snippet_url": null, "description": "**Linear Regression** is a method for modelling a relationship between a dependent variable and independent variables. These models can be fit with numerous approaches. The most common is *least squares*, where we minimize the mean square error between the predict...
__index_level_0__: 120,211

uid: 293,867
paper_url: https://paperswithcode.com/paper/cross-modal-cloze-task-a-new-task-to-brain-to
arxiv_id: null
title: Cross-Modal Cloze Task: A New Task to Brain-to-Word Decoding
Decoding language from non-invasive brain activity has attracted increasing attention from both researchers in neuroscience and natural language processing. Due to the noisy nature of brain recordings, existing work has simplified brain-to-word decoding as a binary classification task which is to discriminate a brain signal between its corresponding word and a wrong one. This pairwise classification task, however, cannot promote the development of practical neural decoders for two reasons. First, it has to enumerate all pairwise combinations in the test set, so it is inefficient to predict a word in a large vocabulary. Second, a perfect pairwise decoder cannot guarantee the performance on direct classification. To overcome these and go a step further to a realistic neural decoder, we propose a novel Cross-Modal Cloze (CMC) task which is to predict the target word encoded in the neural image with a context as prompt. Furthermore, to address this task, we propose a general approach that leverages the pre-trained language model to predict the target word. To validate our method, we perform experiments on more than 20 participants from two brain imaging datasets. Our method achieves 28.91% top-1 accuracy and 54.19% top-5 accuracy on average across all participants, significantly outperforming several baselines. This result indicates that our model can serve as a state-of-the-art baseline for the CMC task. More importantly, it demonstrates that it is feasible to decode a certain word within a large vocabulary from its neural brain activity.
url_abs: https://aclanthology.org/2022.findings-acl.54
url_pdf: https://aclanthology.org/2022.findings-acl.54.pdf
proceeding: Findings (ACL) 2022 5
authors: [ "Shuxian Zou", "Shaonan Wang", "Jiajun Zhang", "Chengqing Zong" ]
tasks: [ "Language Modelling" ]
date: 1,651,363,200,000
methods: []
__index_level_0__: 154,832

uid: 227,847
paper_url: https://paperswithcode.com/paper/bayesian-inference-in-high-dimensional-time-1
arxiv_id: 2106.13379
title: Bayesian Inference in High-Dimensional Time-Series with the Orthogonal Stochastic Linear Mixing Model
Many modern time-series datasets contain large numbers of output response variables sampled for prolonged periods of time. For example, in neuroscience, the activities of 100s-1000's of neurons are recorded during behaviors and in response to sensory stimuli. Multi-output Gaussian process models leverage the nonparametric nature of Gaussian processes to capture structure across multiple outputs. However, this class of models typically assumes that the correlations between the output response variables are invariant in the input space. Stochastic linear mixing models (SLMM) assume the mixture coefficients depend on input, making them more flexible and effective to capture complex output dependence. However, currently, the inference for SLMMs is intractable for large datasets, making them inapplicable to several modern time-series problems. In this paper, we propose a new regression framework, the orthogonal stochastic linear mixing model (OSLMM) that introduces an orthogonal constraint amongst the mixing coefficients. This constraint reduces the computational burden of inference while retaining the capability to handle complex output dependence. We provide Markov chain Monte Carlo inference procedures for both SLMM and OSLMM and demonstrate superior model scalability and reduced prediction error of OSLMM compared with state-of-the-art methods on several real-world applications. In neurophysiology recordings, we use the inferred latent functions for compact visualization of population responses to auditory stimuli, and demonstrate superior results compared to a competing method (GPFA). Together, these results demonstrate that OSLMM will be useful for the analysis of diverse, large-scale time-series datasets.
https://arxiv.org/abs/2106.13379v2
https://arxiv.org/pdf/2106.13379v2.pdf
null
[ "Rui Meng", "Kristofer Bouchard" ]
[ "Bayesian Inference", "Gaussian Processes", "Time Series" ]
1,624,579,200,000
[ { "code_snippet_url": null, "description": "**Gaussian Processes** are non-parametric models for approximating functions. They rely upon a measure of similarity between points (the kernel function) to predict the value for an unseen point from training data. The models are fully probabilistic so uncertainty...
102,352
236,184
https://paperswithcode.com/paper/modulating-language-models-with-emotions
2108.07886
Modulating Language Models with Emotions
Generating context-aware language that embodies diverse emotions is an important step towards building empathetic NLP systems. In this paper, we propose a formulation of modulated layer normalization -- a technique inspired by computer vision -- that allows us to use large-scale language models for emotional response generation. In automatic and human evaluation on the MojiTalk dataset, our proposed modulated layer normalization method outperforms prior baseline methods while maintaining diversity, fluency, and coherence. Our method also obtains competitive performance even when using only 10% of the available training data.
https://arxiv.org/abs/2108.07886v1
https://arxiv.org/pdf/2108.07886v1.pdf
Findings (ACL) 2021 8
[ "Ruibo Liu", "Jason Wei", "Chenyan Jia", "Soroush Vosoughi" ]
[ "Response Generation" ]
1,629,158,400,000
[ { "code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8", "description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** direct...
97,900
290,977
https://paperswithcode.com/paper/defending-against-person-hiding-adversarial
2204.13004
Defending Against Person Hiding Adversarial Patch Attack with a Universal White Frame
Object detection has attracted great attention in the computer vision area and has emerged as an indispensable component in many vision systems. In the era of deep learning, many high-performance object detection networks have been proposed. Although these detection networks show high performance, they are vulnerable to adversarial patch attacks. Changing the pixels in a restricted region can easily fool the detection network in the physical world. In particular, person-hiding attacks are emerging as a serious problem in many safety-critical applications such as autonomous driving and surveillance systems. Although it is necessary to defend against an adversarial patch attack, very few efforts have been dedicated to defending against person-hiding attacks. To tackle the problem, in this paper, we propose a novel defense strategy that mitigates a person-hiding attack by optimizing defense patterns, while previous methods optimize the model. In the proposed method, a frame-shaped pattern called a 'universal white frame' (UWF) is optimized and placed on the outside of the image. To defend against adversarial patch attacks, UWF should have three properties (i) suppressing the effect of the adversarial patch, (ii) maintaining its original prediction, and (iii) applicable regardless of images. To satisfy the aforementioned properties, we propose a novel pattern optimization algorithm that can defend against the adversarial patch. Through comprehensive experiments, we demonstrate that the proposed method effectively defends against the adversarial patch attack.
https://arxiv.org/abs/2204.13004v1
https://arxiv.org/pdf/2204.13004v1.pdf
null
[ "Youngjoon Yu", "Hong Joo Lee", "Hakmin Lee", "Yong Man Ro" ]
[ "Autonomous Driving", "Object Detection" ]
1,651,017,600,000
[]
191,602
290,047
https://paperswithcode.com/paper/towards-fewer-labels-support-pair-active
2204.10008
Towards Fewer Labels: Support Pair Active Learning for Person Re-identification
Supervised-learning based person re-identification (re-id) require a large amount of manual labeled data, which is not applicable in practical re-id deployment. In this work, we propose a Support Pair Active Learning (SPAL) framework to lower the manual labeling cost for large-scale person reidentification. The support pairs can provide the most informative relationships and support the discriminative feature learning. Specifically, we firstly design a dual uncertainty selection strategy to iteratively discover support pairs and require human annotations. Afterwards, we introduce a constrained clustering algorithm to propagate the relationships of labeled support pairs to other unlabeled samples. Moreover, a hybrid learning strategy consisting of an unsupervised contrastive loss and a supervised support pair loss is proposed to learn the discriminative re-id feature representation. The proposed overall framework can effectively lower the labeling cost by mining and leveraging the critical support pairs. Extensive experiments demonstrate the superiority of the proposed method over state-of-the-art active learning methods on large-scale person re-id benchmarks.
https://arxiv.org/abs/2204.10008v1
https://arxiv.org/pdf/2204.10008v1.pdf
null
[ "Dapeng Jin", "Minxian Li" ]
[ "Active Learning", "Person Re-Identification" ]
1,650,499,200,000
[]
22,530
822
https://paperswithcode.com/paper/addition-of-code-mixed-features-to-enhance
1806.03821
Addition of Code Mixed Features to Enhance the Sentiment Prediction of Song Lyrics
Sentiment analysis, also called opinion mining, is the field of study that analyzes people's opinions,sentiments, attitudes and emotions. Songs are important to sentiment analysis since the songs and mood are mutually dependent on each other. Based on the selected song it becomes easy to find the mood of the listener, in future it can be used for recommendation. The song lyric is a rich source of datasets containing words that are helpful in analysis and classification of sentiments generated from it. Now a days we observe a lot of inter-sentential and intra-sentential code-mixing in songs which has a varying impact on audience. To study this impact we created a Telugu songs dataset which contained both Telugu-English code-mixed and pure Telugu songs. In this paper, we classify the songs based on its arousal as exciting or non-exciting. We develop a language identification tool and introduce code-mixing features obtained from it as additional features. Our system with these additional features attains 4-5% accuracy greater than traditional approaches on our dataset.
http://arxiv.org/abs/1806.03821v1
http://arxiv.org/pdf/1806.03821v1.pdf
null
[ "Gangula Rama Rohit Reddy", "Radhika Mamidi" ]
[ "Language Identification", "Opinion Mining", "Sentiment Analysis" ]
1,528,675,200,000
[]
174,454
6,803
https://paperswithcode.com/paper/multi-lingual-neural-title-generation-for-e
1804.01041
Multi-lingual neural title generation for e-Commerce browse pages
To provide better access of the inventory to buyers and better search engine optimization, e-Commerce websites are automatically generating millions of easily searchable browse pages. A browse page consists of a set of slot name/value pairs within a given category, grouping multiple items which share some characteristics. These browse pages require a title describing the content of the page. Since the number of browse pages are huge, manual creation of these titles is infeasible. Previous statistical and neural approaches depend heavily on the availability of large amounts of data in a language. In this research, we apply sequence-to-sequence models to generate titles for high- & low-resourced languages by leveraging transfer learning. We train these models on multi-lingual data, thereby creating one joint model which can generate titles in various different languages. Performance of the title generation system is evaluated on three different languages; English, German, and French, with a particular focus on low-resourced French language.
http://arxiv.org/abs/1804.01041v1
http://arxiv.org/pdf/1804.01041v1.pdf
NAACL 2018 6
[ "Prashant Mathur", "Nicola Ueffing", "Gregor Leusch" ]
[ "Transfer Learning" ]
1,522,713,600,000
[]
185,413
193,153
https://paperswithcode.com/paper/understanding-interpretability-by-generalized
2012.03089
Understanding Interpretability by generalized distillation in Supervised Classification
The ability to interpret decisions taken by Machine Learning (ML) models is fundamental to encourage trust and reliability in different practical applications. Recent interpretation strategies focus on human understanding of the underlying decision mechanisms of the complex ML models. However, these strategies are restricted by the subjective biases of humans. To dissociate from such human biases, we propose an interpretation-by-distillation formulation that is defined relative to other ML models. We generalize the distillation technique for quantifying interpretability, using an information-theoretic perspective, removing the role of ground-truth from the definition of interpretability. Our work defines the entropy of supervised classification models, providing bounds on the entropy of Piece-Wise Linear Neural Networks (PWLNs), along with the first theoretical bounds on the interpretability of PWLNs. We evaluate our proposed framework on the MNIST, Fashion-MNIST and Stanford40 datasets and demonstrate the applicability of the proposed theoretical framework in different supervised classification scenarios.
https://arxiv.org/abs/2012.03089v1
https://arxiv.org/pdf/2012.03089v1.pdf
null
[ "Adit Agarwal", "Dr. K. K. Shukla", "Arjan Kuijper", "Anirban Mukhopadhyay" ]
[ "Classification" ]
1,607,126,400,000
[ { "code_snippet_url": null, "description": "Please enter a description about the method here", "full_name": "Interpretability", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Image Models** are methods that build representations of images f...
60,649
313,207
https://paperswithcode.com/paper/improving-multilayer-perceptron-mlp-based
2208.09711
Improving Multilayer-Perceptron(MLP)-based Network Anomaly Detection with Birch Clustering on CICIDS-2017 Dataset
Machine learning algorithms have been widely used in intrusion detection systems, including Multi-layer Perceptron (MLP). In this study, we proposed a two-stage model that combines the Birch clustering algorithm and MLP classifier to improve the performance of network anomaly multi-classification. In our proposed method, we first apply Birch or Kmeans as an unsupervised clustering algorithm to the CICIDS-2017 dataset to pre-group the data. The generated pseudo-label is then added as an additional feature to the training of the MLP-based classifier. The experimental results show that using Birch and K-Means clustering for data pre-grouping can improve intrusion detection system performance. Our method can achieve 99.73% accuracy in multi-classification using Birch clustering, which is better than similar researches using a stand-alone MLP model.
https://arxiv.org/abs/2208.09711v1
https://arxiv.org/pdf/2208.09711v1.pdf
null
[ "Yuhua Yin", "Julian Jang-Jaccard", "Fariza Sabrina", "Jin Kwak" ]
[ "Anomaly Detection", "Intrusion Detection", "Pseudo Label" ]
1,660,953,600,000
[ { "code_snippet_url": "https://cryptoabout.info", "description": "**k-Means Clustering** is a clustering algorithm that divides a training set into $k$ different clusters of examples that are near each other. It works by initializing $k$ different centroids {$\\mu\\left(1\\right),\\ldots,\\mu\\left(k\\right...
92,023
52,195
https://paperswithcode.com/paper/twitter-sentiment-analysis-via-bi-sense-emoji
1807.07961
Twitter Sentiment Analysis via Bi-sense Emoji Embedding and Attention-based LSTM
Sentiment analysis on large-scale social media data is important to bridge the gaps between social media contents and real world activities including political election prediction, individual and public emotional status monitoring and analysis, and so on. Although textual sentiment analysis has been well studied based on platforms such as Twitter and Instagram, analysis of the role of extensive emoji uses in sentiment analysis remains light. In this paper, we propose a novel scheme for Twitter sentiment analysis with extra attention on emojis. We first learn bi-sense emoji embeddings under positive and negative sentimental tweets individually, and then train a sentiment classifier by attending on these bi-sense emoji embeddings with an attention-based long short-term memory network (LSTM). Our experiments show that the bi-sense embedding is effective for extracting sentiment-aware embeddings of emojis and outperforms the state-of-the-art models. We also visualize the attentions to show that the bi-sense emoji embedding provides better guidance on the attention mechanism to obtain a more robust understanding of the semantics and sentiments.
http://arxiv.org/abs/1807.07961v2
http://arxiv.org/pdf/1807.07961v2.pdf
null
[ "Yuxiao Chen", "Jianbo Yuan", "Quanzeng You", "Jiebo Luo" ]
[ "Sentiment Analysis", "Twitter Sentiment Analysis" ]
1,532,044,800,000
[ { "code_snippet_url": "https://github.com/aykutaaykut/Memory-Networks", "description": "A **Memory Network** provides a memory component that can be read from and written to with the inference capabilities of a neural network model. The motivation is that many neural networks lack a long-term memory compone...
87,823
164,737
https://paperswithcode.com/paper/an-incentive-mechanism-for-federated-learning
2009.10269
An Incentive Mechanism for Federated Learning in Wireless Cellular network: An Auction Approach
Federated Learning (FL) is a distributed learning framework that can deal with the distributed issue in machine learning and still guarantee high learning performance. However, it is impractical that all users will sacrifice their resources to join the FL algorithm. This motivates us to study the incentive mechanism design for FL. In this paper, we consider a FL system that involves one base station (BS) and multiple mobile users. The mobile users use their own data to train the local machine learning model, and then send the trained models to the BS, which generates the initial model, collects local models and constructs the global model. Then, we formulate the incentive mechanism between the BS and mobile users as an auction game where the BS is an auctioneer and the mobile users are the sellers. In the proposed game, each mobile user submits its bids according to the minimal energy cost that the mobile users experiences in participating in FL. To decide winners in the auction and maximize social welfare, we propose the primal-dual greedy auction mechanism. The proposed mechanism can guarantee three economic properties, namely, truthfulness, individual rationality and efficiency. Finally, numerical results are shown to demonstrate the performance effectiveness of our proposed mechanism.
https://arxiv.org/abs/2009.10269v1
https://arxiv.org/pdf/2009.10269v1.pdf
null
[ "Tra Huong Thi Le", "Nguyen H. Tran", "Yan Kyaw Tun", "Minh N. H. Nguyen", "Shashi Raj Pandey", "Zhu Han", "Choong Seon Hong" ]
[ "Federated Learning" ]
1,600,732,800,000
[]
25,683
314,754
https://paperswithcode.com/paper/spoofing-aware-attention-based-asv-back-end
2209.00423
Spoofing-Aware Attention based ASV Back-end with Multiple Enrollment Utterances and a Sampling Strategy for the SASV Challenge 2022
Current state-of-the-art automatic speaker verification (ASV) systems are vulnerable to presentation attacks, and several countermeasures (CMs), which distinguish bona fide trials from spoofing ones, have been explored to protect ASV. However, ASV systems and CMs are generally developed and optimized independently without considering their inter-relationship. In this paper, we propose a new spoofing-aware ASV back-end module that efficiently computes a combined ASV score based on speaker similarity and CM score. In addition to the learnable fusion function of the two scores, the proposed back-end module has two types of attention components, scaled-dot and feed-forward self-attention, so that intra-relationship information of multiple enrollment utterances can also be learned at the same time. Moreover, a new effective trials-sampling strategy is designed for simulating new spoofing-aware verification scenarios introduced in the Spoof-Aware Speaker Verification (SASV) challenge 2022.
https://arxiv.org/abs/2209.00423v1
https://arxiv.org/pdf/2209.00423v1.pdf
null
[ "Chang Zeng", "Lin Zhang", "Meng Liu", "Junichi Yamagishi" ]
[ "Speaker Verification" ]
1,661,990,400,000
[]
186,256
256,745
https://paperswithcode.com/paper/parbleu-augmenting-metrics-with-automatic
null
ParBLEU: Augmenting Metrics with Automatic Paraphrases for the WMT’20 Metrics Shared Task
We describe parBLEU, parCHRF++, and parESIM, which augment baseline metrics with automatically generated paraphrases produced by PRISM (Thompson and Post, 2020a), a multilingual neural machine translation system. We build on recent work studying how to improve BLEU by using diverse automatically paraphrased references (Bawden et al., 2020), extending experiments to the multilingual setting for the WMT2020 metrics shared task and for three base metrics. We compare their capacity to exploit up to 100 additional synthetic references. We find that gains are possible when using additional, automatically paraphrased references, although they are not systematic. However, segment-level correlations, particularly into English, are improved for all three metrics and even with higher numbers of paraphrased references.
https://aclanthology.org/2020.wmt-1.98
https://aclanthology.org/2020.wmt-1.98.pdf
WMT (EMNLP) 2020 11
[ "Rachel Bawden", "Biao Zhang", "Andre Tättar", "Matt Post" ]
[ "Machine Translation" ]
1,604,188,800,000
[]
32,834
207,192
https://paperswithcode.com/paper/learning-to-simulate-on-sparse-trajectory
2103.11845
Learning to Simulate on Sparse Trajectory Data
Simulation of the real-world traffic can be used to help validate the transportation policies. A good simulator means the simulated traffic is similar to real-world traffic, which often requires dense traffic trajectories (i.e., with a high sampling rate) to cover dynamic situations in the real world. However, in most cases, the real-world trajectories are sparse, which makes simulation challenging. In this paper, we present a novel framework ImInGAIL to address the problem of learning to simulate the driving behavior from sparse real-world data. The proposed architecture incorporates data interpolation with the behavior learning process of imitation learning. To the best of our knowledge, we are the first to tackle the data sparsity issue for behavior learning problems. We investigate our framework on both synthetic and real-world trajectory datasets of driving vehicles, showing that our method outperforms various baselines and state-of-the-art methods.
https://arxiv.org/abs/2103.11845v1
https://arxiv.org/pdf/2103.11845v1.pdf
null
[ "Hua Wei", "Chacha Chen", "Chang Liu", "Guanjie Zheng", "Zhenhui Li" ]
[ "Imitation Learning" ]
1,616,371,200,000
[]
148,197
13,588
https://paperswithcode.com/paper/a-variational-approach-to-shape-from-shading
1709.10354
A Variational Approach to Shape-from-shading Under Natural Illumination
A numerical solution to shape-from-shading under natural illumination is presented. It builds upon an augmented Lagrangian approach for solving a generic PDE-based shape-from-shading model which handles directional or spherical harmonic lighting, orthographic or perspective projection, and greylevel or multi-channel images. Real-world applications to shading-aware depth map denoising, refinement and completion are presented.
http://arxiv.org/abs/1709.10354v2
http://arxiv.org/pdf/1709.10354v2.pdf
null
[ "Yvain Quéau", "Jean Mélou", "Fabien Castan", "Daniel Cremers", "Jean-Denis Durou" ]
[ "Denoising" ]
1,506,643,200,000
[]
131,612
212,741
https://paperswithcode.com/paper/unsupervised-learning-of-explainable-parse
2104.04998
Unsupervised Learning of Explainable Parse Trees for Improved Generalisation
Recursive neural networks (RvNN) have been shown useful for learning sentence representations and helped achieve competitive performance on several natural language inference tasks. However, recent RvNN-based models fail to learn simple grammar and meaningful semantics in their intermediate tree representation. In this work, we propose an attention mechanism over Tree-LSTMs to learn more meaningful and explainable parse tree structures. We also demonstrate the superior performance of our proposed model on natural language inference, semantic relatedness, and sentiment analysis tasks and compare them with other state-of-the-art RvNN based methods. Further, we present a detailed qualitative and quantitative analysis of the learned parse trees and show that the discovered linguistic structures are more explainable, semantically meaningful, and grammatically correct than recent approaches. The source code of the paper is available at https://github.com/atul04/Explainable-Latent-Structures-Using-Attention.
https://arxiv.org/abs/2104.04998v1
https://arxiv.org/pdf/2104.04998v1.pdf
null
[ "Atul Sahay", "Ayush Maheshwari", "Ritesh Kumar", "Ganesh Ramakrishnan", "Manjesh Kumar Hanawal", "Kavi Arya" ]
[ "Natural Language Inference", "Sentiment Analysis" ]
1,618,099,200,000
[]
137,812
277,335
https://paperswithcode.com/paper/towards-weakly-supervised-text-spotting-using
2202.05508
Towards Weakly-Supervised Text Spotting using a Multi-Task Transformer
Text spotting end-to-end methods have recently gained attention in the literature due to the benefits of jointly optimizing the text detection and recognition components. Existing methods usually have a distinct separation between the detection and recognition branches, requiring exact annotations for the two tasks. We introduce TextTranSpotter (TTS), a transformer-based approach for text spotting and the first text spotting framework which may be trained with both fully- and weakly-supervised settings. By learning a single latent representation per word detection, and using a novel loss function based on the Hungarian loss, our method alleviates the need for expensive localization annotations. Trained with only text transcription annotations on real data, our weakly-supervised method achieves competitive performance with previous state-of-the-art fully-supervised methods. When trained in a fully-supervised manner, TextTranSpotter shows state-of-the-art results on multiple benchmarks.
https://arxiv.org/abs/2202.05508v2
https://arxiv.org/pdf/2202.05508v2.pdf
CVPR 2022 1
[ "Yair Kittenplon", "Inbal Lavi", "Sharon Fogel", "Yarin Bar", "R. Manmatha", "Pietro Perona" ]
[ "Text Spotting" ]
1,644,537,600,000
[]
6,532
168,919
https://paperswithcode.com/paper/a-novel-strategy-for-covid-19-classification
2010.05690
COVID-19 Classification Using Stacked Ensembles: A Comprehensive Analysis
The issue of COVID-19, increasing with a massive mortality rate. This led to the WHO declaring it as a pandemic. In this situation, it is crucial to perform efficient and fast diagnosis. The reverse transcript polymerase chain reaction (RTPCR) test is conducted to detect the presence of SARS-CoV-2. This test is time-consuming and instead chest CT (or Chest X-ray) can be used for a fast and accurate diagnosis. Automated diagnosis is considered to be important as it reduces human effort and provides accurate and low-cost tests. The contributions of our research are three-fold. First, it is aimed to analyse the behaviour and performance of variant vision models ranging from Inception to NAS networks with the appropriate fine-tuning procedure. Second, the behaviour of these models is visually analysed by plotting CAMs for individual networks and determining classification performance with AUCROC curves. Thirdly, stacked ensembles techniques are imparted to provide higher generalisation on combining the fine-tuned models, in which six ensemble neural networks are designed by combining the existing fine-tuned networks. Implying these stacked ensembles provides a great generalization to the models. The ensemble model designed by combining all the fine-tuned networks obtained a state-of-the-art accuracy score of 99.17%. The precision and recall for the COVID-19 class are 99.99% and 89.79% respectively, which resembles the robustness of the stacked ensembles.
https://arxiv.org/abs/2010.05690v3
https://arxiv.org/pdf/2010.05690v3.pdf
null
[ "Lalith Bharadwaj B", "Rohit Boddeda", "Sai Vardhan K", "Madhu G" ]
[ "Classification" ]
1,602,028,800,000
[]
2,990
264,422
https://paperswithcode.com/paper/multilingual-pre-training-with-language-and
null
Multilingual pre-training with Language and Task Adaptation for Multilingual Text Style Transfer
We exploit the pre-trained seq2seq model mBART for multilingual text style transfer. Using machine translated data as well as gold aligned English sentences yields state-of-the-art results in the three target languages we consider. Besides, in view of the general scarcity of parallel data, we propose a modular approach for multilingual formality transfer, which consists of two training strategies that target adaptation to both language and task. Our approach achieves competitive performance without monolingual task-specific parallel data and can be applied to other style transfer tasks as well as to other languages.
https://openreview.net/forum?id=rWPLdCIiY6g
https://openreview.net/pdf?id=rWPLdCIiY6g
ACL ARR November 2021 11
[ "Anonymous" ]
[ "Style Transfer", "Text Style Transfer" ]
1,637,020,800,000
[ { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329", "description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nH...
2,148
215,525
https://paperswithcode.com/paper/discovering-an-aid-policy-to-minimize-student
2104.10258
Discovering an Aid Policy to Minimize Student Evasion Using Offline Reinforcement Learning
High dropout rates in tertiary education expose a lack of efficiency that causes frustration of expectations and financial waste. Predicting students at risk is not enough to avoid student dropout. Usually, an appropriate aid action must be discovered and applied in the proper time for each student. To tackle this sequential decision-making problem, we propose a decision support method to the selection of aid actions for students using offline reinforcement learning to support decision-makers effectively avoid student dropout. Additionally, a discretization of student's state space applying two different clustering methods is evaluated. Our experiments using logged data of real students shows, through off-policy evaluation, that the method should achieve roughly 1.0 to 1.5 times as much cumulative reward as the logged policy. So, it is feasible to help decision-makers apply appropriate aid actions and, possibly, reduce student dropout.
https://arxiv.org/abs/2104.10258v1
https://arxiv.org/pdf/2104.10258v1.pdf
null
[ "Leandro M. de Lima", "Renato A. Krohling" ]
[ "Reinforcement Learning" ]
1,618,876,800,000
[ { "code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275", "description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (...
77,804
8,616
https://paperswithcode.com/paper/learning-approximate-inference-networks-for
1803.03376
Learning Approximate Inference Networks for Structured Prediction
Structured prediction energy networks (SPENs; Belanger & McCallum 2016) use neural network architectures to define energy functions that can capture arbitrary dependencies among parts of structured outputs. Prior work used gradient descent for inference, relaxing the structured output to a set of continuous variables and then optimizing the energy with respect to them. We replace this use of gradient descent with a neural network trained to approximate structured argmax inference. This "inference network" outputs continuous values that we treat as the output structure. We develop large-margin training criteria for joint training of the structured energy function and inference network. On multi-label classification we report speed-ups of 10-60x compared to (Belanger et al, 2017) while also improving accuracy. For sequence labeling with simple structured energies, our approach performs comparably to exact inference while being much faster at test time. We then demonstrate improved accuracy by augmenting the energy with a "label language model" that scores entire output label sequences, showing it can improve handling of long-distance dependencies in part-of-speech tagging. Finally, we show how inference networks can replace dynamic programming for test-time inference in conditional random fields, suggestive for their general use for fast inference in structured settings.
http://arxiv.org/abs/1803.03376v1
http://arxiv.org/pdf/1803.03376v1.pdf
ICLR 2018 1
[ "Lifu Tu", "Kevin Gimpel" ]
[ "Language Modelling", "Multi-Label Classification", "Part-Of-Speech Tagging", "Structured Prediction" ]
1,520,553,600,000
[]
56,649
221,481
https://paperswithcode.com/paper/stytr-2-unbiased-image-style-transfer-with
2105.14576
StyTr$^2$: Image Style Transfer with Transformers
The goal of image style transfer is to render an image with artistic features guided by a style reference while maintaining the original content. Owing to the locality in convolutional neural networks (CNNs), extracting and maintaining the global information of input images is difficult. Therefore, traditional neural style transfer methods face biased content representation. To address this critical issue, we take long-range dependencies of input images into account for image style transfer by proposing a transformer-based approach called StyTr$^2$. In contrast with visual transformers for other vision tasks, StyTr$^2$ contains two different transformer encoders to generate domain-specific sequences for content and style, respectively. Following the encoders, a multi-layer transformer decoder is adopted to stylize the content sequence according to the style sequence. We also analyze the deficiency of existing positional encoding methods and propose the content-aware positional encoding (CAPE), which is scale-invariant and more suitable for image style transfer tasks. Qualitative and quantitative experiments demonstrate the effectiveness of the proposed StyTr$^2$ compared with state-of-the-art CNN-based and flow-based approaches. Code and models are available at https://github.com/diyiiyiii/StyTR-2.
https://arxiv.org/abs/2105.14576v3
https://arxiv.org/pdf/2105.14576v3.pdf
null
[ "Yingying Deng", "Fan Tang", "WeiMing Dong", "Chongyang Ma", "Xingjia Pan", "Lei Wang", "Changsheng Xu" ]
[ "Style Transfer" ]
1,622,332,800,000
[]
130,489
206,830
https://paperswithcode.com/paper/consistency-based-active-learning-for-object
2103.10374
Consistency-based Active Learning for Object Detection
Active learning aims to improve the performance of task model by selecting the most informative samples with a limited budget. Unlike most recent works that focused on applying active learning for image classification, we propose an effective Consistency-based Active Learning method for object Detection (CALD), which fully explores the consistency between original and augmented data. CALD has three appealing benefits. (i) CALD is systematically designed by investigating the weaknesses of existing active learning methods, which do not take the unique challenges of object detection into account. (ii) CALD unifies box regression and classification with a single metric, which is not concerned by active learning methods for classification. CALD also focuses on the most informative local region rather than the whole image, which is beneficial for object detection. (iii) CALD not only gauges individual information for sample selection, but also leverages mutual information to encourage a balanced data distribution. Extensive experiments show that CALD significantly outperforms existing state-of-the-art task-agnostic and detection-specific active learning methods on general object detection datasets. Based on the Faster R-CNN detector, CALD consistently surpasses the baseline method (random selection) by 2.9/2.8/0.8 mAP on average on PASCAL VOC 2007, PASCAL VOC 2012, and MS COCO. Code is available at \url{https://github.com/we1pingyu/CALD}
https://arxiv.org/abs/2103.10374v3
https://arxiv.org/pdf/2103.10374v3.pdf
null
[ "Weiping Yu", "Sijie Zhu", "Taojiannan Yang", "Chen Chen" ]
[ "Active Learning", "Classification", "Image Classification", "Object Detection" ]
1,616,025,600,000
[ { "code_snippet_url": "https://github.com/pytorch/vision/blob/5e9ebe8dadc0ea2841a46cfcd82a93b4ce0d4519/torchvision/ops/roi_pool.py#L10", "description": "**Region of Interest Pooling**, or **RoIPool**, is an operation for extracting a small feature map (e.g., $7×7$) from each RoI in detection and segmentatio...
62,718
52,784
https://paperswithcode.com/paper/news-session-based-recommendations-using-deep
1808.00076
News Session-Based Recommendations using Deep Neural Networks
News recommender systems aim to personalize users' experiences and help them discover relevant articles from a large and dynamic search space. The news domain is therefore a challenging scenario for recommendation, due to its sparse user profiling, fast-growing number of items, accelerated decay of item value, and dynamic shifts in user preferences. Some promising results have recently been achieved by applying deep learning techniques to recommender systems, especially for item feature extraction and for session-based recommendation with recurrent neural networks. In this paper, we propose an instantiation of CHAMELEON, a deep learning meta-architecture for news recommender systems. This architecture is composed of two modules: the first learns representations of news articles based on their text and metadata, and the second provides session-based recommendations using recurrent neural networks. The recommendation task addressed in this work is next-item prediction for user sessions: "what is the next most likely article a user might read in a session?" The architecture leverages the context of user sessions to provide additional information in such an extreme cold-start scenario of news recommendation. User behavior and item features are merged in a hybrid recommendation approach. As a complementary contribution, we also propose a temporal offline evaluation method for a more realistic evaluation of this task, considering dynamic factors that affect global readership interests such as popularity, recency, and seasonality. Experiments with an extensive number of session-based recommendation methods were performed, and the proposed instantiation of the CHAMELEON meta-architecture obtained a significant relative improvement in top-n accuracy and ranking metrics (10% on Hit Rate and 13% on MRR) over the best benchmark methods.
http://arxiv.org/abs/1808.00076v3
http://arxiv.org/pdf/1808.00076v3.pdf
null
[ "Gabriel de Souza P. Moreira", "Felipe Ferreira", "Adilson Marques da Cunha" ]
[ "News Recommendation", "Recommendation Systems", "Session-Based Recommendations" ]
1,532,995,200,000
[]
166,734
254,403
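Hit Rate and MRR, the metrics reported in the abstract above, are standard next-item evaluation measures. A minimal sketch of how they are typically computed (function names are illustrative, not from the paper's code):

```python
def hit_rate_at_n(ranked_lists, true_items, n=10):
    """Fraction of sessions whose true next item appears in the top-n recommendations."""
    hits = sum(1 for recs, t in zip(ranked_lists, true_items) if t in recs[:n])
    return hits / len(true_items)

def mrr_at_n(ranked_lists, true_items, n=10):
    """Mean reciprocal rank of the true next item within the top-n recommendations
    (sessions where it is absent contribute zero)."""
    total = 0.0
    for recs, t in zip(ranked_lists, true_items):
        if t in recs[:n]:
            total += 1.0 / (recs[:n].index(t) + 1)
    return total / len(true_items)
```

For example, with ranked lists `[["a","b","c"], ["x","y","z"]]` and true next items `["b","z"]`, HR@3 is 1.0 and MRR@3 is (1/2 + 1/3) / 2 = 5/12.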
https://paperswithcode.com/paper/are-factuality-checkers-reliable-adversarial
null
Are Factuality Checkers Reliable? Adversarial Meta-evaluation of Factuality in Summarization
With the continuous upgrading of summarization systems driven by deep neural networks, researchers have higher requirements on the quality of the generated summaries, which should be not only fluent and informative but also factually correct. As a result, the field of factual evaluation has developed rapidly recently. Despite initial progress in evaluating generated summaries, the meta-evaluation methodologies of factuality metrics are limited by their opacity, leading to insufficient understanding of the factuality metrics' relative advantages and their applicability. In this paper, we present an adversarial meta-evaluation methodology that allows us to (i) diagnose the fine-grained strengths and weaknesses of 6 existing top-performing metrics over 24 diagnostic test datasets, and (ii) search for directions for further improvement via data augmentation. Our observations from this work motivate several calls for future research. We make all code, diagnostic test datasets, and trained factuality models available: https://github.com/zide05/AdvFact.
https://aclanthology.org/2021.findings-emnlp.179
https://aclanthology.org/2021.findings-emnlp.179.pdf
Findings (EMNLP) 2021 11
[ "Yiran Chen", "PengFei Liu", "Xipeng Qiu" ]
[ "Data Augmentation" ]
1,635,724,800,000
[]
110,904
169,201
https://paperswithcode.com/paper/block-term-tensor-neural-networks
2010.04963
Block-term Tensor Neural Networks
Deep neural networks (DNNs) have achieved outstanding performance in a wide range of applications, e.g., image classification, natural language processing, etc. Despite the good performance, the huge number of parameters in DNNs brings challenges to efficient training and also to deployment in low-end devices with limited computing resources. In this paper, we explore the correlations in the weight matrices and approximate them with low-rank block-term tensors. We name the corresponding new structure block-term tensor layers (BT-layers), which can be easily adapted to neural network models such as CNNs and RNNs. In particular, the inputs and the outputs in BT-layers are reshaped into low-dimensional high-order tensors with a similar or improved representation power. Extensive experiments have demonstrated that BT-layers in CNNs and RNNs can achieve a very large compression ratio on the number of parameters while preserving or improving the representation power of the original DNNs.
https://arxiv.org/abs/2010.04963v2
https://arxiv.org/pdf/2010.04963v2.pdf
null
[ "Jinmian Ye", "Guangxi Li", "Di Chen", "Haiqin Yang", "Shandian Zhe", "Zenglin Xu" ]
[ "Image Classification" ]
1,602,288,000,000
[]
150,066
244,768
https://paperswithcode.com/paper/aggregation-with-feature-detection
null
Aggregation With Feature Detection
Aggregating features from different depths of a network is widely adopted to improve network capability. Many modern architectures are equipped with skip connections, which means feature aggregation actually happens in all these networks. Since different features carry different semantic meanings, there are inconsistencies and incompatibilities to be solved. However, existing works naively blend deep features via element-wise summation or concatenation followed by a convolution; better feature aggregation methods beyond summation or concatenation are rarely explored. In this paper, given two layers of features to be aggregated, we first detect and identify where and what needs to be updated in one layer, then replace the feature at the identified location with information from the other layer. This process, which we call DEtect-rePLAce (DEPLA), enables us to avoid inconsistent patterns while keeping useful information in the merged outputs. Experimental results demonstrate that our method largely boosts multiple baselines, e.g., ResNet, FishNet and FPN, on three major vision tasks including ImageNet classification, MS COCO object detection, and instance segmentation.
http://openaccess.thecvf.com//content/ICCV2021/html/Sun_Aggregation_With_Feature_Detection_ICCV_2021_paper.html
http://openaccess.thecvf.com//content/ICCV2021/papers/Sun_Aggregation_With_Feature_Detection_ICCV_2021_paper.pdf
ICCV 2021 10
[ "Shuyang Sun", "Xiaoyu Yue", "Xiaojuan Qi", "Wanli Ouyang", "Victor Adrian Prisacariu", "Philip H.S. Torr" ]
[ "Instance Segmentation", "Object Detection", "Semantic Segmentation" ]
1,609,459,200,000
[ { "code_snippet_url": "", "description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - me...
39,332
186,426
https://paperswithcode.com/paper/towards-adversarial-learning-of-speaker
1903.09606
Towards adversarial learning of speaker-invariant representation for speech emotion recognition
Speech emotion recognition (SER) has attracted great attention in recent years due to the high demand for emotionally intelligent speech interfaces. Deriving speaker-invariant representations for speech emotion recognition is crucial. In this paper, we propose to apply adversarial training to SER to learn speaker-invariant representations. Our model consists of three parts: a representation learning sub-network with a time-delay neural network (TDNN) and an LSTM with statistical pooling, an emotion classification network, and a speaker classification network. Both the emotion and speaker classification networks take the output of the representation learning network as input. Two training strategies are employed: one based on domain adversarial training (DAT) and the other based on cross-gradient training (CGT). Besides the conventional dataset, we also evaluate our proposed models on a much larger publicly available emotion dataset with 250 speakers. Evaluation results show that on IEMOCAP, DAT and CGT provide 5.6% and 7.4% improvement, respectively, over a baseline system without speaker-invariant representation learning on 5-fold cross validation. On the larger emotion dataset, while CGT fails to yield better results than the baseline, DAT can still provide a 9.8% relative improvement on a standalone test set.
http://arxiv.org/abs/1903.09606v1
http://arxiv.org/pdf/1903.09606v1.pdf
null
[]
[ "Classification", "Emotion Classification", "Emotion Recognition", "Representation Learning", "Speech Emotion Recognition" ]
1,553,212,800,000
[]
91,257
110,612
https://paperswithcode.com/paper/chinese-relation-extraction-with-multi
null
Chinese Relation Extraction with Multi-Grained Information and External Linguistic Knowledge
Chinese relation extraction is conducted using neural networks with either character-based or word-based inputs, and most existing methods typically suffer from segmentation errors and ambiguity of polysemy. To address the issues, we propose a multi-grained lattice framework (MG lattice) for Chinese relation extraction to take advantage of multi-grained language information and external linguistic knowledge. In this framework, (1) we incorporate word-level information into character sequence inputs so that segmentation errors can be avoided. (2) We also model multiple senses of polysemous words with the help of external linguistic knowledge, so as to alleviate polysemy ambiguity. Experiments on three real-world datasets in distinct domains show consistent and significant superiority and robustness of our model, as compared with other baselines. We will release the source code of this paper in the future.
https://aclanthology.org/P19-1430
https://aclanthology.org/P19-1430.pdf
ACL 2019 7
[ "Ziran Li", "Ning Ding", "Zhiyuan Liu", "Hai-Tao Zheng", "Ying Shen" ]
[ "Relation Extraction" ]
1,561,939,200,000
[]
122,862
98,124
https://paperswithcode.com/paper/transformable-bottleneck-networks
1904.06458
Transformable Bottleneck Networks
We propose a novel approach to performing fine-grained 3D manipulation of image content via a convolutional neural network, which we call the Transformable Bottleneck Network (TBN). It applies given spatial transformations directly to a volumetric bottleneck within our encoder-bottleneck-decoder architecture. Multi-view supervision encourages the network to learn to spatially disentangle the feature space within the bottleneck. The resulting spatial structure can be manipulated with arbitrary spatial transformations. We demonstrate the efficacy of TBNs for novel view synthesis, achieving state-of-the-art results on a challenging benchmark. We demonstrate that the bottlenecks produced by networks trained for this task contain meaningful spatial structure that allows us to intuitively perform a variety of image manipulations in 3D, well beyond the rigid transformations seen during training. These manipulations include non-uniform scaling, non-rigid warping, and combining content from different images. Finally, we extract explicit 3D structure from the bottleneck, performing impressive 3D reconstruction from a single input image.
https://arxiv.org/abs/1904.06458v5
https://arxiv.org/pdf/1904.06458v5.pdf
ICCV 2019 10
[ "Kyle Olszewski", "Sergey Tulyakov", "Oliver Woodford", "Hao Li", "Linjie Luo" ]
[ "3D Reconstruction", "Novel View Synthesis" ]
1,555,113,600,000
[]
120,802
107,961
https://paperswithcode.com/paper/volmap-a-real-time-model-for-semantic
1906.11873
VolMap: A Real-time Model for Semantic Segmentation of a LiDAR surrounding view
This paper introduces VolMap, a real-time approach for the semantic segmentation of a 3D LiDAR surround-view system in autonomous vehicles. We designed an optimized deep convolutional neural network that can accurately segment the point cloud produced by a 360° LiDAR setup, where the input consists of a volumetric bird's-eye view with LiDAR height layers used as input channels. We further investigated the usage of a multi-LiDAR setup and its effect on the performance of the semantic segmentation task. Our evaluations are carried out on a large-scale 3D object detection benchmark containing a LiDAR cocoon setup, along with the KITTI dataset, where the per-point segmentation labels are derived from 3D bounding boxes. We show that VolMap achieves an excellent balance between high accuracy and real-time running on CPU.
https://arxiv.org/abs/1906.11873v1
https://arxiv.org/pdf/1906.11873v1.pdf
null
[ "Hager Radi", "Waleed Ali" ]
[ "3D Object Detection", "Autonomous Vehicles", "Object Detection", "Semantic Segmentation" ]
1,560,297,600,000
[ { "code_snippet_url": null, "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively,...
71,378
123,059
https://paperswithcode.com/paper/using-dynamic-embeddings-to-improve-static
1911.02929
How Can BERT Help Lexical Semantics Tasks?
Contextualized embeddings such as BERT can serve as strong input representations for NLP tasks, outperforming their static embedding counterparts such as skip-gram, CBOW and GloVe. However, such embeddings are dynamic, calculated according to a sentence-level context, which limits their use in lexical semantics tasks. We address this issue by making use of dynamic embeddings as word representations in training static embeddings, thereby leveraging their strong representation power for disambiguating context information. Results show that this method leads to improvements over traditional static embeddings on a range of lexical semantics tasks, obtaining the best reported results on seven datasets.
https://arxiv.org/abs/1911.02929v2
https://arxiv.org/pdf/1911.02929v2.pdf
null
[ "Yile Wang", "Leyang Cui", "Yue Zhang" ]
[ "Word Embeddings" ]
1,573,084,800,000
[ { "code_snippet_url": "", "description": "**GloVe Embeddings** are a type of word embedding that encode the co-occurrence probability ratio between two words as vector differences. GloVe uses a weighted least squares objective $J$ that minimizes the difference between the dot product of the vectors of two w...
135,696
307,643
https://paperswithcode.com/paper/funqg-molecular-representation-learning-via
2207.08597
FunQG: Molecular Representation Learning Via Quotient Graphs
Learning expressive molecular representations is crucial to facilitate the accurate prediction of molecular properties. Despite the significant advancement of graph neural networks (GNNs) in molecular representation learning, they generally face limitations such as neighbor explosion, under-reaching, over-smoothing, and over-squashing. GNNs also usually have high computational complexity because of their large number of parameters. Typically, such limitations emerge or worsen when facing relatively large graphs or deeper GNN architectures. An idea to overcome these problems is to simplify a molecular graph into a small, rich, and informative one on which GNNs are more efficient and less challenging to train. To this end, we propose a novel molecular graph coarsening framework named FunQG, which utilizes functional groups, influential building blocks of a molecule that determine its properties, based on a graph-theoretic concept called the quotient graph. Our experiments show that the resulting informative graphs are much smaller than the original molecular graphs and are thus good candidates for training GNNs. We apply FunQG to popular molecular property prediction benchmarks and compare the performance of a GNN architecture on the obtained datasets with several state-of-the-art baselines on the original datasets. This method significantly outperforms the previous baselines on various datasets, in addition to a dramatic reduction in the number of parameters and low computational complexity. FunQG can therefore be used as a simple, cost-effective, and robust method for the molecular representation learning problem.
https://arxiv.org/abs/2207.08597v1
https://arxiv.org/pdf/2207.08597v1.pdf
null
[ "Hossein Hajiabolhassan", "Zahra Taheri", "Ali Hojatnia", "Yavar Taheri Yeganeh" ]
[ "Molecular Property Prediction", "Representation Learning" ]
1,658,102,400,000
[]
54,202
182,790
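The quotient-graph concept underlying FunQG is standard graph theory: each partition class (here, a functional group) becomes one node, and two classes are adjacent if any edge joins their members. A minimal sketch of that construction (a toy illustration, not the paper's functional-group detection pipeline):

```python
def quotient_graph(edges, partition):
    """Build the quotient graph of an undirected graph under a node partition:
    each partition class becomes one node; two classes are adjacent iff some
    edge connects their members (intra-class edges are dropped)."""
    q_edges = set()
    for u, v in edges:
        cu, cv = partition[u], partition[v]
        if cu != cv:
            q_edges.add((min(cu, cv), max(cu, cv)))
    return sorted(q_edges)
```

For instance, collapsing the path 0-1-2-3 under the partition {0,1} → "A", {2,3} → "B" yields the single quotient edge ("A", "B"), a much smaller graph carrying the inter-group connectivity.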
https://paperswithcode.com/paper/mosaicked-multispectral-image-compression
1801.03577
Mosaicked multispectral image compression based on inter- and intra-band correlation
Multispectral imaging has been utilized in many fields, but the cost of capturing and storing image data is still high. Single-sensor cameras with multispectral filter arrays can reduce the cost of capturing images at the expense of slightly lower image quality. When multispectral filter arrays are used, conventional multispectral image compression methods can be applied after interpolation, but the compressed image data after interpolation has some redundancy because the interpolated data are computed from the captured raw data. In this paper, we propose an efficient image compression method for single-sensor multispectral cameras. The proposed method encodes the captured multispectral data before interpolation. We also propose a new spectral transform method for the compression of mosaicked multispectral images. This transform is designed by considering the filter arrangement and the spectral sensitivities of a multispectral filter array. The experimental results show that the proposed method achieves a higher peak signal-to-noise ratio at higher bit rates than a conventional compression method that encodes a multispectral image after interpolation, e.g., 3-dB gain over conventional compression when coding at rates of over 0.1 bit/pixel/bands.
http://arxiv.org/abs/1801.03577v1
http://arxiv.org/pdf/1801.03577v1.pdf
null
[]
[ "Image Compression" ]
1,515,542,400,000
[]
149,774
98,226
https://paperswithcode.com/paper/swtvm-exploring-the-automated-compilation-for
1904.07404
swTVM: Towards Optimized Tensor Code Generation for Deep Learning on Sunway Many-Core Processor
The flourishing of deep learning frameworks and hardware platforms has been demanding an efficient compiler that can shield the diversity in both software and hardware in order to provide application portability. Among existing deep learning compilers, TVM is well known for its efficiency in code generation and optimization across diverse hardware devices. Meanwhile, the Sunway many-core processor renders itself a competitive candidate for its attractive computational power in both scientific computing and deep learning workloads. This paper combines the trends in these two directions. Specifically, we propose swTVM, which extends the original TVM to support ahead-of-time compilation for architectures requiring cross-compilation such as Sunway. In addition, we leverage architecture features during compilation, such as the core group for massive parallelism, DMA for high-bandwidth memory transfer, and local device memory for data locality, in order to generate efficient code for deep learning workloads on Sunway. Experimental results show that the code generated by swTVM achieves a 1.79x speedup on average compared to the state-of-the-art deep learning framework on Sunway, across six representative benchmarks. This work is the first attempt from the compiler perspective to bridge the gap between deep learning and the Sunway processor, particularly with productivity and efficiency in mind. We believe this work will encourage more people to embrace the power of deep learning and the Sunway many-core processor.
https://arxiv.org/abs/1904.07404v3
https://arxiv.org/pdf/1904.07404v3.pdf
null
[ "Mingzhen Li", "Changxi Liu", "Jianjin Liao", "Xuegui Zheng", "Hailong Yang", "Rujun Sun", "Jun Xu", "Lin Gan", "Guangwen Yang", "Zhongzhi Luan", "Depei Qian" ]
[ "Code Generation" ]
1,555,372,800,000
[ { "code_snippet_url": "https://www.healthnutra.org/es/maxup/", "description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-li...
62,359
197,959
https://paperswithcode.com/paper/instantaneous-psd-estimation-for-speech
2007.00542
Instantaneous PSD Estimation for Speech Enhancement based on Generalized Principal Components
Power spectral density (PSD) estimates of various microphone signal components are essential to many speech enhancement procedures. As speech is highly non-stationary, performance improvements may be gained by maintaining time-variations in PSD estimates. In this paper, we propose an instantaneous PSD estimation approach based on generalized principal components. Similarly to other eigenspace-based PSD estimation approaches, we rely on recursive averaging in order to obtain a microphone signal correlation matrix estimate to be decomposed. However, instead of estimating the PSDs directly from the temporally smooth generalized eigenvalues of this matrix, yielding temporally smooth PSD estimates, we propose to estimate the PSDs from newly defined instantaneous generalized eigenvalues, yielding instantaneous PSD estimates. The instantaneous generalized eigenvalues are defined from the generalized principal components, i.e. a generalized eigenvector-based transform of the microphone signals. We further show that the smooth generalized eigenvalues can be understood as a recursive average of the instantaneous generalized eigenvalues. Simulation results comparing the multi-channel Wiener filter (MWF) with smooth and instantaneous PSD estimates indicate better speech enhancement performance for the latter. A MATLAB implementation is available online.
https://arxiv.org/abs/2007.00542v1
https://arxiv.org/pdf/2007.00542v1.pdf
null
[]
[ "Speech Enhancement" ]
1,593,561,600,000
[]
166,124
300,148
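The abstract's claim that the smooth generalized eigenvalues are a recursive average of the instantaneous ones can be checked numerically for a fixed transform vector, since a quadratic form is linear in the correlation matrix. A real-valued sketch (the paper works with complex, time-varying generalized eigenvectors; here `w` is a fixed stand-in):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, dim, T = 0.9, 4, 200
w = rng.standard_normal(dim)           # stand-in for one generalized eigenvector

R_smooth = np.zeros((dim, dim))        # recursively averaged correlation matrix
lam_smooth = 0.0                       # recursive average of instantaneous values
for _ in range(T):
    x = rng.standard_normal(dim)       # stand-in for a microphone snapshot
    R_smooth = alpha * R_smooth + (1 - alpha) * np.outer(x, x)
    lam_inst = w @ np.outer(x, x) @ w  # "instantaneous" quadratic form, (w.x)^2
    lam_smooth = alpha * lam_smooth + (1 - alpha) * lam_inst

# Recursive average of instantaneous values == quadratic form of smoothed matrix.
assert np.isclose(lam_smooth, w @ R_smooth @ w)
```

The identity holds exactly here because both recursions are linear with the same forgetting factor; the subtlety in the paper is that the generalized eigenvectors themselves change over time.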
https://paperswithcode.com/paper/transformer-based-urdu-handwritten-text
2206.04575
Transformer based Urdu Handwritten Text Optical Character Reader
Extracting handwritten text is one of the most important components of digitizing information and making it available at large scale. Handwriting optical character recognition (OCR) is a research problem in computer vision and natural language processing, and a lot of work has been done for English, but unfortunately very little work has been done for low-resourced languages such as Urdu. The Urdu script is very difficult because of its cursive nature and because characters change shape based on their relative position; therefore, a need arises to propose a model which can understand complex features and generalize to every kind of handwriting style. In this work, we propose a transformer-based Urdu handwritten text extraction model. As transformers have been very successful in natural language understanding tasks, we explore them further to understand complex Urdu handwriting.
https://arxiv.org/abs/2206.04575v1
https://arxiv.org/pdf/2206.04575v1.pdf
null
[ "Mohammad Daniyal Shaiq", "Musa Dildar Ahmed Cheema", "Ali Kamal" ]
[ "Natural Language Understanding", "Optical Character Recognition" ]
1,654,732,800,000
[]
884
308,450
https://paperswithcode.com/paper/revealing-secrets-from-pre-trained-models
2207.09539
Revealing Secrets From Pre-trained Models
With the growing burden of training deep learning models on large datasets, transfer learning has been widely adopted in many emerging deep learning algorithms. Transformer models such as BERT are the main players in natural language processing and use transfer learning as a de facto standard training method. A few big-data companies release pre-trained models trained on a few popular datasets, which end users and researchers fine-tune with their own datasets. Transfer learning significantly reduces the time and effort of model training. However, it comes at the cost of security concerns. In this paper, we present a new observation that pre-trained models and fine-tuned models have significantly high similarities in weight values. We also demonstrate that there exist vendor-specific computing patterns even for the same models. With these new findings, we propose a new model extraction attack that reveals the model architecture and the pre-trained model used by a black-box victim model through vendor-specific computing patterns, and then estimates the entire model weights based on the weight-value similarities between the fine-tuned model and the pre-trained model. We also show that the weight similarity can be leveraged to increase the feasibility of model extraction through a novel weight-extraction pruning.
https://arxiv.org/abs/2207.09539v1
https://arxiv.org/pdf/2207.09539v1.pdf
null
[ "Mujahid Al Rafi", "Yuan Feng", "Hyeran Jeon" ]
[ "Model extraction", "Transfer Learning" ]
1,658,188,800,000
[ { "code_snippet_url": "", "description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The posit...
135,419
207,542
https://paperswithcode.com/paper/watermark-faker-towards-forgery-of-digital
2103.12489
Watermark Faker: Towards Forgery of Digital Image Watermarking
Digital watermarking has been widely used to protect the copyright and integrity of multimedia data. Previous studies mainly focus on designing watermarking techniques that are robust to attacks that destroy the embedded watermarks. However, the emerging deep learning based image generation technology raises a new open issue: whether it is possible to generate fake watermarked images for circumvention. In this paper, we make the first attempt to develop digital image watermark fakers by using generative adversarial learning. Supposing that a set of paired original and watermarked images generated by the targeted watermarker is available, we use them to train a watermark faker with U-Net as the backbone, whose input is an original image and which, after domain-specific preprocessing, outputs a fake watermarked image. Our experiments show that the proposed watermark faker can effectively crack digital image watermarkers in both the spatial and frequency domains, suggesting the risk of such forgery attacks.
https://arxiv.org/abs/2103.12489v1
https://arxiv.org/pdf/2103.12489v1.pdf
null
[ "Ruowei Wang", "Chenguo Lin", "Qijun Zhao", "Feiyu Zhu" ]
[ "Image Generation" ]
1,616,457,600,000
[ { "code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/densenet.py#L113", "description": "A **Concatenated Skip Connection** is a type of skip connection that seeks to reuse features by concatenating them to new layers, allowing more informa...
136,527
266,244
https://paperswithcode.com/paper/transzero-attribute-guided-transformer-for
2112.01683
TransZero: Attribute-guided Transformer for Zero-Shot Learning
Zero-shot learning (ZSL) aims to recognize novel classes by transferring semantic knowledge from seen classes to unseen ones. Semantic knowledge is learned from attribute descriptions shared between different classes, which act as strong priors for localizing object attributes that represent discriminative region features, enabling significant visual-semantic interaction. Although some attention-based models have attempted to learn such region features in a single image, the transferability and discriminative attribute localization of visual features are typically neglected. In this paper, we propose an attribute-guided Transformer network, termed TransZero, to refine visual features and learn attribute localization for discriminative visual embedding representations in ZSL. Specifically, TransZero takes a feature augmentation encoder to alleviate the cross-dataset bias between ImageNet and ZSL benchmarks, and improves the transferability of visual features by reducing the entangled relative geometry relationships among region features. To learn locality-augmented visual features, TransZero employs a visual-semantic decoder to localize the image regions most relevant to each attribute in a given image, under the guidance of semantic attribute information. Then, the locality-augmented visual features and semantic vectors are used to conduct effective visual-semantic interaction in a visual-semantic embedding network. Extensive experiments show that TransZero achieves the new state of the art on three ZSL benchmarks. The codes are available at: \url{https://github.com/shiming-chen/TransZero}.
https://arxiv.org/abs/2112.01683v1
https://arxiv.org/pdf/2112.01683v1.pdf
null
[ "Shiming Chen", "Ziming Hong", "Yang Liu", "Guo-Sen Xie", "Baigui Sun", "Hao Li", "Qinmu Peng", "Ke Lu", "Xinge You" ]
[ "Zero-Shot Learning" ]
1,638,489,600,000
[ { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/b7bda236d18815052378c88081f64935427d7716/torch/optim/adam.py#L6", "description": "**Adam** is an adaptive learning rate optimization algorithm that utilises both momentum and scaling, combining the benefits of [RMSProp](https://paperswithcode.co...
159,757
154,284
https://paperswithcode.com/paper/policy-learning-with-partial-observation-and
2007.03155
Policy learning with partial observation and mechanical constraints for multi-person modeling
Extracting the rules of real-world biological multi-agent behaviors is a current challenge in various scientific and engineering fields. Biological agents generally have limited observation and mechanical constraints; however, most of the conventional data-driven models ignore such assumptions, resulting in lack of biological plausibility and model interpretability for behavioral analyses in biological and cognitive science. Here we propose sequential generative models with partial observation and mechanical constraints, which can visualize whose information the agents utilize and can generate biologically plausible actions. We formulate this as a decentralized multi-agent imitation learning problem, leveraging binary partial observation models with a Gumbel-Softmax reparameterization and policy models based on hierarchical variational recurrent neural networks with physical and biomechanical constraints. We investigate the empirical performances using real-world multi-person motion datasets from basketball and soccer games.
https://arxiv.org/abs/2007.03155v1
https://arxiv.org/pdf/2007.03155v1.pdf
null
[ "Keisuke Fujii", "Naoya Takeishi", "Yoshinobu Kawahara", "Kazuya Takeda" ]
[ "Imitation Learning" ]
1,594,080,000,000
[ { "code_snippet_url": null, "description": "Please enter a description about the method here", "full_name": "Interpretability", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Image Models** are methods that build representations of images f...
184,546
56,292
https://paperswithcode.com/paper/emi-exploration-with-mutual-information
1810.01176
EMI: Exploration with Mutual Information
Reinforcement learning algorithms struggle when the reward signal is very sparse. In these cases, naive random exploration methods essentially rely on a random walk to stumble onto a rewarding state. Recent works utilize intrinsic motivation to guide the exploration via generative models, predictive forward models, or discriminative modeling of novelty. We propose EMI, which is an exploration method that constructs embedding representation of states and actions that does not rely on generative decoding of the full observation but extracts predictive signals that can be used to guide exploration based on forward prediction in the representation space. Our experiments show competitive results on challenging locomotion tasks with continuous control and on image-based exploration tasks with discrete actions on Atari. The source code is available at https://github.com/snu-mllab/EMI .
https://arxiv.org/abs/1810.01176v6
https://arxiv.org/pdf/1810.01176v6.pdf
null
[ "Hyoungseok Kim", "Jaekyeom Kim", "Yeonwoo Jeong", "Sergey Levine", "Hyun Oh Song" ]
[ "Continuous Control" ]
1,538,438,400,000
[]
9,889
253,027
https://paperswithcode.com/paper/embedding-structured-dictionary-entries
null
Embedding Structured Dictionary Entries
Previous work has shown how to effectively use external resources such as dictionaries to improve English-language word embeddings, either by manipulating the training process or by applying post-hoc adjustments to the embedding space. We experiment with a multi-task learning approach for explicitly incorporating the structured elements of dictionary entries, such as user-assigned tags and usage examples, when learning embeddings for dictionary headwords. Our work generalizes several existing models for learning word embeddings from dictionaries. However, we find that the most effective representations overall are learned by simply training with a skip-gram objective over the concatenated text of all entries in the dictionary, giving no particular focus to the structure of the entries.
https://aclanthology.org/2020.insights-1.18
https://aclanthology.org/2020.insights-1.18.pdf
EMNLP (insights) 2020 11
[ "Steven Wilson", "Walid Magdy", "Barbara McGillivray", "Gareth Tyson" ]
[ "Learning Word Embeddings", "Multi-Task Learning", "Word Embeddings" ]
1,604,188,800,000
[]
129,159
50,499
https://paperswithcode.com/paper/evaluating-gammatone-frequency-cepstral
1806.09010
Evaluating Gammatone Frequency Cepstral Coefficients with Neural Networks for Emotion Recognition from Speech
Current approaches to speech emotion recognition focus on speech features that can capture the emotional content of a speech signal. Mel Frequency Cepstral Coefficients (MFCCs) are one of the most commonly used representations for audio speech recognition and classification. This paper proposes Gammatone Frequency Cepstral Coefficients (GFCCs) as a potentially better representation of speech signals for emotion recognition. The effectiveness of MFCC and GFCC representations are compared and evaluated over emotion and intensity classification tasks with fully connected and recurrent neural network architectures. The results provide evidence that GFCCs outperform MFCCs in speech emotion recognition.
http://arxiv.org/abs/1806.09010v1
http://arxiv.org/pdf/1806.09010v1.pdf
null
[ "Gabrielle K. Liu" ]
[ "Classification", "Emotion Recognition", "Speech Emotion Recognition", "Speech Recognition" ]
1,529,712,000,000
[]
179,379
267,412
https://paperswithcode.com/paper/semantic-search-as-extractive-paraphrase-span-1
2112.04886
Semantic Search as Extractive Paraphrase Span Detection
In this paper, we approach the problem of semantic search by framing the search task as paraphrase span detection, i.e. given a segment of text as a query phrase, the task is to identify its paraphrase in a given document, the same modelling setup as typically used in extractive question answering. On the Turku Paraphrase Corpus of 100,000 manually extracted Finnish paraphrase pairs including their original document context, we find that our paraphrase span detection model outperforms two strong retrieval baselines (lexical similarity and BERT sentence embeddings) by 31.9pp and 22.4pp respectively in terms of exact match, and by 22.3pp and 12.9pp in terms of token-level F-score. This demonstrates a strong advantage of modelling the task in terms of span retrieval, rather than sentence similarity. Additionally, we introduce a method for creating artificial paraphrase data through back-translation, suitable for languages where manually annotated paraphrase resources for training the span detection model are not available.
https://arxiv.org/abs/2112.04886v1
https://arxiv.org/pdf/2112.04886v1.pdf
null
[ "Jenna Kanerva", "Hanna Kitti", "Li-Hsin Chang", "Teemu Vahtola", "Mathias Creutz", "Filip Ginter" ]
[ "Extractive Question-Answering", "Question Answering", "Sentence Embedding", "Sentence Similarity" ]
1,639,008,000,000
[ { "code_snippet_url": "https://github.com/huggingface/transformers/blob/4dc65591b5c61d75c3ef3a2a883bf1433e08fc45/src/transformers/modeling_tf_bert.py#L271", "description": "**Attention Dropout** is a type of [dropout](https://paperswithcode.com/method/dropout) used in attention-based architectures, where el...
162,770
545
https://paperswithcode.com/paper/unsupervised-adaptation-with-interpretable
1806.04872
Unsupervised Adaptation with Interpretable Disentangled Representations for Distant Conversational Speech Recognition
The current trend in automatic speech recognition is to leverage large amounts of labeled data to train supervised neural network models. Unfortunately, obtaining data for a wide range of domains to train robust models can be costly. However, it is relatively inexpensive to collect large amounts of unlabeled data from domains that we want the models to generalize to. In this paper, we propose a novel unsupervised adaptation method that learns to synthesize labeled data for the target domain from unlabeled in-domain data and labeled out-of-domain data. We first learn without supervision an interpretable latent representation of speech that encodes linguistic and nuisance factors (e.g., speaker and channel) using different latent variables. To transform a labeled out-of-domain utterance without altering its transcript, we transform the latent nuisance variables while maintaining the linguistic variables. To demonstrate our approach, we focus on a channel mismatch setting, where the domain of interest is distant conversational speech, and labels are only available for close-talking speech. Our proposed method is evaluated on the AMI dataset, outperforming all baselines and bridging the gap between unadapted and in-domain models by over 77% without using any parallel data.
http://arxiv.org/abs/1806.04872v1
http://arxiv.org/pdf/1806.04872v1.pdf
null
[ "Wei-Ning Hsu", "Hao Tang", "James Glass" ]
[ "Automatic Speech Recognition", "Speech Recognition" ]
1,528,848,000,000
[]
54,322
94,196
https://paperswithcode.com/paper/dpod-dense-6d-pose-object-detector-in-rgb
1902.11020
DPOD: 6D Pose Object Detector and Refiner
In this paper we present a novel deep learning method for 3D object detection and 6D pose estimation from RGB images. Our method, named DPOD (Dense Pose Object Detector), estimates dense multi-class 2D-3D correspondence maps between an input image and available 3D models. Given the correspondences, a 6DoF pose is computed via PnP and RANSAC. An additional RGB pose refinement of the initial pose estimates is performed using a custom deep learning-based refinement scheme. Our results and comparison to a vast number of related works demonstrate that a large number of correspondences is beneficial for obtaining high-quality 6D poses both before and after refinement. Unlike other methods that mainly use real data for training and do not train on synthetic renderings, we perform evaluation on both synthetic and real training data demonstrating superior results before and after refinement when compared to all recent detectors. While being precise, the presented approach is still real-time capable.
https://arxiv.org/abs/1902.11020v3
https://arxiv.org/pdf/1902.11020v3.pdf
ICCV 2019 10
[ "Sergey Zakharov", "Ivan Shugurov", "Slobodan Ilic" ]
[ "3D Object Detection", "6D Pose Estimation", "6D Pose Estimation using RGB", "Object Detection", "Pose Estimation" ]
1,551,312,000,000
[]
148,709
296,006
https://paperswithcode.com/paper/supporting-vision-language-model-inference
2205.11100
Supporting Vision-Language Model Inference with Causality-pruning Knowledge Prompt
Vision-language models are pre-trained by aligning image-text pairs in a common space so that the models can deal with open-set visual concepts by learning semantic information from textual labels. To boost the transferability of these models on downstream tasks in a zero-shot manner, recent works explore generating fixed or learnable prompts, i.e., classification weights are synthesized from natural language describing task-relevant categories, to reduce the gap between tasks in the training and test phases. However, how and what prompts can improve inference performance remains unclear. In this paper, we explicitly provide exploration and clarify the importance of including semantic information in prompts, while existing prompt methods generate prompts without exploring the semantic information of textual labels. A challenging issue is that manually constructing prompts, with rich semantic information, requires domain expertise and is extremely time-consuming. To this end, we propose Causality-pruning Knowledge Prompt (CapKP) for adapting pre-trained vision-language models to downstream image recognition. CapKP retrieves an ontological knowledge graph by treating the textual label as a query to explore task-relevant semantic information. To further refine the derived semantic information, CapKP introduces causality-pruning by following the first principle of Granger causality. Empirically, we conduct extensive evaluations to demonstrate the effectiveness of CapKP, e.g., with 8 shots, CapKP outperforms the manual-prompt method by 12.51% and the learnable-prompt method by 1.39% on average, respectively. Experimental analyses prove the superiority of CapKP in domain generalization compared to benchmark approaches.
https://arxiv.org/abs/2205.11100v1
https://arxiv.org/pdf/2205.11100v1.pdf
null
[ "Jiangmeng Li", "Wenyi Mo", "Wenwen Qiang", "Bing Su", "Changwen Zheng" ]
[ "Domain Generalization", "Language Modelling" ]
1,653,264,000,000
[]
115,371
4,685
https://paperswithcode.com/paper/an-interactive-greedy-approach-to-group
1707.02963
An Interactive Greedy Approach to Group Sparsity in High Dimensions
Sparsity learning with known grouping structure has received considerable attention due to wide modern applications in high-dimensional data analysis. Although advantages of using group information have been well-studied by shrinkage-based approaches, benefits of group sparsity have not been well-documented for greedy-type methods, which much limits our understanding and use of this important class of methods. In this paper, generalizing from a popular forward-backward greedy approach, we propose a new interactive greedy algorithm for group sparsity learning and prove that the proposed greedy-type algorithm attains the desired benefits of group sparsity under high dimensional settings. An estimation error bound refining other existing methods and a guarantee for group support recovery are also established simultaneously. In addition, we incorporate a general M-estimation framework and introduce an interactive feature to allow extra algorithm flexibility without compromise in theoretical properties. The promising use of our proposal is demonstrated through numerical evaluations including a real industrial application in human activity recognition at home. Supplementary materials for this article are available online.
http://arxiv.org/abs/1707.02963v5
http://arxiv.org/pdf/1707.02963v5.pdf
null
[ "Wei Qian", "Wending Li", "Yasuhiro Sogawa", "Ryohei Fujimaki", "Xitong Yang", "Ji Liu" ]
[ "Activity Recognition", "Human Activity Recognition" ]
1,499,644,800,000
[]
147,498
137,985
https://paperswithcode.com/paper/deep-local-shapes-learning-local-sdf-priors
2003.10983
Deep Local Shapes: Learning Local SDF Priors for Detailed 3D Reconstruction
Efficiently reconstructing complex and intricate surfaces at scale is a long-standing goal in machine perception. To address this problem we introduce Deep Local Shapes (DeepLS), a deep shape representation that enables encoding and reconstruction of high-quality 3D shapes without prohibitive memory requirements. DeepLS replaces the dense volumetric signed distance function (SDF) representation used in traditional surface reconstruction systems with a set of locally learned continuous SDFs defined by a neural network, inspired by recent work such as DeepSDF. Unlike DeepSDF, which represents an object-level SDF with a neural network and a single latent code, we store a grid of independent latent codes, each responsible for storing information about surfaces in a small local neighborhood. This decomposition of scenes into local shapes simplifies the prior distribution that the network must learn, and also enables efficient inference. We demonstrate the effectiveness and generalization power of DeepLS by showing object shape encoding and reconstructions of full scenes, where DeepLS delivers high compression, accuracy, and local shape completion.
https://arxiv.org/abs/2003.10983v3
https://arxiv.org/pdf/2003.10983v3.pdf
ECCV 2020 8
[ "Rohan Chabra", "Jan Eric Lenssen", "Eddy Ilg", "Tanner Schmidt", "Julian Straub", "Steven Lovegrove", "Richard Newcombe" ]
[ "3D Reconstruction", "Surface Reconstruction" ]
1,585,008,000,000
[]
189,020
65,872
https://paperswithcode.com/paper/knowledge-graph-embedding-with-numeric
null
Knowledge Graph Embedding with Numeric Attributes of Entities
Knowledge Graph (KG) embedding projects entities and relations into low dimensional vector space, which has been successfully applied in KG completion task. The previous embedding approaches only model entities and their relations, ignoring a large number of entities' numeric attributes in KGs. In this paper, we propose a new KG embedding model which jointly model entity relations and numeric attributes. Our approach combines an attribute embedding model with a translation-based structure embedding model, which learns the embeddings of entities, relations, and attributes simultaneously. Experiments of link prediction on YAGO and Freebase show that the performance is effectively improved by adding entities' numeric attributes in the embedding model.
https://aclanthology.org/W18-3017
https://aclanthology.org/W18-3017.pdf
WS 2018 7
[ "Yanrong Wu", "Zhichun Wang" ]
[ "Graph Embedding", "Knowledge Graph Embedding", "Knowledge Graphs", "Link Prediction", "Representation Learning" ]
1,530,403,200,000
[]
43,180
70,781
https://paperswithcode.com/paper/gated-recurrent-convolution-neural-network
null
Gated Recurrent Convolution Neural Network for OCR
Optical Character Recognition (OCR) aims to recognize text in natural images. Inspired by a recently proposed model for general image classification, Recurrent Convolution Neural Network (RCNN), we propose a new architecture named Gated RCNN (GRCNN) for solving this problem. Its critical component, Gated Recurrent Convolution Layer (GRCL), is constructed by adding a gate to the Recurrent Convolution Layer (RCL), the critical component of RCNN. The gate controls the context modulation in RCL and balances the feed-forward information and the recurrent information. In addition, an efficient Bidirectional Long Short-Term Memory (BLSTM) is built for sequence modeling. The GRCNN is combined with BLSTM to recognize text in natural images. The entire GRCNN-BLSTM model can be trained end-to-end. Experiments show that the proposed model outperforms existing methods on several benchmark datasets including the IIIT-5K, Street View Text (SVT) and ICDAR.
http://papers.nips.cc/paper/6637-gated-recurrent-convolution-neural-network-for-ocr
http://papers.nips.cc/paper/6637-gated-recurrent-convolution-neural-network-for-ocr.pdf
NeurIPS 2017 12
[ "Jianfeng Wang", "Xiaolin Hu" ]
[ "Classification", "Image Classification", "Optical Character Recognition" ]
1,512,086,400,000
[ { "code_snippet_url": null, "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively,...
131,840
100,745
https://paperswithcode.com/paper/task-driven-modular-networks-for-zero-shot
1905.05908
Task-Driven Modular Networks for Zero-Shot Compositional Learning
One of the hallmarks of human intelligence is the ability to compose learned knowledge into novel concepts which can be recognized without a single training example. In contrast, current state-of-the-art methods require hundreds of training examples for each possible category to build reliable and accurate classifiers. To alleviate this striking difference in efficiency, we propose a task-driven modular architecture for compositional reasoning and sample efficient learning. Our architecture consists of a set of neural network modules, which are small fully connected layers operating in semantic concept space. These modules are configured through a gating function conditioned on the task to produce features representing the compatibility between the input image and the concept under consideration. This enables us to express tasks as a combination of sub-tasks and to generalize to unseen categories by reweighting a set of small modules. Furthermore, the network can be trained efficiently as it is fully differentiable and its modules operate on small sub-spaces. We focus our study on the problem of compositional zero-shot classification of object-attribute categories. We show in our experiments that current evaluation metrics are flawed as they only consider unseen object-attribute pairs. When extending the evaluation to the generalized setting which accounts also for pairs seen during training, we discover that naive baseline methods perform similarly or better than current approaches. However, our modular network is able to outperform all existing approaches on two widely-used benchmark datasets.
https://arxiv.org/abs/1905.05908v1
https://arxiv.org/pdf/1905.05908v1.pdf
ICCV 2019 10
[ "Senthil Purushwalkam", "Maximilian Nickel", "Abhinav Gupta", "Marc'Aurelio Ranzato" ]
[ "Novel Concepts", "Zero-Shot Learning" ]
1,557,878,400,000
[]
11,391
54,942
https://paperswithcode.com/paper/deep-mr-image-super-resolution-using
1809.03140
Deep MR Image Super-Resolution Using Structural Priors
High resolution magnetic resonance (MR) images are desired for accurate diagnostics. In practice, image resolution is restricted by factors like hardware, cost and processing constraints. Recently, deep learning methods have been shown to produce compelling state of the art results for image super-resolution. Paying particular attention to desired high-resolution MR image structure, we propose a new regularized network that exploits image priors, namely a low-rank structure and a sharpness prior, to enhance deep MR image super-resolution. Our contribution is then to incorporate these priors in an analytically tractable fashion in the learning of a convolutional neural network (CNN) that accomplishes the super-resolution task. This is particularly challenging for the low rank prior, since the rank is not a differentiable function of the image matrix (and hence the network parameters), an issue we address by pursuing differentiable approximations of the rank. Sharpness is emphasized by the variance of the Laplacian, which we show can be implemented by a fixed feedback layer at the output of the network. Experiments performed on two publicly available MR brain image databases exhibit promising results particularly when training imagery is limited.
http://arxiv.org/abs/1809.03140v1
http://arxiv.org/pdf/1809.03140v1.pdf
null
[ "Venkateswararao Cherukuri", "Tiantong Guo", "Steven J. Schiff", "Vishal Monga" ]
[ "Image Super-Resolution", "Super-Resolution" ]
1,536,537,600,000
[]
43,546
226,931
https://paperswithcode.com/paper/multi-contextual-design-of-convolutional
2106.10430
Multi-Contextual Design of Convolutional Neural Network for Steganalysis
In recent times, deep learning-based steganalysis classifiers became popular due to their state-of-the-art performance. Most deep steganalysis classifiers usually extract noise residuals using high-pass filters as preprocessing steps and feed them to their deep model for classification. It is observed that recent steganographic embedding does not always restrict their embedding in the high-frequency zone; instead, they distribute it as per embedding policy. Therefore, besides noise residual, learning the embedding zone is another challenging task. In this work, unlike the conventional approaches, the proposed model first extracts the noise residual using learned denoising kernels to boost the signal-to-noise ratio. After preprocessing, the sparse noise residuals are fed to a novel Multi-Contextual Convolutional Neural Network (M-CNET) that uses heterogeneous context size to learn the sparse and low-amplitude representation of noise residuals. The model performance is further improved by incorporating the Self-Attention module to focus on the areas prone to steganalytic embedding. A set of comprehensive experiments is performed to show the proposed scheme's efficacy over the prior arts. Besides, an ablation study is given to justify the contribution of various modules of the proposed architecture.
https://arxiv.org/abs/2106.10430v2
https://arxiv.org/pdf/2106.10430v2.pdf
null
[ "Brijesh Singh", "Arijit Sur", "Pinaki Mitra" ]
[ "Denoising" ]
1,624,060,800,000
[]
170,107
277,745
https://paperswithcode.com/paper/bifsmn-binary-neural-network-for-keyword
2202.06483
BiFSMN: Binary Neural Network for Keyword Spotting
The deep neural networks, such as the Deep-FSMN, have been widely studied for keyword spotting (KWS) applications. However, computational resources for these networks are significantly constrained since they usually run on-call on edge devices. In this paper, we present BiFSMN, an accurate and extreme-efficient binary neural network for KWS. We first construct a High-frequency Enhancement Distillation scheme for the binarization-aware training, which emphasizes the high-frequency information from the full-precision network's representation that is more crucial for the optimization of the binarized network. Then, to allow the instant and adaptive accuracy-efficiency trade-offs at runtime, we also propose a Thinnable Binarization Architecture to further liberate the acceleration potential of the binarized network from the topology perspective. Moreover, we implement a Fast Bitwise Computation Kernel for BiFSMN on ARMv8 devices which fully utilizes registers and increases instruction throughput to push the limit of deployment efficiency. Extensive experiments show that BiFSMN outperforms existing binarization methods by convincing margins on various datasets and is even comparable with the full-precision counterpart (e.g., less than 3% drop on Speech Commands V1-12). We highlight that benefiting from the thinnable architecture and the optimized 1-bit implementation, BiFSMN can achieve an impressive 22.3x speedup and 15.5x storage-saving on real-world edge hardware.
https://arxiv.org/abs/2202.06483v4
https://arxiv.org/pdf/2202.06483v4.pdf
null
[ "Haotong Qin", "Xudong Ma", "Yifu Ding", "Xiaoyang Li", "Yang Zhang", "Yao Tian", "Zejun Ma", "Jie Luo", "Xianglong Liu" ]
[ "Binarization", "Keyword Spotting" ]
1,644,796,800,000
[]
124,838
61,658
https://paperswithcode.com/paper/time-discounting-convolution-for-event
1812.02395
Time-Discounting Convolution for Event Sequences with Ambiguous Timestamps
This paper proposes a method for modeling event sequences with ambiguous timestamps, a time-discounting convolution. Unlike in ordinary time series, time intervals are not constant, small time-shifts have no significant effect, and inputting timestamps or time durations into a model is not effective. The criteria that we require for the modeling are providing robustness against time-shifts or timestamps uncertainty as well as maintaining the essential capabilities of time-series models, i.e., forgetting meaningless past information and handling infinite sequences. The proposed method handles them with a convolutional mechanism across time with specific parameterizations, which efficiently represents the event dependencies in a time-shift invariant manner while discounting the effect of past events, and a dynamic pooling mechanism, which provides robustness against the uncertainty in timestamps and enhances the time-discounting capability by dynamically changing the pooling window size. In our learning algorithm, the decaying and dynamic pooling mechanisms play critical roles in handling infinite and variable length sequences. Numerical experiments on real-world event sequences with ambiguous timestamps and ordinary time series demonstrated the advantages of our method.
http://arxiv.org/abs/1812.02395v1
http://arxiv.org/pdf/1812.02395v1.pdf
null
[ "Takayuki Katsuki", "Takayuki Osogami", "Akira Koseki", "Masaki Ono", "Michiharu Kudo", "Masaki Makino", "Atsushi Suzuki" ]
[ "Time Series" ]
1,544,054,400,000
[]
61,722
219,761
https://paperswithcode.com/paper/multimodal-deep-learning-framework-for-image
2105.08809
Multimodal Deep Learning Framework for Image Popularity Prediction on Social Media
Billions of photos are uploaded to the web daily through various types of social networks. Some of these images receive millions of views and become popular, whereas others remain completely unnoticed. This raises the problem of predicting image popularity on social media. The popularity of an image can be affected by several factors, such as visual content, aesthetic quality, user, post metadata, and time. Thus, considering all these factors is essential for accurately predicting image popularity. In addition, the efficiency of the predictive model also plays a crucial role. In this study, motivated by multimodal learning, which uses information from various modalities, and the current success of convolutional neural networks (CNNs) in various fields, we propose a deep learning model, called visual-social convolutional neural network (VSCNN), which predicts the popularity of a posted image by incorporating various types of visual and social features into a unified network model. VSCNN first learns to extract high-level representations from the input visual and social features by utilizing two individual CNNs. The outputs of these two networks are then fused into a joint network to estimate the popularity score in the output layer. We assess the performance of the proposed method by conducting extensive experiments on a dataset of approximately 432K images posted on Flickr. The simulation results demonstrate that the proposed VSCNN model significantly outperforms state-of-the-art models, with a relative improvement of greater than 2.33%, 7.59%, and 14.16% in terms of Spearman's Rho, mean absolute error, and mean squared error, respectively.
https://arxiv.org/abs/2105.08809v1
https://arxiv.org/pdf/2105.08809v1.pdf
null
[ "Fatma S. Abousaleh", "Wen-Huang Cheng", "Neng-Hao Yu", "Yu Tsao" ]
[ "Image popularity prediction", "Multimodal Deep Learning" ]
1,621,296,000,000
[]
187,072
129,836
https://paperswithcode.com/paper/missing-class-robust-domain-adaptation-by
2001.02015
Missing-Class-Robust Domain Adaptation by Unilateral Alignment for Fault Diagnosis
Domain adaptation aims at improving model performance by leveraging the learned knowledge in the source domain and transferring it to the target domain. Recently, domain adversarial methods have been particularly successful in alleviating the distribution shift between the source and the target domains. However, these methods assume an identical label space between the two domains. This assumption imposes a significant limitation for real applications since the target training set may not contain the complete set of classes. We demonstrate in this paper that the performance of domain adversarial methods can be vulnerable to an incomplete target label space during training. To overcome this issue, we propose a two-stage unilateral alignment approach. The proposed methodology makes use of the inter-class relationships of the source domain and aligns unilaterally the target to the source domain. The benefits of the proposed methodology are first evaluated on the MNIST→MNIST-M adaptation task. The proposed methodology is also evaluated on a fault diagnosis task, where the problem of missing fault types in the target training dataset is common in practice. Both experiments demonstrate the effectiveness of the proposed methodology.
https://arxiv.org/abs/2001.02015v1
https://arxiv.org/pdf/2001.02015v1.pdf
null
[ "Qin Wang", "Gabriel Michau", "Olga Fink" ]
[ "Domain Adaptation" ]
1,578,355,200,000
[]
90,096
289,794
https://paperswithcode.com/paper/video-moment-retrieval-from-text-queries-via
2204.09409
Video Moment Retrieval from Text Queries via Single Frame Annotation
Video moment retrieval aims at finding the start and end timestamps of a moment (part of a video) described by a given natural language query. Fully supervised methods need complete temporal boundary annotations to achieve promising results, which is costly since the annotator needs to watch the whole moment. Weakly supervised methods only rely on the paired video and query, but the performance is relatively poor. In this paper, we look closer into the annotation process and propose a new paradigm called "glance annotation". This paradigm requires the timestamp of only one single random frame, which we refer to as a "glance", within the temporal boundary of the fully supervised counterpart. We argue this is beneficial because comparing to weak supervision, trivial cost is added yet more potential in performance is provided. Under the glance annotation setting, we propose a method named as Video moment retrieval via Glance Annotation (ViGA) based on contrastive learning. ViGA cuts the input video into clips and contrasts between clips and queries, in which glance guided Gaussian distributed weights are assigned to all clips. Our extensive experiments indicate that ViGA achieves better results than the state-of-the-art weakly supervised methods by a large margin, even comparable to fully supervised methods in some cases.
https://arxiv.org/abs/2204.09409v3
https://arxiv.org/pdf/2204.09409v3.pdf
null
[ "Ran Cui", "Tianwen Qian", "Pai Peng", "Elena Daskalaki", "Jingjing Chen", "Xiaowei Guo", "Huyang Sun", "Yu-Gang Jiang" ]
[ "Contrastive Learning", "Moment Retrieval" ]
1,650,412,800,000
[]
178,188
271,256
https://paperswithcode.com/paper/3d-face-morphing-attacks-generation
2201.03454
3D Face Morphing Attacks: Generation, Vulnerability and Detection
Face Recognition systems (FRS) have been found vulnerable to morphing attacks, where the morphed face image is generated by blending the face images from contributory data subjects. This work presents a novel direction towards generating face morphing attacks in 3D. To this extent, we have introduced a novel approach based on blending the 3D face point clouds corresponding to the contributory data subjects. The proposed method will generate the 3D face morphing by projecting the input 3D face point clouds to depth-maps & 2D color images followed by the image blending and wrapping operations performed independently on the color images and depth maps. We then back-project the 2D morphing color-map and the depth-map to the point cloud using the canonical (fixed) view. Given that the generated 3D face morphing models will result in the holes due to a single canonical view, we have proposed a new algorithm for hole filling that will result in a high-quality 3D face morphing model. Extensive experiments are carried out on the newly generated 3D face dataset comprised of 675 3D scans corresponding to 41 unique data subjects. Experiments are performed to benchmark the vulnerability of automatic 2D and 3D FRS and human observer analysis. We also present the quantitative assessment of the quality of the generated 3D face morphing models using eight different quality metrics. Finally, we have proposed three different 3D face Morphing Attack Detection (3D-MAD) algorithms to benchmark the performance of the 3D MAD algorithms.
https://arxiv.org/abs/2201.03454v2
https://arxiv.org/pdf/2201.03454v2.pdf
null
[ "Jag Mohan Singh", "Raghavendra Ramachandra" ]
[ "Face Recognition" ]
1,641,772,800,000
[]
156,979
165,222
https://paperswithcode.com/paper/bandit-change-point-detection-for-real-time
2009.11891
Bandit Change-Point Detection for Real-Time Monitoring High-Dimensional Data Under Sampling Control
In many real-world problems of real-time monitoring high-dimensional streaming data, one wants to detect an undesired event or change quickly once it occurs, but under the sampling control constraint in the sense that one might be able to only observe or use selected components' data for decision-making per time step in resource-constrained environments. In this paper, we propose to incorporate multi-armed bandit approaches into sequential change-point detection to develop an efficient bandit change-point detection algorithm based on the limiting Bayesian approach to incorporate a prior knowledge of potential changes. Our proposed algorithm, termed Thompson-Sampling-Shiryaev-Roberts-Pollak (TSSRP), consists of two policies per time step: the adaptive sampling policy applies the Thompson Sampling algorithm to balance between exploration for acquiring long-term knowledge and exploitation for immediate reward gain, and the statistical decision policy fuses the local Shiryaev-Roberts-Pollak statistics to determine whether to raise a global alarm by sum shrinkage techniques. Extensive numerical simulations and case studies demonstrate the statistical and computational efficiency of our proposed TSSRP algorithm.
https://arxiv.org/abs/2009.11891v2
https://arxiv.org/pdf/2009.11891v2.pdf
null
[ "Wanrong Zhang", "Yajun Mei" ]
[ "Change Point Detection" ]
1,600,905,600,000
[]
25,639
169,768
https://paperswithcode.com/paper/matching-space-stereo-networks-for-cross
2010.07347
Matching-space Stereo Networks for Cross-domain Generalization
End-to-end deep networks represent the state of the art for stereo matching. While excelling on images framing environments similar to the training set, major drops in accuracy occur in unseen domains (e.g., when moving from synthetic to real scenes). In this paper we introduce a novel family of architectures, namely Matching-Space Networks (MS-Nets), with improved generalization properties. By replacing learning-based feature extraction from image RGB values with matching functions and confidence measures from conventional wisdom, we move the learning process from the color space to the Matching Space, avoiding over-specialization to domain specific features. Extensive experimental results on four real datasets highlight that our proposal leads to superior generalization to unseen environments over conventional deep architectures, keeping accuracy on the source domain almost unaltered. Our code is available at https://github.com/ccj5351/MS-Nets.
https://arxiv.org/abs/2010.07347v1
https://arxiv.org/pdf/2010.07347v1.pdf
null
[ "Changjiang Cai", "Matteo Poggi", "Stefano Mattoccia", "Philippos Mordohai" ]
[ "Domain Generalization", "Stereo Matching" ]
1,602,633,600,000
[]
95,907
28,858
https://paperswithcode.com/paper/learning-to-reason-with-adaptive-computation
1610.07647
Learning to Reason With Adaptive Computation
Multi-hop inference is necessary for machine learning systems to successfully solve tasks such as Recognising Textual Entailment and Machine Reading. In this work, we demonstrate the effectiveness of adaptive computation for learning the number of inference steps required for examples of different complexity and that learning the correct number of inference steps is difficult. We introduce the first model involving Adaptive Computation Time which provides a small performance benefit on top of a similar model without an adaptive component as well as enabling considerable insight into the reasoning process of the model.
http://arxiv.org/abs/1610.07647v2
http://arxiv.org/pdf/1610.07647v2.pdf
null
[ "Mark Neumann", "Pontus Stenetorp", "Sebastian Riedel" ]
[ "Natural Language Inference", "Reading Comprehension" ]
1,477,267,200,000
[]
92,047
169,009
https://paperswithcode.com/paper/joint-semantic-analysis-with-document-level
2010.05567
Joint Semantic Analysis with Document-Level Cross-Task Coherence Rewards
Coreference resolution and semantic role labeling are NLP tasks that capture different aspects of semantics, indicating respectively, which expressions refer to the same entity, and what semantic roles expressions serve in the sentence. However, they are often closely interdependent, and both generally necessitate natural language understanding. Do they form a coherent abstract representation of documents? We present a neural network architecture for joint coreference resolution and semantic role labeling for English, and train graph neural networks to model the 'coherence' of the combined shallow semantic graph. Using the resulting coherence score as a reward for our joint semantic analyzer, we use reinforcement learning to encourage global coherence over the document and between semantic annotations. This leads to improvements on both tasks in multiple datasets from different domains, and across a range of encoders of different expressivity, calling, we believe, for a more holistic approach to semantics in NLP.
https://arxiv.org/abs/2010.05567v1
https://arxiv.org/pdf/2010.05567v1.pdf
null
[ "Rahul Aralikatte", "Mostafa Abdou", "Heather Lent", "Daniel Hershcovich", "Anders Søgaard" ]
[ "Coreference Resolution", "Natural Language Understanding", "Semantic Role Labeling" ]
1,602,460,800,000
[]
18,129
50,797
https://paperswithcode.com/paper/quit-when-you-can-efficient-evaluation-of
1806.11202
Quit When You Can: Efficient Evaluation of Ensembles with Ordering Optimization
Given a classifier ensemble and a set of examples to be classified, many examples may be confidently and accurately classified after only a subset of the base models in the ensemble are evaluated. This can reduce both mean latency and CPU while maintaining the high accuracy of the original ensemble. To achieve such gains, we propose jointly optimizing a fixed evaluation order of the base models and early-stopping thresholds. Our proposed objective is a combinatorial optimization problem, but we provide a greedy algorithm that achieves a 4-approximation of the optimal solution for certain cases. For those cases, this is also the best achievable polynomial time approximation bound unless $P = NP$. Experiments on benchmark and real-world problems show that the proposed Quit When You Can (QWYC) algorithm can speed-up average evaluation time by $2$x--$4$x, and is around $1.5$x faster than prior work. QWYC's joint optimization of ordering and thresholds also performed better in experiments than various fixed orderings, including gradient boosted trees' ordering.
http://arxiv.org/abs/1806.11202v1
http://arxiv.org/pdf/1806.11202v1.pdf
null
[ "Serena Wang", "Maya Gupta", "Seungil You" ]
[ "Combinatorial Optimization" ]
1,530,144,000,000
[]
68,600
25,989
https://paperswithcode.com/paper/causal-regularization
1702.02604
Causal Regularization
In application domains such as healthcare, we want accurate predictive models that are also causally interpretable. In pursuit of such models, we propose a causal regularizer to steer predictive models towards causally-interpretable solutions and theoretically study its properties. In a large-scale analysis of Electronic Health Records (EHR), our causally-regularized model outperforms its L1-regularized counterpart in causal accuracy and is competitive in predictive performance. We perform non-linear causality analysis by causally regularizing a special neural network architecture. We also show that the proposed causal regularizer can be used together with neural representation learning algorithms to yield up to 20% improvement over multilayer perceptron in detecting multivariate causation, a situation common in healthcare, where many causal factors should occur simultaneously to have an effect on the target variable.
http://arxiv.org/abs/1702.02604v2
http://arxiv.org/pdf/1702.02604v2.pdf
null
[ "Mohammad Taha Bahadori", "Krzysztof Chalupka", "Edward Choi", "Robert Chen", "Walter F. Stewart", "Jimeng Sun" ]
[ "Representation Learning" ]
1,486,512,000,000
[]
142,958
268,054
https://paperswithcode.com/paper/n-cps-generalising-cross-pseudo-supervision
2112.07528
n-CPS: Generalising Cross Pseudo Supervision to n Networks for Semi-Supervised Semantic Segmentation
We present n-CPS - a generalisation of the recent state-of-the-art cross pseudo supervision (CPS) approach for the task of semi-supervised semantic segmentation. In n-CPS, there are n simultaneously trained subnetworks that learn from each other through one-hot encoding perturbation and consistency regularisation. We also show that ensembling techniques applied to subnetwork outputs can significantly improve the performance. To the best of our knowledge, n-CPS paired with CutMix outperforms CPS and sets the new state-of-the-art for Pascal VOC 2012 (with 1/16, 1/8, 1/4, and 1/2 supervised regimes) and Cityscapes (1/16 supervised).
https://arxiv.org/abs/2112.07528v4
https://arxiv.org/pdf/2112.07528v4.pdf
null
[ "Dominik Filipiak", "Piotr Tempczyk", "Marek Cygan" ]
[ "Semantic Segmentation", "Semi-Supervised Semantic Segmentation" ]
1,639,440,000,000
[ { "code_snippet_url": null, "description": "**CutMix** is an image data augmentation strategy. Instead of simply removing pixels as in [Cutout](https://paperswithcode.com/method/cutout), we replace the removed regions with a patch from another image. The ground truth labels are also mixed proportionally to ...
95,449
55,473
https://paperswithcode.com/paper/exploring-the-vulnerability-of-single-shot
1809.05966
Exploring the Vulnerability of Single Shot Module in Object Detectors via Imperceptible Background Patches
Recent works have succeeded in generating adversarial perturbations on the entire image or the object of interest to corrupt CNN-based object detectors. In this paper, we focus on exploring the vulnerability of the Single Shot Module (SSM) commonly used in recent object detectors, by adding small perturbations to patches in the background outside the object. The SSM refers to the Region Proposal Network used in a two-stage object detector or to the single-stage object detector itself. The SSM is typically a fully convolutional neural network which generates output in a single forward pass. Due to the excessive convolutions used in the SSM, the actual receptive field is larger than the object itself. As such, we propose a novel method to corrupt object detectors by generating imperceptible patches only in the background. Our method can find a few background patches for perturbation, which can effectively decrease true positives and dramatically increase false positives. Efficacy is demonstrated on 5 two-stage object detectors and 8 single-stage object detectors on the MS COCO 2014 dataset. Results indicate that perturbations with small distortions outside the bounding box of the object region can still severely damage the detection performance.
https://arxiv.org/abs/1809.05966v3
https://arxiv.org/pdf/1809.05966v3.pdf
null
[ "Yuezun Li", "Xiao Bian", "Ming-Ching Chang", "Siwei Lyu" ]
[ "Region Proposal" ]
1,537,056,000,000
[]
60,178
142,195
https://paperswithcode.com/paper/r-3-reverse-retrieve-and-rank-for-sarcasm
2004.13248
$R^3$: Reverse, Retrieve, and Rank for Sarcasm Generation with Commonsense Knowledge
We propose an unsupervised approach for sarcasm generation based on a non-sarcastic input sentence. Our method employs a retrieve-and-edit framework to instantiate two major characteristics of sarcasm: reversal of valence and semantic incongruity with the context which could include shared commonsense or world knowledge between the speaker and the listener. While prior works on sarcasm generation predominantly focus on context incongruity, we show that combining valence reversal and semantic incongruity based on the commonsense knowledge generates sarcasm of higher quality. Human evaluation shows that our system generates sarcasm better than human annotators 34% of the time, and better than a reinforced hybrid baseline 90% of the time.
https://arxiv.org/abs/2004.13248v4
https://arxiv.org/pdf/2004.13248v4.pdf
null
[ "Tuhin Chakrabarty", "Debanjan Ghosh", "Smaranda Muresan", "Nanyun Peng" ]
[ "Scene Text Detection" ]
1,588,032,000,000
[ { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L551", "description": "A **Gated Linear Unit**, or **GLU** computes:\r\n\r\n$$ \\text{GLU}\\left(a, b\\right) = a\\otimes \\sigma\\left(b\\right) $$\r\n\r\nIt is used in nat...
179,044
263,248
https://paperswithcode.com/paper/learning-background-invariance-improves
null
Learning Background Invariance Improves Generalization and Robustness in Self-Supervised Learning on ImageNet and Beyond
Recent progress in self-supervised learning has demonstrated promising results in multiple visual tasks. An important ingredient in high-performing self-supervised methods is the use of data augmentation by training models to place different augmented views of the same image nearby in embedding space. However, commonly used augmentation pipelines treat images holistically, ignoring the semantic relevance of parts of an image—e.g. a subject vs. a background—which can lead to the learning of spurious correlations. Our work addresses this problem by investigating a class of simple, yet highly effective “background augmentations", which encourage models to focus on semantically-relevant content by discouraging them from focusing on image backgrounds. Through a systematic, comprehensive investigation, we show that background augmentations lead to improved generalization with substantial improvements ($\sim$1-2% on ImageNet) in performance across a spectrum of state-of-the-art self-supervised methods (MoCo-v2, BYOL, SwAV) on a variety of tasks, even enabling performance on par with the supervised baseline. We also find improved label efficiency with even larger performance improvements in limited-labels settings (up to 4.2%). Further, we find improved training efficiency, attaining a benchmark accuracy of 74.4%, outperforming many recent self-supervised learning methods trained for 800-1000 epochs, in only 100 epochs. Importantly, we also demonstrate that background augmentations boost generalization and robustness to a number of out-of-distribution settings, including ImageNet-9, natural adversarial examples, adversarial attacks, ImageNet-Renditions and ImageNet ReaL. We also make progress in completely unsupervised saliency detection, in the process of generating saliency masks that we use for background augmentations.
https://openreview.net/forum?id=zZnOG9ehfoO
https://openreview.net/pdf?id=zZnOG9ehfoO
NeurIPS Workshop ImageNet_PPF 2021 12
[ "Chaitanya Ryali", "David J. Schwab", "Ari S. Morcos" ]
[ "Data Augmentation", "Saliency Detection", "Self-Supervised Learning", "Unsupervised Saliency Detection" ]
1,632,787,200,000
[ { "code_snippet_url": "", "description": "BYOL (Bootstrap Your Own Latent) is a new approach to self-supervised learning. BYOL’s goal is to learn a representation $y_θ$ which can then be used for downstream tasks. BYOL uses two neural networks to learn: the online and target networks. The online network is ...
80,178
36,585
https://paperswithcode.com/paper/dynamic-concept-composition-for-zero-example
1601.03679
Dynamic Concept Composition for Zero-Example Event Detection
In this paper, we focus on automatically detecting events in unconstrained videos without the use of any visual training exemplars. In principle, zero-shot learning makes it possible to train an event detection model based on the assumption that events (e.g. "birthday party") can be described by multiple mid-level semantic concepts (e.g. "blowing candle", "birthday cake"). Towards this goal, we first pre-train a bundle of concept classifiers using data from other sources. Then we evaluate the semantic correlation of each concept with respect to the event of interest and pick the relevant concept classifiers, which are applied on all test videos to get multiple prediction score vectors. While most existing systems combine the predictions of the concept classifiers with fixed weights, we propose to learn the optimal weights of the concept classifiers for each testing video by exploring a set of online available videos with free-form text descriptions of their content. To validate the effectiveness of the proposed approach, we have conducted extensive experiments on the latest TRECVID MEDTest 2014, MEDTest 2013 and CCV datasets. The experimental results confirm the superiority of the proposed approach.
http://arxiv.org/abs/1601.03679v1
http://arxiv.org/pdf/1601.03679v1.pdf
null
[ "Xiaojun Chang", "Yi Yang", "Guodong Long", "Chengqi Zhang", "Alexander G. Hauptmann" ]
[ "Event Detection", "Zero-Shot Learning" ]
1,452,729,600,000
[]
65,595
229,798
https://paperswithcode.com/paper/power-law-graph-transformer-for-machine
2107.02039
Power Law Graph Transformer for Machine Translation and Representation Learning
We present the Power Law Graph Transformer, a transformer model with well defined deductive and inductive tasks for prediction and representation learning. The deductive task learns the dataset level (global) and instance level (local) graph structures in terms of learnable power law distribution parameters. The inductive task outputs the prediction probabilities using the deductive task output, similar to a transductive model. We trained our model with Turkish-English and Portuguese-English datasets from TED talk transcripts for machine translation and compared the model performance and characteristics to a transformer model with scaled dot product attention trained on the same experimental setup. We report BLEU scores of $17.79$ and $28.33$ on the Turkish-English and Portuguese-English translation tasks with our model, respectively. We also show how a duality between a quantization set and N-dimensional manifold representation can be leveraged to transform between local and global deductive-inductive outputs using successive application of linear and non-linear transformations end-to-end.
https://arxiv.org/abs/2107.02039v1
https://arxiv.org/pdf/2107.02039v1.pdf
null
[ "Burc Gokden" ]
[ "Machine Translation", "Quantization", "Representation Learning" ]
1,624,752,000,000
[ { "code_snippet_url": "", "description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The posit...
61,409
308,770
https://paperswithcode.com/paper/few-shot-class-incremental-learning-via-1
2207.11213
Few-Shot Class-Incremental Learning via Entropy-Regularized Data-Free Replay
Few-shot class-incremental learning (FSCIL) has been proposed to enable a deep learning system to incrementally learn new classes with limited data. Recently, a pioneering work claimed that the commonly used replay-based method in class-incremental learning (CIL) is ineffective and thus not preferred for FSCIL. This, if true, has a significant influence on the field of FSCIL. In this paper, we show through empirical results that adopting data replay is surprisingly favorable. However, storing and replaying old data can lead to privacy concerns. To address this issue, we alternatively propose using data-free replay, which can synthesize data by a generator without accessing real data. Observing the effectiveness of uncertain data for knowledge distillation, we impose entropy regularization in the generator training to encourage more uncertain examples. Moreover, we propose to relabel the generated data with one-hot-like labels. This modification allows the network to learn by solely minimizing the cross-entropy loss, which mitigates the problem of balancing different objectives in the conventional knowledge distillation approach. Finally, we show extensive experimental results and analysis on CIFAR-100, miniImageNet and CUB-200 to demonstrate the effectiveness of our proposed method.
https://arxiv.org/abs/2207.11213v1
https://arxiv.org/pdf/2207.11213v1.pdf
null
[ "Huan Liu", "Li Gu", "Zhixiang Chi", "Yang Wang", "Yuanhao Yu", "Jun Chen", "Jin Tang" ]
[ "class-incremental learning", "Incremental Learning", "Knowledge Distillation" ]
1,658,448,000,000
[ { "code_snippet_url": null, "description": "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may...
148,573
50,897
https://paperswithcode.com/paper/self-supervised-sparse-to-dense-self
1807.00275
Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and Monocular Camera
Depth completion, the technique of estimating a dense depth image from sparse depth measurements, has a variety of applications in robotics and autonomous driving. However, depth completion faces 3 main challenges: the irregularly spaced pattern in the sparse depth input, the difficulty in handling multiple sensor modalities (when color images are available), as well as the lack of dense, pixel-level ground truth depth labels. In this work, we address all these challenges. Specifically, we develop a deep regression model to learn a direct mapping from sparse depth (and color images) to dense depth. We also propose a self-supervised training framework that requires only sequences of color and sparse depth images, without the need for dense depth labels. Our experiments demonstrate that our network, when trained with semi-dense annotations, attains state-of-the-art accuracy and is the winning approach on the KITTI depth completion benchmark at the time of submission. Furthermore, the self-supervised framework outperforms a number of existing solutions trained with semi-dense annotations.
http://arxiv.org/abs/1807.00275v2
http://arxiv.org/pdf/1807.00275v2.pdf
null
[ "Fangchang Ma", "Guilherme Venturelli Cavalheiro", "Sertac Karaman" ]
[ "Autonomous Driving", "Depth Completion" ]
1,530,403,200,000
[]
11,907
226,924
https://paperswithcode.com/paper/informative-class-activation-maps
2106.10472
Informative Class Activation Maps
We study how to evaluate the quantitative information content of a region within an image for a particular label. To this end, we bridge class activation maps with information theory. We develop an informative class activation map (infoCAM). Given a classification task, infoCAM depicts how to accumulate the information of partial regions into that of the entire image toward a label. Thus, we can utilise infoCAM to locate the most informative features for a label. When applied to an image classification task, infoCAM performs better than the traditional classification map in the weakly supervised object localisation task. We achieve state-of-the-art results on Tiny-ImageNet.
https://arxiv.org/abs/2106.10472v2
https://arxiv.org/pdf/2106.10472v2.pdf
null
[ "Zhenyue Qin", "Dongwoo Kim", "Tom Gedeon" ]
[ "Classification", "Image Classification" ]
1,624,060,800,000
[]
153,132
307,845
https://paperswithcode.com/paper/an-information-theoretic-analysis-of-bayesian
2207.08735
An Information-Theoretic Analysis of Bayesian Reinforcement Learning
Building on the framework introduced by Xu and Raginsky [1] for supervised learning problems, we study the best achievable performance for model-based Bayesian reinforcement learning problems. For this purpose, we define the minimum Bayesian regret (MBR) as the difference between the maximum expected cumulative reward obtainable either by learning from the collected data or by knowing the environment and its dynamics. We specialize this definition to reinforcement learning problems modeled as Markov decision processes (MDPs) whose kernel parameters are unknown to the agent and whose uncertainty is expressed by a prior distribution. One method for deriving upper bounds on the MBR is presented and specific bounds based on the relative entropy and the Wasserstein distance are given. We then focus on two particular cases of MDPs, the multi-armed bandit problem (MAB) and the online optimization with partial feedback problem. For the latter problem, we show that our bounds can recover from below the current information-theoretic bounds by Russo and Van Roy [2].
https://arxiv.org/abs/2207.08735v1
https://arxiv.org/pdf/2207.08735v1.pdf
null
[ "Amaury Gouverneur", "Borja Rodríguez-Gálvez", "Tobias J. Oechtering", "Mikael Skoglund" ]
[ "reinforcement-learning" ]
1,658,102,400,000
[]
150,540
137,329
https://paperswithcode.com/paper/asr-error-correction-and-domain-adaptation
2003.07692
ASR Error Correction and Domain Adaptation Using Machine Translation
Off-the-shelf pre-trained Automatic Speech Recognition (ASR) systems are an increasingly viable service for companies of any size building speech-based products. While these ASR systems are trained on large amounts of data, domain mismatch is still an issue for many such parties that want to use this service as-is, leading to suboptimal results for their task. We propose a simple technique to perform domain adaptation for ASR error correction via machine translation. The machine translation model is a strong candidate to learn a mapping from out-of-domain ASR errors to in-domain terms in the corresponding reference files. We use two off-the-shelf ASR systems in this work: Google ASR (commercial) and the ASPIRE model (open-source). We observe a 7% absolute improvement in word error rate and a 4-point absolute improvement in BLEU score in Google ASR output via our proposed method. We also evaluate ASR error correction via a downstream task of Speaker Diarization that captures the speaker style, syntax, structure and semantic improvements we obtain via ASR correction.
https://arxiv.org/abs/2003.07692v1
https://arxiv.org/pdf/2003.07692v1.pdf
null
[ "Anirudh Mani", "Shruti Palaskar", "Nimshi Venkat Meripo", "Sandeep Konam", "Florian Metze" ]
[ "Automatic Speech Recognition", "Domain Adaptation", "Machine Translation", "Speaker Diarization", "Speech Recognition" ]
1,584,057,600,000
[]
159,658
38,864
https://paperswithcode.com/paper/sampled-weighted-min-hashing-for-large-scale
1509.01771
Sampled Weighted Min-Hashing for Large-Scale Topic Mining
We present Sampled Weighted Min-Hashing (SWMH), a randomized approach to automatically mine topics from large-scale corpora. SWMH generates multiple random partitions of the corpus vocabulary based on term co-occurrence and agglomerates highly overlapping inter-partition cells to produce the mined topics. While other approaches define a topic as a probabilistic distribution over a vocabulary, SWMH topics are ordered subsets of such vocabulary. Interestingly, the topics mined by SWMH underlie themes from the corpus at different levels of granularity. We extensively evaluate the meaningfulness of the mined topics both qualitatively and quantitatively on the NIPS (1.7 K documents), 20 Newsgroups (20 K), Reuters (800 K) and Wikipedia (4 M) corpora. Additionally, we compare the quality of SWMH with Online LDA topics for document representation in classification.
http://arxiv.org/abs/1509.01771v2
http://arxiv.org/pdf/1509.01771v2.pdf
null
[ "Gibran Fuentes-Pineda", "Ivan Vladimir Meza-Ruiz" ]
[ "Classification" ]
1,441,497,600,000
[ { "code_snippet_url": null, "description": "**Linear discriminant analysis** (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics, pattern recognition, and machine learning to find a linear combination o...
11,428
125,546
https://paperswithcode.com/paper/neural-machine-translation-with-explicit
1911.11520
Neural Machine Translation with Explicit Phrase Alignment
While neural machine translation (NMT) has achieved state-of-the-art translation performance, it is unable to capture the alignment between the input and output during the translation process. The lack of alignment in NMT models leads to three problems: it is hard to (1) interpret the translation process, (2) impose lexical constraints, and (3) impose structural constraints. To alleviate these problems, we propose to introduce explicit phrase alignment into the translation process of arbitrary NMT models. The key idea is to build a search space similar to that of phrase-based statistical machine translation for NMT where phrase alignment is readily available. We design a new decoding algorithm that can easily impose lexical and structural constraints. Experiments show that our approach makes the translation process of NMT more interpretable without sacrificing translation quality. In addition, our approach achieves significant improvements in lexically and structurally constrained translation tasks.
https://arxiv.org/abs/1911.11520v3
https://arxiv.org/pdf/1911.11520v3.pdf
null
[ "Jiacheng Zhang", "Huanbo Luan", "Maosong Sun", "FeiFei Zhai", "Jingfang Xu", "Yang Liu" ]
[ "Machine Translation" ]
1,574,726,400,000
[]
186,834
221,193
https://paperswithcode.com/paper/sample-efficient-reinforcement-learning-for
2105.14016
Sample-Efficient Reinforcement Learning for Linearly-Parameterized MDPs with a Generative Model
The curse of dimensionality is a widely known issue in reinforcement learning (RL). In the tabular setting where the state space $\mathcal{S}$ and the action space $\mathcal{A}$ are both finite, to obtain a nearly optimal policy with sampling access to a generative model, the minimax optimal sample complexity scales linearly with $|\mathcal{S}|\times|\mathcal{A}|$, which can be prohibitively large when $\mathcal{S}$ or $\mathcal{A}$ is large. This paper considers a Markov decision process (MDP) that admits a set of state-action features, which can linearly express (or approximate) its probability transition kernel. We show that a model-based approach (resp.$~$Q-learning) provably learns an $\varepsilon$-optimal policy (resp.$~$Q-function) with high probability as soon as the sample size exceeds the order of $\frac{K}{(1-\gamma)^{3}\varepsilon^{2}}$ (resp.$~$$\frac{K}{(1-\gamma)^{4}\varepsilon^{2}}$), up to some logarithmic factor. Here $K$ is the feature dimension and $\gamma\in(0,1)$ is the discount factor of the MDP. Both sample complexity bounds are provably tight, and our result for the model-based approach matches the minimax lower bound. Our results show that for arbitrarily large-scale MDP, both the model-based approach and Q-learning are sample-efficient when $K$ is relatively small, and hence the title of this paper.
https://arxiv.org/abs/2105.14016v2
https://arxiv.org/pdf/2105.14016v2.pdf
NeurIPS 2021 12
[ "Bingyan Wang", "Yuling Yan", "Jianqing Fan" ]
[ "Q-Learning", "reinforcement-learning" ]
1,622,160,000,000
[ { "code_snippet_url": null, "description": "**Q-Learning** is an off-policy temporal difference control algorithm:\r\n\r\n$$Q\\left(S\\_{t}, A\\_{t}\\right) \\leftarrow Q\\left(S\\_{t}, A\\_{t}\\right) + \\alpha\\left[R_{t+1} + \\gamma\\max\\_{a}Q\\left(S\\_{t+1}, a\\right) - Q\\left(S\\_{t}, A\\_{t}\\right...
187,376
65,464
https://paperswithcode.com/paper/stanfords-graph-based-neural-dependency
null
Stanford's Graph-based Neural Dependency Parser at the CoNLL 2017 Shared Task
This paper describes the neural dependency parser submitted by Stanford to the CoNLL 2017 Shared Task on parsing Universal Dependencies. Our system uses relatively simple LSTM networks to produce part-of-speech tags and labeled dependency parses from segmented and tokenized sequences of words. In order to address the rare word problem that abounds in languages with complex morphology, we include a character-based word representation that uses an LSTM to produce embeddings from sequences of characters. Our system was ranked first according to all five relevant metrics for the system: UPOS tagging (93.09%), XPOS tagging (82.27%), unlabeled attachment score (81.30%), labeled attachment score (76.30%), and content word labeled attachment score (72.57%).
https://aclanthology.org/K17-3002
https://aclanthology.org/K17-3002.pdf
CONLL 2017 8
[ "Timothy Dozat", "Peng Qi", "Christopher D. Manning" ]
[ "Dependency Parsing" ]
1,501,545,600,000
[ { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277", "description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\...
52,190