| text (string, lengths 373–5.09k) | label (class label, 2 classes) |
|---|---|
Title: Particle Based Stochastic Policy Optimization. Abstract: Stochastic policies have been widely applied for their good property in exploration and uncertainty quantification. Modeling policy distribution by joint state-action distribution within the exponential family has enabled flexibility in exploration and learn... | 0reject |
Title: DiP Benchmark Tests: Evaluation Benchmarks for Discourse Phenomena in MT. Abstract: Despite increasing instances of machine translation (MT) systems including extrasentential context information, the evidence for translation quality improvement is sparse, especially for discourse phenomena. Popular metrics like ... | 0reject |
Title: Combining Q-Learning and Search with Amortized Value Estimates. Abstract: We introduce "Search with Amortized Value Estimates" (SAVE), an approach for combining model-free Q-learning with model-based Monte-Carlo Tree Search (MCTS). In SAVE, a learned prior over state-action values is used to guide MCTS, which es... | 1accept |
Title: Composing Partial Differential Equations with Physics-Aware Neural Networks. Abstract: We introduce a compositional physics-aware neural network (FINN) for learning spatiotemporal advection-diffusion processes. FINN implements a new way of combining the learning abilities of artificial neural networks with physi... | 0reject |
Title: Understanding the failure modes of out-of-distribution generalization. Abstract: Empirical studies suggest that machine learning models often rely on features, such as the background, that may be spuriously correlated with the label only during training time, resulting in poor accuracy during test-time. In this ... | 1accept |
Title: Feature-map-level Online Adversarial Knowledge Distillation. Abstract: Feature maps contain rich information about image intensity and spatial correlation. However, previous online knowledge distillation methods only utilize the class probabilities. Thus in this paper, we propose an online knowledge distillation... | 0reject |
Title: VHEGAN: Variational Hetero-Encoder Randomized GAN for Zero-Shot Learning. Abstract: To extract and relate visual and linguistic concepts from images and textual descriptions for text-based zero-shot learning (ZSL), we develop variational hetero-encoder (VHE) that decodes text via a deep probabilistic topic mode... | 0reject |
Title: Learning an Object-Based Memory System. Abstract: A robot operating in a household makes observations of multiple objects as it moves around over the course of days or weeks. The objects may be moved by inhabitants, but not completely at random. The robot may be called upon later to retrieve objects and will n... | 0reject |
Title: On Episodes, Prototypical Networks, and Few-Shot Learning. Abstract: Episodic learning is a popular practice among researchers and practitioners interested in few-shot learning. It consists of organising training in a series of learning problems, each relying on small “support” and “query” sets to mimic the few-... | 0reject |
Title: Vision at A Glance: Interplay between Fine and Coarse Information Processing Pathways. Abstract: Object recognition is often viewed as a feedforward, bottom-up process in machine learning, but in real neural systems, object recognition is a complicated process which involves the interplay between two signal path... | 0reject |
Title: L2E: Learning to Exploit Your Opponent. Abstract: Opponent modeling is essential to exploit sub-optimal opponents in strategic interactions. One key challenge facing opponent modeling is how to fast adapt to opponents with diverse styles of strategies. Most previous works focus on building explicit models to pre... | 0reject |
Title: FP-DETR: Detection Transformer Advanced by Fully Pre-training. Abstract: Large-scale pre-training has proven to be effective for visual representation learning on downstream tasks, especially for improving robustness and generalization. However, the recently developed detection transformers only employ pre-train... | 1accept |
Title: Brain insights improve RNNs' accuracy and robustness for hierarchical control of continually learned autonomous motor motifs. Abstract: We study the problem of learning dynamics that can produce hierarchically organized continuous outputs consisting of the flexible chaining of re-usable motor ‘motifs’ from which... | 0reject |
Title: Task Relatedness-Based Generalization Bounds for Meta Learning. Abstract: Supposing the $n$ training tasks and the new task are sampled from the same environment, traditional meta learning theory derives an error bound on the expected loss over the new task in terms of the empirical training loss, uniformly over... | 1accept |
Title: DEEP GRAPH TREE NETWORKS. Abstract: We propose Graph Tree Networks (GTree), a self-interpretive deep graph neural network architecture which originates from the tree representation of the graphs. In the tree representation, each node forms its own tree where the node itself is the root node and all its neighbors... | 0reject |
Title: Graph Neural Networks with Learnable Structural and Positional Representations. Abstract: Graph neural networks (GNNs) have become the standard learning architectures for graphs. GNNs have been applied to numerous domains ranging from quantum chemistry, recommender systems to knowledge graphs and natural languag... | 1accept |
Title: Bayesian Variational Autoencoders for Unsupervised Out-of-Distribution Detection. Abstract: Despite their successes, deep neural networks still make unreliable predictions when faced with test data drawn from a distribution different to that of the training data, constituting a major problem for AI safety. While... | 0reject |
Title: Generative Teaching Networks: Accelerating Neural Architecture Search by Learning to Generate Synthetic Training Data. Abstract: This paper investigates the intriguing question of whether we can create learning algorithms that automatically generate training data, learning environments, and curricula in order t... | 0reject |
Title: Deep Jump Q-Evaluation for Offline Policy Evaluation in Continuous Action Space. Abstract: We consider off-policy evaluation (OPE) in continuous action domains, such as dynamic pricing and personalized dose finding. In OPE, one aims to learn the value under a new policy using historical data generated by a diffe... | 0reject |
Title: SPIGAN: Privileged Adversarial Learning from Simulation. Abstract: Deep Learning for Computer Vision depends mainly on the source of supervision. Photo-realistic simulators can generate large-scale automatically labeled synthetic data, but introduce a domain gap negatively impacting performance. We propose a new... | 1accept |
Title: Brittle interpretations: The Vulnerability of TCAV and Other Concept-based Explainability Tools to Adversarial Attack. Abstract: Methods for model explainability have become increasingly critical for testing the fairness and soundness of deep learning. A number of explainability techniques have been developed wh... | 0reject |
Title: Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning. Abstract: We present Deep Voice 3, a fully-convolutional attention-based neural text-to-speech (TTS) system. Deep Voice 3 matches state-of-the-art neural speech synthesis systems in naturalness while training an order of magnitude faster.... | 1accept |
Title: BatchEnsemble: an Alternative Approach to Efficient Ensemble and Lifelong Learning. Abstract: Ensembles, where multiple neural networks are trained individually and their predictions are averaged, have been shown to be widely successful for improving both the accuracy and predictive uncertainty of single neural... | 1accept |
Title: NESTED LEARNING FOR MULTI-GRANULAR TASKS. Abstract: Standard deep neural networks (DNNs) used for classification are trained in an end-to-end fashion for very specific tasks - object recognition, face identification, character recognition, etc. This specificity often leads to overconfident models that generalize... | 0reject |
Title: P-BN: Towards Effective Batch Normalization in the Path Space. Abstract: Neural networks with ReLU activation functions have demonstrated their success in many applications. Recently, researchers noticed a potential issue with the optimization of ReLU networks: the ReLU activation functions are positively scale-... | 0reject |
Title: Neural Clustering By Predicting And Copying Noise. Abstract: We propose a neural clustering model that jointly learns both latent features and how they cluster. Unlike similar methods our model does not require a predefined number of clusters. Using a supervised approach, we agglomerate latent features towards r... | 0reject |
Title: Stabilizing GAN Training with Multiple Random Projections. Abstract: Training generative adversarial networks is unstable in high-dimensions as the true data distribution tends to be concentrated in a small fraction of the ambient space. The discriminator is then quickly able to classify nearly all generated sam... | 0reject |
Title: A Uniform Generalization Error Bound for Generative Adversarial Networks. Abstract: This paper focuses on the theoretical investigation of unsupervised generalization theory of generative adversarial networks (GANs). We first formulate a more reasonable definition of general error and generalization bounds for ... | 0reject |
Title: Extreme Triplet Learning: Effectively Optimizing Easy Positives and Hard Negatives. Abstract: The Triplet Loss approach to Distance Metric Learning is defined by the strategy to select triplets and the loss function through which those triplets are optimized. During optimization, two especially important cases ... | 0reject |
Title: A critical analysis of self-supervision, or what we can learn from a single image. Abstract: We look critically at popular self-supervision techniques for learning deep convolutional neural networks without manual labels. We show that three different and representative methods, BiGAN, RotNet and DeepCluster, can... | 1accept |
Title: Graph Residual Flow for Molecular Graph Generation. Abstract: Statistical generative models for molecular graphs attract attention from many researchers from the fields of bio- and chemo-informatics. Among these models, invertible flow-based approaches are not fully explored yet. In this paper, we propose a powe... | 0reject |
Title: Analytical Moment Regularizer for Training Robust Networks. Abstract: Despite the impressive performance of deep neural networks (DNNs) on numerous learning tasks, they still exhibit uncouth behaviours. One puzzling behaviour is the subtle sensitive reaction of DNNs to various noise attacks. Such a nuisance has... | 0reject |
Title: Policy-Driven Attack: Learning to Query for Hard-label Black-box Adversarial Examples. Abstract: To craft black-box adversarial examples, adversaries need to query the victim model and take proper advantage of its feedback. Existing black-box attacks generally suffer from high query complexity, especially when o... | 1accept |
Title: Parameter Efficient Multimodal Transformers for Video Representation Learning. Abstract: The recent success of Transformers in the language domain has motivated adapting it to a multimodal setting, where a new visual model is trained in tandem with an already pretrained language model. However, due to the excess... | 1accept |
Title: Drift Detection in Episodic Data: Detect When Your Agent Starts Faltering. Abstract: Detection of deterioration of agent performance in dynamic environments is challenging due to the non-i.i.d nature of the observed performance. We consider an episodic framework, where the objective is to detect when an agent be... | 0reject |
Title: Fast and Sample-Efficient Domain Adaptation for Autoencoder-Based End-to-End Communication. Abstract: The problem of domain adaptation conventionally considers the setting where a source domain has plenty of labeled data, and a target domain (with a different data distribution) has plenty of unlabeled data but n... | 0reject |
Title: Adapting Behaviour for Learning Progress. Abstract: Determining what experience to generate to best facilitate learning (i.e. exploration) is one of the distinguishing features and open challenges in reinforcement learning. The advent of distributed agents that interact with parallel instances of the environment... | 0reject |
Title: A FRAMEWORK FOR ROBUSTNESS CERTIFICATION OF SMOOTHED CLASSIFIERS USING F-DIVERGENCES. Abstract: Formal verification techniques that compute provable guarantees on properties of machine learning models, like robustness to norm-bounded adversarial perturbations, have yielded impressive results. Although most te... | 1accept |
Title: Exploring Model-based Planning with Policy Networks. Abstract: Model-based reinforcement learning (MBRL) with model-predictive control or online planning has shown great potential for locomotion control tasks in both sample efficiency and asymptotic performance. Despite the successes, the existing planning metho... | 1accept |
Title: iPTR: Learning a representation for interactive program translation retrieval. Abstract: Program translation contributes to many real world scenarios, such as porting codebases written in an obsolete or deprecated language to a modern one or re-implementing existing projects in one's preferred programming langua... | 0reject |
Title: Learning to Move with Affordance Maps. Abstract: The ability to autonomously explore and navigate a physical space is a fundamental requirement for virtually any mobile autonomous agent, from household robotic vacuums to autonomous vehicles. Traditional SLAM-based approaches for exploration and navigation largel... | 1accept |
Title: Private Split Inference of Deep Networks. Abstract: Splitting network computations between the edge device and the cloud server is a promising approach for enabling low edge-compute and private inference of neural networks. Current methods for providing the privacy train the model to minimize information leakage... | 0reject |
Title: Generalizing Across Domains via Cross-Gradient Training. Abstract: We present CROSSGRAD , a method to use multi-domain training data to learn a classifier that generalizes to new domains. CROSSGRAD does not need an adaptation phase via labeled or unlabeled data, or domain features in the new domain. Most existin... | 1accept |
Title: Mismatched No More: Joint Model-Policy Optimization for Model-Based RL. Abstract: Many model-based reinforcement learning (RL) methods follow a similar template: fit a model to previously observed data, and then use data from that model for RL or planning. However, models that achieve better training performance... | 0reject |
Title: Revisiting Locally Supervised Learning: an Alternative to End-to-end Training. Abstract: Due to the need to store the intermediate activations for back-propagation, end-to-end (E2E) training of deep networks usually suffers from high GPUs memory footprint. This paper aims to address this problem by revisiting th... | 1accept |
Title: Improving Hierarchical Adversarial Robustness of Deep Neural Networks. Abstract: Do all adversarial examples have the same consequences? An autonomous driving system misclassifying a pedestrian as a car may induce a far more dangerous --and even potentially lethal-- behavior than, for instance, a car as a bus. I... | 0reject |
Title: Beyond Shared Hierarchies: Deep Multitask Learning through Soft Layer Ordering. Abstract: Existing deep multitask learning (MTL) approaches align layers shared between tasks in a parallel ordering. Such an organization significantly constricts the types of shared structure that can be learned. The necessity of p... | 1accept |
Title: Inference Suboptimality in Variational Autoencoders. Abstract: Amortized inference has led to efficient approximate inference for large datasets. The quality of posterior inference is largely determined by two factors: a) the ability of the variational distribution to model the true posterior and b) the capacity... | 0reject |
Title: Featurized Bidirectional GAN: Adversarial Defense via Adversarially Learned Semantic Inference. Abstract: Deep neural networks have been demonstrated to be vulnerable to adversarial attacks, where small perturbations intentionally added to the original inputs can fool the classifier. In this paper, we propose a ... | 0reject |
Title: News-Driven Stock Prediction Using Noisy Equity State Representation. Abstract: News-driven stock prediction investigates the correlation between news events and stock price movements. Previous work has considered effective ways for representing news events and their sequences, but rarely exploited the represent... | 0reject |
Title: Mix-MaxEnt: Creating High Entropy Barriers To Improve Accuracy and Uncertainty Estimates of Deterministic Neural Networks. Abstract: We propose an extremely simple approach to regularize a single deterministic neural network to obtain improved accuracy and reliable uncertainty estimates. Our approach, on top of ... | 0reject |
Title: Learning to Search Efficient DenseNet with Layer-wise Pruning. Abstract: Deep neural networks have achieved outstanding performance in many real-world applications with the expense of huge computational resources. The DenseNet, one of the recently proposed neural network architectures, has achieved the state-of-t... | 0reject |
Title: Effective Use of Variational Embedding Capacity in Expressive End-to-End Speech Synthesis. Abstract: Recent work has explored sequence-to-sequence latent variable models for expressive speech synthesis (supporting control and transfer of prosody and style), but has not presented a coherent framework for understa... | 0reject |
Title: On Stochastic Sign Descent Methods. Abstract: Various gradient compression schemes have been proposed to mitigate the communication cost in distributed training of large scale machine learning models. Sign-based methods, such as signSGD (Bernstein et al., 2018), have recently been gaining popularity because of t... | 0reject |
Title: The Early Phase of Neural Network Training. Abstract: Recent studies have shown that many important aspects of neural network learning take place within the very earliest iterations or epochs of training. For example, sparse, trainable sub-networks emerge (Frankle et al., 2019), gradient descent moves into a sma... | 1accept |
Title: LAYER SPARSITY IN NEURAL NETWORKS. Abstract: Sparsity has become popular in machine learning, because it can save computational resources, facilitate interpretations, and prevent overfitting. In this paper, we discuss sparsity in the framework of neural networks. In particular, we formulate a new notion of spars... | 0reject |
Title: DKM: Differentiable k-Means Clustering Layer for Neural Network Compression. Abstract: Deep neural network (DNN) model compression for efficient on-device inference is becoming increasingly important to reduce memory requirements and keep user data on-device. To this end, we propose a novel differentiable k-mean... | 1accept |
Title: Deep Q-Network with Proximal Iteration. Abstract: We employ Proximal Iteration for value-function optimization in reinforcement learning. Proximal Iteration is a computationally efficient technique that enables us to bias the optimization procedure towards more desirable solutions. As a concrete application of P... | 0reject |
Title: Interpreting Reinforcement Policies through Local Behaviors. Abstract: Many works in explainable AI have focused on explaining black-box classification models. Explaining deep reinforcement learning (RL) policies in a manner that could be understood by domain users has received much less attention. In this paper... | 0reject |
Title: Viewmaker Networks: Learning Views for Unsupervised Representation Learning. Abstract: Many recent methods for unsupervised representation learning train models to be invariant to different "views," or distorted versions of an input. However, designing these views requires considerable trial and error by human e... | 1accept |
Title: Success-Rate Targeted Reinforcement Learning by Disorientation Penalty. Abstract: Current reinforcement learning generally uses discounted return as its learning objective. However, real-world tasks may often demand a high success rate, which can be quite different from optimizing rewards. In this paper, we expl... | 0reject |
Title: Physics-aware Spatiotemporal Modules with Auxiliary Tasks for Meta-Learning. Abstract: Modeling the dynamics of real-world physical systems is critical for spatiotemporal prediction tasks, but challenging when data is limited. The scarcity of real-world data and the difficulty in reproducing the data distributio... | 0reject |
Title: Robust Loss Functions for Complementary Labels Learning. Abstract: In ordinary-label learning, the correct label is given to each training sample. Similarly, a complementary label is also provided for each training sample in complementary-label learning. A complementary label indicates a class that the example d... | 0reject |
Title: Policy Smoothing for Provably Robust Reinforcement Learning. Abstract: The study of provable adversarial robustness for deep neural networks (DNNs) has mainly focused on $\textit{static}$ supervised learning tasks such as image classification. However, DNNs have been used extensively in real-world $\textit{adapt... | 1accept |
Title: Neural networks are a priori biased towards Boolean functions with low entropy. Abstract: Understanding the inductive bias of neural networks is critical to explaining their ability to generalise. Here, for one of the simplest neural networks -- a single-layer perceptron with $n$ input neurons, one output ne... | 0reject |
Title: Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization. Abstract: Federated learning (FL) provides a distributed learning framework for multiple participants to collaborate learning without sharing raw data. In many practical FL scenarios, participants have heterogeneous resources due to ... | 1accept |
Title: Decoupled Greedy Learning of Graph Neural Networks. Abstract: Graph Neural Networks (GNNs) become very popular for graph-related applications due to their superior performance. However, they have been shown to be computationally expensive in large scale settings, because their produced node embeddings have to be... | 0reject |
Title: Mirror Descent Policy Optimization. Abstract: Mirror descent (MD), a well-known first-order method in constrained convex optimization, has recently been shown as an important tool to analyze trust-region algorithms in reinforcement learning (RL). However, there remains a considerable gap between such theoretical... | 1accept |
Title: Emergence of Linguistic Communication from Referential Games with Symbolic and Pixel Input. Abstract: The ability of algorithms to evolve or learn (compositional) communication protocols has traditionally been studied in the language evolution literature through the use of emergent communication tasks. Here we ... | 1accept |
Title: Manifold Learning and Alignment with Generative Adversarial Networks. Abstract: We present a generative adversarial network (GAN) that conducts manifold learning and alignment (MLA): A task to learn the multi-manifold structure underlying data and to align those manifolds without any correspondence information. ... | 0reject |
Title: A Walk with SGD: How SGD Explores Regions of Deep Network Loss?. Abstract: The non-convex nature of the loss landscape of deep neural networks (DNN) lends them the intuition that over the course of training, stochastic optimization algorithms explore different regions of the loss surface by entering and escaping... | 0reject |
Title: Contrastive estimation reveals topic posterior information to linear models. Abstract: Contrastive learning is an approach to representation learning that utilizes naturally occurring similar and dissimilar pairs of data points to find useful embeddings of data. In the context of document classification under to... | 0reject |
Title: PIVEN: A Deep Neural Network for Prediction Intervals with Specific Value Prediction. Abstract: Improving the robustness of neural nets in regression tasks is key to their application in multiple domains. Deep learning-based approaches aim to achieve this goal either by improving their prediction of specific val... | 0reject |
Title: Generative Modeling with Optimal Transport Maps. Abstract: With the discovery of Wasserstein GANs, Optimal Transport (OT) has become a powerful tool for large-scale generative modeling tasks. In these tasks, OT cost is typically used as the loss for training GANs. In contrast to this approach, we show that the O... | 1accept |
Title: Head2Toe: Utilizing Intermediate Representations for Better OOD Generalization. Abstract: Transfer-learning methods aim to improve performance in a data-scarce target domain using a model pretrained on a data-rich source domain. A cost-efficient strategy, linear probing, involves freezing the source model and t... | 0reject |
Title: Semi-Supervised Learning via Clustering Representation Space. Abstract: We proposed a novel loss function that combines supervised learning with clustering in deep neural networks. Taking advantage of the data distribution and the existence of some labeled data, we construct a meaningful latent space. Our loss ... | 0reject |
Title: Reward Design in Cooperative Multi-agent Reinforcement Learning for Packet Routing. Abstract: In cooperative multi-agent reinforcement learning (MARL), how to design a suitable reward signal to accelerate learning and stabilize convergence is a critical problem. The global reward signal assigns the same global r... | 0reject |
Title: Learning meta-features for AutoML. Abstract: This paper tackles the AutoML problem, aimed to automatically select an ML algorithm and its hyper-parameter configuration most appropriate to the dataset at hand. The proposed approach, MetaBu, learns new meta-features via an Optimal Transport procedure, aligning the... | 1accept |
Title: Stochastic Normalized Gradient Descent with Momentum for Large Batch Training. Abstract: Stochastic gradient descent (SGD) and its variants have been the dominating optimization methods in machine learning. Compared with small batch training, SGD with large batch training can better utilize the computational pow... | 0reject |
Title: Three Dimensional Reconstruction of Botanical Trees with Simulatable Geometry. Abstract: We tackle the challenging problem of creating full and accurate three dimensional reconstructions of botanical trees with the topological and geometric accuracy required for subsequent physical simulation, e.g. in response t... | 0reject |
Title: LORD: Lower-Dimensional Embedding of Log-Signature in Neural Rough Differential Equations. Abstract: The problem of processing very long time-series data (e.g., a length of more than 10,000) is a long-standing research problem in machine learning. Recently, one breakthrough, called neural rough differential equa... | 1accept |
Title: Noisy Machines: Understanding noisy neural networks and enhancing robustness to analog hardware errors using distillation. Abstract: The success of deep learning has brought forth a wave of interest in computer hardware design to better meet the high demands of neural network inference. In particular, analog com... | 0reject |
Title: Invariance Through Inference. Abstract: We introduce a general approach, called invariance through inference, for improving the test-time performance of a behavior agent in deployment environments with unknown perceptual variations. Instead of producing invariant visual features through memorization, invariance ... | 0reject |
Title: Egocentric Spatial Memory Network. Abstract: Inspired by neurophysiological discoveries of navigation cells in the mammalian brain, we introduce the first deep neural network architecture for modeling Egocentric Spatial Memory (ESM). It learns to estimate the pose of the agent and progressively construct top-dow... | 0reject |
Title: Center-wise Local Image Mixture For Contrastive Representation Learning. Abstract: Recent advances in unsupervised representation learning have experienced remarkable progress, especially with the achievements of contrastive learning, which regards each image as well its augmentations as a separate class, while ... | 0reject |
Title: Kronecker Recurrent Units. Abstract: Our work addresses two important issues with recurrent neural networks: (1) they are over-parameterized, and (2) the recurrent weight matrix is ill-conditioned. The former increases the sample complexity of learning and the training time. The latter causes the vanishing and e... | 0reject |
Title: Multi-class classification without multi-class labels. Abstract: This work presents a new strategy for multi-class classification that requires no class-specific labels, but instead leverages pairwise similarity between examples, which is a weaker form of annotation. The proposed method, meta classification lear... | 1accept |
Title: Learning Functionally Decomposed Hierarchies for Continuous Navigation Tasks. Abstract: Solving long-horizon sequential decision making tasks in environments with sparse rewards is a longstanding problem in reinforcement learning (RL) research. Hierarchical Reinforcement Learning (HRL) has held the promise to en... | 0reject |
Title: Neural Language Modeling by Jointly Learning Syntax and Lexicon. Abstract: We propose a neural language model capable of unsupervised syntactic structure induction. The model leverages the structure information to form better semantic representations and better language modeling. Standard recurrent neural networ... | 1accept |
Title: Neural Phrase-to-Phrase Machine Translation. Abstract: We present Neural Phrase-to-Phrase Machine Translation (\nppmt), a phrase-based translation model that uses a novel phrase-attention mechanism to discover relevant input (source) segments to generate output (target) phrases. We propose an efficient dynamic p... | 0reject |
Title: Sampling from Discrete Energy-Based Models with Quality/Efficiency Trade-offs. Abstract: Energy-Based Models (EBMs) allow for extremely flexible specifications of probability distributions. However, they do not provide a mechanism for obtaining exact samples from these distributions. Monte Carlo techniques can a... | 0reject |
Title: Differential-Critic GAN: Generating What You Want by a Cue of Preferences. Abstract: This paper proposes Differential-Critic Generative Adversarial Network (DiCGAN) to learn the distribution of user-desired data when only partial instead of the entire dataset possesses the desired properties. Existing approaches... | 0reject |
Title: Pre-training Text-to-Text Transformers for Concept-centric Common Sense. Abstract: Pretrained language models (PTLM) have achieved impressive results in a range of natural language understanding (NLU) and generation (NLG) tasks that require a syntactic and semantic understanding of the text. However, current pre... | 1accept |
Title: Towards Finding Longer Proofs. Abstract: We present a reinforcement learning (RL) based guidance system for automated theorem proving geared towards Finding Longer Proofs (FLoP). FLoP focuses on generalizing from short proofs to longer ones of similar structure. To achieve that, FLoP uses state-of-the-art RL app... | 0reject |
Title: Adversarially Robust Generalization Just Requires More Unlabeled Data. Abstract: Neural network robustness has recently been highlighted by the existence of adversarial examples. Many previous works show that the learned networks do not perform well on perturbed test data, and significantly more labeled data is ... | 0reject |
Title: Accelerated Information Gradient flow. Abstract: We present a systematic framework for the Nesterov's accelerated gradient flows in the spaces of probabilities embedded with information metrics. Here two metrics are considered, including both the Fisher-Rao metric and the Wasserstein-$2$ metric. For the Wasserst... | 0reject |
Title: Unifying Question Answering, Text Classification, and Regression via Span Extraction. Abstract: Even as pre-trained language encoders such as BERT are shared across many tasks, the output layers of question answering, text classification, and regression models are significantly different. Span decoders are frequ... | 0reject |
Title: No Cost Likelihood Manipulation at Test Time for Making Better Mistakes in Deep Networks. Abstract: There has been increasing interest in building deep hierarchy-aware classifiers that aim to quantify and reduce the severity of mistakes, and not just reduce the number of errors. The idea is to exploit the label ... | 1accept |
Title: HaarPooling: Graph Pooling with Compressive Haar Basis. Abstract: Deep Graph Neural Networks (GNNs) are instrumental in graph classification and graph-based regression tasks. In these tasks, graph pooling is a critical ingredient by which GNNs adapt to input graphs of varying size and structure. We propose a new... | 0reject |
Title: Optimal Completion Distillation for Sequence Learning. Abstract: We present Optimal Completion Distillation (OCD), a training procedure for optimizing sequence to sequence models based on edit distance. OCD is efficient, has no hyper-parameters of its own, and does not require pre-training or joint optimization ... | 1accept |
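Each preview row above pairs the truncated paper text with a two-class label rendered as a fused class id and class name (`0reject` / `1accept`). A minimal sketch of turning rows in that layout back into `(text, class_id, class_name)` triples; the `parse_row` helper and the example row are illustrative, not part of the dataset itself:

```python
def parse_row(row: str):
    """Split one preview row of the form 'text ... | 0reject |'
    into its text, integer class id, and class name."""
    text, label, _ = row.rsplit("|", 2)  # last two pipes delimit the label cell
    label = label.strip()                # e.g. "0reject" or "1accept"
    class_id = int(label[0])             # leading digit is the class id
    class_name = label[1:]               # remainder is the class name
    return text.strip(), class_id, class_name

# Hypothetical row in the same layout as the preview above:
row = "Title: Example Paper. Abstract: An example abstract... | 1accept |"
print(parse_row(row))
# → ('Title: Example Paper. Abstract: An example abstract...', 1, 'accept')
```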