date_created: timestamp[ns]
abstract: string
title: string
categories: string
arxiv_id: string
year: int32
embedding_str: string
embedding: list
data_map: list
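To make the schema above concrete, here is a minimal sketch of one row as a typed Python record. The field names and types come from the schema itself; the `ArxivRecord` class name and the construction shown are illustrative assumptions, not part of the dataset.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record type mirroring the schema above.
@dataclass
class ArxivRecord:
    date_created: datetime  # timestamp[ns] in the source schema
    abstract: str
    title: str
    categories: str         # space-separated arXiv category tags
    arxiv_id: str
    year: int               # int32 in the source schema
    embedding_str: str      # the embedding serialized as a string
    embedding: list         # dense sentence embedding (floats)
    data_map: list          # 2-D projection of the embedding

# First row of the preview, with long fields abbreviated:
row = ArxivRecord(
    date_created=datetime.fromisoformat("2022-05-25T23:51:31"),
    abstract="Complex diseases are caused by a multitude of factors ...",
    title="Identifying Patient-Specific Root Causes with the Heteroscedastic Noise Model",
    categories="stat.ML cs.LG stat.AP stat.ME",
    arxiv_id="2205.13085",
    year=2022,
    embedding_str="[0.0358, 0.0158, ...]",
    embedding=[0.03582717850804329, 0.015805741772055626],
    data_map=[1.3760803937911987, 9.103288650512695],
)
print(row.categories.split())  # → ['stat.ML', 'cs.LG', 'stat.AP', 'stat.ME']
```

Splitting `categories` on whitespace recovers the individual arXiv subject tags, since the column stores them as one space-separated string.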
2022-05-25T23:51:31
Complex diseases are caused by a multitude of factors that may differ between patients even within the same diagnostic category. A few underlying root causes may nevertheless initiate the development of disease within each patient. We therefore focus on identifying patient-specific root causes of disease, which we eq...
Identifying Patient-Specific Root Causes with the Heteroscedastic Noise Model
stat.ML cs.LG stat.AP stat.ME
2205.13085
2022
# Identifying Patient-Specific Root Causes with the Heteroscedastic Noise Model Complex diseases are caused by a multitude of factors that may differ between patients even within the same diagnostic category. A few underlying root causes may nevertheless initiate the development of disease within each patient. We ...
[ 0.03582717850804329, 0.015805741772055626, -0.02743498794734478, -0.07323871552944183, 0.054034069180488586, -0.025799209251999855, 0.01523791253566742, 0.05565733462572098, -0.018208632245659828, 0.08044566959142685, 0.009909414686262608, -0.06762943416833878, -0.034206364303827286, 0.077...
[ 1.3760803937911987, 9.103288650512695 ]
2022-05-26T00:35:11
While a broad range of techniques have been proposed to tackle distribution shift, the simple baseline of training on an $\textit{undersampled}$ balanced dataset often achieves close to state-of-the-art accuracy across several popular benchmarks. This is rather surprising, since undersampling algorithms discard exces...
Undersampling is a Minimax Optimal Robustness Intervention in Nonparametric Classification
cs.LG cs.AI math.ST stat.ML stat.TH
2205.13094
2022
# Undersampling is a Minimax Optimal Robustness Intervention in Nonparametric Classification While a broad range of techniques have been proposed to tackle distribution shift, the simple baseline of training on an $\textit{undersampled}$ balanced dataset often achieves close to state-of-the-art accuracy across sev...
[ -0.0038429913111031055, -0.06111927330493927, -0.03007572516798973, -0.022116124629974365, 0.05273180827498436, -0.08545256406068802, -0.022319655865430832, 0.025222279131412506, 0.033827368170022964, 0.06772668659687042, 0.015041052363812923, -0.005346556659787893, -0.0341964028775692, 0....
[ 1.8524097204208374, 6.838903903961182 ]
2022-05-26T00:38:48
Traditional vision-based Automated Optical Inspection (referred to as AOI in this paper) systems present multiple challenges in factory settings, including inability to scale across multiple product lines, requirement of vendor programming expertise, little tolerance to variations, and lack of cloud connectivity for aggrega...
VizInspect Pro -- Automated Optical Inspection (AOI) solution
cs.AI cs.CV
2205.13095
2022
# VizInspect Pro -- Automated Optical Inspection (AOI) solution Traditional vision-based Automated Optical Inspection (referred to as AOI in this paper) systems present multiple challenges in factory settings, including inability to scale across multiple product lines, requirement of vendor programming expertise, little t...
[ -0.03218911960721016, 0.007219044025987387, -0.015091821551322937, 0.03249191865324974, 0.04597323387861252, -0.030843790620565414, 0.016878202557563782, 0.06566377729177475, 0.014653807505965233, 0.05085044726729393, 0.023768149316310883, -0.02319728024303913, -0.04741552099585533, -0.000...
[ -1.0347509384155273, 2.9587032794952393 ]
2022-05-26T00:51:12
The computation of Wasserstein gradient direction is essential for posterior sampling problems and scientific computing. The approximation of the Wasserstein gradient with finite samples requires solving a variational problem. We study the variational problem in the family of two-layer networks with squared-ReLU acti...
Optimal Neural Network Approximation of Wasserstein Gradient Direction via Convex Optimization
cs.LG math.OC stat.ML
2205.13098
2022
# Optimal Neural Network Approximation of Wasserstein Gradient Direction via Convex Optimization The computation of Wasserstein gradient direction is essential for posterior sampling problems and scientific computing. The approximation of the Wasserstein gradient with finite samples requires solving a variational ...
[ 0.01697232760488987, -0.018879439681768417, -0.07254933565855026, -0.013690082356333733, -0.007579749915748835, -0.06752552092075348, 0.0372052863240242, 0.033902108669281006, -0.006709798704832792, 0.06761177629232407, -0.03342357277870178, -0.0734643042087555, -0.04343666136264801, 0.012...
[ -1.418545126914978, 5.824329376220703 ]
2022-05-26T01:35:58
The rapid development of X-ray micro-computed tomography (micro-CT) opens new opportunities for 3D analysis of particle and grain-size characterisation, determination of particle densities and shape factors, estimation of mineral associations and liberation and locking. Current practices in mineral liberation analysi...
Deep-XFCT: Deep learning 3D-mineral liberation analysis with micro X-ray fluorescence and computed tomography
cs.LG physics.data-an
2205.13102
2022
# Deep-XFCT: Deep learning 3D-mineral liberation analysis with micro X-ray fluorescence and computed tomography The rapid development of X-ray micro-computed tomography (micro-CT) opens new opportunities for 3D analysis of particle and grain-size characterisation, determination of particle densities and shape fact...
[ -0.02856249362230301, 0.05762088671326637, -0.016122831031680107, 0.03698430955410004, 0.04108242690563202, -0.018609562888741493, -0.0005669273086823523, 0.03972760587930679, 0.0027735689654946327, 0.01700020022690296, -0.04775569960474968, 0.034108031541109085, -0.02859898842871189, 0.05...
[ -1.8513931035995483, 2.8074440956115723 ]
2022-05-26T01:54:48
Training deep neural networks (DNNs) in low-dimensional subspaces is a promising direction for achieving efficient training and better generalization performance. Our previous work extracts the subspaces by performing the dimension reduction method over the training trajectory, which verifies that DNN could be well-t...
Trainable Weight Averaging: A General Approach for Subspace Training
cs.LG
2205.13104
2022
# Trainable Weight Averaging: A General Approach for Subspace Training Training deep neural networks (DNNs) in low-dimensional subspaces is a promising direction for achieving efficient training and better generalization performance. Our previous work extracts the subspaces by performing the dimension reduction meth...
[ -0.022566210478544235, -0.031581368297338486, 0.026682335883378983, 0.01938757672905922, 0.012805774807929993, -0.07306064665317535, -0.004383350256830454, 0.047658246010541916, 0.016791202127933502, 0.0635407492518425, 0.0365767702460289, 0.01733786053955555, 0.002610786585137248, 0.05984...
[ 0.7160346508026123, 5.87845516204834 ]
2022-05-26T02:18:12
We advance the state-of-the-art in unsupervised abstractive dialogue summarization by utilizing multi-sentence compression graphs. Starting from well-founded assumptions about word graphs, we present simple but reliable path-reranking and topic segmentation schemes. Robustness of our method is demonstrated on dataset...
Unsupervised Abstractive Dialogue Summarization with Word Graphs and POV Conversion
cs.CL cs.AI
2205.13108
2022
# Unsupervised Abstractive Dialogue Summarization with Word Graphs and POV Conversion We advance the state-of-the-art in unsupervised abstractive dialogue summarization by utilizing multi-sentence compression graphs. Starting from well-founded assumptions about word graphs, we present simple but reliable path-rera...
[ 0.036838654428720474, -0.0021887673065066338, -0.03148363530635834, -0.02111615240573883, 0.07131596654653549, -0.014410997740924358, 0.017973344773054123, 0.014084180817008018, -0.009276214055716991, 0.06803475320339203, 0.005985521711409092, -0.039210617542266846, 0.012102785520255566, 0...
[ 8.387636184692383, 9.391167640686035 ]
2022-05-26T02:23:14
Obtaining manual annotations for large datasets for supervised training of deep learning (DL) models is challenging. The availability of large unlabeled datasets compared to labeled ones motivates the use of self-supervised pretraining to initialize DL models for subsequent segmentation tasks. In this work, we conside...
Learning to segment with limited annotations: Self-supervised pretraining with regression and contrastive loss in MRI
cs.CV cs.AI cs.LG eess.IV
2205.13109
2022
# Learning to segment with limited annotations: Self-supervised pretraining with regression and contrastive loss in MRI Obtaining manual annotations for large datasets for supervised training of deep learning (DL) models is challenging. The availability of large unlabeled datasets compared to labeled ones motivate...
[ -0.0026703316252678633, 0.04604636877775192, -0.02721347101032734, 0.03436868265271187, 0.01838252693414688, -0.040041301399469376, 0.025424961000680923, -0.006284296978265047, -0.0005365705001167953, 0.04807667434215546, -0.00952293910086155, 0.026849785819649696, -0.05126560106873512, 0....
[ 3.017625093460083, 1.2922850847244263 ]
2022-05-26T02:43:29
Pandora's Box is a fundamental stochastic optimization problem, where the decision-maker must find a good alternative while minimizing the search cost of exploring the value of each alternative. In the original formulation, it is assumed that accurate distributions are given for the values of all the alternatives, wh...
Contextual Pandora's Box
cs.LG
2205.13114
2022
# Contextual Pandora's Box Pandora's Box is a fundamental stochastic optimization problem, where the decision-maker must find a good alternative while minimizing the search cost of exploring the value of each alternative. In the original formulation, it is assumed that accurate distributions are given for the values...
[ 0.05321767181158066, 0.007808060850948095, -0.014217935502529144, -0.049785349518060684, 0.010167811997234821, -0.07207323610782623, -0.01472643855959177, 0.0017126464517787099, -0.02076048217713833, 0.023917391896247864, 0.030999820679426193, -0.042616721242666245, -0.0397505946457386, 0....
[ -0.31397712230682373, 13.411467552185059 ]
2022-05-26T02:46:09
Modern image captioning models are usually trained with text similarity objectives. However, since reference captions in public datasets often describe the most salient common objects, models trained with text similarity objectives tend to ignore specific and detailed aspects of an image that distinguish it from othe...
Fine-grained Image Captioning with CLIP Reward
cs.CL cs.AI cs.CV
2205.13115
2022
# Fine-grained Image Captioning with CLIP Reward Modern image captioning models are usually trained with text similarity objectives. However, since reference captions in public datasets often describe the most salient common objects, models trained with text similarity objectives tend to ignore specific and detailed...
[ -0.011742503382265568, -0.019631274044513702, -0.055993493646383286, 0.038914166390895844, 0.027699260041117668, 0.02550611086189747, 0.02360083907842636, -0.00775595847517252, -0.0025428617373108864, 0.017091119661927223, 0.06314599514007568, -0.0013019457692280412, 0.005076153203845024, ...
[ 4.979305744171143, 4.789552688598633 ]
2022-05-26T02:50:26
This paper is concerned with the complex task of identifying the type and cause of the events that are captured by distribution-level phasor measurement units (D-PMUs) in order to enhance situational awareness in power distribution systems. Our goal is to address two fundamental challenges in this field: a) scarcity ...
GraphPMU: Event Clustering via Graph Representation Learning Using Locationally-Scarce Distribution-Level Fundamental and Harmonic PMU Measurements
cs.LG cs.SY eess.SP eess.SY
2205.13116
2022
# GraphPMU: Event Clustering via Graph Representation Learning Using Locationally-Scarce Distribution-Level Fundamental and Harmonic PMU Measurements This paper is concerned with the complex task of identifying the type and cause of the events that are captured by distribution-level phasor measurement units (D-P...
[ 0.012905735522508621, -0.00271155359223485, -0.03900783136487007, 0.012805327773094177, 0.04664544761180878, -0.062386367470026016, -0.022008687257766724, -0.017047660425305367, -0.053866781294345856, 0.05483969300985336, 0.016170190647244453, -0.05659028887748718, -0.06317779421806335, 0....
[ -4.218696117401123, 4.133960247039795 ]
2022-05-26T03:03:16
Paraphrase generation is a difficult problem. This is not only because of the limitations in text generation capabilities but also due to the lack of a proper definition of what qualifies as a paraphrase and corresponding metrics to measure how good it is. Metrics for evaluation of paraphrasing quality is an on ...
Understanding Metrics for Paraphrasing
cs.CL cs.LG
2205.13119
2022
# Understanding Metrics for Paraphrasing Paraphrase generation is a difficult problem. This is not only because of the limitations in text generation capabilities but also due to the lack of a proper definition of what qualifies as a paraphrase and corresponding metrics to measure how good it is. Metrics for ev...
[ -0.00825461558997631, -0.008351902477443218, -0.05612625181674957, 0.00907050259411335, -0.013771427795290947, -0.08448052406311035, 0.008352714590728283, -0.033616453409194946, -0.018221301957964897, 0.054401688277721405, 0.018518585711717606, 0.006421167869120836, -0.04972962662577629, -...
[ 8.14725112915039, 7.424155235290527 ]
2022-05-26T03:05:26
The increasingly stringent regulations on privacy protection have sparked interest in federated learning. As a distributed machine learning framework, it bridges isolated data islands by training a global model over devices while keeping data localized. Specific to recommendation systems, many federated recommendatio...
Cali3F: Calibrated Fast Fair Federated Recommendation System
cs.IR cs.CY cs.LG
2205.13121
2022
# Cali3F: Calibrated Fast Fair Federated Recommendation System The increasingly stringent regulations on privacy protection have sparked interest in federated learning. As a distributed machine learning framework, it bridges isolated data islands by training a global model over devices while keeping data localized. ...
[ -0.023199979215860367, 0.01617777906358242, -0.0035925398115068674, -0.017707325518131256, -0.003944237716495991, -0.08569041639566422, -0.028657179325819016, 0.057529326528310776, 0.019637010991573334, 0.0115233538672328, 0.0030872425995767117, -0.028951337561011314, -0.044047076255083084, ...
[ 6.781594753265381, 11.956313133239746 ]
2022-05-26T03:25:45
Multi-behavior recommendation exploits multiple types of user-item interactions to alleviate the data sparsity problem faced by the traditional models that often utilize only one type of interaction for recommendation. In real scenarios, users often take a sequence of actions to interact with an item, in order to get...
Cascading Residual Graph Convolutional Network for Multi-Behavior Recommendation
cs.IR
2205.13128
2022
# Cascading Residual Graph Convolutional Network for Multi-Behavior Recommendation Multi-behavior recommendation exploits multiple types of user-item interactions to alleviate the data sparsity problem faced by the traditional models that often utilize only one type of interaction for recommendation. In real scena...
[ 0.05003334581851959, 0.03750646114349365, 0.012618187814950943, -0.004063404630869627, 0.03242586553096771, -0.015599740669131279, -0.01418758649379015, 0.03109625168144703, 0.028959020972251892, 0.035874366760253906, 0.04311675578355789, -0.03469127416610718, 0.03139367699623108, -0.00211...
[ 7.183546543121338, 11.578747749328613 ]
2022-05-26T03:33:44
Network-on-chip (NoC) architectures rely on buffers to store flits to cope with contention for router resources during packet switching. Recently, reversible multi-function channel (RMC) buffers have been proposed to simultaneously reduce power and enable adaptive NoC buffering between adjacent routers. While adaptiv...
RACE: A Reinforcement Learning Framework for Improved Adaptive Control of NoC Channel Buffers
cs.AR cs.LG
2205.13130
2022
# RACE: A Reinforcement Learning Framework for Improved Adaptive Control of NoC Channel Buffers Network-on-chip (NoC) architectures rely on buffers to store flits to cope with contention for router resources during packet switching. Recently, reversible multi-function channel (RMC) buffers have been proposed to si...
[ 0.016349798068404198, 0.014593218453228474, -0.004670719616115093, -0.015816692262887955, 0.011776561848819256, -0.0858568400144577, -0.03800172731280327, 0.022504417225718498, -0.0248566884547472, 0.031594812870025635, -0.0279763862490654, -0.016260219737887383, -0.040509164333343506, 0.0...
[ -0.9201259016990662, 11.431821823120117 ]
2022-05-26T03:41:12
Artificial Intelligence is now recognized as a general-purpose technology with ample impact on human life. This work aims at understanding the evolution of AI and, in particular Machine learning, from the perspective of researchers' contributions to the field. In order to do so, we present several measures allowing t...
On the Evolution of A.I. and Machine Learning: Towards a Meta-level Measuring and Understanding Impact, Influence, and Leadership at Premier A.I. Conferences
cs.AI cs.CY cs.LG
2205.13131
2022
# On the Evolution of A.I. and Machine Learning: Towards a Meta-level Measuring and Understanding Impact, Influence, and Leadership at Premier A.I. Conferences Artificial Intelligence is now recognized as a general-purpose technology with ample impact on human life. This work aims at understanding the evolution ...
[ 0.004440612625330687, 0.04988199099898338, -0.025462696328759193, 0.00004325899135437794, 0.05940474569797516, -0.04161330685019493, 0.062320295721292496, 0.039946049451828, 0.01850728876888752, 0.017871927469968796, 0.03185703977942467, -0.015357568860054016, 0.018971947953104973, 0.03548...
[ 4.472832679748535, 9.582934379577637 ]
2022-05-26T03:50:52
Nonlinear dynamics is ubiquitous in nature and commonly seen in various science and engineering disciplines. Distilling analytical expressions that govern nonlinear dynamics from limited data remains vital but challenging. To tackle this fundamental issue, we propose a novel Symbolic Physics Learner (SPL) machine to ...
Symbolic Physics Learner: Discovering governing equations via Monte Carlo tree search
cs.AI cs.LG cs.SC nlin.CD physics.comp-ph
2205.13134
2022
# Symbolic Physics Learner: Discovering governing equations via Monte Carlo tree search Nonlinear dynamics is ubiquitous in nature and commonly seen in various science and engineering disciplines. Distilling analytical expressions that govern nonlinear dynamics from limited data remains vital but challenging. To t...
[ -0.015002243220806122, -0.007008877117186785, -0.036476895213127136, 0.031033968552947044, 0.01797640137374401, -0.03819059580564499, 0.04663991555571556, 0.015505724586546421, -0.03318333625793457, 0.04173225164413452, 0.018246853724122047, -0.03486179932951927, -0.05446488782763481, 0.01...
[ -2.0243959426879883, 4.643858432769775 ]
2022-05-26T04:05:51
Class imbalance naturally exists when training and testing models in different domains. Unsupervised domain adaptation (UDA) augments model performance with only accessible annotations from the source domain and unlabeled data from the target domain. However, existing state-of-the-art UDA models learn domain-invariant repr...
Unsupervised Reinforcement Adaptation for Class-Imbalanced Text Classification
cs.CL cs.LG
2205.13139
2022
# Unsupervised Reinforcement Adaptation for Class-Imbalanced Text Classification Class imbalance naturally exists when training and testing models in different domains. Unsupervised domain adaptation (UDA) augments model performance with only accessible annotations from the source domain and unlabeled data from the targ...
[ 0.018144486472010612, -0.005661626812070608, -0.017537986859679222, 0.012473845854401588, -0.02257302962243557, -0.0502956248819828, -0.024229111149907112, 0.033990371972322464, 0.026296375319361687, 0.05788768082857132, -0.01662089303135872, -0.00898207537829876, -0.053522951900959015, 0....
[ 2.8837332725524902, 5.267635345458984 ]
2022-05-26T04:33:56
Learned representations are a central component in modern ML systems, serving a multitude of downstream tasks. When training such representations, it is often the case that computational and statistical constraints for each downstream task are unknown. In this context rigid, fixed capacity representations can be eith...
Matryoshka Representation Learning
cs.LG cs.CV
2205.13147
2022
# Matryoshka Representation Learning Learned representations are a central component in modern ML systems, serving a multitude of downstream tasks. When training such representations, it is often the case that computational and statistical constraints for each downstream task are unknown. In this context rigid, fixe...
[ -0.019115081056952477, -0.042790766805410385, -0.006026506423950195, 0.0016705002635717392, 0.0166831873357296, -0.038487400859594345, 0.03151729330420494, 0.020960157737135887, 0.02821665070950985, 0.06085730716586113, -0.02550148032605648, -0.005239608697593212, -0.01755787804722786, 0.0...
[ 0.8984636068344116, 5.815464496612549 ]
2022-05-26T04:40:31
Grammar Detection, also referred to as Parts of Speech Tagging of raw text, is considered an underlying building block of the various Natural Language Processing pipelines like named entity recognition, question answering, and sentiment analysis. In short, for a given sentence, Parts of Speech tagging is the task of s...
Grammar Detection for Sentiment Analysis through Improved Viterbi Algorithm
cs.CL cs.LG
2205.13148
2022
# Grammar Detection for Sentiment Analysis through Improved Viterbi Algorithm Grammar Detection, also referred to as Parts of Speech Tagging of raw text, is considered an underlying building block of the various Natural Language Processing pipelines like named entity recognition, question answering, and sentiment ...
[ 0.07463209331035614, 0.014032076112926006, -0.005057243164628744, 0.006706102751195431, 0.05707971379160881, -0.03955811634659767, -0.047944214195013046, 0.02949436940252781, -0.011821557767689228, 0.08359172195196152, 0.02797781489789486, -0.04352227970957756, -0.007101778406649828, -0.02...
[ 9.421160697937012, 8.079628944396973 ]
2022-05-26T04:59:28
The vulnerability of deep neural networks to adversarial examples has drawn tremendous attention from the community. Three approaches, optimizing standard objective functions, exploiting attention maps, and smoothing decision surfaces, are commonly used to craft adversarial examples. By tightly integrating the three ...
Transferable Adversarial Attack based on Integrated Gradients
cs.LG cs.CV
2205.13152
2022
# Transferable Adversarial Attack based on Integrated Gradients The vulnerability of deep neural networks to adversarial examples has drawn tremendous attention from the community. Three approaches, optimizing standard objective functions, exploiting attention maps, and smoothing decision surfaces, are commonly used...
[ 0.05170007422566414, -0.027990370988845825, -0.02233651839196682, 0.035226646810770035, -0.01750309020280838, -0.07118004560470581, 0.03098992630839348, 0.009750339202582836, 0.023347754031419754, 0.0699211061000824, 0.017664067447185516, 0.002923122374340892, -0.006603787187486887, 0.0533...
[ 3.512852668762207, 6.531998157501221 ]
2022-05-26T05:27:31
This work discusses tensor network embeddings, which are random matrices ($S$) with tensor network structure. These embeddings have been used to perform dimensionality reduction of tensor network structured inputs $x$ and accelerate applications such as tensor decomposition and kernel regression. Existing works have ...
Cost-efficient Gaussian Tensor Network Embeddings for Tensor-structured Inputs
math.NA cs.DS cs.LG cs.NA
2205.13163
2022
# Cost-efficient Gaussian Tensor Network Embeddings for Tensor-structured Inputs This work discusses tensor network embeddings, which are random matrices ($S$) with tensor network structure. These embeddings have been used to perform dimensionality reduction of tensor network structured inputs $x$ and accelerate a...
[ 0.035876836627721786, 0.010729937814176083, -0.031387388706207275, 0.027458254247903824, 0.004578641150146723, -0.011438403278589249, -0.016135873273015022, 0.06530340015888214, 0.029763195663690567, 0.05940987542271614, 0.0024971275124698877, -0.11205592751502991, -0.012858732603490353, 0...
[ -2.570835828781128, 6.683281421661377 ]
2022-05-26T05:27:50
The last few years have witnessed an exponential rise in the propagation of offensive text on social media. Identification of this text with high precision is crucial for the well-being of society. Most of the existing approaches tend to give high toxicity scores to innocuous statements (e.g., "I am a gay man"). Thes...
Leveraging Dependency Grammar for Fine-Grained Offensive Language Detection using Graph Convolutional Networks
cs.CL cs.LG
2205.13164
2022
# Leveraging Dependency Grammar for Fine-Grained Offensive Language Detection using Graph Convolutional Networks The last few years have witnessed an exponential rise in the propagation of offensive text on social media. Identification of this text with high precision is crucial for the well-being of society. Most...
[ 0.03492064028978348, 0.033190276473760605, -0.01576388068497181, 0.04469981789588928, 0.03226957097649574, -0.02844131924211979, -0.02332593873143196, 0.039423007518053055, -0.012212318368256092, 0.06244898587465286, 0.01591024175286293, -0.0019725083839148283, -0.053982820361852646, 0.002...
[ 9.9133882522583, 8.860739707946777 ]
2022-05-26T05:34:57
While mixture of linear regressions (MLR) is a well-studied topic, prior works usually do not analyze such models for prediction error. In fact, {\em prediction} and {\em loss} are not well-defined in the context of mixtures. In this paper, first we show that MLR can be used for prediction where instead of predicting...
On Learning Mixture of Linear Regressions in the Non-Realizable Setting
stat.ML cs.IT cs.LG math.IT
2205.13166
2022
# On Learning Mixture of Linear Regressions in the Non-Realizable Setting While mixture of linear regressions (MLR) is a well-studied topic, prior works usually do not analyze such models for prediction error. In fact, {\em prediction} and {\em loss} are not well-defined in the context of mixtures. In this paper, fi...
[ 0.014285930432379246, -0.02555042691528797, -0.037215184420347214, -0.03346254676580429, -0.011397051624953747, -0.04948091879487038, 0.044486869126558304, 0.0431702621281147, 0.01847284659743309, 0.09020689874887466, -0.026856383308768272, 0.013152392581105232, -0.03242729604244232, 0.076...
[ -1.4690155982971191, 7.13634729385376 ]
2022-05-26T05:56:23
We study distributed contextual linear bandits with stochastic contexts, where $N$ agents act cooperatively to solve a linear bandit-optimization problem with $d$-dimensional features over the course of $T$ rounds. For this problem, we derive the first ever information-theoretic lower bound $\Omega(dN)$ on the commun...
Distributed Contextual Linear Bandits with Minimax Optimal Communication Cost
cs.LG stat.ML
2205.13170
2022
# Distributed Contextual Linear Bandits with Minimax Optimal Communication Cost We study distributed contextual linear bandits with stochastic contexts, where $N$ agents act cooperatively to solve a linear bandit-optimization problem with $d$-dimensional features over the course of $T$ rounds. For this problem, we...
[ 0.04510416463017464, 0.01415280532091856, 0.010746474377810955, 0.012450951151549816, 0.015231317840516567, -0.06951446086168289, 0.02762998826801777, 0.028351813554763794, -0.008801694959402084, 0.05756502225995064, 0.02617950551211834, -0.07249022275209427, -0.01088147982954979, -0.01209...
[ -0.6774062514305115, 13.338885307312012 ]
2022-05-26T06:36:53
Pre-trained models (PTMs) have led to great improvements in natural language generation (NLG). However, it is still unclear how much commonsense knowledge they possess. With the goal of evaluating commonsense knowledge of NLG models, recent work has proposed the problem of generative commonsense reasoning, e.g., to ...
Revisiting Generative Commonsense Reasoning: A Pre-Ordering Approach
cs.CL
2205.13183
2022
# Revisiting Generative Commonsense Reasoning: A Pre-Ordering Approach Pre-trained models (PTMs) have led to great improvements in natural language generation (NLG). However, it is still unclear how much commonsense knowledge they possess. With the goal of evaluating commonsense knowledge of NLG models, recent work...
[ -0.03352903947234154, -0.002203955315053463, -0.03336292505264282, 0.04145842418074608, 0.06964463740587234, -0.027787944301962852, 0.03155094385147095, -0.013471536338329315, -0.044429194182157516, 0.1287854164838791, 0.03578217327594757, -0.03330998867750168, -0.03955560550093651, -0.018...
[ 6.80178165435791, 7.948134422302246 ]
2022-05-26T06:55:03
Geologic cores are rock samples that are extracted from deep under the ground during the well drilling process. They are used for petroleum reservoirs' performance characterization. Traditionally, physical studies of cores are carried out by means of manual, time-consuming experiments. With the development of deep...
AI for Porosity and Permeability Prediction from Geologic Core X-Ray Micro-Tomography
cs.LG cs.AI cs.CV
2205.13189
2022
# AI for Porosity and Permeability Prediction from Geologic Core X-Ray Micro-Tomography Geologic cores are rock samples that are extracted from deep under the ground during the well drilling process. They are used for petroleum reservoirs' performance characterization. Traditionally, physical studies of cores are ...
[ -0.018166042864322662, -0.023265663534402847, -0.03077196143567562, 0.003519861027598381, -0.0017604477470740676, -0.02671324647963047, -0.0023691526148468256, 0.019337335601449013, 0.0074973334558308125, 0.10796604305505753, -0.030963994562625885, 0.013031437061727047, -0.037271276116371155...
[ -1.91721510887146, 2.8805699348449707 ]
2022-05-26T06:58:02
Role-oriented dialogue summarization is to generate summaries for different roles in the dialogue, e.g., merchants and consumers. Existing methods handle this task by summarizing each role's content separately and thus are prone to ignore the information from other roles. However, we believe that other roles' content...
Other Roles Matter! Enhancing Role-Oriented Dialogue Summarization via Role Interactions
cs.CL
2205.13190
2022
# Other Roles Matter! Enhancing Role-Oriented Dialogue Summarization via Role Interactions Role-oriented dialogue summarization is to generate summaries for different roles in the dialogue, e.g., merchants and consumers. Existing methods handle this task by summarizing each role's content separately and thus are p...
[ 0.06614330410957336, 0.022363070398569107, 0.013497652485966682, 0.01844407618045807, 0.03682195395231247, 0.0489109568297863, 0.01531331054866314, 0.022716227918863297, -0.03473193198442459, 0.039421796798706055, 0.057153746485710144, -0.008003368973731995, 0.00558502646163106, 0.04183823...
[ 8.381359100341797, 9.43388843536377 ]
2022-05-26T07:07:26
As randomized learner models, SCNs are notable in that the random weights and biases are assigned using a supervisory mechanism to ensure universal approximation and fast learning. However, the randomness makes SCNs more likely to generate approximately linearly correlated nodes that are redundant and of low quality, ...
Orthogonal Stochastic Configuration Networks with Adaptive Construction Parameter for Data Analytics
cs.LG
2205.13191
2022
# Orthogonal Stochastic Configuration Networks with Adaptive Construction Parameter for Data Analytics As randomized learner models, SCNs are notable in that the random weights and biases are assigned using a supervisory mechanism to ensure universal approximation and fast learning. However, the randomness ma...
[ 0.013441214337944984, 0.005188305862247944, -0.04922269657254219, -0.017618635669350624, 0.015122976154088974, -0.08394847810268402, -0.022252516821026802, 0.06988324970006943, 0.02994617447257042, 0.013334021903574467, -0.027105066925287247, -0.025368724018335342, -0.0010067030088976026, ...
[ 0.6337845325469971, 5.674646854400635 ]
2022-05-26T07:38:17
In recent years, significant advances have been made in the design and evaluation of balanced (hyper)graph partitioning algorithms. We survey trends of the last decade in practical algorithms for balanced (hyper)graph partitioning together with future research directions. Our work serves as an update to a previous su...
More Recent Advances in (Hyper)Graph Partitioning
cs.DS cs.LG
2205.13202
2,022
# More Recent Advances in (Hyper)Graph Partitioning In recent years, significant advances have been made in the design and evaluation of balanced (hyper)graph partitioning algorithms. We survey trends of the last decade in practical algorithms for balanced (hyper)graph partitioning together with future research dire...
[ 0.026975370943546295, 0.006494731642305851, -0.026075880974531174, 0.012901639565825462, 0.03938429057598114, -0.05743919312953949, 0.006624353118240833, 0.016412682831287384, 0.03599541634321213, 0.01892532967031002, -0.10695207864046097, -0.061304960399866104, -0.019571935757994652, 0.02...
[ 0.10914116352796555, 8.756016731262207 ]
2022-05-26T07:44:54
Fermionic neural network (FermiNet) is a recently proposed wavefunction Ansatz, which is used in variational Monte Carlo (VMC) methods to solve the many-electron Schr\"{o}dinger equation. FermiNet proposes permutation-equivariant architectures, on which a Slater determinant is applied to induce antisymmetry. FermiNet...
$O(N^2)$ Universal Antisymmetry in Fermionic Neural Networks
cs.LG physics.chem-ph physics.comp-ph quant-ph
2205.13205
2,022
# $O(N^2)$ Universal Antisymmetry in Fermionic Neural Networks Fermionic neural network (FermiNet) is a recently proposed wavefunction Ansatz, which is used in variational Monte Carlo (VMC) methods to solve the many-electron Schr\"{o}dinger equation. FermiNet proposes permutation-equivariant architectures, on which ...
[ -0.03896885737776756, -0.005916826892644167, 0.018912633880972862, 0.04730149358510971, -0.015001260675489902, -0.010002495720982552, -0.018320897594094276, 0.021151544526219368, -0.030428793281316757, 0.035630013793706894, -0.025663360953330994, -0.03786401450634003, -0.033390987664461136, ...
[ -3.1796488761901855, 5.178444862365723 ]
2022-05-26T07:55:43
Deep reinforcement learning (DRL)-based combinatorial optimization (CO) methods (i.e., DRL-NCO) have shown significant merit over conventional CO solvers, as DRL-NCO can learn CO solvers while relying less on problem-specific expert domain knowledge (heuristic methods) and supervised labeled data (supervised...
Sym-NCO: Leveraging Symmetricity for Neural Combinatorial Optimization
cs.LG stat.ML
2205.13209
2,022
# Sym-NCO: Leveraging Symmetricity for Neural Combinatorial Optimization Deep reinforcement learning (DRL)-based combinatorial optimization (CO) methods (i.e., DRL-NCO) have shown significant merit over conventional CO solvers, as DRL-NCO can learn CO solvers while relying less on problem-specific expert d...
[ -0.005448325537145138, 0.006844686344265938, 0.005759713239967823, 0.040318217128515244, -0.028531573712825775, -0.006797999609261751, 0.010403879918158054, 0.03465881198644638, 0.015119550749659538, 0.017097169533371925, 0.03595742583274841, -0.01633787527680397, -0.020195789635181427, 0....
[ 0.5278958082199097, 10.75418472290039 ]
2022-05-26T08:16:14
Vision Transformers (ViTs) have triggered the most recent and significant breakthroughs in computer vision. Their efficient designs are mostly guided by the indirect metric of computational complexity, i.e., FLOPs, which, however, has a clear gap from direct metrics such as throughput. Thus, we propose to use the di...
Fast Vision Transformers with HiLo Attention
cs.CV cs.AI cs.LG
2205.13213
2,022
# Fast Vision Transformers with HiLo Attention Vision Transformers (ViTs) have triggered the most recent and significant breakthroughs in computer vision. Their efficient designs are mostly guided by the indirect metric of computational complexity, i.e., FLOPs, which, however, has a clear gap from direct metrics su...
[ 0.005587838590145111, -0.028205007314682007, 0.012715711258351803, 0.040730081498622894, 0.05687038227915764, 0.005369674880057573, -0.027314234524965286, 0.05015486478805542, 0.00009627266263123602, 0.04702574387192726, 0.011816526763141155, -0.03824589401483536, 0.004563131369650364, 0.0...
[ 3.6374106407165527, 4.928980827331543 ]
2022-05-26T08:17:39
Recently, many works have demonstrated that Symmetric Non-negative Matrix Factorization (SymNMF) enjoys great superiority in various clustering tasks. Although state-of-the-art algorithms for SymNMF perform well on synthetic data, they cannot consistently obtain satisfactory results with desirable properties a...
SymNMF-Net for The Symmetric NMF Problem
cs.LG cs.AI
2205.13214
2,022
# SymNMF-Net for The Symmetric NMF Problem Recently, many works have demonstrated that Symmetric Non-negative Matrix Factorization (SymNMF) enjoys great superiority in various clustering tasks. Although state-of-the-art algorithms for SymNMF perform well on synthetic data, they cannot consistently obtain sati...
[ -0.001843905309215188, -0.023263849318027496, 0.023913869634270668, 0.01167140994220972, -0.04643505811691284, -0.055332377552986145, 0.009057027287781239, 0.049976252019405365, 0.011513140983879566, -0.031118785962462425, 0.02099481225013733, -0.04089435189962387, -0.002157471841201186, 0...
[ -2.6487793922424316, 7.189738750457764 ]
2022-05-26T08:20:19
Federated learning enables isolated clients to train a shared model collaboratively by aggregating the locally-computed gradient updates. However, privacy information could be leaked from uploaded gradients and be exposed to malicious attackers or an honest-but-curious server. Although the additive homomorphic encryp...
Encoded Gradients Aggregation against Gradient Leakage in Federated Learning
cs.CR cs.LG
2205.13216
2,022
# Encoded Gradients Aggregation against Gradient Leakage in Federated Learning Federated learning enables isolated clients to train a shared model collaboratively by aggregating the locally-computed gradient updates. However, privacy information could be leaked from uploaded gradients and be exposed to malicious a...
[ -0.05721475929021835, 0.006318762432783842, -0.021641448140144348, -0.0007742244051769376, 0.027366584166884422, -0.078670434653759, 0.016498291864991188, 0.037358079105615616, 0.03320824354887009, 0.046978846192359924, 0.009456485509872437, 0.011029986664652824, -0.010586781427264214, -0....
[ -2.0974228382110596, 10.244235038757324 ]