aid stringlengths 9 15 | mid stringlengths 7 10 | abstract stringlengths 78 2.56k | related_work stringlengths 92 1.77k | ref_abstract dict |
|---|---|---|---|---|
1902.01929 | 2914086222 | Over the last four years we have operated a public smartphone platform testbed called PHONELAB. PHONELAB consists of up to several-hundred participants who run an experimental platform image on their primary smartphone. The experimental platform consists of both instrumentation and experimental changes to platform components, including core Android services and Linux. This paper describes the design of the testbed, the process of conducting PHONELAB experiments, and some of the research the testbed has supported. We also offer many lessons learned along the way, almost all of which have been learned the hard way--through trial and a lot of error. We expect our experiences will help those contemplating operating large user-facing testbeds, anyone conducting experiments on smartphones, and many mobile systems researchers. | The NetSense project @cite_8 has been using smartphones to study the social interaction between college students. Free Nexus S smartphones were distributed to 200 university freshmen for two years. The smartphones ran a modified CyanogenMod image, and instrumentation was added to log communication events---such as phone calls, SMS, Facebook posts, and Bluetooth proximity. Unlike PhoneLab, NetSense focuses on behavioral rather than systems experiments. The testbed is not open, nor is there a way to distribute platform modifications. At this point NetSense has moved to running as an app and utilizing a "bring your own device" model, rendering it complicated or impossible to perform platform experiments. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2058996704"
],
"abstract": [
"Over the past few years, smartphones have emerged as one of the most popular mechanisms for accessing content across the Internet driving considerable research to improve wireless performance. A key foundation for such research efforts is the proper understanding of user behavior. However, the gathering of live smartphone data at scale is often difficult and expensive. The focus of this paper is to explore the lessons learned from a two year study of two hundred smart phone users at the University of Notre Dame. In this paper, we offer commentary with regards to the entire process of the study covering aspects including funding considerations, technical architecture design, lessons learned, and recommendations for future efforts gathering live user data."
]
} |
1902.01929 | 2914086222 | Over the last four years we have operated a public smartphone platform testbed called PHONELAB. PHONELAB consists of up to several-hundred participants who run an experimental platform image on their primary smartphone. The experimental platform consists of both instrumentation and experimental changes to platform components, including core Android services and Linux. This paper describes the design of the testbed, the process of conducting PHONELAB experiments, and some of the research the testbed has supported. We also offer many lessons learned along the way, almost all of which have been learned the hard way--through trial and a lot of error. We expect our experiences will help those contemplating operating large user-facing testbeds, anyone conducting experiments on smartphones, and many mobile systems researchers. | LiveLabs @cite_10 @cite_26 @cite_18 is a human behavioral experiment testbed utilizing smartphones. Its operators do not hand out smartphones or control the platform, but instead deploy experiment software on participants' own devices. Because of this, LiveLabs is able to scale up to several thousand participants spanning three venues, including a university campus, a resort island, and a large convention center. LiveLabs has different aims than PhoneLab: its goal is to enable pervasive computing experiments rather than work on smartphone systems. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_10"
],
"mid": [
"2473243880",
"1994718979",
"2089790844"
],
"abstract": [
"In this paper, we present LiveLabs, a first-of-its-kind testbed that is deployed across a university campus, convention centre, and resort island and collects real-time attributes such as location, group context etc., from hundreds of opt-in participants. These venues, data, and participants are then made available for running rich human-centric behavioural experiments that could test new mobile sensing infrastructure, applications, analytics, or more social-science type hypotheses that influence and then observe actual user behaviour. We share case studies of how researchers from around the world have and are using LiveLabs, and our experiences and lessons learned from building, maintaining, and expanding Live-Labs over the last three years.",
"We present LiveLabs, a mobile experimentation testbed that is currently deployed across our university campus with further deployments at a large shopping mall, a commercial airport, and a resort island soon to follow. The key goal of LiveLabs is to allow in-situ real-time experimentation of mobile applications and services that require context-specific triggers with real participants on their actual smart phones. We describe how LiveLabs works, and then explain the novel R&D required to realise it. We end with a description of the current LiveLabs status (> 700 active participants to date) as well as present some key lessons learned.",
"We believe that, for successful adoption of novel mobile technologies and applications, it is important to be able to test them under real usage patterns, and with real users. To implement this vision, we present our initial effort in building LiveLabs, a large-scale mobile testbed for in-situ experimentation. LiveLabs is unique in two aspects. First, LiveLabs operates on a scale much larger than most research testbeds it is being deployed in four different public spaces in Singapore (a university campus, a shopping mall, an airport and a leisure resort), and is expected to have a pool of over 30,000 opt-in participants. Second, LiveLabs not only instruments smartphones and the infrastructure to gather deep individual and collective context, but also provides a unique experimentation platform that automates many aspects of behavioral experimentation, such as subject selection and context-triggered delivery of interventions. We briefly describe some of the research challenges associated with building such a large-scale deep-context collection testbed, as well as the current status of LiveLabs. We then share our perspectives on the challenges of setting up and operating such testbeds, with the expectation that our experiences will prove useful to other researchers looking to build similar testbeds elsewhere."
]
} |
1902.01929 | 2914086222 | Over the last four years we have operated a public smartphone platform testbed called PHONELAB. PHONELAB consists of up to several-hundred participants who run an experimental platform image on their primary smartphone. The experimental platform consists of both instrumentation and experimental changes to platform components, including core Android services and Linux. This paper describes the design of the testbed, the process of conducting PHONELAB experiments, and some of the research the testbed has supported. We also offer many lessons learned along the way, almost all of which have been learned the hard way--through trial and a lot of error. We expect our experiences will help those contemplating operating large user-facing testbeds, anyone conducting experiments on smartphones, and many mobile systems researchers. | Finally, SmartLab @cite_9 is a smartphone testbed consisting of 40 Android smartphones. The smartphones are connected to a hub via USB, and user interactions are simulated through a web-based remote screen terminal. The devices are neither mobile nor used by real users. PhoneLab provides a level of realism that SmartLab lacks. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2407562357"
],
"abstract": [
"SmartLab is a first-of-a-kind open cloud of smartphones that enables a new line of systems-oriented mobile computing research."
]
} |
1902.01929 | 2914086222 | Over the last four years we have operated a public smartphone platform testbed called PHONELAB. PHONELAB consists of up to several-hundred participants who run an experimental platform image on their primary smartphone. The experimental platform consists of both instrumentation and experimental changes to platform components, including core Android services and Linux. This paper describes the design of the testbed, the process of conducting PHONELAB experiments, and some of the research the testbed has supported. We also offer many lessons learned along the way, almost all of which have been learned the hard way--through trial and a lot of error. We expect our experiences will help those contemplating operating large user-facing testbeds, anyone conducting experiments on smartphones, and many mobile systems researchers. | There have also been various attempts to deploy experiments as apps on software marketplaces. MobiPerf @cite_5 is an Android app that utilizes the Mobilyzer @cite_21 library to perform network measurements, such as bandwidth and latency testing. The app was deployed on the Google Play store and has over 10K installations so far. Device Analyzer @cite_25 is an Android data collection tool that collects various information in the background, such as phone charging status, phone calls, Bluetooth proximity, and so on. Unlike MobiPerf, Device Analyzer provides no value to users as an app and relies on voluntary participation. Compared to app-based measurement tools, PhoneLab has access to unfiltered and more detailed information by instrumenting the smartphone platform. | {
"cite_N": [
"@cite_5",
"@cite_21",
"@cite_25"
],
"mid": [
"",
"1971841691",
"1208857234"
],
"abstract": [
"",
"Mobile Internet availability, performance and reliability have remained stubbornly opaque since the rise of cellular data access. Conducting network measurements can give us insight into user-perceived network conditions, but doing so requires careful consideration of device state and efficient use of scarce resources. Existing approaches address these concerns in ad-hoc ways. In this work we propose Mobilyzer, a platform for conducting mobile network measurement experiments in a principled manner. Our system is designed around three key principles: network measurements from mobile devices require tightly controlled access to the network interface to provide isolation; these measurements can be performed efficiently using a global view of available device resources and experiments; and distributing the platform as a library to existing apps provides the incentives and low barrier to adoption necessary for large-scale deployments. We describe our current design and implementation, and illustrate how it provides measurement isolation for applications, efficiently manages measurement experiments and enables a new class of experiments for the mobile environment.",
"We describe Device Analyzer, a robust data collection tool which is able to reliably collect information on Android smartphone usage from an open community of contributors. We collected the largest, most detailed dataset of Android phone use publicly available to date. In this paper we systematically evaluate smartphones as a platform for mobile ubiquitous computing by quantifying access to critical resources in the wild. Our analysis of the dataset demonstrates considerable diversity in behaviour between users but also over time. We further demonstrate the value of handset-centric data collection by presenting case-study analyses of human mobility, interaction patterns, and energy management and identify notable differences between our results and those found by other studies."
]
} |
1902.01729 | 2953103454 | The presence of data corruption in user-generated streaming data, such as social media, motivates a new fundamental problem that learns reliable regression coefficient when features are not accessible entirely at one time. Until now, several important challenges still cannot be handled concurrently: 1) corrupted data estimation when only partial features are accessible; 2) online feature selection when data contains adversarial corruption; and 3) scaling to a massive dataset. This paper proposes a novel RObust regression algorithm via Online Feature Selection () that concurrently addresses all the above challenges. Specifically, the algorithm iteratively updates the regression coefficients and the uncorrupted set via a robust online feature substitution method. We also prove that our algorithm has a restricted error bound compared to the optimal solution. Extensive empirical experiments in both synthetic and real-world datasets demonstrated that the effectiveness of our new method is superior to that of existing methods in the recovery of both feature selection and regression coefficients, with very competitive efficiency. | A large body of literature on the robust regression problem has been established over the last few decades. Most studies focus on handling small amounts of stochastic noise @cite_3 ; however, these methods cannot be applied to data that may exhibit malicious corruption @cite_0 . To recover regression coefficients under adversarial data corruption, @cite_0 proposed a robust algorithm based on the trimmed inner product. @cite_20 proposed a sub-sampling algorithm for large-scale corrupted linear regression, but its theoretical recovery bounds are not close to the ground truth @cite_15 . Some @math -penalty-based methods @cite_10 @cite_16 pursue strong recovery results for the robust regression problem, but these methods depend on severe restrictions on the data distribution, such as row-sampling from an incoherent orthogonal matrix @cite_16 . @cite_18 proposed a distributed robust algorithm to handle large-scale datasets under adversarial data corruption. | {
"cite_N": [
"@cite_18",
"@cite_3",
"@cite_0",
"@cite_15",
"@cite_16",
"@cite_10",
"@cite_20"
],
"mid": [
"2764074721",
"2099210013",
"54422097",
"2952044842",
"",
"2096586829",
"2949125143"
],
"abstract": [
"In today's era of big data, robust least-squares regression becomes a more challenging problem when considering the adversarial corruption along with explosive growth of datasets. Traditional robust methods can handle the noise but suffer from several challenges when applied in huge dataset including 1) computational infeasibility of handling an entire dataset at once, 2) existence of heterogeneously distributed corruption, and 3) difficulty in corruption estimation when data cannot be entirely loaded. This paper proposes online and distributed robust regression approaches, both of which can concurrently address all the above challenges. Specifically, the distributed algorithm optimizes the regression coefficients of each data block via heuristic hard thresholding and combines all the estimates in a distributed robust consolidation. Furthermore, an online version of the distributed algorithm is proposed to incrementally update the existing estimates with new incoming data. We also prove that our algorithms benefit from strong robustness guarantees in terms of regression coefficient recovery with a constant upper bound on the error of state-of-the-art batch methods. Extensive experiments on synthetic and real datasets demonstrate that our approaches are superior to those of existing methods in effectiveness, with competitive efficiency.",
"Although the standard formulations of prediction problems involve fully-observed and noiseless data drawn in an i.i.d. manner, many applications involve noisy and or missing data, possibly involving dependence, as well. We study these issues in the context of high-dimensional sparse linear regression, and propose novel estimators for the cases of noisy, missing and or dependent data. Many standard approaches to noisy or missing data, such as those using the EM algorithm, lead to optimization problems that are inherently nonconvex, and it is difficult to establish theoretical guarantees on practical algorithms. While our approach also involves optimizing nonconvex programs, we are able to both analyze the statistical error associated with any global optimum, and more surprisingly, to prove that a simple algorithm based on projected gradient descent will converge in polynomial time to a small neighborhood of the set of all global minimizers. On the statistical side, we provide nonasymptotic bounds that hold with high probability for the cases of noisy, missing and or dependent data. On the computational side, we prove that under the same types of conditions required for statistical consistency, the projected gradient descent algorithm is guaranteed to converge at a geometric rate to a near-global minimizer. We illustrate these theoretical predictions with simulations, showing close agreement with the predicted scalings.",
"We consider high dimensional sparse regression with arbitrary - possibly, severe or coordinated - errors in the covariates matrix. We are interested in understanding how many corruptions we can tolerate, while identifying the correct support. To the best of our knowledge, neither standard outlier rejection techniques, nor recently developed robust regression algorithms (that focus only on corrupted response variables), nor recent algorithms for dealing with stochastic noise or erasures, can provide guarantees on support recovery. As we show, neither can the natural brute force algorithm that takes exponential time to find the subset of data and support columns, that yields the smallest regression error. We explore the power of a simple idea: replace the essential linear algebraic calculation - the inner product - with a robust counterpart that cannot be greatly affected by a controlled number of arbitrarily corrupted points: the trimmed inner product. We consider three popular algorithms in the uncorrupted setting: Thresholding Regression, Lasso, and the Dantzig selector, and show that the counterparts obtained using the trimmed inner product are provably robust.",
"We study the problem of Robust Least Squares Regression (RLSR) where several response variables can be adversarially corrupted. More specifically, for a data matrix X R^ p x n and an underlying model w*, the response vector is generated as y = X'w* + b where b R^n is the corruption vector supported over at most C.n coordinates. Existing exact recovery results for RLSR focus solely on L1-penalty based convex formulations and impose relatively strict model assumptions such as requiring the corruptions b to be selected independently of X. In this work, we study a simple hard-thresholding algorithm called TORRENT which, under mild conditions on X, can recover w* exactly even if b corrupts the response variables in an adversarial manner, i.e. both the support and entries of b are selected adversarially after observing X and w*. Our results hold under deterministic assumptions which are satisfied if X is sampled from any sub-Gaussian distribution. Finally unlike existing results that apply only to a fixed w*, generated independently of X, our results are universal and hold for any w* R^p. Next, we propose gradient descent-based extensions of TORRENT that can scale efficiently to large scale problems, such as high dimensional sparse recovery and prove similar recovery guarantees for these extensions. Empirically we find TORRENT, and more so its extensions, offering significantly faster recovery than the state-of-the-art L1 solvers. For instance, even on moderate-sized datasets (with p = 50K) with around 40 corrupted responses, a variant of our proposed method called TORRENT-HYB is more than 20x faster than the best L1 solver.",
"",
"This paper studies the problem of recovering a sparse signal x ∈ ℝn from highly corrupted linear measurements y = Ax + e ∈ ℝm, where e is an unknown error vector whose nonzero entries may be unbounded. Motivated by an observation from face recognition in computer vision, this paper proves that for highly correlated (and possibly overcomplete) dictionaries A, any sufficiently sparse signal x can be recovered by solving an l1 -minimization problem min ||x||1 + ||e||1 subject to y = Ax + e. More precisely, if the fraction of the support of the error e is bounded away from one and the support of a: is a very small fraction of the dimension m, then as m becomes large the above l1 -minimization succeeds for all signals x and almost all sign-and-support patterns of e. This result suggests that accurate recovery of sparse signals is possible and computationally feasible even with nearly 100 of the observations corrupted. The proof relies on a careful characterization of the faces of a convex polytope spanned together by the standard crosspolytope and a set of independent identically distributed (i.i.d.) Gaussian vectors with nonzero mean and small variance, dubbed the \"cross-and-bouquet\" (CAB) model. Simulations and experiments corroborate the findings, and suggest extensions to the result.",
"Subsampling methods have been recently proposed to speed up least squares estimation in large scale settings. However, these algorithms are typically not robust to outliers or corruptions in the observed covariates. The concept of influence that was developed for regression diagnostics can be used to detect such corrupted observations as shown in this paper. This property of influence -- for which we also develop a randomized approximation -- motivates our proposed subsampling algorithm for large scale corrupted linear regression which limits the influence of data points since highly influential points contribute most to the residual error. Under a general model of corrupted observations, we show theoretically and empirically on a variety of simulated and real datasets that our algorithm improves over the current state-of-the-art approximation schemes for ordinary least squares."
]
} |
1902.01729 | 2953103454 | The presence of data corruption in user-generated streaming data, such as social media, motivates a new fundamental problem that learns reliable regression coefficient when features are not accessible entirely at one time. Until now, several important challenges still cannot be handled concurrently: 1) corrupted data estimation when only partial features are accessible; 2) online feature selection when data contains adversarial corruption; and 3) scaling to a massive dataset. This paper proposes a novel RObust regression algorithm via Online Feature Selection () that concurrently addresses all the above challenges. Specifically, the algorithm iteratively updates the regression coefficients and the uncorrupted set via a robust online feature substitution method. We also prove that our algorithm has a restricted error bound compared to the optimal solution. Extensive empirical experiments in both synthetic and real-world datasets demonstrated that the effectiveness of our new method is superior to that of existing methods in the recovery of both feature selection and regression coefficients, with very competitive efficiency. | Most research in this area requires a corruption ratio parameter, which is difficult to estimate under the assumption that the dataset can be adversarially attacked. For instance, She and Owen @cite_9 rely on a regularization parameter to determine the size of the uncorrupted set based on soft-thresholding. @cite_0 require an upper bound on the number of outliers, which is also difficult to estimate when the data contain adversarial corruption. @cite_15 proposed a hard-thresholding algorithm with a strong guarantee of coefficient recovery under mild assumptions on the input data. However, the algorithm requires the corruption ratio parameter, and its recovery error can more than double if the parameter is far from the true value. Recently, @cite_21 proposed a heuristic hard-thresholding method that learns the optimal uncorrupted set. However, all these approaches are based on batch feature selection under the assumption that all features can be accessed entirely at any time, which is infeasible for massive and fast-growing feature sets. | {
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_21",
"@cite_15"
],
"mid": [
"54422097",
"1969515697",
"2741929921",
"2952044842"
],
"abstract": [
"We consider high dimensional sparse regression with arbitrary - possibly, severe or coordinated - errors in the covariates matrix. We are interested in understanding how many corruptions we can tolerate, while identifying the correct support. To the best of our knowledge, neither standard outlier rejection techniques, nor recently developed robust regression algorithms (that focus only on corrupted response variables), nor recent algorithms for dealing with stochastic noise or erasures, can provide guarantees on support recovery. As we show, neither can the natural brute force algorithm that takes exponential time to find the subset of data and support columns, that yields the smallest regression error. We explore the power of a simple idea: replace the essential linear algebraic calculation - the inner product - with a robust counterpart that cannot be greatly affected by a controlled number of arbitrarily corrupted points: the trimmed inner product. We consider three popular algorithms in the uncorrupted setting: Thresholding Regression, Lasso, and the Dantzig selector, and show that the counterparts obtained using the trimmed inner product are provably robust.",
"This article studies the outlier detection problem from the standpoint of penalized regression. In the regression model, we add one mean shift parameter for each of the n data points. We then apply a regularization favoring a sparse vector of mean shift parameters. The usual L1 penalty yields a convex criterion, but fails to deliver a robust estimator. The L1 penalty corresponds to soft thresholding. We introduce a thresholding (denoted by Θ) based iterative procedure for outlier detection (Θ–IPOD). A version based on hard thresholding correctly identifies outliers on some hard test problems. We describe the connection between Θ–IPOD and M-estimators. Our proposed method has one tuning parameter with which to both identify outliers and estimate regression coefficients. A data-dependent choice can be made based on the Bayes information criterion. The tuned Θ–IPOD shows outstanding performance in identifying outliers in various situations compared with other existing approaches. In addition, Θ–IPOD is much ...",
"",
"We study the problem of Robust Least Squares Regression (RLSR) where several response variables can be adversarially corrupted. More specifically, for a data matrix X R^ p x n and an underlying model w*, the response vector is generated as y = X'w* + b where b R^n is the corruption vector supported over at most C.n coordinates. Existing exact recovery results for RLSR focus solely on L1-penalty based convex formulations and impose relatively strict model assumptions such as requiring the corruptions b to be selected independently of X. In this work, we study a simple hard-thresholding algorithm called TORRENT which, under mild conditions on X, can recover w* exactly even if b corrupts the response variables in an adversarial manner, i.e. both the support and entries of b are selected adversarially after observing X and w*. Our results hold under deterministic assumptions which are satisfied if X is sampled from any sub-Gaussian distribution. Finally unlike existing results that apply only to a fixed w*, generated independently of X, our results are universal and hold for any w* R^p. Next, we propose gradient descent-based extensions of TORRENT that can scale efficiently to large scale problems, such as high dimensional sparse recovery and prove similar recovery guarantees for these extensions. Empirically we find TORRENT, and more so its extensions, offering significantly faster recovery than the state-of-the-art L1 solvers. For instance, even on moderate-sized datasets (with p = 50K) with around 40 corrupted responses, a variant of our proposed method called TORRENT-HYB is more than 20x faster than the best L1 solver."
]
} |
1902.01729 | 2953103454 | The presence of data corruption in user-generated streaming data, such as social media, motivates a new fundamental problem that learns reliable regression coefficient when features are not accessible entirely at one time. Until now, several important challenges still cannot be handled concurrently: 1) corrupted data estimation when only partial features are accessible; 2) online feature selection when data contains adversarial corruption; and 3) scaling to a massive dataset. This paper proposes a novel RObust regression algorithm via Online Feature Selection () that concurrently addresses all the above challenges. Specifically, the algorithm iteratively updates the regression coefficients and the uncorrupted set via a robust online feature substitution method. We also prove that our algorithm has a restricted error bound compared to the optimal solution. Extensive empirical experiments in both synthetic and real-world datasets demonstrated that the effectiveness of our new method is superior to that of existing methods in the recovery of both feature selection and regression coefficients, with very competitive efficiency. | Online feature selection methods @cite_19 @cite_6 @cite_5 relax the requirement of batch selection and fit scenarios in which features cannot be accessed entirely at one time. Statistical online feature selection algorithms @cite_14 @cite_22 @cite_4 select features via statistical quantities such as mutual information, but these methods lack task-specific objectives and usually yield sub-optimal solutions for certain tasks. Optimization-based approaches @cite_1 @cite_7 use target-oriented objective functions solved by specific optimization techniques. These methods usually require the regression coefficient @math to be sparse, i.e., @math . Grafting @cite_2 and its variation @cite_7 relax the hard constraint on the feature set into an @math penalty, which makes the problem convex. However, the parameter of the @math norm @cite_8 is difficult to determine because the usual cross-validation strategy is unavailable in the online feature selection scenario @cite_11 . @cite_12 proposed a limited-memory substitution algorithm based on the @math norm constraint. Although the hard constraint leads to an NP-hard problem, a theoretical guarantee on the error bound of their locally optimal solution is provided. However, none of these online feature selection methods can handle adversarial data corruption. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_1",
"@cite_6",
"@cite_19",
"@cite_2",
"@cite_5",
"@cite_12",
"@cite_11"
],
"mid": [
"2155648952",
"1983150145",
"1949281989",
"",
"",
"1887132526",
"2136051823",
"2153605240",
"",
"2257550357",
"2357827496",
"1612277053"
],
"abstract": [
"In streamwise feature selection, new features are sequentially considered for addition to a predictive model. When the space of potential features is large, streamwise feature selection offers many advantages over traditional feature selection methods, which assume that all features are known in advance. Features can be generated dynamically, focusing the search for new features on promising subspaces, and overfitting can be controlled by dynamically adjusting the threshold for adding features to the model. In contrast to traditional forward feature selection algorithms such as stepwise regression in which at each step all possible features are evaluated and the best one is selected, streamwise feature selection only evaluates each feature once when it is generated. We describe information-investing and α-investing, two adaptive complexity penalty methods for streamwise feature selection which dynamically adjust the threshold on the error reduction required for adding a new feature. These two methods give false discovery rate style guarantees against overfitting. They differ from standard penalty methods such as AIC, BIC and RIC, which always drastically over- or under-fit in the limit of infinite numbers of non-predictive features. Empirical results show that streamwise regression is competitive with (on small data sets) and superior to (on large data sets) much more compute-intensive feature selection methods such as stepwise regression, and allows feature selection on problems with millions of potential features.",
"Feature selection is important in many big data applications. There are at least two critical challenges. Firstly, in many applications, the dimensionality is extremely high, in millions, and keeps growing. Secondly, feature selection has to be highly scalable, preferably in an online manner such that each feature can be processed in a sequential scan. In this paper, we develop SAOLA, a Scalable and Accurate On Line Approach for feature selection. With a theoretical analysis on a low bound on the pair wise correlations between features in the currently selected feature subset, SAOLA employs novel online pair wise comparison techniques to address the two challenges and maintain a parsimonious model over time in an online manner. An empirical study using a series of benchmark real data sets shows that SAOLA is scalable on data sets of extremely high dimensionality, and has superior performance over the state-of-the-art feature selection methods.",
"We study an interesting and challenging problem, online streaming feature selection, in which the size of the feature set is unknown, and not all features are available for learning while leaving the number of observations constant. In this problem, the candidate features arrive one at a time, and the learner's task is to select a \"best so far\" set of features from streaming features. Standard feature selection methods cannot perform well in this scenario. Thus, we present a novel framework based on feature relevance. Under this framework, a promising alternative method, Online Streaming Feature Selection (OSFS), is presented to online select strongly relevant and non-redundant features. In addition to OSFS, a faster Fast-OSFS algorithm is proposed to further improve the selection efficiency. Experimental results show that our algorithms achieve more compactness and better accuracy than existing streaming feature selection algorithms on various datasets.",
"",
"",
"In the standard feature selection problem, we are given a fixed set of candidate features for use in a learning problem, and must select a subset that will be used to train a model that is \"as good as possible\" according to some criterion. In this paper, we present an interesting and useful variant, the online feature selection problem, in which, instead of all features being available from the start, features arrive one at a time. The learner's task is to select a subset of features and return a corresponding model at each time step which is as good as possible given the features seen so far. We argue that existing feature selection methods do not perform well in this scenario, and describe a promising alternative method, based on a stagewise gradient descent technique which we call grafting.",
"Feature selection is an important technique for data mining. Despite its importance, most studies of feature selection are restricted to batch learning. Unlike traditional batch learning methods, online learning represents a promising family of efficient and scalable machine learning algorithms for large-scale applications. Most existing studies of online learning require accessing all the attributes features of training instances. Such a classical setting is not always appropriate for real-world applications when data instances are of high dimensionality or it is expensive to acquire the full set of attributes features. To address this limitation, we investigate the problem of online feature selection (OFS) in which an online learner is only allowed to maintain a classifier involved only a small and fixed number of features. The key challenge of online feature selection is how to make accurate prediction for an instance using a small number of active features. This is in contrast to the classical setup of online learning where all the features can be used for prediction. We attempt to tackle this challenge by studying sparsity regularization and truncation techniques. Specifically, this article addresses two different tasks of online feature selection: 1) learning with full input, where an learner is allowed to access all the features to decide the subset of active features, and 2) learning with partial input, where only a limited number of features is allowed to be accessed for each instance by the learner. We present novel algorithms to solve each of the two problems and give their performance analysis. We evaluate the performance of the proposed algorithms for online feature selection on several public data sets, and demonstrate their applications to real-world problems including image classification in computer vision and microarray gene expression analysis in bioinformatics. The encouraging results of our experiments validate the efficacy and efficiency of the proposed techniques.",
"Content-based image retrieval (CBIR) has been more and more important in the last decade, and the gap between high-level semantic concepts and low-level visual features hinders further performance improvement. The problem of online feature selection is critical to really bridge this gap. In this paper, we investigate online feature selection in the relevance feedback learning process to improve the retrieval performance of the region-based image retrieval system. Our contributions are mainly in three areas. 1) A novel feature selection criterion is proposed, which is based on the psychological similarity between the positive and negative training sets. 2) An effective online feature selection algorithm is implemented in a boosting manner to select the most representative features for the current query concept and combine classifiers constructed over the selected features to retrieve images. 3) To apply the proposed feature selection method in region-based image retrieval systems, we propose a novel region-based representation to describe images in a uniform feature space with real-valued fuzzy features. Our system is suitable for online relevance feedback learning in CBIR by meeting the three requirements: learning with small size training set, the intrinsic asymmetry property of training samples, and the fast response requirement. Extensive experiments, including comparisons with many state-of-the-arts, show the effectiveness of our algorithm in improving the retrieval performance and saving the processing time.",
"",
"Feature selection is important in many big data applications. Two critical challenges closely associate with big data. First, in many big data applications, the dimensionality is extremely high, in millions, and keeps growing. Second, big data applications call for highly scalable feature selection algorithms in an online manner such that each feature can be processed in a sequential scan. We present SAOLA, a S calable and A ccurate O n L ine A pproach for feature selection in this paper. With a theoretical analysis on bounds of the pairwise correlations between features, SAOLA employs novel pairwise comparison techniques and maintains a parsimonious model over time in an online manner. Furthermore, to deal with upcoming features that arrive by groups, we extend the SAOLA algorithm, and then propose a new group-SAOLA algorithm for online group feature selection. The group-SAOLA algorithm can online maintain a set of feature groups that is sparse at the levels of both groups and individual features simultaneously. An empirical study using a series of benchmark real datasets shows that our two algorithms, SAOLA and group-SAOLA, are scalable on datasets of extremely high dimensionality and have superior performance over the state-of-the-art feature selection methods.",
"This paper considers the feature selection scenario where only a few features are accessible at any time point. For example, features are generated sequentially and visible one by one. Therefore, one has to make an online decision to identify key features after all features are only scanned once or twice. The optimization based approach is a powerful tool for the online feature selection. However, most existing optimization based algorithms explicitly or implicitly adopt L1 norm regularization to identify important features, and suffer two main disadvantages: 1) the penalty term for L1 norm term is hard to choose; and 2) the memory usage is hard to control or predict. To overcome these two drawbacks, this paper proposes a limited-memory and model parameter free online feature selection algorithm, namely online substitution (OS) algorithm. To improve the selection efficiency, an asynchronous parallel extension for OS (Asy-OS) is proposed. Convergence guarantees are provided for both algorithms. Empirical study suggests that the performance of OS and Asy-OS is comparable to the benchmark algorithm Grafting, but requires much less memory cost and can be easily extended to the parallel implementation.",
"Online selection of dynamic features has attracted intensive interest in recent years. However, existing online feature selection methods evaluate features individually and ignore the underlying structure of a feature stream. For instance, in image analysis, features are generated in groups which represent color, texture, and other visual information. Simply breaking the group structure in feature selection may degrade performance. Motivated by this observation, we formulate the problem as an online group feature selection. The problem assumes that features are generated individually but there are group structures in the feature stream. To the best of our knowledge, this is the first time that the correlation among streaming features has been considered in the online feature selection process. To solve this problem, we develop a novel online group feature selection method named OGFS. Our proposed approach consists of two stages: online intra-group selection and online inter-group selection. In the intra-group selection, we design a criterion based on spectral analysis to select discriminative features in each group. In the inter-group selection, we utilize a linear regression model to select an optimal subset. This two-stage procedure continues until there are no more features arriving or some predefined stopping conditions are met. Finally, we apply our method to multiple tasks including image classification and face verification. Extensive empirical studies performed on real-world and benchmark data sets demonstrate that our method outperforms other state-of-the-art online feature selection methods."
]
} |
1902.02021 | 2967344370 | On-line experimentation (also known as A/B testing) has become an integral part of software development. To timely incorporate user feedback and continuously improve products, many software companies have adopted the culture of agile deployment, requiring online experiments to be conducted and concluded on limited sets of users for a short period. While conceptually efficient, the result observed during the experiment duration can deviate from what is seen after the feature deployment, which makes the A/B test result biased. In this paper, we provide theoretical analysis to show that heavy-users can contribute significantly to the bias, and propose a re-sampling estimator for bias adjustment. | The attractiveness of controlled experiments comes from their ability to establish causal relationships between the features being tested and the measured changes in user behaviors @cite_23 @cite_22 . One key touchstone of the trustworthiness of experimentation is external validity @cite_3 @cite_11 @cite_24 @cite_1 -- can the results observed during an experiment period still hold when the new feature being tested is rolled out to the entire user population in the future? Obviously, there can be multiple factors that affect external validity. For example, the novelty effect might jeopardize external validity: after a new feature is presented to users, those unfamiliar with it might change their behavior out of curiosity but gradually go back to their normal behavior. For another example, the weekday/weekend effect could affect external validity: if users have distinct behaviors on weekdays and weekends, an A/B test shorter than a week would yield a biased result. | {
"cite_N": [
"@cite_22",
"@cite_1",
"@cite_3",
"@cite_24",
"@cite_23",
"@cite_11"
],
"mid": [
"2263423262",
"2901677415",
"1982803242",
"1730782591",
"2010505816",
"2094419105"
],
"abstract": [
"Most questions in social and biomedical sciences are causal in nature: what would happen to individuals, or to groups, if part of their environment were changed? In this groundbreaking text, two world-renowned experts present statistical methods for studying such questions. This book starts with the notion of potential outcomes, each corresponding to the outcome that would be realized if a subject were exposed to a particular treatment or regime. In this approach, causal effects are comparisons of such potential outcomes. The fundamental problem of causal inference is that we can only observe one of the potential outcomes for a particular subject. The authors discuss how randomized experiments allow us to assess causal effects and then turn to observational studies. They lay out the assumptions needed for causal inference and describe the leading analysis methods, including matching, propensity-score methods, and instrumental variables. Many detailed applications are included, with special focus on practical aspects for the empirical researcher.",
"",
"",
"1. Experiments and Generalized Causal Inference 2. Statistical Conclusion Validity and Internal Validity 3. Construct Validity and External Validity 4. Quasi-Experimental Designs That Either Lack a Control Group or Lack Pretest Observations on the Outcome 5. Quasi-Experimental Designs That Use Both Control Groups and Pretests 6. Quasi-Experimentation: Interrupted Time Series Designs 7. Regression Discontinuity Designs 8. Randomized Experiments: Rationale, Designs, and Conditions Conducive to Doing Them 9. Practical Problems 1: Ethics, Participant Recruitment, and Random Assignment 10. Practical Problems 2: Treatment Implementation and Attrition 11. Generalized Causal Inference: A Grounded Theory 12. Generalized Causal Inference: Methods for Single Studies 13. Generalized Causal Inference: Methods for Multiple Studies 14. A Critical Assessment of Our Assumptions",
"For obtaining causal inferences that are objective, and therefore have the best chance of revealing scientific truths, carefully designed and executed randomized experiments are generally considered to be the gold standard. Observational studies, in contrast, are generally fraught with problems that compromise any claim for objectivity of the resulting causal inferences. The thesis here is that observational studies have to be carefully designed to approximate randomized experiments, in particular, without examining any final outcome data. Often a candidate data set will have to be rejected as inadequate because of lack of data on key covariates, or because of lack of overlap in the distributions of key covariates between treatment and control groups, often revealed by careful propensity score analyses. Sometimes the template for the approximating randomized experiment will have to be altered, and the use of principal stratification can be helpful in doing this. These issues are discussed and illustrated using the framework of potential outcomes to define causal effects, which greatly clarifies critical issues. 1. Randomized experiments versus observational studies. 1.1. Historical dichotomy between randomized and nonrandomized studies for causal effects. For may years, causal inference based on randomized experiments, as described, for example, in classic texts by Fisher (1935), Kempthorne (1952), Cochran and Cox (1950 )a ndCox (1958), was an entirely distinct endeavor than causal inference based on observational data sets, described, for example, in texts by Blalock (1964), Kenny (1979), Campbell and Stanley (1963), Cook and Campbell (1979), Rothman (1986), Lilienfeld and Lilienfeld (1976), Maddala (1977 )a ndCochran (1983). This began to change in the 1970’s when the use of potential outcomes, commonly used in the context of randomized experiments to define causal effects since Neyman (1923), was used to define causal effects in both randomized experiments and observational studies [Rubin (1974)]. This allowed the definition of assignment mechanisms [Rubin (1975)], with randomized experiments as special cases, thereby allowing",
"A survey drawn from social-science research which deals with correlational, ex post facto, true experimental, and quasi-experimental designs and makes methodological recommendations. Bibliogs."
]
} |
1902.01794 | 2939052374 | We produce explicit formulae for various ideal zeta functions associated to the members of an infinite family of class- @math -nilpotent Lie rings, introduced in [8], in terms of Igusa functions. As corollaries we obtain information about analytic properties of global ideal zeta functions, local functional equations, topological, reduced, and graded ideal zeta functions, as well as representation zeta functions for the unipotent group schemes associated to the Lie rings in question. | The paper @cite_10 , which introduced the groups @math , computes their pro-isomorphic zeta functions @math , enumerating the subgroups of finite index in @math whose profinite completions are isomorphic to that of @math . By general principles, these zeta functions also satisfy Euler product decompositions indexed by the rational primes, whose factors are rational functions in @math . In the notation of the current paper, @cite_10 [Theorem 1.4] establishes that the Euler factor @math of @math at a rational prime @math , enumerating the relevant subgroups of @math of @math -power index, is of the form @math for explicitly given "numerical data" @math , for integers @math , @math , comparable to (but different from) those given in . | {
"cite_N": [
"@cite_10"
],
"mid": [
"2962768297"
],
"abstract": [
"The pro-isomorphic zeta function ( ^ _ (s) ) of a finitely generated nilpotent group ( ) is a Dirichlet generating function that enumerates finite-index subgroups whose profinite completion is isomorphic to that of ( ). Such zeta functions can be expressed as Euler products of p-adic integrals over the ( Q _p )-points of an algebraic automorphism group associated to ( ). In this way they are closely related to classical zeta functions of algebraic groups over local fields."
]
} |
1902.01794 | 2939052374 | We produce explicit formulae for various ideal zeta functions associated to the members of an infinite family of class- @math -nilpotent Lie rings, introduced in [8], in terms of Igusa functions. As corollaries we obtain information about analytic properties of global ideal zeta functions, local functional equations, topological, reduced, and graded ideal zeta functions, as well as representation zeta functions for the unipotent group schemes associated to the Lie rings in question. | For a nonzero prime ideal @math in a number ring @math , the @math -ideal zeta functions @math should not be confused with the @math -ideal zeta functions of @math , considered as @math -Lie algebras. For @math , i.e. in the case of the Heisenberg Lie ring @math (cf. Example ), the latter have been computed for primes @math which are unramified in @math in @cite_6 and for primes @math which are nonsplit in @math in @cite_1 . In the forthcoming paper @cite_7 we generalize these computations to cover the ideal zeta functions of algebras arising from a large class of Lie rings, including the Grenham Lie rings @math , via base extensions with various compact discrete valuation rings. That paper also gives a survey of applications of Igusa functions in the area of zeta functions of groups and rings, and is built on a generalization of this class of functions. | {
"cite_N": [
"@cite_1",
"@cite_7",
"@cite_6"
],
"mid": [
"",
"2054570374",
"1555308356"
],
"abstract": [
"",
"We introduce a new method to calculate local normal zeta functions of finitely generated, torsion-free nilpotent groups. It is based on an enumeration of vertices in the Bruhat-Tits building for Sln(Qp). It enables us to give explicit computations for groups of class 2 with small centres and to derive local functional equations. Examples include formulae for non-uniform normal zeta functions.",
"We report on progress and problems concerning the analytical behaviour of the zeta functions of groups and rings. We also describe how these generating functions are special cases of adelic cone integrals for which our results hold."
]
} |
1902.01970 | 2914078788 | Over the past years, political events and public opinion on the Web have been allegedly manipulated by accounts dedicated to spreading disinformation and performing malicious activities on social media. These accounts hereafter referred to as "Pathogenic Social Media (PSM)" accounts, are often controlled by terrorist supporters, water armies or fake news writers and hence can pose threats to social media and general public. Understanding and analyzing PSMs could help social media firms devise sophisticated and automated techniques that could be deployed to stop them from reaching their audience and consequently reduce their threat. In this paper, we leverage the well-known statistical technique "Hawkes Process" to quantify the influence of PSM accounts on the dissemination of malicious information on social media platforms. Our findings on a real-world ISIS-related dataset from Twitter indicate that PSMs are significantly different from regular users in making a message viral. Specifically, we observed that PSMs do not usually post URLs from mainstream news sources. Instead, their tweets usually receive large impact on audience, if contained URLs from Facebook and alternative news outlets. In contrary, tweets posted by regular users receive nearly equal impression regardless of the posted URLs and their sources. Our findings can further shed light on understanding and detecting PSM accounts. | A social bot is a computer program that automatically generates content and interacts with real people on social media, trying to emulate and possibly alter their behavior @cite_33 . Recently, DARPA organized a Twitter bot challenge to detect "influence bots" @cite_17 , in which supervised and semi-supervised approaches were proposed using different features. The work of @cite_8 , for example, uses similarity to cluster accounts and uncover groups of malicious users. The work of @cite_19 presents a supervised framework for bot detection which uses more than a thousand features. In a different attempt, the work of @cite_31 studied the problem of spam detection in Wikipedia using various behavioral features of spammers. For a comprehensive survey of the ongoing efforts to fight social bots, we direct the reader to @cite_33 . | {
"cite_N": [
"@cite_33",
"@cite_8",
"@cite_19",
"@cite_31",
"@cite_17"
],
"mid": [
"1837843568",
"2125490153",
"2595521492",
"",
"2278635123"
],
"abstract": [
"Today's social bots are sophisticated and sometimes menacing. Indeed, their presence can endanger online ecosystems as well as our society.",
"The success of online social networks has attracted a constant interest in attacking and exploiting them. Attackers usually control malicious accounts, including both fake and compromised real user accounts, to launch attack campaigns such as social spam, malware distribution, and online rating distortion. To defend against these attacks, we design and implement a malicious account detection system called SynchroTrap. We observe that malicious accounts usually perform loosely synchronized actions in a variety of social network context. Our system clusters user accounts according to the similarity of their actions and uncovers large groups of malicious accounts that act similarly at around the same time for a sustained period of time. We implement SynchroTrap as an incremental processing system on Hadoop and Giraph so that it can process the massive user activity data in a large online social network efficiently. We have deployed our system in five applications at Facebook and Instagram. SynchroTrap was able to unveil more than two million malicious accounts and 1156 large attack campaigns within one month.",
"Increasing evidence suggests that a growing amount of social media content is generated by autonomous entities known as social bots. In this work we present a framework to detect such entities on Twitter. We leverage more than a thousand features extracted from public data and meta-data about users: friends, tweet content and sentiment, network patterns, and activity time series. We benchmark the classification framework by using a publicly available dataset of Twitter bots. This training data is enriched by a manually annotated collection of active Twitter users that include both humans and bots of varying sophistication. Our models yield high accuracy and agreement with each other and can detect bots of different nature. Our estimates suggest that between 9 and 15 of active Twitter accounts are bots. Characterizing ties among accounts, we observe that simple bots tend to interact with bots that exhibit more human-like behaviors. Analysis of content flows reveals retweet and mention strategies adopted by bots to interact with different target groups. Using clustering analysis, we characterize several subclasses of accounts, including spammers, self promoters, and accounts that post content from connected applications.",
"",
"From politicians and nation states to terrorist groups, numerous organizations reportedly conduct explicit campaigns to influence opinions on social media, posing a risk to freedom of expression. Thus, there is a need to identify and eliminate \"influence bots\"--realistic, automated identities that illicitly shape discussions on sites like Twitter and Facebook--before they get too influential."
]
} |
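The abstract in the record above leans on Hawkes processes to quantify how much one account's activity excites further activity. As a point of reference, here is a minimal univariate sketch with an exponential excitation kernel, simulated via Ogata's thinning algorithm; the parameters `mu`, `alpha`, and `beta` are illustrative placeholders, and the paper's actual multivariate estimation across users is considerably more involved.

```python
import numpy as np

def hawkes_intensity(t, events, mu, alpha, beta):
    """Conditional intensity lambda(t) = mu + sum_{t_i < t} alpha*beta*exp(-beta*(t - t_i))
    of a univariate Hawkes process with an exponential kernel."""
    past = np.asarray([e for e in events if e < t])
    return mu + np.sum(alpha * beta * np.exp(-beta * (t - past)))

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """Draw event times on [0, horizon] with Ogata's thinning algorithm.
    Requires alpha < 1 (the branching ratio) for stationarity."""
    rng = np.random.default_rng(seed)
    t, events = 0.0, []
    while True:
        # The intensity decays between events, so its current value plus the
        # jump a fresh event would add upper-bounds it until the next event.
        lam_bar = hawkes_intensity(t, events, mu, alpha, beta) + alpha * beta
        t += rng.exponential(1.0 / lam_bar)
        if t >= horizon:
            return events
        if rng.uniform() * lam_bar <= hawkes_intensity(t, events, mu, alpha, beta):
            events.append(t)
```

The branching ratio `alpha` is the expected number of follow-on events each event triggers, which is the kind of quantity one would read off as "influence" in this setting.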
1902.01970 | 2914078788 | Over the past years, political events and public opinion on the Web have been allegedly manipulated by accounts dedicated to spreading disinformation and performing malicious activities on social media. These accounts, hereafter referred to as "Pathogenic Social Media (PSM)" accounts, are often controlled by terrorist supporters, water armies or fake news writers and hence can pose threats to social media and the general public. Understanding and analyzing PSMs could help social media firms devise sophisticated and automated techniques that could be deployed to stop them from reaching their audience and consequently reduce their threat. In this paper, we leverage the well-known statistical technique "Hawkes Process" to quantify the influence of PSM accounts on the dissemination of malicious information on social media platforms. Our findings on a real-world ISIS-related dataset from Twitter indicate that PSMs are significantly different from regular users in making a message viral. Specifically, we observed that PSMs do not usually post URLs from mainstream news sources. Instead, their tweets usually have a large impact on the audience if they contain URLs from Facebook and alternative news outlets. In contrast, tweets posted by regular users receive nearly equal impressions regardless of the posted URLs and their sources. Our findings can further shed light on understanding and detecting PSM accounts. | Fake news detection has recently attracted growing interest from the general public and researchers, as the spread of misinformation on social media and the Web increases on a daily basis. A growing body of work has been devoted to addressing the impact of bots in manipulating political discussion and spreading fake news, including during the 2016 U.S. presidential election @cite_37 @cite_11 @cite_20 and the 2017 French election @cite_21. For example, @cite_20 analyzes tweets following the recent U.S. presidential election and finds evidence that bots played key roles in spreading fake news. | {
"cite_N": [
"@cite_37",
"@cite_21",
"@cite_20",
"@cite_11"
],
"mid": [
"",
"2724523750",
"2804365752",
"2550819555"
],
"abstract": [
"",
"Recent accounts from researchers, journalists, as well as federal investigators, reached a unanimous conclusion: social media are systematically exploited to manipulate and alter public opinion. Some disinformation campaigns have been coordinated by means of bots, social media accounts controlled by computer scripts that try to disguise themselves as legitimate human users. In this study, we describe one such operation that occurred in the run up to the 2017 French presidential election. We collected a massive Twitter dataset of nearly 17 million posts, posted between 27 April and 7 May 2017 (Election Day). We then set to study the MacronLeaks disinformation campaign: By leveraging a mix of machine learning and cognitive behavioral modeling techniques, we separated humans from bots, and then studied the activities of the two groups independently, as well as their interplay. We provide a characterization of both the bots and the users who engaged with them, and oppose it to those users who didn’t. Prior interests of disinformation adopters pinpoint to the reasons of scarce success of this campaign: the users who engaged with MacronLeaks are mostly foreigners with pre-existing interest in alt-right topics and alternative news media, rather than French users with diverse political views. Concluding, anomalous account usage patterns suggest the possible existence of a black market for reusable political disinformation bots.",
"The massive spread of digital misinformation has been identified as a major threat to democracies. Communication, cognitive, social, and computer scientists are studying the complex causes for the viral diffusion of misinformation, while online platforms are beginning to deploy countermeasures. Little systematic, data-based evidence has been published to guide these efforts. Here we analyze 14 million messages spreading 400 thousand articles on Twitter during ten months in 2016 and 2017. We find evidence that social bots played a disproportionate role in spreading articles from low-credibility sources. Bots amplify such content in the early spreading moments, before an article goes viral. They also target users with many followers through replies and mentions. Humans are vulnerable to this manipulation, resharing content posted by bots. Successful low-credibility sources are heavily supported by social bots. These results suggest that curbing social bots may be an effective strategy for mitigating the spread of online misinformation.",
"Social media have been extensively praised for increasing democratic discussion on social issues related to policy and politics. However, what happens when this powerful communication tools are exploited to manipulate online discussion, to change the public perception of political entities, or even to try affecting the outcome of political elections? In this study we investigated how the presence of social media bots, algorithmically driven entities that on the surface appear as legitimate users, affect political discussion around the 2016 U.S. Presidential election. By leveraging state-of-the-art social bot detection algorithms, we uncovered a large fraction of user population that may not be human, accounting for a significant portion of generated content (about one-fifth of the entire conversation). We inferred political partisanships from hashtag adoption, for both humans and bots, and studied spatio-temporal communication, political support dynamics, and influence mechanisms by discovering the level of network embeddedness of the bots. Our findings suggest that the presence of social media bots can indeed negatively affect democratic political discussion rather than improving it, which in turn can potentially alter public opinion and endanger the integrity of the Presidential election."
]
} |
1902.01883 | 2951640156 | In many finite horizon episodic reinforcement learning (RL) settings, it is desirable to optimize for the undiscounted return - in settings like Atari, for instance, the goal is to collect the most points while staying alive in the long run. Yet, it may be mathematically difficult (or even intractable) to learn with this target. As such, temporal discounting is often applied to optimize over a shorter effective planning horizon. This comes at the risk of potentially biasing the optimization target away from the undiscounted goal. In settings where this bias is unacceptable - where the system must optimize for longer horizons at higher discounts - the target of the value function approximator may increase in variance, leading to difficulties in learning. We present an extension of temporal difference (TD) learning, which we call TD( @math ), that breaks down a value function into a series of components based on the differences between value functions with smaller discount factors. The separation of a longer horizon value function into these components has useful properties in scalability and performance. We discuss these properties and show theoretical and empirical improvements over standard TD learning in certain settings. | Some recent work has investigated how to precisely select the discount factor @cite_20 @cite_25. The authors of @cite_20 suggest a particular scheduling mechanism, an approach that also appears in other work, while the authors of @cite_25 propose a meta-gradient approach which learns the discount factor (and the @math value) over time. All of these methods can be applied to our own, as we do not necessarily prescribe a final overall @math value to be used. | {
"cite_N": [
"@cite_25",
"@cite_20"
],
"mid": [
"2803767077",
"2194966727"
],
"abstract": [
"The goal of reinforcement learning algorithms is to estimate and or optimise the value function. However, unlike supervised learning, no teacher or oracle is available to provide the true value function. Instead, the majority of reinforcement learning algorithms estimate and or optimise a proxy for the value function. This proxy is typically based on a sampled and bootstrapped approximation to the true value function, known as a return. The particular choice of return is one of the chief components determining the nature of the algorithm: the rate at which future rewards are discounted; when and how values should be bootstrapped; or even the nature of the rewards themselves. It is well-known that these decisions are crucial to the overall success of RL algorithms. We discuss a gradient-based meta-learning algorithm that is able to adapt the nature of the return, online, whilst interacting and learning from the environment. When applied to 57 games on the Atari 2600 environment over 200 million frames, our algorithm achieved a new state-of-the-art performance.",
"Using deep neural nets as function approximator for reinforcement learning tasks have recently been shown to be very powerful for solving problems approaching real-world complexity. Using these results as a benchmark, we discuss the role that the discount factor may play in the quality of the learning process of a deep Q-network (DQN). When the discount factor progressively increases up to its final value, we empirically show that it is possible to significantly reduce the number of learning steps. When used in conjunction with a varying learning rate, we empirically show that it outperforms original DQN on several experiments. We relate this phenomenon with the instabilities of neural networks when they are used in an approximate Dynamic Programming setting. We also describe the possibility to fall within a local optimum during the learning process, thus connecting our discussion with the exploration exploitation dilemma."
]
} |
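The TD( @math ) idea sketched in the abstract above decomposes a long-horizon value function into delta components, each learned at its own discount. The following tabular sketch reflects one plausible reading of that decomposition (W[0] is an ordinary TD(0) estimator at the smallest discount, and each later component bootstraps off the shorter-horizon estimate); it is not necessarily the authors' exact formulation, and the learning rate and discount ladder are illustrative.

```python
import numpy as np

def td_delta_update(W, gammas, s, r, s_next, lr=0.1):
    """One tabular TD(Delta)-style update. W is a list of arrays: W[0] estimates
    V_{gamma_0}, and W[z] (z >= 1) estimates the delta V_{gamma_z} - V_{gamma_{z-1}}.
    The full value estimate is V_Z(s) = sum_z W[z][s]."""
    # Base component: plain TD(0) at the shortest horizon gamma_0.
    W[0][s] += lr * (r + gammas[0] * W[0][s_next] - W[0][s])
    for z in range(1, len(gammas)):
        v_prev = sum(W[i][s_next] for i in range(z))   # estimate of V_{gamma_{z-1}}(s')
        target = (gammas[z] - gammas[z - 1]) * v_prev + gammas[z] * W[z][s_next]
        W[z][s] += lr * (target - W[z][s])

# Example: a 5-state ring random walk with reward 1 on every step, so that
# V_gamma(s) = 1 / (1 - gamma) for every state.
rng = np.random.default_rng(0)
gammas = [0.5, 0.9, 0.99]
W = [np.zeros(5) for _ in gammas]
s = 2
for _ in range(10_000):
    s_next = (s + rng.choice([-1, 1])) % 5
    td_delta_update(W, gammas, s, 1.0, s_next)
    s = s_next
print(sum(w[2] for w in W))   # approaches V_{0.99}(2) = 1 / (1 - 0.99) = 100
```

Each component here has a shorter effective horizon than the final discount, which is the scalability property the abstract alludes to.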
1902.01691 | 2909858526 | Performance of clustering algorithms is evaluated with the help of accuracy metrics. There is a great diversity of clustering algorithms, which are key components of many data analysis and exploration systems. However, there exist only a few metrics for the accuracy measurement of overlapping and multi-resolution clustering algorithms on large datasets. In this paper, we first discuss existing metrics, how they satisfy a set of formal constraints, and how they can be applied to specific cases. Then, we propose several optimizations and extensions of these metrics. More specifically, we introduce a new indexing technique to reduce both the runtime and the memory complexity of the Mean F1 score evaluation. Our technique can be applied to large datasets, and it is faster on a single CPU than state-of-the-art implementations running on high-performance servers. In addition, we propose several extensions of the discussed metrics to improve their effectiveness and satisfaction of formal constraints without affecting their efficiency. All the metrics discussed in this paper are implemented in C++ and are available for free as open-source packages that can be used either as stand-alone tools or as part of a benchmarking system to compare various clustering algorithms. | The Omega Index @cite_7 is the first accuracy metric that was proposed for overlapping clustering evaluation. It belongs to the family of pair counting based metrics. It is a fuzzy version of the Adjusted Rand Index (ARI) @cite_28 and is identical to the Fuzzy Rand Index @cite_22. We describe the Omega Index in a later section. Versions of NMI suitable for overlapping clustering evaluation were introduced as Overlapping NMI (ONMI) https://github.com/eXascaleInfolab/OvpNMI @cite_5 and Generalized NMI (GNMI) @cite_14, and belong to the family of information theoretic metrics. The authors of ONMI suggested extending Mutual Information with approximations (introduced in @cite_2) to find the best matches for each cluster of a pair of overlapping clusterings. This approach makes it possible to compare overlapping clusters but, unlike the GNMI we introduce later, it yields values that are incompatible with standard NMI @cite_1 results. The Average F1 score is introduced in @cite_16 @cite_17, and a similar metric, NVD, is introduced in @cite_8. The Average F1 score belongs to the family of cluster matching metrics and is described in a later section. | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_28",
"@cite_1",
"@cite_2",
"@cite_5",
"@cite_16",
"@cite_17"
],
"mid": [
"1663379842",
"22730673",
"2148781711",
"1984563107",
"",
"2120043163",
"2118608338",
"1613448136",
"2139694940",
"2056897951"
],
"abstract": [
"In network science, researchers often use mutual information to understand the difference between network partitions produced by community detection methods. Here we extend the use of mutual information to covers, that is, the cases where a node can belong to more than one module. In our proposed solution, the underlying stochastic process used to compare partitions is extended to deal with covers, and the random variables of the new process are simply fed into the usual definition of mutual information. With partitions, our extended process behaves exactly as the conventional approach for partitions, and thus, the mutual information values obtained are the same. We also describe how to perform sampling and do error estimation for our extended process, as both are necessary steps for a practical application of this measure. The stochastic process that we define here is not only applicable to networks, but can also be used to compare more general set-to-set binary relations.",
"In this paper, we introduce a fuzzy extension of the Rand index, a well-known measure for comparing two clustering structures. In contrast to an existing proposal, which is restricted to the comparison of a fuzzy partition with a non-fuzzy reference partition, our extension is able to compare two proper fuzzy partitions with each other. Elaborating on the formal properties of our fuzzy Rand index, we show that it exhibits desirable metrical properties.",
"Cluster recovery indices are more important than ever, because of the necessity for comparing the large number of clustering procedures available today. Of the cluster recovery indices prominent in contemporary literature, the Hubert and Arabie (1985) adjustment to the Rand index (1971) has been demonstrated to have the most desirable properties (Milligan & Cooper, 1986). However, use of the Hubert and Arabie adjustment to the Rand index is limited to cluster solutions involving non-overlapping, or disjoint, clusters. The present paper introduces a generalization of the Hubert and Arabie adjusted Rand index. This generalization, called the Omega index, can be applied to situations where both, one, or neither of the solutions being compared is non-disjoint. In the special case where both solutions are disjoint, the Omega index is equivalent to the Hubert and Arabie adjusted Rand index.",
"Many networks display community structure which identifies groups of nodes within which connections are denser than between them. Detecting and characterizing such community structure, which is known as community detection, is one of the fundamental issues in the study of network systems. It has received a considerable attention in the last years. Numerous techniques have been developed for both efficient and effective community detection. Among them, the most efficient algorithm is the label propagation algorithm whose computational complexity is O (|E|). Although it is linear in the number of edges, the running time is still too long for very large networks, creating the need for parallel community detection. Also, computing community quality metrics for community structure is computationally expensive both with and without ground truth. However, to date we are not aware of any effort to introduce parallelism for this problem. In this paper, we provide a parallel toolkit to calculate the values of such metrics. We evaluate the parallel algorithms on both distributed memory machine and shared memory machine. The experimental results show that they yield a significant performance gain over sequential execution in terms of total running time, speedup, and efficiency.",
"",
"We compare recent approaches to community structure identification in terms of sensitivity and computational cost. The recently proposed modularity measure is revisited and the performance of the methods as applied to ad hoc networks with known community structure, is compared. We find that the most accurate methods tend to be more computationally expensive, and that both aspects need to be considered when choosing a method for practical purposes. The work is intended as an introduction as well as a proposal for a standard benchmark test of community detection methods.",
"Many networks in nature, society and technology are characterized by a mesoscopic level of organization, with groups of nodes forming tightly connected units, called communities or modules, that are only weakly linked to each other. Uncovering this community structure is one of the most important problems in the field of complex networks. Networks often show a hierarchical organization, with communities embedded within other communities; moreover, nodes can be shared between different communities. Here, we present the first algorithm that finds both overlapping communities and the hierarchical structure. The method is based on the local optimization of a fitness function. Community structure is revealed by peaks in the fitness histogram. The resolution can be tuned by a parameter enabling different hierarchical levels of organization to be investigated. Tests on real and artificial networks give excellent results.",
"Given the increasing popularity of algorithms for overlapping clustering, in particular in social network analysis, quantitative measures are needed to measure the accuracy of a method. Given a set of true clusters, and the set of clusters found by an algorithm, these sets of clusters must be compared to see how similar or different the sets are. A normalized measure is desirable in many contexts, for example assigning a value of 0 where the two sets are totally dissimilar, and 1 where they are identical. A measure based on normalized mutual information, [1], has recently become popular. We demonstrate unintuitive behaviour of this measure, and show how this can be corrected by using a more conventional normalization. We compare the results to that of other measures, such as the Omega index [2].",
"Network communities represent basic structures for understanding the organization of real-world networks. A community (also referred to as a module or a cluster) is typically thought of as a group of nodes with more connections amongst its members than between its members and the remainder of the network. Communities in networks also overlap as nodes belong to multiple clusters at once. Due to the difficulties in evaluating the detected communities and the lack of scalable algorithms, the task of overlapping community detection in large networks largely remains an open problem. In this paper we present BIGCLAM (Cluster Affiliation Model for Big Networks), an overlapping community detection method that scales to large networks of millions of nodes and edges. We build on a novel observation that overlaps between communities are densely connected. This is in sharp contrast with present community detection methods which implicitly assume that overlaps between communities are sparsely connected and thus cannot properly extract overlapping communities in networks. In this paper, we develop a model-based community detection algorithm that can detect densely overlapping, hierarchically nested as well as non-overlapping communities in massive networks. We evaluate our algorithm on 6 large social, collaboration and information networks with ground-truth community information. Experiments show state of the art performance both in terms of the quality of detected communities as well as in speed and scalability of our algorithm.",
"Community detection has arisen as one of the most relevant topics in the field of graph mining, principally for its applications in domains such as social or biological networks analysis. Different community detection algorithms have been proposed during the last decade, approaching the problem from different perspectives. However, existing algorithms are, in general, based on complex and expensive computations, making them unsuitable for large graphs with millions of vertices and edges such as those usually found in the real world. In this paper, we propose a novel disjoint community detection algorithm called Scalable Community Detection (SCD). By combining different strategies, SCD partitions the graph by maximizing the Weighted Community Clustering (WCC), a recently proposed community detection metric based on triangle analysis. Using real graphs with ground truth overlapped communities, we show that SCD outperforms the current state of the art proposals (even those aimed at finding overlapping communities) in terms of quality and performance. SCD provides the speed of the fastest algorithms and the quality in terms of NMI and F1Score of the most accurate state of the art proposals. We show that SCD is able to run up to two orders of magnitude faster than practical existing solutions by exploiting the parallelism of current multi-core processors, enabling us to process graphs of unprecedented size in short execution times."
]
} |
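Of the metrics surveyed in the related work above, the Average (Mean) F1 score is the easiest to state concretely. A naive reference implementation over clusters represented as sets might look like the sketch below; the quadratic best-match scan is exactly the cost that the paper's indexing technique is designed to avoid.

```python
def f1(a, b):
    """F1 overlap between two clusters given as sets of member ids."""
    inter = len(a & b)
    return 2.0 * inter / (len(a) + len(b)) if inter else 0.0

def average_f1(clusters, ground_truth):
    """Average F1: each cluster is matched to its best counterpart,
    averaged over both directions. Naive O(|C1| * |C2|) matching."""
    def one_side(cs1, cs2):
        return sum(max(f1(c, g) for g in cs2) for c in cs1) / len(cs1)
    return 0.5 * (one_side(clusters, ground_truth) + one_side(ground_truth, clusters))

# Overlapping clusterings are handled naturally: node 3 belongs to both
# detected clusters here.
found = [{1, 2, 3}, {3, 4, 5}]
truth = [{1, 2}, {3, 4, 5}]
print(average_f1(found, truth))   # 0.9
```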
1902.01577 | 2914187274 | The ease of use of the Internet has enabled violent extremists such as the Islamic State of Iraq and Syria (ISIS) to easily reach large audiences, build personal relationships and increase recruitment. Social media platforms rely primarily on the reports they receive from their own users to mitigate the problem. Despite the efforts of social media platforms in suspending many accounts, this solution is not guaranteed to be effective, because not all extremists are caught this way, or they can simply return with another account or migrate to other social networks. In this paper, we design an automatic detection scheme that, using as few as three groups of information related to usernames, profiles, and the textual content of users, determines whether or not a given username belongs to an extremist user. We first demonstrate that extremists are inclined to adopt usernames that are similar to the ones that their like-minded peers have adopted in the past. We then propose a detection framework that deploys features which are highly indicative of potential online extremism. Results on a real-world ISIS-related dataset from Twitter demonstrate the effectiveness of the methodology in identifying extremist users. | Beyond these works, the work of @cite_3 takes a different approach to tracking individuals' behavioral indicators of homegrown extremism, using public and law enforcement data. The intuition is to use graph pattern matching to identify suspicious trajectories and potential radicalization over a dynamic heterogeneous graph associated with the fused data from public and law enforcement sources. The authors first develop a query pattern of radicalization and then run several graph pattern matching algorithms to detect and track the ongoing radicalization. They develop the investigative simulation graph pattern matching technique, which extends the existing dual simulation graph pattern matching method to avoid over-matching. This approach provides analysts and law enforcement officials with the ability to find partial/full matches, given a query of radicalization, as well as the pace of the appearance of radicalized extremists. As opposed to the above studies, in this paper, we make the first attempt at determining whether a given Twitter handle belongs to an extremist user or not, using only a little information gathered from the handle, profile, and content. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2553291089"
],
"abstract": [
"This paper outlines our on-going efforts to address the radicalization detection problem, the automated or semi-automated task of dynamically detecting and tracking behavioral changes in individuals who undergo the process of increasingly espousing jihadist beliefs and transition to the use of violent action in support of those beliefs. Leveraging the notion that personal trajectories towards violent radicalization exist, we take a graph pattern matching approach to track individual-level indicators using data fused from available public and government law enforcement databases. We show that our approach provides analysts with the ability to find full or partial matches against a query pattern of radicalization, and a means to quantify the pace of the appearance of the indicators that may help prioritize investigative efforts and resources to prevent planned attacks."
]
} |
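Since the related work above hinges on dual simulation graph pattern matching, a compact sketch of the standard dual-simulation fixpoint may help. This is the base method that the cited "investigative simulation" extends, not the authors' extension itself, and the adjacency-dict graph representation with node labels is an assumption for illustration.

```python
from collections import defaultdict

def dual_simulation(q_adj, q_label, g_adj, g_label):
    """Maximal dual simulation of a query graph in a data graph. Both graphs are
    directed adjacency dicts {node: set(children)} in which every node appears
    as a key (possibly with an empty child set); *_label maps node -> label."""
    def reverse(adj):
        rev = defaultdict(set)
        for u, children in adj.items():
            for v in children:
                rev[v].add(u)
        return rev

    q_rev, g_rev = reverse(q_adj), reverse(g_adj)
    # Start from all label-compatible pairs, then prune to a fixpoint.
    sim = {u: {v for v in g_adj if g_label[v] == q_label[u]} for u in q_adj}
    changed = True
    while changed:
        changed = False
        for u in q_adj:
            for v in list(sim[u]):
                # v must keep a matching data child for every query child of u,
                # and a matching data parent for every query parent of u.
                ok = all(g_adj[v] & sim[c] for c in q_adj[u]) and \
                     all(g_rev[v] & sim[p] for p in q_rev[u])
                if not ok:
                    sim[u].discard(v)
                    changed = True
    return sim if all(sim.values()) else {}
```

Unlike subgraph isomorphism, dual simulation can over-match (it admits matches that do not preserve topology exactly), which is the behavior the investigative simulation extension is reported to restrict.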
1902.01543 | 2909858526 | In recent years, the scale of graph datasets has increased to such a degree that a single machine is not capable of efficiently processing large graphs. Therefore, efficient graph partitioning is necessary for such large graph applications. Traditional graph partitioning generally loads the whole graph into memory before performing partitioning; this is not only time consuming but also creates memory bottlenecks. These issues of memory limitation and enormous time complexity can be resolved using stream-based graph partitioning. A streaming graph partitioning algorithm reads each vertex once and assigns it to a partition accordingly. This is also called a one-pass algorithm. This paper proposes an efficient window-based streaming graph partitioning algorithm called WStream. The WStream algorithm is an edge-cut partitioning algorithm, which distributes vertices among the partitions. Our results suggest that the WStream algorithm is able to partition large graph data efficiently while keeping the load balanced across different partitions and communication to a minimum. Evaluation results with real workloads also prove the effectiveness of our proposed algorithm, as it achieves a significant reduction in load imbalance and edge-cut across datasets of different sizes. | Recently, there has been considerable interest in the design of algorithms and frameworks to handle massive graph data in a streaming manner. Streaming graph data can be partitioned across a cluster of nodes; the graph can be accessed via online or offline processing. Streaming graph partitioning is considerably efficient because the graph loader or partitioner performs the partitioning task while receiving the graph data in a streaming manner. A near-optimal traditional graph partitioning algorithm called METIS was proposed in the early graph-partitioning era @cite_11. METIS is the de facto standard for near-optimal partitioning in distributed graph partitioning. METIS can reduce the communication costs among distributed machines, despite having a lengthy processing time even for small graphs. Consequently, METIS is not suitable for processing medium or large graph datasets @cite_11. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2161455936"
],
"abstract": [
"We consider the problem of partitioning the nodes of a graph with costs on its edges into subsets of given sizes so as to minimize the sum of the costs on all edges cut. This problem arises in several physical situations — for example, in assigning the components of electronic circuits to circuit boards to minimize the number of connections between boards. This paper presents a heuristic method for partitioning arbitrary graphs which is both effective in finding optimal partitions, and fast enough to be practical in solving large problems."
]
} |
1902.01543 | 2909858526 | In recent years, the scale of graph datasets has increased to such a degree that a single machine is not capable of efficiently processing large graphs. Therefore, efficient graph partitioning is necessary for such large graph applications. Traditional graph partitioning generally loads the whole graph into memory before performing partitioning; this is not only time consuming but also creates memory bottlenecks. These issues of memory limitation and enormous time complexity can be resolved using stream-based graph partitioning. A streaming graph partitioning algorithm reads each vertex once and assigns it to a partition accordingly. This is also called a one-pass algorithm. This paper proposes an efficient window-based streaming graph partitioning algorithm called WStream. The WStream algorithm is an edge-cut partitioning algorithm, which distributes vertices among the partitions. Our results suggest that the WStream algorithm is able to partition large graph data efficiently while keeping the load balanced across different partitions and communication to a minimum. Evaluation results with real workloads also prove the effectiveness of our proposed algorithm, as it achieves a significant reduction in load imbalance and edge-cut across datasets of different sizes. | A scalable streaming partitioning approach was proposed by Wang and Chiu @cite_2 with the aim of achieving a low-complexity system. This partitioning technique aims to reduce the number of edges between partitions, and consequently reduces the communication cost of query processing. A streaming vertex-cut partitioning algorithm, High-Degree Replicated First (HDRF), was proposed in @cite_23 to utilise vertex characteristics. The study used a greedy vertex-cut approach, in which high-degree vertices (those with many edges) are replicated first to minimize and avoid unnecessary vertex replication. This algorithm achieved a significant improvement in stream-based partitioning compared to previous algorithms @cite_8. HDRF achieves nearly twice the speedup of traditional greedy placement and is almost three times faster than a constrained solution. The authors of @cite_21 proposed a scalable streaming graph partitioning technique called HoVerCut, which provides horizontal and vertical scalability for the graph partitioning system. HoVerCut uses multi-threading with a windowing technique to share incoming edges between the threads. However, the window that contains the edges does not update over time. This may degrade performance, and it is not suitable for dynamic datasets. | {
"cite_N": [
"@cite_8",
"@cite_21",
"@cite_23",
"@cite_2"
],
"mid": [
"2096544401",
"2527814217",
"2031709923",
"1988365129"
],
"abstract": [
"While high-level data parallel frameworks, like MapReduce, simplify the design and implementation of large-scale data processing systems, they do not naturally or efficiently support many important data mining and machine learning algorithms and can lead to inefficient learning systems. To help fill this critical void, we introduced the GraphLab abstraction which naturally expresses asynchronous, dynamic, graph-parallel computation while ensuring data consistency and achieving a high degree of parallel performance in the shared-memory setting. In this paper, we extend the GraphLab framework to the substantially more challenging distributed setting while preserving strong data consistency guarantees. We develop graph based extensions to pipelined locking and data versioning to reduce network congestion and mitigate the effect of network latency. We also introduce fault tolerance to the GraphLab abstraction using the classic Chandy-Lamport snapshot algorithm and demonstrate how it can be easily implemented by exploiting the GraphLab abstraction itself. Finally, we evaluate our distributed implementation of the GraphLab abstraction on a large Amazon EC2 deployment and show 1-2 orders of magnitude performance gains over Hadoop-based implementations.",
"While the algorithms for streaming graph partitioning are proved promising, they fall short of creating timely partitions when applied on large graphs. For example, it takes 415 seconds for a state-of-the-art partitioner to work on a social network graph with 117 millions edges. We introduce an efficient platform for boosting streaming graph partitioning algorithms. Our solution, called HoVerCut, is Horizontally and Vertically scalable. That is, it can run as a multi-threaded process on a single machine, or as a distributed partitioner across multiple machines. Our evaluations, on both real-world and synthetic graphs, show that HoVerCut speeds up the process significantly without degrading the quality of partitioning. For example, HoVerCut partitions the aforementioned social network graph with 117 millions edges in 11 seconds that is about 37 times faster.",
"Balanced graph partitioning is a fundamental problem that is receiving growing attention with the emergence of distributed graph-computing (DGC) frameworks. In these frameworks, the partitioning strategy plays an important role since it drives the communication cost and the workload balance among computing nodes, thereby affecting system performance. However, existing solutions only partially exploit a key characteristic of natural graphs commonly found in the real-world: their highly skewed power-law degree distributions. In this paper, we propose High-Degree (are) Replicated First (HDRF), a novel streaming vertex-cut graph partitioning algorithm that effectively exploits skewed degree distributions by explicitly taking into account vertex degree in the placement decision. We analytically and experimentally evaluate HDRF on both synthetic and real-world graphs and show that it outperforms all existing algorithms in partitioning quality.",
"RDF datasets are an important source of big data. Many of them, however, are too large to fit on a single machine. One approach to address this is to partition the RDF graph across multiple machines, with each component residing on a single machine. A poor partition can incur significant communication costs, however, if as a result many queries involve multiple machines. A number of existing partitioning schemes seek to reduce these costs by finding partitions that avoid cutting edges in the RDF graph. While these can successfully find good partitions the partitioning process itself is often not very scalable, and not capable of handling incrementally-generated RDF data. In this paper, we develop a more scalable, effective and low complexity approach, online graph dataset partitioning, to produce high quality dataset partitions with fewer links between partitions. We show experimentally that it works well in reducing the communication cost of query processing, while at the same time improving scalability of the partitioning itself."
]
} |
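The HDRF placement rule described above can be made concrete with a short streaming loop. The sketch below follows the published scoring idea (reward partitions already holding a replica of an endpoint, bias replication toward the higher-degree endpoint, and add a balance term weighted by `lam`), but the constants and tie-breaking are simplified relative to the reference implementation.

```python
from collections import defaultdict

def hdrf_partition(edges, k, lam=1.0, eps=1.0):
    """Streaming vertex-cut partitioning in the spirit of HDRF: place each
    edge greedily, replicating high-degree vertices first so that low-degree
    vertices are more likely to stay intact on one partition."""
    degree = defaultdict(int)       # partial degrees observed so far
    replicas = defaultdict(set)     # vertex -> partitions holding a replica
    load = [0] * k                  # number of edges per partition
    assignment = []

    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
        theta_u = degree[u] / (degree[u] + degree[v])
        theta_v = 1.0 - theta_u

        def g(x, theta, p):
            # Reward partitions that avoid creating a new replica; the
            # (1 - theta) term favours the lower-degree endpoint.
            return (1.0 + (1.0 - theta)) if p in replicas[x] else 0.0

        maxload, minload = max(load), min(load)
        best = max(
            range(k),
            key=lambda p: g(u, theta_u, p) + g(v, theta_v, p)
                          + lam * (maxload - load[p]) / (eps + maxload - minload),
        )
        replicas[u].add(best)
        replicas[v].add(best)
        load[best] += 1
        assignment.append(best)
    return assignment
```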
1902.01543 | 2909858526 | In recent years, the scale of graph datasets has increased to such a degree that a single machine is not capable of efficiently processing large graphs. Therefore, efficient graph partitioning is necessary for such large graph applications. Traditional graph partitioning generally loads the whole graph into memory before performing partitioning; this is not only time consuming but also creates memory bottlenecks. These issues of memory limitation and enormous time complexity can be resolved using stream-based graph partitioning. A streaming graph partitioning algorithm reads each vertex once and assigns it to a partition accordingly. This is also called a one-pass algorithm. This paper proposes an efficient window-based streaming graph partitioning algorithm called WStream. The WStream algorithm is an edge-cut partitioning algorithm, which distributes vertices among the partitions. Our results suggest that the WStream algorithm is able to partition large graph data efficiently while keeping the load balanced across different partitions and communication to a minimum. Evaluation results with real workloads also prove the effectiveness of our proposed algorithm, as it achieves a significant reduction in load imbalance and edge-cut across datasets of different sizes. | Real-world graphs, for example social networks, typically follow a power-law degree distribution. Partitioning power-law graphs is very difficult. PowerGraph @cite_4 aims to reduce inter-partition communication by distributing the computation of edges over the vertices of power-law graphs. It follows the GAS (Gather, Apply, and Scatter) model and uses a vertex-cut partitioning technique. It distributes replicas of vertices across multiple machines to parallelize the computation. | {
"cite_N": [
"@cite_4"
],
"mid": [
"78077100"
],
"abstract": [
"Large-scale graph-structured computation is central to tasks ranging from targeted advertising to natural language processing and has led to the development of several graph-parallel abstractions including Pregel and GraphLab. However, the natural graphs commonly found in the real-world have highly skewed power-law degree distributions, which challenge the assumptions made by these abstractions, limiting performance and scalability. In this paper, we characterize the challenges of computation on natural graphs in the context of existing graph-parallel abstractions. We then introduce the PowerGraph abstraction which exploits the internal structure of graph programs to address these challenges. Leveraging the PowerGraph abstraction we introduce a new approach to distributed graph placement and representation that exploits the structure of power-law graphs. We provide a detailed analysis and experimental evaluation comparing PowerGraph to two popular graph-parallel systems. Finally, we describe three different implementation strategies for PowerGraph and discuss their relative merits with empirical evaluations on large-scale real-world problems demonstrating order of magnitude gains."
]
} |
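To make the GAS model mentioned above concrete, here is a toy single-machine PageRank written in gather/apply/scatter phases. In PowerGraph proper, the gather is a commutative, associative sum that vertex replicas on different machines compute partially and then combine; this sketch only illustrates the per-vertex structure and assumes every vertex has at least one out-edge.

```python
def pagerank_gas(vertices, in_nbrs, out_degree, d=0.85, iters=20):
    """Toy illustration of the Gather-Apply-Scatter (GAS) pattern.
    in_nbrs[v] lists v's in-neighbors; out_degree[v] > 0 for all v."""
    rank = {v: 1.0 / len(vertices) for v in vertices}
    for _ in range(iters):
        new_rank = {}
        for v in vertices:
            # Gather: sum contributions over in-edges (commutative and
            # associative, so replicas could compute partial sums).
            acc = sum(rank[u] / out_degree[u] for u in in_nbrs[v])
            # Apply: update the vertex value from the gathered accumulator.
            new_rank[v] = (1.0 - d) / len(vertices) + d * acc
            # Scatter: omitted here; it would activate out-neighbors whose
            # inputs changed, enabling asynchronous execution.
        rank = new_rank
    return rank

graph = {1: [2, 3], 2: [3], 3: [1]}              # out-adjacency
in_nbrs = {1: [3], 2: [1], 3: [1, 2]}
out_degree = {v: len(adj) for v, adj in graph.items()}
print(pagerank_gas(list(graph), in_nbrs, out_degree))
```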
1902.01543 | 2909858526 | In recent years, the scale of graph datasets has increased to such a degree that a single machine is not capable of efficiently processing large graphs. Therefore, efficient graph partitioning is necessary for such large graph applications. Traditional graph partitioning generally loads the whole graph into memory before performing partitioning; this is not only time consuming but also creates memory bottlenecks. These issues of memory limitation and enormous time complexity can be resolved using stream-based graph partitioning. A streaming graph partitioning algorithm reads each vertex once and assigns it to a partition accordingly. This is also called a one-pass algorithm. This paper proposes an efficient window-based streaming graph partitioning algorithm called WStream. The WStream algorithm is an edge-cut partitioning algorithm, which distributes vertices among the partitions. Our results suggest that the WStream algorithm is able to partition large graph data efficiently while keeping the load balanced across different partitions and communication to a minimum. Evaluation results with real workloads also prove the effectiveness of our proposed algorithm, as it achieves a significant reduction in load imbalance and edge-cut across datasets of different sizes. | Another variant of PowerGraph streaming partitioning, called S-PowerGraph, was proposed in @cite_0. S-PowerGraph also uses vertex-cut partitioning. This method is suitable for partitioning skewed natural graphs and was found to outperform the algorithms of previous studies while maintaining an acceptable imbalance factor. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2270258565"
],
"abstract": [
"One standard solution for analyzing large natural graphs is to adopt distributed computation on clusters. In distributed computation, graph partitioning (GP) methods assign the vertices or edges of a graph to different machines in a balanced way so that some distributed algorithms can be adapted for. Most of traditional GP methods are offline, which means that the whole graph has been observed before partitioning. However, the offline methods often incur high computation cost. Hence, streaming graph partitioning (SGP) methods, which can partition graphs in an online way, have recently attracted great attention in distributed computation. There exist two typical GP strategies: edge-cut and vertex-cut. Most SGP methods adopt edge-cut, but few vertex-cut methods have been proposed for SGP. However, the vertex-cut strategy would be a better choice than the edge-cut strategy because the degree of a natural graph in general follows a highly skewed power-law distribution. Thus, we propose a novel method, called S-PowerGraph, for SGP of natural graphs by vertex-cut. Our S-PowerGraph method is simple but effective. Experiments on several large natural graphs and synthetic graphs show that our S-PowerGraph can outperform the state-of-the-art baselines."
]
} |
1902.01543 | 2909858526 | In recent years, the scale of graph datasets has increased to such a degree that a single machine is not capable of efficiently processing large graphs. Therefore, efficient graph partitioning is necessary for such large graph applications. Traditional graph partitioning generally loads the whole graph into memory before performing partitioning; this is not only time consuming but also creates memory bottlenecks. These issues of memory limitation and enormous time complexity can be resolved using stream-based graph partitioning. A streaming graph partitioning algorithm reads each vertex once and assigns it to a partition accordingly. This is also called a one-pass algorithm. This paper proposes an efficient window-based streaming graph partitioning algorithm called WStream. The WStream algorithm is an edge-cut partitioning algorithm, which distributes vertices among the partitions. Our results suggest that the WStream algorithm is able to partition large graph data efficiently while keeping the load balanced across different partitions and communication to a minimum. Evaluation results with real workloads also prove the effectiveness of our proposed algorithm, as it achieves a significant reduction in load imbalance and edge-cut across datasets of different sizes. | A re-streaming algorithm @cite_20 considers the scenario where the same datasets are routinely streamed. This re-streaming technique performs well when the same dataset comes over repeatedly in an application. However, a drawback of this approach is that it is not suitable for graphs whose structure changes very frequently. In such cases, the data do not arrive in a routine manner or repeat their stream; consequently, the re-streaming technique has less impact in this scenario. | {
"cite_N": [
"@cite_20"
],
"mid": [
"2111925081"
],
"abstract": [
"Partitioning large graphs is difficult, especially when performed in the limited models of computation afforded to modern large scale computing systems. In this work we introduce restreaming graph partitioning and develop algorithms that scale similarly to streaming partitioning algorithms yet empirically perform as well as fully offline algorithms. In streaming partitioning, graphs are partitioned serially in a single pass. Restreaming partitioning is motivated by scenarios where approximately the same dataset is routinely streamed, making it possible to transform streaming partitioning algorithms into an iterative procedure. This combination of simplicity and powerful performance allows restreaming algorithms to be easily adapted to efficiently tackle more challenging partitioning objectives. In particular, we consider the problem of stratified graph partitioning, where each of many node attribute strata are balanced simultaneously. As such, stratified partitioning is well suited for the study of network effects on social networks, where it is desirable to isolate disjoint dense subgraphs with representative user demographics. To demonstrate, we partition a large social network such that each partition exhibits the same degree distribution in the original graph --- a novel achievement for non-regular graphs. As part of our results, we also observe a fundamental difference in the ease with which social graphs are partitioned when compared to web graphs. Namely, the modular structure of web graphs appears to motivate full offline optimization, whereas the locally dense structure of social graphs precludes significant gains from global manipulations."
]
} |
1902.01543 | 2909858526 | In recent years, the scale of graph datasets has increased to such a degree that a single machine is not capable of efficiently processing large graphs. Therefore, efficient graph partitioning is necessary for such large graph applications. Traditional graph partitioning generally loads the whole graph into memory before performing partitioning; this is not only time consuming but also creates memory bottlenecks. These issues of memory limitation and enormous time complexity can be resolved using stream-based graph partitioning. A streaming graph partitioning algorithm reads each vertex once and assigns it to a partition accordingly. This is also called a one-pass algorithm. This paper proposes an efficient window-based streaming graph partitioning algorithm called WStream. The WStream algorithm is an edge-cut partitioning algorithm, which distributes vertices among the partitions. Our results suggest that the WStream algorithm is able to partition large graph data efficiently while keeping the load balanced across different partitions and communication to a minimum. Evaluation results with real workloads also prove the effectiveness of our proposed algorithm, as it achieves a significant reduction in load imbalance and edge-cut across datasets of different sizes. | A distributed vertex swapping technique called Ja-be-ja @cite_24 uses vertex swapping to reduce communication. Ja-be-ja is built on local search and the Simulated Annealing (SA) method. The SA method uses a statistical mechanism that is not suitable for sparse networks @cite_1. | {
"cite_N": [
"@cite_24",
"@cite_1"
],
"mid": [
"2128479026",
"2096159022"
],
"abstract": [
"Balanced graph partitioning is a well known NP-complete problem with a wide range of applications. These applications include many large-scale distributed problems including the optimal storage of large sets of graph-structured data over several hosts-a key problem in today's Cloud infrastructure. However, in very large-scale distributed scenarios, state-of-the-art algorithms are not directly applicable, because they typically involve frequent global operations over the entire graph. In this paper, we propose a fully distributed algorithm, called JA-BE-JA, that uses local search and simulated annealing techniques for graph partitioning. The algorithm is massively parallel: there is no central coordination, each node is processed independently, and only the direct neighbors of the node, and a small subset of random nodes in the graph need to be known locally. Strict synchronization is not required. These features allow JA-BE-JA to be easily adapted to any distributed graph-processing system from data centers to fully distributed networks. We perform a thorough experimental analysis, which shows that the minimal edge-cut value achieved by JA-BE-JA is comparable to state-of-the-art centralized algorithms such as METIS. In particular, on large social networks JA-BEJA outperforms METIS, which makes JA-BE-JA-a bottom-up, self-organizing algorithm-a highly competitive practical solution for graph partitioning.",
"If a finite element mesh has a sufficiently regular structure, it is easy to decide in advance how to distribute the mesh among the processors of a distributed-memory parallel processor, but if the mesh is unstructured the problem becomes much more difficult. The distribution should be made so that each processor has approximately equal work to do, and such that communication overhead is minimized. If the mesh is solution-adaptive, i.e. the mesh and hence the load-balancing problem change discretely during execution of the code, then it is most efficient to decide the optimal mesh distribution in parallel. In this paper three parallel algorithms, orthogonal recursive bisection (ORB), eigenvector recursive bisection (ERB) and a simple parallelization of simulated annealing (SA) have been implemented for load balancing a dynamic unstructured triangular mesh on 16 processors of an NCUBE machine. The test problem is a solution-adaptive Laplace solver, with an initial mesh of 280 elements, refined in seven stages to 5772 elements. We present execution times for the solver resulting from the mesh distributions using the three algorithms, as well as results on imbalance, communication traffic and element migration. The load-balancing itself is fastest with ORB, but a very long run of SA produces a saving of 21 in the execution time of the Laplace solver. ERB is only a little slower than ORB, and yet produces a mesh distribution whose execution time is 15 faster than ORB."
]
} |
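A rough sketch of a Ja-be-ja-style swap step may clarify the local search described above. The acceptance rule here is a simplification of the paper's temperature-scaled comparison (swap if the temperature-inflated post-swap neighbor agreement beats the current one), and picking a uniformly random peer stands in for Ja-be-ja's mix of direct neighbors and random samples.

```python
import random

def jabeja_step(colors, adj, temp, rng=None):
    """One round of simplified Ja-be-ja color swapping. colors maps vertex ->
    partition id, adj maps vertex -> neighbors. Swapping colors between two
    vertices preserves partition sizes, so balance is maintained by design."""
    rng = rng or random.Random(0)

    def agreement(v, color):
        return sum(1 for u in adj[v] if colors[u] == color)

    nodes = list(colors)
    for v in nodes:
        w = rng.choice(nodes)
        if w == v or colors[v] == colors[w]:
            continue
        before = agreement(v, colors[v]) + agreement(w, colors[w])
        after = agreement(v, colors[w]) + agreement(w, colors[v])
        # temp starts above 1 and decays toward 1, occasionally admitting
        # non-improving swaps to escape local optima.
        if after * temp > before:
            colors[v], colors[w] = colors[w], colors[v]
    return colors
```

Repeatedly calling this step while decaying `temp` toward 1 yields the annealing schedule; the gossip-style, coordination-free execution is what makes the full algorithm distributed.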
1902.01543 | 2909858526 | In recent years, the scale of graph datasets has increased to such a degree that a single machine is not capable of efficiently processing large graphs. Therefore, efficient graph partitioning is necessary for such large graph applications. Traditional graph partitioning generally loads the whole graph into memory before performing partitioning; this is not only time consuming but also creates memory bottlenecks. These issues of memory limitation and enormous time complexity can be resolved using stream-based graph partitioning. A streaming graph partitioning algorithm reads each vertex once and assigns it to a partition accordingly. This is also called a one-pass algorithm. This paper proposes an efficient window-based streaming graph partitioning algorithm called WStream. The WStream algorithm is an edge-cut partitioning algorithm, which distributes vertices among the partitions. Our results suggest that the WStream algorithm is able to partition large graph data efficiently while keeping the load balanced across different partitions and communication to a minimum. Evaluation results with real workloads also prove the effectiveness of our proposed algorithm, as it achieves a significant reduction in load imbalance and edge-cut across datasets of different sizes. | Stanton @cite_3 proposed several heuristics for partitioning a large-scale graph in a streaming manner. Linear Deterministic Greedy (LDG) was the best performing of these heuristics. This algorithm is a greedy heuristic, which is linear. It has a central graph loader, which loads and distributes data among the available workers. The heuristic assigns a vertex to the partition with which it shares the most edges. The algorithm was evaluated using 21 different static datasets and up to 16 partitions. The heuristics scale with the size of the graph and the number of partitions. Based on PageRank computations, the method yielded a significant speedup of 18% to 39% for large social networks. The LDG algorithm is a well-established streaming graph partitioning algorithm and a state-of-the-art one-pass edge-cut partitioning algorithm. Therefore, in this study, we compare our one-pass edge-cut partitioning algorithm with the LDG algorithm @cite_3. | {
"cite_N": [
"@cite_3"
],
"mid": [
"1971630691"
],
"abstract": [
"Extracting knowledge by performing computations on graphs is becoming increasingly challenging as graphs grow in size. A standard approach distributes the graph over a cluster of nodes, but performing computations on a distributed graph is expensive if large amount of data have to be moved. Without partitioning the graph, communication quickly becomes a limiting factor in scaling the system up. Existing graph partitioning heuristics incur high computation and communication cost on large graphs, sometimes as high as the future computation itself. Observing that the graph has to be loaded into the cluster, we ask if the partitioning can be done at the same time with a lightweight streaming algorithm. We propose natural, simple heuristics and compare their performance to hashing and METIS, a fast, offline heuristic. We show on a large collection of graph datasets that our heuristics are a significant improvement, with the best obtaining an average gain of 76 . The heuristics are scalable in the size of the graphs and the number of partitions. Using our streaming partitioning methods, we are able to speed up PageRank computations on Spark, a distributed computation system, by 18 to 39 for large social networks."
]
} |
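The LDG heuristic summarized above admits a very small one-pass implementation: score each partition by how many of the vertex's neighbors it already holds, damped linearly by the partition's fullness. The `capacity` slack below is an illustrative choice; the original formulation uses roughly n/k with some headroom, and ties are broken toward the least loaded partition.

```python
def ldg_partition(vertex_stream, k, capacity):
    """One-pass Linear Deterministic Greedy (LDG) streaming partitioning.
    vertex_stream yields (vertex, adjacency list) pairs in arrival order."""
    parts = [set() for _ in range(k)]
    assignment = {}
    for v, neighbors in vertex_stream:
        nbrs = set(neighbors)
        best = max(
            range(k),
            key=lambda i: (len(parts[i] & nbrs) * (1.0 - len(parts[i]) / capacity),
                           -len(parts[i])),   # tie-break: least loaded partition
        )
        parts[best].add(v)
        assignment[v] = best
    return assignment

edges = {1: [2, 3], 2: [1, 3], 3: [1, 2], 4: [5], 5: [4]}
print(ldg_partition(edges.items(), k=2, capacity=len(edges) / 2 + 1))
# The triangle {1, 2, 3} lands in one partition and {4, 5} in the other.
```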
1902.01560 | 2912698532 | We introduce the problem of Dynamic Real-time Multimodal Routing (DREAMR), which requires planning and executing routes under uncertainty for an autonomous agent. The agent has access to a time-varying transit vehicle network in which it can use multiple modes of transportation. For instance, a drone can either fly or ride on terrain vehicles for segments of its route. DREAMR is a difficult problem of sequential decision making under uncertainty with both discrete and continuous variables. We design a novel hierarchical hybrid planning framework to solve the DREAMR problem that exploits its structural decomposability. Our framework consists of a global open-loop planning layer that invokes and monitors a local closed-loop execution layer. Additional abstractions allow efficient and seamless interleaving of planning and execution. We create a large-scale simulation for DREAMR problems, with each scenario having hundreds of transportation routes and thousands of connection points. Our algorithmic framework significantly outperforms a receding horizon control baseline, in terms of elapsed time to reach the destination and energy expended by the agent. | A Markov Decision Process or MDP @cite_33 is defined by @math, where @math and @math are the system's state and action spaces, @math is the transition function, where @math and @math, @math is the reward function, and @math the discount factor. Solving an MDP yields a policy @math which maximizes the value, or the expected reward-to-go, from each state. An MDP can be solved by value iteration, a dynamic programming (DP) method that computes the optimal value function @math. We obtain a single @math for infinite horizon problems and @math ( @math is the maximum number of timesteps) for finite horizon or episodic problems. For large or continuous spaces, we can approximate the value function locally with multilinear interpolation @cite_29, or globally with basis functions @cite_10. Our framework uses approximate DP extensively @cite_6. | {
"cite_N": [
"@cite_29",
"@cite_10",
"@cite_33",
"@cite_6"
],
"mid": [
"2145756561",
"1626155273",
"2119567691",
""
],
"abstract": [
"Dynamic Programming, Q-learning and other discrete Markov Decision Process solvers can be applied to continuous d-dimensional state-spaces by quantizing the state space into an array of boxes. This is often problematic above two dimensions: a coarse quantization can lead to poor policies, and fine quantization is too expensive. Possible solutions are variable-resolution discretization, or function approximation by neural nets. A third option, which has been little studied in the reinforcement learning literature, is interpolation on a coarse grid. In this paper we study interpolation techniques that can result in vast improvements in the online behavior of the resulting control systems: multilinear interpolation, and an interpolation algorithm based on an interesting regular triangulation of d-dimensional space. We adapt these interpolators under three reinforcement learning paradigms: (i) offline value iteration with a known model, (ii) Q-learning, and (iii) online value iteration with a previously unknown model learned from data. We describe empirical results, and the resulting implications for practical learning of continuous non-linear dynamic control.",
"From household appliances to applications in robotics, engineered systems involving complex dynamics can only be as effective as the algorithms that control them. While Dynamic Programming (DP) has provided researchers with a way to optimally solve decision and control problems involving complex dynamic systems, its practical value was limited by algorithms that lacked the capacity to scale up to realistic problems. However, in recent years, dramatic developments in Reinforcement Learning (RL), the model-free counterpart of DP, changed our understanding of what is possible. Those developments led to the creation of reliable methods that can be applied even when a mathematical model of the system is unavailable, allowing researchers to solve challenging control problems in engineering, as well as in a variety of other disciplines, including economics, medicine, and artificial intelligence. Reinforcement Learning and Dynamic Programming Using Function Approximators provides a comprehensive and unparalleled exploration of the field of RL and DP. With a focus on continuous-variable problems, this seminal text details essential developments that have substantially altered the field over the past decade. In its pages, pioneering experts provide a concise introduction to classical RL and DP, followed by an extensive presentation of the state-of-the-art and novel methods in RL and DP with approximation. Combining algorithm development with theoretical guarantees, they elaborate on their work with illustrative examples and insightful comparisons. Three individual chapters are dedicated to representative algorithms from each of the major classes of techniques: value iteration, policy iteration, and policy search. The features and performance of these algorithms are highlighted in extensive experimental studies on a range of control applications. The recent development of applications involving complex systems has led to a surge of interest in RL and DP methods and the subsequent need for a quality resource on the subject. For graduate students and others new to the field, this book offers a thorough introduction to both the basics and emerging methods. And for those researchers and practitioners working in the fields of optimal and adaptive control, machine learning, artificial intelligence, and operations research, this resource offers a combination of practical algorithms, theoretical analysis, and comprehensive examples that they will be able to adapt and apply to their own work. Access the authors' website at www.dcsc.tudelft.nl rlbook for additional material, including computer code used in the studies and information concerning new developments.",
"From the Publisher: The past decade has seen considerable theoretical and applied research on Markov decision processes, as well as the growing use of these models in ecology, economics, communications engineering, and other fields where outcomes are uncertain and sequential decision-making processes are needed. A timely response to this increased activity, Martin L. Puterman's new work provides a uniquely up-to-date, unified, and rigorous treatment of the theoretical, computational, and applied research on Markov decision process models. It discusses all major research directions in the field, highlights many significant applications of Markov decision processes models, and explores numerous important topics that have previously been neglected or given cursory coverage in the literature. Markov Decision Processes focuses primarily on infinite horizon discrete time models and models with discrete time spaces while also examining models with arbitrary state spaces, finite horizon models, and continuous-time discrete state models. The book is organized around optimality criteria, using a common framework centered on the optimality (Bellman) equation for presenting results. The results are presented in a \"theorem-proof\" format and elaborated on through both discussion and examples, including results that are not available in any other book. A two-state Markov decision process model, presented in Chapter 3, is analyzed repeatedly throughout the book and demonstrates many results and algorithms. Markov Decision Processes covers recent research advances in such areas as countable state space models with average reward criterion, constrained models, and models with risk sensitive optimality criteria. It also explores several topics that have received little or no attention in other books, including modified policy iteration, multichain models with average reward criterion, and sensitive optimality. In addition, a Bibliographic Remarks section in each chapter comments on relevant historic",
""
]
} |
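The MDP recap in the row above can be illustrated by a minimal value iteration sketch. Everything below is a toy under our own assumptions: the two-state MDP, the tolerance, and the representation of the transition function as nested (probability, next-state) lists are illustrative choices, not the paper's framework.

```python
# A minimal sketch of value iteration for a finite MDP (S, A, T, R, gamma).
# T[s][a] is a list of (prob, next_state) pairs, R[s][a] a scalar reward.
import numpy as np

def value_iteration(T, R, gamma=0.95, tol=1e-8):
    n_states, n_actions = len(T), len(T[0])
    V = np.zeros(n_states)
    while True:
        Q = np.array([[R[s][a] + gamma * sum(p * V[s2] for p, s2 in T[s][a])
                       for a in range(n_actions)] for s in range(n_states)])
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:      # Bellman residual small enough
            return V_new, Q.argmax(axis=1)       # value function and greedy policy
        V = V_new

# Two-state example: action 1 in state 0 risks staying put, but pays off.
T = [[[(1.0, 0)], [(0.8, 1), (0.2, 0)]],   # state 0: stay / try to advance
     [[(1.0, 1)], [(1.0, 1)]]]             # state 1: absorbing
R = [[0.0, -0.1], [1.0, 1.0]]
V, pi = value_iteration(T, R)
print(V, pi)
```

For infinite-horizon discounted problems like this one, the loop converges to a single value function, matching the distinction drawn in the row above between infinite-horizon and episodic settings.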
1902.01575 | 2917261185 | We propose a new algorithm for the piece-wise i.i.d. non-stationary bandit problem with bounded rewards. Our proposal, GLR-klUCB, combines an efficient bandit algorithm, klUCB, with an efficient, parameter-free, change-point detector, the Bernoulli Generalized Likelihood Ratio Test, for which we provide new theoretical guarantees of independent interest. We analyze two variants of our strategy, based on local restarts and global restarts, and show that their regret is upper-bounded by @math if the number of change-points @math is unknown, and by @math if @math is known. This improves the current state-of-the-art bounds, as our algorithm needs no tuning based on knowledge of the problem complexity other than @math . We present numerical experiments showing that GLR-klUCB outperforms passively and actively adaptive algorithms from the literature, and highlight the benefit of using local restarts. | The piece-wise stationary bandit model was first studied by @cite_1 @cite_4 @cite_17 . It is also known as a switching or abruptly changing environment. To our knowledge, all the previous approaches combine a standard bandit algorithm, like UCB, Thompson Sampling or EXP3, with a strategy to account for changes in the arms distributions. This strategy often consists in forgetting old rewards, to efficiently focus on the most recent ones, more likely to be similar to future rewards. We make the distinction between passively and actively adaptive strategies. | {
"cite_N": [
"@cite_1",
"@cite_4",
"@cite_17"
],
"mid": [
"",
"2161571887",
"157259654"
],
"abstract": [
"",
"We consider a sequential decision problem where the rewards are generated by a piecewise-stationary distribution. However, the different reward distributions are unknown and may change at unknown instants. Our approach uses a limited number of side observations on past rewards, but does not require prior knowledge of the frequency of changes. In spite of the adversarial nature of the reward process, we provide an algorithm whose regret, with respect to the baseline with perfect knowledge of the distributions and the changes, is O(k log(T)), where k is the number of changes up to time T. This is in contrast to the case where side observations are not available, and where the regret is at least Ω(√T).",
"Many problems, such as cognitive radio, parameter control of a scanning tunnelling microscope or internet advertisement, can be modelled as non-stationary bandit problems where the distributions of rewards changes abruptly at unknown time instants. In this paper, we analyze two algorithms designed for solving this issue: discounted UCB (D-UCB) and sliding-window UCB (SW-UCB). We establish an upperbound for the expected regret by upper-bounding the expectation of the number of times suboptimal arms are played. The proof relies on an interesting Hoeffding type inequality for self normalized deviations with a random number of summands. We establish a lower-bound for the regret in presence of abrupt changes in the arms reward distributions. We show that the discounted UCB and the sliding-window UCB both match the lower-bound up to a logarithmic factor. Numerical simulations show that D-UCB and SW-UCB perform significantly better than existing soft-max methods like EXP3.S."
]
} |
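The "forgetting" strategy described in the related work above is easy to sketch. Below is a toy Sliding-Window UCB in the spirit of the SW-UCB reference: only the last tau plays enter the empirical means and exploration bonuses. The window size, exploration constant, and Bernoulli test problem are illustrative values of ours, not tuned constants from the paper.

```python
# A toy sliding-window UCB: statistics are recomputed from a bounded history,
# so observations older than tau steps are forgotten.
import math
import random
from collections import deque

def sw_ucb(means, horizon, tau=200, xi=0.6):
    history = deque()                        # last tau (arm, reward) pairs
    total = 0.0
    for t in range(1, horizon + 1):
        counts = [0] * len(means)
        sums = [0.0] * len(means)
        for a, r in history:
            counts[a] += 1
            sums[a] += r
        def index(a):
            if counts[a] == 0:
                return float("inf")          # force initial exploration
            return sums[a] / counts[a] + math.sqrt(xi * math.log(min(t, tau)) / counts[a])
        a = max(range(len(means)), key=index)
        r = 1.0 if random.random() < means[a] else 0.0   # Bernoulli reward
        total += r
        history.append((a, r))
        if len(history) > tau:
            history.popleft()                # forget the oldest observation
    return total

random.seed(0)
print(sw_ucb([0.2, 0.5, 0.8], horizon=2000))
```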
1902.01575 | 2917261185 | We propose a new algorithm for the piece-wise i.i.d. non-stationary bandit problem with bounded rewards. Our proposal, GLR-klUCB, combines an efficient bandit algorithm, klUCB, with an efficient, parameter-free, change-point detector, the Bernoulli Generalized Likelihood Ratio Test, for which we provide new theoretical guarantees of independent interest. We analyze two variants of our strategy, based on local restarts and global restarts, and show that their regret is upper-bounded by @math if the number of change-points @math is unknown, and by @math if @math is known. This improves the current state-of-the-art bounds, as our algorithm needs no tuning based on knowledge of the problem complexity other than @math . We present numerical experiments showing that GLR-klUCB outperforms passively and actively adaptive algorithms from the literature, and highlight the benefit of using local restarts. | More recently, @cite_0 proposed the Discounted Thompson Sampling (DTS) algorithm, which performs well in practice with @math . However, no theoretical guarantees are given for this strategy, and our experiments did not really confirm the robustness to @math . The RExp3 algorithm can also be qualified as passively adaptive: it is based on (non-adaptive) restarts of the EXP3 algorithm. Note that this algorithm is introduced for a different setting, where the quantity of interest is not @math but a quantity @math called the total variational budget (satisfying @math with @math the minimum magnitude of a change-point). A @math regret bound is proved, which is weaker than existing results in our setting. Hence we do not include this algorithm in our experiments. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2742123006"
],
"abstract": [
"We consider the multi armed bandit problem in non-stationary environments. Based on the Bayesian method, we propose a variant of Thompson Sampling which can be used in both rested and restless bandit scenarios. Applying discounting to the parameters of prior distribution, we describe a way to systematically reduce the effect of past observations. Further, we derive the exact expression for the probability of picking sub-optimal arms. By increasing the exploitative value of Bayes' samples, we also provide an optimistic version of the algorithm. Extensive empirical analysis is conducted under various scenarios to validate the utility of proposed algorithms. A comparison study with various state-of-the-arm algorithms is also included."
]
} |
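A hedged sketch of the Discounted Thompson Sampling idea summarized above: the Beta posterior parameters of every arm are multiplied by a discount at each step, so old observations fade away. The discount value, the uniform Beta(1, 1) prior, and the toy piece-wise stationary problem are illustrative choices; the paper's exact update rule may differ in details.

```python
# A toy Discounted Thompson Sampling loop for Bernoulli arms whose means
# may change over time; discounting the Beta parameters forgets the past.
import random

def discounted_thompson_sampling(means_over_time, gamma=0.95):
    k = len(means_over_time[0])
    a = [1.0] * k                     # Beta "success" parameters
    b = [1.0] * k                     # Beta "failure" parameters
    total = 0.0
    for means in means_over_time:     # arm means may change between steps
        arm = max(range(k), key=lambda i: random.betavariate(a[i], b[i]))
        r = 1.0 if random.random() < means[arm] else 0.0
        total += r
        for i in range(k):            # discount everyone, then add the new sample
            a[i] = gamma * a[i] + (r if i == arm else 0.0)
            b[i] = gamma * b[i] + ((1.0 - r) if i == arm else 0.0)
    return total

random.seed(1)
# Piece-wise stationary problem: the best arm switches halfway through.
problem = [[0.8, 0.2]] * 1000 + [[0.2, 0.8]] * 1000
print(discounted_thompson_sampling(problem))
```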
1902.01575 | 2917261185 | We propose a new algorithm for the piece-wise i.i.d. non-stationary bandit problem with bounded rewards. Our proposal, GLR-klUCB, combines an efficient bandit algorithm, klUCB, with an efficient, parameter-free, change-point detector, the Bernoulli Generalized Likelihood Ratio Test, for which we provide new theoretical guarantees of independent interest. We analyze two variants of our strategy, based on local restarts and global restarts, and show that their regret is upper-bounded by @math if the number of change-points @math is unknown, and by @math if @math is known. This improves the current state-of-the-art bounds, as our algorithm needs no tuning based on knowledge of the problem complexity other than @math . We present numerical experiments showing that GLR-klUCB outperforms passively and actively adaptive algorithms from the literature, and highlight the benefit of using local restarts. | The first strategy is Windowed-Mean Shift, which combines any bandit policy with a change-point detector that performs global restarts of the bandit algorithm. However, this approach is not applicable to our setting as it takes into account side observations. Another line of research on actively adaptive algorithms uses a Bayesian point of view. A Bayesian Change-Point Detection (CPD) algorithm is combined with Thompson Sampling by @cite_7 , and more recently in the Memory Bandit algorithm of @cite_3 . Both algorithms do not have theoretical guarantees and their implementation is very costly, hence we do not include them in our experiments. Our closest competitors rather use frequentist CPD algorithms (see, e.g. @cite_6 ) combined with a bandit algorithm. The first algorithm of this flavor, the Adapt-EVE algorithm, uses a Page-Hinkley test and the UCB policy, but no theoretical guarantees are given. EXP3.R combines a CPD with EXP3: the history of all arms is reset as soon as a sub-optimal arm is detected to have become optimal, and it achieves a @math regret. This is weaker than the @math regret achieved by two recent algorithms, CUSUM-UCB and Monitored UCB ( M-UCB , @cite_14 ). | {
"cite_N": [
"@cite_14",
"@cite_3",
"@cite_6",
"@cite_7"
],
"mid": [
"2786555653",
"2185566361",
"2193897973",
"2116082319"
],
"abstract": [
"Multi-armed bandit (MAB) is a class of online learning problems where a learning agent aims to maximize its expected cumulative reward while repeatedly selecting to pull arms with unknown reward distributions. In this paper, we consider a scenario in which the arms' reward distributions may change in a piecewise-stationary fashion at unknown time steps. By connecting change-detection techniques with classic UCB algorithms, we motivate and propose a learning algorithm called M-UCB, which can detect and adapt to changes, for the considered scenario. We also establish an @math regret bound for M-UCB, where @math is the number of time steps, @math is the number of arms, and @math is the number of stationary segments. and @math is the gap between the expected rewards of the optimal and best suboptimal arms. Comparison with the best available lower bound shows that M-UCB is nearly optimal in @math up to a logarithmic factor. We also compare M-UCB with state-of-the-art algorithms in a numerical experiment based on a public Yahoo! dataset. In this experiment, M-UCB achieves about @math regret reduction with respect to the best performing state-of-the-art algorithm.",
"The multi-armed bandit is a model of exploration and exploitation, where one must select, within a finite set of arms, the one which maximizes the cumulative reward up to the time horizon T. For the adversarial multi-armed bandit problem, where the sequence of rewards is chosen by an oblivious adversary, the notion of best arm during the time horizon is too restrictive for applications such as ad-serving, where the best ad could change during time range. In this paper, we consider a variant of the adversarial multi-armed bandit problem, where the time horizon is divided into unknown time periods within which rewards are drawn from stochastic distributions. During each time period, there is an optimal arm which may be different from the optimal arm at the previous time period. We present an algorithm taking advantage of the constant exploration of EXP3 to detect when the best arm changes. Its analysis shows that on a run divided into N periods where the best arm changes, the proposed algorithms achieves a regret in O(N √T log T).",
"Due to the pervasive demand for mobile services, next generation wireless networks are expected to be able to deliver high data rates while wireless resources become more and more scarce. This requires the next generation wireless networks to move toward new networking paradigms that are able to efficiently support resource-demanding applications such as personalized mobile services. Examples of such paradigms foreseen for the emerging 5G cellular networks include very densely deployed small cells and device-to-device communications. For 5G networks, it will be imperative to search for spectrum and energy-efficient solutions to the resource allocation problems that i) are amenable to distributed implementation, ii) are capable of dealing with uncertainty and lack of information, and iii) can cope with users’ selfishness. The core objective of this article is to investigate and establish the potential of the MAB framework to address this challenge. In particular, we provide a brief tutorial on bandit problems, including different variations and solution approaches. Furthermore, we discuss recent applications as well as future research directions. In addition, we provide a detailed example of using an MAB model for energy-efficient small cell activation in 5G networks.",
"Thompson Sampling has recently been shown to achieve the lower bound on regret in the Bernoulli Multi-Armed Bandit setting. This bandit problem assumes stationary distributions for the rewards. It is often unrealistic to model the real world as a stationary distribution. In this paper we derive and evaluate algorithms using Thompson Sampling for a Switching Multi-Armed Bandit Problem. We propose a Thompson Sampling strategy equipped with a Bayesian change point mechanism to tackle this problem. We develop algorithms for a variety of cases with constant switching rate: when switching occurs all arms change (Global Switching), switching occurs independently for each arm (PerArm Switching), when the switching rate is known and when it must be inferred from data. This leads to a family of algorithms we collectively term Change-Point Thompson Sampling (CTS). We show empirical results in 4 articial environments, and 2 derived from real world data: news click-through[Yahoo!, 2011] and foreign exchange data[Dukascopy, 2012], comparing them to some other bandit algorithms. In real world data CTS is the most eective."
]
} |
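The "actively adaptive" pattern discussed above (a bandit algorithm monitored by a change-point detector, with restarts on detection) can be sketched as follows. We deliberately use a simple two-sided Page-Hinkley detector with illustrative thresholds rather than the GLR test, combined with a plain UCB index and local (per-arm) restarts; every constant below is an illustrative assumption.

```python
# A toy "CPD + bandit" loop: UCB chooses arms; a per-arm Page-Hinkley test
# watches the reward stream and triggers a local restart when it fires.
import math
import random

class PageHinkley:
    """Two-sided Page-Hinkley change detector (illustrative constants)."""
    def __init__(self, delta=0.01, threshold=2.0):
        self.delta, self.threshold = delta, threshold
        self.reset()

    def reset(self):
        self.n = 0
        self.mean = 0.0
        self.up = self.up_min = 0.0       # statistic for upward mean shifts
        self.down = self.down_max = 0.0   # statistic for downward mean shifts

    def update(self, x):
        """Feed one observation; return True if a change is detected."""
        self.n += 1
        self.mean += (x - self.mean) / self.n
        self.up += x - self.mean - self.delta
        self.up_min = min(self.up_min, self.up)
        self.down += x - self.mean + self.delta
        self.down_max = max(self.down_max, self.down)
        return (self.up - self.up_min > self.threshold
                or self.down_max - self.down > self.threshold)

def cpd_ucb(means_over_time, seed=2):
    random.seed(seed)
    k = len(means_over_time[0])
    counts, sums = [0] * k, [0.0] * k
    detectors = [PageHinkley() for _ in range(k)]
    total, t = 0.0, 0
    for means in means_over_time:
        t += 1
        def index(a):
            if counts[a] == 0:
                return float("inf")
            return sums[a] / counts[a] + math.sqrt(2 * math.log(t) / counts[a])
        a = max(range(k), key=index)
        r = 1.0 if random.random() < means[a] else 0.0
        total += r
        counts[a] += 1
        sums[a] += r
        if detectors[a].update(r):        # change detected: local restart of arm a
            counts[a], sums[a] = 0, 0.0
            detectors[a].reset()
    return total

problem = [[0.9, 0.1]] * 1500 + [[0.1, 0.9]] * 1500
print(cpd_ucb(problem))
```

Resetting only the flagged arm's statistics mirrors the "local restart" variant contrasted with global restarts in the row above.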
1902.01453 | 2915045422 | Photovoltaic (PV) power generation has emerged as one of the lead renewable energy sources. Yet, its production is characterized by high uncertainty, being dependent on weather conditions like solar irradiance and temperature. Predicting PV production, even in the 24-hour forecast, remains a challenge and leads energy providers to leave idling - often carbon emitting - plants. In this paper, we introduce a Long-Term Recurrent Convolutional Network using Numerical Weather Predictions (NWP) to predict, in turn, PV production in the 24-hour and 48-hour forecast horizons. This network architecture fully leverages both temporal and spatial weather data, sampled over the whole geographical area of interest. We train our model on an NWP dataset from the National Oceanic and Atmospheric Administration (NOAA) to predict spatially aggregated PV production in Germany. We compare its performance to the persistence model and state-of-the-art methods. | Adding NWP and physical model inputs to time-series models shows better forecast prediction, yet limited computational resources as well as dataset sizes often prohibit studies from integrating an exhaustive list of these variables, especially in non-linear models. Instead, the impact of the variables is assessed by independent experiments @cite_7 or by fitting regression models like multiregression analyses @cite_23 or multivariate adaptive regression splines @cite_5 . Eventually, a subset of the variables may be selected for prediction. These variable selection analyses are conducted at specific locations, at four PV plants in Greece @cite_7 , one PV plant in Southern Italy @cite_23 and in Macau @cite_18 , while the variables' impact on the prediction is likely to be location-dependent. | {
"cite_N": [
"@cite_5",
"@cite_18",
"@cite_23",
"@cite_7"
],
"mid": [
"2506026375",
"2259944928",
"2028782916",
"2021540113"
],
"abstract": [
"Both linear and nonlinear models have been proposed for forecasting the power output of photovoltaic systems. Linear models are simple to implement but less flexible. Due to the stochastic nature of the power output of PV systems, nonlinear models tend to provide better forecast than linear models. Motivated by this, this paper suggests a fairly simple nonlinear regression model known as multivariate adaptive regression splines (MARS), as an alternative to forecasting of solar power output. The MARS model is a data-driven modeling approach without any assumption about the relationship between the power output and predictors. It maintains simplicity of the classical multiple linear regression (MLR) model while possessing the capability of handling nonlinearity. It is simpler in format than other nonlinear models such as ANN, k-nearest neighbors (KNN), classification and regression tree (CART), and support vector machine (SVM). The MARS model was applied on the daily output of a grid-connected 2.1kW PV system to provide the 1-day-ahead mean daily forecast of the power output. The comparisons with a wide variety of forecast models show that the MARS model is able to provide reliable forecast performance.",
"We evaluate and compare two common methods, artificial neural networks (ANN) and support vector regression (SVR), for predicting energy productions from a solar photovoltaic (PV) system in Florida 15 min, 1 h and 24 h ahead of time. A hierarchical approach is proposed based on the machine learning algorithms tested. The production data used in this work corresponds to 15 min averaged power measurements collected from 2014. The accuracy of the model is determined using computing error statistics such as mean bias error (MBE), mean absolute error (MAE), root mean square error (RMSE), relative MBE (rMBE), mean percentage error (MPE) and relative RMSE (rRMSE). This work provides findings on how forecasts from individual inverters will improve the total solar power generation forecast of the PV system.",
"An important issue for the growth and management of grid-connected photovoltaic (PV) systems is the possibility to forecast the power output over different horizons. In this work, statistical methods based on multiregression analysis and the Elmann artificial neural network (ANN) have been developed in order to predict power production of a 960 kWP grid-connected PV plant installed in Italy. Different combinations of the time series of produced PV power and measured meteorological variables were used as inputs of the ANN. Several statistical error measures are evaluated to estimate the accuracy of the forecasting methods. A decomposition of the standard deviation error has been carried out to identify the amplitude and phase error. The skewness and kurtosis parameters allow a detailed analysis of the distribution error.",
"The main purpose of this work is to lead an assessment of the day ahead forecasting activity of the power production by photovoltaic plants. Forecasting methods can play a fundamental role in solving problems related to renewable energy source (RES) integration in smart grids. Here a new hybrid method called Physical Hybrid Artificial Neural Network (PHANN) based on an Artificial Neural Network (ANN) and PV plant clear sky curves is proposed and compared with a standard ANN method. Furthermore, the accuracy of the two methods has been analyzed in order to better understand the intrinsic errors caused by the PHANN and to evaluate its potential in energy forecasting applications."
]
} |
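The variable-screening step described above (fitting a regression model to judge which weather variables matter) can be sketched in a few lines. The data below is synthetic and the variable names are illustrative; the point is only the mechanics of ranking standardized regression coefficients.

```python
# A minimal variable-importance screen: regress PV output on candidate
# weather variables and rank them by standardized coefficient magnitude.
import numpy as np

rng = np.random.default_rng(0)
n = 500
irradiance = rng.uniform(0, 1000, n)          # W/m^2
temperature = rng.uniform(-5, 35, n)          # deg C
humidity = rng.uniform(20, 100, n)            # %
# Synthetic PV output: mostly irradiance, a small temperature effect, noise.
pv = 0.15 * irradiance - 0.5 * temperature + rng.normal(0, 10, n)

X = np.column_stack([irradiance, temperature, humidity])
names = ["irradiance", "temperature", "humidity"]
Xz = (X - X.mean(axis=0)) / X.std(axis=0)     # standardize so coefs compare
yz = (pv - pv.mean()) / pv.std()
coef, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
for name, c in sorted(zip(names, coef), key=lambda t: -abs(t[1])):
    print(f"{name:12s} standardized coef = {c:+.3f}")
```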
1902.01453 | 2915045422 | Photovoltaic (PV) power generation has emerged as one of the lead renewable energy sources. Yet, its production is characterized by high uncertainty, being dependent on weather conditions like solar irradiance and temperature. Predicting PV production, even in the 24-hour forecast, remains a challenge and leads energy providers to leave idling - often carbon emitting - plants. In this paper, we introduce a Long-Term Recurrent Convolutional Network using Numerical Weather Predictions (NWP) to predict, in turn, PV production in the 24-hour and 48-hour forecast horizons. This network architecture fully leverages both temporal and spatial weather data, sampled over the whole geographical area of interest. We train our model on an NWP dataset from the National Oceanic and Atmospheric Administration (NOAA) to predict spatially aggregated PV production in Germany. We compare its performance to the persistence model and state-of-the-art methods. | Meanwhile, flexible non-linear prediction models taking into account the spatio-temporal structure of the data, like the Long-Term Recurrent Convolutional Network (LRCN) @cite_24 , 2D LSTM or Convolutional LSTM architectures (ConvLSTM) @cite_30 , have been successfully applied to a variety of problems. LRCN models have been used in activity recognition, image captioning and visual question answering @cite_24 . A 2D LSTM model has been applied to traffic forecasting @cite_3 while a ConvLSTM has shown promising results on precipitation forecasting, which predicts rainfall intensity in a local region over a very short time horizon @cite_30 (also known as nowcasting), a weather-related prediction problem that shares similarities with PV forecasting. | {
"cite_N": [
"@cite_24",
"@cite_30",
"@cite_3"
],
"mid": [
"2951183276",
"1485009520",
"2573587735"
],
"abstract": [
"Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized.",
"The goal of precipitation nowcasting is to predict the future rainfall intensity in a local region over a relatively short period of time. Very few previous studies have examined this crucial and challenging weather forecasting problem from the machine learning perspective. In this paper, we formulate precipitation nowcasting as a spatiotemporal sequence forecasting problem in which both the input and the prediction target are spatiotemporal sequences. By extending the fully connected LSTM (FC-LSTM) to have convolutional structures in both the input-to-state and state-to-state transitions, we propose the convolutional LSTM (ConvLSTM) and use it to build an end-to-end trainable model for the precipitation nowcasting problem. Experiments show that our ConvLSTM network captures spatiotemporal correlations better and consistently outperforms FC-LSTM and the state-of-the-art operational ROVER algorithm for precipitation nowcasting.",
"Short-term traffic forecast is one of the essential issues in intelligent transportation system. Accurate forecast result enables commuters make appropriate travel modes, travel routes, and departure time, which is meaningful in traffic management. To promote the forecast accuracy, a feasible way is to develop a more effective approach for traffic data analysis. The availability of abundant traffic data and computation power emerge in recent years, which motivates us to improve the accuracy of short-term traffic forecast via deep learning approaches. A novel traffic forecast model based on long short-term memory (LSTM) network is proposed. Different from conventional forecast models, the proposed LSTM network considers temporal-spatial correlation in traffic system via a two-dimensional network which is composed of many memory units. A comparison with other representative forecast models validates that the proposed LSTM network can achieve a better performance."
]
} |
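A hedged sketch of an LRCN-style architecture in the spirit of the row above: a small CNN encodes each weather-map frame and an LSTM aggregates the per-frame codes into a single PV-production forecast. This is written against the public PyTorch API; all layer sizes, channel counts, and the choice of predicting from the last hidden state are our own illustrative assumptions, not the paper's architecture.

```python
# A tiny LRCN: per-frame CNN features -> LSTM over time -> scalar forecast.
import torch
import torch.nn as nn

class TinyLRCN(nn.Module):
    def __init__(self, in_channels=4, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # -> (B*T, 32, 1, 1)
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)      # aggregated PV production

    def forward(self, x):                     # x: (B, T, C, H, W)
        b, t, c, h, w = x.shape
        feats = self.cnn(x.reshape(b * t, c, h, w)).reshape(b, t, 32)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])          # predict from the last time step

# Usage: batch of 2 sequences, 24 hourly frames, 4 weather channels, 32x32 grid.
model = TinyLRCN()
y = model(torch.randn(2, 24, 4, 32, 32))
print(y.shape)                                # torch.Size([2, 1])
```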
1902.01220 | 2942773025 | Deep Neural Networks have achieved remarkable success in computer vision, natural language processing, and audio tasks. | Papernot et al. and Fawzi et al. @cite_14 @cite_23 argue that injecting adversarial examples into training datasets increases the robustness of the deployed deep neural models. @cite_20 @cite_6 proposed adversarial ensemble methods to improve defense abilities; as a result, the ensemble adversarial models perform well against gradient-based and black-box attack strategies. Wenlin et al. illustrate the Feature Squeezing method @cite_22 , which reduces the search space available to an adversarial example by coalescing samples that correspond to different feature vectors in the original space into a single sample. | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_6",
"@cite_23",
"@cite_20"
],
"mid": [
"2964082701",
"2607219512",
"2618098489",
"2963467071",
"2230740169"
],
"abstract": [
"Deep learning algorithms have been shown to perform extremely well on manyclassical machine learning problems. However, recent studies have shown thatdeep learning, like other machine learning techniques, is vulnerable to adversarial samples: inputs crafted to force adeep neural network (DNN) to provide adversary-selected outputs. Such attackscan seriously undermine the security of the system supported by the DNN, sometimes with devastating consequences. For example, autonomous vehicles canbe crashed, illicit or illegal content can bypass content filters, or biometricauthentication systems can be manipulated to allow improper access. In thiswork, we introduce a defensive mechanism called defensive distillationto reduce the effectiveness of adversarial samples on DNNs. We analyticallyinvestigate the generalizability and robustness properties granted by the useof defensive distillation when training DNNs. We also empirically study theeffectiveness of our defense mechanisms on two DNNs placed in adversarialsettings. The study shows that defensive distillation can reduce effectivenessof sample creation from 95 to less than 0.5 on a studied DNN. Such dramaticgains can be explained by the fact that distillation leads gradients used inadversarial sample creation to be reduced by a factor of 1030. We alsofind that distillation increases the average minimum number of features thatneed to be modified to create adversarial samples by about 800 on one of theDNNs we tested.",
"Although deep neural networks (DNNs) have achieved great success in many tasks, they can often be fooled by that are generated by adding small but purposeful distortions to natural examples. Previous studies to defend against adversarial examples mostly focused on refining the DNN models, but have either shown limited success or required expensive computation. We propose a new strategy, , that can be used to harden DNN models by detecting adversarial examples. Feature squeezing reduces the search space available to an adversary by coalescing samples that correspond to many different feature vectors in the original space into a single sample. By comparing a DNN model's prediction on the original input with that on squeezed inputs, feature squeezing detects adversarial examples with high accuracy and few false positives. This paper explores two feature squeezing methods: reducing the color bit depth of each pixel and spatial smoothing. These simple strategies are inexpensive and complementary to other defenses, and can be combined in a joint detection framework to achieve high detection rates against state-of-the-art attacks.",
"Deep networks have recently been shown to be vulnerable to universal perturbations: there exist very small image-agnostic perturbations that cause most natural images to be misclassified by such classifiers. In this paper, we propose the first quantitative analysis of the robustness of classifiers to universal perturbations, and draw a formal link between the robustness to universal perturbations, and the geometry of the decision boundary. Specifically, we establish theoretical bounds on the robustness of classifiers under two decision boundary models (flat and curved models). We show in particular that the robustness of deep networks to universal perturbations is driven by a key property of their curvature: there exists shared directions along which the decision boundary of deep networks is systematically positively curved. Under such conditions, we prove the existence of small universal perturbations. Our analysis further provides a novel geometric method for computing universal perturbations, in addition to explaining their properties.",
"Several recent works have shown that state-of-the-art classifiers are vulnerable to worst-case (i.e., adversarial) perturbations of the datapoints. On the other hand, it has been empirically observed that these same classifiers are relatively robust to random noise. In this paper, we propose to study a semi-random noise regime that generalizes both the random and worst-case noise regimes. We propose the first quantitative analysis of the robustness of nonlinear classifiers in this general noise regime. We establish precise theoretical bounds on the robustness of classifiers in this general regime, which depend on the curvature of the classifier's decision boundary. Our bounds confirm and quantify the empirical observations that classifiers satisfying curvature constraints are robust to random noise. Moreover, we quantify the robustness of classifiers in terms of the subspace dimension in the semi-random noise regime, and show that our bounds remarkably interpolate between the worst-case and random noise regimes. We perform experiments and show that the derived bounds provide very accurate estimates when applied to various state-of-the-art deep neural networks and datasets. This result suggests bounds on the curvature of the classifiers' decision boundaries that we support experimentally, and more generally offers important insights onto the geometry of high dimensional classification problems.",
"The robustness of neural networks to intended perturbations has recently attracted significant attention. In this paper, we propose a new method, , that learns robust classifiers from supervised data. The proposed method takes finding adversarial examples as an intermediate step. A new and simple way of finding adversarial examples is presented and experimentally shown to be efficient. Experimental results demonstrate that resulting learning method greatly improves the robustness of the classification models produced."
]
} |
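The feature-squeezing defense summarized above lends itself to a toy sketch: compare the model's prediction on the raw input with its prediction on a bit-depth-reduced copy, and flag large disagreement. The stand-in "model", the L1 disagreement score, and the threshold below are illustrative assumptions; the paper calibrates squeezers and thresholds per dataset.

```python
# A toy feature-squeezing detector: disagreement between predictions on the
# raw and squeezed inputs flags a likely adversarial example.
import numpy as np

def squeeze_bit_depth(x, bits=3):
    """Reduce a [0, 1] image to 2**bits gray levels."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def looks_adversarial(model, x, bits=3, threshold=0.5):
    p_raw = model(x)
    p_squeezed = model(squeeze_bit_depth(x, bits))
    return np.abs(p_raw - p_squeezed).sum() > threshold   # L1 disagreement

# Usage with a stand-in "model" that is just a fixed random linear map.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 28 * 28))
def model(x):
    z = W @ x.ravel()
    e = np.exp(z - z.max())
    return e / e.sum()

x = rng.uniform(0, 1, (28, 28))
print(looks_adversarial(model, x))
```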
1902.01091 | 2913343786 | We propose a fog computing simulator for analysing the design and deployment of applications through customized and dynamical strategies. We model the relationships among deployed applications, network connections and infrastructure characteristics through complex network theory, enabling the integration of topological measures in dynamic and customizable strategies such as the placement of application modules, workload location, and path routing and scheduling of services. We present a comparative analysis of the efficiency and the convergence of results of our simulator with the most referenced entity, iFogSim. To highlight YAFS functionalities, we model three scenarios that, to the best of our knowledge, cannot be implemented with current fog simulators: dynamic allocation of new application modules, dynamic failures of network nodes and user mobility along the topology. | FogTorch @cite_11 uses Monte Carlo simulations to determine the best allocation for an application through QoS indicators such as latency, bandwidth, cost, and response time. This simulator addresses the application allocation problem. Our approach simulates the whole ecosystem, where the allocation is only one of the available inputs of the simulation. In other words, FogTorch optimizes the deployment of applications under QoS restrictions, and YAFS integrates these optimized allocation values to obtain simulated metrics. The authors defined an application as a set of triplets of software components and interactions among components with a QoS profile. They used Monte Carlo simulations to compute the eligible deployments of software components. They also presented a fire alarm IoT application as a case study with three components: a fire manager (an actuator to extinguish the fire), a database system, and a machine learning engine. The IoT infrastructure was based on three fog nodes, two cloud entities and nine network links among them. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2573834517"
],
"abstract": [
"Fog computing aims at extending the Cloud by bringing computational power, storage, and communication capabilities to the edge of the network, in support of the IoT. Segmentation, distribution, and adaptive deployment of functionalities over the continuum from Things to Cloud are challenging tasks, due to the intrinsic heterogeneity, hierarchical structure, and very large scale infrastructure they will have to exploit. In this paper, we propose a simple, yet general, model to support the QoS-aware deployment of multicomponent IoT applications to Fog infrastructures. The model describes operational systemic qualities of the available infrastructure (latency and bandwidth), interactions among software components and Things, and business policies. Algorithms to determine eligible deployments for an application to a Fog infrastructure are presented. A Java tool, FogTorch , based on the proposed model has been prototyped."
]
} |
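The Monte Carlo flavor of FogTorch described above can be sketched with a toy QoS estimate: sample uncertain link latencies many times and count how often a candidate deployment of application components meets an end-to-end latency bound. The two-link topology, uniform latency ranges, and the bound are all illustrative values of ours.

```python
# A toy Monte Carlo QoS check for candidate fog/cloud deployments.
import random

# Latency (ms) of each link as a (low, high) uniform range.
links = {("sensor", "fog"): (2, 10), ("fog", "cloud"): (20, 80)}

def qos_success_rate(path, bound_ms, trials=10000, seed=0):
    random.seed(seed)
    ok = 0
    for _ in range(trials):
        latency = sum(random.uniform(*links[hop]) for hop in path)
        ok += latency <= bound_ms
    return ok / trials

# Deployment A keeps the critical component on the fog node ...
print(qos_success_rate([("sensor", "fog")], bound_ms=8))
# ... deployment B pushes it to the cloud and rarely meets the same bound.
print(qos_success_rate([("sensor", "fog"), ("fog", "cloud")], bound_ms=8))
```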
1902.01091 | 2913343786 | We propose a fog computing simulator for analysing the design and deployment of applications through customized and dynamical strategies. We model the relationships among deployed applications, network connections and infrastructure characteristics through complex network theory, enabling the integration of topological measures in dynamic and customizable strategies such as the placement of application modules, workload location, and path routing and scheduling of services. We present a comparative analysis of the efficiency and the convergence of results of our simulator with the most referenced entity, iFogSim. To highlight YAFS functionalities, we model three scenarios that, to the best of our knowledge, cannot be implemented with current fog simulators: dynamic allocation of new application modules, dynamic failures of network nodes and user mobility along the topology. | iFogSim @cite_4 is a CloudSim extension that supports the management of edge-network entities and the evaluation of allocation policies. The infrastructure is defined by a set of entities: fog devices (or fog nodes), sensors, connections (such as a network link) and actuators. The application is modelled as a directed graph with modules (representing computational resources), edges (a data dependency between application modules), and loops (defining a sequence of modules that should be monitored along the simulation to compute the response time). In the article, the authors present two placement strategies that we describe in detail in the evaluation section: cloud-only placement and edge-ward placement. They introduce the simulator with two case studies: a latency-sensitive online game (namely, the EEG Tractor Beam game) and intelligent surveillance through distributed camera networks. Based on the iFogSim simulator, we use the application model in our simulator, introducing new improvements in the API, and we compare our results using the first case study and the two placement strategies as explained in the article. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2414114959"
],
"abstract": [
"Summary Internet of Things (IoT) aims to bring every object (eg, smart cameras, wearable, environmental sensors, home appliances, and vehicles) online, hence generating massive volume of data that can overwhelm storage systems and data analytics applications. Cloud computing offers services at the infrastructure level that can scale to IoT storage and processing requirements. However, there are applications such as health monitoring and emergency response that require low latency, and delay that is caused by transferring data to the cloud and then back to the application can seriously impact their performances. To overcome this limitation, Fog computing paradigm has been proposed, where cloud services are extended to the edge of the network to decrease the latency and network congestion. To realize the full potential of Fog and IoT paradigms for real-time analytics, several challenges need to be addressed. The first and most critical problem is designing resource management techniques that determine which modules of analytics applications are pushed to each edge device to minimize the latency and maximize the throughput. To this end, we need an evaluation platform that enables the quantification of performance of resource management policies on an IoT or Fog computing infrastructure in a repeatable manner. In this paper we propose a simulator, called iFogSim, to model IoT and Fog environments and measure the impact of resource management techniques in latency, network congestion, energy consumption, and cost. We describe two case studies to demonstrate modeling of an IoT environment and comparison of resource management policies. Moreover, scalability of the simulation toolkit of RAM consumption and execution time is verified under different circumstances."
]
} |
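The application model recalled above (modules, edges, loops) is easy to mirror in a toy data structure. The sketch below is not iFogSim's API: the class, the per-edge delay model, and the module names are our own illustrative choices for computing a monitored loop's response time.

```python
# A minimal application model: modules as vertices, edges as data
# dependencies with a modelled delay, and "loops" as monitored sequences.
from dataclasses import dataclass, field

@dataclass
class Application:
    modules: set = field(default_factory=set)        # computational resources
    edges: dict = field(default_factory=dict)        # (src, dst) -> link delay (ms)
    loops: list = field(default_factory=list)        # monitored module sequences

    def add_edge(self, src, dst, delay_ms):
        self.modules |= {src, dst}
        self.edges[(src, dst)] = delay_ms

    def loop_response_time(self, loop):
        """Sum the modelled delays along one monitored loop."""
        return sum(self.edges[(a, b)] for a, b in zip(loop, loop[1:]))

app = Application()
app.add_edge("sensor", "client", 1.0)
app.add_edge("client", "concentration_calculator", 2.0)
app.add_edge("concentration_calculator", "client", 2.0)
app.loops.append(["sensor", "client", "concentration_calculator", "client"])
print(app.loop_response_time(app.loops[0]))   # 5.0 ms in this toy model
```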
1902.01148 | 2913189540 | This paper investigates the theory of robustness against adversarial attacks. It focuses on the family of randomization techniques that consist in injecting noise in the network at inference time. These techniques have proven effective in many contexts, but lack theoretical arguments. We close this gap by presenting a theoretical analysis of these approaches, hence explaining why they perform well in practice. More precisely, we provide the first result relating the randomization rate to robustness to adversarial attacks. This result applies for the general family of exponential distributions, and thus extends and unifies the previous approaches. We support our theoretical claims with a set of experiments. | Even though several explicit or implicit definitions can be found in the literature, e.g. @cite_17 @cite_23 @cite_4 , there is no broadly accepted definition of robustness to adversarial example attacks. Recently, @cite_1 proposed general definitions and a taxonomy of these. The authors divide the definitions from the literature into three categories: error-region, prediction-change and corrupted instance. In this paper we introduce a definition of robustness that generalizes the one of prediction-change, in the sense that it relies on probabilistic mappings in arbitrary metric spaces, and is not restricted to classification tasks, as discussed in the sequel. | {
"cite_N": [
"@cite_1",
"@cite_4",
"@cite_23",
"@cite_17"
],
"mid": [
"2112939491",
"2803732607",
"",
"1673923490"
],
"abstract": [
"Let D and V denote respectively Information Divergence and Total Variation Distance. Pinsker's and Vajda's inequalities are respectively D ≥ [ 1 2] V2 and D ≥ log[( 2+V) ( 2-V)] - [( 2V) ( 2+V)]. In this paper, several generalizations and improvements of these inequalities are established for wide classes of f -divergences. First, conditions on f are determined under which an f-divergence Df will satisfy Df ≥ cf V2 or Df ≥ c2,f V2 + c4,f V4, where the constants cf, c2,f and c4,f are best possible. As a consequence, lower bounds in terms of V are obtained for many well known distance and divergence measures, including the χ2 and Hellinger's discrimination and the families of Tsallis' and Renyi's divergences. For instance, if D(α) (P||Q) = [α(α-1)]-1 [∫pαq1-αdμ-1] and ℑα (P||Q) = (α-1)-1 log[∫pαq1-αdμ] are respectively the relative information of type α and the Renyi's information gain of order α, it is shown that D(α) ≥ [ 1 2] V2 + [ 1 72] (α+1)(2-α) V4 whenever -1 ≤ α ≤ 2, α ≠ 0,1 and that ℑα ≥ [( α) 2] V2 + [ 1 36] α(1 + 5 α- 5 α2 ) V4 for 0 <; α <; 1. In a somewhat different direction, and motivated by the fact that these Pinsker's type lower bounds are accurate only for small variation (V close to zero), lower bounds for Df which are accurate for both small and large variation (V close to two) are also obtained. In the special case of the information divergence they imply that D ≥ log[ 2 ( 2-V)] - [( 2-V) 2] log[( 2+V) 2], which uniformly improves Vajda's inequality.",
"Why are classifiers in high dimension vulnerable to \"adversarial\" perturbations? We show that it is likely not due to information theoretic limitations, but rather it could be due to computational constraints. First we prove that, for a broad set of classification tasks, the mere existence of a robust classifier implies that it can be found by a possibly exponential-time algorithm with relatively few training examples. Then we give a particular classification task where learning a robust classifier is computationally intractable. More precisely we construct a binary classification task in high dimensional space which is (i) information theoretically easy to learn robustly for large perturbations, (ii) efficiently learnable (non-robustly) by a simple linear separator, (iii) yet is not efficiently robustly learnable, even for small perturbations, by any algorithm in the statistical query (SQ) model. This example gives an exponential separation between classical learning and robust learning in the statistical query model. It suggests that adversarial examples may be an unavoidable byproduct of computational limitations of learning algorithms.",
"",
"Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains of the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extend. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input."
]
} |
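The prediction-change notion discussed above can be probed empirically with a toy sampler: an input is flagged as non-robust at radius eps if some sampled perturbation inside the eps-ball flips the predicted class. Random sampling only gives an optimistic, one-sided check; the stand-in linear classifier, the L-infinity ball, and the sample count below are illustrative assumptions.

```python
# A toy empirical prediction-change check inside an L-infinity eps-ball.
import numpy as np

def prediction_changes(predict, x, eps, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    base = predict(x)
    for _ in range(n_samples):
        delta = rng.uniform(-1, 1, size=x.shape)
        delta *= eps / max(np.abs(delta).max(), 1e-12)   # project to the ball
        if predict(x + delta) != base:
            return True            # found a prediction change inside the ball
    return False

# Stand-in classifier: sign of a linear score.
w = np.array([1.0, -2.0, 0.5])
predict = lambda x: int(w @ x > 0)
x = np.array([0.1, 0.0, 0.1])
print(prediction_changes(predict, x, eps=0.05))
print(prediction_changes(predict, x, eps=0.5))
```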
1902.01148 | 2913189540 | This paper investigates the theory of robustness against adversarial attacks. It focuses on the family of randomization techniques that consist in injecting noise in the network at inference time. These techniques have proven effective in many contexts, but lack theoretical arguments. We close this gap by presenting a theoretical analysis of these approaches, hence explaining why they perform well in practice. More precisely, we provide the first result relating the randomization rate to robustness to adversarial attacks. This result applies for the general family of exponential distributions, and thus extends and unifies the previous approaches. We support our theoretical claims with a set of experiments. | Noise injection into algorithms to enhance robustness has been used for ages in detection and signal processing tasks, for example with a physical phenomenon called "stochastic resonance" @cite_12 @cite_13 @cite_36 . It has also been extensively studied in several machine learning and optimization fields, e.g. robust optimization @cite_28 and data augmentation techniques @cite_2 . Recently, noise injection techniques have been adopted by the adversarial defense community, especially for neural networks, with very promising results. The first technique explicitly using randomization at inference time as a defense appeared in 2017 during the NIPS defense challenge @cite_5 . This method samples from over 12000 geometric transformations of the image to select a substitute image to feed the network. Then @cite_34 proposed to use stochastic activation pruning based on a multinomial distribution for adversarial defense. | {
"cite_N": [
"@cite_36",
"@cite_28",
"@cite_2",
"@cite_5",
"@cite_34",
"@cite_13",
"@cite_12"
],
"mid": [
"2097752919",
"",
"2775795276",
"2767962654",
"2787733970",
"2017326891",
"2166397251"
],
"abstract": [
"This paper shows how adaptive systems can learn to add an optimal amount of noise to some nonlinear feedback systems. This \"stochastic resonance\" (SR) effect occurs in a wide range of physical and biological systems. The noise energy can enhance the faint periodic signals or faint broadband signals that force the dynamical systems. Fuzzy and other adaptive systems can learn to induce SR based only on samples from the process. The paper derives the SR optimality conditions that any stochastic learning system should try to achieve. The adaptive system learns the SR effect as the system performs a stochastic gradient ascent on the signal-to-noise ratio. The stochastic learning scheme does not depend on a fuzzy system or any other adaptive system. Simulations test this SR learning scheme on the popular quartic-bistable dynamical system and on other dynamical systems. The driving noise types range from Gaussian white noise to impulsive noise to chaotic noise.",
"",
"In this paper, we explore and compare multiple solutions to the problem of data augmentation in image classification. Previous work has demonstrated the effectiveness of data augmentation through simple techniques, such as cropping, rotating, and flipping input images. We artificially constrain our access to data to a small subset of the ImageNet dataset, and compare each data augmentation technique in turn. One of the more successful data augmentations strategies is the traditional transformations mentioned above. We also experiment with GANs to generate images of different styles. Finally, we propose a method to allow a neural net to learn augmentations that best improve the classifier, which we call neural augmentation. We discuss the successes and shortcomings of this method on various datasets.",
"Convolutional neural networks have demonstrated high accuracy on various tasks in recent years. However, they are extremely vulnerable to adversarial examples. For example, imperceptible perturbations added to clean images can cause convolutional neural networks to fail. In this paper, we propose to utilize randomization at inference time to mitigate adversarial effects. Specifically, we use two randomization operations: random resizing, which resizes the input images to a random size, and random padding, which pads zeros around the input images in a random manner. Extensive experiments demonstrate that the proposed randomization method is very effective at defending against both single-step and iterative attacks. Our method provides the following advantages: 1) no additional training or fine-tuning, 2) very few additional computations, 3) compatible with other adversarial defense methods. By combining the proposed randomization method with an adversarially trained model, it achieves a normalized score of 0.924 (ranked No.2 among 107 defense teams) in the NIPS 2017 adversarial examples defense challenge, which is far better than using adversarial training alone with a normalized score of 0.773 (ranked No.56). The code is public available at this https URL.",
"Neural networks are known to be vulnerable to adversarial examples. Carefully chosen perturbations to real images, while imperceptible to humans, induce misclassification and threaten the reliability of deep learning systems in the wild. To guard against adversarial examples, we take inspiration from game theory and cast the problem as a minimax zero-sum game between the adversary and the model. In general, for such games, the optimal strategy for both players requires a stochastic policy, also known as a mixed strategy. In this light, we propose Stochastic Activation Pruning (SAP), a mixed strategy for adversarial defense. SAP prunes a random subset of activations (preferentially pruning those with smaller magnitude) and scales up the survivors to compensate. We can apply SAP to pretrained networks, including adversarially trained models, without fine-tuning, providing robustness against adversarial examples. Experiments demonstrate that SAP confers robustness against attacks, increasing accuracy and preserving calibration.",
"A novel instance of a stochastic resonance effect, under the form of a noise-improved performance, is shown to be possible for an optimal Bayesian estimator. Estimation of the frequency of a periodic signal corrupted by a phase noise is considered. The optimal Bayesian estimator, achieving the minimum of the mean square estimation error, is explicitly derived. Conditions are exhibited where this minimal error is reduced when the noise level is raised, over some ranges, where this occurs essentially with non-Gaussian noise, in the tested configurations. These results contribute a new step in the exploration of stochastic resonance and its potentialities for signal processing.",
"This paper deals with stochastic resonance. This nonlinear physical phenomenon generally occurs in bistable systems excited by random input noise plus a sinusoid. Through its internal dynamics, such a system forces cooperation between the input noise and the input sine: provided the existence of fine tuning between the power noise and the dynamics, the system reacts periodically at the frequency of the sine. Of particular interest is the fact that the local output signal-to-noise ratio presents a maximum when plotted against the input noise power; the system resounds stochastically. Continuous-time systems have already been studied. We study the ability of intrinsically discrete-time systems [general nonlinear AR(1) models] to produce stochastic resonance. It is then suggested that such discrete systems can be used in signal processing."
]
} |
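The stochastic activation pruning defense cited above can be sketched directly: sample a subset of activations with probability proportional to their magnitude, zero the rest, and rescale survivors to keep the layer's expectation. The vector size and the number of draws below are illustrative; the rescaling uses the inverse probability that each unit survives at least one of the multinomial draws, which follows the general idea only.

```python
# A toy stochastic activation pruning step on a 1-D activation vector.
import numpy as np

def stochastic_activation_pruning(h, keep, rng):
    """h: 1-D activation vector; keep: number of multinomial draws."""
    mag = np.abs(h)
    if mag.sum() == 0:
        return h
    p = mag / mag.sum()
    draws = rng.multinomial(keep, p)          # sample activations w.p. ~ |h_i|
    mask = draws > 0
    scale = np.zeros_like(h)
    # Rescale each survivor by the inverse of its probability of being kept.
    keep_prob = 1.0 - (1.0 - p) ** keep
    scale[mask] = 1.0 / keep_prob[mask]
    return h * mask * scale

rng = np.random.default_rng(0)
h = rng.normal(size=8)
print(h)
print(stochastic_activation_pruning(h, keep=4, rng=rng))
```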
1902.01148 | 2913189540 | This paper investigates the theory of robustness against adversarial attacks. It focuses on the family of randomization techniques that consist in injecting noise in the network at inference time. These techniques have proven effective in many contexts, but lack theoretical arguments. We close this gap by presenting a theoretical analysis of these approaches, hence explaining why they perform well in practice. More precisely, we provide the first result relating the randomization rate to robustness to adversarial attacks. This result applies for the general family of exponential distributions, and thus extends and unifies the previous approaches. We support our theoretical claims with a set of experiments. | Noise injection has also been well investigated. Recent papers @cite_26 @cite_8 propose to inject noise directly on the activation of selected layers both at training and inference time. In @cite_37 , the authors proposed a randomization method by exploiting the link between differential privacy @cite_21 and adversarial robustness. Their framework, inheriting some theoretical results from the differential privacy work, is based on injecting Laplace or Gaussian noise at training and inference time. In general, noise drawn from continuous distributions is used to alter the activation of one layer or more, whereas noise drawn from discrete distributions is used to alter either the image or the architecture of the network. However efficient in practice, these methods lack theoretical arguments on every part of the procedure (when/where to inject noise, what noise to use, etc.). | {
"cite_N": [
"@cite_37",
"@cite_26",
"@cite_21",
"@cite_8"
],
"mid": [
"2787708942",
"2773300778",
"2027595342",
"2900663597"
],
"abstract": [
"We identify obfuscated gradients as a phenomenon that leads to a false sense of security in defenses against adversarial examples. While defenses that cause obfuscated gradients appear to defeat optimization-based attacks, we find defenses relying on this effect can be circumvented. For each of the three types of obfuscated gradients we discover, we describe indicators of defenses exhibiting this effect and develop attack techniques to overcome it. In a case study, examining all defenses accepted to ICLR 2018, we find obfuscated gradients are a common occurrence, with 7 of 8 defenses relying on obfuscated gradients. Using our new attack techniques, we successfully circumvent all 7 of them.",
"Recent studies have revealed the vulnerability of deep neural networks - A small adversarial perturbation that is imperceptible to human can easily make a well-trained deep neural network mis-classify. This makes it unsafe to apply neural networks in security-critical applications. In this paper, we propose a new defensive algorithm called Random Self-Ensemble (RSE) by combining two important concepts: @math and @math . To protect a targeted model, RSE adds random noise layers to the neural network to prevent from state-of-the-art gradient-based attacks, and ensembles the prediction over random noises to stabilize the performance. We show that our algorithm is equivalent to ensemble an infinite number of noisy models @math without any additional memory overhead, and the proposed training procedure based on noisy stochastic gradient descent can ensure the ensemble model has good predictive capability. Our algorithm significantly outperforms previous defense techniques on real datasets. For instance, on CIFAR-10 with VGG network (which has @math accuracy without any attack), under the state-of-the-art C&W attack within a certain distortion tolerance, the accuracy of unprotected model drops to less than @math , the best previous defense technique has @math accuracy, while our method still has @math prediction accuracy under the same level of attack. Finally, our method is simple and easy to integrate into any neural network.",
"The problem of privacy-preserving data analysis has a long history spanning multiple disciplines. As electronic data about individuals becomes increasingly detailed, and as technology enables ever more powerful collection and curation of these data, the need increases for a robust, meaningful, and mathematically rigorous definition of privacy, together with a computationally rich class of algorithms that satisfy this definition. Differential Privacy is such a definition.After motivating and discussing the meaning of differential privacy, the preponderance of this monograph is devoted to fundamental techniques for achieving differential privacy, and application of these techniques in creative combinations, using the query-release problem as an ongoing example. A key point is that, by rethinking the computational goal, one can often obtain far better results than would be achieved by methodically replacing each step of a non-private computation with a differentially private implementation. Despite some astonishingly powerful computational results, there are still fundamental limitations — not just on what can be achieved with differential privacy but on what can be achieved with any method that protects against a complete breakdown in privacy. Virtually all the algorithms discussed herein maintain differential privacy against adversaries of arbitrary computational power. Certain algorithms are computationally intensive, others are efficient. Computational complexity for the adversary and the algorithm are both discussed.We then turn from fundamentals to applications other than queryrelease, discussing differentially private methods for mechanism design and machine learning. The vast majority of the literature on differentially private algorithms considers a single, static, database that is subject to many analyses. Differential privacy in other models, including distributed databases and computations on data streams is discussed.Finally, we note that this work is meant as a thorough introduction to the problems and techniques of differential privacy, but is not intended to be an exhaustive survey — there is by now a vast amount of work in differential privacy, and we can cover only a small portion of it.",
"Recent development in the field of Deep Learning have exposed the underlying vulnerability of Deep Neural Network (DNN) against adversarial examples. In image classification, an adversarial example is a carefully modified image that is visually imperceptible to the original image but can cause DNN model to misclassify it. Training the network with Gaussian noise is an effective technique to perform model regularization, thus improving model robustness against input variation. Inspired by this classical method, we explore to utilize the regularization characteristic of noise injection to improve DNN's robustness against adversarial attack. In this work, we propose Parametric-Noise-Injection (PNI) which involves trainable Gaussian noise injection at each layer on either activation or weights through solving the min-max optimization problem, embedded with adversarial training. These parameters are trained explicitly to achieve improved robustness. To the best of our knowledge, this is the first work that uses trainable noise injection to improve network robustness against adversarial attacks, rather than manually configuring the injected noise level through cross-validation. The extensive results show that our proposed PNI technique effectively improves the robustness against a variety of powerful white-box and black-box attacks such as PGD, C & W, FGSM, transferable attack and ZOO attack. Last but not the least, PNI method improves both clean- and perturbed-data accuracy in comparison to the state-of-the-art defense methods, which outperforms current unbroken PGD defense by 1.1 and 6.8 on clean test data and perturbed test data respectively using Resnet-20 architecture."
]
} |
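As a concrete illustration of the inference-time noise injection surveyed in the related-work paragraph above, the sketch below adds a Gaussian noise layer to a network and averages predictions over several noisy forward passes, in the spirit of the cited random self-ensemble idea. It is a minimal sketch, not any cited paper's implementation; the architecture, noise scale, and ensemble size are assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch of inference-time noise injection with prediction
# averaging over independent noise draws (cf. the RSE abstract above).

class NoisyLayer(nn.Module):
    """Adds zero-mean Gaussian noise to its input at every forward pass."""
    def __init__(self, sigma: float):
        super().__init__()
        self.sigma = sigma

    def forward(self, x):
        return x + self.sigma * torch.randn_like(x)

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(784, 256), nn.ReLU(),
    NoisyLayer(sigma=0.25),          # noise on an intermediate activation
    nn.Linear(256, 10),
)

@torch.no_grad()
def randomized_predict(model, x, n_samples: int = 32):
    """Ensemble prediction over independent noise draws."""
    probs = torch.stack(
        [model(x).softmax(dim=-1) for _ in range(n_samples)]
    ).mean(dim=0)
    return probs.argmax(dim=-1)

x = torch.randn(4, 1, 28, 28)        # dummy batch of MNIST-sized inputs
print(randomized_predict(model, x))
```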
1902.01148 | 2913189540 | This paper investigates the theory of robustness against adversarial attacks. It focuses on the family of randomization techniques that consist in injecting noise into the network at inference time. These techniques have proven effective in many contexts, but lack theoretical arguments. We close this gap by presenting a theoretical analysis of these approaches, hence explaining why they perform well in practice. More precisely, we provide the first result relating the randomization rate to robustness to adversarial attacks. This result applies to the general family of exponential distributions, and thus extends and unifies the previous approaches. We support our theoretical claims with a set of experiments. | Since the initial discovery of adversarial examples, a wealth of non-randomized defense approaches has been proposed, inspired by various machine learning domains such as image reconstruction @cite_9 @cite_32 or robust learning @cite_24 @cite_25 . Even if these methods have their own merits, they fall short of defending against universal attacks. We hypothesize that the randomization strategy is the principled one, hence motivating the current study. | {
"cite_N": [
"@cite_24",
"@cite_9",
"@cite_25",
"@cite_32"
],
"mid": [
"1945616565",
"2618043096",
"2640329709",
""
],
"abstract": [
"Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.",
"Deep learning has shown impressive performance on hard perceptual problems. However, researchers found deep learning systems to be vulnerable to small, specially crafted perturbations that are imperceptible to humans. Such perturbations cause deep learning systems to mis-classify adversarial examples, with potentially disastrous consequences where safety or security is crucial. Prior defenses against adversarial examples either targeted specific attacks or were shown to be ineffective. We propose MagNet, a framework for defending neural network classifiers against adversarial examples. MagNet neither modifies the protected classifier nor requires knowledge of the process for generating adversarial examples. MagNet includes one or more separate detector networks and a reformer network. The detector networks learn to differentiate between normal and adversarial examples by approximating the manifold of normal examples. Since they assume no specific process for generating adversarial examples, they generalize well. The reformer network moves adversarial examples towards the manifold of normal examples, which is effective for correctly classifying adversarial examples with small perturbation. We discuss the intrinsic difficulties in defending against whitebox attack and propose a mechanism to defend against graybox attack. Inspired by the use of randomness in cryptography, we use diversity to strengthen MagNet. We show empirically that MagNet is effective against the most advanced state-of-the-art attacks in blackbox and graybox scenarios without sacrificing false positive rate on normal examples.",
"Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much of the prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against any adversary. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest the notion of security against a first-order adversary as a natural and broad security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models.",
""
]
} |
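The cite_24 abstract above describes the fast gradient sign method: perturbing the input along the sign of the loss gradient. A minimal sketch follows, with a placeholder model and an assumed epsilon; it is illustrative, not the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# FGSM sketch following the linear-vulnerability argument above:
# x_adv = x + eps * sign(grad_x loss(x, y)).

def fgsm(model, x, y, eps=0.03):
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))  # placeholder model
x = torch.rand(4, 1, 28, 28)
y = torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max())   # perturbation bounded by eps
```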
1902.01108 | 2914351552 | In the advent of the big data era, interactive visualization of large data sets consisting of M*10^5+ high-dimensional feature vectors of length N (N ~ 10^3+), is an indispensable tool for data exploratory analysis. The state-of-the-art data embedding (DE) methods of N-D data into 2-D (3-D) visually perceptible space (e.g., based on the t-SNE concept) are too demanding computationally to be efficiently employed for interactive data analytics of large and high-dimensional datasets. Herein we present a simple method, ivhd (interactive visualization of high-dimensional data tool), which radically outperforms the modern data-embedding algorithms in both computational and memory loads, while retaining high quality of N-D data embedding in 2-D (3-D). We show that the DE problem is equivalent to nearest-neighbor (nn) graph visualization, where only the indices of a few nearest neighbors of each data sample have to be known, and a binary distance between data samples -- 0 to the nearest and 1 to the other samples -- is defined. These improvements reduce the time-complexity and memory load from O(M log M) to O(M), and ensure a minimal O(M) proportionality coefficient as well. We demonstrate high efficiency, quality and robustness of ivhd on popular benchmark datasets such as MNIST, 20NG, NORB and RCV1. | Variants of SNE diversify the definition of the neighborhood in the target and source spaces (t-SNE @cite_5 ), develop more precise cost functions based on more complex divergence schemes (ws-SNE @cite_15 ), combine the algorithms in hierarchical structures (hierarchical-SNE @cite_7 ), and increase their computational efficiency (bh-SNE @cite_28 , qSNE @cite_21 , LargeVis @cite_2 , Fit-SNE @cite_16 , CE @cite_24 , triplet embedding @cite_32 ). | {
"cite_N": [
"@cite_7",
"@cite_28",
"@cite_21",
"@cite_32",
"@cite_24",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_16"
],
"mid": [
"2467197661",
"1875842236",
"",
"2592630836",
"",
"2255257084",
"2187089797",
"2098620241",
""
],
"abstract": [
"In recent years, dimensionality-reduction techniques have been developed and are widely used for hypothesis generation in Exploratory Data Analysis. However, these techniques are confronted with overcoming the trade-off between computation time and the quality of the provided dimensionality reduction. In this work, we address this limitation, by introducing Hierarchical Stochastic Neighbor Embedding (Hierarchical-SNE). Using a hierarchical representation of the data, we incorporate the well-known mantra of Overview-First, Details-On-Demand in non-linear dimensionality reduction. First, the analysis shows an embedding, that reveals only the dominant structures in the data (Overview). Then, by selecting structures that are visible in the overview, the user can filter the data and drill down in the hierarchy. While the user descends into the hierarchy, detailed visualizations of the high-dimensional structures will lead to new insights. In this paper, we explain how Hierarchical-SNE scales to the analysis of big datasets. In addition, we show its application potential in the visualization of Deep-Learning architectures and the analysis of hyperspectral images.",
"The paper investigates the acceleration of t-SNE--an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots--using two tree-based algorithms. In particular, the paper develops variants of the Barnes-Hut algorithm and of the dual-tree algorithm that approximate the gradient used for learning t-SNE embeddings in O(N log N). Our experiments show that the resulting algorithms substantially accelerate t-SNE, and that they make it possible to learn embeddings of data sets with millions of objects. Somewhat counterintuitively, the Barnes-Hut variant of t-SNE appears to outperform the dual-tree variant.",
"",
"Visualizing high-dimensional data has been a focus in data analysis communities for decades, which has led to the design of many algorithms, some of which are now considered references (such as t-SNE for example). In our era of overwhelming data volumes, the scalability of such methods have become more and more important. In this work, we present a method which allows to apply any visualization or embedding algorithm on very large datasets by considering only a fraction of the data as input and then extending the information to all data points using a graph encoding its global similarity. We show that in most cases, using only O(log(N)) samples is sufficient to diffuse the information to all N data points. In addition, we propose quantitative methods to measure the quality of embeddings and demonstrate the validity of our technique on both synthetic and real-world datasets.",
"",
"We study the problem of visualizing large-scale and high-dimensional data in a low-dimensional (typically 2D or 3D) space. Much success has been reported recently by techniques that first compute a similarity structure of the data points and then project them into a low-dimensional space with the structure preserved. These two steps suffer from considerable computational costs, preventing the state-of-the-art methods such as the t-SNE from scaling to large-scale and high-dimensional data (e.g., millions of data points and hundreds of dimensions). We propose the LargeVis, a technique that first constructs an accurately approximated K-nearest neighbor graph from the data and then layouts the graph in the low-dimensional space. Comparing to t-SNE, LargeVis significantly reduces the computational cost of the graph construction step and employs a principled probabilistic model for the visualization step, the objective of which can be effectively optimized through asynchronous stochastic gradient descent with a linear time complexity. The whole procedure thus easily scales to millions of high-dimensional data points. Experimental results on real-world data sets demonstrate that the LargeVis outperforms the state-of-the-art methods in both efficiency and effectiveness. The hyper-parameters of LargeVis are also much more stable over different data sets.",
"We present a new technique called “t-SNE” that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large datasets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of datasets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the datasets.",
"Visualization methods that arrange data objects in 2D or 3D layouts have followed two main schools, methods oriented for graph layout and methods oriented for vectorial embedding. We show the two previously separate approaches are tied by an optimization equivalence, making it possible to relate methods from the two approaches and to build new methods that take the best of both worlds. In detail, we prove a theorem of optimization equivalences between β- and γ-, as well as α- and Renyi-divergences through a connection scalar. Through the equivalences we represent several nonlinear dimensionality reduction and graph drawing methods in a generalized stochastic neighbor embedding setting, where information divergences are minimized between similarities in input and output spaces, and the optimal connection scalar provides a natural choice for the tradeoff between attractive and repulsive forces. We give two examples of developing new visualization methods through the equivalences: 1) We develop weighted symmetric stochastic neighbor embedding (ws-SNE) from Elastic Embedding and analyze its benefits, good performance for both vectorial and network data; in experiments ws-SNE has good performance across data sets of different types, whereas comparison methods fail for some of the data sets; 2) we develop a γ-divergence version of a PolyLog layout method; the new method is scale invariant in the output space and makes it possible to efficiently use large-scale smoothed neighborhoods.",
""
]
} |
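The ivhd abstract above reduces data embedding to visualizing a nearest-neighbor graph with binary distances: 0 to a point's few nearest neighbors and 1 to everything else. A minimal gradient-descent sketch of that idea follows; the step size, iteration count, and number of random repulsive samples are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Binary-distance embedding sketch: attract each point to its nearest
# neighbors (target distance 0) and repel it from a few random points
# (target distance 1), descending the squared-error of the distances.

def ivhd_sketch(knn, m, dim=2, iters=200, n_rand=2, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    y = rng.normal(scale=1e-2, size=(m, dim))      # low-dim embedding
    for _ in range(iters):
        for i in range(m):
            for j in knn[i]:                        # attractive: target 0
                y[i] -= lr * (y[i] - y[j])
            for j in rng.integers(0, m, n_rand):    # repulsive: target 1
                d = np.linalg.norm(y[i] - y[j]) + 1e-12
                y[i] += lr * (1.0 - d) * (y[i] - y[j]) / d
    return y

# Toy usage: a ring where each point's neighbors are the next two points.
knn = [[(i + 1) % 100, (i + 2) % 100] for i in range(100)]
emb = ivhd_sketch(knn, m=100)
print(emb.shape)   # (100, 2)
```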
1902.01374 | 2914698573 | Single image defogging is a classical and challenging problem in computer vision. Existing methods for this problem mainly include handcrafted-prior-based methods that rely on the use of the atmospheric degradation model and learning-based approaches that require paired fog-fogfree training example images. In practice, however, prior-based methods are prone to failure due to their own limitations, and paired training data are extremely difficult to acquire. Inspired by the principle of the CycleGAN network, we have developed an end-to-end learning system that uses unpaired fog and fogfree training images, adversarial discriminators and cycle consistency losses to automatically construct a fog removal system. Similar to CycleGAN, our system has two transformation paths; one maps fog images to a fogfree image domain and the other maps fogfree images to a fog image domain. Instead of one-stage mapping, our system uses a two-stage mapping strategy in each transformation path to enhance the effectiveness of fog removal. Furthermore, we make explicit use of prior knowledge in the networks by embedding the atmospheric degradation principle and a sky prior for mapping fogfree images to the fog image domain. In addition, we also contribute the first real-world natural fog-fogfree image dataset for defogging research. Our multiple real fog images dataset (MRFID) contains images of 200 natural outdoor scenes. For each scene, there is one clear image and four corresponding foggy images of different fog densities, manually selected from a sequence of images taken by a fixed camera over the course of one year. Qualitative and quantitative comparisons against several state-of-the-art methods on both synthetic and real-world images demonstrate that our approach is effective and performs favorably in recovering a clear image from a foggy image. | These methods have been widely used in the past few years and are also known as methods based on hand-crafted features. They often leverage the statistics of natural images to characterize the transmission map, such as the dark-channel prior @cite_25 , the color attenuation prior @cite_9 , contrast color-lines @cite_12 , the hue disparity prior @cite_16 and the haze-line prior @cite_0 . In particular, the dark-channel prior has shown excellent defogging performance, which has led many researchers to improve this method for single image defogging. Despite the remarkable defogging performance of these methods, hand-crafted features (such as texture, contrast and so on) also have limitations. For instance, the dark-channel prior @cite_25 does not work well for some scene objects (such as the sky or white buildings) that are inherently similar to the atmospheric light. Using the haze-line prior @cite_0 can cause color distortion when the fog density is high. | {
"cite_N": [
"@cite_9",
"@cite_0",
"@cite_16",
"@cite_25",
"@cite_12"
],
"mid": [
"2156936307",
"",
"1494869093",
"2128254161",
"2028763589"
],
"abstract": [
"Single image haze removal has been a challenging problem due to its ill-posed nature. In this paper, we propose a simple but powerful color attenuation prior for haze removal from a single input hazy image. By creating a linear model for modeling the scene depth of the hazy image under this novel prior and learning the parameters of the model with a supervised learning method, the depth information can be well recovered. With the depth map of the hazy image, we can easily estimate the transmission and restore the scene radiance via the atmospheric scattering model, and thus effectively remove the haze from a single image. Experimental results show that the proposed approach outperforms state-of-the-art haze removal algorithms in terms of both efficiency and the dehazing effect.",
"",
"In this paper we introduce a novel approach to restore a single image degraded by atmospheric phenomena such as fog or haze. The presented algorithm allows for fast identification of hazy regions of an image, without making use of expensive optimization and refinement procedures. By applying a single per pixel operation on the original image, we produce a 'semi-inverse' of the image. Based on the hue disparity between the original image and its semi-inverse, we are then able to identify hazy regions on a per pixel basis. This enables for a simple estimation of the airlight constant and the transmission map. Our approach is based on an extensive study on a large data set of images, and validated based on a metric that measures the contrast but also the structural changes. The algorithm is straightforward and performs faster than existing strategies while yielding comparative and even better results. We also provide a comparative evaluation against other recent single image dehazing methods, demonstrating the efficiency and utility of our approach.",
"In this paper, we propose a simple but effective image prior-dark channel prior to remove haze from a single input image. The dark channel prior is a kind of statistics of outdoor haze-free images. It is based on a key observation-most local patches in outdoor haze-free images contain some pixels whose intensity is very low in at least one color channel. Using this prior with the haze imaging model, we can directly estimate the thickness of the haze and recover a high-quality haze-free image. Results on a variety of hazy images demonstrate the power of the proposed prior. Moreover, a high-quality depth map can also be obtained as a byproduct of haze removal.",
"Photographs of hazy scenes typically suffer having low contrast and offer a limited visibility of the scene. This article describes a new method for single-image dehazing that relies on a generic regularity in natural images where pixels of small image patches typically exhibit a 1D distribution in RGB color space, known as color-lines. We derive a local formation model that explains the color-lines in the context of hazy scenes and use it for recovering the scene transmission based on the lines' offset from the origin. The lack of a dominant color-line inside a patch or its lack of consistency with the formation model allows us to identify and avoid false predictions. Thus, unlike existing approaches that follow their assumptions across the entire image, our algorithm validates its hypotheses and obtains more reliable estimates where possible. In addition, we describe a Markov random field model dedicated to producing complete and regularized transmission maps given noisy and scattered estimates. Unlike traditional field models that consist of local coupling, the new model is augmented with long-range connections between pixels of similar attributes. These connections allow our algorithm to properly resolve the transmission in isolated regions where nearby pixels do not offer relevant information. An extensive evaluation of our method over different types of images and its comparison to state-of-the-art methods over established benchmark images show a consistent improvement in the accuracy of the estimated scene transmission and recovered haze-free radiances."
]
} |
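The dark-channel prior discussed above admits a compact implementation: take the per-patch minimum over color channels, estimate the transmission as t(x) = 1 - omega * dark_channel(I/A), and invert the atmospheric degradation model I = J*t + A*(1 - t). The sketch below is a simplified version that omits the transmission refinement (soft matting or guided filtering) used in the cited work; the patch size and omega are commonly used values, assumed here.

```python
import numpy as np
from scipy.ndimage import minimum_filter

# Simplified dark-channel-prior dehazing (cf. cite_25 above).

def dark_channel(img, patch=15):
    """Per-pixel min over RGB, then a min filter over a local patch."""
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_transmission(img, airlight, patch=15, omega=0.95):
    """t(x) = 1 - omega * dark_channel(I / A)."""
    return 1.0 - omega * dark_channel(img / airlight, patch)

def dehaze(img, patch=15, omega=0.95, t_min=0.1):
    """img: float RGB image in [0, 1], shape (H, W, 3)."""
    dark = dark_channel(img, patch)
    # Airlight: mean color of the brightest 0.1% dark-channel pixels.
    k = max(1, dark.size // 1000)
    idx = np.unravel_index(np.argsort(dark, axis=None)[-k:], dark.shape)
    airlight = img[idx].mean(axis=0)
    t = np.clip(estimate_transmission(img, airlight, patch, omega), t_min, 1.0)
    # Invert I = J * t + A * (1 - t).
    return np.clip((img - airlight) / t[..., None] + airlight, 0.0, 1.0)
```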
1902.01374 | 2914698573 | Single image defogging is a classical and challenging problem in computer vision. Existing methods for this problem mainly include handcrafted-prior-based methods that rely on the use of the atmospheric degradation model and learning-based approaches that require paired fog-fogfree training example images. In practice, however, prior-based methods are prone to failure due to their own limitations, and paired training data are extremely difficult to acquire. Inspired by the principle of the CycleGAN network, we have developed an end-to-end learning system that uses unpaired fog and fogfree training images, adversarial discriminators and cycle consistency losses to automatically construct a fog removal system. Similar to CycleGAN, our system has two transformation paths; one maps fog images to a fogfree image domain and the other maps fogfree images to a fog image domain. Instead of one-stage mapping, our system uses a two-stage mapping strategy in each transformation path to enhance the effectiveness of fog removal. Furthermore, we make explicit use of prior knowledge in the networks by embedding the atmospheric degradation principle and a sky prior for mapping fogfree images to the fog image domain. In addition, we also contribute the first real-world natural fog-fogfree image dataset for defogging research. Our multiple real fog images dataset (MRFID) contains images of 200 natural outdoor scenes. For each scene, there is one clear image and four corresponding foggy images of different fog densities, manually selected from a sequence of images taken by a fixed camera over the course of one year. Qualitative and quantitative comparisons against several state-of-the-art methods on both synthetic and real-world images demonstrate that our approach is effective and performs favorably in recovering a clear image from a foggy image. | Recently, learning-based methods have drawn significant attention in the defogging research community. Tang et al. @cite_32 proposed a method that uses random forests to learn the dark channel and multiple other color features, improving the accuracy of transmission estimation. Mai et al. @cite_29 found that the RGB color features of a hazy image have a strong linear relationship with the scene depth, and modeled this relation with a back-propagation neural network to effectively recover the depth. Cai et al. @cite_8 proposed a dehazing network that uses a convolutional neural network to learn haze-relevant color features (such as the dark channel, color attenuation, and maximum contrast) of foggy images and to estimate the transmission. All of these methods achieve good defogging results. However, they still have to estimate the transmission map and atmospheric light first, and then remove the fog with the atmospheric degradation model; artifacts therefore cannot be avoided in the final defogged results when the transmission or atmospheric light is wrongly estimated. | {
"cite_N": [
"@cite_29",
"@cite_32",
"@cite_8"
],
"mid": [
"1980939919",
"2065002911",
"2256362396"
],
"abstract": [
"In this paper, we propose a novel learning-based approach for single image dehazing. The proposed approach is mostly inspired by the observation that the color of the objects fades gradually along with the increment of the scene depth. We regard the RGB values of the pixels within the image as the important feature, and use the back propagation neural network to mine the internal link between color and depth from the training samples, which consists of the hazy images and their corresponding ground truth depth map. With the trained neural network, we can easily restore the depth information as well as the scene radiance from the hazy image. Experimental results show that the proposed approach is able to produce a high-quality haze-free image with the single hazy image and achieve the real-time requirement.",
"Haze is one of the major factors that degrade outdoor images. Removing haze from a single image is known to be severely ill-posed, and assumptions made in previous methods do not hold in many situations. In this paper, we systematically investigate different haze-relevant features in a learning framework to identify the best feature combination for image dehazing. We show that the dark-channel feature is the most informative one for this task, which confirms the observation of [8] from a learning perspective, while other haze-relevant features also contribute significantly in a complementary way. We also find that surprisingly, the synthetic hazy image patches we use for feature investigation serve well as training data for realworld images, which allows us to train specific models for specific applications. Experiment results demonstrate that the proposed algorithm outperforms state-of-the-art methods on both synthetic and real-world datasets.",
"Single image haze removal is a challenging ill-posed problem. Existing methods use various constraints priors to get plausible dehazing solutions. The key to achieve haze removal is to estimate a medium transmission map for an input hazy image. In this paper, we propose a trainable end-to-end system called DehazeNet, for medium transmission estimation. DehazeNet takes a hazy image as input, and outputs its medium transmission map that is subsequently used to recover a haze-free image via atmospheric scattering model. DehazeNet adopts convolutional neural network-based deep architecture, whose layers are specially designed to embody the established assumptions priors in image dehazing. Specifically, the layers of Maxout units are used for feature extraction, which can generate almost all haze-relevant features. We also propose a novel nonlinear activation function in DehazeNet, called bilateral rectified linear unit, which is able to improve the quality of recovered haze-free image. We establish connections between the components of the proposed DehazeNet and those used in existing methods. Experiments on benchmark images show that DehazeNet achieves superior performance over existing methods, yet keeps efficient and easy to use."
]
} |
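To make the pipeline described above concrete, the sketch below uses a toy CNN as a stand-in for a learned transmission estimator and then inverts the atmospheric degradation model I = J*t + A*(1 - t) to recover the scene radiance. The clamp on t also illustrates why a misestimated transmission (or airlight) directly produces artifacts in the output. The network is a placeholder, not DehazeNet's architecture.

```python
import torch
import torch.nn as nn

# Toy stand-in for learning-based transmission estimation followed by
# inversion of the atmospheric degradation model. Sizes are illustrative.

t_net = nn.Sequential(
    nn.Conv2d(3, 16, 5, padding=2), nn.ReLU(),
    nn.Conv2d(16, 1, 5, padding=2), nn.Sigmoid(),   # t in (0, 1)
)

def recover_radiance(hazy, airlight, t_min=0.1):
    """J = (I - A) / max(t, t_min) + A, with t predicted by t_net."""
    t = t_net(hazy).clamp(min=t_min)
    return ((hazy - airlight) / t + airlight).clamp(0.0, 1.0)

hazy = torch.rand(1, 3, 64, 64)                 # dummy hazy image
airlight = torch.tensor([0.8, 0.8, 0.8]).view(1, 3, 1, 1)
print(recover_radiance(hazy, airlight).shape)   # torch.Size([1, 3, 64, 64])
```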
1902.01374 | 2914698573 | Single image defogging is a classical and challenging problem in computer vision. Existing methods for this problem mainly include handcrafted-prior-based methods that rely on the use of the atmospheric degradation model and learning-based approaches that require paired fog-fogfree training example images. In practice, however, prior-based methods are prone to failure due to their own limitations, and paired training data are extremely difficult to acquire. Inspired by the principle of the CycleGAN network, we have developed an end-to-end learning system that uses unpaired fog and fogfree training images, adversarial discriminators and cycle consistency losses to automatically construct a fog removal system. Similar to CycleGAN, our system has two transformation paths; one maps fog images to a fogfree image domain and the other maps fogfree images to a fog image domain. Instead of one-stage mapping, our system uses a two-stage mapping strategy in each transformation path to enhance the effectiveness of fog removal. Furthermore, we make explicit use of prior knowledge in the networks by embedding the atmospheric degradation principle and a sky prior for mapping fogfree images to the fog image domain. In addition, we also contribute the first real-world natural fog-fogfree image dataset for defogging research. Our multiple real fog images dataset (MRFID) contains images of 200 natural outdoor scenes. For each scene, there is one clear image and four corresponding foggy images of different fog densities, manually selected from a sequence of images taken by a fixed camera over the course of one year. Qualitative and quantitative comparisons against several state-of-the-art methods on both synthetic and real-world images demonstrate that our approach is effective and performs favorably in recovering a clear image from a foggy image. | To address the above problem, networks based on encoder-decoder structures @cite_20 @cite_13 @cite_10 have been used to directly recover clear images. Among these methods, defogging algorithms based on generative adversarial networks (GAN) @cite_17 have achieved remarkable results. Li et al. @cite_20 modified the basic GAN to directly restore a clear image from a foggy image. However, all of these methods require paired fog-fogfree images to train the network. In practice, it is difficult to obtain a large number of paired fog-fogfree images. A method based on CycleGAN @cite_23 has been proposed in @cite_31 , where cycle-consistency and VGG perceptual losses are used to directly remove fog. A significant advantage of using CycleGAN is that there is no need for paired fog-fogfree images to train the system. | {
"cite_N": [
"@cite_13",
"@cite_23",
"@cite_31",
"@cite_10",
"@cite_20",
"@cite_17"
],
"mid": [
"",
"2962793481",
"2963074253",
"2963306157",
"2798876216",
"2099471712"
],
"abstract": [
"",
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.",
"In this paper, we present an end-to-end network, called Cycle-Dehaze, for single image dehazing problem, which does not require pairs of hazy and corresponding ground truth images for training. That is, we train the network by feeding clean and hazy images in an unpaired manner. Moreover, the proposed approach does not rely on estimation of the atmospheric scattering model parameters. Our method enhances CycleGAN formulation by combining cycle-consistency and perceptual losses in order to improve the quality of textural information recovery and generate visually better haze-free images. Typically, deep learning models for dehazing take low resolution images as input and produce low resolution outputs. However, in the NTIRE 2018 challenge on single image dehazing, high resolution images were provided. Therefore, we apply bicubic downscaling. After obtaining low-resolution outputs from the network, we utilize the Laplacian pyramid to upscale the output images to the original resolution. We conduct experiments on NYU-Depth, I-HAZE, and O-HAZE datasets. Extensive experiments demonstrate that the proposed approach improves CycleGAN method both quantitatively and qualitatively.",
"We propose a new end-to-end single image dehazing method, called Densely Connected Pyramid Dehazing Network (DCPDN), which can jointly learn the transmission map, atmospheric light and dehazing all together. The end-to-end learning is achieved by directly embedding the atmospheric scattering model into the network, thereby ensuring that the proposed method strictly follows the physics-driven scattering model for dehazing. Inspired by the dense network that can maximize the information flow along features from different levels, we propose a new edge-preserving densely connected encoder-decoder structure with multi-level pyramid pooling module for estimating the transmission map. This network is optimized using a newly introduced edge-preserving loss function. To further incorporate the mutual structural information between the estimated transmission map and the dehazed result, we propose a joint-discriminator based on generative adversarial network framework to decide whether the corresponding dehazed image and the estimated transmission map are real or fake. An ablation study is conducted to demonstrate the effectiveness of each module evaluated at both estimated transmission map and dehazed result. Extensive experiments demonstrate that the proposed method achieves significant improvements over the state-of-the-art methods. Code and dataset is made available at: https: github.com hezhangsprinter DCPDN",
"In this paper, we present an algorithm to directly restore a clear image from a hazy image. This problem is highly ill-posed and most existing algorithms often use hand-crafted features, e.g., dark channel, color disparity, maximum contrast, to estimate transmission maps and then atmospheric lights. In contrast, we solve this problem based on a conditional generative adversarial network (cGAN), where the clear image is estimated by an end-to-end trainable neural network. Different from the generative network in basic cGAN, we propose an encoder and decoder architecture so that it can generate better results. To generate realistic clear images, we further modify the basic cGAN formulation by introducing the VGG features and an L1-regularized gradient prior. We also synthesize a hazy dataset including indoor and outdoor scenes to train and evaluate the proposed algorithm. Extensive experimental results demonstrate that the proposed method performs favorably against the state-of-the-art methods on both synthetic dataset and real world hazy images.",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples."
]
} |
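The CycleGAN-style training described above hinges on cycle-consistency losses computed from unpaired data. Below is a minimal sketch of that term only; the adversarial and VGG perceptual losses also used by the cited work are omitted, and G and F_map stand in for the fog-to-fogfree and fogfree-to-fog generators.

```python
import torch.nn.functional as F

# Cycle-consistency term for unpaired fog removal: G maps fog -> fogfree,
# F_map maps fogfree -> fog. lam is the usual cycle-loss weight (assumed).

def cycle_loss(G, F_map, fog_batch, clear_batch, lam=10.0):
    fog_rec = F_map(G(fog_batch))       # forward cycle: fog -> clear -> fog
    clear_rec = G(F_map(clear_batch))   # backward cycle: clear -> fog -> clear
    return lam * (F.l1_loss(fog_rec, fog_batch) +
                  F.l1_loss(clear_rec, clear_batch))
```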
1902.01349 | 2956139268 | Semantic proto-role labeling (SPRL) is an alternative to semantic role labeling (SRL) that moves beyond a categorical definition of roles, following Dowty's feature-based view of proto-roles. This theory determines agenthood vs. patienthood based on a participant's instantiation of more or less typical agent vs. patient properties, such as volition in an event. To perform SPRL, we develop an ensemble of hierarchical models with self-attention and concurrently learned predicate-argument markers. Our method is competitive with the state of the art, overall outperforming previous work in two formulations of the task (multi-label and multi-variate Likert scale prediction). In contrast to previous work, our results do not depend on gold argument heads derived from supplementary gold tree banks. | () are the first to treat SPRL as a multivariate Likert scale regression problem. They develop a neural model whose predictions have good correlation with the values in the testing data on SPR1 and SPR2. When compared with TEI17 (multi-label setting, SPR1), the model establishes a new state of the art. Pre-training the model in a machine translation setting helps on SPR1 but results in a performance drop on SPR2. The model takes a sentence as input to a Bi-LSTM @cite_4 to produce a sequence of hidden states. The prediction is based on the hidden state corresponding to the head of the argument phrase, which is determined by inspection of the gold syntax tree. Our approach, in contrast, does not rely on any supplementary information from gold syntax trees. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2067551521"
],
"abstract": [
"As a novel attack on the perennially vexing questions of the theoretical status of thematic roles and the inventory of possible roles, this paper defends a strategy of basing accounts of roles on more unified domains of linguistic data than have been used in the past to motivate roles, addressing in particular the problem of ARGUMENT SELECTION (principles determining which roles are associated with which grammatical relations). It is concluded that the best theory for describing this domain is not a traditional system of discrete roles (Agent, Patient, Source, etc.) but a theory in which the only roles are two cluster-concepts called PROTO-AGENT and PROTO-PATIENT, each characterized by a set of verbal entailments: an argument of a verb may bear either of the two proto-roles (or both) to varying degrees, according to the number of entailments of each kind the verb gives it. Both fine-grained and coarse-grained classes of verbal arguments (corresponding to traditional thematic roles and other classes as well) follow automatically, as do desired 'role hierarchies'. By examining occurrences of the 'same' verb with different argument configurations—e.g. two forms of psych predicates and object-oblique alternations as in the familiar spray load class—it can also be argued that proto-roles act as defaults in the learning of lexical meanings. Are proto-role categories manifested elsewhere in language or as cognitive categories? If so, they might be a means of making grammar acquisition easier for the child, they might explain certain other typological and acquisitional observations, and they may lead to an account of contrasts between unaccusative and unergative intransitive verbs that does not rely on deriving unaccusatives from underlying direct objects."
]
} |
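A minimal sketch of the Bi-LSTM regressor described in the related-work paragraph above: encode the sentence, select the hidden state at the argument head, and regress one Likert-scale value per proto-role property. The vocabulary size, dimensions, and number of properties are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Head-state proto-role regressor sketch (not the cited implementation).

class SprlRegressor(nn.Module):
    def __init__(self, vocab=10000, emb=100, hidden=128, n_props=18):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_props)

    def forward(self, tokens, head_idx):
        states, _ = self.lstm(self.embed(tokens))              # (B, T, 2H)
        head = states[torch.arange(tokens.size(0)), head_idx]  # (B, 2H)
        return self.out(head)                                  # (B, n_props)

model = SprlRegressor()
tokens = torch.randint(0, 10000, (2, 12))    # dummy batch of 2 sentences
scores = model(tokens, head_idx=torch.tensor([3, 7]))
print(scores.shape)                          # torch.Size([2, 18])
```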
1902.01372 | 2912282903 | Compressed videos constitute 70% of Internet traffic, and video upload growth rates far outpace compute and storage improvement trends. Past work in leveraging perceptual cues like saliency, i.e., regions where viewers focus their perceptual attention, reduces compressed video size while maintaining perceptual quality, but requires significant changes to video codecs and ignores the data management of this perceptual information. In this paper, we propose Vignette, a compression technique and storage manager for perception-based video compression. Vignette complements off-the-shelf compression software and hardware codec implementations. Vignette's compression technique uses a neural network to predict saliency information used during transcoding, and its storage manager integrates perceptual information into the video storage system to support a perceptual compression feedback loop. Vignette's saliency-based optimizations reduce storage by up to 95% with minimal quality loss, and Vignette videos lead to power savings of 50% on mobile phones during video playback. Our results demonstrate the benefit of embedding information about the human visual system into the architecture of video storage systems. | More recently, multimedia and networking research optimized streaming bandwidth requirements for 360° and VR video by decreasing quality outside the VR field-of-view @cite_6 @cite_45 @cite_34 @cite_57 ; while similar in spirit to perceptual compression, this only applies compression to non-visible regions of a video. Sitzmann et al. @cite_43 observed the impact of leveraging saliency for VR video compression and identified key perceptual requirements, but did not address the production or distribution of saliency-compressed videos. | {
"cite_N": [
"@cite_6",
"@cite_57",
"@cite_43",
"@cite_45",
"@cite_34"
],
"mid": [
"2739620949",
"2613296122",
"2963925362",
"2396399340",
"2623181870"
],
"abstract": [
"Streaming video algorithms dynamically select between different versions of a video to deliver the highest quality version that can be viewed without buffering over the client's connection. To improve the quality for viewers, the backing video service can generate more and or better versions, but at a significant computational overhead. Processing all videos uploaded to Facebook in the most intensive way would require a prohibitively large cluster. Facebook's video popularity distribution is highly skewed, however, with analysis on sampled videos showing 1 of them accounting for 83 of the total watch time by users. Thus, if we can predict the future popularity of videos, we can focus the intensive processing on those videos that improve the quality of the most watch time. To address this challenge, we designed Chess, the first popularity prediction algorithm that is both scalable and accurate. Chess is scalable because, unlike the state-of-the-art approaches, it requires only constant space per video, enabling it to handle Facebook's video workload. Chess is accurate because it delivers superior predictions using a combination of historical access patterns with social signals in a unified online learning framework. We have built a video prediction service, ChessVPS, using our new algorithm that can handle Facebook's workload with only four machines. We find that re-encoding popular videos predicted by ChessVPS enables a higher percentage of total user watch time to benefit from intensive encoding, with less overhead than a recent production heuristic, e.g., 80 of watch time with one-third as much overhead.",
"We demonstrate VisualCloud, a database management system designed to efficiently ingest, store, and deliver virtual reality (VR) content at scale. VisualCloud targets both live and prerecorded spherical panoramic (a.k.a. 360°) VR videos. It persists content as a multidimensional array that utilizes both dense (e.g., space and time) and sparse (e.g., bitrate) dimensions. VisualCloud uses orientation prediction to reduce data transfer by degrading out-of-view portions of the video. Content delivered through VisualCloud requires up to 60 less bandwidth than existing methods and scales to many concurrent connections. This demonstration will allow attendees to view both live and prerecorded VR video content served through VisualCloud. Viewers will be able to dynamically adjust tuning parameters (e.g., bitrates and path prediction) and observe changes in visual fidelity.",
"",
"Humans see only a tiny region at the center of their visual field with the highest visual acuity, a behavior known as foveation. Visual acuity reduces drastically towards the visual periphery. 'Foveated' video coding compression techniques exploit this non-uniformity to gain significant efficiency by compressing more in the periphery and less in the center. We propose a practical and scalable method to use such a technique for video streaming service over the Internet. The essential idea is to use a commodity webcam on the user side to provide real-time gaze feedback to the server with the server sending appropriately coded video to the client player. We develop a multi-resolution video coding approach that is scalable in that it is possible to pre-code the video in a small number of copies for a given set of resolutions. The coding approach is designed to match the error performance of an eye tracker built using commodity webcams. We demonstrate that the technique is energy efficient and thus usable in mobile devices. We develop a methodology for performance evaluation of such a system when network budgets may vary and video quality may fluctuate. Finally, we present a comprehensive user study that demonstrates a bandwidth reduction of a factor of 2 for the same user satisfaction.",
"360° videos and Head-Mounted Displays (HMDs) are getting increasingly popular. However, streaming 360° videos to HMDs is challenging. This is because only video content in viewers' Field-of-Views (FoVs) is rendered, and thus sending complete 360° videos wastes resources, including network bandwidth, storage space, and processing power. Optimizing the 360° video streaming to HMDs is, however, highly data and viewer dependent, and thus dictates real datasets. However, to our best knowledge, such datasets are not available in the literature. In this paper, we present our datasets of both content data (such as image saliency maps and motion maps derived from 360° videos) and sensor data (such as viewer head positions and orientations derived from HMD sensors). We put extra efforts to align the content and sensor data using the timestamps in the raw log files. The resulting datasets can be used by researchers, engineers, and hobbyists to either optimize existing 360° video streaming applications (like rate-distortion optimization) and novel applications (like crowd-driven camera movements). We believe that our dataset will stimulate more research activities along this exciting new research direction."
]
} |
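As a concrete illustration of saliency-guided perceptual compression in the spirit of Vignette, the sketch below assigns a coarser quantization step to tiles with low predicted saliency. A real system would pass such per-tile levels to a codec (for example as quality offsets) rather than quantize pixels directly; the tile size and quantizer range here are assumptions.

```python
import numpy as np

# Saliency-guided bit allocation sketch: split the frame into tiles and
# quantize more coarsely where predicted saliency is low. Illustrative
# only -- not Vignette's actual transcoding path.

def saliency_quantize(frame, saliency, tile=16, q_fine=4, q_coarse=32):
    """frame: uint8 image (H, W) or (H, W, C); saliency: float (H, W) in [0, 1]."""
    out = frame.astype(np.float32).copy()
    h, w = frame.shape[:2]
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            s = saliency[y:y + tile, x:x + tile].mean()   # 0 = ignored, 1 = fixated
            q = q_coarse + (q_fine - q_coarse) * s        # interpolate step size
            block = out[y:y + tile, x:x + tile]
            out[y:y + tile, x:x + tile] = np.round(block / q) * q
    return np.clip(out, 0, 255).astype(np.uint8)
```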
1902.01370 | 2913250058 | Conventional neural autoregressive decoding commonly assumes a fixed left-to-right generation order, which may be sub-optimal. In this work, we propose a novel decoding algorithm -- InDIGO -- which supports flexible sequence generation in arbitrary orders through insertion operations. We extend Transformer, a state-of-the-art sequence generation model, to efficiently implement the proposed approach, enabling it to be trained with either a pre-defined generation order or adaptive orders obtained from beam-search. Experiments on four real-world tasks, including word order recovery, machine translation, image caption and code generation, demonstrate that our algorithm can generate sequences following arbitrary orders, while achieving competitive or even better performance compared to the conventional left-to-right generation. The generated sequences show that InDIGO adopts adaptive generation orders based on input information. | Neural autoregressive modelling has become one of the most successful approaches for generating sequences @cite_0 @cite_18 , which has been widely used in a range of applications, such as machine translation @cite_10 , dialogue response generation @cite_13 , image captioning @cite_15 and speech recognition @cite_14 . Another stream of work focuses on generating a sequence of tokens in a non-autoregressive fashion @cite_2 @cite_33 @cite_39 , in which the discrete tokens are generated in parallel. Semi-autoregressive modelling @cite_8 @cite_32 is a mixture of the two approaches, while largely adhering to left-to-right generation. Our method is radically different from these approaches as we support flexible generation orders, while preserving the dependencies among generated tokens. | {
"cite_N": [
"@cite_13",
"@cite_18",
"@cite_14",
"@cite_33",
"@cite_8",
"@cite_32",
"@cite_39",
"@cite_0",
"@cite_2",
"@cite_15",
"@cite_10"
],
"mid": [
"1591706642",
"",
"854541894",
"2789163545",
"2963946353",
"",
"2769810959",
"196214544",
"2767206889",
"2951805548",
"2130942839"
],
"abstract": [
"Conversational modeling is an important task in natural language understanding and machine intelligence. Although previous approaches exist, they are often restricted to specific domains (e.g., booking an airline ticket) and require hand-crafted rules. In this paper, we present a simple approach for this task which uses the recently proposed sequence to sequence framework. Our model converses by predicting the next sentence given the previous sentence or sentences in a conversation. The strength of our model is that it can be trained end-to-end and thus requires much fewer hand-crafted rules. We find that this straightforward model can generate simple conversations given a large conversational training dataset. Our preliminary results suggest that, despite optimizing the wrong objective function, the model is able to converse well. It is able extract knowledge from both a domain specific dataset, and from a large, noisy, and general domain dataset of movie subtitles. On a domain-specific IT helpdesk dataset, the model can find a solution to a technical problem via conversations. On a noisy open-domain movie transcript dataset, the model can perform simple forms of common sense reasoning. As expected, we also find that the lack of consistency is a common failure mode of our model.",
"",
"Recurrent sequence generators conditioned on input data through an attention mechanism have recently shown very good performance on a range of tasks including machine translation, handwriting synthesis [1,2] and image caption generation [3]. We extend the attention-mechanism with features needed for speech recognition. We show that while an adaptation of the model used for machine translation in [2] reaches a competitive 18.7 phoneme error rate (PER) on the TIMET phoneme recognition task, it can only be applied to utterances which are roughly as long as the ones it was trained on. We offer a qualitative explanation of this failure and propose a novel and generic method of adding location-awareness to the attention mechanism to alleviate this issue. The new method yields a model that is robust to long inputs and achieves 18 PER in single utterances and 20 in 10-times longer (repeated) utterances. Finally, we propose a change to the attention mechanism that prevents it from concentrating too much on single frames, which further reduces PER to 17.6 level.",
"We propose a conditional non-autoregressive neural sequence model based on iterative refinement. The proposed model is designed based on the principles of latent variable models and denoising autoencoders, and is generally applicable to any sequence generation task. We extensively evaluate the proposed model on machine translation (En-De and En-Ro) and image caption generation, and observe that it significantly speeds up decoding while maintaining the generation quality comparable to the autoregressive counterpart.",
"Deep autoregressive sequence-to-sequence models have demonstrated impressive performance across a wide variety of tasks in recent years. While several common architecture classes including recurrent, convolutional, and self-attention networks make different trade-offs between the amount of computation needed per layer and the length of the critical path at training time, inference for novel inputs still remains an inherently sequential process. We propose a novel blockwise parallel decoding scheme that takes advantage of the fact that some architectures can score sequences in sublinear time. By generating predictions for multiple time steps at once then backing off to the longest prefix validated by the scoring model, we can substantially improve the speed of greedy decoding without compromising performance. When tested on state-of-the-art self-attention models for machine translation and image super-resolution, our approach achieves iteration reductions of up to 2x over a baseline greedy decoder with no loss in quality. Relaxing the acceptance criterion and fine tuning model parameters allows for reductions of up to 7x in exchange for a slight decrease in performance. Our fastest models achieve a 4x speedup in wall-clock time.",
"",
"The recently-developed WaveNet architecture is the current state of the art in realistic speech synthesis, consistently rated as more natural sounding for many different languages than any previous system. However, because WaveNet relies on sequential generation of one audio sample at a time, it is poorly suited to today's massively parallel computers, and therefore hard to deploy in a real-time production setting. This paper introduces Probability Density Distillation, a new method for training a parallel feed-forward network from a trained WaveNet with no significant difference in quality. The resulting system is capable of generating high-fidelity speech samples at more than 20 times faster than real-time, and is deployed online by Google Assistant, including serving multiple English and Japanese voices.",
"Recurrent Neural Networks (RNNs) are very powerful sequence models that do not enjoy widespread use because it is extremely difficult to train them properly. Fortunately, recent advances in Hessian-free optimization have been able to overcome the difficulties associated with training RNNs, making it possible to apply them successfully to challenging sequence problems. In this paper we demonstrate the power of RNNs trained with the new Hessian-Free optimizer (HF) by applying them to character-level language modeling tasks. The standard RNN architecture, while effective, is not ideally suited for such tasks, so we introduce a new RNN variant that uses multiplicative (or \"gated\") connections which allow the current input character to determine the transition matrix from one hidden state vector to the next. After training the multiplicative RNN with the HF optimizer for five days on 8 high-end Graphics Processing Units, we were able to surpass the performance of the best previous single method for character-level language modeling – a hierarchical non-parametric sequence model. To our knowledge this represents the largest recurrent neural network application to date.",
"Existing approaches to neural machine translation condition each output word on previously generated outputs. We introduce a model that avoids this autoregressive property and produces its outputs in parallel, allowing an order of magnitude lower latency during inference. Through knowledge distillation, the use of input token fertilities as a latent variable, and policy gradient fine-tuning, we achieve this at a cost of as little as 2.0 BLEU points relative to the autoregressive Transformer network used as a teacher. We demonstrate substantial cumulative improvements associated with each of the three aspects of our training strategy, and validate our approach on IWSLT 2016 English-German and two WMT language pairs. By sampling fertilities in parallel at inference time, our non-autoregressive model achieves near-state-of-the-art performance of 29.8 BLEU on WMT 2016 English-Romanian.",
"We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.",
"Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier."
]
} |
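The reference abstracts above repeatedly contrast one-token-at-a-time autoregressive decoding with parallel (non-autoregressive or blockwise) alternatives. The following is a purely illustrative Python sketch of that contrast; the toy scorer, vocabulary, and both decoding loops are hypothetical stand-ins, not any cited paper's implementation.

```python
# Toy contrast between greedy left-to-right autoregressive decoding and
# single-pass parallel decoding. `toy_logits` stands in for a trained model.
import random

VOCAB = ["<eos>", "the", "cat", "sat", "down"]

def toy_logits(prefix, position):
    # Hypothetical scorer: pseudo-random scores per (prefix, position).
    random.seed(hash((tuple(prefix), position)) % (2**32))
    return [random.random() for _ in VOCAB]

def greedy_autoregressive(max_len=10):
    # One token per step; each step conditions on everything generated so far.
    out = []
    for t in range(max_len):
        scores = toy_logits(out, t)
        tok = VOCAB[max(range(len(VOCAB)), key=scores.__getitem__)]
        if tok == "<eos>":
            break
        out.append(tok)
    return out

def parallel_one_shot(length=5):
    # Non-autoregressive style: every position is predicted independently,
    # so decoding latency no longer grows with output length.
    return [VOCAB[max(range(len(VOCAB)), key=toy_logits([], t).__getitem__)]
            for t in range(length)]

print(greedy_autoregressive())
print(parallel_one_shot())
```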
1902.01370 | 2913250058 | Conventional neural autoregressive decoding commonly assumes a fixed left-to-right generation order, which may be sub-optimal. In this work, we propose a novel decoding algorithm -- InDIGO -- which supports flexible sequence generation in arbitrary orders through insertion operations. We extend Transformer, a state-of-the-art sequence generation model, to efficiently implement the proposed approach, enabling it to be trained with either a pre-defined generation order or adaptive orders obtained from beam-search. Experiments on four real-world tasks, including word order recovery, machine translation, image caption and code generation, demonstrate that our algorithm can generate sequences following arbitrary orders, while achieving competitive or even better performance compared to the conventional left-to-right generation. The generated sequences show that InDIGO adopts adaptive generation orders based on input information. | Previous studies on the generation order of sequences mostly resort to a fixed set of generation orders. One line of work empirically shows that R2L generation outperforms its L2R counterpart on a few tasks. Another devises a two-pass approach that first produces partially-filled sentence "templates" and then fills in the missing tokens. A related method generates tokens by first predicting a text template and then infilling the sentence, in a more general way. Yet another proposes a middle-out decoder that first predicts a middle word and then expands the sequence in both directions simultaneously. A further line of work models the probability of a sequence as a tree or directed graph @cite_12 @cite_11 @cite_19 @cite_30 @cite_34. In contrast, Transformer-InDIGO supports fully flexible generation orders which are inferred during decoding. | {
"cite_N": [
"@cite_30",
"@cite_19",
"@cite_34",
"@cite_12",
"@cite_11"
],
"mid": [
"2950462711",
"2609011624",
"2594047108",
"2259512711",
"2289899728"
],
"abstract": [
"Recent advances in Neural Machine Translation (NMT) show that adding syntactic information to NMT systems can improve the quality of their translations. Most existing work utilizes some specific types of linguistically-inspired tree structures, like constituency and dependency parse trees. This is often done via a standard RNN decoder that operates on a linearized target tree structure. However, it is an open question of what specific linguistic formalism, if any, is the best structural representation for NMT. In this paper, we (1) propose an NMT model that can naturally generate the topology of an arbitrary tree structure on the target side, and (2) experiment with various target tree structures. Our experiments show the surprising result that our model delivers the best improvements with balanced binary trees constructed without any linguistic knowledge; this model outperforms standard seq2seq models by up to 2.1 BLEU points, and other methods for incorporating target-side syntax by up to 0.7 BLEU.",
"We present a simple method to incorporate syntactic information about the target language in a neural machine translation system by translating into linearized, lexicalized constituency trees. An experiment on the WMT16 German-English news translation task resulted in an improved BLEU score when compared to a syntax-agnostic NMT baseline trained on the same dataset. An analysis of the translations from the syntax-aware system shows that it performs more reordering during translation in comparison to the baseline. A small-scale human evaluation also showed an advantage to the syntax-aware system.",
"",
"Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have been successfully applied to a variety of sequence modeling tasks. In this paper we develop Tree Long Short-Term Memory (TreeLSTM), a neural network model based on LSTM, which is designed to predict a tree rather than a linear sequence. TreeLSTM defines the probability of a sentence by estimating the generation probability of its dependency tree. At each time step, a node is generated based on the representation of the generated sub-tree. We further enhance the modeling power of TreeLSTM by explicitly representing the correlations between left and right dependents. Application of our model to the MSR sentence completion challenge achieves results beyond the current state of the art. We also report results on dependency parsing reranking achieving competitive performance.",
"We introduce recurrent neural network grammars, probabilistic models of sentences with explicit phrase structure. We explain efficient inference procedures that allow application to both parsing and language modeling. Experiments show that they provide better parsing in English than any single previously published supervised generative model and better language modeling than state-of-the-art sequential RNNs in English and Chinese."
]
} |
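The row above describes InDIGO's flexible generation through insertion operations. Below is a minimal sketch of the core mechanic, with a hard-coded, hypothetical stream of (token, slot) decisions standing in for the trained model:

```python
# Grow a sequence by repeatedly inserting a token at a chosen slot; any
# generation order (L2R, R2L, or adaptive) yields some final string.
def generate_by_insertion(steps):
    seq = []
    for token, slot in steps:        # slot k means "insert before position k"
        seq.insert(slot, token)
        print(f"insert {token!r} at slot {slot}: {seq}")
    return seq

# An adaptive, non-left-to-right order that still produces "a b c d":
generate_by_insertion([("c", 0), ("a", 0), ("d", 2), ("b", 1)])
```

Left-to-right decoding is then just the special case where every step inserts at the end of the current sequence.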
1902.01370 | 2913250058 | Conventional neural autoregressive decoding commonly assumes a fixed left-to-right generation order, which may be sub-optimal. In this work, we propose a novel decoding algorithm -- InDIGO -- which supports flexible sequence generation in arbitrary orders through insertion operations. We extend Transformer, a state-of-the-art sequence generation model, to efficiently implement the proposed approach, enabling it to be trained with either a pre-defined generation order or adaptive orders obtained from beam-search. Experiments on four real-world tasks, including word order recovery, machine translation, image caption and code generation, demonstrate that our algorithm can generate sequences following arbitrary orders, while achieving competitive or even better performance compared to the conventional left-to-right generation. The generated sequences show that InDIGO adopts adaptive generation orders based on input information. | There are two pieces of concurrent work, @cite_23 and @cite_29, which study sequence generation in a non-L2R order. The authors of @cite_23 propose a tree-like generation algorithm. Unlike our insertion-based models, their tree-based generation order can only produce a subset of all possible generation orders. Further, they find that L2R is superior to their learned orders on machine translation tasks, while Transformer-InDIGO with searched adaptive orders achieves better performance. The authors of @cite_29 propose a very similar idea of using insertion operations in the Transformer for machine translation. The major difference is that they directly use absolute positions, while ours utilizes relative positions. As a result, their model needs to re-encode the partial sequence at every step, which is computationally more expensive. In contrast, our approach does not necessitate re-encoding the entire sentence during generation. In addition, knowledge distillation was necessary to achieve good performance in their approach, while our model is able to match the performance of L2R even without bootstrapping. | {
"cite_N": [
"@cite_29",
"@cite_23"
],
"mid": [
"2949644922",
"2953345635"
],
"abstract": [
"We present the Insertion Transformer, an iterative, partially autoregressive model for sequence generation based on insertion operations. Unlike typical autoregressive models which rely on a fixed, often left-to-right ordering of the output, our approach accommodates arbitrary orderings by allowing for tokens to be inserted anywhere in the sequence during decoding. This flexibility confers a number of advantages: for instance, not only can our model be trained to follow specific orderings such as left-to-right generation or a binary tree traversal, but it can also be trained to maximize entropy over all valid insertions for robustness. In addition, our model seamlessly accommodates both fully autoregressive generation (one insertion at a time) and partially autoregressive generation (simultaneous insertions at multiple locations). We validate our approach by analyzing its performance on the WMT 2014 English-German machine translation task under various settings for training and decoding. We find that the Insertion Transformer outperforms many prior non-autoregressive approaches to translation at comparable or better levels of parallelism, and successfully recovers the performance of the original Transformer while requiring only logarithmically many iterations during decoding.",
"Standard sequential generation methods assume a pre-specified generation order, such as text generation methods which generate words from left to right. In this work, we propose a framework for training models of text generation that operate in non-monotonic orders; the model directly learns good orders, without any additional annotation. Our framework operates by generating a word at an arbitrary position, and then recursively generating words to its left and then words to its right, yielding a binary tree. Learning is framed as imitation learning, including a coaching method which moves from imitating an oracle to reinforcing the policy's own preferences. Experimental results demonstrate that using the proposed method, it is possible to learn policies which generate text without pre-specifying a generation order, while achieving competitive performance with conventional left-to-right generation."
]
} |
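The comparison above turns on relative versus absolute positions under insertion. One plausible bookkeeping scheme (an illustrative simplification, not the exact update from either paper) keeps a pairwise order matrix that grows by only one row and column per insertion, so existing tokens never need re-indexing or re-encoding:

```python
# r[a][b] = 1 if token a sits to the right of token b, -1 if to the left.
def insert_right_of(r, j):
    """Insert a new token immediately to the right of existing token j."""
    n = len(r)
    new = [r[j][k] for k in range(n)]   # inherit j's relations to the others
    new[j] = 1                          # ...but sit to j's right
    for k in range(n):
        r[k].append(-new[k])            # keep the matrix antisymmetric
    r.append(new + [0])
    return n                            # id of the new token

def linearize(r):
    # A token's absolute slot is simply how many tokens lie to its left.
    return sorted(range(len(r)), key=lambda i: sum(v == 1 for v in r[i]))

r = [[0]]                  # token 0 alone
insert_right_of(r, 0)      # token 1 goes to the right of token 0
insert_right_of(r, 0)      # token 2 squeezes in between tokens 0 and 1
print(linearize(r))        # -> [0, 2, 1]
```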
1902.00750 | 2913911166 | Rapid development of Internet technologies has encouraged traditional newspapers to report news on social networks. However, people on social networks may have different needs, which naturally raises the question: can we automatically analyze the influence of writing style on news quality and assist writers in improving it? This is challenging because both writing style and 'quality' are hard to measure. First, we use 'popularity' as the measure of 'quality'. This is natural on social networks but brings new problems: popularity is also influenced by the event and the publisher. We therefore design two methods to alleviate their influence. Then, we propose eight types of linguistic features (53 features in all) according to eight writing guidelines and analyze their relationship with news quality. The experimental results show that these linguistic features greatly influence news quality. Based on this, we design a news quality assessment model for social networks (SNQAM). SNQAM performs excellently at predicting quality, presents interpretable quality scores, and gives accessible suggestions on how to improve quality according to the writing guidelines we referred to. | @cite_4 @cite_8 take into account various linguistic factors to produce predictive models for article quality. Few works have focused on news on social networks. | {
"cite_N": [
"@cite_4",
"@cite_8"
],
"mid": [
"2019416425",
"2184410296"
],
"abstract": [
"We combine lexical, syntactic, and discourse features to produce a highly predictive model of human readers' judgments of text readability. This is the first study to take into account such a variety of linguistic factors and the first to empirically demonstrate that discourse relations are strongly associated with the perceived quality of text. We show that various surface metrics generally expected to be related to readability are not very good predictors of readability judgments in our Wall Street Journal corpus. We also establish that readability predictors behave differently depending on the task: predicting text readability or ranking the readability. Our experiments indicate that discourse relations are the one class of features that exhibits robustness across these two tasks.",
"Great writing is rare and highly admired. Readers seek out articles that are beautifully written, informative and entertaining. Yet information-access technologies lack capabilities for predicting article quality at this level. In this paper we present first experiments on article quality prediction in the science journalism domain. We introduce a corpus of great pieces of science journalism, along with typical articles from the genre. We implement features to capture aspects of great writing, including surprising, visual and emotional content, as well as general features related to discourse organization and sentence structure. We show that the distinction between great and typical articles can be detected fairly accurately, and that the entire spectrum of our features contribute to the distinction."
]
} |
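The row above maps a news text to hand-crafted linguistic features and relates them to quality. As a hedged sketch of that general recipe, the three surface features and the linear weights below are placeholders invented for illustration; they are not the paper's 53 features or the SNQAM model:

```python
# Toy linguistic features plus a hand-set linear scorer (all hypothetical).
import statistics

def features(text):
    words = text.split()
    sents = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    return [
        len(words) / max(len(sents), 1),                      # avg sentence length
        statistics.mean(len(w) for w in words),               # avg word length
        sum(c in ",;:" for c in text) / max(len(words), 1),   # punctuation rate
    ]

docs = {
    "short punchy update": "Markets rally. Tech leads gains.",
    "long winding update": ("The market, which had been, for reasons that "
                            "remain unclear, largely flat, rallied today."),
}
weights = [-0.05, -0.1, -1.0]   # hypothetical: brevity and simplicity help
for name, text in docs.items():
    f = features(text)
    score = sum(w * x for w, x in zip(weights, f))
    print(name, [round(x, 2) for x in f], round(score, 2))
```

In the real setting the weights would of course be fitted against a popularity-derived quality signal rather than set by hand.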
1902.00750 | 2913911166 | Rapid development of Internet technologies has encouraged traditional newspapers to report news on social networks. However, people on social networks may have different needs, which naturally raises the question: can we automatically analyze the influence of writing style on news quality and assist writers in improving it? This is challenging because both writing style and 'quality' are hard to measure. First, we use 'popularity' as the measure of 'quality'. This is natural on social networks but brings new problems: popularity is also influenced by the event and the publisher. We therefore design two methods to alleviate their influence. Then, we propose eight types of linguistic features (53 features in all) according to eight writing guidelines and analyze their relationship with news quality. The experimental results show that these linguistic features greatly influence news quality. Based on this, we design a news quality assessment model for social networks (SNQAM). SNQAM performs excellently at predicting quality, presents interpretable quality scores, and gives accessible suggestions on how to improve quality according to the writing guidelines we referred to. | It is generally difficult to estimate 'quality' without human intervention, especially for articles. Some works used human judgment as the ground truth of article quality, while others treated articles that appeared in “The Best American Science Writing” as high-quality articles. To solve this problem, popularity-based methods have been widely used. These works often focus on improving prediction accuracy and pay more attention to factors besides writing style, including the publisher's social context, information diffusion models, etc. @cite_5 analyzes the relationship between users' information and content popularity; @cite_12 @cite_10 use temporal information and model the diffusion of information to predict popularity. All of them focus on predicting popularity but give little insight into how to write. | {
"cite_N": [
"@cite_5",
"@cite_10",
"@cite_12"
],
"mid": [
"2101656432",
"2489369235",
"2963603520"
],
"abstract": [
"People and information are two core dimensions in a social network. People sharing information (such as blogs, news, albums, etc.) is the basic behavior. In this paper, we focus on predicting item-level social influence to answer the question Who should share What, which can be extended into two information retrieval scenarios: (1) Users ranking: given an item, who should share it so that its diffusion range can be maximized in a social network; (2) Web posts ranking: given a user, what should she share to maximize her influence among her friends. We formulate the social influence prediction problem as the estimation of a user-post matrix, in which each entry represents the strength of influence of a user given a web post. We propose a Hybrid Factor Non-Negative Matrix Factorization (HF-NMF) approach for item-level social influence modeling, and devise an efficient projected gradient method to solve the HF-NMF problem. Intensive experiments are conducted and demonstrate the advantages and characteristics of the proposed method.",
"Modeling and predicting retweeting dynamics in social media has important implications to an array of applications. Existing models either fail to model the triggering effect of retweeting dynamics, e.g., the model based on reinforced Poisson process, or are hard to be trained using only the retweeting dynamics of individual tweet, e.g., the model based on self-exciting Hawkes process. In this paper, motivated by the observation that each retweeting dynamics is generally dominated by a handful of key nodes that separately trigger a high number of retweets, we propose a mixture process to model and predict retweeting dynamics, with each subprocess capturing the retweeting dynamics initiated by a key node. Experiments demonstrate that the proposed model outperforms the state-of-the-art model.",
""
]
} |
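The abstract above notes that raw popularity confounds writing quality with publisher and event effects. A minimal, hypothetical normalization in that spirit (not one of the paper's actual two methods) scores each post relative to its own publisher's typical popularity:

```python
# Normalize popularity per publisher so strong accounts don't dominate.
import statistics

def publisher_normalized(posts):
    """posts: list of (publisher, popularity) -> list of normalized scores."""
    by_pub = {}
    for pub, pop in posts:
        by_pub.setdefault(pub, []).append(pop)
    med = {pub: statistics.median(v) for pub, v in by_pub.items()}
    return [pop / max(med[pub], 1e-9) for pub, pop in posts]

print(publisher_normalized([("A", 100), ("A", 300), ("B", 2), ("B", 10)]))
# -> [0.5, 1.5, 0.333..., 1.666...]: B's small post can outrank A's big one.
```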
1902.00976 | 2912744826 | We examine the classic "Beer-Quiche" game from Cho and Kreps (1987), and relax the assumption that the order placed by the sender is completely observable. Under the optimal degree of transparency, the receiver achieves a higher payoff than with full transparency. Partial obfuscation of the sender's choice encourages separation: committing to a less informative signal about the sender's choice affects the endogenous information generation process such that the receiver thereby secures himself more information. | Asriyan, Fuchs, and Green (2017) @cite_2 explore a similar problem. In their model, the @math sellers each have an indivisible asset whose value is private information, and have two periods in which they may trade it. In the portion of the paper relevant to this one, they ask how a planner should disclose trade behavior to "maximize social welfare". As these authors note, persuasion is not the only objective, since the information policy affects the information content of trading, and hence affects trading itself. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2620184808"
],
"abstract": [
"How effectively does a decentralized marketplace aggregate information that is dispersed throughout the economy? We study this question in a dynamic setting, in which sellers have private information that is correlated with an unobservable aggregate state. We first characterize equilibria with an arbitrary finite number of informed traders. A common feature is that each seller’s trading behavior provides an informative and conditionally independent signal about the aggregate state. We then ask whether the state is revealed as the number of informed traders goes to infinity. Perhaps surprisingly, the answer is no. We provide generic conditions under which the amount of information revealed is necessarily bounded and does not reveal the aggregate state. When these conditions are violated, we provide conditions under which there is coexistence of aggregating and non-aggregating equilibria. We discuss the implications for policies meant to enhance information dissemination in markets. In general, a partially revealing information policy can increase trading surplus."
]
} |
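A small worked example of the "partial transparency" idea in the abstract above: the receiver observes the sender's order (beer or quiche) only through a signal that reports it truthfully with probability q, and updates by Bayes' rule. All numbers (the prior and the type-conditional strategies) are hypothetical, chosen only to show the computation:

```python
# Posterior that the sender is the strong type after a noisy order signal.
def posterior_strong(prior, p_beer_strong, p_beer_weak, q, signal):
    # P(signal | order) under a symmetric garbling of accuracy q.
    def p_sig(order):
        return q if order == signal else 1 - q
    like_strong = p_beer_strong * p_sig("beer") + (1 - p_beer_strong) * p_sig("quiche")
    like_weak = p_beer_weak * p_sig("beer") + (1 - p_beer_weak) * p_sig("quiche")
    joint = prior * like_strong
    return joint / (joint + (1 - prior) * like_weak)

# Fully transparent (q=1) vs. partially obfuscated (q=0.7) signal of "beer":
for q in (1.0, 0.7):
    print(q, round(posterior_strong(0.9, p_beer_strong=1.0,
                                    p_beer_weak=0.5, q=q, signal="beer"), 3))
```

With q = 1 the posterior after a "beer" signal is 0.9/(0.9 + 0.1·0.5) ≈ 0.947; garbling to q = 0.7 pulls it toward the prior (≈ 0.926 here). The paper's point is about how such garbling changes the sender's equilibrium behavior, which this static calculation does not capture.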
1902.00976 | 2912744826 | We examine the classic "Beer-Quiche" game from Cho and Kreps (1987), and relax the assumption that the order placed by the sender is completely observable. Under the optimal degree of transparency, the receiver achieves a higher payoff than with full transparency. Partial obfuscation of the sender's choice encourages separation: committing to a less informative signal about the sender's choice affects the endogenous information generation process such that the receiver thereby secures himself more information. | Finally, there is a somewhat older literature that looks at mediation in the context of cheap talk. See for instance Goltsman, Hörner, Pavlov, and Squintani (2009) @cite_13, Ganguly and Ray (2011) @cite_6, and Ivanov (2009) @cite_11. In a well-known paper, Forges (1990) @cite_4 considers mediation in a job-market example, in which the signals about the prospective candidate's type are cheap talk. As in the other papers in this literature, the introduction of a mediator enlarges the set of equilibrium payoffs. | {
"cite_N": [
"@cite_4",
"@cite_13",
"@cite_6",
"@cite_11"
],
"mid": [
"2144725035",
"2208680581",
"1544913273",
"2055988385"
],
"abstract": [
"We study (costless) information transmission from a job applicant to an employer who must decide whether to hire him and, if so, which position to give him. We construct equilibrium payoffs requiring at least two signaling steps, or even that no deadline be imposed on the (plain) conversation. The set of communication equilibrium payoffs (achieved with the help of a communication device) is larger than the set of equilibrium payoffs of the plain conversation game but coincides with the set of correlated equilibrium payoffs.",
"We compare three common dispute resolution processes – negotiation, mediation, and arbitration – in the framework of Crawford and Sobel [V. Crawford, J. Sobel, Strategic information transmission, Econometrica 50 (6) (1982) 1431–1451]. Under negotiation, the two parties engage in (possibly arbitrarily long) face-to-face cheap talk. Under mediation, the parties communicate with a neutral third party who makes a non-binding recommendation. Under arbitration, the two parties commit to conform to the third party recommendation. We characterize and compare the optimal mediation and arbitration procedures. Both mediators and arbitrators should optimally filter information, but mediators should also add noise to it. We find that unmediated negotiation performs as well as mediation if and only if the degree of conflict between the parties is low.",
"In the Crawford-Sobel (uniform, quadratic utility) cheap-talk model, we consider a simple mediation scheme (a communication device) in which the informed agent reports one of N possible elements of a partition to the mediator and then the mediator suggests one of N actions to the uninformed decision-maker according to the probability distribution of the device. We show that such a simple mediated equilibrium cannot improve upon the unmediated N-partition Crawford-Sobel equilibrium when the preference divergence parameter (bias) is small.",
"This paper investigates communication between an informed expert and an uninformed principal via a strategic mediator. We demonstrate that, for any bias in the parties' preferences, there exists a strategic mediator that provides the highest expected payoff to the principal, as if the players had communicated through an optimal non-strategic mediator."
]
} |
1902.00671 | 2949808427 | The visual world we sense, interpret and interact with every day is a complex composition of interleaved physical entities. Therefore, it is a very challenging task to generate vivid scenes of similar complexity using computers. In this work, we present a scene generation framework based on Generative Adversarial Networks (GANs) to sequentially compose a scene, breaking down the underlying problem into smaller ones. Different from existing approaches, our framework offers explicit control over the elements of a scene through separate background and foreground generators. Starting with an initially generated background, foreground objects then populate the scene one-by-one in a sequential manner. Via quantitative and qualitative experiments on a subset of the MS-COCO dataset, we show that our proposed framework not only produces more diverse images but also copes better with affine transformations and occlusion artifacts of foreground objects than its counterparts. | Various approaches have been proposed to control the generation process by conditioning on a class label @cite_20, an attribute vector @cite_1 @cite_19, a text description @cite_13 @cite_18, or a semantic map @cite_10 @cite_12. Reed et al. (what_where) learn to control the foreground object location by conditioning on bounding boxes and keypoint coordinates. Isola et al. (pix2pix) propose conditional adversarial networks as a general-purpose solution to image-to-image translation problems. Their model can generate a scene from a semantic layout map, similar to our work. Nevertheless, the image diversity and the control over individual scene elements are limited. | {
"cite_N": [
"@cite_18",
"@cite_10",
"@cite_1",
"@cite_19",
"@cite_13",
"@cite_12",
"@cite_20"
],
"mid": [
"2796341166",
"2768959015",
"2744091666",
"2754447548",
"2949999304",
"2964216930",
"2950776302"
],
"abstract": [
"To truly understand the visual world our models should be able not only to recognize images but also generate them. To this end, there has been exciting recent progress on generating images from natural language descriptions. These methods give stunning results on limited domains such as descriptions of birds or flowers, but struggle to faithfully reproduce complex sentences with many objects and relationships. To overcome this limitation we propose a method for generating images from scene graphs, enabling explicitly reasoning about objects and their relationships. Our model uses graph convolution to process input graphs, computes a scene layout by predicting bounding boxes and segmentation masks for objects, and converts the layout to an image with a cascaded refinement network. The network is trained adversarially against a pair of discriminators to ensure realistic outputs. We validate our approach on Visual Genome and COCO-Stuff, where qualitative results, ablations, and user studies demonstrate our method's ability to generate complex images with multiple objects.",
"Many image-to-image translation problems are ambiguous, as a single input image may correspond to multiple possible outputs. In this work, we aim to model a of possible outputs in a conditional generative modeling setting. The ambiguity of the mapping is distilled in a low-dimensional latent vector, which can be randomly sampled at test time. A generator learns to map the given input, combined with this latent code, to the output. We explicitly encourage the connection between output and the latent code to be invertible. This helps prevent a many-to-one mapping from the latent code to the output during training, also known as the problem of mode collapse, and produces more diverse results. We explore several variants of this approach by employing different training objectives, network architectures, and methods of injecting the latent code. Our proposed method encourages bijective consistency between the latent encoding and output modes. We present a systematic comparison of our method and other variants on both perceptual realism and diversity.",
"We present a generative attribute controller (GAC), a novel functionality for generating or editing an image while intuitively controlling large variations of an attribute. This controller is based on a novel generative model called the conditional filtered generative adversarial network (CFGAN), which is an extension of the conventional conditional GAN (CGAN) that incorporates a filtering architecture into the generator input. Unlike the conventional CGAN, which represents an attribute directly using an observable variable (e.g., the binary indicator of attribute presence) so its controllability is restricted to attribute labeling (e.g., restricted to an ON or OFF control), the CFGAN has a filtering architecture that associates an attribute with a multi-dimensional latent variable, enabling latent variations of the attribute to be represented. We also define the filtering architecture and training scheme considering controllability, enabling the variations of the attribute to be intuitively controlled using typical controllers (radio buttons and slide bars). We evaluated our CFGAN on MNIST, CUB, and CelebA datasets and show that it enables large variations of an attribute to be not only represented but also intuitively controlled while retaining identity. We also show that the learned latent space has enough expressive power to conduct attribute transfer and attribute-based image retrieval.",
"Facial expression editing is a challenging task as it needs a high-level semantic understanding of the input face image. In conventional methods, either paired training data is required or the synthetic face resolution is low. Moreover, only the categories of facial expression can be changed. To address these limitations, we propose an Expression Generative Adversarial Network (ExprGAN) for photo-realistic facial expression editing with controllable expression intensity. An expression controller module is specially designed to learn an expressive and compact expression code in addition to the encoder-decoder network. This novel architecture enables the expression intensity to be continuously adjusted from low to high. We further show that our ExprGAN can be applied for other tasks, such as expression transfer, image retrieval, and data augmentation for training improved face expression recognition models. To tackle the small size of the training database, an effective incremental learning scheme is proposed. Quantitative and qualitative evaluations on the widely used Oulu-CASIA dataset demonstrate the effectiveness of ExprGAN.",
"Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image model- ing, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions.",
"We propose a novel hierarchical approach for text-to-image synthesis by inferring semantic layout. Instead of learning a direct mapping from text to image, our algorithm decomposes the generation process into multiple steps, in which it first constructs a semantic layout from the text by the layout generator and converts the layout to an image by the image generator. The proposed layout generator progressively constructs a semantic layout in a coarse-to-fine manner by generating object bounding boxes and refining each box by estimating object shapes inside the box. The image generator synthesizes an image conditioned on the inferred semantic layout, which provides a useful semantic structure of an image matching with the text description. Our model not only generates semantically more meaningful images, but also allows automatic annotation of generated images and user-controlled generation process by modifying the generated scene layout. We demonstrate the capability of the proposed model on challenging MS-COCO dataset and show that the model can substantially improve the image quality, interpretability of output and semantic alignment to input text over existing approaches.",
"Synthesizing high resolution photorealistic images has been a long-standing challenge in machine learning. In this paper we introduce new methods for the improved training of generative adversarial networks (GANs) for image synthesis. We construct a variant of GANs employing label conditioning that results in 128x128 resolution image samples exhibiting global coherence. We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses demonstrate that high resolution samples provide class information not present in low resolution samples. Across 1000 ImageNet classes, 128x128 samples are more than twice as discriminable as artificially resized 32x32 samples. In addition, 84.7 of the classes have samples exhibiting diversity comparable to real ImageNet data."
]
} |
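The framework in the row above composes a scene by generating a background first and then adding foreground objects one by one. The numpy sketch below mimics only the compositing step; the random "generators" and disc-shaped masks are stand-ins for the paper's GAN generators and learned alpha masks:

```python
# Sequential scene composition: background first, then alpha-blended objects.
import numpy as np

rng = np.random.default_rng(0)

def fake_background(h, w):
    return rng.uniform(0.0, 0.3, size=(h, w, 3))      # stand-in for G_bg(z)

def fake_foreground(size):
    obj = rng.uniform(0.5, 1.0, size=(size, size, 3))  # stand-in for G_fg(z)
    yy, xx = np.mgrid[:size, :size]
    mask = ((yy - size / 2) ** 2 + (xx - size / 2) ** 2) < (size / 2) ** 2
    return obj, mask.astype(float)[..., None]          # disc-shaped alpha mask

def compose(h=64, w=64, n_objects=3):
    scene = fake_background(h, w)
    for _ in range(n_objects):                         # one object per step
        s = int(rng.integers(8, 20))
        y, x = rng.integers(0, h - s), rng.integers(0, w - s)
        obj, m = fake_foreground(s)
        patch = scene[y:y + s, x:x + s]
        scene[y:y + s, x:x + s] = m * obj + (1 - m) * patch
    return scene

print(compose().shape)   # (64, 64, 3)
```

Because later objects are blended over earlier content, occlusion falls out of the composition order for free, which is part of the appeal of the sequential formulation.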
1902.00761 | 2913107100 | In this paper we propose a convolutional neural network that is designed to upsample a series of sparse range measurements based on the contextual cues gleaned from a high-resolution intensity image. Our approach draws inspiration from related work on super-resolution and in-painting. We propose a novel architecture that seeks to pull contextual cues separately from the intensity image and the depth features and then fuse them later in the network. We argue that this approach effectively exploits the relationship between the two modalities and produces accurate results while respecting salient image structures. We present experimental results to demonstrate that our approach is comparable with state-of-the-art methods and generalizes well across multiple datasets. | Monocular depth estimation is an active research field where CNN-based methods are currently the state of the art. Different methods have been proposed that use supervised @cite_19 @cite_16 @cite_46 @cite_25 @cite_42 @cite_20, unsupervised @cite_41 and self-supervised @cite_49 depth estimation strategies. At the time of this writing, the best-performing monocular depth estimation algorithm is from Fu et al., achieving an inverse RMSE score of 12.98 on the KITTI depth prediction dataset @cite_15. The authors propose an ordinal regression-based method of predicting depth values, as they state that modelling depth estimation as a regression problem results in slow convergence and unsatisfactory local solutions. Li et al. also discretize the depth prediction problem by formulating it as a classification problem @cite_0. | {
"cite_N": [
"@cite_15",
"@cite_41",
"@cite_42",
"@cite_0",
"@cite_19",
"@cite_49",
"@cite_46",
"@cite_16",
"@cite_25",
"@cite_20"
],
"mid": [
"2963488291",
"2520707372",
"2964193874",
"2837715209",
"2171740948",
"2963760790",
"",
"",
"2802665441",
"2810665122"
],
"abstract": [
"Monocular depth estimation, which plays a crucial role in understanding 3D scene geometry, is an ill-posed problem. Recent methods have gained significant improvement by exploring image-level information and hierarchical features from deep convolutional neural networks (DCNNs). These methods model depth estimation as a regression problem and train the regression networks by minimizing mean squared error, which suffers from slow convergence and unsatisfactory local solutions. Besides, existing depth estimation networks employ repeated spatial pooling operations, resulting in undesirable low-resolution feature maps. To obtain high-resolution depth maps, skip-connections or multilayer deconvolution networks are required, which complicates network training and consumes much more computations. To eliminate or at least largely reduce these problems, we introduce a spacing-increasing discretization (SID) strategy to discretize depth and recast depth network learning as an ordinal regression problem. By training the network using an ordinary regression loss, our method achieves much higher accuracy and faster convergence in synch. Furthermore, we adopt a multi-scale network structure which avoids unnecessary spatial pooling and captures multi-scale information in parallel. The proposed deep ordinal regression network (DORN) achieves state-of-the-art results on three challenging benchmarks, i.e., KITTI [16], Make3D [49], and NYU Depth v2 [41], and outperforms existing methods by a large margin.",
"Learning based methods have shown very promising results for the task of depth estimation in single images. However, most existing approaches treat depth prediction as a supervised regression problem and as a result, require vast quantities of corresponding ground truth depth data for training. Just recording quality depth data in a range of environments is a challenging problem. In this paper, we innovate beyond existing approaches, replacing the use of explicit depth data during training with easier-to-obtain binocular stereo footage. We propose a novel training objective that enables our convolutional neural network to learn to perform single image depth estimation, despite the absence of ground truth depth data. Ex-ploiting epipolar geometry constraints, we generate disparity images by training our network with an image reconstruction loss. We show that solving for image reconstruction alone results in poor quality depth images. To overcome this problem, we propose a novel training loss that enforces consistency between the disparities produced relative to both the left and right images, leading to improved performance and robustness compared to existing approaches. Our method produces state of the art results for monocular depth estimation on the KITTI driving dataset, even outperforming supervised methods that have been trained with ground truth depth.",
"Abstract Monocular depth estimation is very challenging in complex compositions depicting multiple objects of diverse scales. Albeit the recent great progress thanks to the deep convolutional neural networks, the state-of-the-art monocular depth estimation methods still fall short to handle such real-world challenging scenarios. In this paper, we propose a deep end-to-end learning framework to tackle these challenges, which learns the direct mapping from a color image to the corresponding depth map. First, we represent monocular depth estimation as a multi-category dense labeling task by contrast to the regression-based formulation. In this way, we could build upon the recent progress in dense labeling such as semantic segmentation. Second, we fuse different side-outputs from our front-end dilated convolutional neural network in a hierarchical way to exploit the multi-scale depth cues for monocular depth estimation, which is critical in achieving scale-aware depth estimation. Third, we propose to utilize soft-weighted-sum inference instead of the hard-max inference, transforming the discretized depth scores to continuous depth values. Thus, we reduce the influence of quantization error and improve the robustness of our method. Extensive experiments have been conducted on the Make3D, NYU v2, and KITTI datasets and superior performance have been achieved on NYU v2 and KITTI datasets compared with current state-of-the-art methods, which shows the superiority of our method. Furthermore, experiments on the NYU v2 dataset reveal that our classification based model is able to learn the probability distribution of depth.",
"In this paper, we present our deep attention-based classification (DABC) network for robust single image depth prediction, in the context of the Robust Vision Challenge 2018 (ROB 2018). Unlike conventional depth prediction, our goal is to design a model that can perform well in both indoor and outdoor scenes with a single parameter set. However, robust depth prediction suffers from two challenging problems: a) How to extract more discriminative features for different scenes (compared to a single scene)? b) How to handle the large differences of depth ranges between indoor and outdoor datasets? To address these two problems, we first formulate depth prediction as a multi-class classification task and apply a softmax classifier to classify the depth label of each pixel. We then introduce a global pooling layer and a channel-wise attention mechanism to adaptively select the discriminative channels of features and to update the original features by assigning important channels with higher weights. Further, to reduce the influence of quantization errors, we employ a soft-weighted sum inference strategy for the final prediction. Experimental results on both indoor and outdoor datasets demonstrate the effectiveness of our method. It is worth mentioning that we won the 2-nd place in single image depth prediction entry of ROB 2018, in conjunction with IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018.",
"Predicting depth is an essential component in understanding the 3D geometry of a scene. While for stereo images local correspondence suffices for estimation, finding depth relations from a single image is less straightforward, requiring integration of both global and local information from various cues. Moreover, the task is inherently ambiguous, with a large source of uncertainty coming from the overall scale. In this paper, we present a new method that addresses this task by employing two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally. We also apply a scale-invariant error to help measure depth relations rather than scale. By leveraging the raw datasets as large sources of training data, our method achieves state-of-the-art results on both NYU Depth and KITTI, and matches detailed depth boundaries without the need for superpixelation.",
"Single-view depth prediction is a fundamental problem in computer vision. Recently, deep learning methods have led to significant progress, but such methods are limited by the available training data. Current datasets based on 3D sensors have key limitations, including indoor-only images (NYU), small numbers of training examples (Make3D), and sparse sampling (KITTI). We propose to use multi-view Internet photo collections, a virtually unlimited data source, to generate training data via modern structure-from-motion and multi-view stereo (MVS) methods, and present a large depth dataset called MegaDepth based on this idea. Data derived from MVS comes with its own challenges, including noise and unreconstructable objects. We address these challenges with new data cleaning methods, as well as automatically augmenting our data with ordinal depth relations generated using semantic segmentation. We validate the use of large amounts of Internet data by showing that models trained on MegaDepth exhibit strong generalization-not only to novel scenes, but also to other diverse datasets including Make3D, KITTI, and DIW, even when no images from those datasets are seen during training.1",
"",
"",
"To achieve parsimonious inference in per-pixel labeling tasks with a limited computational budget, we propose a unit () that learns to selectively process a subset of spatial locations at each layer of a deep convolutional network. PAG is a generic, architecture-independent, problem-agnostic mechanism that can be readily \"plugged in\" to an existing model with fine-tuning. We utilize PAG in two ways: 1) learning spatially varying pooling fields that improve model performance without the extra computation cost associated with multi-scale pooling, and 2) learning a dynamic computation policy for each pixel to decrease total computation while maintaining accuracy. We extensively evaluate PAG on a variety of per-pixel labeling tasks, including semantic segmentation, boundary detection, monocular depth and surface normal estimation. We demonstrate that PAG allows competitive or state-of-the-art performance on these tasks. Our experiments show that PAG learns dynamic spatial allocation of computation over the input image which provides better performance trade-offs compared to related approaches (e.g., truncating deep models or dynamically skipping whole layers). Generally, we observe PAG can reduce computation by @math without noticeable loss in accuracy and performance degrades gracefully when imposing stronger computational constraints.",
"Abstract In this work, we propose a novel deep Hierarchical Guidance and Regularization (HGR) learning framework for end-to-end monocular depth estimation, which well integrates a hierarchical depth guidance network and a hierarchical regularization learning method for fine-grained depth prediction. The two properties in our proposed HGR framework can be summarized as: (1) the hierarchical depth guidance network automatically learns hierarchical depth representations by supervision guidance and multiple side conv-operations from the basic CNN, leveraging the learned hierarchical depth representations to progressively guide the upsampling and prediction process of upper deconv-layers; (2) the hierarchical regularization learning method integrates various-level information of depth maps, optimizing the network to predict depth maps with similar structure to ground truth. Comprehensive evaluations over three public benchmark datasets (including NYU Depth V2, KITTI and Make3D datasets) well demonstrate the state-of-the-art performance of our proposed depth estimation framework."
]
} |
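The DORN entry above recasts depth regression as ordinal classification over spacing-increasing depth bins. A common way to obtain such bins (a sketch; the constants and exact thresholds here are assumptions, not necessarily DORN's) is to place the K+1 thresholds uniformly in log-depth between a minimum alpha and a maximum beta, so bins are fine nearby and coarse far away:

```python
# Spacing-increasing discretization of depth into ordinal bins.
import math

def sid_thresholds(alpha, beta, K):
    return [math.exp(math.log(alpha) + math.log(beta / alpha) * i / K)
            for i in range(K + 1)]

def depth_to_bin(d, thresholds):
    # Ordinal label: index of the bin containing depth d.
    for i in range(len(thresholds) - 1):
        if d < thresholds[i + 1]:
            return i
    return len(thresholds) - 2

t = sid_thresholds(1.0, 80.0, K=8)     # e.g., a 1 m .. 80 m range as on KITTI
print([round(x, 2) for x in t])        # bin edges grow multiplicatively
print(depth_to_bin(5.0, t), depth_to_bin(60.0, t))   # -> 2 7
```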
1902.00761 | 2913107100 | In this paper we propose a convolutional neural network that is designed to upsample a series of sparse range measurements based on the contextual cues gleaned from a high-resolution intensity image. Our approach draws inspiration from related work on super-resolution and in-painting. We propose a novel architecture that seeks to pull contextual cues separately from the intensity image and the depth features and then fuse them later in the network. We argue that this approach effectively exploits the relationship between the two modalities and produces accurate results while respecting salient image structures. We present experimental results to demonstrate that our approach is comparable with state-of-the-art methods and generalizes well across multiple datasets. | CNNs have been successfully used in dense stereo depth estimation tasks. Zbontar et al. proposed a Siamese network architecture to learn a similarity measure between two input patches. This similarity measure is then used as a matching cost input for a traditional stereo pipeline @cite_51. Recently, many end-to-end methods have been proposed that are able to generate accurate disparity images while preserving edges @cite_34 @cite_10 @cite_7 @cite_12. Of these, the work of Chang et al. is most similar to the network we propose: the authors present an end-to-end approach using spatial pyramid pooling to better learn global image-dependent features @cite_2. | {
"cite_N": [
"@cite_7",
"@cite_2",
"@cite_51",
"@cite_34",
"@cite_10",
"@cite_12"
],
"mid": [
"2886944874",
"2963619659",
"2963502507",
"2604231069",
"2793302268",
""
],
"abstract": [
"Disparity estimation for binocular stereo images finds a wide range of applications. Traditional algorithms may fail on featureless regions, which could be handled by high-level clues such as semantic segments. In this paper, we suggest that appropriate incorporation of semantic cues can greatly rectify prediction in commonly-used disparity estimation frameworks. Our method conducts semantic feature embedding and regularizes semantic cues as the loss term to improve learning disparity. Our unified model SegStereo employs semantic features from segmentation and introduces semantic softmax loss, which helps improve the prediction accuracy of disparity maps. The semantic cues work well in both unsupervised and supervised manners. SegStereo achieves state-of-the-art results on KITTI Stereo benchmark and produces decent prediction on both CityScapes and FlyingThings3D datasets.",
"Recent work has shown that depth estimation from a stereo pair of images can be formulated as a supervised learning task to be resolved with convolutional neural networks (CNNs). However, current architectures rely on patch-based Siamese networks, lacking the means to exploit context information for finding correspondence in ill-posed regions. To tackle this problem, we propose PSMNet, a pyramid stereo matching network consisting of two main modules: spatial pyramid pooling and 3D CNN. The spatial pyramid pooling module takes advantage of the capacity of global context information by aggregating context in different scales and locations to form a cost volume. The 3D CNN learns to regularize cost volume using stacked multiple hourglass networks in conjunction with intermediate supervision. The proposed approach was evaluated on several benchmark datasets. Our method ranked first in the KITTI 2012 and 2015 leaderboards before March 18, 2018. The codes of PSMNet are available at: https: github.com JiaRenChang PSMNet.",
"We present a method for extracting depth information from a rectified image pair. Our approach focuses on the first stage of many stereo algorithms: the matching cost computation. We approach the problem by learning a similarity measure on small image patches using a convolutional neural network. Training is carried out in a supervised manner by constructing a binary classification data set with examples of similar and dissimilar pairs of patches. We examine two network architectures for this task: one tuned for speed, the other for accuracy. The output of the convolutional neural network is used to initialize the stereo matching cost. A series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, a left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter. We evaluate our method on the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it outperforms other approaches on all three data sets.",
"We propose a novel deep learning architecture for regressing disparity from a rectified pair of stereo images. We leverage knowledge of the problem’s geometry to form a cost volume using deep feature representations. We learn to incorporate contextual information using 3-D convolutions over this volume. Disparity values are regressed from the cost volume using a proposed differentiable soft argmin operation, which allows us to train our method end-to-end to sub-pixel accuracy without any additional post-processing or regularization. We evaluate our method on the Scene Flow and KITTI datasets and on KITTI we set a new stateof-the-art benchmark, while being significantly faster than competing approaches.",
"Recently convolutional neural network (CNN) promotes the development of stereo matching greatly. Especially those end-to-end stereo methods achieve best performance. However less attention is paid on encoding context information, simplifying two-stage disparity learning pipeline and improving details in disparity maps. Differently we focus on these problems. Firstly, we propose an one-stage context pyramid based residual pyramid network (CP-RPN) for disparity estimation, in which a context pyramid is embedded to encode multi-scale context clues explicitly. Next, we design a CNN based multi-task learning network called EdgeStereo to recover missing details in disparity maps, utilizing mid-level features from edge detection task. In EdgeStereo, CP-RPN is integrated with a proposed edge detector HED @math based on two-fold multi-task interactions. The end-to-end EdgeStereo outputs the edge map and disparity map directly from a stereo pair without any post-processing or regularization. We discover that edge detection task and stereo matching task can help each other in our EdgeStereo framework. Comprehensive experiments on stereo benchmarks such as Scene Flow and KITTI 2015 show that our method achieves state-of-the-art performance.",
""
]
} |
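The Siamese matching-cost work above plugs a learned similarity into an otherwise classical stereo pipeline. The sketch below builds the classical part, a cost volume over candidate disparities, using a plain sum-of-absolute-differences cost; learned methods swap in CNN feature similarities but keep the same volume layout:

```python
# SAD cost volume over candidate disparities for a rectified pair.
import numpy as np

def sad_cost_volume(left, right, max_disp):
    h, w = left.shape
    cost = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        # Pixel x in the left image is compared with pixel x - d on the right.
        cost[d, :, d:] = np.abs(left[:, d:] - right[:, : w - d])
    return cost

rng = np.random.default_rng(1)
right = rng.random((32, 48))
left = np.roll(right, 3, axis=1)       # synthetic pair with ~3 px disparity
disp = sad_cost_volume(left, right, max_disp=8).argmin(axis=0)
print(np.bincount(disp[:, 8:].ravel()).argmax())   # -> 3
```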
1902.00761 | 2913107100 | In this paper we propose a convolutional neural network that is designed to upsample a series of sparse range measurements based on the contextual cues gleaned from a high-resolution intensity image. Our approach draws inspiration from related work on super-resolution and in-painting. We propose a novel architecture that seeks to pull contextual cues separately from the intensity image and the depth features and then fuse them later in the network. We argue that this approach effectively exploits the relationship between the two modalities and produces accurate results while respecting salient image structures. We present experimental results to demonstrate that our approach is comparable with state-of-the-art methods and generalizes well across multiple datasets. | Wang et al. propose a multi-scale feature fusion method for depth completion using sparse LiDAR data @cite_52. Ma et al. propose two methods: a supervised method for depth completion using a ResNet-based architecture @cite_24, and a self-supervised method which is currently the top-performing depth completion algorithm on the KITTI depth completion benchmark @cite_22. Their proposed self-supervised method uses the sparse LiDAR input along with pose estimates to add additional training information based on depth and photometric losses. | {
"cite_N": [
"@cite_24",
"@cite_22",
"@cite_52"
],
"mid": [
"2963045776",
"2810837462",
"2897245365"
],
"abstract": [
"We consider the problem of dense depth prediction from a sparse set of depth measurements and a single RGB image. Since depth estimation from monocular images alone is inherently ambiguous and unreliable, to attain a higher level of robustness and accuracy, we introduce additional sparse depth samples, which are either acquired with a low-resolution depth sensor or computed via visual Simultaneous Localization and Mapping (SLAM) algorithms. We propose the use of a single deep regression network to learn directly from the RGB-D raw data, and explore the impact of number of depth samples on prediction accuracy. Our experiments show that, compared to using only RGB images, the addition of 100 spatially random depth samples reduces the prediction root-mean-square error by 50 on the NYU-Depth-v2 indoor dataset. It also boosts the percentage of reliable prediction from 59 to 92 on the KITTI dataset. We demonstrate two applications of the proposed algorithm: a plug-in module in SLAM to convert sparse maps to dense maps, and super-resolution for LiDARs. Software22https: github.com fangchangma sparse-to-dense and video demonstration33https: www.youtube.com watch?v=vNIIT_M7×7Y are publicly available.",
"Depth completion, the technique of estimating a dense depth image from sparse depth measurements, has a variety of applications in robotics and autonomous driving. However, depth completion faces 3 main challenges: the irregularly spaced pattern in the sparse depth input, the difficulty in handling multiple sensor modalities (when color images are available), as well as the lack of dense, pixel-level ground truth depth labels. In this work, we address all these challenges. Specifically, we develop a deep regression model to learn a direct mapping from sparse depth (and color images) to dense depth. We also propose a self-supervised training framework that requires only sequences of color and sparse depth images, without the need for dense depth labels. Our experiments demonstrate that our network, when trained with semi-dense annotations, attains state-of-the- art accuracy and is the winning approach on the KITTI depth completion benchmark at the time of submission. Furthermore, the self-supervised framework outperforms a number of existing solutions trained with semi- dense annotations.",
"Recently deep learning-based methods for dense depth completion from sparse depth data have shown superior performance than traditional techniques. However, sparse depth data lose the details of the scenes, for instance, the spatial and texture information. To overcome this problem, additional single image is introduced and a multi-scale features fusion scheme to learn more correlations of the two different data is proposed. Furthermore, sparse convolution operation to improve feature robustness for sparse depth data is exploited. Experiments demonstrate that the approach obviously improves the performance for depth completion and outperforms all the previous published methods. The authors believe their works also have the guidance significance for stereo images depth estimation fused with sparse LiDAR depth data."
]
} |
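The self-supervised method of @cite_22 combines a supervised term on the sparse LiDAR returns with a photometric term between pose-warped frames. Below is a minimal sketch of the masked depth term only, assuming depth maps are 2-D arrays with zeros marking missing returns; the function name and the L1 choice are illustrative, not taken from the papers' code.

```python
import numpy as np

def sparse_depth_loss(pred, lidar, eps=1e-6):
    """Mean absolute error evaluated only at pixels with a valid LiDAR return."""
    mask = lidar > eps                          # zeros mark missing measurements
    return np.abs(pred[mask] - lidar[mask]).mean()

# Toy usage: only the single valid return at (0, 1) contributes to the loss.
pred = np.array([[1.0, 2.0], [3.0, 4.0]])
lidar = np.array([[0.0, 2.5], [0.0, 0.0]])
print(sparse_depth_loss(pred, lidar))           # 0.5
```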
1902.01023 | 2950220272 | We describe a novel pipeline to automatically discover hierarchies of repeated sections in musical audio. The proposed method uses similarity network fusion (SNF) to combine different frame-level features into clean affinity matrices, which are then used as input to spectral clustering. While prior spectral clustering approaches to music structure analysis have pre-processed affinity matrices with heuristics specifically designed for this task, we show that the SNF approach directly yields segmentations which agree better with human annotators, as measured by the "L-measure" metric for hierarchical annotations. Furthermore, the SNF approach immediately supports arbitrarily many input features, allowing us to simultaneously discover structure encoded in timbral, harmonic, and rhythmic representations without any changes to the base algorithm. | Similarity network fusion (SNF) is a joint random walk technique that was devised to leverage the strengths of different hand-designed similarity measures for shape classification of 2D contours in images @cite_4. It has since been used in such tasks as cancer phenotype discrimination @cite_17, image retrieval @cite_5, and drug taxonomy @cite_5. SNF was introduced to the music information retrieval community by the authors of @cite_12 to leverage different cross-similarity alignment scores in automatic cover song identification. As in the original application, they use SNF at the object (song) level. By contrast, it was shown in @cite_14 that using SNF at the feature level (i.e., beat-synchronous HPCP and MFCC) can improve cross-similarity matrices between pairs of covers without the need for a network of song-level similarity measures. A precursor to our work used SNF on frame-level features within a song to improve self-similarity matrices for visualization @cite_6 (a minimal sketch of the SNF iteration appears after this record). | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_6",
"@cite_5",
"@cite_12",
"@cite_17"
],
"mid": [
"2964083190",
"2009228073",
"",
"2885205930",
"2587288237",
"1987219048"
],
"abstract": [
"",
"Metric learning is a fundamental problem in computer vision. Different features and algorithms may tackle a problem from different angles, and thus often provide complementary information. In this paper, we propose a fusion algorithm which outputs enhanced metrics by combining multiple given metrics (similarity measures). Unlike traditional co-training style algorithms where multi-view features or multiple data subsets are used for classification or regression, we focus on fusing multiple given metrics through diffusion process in an unsupervised way. Our algorithm has its particular advantage when the input similarity matrices are the outputs from diverse algorithms. We provide both theoretical and empirical explanations to our method. Significant improvements over the state-of-the-art results have been observed on various benchmark datasets. For example, we have achieved 100 accuracy (no longer the bull's eye measure) on the MPEG-7 shape dataset. Our method has a wide range of applications in machine learning and computer vision.",
"",
"Abstract Similarity networks contain important topological features and patterns critical to understanding interactions among samples in a large dataset. To create a comprehensive view of the interactions within a dataset, the Similarity Network Fusion (SNF) technique has been proposed to fuse the similarity networks based on different data types into one similarity network that represents the full spectrum of underlying data. In this paper, a modified version of SNF, which is named as Contextual Information based SNF (CI-SNF), is proposed. In CI-SNF, first, modified Jaccard distance is performed on the SNF fused similarity to utilize the contextual information contained in the fused similarity network. Second, the local consistency of samples from the same category is enhanced by speculating that the samples which are located high in the Jaccard distance based ranking list of a specific query are from the same category as the query. Third, the inverted index technique is introduced to utilize the sparsity property of the locally consistent similarity network to enhance the computational efficiency. To verify the effectiveness and efficiency of CI-SNF model, it is applied in four different tasks, Cover Song Identification (CSI), image classification, cancer subtype identification, and drug taxonomy, respectively. Extensive experiments on thirteen challenging datasets demonstrate that CI-SNF scheme outperforms state-of-the-art similarity fusion algorithms including SNF in all four tasks. It is also verified that utilizing the contextual information contained in the SNF-based similarity network helps to enhance the performance of the SNF-based scheme, further.",
"Cover Song Identification (CSI) technique, refers to the process of identifying an alternative version, performance, rendition, or recording of a previously recorded musical composition by measuring and modeling the musical similarity between them quantitatively and objectively. However, it is not possible to describe the similarity between tracks comprehensively and reliably with only one similarity function. In this paper, the Similarity Network Fusion (SNF) technique, which was originally proposed for combining different kernels for predicting drug-target interactions, is adopted to fuse different similarities based on the same descriptor and different similarity functions. First, the Harmonic Pitch Class Profile (HPCP) is extracted from each track. Next, the similarities, in terms of Qmax and Dmax measures, between the HPCP descriptors of any two tracks are calculated, respectively. Then, the track-by-track similarity networks based on Qmax and on Dmax similarity are constructed separately and then fused into one network by SNF. Finally, the fused similarities obtained from the fused similarity network are adopted to train a classifier, which can then be used to identify whether the input two tracks belong to reference cover or reference non-cover pair. Experimental results on Covers80 (http: labrosa.ee.columbia.edu projects coversongs covers80 ), subset of SecondHandSongs (SHS) (http: labrosa.ee.columbia.edu millionsong secondhand), and the Mixed Collection and Mazurka Cover Collection provided by MIREX (http: www.music-ir.org mirex wiki 2016:Audio_Cover_Song_Identification) demonstrate that the proposed scheme performs comparably with or even better than state-of-the-art CSI schemes.",
"Similarity network fusion (SNF) is an approach to integrate multiple data types on the basis of similarity between biological samples rather than individual measurements. The authors demonstrate SNF by constructing patient networks to identify disease subtypes with differential survival profiles."
]
} |
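For reference, the SNF cross-diffusion described above can be sketched in a few lines. This is a simplified rendition (assuming at least two symmetric, non-negative affinity matrices, and omitting the normalization refinements of the published algorithm), not the authors' implementation.

```python
import numpy as np

def snf(affinities, k=20, iters=10):
    """Minimal similarity network fusion over a list of (n, n) affinity matrices."""
    def row_norm(W):
        return W / W.sum(axis=1, keepdims=True)

    def knn_kernel(W, k):
        # Keep only each row's k strongest neighbours, then row-normalize.
        S = np.zeros_like(W)
        idx = np.argsort(-W, axis=1)[:, :k]
        rows = np.arange(W.shape[0])[:, None]
        S[rows, idx] = W[rows, idx]
        return row_norm(S)

    P = [row_norm(W) for W in affinities]       # full (status) kernels
    S = [knn_kernel(W, k) for W in affinities]  # sparse (local) kernels
    for _ in range(iters):
        # Each view diffuses the average of the other views through its own kNN graph.
        P = [S[v] @ (sum(P[u] for u in range(len(P)) if u != v) / (len(P) - 1)) @ S[v].T
             for v in range(len(P))]
    return sum(P) / len(P)                      # fused affinity matrix
```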
1902.01023 | 2950220272 | We describe a novel pipeline to automatically discover hierarchies of repeated sections in musical audio. The proposed method uses similarity network fusion (SNF) to combine different frame-level features into clean affinity matrices, which are then used as input to spectral clustering. While prior spectral clustering approaches to music structure analysis have pre-processed affinity matrices with heuristics specifically designed for this task, we show that the SNF approach directly yields segmentations which agree better with human annotators, as measured by the "L-measure" metric for hierarchical annotations. Furthermore, the SNF approach immediately supports arbitrarily many input features, allowing us to simultaneously discover structure encoded in timbral, harmonic, and rhythmic representations without any changes to the base algorithm. | As for music structure analysis, the present work builds directly upon the Laplacian spectral decomposition (LSD) method @cite_1. This method operates by carefully constructing a graph which encodes short-term timbral continuity along with long-term harmonic repetition, and then partitions the graph at multiple scales to recover multi-level segmentations. While this can be effective, the graph construction depends heavily upon the choice of input features, and the resulting method can be somewhat brittle in practice. The method we propose here, in contrast, supports the fusion of arbitrarily many input representations, which facilitates the discovery of both long- and short-range structure along many different musical dimensions, including timbre, harmony, and rhythm (a minimal spectral clustering sketch appears after this record). | {
"cite_N": [
"@cite_1"
],
"mid": [
"2406676415"
],
"abstract": [
"Many approaches to analyzing the structure of a musical recording involve detecting sequential patterns within a selfsimilarity matrix derived from time-series features. Such patterns ideally capture repeated sequences, which then form the building blocks of large-scale structure. In this work, techniques from spectral graph theory are applied to analyze repeated patterns in musical recordings. The proposed method produces a low-dimensional encoding of repetition structure, and exposes the hierarchical relationships among structural components at differing levels of granularity. Finally, we demonstrate how to apply the proposed method to the task of music segmentation."
]
} |
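The multi-scale partitioning that LSD-style methods rely on is, at its core, normalized spectral clustering of an affinity matrix. A minimal sketch under that reading follows; an SNF-fused matrix can be passed directly as A, and scikit-learn's KMeans handles the final assignment. Names are illustrative, not the authors' code.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_segment(A, n_segments):
    """Cluster frames of a self-similarity matrix A via the normalized Laplacian."""
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    _, vecs = np.linalg.eigh(L)                 # eigenvalues in ascending order
    Y = vecs[:, :n_segments]                    # bottom eigenvectors span the segments
    Y /= np.maximum(np.linalg.norm(Y, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=n_segments, n_init=10).fit_predict(Y)
```

Varying n_segments yields segmentations at different levels of the hierarchy, which is how multi-level structure is recovered from a single affinity matrix.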
1902.00772 | 2914554019 | We provide a high-dimensional semi-supervised inference framework focused on the mean and variance of the response. Our data are comprised of an extensive set of observations regarding the covariate vectors and a much smaller set of labeled observations where we observe both the response as well as the covariates. We allow the size of the covariates to be much larger than the sample size and impose weak conditions on a statistical form of the data. We provide new estimators of the mean and variance of the response that extend some of the recent results presented in low-dimensional models. In particular, at times we will not necessitate consistent estimation of the functional form of the data. Together with estimation of the population mean and variance, we provide their asymptotic distribution and confidence intervals where we showcase gains in efficiency compared to the sample mean and variance. Our procedure, with minor modifications, is then presented to make important contributions regarding inference about average treatment effects. We also investigate the robustness of estimation and coverage and showcase widespread applicability and generality of the proposed method. | The main contribution of our work is both the construction of new estimators and asymptotic normality results that accommodate the semi-supervised setting as well as the high-dimensionality phenomenon. Recent results of @cite_15 @cite_3 @cite_33 consider the class of graph-oriented semi-supervised learning algorithms and proceed to establish estimation or prediction properties of semi-supervised estimators when the number of observed responses is much higher than the number of features in the data. New work of @cite_11 @cite_10 @cite_21 develops semi-supervised approaches for estimating the conditional mean of the responses. They utilize the additional information to learn about the marginal distribution of the covariates and thereby reduce the bias of the local linear estimator. Meanwhile, @cite_7 proposed a semi-supervised estimator of the explained variance; apart from addressing a different question, their approach requires exact model specification. Semi-supervised inference in the context of classification has a long tradition; see @cite_8 @cite_20 @cite_9 @cite_1. A major limitation of this line of work is that it has lacked formal statistical inference results. | {
"cite_N": [
"@cite_33",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_3",
"@cite_15",
"@cite_10",
"@cite_20",
"@cite_11"
],
"mid": [
"",
"2808068360",
"",
"",
"2963601606",
"2550243005",
"2962720813",
"2110316582",
"2795425324",
"2145494108",
"2581201870"
],
"abstract": [
"",
"We consider statistical inference for the explained variance @math under the high-dimensional linear model @math in the semi-supervised setting, where @math is the regression vector and @math is the design covariance matrix. A calibrated estimator, which efficiently integrates both labelled and unlabelled data, is proposed. It is shown that the estimator achieves the minimax optimal rate of convergence in the general semi-supervised framework. The optimality result characterizes how the unlabelled data affects the minimax optimal rate. Moreover, the limiting distribution for the proposed estimator is established and data-driven confidence intervals for the explained variance are constructed. We further develop a randomized calibration technique for statistical inference in the presence of weak signals and apply the obtained inference results to a range of important statistical problems, including signal detection and global testing, prediction accuracy evaluation, and confidence ball construction. The numerical performance of the proposed methodology is demonstrated in simulation studies and an analysis of estimating heritability for a yeast segregant data set with multiple traits.",
"",
"",
"In many modern machine learning applications, the outcome is expensive or time consuming to collect whereas the predictor information is easy to obtain. Semi†supervised (SS) learning aims at utilizing large amounts of ‘unlabelled’ data along with small amounts of ‘labelled’ data to improve the efficiency of a classical supervised approach. Though numerous SS learning classification and prediction procedures have been proposed in recent years, no methods currently exist to evaluate the prediction performance of a working regression model. In the context of developing phenotyping algorithms derived from electronic medical records, we present an efficient two†step estimation procedure for evaluating a binary classifier based on various prediction performance measures in the SS setting. In step I, the labelled data are used to obtain a non†parametrically calibrated estimate of the conditional risk function. In step II, SS estimates of the prediction accuracy parameters are constructed based on the estimated conditional risk function and the unlabelled data. We demonstrate that, under mild regularity conditions, the estimators proposed are consistent and asymptotically normal. Importantly, the asymptotic variance of the SS estimators is always smaller than that of the supervised counterparts under correct model specification. We also correct for potential overfitting bias in the SS estimators in finite samples with cross†validation and we develop a perturbation resampling procedure to approximate their distributions. Our proposals are evaluated through extensive simulation studies and illustrated with two real electronic medical record studies aiming to develop phenotyping algorithms for rheumatoid arthritis and multiple sclerosis.",
"We consider semi-supervised regression when the predictor variables are drawn from an unknown manifold. A simple two step approach to this problem is to: (i) estimate the manifold geodesic distance between any pair of points using both the labeled and unlabeled instances; and (ii) apply a k nearest neighbor regressor based on these distance estimates. We prove that given sufficiently many unlabeled points, this simple method of geodesic kNN regression achieves the optimal finite-sample minimax bound on the mean squared error, as if the manifold were known. Furthermore, we show how this approach can be efficiently implemented, requiring only O(k N log N) operations to estimate the regression function at all N labeled and unlabeled points. We illustrate this approach on two datasets with a manifold structure: indoor localization using WiFi fingerprints and facial pose estimation. In both cases, geodesic kNN is more accurate and much faster than the popular Laplacian eigenvector regressor.",
"Given a weighted graph with N vertices, consider a real-valued regression problem in a semisupervised setting, where one observes n labeled vertices, and the task is to label the remaining ones. We present a theoretical study of p-based Laplacian regularization under a d-dimensional geometric random graph model. We provide a variational characterization of the performance of this regularized learner as N grows to infinity while n stays constant; the associated optimality conditions lead to a partial differential equation that must be satisfied by the associated function estimate f . From this formulation we derive several predictions on the limiting behavior the function f , including (a) a phase transition in its smoothness at the threshold p = d + 1; and (b) a tradeoff between smoothness and sensitivity to the underlying unlabeled data distribution P . Thus, over the range p ≤ d, the function estimate f is degenerate and “spiky,” whereas for p ≥ d + 1, the function estimate f is smooth. We show that the effect of the underlying density vanishes monotonically with p, such that in the limit p = ∞, corresponding to the so-called Absolutely Minimal Lipschitz Extension, the estimate f is independent of the distribution P . Under the assumption of semi-supervised smoothness, ignoring P can lead to poor statistical performance; in particular, we construct a specific example for d = 1 to demonstrate that p = 2 has lower risk than p =∞ due to the former penalty adapting to P and the latter ignoring it. We also provide simulations that verify the accuracy of our predictions for finite sample sizes. Together, these properties show that p = d + 1 is an optimal choice, yielding a function estimate f that is both smooth and non-degenerate, while remaining maximally sensitive to P .",
"Semi-supervised methods use unlabeled data in addition to labeled data to construct predictors. While existing semi-supervised methods have shown some promising empirical performance, their development has been based largely based on heuristics. In this paper we study semi-supervised learning from the viewpoint of minimax theory. Our first result shows that some common methods based on regularization using graph Laplacians do not lead to faster minimax rates of convergence. Thus, the estimators that use the unlabeled data do not have smaller risk than the estimators that use only labeled data. We then develop several new approaches that provably lead to improved performance. The statistical tools of minimax analysis are thus used to offer some new perspective on the problem of semi-supervised learning.",
"There is strong interest in conducting comparative effectiveness research (CER) in electronic medical records (EMR) to evaluate treatment strategies among real-world patients. Inferring causal effects in EMR data, however, is challenging due to the lack of direct observation on pre-specified gold-standard outcomes, in addition to the observational nature of the data. Extracting gold-standard outcomes often requires labor-intensive medical chart review, which is unfeasible for large studies. While one may impute outcomes and estimate average treatment effects (ATE) based on imputed data, naive imputations may lead to biased ATE estimators. In this paper, we frame the problem of estimating the ATE in a semi-supervised learning setting, where a small set of observations is labeled with the true outcome via manual chart review and a large set of unlabeled observations with features predictive of the outcome are available. We develop an imputation-based approach for estimating the ATE that is robust to misspecification of the imputation model. This allows information from the predictive features to be safely leveraged to improve the efficiency in estimating the ATE. The estimator is additionally doubly-robust in that it is consistent under correct specification of either an initial propensity score model or a baseline outcome model. We show that it is locally semiparametric efficient under an ideal semi-supervised model where the distribution of unlabeled data is known. Simulations exhibit the efficiency and robustness of the proposed method compared to existing approaches in finite samples. We illustrate the method to compare rates of treatment response to two biologic agents for treating inflammatory bowel disease using EMR data from Partner's Healthcare.",
"We consider the semi-supervised learning problem, where a decision rule is to be learned from labeled and unlabeled data. In this framework, we motivate minimum entropy regularization, which enables to incorporate unlabeled data in the standard supervised learning. Our approach includes other approaches to the semi-supervised problem as particular or limiting cases. A series of experiments illustrates that the proposed solution benefits from unlabeled data. The method challenges mixture models when the data are sampled from the distribution class spanned by the generative model. The performances are definitely in favor of minimum entropy regularization when generative models are misspecified, and the weighting of unlabeled data provides robustness to the violation of the \"cluster assumption\". Finally, we also illustrate that the method can also be far superior to manifold learning in high dimension spaces.",
"We consider the linear regression problem under semi-supervised settings wherein the available data typically consists of: (i) a small or moderate sized 'labeled' data, and (ii) a much larger sized 'unlabeled' data. Such data arises naturally from settings where the outcome, unlike the covariates, is expensive to obtain, a frequent scenario in modern studies involving large databases like electronic medical records (EMR). Supervised estimators like the ordinary least squares (OLS) estimator utilize only the labeled data. It is often of interest to investigate if and when the unlabeled data can be exploited to improve estimation of the regression parameter in the adopted linear model. In this paper, we propose a class of 'Efficient and Adaptive Semi-Supervised Estimators' (EASE) to improve estimation efficiency. The EASE are two-step estimators adaptive to model mis-specification, leading to improved (optimal in some cases) efficiency under model mis-specification, and equal (optimal) efficiency under a linear model. This adaptive property, often unaddressed in the existing literature, is crucial for advocating 'safe' use of the unlabeled data. The construction of EASE primarily involves a flexible 'semi-non-parametric' imputation, including a smoothing step that works well even when the number of covariates is not small; and a follow up 'refitting' step along with a cross-validation (CV) strategy both of which have useful practical as well as theoretical implications towards addressing two important issues: under-smoothing and over-fitting. We establish asymptotic results including consistency, asymptotic normality and the adaptive properties of EASE. We also provide influence function expansions and a 'double' CV strategy for inference. The results are further validated through extensive simulations, followed by application to an EMR study on auto-immunity."
]
} |
1902.00772 | 2914554019 | We provide a high-dimensional semi-supervised inference framework focused on the mean and variance of the response. Our data are comprised of an extensive set of observations regarding the covariate vectors and a much smaller set of labeled observations where we observe both the response as well as the covariates. We allow the size of the covariates to be much larger than the sample size and impose weak conditions on a statistical form of the data. We provide new estimators of the mean and variance of the response that extend some of the recent results presented in low-dimensional models. In particular, at times we will not necessitate consistent estimation of the functional form of the data. Together with estimation of the population mean and variance, we provide their asymptotic distribution and confidence intervals where we showcase gains in efficiency compared to the sample mean and variance. Our procedure, with minor modifications, is then presented to make important contributions regarding inference about average treatment effects. We also investigate the robustness of estimation and coverage and showcase widespread applicability and generality of the proposed method. | A small but growing literature, including @cite_16 @cite_28, has considered the development of semi-supervised estimators. Much of this work is inspired by a model-free regression framework; see for example @cite_4. These papers discuss estimation of the mean and variance and use least-squares methods for which they report confidence intervals as well as asymptotic distributions (a minimal sketch of this least-squares correction appears after this record). To our knowledge, however, we provide a set of conditions under which efficient estimation of the response parameters is established without firm guarantees on the model specification as well as without strong restrictions on the dimensionality of the feature space. | {
"cite_N": [
"@cite_28",
"@cite_16",
"@cite_4"
],
"mid": [
"",
"2464913620",
"2786668885"
],
"abstract": [
"",
"We propose a general semi-supervised inference framework focused on the estimation of the population mean. As usual in semi-supervised settings, there exists an unlabeled sample of covariate vectors and a labeled sample consisting of covariate vectors along with real-valued responses (\"labels\"). Otherwise, the formulation is \"assumption-lean\" in that no major conditions are imposed on the statistical or functional form of the data. We consider both the ideal semi-supervised setting where infinitely many unlabeled samples are available, as well as the ordinary semi-supervised setting in which only a finite number of unlabeled samples is available. Estimators are proposed along with corresponding confidence intervals for the population mean. Theoretical analysis on both the asymptotic distribution and @math -risk for the proposed procedures are given. Surprisingly, the proposed estimators, based on a simple form of the least squares method, outperform the ordinary sample mean. The simple, transparent form of the estimator lends confidence to the perception that its asymptotic improvement over the ordinary sample mean also nearly holds even for moderate size samples. The method is further extended to a nonparametric setting, in which the oracle rate can be achieved asymptotically. The proposed estimators are further illustrated by simulation studies and a real data example involving estimation of the homeless population.",
"For the last two decades, high-dimensional data and methods have proliferated throughout the literature. The classical technique of linear regression, however, has not lost its touch in applications. Most high-dimensional estimation techniques can be seen as variable selection tools which lead to a smaller set of variables where classical linear regression technique applies. In this paper, we prove estimation error and linear representation bounds for the linear regression estimator uniformly over (many) subsets of variables. Based on deterministic inequalities, our results provide \"good\" rates when applied to both independent and dependent data. These results are useful in correctly interpreting the linear regression estimator obtained after exploring the data and also in post model-selection inference. All the results are derived under no model assumptions and are non-asymptotic in nature."
]
} |
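The least-squares semi-supervised mean estimator studied in @cite_16 corrects the labeled-sample mean by the covariate shift between the labeled sample and the full sample. Below is a low-dimensional illustration of that idea (in the high-dimensional regime of the paper above, the slope fit would be regularized; the function name is illustrative). One natural route to a variance estimator applies the same correction to the squared responses.

```python
import numpy as np

def ss_mean(X_lab, y_lab, X_unlab):
    """Semi-supervised estimate of E[Y]: labeled mean plus a regression correction."""
    n = len(y_lab)
    Z = np.column_stack([np.ones(n), X_lab])            # add an intercept
    beta = np.linalg.lstsq(Z, y_lab, rcond=None)[0][1:] # OLS slopes on labeled data
    x_all = np.vstack([X_lab, X_unlab]).mean(axis=0)    # covariate mean, all samples
    return y_lab.mean() + beta @ (x_all - X_lab.mean(axis=0))
```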
1902.00772 | 2914554019 | We provide a high-dimensional semi-supervised inference framework focused on the mean and variance of the response. Our data are comprised of an extensive set of observations regarding the covariate vectors and a much smaller set of labeled observations where we observe both the response as well as the covariates. We allow the size of the covariates to be much larger than the sample size and impose weak conditions on a statistical form of the data. We provide new estimators of the mean and variance of the response that extend some of the recent results presented in low-dimensional models. In particular, at times we will not necessitate consistent estimation of the functional form of the data. Together with estimation of the population mean and variance, we provide their asymptotic distribution and confidence intervals where we showcase gains in efficiency compared to the sample mean and variance. Our procedure, with minor modifications, is then presented to make important contributions regarding inference about average treatment effects. We also investigate the robustness of estimation and coverage and showcase widespread applicability and generality of the proposed method. | Several papers use generalized methods for estimating heterogeneous treatment effects as well as the treatment effect size. Random forest approaches appear in @cite_0. Other related approaches include those of @cite_6 and @cite_26, which build meta-learners that can accommodate many types of nonparametric or machine-learning methods; however, these papers do not analyze semi-supervised settings and the possible improvements therein over using labeled observations only. In improving treatment-effect estimates over the labeled observations, we closely follow our work on semi-supervised estimation of the mean, whereas for the treatment effect size we build further on our work on semi-supervised estimation of the variance of the responses. As we will show, we can achieve asymptotic normality under relaxed assumptions; in particular, consistent estimation is no longer needed for both of the unknowns: consistency of one suffices (a minimal doubly robust ATE sketch appears after this record). | {
"cite_N": [
"@cite_0",
"@cite_26",
"@cite_6"
],
"mid": [
"2208550830",
"2583860259",
"2624816748"
],
"abstract": [
"AbstractMany scientific and engineering challenges—ranging from personalized medicine to customized marketing recommendations—require an understanding of treatment effect heterogeneity. In this paper, we develop a non-parametric causal forest for estimating heterogeneous treatment effects that extends Breiman's widely used random forest algorithm. In the potential outcomes framework with unconfoundedness, we show that causal forests are pointwise consistent for the true treatment effect, and have an asymptotically Gaussian and centered sampling distribution. We also discuss a practical method for constructing asymptotic confidence intervals for the true treatment effect that are centered at the causal forest estimates. Our theoretical results rely on a generic Gaussian theory for a large family of random forest algorithms. To our knowledge, this is the first set of results that allows any type of random forest, including classification and regression forests, to be used for provably valid statistical infe...",
"(2016) provide a generic double de-biased machine learning (ML) approach for obtaining valid inferential statements about focal parameters, using Neyman-orthogonal scores and cross-fitting, in settings where nuisance parameters are estimated using ML methods. In this note, we illustrate the application of this method in the context of estimating average treatment effects and average treatment effects on the treated using observational data.",
"There is growing interest in estimating and analyzing heterogeneous treatment effects in experimental and observational studies. We describe a number of metaalgorithms that can take advantage of any supervised learning or regression method in machine learning and statistics to estimate the conditional average treatment effect (CATE) function. Metaalgorithms build on base algorithms—such as random forests (RFs), Bayesian additive regression trees (BARTs), or neural networks—to estimate the CATE, a function that the base algorithms are not designed to estimate directly. We introduce a metaalgorithm, the X-learner, that is provably efficient when the number of units in one treatment group is much larger than in the other and can exploit structural properties of the CATE function. For example, if the CATE function is linear and the response functions in treatment and control are Lipschitz-continuous, the X-learner can still achieve the parametric rate under regularity conditions. We then introduce versions of the X-learner that use RF and BART as base learners. In extensive simulation studies, the X-learner performs favorably, although none of the met alearners is uniformly the best. In two persuasion field experiments from political science, we demonstrate how our X-learner can be used to target treatment regimes and to shed light on underlying mechanisms. A software package is provided that implements our methods."
]
} |
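The double/debiased machine-learning ATE estimator of @cite_26 reduces, for a single fold, to the augmented inverse-propensity-weighted (AIPW) score sketched below; consistency of either the outcome models or the propensity model suffices, mirroring the "one consistency suffices" property discussed above. A minimal sketch, assuming the nuisance fits m1, m0, e are given (they would be cross-fitted in practice); names are illustrative.

```python
import numpy as np

def aipw_ate(y, t, m1, m0, e):
    """Doubly robust ATE from outcomes y, binary treatments t, outcome-model
    predictions m1 = E[Y|X, T=1], m0 = E[Y|X, T=0], and propensities e = P(T=1|X)."""
    return np.mean(m1 - m0
                   + t * (y - m1) / e
                   - (1 - t) * (y - m0) / (1 - e))
```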
1902.00991 | 2911299112 | Quantum computing (QC) is an emerging computing paradigm with the potential to revolutionize the field of computing. QC is a field that is quickly developing globally and has high barriers to entry. In this paper we explore both successful contributors to the field as well as the wider QC community with the goal of understanding the backgrounds and training that helped them succeed. We gather data on 148 contributors to open-source quantum computing projects hosted on GitHub and survey 46 members of the QC community. Our findings show that QC practitioners and enthusiasts have diverse backgrounds, with most of them holding a PhD and trained in physics or computer science. We observe a lack of educational resources on quantum computing. Our goal for these findings is to start a conversation about how best to prepare the next generation of QC researchers and practitioners. | Quantum computing is still a very young field, with software projects constantly pushing the boundary of human knowledge. As such, quantum computing software projects have to deal with all the problems that plague scientific software projects: unforeseen changes in the requirements, lack of software development expertise and limited budgets @cite_23. The cross-disciplinary nature of quantum computing adds to the complexity of the domain. | {
"cite_N": [
"@cite_23"
],
"mid": [
"2613448825"
],
"abstract": [
"Evolution in scientific software is often according to a specific pattern of software changes: professional scientists, who are not professional software developers, need rapid, dynamic, and domain-specific changes of the software they work this. To address unanticipated software evolution in this field, our objective is to enable these end-users (here: biologists) to change software from the user interface. An approach is presented that integrates technological and methodological solutions. We explain why these solutions are complementary, and how they can be integrated and co-evolved from software design to actual use."
]
} |
1902.00991 | 2911299112 | Quantum computing (QC) is an emerging computing paradigm with the potential to revolutionize the field of computing. QC is a field that is quickly developing globally and has high barriers to entry. In this paper we explore both successful contributors to the field as well as the wider QC community with the goal of understanding the backgrounds and training that helped them succeed. We gather data on 148 contributors to open-source quantum computing projects hosted on GitHub and survey 46 members of the QC community. Our findings show that QC practitioners and enthusiasts have diverse backgrounds, with most of them holding a PhD and trained in physics or computer science. We observe a lack of educational resources on quantum computing. Our goal for these findings is to start a conversation about how best to prepare the next generation of QC researchers and practitioners. | Unlike in classical computation, where computation happens by manipulating bits, the fundamental computational unit in QC is the qubit. A bit can have one of two states: 0 or 1. Similarly, a qubit state is a unit vector in a two-dimensional complex vector space @cite_31. A qubit state can be encoded in the state of a quantum mechanical object, for example as the polarization of a single photon @cite_31 (a minimal state-vector sketch appears after this record). The field of quantum computing is in a state of constant change, and is generally expected to continue changing in the foreseeable future. In the past few years multiple Noisy Intermediate-Scale Quantum (NISQ) hardware implementations have been developed @cite_16 and demonstrated to provide a potential for quantum speedups @cite_27. Naturally, different implementations come with certain trade-offs. For example, trapped ion qubits are generally less noisy and offer better connectivity, whereas superconducting qubits offer faster gate clock speeds and a clearer path to scalability @cite_20. This diversity of hardware introduces an additional degree of complexity for the development of QC algorithms and software, forcing algorithm developers to stay aware of the trade-offs presented by hardware. | {
"cite_N": [
"@cite_27",
"@cite_31",
"@cite_16",
"@cite_20"
],
"mid": [
"2883700436",
"1631356911",
"2781738013",
"2586874551"
],
"abstract": [
"Suppose we have a small quantum computer with only M qubits. Can such a device genuinely speed up certain algorithms, even when the problem size is much larger than M? Here we answer this question to the affirmative. We present a hybrid quantum-classical algorithm to solve 3SAT problems involving n>>M variables that significantly speeds up its fully classical counterpart. This question may be relevant in view of the current quest to build small quantum computers.",
"Preface Acknowledgement Nomenclature and notation Part I. Fundamental Concepts: 1. Introduction and overview 2. Introduction to quantum mechanics 3. Introduction to computer science Part II. Quantum Computation: 4. Quantum circuits 5. The quantum Fourier transform and its applications 6. Quantum search algorithms 7. Quantum computers: physical realisation Part III. Quantum Information: 8. Quantum noise, open quantum systems, and quantum operations 9. Distance measurement for quantum information 10. Quantum error-correction 11. Entropy and information 12. Quantum information theory Appendix A. Notes on basic probability theory Appendix B. Group theory Appendix C. Approximating quantum gates: the Solvay-Kitaev theorem Appendix D. Number theory Appendix E. Public-key cryptography and the RSA cryptosystem Appendix F. Proof of Lieb's theorem References Index.",
"Noisy Intermediate-Scale Quantum (NISQ) technology will be available in the near future. Quantum computers with 50-100 qubits may be able to perform tasks which surpass the capabilities of today's classical digital computers, but noise in quantum gates will limit the size of quantum circuits that can be executed reliably. NISQ devices will be useful tools for exploring many-body quantum physics, and may have other useful applications, but the 100-qubit quantum computer will not change the world right away --- we should regard it as a significant step toward the more powerful quantum technologies of the future. Quantum technologists should continue to strive for more accurate quantum gates and, eventually, fully fault-tolerant quantum computing.",
"We run a selection of algorithms on two state-of-the-art 5-qubit quantum computers that are based on different technology platforms. One is a publicly accessible superconducting transmon device (www.research.ibm.com ibm-q) with limited connectivity, and the other is a fully connected trapped-ion system. Even though the two systems have different native quantum interactions, both can be programed in a way that is blind to the underlying hardware, thus allowing a comparison of identical quantum algorithms between different physical systems. We show that quantum algorithms and circuits that use more connectivity clearly benefit from a better-connected system of qubits. Although the quantum systems here are not yet large enough to eclipse classical computers, this experiment exposes critical factors of scaling quantum computers, such as qubit connectivity and gate expressivity. In addition, the results suggest that codesigning particular quantum applications with the hardware itself will be paramount in successfully using quantum computers in the future."
]
} |
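As a concrete illustration of the qubit-as-unit-vector picture above, here is a minimal state-vector computation in plain numpy (no quantum SDK assumed):

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)                       # |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

psi = H @ ket0                                               # (|0> + |1>) / sqrt(2)
probs = np.abs(psi) ** 2                                     # Born rule
assert np.isclose(probs.sum(), 1.0)                          # unit vector by construction
print(probs)                                                 # [0.5 0.5]
```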
1902.00991 | 2911299112 | Quantum computing (QC) is an emerging computing paradigm with the potential to revolutionize the field of computing. QC is a field that is quickly developing globally and has high barriers to entry. In this paper we explore both successful contributors to the field as well as the wider QC community with the goal of understanding the backgrounds and training that helped them succeed. We gather data on 148 contributors to open-source quantum computing projects hosted on GitHub and survey 46 members of the QC community. Our findings show that QC practitioners and enthusiasts have diverse backgrounds, with most of them holding a PhD and trained in physics or computer science. We observe a lack of educational resources on quantum computing. Our goal for these findings is to start a conversation about how best to prepare the next generation of QC researchers and practitioners. | A plethora of algorithms leveraging the power of quantum computation have been developed over the years. Shor's @cite_5 and Grover's @cite_10 algorithms are the two most well-known examples of quantum algorithms for practical problems with theoretically proven speed-ups over the classical state of the art. However, the limitations of NISQ-era hardware make most of them impossible to run in the near term. Near-term quantum computers are widely believed to provide no more than a few hundred non-error-corrected qubits. To address this challenge, a number of NISQ approaches have been proposed, most prominently the Variational Quantum Eigensolver (VQE) @cite_11 and the Quantum Approximate Optimization Algorithm (QAOA) @cite_21 (a minimal VQE sketch appears after this record). The limitations of near-term hardware make the development of practical algorithms especially challenging. | {
"cite_N": [
"@cite_5",
"@cite_21",
"@cite_10",
"@cite_11"
],
"mid": [
"2175346381",
"1568345435",
"2084652510",
"2161685427"
],
"abstract": [
"",
"We introduce a quantum algorithm that produces approximate solutions for combinatorial optimization problems. The algorithm depends on a positive integer p and the quality of the approximation improves as p is increased. The quantum circuit that implements the algorithm consists of unitary gates whose locality is at most the locality of the objective function whose optimum is sought. The depth of the circuit grows linearly with p times (at worst) the number of constraints. If p is fixed, that is, independent of the input size, the algorithm makes use of efficient classical preprocessing. If p grows with the input size a different strategy is proposed. We study the algorithm as applied to MaxCut on regular graphs and analyze its performance on 2-regular and 3-regular graphs for fixed p. For p = 1, on 3-regular graphs the quantum algorithm always finds a cut that is at least 0.6924 times the size of the optimal cut.",
"were proposed in the early 1980’s [Benioff80] and shown to be at least as powerful as classical computers an important but not surprising result, since classical computers, at the deepest level, ultimately follow the laws of quantum mechanics. The description of quantum mechanical computers was formalized in the late 80’s and early 90’s [Deutsch85][BB92] [BV93] [Yao93] and they were shown to be more powerful than classical computers on various specialized problems. In early 1994, [Shor94] demonstrated that a quantum mechanical computer could efficiently solve a well-known problem for which there was no known efficient algorithm using classical computers. This is the problem of integer factorization, i.e. testing whether or not a given integer, N, is prime, in a time which is a finite power of o (logN) . ----------------------------------------------",
"Quantum computers promise to efficiently solve problems that would be practically impossible with a normal computer. develop a variational computation approach that uses any available quantum resources and, with a photonic quantum processing unit, find the ground-state molecular energy of He–H+."
]
} |
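To make the VQE idea above concrete: a classical optimizer tunes circuit parameters to minimize the energy expectation of a Hamiltonian. The sketch below replaces the quantum device with exact state-vector arithmetic for a toy single-qubit Hamiltonian; the Hamiltonian, ansatz, and optimizer choice are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

Z = np.diag([1.0, -1.0])                       # Pauli-Z
X = np.array([[0.0, 1.0], [1.0, 0.0]])         # Pauli-X
H = 0.5 * Z + 0.3 * X                          # toy single-qubit Hamiltonian

def energy(theta):
    """<psi(theta)|H|psi(theta)> for the ansatz Ry(theta)|0>."""
    psi = np.array([np.cos(theta[0] / 2), np.sin(theta[0] / 2)])
    return psi @ H @ psi

result = minimize(energy, x0=[0.1], method="COBYLA")   # classical outer loop
print(result.fun, np.linalg.eigvalsh(H)[0])            # VQE estimate vs exact ground energy
```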
1902.00981 | 2911373802 | Estimating what would be an individual's potential response to varying levels of exposure to a treatment is of high practical relevance for several important fields, such as healthcare, economics and public policy. However, existing methods for learning to estimate counterfactual outcomes from observational data are either focused on estimating average dose-response curves, or limited to settings with only two treatments that do not have an associated dosage parameter. Here, we present a novel machine-learning approach towards learning counterfactual representations for estimating individual dose-response curves for any number of treatments with continuous dosage parameters with neural networks. Building on the established potential outcomes framework, we introduce performance metrics, model selection criteria, model architectures, and open benchmarks for estimating individual dose-response curves. Our experiments show that the methods developed in this work set a new state-of-the-art in estimating individual dose-response. | Causal analysis of treatment effects with rigorous experiments is, in many domains, an essential tool for validating interventions. In medicine, prospective experiments, such as RCTs, are the de facto gold standard to evaluate whether a given treatment is efficacious in treating a specific indication across a population @cite_11 @cite_7. However, performing prospective experiments is expensive, time-consuming, and often not possible for ethical reasons @cite_18. Historically, there has therefore been considerable interest in developing methodologies for performing causal inference using readily available observational data @cite_27 @cite_24 @cite_19 @cite_15 @cite_1 @cite_33 @cite_17. The naïve approach of training supervised models to minimise the observed factual error is in general not a suitable choice for counterfactual inference tasks due to treatment assignment bias and the inability to observe counterfactual outcomes. To address the shortcomings of unsupervised and supervised learning in this setting, several adaptations to established machine-learning methods that aim to enable the estimation of counterfactual outcomes from observational data have recently been proposed @cite_38 @cite_21 @cite_0 @cite_39 @cite_35 @cite_30 @cite_23 @cite_36. In this work, we build on several of these advances to develop a novel machine-learning framework for estimating individual dose-response. | {
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_18",
"@cite_35",
"@cite_33",
"@cite_7",
"@cite_36",
"@cite_21",
"@cite_1",
"@cite_17",
"@cite_39",
"@cite_24",
"@cite_19",
"@cite_27",
"@cite_0",
"@cite_23",
"@cite_15",
"@cite_11"
],
"mid": [
"2963944907",
"2964271126",
"2009815693",
"2716974933",
"2298338128",
"2413861456",
"2894488843",
"2962695761",
"",
"2885825670",
"2964115178",
"2138290169",
"2150291618",
"2178225550",
"2208550830",
"2785777814",
"2009187570",
"216342353"
],
"abstract": [
"Learning individual-level causal effects from observational data, such as inferring the most effective medication for a specific patient, is a problem of growing importance for policy makers. The most important aspect of inferring causal effects from observational data is the handling of confounders, factors that affect both an intervention and its outcome. A carefully designed observational study attempts to measure all important confounders. However, even if one does not have direct access to all confounders, there may exist noisy and uncertain measurement of proxies for confounders. We build on recent advances in latent variable modeling to simultaneously estimate the unknown latent space summarizing the confounders and the causal effect. Our method is based on Variational Autoencoders (VAE) which follow the causal structure of inference with proxies. We show our method is significantly more robust than existing methods, and matches the state-of-the-art on previous benchmarks focused on individual treatment effects.",
"Observational studies are rising in importance due to the widespread accumulation of data in fields such as healthcare, education, employment and ecology. We consider the task of answering counterfactual questions such as, \"Would this patient have lower blood sugar had she received a different medication?\". We propose a new algorithmic framework for counterfactual inference which brings together ideas from domain adaptation and representation learning. In addition to a theoretical justification, we perform an empirical comparison with previous approaches to causal inference from observational data. Our deep learning algorithm significantly outperforms the previous state-of-the-art.",
"THE ethics of medical experimentation on human subjects has attracted much attention in recent years. There has, however, been rather less attention paid to the special ethical problems and dilemmas posed by the randomized clinical trial. The sheer number of such trials, the risks and costs that they involve, and the dangers that are posed both by permitting and by restricting their use would seem to warrant further ethical analysis of the randomized clinical trial. This article attempts to distinguish some of the major ethical problems posed by the randomized clinical trial, to set out some of the principal considerations . . .",
"We propose a novel approach for inferring the individualized causal effects of a treatment (intervention) from observational data. Our approach conceptualizes causal inference as a multitask learning problem; we model a subject's potential outcomes using a deep multitask network with a set of shared layers among the factual and counterfactual outcomes, and a set of outcome-specific layers. The impact of selection bias in the observational data is alleviated via a propensity-dropout regularization scheme, in which the network is thinned for every training example via a dropout probability that depends on the associated propensity score. The network is trained in alternating phases, where in each phase we use the training examples of one of the two potential outcomes (treated and control populations) to update the weights of the shared layers and the respective outcome-specific layers. Experiments conducted on data based on a real-world observational study show that our algorithm outperforms the state-of-the-art.",
"Abstract Ideally, questions about comparative effectiveness or safety would be answered using an appropriately designed and conducted randomized experiment. When we cannot conduct a randomized experiment, we analyze observational data. Causal inference from large observational databases (big data) can be viewed as an attempt to emulate a randomized experiment-the target experiment or target trial-that would answer the question of interest. When the goal is to guide decisions among several strategies, causal analyses of observational data need to be evaluated with respect to how well they emulate a particular target trial. We outline a framework for comparative effectiveness research using big data that makes the target trial explicit. This framework channels counterfactual theory for comparing the effects of sustained treatment strategies, organizes analytic approaches, provides a structured process for the criticism of observational studies, and helps avoid common methodologic pitfalls.",
"Randomized, controlled trials have become the gold standard of medical knowledge. Yet their scientific and political history offers lessons about the complexity of medicine and disease and the economic and political forces shaping the production and circulation of knowledge.",
"Learning representations for counterfactual inference from observational data is of high practical relevance for many domains, such as healthcare, public policy and economics. Counterfactual inference enables one to answer \"What if...?\" questions, such as \"What would be the outcome if we gave this patient treatment @math ?\". However, current methods for training neural networks for counterfactual inference on observational data are either overly complex, limited to settings with only two available treatment options, or both. Here, we present Perfect Match (PM), a method for training neural networks for counterfactual inference that is easy to implement, compatible with any architecture, does not add computational complexity or hyperparameters, and extends to any number of treatments. PM is based on the idea of augmenting samples within a minibatch with their propensity-matched nearest neighbours. Our experiments demonstrate that PM outperforms a number of more complex state-of-the-art methods in inferring counterfactual outcomes across several real-world and semi-synthetic datasets.",
"There is intense interest in applying machine learning to problems of causal inference in fields such as healthcare, economics and education. In particular, individual-level causal inference has important applications such as precision medicine. We give a new theoretical analysis and family of algorithms for predicting individual treatment effect (ITE) from observational data, under the assumption known as strong ignorability. The algorithms learn a \"balanced\" representation such that the induced treated and control distributions look similar, and we give a novel and intuitive generalization-error bound showing the expected ITE estimation error of a representation is bounded by a sum of the standard generalization-error of that representation and the distance between the treated and control distributions induced by the representation. We use Integral Probability Metrics to measure distances between distributions, deriving explicit bounds for the Wasserstein and Maximum Mean Discrepancy (MMD) distances. Experiments on real and simulated data show the new algorithms match or outperform the state-of-the-art.",
"",
"Recent progress in artificial intelligence has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats that of humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn and how they learn it. Specifically, we argue that these machines should (1) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (2) ground learning in intuitive theories of physics and psychology to support and enrich the knowledge that is learned; and (3) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes toward these goals that can combine the strengths of recent neural network advances with more structured cognitive models.",
"Predicated on the increasing abundance of electronic health records, we investigate the problem of inferring individualized treatment effects using observational data. Stemming from the potential outcomes model, we propose a novel multi-task learning framework in which factual and counterfactual outcomes are modeled as the outputs of a function in a vector-valued reproducing kernel Hilbert space (vvRKHS). We develop a nonparametric Bayesian method for learning the treatment effects using a multi-task Gaussian process (GP) with a linear coregionalization kernel as a prior over the vvRKHS. The Bayesian approach allows us to compute individualized measures of confidence in our estimates via pointwise credible intervals, which are crucial for realizing the full potential of precision medicine. The impact of selection bias is alleviated via a risk-based empirical Bayes method for adapting the multi-task GP prior, which jointly minimizes the empirical error in factual outcomes and the uncertainty in (unobserved) counterfactual outcomes. We conduct experiments on observational datasets for an interventional social program applied to premature infants, and a left ventricular assist device applied to cardiac patients wait-listed for a heart transplant. In both experiments, we show that our method significantly outperforms the state-of-the-art.",
"Abstract We outline a framework for causal inference in settings where assignment to a binary treatment is ignorable, but compliance with the assignment is not perfect so that the receipt of treatment is nonignorable. To address the problems associated with comparing subjects by the ignorable assignment—an “intention-to-treat analysis”—we make use of instrumental variables, which have long been used by economists in the context of regression models with constant treatment effects. We show that the instrumental variables (IV) estimand can be embedded within the Rubin Causal Model (RCM) and that under some simple and easily interpretable assumptions, the IV estimand is the average causal effect for a subgroup of units, the compliers. Without these assumptions, the IV estimand is simply the ratio of intention-to-treat causal estimands with no interpretation as an average causal effect. The advantages of embedding the IV approach in the RCM are that it clarifies the nature of critical assumptions needed for a...",
"Abstract : The results of observational studies are often disputed because of nonrandom treatment assignment. For example, patients at greater risk may be overrepresented in some treatment group. This paper discusses the central role of propensity scores and balancing scores in the analysis of observational studies. The propensity score is the (estimated) conditional probability of assignment to a particular treatment given a vector of observed covariates. Both large and small sample theory show that adjustment for the scalar propensity score is sufficient to remove bias due to all observed covariates. Applications include: matched sampling on the univariate propensity score which is equal percent bias reducing under more general conditions than required for discriminant matching, multivariate adjustment by subclassification on balancing scores where the same subclasses are used to estimate treatment effects for all outcome variables and in all subpopulations, and visual representation of multivariate adjustment by a two-dimensional plot. (Author)",
"There occurs on some occasions a difficulty in deciding the direction of causality between two related variables and also whether or not feedback is occurring. Testable definitions of causality and feedback are proposed and illustrated by use of simple two-variable models. The important problem of apparent instantaneous causality is discussed and it is suggested that the problem often arises due to slowness in recording information or because a sufficiently wide class of possible causal variables has not been used. It can be shown that the cross spectrum between two variables can be decomposed into two parts, each relating to a single causal arm of a feedback situation. Measures of causal lag and causal strength can then be constructed. A generalisation of this result with the partial cross spectrum is suggested.",
"AbstractMany scientific and engineering challenges—ranging from personalized medicine to customized marketing recommendations—require an understanding of treatment effect heterogeneity. In this paper, we develop a non-parametric causal forest for estimating heterogeneous treatment effects that extends Breiman's widely used random forest algorithm. In the potential outcomes framework with unconfoundedness, we show that causal forests are pointwise consistent for the true treatment effect, and have an asymptotically Gaussian and centered sampling distribution. We also discuss a practical method for constructing asymptotic confidence intervals for the true treatment effect that are centered at the causal forest estimates. Our theoretical results rely on a generic Gaussian theory for a large family of random forest algorithms. To our knowledge, this is the first set of results that allows any type of random forest, including classification and regression forests, to be used for provably valid statistical infe...",
"Estimating individualized treatment effects (ITE) is a challenging task due to the need for an individual's potential outcomes to be learned from biased data and without having access to the counterfactuals. We propose a novel method for inferring ITE based on the Generative Adversarial Nets (GANs) framework. Our method, termed Generative Adversarial Nets for inference of Individualized Treatment Effects (GANITE), is motivated by the possibility that we can capture the uncertainty in the counterfactual distributions by attempting to learn them using a GAN. We generate proxies of the counterfactual outcomes using a counterfactual generator, G, and then pass these proxies to an ITE generator, I, in order to train it. By modeling both of these using the GAN framework, we are able to infer based on the factual data, while still accounting for the unseen counterfactuals. We test our method on three real-world datasets (with both binary and multiple treatments) and show that GANITE outperforms state-of-the-art methods.",
"In observational studies with exposures or treatments that vary over time, standard approaches for adjustment of confounding are biased when there exist time-dependent confounders that are also affected by previous treatment. This paper introduces marginal structural models, a new class of causal mo",
"LIST OF ILLUSTRATIONS ix LIST OF TABLES xi ACKNOWLEDGMENTS xiii LIST OF ABBREVIATIONS AND ACRONYMS xvii INTRODUCTION: The Gatekeeper 1 CHAPTER ONE: Reputation and Regulatory Power 33 PART ONE: ORGANIZATIONAL EMPOWERMENT AND CHALLENGE CHAPTER TWO: Reputation and Gatekeeping Authority: The Federal Food, Drug and Cosmetic Act of 1938 and Its Aftermath 73 CHAPTER THREE: The Ambiguous Emergence of American Pharmaceutical Regulation, 1944-1961 118 CHAPTER FOUR: Reputation and Power Crystallized: Thalidomide, Frances Kelsey, and Phased Experiment, 1961-1966 228 CHAPTER FIVE: Reputation and Power Institutionalized: Scientific Networks, Congressional Hearings, and Judicial Affirmation, 1963-1986 298 CHAPTER SIX: Reputation and Power Contested: Emboldened Audiences in Cancer and AIDS, 1977-1992 393 PART TWO: PHARMACEUTICAL REGULATION AND ITS AUDIENCES CHAPTER SEVEN: Reputation and the Organizational Politics of New Drug Review 465 CHAPTER EIGHT: The Governance of Research and Development: Gatekeeping Power, Conceptual Guidance, and Regulation by Satellite 544 CHAPTER NINE: The Other Side of the Gate: Reputation, Power, and Post-Market Regulation 585 CHAPTER TEN: The Detente of Firm and Regulator 635 CHAPTER ELEVEN: American Pharmaceutical Regulation in International Context: Audiences, Comparisons, and Dependencies 686 CHAPTER TWELVE: Conclusion: A Reputation in Relief 727 PRIMARY SOURCES AND ARCHIVAL COLLECTIONS 753 INDEX 759"
]
} |
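To make the propensity-score machinery in the reference abstracts above concrete, here is a minimal sketch of inverse propensity weighting (IPW) on synthetic observational data. The data-generating process, variable names, and the use of scikit-learn's `LogisticRegression` are illustrative assumptions, not a reconstruction of any cited method.

```python
# Hedged sketch: IPW estimate of the average treatment effect (ATE) on
# synthetic confounded data, versus the naive difference in means.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 3))                      # observed covariates
true_ps = 1 / (1 + np.exp(-x[:, 0]))             # confounded assignment probability
t = rng.binomial(1, true_ps)                     # treatment indicator
y = 2.0 * t + x[:, 0] + rng.normal(size=n)       # outcome; true ATE = 2.0

ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]  # estimated propensity
ipw_ate = np.mean(t * y / ps - (1 - t) * y / (1 - ps))      # Horvitz-Thompson style
naive_ate = y[t == 1].mean() - y[t == 0].mean()
print(f"naive difference: {naive_ate:.2f}, IPW estimate: {ipw_ate:.2f}")
```

On this example the naive difference in means overstates the true effect of 2.0, while weighting by the estimated propensity score corrects for the confounding carried by the first covariate.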
1902.00981 | 2911373802 | Estimating what would be an individual's potential response to varying levels of exposure to a treatment is of high practical relevance for several important fields, such as healthcare, economics and public policy. However, existing methods for learning to estimate counterfactual outcomes from observational data are either focused on estimating average dose-response curves, or limited to settings with only two treatments that do not have an associated dosage parameter. Here, we present a novel machine-learning approach towards learning counterfactual representations for estimating individual dose-response curves for any number of treatments with continuous dosage parameters with neural networks. Building on the established potential outcomes framework, we introduce performance metrics, model selection criteria, model architectures, and open benchmarks for estimating individual dose-response curves. Our experiments show that the methods developed in this work set a new state-of-the-art in estimating individual dose-response. | In contrast to existing methods, we present the first doubly-robust, non-linear machine-learning approach to learn to estimate individual dose-response curves for multiple available treatments with a continuous dosage parameter from observational data. We address treatment assignment bias using several known regularisation schemes for counterfactual inference. To facilitate future research in this important area, we introduce new performance metrics, model selection criteria, and open benchmarks. We believe this work could be particularly important for applications in precision medicine, where the current state-of-the-art of estimating the average dose response across the entire population does not take into account individual differences, even though large differences in dose-response between individuals are well-documented for many diseases @cite_31 @cite_10 @cite_34 . | {
"cite_N": [
"@cite_31",
"@cite_34",
"@cite_10"
],
"mid": [
"2013536237",
"2081246578",
"2095913156"
],
"abstract": [
"Substantial interpatient variations were demonstrated in the daily doses required to obtain therapeutic gentamicin sulfate serum concentrations in 417 elderly patients. Dosages ranged from 0.3 to 22.0 mg kg day in patients with a normal serum creatinine level. Twenty-five percent of these patients required daily doses higher than the standard regimen of 5 mg kg day, and 33 required less than 3 mg kg day. The drug half-lives in these patients ranged from 0.3 to 32.7 hours, compared with previous reports of 2.5 to four hours. The distribution volumes of these patients ranged from 0.07 to 0.53 L kg, compared with reported values of 0.20 to 0.25 L kg. These wide variations in kinetic variables in elderly patients and the need to obtain narrow ranges in serum concentrations required measuring serum concentrations and individually calculating each patient's dosage requirement early in the treatment course. Doing this consistently produced optimal peak and trough serum levels. Ototoxicity did not occur in any of the patients, and nephrotoxicity may have been drug related in 2 of the elderly patients. ( JAMA 1982;248:3122-3126)",
"ContextMore than 50 million US adults take aspirin regularly for long-term prevention of cardiovascular disease, typically either 81 mg d or 325 mg d. Controversy remains regarding the most appropriate long-term daily dose.ObjectiveTo review the mechanism of action of aspirin and the clinical literature for relationships among aspirin dosage, efficacy, and safety.Evidence AcquisitionA systematic review of the English-language literature was undertaken using MEDLINE and EMBASE (searched through February 2007) and the search term aspirin or acetylsalicylic acid and dose. The search was limited to clinical trials and was extended by a review of bibliographies of pertinent reports of original data and review articles. Published prospective studies using different aspirin dosages in the setting of cardiovascular disease were included.Evidence SynthesisAlthough pharmacodynamic data demonstrate that long-term aspirin dosages as low as 30 mg d are adequate to fully inhibit platelet thromboxane production, dosages as high as 1300 mg d are approved for use. In the United States, 81 mg d of aspirin is prescribed most commonly (60 ), followed by 325 mg d (35 ). The available evidence, predominantly from secondary-prevention observational studies, supports that dosages greater than 75 to 81 mg d do not enhance efficacy, whereas larger dosages are associated with an increased incidence of bleeding events, primarily related to gastrointestinal tract toxicity.ConclusionsCurrently available clinical data do not support the routine, long-term use of aspirin dosages greater than 75 to 81 mg d in the setting of cardiovascular disease prevention. Higher dosages, which may be commonly prescribed, do not better prevent events but are associated with increased risks of gastrointestinal bleeding.",
"The pharmacokinetics of midazolam and its metabolites were studied in 17 patients on mechanical ventilation in a general intensive care unit who were receiving a continuous intravenous infusion of midazolam, adjusted according to the level of induced sedation. Three patients were studied twice. Serum midazolam and α-hydroxymidazolamglucuronide levels were determined during and after infusion. The sedation level was scored on a four-point scale. Half of the observed patients were still drowsy or asleep 10 hours after termination of midazolam infusion. In only one patient was midazolam serum elimination half-life 10 hours. A wide range of midazolam serum levels was associated with adequate sedation, and similarly the midazolam levels at the moment of awakening were highly variable. The serum concentration ratio of midazolam α -hydroxymidazolamglucuronide at the end of the infusion varied from 0.03 to 15.6. Renal function could account for only a part of this variation. Clinical Pharmacology and Therapeutics (1988) 43, 263–269; doi:10.1038 clpt.1988.31"
]
} |
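Since the record above evaluates individual dose-response curves, a worked sketch of one plausible evaluation metric may help: a mean integrated squared error (MISE) between true and predicted per-unit curves, approximated on a dosage grid. The synthetic curves and the trapezoidal integration are illustrative assumptions, not the paper's benchmark code.

```python
# Hedged sketch of a dose-response evaluation metric: average over units of
# the integral (over dosage) of the squared curve error.
import numpy as np

def mise(true_curve, pred_curve, dosages):
    """Mean integrated squared error over a dosage grid."""
    sq_err = (true_curve - pred_curve) ** 2          # shape: (units, grid)
    return np.trapz(sq_err, dosages, axis=1).mean()  # trapezoidal integration

dosages = np.linspace(0.0, 1.0, 65)
units = np.random.default_rng(1).uniform(0.5, 2.0, size=(100, 1))
true = units * np.sin(np.pi * dosages)               # per-unit response curves
pred = true + 0.05 * np.random.default_rng(2).normal(size=true.shape)
print(f"MISE: {mise(true, pred, dosages):.4f}")
```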
1902.01040 | 2963730393 | Glaucoma is a serious ocular disorder for which the screening and diagnosis are carried out by the examination of the optic nerve head (ONH). The color fundus image (CFI) is the most common modality used for ocular screening. In CFI, the central region which is the optic disc and the optic cup region within the disc are examined to determine one of the important cues for glaucoma diagnosis called the optic cup-to-disc ratio (CDR). CDR calculation requires accurate segmentation of optic disc and cup. Another important cue for glaucoma progression is the variation of depth in the ONH region. In this paper, we first propose a deep learning framework to estimate depth from a single fundus image. For the case of monocular retinal depth estimation, we are also plagued by the labeled data insufficiency. To overcome this problem we adopt the technique of pretraining the deep network where, instead of using a denoising autoencoder, we propose a new pretraining scheme called pseudo-depth reconstruction, which serves as a proxy task for retinal depth estimation. Empirically, we show pseudo-depth reconstruction to be a better proxy task than denoising. Our results outperform the existing techniques for depth estimation on the INSPIRE dataset. To extend the use of depth map for optic disc and cup segmentation, we propose a novel fully convolutional guided network, where, along with the color fundus image the network uses the depth map as a guide. We propose a convolutional block called multimodal feature extraction block to extract and fuse the features of the color image and the guide image. We extensively evaluate the proposed segmentation scheme on three datasets: ORIGA, RIMONEr3, and DRISHTI-GS. The performance of the method is comparable and, in many cases, outperforms the most recent state of the art. | OD can be seen as a prominent circular region in fundus images, generally brighter than the surrounding regions. Because of these characteristics, one of the most common techniques employed is template matching, exemplified in @cite_19 , where the Hough Transform is applied to features extracted by morphological operations to fit an ellipse or a circle. As an improvement over template-matching-based techniques, deformable methods such as Snakes @cite_9 and level-sets @cite_21 apply energy minimization based on handcrafted features. These features are generally based on some form of gradient information and are hence sensitive to abnormalities like peripapillary atrophy around the optic disc. Further, they are very sensitive to initialization. There have also been classification-based methods where handcrafted features are extracted from superpixels @cite_34 to classify each superpixel as belonging to either the OD or the background. These methods tend not to be robust, as handcrafted features have innate limitations. | {
"cite_N": [
"@cite_19",
"@cite_9",
"@cite_21",
"@cite_34"
],
"mid": [
"2104324599",
"2170203892",
"2159395868",
"2081178133"
],
"abstract": [
"Optic disc (OD) detection is an important step in developing systems for automated diagnosis of various serious ophthalmic pathologies. This paper presents a new template-based methodology for segmenting the OD from digital retinal images. This methodology uses morphological and edge detection techniques followed by the Circular Hough Transform to obtain a circular OD boundary approximation. It requires a pixel located within the OD as initial information. For this purpose, a location methodology based on a voting-type algorithm is also proposed. The algorithms were evaluated on the 1200 images of the publicly available MESSIDOR database. The location procedure succeeded in 99 of cases, taking an average computational time of 1.67 s. with a standard deviation of 0.14 s. On the other hand, the segmentation algorithm rendered an average common area overlapping between automated segmentations and true OD regions of 86 . The average computational time was 5.69 s with a standard deviation of 0.54 s. Moreover, a discussion on advantages and disadvantages of the models more generally used for OD segmentation is also presented in this paper.",
"Reliable and efficient optic disk localization and segmentation are important tasks in automated retinal screening. General-purpose edge detection algorithms often fail to segment the optic disk due to fuzzy boundaries, inconsistent image contrast or missing edge features. This paper presents an algorithm for the localization and segmentation of the optic nerve head boundary in low-resolution images (about 20 spl mu pixel). Optic disk localization is achieved using specialized template matching, and segmentation by a deformable contour model. The latter uses a global elliptical model and a local deformable model with variable edge-strength dependent stiffness. The algorithm is evaluated against a randomly selected database of 100 images from a diabetic screening programme. Ten images were classified as unusable; the others were of variable quality. The localization algorithm succeeded on all bar one usable image; the contour estimation algorithm was qualitatively assessed by an ophthalmologist as having Excellent-Fair performance in 83 of cases, and performs well even on blurred images.",
"Glaucoma is a leading cause of permanent blindness. However, disease progression can be limited if detected early. The optic cup-to-disc ratio (CDR) is one of the main clinical indicators of glaucoma, and is currently determined manually, limiting its potential in mass screening. In this paper, we propose an automatic CDR determination method using a variational level-set approach to segment the optic disc and cup from retinal fundus images. The method is a core component of ARGALI, a system for automated glaucoma risk assessment. Threshold analysis is used in pre-processing to estimate the initial contour. Due to the presence of retinal vasculature traversing the disc and cup boundaries which can cause inaccuracies in the detected contours, an ellipse-fitting post-processing step is also introduced. The method was tested on 104 images from the Singapore Malay Eye Study, and it was found the results produced a clinically acceptable variation of up to 0.2 CDR units from the manually graded samples, with potential use in mass screening.",
"Glaucoma is a chronic eye disease that leads to vision loss. As it cannot be cured, detecting the disease in time is important. Current tests using intraocular pressure (IOP) are not sensitive enough for population based glaucoma screening. Optic nerve head assessment in retinal fundus images is both more promising and superior. This paper proposes optic disc and optic cup segmentation using superpixel classification for glaucoma screening. In optic disc segmentation, histograms, and center surround statistics are used to classify each superpixel as disc or non-disc. A self-assessment reliability score is computed to evaluate the quality of the automated optic disc segmentation. For optic cup segmentation, in addition to the histograms and center surround statistics, the location information is also included into the feature space to boost the performance. The proposed segmentation methods have been evaluated in a database of 650 images with optic disc and optic cup boundaries manually marked by trained professionals. Experimental results show an average overlapping error of 9.5 and 24.1 in optic disc and optic cup segmentation, respectively. The results also show an increase in overlapping error as the reliability score is reduced, which justifies the effectiveness of the self-assessment. The segmented optic disc and optic cup are then used to compute the cup to disc ratio for glaucoma screening. Our proposed method achieves areas under curve of 0.800 and 0.822 in two data sets, which is higher than other methods. The methods can be used for segmentation and glaucoma screening. The self-assessment will be used as an indicator of cases with large errors and enhance the clinical deployment of the automatic segmentation and screening."
]
} |
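The related-work passage above describes template matching for optic disc localization via morphology plus a circular Hough transform. A rough sketch of that classical pipeline in OpenCV follows; the file name and all parameter values (kernel size, Hough thresholds, radius range) are illustrative and would need tuning per dataset.

```python
# Hedged sketch: morphological smoothing to suppress vessels, then a
# circular Hough transform to propose an optic disc boundary.
import cv2
import numpy as np

img = cv2.imread("fundus.png")                       # hypothetical input image
green = img[:, :, 1]                                 # OD is bright in the green channel
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
smooth = cv2.morphologyEx(green, cv2.MORPH_CLOSE, kernel)  # close over vessels
blur = cv2.medianBlur(smooth, 5)

circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=2, minDist=200,
                           param1=100, param2=30, minRadius=40, maxRadius=90)
if circles is not None:
    x, y, r = (int(v) for v in np.round(circles[0, 0]))  # strongest circle ~ OD
    cv2.circle(img, (x, y), r, (0, 255, 0), 2)
    cv2.imwrite("od_candidate.png", img)
```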
1902.01040 | 2963730393 | Glaucoma is a serious ocular disorder for which the screening and diagnosis are carried out by the examination of the optic nerve head (ONH). The color fundus image (CFI) is the most common modality used for ocular screening. In CFI, the central region which is the optic disc and the optic cup region within the disc are examined to determine one of the important cues for glaucoma diagnosis called the optic cup-to-disc ratio (CDR). CDR calculation requires accurate segmentation of optic disc and cup. Another important cue for glaucoma progression is the variation of depth in the ONH region. In this paper, we first propose a deep learning framework to estimate depth from a single fundus image. For the case of monocular retinal depth estimation, we are also plagued by the labeled data insufficiency. To overcome this problem we adopt the technique of pretraining the deep network where, instead of using a denoising autoencoder, we propose a new pretraining scheme called pseudo-depth reconstruction, which serves as a proxy task for retinal depth estimation. Empirically, we show pseudo-depth reconstruction to be a better proxy task than denoising. Our results outperform the existing techniques for depth estimation on the INSPIRE dataset. To extend the use of depth map for optic disc and cup segmentation, we propose a novel fully convolutional guided network, where, along with the color fundus image the network uses the depth map as a guide. We propose a convolutional block called multimodal feature extraction block to extract and fuse the features of the color image and the guide image. We extensively evaluate the proposed segmentation scheme on three datasets: ORIGA, RIMONEr3, and DRISHTI-GS. The performance of the method is comparable and, in many cases, outperforms the most recent state of the art. | The level-set method @cite_21 also addresses optic cup segmentation, using features based on pallor information. But again, it tends not to be robust in cases lacking marked changes in pallor between the disc and cup. Vessel kinks in the ONH region have been found to be informative @cite_0 for the OC segmentation task. Such vessel bends or kinks are found using the wavelet transform or curvature information, so these approaches must solve the difficult subproblem of accurately detecting vessel bends and kinks. Moreover, the assumption that vessel bends lie on the cup boundary might be data-specific. | {
"cite_N": [
"@cite_0",
"@cite_21"
],
"mid": [
"2108824200",
"2159395868"
],
"abstract": [
"Automatic retinal image analysis is emerging as an important screening tool for early detection of eye diseases. Glaucoma is one of the most common causes of blindness. The manual examination of optic disk (OD) is a standard procedure used for detecting glaucoma. In this paper, we present an automatic OD parameterization technique based on segmented OD and cup regions obtained from monocular retinal images. A novel OD segmentation method is proposed which integrates the local image information around each point of interest in multidimensional feature space to provide robustness against variations found in and around the OD region. We also propose a novel cup segmentation method which is based on anatomical evidence such as vessel bends at the cup boundary, considered relevant by glaucoma experts. Bends in a vessel are robustly detected using a region of support concept, which automatically selects the right scale for analysis. A multi-stage strategy is employed to derive a reliable subset of vessel bends called r-bends followed by a local spline fitting to derive the desired cup boundary. The method has been evaluated on 138 images comprising 33 normal and 105 glaucomatous images against three glaucoma experts. The obtained segmentation results show consistency in handling various geometric and photometric variations found across the dataset. The estimation error of the method for vertical cup-to-disk diameter ratio is 0.09 0.08 (mean standard deviation) while for cup-to-disk area ratio it is 0.12 0.10. Overall, the obtained qualitative and quantitative results show effectiveness in both segmentation and subsequent OD parameterization for glaucoma assessment.",
"Glaucoma is a leading cause of permanent blindness. However, disease progression can be limited if detected early. The optic cup-to-disc ratio (CDR) is one of the main clinical indicators of glaucoma, and is currently determined manually, limiting its potential in mass screening. In this paper, we propose an automatic CDR determination method using a variational level-set approach to segment the optic disc and cup from retinal fundus images. The method is a core component of ARGALI, a system for automated glaucoma risk assessment. Threshold analysis is used in pre-processing to estimate the initial contour. Due to the presence of retinal vasculature traversing the disc and cup boundaries which can cause inaccuracies in the detected contours, an ellipse-fitting post-processing step is also introduced. The method was tested on 104 images from the Singapore Malay Eye Study, and it was found the results produced a clinically acceptable variation of up to 0.2 CDR units from the manually graded samples, with potential use in mass screening."
]
} |
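Both related-work passages on this paper ultimately serve CDR estimation, so a small sketch of computing the vertical cup-to-disc ratio from binary disc and cup masks may be useful. The concentric toy masks below are an assumption for illustration; real masks would come from a segmentation model.

```python
# Hedged sketch: vertical cup-to-disc ratio (CDR) from boolean HxW masks.
import numpy as np

def vertical_extent(mask):
    rows = np.where(mask.any(axis=1))[0]     # rows containing the structure
    return 0 if rows.size == 0 else rows[-1] - rows[0] + 1

def vertical_cdr(disc_mask, cup_mask):
    disc_h = vertical_extent(disc_mask)
    return vertical_extent(cup_mask) / disc_h if disc_h else float("nan")

# toy example: concentric "disc" and "cup" regions
yy, xx = np.mgrid[0:100, 0:100]
disc = (yy - 50) ** 2 + (xx - 50) ** 2 < 40 ** 2
cup = (yy - 50) ** 2 + (xx - 50) ** 2 < 18 ** 2
print(f"vertical CDR: {vertical_cdr(disc, cup):.2f}")   # roughly 18/40 = 0.45
```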
1902.01040 | 2963730393 | Glaucoma is a serious ocular disorder for which the screening and diagnosis are carried out by the examination of the optic nerve head (ONH). The color fundus image (CFI) is the most common modality used for ocular screening. In CFI, the central region which is the optic disc and the optic cup region within the disc are examined to determine one of the important cues for glaucoma diagnosis called the optic cup-to-disc ratio (CDR). CDR calculation requires accurate segmentation of optic disc and cup. Another important cue for glaucoma progression is the variation of depth in the ONH region. In this paper, we first propose a deep learning framework to estimate depth from a single fundus image. For the case of monocular retinal depth estimation, we are also plagued by the labeled data insufficiency. To overcome this problem we adopt the technique of pretraining the deep network where, instead of using a denoising autoencoder, we propose a new pretraining scheme called pseudo-depth reconstruction, which serves as a proxy task for retinal depth estimation. Empirically, we show pseudo-depth reconstruction to be a better proxy task than denoising. Our results outperform the existing techniques for depth estimation on the INSPIRE dataset. To extend the use of depth map for optic disc and cup segmentation, we propose a novel fully convolutional guided network, where, along with the color fundus image the network uses the depth map as a guide. We propose a convolutional block called multimodal feature extraction block to extract and fuse the features of the color image and the guide image. We extensively evaluate the proposed segmentation scheme on three datasets: ORIGA, RIMONEr3, and DRISHTI-GS. The performance of the method is comparable and, in many cases, outperforms the most recent state of the art. | Very recently, there have been several works based on deep learning for OD and OC segmentation. In @cite_26 , the authors proposed a method where convolutional neural network (CNN) filters are learned in a greedy manner and then used for feature extraction, after which pixelwise predictions and the final segmentation map are obtained using graph-cut and convex-hull transformations. The network contains fully connected layers and, because of the pipeline of separate steps involved, is not end-to-end. In @cite_30 , the authors proposed a fully convolutional end-to-end OD-OC segmentation method. Recently, in @cite_1 the authors proposed a multiscale network based on a modified U-net @cite_12 architecture for OD-OC segmentation. They applied a polar transformation (PT) to the RGB fundus images and segmentation maps before feeding the images to the network, and finally applied the inverse PT to the output. State-of-the-art results were reported using PT, but the pipeline is not end-to-end. We propose a framework that first estimates depth from a single retinal image and then uses an end-to-end network to perform multimodal fusion of features, i.e., combining depth and color-image features. | {
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_1",
"@cite_12"
],
"mid": [
"2755045970",
"2513367050",
"2782364420",
"1901129140"
],
"abstract": [
"Glaucoma is a highly threatening and widespread ocular disease which may lead to permanent loss in vision. One of the important parameters used for Glaucoma screening in the cup-to-disc ratio (CDR), which requires accurate segmentation of optic cup and disc. We explore fully convolutional networks (FCNs) for the task of joint segmentation of optic cup and disc. We propose a novel improved architecture building upon FCNs by using the concept of residual learning. Additionally, we also explore if adversarial training helps in improving the segmentation results. The method does not require any complicated preprocessing techniques for feature enhancement. We learn a mapping between the retinal images and the corresponding segmentation map using fully convolutional and adversarial networks. We perform extensive experiments of various models on a set of 159 images from RIM-ONE database and also do extensive comparison. The proposed method outperforms the state of the art methods on various evaluation metrics for both disc and cup segmentation.",
"Abstract We present a novel method to segment retinal images using ensemble learning based convolutional neural network (CNN) architectures. An entropy sampling technique is used to select informative points thus reducing computational complexity while performing superior to uniform sampling. The sampled points are used to design a novel learning framework for convolutional filters based on boosting. Filters are learned in several layers with the output of previous layers serving as the input to the next layer. A softmax logistic classifier is subsequently trained on the output of all learned filters and applied on test images. The output of the classifier is subject to an unsupervised graph cut algorithm followed by a convex hull transformation to obtain the final segmentation. Our proposed algorithm for optic cup and disc segmentation outperforms existing methods on the public DRISHTI-GS data set on several metrics.",
"Glaucoma is a chronic eye disease that leads to irreversible vision loss. The cup to disc ratio (CDR) plays an important role in the screening and diagnosis of glaucoma. Thus, the accurate and automatic segmentation of optic disc (OD) and optic cup (OC) from fundus images is a fundamental task. Most existing methods segment them separately, and rely on hand-crafted visual feature from fundus images. In this paper, we propose a deep learning architecture, named M-Net, which solves the OD and OC segmentation jointly in a one-stage multi-label system. The proposed M-Net mainly consists of multi-scale input layer, U-shape convolutional network, side-output layer, and multi-label loss function. The multi-scale input layer constructs an image pyramid to achieve multiple level receptive field sizes. The U-shape convolutional network is employed as the main body network structure to learn the rich hierarchical representation, while the side-output layer acts as an early classifier that produces a companion local prediction map for different scale layers. Finally, a multi-label loss function is proposed to generate the final segmentation map. For improving the segmentation performance further, we also introduce the polar transformation, which provides the representation of the original image in the polar coordinate system. The experiments show that our M-Net system achieves state-of-the-art OD and OC segmentation result on ORIGA data set. Simultaneously, the proposed method also obtains the satisfactory glaucoma screening performances with calculated CDR value on both ORIGA and SCES datasets.",
"There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http: lmb.informatik.uni-freiburg.de people ronneber u-net ."
]
} |
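The M-Net entry above uses a polar transformation of the ONH region before segmentation and the inverse transform afterwards. A minimal sketch of that step with OpenCV's `warpPolar` is given below; the input path and the assumption that the optic disc is roughly centered in the crop are illustrative.

```python
# Hedged sketch: polar transform of an ONH-centered crop and its inverse.
import cv2

crop = cv2.imread("onh_crop.png")                    # hypothetical ONH crop
h, w = crop.shape[:2]
center = (w / 2, h / 2)                              # assume OD roughly centered
max_radius = min(h, w) / 2

polar = cv2.warpPolar(crop, (w, h), center, max_radius, cv2.WARP_POLAR_LINEAR)
# ... run the segmentation network on `polar`, then invert the mapping:
back = cv2.warpPolar(polar, (w, h), center, max_radius,
                     cv2.WARP_POLAR_LINEAR | cv2.WARP_INVERSE_MAP)
cv2.imwrite("polar.png", polar)
```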
1902.00732 | 2914567083 | In many traditional job scheduling settings, it is assumed that one knows the time it will take for a job to complete service. In such cases, strategies such as shortest job first can be used to improve performance in terms of measures such as the average time a job waits in the system. We consider the setting where the service time is not known, but is predicted by, for example, a machine learning algorithm. Our main result is the derivation, under natural assumptions, of formulae for the performance of several strategies for queueing systems that use predictions for service times in order to schedule jobs. As part of our analysis, we suggest the framework of the "price of misprediction," which offers a measure of the cost of using predicted information. | While traditional algorithmic analysis focuses on worst-case algorithm behavior, there is a growing movement to develop frameworks that go beyond worst-case analysis @cite_20 . Although such frameworks have existed in the past, most notably via probabilistic analysis (e.g., @cite_18 ), semi-random models (e.g., @cite_4 @cite_5 ), and smoothed analysis @cite_21 , one natural approach that has received little attention is the use of machine-learning-based approaches to provide predictions to algorithms, with the goal of realizing provable performance guarantees. (The idea of using machine learning to give hints as to which heuristic algorithm to employ has been considered in meta-heuristics for several large-scale problems, most notably for satisfiability @cite_16 ; this is a distinct line of work.) | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_21",
"@cite_5",
"@cite_16",
"@cite_20"
],
"mid": [
"2067883080",
"2088272899",
"2034053794",
"2078758709",
"2147148915",
""
],
"abstract": [
"",
"The problem of coloring a graph with the minimum number of colors is well known to be NP-hard, even restricted to k-colorable graphs for constant k ≥ 3. On the other hand, it is known that random k-colorable graphs are easy to k-color. The algorithms for coloring random k-colorable graphs require fairly high edge densities, however. In this paper we present algorithms that color randomly generated k-colorable graphs for much lower edge densities than previous approaches. In addition, to study a wider variety of graph distributions, we also present a model of graphs generated by the semi-random source of Santha and Vazirani (M. Santha and U. V. Vazirani, J. Comput. System Sci.33 (1986), 75-87) that provides a smooth transition between the worst-case and random models. In this model, the graph is generated by a \"noisy adversary\"-an adversary whose decisions (whether or not to insert a particular edge) have some small (random) probability of being reversed. We show that even for quite low noise rates, semi-random k-colorable graphs can be optimally colored with high probability.",
"We introduce the smoothed analysis of algorithms, which continuously interpolates between the worst-case and average-case analyses of algorithms. In smoothed analysis, we measure the maximum over inputs of the expected performance of an algorithm under small random perturbations of that input. We measure this performance in terms of both the input size and the magnitude of the perturbations. We show that the simplex algorithm has smoothed complexity polynomial in the input size and the standard deviation of Gaussian perturbations.",
"Alon, Krivelevich, and Sudakov [Random Struct Algorithms 13(3–4) (1998), 457–466.] designed an algorithm based on spectral techniques that almost surely finds a clique of size hidden in an otherwise random graph. We show that a different algorithm, based on the Lovasz theta function, almost surely both finds the hidden clique and certifies its optimality. Our algorithm has an additional advantage of being more robust: it also works in a semirandom hidden clique model, in which an adversary can remove edges from the random portion of the graph. ©2000 John Wiley & Sons, Inc. Random Struct. Alg., 16, 195–208, 2000",
"It has been widely observed that there is no single \"dominant\" SAT solver; instead, different solvers perform best on different instances. Rather than following the traditional approach of choosing the best solver for a given class of instances, we advocate making this decision online on a per-instance basis. Building on previous work, we describe SATzilla, an automated approach for constructing per-instance algorithm portfolios for SAT that use so-called empirical hardness models to choose among their constituent solvers. This approach takes as input a distribution of problem instances and a set of component solvers, and constructs a portfolio optimizing a given objective function (such as mean runtime, percent of instances solved, or score in a competition). The excellent performance of SATzilla was independently verified in the 2007 SAT Competition, where our SATzilla07 solvers won three gold, one silver and one bronze medal. In this article, we go well beyond SATzilla07 by making the portfolio construction scalable and completely automated, and improving it by integrating local search solvers as candidate solvers, by predicting performance score instead of runtime, and by using hierarchical hardness models that take into account different types of SAT instances. We demonstrate the effectiveness of these new techniques in extensive experimental results on data sets including instances from the most recent SAT competition.",
""
]
} |
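The SATzilla abstract above describes per-instance algorithm selection via empirical hardness models. A toy sketch of that pattern follows: one runtime regressor per solver, with each new instance routed to the solver predicted to be fastest. The features, runtimes, and solver names are synthetic placeholders, not SATzilla's actual feature set.

```python
# Hedged sketch: per-instance algorithm selection with per-solver runtime models.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
feats = rng.normal(size=(400, 8))                    # instance feature vectors
runtimes = {                                         # observed runtimes per solver
    "solver_a": np.exp(feats[:, 0] + 0.1 * rng.normal(size=400)),
    "solver_b": np.exp(-feats[:, 0] + 0.1 * rng.normal(size=400)),
}
models = {name: RandomForestRegressor(n_estimators=50, random_state=0).fit(feats, rt)
          for name, rt in runtimes.items()}

def select_solver(instance_feats):
    """Route an instance to the solver with the lowest predicted runtime."""
    preds = {name: m.predict(instance_feats[None, :])[0] for name, m in models.items()}
    return min(preds, key=preds.get)

print(select_solver(rng.normal(size=8)))
```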
1902.00732 | 2914567083 | In many traditional job scheduling settings, it is assumed that one knows the time it will take for a job to complete service. In such cases, strategies such as shortest job first can be used to improve performance in terms of measures such as the average time a job waits in the system. We consider the setting where the service time is not known, but is predicted by, for example, a machine learning algorithm. Our main result is the derivation, under natural assumptions, of formulae for the performance of several strategies for queueing systems that use predictions for service times in order to schedule jobs. As part of our analysis, we suggest the framework of the "price of misprediction," which offers a measure of the cost of using predicted information. | Notable recent work with this theme is that of Lykouris and Vassilvitskii @cite_25 , who show how to use prediction advice from machine learning algorithms to improve online algorithms for caching in a way that provides provable performance guarantees, using the framework of competitive analysis. A series of recent papers consider the setting of optimization with noise, such as when sampling data to obtain values used in an optimization algorithm for submodular functions @cite_27 @cite_19 @cite_26 @cite_24 @cite_12 @cite_15 . Other recent works analyze the performance of learned Bloom filter structures @cite_3 @cite_11 , a variation on Bloom filters @cite_1 that uses, as a subfilter, a machine learning model that predicts whether an element is in a given fixed set. | {
"cite_N": [
"@cite_26",
"@cite_1",
"@cite_3",
"@cite_24",
"@cite_19",
"@cite_27",
"@cite_15",
"@cite_25",
"@cite_12",
"@cite_11"
],
"mid": [
"",
"2123845384",
"2962771342",
"2741683874",
"2963922672",
"2551080748",
"2886548888",
"2963785501",
"2803124496",
"2890643081"
],
"abstract": [
"",
"In this paper trade-offs among certain computational factors in hash coding are analyzed. The paradigm problem considered is that of testing a series of messages one-by-one for membership in a given set of messages. Two new hash-coding methods are examined and compared with a particular conventional hash-coding method. The computational factors considered are the size of the hash area (space), the time required to identify a message as a nonmember of the given set (reject time), and an allowable error frequency. The new methods are intended to reduce the amount of space required to contain the hash-coded information from that associated with conventional methods. The reduction in space is accomplished by exploiting the possibility that a small fraction of errors of commission may be tolerable in some applications, in particular, applications in which a large amount of data is involved and a core resident hash area is consequently not feasible using conventional methods. In such applications, it is envisaged that overall performance could be improved by using a smaller core resident hash area in conjunction with the new methods and, when necessary, by using some secondary and perhaps time-consuming test to “catch” the small fraction of errors associated with the new methods. An example is discussed which illustrates possible areas of application for the new methods. Analysis of the paradigm problem demonstrates that allowing a small number of test messages to be falsely identified as members of the given set will permit a much smaller hash area to be used without increasing reject time.",
"Indexes are models: a -Index can be seen as a model to map a key to the position of a record within a sorted array, a Hash-Index as a model to map a key to a position of a record within an unsorted array, and a BitMap-Index as a model to indicate if a data record exists or not. In this exploratory research paper, we start from this premise and posit that all existing index structures can be replaced with other types of models, including deep-learning models, which we term learned indexes. We theoretically analyze under which conditions learned indexes outperform traditional index structures and describe the main challenges in designing learned index structures. Our initial results show that our learned indexes can have significant advantages over traditional indexes. More importantly, we believe that the idea of replacing core components of a data management system through learned models has far reaching implications for future systems designs and that this work provides just a glimpse of what might be possible.",
"",
"We consider the problem of maximizing a monotone submodular function under noise, which to the best of our knowledge has not been studied in the past. There has been a great deal of work on optimization of submodular functions under various constraints, with many algorithms that provide desirable approximation guarantees. However, in many applications we do not have access to the submodular function we aim to optimize, but rather to some erroneous or noisy version of it. This raises the question of whether provable guarantees are obtainable in presence of error and noise. We provide initial answers, by focusing on the question of maximizing a monotone submodular function under a cardinality constraint when given access to a noisy oracle of the function. We show that: • For a cardinality constraint k ≥ 2, there is an approximation algorithm whose approxima- tion ratio is arbitrarily close to 1− 1 e; • For k = 1 there is an approximation algorithm whose approximation ratio is arbitrarily close to 1 2 in expectation. No randomized algorithm can obtain an approximation ratio better than 1 2 + o(1) in expectation; • If the noise is adversarial, no non-trivial approximation guarantee can be obtained.",
"We consider the problem of optimization from samples of monotone submodular functions with bounded curvature. In numerous applications, the function optimized is not known a priori, but instead learned from data. What are the guarantees we have when optimizing functions from sampled data? In this paper we show that for any monotone submodular function with curvature c there is a (1 - c) (1 + c - c^2) approximation algorithm for maximization under cardinality constraints when polynomially-many samples are drawn from the uniform distribution over feasible sets. Moreover, we show that this algorithm is optimal. That is, for any c < 1, there exists a submodular function with curvature c for which no algorithm can achieve a better approximation. The curvature assumption is crucial as for general monotone submodular functions no algorithm can obtain a constant-factor approximation for maximization under a cardinality constraint when observing polynomially-many samples drawn from any distribution over feasible sets, even when the function is statistically learnable.",
"",
"",
"",
"Recent work has suggested enhancing Bloom filters by using a pre-filter, based on applying machine learning to determine a function that models the data set the Bloom filter is meant to represent. Here we model such learned Bloom filters, with the following outcomes: (1) we clarify what guarantees can and cannot be associated with such a structure; (2) we show how to estimate what size the learning function must obtain in order to obtain improved performance; (3) we provide a simple method, sandwiching, for optimizing learned Bloom filters; and (4) we propose a design and analysis approach for a learned Bloomier filter, based on our modeling approach."
]
} |
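Several entries above concern learned Bloom filters: a learned membership predictor backed by a conventional Bloom filter that stores only the keys the model misses, so the combined structure has no false negatives. The sketch below uses a deliberately trivial "predictor" and a tiny hand-rolled Bloom filter; both are illustrative stand-ins for a trained classifier and a production filter.

```python
# Hedged sketch: learned Bloom filter = learned pre-filter + backup Bloom
# filter for the keys the model misses (one-sided error, no false negatives).
import hashlib

class TinyBloom:
    def __init__(self, m=1 << 16, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m)
    def _idx(self, key):
        h = hashlib.sha256(key.encode()).digest()
        return [int.from_bytes(h[4*i:4*i+4], "big") % self.m for i in range(self.k)]
    def add(self, key):
        for i in self._idx(key): self.bits[i] = 1
    def __contains__(self, key):
        return all(self.bits[i] for i in self._idx(key))

def learned_predictor(key):                 # stand-in for a trained classifier
    return key.startswith("in_")            # "predicts" membership from structure

keys = [f"in_{i}" for i in range(1000)] + ["odd_key_1", "odd_key_2"]
backup = TinyBloom()
for key in keys:
    if not learned_predictor(key):          # model misses it -> store in backup
        backup.add(key)

def query(key):
    return learned_predictor(key) or key in backup

assert all(query(k) for k in keys)          # no false negatives by construction
```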
1902.00732 | 2914567083 | In many traditional job scheduling settings, it is assumed that one knows the time it will take for a job to complete service. In such cases, strategies such as shortest job first can be used to improve performance in terms of measures such as the average time a job waits in the system. We consider the setting where the service time is not known, but is predicted by, for example, a machine learning algorithm. Our main result is the derivation, under natural assumptions, of formulae for the performance of several strategies for queueing systems that use predictions for service times in order to schedule jobs. As part of our analysis, we suggest the framework of the "price of misprediction," which offers a measure of the cost of using predicted information. | In scheduling, some works have looked at the effects of using imprecise information, usually for load balancing in multiple queue settings. For example, Mitzenmacher considers using old load information to place jobs (in the context of the power of two choices) @cite_6 . A strategy called TAGS utilizes multiple queues when no information about service times exists: jobs that run longer than some threshold in the first queue are cancelled and passed to the second queue, and so on @cite_0 . For single queues, recent work by Scully and Harchol-Balter has considered scheduling policies that are based on the amount of service received, where the scheduler only knows the service received approximately, subject to adversarial noise, and the goal is to develop robust policies @cite_10 . Our work differs from these past works in providing a model specifically geared toward studying performance with machine-learning-based predictions, and corresponding analyses. | {
"cite_N": [
"@cite_0",
"@cite_10",
"@cite_6"
],
"mid": [
"1966557675",
"2912153860",
"2154007983"
],
"abstract": [
"",
"A great many scheduling policies for the M G 1 queue are so-called SOAP policies [1], meaning they assign each job a priority based on its age, the amount of service it has received so far. Perhaps the most notable example is the Gittins policy, which minimizes mean response time when job sizes are unknown. However, in some computer systems even job ages, let alone job sizes, are not precisely known by the scheduler. This can occur when scheduling in a time-shared system or over a network. Given that the Gittins policy relies on knowing exact job ages, it is not clear how to minimize mean response time in such settings.In this paper we study scheduling for the M G 1 when the scheduler knows only approximate job ages. We find that naively using the traditional Gittins policy is not robust, meaning that introducing even an infinitesimal amount of noise in job ages can cause a large jump in mean response time. By examining the ways in which this naive policy fails, we construct a simple variation of the Gittins policy, called the shift-flat Gittins policy, which is indeed robust to noise and therefore has near-optimal mean response time. Moreover, we show that our shift-flat construction generalizes, yielding a robust variation of any SOAP policy.",
"We consider the problem of load balancing in dynamic distributed systems in cases where new incoming tasks can make use of old information. For example, consider a multiprocessor system where incoming tasks with exponentially distributed service requirements arrive as a Poisson process, the tasks must choose a processor for service, and a task knows when making this choice the processor queue lengths from T seconds ago. What is a good strategy for choosing a processor in order for tasks to minimize their expected time in the system? Such models can also be used to describe settings where there is a transfer delay between the time a task enters a system and the time it reaches a processor for service. Our models are based on considering the behavior of limiting systems where the number of processors goes to infinity. The limiting systems can be shown to accurately describe the behavior of sufficiently large systems and simulations demonstrate that they are reasonably accurate even for systems with a small number of processors. Our studies of specific models demonstrate the importance of using randomness to break symmetry in these systems and yield important rules of thumb for system design. The most significant result is that only small amounts of queue length information can be extremely useful in these settings; for example, having incoming tasks choose the least loaded of two randomly chosen processors is extremely effective over a large range of possible system parameters. In contrast, using global information can actually degrade performance unless used carefully; for example, unlike most settings where the load information is current, having tasks go to the apparently least loaded server can significantly hurt performance."
]
} |
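To make the "price of misprediction" idea in this record concrete, here is a small batch-scheduling experiment: jobs with exponential sizes are ordered by noisy size predictions, and the resulting mean completion time is compared against a random order and an oracle shortest-job-first. The multiplicative noise model and all parameters are illustrative assumptions; the paper's own analysis concerns queueing systems rather than this one-shot batch.

```python
# Hedged sketch: shortest-predicted-job-first versus random and oracle orders.
import random
random.seed(0)

n = 10000
sizes = [random.expovariate(1.0) for _ in range(n)]            # true job sizes
preds = [s * random.lognormvariate(0.0, 0.5) for s in sizes]   # noisy predictions

def mean_completion(order):
    """Mean completion time when jobs run back-to-back in the given order."""
    t, total = 0.0, 0.0
    for i in order:
        t += sizes[i]
        total += t
    return total / n

idx = list(range(n))
print(f"random order : {mean_completion(idx):.1f}")
print(f"SPJF (preds) : {mean_completion(sorted(idx, key=lambda i: preds[i])):.1f}")
print(f"SJF (oracle) : {mean_completion(sorted(idx, key=lambda i: sizes[i])):.1f}")
```

The gap between the predicted-order and oracle-order results is a rough empirical analogue of the price of misprediction for this toy setting.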
1902.00843 | 2912757816 | In this paper we consider the problem of how a reinforcement learning agent that is tasked with solving a sequence of reinforcement learning problems (a sequence of Markov decision processes) can use knowledge acquired early in its lifetime to improve its ability to solve new problems. We argue that previous experience with similar problems can provide an agent with information about how it should explore when facing a new but related problem. We show that the search for an optimal exploration strategy can be formulated as a reinforcement learning problem itself and demonstrate that such a strategy can leverage patterns found in the structure of related problems. We conclude with experiments that show the benefits of optimizing an exploration strategy using our proposed approach. | There is a large body of work discussing the problem of how an agent should behave during exploration. Simple strategies, such as @math -greedy with random action-selection, or softmax action-selection, make sense when an agent has no prior knowledge of the problem that it is currently trying to solve. The performance of an agent exploring with unguided exploration techniques, such as random action-selection, degrades drastically as the size of the state-space increases @cite_6 . For example, the performance of Boltzmann or softmax action-selection hinges on the accuracy of the action-value estimates. When these estimates are poor (e.g., early during the learning process), they can have a drastic negative effect on the overall learning ability of the agent. More sophisticated methods search for subgoal states to define temporally-extended actions, called options, that explore the state-space more efficiently @cite_16 @cite_29 , use state-visitation counts to encourage the agent to explore states that have not been frequently visited @cite_11 @cite_28 , or use approximations of a state-transition graph to exploit structural patterns @cite_22 @cite_5 . | {
"cite_N": [
"@cite_22",
"@cite_28",
"@cite_29",
"@cite_6",
"@cite_5",
"@cite_16",
"@cite_11"
],
"mid": [
"2143958939",
"2663108269",
"2111625828",
"1586504939",
"2950040888",
"2143435603",
"2949475445"
],
"abstract": [
"This paper presents a novel framework called proto-reinforcement learning (PRL), based on a mathematical model of a proto-value function: these are task-independent basis functions that form the building blocks of all value functions on a given state space manifold. Proto-value functions are learned not from rewards, but instead from analyzing the topology of the state space. Formally, proto-value functions are Fourier eigenfunctions of the Laplace-Beltrami diffusion operator on the state space manifold. Proto-value functions facilitate structural decomposition of large state spaces, and form geodesically smooth orthonormal basis functions for approximating any value function. The theoretical basis for proto-value functions combines insights from spectral graph theory, harmonic analysis, and Riemannian manifolds. Proto-value functions enable a novel generation of algorithms called representation policy iteration, unifying the learning of representation and behavior.",
"We introduce a new count-based optimistic exploration algorithm for Reinforcement Learning (RL) that is feasible in environments with high-dimensional state-action spaces. The success of RL algorithms in these domains depends crucially on generalisation from limited training experience. Function approximation techniques enable RL agents to generalise in order to estimate the value of unvisited states, but at present few methods enable generalisation regarding uncertainty. This has prevented the combination of scalable RL algorithms with efficient exploration strategies that drive the agent to reduce its uncertainty. We present a new method for computing a generalised state visit-count, which allows the agent to estimate the uncertainty associated with any state. Our -pseudocount achieves generalisation by exploiting same feature representation of the state space that is used for value function approximation. States that have less frequently observed features are deemed more uncertain. The -Exploration-Bonus algorithm rewards the agent for exploring in feature space rather than in the untransformed state space. The method is simpler and less computationally expensive than some previous proposals, and achieves near state-of-the-art results on high-dimensional RL benchmarks.",
"Reinforcement learning addresses the problem of learning to select actions in order to maximize an agent’s performance in unknown environments. To scale reinforcement learning to complex real-world tasks, agent must be able to discover hierarchical structures within their learning and control systems. This paper presents a method by which a reinforcement learning agent can discover subgoals with certain structural properties. By discovering subgoals and including policies to subgoals as actions in its action set, the agent is able to explore more effectively and accelerate learning in other tasks in the same or similar environments where the same subgoals are useful. The agent discovers the subgoals by searching a learned policy model for state that exhibits certain structural properties. This approach is illustrated using gridworld tasks.",
"Publisher Summary This chapter describes two cooperative learning algorithms that can reduce search and decouple the learning rate from state-space size. The first algorithm, called Learning with an External Critic (LEC), is based on the idea of a mentor who watches the learner and generates immediate rewards in response to its most recent actions. This reward is then used temporarily to bias the learner's control strategy. The second algorithm, called Learning By Watching ( LBW), is based on the idea that an agent can gain experience vicariously by relating the observed behavior of others to its own. While LEC algorithms require interaction with knowledgeable agents, LBW algorithms can be effective even when interaction is with equally naive peers. The search time complexity is analyzed for pure unbiased Q-learning, LEC, and LB W algorithms for an important class of state spaces. Generally, the results indicate that unbiased Q-learning can have a search time that is exponential in the depth of the state space, while the LEC and LB W algorithms require at most time linear in the state space size and under appropriate conditions, time independent of the state space size and proportional to the length of the optimal solution path. Homogeneous state spaces are useful for studying the scaling properties of reinforcement learning algorithms because they are analytically tractable.",
"Representation learning and option discovery are two of the biggest challenges in reinforcement learning (RL). Proto-RL is a well known approach for representation learning in MDPs. The representations learned with this framework are called proto-value functions (PVFs). In this paper we address the option discovery problem by showing how PVFs implicitly define options. We do it by introducing eigenpurposes, intrinsic reward functions derived from the learned representations. The options discovered from eigenpurposes traverse the principal directions of the state space. They are useful for multiple tasks because they are independent of the agents' intentions. Moreover, by capturing the diffusion process of a random walk, different options act at different time scales, making them helpful for exploration strategies. We demonstrate features of eigenpurposes in traditional tabular domains as well as in Atari 2600 games.",
"This paper presents a method by which a reinforcement learning agent can automatically discover certain types of subgoals online. By creating useful new subgoals while learning, the agent is able to accelerate learning on the current task and to transfer its expertise to other, related tasks through the reuse of its ability to attain subgoals. The agent discovers subgoals based on commonalities across multiple paths to a solution. We cast the task of finding these commonalities as a multiple-instance learning problem and use the concept of diverse density to find solutions. We illustrate this approach using several gridworld tasks.",
"Count-based exploration algorithms are known to perform near-optimally when used in conjunction with tabular reinforcement learning (RL) methods for solving small discrete Markov decision processes (MDPs). It is generally thought that count-based methods cannot be applied in high-dimensional state spaces, since most states will only occur once. Recent deep RL exploration strategies are able to deal with high-dimensional continuous state spaces through complex heuristics, often relying on optimism in the face of uncertainty or intrinsic motivation. In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various high-dimensional and or continuous deep RL benchmarks. States are mapped to hash codes, which allows to count their occurrences with a hash table. These counts are then used to compute a reward bonus according to the classic count-based exploration theory. We find that simple hash functions can achieve surprisingly good results on many challenging tasks. Furthermore, we show that a domain-dependent learned hash code may further improve these results. Detailed analysis reveals important aspects of a good hash function: 1) having appropriate granularity and 2) encoding information relevant to solving the MDP. This exploration strategy achieves near state-of-the-art performance on both continuous control tasks and Atari 2600 games, hence providing a simple yet powerful baseline for solving MDPs that require considerable exploration."
]
} |
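The related-work paragraph at the start of the row above contrasts @math -greedy, Boltzmann (softmax), and count-based action selection. The following self-contained sketch illustrates all three rules; the constants (epsilon, temperature, bonus weight beta) and the +1 count smoothing are illustrative assumptions.

```python
import math
import random

rng = random.Random(0)

def epsilon_greedy(q_values, epsilon=0.1):
    """Random action with probability epsilon, otherwise greedy."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def boltzmann(q_values, temperature=1.0):
    """Sample an action from a softmax over the action-value estimates."""
    m = max(q_values)                       # subtract max for stability
    exps = [math.exp((q - m) / temperature) for q in q_values]
    r, acc = rng.random() * sum(exps), 0.0
    for a, e in enumerate(exps):
        acc += e
        if r <= acc:
            return a
    return len(q_values) - 1

def count_bonus_greedy(q_values, visit_counts, beta=0.5):
    """Greedy over Q plus an optimistic bonus ~ beta / sqrt(n(s, a))."""
    scored = [q + beta / math.sqrt(n + 1)
              for q, n in zip(q_values, visit_counts)]
    return max(range(len(scored)), key=lambda a: scored[a])

q = [0.1, 0.5, 0.2]
print(epsilon_greedy(q), boltzmann(q, 0.5), count_bonus_greedy(q, [10, 50, 1]))
```

Note how the Boltzmann probabilities depend entirely on the Q estimates, which is why poor early estimates can mislead it, while the count-based bonus deliberately overrides the estimates for rarely tried actions.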
1902.00843 | 2912757816 | In this paper we consider the problem of how a reinforcement learning agent that is tasked with solving a sequence of reinforcement learning problems (a sequence of Markov decision processes) can use knowledge acquired early in its lifetime to improve its ability to solve new problems. We argue that previous experience with similar problems can provide an agent with information about how it should explore when facing a new but related problem. We show that the search for an optimal exploration strategy can be formulated as a reinforcement learning problem itself and demonstrate that such strategy can leverage patterns found in the structure of related problems. We conclude with experiments that show the benefits of optimizing an exploration strategy using our proposed approach. | Related to our approach is the idea of meta-learning, or learning to learn, which has also been a recent area of focus. @cite_4 proposed learning an update rule for a class of optimization problems. Given an objective function @math and parameters @math , the authors proposed learning a model, @math , such that the update to parameters @math at iteration @math is given according to @math . RL has also been used in meta-learning to learn efficient neural network architectures @cite_20 . However, even though one can draw a connection to our work through meta-learning, these methods are not concerned with the problem of exploration. (A toy sketch of such a learned update rule follows this row.) | {
"cite_N": [
"@cite_4",
"@cite_20"
],
"mid": [
"2963775850",
"2767175863"
],
"abstract": [
"The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.",
"Multi-task learning (MTL) with neural networks leverages commonalities in tasks to improve performance, but often suffers from task interference which reduces the benefits of transfer. To address this issue we introduce the routing network paradigm, a novel neural network and training algorithm. A routing network is a kind of self-organizing neural network consisting of two components: a router and a set of one or more function blocks. A function block may be any neural network - for example a fully-connected or a convolutional layer. Given an input the router makes a routing decision, choosing a function block to apply and passing the output back to the router recursively, terminating when a fixed recursion depth is reached. In this way the routing network dynamically composes different function blocks for each input. We employ a collaborative multi-agent reinforcement learning (MARL) approach to jointly train the router and function blocks. We evaluate our model against cross-stitch networks and shared-layer baselines on multi-task settings of the MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a significant improvement in accuracy, with sharper convergence. In addition, routing networks have nearly constant per-task training cost while cross-stitch networks scale linearly with the number of tasks. On CIFAR-100 (20 tasks) we obtain cross-stitch performance levels with an 85 reduction in training time."
]
} |
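The update rule described in the row above, where a learned model g maps gradients to parameter updates, can be sketched as below. Here g is a stand-in linear function of the gradient (the cited work implements it with an LSTM), and the quadratic objective and all constants are toy assumptions.

```python
import numpy as np

def f(theta):                              # inner objective to minimize
    return float(np.sum((theta - 3.0) ** 2))

def grad_f(theta):
    return 2.0 * (theta - 3.0)

def g(grad, phi):                          # "learned" update model
    return -phi * grad                     # reduces to SGD for scalar phi

theta, phi = np.zeros(4), 0.1              # phi itself is meta-trained
for t in range(100):
    theta = theta + g(grad_f(theta), phi)  # theta_{t+1} = theta_t + g(...)
print(round(f(theta), 6))                  # ~0: the rule solved the task
```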
1902.00843 | 2912757816 | In this paper we consider the problem of how a reinforcement learning agent that is tasked with solving a sequence of reinforcement learning problems (a sequence of Markov decision processes) can use knowledge acquired early in its lifetime to improve its ability to solve new problems. We argue that previous experience with similar problems can provide an agent with information about how it should explore when facing a new but related problem. We show that the search for an optimal exploration strategy can be formulated as a reinforcement learning problem itself and demonstrate that such strategy can leverage patterns found in the structure of related problems. We conclude with experiments that show the benefits of optimizing an exploration strategy using our proposed approach. | In the context of RL, a similar idea can be applied by defining a meta-MDP, i.e., considering the agent as part of the environment in a larger MDP. In multi-agent systems, @cite_30 considered other agents as part of the environment from the perspective of each individual agent. @cite_9 proposed the conjugate MDP framework, in which agents solving meta-MDPs (called CoMDPs) can search for the state representation, action representation, or options that maximize the expected return when used by an RL agent solving a single MDP. Despite existing meta-MDP approaches, to the best of our knowledge, ours is the first to use the meta-MDP approach to specifically optimize exploration for a set of related tasks. (A schematic sketch of this agent-as-environment view follows this row.) | {
"cite_N": [
"@cite_30",
"@cite_9"
],
"mid": [
"2544765807",
"2186389117"
],
"abstract": [
"Following work on designing optimal rewards for single agents, we define a multiagent optimal rewards problem (ORP) in common-payoff (or team) settings. This new problem solves for individual agent reward functions that guide agents to better overall team performance relative to teams in which all agents guide their behavior with the same given team-reward function. We present a multiagent architecture in which each agent learns good reward functions from experience using a gradient-based algorithm in addition to performing the usual task of planning good policies (except in this case with respect to the learned rather than the given reward function). Multiagency introduces the challenge of nonstationarity: because the agents learn simultaneously, each agent's learning problem is nonstationary and interdependent on the other agents. We demonstrate on two simple domains that the proposed architecture outperforms the conventional approach in which all the agents use the same given team-reward function (even when accounting for the resource overhead of the reward learning); that the learning algorithm performs stably despite the nonstationarity; and that learning individual reward functions can lead to better specialization of roles than is possible with shared reward, whether learned or given.",
"Many open problems involve the search for a mapping that is used by an algorithm solving an MDP. Useful mappings are often from the state set to some other set. Examples include representation discovery (a mapping to a feature space) and skill discovery (a mapping to skill termination probabilities). Different mappings result in algorithms achieving varying expected returns. In this paper we present a novel approach to the search for any mapping used by any algorithm attempting to solve an MDP, for that which results in maximum expected return."
]
} |
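A schematic sketch of the meta-MDP view summarized in the row above: the inner learner together with its sampled task plays the role of the environment for an outer process that controls exploration. The two-armed bandit task, the advisor's single epsilon knob, and all constants are toy assumptions, not the cited CoMDP framework.

```python
import random

class InnerTask:
    """Two-armed bandit standing in for an MDP drawn from a task family."""
    def __init__(self, rng):
        self.best = rng.randrange(2)
    def reward(self, action, rng):
        return 1.0 if action == self.best else 0.0

def run_lifetime(advisor_epsilon, episodes=50, seed=0):
    """Lifetime return of an inner learner whose exploration is set by
    the meta-level 'advisor' parameter."""
    rng, total = random.Random(seed), 0.0
    for _ in range(episodes):
        task, q = InnerTask(rng), [0.0, 0.0]
        for _ in range(20):
            # Meta-level action: explore or exploit on the agent's behalf.
            if rng.random() < advisor_epsilon:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda i: q[i])
            r = task.reward(a, rng)
            q[a] += 0.5 * (r - q[a])       # inner value update
            total += r
    return total

# Meta-objective: choose the exploration policy maximizing lifetime return.
print(max((0.0, 0.1, 0.3, 0.5), key=run_lifetime))
```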
1902.00714 | 2914682925 | Motivated by many existing security and privacy applications, e.g., network traffic attribution, linkage attacks, private web search, and feature-based data de-anonymization, in this paper, we study the Feature-based Data Inferability (FDI) quantification problem. First, we conduct the FDI quantification under both naive and general data models from both a feature distance perspective and a feature distribution perspective. Our quantification explicitly shows the conditions to have a desired fraction of the target users to be Top-K inferable (K is an integer parameter). Then, based on our quantification, we evaluate the user inferability in two cases: network traffic attribution in network forensics and feature-based data de-anonymization. Finally, based on the quantification and evaluation, we discuss the implications of this research for existing feature-based inference systems. | In @cite_4 , the authors designed a network traffic attribution system called Kaleido. Kaleido leverages a class of inductive discriminant models to extract user- and context-aware features of network traffic and then builds an efficient inference model to conduct real-time traffic attribution over high-volume network traces. Another feature-based network forensics application is @cite_14 , where the authors proposed ClickMiner, a novel system that aims to automatically reconstruct user-browser interactions from network traces. A comprehensive survey of network trace-based forensic frameworks can be found in @cite_7 . (A minimal sketch of feature-based Top-K attribution follows this row.) | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_7"
],
"mid": [
"2136433906",
"2399323483",
"1970399788"
],
"abstract": [
"Recent advances in network traffic capturing techniques have made it feasible to record full traffic traces, often for extended periods of time. Among the applications enabled by full traffic captures, being able to automatically reconstruct user-browser interactions from archived web traffic traces would be helpful in a number of scenarios, such as aiding the forensic analysis of network security incidents. Unfortunately, the modern web is becoming increasingly complex, serving highly dynamic pages that make heavy use of scripting languages, a variety of browser plugins, and asynchronous content requests. Consequently, the semantic gap between user-browser interactions and the network traces has grown significantly, making it challenging to analyze the web traffic produced by even a single user. In this paper, we propose ClickMiner, a novel system that aims to automatically reconstruct user-browser interactions from network traces. Through a user study involving 21 participants, we collected real user browsing traces to evaluate our approach. We show that, on average, ClickMiner can correctly reconstruct between 82 and 90 of user-browser interactions with false positives between 0.74 and 1.16 , and that it outperforms reconstruction algorithms based solely on referrer-based approaches. We also present a number of case studies that aim to demonstrate how ClickMiner can aid the forensic analysis of malware downloads triggered by social engineering attacks.",
"",
"Network forensics is the science that deals with capture, recording, and analysis of network traffic for detecting intrusions and investigating them. This paper makes an exhaustive survey of various network forensic frameworks proposed till date. A generic process model for network forensics is proposed which is built on various existing models of digital forensics. Definition, categorization and motivation for network forensics are clearly stated. The functionality of various Network Forensic Analysis Tools (NFATs) and network security monitoring tools, available for forensics examiners is discussed. The specific research gaps existing in implementation frameworks, process models and analysis tools are identified and major challenges are highlighted. The significance of this work is that it presents an overview on network forensics covering tools, process models and framework implementations, which will be very much useful for security practitioners and researchers in exploring this upcoming and young discipline."
]
} |
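A minimal sketch of feature-based Top-K attribution in the spirit of the systems surveyed above: each known user has a feature profile, and an anonymous trace is attributed to the K closest profiles in feature space. The user names, feature vectors, and L2 distance are illustrative assumptions.

```python
import numpy as np

def top_k_candidates(trace_features, user_profiles, k=3):
    """Return the k user ids whose profiles are closest in L2 distance."""
    dists = {uid: float(np.linalg.norm(trace_features - prof))
             for uid, prof in user_profiles.items()}
    return sorted(dists, key=dists.get)[:k]

profiles = {                               # per-user traffic feature vectors
    "alice": np.array([0.9, 0.1, 0.3]),
    "bob":   np.array([0.2, 0.8, 0.5]),
    "carol": np.array([0.4, 0.4, 0.9]),
}
observed = np.array([0.85, 0.15, 0.35])    # features of an anonymous trace
print(top_k_candidates(observed, profiles, k=2))   # ['alice', 'carol']
# The target is "Top-K inferable" when the true user lands in this list,
# which is the notion this paper quantifies.
```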
1902.00714 | 2914682925 | Motivated by many existing security and privacy applications, e.g., network traffic attribution, linkage attacks, private web search, and feature-based data de-anonymization, in this paper, we study the Feature-based Data Inferability (FDI) quantification problem. First, we conduct the FDI quantification under both naive and general data models from both a feature distance perspective and a feature distribution perspective. Our quantification explicitly shows the conditions to have a desired fraction of the target users to be Top-K inferable (K is an integer parameter). Then, based on our quantification, we evaluate the user inferability in two cases: network traffic attribution in network forensics and feature-based data de-anonymization. Finally, based on the quantification and evaluation, we discuss the implications of this research for existing feature-based inference systems. | In @cite_3 , Caliskan-Islam et al. presented a novel data de-anonymization attack against programmers leveraging code stylometry. Another stylometry-based de-anonymization attack was presented in @cite_0 , by which anonymous authors of anonymous texts can be identified. In @cite_15 , Narayanan and Shmatikov presented a new class of statistical de-anonymization attacks against high-dimensional micro-data, e.g., recommendation data, transaction data, and so on. An off-line de-anonymization attack on bubble forms is presented in @cite_12 . | {
"cite_N": [
"@cite_0",
"@cite_15",
"@cite_12",
"@cite_3"
],
"mid": [
"2029103396",
"2135930857",
"1835817557",
"1463623766"
],
"abstract": [
"Stylometry is a method for identifying anonymous authors of anonymous texts by analyzing their writing style. While stylometric methods have produced impressive results in previous experiments, we wanted to explore their performance on a challenging dataset of particular interest to the security research community. Analysis of underground forums can provide key information about who controls a given bot network or sells a service, and the size and scope of the cybercrime underworld. Previous analyses have been accomplished primarily through analysis of limited structured metadata and painstaking manual analysis. However, the key challenge is to automate this process, since this labor intensive manual approach clearly does not scale. We consider two scenarios. The first involves text written by an unknown cybercriminal and a set of potential suspects. This is standard, supervised stylometry problem made more difficult by multilingual forums that mix l33t-speak conversations with data dumps. In the second scenario, you want to feed a forum into an analysis engine and have it output possible doppelgangers, or users with multiple accounts. While other researchers have explored this problem, we propose a method that produces good results on actual separate accounts, as opposed to data sets created by artificially splitting authors into multiple identities. For scenario 1, we achieve 77 to 84 accuracy on private messages. For scenario 2, we achieve 94 recall with 90 precision on blogs and 85.18 precision with 82.14 recall for underground forum users. We demonstrate the utility of our approach with a case study that includes applying our technique to the Carders forum and manual analysis to validate the results, enabling the discovery of previously undetected doppelganger accounts.",
"We present a new class of statistical de- anonymization attacks against high-dimensional micro-data, such as individual preferences, recommendations, transaction records and so on. Our techniques are robust to perturbation in the data and tolerate some mistakes in the adversary's background knowledge. We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information.",
"Fill-in-the-bubble forms are widely used for surveys, election ballots, and standardized tests. In these and other scenarios, use of the forms comes with an implicit assumption that individuals' bubble markings themselves are not identifying. This work challenges this assumption, demonstrating that fill-in-the-bubble forms could convey a respondent's identity even in the absence of explicit identifying information. We develop methods to capture the unique features of a marked bubble and use machine learning to isolate characteristics indicative of its creator. Using surveys from more than ninety individuals, we apply these techniques and successfully reidentify individuals from markings alone with over 50 accuracy. This bubble-based analysis can have either positive or negative implications depending on the application. Potential applications range from detection of cheating on standardized tests to attacks on the secrecy of election ballots. To protect against negative consequences, we discuss mitigation techniques to remove a bubble's identifying characteristics. We suggest additional tests using longitudinal data and larger datasets to further explore the potential of our approach in realworld applications.",
"Source code authorship attribution is a significant privacy threat to anonymous code contributors. However, it may also enable attribution of successful attacks from code left behind on an infected system, or aid in resolving copyright, copyleft, and plagiarism issues in the programming fields. In this work, we investigate machine learning methods to de-anonymize source code authors of C C++ using coding style. Our Code Stylometry Feature Set is a novel representation of coding style found in source code that reflects coding style from properties derived from abstract syntax trees. Our random forest and abstract syntax tree-based approach attributes more authors (1,600 and 250) with significantly higher accuracy (94 and 98 ) on a larger data set (Google Code Jam) than has been previously achieved. Furthermore, these novel features are robust, difficult to obfuscate, and can be used in other programming languages, such as Python. We also find that (i) the code resulting from difficult programming tasks is easier to attribute than easier tasks and (ii) skilled programmers (who can complete the more difficult tasks) are easier to attribute than less skilled programmers."
]
} |
1708.00894 | 2742475211 | This paper presents a novel method for visual-inertial odometry. The method is based on an information fusion framework employing low-cost IMU sensors and the monocular camera in a standard smartphone. We formulate a sequential inference scheme, where the IMU drives the dynamical model and the camera frames are used in coupling trailing sequences of augmented poses. The novelty in the model is in taking into account all the cross-terms in the updates, thus propagating the inter-connected uncertainties throughout the model. Stronger coupling between the inertial and visual data sources leads to robustness against occlusion and feature-poor environments. We demonstrate results on data collected with an iPhone and provide comparisons against the Tango device and using the EuRoC data set. | In this paper we present a probabilistic approach for fusing information from consumer-grade inertial sensors (a 3-axis accelerometer and gyroscope) and a monocular video camera for accurate low-drift odometry. This is practically the most interesting hardware setup as most modern smartphones contain a monocular video camera and an IMU. Despite the wide application potential of such a hardware platform, there are not many previous works which demonstrate visual-inertial odometry using standard smartphone sensors. This is most likely due to the relatively low quality of low-cost IMUs, which makes inertial navigation challenging. The most notable papers covering the smartphone use case are @cite_19 @cite_9 @cite_35 . However, all these previous approaches are limited in the sense that tracking breaks if there is complete occlusion of the camera for short periods of time. This is also the case with the visual-inertial odometry of the Google Tango device. (A schematic predict/update fusion loop follows this row.) | {
"cite_N": [
"@cite_19",
"@cite_9",
"@cite_35"
],
"mid": [
"2110357983",
"1987441863",
""
],
"abstract": [
"Camera phones are a promising platform for hand-held augmented reality. As their computational resources grow, they are becoming increasingly suitable for visual tracking tasks. At the same time, they still offer considerable challenges: Their cameras offer a narrow field-of-view not best suitable for robust tracking; images are often received at less than 15Hz; long exposure times result in significant motion blur; and finally, a rolling shutter causes severe smearing effects. This paper describes an attempt to implement a keyframe-based SLAMsystem on a camera phone (specifically, the Apple iPhone 3G). We describe a series of adaptations to the Parallel Tracking and Mapping system to mitigate the impact of the device's imaging deficiencies. Early results demonstrate a system capable of generating and augmenting small maps, albeit with reduced accuracy and robustness compared to SLAM on a PC.",
"All existing methods for vision-aided inertial navigation assume a camera with a global shutter, in which all the pixels in an image are captured simultaneously. However, the vast majority of consumer-grade cameras use rolling-shutter sensors, which capture each row of pixels at a slightly different time instant. The effects of the rolling shutter distortion when a camera is in motion can be very significant, and are not modelled by existing visual-inertial motion-tracking methods. In this paper we describe the first, to the best of our knowledge, method for vision-aided inertial navigation using rolling-shutter cameras. Specifically, we present an extended Kalman filter (EKF)-based method for visual-inertial odometry, which fuses the IMU measurements with observations of visual feature tracks provided by the camera. The key contribution of this work is a computationally tractable approach for taking into account the rolling-shutter effect, incurring only minimal approximations. The experimental results from the application of the method show that it is able to track, in real time, the position of a mobile phone moving in an unknown environment with an error accumulation of approximately 0.8 of the distance travelled, over hundreds of meters.",
""
]
} |
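A schematic EKF-style fusion loop for the IMU-plus-camera setting discussed above: inertial measurements drive the prediction step, and camera-derived position fixes correct it. The one-dimensional constant-acceleration model, noise levels, and simulated measurements are deliberately oversimplified assumptions, not the cited systems.

```python
import numpy as np

F = lambda dt: np.array([[1.0, dt], [0.0, 1.0]])  # state: [position, velocity]
B = lambda dt: np.array([0.5 * dt * dt, dt])      # how acceleration enters
H = np.array([[1.0, 0.0]])                        # camera observes position

def predict(x, P, accel, dt, q=1e-3):
    """IMU-driven propagation of state and covariance."""
    x = F(dt) @ x + B(dt) * accel
    P = F(dt) @ P @ F(dt).T + q * np.eye(2)
    return x, P

def update(x, P, z, r=1e-2):
    """Correction from a camera-derived position measurement z."""
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + r                 # innovation covariance
    K = P @ H.T / S                     # Kalman gain
    return x + (K * y).ravel(), (np.eye(2) - K @ H) @ P

rng = np.random.default_rng(0)
x, P = np.zeros(2), np.eye(2)
for step in range(100):
    x, P = predict(x, P, accel=0.1, dt=0.01)      # IMU rate: every step
    if step % 10 == 0:                            # camera rate: sparser
        z = x[:1] + 0.05 * rng.standard_normal(1) # noisy position fix
        x, P = update(x, P, z)                    # IMU bridges the gaps,
                                                  # e.g. during occlusion
print(np.round(x, 3), np.round(np.diag(P), 4))
```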
1708.00938 | 2953136327 | We propose associative domain adaptation, a novel technique for end-to-end domain adaptation with neural networks, the task of inferring class labels for an unlabeled target domain based on the statistical properties of a labeled source domain. Our training scheme follows the paradigm that in order to effectively derive class labels for the target domain, a network should produce statistically domain invariant embeddings, while minimizing the classification error on the labeled source domain. We accomplish this by reinforcing associations between source and target data directly in embedding space. Our method can easily be added to any existing classification network with no structural and almost no computational overhead. We demonstrate the effectiveness of our approach on various benchmarks and achieve state-of-the-art results across the board with a generic convolutional neural network architecture not specifically tuned to the respective tasks. Finally, we show that the proposed association loss produces embeddings that are more effective for domain adaptation compared to methods employing maximum mean discrepancy as a similarity measure in embedding space. | The CORAL method @cite_22 explicitly forces the covariance of the target data onto the source data. The authors then apply supervised training to this transformed source domain with the original labels. This idea is extended to second-order statistics of features in deep neural networks in @cite_8 . (A minimal NumPy sketch of this covariance alignment follows this row.) | {
"cite_N": [
"@cite_22",
"@cite_8"
],
"mid": [
"2173393671",
"2467286621"
],
"abstract": [
"Unlike human learning, machine learning often fails to handle changes between training (source) and test (target) input distributions. Such domain shifts, common in practical scenarios, severely damage the performance of conventional machine learning methods. Supervised domain adaptation methods have been proposed for the case when the target data have labels, including some that perform very well despite being \"frustratingly easy\" to implement. However, in practice, the target domain is often unlabeled, requiring unsupervised adaptation. We propose a simple, effective, and efficient method for unsupervised domain adaptation called CORrelation ALignment (CORAL). CORAL minimizes domain shift by aligning the second-order statistics of source and target distributions, without requiring any target labels. Even though it is extraordinarily simple--it can be implemented in four lines of Matlab code--CORAL performs remarkably well in extensive evaluations on standard benchmark datasets.",
"Deep neural networks are able to learn powerful representations from large quantities of labeled input data, however they cannot always generalize well across changes in input distributions. Domain adaptation algorithms have been proposed to compensate for the degradation in performance due to domain shift. In this paper, we address the case when the target domain is unlabeled, requiring unsupervised adaptation. CORAL is a \"frustratingly easy\" unsupervised domain adaptation method that aligns the second-order statistics of the source and target distributions with a linear transformation. Here, we extend CORAL to learn a nonlinear transformation that aligns correlations of layer activations in deep neural networks (Deep CORAL). Experiments on standard benchmark datasets show state-of-the-art performance."
]
} |
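A minimal NumPy sketch of correlation alignment in the spirit of the CORAL idea described above: whiten the source features, then re-color them with the target covariance before training on the still-labeled source. The regularizer eps and the toy data are assumptions; a faithful reimplementation should follow the cited paper's exact recipe.

```python
import numpy as np

def mat_pow(m, p):
    """Symmetric matrix power via eigendecomposition."""
    w, v = np.linalg.eigh(m)
    return v @ np.diag(w ** p) @ v.T

def coral(source, target, eps=1e-3):
    """Align second-order statistics of `source` (rows = samples) to
    `target`: whiten with the source covariance, re-color with the target's."""
    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])
    return source @ mat_pow(cs, -0.5) @ mat_pow(ct, 0.5)

rng = np.random.default_rng(0)
src = rng.standard_normal((500, 4)) * [1.0, 2.0, 3.0, 4.0]   # source stats
tgt = rng.standard_normal((500, 4)) * [4.0, 3.0, 2.0, 1.0]   # shifted target
aligned = coral(src, tgt)
print(np.round(np.cov(aligned, rowvar=False), 1))  # ~ target covariance
```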
1708.00938 | 2953136327 | We propose associative domain adaptation, a novel technique for end-to-end domain adaptation with neural networks, the task of inferring class labels for an unlabeled target domain based on the statistical properties of a labeled source domain. Our training scheme follows the paradigm that in order to effectively derive class labels for the target domain, a network should produce statistically domain invariant embeddings, while minimizing the classification error on the labeled source domain. We accomplish this by reinforcing associations between source and target data directly in embedding space. Our method can easily be added to any existing classification network with no structural and almost no computational overhead. We demonstrate the effectiveness of our approach on various benchmarks and achieve state-of-the-art results across the board with a generic convolutional neural network architecture not specifically tuned to the respective tasks. Finally, we show that the proposed association loss produces embeddings that are more effective for domain adaptation compared to methods employing maximum mean discrepancy as a similarity measure in embedding space. | Most works that explicitly minimize latent feature discrepancy use MMD in some variant. That is, they use MMD as @math in order to achieve the domain invariance defined above. The authors of @cite_23 propose the Deep Adaptation Network architecture. Exploiting the fact that learned features transition from general to specific within the network, they train the first layers of a CNN jointly for the source and target domains, then train individual task-specific layers while minimizing the multiple-kernel maximum mean discrepancies between these layers. (A small sketch of an empirical MMD estimate follows this row.) | {
"cite_N": [
"@cite_23"
],
"mid": [
"2951670162"
],
"abstract": [
"Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multi-kernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks."
]
} |
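A small sketch of the (biased) empirical squared MMD with a Gaussian kernel, the discrepancy that MMD-based methods such as the one above minimize between source and target embeddings. The bandwidth and toy data are assumptions.

```python
import numpy as np

def mmd2(x, y, bandwidth=1.0):
    """Biased empirical estimate of squared MMD with a Gaussian kernel."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bandwidth ** 2))
    return float(k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean())

rng = np.random.default_rng(0)
src = rng.standard_normal((200, 2))              # source embeddings
tgt_near = rng.standard_normal((200, 2))         # same distribution
tgt_far = rng.standard_normal((200, 2)) + 2.0    # shifted distribution
print(round(mmd2(src, tgt_near), 3), round(mmd2(src, tgt_far), 3))
# small vs. clearly larger: the quantity an MMD-based loss drives down
```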
1708.00938 | 2953136327 | We propose associative domain adaptation, a novel technique for end-to-end domain adaptation with neural networks, the task of inferring class labels for an unlabeled target domain based on the statistical properties of a labeled source domain. Our training scheme follows the paradigm that in order to effectively derive class labels for the target domain, a network should produce statistically domain invariant embeddings, while minimizing the classification error on the labeled source domain. We accomplish this by reinforcing associations between source and target data directly in embedding space. Our method can easily be added to any existing classification network with no structural and almost no computational overhead. We demonstrate the effectiveness of our approach on various benchmarks and achieve state-of-the-art results across the board with a generic convolutional neural network architecture not specifically tuned to the respective tasks. Finally, we show that the proposed association loss produces embeddings that are more effective for domain adaptation compared to methods employing maximum mean discrepancy as a similarity measure in embedding space. | The technique of task-specific but coupled layers is further explored in @cite_3 and @cite_25 . The authors of @cite_3 propose to train the source and target domains individually while the network parameters of each layer are regularized to be linear transformations of each other. In order to train for domain-invariant features, they minimize the MMD of the embedding layer. On the other hand, the authors of @cite_25 maintain a shared representation of both domains and private representations of each individual domain in their Domain Separation architecture. (A tiny sketch of such a coupled-weights penalty follows this row.) | {
"cite_N": [
"@cite_25",
"@cite_3"
],
"mid": [
"2953127297",
"2312004824"
],
"abstract": [
"The cost of large scale data collection and annotation often makes the application of machine learning algorithms to new tasks or datasets prohibitively expensive. One approach circumventing this cost is training models on synthetic data where annotations are provided automatically. Despite their appeal, such models often fail to generalize from synthetic to real images, necessitating domain adaptation algorithms to manipulate these models before they can be successfully applied. Existing approaches focus either on mapping representations from one domain to the other, or on learning to extract features that are invariant to the domain from which they were extracted. However, by focusing only on creating a mapping or shared representation between the two domains, they ignore the individual characteristics of each domain. We suggest that explicitly modeling what is unique to each domain can improve a model's ability to extract domain-invariant features. Inspired by work on private-shared component analysis, we explicitly learn to extract image representations that are partitioned into two subspaces: one component which is private to each domain and one which is shared across domains. Our model is trained not only to perform the task we care about in the source domain, but also to use the partitioned representation to reconstruct the images from both domains. Our novel architecture results in a model that outperforms the state-of-the-art on a range of unsupervised domain adaptation scenarios and additionally produces visualizations of the private and shared representations enabling interpretation of the domain adaptation process.",
"The performance of a classifier trained on data coming from a specific domain typically degrades when applied to a related but different one. While annotating many samples from the new domain would address this issue, it is often too expensive or impractical. Domain Adaptation has therefore emerged as a solution to this problem; It leverages annotated data from a source domain, in which it is abundant, to train a classifier to operate in a target domain, in which it is either sparse or even lacking altogether. In this context, the recent trend consists of learning deep architectures whose weights are shared for both domains, which essentially amounts to learning domain invariant features. Here, we show that it is more effective to explicitly model the shift from one domain to the other. To this end, we introduce a two-stream architecture, where one operates in the source domain and the other in the target domain. In contrast to other approaches, the weights in corresponding layers are related but not shared . We demonstrate that this both yields higher accuracy than state-of-the-art methods on several object recognition and detection tasks and consistently outperforms networks with shared weights in both supervised and unsupervised settings."
]
} |
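A tiny sketch of the coupled-weights idea above: the two streams keep separate weight matrices, and a penalty ties each target-stream layer to a linear transform of its source-stream twin. The scalar transform (a, b) and the squared-error penalty are illustrative assumptions, not the cited parameterization.

```python
import numpy as np

def coupled_penalty(w_src, w_tgt, a=1.0, b=0.0, lam=0.1):
    """Regularizer added to the task loss; a == 1, b == 0 reduces to a
    plain pull towards exact weight sharing."""
    return lam * float(np.sum((w_tgt - (a * w_src + b)) ** 2))

rng = np.random.default_rng(0)
w_source = rng.standard_normal((3, 3))
w_target = w_source + 0.05 * rng.standard_normal((3, 3))   # small drift
print(round(coupled_penalty(w_source, w_target), 4))        # stays small
```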
1708.00969 | 2743887479 | The popularity and widespread usage of online social networks (OSN) have attracted cyber criminals who have used OSNs as a platform to spread malware. Among different types of malware in OSNs, Trojan is the most popular type with hundreds of attacks on OSN users in the past few years. Trojans infecting a user's computer have the ability to steal confidential information, install ransomware and infect other computers in the network. Therefore, it is important to understand propagation dynamics of Trojans in OSNs in order to detect, contain and remove them as early as possible. In this article, we present an analytical model to study propagation characteristics of Trojans and factors that impact their propagation in an online social network. The proposed model assumes all the topological characteristics of real online social networks. Moreover, the model takes into account attacking trends of modern Trojans, the role of anti-virus (AV) products, and security practices of OSN users and AV software providers. By taking into account these factors, the proposed model can accurately and realistically estimate the infection rate caused by a Trojan malware in an OSN as well as the recovery rate of the user population. | Existing works on modelling malware propagation in online social networks include @cite_13 @cite_29 and @cite_55 . Faghani and his collaborators @cite_13 @cite_29 modeled the propagation of cross-site-scripting (XSS) worms in OSNs. The authors of @cite_55 modeled the propagation of Trojans in the social network Twitter, which is represented by a directed graph due to its one-directional (follower-followee) relationships. To the best of our knowledge, the analytical model we propose in this article is the first that characterizes the propagation of Trojans in social networks represented by undirected graphs such as Facebook, LinkedIn and Orkut. (A compact epidemic-style simulation sketch follows this row.) | {
"cite_N": [
"@cite_55",
"@cite_29",
"@cite_13"
],
"mid": [
"2055407442",
"",
"1984156542"
],
"abstract": [
"Social Networks have rapidly become one of the most used Internet based applications. The structure and ease of information dissemination provides an opportunity for adversaries to use it for their own malicious purpose. In this paper we investigate a popular social network – Twitter as a malware propagation medium. We present a basic model for Twitter-based malware propagation using epidemic theory. Our analysis shows that even with a low degree of connectivity and a low probability of clicking links, Twitter and its structure can be exploited to infect many nodes.",
"",
"We present analytical models and simulation results that characterize the impacts of the following factors on the propagation of cross-site scripting (XSS) worms in online social networks (OSNs): 1) user behaviors, namely, the probability of visiting a friend's profile versus a stranger's; 2) the highly clustered structure of communities; and 3) community sizes. Our analyses and simulation results show that the clustered structure of a community and users' tendency to visit their friends more often than strangers help slow down the propagation of XSS worms in OSNs. We then present a study of selective monitoring schemes that are more resource efficient than the exhaustive checking approach used by the Facebook detection system which monitors every possible read and write operation of every user in the network. The studied selective monitoring schemes take advantage of the characteristics of OSNs such as the highly clustered structure and short average distance to select only a subset of strategically placed users to monitor, thus minimizing resource usage while maximizing the monitoring coverage. We present simulation results to show the effectiveness of the studied selective monitoring schemes for XSS worm detection."
]
} |
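A compact sketch of epidemic-style malware propagation on an undirected contact graph, in the spirit of the models discussed above: infected users pass the Trojan to susceptible friends, and anti-virus clean-up moves users to a recovered state. The Erdos-Renyi topology and all rates are illustrative assumptions, not the paper's model.

```python
import random

def simulate_sir(n=500, avg_degree=8, beta=0.05, gamma=0.02,
                 steps=200, seed=0):
    rng = random.Random(seed)
    # Random undirected graph: an Erdos-Renyi stand-in for an OSN topology.
    p = avg_degree / (n - 1)
    nbrs = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                nbrs[i].append(j)
                nbrs[j].append(i)
    state = ["S"] * n                     # susceptible / infected / recovered
    state[0] = "I"                        # patient zero
    for _ in range(steps):
        nxt = list(state)
        for u in range(n):
            if state[u] == "I":
                for v in nbrs[u]:         # Trojan message to each friend
                    if state[v] == "S" and rng.random() < beta:
                        nxt[v] = "I"
                if rng.random() < gamma:  # AV detects and cleans the host
                    nxt[u] = "R"
        state = nxt
    return {s: state.count(s) for s in "SIR"}

print(simulate_sir())
```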
1708.01101 | 2952025147 | Articulated human pose estimation is a fundamental yet challenging task in computer vision. The difficulty is particularly pronounced in scale variations of human body parts when camera view changes or severe foreshortening happens. Although pyramid methods are widely used to handle scale changes at inference time, learning feature pyramids in deep convolutional neural networks (DCNNs) is still not well explored. In this work, we design a Pyramid Residual Module (PRMs) to enhance the invariance in scales of DCNNs. Given input features, the PRMs learn convolutional filters on various scales of input features, which are obtained with different subsampling ratios in a multi-branch network. Moreover, we observe that it is inappropriate to adopt existing methods to initialize the weights of multi-branch networks, which achieve superior performance than plain networks in many tasks recently. Therefore, we provide theoretic derivation to extend the current weight initialization scheme to multi-branch network structures. We investigate our method on two standard benchmarks for human pose estimation. Our approach obtains state-of-the-art results on both benchmarks. Code is available at this https URL. | DCNNs combining multiple layers. In contrast to traditional plain networks (e.g., AlexNet @cite_1 and VGG-nets @cite_21 ), multi-branch networks exhibit better performance on various vision tasks. In classification, the Inception models @cite_9 @cite_59 @cite_36 @cite_53 are among the most successful multi-branch networks. The input of each module is first mapped to a low dimension by @math convolutions, then transformed by a set of filters of different sizes to capture various context information, and combined by concatenation. ResNet @cite_11 @cite_20 can be regarded as a two-branch network with one identity-mapping branch. ResNeXt @cite_30 is an extension of ResNet, in which all branches share the same topology. The implicitly learned transforms are aggregated by summation. In our work, we use multi-branch networks to explore another possibility: learning multi-scale features. (A small sketch of multi-scale branch aggregation follows this row.) | {
"cite_N": [
"@cite_30",
"@cite_36",
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_53",
"@cite_59",
"@cite_20",
"@cite_11"
],
"mid": [
"2953328958",
"2949605076",
"2950179405",
"1686810756",
"",
"2274287116",
"2949117887",
"2302255633",
"2949650786"
],
"abstract": [
"We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call \"cardinality\" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online.",
"Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set demonstrate substantial gains over the state of the art: 21.2 top-1 and 5.6 top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and with using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5 top-5 error on the validation set (3.6 error on the test set) and 17.3 top-1 error on the validation set.",
"We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"",
"Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question of whether there are any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual and one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the ImageNet classification (CLS) challenge",
"Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9 top-5 validation error (and 4.8 test error), exceeding the accuracy of human raters.",
"Deep residual networks have emerged as a family of extremely deep architectures showing compelling accuracy and nice convergence behaviors. In this paper, we analyze the propagation formulations behind the residual building blocks, which suggest that the forward and backward signals can be directly propagated from one block to any other block, when using identity mappings as the skip connections and after-addition activation. A series of ablation experiments support the importance of these identity mappings. This motivates us to propose a new residual unit, which makes training easier and improves generalization. We report improved results using a 1001-layer ResNet on CIFAR-10 (4.62 error) and CIFAR-100, and a 200-layer ResNet on ImageNet. Code is available at: https: github.com KaimingHe resnet-1k-layers.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation."
]
} |
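A NumPy sketch of the two ideas the row above combines: multi-branch aggregation by summation (as in ResNeXt) and branches that operate on subsampled versions of their input, echoing the pyramid idea. Per-branch scalar weights stand in for learned convolutions, and the average-pool/nearest-neighbor resampling is an illustrative assumption, not the paper's module.

```python
import numpy as np

def downsample(x, r):
    """Average-pool a 2-D feature map by an integer factor r."""
    h, w = (x.shape[0] // r) * r, (x.shape[1] // r) * r
    return x[:h, :w].reshape(h // r, r, w // r, r).mean(axis=(1, 3))

def upsample(x, shape):
    """Nearest-neighbor resize back to `shape` by repeat-and-crop."""
    fr = (shape[0] + x.shape[0] - 1) // x.shape[0]
    fc = (shape[1] + x.shape[1] - 1) // x.shape[1]
    return np.repeat(np.repeat(x, fr, axis=0), fc, axis=1)[:shape[0], :shape[1]]

def pyramid_residual(x, branch_weights=(1.0, 0.5, 0.25), ratios=(1, 2, 4)):
    """Sum branches computed at several scales, then add the identity skip.
    Scalar weights stand in for each branch's learned convolution."""
    out = np.zeros_like(x, dtype=float)
    for w, r in zip(branch_weights, ratios):
        feat = x if r == 1 else upsample(downsample(x, r), x.shape)
        out += w * feat
    return x + out                        # residual (identity) connection

x = np.random.default_rng(0).standard_normal((8, 8))
print(pyramid_residual(x).shape)          # (8, 8): spatial size preserved
```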
1708.01101 | 2952025147 | Articulated human pose estimation is a fundamental yet challenging task in computer vision. The difficulty is particularly pronounced in scale variations of human body parts when camera view changes or severe foreshortening happens. Although pyramid methods are widely used to handle scale changes at inference time, learning feature pyramids in deep convolutional neural networks (DCNNs) is still not well explored. In this work, we design a Pyramid Residual Module (PRMs) to enhance the invariance in scales of DCNNs. Given input features, the PRMs learn convolutional filters on various scales of input features, which are obtained with different subsampling ratios in a multi-branch network. Moreover, we observe that it is inappropriate to adopt existing methods to initialize the weights of multi-branch networks, which achieve superior performance than plain networks in many tasks recently. Therefore, we provide theoretic derivation to extend the current weight initialization scheme to multi-branch network structures. We investigate our method on two standard benchmarks for human pose estimation. Our approach obtains state-of-the-art results on both benchmarks. Code is available at this https URL. | Recent methods in pose estimation, object detection and segmentation used features from multiple layers for making predictions @cite_35 @cite_31 @cite_54 @cite_26 @cite_46 @cite_49 . Our approach is complementary to these works. For example, we adopt Hourglass as our basic structure, and replace its original residual units, which learn features from a single scale, with the proposed Pyramid Residual Module. | {
"cite_N": [
"@cite_35",
"@cite_31",
"@cite_26",
"@cite_54",
"@cite_49",
"@cite_46"
],
"mid": [
"2193145675",
"2490270993",
"2951829713",
"",
"2412782625",
"2950762923"
],
"abstract": [
"We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.",
"A unified deep neural network, denoted the multi-scale CNN (MS-CNN), is proposed for fast multi-scale object detection. The MS-CNN consists of a proposal sub-network and a detection sub-network. In the proposal sub-network, detection is performed at multiple output layers, so that receptive fields match objects of different scales. These complementary scale-specific detectors are combined to produce a strong multi-scale object detector. The unified network is learned end-to-end, by optimizing a multi-task loss. Feature upsampling by deconvolution is also explored, as an alternative to input upsampling, to reduce the memory and computation costs. State-of-the-art object detection performance, at up to 15 fps, is reported on datasets, such as KITTI and Caltech, containing a substantial number of small objects.",
"It is well known that contextual and multi-scale representations are important for accurate visual recognition. In this paper we present the Inside-Outside Net (ION), an object detector that exploits information both inside and outside the region of interest. Contextual information outside the region of interest is integrated using spatial recurrent neural networks. Inside, we use skip pooling to extract information at multiple scales and levels of abstraction. Through extensive experiments we evaluate the design space and provide readers with an overview of what tricks of the trade are important. ION improves state-of-the-art on PASCAL VOC 2012 object detection from 73.9 to 76.4 mAP. On the new and more challenging MS COCO dataset, we improve state-of-art-the from 19.7 to 33.1 mAP. In the 2015 MS COCO Detection Challenge, our ION model won the Best Student Entry and finished 3rd place overall. As intuition suggests, our detection results provide strong evidence that context and multi-scale representations improve small object detection.",
"",
"In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.",
"This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a \"stacked hourglass\" network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods."
]
} |
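The Pyramid Residual Module summarized in the record above learns filters over several subsampled versions of the input features inside a multi-branch residual unit. A simplified sketch of that idea follows; the branch count, subsampling ratios, and plain 3×3 convolutions are illustrative assumptions, not the paper's exact module:

```python
import torch.nn as nn
import torch.nn.functional as F

class PyramidResidualModule(nn.Module):
    """Multi-branch residual unit: each branch downsamples the input by a
    different ratio, convolves at that scale, upsamples back to the input
    resolution, and the branch outputs are summed inside a skip connection."""
    def __init__(self, channels, ratios=(1.0, 0.75, 0.5)):
        super().__init__()
        self.ratios = ratios
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1, bias=False)
             for _ in ratios])

    def forward(self, x):
        h, w = x.shape[-2:]
        out = 0
        for ratio, conv in zip(self.ratios, self.branches):
            if ratio < 1.0:
                y = F.interpolate(x, scale_factor=ratio, mode='bilinear',
                                  align_corners=False)
                y = F.interpolate(conv(y), size=(h, w), mode='bilinear',
                                  align_corners=False)
            else:
                y = conv(x)
            out = out + y
        return x + out  # residual connection around the pyramid of branches
```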
1708.00993 | 2743555600 | Linguistic resources such as part-of-speech (POS) tags have been extensively used in statistical machine translation (SMT) frameworks and have yielded better performance. However, usage of such linguistic annotations in neural machine translation (NMT) systems has been left under-explored. In this work, we show that multi-task learning is a successful and easy approach to introduce additional knowledge into an end-to-end neural attentional model. By jointly training several natural language processing (NLP) tasks in one system, we are able to leverage common information and improve the performance of the individual task. We analyze the impact of three design decisions in multi-task learning: the tasks used in training, the training schedule, and the degree of parameter sharing across the tasks, which is defined by the network architecture. The experiments are conducted for a German-to-English translation task. As additional linguistic resources, we exploit POS information and named entities (NE). Experiments show that the translation quality can be improved by up to 1.5 BLEU points under the low-resource condition. The performance of the POS tagger is also improved using the multi-task learning scheme. | The POS-based information has been integrated for language models in . In neural machine translation, using additional word factors such as POS tags has been shown to be beneficial @cite_17 . | {
"cite_N": [
"@cite_17"
],
"mid": [
"2410082850"
],
"abstract": [
"Neural machine translation has recently achieved impressive results, while using little in the way of external linguistic information. In this paper we show that the strong learning capability of neural MT models does not make linguistic features redundant; they can be easily incorporated to provide further improvements in performance. We generalize the embedding layer of the encoder in the attentional encoder--decoder architecture to support the inclusion of arbitrary features, in addition to the baseline word feature. We add morphological features, part-of-speech tags, and syntactic dependency labels as input features to English German, and English->Romanian neural machine translation systems. In experiments on WMT16 training and test sets, we find that linguistic input features improve model quality according to three metrics: perplexity, BLEU and CHRF3. An open-source implementation of our neural MT system is available, as are sample files and configurations."
]
} |
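The record above notes that feeding additional word factors such as POS tags alongside the words themselves has helped NMT. One common realization of factored inputs, sketched here under assumed vocabulary and dimension sizes, concatenates a word embedding with a POS-tag embedding at the encoder input:

```python
import torch
import torch.nn as nn

class FactoredEmbedding(nn.Module):
    """Encoder input with linguistic factors: the word embedding is
    concatenated with the embedding of an auxiliary feature (here a POS
    tag) before being fed to the encoder RNN."""
    def __init__(self, vocab_size=50000, n_pos=54, d_word=480, d_pos=32):
        super().__init__()
        self.word = nn.Embedding(vocab_size, d_word)
        self.pos = nn.Embedding(n_pos, d_pos)

    def forward(self, word_ids, pos_ids):
        # word_ids, pos_ids: (batch, seq_len) -> (batch, seq_len, d_word + d_pos)
        return torch.cat([self.word(word_ids), self.pos(pos_ids)], dim=-1)
```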
1708.00993 | 2743555600 | Linguistic resources such as part-of-speech (POS) tags have been extensively used in statistical machine translation (SMT) frameworks and have yielded better performance. However, usage of such linguistic annotations in neural machine translation (NMT) systems has been left under-explored. In this work, we show that multi-task learning is a successful and easy approach to introduce additional knowledge into an end-to-end neural attentional model. By jointly training several natural language processing (NLP) tasks in one system, we are able to leverage common information and improve the performance of the individual task. We analyze the impact of three design decisions in multi-task learning: the tasks used in training, the training schedule, and the degree of parameter sharing across the tasks, which is defined by the network architecture. The experiments are conducted for a German-to-English translation task. As additional linguistic resources, we exploit POS information and named entities (NE). Experiments show that the translation quality can be improved by up to 1.5 BLEU points under the low-resource condition. The performance of the POS tagger is also improved using the multi-task learning scheme. | A special case of multi-task learning for attention-based models has been explored. In multi-lingual machine translation, for example, the tasks are still machine translation tasks, but they need to consider different language pairs. In this case, a system with an individual encoder and decoder @cite_18 as well as a system with a shared encoder-decoder @cite_9 @cite_15 have been proposed. | {
"cite_N": [
"@cite_9",
"@cite_18",
"@cite_15"
],
"mid": [
"2555745756",
"2443536229",
""
],
"abstract": [
"In this paper, we present our first attempts in building a multilingual Neural Machine Translation framework under a unified approach. We are then able to employ attention-based NMT for many-to-many multilingual translation tasks. Our approach does not require any special treatment on the network architecture and it allows us to learn minimal number of free parameters in a standard way of training. Our approach has shown its effectiveness in an under-resourced translation scenario with considerable improvements up to 2.6 BLEU points. In addition, the approach has achieved interesting and promising results when applied in the translation task that there is no direct parallel corpus between source and target languages.",
"In this paper, we propose a novel finetuning algorithm for the recently introduced multi-way, mulitlingual neural machine translate that enables zero-resource machine translation. When used together with novel many-to-one translation strategies, we empirically show that this finetuning algorithm allows the multi-way, multilingual model to translate a zero-resource language pair (1) as well as a single-pair neural translation model trained with up to 1M direct parallel sentences of the same language pair and (2) better than pivot-based translation strategy, while keeping only one additional copy of attention-related parameters.",
""
]
} |
1708.00993 | 2743555600 | Linguistic resources such as part-of-speech (POS) tags have been extensively used in statistical machine translation (SMT) frameworks and have yielded better performance. However, usage of such linguistic annotations in neural machine translation (NMT) systems has been left under-explored. In this work, we show that multi-task learning is a successful and easy approach to introduce additional knowledge into an end-to-end neural attentional model. By jointly training several natural language processing (NLP) tasks in one system, we are able to leverage common information and improve the performance of the individual task. We analyze the impact of three design decisions in multi-task learning: the tasks used in training, the training schedule, and the degree of parameter sharing across the tasks, which is defined by the network architecture. The experiments are conducted for a German-to-English translation task. As additional linguistic resources, we exploit POS information and named entities (NE). Experiments show that the translation quality can be improved by up to 1.5 BLEU points under the low-resource condition. The performance of the POS tagger is also improved using the multi-task learning scheme. | In the encoder, an RNN is used to encode the source sentence into a fixed-size continuous-space representation by feeding the source sentence word-by-word into the network. First, source words are encoded into a one-hot encoding. Then a linear transformation of this into a continuous space, referred to as word embeddings, is learned. An RNN model will learn the source sentence representation over these word embeddings. In a second step, the decoder is initialized with the representation of the source sentence and then generates the target sequence one word after another, using the last generated word as input to the RNN. In order to obtain the output probability at each target position, a softmax layer that takes the hidden state of the RNN as input is used @cite_6 . | {
"cite_N": [
"@cite_6"
],
"mid": [
"2949888546"
],
"abstract": [
"Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier."
]
} |
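The last related-work paragraph above walks through the plain encoder-decoder recipe: an RNN consumes the source word embeddings, its final state initializes the decoder, and a softmax over the decoder state scores each target word. A minimal sketch follows; the GRU cells, absence of attention, teacher-forced decoder inputs, and all sizes are illustrative assumptions:

```python
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Plain encoder-decoder: the encoder GRU's final state summarizes the
    source sentence and initializes the decoder GRU, which scores each
    target word from its hidden state via a linear layer (the softmax is
    applied inside the cross-entropy loss)."""
    def __init__(self, src_vocab, tgt_vocab, d_emb=256, d_hid=512):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, d_emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, d_emb)
        self.encoder = nn.GRU(d_emb, d_hid, batch_first=True)
        self.decoder = nn.GRU(d_emb, d_hid, batch_first=True)
        self.out = nn.Linear(d_hid, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        _, state = self.encoder(self.src_emb(src_ids))       # encode source
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)
        return self.out(dec_out)  # logits for every target position
```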
1708.00524 | 2740582239 | NLP tasks are often limited by scarcity of manually annotated data. In social media sentiment analysis and related tasks, researchers have therefore used binarized emoticons and specific hashtags as forms of distant supervision. Our paper shows that by extending the distant supervision to a more diverse set of noisy labels, the models can learn richer representations. Through emoji prediction on a dataset of 1246 million tweets containing one of 64 common emojis we obtain state-of-the-art performance on 8 benchmark datasets within sentiment, emotion and sarcasm detection using a single pretrained model. Our analyses confirm that the diversity of our emotional labels yield a performance improvement over previous distant supervision approaches. | Another way of automatically interpreting the emotional content of an emoji is to learn emoji embeddings from the words describing the emoji-semantics in official emoji tables @cite_0 . This approach, in our context, suffers from two severe limitations: a) It requires emojis at test time while there are many domains with limited or no usage of emojis. b) The tables do not capture the dynamics of emoji usage, i.e., drift in an emoji's intended meaning over time. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2527467788"
],
"abstract": [
"Many current natural language processing applications for social media rely on representation learning and utilize pre-trained word embeddings. There currently exist several publicly-available, pre-trained sets of word embeddings, but they contain few or no emoji representations even as emoji usage in social media has increased. In this paper we release emoji2vec, pre-trained embeddings for all Unicode emoji which are learned from their description in the Unicode emoji standard. The resulting emoji embeddings can be readily used in downstream social natural language processing applications alongside word2vec. We demonstrate, for the downstream task of sentiment analysis, that emoji embeddings learned from short descriptions outperforms a skip-gram model trained on a large collection of tweets, while avoiding the need for contexts in which emoji need to appear frequently in order to estimate a representation."
]
} |
1708.00524 | 2740582239 | NLP tasks are often limited by scarcity of manually annotated data. In social media sentiment analysis and related tasks, researchers have therefore used binarized emoticons and specific hashtags as forms of distant supervision. Our paper shows that by extending the distant supervision to a more diverse set of noisy labels, the models can learn richer representations. Through emoji prediction on a dataset of 1246 million tweets containing one of 64 common emojis we obtain state-of-the-art performance on 8 benchmark datasets within sentiment, emotion and sarcasm detection using a single pretrained model. Our analyses confirm that the diversity of our emotional labels yield a performance improvement over previous distant supervision approaches. | Knowledge can be transferred from the emoji dataset to the target task in many different ways. In particular, multitask learning with simultaneous training on multiple datasets has shown promising results @cite_14 . However, multitask learning requires access to the emoji dataset whenever the classifier needs to be tuned for a new target task. Requiring access to the dataset is problematic in terms of violating data access regulations. There are also issues from a data storage perspective as the dataset used for this research contains hundreds of millions of tweets (see Table ). Instead we use transfer learning @cite_17 as described in , which does not require access to the original dataset, but only the pretrained classifier. | {
"cite_N": [
"@cite_14",
"@cite_17"
],
"mid": [
"2117130368",
"2616180702"
],
"abstract": [
"We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art-performance.",
"Deep learning algorithms seek to exploit the unknown structure in the input distribution in order to discover good representations, often at multiple levels, with higher-level learned features defined in terms of lower-level features. The objective is to make these higher-level representations more abstract, with their individual features more invariant to most of the variations that are typically present in the training distribution, while collectively preserving as much as possible of the information in the input. Ideally, we would like these representations to disentangle the unknown factors of variation that underlie the training distribution. Such unsupervised learning of representations can be exploited usefully under the hypothesis that the input distribution P(x) is structurally related to some task of interest, say predicting P(y x). This paper focuses on the context of the Unsupervised and Transfer Learning Challenge, on why unsupervised pre-training of representations can be useful, and how it can be exploited in the transfer learning scenario, where we care about predictions on examples that are not from the same distribution as the training distribution."
]
} |
1708.00674 | 2741483448 | Robots operating in populated environments encounter many different types of people, some of whom might have an advanced need for cautious interaction, because of physical impairments or their advanced age. Robots therefore need to recognize such advanced demands to provide appropriate assistance, guidance or other forms of support. In this paper, we propose a depth-based perception pipeline that estimates the position and velocity of people in the environment and categorizes them according to the mobility aids they use: pedestrian, person in wheelchair, person in a wheelchair with a person pushing them, person with crutches and person using a walker. We present a fast region proposal method that feeds a Region-based Convolutional Network (Fast R-CNN). With this, we speed up the object detection process by a factor of seven compared to a dense sliding window approach. We furthermore propose a probabilistic position, velocity and class estimator to smooth the CNN's detections and account for occlusions and misclassifications. In addition, we introduce a new hospital dataset with over 17,000 annotated RGB-D images. Extensive experiments confirm that our pipeline successfully keeps track of people and their mobility aids, even in challenging situations with multiple people from different categories and frequent occlusions. Videos of our experiments and the dataset are available at this http URL | address the problem of multi-person tracking and detection using a stereo vision system mounted on a mobile platform, integrating visual odometry, depth estimation and pedestrian detection for improved perception. propose a method to track multiple people from a moving platform based on a particle filter approach. Several different detectors are used, such as upper-body, face, depth-based shape, skin-color and motion detectors. Recently, extensive frameworks that include several people detection and tracking methods for mobile robots operating in indoor environments have been presented @cite_9 @cite_7 . In comparison to the mentioned frameworks, we focus on a multi-class detection problem and track not only position and velocity but also the class throughout time. Further, previous approaches rely on manually designed detectors for different body parts, while we use a single neural network detector that learns those body features automatically. | {
"cite_N": [
"@cite_9",
"@cite_7"
],
"mid": [
"2411443472",
"861352110"
],
"abstract": [
"Tracking people is a key technology for robots and intelligent systems in human environments. Many person detectors, filtering methods and data association algorithms for people tracking have been proposed in the past 15+ years in both the robotics and computer vision communities, achieving decent tracking performances from static and mobile platforms in real-world scenarios. However, little effort has been made to compare these methods, analyze their performance using different sensory modalities and study their impact on different performance metrics. In this paper, we propose a fully integrated real-time multi-modal laser RGB-D people tracking framework for moving platforms in environments like a busy airport terminal. We conduct experiments on two challenging new datasets collected from a first-person perspective, one of them containing very dense crowds of people with up to 30 individuals within close range at the same time. We consider four different, recently proposed tracking methods and study their impact on seven different performance metrics, in both single and multi-modal settings. We extensively discuss our findings, which indicate that more complex data association methods may not always be the better choice, and derive possible future research directions.",
"All currently used mobile robot platforms are able to navigate safely through their environment, avoiding static and dynamic obstacles. However, in human populated environments mere obstacle avoidance is not sufficient to make humans feel comfortable and safe around robots. To this end, a large community is currently producing human-aware navigation approaches to create a more socially acceptable robot behaviour. Amajorbuilding block for all Human-Robot Spatial Interaction is the ability of detecting and tracking humans in the vicinity of the robot. We present a fully integrated people perception framework, designed to run in real-time on a mobile robot. This framework employs detectors based on laser and RGB-D data and a tracking approach able to fuse multiple detectors using different versions of data association and Kalman filtering. The resulting trajectories are transformed into Qualitative Spatial Relations based on a Qualitative Trajectory Calculus, to learn and classify different encounters using a Hidden Markov Model based representation. We present this perception pipeline, which is fully implemented into the Robot Operating System (ROS), in a small proof of concept experiment. All components are readily available for download, and free to use under the MIT license, to researchers in all fields, especially focussing on social interaction learning by providing different kinds of output, i.e. Qualitative Relations and trajectories."
]
} |
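The related-work paragraph above contrasts hand-designed multi-detector trackers with the paper's approach of tracking position, velocity, and class over time. A toy sketch of the two per-track updates such an estimator might use — a constant-velocity Kalman prediction and a Bayesian class-belief update; the motion model and noise magnitudes are illustrative, not the paper's:

```python
import numpy as np

def kalman_predict(x, P, dt=0.1, q=1.0):
    """Constant-velocity prediction over the state [px, py, vx, vy]."""
    F = np.array([[1., 0., dt, 0.],
                  [0., 1., 0., dt],
                  [0., 0., 1., 0.],
                  [0., 0., 0., 1.]])
    Q = q * np.eye(4)                      # crude isotropic process noise
    return F @ x, F @ P @ F.T + Q

def class_belief_update(belief, likelihood):
    """Bayes update of the per-track class distribution from one frame's
    (possibly noisy) classifier scores."""
    posterior = belief * likelihood
    return posterior / posterior.sum()
```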
1708.00674 | 2741483448 | Robots operating in populated environments encounter many different types of people, some of whom might have an advanced need for cautious interaction, because of physical impairments or their advanced age. Robots therefore need to recognize such advanced demands to provide appropriate assistance, guidance or other forms of support. In this paper, we propose a depth-based perception pipeline that estimates the position and velocity of people in the environment and categorizes them according to the mobility aids they use: pedestrian, person in wheelchair, person in a wheelchair with a person pushing them, person with crutches and person using a walker. We present a fast region proposal method that feeds a Region-based Convolutional Network (Fast R-CNN). With this, we speed up the object detection process by a factor of seven compared to a dense sliding window approach. We furthermore propose a probabilistic position, velocity and class estimator to smooth the CNN's detections and account for occlusions and misclassifications. In addition, we introduce a new hospital dataset with over 17,000 annotated RGB-D images. Extensive experiments confirm that our pipeline successfully keeps track of people and their mobility aids, even in challenging situations with multiple people from different categories and frequent occlusions. Videos of our experiments and the dataset are available at this http URL | Our work is further related to the research area of object detection, which has recently been dominated by deep neural network approaches, most prominently by region-based convolutional neural networks @cite_8 @cite_1 , which achieve very good results but are not yet capable of real-time operation. An interesting extension to the region-based CNN detection approaches is the very recently introduced region-based fully convolutional neural network presented by , which increases the test-time speed. Recently proposed an approach that formulates object detection as a regression problem. It can operate in real time and achieves very impressive performance on several object detection benchmarks. We also employ a region-based convolutional neural network classifier and, to achieve a fast runtime, combine it with our depth-based region proposal method. Recent work on multi-class object recognition and detection applied to mobile robot scenarios includes a Lidar-based wheelchair/walker detector @cite_2 and a human gender recognition approach @cite_5 . To the best of our knowledge, there exists no prior work that presents multi-class people detection applied to service robot scenarios. | {
"cite_N": [
"@cite_2",
"@cite_5",
"@cite_1",
"@cite_8"
],
"mid": [
"2963171508",
"1539839126",
"2613718673",
""
],
"abstract": [
"We introduce the DROW detector, a deep learning-based object detector operating on 2-dimensional (2-D) range data. Laser scanners are lighting invariant, provide accurate 2-D range data, and typically cover a large field of view, making them interesting sensors for robotics applications. So far, research on detection in laser 2-D range data has been dominated by hand-crafted features and boosted classifiers, potentially losing performance due to suboptimal design choices. We propose a convolutional neural network (CNN) based detector for this task. We show how to effectively apply CNNs for detection in 2-D range data, and propose a depth preprocessing step and a voting scheme that significantly improve CNN performance. We demonstrate our approach on wheelchairs and walkers, obtaining state of the art detection results. Apart from the training data, none of our design choices limits the detector to these two classes, though. We provide a ROS node for our detector and release our dataset containing 464 k laser scans, out of which 24 k were annotated.",
"Understanding social context is an important skill for robots that share a space with humans. In this paper, we address the problem of recognizing gender, a key piece of information when interacting with people and understanding human social relations and rules. Unlike previous work which typically considered faces or frontal body views in image data, we address the problem of recognizing gender in RGB-D data from side and back views as well. We present a large, gender-balanced, annotated, multi-perspective RGB-D dataset with full-body views of over a hundred different persons captured with both the Kinect v1 and Kinect v2 sensor. We then learn and compare several classifiers on the Kinect v2 data using a HOG baseline, two state-of-the-art deep-learning methods, and a recent tessellation-based learning approach. Originally developed for person detection in 3D data, the latter is able to learn the best selection, location and scale of a set of simple point cloud features. We show that for gender recognition, it outperforms the other approaches for both standing and walking people while being very efficient to compute with classification rates up to 150 Hz.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.",
""
]
} |
1708.00674 | 2741483448 | Robots operating in populated environments encounter many different types of people, some of whom might have an advanced need for cautious interaction, because of physical impairments or their advanced age. Robots therefore need to recognize such advanced demands to provide appropriate assistance, guidance or other forms of support. In this paper, we propose a depth-based perception pipeline that estimates the position and velocity of people in the environment and categorizes them according to the mobility aids they use: pedestrian, person in wheelchair, person in a wheelchair with a person pushing them, person with crutches and person using a walker. We present a fast region proposal method that feeds a Region-based Convolutional Network (Fast R-CNN). With this, we speed up the object detection process by a factor of seven compared to a dense sliding window approach. We furthermore propose a probabilistic position, velocity and class estimator to smooth the CNN's detections and account for occlusions and misclassifications. In addition, we introduce a new hospital dataset with over 17,000 annotated RGB-D images. Extensive experiments confirm that our pipeline successfully keeps track of people and their mobility aids, even in challenging situations with multiple people from different categories and frequent occlusions. Videos of our experiments and the dataset are available at this http URL | Another contribution of this paper is a novel, annotated large-scale dataset for multi-class people detection. In the literature there are several other datasets that include multi-class labels for people, mostly from the area of human attribute recognition @cite_11 @cite_10 or more specifically gender recognition @cite_5 . Our dataset can be valuable for the robotics community, because on one hand it provides a large number of labeled images and on the other hand it is recorded from a mobile platform. Very recently and most similar to our dataset recorded video sequences of people from a moving camera for the task of human attribute recognition. | {
"cite_N": [
"@cite_5",
"@cite_10",
"@cite_11"
],
"mid": [
"1539839126",
"2286727787",
""
],
"abstract": [
"Understanding social context is an important skill for robots that share a space with humans. In this paper, we address the problem of recognizing gender, a key piece of information when interacting with people and understanding human social relations and rules. Unlike previous work which typically considered faces or frontal body views in image data, we address the problem of recognizing gender in RGB-D data from side and back views as well. We present a large, gender-balanced, annotated, multi-perspective RGB-D dataset with full-body views of over a hundred different persons captured with both the Kinect v1 and Kinect v2 sensor. We then learn and compare several classifiers on the Kinect v2 data using a HOG baseline, two state-of-the-art deep-learning methods, and a recent tessellation-based learning approach. Originally developed for person detection in 3D data, the latter is able to learn the best selection, location and scale of a set of simple point cloud features. We show that for gender recognition, it outperforms the other approaches for both standing and walking people while being very efficient to compute with classification rates up to 150 Hz.",
"This paper addresses the problem of human visual attribute recognition, i.e., the prediction of a fixed set of semantic attributes given an image of a person. Previous work often considered the different attributes independently from each other, without taking advantage of possible dependencies between them. In contrast, we propose a method to jointly train a CNN model for all attributes that can take advantage of those dependencies, considering as input only the image without additional external pose, part or context information. We report detailed experiments examining the contribution of individual aspects, which yields beneficial insights for other researchers. Our holistic CNN achieves superior performance on two publicly available attribute datasets improving on methods that additionally rely on pose-alignment or context. To support further evaluations, we present a novel dataset, based on realistic outdoor video sequences, that contains more than 27,000 pedestrians annotated with 10 attributes. Finally, we explore design options to embrace the N A labels inherently present in this task.",
""
]
} |
1708.00601 | 2739564724 | This paper conducts a rigorous analysis for provable estimation of multidimensional arrays, in particular third-order tensors, from a random subset of its corrupted entries. Our study rests heavily on a recently proposed tensor algebraic framework in which we can obtain tensor singular value decomposition (t-SVD) that is similar to the SVD for matrices, and define a new notion of tensor rank referred to as the tubal rank. We prove that by simply solving a convex program, which minimizes a weighted combination of tubal nuclear norm, a convex surrogate for the tubal rank, and the @math -norm, one can recover an incoherent tensor exactly with overwhelming probability, provided that its tubal rank is not too large and that the corruptions are reasonably sparse. Interestingly, our result includes the recovery guarantees for the problems of tensor completion (TC) and tensor principal component analysis (TRPCA) under the same algebraic setup as special cases. An alternating direction method of multipliers (ADMM) algorithm is presented to solve this optimization problem. Numerical experiments verify our theory and real-world applications demonstrate the effectiveness of our algorithm. | In the TC problem, we would like to recover a low-rank tensor when a limited number of its entries are observed. Jain and Oh @cite_1 show that an @math symmetric tensor with CP-rank @math can be accurately estimated from @math randomly sampled entries under standard incoherence conditions on the tensor factors. In @cite_23 , highly scalable algorithms have been proposed for the task of filling in missing entries of multidimensional data by integrating CP decomposition with block coordinate descent (BCD) methods. This optimization problem is non-convex, and hence only a local minimum can be reached. It is often computationally intractable to determine the CP rank of a tensor or its best convex approximation, which makes it very difficult to recover tensors with low CP rank, particularly via convex programming. | {
"cite_N": [
"@cite_1",
"@cite_23"
],
"mid": [
"2130800351",
"2095729436"
],
"abstract": [
"We study the problem of low-rank tensor factorization in the presence of missing data. We ask the following question: how many sampled entries do we need, to efficiently and exactly reconstruct a tensor with a low-rank orthogonal decomposition? We propose a novel alternating minimization based method which iteratively refines estimates of the singular vectors. We show that under certain standard assumptions, our method can recover a three-mode n × n × n dimensional rank-r tensor exactly from O(n3 2r5 log4 n) randomly sampled entries. In the process of proving this result, we solve two challenging sub-problems for tensors with missing data. First, in analyzing the initialization step, we prove a generalization of a celebrated result by on the spectrum of random graphs. We show that this initialization step alone is sufficient to achieve the root mean squared error on the parameters bounded by C(r2n3 2(log n)4 |Ω|) from |Ω| observed entries for some constant C independent of n and r. Next, we prove global convergence of alternating minimization with this good initialization. Simulations suggest that the dependence of the sample size on the dimensionality n is indeed tight.",
"Novel parallel algorithms for tensor completion problems, with applications to recommender systems and function learning.Parallelization strategy offers greatly reduced memory requirements compared to previously published matrix equivalents.Convergence results for both alternating least squares and cyclic coordinate descent. Low-rank tensor completion addresses the task of filling in missing entries in multi-dimensional data. It has proven its versatility in numerous applications, including context-aware recommender systems and multivariate function learning. To handle large-scale datasets and applications that feature high dimensions, the development of distributed algorithms is central. In this work, we propose novel, highly scalable algorithms based on a combination of the canonical polyadic (CP) tensor format with block coordinate descent methods. Although similar algorithms have been proposed for the matrix case, the case of higher dimensions gives rise to a number of new challenges and requires a different paradigm for data distribution. The convergence of our algorithms is analyzed and numerical experiments illustrate their performance on distributed-memory architectures for tensors from a range of different applications."
]
} |
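The paragraph above surveys CP-decomposition-based completion solved by block coordinate descent. The following toy stand-in fits a CP model to the observed entries by cycling gradient steps over the three factor matrices; the rank, step size, iteration count, and initialization are arbitrary choices for illustration, not any cited method's settings:

```python
import numpy as np

def cp_completion_gd(T, mask, rank=5, steps=500, lr=0.01):
    """Toy CP completion: minimize the squared error on observed entries
    of an n1 x n2 x n3 array T (mask marks the observed positions) by
    gradient steps on each factor in turn, a crude block-coordinate scheme."""
    rng = np.random.default_rng(0)
    A = [0.1 * rng.standard_normal((n, rank)) for n in T.shape]
    for _ in range(steps):
        # Reconstruction: T_hat[i,j,k] = sum_r A0[i,r] * A1[j,r] * A2[k,r]
        T_hat = np.einsum('ir,jr,kr->ijk', *A)
        R = mask * (T_hat - T)              # residual on observed entries only
        A[0] -= lr * np.einsum('ijk,jr,kr->ir', R, A[1], A[2])
        A[1] -= lr * np.einsum('ijk,ir,kr->jr', R, A[0], A[2])
        A[2] -= lr * np.einsum('ijk,ir,jr->kr', R, A[0], A[1])
    return np.einsum('ir,jr,kr->ijk', *A)
```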
1708.00601 | 2739564724 | This paper conducts a rigorous analysis for provable estimation of multidimensional arrays, in particular third-order tensors, from a random subset of its corrupted entries. Our study rests heavily on a recently proposed tensor algebraic framework in which we can obtain tensor singular value decomposition (t-SVD) that is similar to the SVD for matrices, and define a new notion of tensor rank referred to as the tubal rank. We prove that by simply solving a convex program, which minimizes a weighted combination of tubal nuclear norm, a convex surrogate for the tubal rank, and the @math -norm, one can recover an incoherent tensor exactly with overwhelming probability, provided that its tubal rank is not too large and that the corruptions are reasonably sparse. Interestingly, our result includes the recovery guarantees for the problems of tensor completion (TC) and tensor principal component analysis (TRPCA) under the same algebraic setup as special cases. An alternating direction method of multipliers (ADMM) algorithm is presented to solve this optimization problem. Numerical experiments verify our theory and real-world applications demonstrate the effectiveness of our algorithm. | The rank sparsity tensor decomposition (RSTD) algorithm @cite_16 applies variable-splitting to both components, and utilizes a classic BCD algorithm to solve an unconstrained problem obtained by relaxing all the constraints as quadratic penalty terms. This method has many parameters to tune and does not have an iteration-complexity guarantee. The Multi-linear Augmented Lagrange Multiplier (MALM) method @cite_9 divides the original TRPCA problem into independent robust principal component analysis (RPCA) problems @cite_44 . This reformulation makes the final solution unlikely to be optimal, since consistency among the auxiliary variables is not considered. In @cite_33 , convex and non-convex approaches derived from the ADMM algorithm are introduced, but there are no guarantees on their recovery performance. Lu @cite_20 propose a convex optimization, which is indeed a simple and elegant tensor extension of RPCA. They show that under certain incoherence conditions, the solution to the convex optimization perfectly recovers the low-rank and the sparse components, provided that the tubal rank of the target tensor is not too large, and that the corruption term is reasonably sparse. | {
"cite_N": [
"@cite_33",
"@cite_9",
"@cite_44",
"@cite_16",
"@cite_20"
],
"mid": [
"1999136078",
"",
"2145962650",
"2153496554",
"2435918055"
],
"abstract": [
"Robust tensor recovery plays an instrumental role in robustifying tensor decompositions for multilinear data analysis against outliers, gross corruptions, and missing values and has a diverse array of applications. In this paper, we study the problem of robust low-rank tensor recovery in a convex optimization framework, drawing upon recent advances in robust principal component analysis and tensor completion. We propose tailored optimization algorithms with global convergence guarantees for solving both the constrained and the Lagrangian formulations of the problem. These algorithms are based on the highly efficient alternating direction augmented Lagrangian and accelerated proximal gradient methods. We also propose a nonconvex model that can often improve the recovery results from the convex models. We investigate the empirical recoverability properties of the convex and nonconvex formulations and compare the computational performance of the algorithms on simulated data. We demonstrate through a number o...",
"",
"This article is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individuallyq We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the e1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.",
"Confronted with the high-dimensional tensor-like visual data, we derive a method for the decomposition of an observed tensor into a low-dimensional structure plus unbounded but sparse irregular patterns. The optimal rank-(R1,R2, ...Rn) tensor decomposition model that we propose in this paper, could automatically explore the low-dimensional structure of the tensor data, seeking optimal dimension and basis for each mode and separating the irregular patterns. Consequently, our method accounts for the implicit multi-factor structure of tensor-like visual data in an explicit and concise manner. In addition, the optimal tensor decomposition is formulated as a convex optimization through relaxation technique. We then develop a block coordinate descent (BCD) based algorithm to efficiently solve the problem. In experiments, we show several applications of our method in computer vision and the results are promising.",
"This paper studies the Tensor Robust Principal Component (TRPCA) problem which extends the known Robust PCA ( 2011) to the tensor case. Our model is based on a new tensor Singular Value Decomposition (t-SVD) (Kilmer and Martin 2011) and its induced tensor tubal rank and tensor nuclear norm. Consider that we have a 3-way tensor @math such that @math , where @math has low tubal rank and @math is sparse. Is that possible to recover both components? In this work, we prove that under certain suitable assumptions, we can recover both the low-rank and the sparse components exactly by simply solving a convex program whose objective is a weighted combination of the tensor nuclear norm and the @math -norm, i.e., @math , where @math . Interestingly, TRPCA involves RPCA as a special case when @math and thus it is a simple and elegant tensor extension of RPCA. Also numerical experiments verify our theory and the application for the image denoising demonstrates the effectiveness of our method."
]
} |
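The convex program described above minimizes a weighted sum of the tubal nuclear norm and the ℓ1-norm; ADMM solvers for it alternate between the two proximal operators sketched below. The t-SVD-based low-rank step is shown under one common convention (FFT along the third mode, then per-slice singular-value soft-thresholding); the exact scaling of the threshold varies across formulations, so treat this as a sketch rather than any paper's reference implementation:

```python
import numpy as np

def tnn_prox(X, tau):
    """Proximal operator of the tubal nuclear norm: transform to the
    Fourier domain along the third mode, soft-threshold the singular
    values of every frontal slice, and transform back."""
    Xf = np.fft.fft(X, axis=2)
    for k in range(X.shape[2]):
        U, s, Vh = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        s = np.maximum(s - tau, 0.0)       # soft-threshold singular values
        Xf[:, :, k] = (U * s) @ Vh
    return np.real(np.fft.ifft(Xf, axis=2))

def l1_prox(X, lam):
    """Elementwise soft-thresholding: proximal operator of lam * ||.||_1,
    used for the sparse corruption component."""
    return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)
```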
1708.00666 | 2740060125 | In this paper, we investigate a weakly-supervised object detection framework. Most existing frameworks focus on using static images to learn object detectors. However, these detectors often fail to generalize to videos because of the existing domain shift. Therefore, we investigate learning these detectors directly from boring videos of daily activities. Instead of using bounding boxes, we explore the use of action descriptions as supervision since they are relatively easy to gather. A common issue, however, is that objects of interest that are not involved in human actions are often absent in global action descriptions known as "missing label". To tackle this problem, we propose a novel temporal dynamic graph Long Short-Term Memory network (TD-Graph LSTM). TD-Graph LSTM enables global temporal reasoning by constructing a dynamic graph that is based on temporal correlations of object proposals and spans the entire video. The missing label issue for each individual frame can thus be significantly alleviated by transferring knowledge across correlated object proposals in the whole video. Extensive evaluations on a large-scale daily-life action dataset (i.e., Charades) demonstrate the superiority of our proposed method. We also release object bounding-box annotations for more than 5,000 frames in Charades. We believe this annotated data can also benefit other research on video-based object recognition in the future. | Recurrent neural networks, especially Long Short-Term Memory (LSTM) @cite_10 , have been adopted to address many video processing tasks such as action recognition @cite_22 , action detection @cite_0 , video prediction @cite_7 @cite_36 , and video summarization @cite_50 . However, limited by the fixed propagation route of existing LSTM structures @cite_10 , most of the previous works @cite_22 @cite_20 @cite_11 can only learn the temporal interdependency between the holistic frames rather than more fine-grained object-level motion patterns. Some recent approaches develop more complicated recurrent network structures. For instance, structural-RNN @cite_45 develops a scalable method for casting an arbitrary spatio-temporal graph as a rich RNN mixture. A more recent Graph LSTM @cite_35 defined over a pre-defined graph topology enables inference over more complex structured data. However, both of them require a pre-fixed network structure for information propagation, which is impractical for weakly-supervised object detection without the knowledge of object localizations and precise object class labels. To handle the propagation over dynamically specified graph structures, we thus propose a new temporal dynamic network structure that supports the inference over the constantly changing graph topologies in different training steps. | {
"cite_N": [
"@cite_35",
"@cite_22",
"@cite_7",
"@cite_36",
"@cite_0",
"@cite_45",
"@cite_50",
"@cite_10",
"@cite_20",
"@cite_11"
],
"mid": [
"2179259799",
"",
"2952453038",
"",
"2179401333",
"2952072685",
"2963919999",
"",
"",
""
],
"abstract": [
"Semantic object parsing is a fundamental task for understanding objects in detail in computer vision community, where incorporating multi-level contextual information is critical for achieving such fine-grained pixel-level recognition. Prior methods often leverage the contextual information through post-processing predicted confidence maps. In this work, we propose a novel deep Local-Global Long Short-Term Memory (LG-LSTM) architecture to seamlessly incorporate short-distance and long-distance spatial dependencies into the feature learning over all pixel positions. In each LG-LSTM layer, local guidance from neighboring positions and global guidance from the whole image are imposed on each position to better exploit complex local and global contextual information. Individual LSTMs for distinct spatial dimensions are also utilized to intrinsically capture various spatial layouts of semantic parts in the images, yielding distinct hidden and memory cells of each position for each dimension. In our parsing approach, several LG-LSTM layers are stacked and appended to the intermediate convolutional layers to directly enhance visual features, allowing network parameters to be learned in an end-to-end way. The long chains of sequential computation by stacked LG-LSTM layers also enable each pixel to sense a much larger region for inference benefiting from the memorization of previous dependencies in all positions along all dimensions. Comprehensive evaluations on three public datasets well demonstrate the significant superiority of our LG-LSTM over other state-of-the-art methods.",
"",
"We use multilayer Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations (\"percepts\") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We try to visualize and interpret the learned features. We stress test the model by running it on longer time scales and on out-of-domain data. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only a few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance.",
"",
"In this work we introduce a fully end-to-end approach for action detection in videos that learns to directly predict the temporal bounds of actions. Our intuition is that the process of detecting actions is naturally one of observation and refinement: observing moments in video, and refining hypotheses about when an action is occurring. Based on this insight, we formulate our model as a recurrent neural network-based agent that interacts with a video over time. The agent observes video frames and decides both where to look next and when to emit a prediction. Since backpropagation is not adequate in this non-differentiable setting, we use REINFORCE to learn the agent's decision policy. Our model achieves state-of-the-art results on the THUMOS'14 and ActivityNet datasets while observing only a fraction (2 or less) of the video frames.",
"Learning to localize objects with minimal supervision is an important problem in computer vision, since large fully annotated datasets are extremely costly to obtain. In this paper, we propose a new method that achieves this goal with only image-level labels of whether the objects are present or not. Our approach combines a discriminative submodular cover problem for automatically discovering a set of positive object windows with a smoothed latent SVM formulation. The latter allows us to leverage efficient quasi-Newton optimization techniques. Our experiments demonstrate that the proposed approach provides a 50 relative improvement in mean average precision over the current state-of-the-art on PASCAL VOC 2007 detection.",
"We propose a novel supervised learning technique for summarizing videos by automatically selecting keyframes or key subshots. Casting the task as a structured prediction problem, our main idea is to use Long Short-Term Memory (LSTM) to model the variable-range temporal dependency among video frames, so as to derive both representative and compact video summaries. The proposed model successfully accounts for the sequential structure crucial to generating meaningful video summaries, leading to state-of-the-art results on two benchmark datasets. In addition to advances in modeling techniques, we introduce a strategy to address the need for a large amount of annotated data for training complex learning approaches to summarization. There, our main idea is to exploit auxiliary annotated video summarization datasets, in spite of their heterogeneity in visual styles and contents. Specifically, we show that domain adaptation techniques can improve learning by reducing the discrepancies in the original datasets’ statistical properties.",
"",
"",
""
]
} |
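As an aside on the first abstract above (unsupervised LSTM video representations): the encoder-decoder scheme it describes reduces to a two-phase recurrence, which the following minimal numpy sketch illustrates. The weights are random and untrained, and all names and dimensions (lstm_step, W_enc, W_dec, W_out, D, H) are illustrative assumptions rather than details from the paper.

```python
# Toy encoder-decoder LSTM: the encoder folds a frame sequence into a
# fixed-length (h, c) state; the decoder, conditioned on its own output,
# unrolls predictions of future frames from that state.
import numpy as np

rng = np.random.default_rng(0)
D, H = 16, 32                               # frame feature size, hidden size

def lstm_step(x, h, c, W):
    z = W @ np.concatenate([x, h])          # all four gates in one product
    i, f, o, g = np.split(z, 4)
    i, f, o = 1/(1+np.exp(-i)), 1/(1+np.exp(-f)), 1/(1+np.exp(-o))
    c = f * c + i * np.tanh(g)
    return o * np.tanh(c), c

W_enc = rng.normal(0, 0.1, (4*H, D+H))
W_dec = rng.normal(0, 0.1, (4*H, D+H))
W_out = rng.normal(0, 0.1, (D, H))          # hidden state -> predicted frame

frames = rng.normal(size=(10, D))           # toy input sequence
h = c = np.zeros(H)
for x in frames:                            # encode into (h, c)
    h, c = lstm_step(x, h, c, W_enc)

x, future = frames[-1], []
for _ in range(5):                          # predict 5 future frames
    h, c = lstm_step(x, h, c, W_dec)
    x = W_out @ h                           # decoder conditions on its own output
    future.append(x)
print(np.stack(future).shape)               # (5, 16)
```

Whether the decoder should condition on its own generated output, as done here, is one of the design choices that abstract says the authors explore.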
1708.00583 | 2741325263 | Depth from defocus (DfD) and stereo matching are the two most studied passive depth sensing schemes. The techniques are essentially complementary: DfD can robustly handle repetitive textures that are problematic for stereo matching, whereas stereo matching is insensitive to defocus blurs and can handle a large depth range. In this paper, we present a unified learning-based technique to conduct hybrid DfD and stereo matching. Our input is image triplets: a stereo pair and a defocused image of one of the stereo views. We first apply depth-guided light field rendering to construct a comprehensive training dataset for such hybrid sensing setups. Next, we adopt the hourglass network architecture to separately conduct depth inference from DfD and stereo. Finally, we exploit different connection methods between the two separate networks for integrating them into a unified solution to produce high-fidelity 3D disparity maps. Comprehensive experiments on real and synthetic data show that our new learning-based hybrid 3D sensing technique can significantly improve accuracy and robustness in 3D reconstruction. | Stereo matching is probably one of the most studied problems in computer vision. We refer the readers to the comprehensive surveys @cite_18 @cite_2. Here we only discuss the most relevant works. Our work is motivated by recent advances in deep neural networks. One stream focuses on learning the patch matching function. The seminal work by Žbontar and LeCun @cite_41 leveraged a convolutional neural network (CNN) to predict the matching cost of image patches, then enforced smoothness constraints to refine depth estimation. @cite_15 investigated multiple network architectures to learn a general similarity function for wide baseline stereo. Han @cite_38 described a unified approach that includes both feature representation and feature comparison functions. Luo @cite_28 used a product layer to facilitate the matching process, and formulated depth estimation as a multi-class classification problem. Other network architectures @cite_42 @cite_21 @cite_1 have also been proposed to serve a similar purpose. | {
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_41",
"@cite_28",
"@cite_42",
"@cite_21",
"@cite_1",
"@cite_2",
"@cite_15"
],
"mid": [
"1929856797",
"2104974755",
"2963502507",
"2440384215",
"2214868166",
"",
"",
"",
"1955055330"
],
"abstract": [
"Motivated by recent successes on learning feature representations and on learning feature comparison functions, we propose a unified approach to combining both for training a patch matching system. Our system, dubbed Match-Net, consists of a deep convolutional network that extracts features from patches and a network of three fully connected layers that computes a similarity between the extracted features. To ensure experimental repeatability, we train MatchNet on standard datasets and employ an input sampler to augment the training set with synthetic exemplar pairs that reduce overfitting. Once trained, we achieve better computational efficiency during matching by disassembling MatchNet and separately applying the feature computation and similarity networks in two sequential stages. We perform a comprehensive set of experiments on standard datasets to carefully study the contributions of each aspect of MatchNet, with direct comparisons to established methods. Our results confirm that our unified approach improves accuracy over previous state-of-the-art results on patch matching datasets, while reducing the storage requirement for descriptors. We make pre-trained MatchNet publicly available.",
"Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can be easily extended to include new algorithms. We have also produced several new multiframe stereo data sets with ground truth, and are making both the code and data sets available on the Web.",
"We present a method for extracting depth information from a rectified image pair. Our approach focuses on the first stage of many stereo algorithms: the matching cost computation. We approach the problem by learning a similarity measure on small image patches using a convolutional neural network. Training is carried out in a supervised manner by constructing a binary classification data set with examples of similar and dissimilar pairs of patches. We examine two network architectures for this task: one tuned for speed, the other for accuracy. The output of the convolutional neural network is used to initialize the stereo matching cost. A series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, a left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter. We evaluate our method on the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it outperforms other approaches on all three data sets.",
"In the past year, convolutional neural networks have been shown to perform extremely well for stereo estimation. However, current architectures rely on siamese networks which exploit concatenation followed by further processing layers, requiring a minute of GPU computation per image pair. In contrast, in this paper we propose a matching network which is able to produce very accurate results in less than a second of GPU computation. Towards this goal, we exploit a product layer which simply computes the inner product between the two representations of a siamese architecture. We train our network by treating the problem as multi-class classification, where the classes are all possible disparities. This allows us to get calibrated scores, which result in much better matching performance when compared to existing approaches.",
"This paper presents a data-driven matching cost for stereo matching. A novel deep visual correspondence embedding model is trained via Convolutional Neural Network on a large set of stereo images with ground truth disparities. This deep embedding model leverages appearance data to learn visual similarity relationships between corresponding image patches, and explicitly maps intensity values into an embedding feature space to measure pixel dissimilarities. Experimental results on KITTI and Middlebury data sets demonstrate the effectiveness of our model. First, we prove that the new measure of pixel dissimilarity outperforms traditional matching costs. Furthermore, when integrated with a global stereo framework, our method ranks top 3 among all two-frame algorithms on the KITTI benchmark. Finally, cross-validation results show that our model is able to make correct predictions for unseen data which are outside of its labeled training set.",
"",
"",
"",
"In this paper we show how to learn directly from image data (i.e., without resorting to manually-designed features) a general similarity function for comparing image patches, which is a task of fundamental importance for many computer vision problems. To encode such a function, we opt for a CNN-based model that is trained to account for a wide variety of changes in image appearance. To that end, we explore and study multiple neural network architectures, which are specifically adapted to this task. We show that such an approach can significantly outperform the state-of-the-art on several problems and benchmark datasets."
]
} |
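The product-layer matching cost described in the fourth abstract above (an inner product between siamese representations, with disparity treated as multi-class classification) can be sketched in a few lines. The feature maps below are random stand-ins for the siamese CNN outputs, and H, W, C, D are toy sizes rather than values from the paper.

```python
# Inner-product matching cost over candidate disparities: for each shift
# d, the score is the per-pixel dot product of left features with the
# d-shifted right features; the disparity is the arg-max per pixel.
import numpy as np

rng = np.random.default_rng(0)
H, W, C, D = 32, 64, 8, 16                  # image size, channels, max disparity
f_left = rng.normal(size=(H, W, C))
f_right = rng.normal(size=(H, W, C))

scores = np.full((H, W, D), -np.inf)        # -inf where the shift runs off-image
for d in range(D):
    scores[:, d:, d] = np.einsum('hwc,hwc->hw',
                                 f_left[:, d:], f_right[:, :W - d or None])

disparity = scores.argmax(axis=2)           # per-pixel classification over disparities
print(disparity.shape)                      # (32, 64)
```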
1708.00583 | 2741325263 | Depth from defocus (DfD) and stereo matching are the two most studied passive depth sensing schemes. The techniques are essentially complementary: DfD can robustly handle repetitive textures that are problematic for stereo matching, whereas stereo matching is insensitive to defocus blurs and can handle a large depth range. In this paper, we present a unified learning-based technique to conduct hybrid DfD and stereo matching. Our input is image triplets: a stereo pair and a defocused image of one of the stereo views. We first apply depth-guided light field rendering to construct a comprehensive training dataset for such hybrid sensing setups. Next, we adopt the hourglass network architecture to separately conduct depth inference from DfD and stereo. Finally, we exploit different connection methods between the two separate networks for integrating them into a unified solution to produce high-fidelity 3D disparity maps. Comprehensive experiments on real and synthetic data show that our new learning-based hybrid 3D sensing technique can significantly improve accuracy and robustness in 3D reconstruction. | In the computational imaging community, there has been a handful of works that aim to combine stereo and DfD. Early approaches @cite_27 @cite_44 use a coarse estimate from DfD to reduce the search space of correspondence matching in stereo. Rajagopalan @cite_31 used a defocused stereo pair to recover depth and restore the all-focus image. Recently, Tao @cite_24 analyzed the variances of the epipolar plane image (EPI) to infer depth: the horizontal variance after vertical integration of the EPI encodes the defocus cue, while the vertical variance represents the disparity cue. Both cues are then jointly optimized in an MRF framework. Takeda @cite_37 analyzed the relationship between the point spread function and binocular disparity in the frequency domain, and jointly resolved the depth and deblurred the image. Wang @cite_25 presented a hybrid camera system that is composed of two calibrated auxiliary cameras and an uncalibrated main camera. The calibrated cameras were used to infer depth, and the main camera provides DfD cues for boundary refinement. Our approach instead leverages a neural network to combine DfD and stereo estimates. To our knowledge, this is the first approach that employs deep learning for stereo and DfD fusion. | {
"cite_N": [
"@cite_37",
"@cite_44",
"@cite_24",
"@cite_27",
"@cite_31",
"@cite_25"
],
"mid": [
"2089129849",
"",
"",
"2115463575",
"2107882199",
"2529015851"
],
"abstract": [
"In this paper we propose a novel depth measurement method by fusing depth from defocus (DFD) and stereo. One of the problems of passive stereo method is the difficulty of finding correct correspondence between images when an object has a repetitive pattern or edges parallel to the epipolar line. On the other hand, the accuracy of DFD method is inherently limited by the effective diameter of the lens. Therefore, we propose the fusion of stereo method and DFD by giving different focus distances for left and right cameras of a stereo camera with coded apertures. Two types of depth cues, defocus and disparity, are naturally integrated by the magnification and phase shift of a single point spread function (PSF) per camera. In this paper we give the proof of the proportional relationship between the diameter of defocus and disparity which makes the calibration easy. We also show the outstanding performance of our method which has both advantages of two depth cues through simulation and actual experiments.",
"",
"",
"A new method for actively recovering depth information using image defocus is demonstrated and shown to support active stereo vision depth recovery by providing monocular depth estimates to guide the positioning of cameras for stereo processing. This active depth-from-defocus approach employs a spatial frequency model for image defocus which incorporates the optical transfer function of the image acquisition system and a maximum likelihood estimator to determine the amount of defocus present in a sequence of two or more images taken from the same pose. This defocus estimate is translated into a measurement of depth and associated uncertainty that is used to control the positioning of a variable baseline stereo camera system. This cooperative arrangement significantly reduces the matching uncertainty of the stereo correspondence process and increases the depth resolution obtainable with an active stereo vision platform.",
"We propose a method for estimating depth from images captured with a real aperture camera by fusing defocus and stereo cues. The idea is to use stereo-based constraints in conjunction with defocusing to obtain improved estimates of depth over those of stereo or defocus alone. The depth map as well as the original image of the scene are modeled as Markov random fields with a smoothness prior, and their estimates are obtained by minimizing a suitable energy function using simulated annealing. The main advantage of the proposed method, despite being computationally less efficient than the standard stereo or DFD method, is simultaneous recovery of depth as well as space-variant restoration of the original focused image of the scene.",
"In this work, we propose a multi-camera system where we combine a main high-quality camera with two low-res auxiliary cameras. The auxiliary cameras are well calibrated and act as a passive depth sensor by generating disparity maps. The main camera has an interchangeable lens and can produce good quality images at high resolution. Our goal is, given the low-res depth map from the auxiliary cameras, generate a depth map from the viewpoint of the main camera. The advantage of our system, compared to other systems such as light-field cameras or RGBD sensors, is the ability to generate a high-resolution color image with a complete depth map, without sacrificing resolution and with minimal auxiliary hardware. Since the main camera has an interchangeable lens, it cannot be calibrated beforehand, and directly applying stereo matching on it and either of the auxiliary cameras often leads to unsatisfactory results. Utilizing both the calibrated cameras at once, we propose a novel approach to better estimate the disparity map of the main camera. Then by combining the defocus cue of the main camera, the disparity map can be further improved. We demonstrate the performance of our algorithm on various scenes."
]
} |
1708.00530 | 2740688636 | The second largest eigenvalue of a transition matrix @math has connections with many properties of the underlying Markov chain, and especially its convergence rate towards the stationary distribution. In this paper, we give an asymptotic upper bound for the second eigenvalue when @math is the transition matrix of the simple random walk over a random directed graph with given degree sequence. This is the first result concerning the asymptotic behavior of the spectral gap for sparse non-reversible Markov chains with an unknown stationary distribution. An immediate consequence of our result is a generalization of the well-known Friedman theorem for undirected regular graphs. Our result is based on a variation of the trace method introduced by Bordenave (2015). | In this paper, we consider random directed (multi)graphs with a specified sequence of in-degrees and out-degrees; when all the degrees are equal to @math, this model reduces to the directed @math-regular case. Our construction with half-edges is a directed variant of the classical configuration model (see @cite_33). When the degrees are bounded independently of the size of the graph, such multigraphs are sparse, meaning they have few edges. Even though digraphs are much more difficult to handle than undirected graphs, they are also one step closer to reality when modelling real-life situations: see @cite_24 @cite_21 and references therein for (many) examples of graph modelling that go beyond the Internet graph. | {
"cite_N": [
"@cite_24",
"@cite_21",
"@cite_33"
],
"mid": [
"2169015768",
"1714396858",
"2020270524"
],
"abstract": [
"Recent work on the structure of social networks and the internet has focused attention on graphs with distributions of vertex degree that are significantly different from the Poisson degree distributions that have been widely studied in the past. In this paper we develop in detail the theory of random graphs with arbitrary degree distributions. In addition to simple undirected, unipartite graphs, we examine the properties of directed and bipartite graphs. Among other results, we derive exact expressions for the position of the phase transition at which a giant component first forms, the mean component size, the size of the giant component if there is one, the mean number of vertices a certain distance away from a randomly chosen vertex, and the average vertex-vertex distance within a graph. We apply our theory to some real-world graphs, including the worldwide web and collaboration graphs of scientists and Fortune 1000 company directors. We demonstrate that in some cases random graphs with appropriate distributions of vertex degree predict with surprising accuracy the behavior of the real world, while in others there is a measurable discrepancy between theory and reality, perhaps indicating the presence of additional social structure in the network that is not captured by the random graph.",
"The aim of this article is to discuss some applications of random processes in searching and reaching consensus on finite graphs. The topics covered are: Why random walks?, Speeding up random walks, Random and deterministic walks, Interacting particles and voting, Searching changing graphs.",
"Given a graph G = (V, E) and a set of κ pairs of vertices in V, we are interested in finding, for each pair (ai, bi), a path connecting ai to bi such that the set of κ paths so found is edge-disjoint. (For arbitrary graphs the problem is NP-complete, although it is in P if κ is fixed.)We present a polynomial time randomized algorithm for finding edge-disjoint paths in the random regular graph Gn,r, for sufficiently large r. (The graph is chosen first, then an adversary chooses the pairs of end-points.) We show that almost every Gn,r is such that all sets of κ = Ω(n log n) pairs of vertices can be joined. This is within a constant factor of the optimum."
]
} |
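The half-edge construction behind the directed configuration model mentioned above is short enough to sketch directly. This toy version assumes a 3-regular degree sequence; any in/out degree sequences with equal sums work the same way.

```python
# Directed configuration model: give each vertex out-stubs and in-stubs
# according to its degree sequence, then match out-stubs to in-stubs
# uniformly at random. The result is a random directed multigraph
# (self-loops and multi-edges are allowed).
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 3
out_deg = np.full(n, d)
in_deg = np.full(n, d)                      # must sum to the same total as out_deg

out_stubs = np.repeat(np.arange(n), out_deg)
in_stubs = np.repeat(np.arange(n), in_deg)
rng.shuffle(in_stubs)                       # uniform matching of stubs

edges = list(zip(out_stubs, in_stubs))      # directed multigraph edges
print(edges[:5])
```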
1708.00530 | 2740688636 | The second largest eigenvalue of a transition matrix @math has connections with many properties of the underlying Markov chain, and especially its convergence rate towards the stationary distribution. In this paper, we give an asymptotic upper bound for the second eigenvalue when @math is the transition matrix of the simple random walk over a random directed graph with given degree sequence. This is the first result concerning the asymptotic behavior of the spectral gap for sparse non-reversible Markov chains with an unknown stationary distribution. An immediate consequence of our result is a generalization of the well-known Friedman theorem for undirected regular graphs. Our result is based on a variation of the trace method introduced by Bordenave (2015). | We finally mention some related questions and conjectures. What is the link between @math and the cutoff phenomenon for the Markov chain? Do graphs in our model having @math exhibit cutoff? Is the upper bound optimal? In the Friedman theorem, the difficult part was to prove the upper bound, while the lower bound had been proven very early (@cite_26) using the full strength of the symmetric nature of @math. We have proven an upper bound for our model, but no lower bound is known yet. This paper deals with the second eigenvalue of random digraphs in general. In the specific case of @math-regular digraphs, it is conjectured in Section 7 of the survey by Bordenave and Chafaï that the whole empirical spectral measure of the adjacency matrix of a @math-regular digraph converges almost surely in distribution to @math, a complex version of the Kesten-McKay distribution, namely \[ \mu(\mathrm{d}z) = \pi^{-1} \, \frac{d^2(d-1)}{(d^2 - |z|^2)^2} \, \mathbf{1}_{\{|z| \le \sqrt{d}\}} \, \mathrm{d}z. \] | {
"cite_N": [
"@cite_26"
],
"mid": [
"2036878392"
],
"abstract": [
"Abstract It is shown that the second largest eigenvalue of the adjacency matrix of any d-regular graph G containing two edges the distance between which is at least 2k + 2 is at least 2 d − 1 − (2 d − 1 − 1) (k+1) ."
]
} |
1708.00768 | 2740620148 | We consider the problem of streaming kernel regression, when the observations arrive sequentially and the goal is to recover the underlying mean function, assumed to belong to an RKHS. The variance of the noise is not assumed to be known. In this context, we tackle the problem of tuning the regularization parameter adaptively at each time step, while maintaining tight confidence bound estimates on the value of the mean function at each point. To this end, we first generalize existing results for finite-dimensional linear regression with fixed regularization and known variance to the kernel setup with a regularization parameter allowed to be a measurable function of past observations. Then, using appropriate self-normalized inequalities, we build upper and lower bound estimates for the variance, leading to Bernstein-like concentration bounds. The latter is used in order to define the adaptive regularization. The bounds resulting from our technique are valid uniformly over all observation points and all time steps, and are compared against the literature with numerical experiments. Finally, the potential of these tools is illustrated by an application to kernelized bandits, where we revisit the Kernel UCB and Kernel Thompson Sampling procedures, and show the benefits of the novel adaptive kernel tuning strategy. | Theorem extends the self-normalized bounds of @cite_7 from the setting of linear function spaces to that of an RKHS with sub-Gaussian noise. Based on a nontrivial adaptation of the Laplace method, it yields self-normalized inequalities in a setting of possibly infinite dimension. It generalizes the following result of @cite_6 to kernel regression with @math, which was already a generalization of a previous result by @cite_1 for bounded noise. It is also more general than the concentration result from @cite_3, for kernel regression with @math, which holds . | {
"cite_N": [
"@cite_3",
"@cite_1",
"@cite_6",
"@cite_7"
],
"mid": [
"2950238385",
"2951665052",
"2133104104",
"2119738618"
],
"abstract": [
"We tackle the problem of online reward maximisation over a large finite set of actions described by their contexts. We focus on the case when the number of actions is too big to sample all of them even once. However we assume that we have access to the similarities between actions' contexts and that the expected reward is an arbitrary linear function of the contexts' images in the related reproducing kernel Hilbert space (RKHS). We propose KernelUCB, a kernelised UCB algorithm, and give a cumulative regret bound through a frequentist analysis. For contextual bandits, the related algorithm GP-UCB turns out to be a special case of our algorithm, and our finite-time analysis improves the regret bound of GP-UCB for the agnostic case, both in the terms of the kernel-dependent quantity and the RKHS norm of the reward function. Moreover, for the linear kernel, our regret bound matches the lower bound for contextual linear bandits.",
"Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multi-armed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low RKHS norm. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization. We analyze GP-UCB, an intuitive upper-confidence based algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds have surprisingly weak dependence on the dimensionality. In our experiments on real sensor data, GP-UCB compares favorably with other heuristical GP optimization approaches.",
"Bayesian optimisation has gained great popularity as a tool for optimising the parameters of machine learning algorithms and models. Somewhat ironically, setting up the hyper-parameters of Bayesian optimisation methods is notoriously hard. While reasonable practical solutions have been advanced, they can often fail to find the best optima. Surprisingly, there is little theoretical analysis of this crucial problem in the literature. To address this, we derive a cumulative regret bound for Bayesian optimisation with Gaussian processes and unknown kernel hyper-parameters in the stochastic setting. The bound, which applies to the expected improvement acquisition function and sub-Gaussian observation noise, provides us with guidelines on how to design hyper-parameter estimation methods. A simple simulation demonstrates the importance of following these guidelines.",
"We improve the theoretical analysis and empirical performance of algorithms for the stochastic multi-armed bandit problem and the linear stochastic multi-armed bandit problem. In particular, we show that a simple modification of Auer's UCB algorithm (Auer, 2002) achieves with high probability constant regret. More importantly, we modify and, consequently, improve the analysis of the algorithm for the for linear stochastic bandit problem studied by Auer (2002), (2008), Rusmevichientong and Tsitsiklis (2010), (2010). Our modification improves the regret bound by a logarithmic factor, though experiments show a vast improvement. In both cases, the improvement stems from the construction of smaller confidence sets. For their construction we use a novel tail inequality for vector-valued martingales."
]
} |
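The kernelised confidence bounds discussed above all take the form of a regularized kernel estimate plus a width proportional to a posterior-style deviation. The numpy sketch below shows that shape; the RBF kernel, the regularization lam, and the scaling beta are illustrative choices, not the constants appearing in the cited theorems.

```python
# Kernel ridge estimate with a confidence band mean(x) +/- beta * sigma(x),
# where sigma is the regularised deviation used by GP-UCB-style bounds.
import numpy as np

rng = np.random.default_rng(0)
k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :])**2)  # RBF kernel

X = rng.uniform(0, 5, 20)                    # observation points
y = np.sin(X) + 0.1 * rng.normal(size=20)    # noisy targets
lam = 0.1                                    # regularisation parameter

K = k(X, X)
A = np.linalg.solve(K + lam * np.eye(len(X)), np.eye(len(X)))  # (K + lam I)^-1

xs = np.linspace(0, 5, 100)
Kx = k(xs, X)                                # cross kernel, test vs. data
mean = Kx @ A @ y
var = k(xs, xs).diagonal() - np.einsum('ij,jk,ik->i', Kx, A, Kx)
beta = 2.0                                   # hypothetical confidence scaling
width = beta * np.sqrt(np.maximum(var, 0))
print(float(mean[0]), "+/-", float(width[0]))
```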
1708.00768 | 2740620148 | We consider the problem of streaming kernel regression, when the observations arrive sequentially and the goal is to recover the underlying mean function, assumed to belong to an RKHS. The variance of the noise is not assumed to be known. In this context, we tackle the problem of tuning the regularization parameter adaptively at each time step, while maintaining tight confidence bound estimates on the value of the mean function at each point. To this end, we first generalize existing results for finite-dimensional linear regression with fixed regularization and known variance to the kernel setup with a regularization parameter allowed to be a measurable function of past observations. Then, using appropriate self-normalized inequalities, we build upper and lower bound estimates for the variance, leading to Bernstein-like concentration bounds. The latter is used in order to define the adaptive regularization. The bounds resulting from our technique are valid uniformly over all observation points and all time steps, and are compared against the literature with numerical experiments. Finally, the potential of these tools is illustrated by an application to kernelized bandits, where we revisit the Kernel UCB and Kernel Thompson Sampling procedures, and show the benefits of the novel adaptive kernel tuning strategy. | Theorem extends Theorem to the case when based on gathered observations. To the best of our knowledge, no such result exists in the literature at the time of writing this paper. Moreover, Theorem provides variance estimates with confidence bounds scaling with @math, in the spirit of the results from @cite_0, which were provided in the i.i.d. case. Thus, Theorem also appears to be new. Finally, Corollary further specifies Theorem to the situation where the regularization is tuned according to Theorem, yielding a fully adaptive regularization procedure with explicit confidence bounds. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2949395086"
],
"abstract": [
"We give improved constants for data dependent and variance sensitive confidence bounds, called empirical Bernstein bounds, and extend these inequalities to hold uniformly over classes of functionswhose growth function is polynomial in the sample size n. The bounds lead us to consider sample variance penalization, a novel learning method which takes into account the empirical variance of the loss function. We give conditions under which sample variance penalization is effective. In particular, we present a bound on the excess risk incurred by the method. Using this, we argue that there are situations in which the excess risk of our method is of order 1 n, while the excess risk of empirical risk minimization is of order 1 sqrt n . We show some experimental results, which confirm the theory. Finally, we discuss the potential application of our results to sample compression schemes."
]
} |
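The variance-sensitive bounds referenced above (confidence widths scaling with the empirical standard deviation) can be illustrated with the commonly cited Maurer-Pontil form for i.i.d. samples in [0, 1]; treat the constants as representative of the genre rather than as the exact bound used in this record.

```python
# Empirical Bernstein confidence radius: a sqrt(variance/n) term plus a
# lower-order 1/n term, so low-variance samples get tighter intervals.
import numpy as np

def empirical_bernstein_radius(x, delta=0.05):
    n = len(x)
    v = x.var(ddof=1)                        # sample variance
    return np.sqrt(2 * v * np.log(2 / delta) / n) \
        + 7 * np.log(2 / delta) / (3 * (n - 1))

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
print(round(x.mean(), 3), "+/-", round(empirical_bernstein_radius(x), 3))
```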
1708.00577 | 2952163956 | Visual tracking is intrinsically a temporal problem. Discriminative Correlation Filters (DCF) have demonstrated excellent performance for high-speed generic visual object tracking. Building upon this seminal work, there has been a plethora of recent improvements relying on a convolutional neural network (CNN) pretrained on ImageNet as a feature extractor for visual tracking. However, most of these works rely on ad hoc analysis to design the weights for different layers, either using boosting or hedging techniques as an ensemble tracker. In this paper, we go beyond the conventional DCF framework and propose a Kernalised Multi-resolution Convnet (KMC) formulation that utilises hierarchical response maps to directly output the target movement. When the learnt network is directly deployed to predict on the unseen, challenging UAV tracking dataset without any weight adjustment, the proposed model consistently achieves excellent tracking performance. Moreover, the transferred multi-resolution CNN makes it possible to integrate it into the RNN temporal learning framework, therefore opening the door to end-to-end temporal deep learning (TDL) for visual tracking. | Recent works exploit the structure of CNNs to learn the target online: a three-layer CNN is trained on the fly in @cite_11; a deep autoencoder @cite_4 is first pre-trained offline and then fine-tuned for binary classification in online tracking. Since the pre-training is performed in an unsupervised way by reconstructing gray images with very low resolution, the learned deep features have limited discriminative power for tracking. Moreover, without pre-training and with limited training samples obtained online, the CNN fails to capture object semantics and is not robust to deformation. Both @cite_11 and @cite_4 train deep networks online with limited training samples, and inevitably suffer from overfitting. Transferring the hierarchical features learned for image classification tasks has been shown to be effective for numerous vision tasks, e.g., image segmentation @cite_12 and salient object detection @cite_14. More recent methods @cite_15 @cite_5 @cite_8 @cite_16 adopt deep convolutional networks trained on a large-scale image classification task @cite_9 to improve tracking performance. The rich representation of transferred features from deep nets enables trackers to construct more robust and powerful appearance models than traditional hand-crafted-feature-based trackers. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_8",
"@cite_9",
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_12",
"@cite_11"
],
"mid": [
"2213123247",
"2118097920",
"",
"2117539524",
"",
"2470456807",
"",
"2952632681",
"1956687234"
],
"abstract": [
"The state-of-the-art salient object detection models are able to perform well for relatively simple scenes, yet for more complex ones, they still have difficulties in highlighting salient objects completely from background, largely due to the lack of sufficiently robust features for saliency prediction. To address such an issue, this paper proposes a novel hierarchy-associated feature construction framework for salient object detection, which is based on integrating elementary features from multi-level regions in a hierarchy. Furthermore, multi-layered deep learning features are introduced and incorporated as elementary features into this framework through a compact integration scheme. This leads to a rich feature representation, which is able to represent the context of the whole object background and is much more discriminative as well as robust for salient object detection. Extensive experiments on the most widely used and challenging benchmark datasets demonstrate that the proposed approach substantially outperforms the state-of-the-art on salient object detection.",
"In this paper, we study the challenging problem of tracking the trajectory of a moving object in a video with possibly very complex background. In contrast to most existing trackers which only learn the appearance of the tracked object online, we take a different approach, inspired by recent advances in deep learning architectures, by putting more emphasis on the (unsupervised) feature learning problem. Specifically, by using auxiliary natural images, we train a stacked de-noising autoencoder offline to learn generic image features that are more robust against variations. This is then followed by knowledge transfer from offline training to the online tracking process. Online tracking involves a classification neural network which is constructed from the encoder part of the trained autoencoder as a feature extractor and an additional classification layer. Both the feature extractor and the classifier can be further tuned to adapt to appearance changes of the moving object. Comparison with the state-of-the-art trackers on some challenging benchmark video sequences shows that our deep learning tracker is more accurate while maintaining low computational cost with real-time performance when our MATLAB implementation of the tracker is used with a modest graphics processing unit (GPU).",
"",
"The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.",
"",
"Due to the limited amount of training samples, finetuning pre-trained deep models online is prone to overfitting. In this paper, we propose a sequential training method for convolutional neural networks (CNNs) to effectively transfer pre-trained deep features for online applications. We regard a CNN as an ensemble with each channel of the output feature map as an individual base learner. Each base learner is trained using different loss criterions to reduce correlation and avoid over-training. To achieve the best ensemble online, all the base learners are sequentially sampled into the ensemble via important sampling. To further improve the robustness of each base learner, we propose to train the convolutional layers with random binary masks, which serves as a regularization to enforce each base learner to focus on different input features. The proposed online training method is applied to visual tracking problem by transferring deep features trained on massive annotated visual data and is shown to significantly improve tracking performance. Extensive experiments are conducted on two challenging benchmark data set and demonstrate that our tracking algorithm can outperform state-of-the-art methods with a considerable margin.",
"",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.",
"Deep neural networks, albeit their great success on feature learning in various computer vision tasks, are usually considered as impractical for online visual tracking because they require very long training time and a large number of training samples. In this work, we present an efficient and very robust online tracking algorithm using a single Convolutional Neural Network (CNN) for learning effective feature representations of the target object over time. Our contributions are multifold: First, we introduce a novel truncated structural loss function that maintains as many training samples as possible and reduces the risk of tracking error accumulation, thus drift, by accommodating the uncertainty of the model output. Second, we enhance the ordinary Stochastic Gradient Descent approach in CNN training with a temporal selection mechanism, which generates positive and negative samples within different time periods. Finally, we propose to update the CNN model in a “lazy” style to speed-up the training stage, where the network is updated only when a significant appearance change occurs on the object, without sacrificing tracking accuracy. The CNN tracker outperforms all compared state-of-the-art methods in our extensive evaluations that involve 18 well-known benchmark video sequences."
]
} |
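The DCF formulation at the heart of this record admits a one-line closed form in the Fourier domain, which the MOSSE-style numpy sketch below illustrates for a single channel and a single training patch. The Gaussian width and lam are illustrative; the multi-channel, kernelised variants used by the trackers above add structure on top of this.

```python
# Correlation filter learning in the Fourier domain: find a filter h so
# that correlating it with the template reproduces a Gaussian response
# peaked on the target; ridge regression gives a closed-form solution.
import numpy as np

rng = np.random.default_rng(0)
N = 64
patch = rng.normal(size=(N, N))              # stand-in for a target patch

yy, xx = np.mgrid[0:N, 0:N]
g = np.exp(-((xx - N//2)**2 + (yy - N//2)**2) / (2 * 2.0**2))  # desired response

F = np.fft.fft2(patch)
G = np.fft.fft2(np.fft.ifftshift(g))         # move the peak to the origin
lam = 1e-2                                   # ridge regulariser
H = (G * np.conj(F)) / (F * np.conj(F) + lam)  # closed-form filter

response = np.real(np.fft.ifft2(H * np.fft.fft2(patch)))
peak = np.unravel_index(response.argmax(), response.shape)
print(peak)                                  # ~(0, 0): zero offset to the target
```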
1708.00577 | 2952163956 | Visual tracking is intrinsically a temporal problem. Discriminative Correlation Filters (DCF) have demonstrated excellent performance for high-speed generic visual object tracking. Building upon this seminal work, there has been a plethora of recent improvements relying on a convolutional neural network (CNN) pretrained on ImageNet as a feature extractor for visual tracking. However, most of these works rely on ad hoc analysis to design the weights for different layers, either using boosting or hedging techniques as an ensemble tracker. In this paper, we go beyond the conventional DCF framework and propose a Kernalised Multi-resolution Convnet (KMC) formulation that utilises hierarchical response maps to directly output the target movement. When the learnt network is directly deployed to predict on the unseen, challenging UAV tracking dataset without any weight adjustment, the proposed model consistently achieves excellent tracking performance. Moreover, the transferred multi-resolution CNN makes it possible to integrate it into the RNN temporal learning framework, therefore opening the door to end-to-end temporal deep learning (TDL) for visual tracking. | Variations in the appearance of the object being tracked, such as variations in geometry/photometry, camera viewpoint, partial occlusion, or the target going out of view, pose a major challenge to object tracking. TLD @cite_30 employs two experts to identify the false negatives and false positives to train a detector. The experts are independent, which ensures mutual compensation of their errors to alleviate the problem of drifting. A short- and long-term memory principle from cognitive psychology is adopted in @cite_23 to design a flexible representation that can adapt to changes in object appearance during tracking. A parameter-free Hedging algorithm is proposed in @cite_21 for the problem of decision-theoretic online learning, especially for applications where the number of actions is very large and how to set the parameter optimally is not well understood. An improved Hedge algorithm considering historical performance is proposed in @cite_29 to weight the decisions from different CNN layers. | {
"cite_N": [
"@cite_30",
"@cite_29",
"@cite_21",
"@cite_23"
],
"mid": [
"",
"2473868734",
"2949554195",
"1915599933"
],
"abstract": [
"",
"In recent years, several methods have been developed to utilize hierarchical features learned from a deep convolutional neural network (CNN) for visual tracking. However, as features from a certain CNN layer characterize an object of interest from only one aspect or one level, the performance of such trackers trained with features from one layer (usually the second to last layer) can be further improved. In this paper, we propose a novel CNN based tracking framework, which takes full advantage of features from different CNN layers and uses an adaptive Hedge method to hedge several CNN based trackers into a single stronger one. Extensive experiments on a benchmark dataset of 100 challenging image sequences demonstrate the effectiveness of the proposed algorithm compared to several state-of-theart trackers.",
"We study the problem of decision-theoretic online learning (DTOL). Motivated by practical applications, we focus on DTOL when the number of actions is very large. Previous algorithms for learning in this framework have a tunable learning rate parameter, and a barrier to using online-learning in practical applications is that it is not understood how to set this parameter optimally, particularly when the number of actions is large. In this paper, we offer a clean solution by proposing a novel and completely parameter-free algorithm for DTOL. We introduce a new notion of regret, which is more natural for applications with a large number of actions. We show that our algorithm achieves good performance with respect to this new notion of regret; in addition, it also achieves performance close to that of the best bounds achieved by previous algorithms with optimally-tuned parameters, according to previous notions of regret.",
"Variations in the appearance of a tracked object, such as changes in geometry photometry, camera viewpoint, illumination, or partial occlusion, pose a major challenge to object tracking. Here, we adopt cognitive psychology principles to design a flexible representation that can adapt to changes in object appearance during tracking. Inspired by the well-known Atkinson-Shiffrin Memory Model, we propose MUlti-Store Tracker (MUSTer), a dual-component approach consisting of short- and long-term memory stores to process target appearance memories. A powerful and efficient Integrated Correlation Filter (ICF) is employed in the short-term store for short-term tracking. The integrated long-term component, which is based on keypoint matching-tracking and RANSAC estimation, can interact with the long-term memory and provide additional information for output control. MUSTer was extensively evaluated on the CVPR2013 Online Object Tracking Benchmark (OOTB) and ALOV++ datasets. The experimental results demonstrated the superior performance of MUSTer in comparison with other state-of-art trackers."
]
} |
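The Hedge update at the core of the decision-theoretic online learning abstract above is a few lines of exponential weighting. Note that eta is a fixed learning rate in this sketch, whereas the cited work is precisely about removing that tunable parameter.

```python
# Hedge / exponential weights: weights decay exponentially in cumulative
# loss, and the learner plays the induced distribution over K actions.
import numpy as np

rng = np.random.default_rng(0)
K, T, eta = 5, 100, 0.5
cum_loss, learner_loss = np.zeros(K), 0.0

for t in range(T):
    w = np.exp(-eta * cum_loss)
    p = w / w.sum()                          # distribution over actions
    losses = rng.uniform(size=K)             # losses in [0, 1], possibly adversarial
    learner_loss += p @ losses               # learner's expected loss this round
    cum_loss += losses

print("learner:", round(learner_loss, 2),
      "best single action:", round(float(cum_loss.min()), 2))
```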
1708.00625 | 2740593609 | We propose a new framework for abstractive text summarization based on a sequence-to-sequence oriented encoder-decoder model equipped with a deep recurrent generative decoder (DRGN). Latent structure information implied in the target summaries is learned based on a recurrent latent random model for improving the summarization quality. Neural variational inference is employed to address the intractable posterior inference for the recurrent latent variables. Abstractive summaries are generated based on both the generative latent variables and the discriminative deterministic states. Extensive experiments on some benchmark datasets in different languages show that DRGN achieves improvements over the state-of-the-art methods. | Automatic summarization is the process of automatically generating a summary that retains the most important content of the original text document @cite_5. Traditionally, the summarization methods can be classified into three categories: extraction-based methods @cite_23 @cite_28 @cite_4 @cite_17 @cite_19 @cite_16 @cite_14 @cite_18, compression-based methods @cite_12 @cite_0 @cite_29 @cite_35, and abstraction-based methods. In fact, previous investigations show that human-written summaries are more abstractive @cite_26 @cite_39. Abstraction-based approaches can generate new sentences based on the facts from different source sentences. employed sentence fusion to generate a new sentence. proposed a more fine-grained fusion framework, where new sentences are generated by selecting and merging salient phrases. These methods can be regarded as a kind of indirect abstractive summarization, and complicated constraints are used to guarantee the linguistic quality. | {
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_26",
"@cite_28",
"@cite_29",
"@cite_39",
"@cite_0",
"@cite_19",
"@cite_23",
"@cite_5",
"@cite_16",
"@cite_12",
"@cite_17"
],
"mid": [
"2604777655",
"",
"2329305226",
"2167232041",
"2012561700",
"1980813323",
"1836005856",
"2963241389",
"",
"2952138241",
"2110693578",
"2218641061",
"2307381258",
"2251384446",
"2105542801"
],
"abstract": [
"We propose a new unsupervised sentence salience framework for Multi-Document Summarization (MDS), which can be divided into two components: latent semantic modeling and salience estimation. For latent semantic modeling, a neural generative model called Variational Auto-Encoders (VAEs) is employed to describe the observed sentences and the corresponding latent semantic representations. Neural variational inference is used for the posterior inference of the latent variables. For salience estimation, we propose an unsupervised data reconstruction framework, which jointly considers the reconstruction for latent semantic space and observed term vector space. Therefore, we can capture the salience of sentences from these two different and complementary vector spaces. Thereafter, the VAEs-based latent semantic model is integrated into the sentence salience estimation component in a unified fashion, and the whole framework can be trained jointly by back-propagation via multi-task learning. Experimental results on the benchmark datasets DUC and TAC show that our framework achieves better performance than the state-of-the-art models.",
"",
"Query relevance ranking and sentence saliency ranking are the two main tasks in extractive query-focused summarization. Previous supervised summarization systems often perform the two tasks in isolation. However, since reference summaries are the trade-off between relevance and saliency, using them as supervision, neither of the two rankers could be trained well. This paper proposes a novel summarization system called AttSum, which tackles the two tasks jointly. It automatically learns distributed representations for sentences as well as the document cluster. Meanwhile, it applies the attention mechanism to simulate the attentive reading of human behavior when a query is given. Extensive experiments are conducted on DUC query-focused summarization benchmark datasets. Without using any hand-crafted features, AttSum achieves competitive performance. It is also observed that the sentences recognized to focus on the query indeed meet the query need.",
"Most of the existing multi-document summarization methods decompose the documents into sentences and work directly in the sentence space using a term-sentence matrix. However, the knowledge on the document side, i.e. the topics embedded in the documents, can help the context understanding and guide the sentence selection in the summarization procedure. In this paper, we propose a new Bayesian sentence-based topic model for summarization by making use of both the term-document and term-sentence associations. An efficient variational Bayesian algorithm is derived for model parameter estimation. Experimental results on benchmark data sets show the effectiveness of the proposed model for the multi-document summarization task.",
"A system that can produce informative summaries, highlighting common information found in many online documents, will help Web users to pinpoint information that they need without extensive reading. In this article, we introduce sentence fusion, a novel text-to-text generation technique for synthesizing common information across documents. Sentence fusion involves bottom-up local multisequence alignment to identify phrases conveying similar information and statistical generation to combine common phrases into a sentence. Sentence fusion moves the summarization field from the use of purely extractive methods to the generation of abstracts that contain sentences not found in any of the input documents and can synthesize information across sources.",
"This paper discusses a text extraction approach to multi-document summarization that builds on single-document summarization methods by using additional, available information about the document set as a whole and the relationships between the documents. Multi-document summarization differs from single in that the issues of compression, speed, redundancy and passage selection are critical in the formation of useful summaries. Our approach addresses these issues by using domain-independent techniques based mainly on fast, statistical processing, a metric for reducing redundancy and maximizing diversity in the selected passages, and a modular framework to allow easy parameterization for different genres, corpora characteristics and user requirements.",
"We propose a new MDS paradigm called reader-aware multi-document summarization (RA-MDS). Specifically, a set of reader comments associated with the news reports are also collected. The generated summaries from the reports for the event should be salient according to not only the reports but also the reader comments. To tackle this RAMDS problem, we propose a sparse-coding-based method that is able to calculate the salience of the text units by jointly considering news reports and reader comments. Another reader-aware characteristic of our framework is to improve linguistic quality via entity rewriting. The rewriting consideration is jointly assessed together with other summarization requirements under a unified optimization model. To support the generation of compressive summaries via optimization, we explore a finer syntactic unit, namely, noun verb phrase. In this work, we also generate a data set for conducting RA-MDS. Extensive experiments on this data set and some classical data sets demonstrate the effectiveness of our proposed approach.",
"We propose an abstraction-based multidocument summarization framework that can construct new sentences by exploring more fine-grained syntactic units than sentences, namely, noun verb phrases. Different from existing abstraction-based approaches, our method first constructs a pool of concepts and facts represented by phrases from the input documents. Then new sentences are generated by selecting and merging informative phrases to maximize the salience of phrases and meanwhile satisfy the sentence construction constraints. We employ integer linear optimization for conducting phrase selection and merging simultaneously in order to achieve the global optimal solution for a summary. Experimental results on the benchmark data set TAC 2011 show that our framework outperforms the state-ofthe-art models under automated pyramid evaluation metric, and achieves reasonably well results on manual linguistic quality evaluation.",
"",
"We present SummaRuNNer, a Recurrent Neural Network (RNN) based sequence model for extractive summarization of documents and show that it achieves performance better than or comparable to state-of-the-art. Our model has the additional advantage of being very interpretable, since it allows visualization of its predictions broken up by abstract features such as information content, salience and novelty. Another novel contribution of our work is abstractive training of our extractive model that can train on human generated reference summaries alone, eliminating the need for sentence-level extractive labels.",
"We introduce a stochastic graph-based method for computing relative importance of textual units for Natural Language Processing. We test the technique on the problem of Text Summarization (TS). Extractive TS relies on the concept of sentence salience to identify the most important sentences in a document or set of documents. Salience is typically defined in terms of the presence of particular important words or in terms of similarity to a centroid pseudo-sentence. We consider a new approach, LexRank, for computing sentence importance based on the concept of eigenvector centrality in a graph representation of sentences. In this model, a connectivity matrix based on intra-sentence cosine similarity is used as the adjacency matrix of the graph representation of sentences. Our system, based on LexRank ranked in first place in more than one task in the recent DUC 2004 evaluation. In this paper we present a detailed analysis of our approach and apply it to a larger data set including data from earlier DUC evaluations. We discuss several methods to compute centrality using the similarity graph. The results show that degree-based methods (including LexRank) outperform both centroid-based methods and other systems participating in DUC in most of the cases. Furthermore, the LexRank with threshold method outperforms the other degree-based techniques including continuous LexRank. We also show that our approach is quite insensitive to the noise in the data that may result from an imperfect topical clustering of documents.",
"Numerous approaches for identifying important content for automatic text summarization have been developed to date. Topic representation approaches first derive an intermediate representation of the text that captures the topics discussed in the input. Based on these representations of topics, sentences in the input document are scored for importance. In contrast, in indicator representation approaches, the text is represented by a diverse set of possible indicators of importance which do not aim at discovering topicality. These indicators are combined, very often using machine learning techniques, to score the importance of each sentence. Finally, a summary is produced by selecting sentences in a greedy approach, choosing the sentences that will go in the summary one by one, or globally optimizing the selection, choosing the best set of sentences to form a summary. In this chapter we give a broad overview of existing approaches based on these distinctions, with particular attention on how representation, sentence scoring or summary selection strategies alter the overall performance of the summarizer. We also point out some of the peculiarities of the task of summarization which have posed challenges to machine learning approaches for the problem, and some of the suggested solutions.",
"Traditional approaches to extractive summarization rely heavily on humanengineered features. In this work we propose a data-driven approach based on neural networks and continuous sentence features. We develop a general framework for single-document summarization composed of a hierarchical document encoder and an attention-based extractor. This architecture allows us to develop different classes of summarization models which can extract sentences or words. We train our models on large scale corpora containing hundreds of thousands of document-summary pairs 1 . Experimental results on two summarization datasets demonstrate that our models obtain results comparable to the state of the art without any access to linguistic annotation.",
"Joint compression and summarization has been used recently to generate high quality summaries. However, such word-based joint optimization is computationally expensive. In this paper we adopt the ‘sentence compression + sentence selection’ pipeline approach for compressive summarization, but propose to perform summary guided compression, rather than generic sentence-based compression. To create an annotated corpus, the human annotators were asked to compress sentences while explicitly given the important summary words in the sentences. Using this corpus, we train a supervised sentence compression model using a set of word-, syntax-, and documentlevel features. During summarization, we use multiple compressed sentences in the integer linear programming framework to select salient summary sentences. Our results on the TAC 2008 and 2011 summarization data sets show that by incorporating the guided sentence compression model, our summarization system can yield significant performance gain as compared to the state-of-the-art.",
"Scoring sentences in documents given abstract summaries created by humans is important in extractive multi-document summarization. In this paper, we formulate extractive summarization as a two step learning problem building a generative model for pattern discovery and a regression model for inference. We calculate scores for sentences in document clusters based on their latent characteristics using a hierarchical topic model. Then, using these scores, we train a regression model based on the lexical and structural characteristics of the sentences, and use the model to score sentences of new documents to form a summary. Our system advances current state-of-the-art improving ROUGE scores by 7 . Generated summaries are less redundant and more coherent based upon manual quality evaluations."
]
} |
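The LexRank computation described in one of the abstracts above (eigenvector centrality on a sentence-similarity graph) comes down to a damped power iteration. In the sketch below, sentence vectors are random stand-ins for tf-idf rows and the damping factor follows the PageRank-style formulation; both are illustrative choices.

```python
# LexRank-style salience: build a cosine-similarity graph over sentence
# vectors, row-normalise it into a stochastic matrix, and take (a damped
# version of) its stationary distribution as the sentence scores.
import numpy as np

rng = np.random.default_rng(0)
S = rng.uniform(size=(6, 20))                # 6 sentences, 20 terms
S /= np.linalg.norm(S, axis=1, keepdims=True)

sim = S @ S.T                                # cosine similarity graph
P = sim / sim.sum(axis=1, keepdims=True)     # row-stochastic transition matrix

d, n = 0.85, len(P)
r = np.full(n, 1 / n)
for _ in range(50):                          # damped power iteration
    r = (1 - d) / n + d * (r @ P)

print("salience ranking:", np.argsort(-r))   # most central sentences first
```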