aid stringlengths 9 15 | mid stringlengths 7 10 | abstract stringlengths 78 2.56k | related_work stringlengths 92 1.77k | ref_abstract dict |
|---|---|---|---|---|
1902.07456 | 2925799800 | Aggregate location statistics are used in a number of mobility analytics to express how many people are in a certain location at a given time (but not who). However, prior work has shown that an adversary with some prior knowledge of a victim's mobility patterns can mount membership inference attacks to determine whether or not that user contributed to the aggregates. In this paper, we set out to understand why such inferences are successful and what can be done to mitigate them. We conduct an in-depth feature analysis, finding that the volume of data contributed and the regularity and particularity of mobility patterns play a crucial role in the attack. We then use these insights to adapt defenses proposed in the location privacy literature to the aggregate setting, and evaluate their privacy-utility trade-offs for common mobility analytics. We show that, while there is no silver bullet that enables arbitrary analysis, there are defenses that provide reasonable utility for particular tasks while reducing the extent of the inference. | Aggregate location privacy. Aggregation is often not an effective way to preserve the privacy of location data, as aggregates leak information about individual users. @cite_42 reconstruct victims' location trajectories from aggregate mobility data, without any prior knowledge, while @cite_14 shows that aggregate location time-series can be used by an adversary to build accurate profiles of users' movements. Finally, @cite_47 study the effect of defenses on finding points of interest while computing aggregated statistics of geo-located measurements; in this work, we focus on a different privacy violation, i.e., membership inference. | {
"cite_N": [
"@cite_14",
"@cite_47",
"@cite_42"
],
"mid": [
"2952005775",
"2910636828",
"2593227599"
],
"abstract": [
"Information about people's movements and the locations they visit enables an increasing number of mobility analytics applications, e.g., in the context of urban and transportation planning. In this setting, rather than collecting or sharing raw data, entities often use aggregation as a privacy protection mechanism, aiming to hide individual users' location traces. Furthermore, to bound information leakage from the aggregates, they can perturb the input of the aggregation or its output to ensure that these are differentially private. In this paper, we set out to evaluate the impact of releasing aggregate location time-series on the privacy of individuals contributing to the aggregation. We introduce a framework allowing us to reason about privacy against an adversary attempting to predict users' locations or recover their mobility patterns. We formalize these attacks as inference problems, and discuss a few strategies to model the adversary's prior knowledge based on the information she may have access to. We then use the framework to quantify the privacy loss stemming from aggregate location data, with and without the protection of differential privacy, using two real-world mobility datasets. We find that aggregates do leak information about individuals' punctual locations and mobility profiles. The density of the observations, as well as timing, play important roles, e.g., regular patterns during peak hours are better protected than sporadic movements. Finally, our evaluation shows that both output and input perturbation offer little additional protection, unless they introduce large amounts of noise ultimately destroying the utility of the data.",
"",
"Human mobility data has been ubiquitously collected through cellular networks and mobile applications, and publicly released for academic research and commercial purposes for the last decade. Since releasing individual's mobility records usually gives rise to privacy issues, dataset owners tend to only publish aggregated mobility data, such as the number of users covered by a cellular tower at a specific timestamp, which is believed to be sufficient for preserving users' privacy. However, in this paper, we argue and prove that even publishing aggregated mobility data could lead to privacy breach in individuals' trajectories. We develop an attack system that is able to exploit the uniqueness and regularity of human mobility to recover individual's trajectories from the aggregated mobility data without any prior knowledge. By conducting experiments on two real-world datasets collected from both mobile application and cellular network, we reveal that the attack system is able to recover users' trajectories with accuracy of about 73%–91% at the scale of tens of thousands to hundreds of thousands of users, which indicates severe privacy leakage in such datasets. Through the investigation on aggregated mobility data, our work recognizes a novel privacy problem in publishing statistic data, which calls for immediate attention from both academia and industry."
]
} |
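The membership inference attack described in this record can be illustrated with a toy distinguishability experiment. The sketch below is not the paper's actual attack (which trains a classifier on richer features); the synthetic traces and the simple dot-product statistic are illustrative assumptions, showing why a regular, distinctive mobility pattern lets an adversary tell aggregates that include the target apart from aggregates that do not:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_slots, group_size = 200, 48, 50

# Synthetic binary mobility traces: 1 = user seen in that time slot.
traces = (rng.random((n_users, n_slots)) < 0.1).astype(float)
target = 0
traces[target] = 0.0
traces[target, :12] = 1.0  # the target has a regular, distinctive pattern

def score(agg):
    # Adversary's statistic: mass of the released aggregate on the slots
    # the adversary knows the target visits (the prior knowledge).
    return float(traces[target] @ agg)

in_scores, out_scores = [], []
for _ in range(300):
    others = rng.choice(np.arange(1, n_users), size=group_size - 1, replace=False)
    out_scores.append(score(traces[others].sum(axis=0)))                    # target out
    in_scores.append(score(traces[others].sum(axis=0) + traces[target]))    # target in

# A simple threshold attack: balanced accuracy well above the 0.5 chance level.
thr = float(np.median(in_scores + out_scores))
acc = (np.mean(np.array(in_scores) > thr)
       + np.mean(np.array(out_scores) <= thr)) / 2
print(round(float(acc), 2))
```

The gap between the two score distributions grows with the volume and regularity of the target's contribution, which matches the feature analysis the abstract describes.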
1902.07456 | 2925799800 | Aggregate location statistics are used in a number of mobility analytics to express how many people are in a certain location at a given time (but not who). However, prior work has shown that an adversary with some prior knowledge of a victim's mobility patterns can mount membership inference attacks to determine whether or not that user contributed to the aggregates. In this paper, we set out to understand why such inferences are successful and what can be done to mitigate them. We conduct an in-depth feature analysis, finding that the volume of data contributed and the regularity and particularity of mobility patterns play a crucial role in the attack. We then use these insights to adapt defenses proposed in the location privacy literature to the aggregate setting, and evaluate their privacy-utility trade-offs for common mobility analytics. We show that, while there is no silver bullet that enables arbitrary analysis, there are defenses that provide reasonable utility for particular tasks while reducing the extent of the inference. | Membership inference on aggregate locations. As discussed in , @cite_24 model MIAs against aggregate locations using a distinguishability game, and train a classifier to differentiate aggregates including the data of a target from those that do not. While our analysis is based on their attacks, our research objective is substantially different. 's main goal is to investigate the feasibility of inference attacks; whereas we aim to gain a deeper understanding of the reasons behind the attacks' success, providing insights about locations and times that ease inference and the characteristics of the users who are affected more than others.
Moreover, @cite_24 only studies the utility-privacy trade-off provided by differential privacy @cite_12 @cite_2 , while we use the insights obtained in our analysis to select potential mitigation approaches, which we evaluate, both in terms of privacy and utility, in the context of various spatio-temporal analytics tasks. | {
"cite_N": [
"@cite_24",
"@cite_12",
"@cite_2"
],
"mid": [
"",
"2104803737",
"2109426455"
],
"abstract": [
"",
"We propose the first differentially private aggregation algorithm for distributed time-series data that offers good practical utility without any trusted server. This addresses two important challenges in participatory data-mining applications where (i) individual users collect temporally correlated time-series data (such as location traces, web history, personal health data), and (ii) an untrusted third-party aggregator wishes to run aggregate queries on the data. To ensure differential privacy for time-series data despite the presence of temporal correlation, we propose the Fourier Perturbation Algorithm (FPAk). Standard differential privacy techniques perform poorly for time-series data. To answer n queries, such techniques can result in a noise of Θ(n) to each query answer, making the answers practically useless if n is large. Our FPAk algorithm perturbs the Discrete Fourier Transform of the query answers. For answering n queries, FPAk improves the expected error from Θ(n) to roughly Θ(k) where k is the number of Fourier coefficients that can (approximately) reconstruct all the n query answers. Our experiments show that k ≪ n. To deal with the absence of a trusted central server, we propose the Distributed Laplace Perturbation Algorithm (DLPA) to add noise in a distributed way in order to guarantee differential privacy. To the best of our knowledge, DLPA is the first distributed differentially private algorithm that can scale with a large number of users: DLPA outperforms the only other distributed solution for differential privacy proposed so far, by reducing the computational load per user from O(U) to O(1) where U is the number of users.",
"Over the past five years a new approach to privacy-preserving data analysis has born fruit [13, 18, 7, 19, 5, 37, 35, 8, 32]. This approach differs from much (but not all!) of the related literature in the statistics, databases, theory, and cryptography communities, in that a formal and ad omnia privacy guarantee is defined, and the data analysis techniques presented are rigorously proved to satisfy the guarantee. The key privacy guarantee that has emerged is differential privacy. Roughly speaking, this ensures that (almost, and quantifiably) no risk is incurred by joining a statistical database. In this survey, we recall the definition of differential privacy and two basic techniques for achieving it. We then show some interesting applications of these techniques, presenting algorithms for three specific tasks and three general results on differentially private learning."
]
} |
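The Fourier Perturbation Algorithm (FPAk) summarized in the abstract above can be sketched in a few lines: add Laplace noise to only the leading k DFT coefficients of the n query answers instead of to all n answers. The signal shape, noise scales, and parameter choices below are illustrative assumptions, not the paper's calibrated sensitivities:

```python
import numpy as np

rng = np.random.default_rng(1)

# True answers to n sequential count queries (e.g., hourly location counts):
# a smooth periodic signal plus small fluctuations.
n, k, epsilon = 256, 8, 1.0
t = np.arange(n)
true_counts = 50 + 10 * np.sin(2 * np.pi * t / 64) + rng.normal(0, 1, n)

# FPAk sketch: keep the first k DFT coefficients, perturb them with Laplace
# noise, and drop the rest before reconstructing the n answers.
coeffs = np.fft.rfft(true_counts)
kept = np.zeros_like(coeffs)
kept[:k] = coeffs[:k] + rng.laplace(0, k / epsilon, k) \
           + 1j * rng.laplace(0, k / epsilon, k)
fpa_counts = np.fft.irfft(kept, n)

# Baseline: independent Laplace noise on every query answer, where the
# sensitivity (and hence the noise scale) grows with n.
lpa_counts = true_counts + rng.laplace(0, n / epsilon, n)

err_fpa = np.mean((fpa_counts - true_counts) ** 2)
err_lpa = np.mean((lpa_counts - true_counts) ** 2)
print(err_fpa < err_lpa)
```

Because the smooth signal is concentrated in a few Fourier coefficients (k ≪ n), the reconstruction error of the FPAk sketch is orders of magnitude below the per-query Laplace baseline, mirroring the Θ(n) → Θ(k) improvement the abstract claims.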
1902.07653 | 2915095025 | Automatic age estimation from facial images represents an important task in computer vision. This paper analyses the effect of gender, age, ethnic, makeup and expression attributes of faces as sources of bias to improve deep apparent age prediction. Following recent works where it is shown that apparent age labels benefit real age estimation, rather than direct real to real age regression, our main contribution is the integration, in an end-to-end architecture, of face attributes for apparent age prediction with an additional loss for real age regression. Experimental results on the APPA-REAL dataset indicate the proposed network successfully takes advantage of the adopted attributes to improve both apparent and real age estimation. Our model outperformed a state-of-the-art architecture proposed to separately address apparent and real age regression. Finally, we present preliminary results and discussion of a proof of concept application using the proposed model to regress the apparent age of an individual based on the gender of an external observer. | This section reviews related work on real and apparent age estimation. Early and recent works are briefly discussed without the intention of providing an extended and comprehensive review on the topic. To this end, we refer the reader to @cite_13 . Then, we revisit related studies on the analysis of bias in age estimation. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2807323414"
],
"abstract": [
"Facial aging adversely impacts performance of face recognition and face verification and authentication using facial features. This stochastic personalized inevitable process poses dynamic theoretical and practical challenge to the computer vision and pattern recognition community. Age estimation is labeling a face image with exact real age or age group. How do humans recognize faces across ages? Do they learn the pattern or use age-invariant features? What are these age-invariant features that uniquely identify one across ages? These questions and others have attracted significant interest in the computer vision and pattern recognition research community. In this paper, we present a thorough analysis of recent research in aging and age estimation. We discuss popular algorithms used in age estimation, existing models, and how they compare with each other; we compare performance of various systems and how they are evaluated, age estimation challenges, and insights for future research."
]
} |
1902.07653 | 2915095025 | Automatic age estimation from facial images represents an important task in computer vision. This paper analyses the effect of gender, age, ethnic, makeup and expression attributes of faces as sources of bias to improve deep apparent age prediction. Following recent works where it is shown that apparent age labels benefit real age estimation, rather than direct real to real age regression, our main contribution is the integration, in an end-to-end architecture, of face attributes for apparent age prediction with an additional loss for real age regression. Experimental results on the APPA-REAL dataset indicate the proposed network successfully takes advantage of the adopted attributes to improve both apparent and real age estimation. Our model outperformed a state-of-the-art architecture proposed to separately address apparent and real age regression. Finally, we present preliminary results and discussion of a proof of concept application using the proposed model to regress the apparent age of an individual based on the gender of an external observer. | In the case of apparent age estimation, each face image usually contains multiple age labels, related to variations in perception coming from different annotators (observers). Agustsson @cite_17 reported that real age estimation could be successfully tackled as a combination of apparent and real age estimation by learning residuals. Geng @cite_2 modeled an aging pattern by constructing a subspace given a set of ordered face images by age. In the aging pattern, each position indicates its apparent age. Zhu @cite_4 proposed to learn deep representations in a cascaded way. They analysed how to utilise a large number of face images without apparent age labels to learn a face representation, as well as how to tune a deep network using a limited number of labelled samples. Malli @cite_6 proposed to group face images within a specified age range to train an ensemble of deep learning models.
The outputs of these trained models are then combined to obtain a final apparent age estimation. | {
"cite_N": [
"@cite_2",
"@cite_6",
"@cite_4",
"@cite_17"
],
"mid": [
"2106488920",
"2414236216",
"2249960609",
"2725329413"
],
"abstract": [
"While recognition of most facial variations, such as identity, expression, and gender, has been extensively studied, automatic age estimation has rarely been explored. In contrast to other facial variations, aging variation presents several unique characteristics which make age estimation a challenging task. This paper proposes an automatic age estimation method named AGES (AGing pattErn Subspace). The basic idea is to model the aging pattern, which is defined as the sequence of a particular individual's face images sorted in time order, by constructing a representative subspace. The proper aging pattern for a previously unseen face image is determined by the projection in the subspace that can reconstruct the face image with minimum reconstruction error, while the position of the face image in that aging pattern will then indicate its age. In the experiments, AGES and its variants are compared with the limited existing age estimation methods (WAS and AAS) and some well-established classification methods (kNN, BP, C4.5, and SVM). Moreover, a comparison with human perception ability on age is conducted. It is interesting to note that the performance of AGES is not only significantly better than that of all the other algorithms, but also comparable to that of the human observers.",
"In this paper, we address the problem of apparent age estimation. Different from estimating the real age of individuals, in which each face image has a single age label, in this problem, face images have multiple age labels, corresponding to the ages perceived by the annotators, when they look at these images. This provides an intriguing computer vision problem, since in generic image or object classification tasks, it is typical to have a single ground truth label per class. To account for multiple labels per image, instead of using average age of the annotated face image as the class label, we have grouped the face images that are within a specified age range. Using these age groups and their age-shifted groupings, we have trained an ensemble of deep learning models. Before feeding an input face image to a deep learning model, five facial landmark points are detected and used for 2-D alignment. We have employed and fine tuned convolutional neural networks (CNNs) that are based on VGG-16 [24] architecture and pretrained on the IMDB-WIKI dataset [22]. The outputs of these deep learning models are then combined to produce the final estimation. Proposed method achieves 0.3668 error in the final ChaLearn LAP 2016 challenge test set [5].",
"Age estimation from facial images is an important problem in computer vision and pattern recognition. Typically the goal is to predict the chronological age of a person given his or her face picture. It is seldom to study a related problem, that is, how old does a person look like from the face photo? It is called apparent age estimation. A key difference between apparent age estimation and the traditional age estimation is that the age labels are annotated by human assessors rather than the real chronological age. The challenge for apparent age estimation is that there are not many face images available with annotated age labels. Further, the annotated age labels for each face photo may not be consistent among different assessors. We study the problem of apparent age estimation by addressing the issues from different aspects, such as how to utilize a large number of face images without apparent age labels to learn a face representation using the deep neural networks, how to tune the deep networks using a limited number of examples with apparent age labels, and how well the machine learning methods can perform to estimate apparent ages. The apparent age data is from the ChaLearn Looking At People (LAP) challenge 2015. Using the protocol and time frame given by the challenge competition, we have achieved an error of 0.294835 on the final evaluation, and our result has been ranked the 3rd place in this competition.",
"After decades of research, the real (biological) age estimation from a single face image reached maturity thanks to the availability of large public face databases and impressive accuracies achieved by recently proposed methods. The estimation of “apparent age” is a related task concerning the age perceived by human observers. Significant advances have been also made in this new research direction with the recent Looking At People challenges. In this paper we make several contributions to age estimation research. (i) We introduce APPA-REAL, a large face image database with both real and apparent age annotations. (ii)We study the relationship between real and apparent age. (iii) We develop a residual age regression method to further improve the performance. (iv) We show that real age estimation can be successfully tackled as an apparent age estimation followed by an apparent to real age residual regression. (v) We graphically reveal the facial regions on which the CNN focuses in order to perform apparent and real age estimation tasks."
]
} |
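The residual idea attributed to @cite_17 above, treating real age estimation as apparent age estimation followed by an apparent-to-real residual regression, can be sketched on synthetic data. The linear residual model, the synthetic bias, and all numbers below are illustrative assumptions, not the paper's CNN:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

# Synthetic ground truth: apparent age is a biased, noisy view of real age
# (here, a systematic "looks younger with age" bias, purely for illustration).
real = rng.uniform(18, 80, n)
apparent = real - 0.1 * (real - 18) + rng.normal(0, 2, n)

# Stand-in for a trained apparent-age network: a noisy apparent-age estimate.
apparent_pred = apparent + rng.normal(0, 1, n)

# Stage 2: fit a linear apparent-to-real residual regressor on a train split.
train_idx, test_idx = np.arange(0, 800), np.arange(800, n)
A = np.c_[apparent_pred[train_idx], np.ones(len(train_idx))]
w, *_ = np.linalg.lstsq(A, real[train_idx] - apparent_pred[train_idx], rcond=None)

# Corrected real-age estimate = apparent prediction + predicted residual.
corrected = apparent_pred[test_idx] \
            + np.c_[apparent_pred[test_idx], np.ones(len(test_idx))] @ w

mae_direct = np.mean(np.abs(apparent_pred[test_idx] - real[test_idx]))
mae_residual = np.mean(np.abs(corrected - real[test_idx]))
print(mae_residual < mae_direct)
```

Whenever the apparent-real gap is systematic rather than random, the residual stage removes most of it, which is the mechanism the residual regression approach exploits.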
1902.07653 | 2915095025 | Automatic age estimation from facial images represents an important task in computer vision. This paper analyses the effect of gender, age, ethnic, makeup and expression attributes of faces as sources of bias to improve deep apparent age prediction. Following recent works where it is shown that apparent age labels benefit real age estimation, rather than direct real to real age regression, our main contribution is the integration, in an end-to-end architecture, of face attributes for apparent age prediction with an additional loss for real age regression. Experimental results on the APPA-REAL dataset indicate the proposed network successfully takes advantage of the adopted attributes to improve both apparent and real age estimation. Our model outperformed a state-of-the-art architecture proposed to separately address apparent and real age regression. Finally, we present preliminary results and discussion of a proof of concept application using the proposed model to regress the apparent age of an individual based on the gender of an external observer. | While state-of-the-art machine learning algorithms can provide accurate predictions for age estimation, whether real or apparent age is considered, they are still affected by variations in face characteristics. How can age prediction performance be enhanced in this case? With this objective in mind, the analysis of bias in age perception has recently emerged @cite_1 @cite_19 . Can we better understand age perception and its biases so that the findings can be used to improve real age estimation? Along this line, Clapés @cite_1 found some consistent biases in the APPA-REAL @cite_17 dataset when relating apparent to real age. However, an end-to-end approach for bias removal was not considered. According to Alvi @cite_19 , training an age predictor on a dataset that is not balanced for gender can lead to gender-biased predictions.
They presented an algorithm to remove biases from the feature representation, as well as to ensure that the network is blind to a known bias in the dataset, thus improving classification accuracy, particularly when training networks on extremely biased datasets. | {
"cite_N": [
"@cite_19",
"@cite_1",
"@cite_17"
],
"mid": [
"2953218089",
"2901900434",
"2725329413"
],
"abstract": [
"Neural networks achieve the state-of-the-art in image classification tasks. However, they can encode spurious variations or biases that may be present in the training data. For example, training an age predictor on a dataset that is not balanced for gender can lead to gender biased predictions (e.g. wrongly predicting that males are older if only elderly males are in the training set). We present two distinct contributions: 1) An algorithm that can remove multiple sources of variation from the feature representation of a network. We demonstrate that this algorithm can be used to remove biases from the feature representation, and thereby improve classification accuracies, when training networks on extremely biased datasets. 2) An ancestral origin database of 14,000 images of individuals from East Asia, the Indian subcontinent, sub-Saharan Africa, and Western Europe. We demonstrate on this dataset, for a number of facial attribute classification tasks, that we are able to remove racial biases from the network feature representation.",
"Real age estimation in still images of faces is an active area of research in the computer vision community. However, very few works attempted to analyse the apparent age as perceived by observers. Apparent age estimation is a subjective task, which is affected by many factors present in the image as well as by observer's characteristics. In this work, we enhance the APPA-REAL dataset, containing around 8K images with real and apparent ages, with new annotated attributes, namely gender, ethnic, makeup, and expression. Age and gender from a subset of guessers is also provided. We show there exists some consistent bias for a subset of these attributes when relating apparent to real age. In addition we run simple experiments with a basic Convolutional Neural Network (CNN) showing that considering apparent labels for training improves real age estimation rather than training with real ages. We also perform bias correction on CNN predictions, showing that it further enhance final age recognition performance.",
"After decades of research, the real (biological) age estimation from a single face image reached maturity thanks to the availability of large public face databases and impressive accuracies achieved by recently proposed methods. The estimation of “apparent age” is a related task concerning the age perceived by human observers. Significant advances have been also made in this new research direction with the recent Looking At People challenges. In this paper we make several contributions to age estimation research. (i) We introduce APPA-REAL, a large face image database with both real and apparent age annotations. (ii)We study the relationship between real and apparent age. (iii) We develop a residual age regression method to further improve the performance. (iv) We show that real age estimation can be successfully tackled as an apparent age estimation followed by an apparent to real age residual regression. (v) We graphically reveal the facial regions on which the CNN focuses in order to perform apparent and real age estimation tasks."
]
} |
1902.07517 | 2913869475 | Abstract Noisy labeled data represent a rich source of information that often are easily accessible and cheap to obtain, but label noise might also have many negative consequences if not accounted for. How to fully utilize noisy labels has been studied extensively within the framework of standard supervised machine learning over a period of several decades. However, very little research has been conducted on solving the challenge posed by noisy labels in non-standard settings. This includes situations where only a fraction of the samples are labeled (semi-supervised) and each high-dimensional sample is associated with multiple labels. In this work, we present a novel semi-supervised and multi-label dimensionality reduction method that effectively utilizes information from both noisy multi-labels and unlabeled data. With the proposed Noisy multi-label semi-supervised dimensionality reduction (NMLSDR) method, the noisy multi-labels are denoised and unlabeled data are labeled simultaneously via a specially designed label propagation algorithm. NMLSDR then learns a projection matrix for reducing the dimensionality by maximizing the dependence between the enlarged and denoised multi-label space and the features in the projected space. Extensive experiments on synthetic data, benchmark datasets, as well as a real-world case study, demonstrate the effectiveness of the proposed algorithm and show that it outperforms state-of-the-art multi-label feature extraction algorithms. | In this section we review related unsupervised, semi-supervised and supervised DR methods. DR may be obtained both by feature extraction, i.e. by a data transformation, and by feature selection @cite_61 . Here, we refer to DR in the sense of feature extraction. | {
"cite_N": [
"@cite_61"
],
"mid": [
"2119479037"
],
"abstract": [
"Variable and feature selection have become the focus of much research in areas of application for which datasets with tens or hundreds of thousands of variables are available. These areas include text processing of internet documents, gene expression array analysis, and combinatorial chemistry. The objective of variable selection is three-fold: improving the prediction performance of the predictors, providing faster and more cost-effective predictors, and providing a better understanding of the underlying process that generated the data. The contributions of this special issue cover a wide range of aspects of such problems: providing a better definition of the objective function, feature construction, feature ranking, multivariate feature selection, efficient search methods, and feature validity assessment methods."
]
} |
1902.07517 | 2913869475 | Abstract Noisy labeled data represent a rich source of information that often are easily accessible and cheap to obtain, but label noise might also have many negative consequences if not accounted for. How to fully utilize noisy labels has been studied extensively within the framework of standard supervised machine learning over a period of several decades. However, very little research has been conducted on solving the challenge posed by noisy labels in non-standard settings. This includes situations where only a fraction of the samples are labeled (semi-supervised) and each high-dimensional sample is associated with multiple labels. In this work, we present a novel semi-supervised and multi-label dimensionality reduction method that effectively utilizes information from both noisy multi-labels and unlabeled data. With the proposed Noisy multi-label semi-supervised dimensionality reduction (NMLSDR) method, the noisy multi-labels are denoised and unlabeled data are labeled simultaneously via a specially designed label propagation algorithm. NMLSDR then learns a projection matrix for reducing the dimensionality by maximizing the dependence between the enlarged and denoised multi-label space and the features in the projected space. Extensive experiments on synthetic data, benchmark datasets, as well as a real-world case study, demonstrate the effectiveness of the proposed algorithm and show that it outperforms state-of-the-art multi-label feature extraction algorithms. | Unsupervised DR methods do not exploit label information and can therefore straightforwardly be applied to multi-label data by simply ignoring the labels. For example, principal component analysis (PCA) aims to find the projection such that the variance of the input space is maximally preserved @cite_30 . 
Other methods aim to find a lower dimensional embedding that preserves the manifold structure of the data, and examples of these include Locally linear embedding @cite_63 , Laplacian eigenmaps @cite_27 and ISOMAP @cite_71 . | {
"cite_N": [
"@cite_30",
"@cite_27",
"@cite_63",
"@cite_71"
],
"mid": [
"2148694408",
"2156718197",
"2053186076",
"2001141328"
],
"abstract": [
"Introduction * Properties of Population Principal Components * Properties of Sample Principal Components * Interpreting Principal Components: Examples * Graphical Representation of Data Using Principal Components * Choosing a Subset of Principal Components or Variables * Principal Component Analysis and Factor Analysis * Principal Components in Regression Analysis * Principal Components Used with Other Multivariate Techniques * Outlier Detection, Influential Observations and Robust Estimation * Rotation and Interpretation of Principal Components * Principal Component Analysis for Time Series and Other Non-Independent Data * Principal Component Analysis for Special Types of Data * Generalizations and Adaptations of Principal Component Analysis",
"Drawing on the correspondence between the graph Laplacian, the Laplace-Beltrami operator on a manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for constructing a representation for data sampled from a low dimensional manifold embedded in a higher dimensional space. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality preserving properties and a natural connection to clustering. Several applications are considered.",
"Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text. How do we judge similarity? Our mental representations of the world are formed by processing large numbers of sensory in",
"Scientists working with large volumes of high-dimensional data, such as global climate patterns, stellar spectra, or human gene distributions, regularly confront the problem of dimensionality reduction: finding meaningful low-dimensional structures hidden in their high-dimensional observations. The human brain confronts the same problem in everyday perception, extracting from its high-dimensional sensory inputs—30,000 auditory nerve fibers or 106 optic nerve fibers—a manageably small number of perceptually relevant features. Here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set. Unlike classical techniques such as principal component analysis (PCA) and multidimensional scaling (MDS), our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations, such as human handwriting or images of a face under different viewing conditions. In contrast to previous algorithms for nonlinear dimensionality reduction, ours efficiently computes a globally optimal solution, and, for an important class of data manifolds, is guaranteed to converge asymptotically to the true structure."
]
} |
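The four abstracts in the row above (the PCA monograph, Laplacian eigenmaps, LLE, and Isomap) all concern dimensionality reduction, with the nonlinear methods contrasted against classical PCA. As a minimal common reference point, a PCA sketch in NumPy; the function name and toy usage are ours, not from the cited works.

```python
import numpy as np

def pca(X, k):
    """Project an n x d data matrix X onto its top-k principal components.

    Minimal sketch of classical PCA: center the data, eigendecompose the
    sample covariance, and keep the k leading eigenvectors.
    """
    Xc = X - X.mean(axis=0)                 # center each feature
    cov = Xc.T @ Xc / (len(X) - 1)          # sample covariance (d x d)
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    W = vecs[:, ::-1][:, :k]                # top-k eigenvectors as columns
    return Xc @ W                           # n x k embedding
```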
1902.07517 | 2913869475 | Abstract Noisy labeled data represent a rich source of information that often are easily accessible and cheap to obtain, but label noise might also have many negative consequences if not accounted for. How to fully utilize noisy labels has been studied extensively within the framework of standard supervised machine learning over a period of several decades. However, very little research has been conducted on solving the challenge posed by noisy labels in non-standard settings. This includes situations where only a fraction of the samples are labeled (semi-supervised) and each high-dimensional sample is associated with multiple labels. In this work, we present a novel semi-supervised and multi-label dimensionality reduction method that effectively utilizes information from both noisy multi-labels and unlabeled data. With the proposed Noisy multi-label semi-supervised dimensionality reduction (NMLSDR) method, the noisy multi-labels are denoised and unlabeled data are labeled simultaneously via a specially designed label propagation algorithm. NMLSDR then learns a projection matrix for reducing the dimensionality by maximizing the dependence between the enlarged and denoised multi-label space and the features in the projected space. Extensive experiments on synthetic data, benchmark datasets, as well as a real-world case study, demonstrate the effectiveness of the proposed algorithm and show that it outperforms state-of-the-art multi-label feature extraction algorithms. | One of the most well-known supervised DR methods is linear discriminant analysis (LDA) @cite_22 , which aims at finding the linear projection that maximizes the within-class similarity and at the same time minimizes the between-class similarity. LDA has been extended to multi-label LDA (MLDA) in several different ways @cite_48 @cite_51 @cite_47 @cite_50 @cite_57 . These methods differ mainly in how the labels are weighted in the algorithm.
Following the notation in @cite_43 , wMLDAb @cite_48 uses binary weights, wMLDAe @cite_16 uses entropy-based weights, wMLDAc @cite_70 uses correlation-based weights, wMLDAf @cite_50 uses fuzzy-based weights, whereas wMLDAd @cite_43 uses dependence-based weights. | {
"cite_N": [
"@cite_22",
"@cite_70",
"@cite_48",
"@cite_57",
"@cite_43",
"@cite_50",
"@cite_47",
"@cite_16",
"@cite_51"
],
"mid": [
"2001619934",
"1526895711",
"2097616835",
"",
"2614937618",
"2046194813",
"",
"2164308541",
""
],
"abstract": [
"",
"Multi-label problems arise frequently in image and video annotations, and many other related applications such as multi-topic text categorization, music classification, etc. Like other computer vision tasks, multi-label image and video annotations also suffer from the difficulty of high dimensionality because images often have a large number of features. Linear discriminant analysis (LDA) is a well-known method for dimensionality reduction. However, the classical Linear Discriminant Analysis (LDA) only works for single-label multi-class classifications and cannot be directly applied to multi-label multi-class classifications. It is desirable to naturally generalize the classical LDA to multi-label formulations. At the same time, multi-label data present a new opportunity to improve classification accuracy through label correlations, which are absent in single-label data. In this work, we propose a novel Multi-label Linear Discriminant Analysis (MLDA) method to take advantage of label correlations and explore the powerful classification capability of the classical LDA to deal with multi-label multi-class problems. Extensive experimental evaluations on five public multi-label data sets demonstrate excellent performance of our method.",
"Linear discriminant analysis (LDA) is one of the most popular dimension reduction methods, but it is originally focused on a single-labeled problem. In this paper, we derive the formulation for applying LDA for a multi-labeled problem. We also propose a generalized LDA algorithm which is effective in a high dimensional multi-labeled problem. Experimental results demonstrate that by considering multi-labeled structure, LDA can achieve computational efficiency and also improve classification performances.",
"",
"Abstract Linear discriminant analysis (LDA) is one of the most popular single-label (multi-class) feature extraction techniques. For the multi-label case, two slightly different generalized versions have been introduced independently. We argue whether there exists a framework to unify such two multi-label LDA methods and to derive more well-performed multi-label LDA techniques further. In this paper, we build a weighted multi-label LDA framework (wMLDA) to consolidate two existing multi-label LDA-type methods with binary and correlation-based weight forms, and further collect two additional weight forms with entropy and fuzzy principles. To exploit both label and feature information more sufficiently, via maximizing dependence based on Hilbert–Schmidt independence criterion, a novel dependence-based weight form is proposed, which is formulated as a non-convex quadratic programming problem with l1-norm and non-negative constraints and then is solved by random block coordinate descent method with a linear convergence rate. Experiments on ten data sets illustrate that our dependence-based wMLDA works the best, and five wMLDA-type algorithms are superior to canonical correlation analysis and multi-label dimensionality reduction via dependency maximization, according to five multi-label classification performance measures and Wilcoxon statistical test.",
"Multi-label classification refers to learning tasks with each instance belonging to one or more classes simultaneously. It arose from real-world applications such as information retrieval, text categorization and functional genomics. Currently, most of the multi-label learning methods use the strategy called binary relevance, which constructs a classifier for each unique label by grouping data into positives (examples with this label) and negatives (examples without this label). With binary relevance, an example with multiple labels is considered as a positive data for each label it belongs to. For some classes, this data point may behave like an outlier confusing classifiers, especially in the cases of well-separated classes. In this paper, we first introduce a new strategy called soft relevance, where each multi-label example is assigned a relevance score to the labels it belongs to. This soft relevance is then employed in a voting function used in a k nearest neighbor classifier. Furthermore, a voting-margin ratio is introduced to the k nearest neighbor classifier for better performance. We compare the proposed method to other multi-label learning methods over three multi-label datasets and demonstrate that the proposed method provides an effective way to multi-label learning.",
"",
"Feature selection on multi-label documents for automatic text categorization is an under-explored research area. This paper presents a systematic document transformation framework, whereby the multi-label documents are transformed into single-label documents before applying standard feature selection algorithms, to solve the multi-label feature selection problem. Under this framework, we undertake a comparative study on four intuitive document transformation approaches and propose a novel approach called entropy-based label assignment (ELA), which assigns the labels weights to a multi-label document based on label entropy. Three standard feature selection algorithms are utilized for evaluating the document transformation approaches in order to verify its impact on multi-class text categorization problems. Using a SVM classifier and two multi-label evaluation benchmark text collections, we show that the choice of document transformation approaches can significantly influence the performance of multi-class categorization and that our proposed document transformation approach ELA can achieve better performance than all other approaches.",
""
]
} |
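The wMLDA variants discussed in this row share the scatter-matrix core of LDA and differ only in how per-sample label weights are chosen. A minimal sketch with binary-style weights (wMLDAb flavour); this is a simplified reading of the family, the function name and details (e.g. the unweighted global mean) are our own assumptions.

```python
import numpy as np

def wmlda_binary(X, Y, k, eps=1e-6):
    """Weighted multi-label LDA sketch: build weighted between-class (Sb)
    and within-class (Sw) scatter matrices, then maximize the Rayleigh
    quotient via the eigenvectors of (Sw + eps*I)^-1 Sb.

    X: n x d features; Y: n x c binary multi-label matrix.
    Returns a d x k projection matrix.
    """
    n, d = X.shape
    # each sample spreads a total weight of 1 evenly over its classes
    W = Y / np.maximum(Y.sum(axis=1, keepdims=True), 1)
    mu = X.mean(axis=0)                    # global mean (unweighted, for simplicity)
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for j in range(Y.shape[1]):
        w = W[:, j]
        if w.sum() == 0:
            continue
        m_j = (w[:, None] * X).sum(axis=0) / w.sum()   # weighted class mean
        Sb += w.sum() * np.outer(m_j - mu, m_j - mu)
        D = X - m_j
        Sw += (w[:, None] * D).T @ D
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw + eps * np.eye(d), Sb))
    order = np.argsort(-vals.real)
    return np.real(vecs[:, order[:k]])
```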
1902.07517 | 2913869475 | Abstract Noisy labeled data represent a rich source of information that often are easily accessible and cheap to obtain, but label noise might also have many negative consequences if not accounted for. How to fully utilize noisy labels has been studied extensively within the framework of standard supervised machine learning over a period of several decades. However, very little research has been conducted on solving the challenge posed by noisy labels in non-standard settings. This includes situations where only a fraction of the samples are labeled (semi-supervised) and each high-dimensional sample is associated with multiple labels. In this work, we present a novel semi-supervised and multi-label dimensionality reduction method that effectively utilizes information from both noisy multi-labels and unlabeled data. With the proposed Noisy multi-label semi-supervised dimensionality reduction (NMLSDR) method, the noisy multi-labels are denoised and unlabeled data are labeled simultaneously via a specially designed label propagation algorithm. NMLSDR then learns a projection matrix for reducing the dimensionality by maximizing the dependence between the enlarged and denoised multi-label space and the features in the projected space. Extensive experiments on synthetic data, benchmark datasets, as well as a real-world case study, demonstrate the effectiveness of the proposed algorithm and show that it outperforms state-of-the-art multi-label feature extraction algorithms. | Canonical correlation analysis (CCA) @cite_1 is a method that maximizes the linear correlation between two sets of variables, which in the case of DR are the set of labels and the set of features derived from the projected space. CCA can be directly applied also for multi-labels without any modifications. 
Multi-label informed latent semantic indexing (MLSI) @cite_79 is a DR method that aims at both preserving the information of inputs and capturing the correlations between the labels. In the Multi-label least squares (ML-LS) method, a common subspace that is assumed to be shared among multiple labels is extracted by solving a generalized eigenvalue decomposition problem @cite_20 . | {
"cite_N": [
"@cite_79",
"@cite_1",
"@cite_20"
],
"mid": [
"2146012283",
"2100235303",
"2042759724"
],
"abstract": [
"Latent semantic indexing (LSI) is a well-known unsupervised approach for dimensionality reduction in information retrieval. However if the output information (i.e. category labels) is available, it is often beneficial to derive the indexing not only based on the inputs but also on the target values in the training data set. This is of particular importance in applications with multiple labels, in which each document can belong to several categories simultaneously. In this paper we introduce the multi-label informed latent semantic indexing (MLSI) algorithm which preserves the information of inputs and meanwhile captures the correlations between the multiple outputs. The recovered \"latent semantics\" thus incorporate the human-annotated category information and can be used to greatly improve the prediction accuracy. Empirical study based on two data sets, Reuters-21578 and RCV1, demonstrates very encouraging results.",
"We present a general method using kernel canonical correlation analysis to learn a semantic representation to web images and their associated text. The semantic space provides a common representation and enables a comparison between the text and images. In the experiments, we look at two approaches of retrieving images based on only their content from a text query. We compare orthogonalization approaches against a standard cross-representation retrieval technique known as the generalized vector space model.",
"Multi-label problems arise in various domains such as multi-topic document categorization, protein function prediction, and automatic image annotation. One natural way to deal with such problems is to construct a binary classifier for each label, resulting in a set of independent binary classification problems. Since multiple labels share the same input space, and the semantics conveyed by different labels are usually correlated, it is essential to exploit the correlation information contained in different labels. In this paper, we consider a general framework for extracting shared structures in multi-label classification. In this framework, a common subspace is assumed to be shared among multiple labels. We show that the optimal solution to the proposed formulation can be obtained by solving a generalized eigenvalue problem, though the problem is nonconvex. For high-dimensional problems, direct computation of the solution is expensive, and we develop an efficient algorithm for this case. One appealing feature of the proposed framework is that it includes several well-known algorithms as special cases, thus elucidating their intrinsic relationships. We further show that the proposed framework can be extended to the kernel-induced feature space. We have conducted extensive experiments on multi-topic web page categorization and automatic gene expression pattern image annotation tasks, and results demonstrate the effectiveness of the proposed formulation in comparison with several representative algorithms."
]
} |
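The CCA description in this row can be made concrete: with Y taken as the multi-label matrix, classical CCA reduces to an SVD of the whitened cross-covariance. A minimal regularized sketch (our own simplified implementation; the regularizer `reg` is an assumption added for numerical stability).

```python
import numpy as np

def cca(X, Y, k, reg=1e-6):
    """First k canonical directions between X (n x d) and Y (n x c).

    Classical CCA via SVD of Cxx^-1/2 Cxy Cyy^-1/2; in the multi-label
    DR setting Y is simply the label matrix, no modification needed.
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = len(X)
    Cxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n

    def inv_sqrt(C):                      # C^(-1/2) for symmetric PD C
        vals, vecs = np.linalg.eigh(C)
        return vecs @ np.diag(vals ** -0.5) @ vecs.T

    U, s, Vt = np.linalg.svd(inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy))
    A = inv_sqrt(Cxx) @ U[:, :k]          # projection directions for X
    B = inv_sqrt(Cyy) @ Vt[:k].T          # projection directions for Y
    return A, B, s[:k]                    # s[:k]: canonical correlations
```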
1902.07517 | 2913869475 | Abstract Noisy labeled data represent a rich source of information that often are easily accessible and cheap to obtain, but label noise might also have many negative consequences if not accounted for. How to fully utilize noisy labels has been studied extensively within the framework of standard supervised machine learning over a period of several decades. However, very little research has been conducted on solving the challenge posed by noisy labels in non-standard settings. This includes situations where only a fraction of the samples are labeled (semi-supervised) and each high-dimensional sample is associated with multiple labels. In this work, we present a novel semi-supervised and multi-label dimensionality reduction method that effectively utilizes information from both noisy multi-labels and unlabeled data. With the proposed Noisy multi-label semi-supervised dimensionality reduction (NMLSDR) method, the noisy multi-labels are denoised and unlabeled data are labeled simultaneously via a specially designed label propagation algorithm. NMLSDR then learns a projection matrix for reducing the dimensionality by maximizing the dependence between the enlarged and denoised multi-label space and the features in the projected space. Extensive experiments on synthetic data, benchmark datasets, as well as a real-world case study, demonstrate the effectiveness of the proposed algorithm and show that it outperforms state-of-the-art multi-label feature extraction algorithms. | In @cite_77 , a supervised DR method based on dependence maximization @cite_62 , called Multi-label dimensionality reduction via dependence maximization (MDDM), was introduced. MDDM attempts to maximize the feature-label dependence using the Hilbert-Schmidt independence criterion and was originally formulated in two different ways. MDDMp is based on orthonormal projection directions, whereas MDDMf makes the projected features orthonormal.
It was later shown that MDDMp can be formulated using least squares, and a PCA term was added to the cost function in a new method called Multi-label feature extraction via maximizing feature variance and feature-label dependence simultaneously (MVMD) @cite_25 . | {
"cite_N": [
"@cite_77",
"@cite_62",
"@cite_25"
],
"mid": [
"1972490990",
"1638081485",
"2253239179"
],
"abstract": [
"Multilabel learning deals with data associated with multiple labels simultaneously. Like other data mining and machine learning tasks, multilabel learning also suffers from the curse of dimensionality. Dimensionality reduction has been studied for many years, however, multilabel dimensionality reduction remains almost untouched. In this article, we propose a multilabel dimensionality reduction method, MDDM, with two kinds of projection strategies, attempting to project the original data into a lower-dimensional feature space maximizing the dependence between the original feature description and the associated class labels. Based on the Hilbert-Schmidt Independence Criterion, we derive a eigen-decomposition problem which enables the dimensionality reduction process to be efficient. Experiments validate the performance of MDDM.",
"We propose an independence criterion based on the eigenspectrum of covariance operators in reproducing kernel Hilbert spaces (RKHSs), consisting of an empirical estimate of the Hilbert-Schmidt norm of the cross-covariance operator (we term this a Hilbert-Schmidt Independence Criterion, or HSIC). This approach has several advantages, compared with previous kernel-based independence criteria. First, the empirical estimate is simpler than any other kernel dependence test, and requires no user-defined regularisation. Second, there is a clearly defined population quantity which the empirical estimate approaches in the large sample limit, with exponential convergence guaranteed between the two: this ensures that independence tests based on HSIC do not suffer from slow learning rates. Finally, we show in the context of independent component analysis (ICA) that the performance of HSIC is competitive with that of previously published kernel-based criteria, and of other recently published ICA methods.",
"We derive a least-squares formulation for MDDMp technique.A novel multi-label feature extraction algorithm is proposed.Our algorithm maximizes both feature variance and feature-label dependence.Experiments show that our algorithm is a competitive candidate. Dimensionality reduction is an important pre-processing procedure for multi-label classification to mitigate the possible effect of dimensionality curse, which is divided into feature extraction and selection. Principal component analysis (PCA) and multi-label dimensionality reduction via dependence maximization (MDDM) represent two mainstream feature extraction techniques for unsupervised and supervised paradigms. They produce many small and a few large positive eigenvalues respectively, which could deteriorate the classification performance due to an improper number of projection directions. It has been proved that PCA proposed primarily via maximizing feature variance is associated with a least-squares formulation. In this paper, we prove that MDDM with orthonormal projection directions also falls into the least-squares framework, which originally maximizes Hilbert-Schmidt independence criterion (HSIC). Then we propose a novel multi-label feature extraction method to integrate two least-squares formulae through a linear combination, which maximizes both feature variance and feature-label dependence simultaneously and thus results in a proper number of positive eigenvalues. Experimental results on eight data sets show that our proposed method can achieve a better performance, compared with other seven state-of-the-art multi-label feature extraction algorithms."
]
} |
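The MDDM row above maximizes feature-label dependence via the Hilbert-Schmidt independence criterion. A minimal sketch of the empirical HSIC and the MDDMp-style eigenproblem with linear kernels; this is a simplified reading of the cited method, function names are ours.

```python
import numpy as np

def hsic(K, L):
    """Empirical HSIC between two kernel matrices K and L (both n x n):
    tr(K H L H) / (n - 1)^2, with H the centering matrix."""
    n = len(K)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def mddm_p(X, Y, k):
    """MDDMp-style sketch: orthonormal directions maximizing the linear
    HSIC between projected features and labels; with linear kernels this
    reduces to the top-k eigenvectors of X^T H (Y Y^T) H X."""
    n = len(X)
    H = np.eye(n) - np.ones((n, n)) / n
    M = X.T @ H @ (Y @ Y.T) @ H @ X       # symmetric PSD d x d matrix
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, ::-1][:, :k]           # orthonormal top-k eigenvectors
```

Because M is positive semi-definite, the top-k eigenvectors achieve at least the dependence of any other orthonormal projection of the same size.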
1902.07517 | 2913869475 | Abstract Noisy labeled data represent a rich source of information that often are easily accessible and cheap to obtain, but label noise might also have many negative consequences if not accounted for. How to fully utilize noisy labels has been studied extensively within the framework of standard supervised machine learning over a period of several decades. However, very little research has been conducted on solving the challenge posed by noisy labels in non-standard settings. This includes situations where only a fraction of the samples are labeled (semi-supervised) and each high-dimensional sample is associated with multiple labels. In this work, we present a novel semi-supervised and multi-label dimensionality reduction method that effectively utilizes information from both noisy multi-labels and unlabeled data. With the proposed Noisy multi-label semi-supervised dimensionality reduction (NMLSDR) method, the noisy multi-labels are denoised and unlabeled data are labeled simultaneously via a specially designed label propagation algorithm. NMLSDR then learns a projection matrix for reducing the dimensionality by maximizing the dependence between the enlarged and denoised multi-label space and the features in the projected space. Extensive experiments on synthetic data, benchmark datasets, as well as a real-world case study, demonstrate the effectiveness of the proposed algorithm and show that it outperforms state-of-the-art multi-label feature extraction algorithms. | The existing DR methods most closely related to NMLSDR are the semi-supervised multi-label methods.
The Semi-supervised dimension reduction for multi-label classification method (SSDR-MC) @cite_17 , Coupled dimensionality reduction and classification for supervised and semi-supervised multilabel learning @cite_84 , and Semisupervised multilabel learning with joint dimensionality reduction @cite_52 are semi-supervised multi-label methods that simultaneously learn a classifier and a low dimensional embedding. | {
"cite_N": [
"@cite_52",
"@cite_84",
"@cite_17"
],
"mid": [
"2336020590",
"2096730755",
"1513235864"
],
"abstract": [
"Multilabel classification arises in various domains including computer vision and machine learning. Given a single instance, multilabel classification aims to learn a set of labels simultaneously. However, existing methods fail to address two key problems: 1) exploiting correlations among instances and 2) reducing computational complexity. In this letter, we propose a new semisupervised multilabel classification algorithm with joint dimensionality reduction. First, an elaborate matrix is designed for evaluating instance similarity; thus, it can take both labeled and unlabeled instances into consideration. Second, a linear dimensionality reduction matrix is added into the framework of multilabel classification. Besides, the dimensionality reduction matrix and the objective function can be optimized simultaneously. Finally, we design an efficient algorithm to solve the dual problem of the proposed model. Experiment results demonstrate that the proposed method is effective and promising.",
"Coupled training of dimensionality reduction and classification is proposed previously to improve the prediction performance for single-label problems. Following this line of research, in this paper, we first introduce a novel Bayesian method that combines linear dimensionality reduction with linear binary classification for supervised multilabel learning and present a deterministic variational approximation algorithm to learn the proposed probabilistic model. We then extend the proposed method to find intrinsic dimensionality of the projected subspace using automatic relevance determination and to handle semi-supervised learning using a low-density assumption. We perform supervised learning experiments on four benchmark multilabel learning data sets by comparing our method with baseline linear dimensionality reduction algorithms. These experiments show that the proposed approach achieves good performance values in terms of hamming loss, average AUC, macro F1, and micro F1 on held-out test data. The low-dimensional embeddings obtained by our method are also very useful for exploratory data analysis. We also show the effectiveness of our approach in finding intrinsic subspace dimensionality and semi-supervised learning tasks.",
"A significant challenge to make learning techniques more suitable for general purpose use in AI is to move beyond i) complete supervision, ii) low dimensional data and iii) a single label per instance. Solving this challenge would allow making predictions for high dimensional large dataset with multiple (but possibly incomplete) labelings. While other work has addressed each of these problems separately, in this paper we show how to address them together, namely the problem of semi-supervised dimension reduction for multi-labeled classification, SSDR-MC. To our knowledge this is the first paper that attempts to address all challenges together. In this work, we study a novel joint learning framework which performs optimization for dimension reduction and multi-label inference in semi-supervised setting. The experimental results validate the performance of our approach, and demonstrate the effectiveness of connecting dimension reduction and learning."
]
} |
1902.07517 | 2913869475 | Abstract Noisy labeled data represent a rich source of information that often are easily accessible and cheap to obtain, but label noise might also have many negative consequences if not accounted for. How to fully utilize noisy labels has been studied extensively within the framework of standard supervised machine learning over a period of several decades. However, very little research has been conducted on solving the challenge posed by noisy labels in non-standard settings. This includes situations where only a fraction of the samples are labeled (semi-supervised) and each high-dimensional sample is associated with multiple labels. In this work, we present a novel semi-supervised and multi-label dimensionality reduction method that effectively utilizes information from both noisy multi-labels and unlabeled data. With the proposed Noisy multi-label semi-supervised dimensionality reduction (NMLSDR) method, the noisy multi-labels are denoised and unlabeled data are labeled simultaneously via a specially designed label propagation algorithm. NMLSDR then learns a projection matrix for reducing the dimensionality by maximizing the dependence between the enlarged and denoised multi-label space and the features in the projected space. Extensive experiments on synthetic data, benchmark datasets, as well as a real-world case study, demonstrate the effectiveness of the proposed algorithm and show that it outperforms state-of-the-art multi-label feature extraction algorithms. | Other semi-supervised multi-label DR methods are semi-supervised formulations of the corresponding supervised multi-label DR methods. A semi-supervised CCA based on Laplacian regularization was introduced in @cite_106 . Several different semi-supervised formulations of MLDA have also been proposed.
Multi-label dimensionality reduction based on semi-supervised discriminant analysis (MSDA) adds two regularization terms, computed from an adjacency matrix and a similarity correlation matrix, respectively, to the MLDA objective function @cite_78 . In the Semi-supervised multi-label dimensionality reduction (SSMLDR) @cite_55 method, label propagation is performed to obtain soft labels for the unlabeled data. Thereafter, the soft labels of all data are used to compute the MLDA scatter matrices. Another extension of MLDA is Semi-supervised multi-label linear discriminant analysis (SMLDA) @cite_69 , which was later modified and renamed Semi-supervised multi-label dimensionality reduction based on dependence maximization (SMDRdm) @cite_105 . In SMDRdm the scatter matrices are computed based on only labeled data. However, an HSIC term is also added to the familiar Rayleigh quotient containing the two scatter matrices; this term is computed based on soft labels for both labeled and unlabeled data, obtained in a similar way as in SSMLDR. | {
"cite_N": [
"@cite_69",
"@cite_78",
"@cite_55",
"@cite_106",
"@cite_105"
],
"mid": [
"",
"1979899483",
"2584571478",
"2090392676",
"2762016725"
],
"abstract": [
"",
"Multi-label data with high dimensionality often occurs, which will produce large time and energy overheads when directly used in classification tasks. To solve this problem, a novel algorithm called multi-label dimensionality reduction via semi-supervised discriminant analysis (MSDA) was proposed. It was expected to derive an objective discriminant function as smooth as possible on the data manifold by multi-label learning and semi-supervised learning. By virtue of the latent information, which was provided by the graph weighted matrix of sample attributes and the similarity correlation matrix of partial sample labels, MSDA readily made the separability between different classes achieve maximization and estimated the intrinsic geometric structure in the lower manifold space by employing unlabeled data. Extensive experimental results on several real multi-label datasets show that after dimensionality reduction using MSDA, the average classification accuracy is about 9.71% higher than that of other algorithms, and several evaluation metrics like Hamming-loss are also superior to those of other dimensionality reduction methods.",
"Multi-label data with high dimensionality arise frequently in data mining and machine learning. It is not only time consuming but also computationally unreliable when we use high-dimensional data directly. Supervised dimensionality reduction approaches are based on the assumption that there are large amounts of labeled data. It is infeasible to label a large number of training samples in practice especially in multi-label learning. To address these challenges, we propose a novel algorithm, namely Semi-Supervised Multi-Label Dimensionality Reduction (SSMLDR), which can utilize the information from both labeled data and unlabeled data in an effective way. First, the proposed algorithm enlarges the multi-label information from the labeled data to the unlabeled data through a special designed label propagation method. It then learns a transformation matrix to perform dimensionality reduction by incorporating the enlarged multi-label information. Extensive experiments on a broad range of datasets validate the effectiveness of our approach against other well-established algorithms.",
"Kernel canonical correlation analysis (KCCA) is a general technique for subspace learning that incorporates principal components analysis (PCA) and Fisher linear discriminant analysis (LDA) as special cases. By finding directions that maximize correlation, KCCA learns representations that are more closely tied to the underlying process that generates the data and can ignore high-variance noise directions. However, for data where acquisition in one or more modalities is expensive or otherwise limited, KCCA may suffer from small sample effects. We propose to use semi-supervised Laplacian regularization to utilize data that are present in only one modality. This approach is able to find highly correlated directions that also lie along the data manifold, resulting in a more robust estimate of correlated subspaces. Functional magnetic resonance imaging (fMRI) acquired data are naturally amenable to subspace techniques as data are well aligned. fMRI data of the human brain are a particularly interesting candidate. In this study we implemented various supervised and semi-supervised versions of KCCA on human fMRI data, with regression to single and multi-variate labels (corresponding to video content subjects viewed during the image acquisition). In each variate condition, the semi-supervised variants of KCCA performed better than the supervised variants, including a supervised variant with Laplacian regularization. We additionally analyze the weights learned by the regression in order to infer brain regions that are important to different types of visual processing.",
"Like other machine learning paradigms, multi-label learning also suffers from the curse of dimensionality problem. Multi-label dimensionality reduction can alleviate the problem, but such methods generally ask for sufficient labeled samples. Nevertheless, we often may only have scarce labeled samples and abundant unlabeled samples. In this paper, we propose a Semi-supervised Multi-label Dimensionality Reduction based on Dependence Maximization approach (SMDRdm in short). SMDRdm assumes the semantic similarity and feature similarity of multi-label samples are inter-dependent. SMDRdm first applies label propagation on a neighborhood graph composed of labeled and unlabeled samples to obtain the soft labels of unlabeled samples, and then measures the semantic similarity between all the training samples (including unlabeled ones) based on these soft labels and the available labels of labeled samples. Next, it measures the feature similarity between samples in the subspace projected by the target projective matrix, instead of the original high-dimensional feature space. After that, it maximizes the dependence between these two types of similarities and incorporates the dependence into linear discriminant analysis to optimize the target projective matrix. Experiments on publicly accessible multi-label data sets demonstrate that SMDRdm achieves more prominent results than other related approaches across various evaluation metrics. In addition, the empirical study also shows the semantic similarity between samples derived from soft labels works better than that derived from scarce available labels."
]
} |
1902.07476 | 2934262148 | Assigning a label to each pixel in an image, namely semantic segmentation, has been an important task in computer vision, and has applications in autonomous driving, robotic navigation, localization, and scene understanding. Fully convolutional neural networks have proved to be a successful solution for the task over the years but most of the work being done focuses primarily on accuracy. In this paper, we present a computationally efficient approach to semantic segmentation, while achieving a high mean intersection over union (mIOU), (70.33%) on Cityscapes challenge. The network proposed is capable of running real-time on mobile devices. In addition, we make our code and model weights publicly available. | CNNs have shown to be the state-of-the-art method for the task of semantic segmentation over the recent years. In particular, fully convolutional neural networks (FCNNs) have demonstrated great performance on the feature generation task and support end-to-end training, and hence are widely used as encoders in semantic segmentation. Moreover, memory-friendly and computationally light designs such as @cite_9 @cite_11 @cite_5 @cite_3 have been shown to perform well in the speed-accuracy trade-off by taking advantage of approaches such as depthwise separable convolution, bottleneck design and batch normalization @cite_21 . These efficient designs are promising for use on mobile CPUs and GPUs, which motivated us to use such networks as encoders for the challenging task of semantic segmentation. | {
"cite_N": [
"@cite_9",
"@cite_21",
"@cite_3",
"@cite_5",
"@cite_11"
],
"mid": [
"2612445135",
"1836465849",
"2883780447",
"2963125010",
"2963163009"
],
"abstract": [
"We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization.",
"Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82% top-5 test error, exceeding the accuracy of human raters.",
"Currently, the neural network architecture design is mostly guided by the indirect metric of computation complexity, i.e., FLOPs. However, the direct metric, e.g., speed, also depends on other factors such as memory access cost and platform characteristics. Thus, this work proposes to evaluate the direct metric on the target platform, beyond only considering FLOPs. Based on a series of controlled experiments, this work derives several practical guidelines for efficient network design. Accordingly, a new architecture is presented, called ShuffleNet V2. Comprehensive ablation experiments verify that our model is the state-of-the-art in terms of speed and accuracy tradeoff.",
"We introduce an extremely computation-efficient CNN architecture named ShuffleNet, which is designed specially for mobile devices with very limited computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy. Experiments on ImageNet classification and MS COCO object detection demonstrate the superior performance of ShuffleNet over other structures, e.g. lower top-1 error (absolute 7.8%) than recent MobileNet [12] on ImageNet classification task, under the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet achieves 13× actual speedup over AlexNet while maintaining comparable accuracy.",
"In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3. MobileNetV2 is based on an inverted residual structure where the shortcut connections are between the thin bottleneck layers. The intermediate expansion layer uses lightweight depthwise convolutions to filter features as a source of non-linearity. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on ImageNet [1] classification, COCO object detection [2], VOC image segmentation [3]. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as actual latency, and the number of parameters."
]
} |
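The depthwise separable convolution credited in the related-work text above for the efficiency of MobileNet-style encoders factors a standard convolution into a per-channel spatial filter followed by a 1x1 pointwise mix. A rough sketch of the savings, counting multiply-adds for a stride-1 "same" layer (the layer shape 112x112 with 64 input and 128 output channels is an illustrative assumption, not a figure from any of the cited papers):

```python
def conv_flops(h, w, c_in, c_out, k):
    """Multiply-adds of a standard k x k convolution, stride 1, 'same' padding."""
    return h * w * c_in * c_out * k * k

def depthwise_separable_flops(h, w, c_in, c_out, k):
    """Depthwise k x k convolution followed by a 1 x 1 pointwise convolution."""
    depthwise = h * w * c_in * k * k       # one k x k filter per input channel
    pointwise = h * w * c_in * c_out       # 1 x 1 conv mixes channels
    return depthwise + pointwise

# Example layer: 112 x 112 feature map, 64 -> 128 channels, 3 x 3 kernel.
std = conv_flops(112, 112, 64, 128, 3)
sep = depthwise_separable_flops(112, 112, 64, 128, 3)
print(std / sep)  # ≈ 8.41x fewer multiply-adds; in general 1 / (1/c_out + 1/k^2)
```

The reduction factor 1/(1/c_out + 1/k^2) approaches k^2 (9x for 3x3 kernels) as the output channel count grows, which is why the efficient designs cited above lean so heavily on this factorization.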
1902.07476 | 2934262148 | Assigning a label to each pixel in an image, namely semantic segmentation, has been an important task in computer vision, and has applications in autonomous driving, robotic navigation, localization, and scene understanding. Fully convolutional neural networks have proved to be a successful solution for the task over the years but most of the work being done focuses primarily on accuracy. In this paper, we present a computationally efficient approach to semantic segmentation, while achieving a high mean intersection over union (mIOU), (70.33%) on Cityscapes challenge. The network proposed is capable of running real-time on mobile devices. In addition, we make our code and model weights publicly available. | FCNN models for semantic segmentation have been the top-performing approach on many benchmarks such as @cite_15 @cite_8 @cite_18 @cite_14 . However, these approaches use deep feature generators and complex reconstruction methods, making them unsuitable for mobile use, especially in autonomous driving, where resources are scarce and computation delays are undesirable @cite_10 . | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_8",
"@cite_15",
"@cite_10"
],
"mid": [
"2037227137",
"2737258237",
"2340897893",
"1861492603",
"2892220819"
],
"abstract": [
"The Pascal Visual Object Classes (VOC) challenge consists of two components: (i) a publicly available dataset of images together with ground truth annotation and standardised evaluation software; and (ii) an annual competition and workshop. There are five challenges: classification, detection, segmentation, action classification, and person layout. In this paper we provide a review of the challenge from 2008---2012. The paper is intended for two audiences: algorithm designers, researchers who want to see what the state of the art is, as measured by performance on the VOC datasets, along with the limitations and weak points of the current generation of algorithms; and, challenge designers, who want to see what we as organisers have learnt from the process and our recommendations for the organisation of future challenges. To analyse the performance of submitted algorithms on the VOC datasets we introduce a number of novel evaluation methods: a bootstrapping method for determining whether differences in the performance of two algorithms are significant or not; a normalised average precision so that performance can be compared across classes with different proportions of positive instances; a clustering method for visualising the performance across multiple algorithms so that the hard and easy images can be identified; and the use of a joint classifier over the submitted algorithms in order to measure their complementarity and combined performance. We also analyse the community's progress through time using the methods of (Proceedings of European Conference on Computer Vision, 2012) to identify the types of occurring errors. We conclude the paper with an appraisal of the aspects of the challenge that worked well, and those that could be improved in future challenges.",
"Scene parsing, or recognizing and segmenting objects and stuff in an image, is one of the key problems in computer vision. Despite the community's efforts in data collection, there are still few image datasets covering a wide range of scenes and object categories with dense and detailed annotations for scene parsing. In this paper, we introduce and analyze the ADE20K dataset, spanning diverse annotations of scenes, objects, parts of objects, and in some cases even parts of parts. A scene parsing benchmark is built upon the ADE20K with 150 object and stuff classes included. Several segmentation baseline models are evaluated on the benchmark. A novel network design called Cascade Segmentation Module is proposed to parse a scene into stuff, objects, and object parts in a cascade and improve over the baselines. We further show that the trained scene parsing networks can lead to applications such as image content removal and scene synthesis.",
"Visual understanding of complex urban street scenes is an enabling factor for a wide range of applications. Object detection has benefited enormously from large-scale datasets, especially in the context of deep learning. For semantic urban scene understanding, however, no current dataset adequately captures the complexity of real-world urban scenes. To address this, we introduce Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling. Cityscapes is comprised of a large, diverse set of stereo video sequences recorded in streets from 50 different cities. 5000 of these images have high quality pixel-level annotations, 20 000 additional images have coarse annotations to enable methods that leverage large volumes of weakly-labeled data. Crucially, our effort exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity. Our accompanying empirical study provides an in-depth analysis of the dataset characteristics, as well as a performance evaluation of several state-of-the-art approaches based on our benchmark.",
"We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.",
"Semantic segmentation is a critical module in robotics related applications, especially autonomous driving. Most of the research on semantic segmentation is focused on improving the accuracy with less attention paid to computationally efficient solutions. Majority of the efficient semantic segmentation algorithms have customized optimizations without scalability and there is no systematic way to compare them. In this paper, we present a real-time segmentation benchmarking framework and study various segmentation algorithms for autonomous driving. We implemented a generic meta-architecture via a decoupled design where different types of encoders and decoders can be plugged in independently. We provide several example encoders including VGG16, Resnet18, MobileNet, and ShuffleNet and decoders including SkipNet, UNet and Dilation Frontend. The framework is scalable for addition of new encoders and decoders developed in the community for other vision tasks. We performed detailed experimental analysis on cityscapes dataset for various combinations of encoder and decoder. The modular framework enabled rapid prototyping of a custom efficient architecture which provides x143 GFLOPs reduction compared to SegNet and runs real-time at 15 fps on NVIDIA Jetson TX2. The source code of the framework is publicly available."
]
} |
1902.07476 | 2934262148 | Assigning a label to each pixel in an image, namely semantic segmentation, has been an important task in computer vision, and has applications in autonomous driving, robotic navigation, localization, and scene understanding. Fully convolutional neural networks have proved to be a successful solution for the task over the years but most of the work being done focuses primarily on accuracy. In this paper, we present a computationally efficient approach to semantic segmentation, while achieving a high mean intersection over union (mIOU), (70.33%) on Cityscapes challenge. The network proposed is capable of running real-time on mobile devices. In addition, we make our code and model weights publicly available. | In this sense, one of the recent proposals in feature generation, ShuffleNet V2 @cite_3 , demonstrates a significant efficiency boost over the others while remaining accurate. According to @cite_3 , there are four main guidelines to follow for achieving a highly efficient network design. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2883780447"
],
"abstract": [
"Currently, the neural network architecture design is mostly guided by the indirect metric of computation complexity, i.e., FLOPs. However, the direct metric, e.g., speed, also depends on other factors such as memory access cost and platform characteristics. Thus, this work proposes to evaluate the direct metric on the target platform, beyond only considering FLOPs. Based on a series of controlled experiments, this work derives several practical guidelines for efficient network design. Accordingly, a new architecture is presented, called ShuffleNet V2. Comprehensive ablation experiments verify that our model is the state-of-the-art in terms of speed and accuracy tradeoff."
]
} |
1902.07476 | 2934262148 | Assigning a label to each pixel in an image, namely semantic segmentation, has been an important task in computer vision, and has applications in autonomous driving, robotic navigation, localization, and scene understanding. Fully convolutional neural networks have proved to be a successful solution for the task over the years but most of the work being done focuses primarily on accuracy. In this paper, we present a computationally efficient approach to semantic segmentation, while achieving a high mean intersection over union (mIOU), (70.33%) on Cityscapes challenge. The network proposed is capable of running real-time on mobile devices. In addition, we make our code and model weights publicly available. | Another important issue is the metric of performance for convolutional neural networks. The efficiency of CNNs is commonly reported in terms of the total number of floating point operations (FLOPs). It is pointed out in @cite_3 that, despite their similar number of FLOPs, networks may have different inference speeds, emphasizing that this metric alone can be misleading and may lead to poor designs. They argue that this discrepancy can be due to memory access cost (MAC), the parallelism capability of the design, and platform-dependent optimizations of specific operations such as cuDNN’s @math Conv. Furthermore, they propose using a direct metric (e.g., speed) instead of an indirect metric such as FLOPs. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2883780447"
],
"abstract": [
"Currently, the neural network architecture design is mostly guided by the indirect metric of computation complexity, i.e., FLOPs. However, the direct metric, e.g., speed, also depends on other factors such as memory access cost and platform characteristics. Thus, this work proposes to evaluate the direct metric on the target platform, beyond only considering FLOPs. Based on a series of controlled experiments, this work derives several practical guidelines for efficient network design. Accordingly, a new architecture is presented, called ShuffleNet V2. Comprehensive ablation experiments verify that our model is the state-of-the-art in terms of speed and accuracy tradeoff."
]
} |
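The FLOPs-versus-speed argument in the related-work text above can be made concrete with the memory access cost (MAC) model that ShuffleNet V2's first guideline rests on: for a 1x1 convolution, configurations with identical FLOPs can have very different MAC. A small sketch under that simplified cost model (the 56x56 feature map and the channel splits are illustrative assumptions):

```python
def conv1x1_flops(h, w, c_in, c_out):
    """Multiply-adds of a 1 x 1 convolution on an h x w feature map."""
    return h * w * c_in * c_out

def conv1x1_mac(h, w, c_in, c_out):
    """Memory access cost: read input map + write output map + read weights."""
    return h * w * (c_in + c_out) + c_in * c_out

budget = conv1x1_flops(56, 56, 128, 128)
for c_in, c_out in [(128, 128), (64, 256), (32, 512)]:
    assert conv1x1_flops(56, 56, c_in, c_out) == budget   # identical FLOPs...
    print(c_in, c_out, conv1x1_mac(56, 56, c_in, c_out))  # ...but MAC grows as channels unbalance
```

Under this model MAC is minimized when input and output channel counts are equal, which is why equal-FLOPs designs can differ in speed; actual latency further depends on parallelism and platform-specific kernels, as the passage notes.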
1902.07476 | 2934262148 | Assigning a label to each pixel in an image, namely semantic segmentation, has been an important task in computer vision, and has applications in autonomous driving, robotic navigation, localization, and scene understanding. Fully convolutional neural networks have proved to be a successful solution for the task over the years but most of the work being done focuses primarily on accuracy. In this paper, we present a computationally efficient approach to semantic segmentation, while achieving a high mean intersection over union (mIOU), (70.33%) on Cityscapes challenge. The network proposed is capable of running real-time on mobile devices. In addition, we make our code and model weights publicly available. | Atrous convolutions, or dilated convolutions, have been shown to be a powerful tool in the semantic segmentation task @cite_13 . By using atrous convolutions it is possible to use pretrained ImageNet networks such as @cite_3 @cite_11 to extract denser feature maps, replacing downscaling at the last layers with atrous rates and thus allowing us to control the dimensions of the features. Furthermore, they can be used to enlarge the field of view of the filters to embody multi-scale context. Examples of atrous convolutions at different rates are shown in Figure . | {
"cite_N": [
"@cite_13",
"@cite_3",
"@cite_11"
],
"mid": [
"2630837129",
"2883780447",
"2963163009"
],
"abstract": [
"In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter's field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed DeepLabv3 system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.",
"Currently, the neural network architecture design is mostly guided by the indirect metric of computation complexity, i.e., FLOPs. However, the direct metric, e.g., speed, also depends on other factors such as memory access cost and platform characteristics. Thus, this work proposes to evaluate the direct metric on the target platform, beyond only considering FLOPs. Based on a series of controlled experiments, this work derives several practical guidelines for efficient network design. Accordingly, a new architecture is presented, called ShuffleNet V2. Comprehensive ablation experiments verify that our model is the state-of-the-art in terms of speed and accuracy tradeoff.",
"In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3. MobileNetV2 is based on an inverted residual structure where the shortcut connections are between the thin bottleneck layers. The intermediate expansion layer uses lightweight depthwise convolutions to filter features as a source of non-linearity. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on ImageNet [1] classification, COCO object detection [2], VOC image segmentation [3]. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as actual latency, and the number of parameters."
]
} |
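As a minimal illustration of the atrous mechanism described in the related-work text above, here is a toy 1-D "valid" convolution with holes of size rate-1 between the filter taps (a sketch for intuition, not any cited implementation): the same three weights cover a receptive field of 3 at rate 1 and of 5 at rate 2, enlarging the field of view without adding parameters.

```python
import numpy as np

def dilated_conv1d(x, w, rate):
    """'Valid' 1-D atrous convolution: rate-1 holes are implicitly
    inserted between the taps of filter w."""
    k = len(w)
    span = (k - 1) * rate + 1  # effective receptive field of the dilated filter
    return np.array([
        sum(w[j] * x[i + j * rate] for j in range(k))
        for i in range(len(x) - span + 1)
    ])

x = np.arange(10, dtype=float)
w = np.array([1.0, 1.0, 1.0])
print(dilated_conv1d(x, w, rate=1))  # ordinary 3-tap sums: [3. 6. ... 24.]
print(dilated_conv1d(x, w, rate=2))  # same weights, wider field: [6. 9. ... 21.]
```

Setting rate=1 recovers the ordinary convolution, which is why pretrained backbones can swap their late-stage downscaling for larger rates without retraining the filters from scratch.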
1902.07476 | 2934262148 | Assigning a label to each pixel in an image, namely semantic segmentation, has been an important task in computer vision, and has applications in autonomous driving, robotic navigation, localization, and scene understanding. Fully convolutional neural networks have proved to be a successful solution for the task over the years but most of the work being done focuses primarily on accuracy. In this paper, we present a computationally efficient approach to semantic segmentation, while achieving a high mean intersection over union (mIOU), (70.33%) on Cityscapes challenge. The network proposed is capable of running real-time on mobile devices. In addition, we make our code and model weights publicly available. | DeepLabV3+ DPC @cite_16 achieves state-of-the-art accuracy when combined with their modified Xception @cite_6 backbone. In their work, MobileNet V2 was shown to correlate in accuracy with Xception @cite_6 while having a shorter training time, and thus it is used in the random search @cite_2 for a dense prediction cell (DPC). Our work is inspired by the accuracy they achieved with the MobileNet V2 backbone on the Cityscapes set in @cite_16 , and by their approach of combining atrous separable convolutions with spatial pyramid pooling in @cite_13 . More specifically, we use the lightweight prediction cell (denoted as basic) and the DPC that were used on the MobileNet V2 features, and apply atrous separable convolutions to the bottom layers of the feature extractor in order to keep higher-resolution features. | {
"cite_N": [
"@cite_16",
"@cite_13",
"@cite_6",
"@cite_2"
],
"mid": [
"2891778567",
"2630837129",
"2531409750",
"2097998348"
],
"abstract": [
"The design of neural network architectures is an important component for achieving state-of-the-art performance with machine learning systems across a broad array of tasks. Much work has endeavored to design and build architectures automatically through clever construction of a search space paired with simple learning algorithms. Recent progress has demonstrated that such meta-learning methods may exceed scalable human-invented architectures on image classification tasks. An open question is the degree to which such methods may generalize to new domains. In this work we explore the construction of meta-learning techniques for dense image prediction focused on the tasks of scene parsing, person-part segmentation, and semantic image segmentation. Constructing viable search spaces in this domain is challenging because of the multi-scale representation of visual information and the necessity to operate on high resolution imagery. Based on a survey of techniques in dense image prediction, we construct a recursive search space and demonstrate that even with efficient random search, we can identify architectures that outperform human-invented architectures and achieve state-of-the-art performance on three dense prediction tasks including 82.7% on Cityscapes (street scene parsing), 71.3% on PASCAL-Person-Part (person-part segmentation), and 87.9% on PASCAL VOC 2012 (semantic image segmentation). Additionally, the resulting architecture is more computationally efficient, requiring half the parameters and half the computational cost as previous state of the art systems.",
"In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter's field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed DeepLabv3 system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.",
"We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes. Since the Xception architecture has the same number of parameters as Inception V3, the performance gains are not due to increased capacity but rather to a more efficient use of model parameters.",
"Grid search and manual search are the most widely used strategies for hyper-parameter optimization. This paper shows empirically and theoretically that randomly chosen trials are more efficient for hyper-parameter optimization than trials on a grid. Empirical evidence comes from a comparison with a large previous study that used grid search and manual search to configure neural networks and deep belief networks. Compared with neural networks configured by a pure grid search, we find that random search over the same domain is able to find models that are as good or better within a small fraction of the computation time. Granting random search the same computational budget, random search finds better models by effectively searching a larger, less promising configuration space. Compared with deep belief networks configured by a thoughtful combination of manual search and grid search, purely random search over the same 32-dimensional configuration space found statistically equal performance on four of seven data sets, and superior performance on one of seven. A Gaussian process analysis of the function from hyper-parameters to validation set performance reveals that for most data sets only a few of the hyper-parameters really matter, but that different hyper-parameters are important on different data sets. This phenomenon makes grid search a poor choice for configuring algorithms for new data sets. Our analysis casts some light on why recent \"High Throughput\" methods achieve surprising success--they appear to search through a large number of hyper-parameters because most hyper-parameters do not matter much. We anticipate that growing interest in large hierarchical models will place an increasing burden on techniques for hyper-parameter optimization; this work shows that random search is a natural baseline against which to judge progress in the development of adaptive (sequential) hyper-parameter optimization algorithms."
]
} |
1902.07476 | 2934262148 | Assigning a label to each pixel in an image, namely semantic segmentation, has been an important task in computer vision, and has applications in autonomous driving, robotic navigation, localization, and scene understanding. Fully convolutional neural networks have proved to be a successful solution for the task over the years but most of the work being done focuses primarily on accuracy. In this paper, we present a computationally efficient approach to semantic segmentation, while achieving a high mean intersection over union (mIOU), (70.33%) on Cityscapes challenge. The network proposed is capable of running real-time on mobile devices. In addition, we make our code and model weights publicly available. | Semantic segmentation as a real-time task has recently gained popularity. ENet @cite_24 is an efficient and lightweight network offering a low number of FLOPs and the ability to run in real-time on an NVIDIA TX1 by taking advantage of the bottleneck module. Recently, ENet was further fine-tuned by @cite_12 , increasing the Cityscapes mean intersection over union from 58.29. Our literature review showed us that the ShuffleNet V2 architecture is yet to be used as a feature generator in the semantic segmentation task. Both @cite_10 and SHUFFLESEG @cite_22 point out the low FLOP count achievable by using ShuffleNet and show comparable accuracy and fast inference speeds. In this work, we exploit an improved ShuffleNet V2 as an encoder module, modified with atrous convolutions, in conjunction with the well-proven encoder heads of DeepLabV3 and DPC. Then, we evaluate the network on Cityscapes, a challenging task in the field of scene parsing. | {
"cite_N": [
"@cite_24",
"@cite_10",
"@cite_22",
"@cite_12"
],
"mid": [
"2419448466",
"2892220819",
"2793866399",
"2795587607"
],
"abstract": [
"The ability to perform pixel-wise semantic segmentation in real-time is of paramount importance in practical mobile applications. Recent deep neural networks aimed at this task have the disadvantage of requiring a large number of floating point operations and have long run-times that hinder their usability. In this paper, we propose a novel deep neural network architecture named ENet (efficient neural network), created specifically for tasks requiring low latency operation. ENet is up to 18x faster, requires 75x less FLOPs, has 79x less parameters, and provides similar or better accuracy to existing models. We have tested it on CamVid, Cityscapes and SUN datasets and report on comparisons with existing state-of-the-art methods, and the trade-offs between accuracy and processing time of a network. We present performance measurements of the proposed architecture on embedded systems and suggest possible software improvements that could make ENet even faster.",
"Semantic segmentation is a critical module in robotics related applications, especially autonomous driving. Most of the research on semantic segmentation is focused on improving the accuracy with less attention paid to computationally efficient solutions. Majority of the efficient semantic segmentation algorithms have customized optimizations without scalability and there is no systematic way to compare them. In this paper, we present a real-time segmentation benchmarking framework and study various segmentation algorithms for autonomous driving. We implemented a generic meta-architecture via a decoupled design where different types of encoders and decoders can be plugged in independently. We provide several example encoders including VGG16, Resnet18, MobileNet, and ShuffleNet and decoders including SkipNet, UNet and Dilation Frontend. The framework is scalable for addition of new encoders and decoders developed in the community for other vision tasks. We performed detailed experimental analysis on cityscapes dataset for various combinations of encoder and decoder. The modular framework enabled rapid prototyping of a custom efficient architecture which provides x143 GFLOPs reduction compared to SegNet and runs real-time at 15 fps on NVIDIA Jetson TX2. The source code of the framework is publicly available.",
"Real-time semantic segmentation is of significant importance for mobile and robotics related applications. We propose a computationally efficient segmentation network which we term as ShuffleSeg. The proposed architecture is based on grouped convolution and channel shuffling in its encoder for improving the performance. An ablation study of different decoding methods is compared including Skip architecture, UNet, and Dilation Frontend. Interesting insights on the speed and accuracy tradeoff are discussed. It is shown that skip architecture in the decoding method provides the best compromise for the goal of real-time performance, while it provides adequate accuracy by utilizing higher resolution feature maps for a more accurate segmentation. ShuffleSeg is evaluated on CityScapes and compared against the state of the art real-time segmentation networks. It achieves 2x GFLOPs reduction, while it provides on par mean intersection over union of 58.3% on CityScapes test set. ShuffleSeg runs at 15.7 frames per second on NVIDIA Jetson TX2, which makes it of great potential for real-time applications.",
"The Jaccard index, also referred to as the intersection-over-union score, is commonly employed in the evaluation of image segmentation results given its perceptual qualities, scale invariance - which lends appropriate relevance to small objects, and appropriate counting of false negatives, in comparison to per-pixel losses. We present a method for direct optimization of the mean intersection-over-union loss in neural networks, in the context of semantic image segmentation, based on the convex LovAisz extension of submodular losses. The loss is shown to perform better with respect to the Jaccard index measure than the traditionally used cross-entropy loss. We show quantitative and qualitative differences between optimizing the Jaccard index per image versus optimizing the Jaccard index taken over an entire dataset. We evaluate the impact of our method in a semantic segmentation pipeline and show substantially improved intersection-over-union segmentation scores on the Pascal VOC and Cityscapes datasets using state-of-the-art deep learning segmentation architectures."
]
} |
1902.07430 | 2916814360 | Multishot Magnetic Resonance Imaging (MRI) is a promising imaging modality that can produce a high-resolution image with relatively little data acquisition time. The downside of multishot MRI is that it is very sensitive to subject motion, and even small amounts of motion during the scan can produce artifacts in the final MR image that may cause misdiagnosis. Numerous efforts have been made to address this issue; however, all of these proposals are limited in terms of how much motion they can correct and the required computational time. In this paper, we propose a novel generative-network-based conjugate gradient SENSE (CG-SENSE) reconstruction framework for motion correction in multishot MRI. The proposed framework first employs CG-SENSE reconstruction to produce the motion-corrupted image, and then a generative adversarial network (GAN) is used to correct the motion artifacts. The proposed method has been rigorously evaluated on synthetically corrupted data with varying degrees of motion, numbers of shots, and encoding trajectories. Our analyses (both quantitative and qualitative visual analysis) establish that the proposed method is significantly robust, outperforms state-of-the-art motion correction techniques, and reduces computational time severalfold. | MRI is highly sensitive to subject motion during the k-space data acquisition, which can reduce image quality by inducing motion artifacts. Artifacts caused by rigid motion are widely observed in multishot MR images during clinical examination @cite_34 ; therefore, motion correction techniques are typically applied during or after the reconstruction process to obtain an artifact-free image. Retrospective motion correction (RMC) techniques are applied to correct rigid motion @cite_1 @cite_3 . They perform the k-space data acquisition without accounting for potential motion, and object motion is estimated from the acquired k-space data @cite_14 . 
Many researchers have proposed different RMC-based methods for rigid motion correction. For instance, @cite_24 studied the inconsistencies of k-space caused by subject motion using a parallel imaging (PI) technique. The inconsistent data are discarded and replaced with consistent data generated by the parallel imaging technique to compensate for the motion artifacts. This method produces an image with fewer motion artifacts, albeit with a lower signal-to-noise ratio (SNR). | {
"cite_N": [
"@cite_14",
"@cite_1",
"@cite_3",
"@cite_24",
"@cite_34"
],
"mid": [
"1812490466",
"2134886608",
"2128391413",
"1967659711",
"1559195783"
],
"abstract": [
"Subject motion during magnetic resonance imaging (MRI) has been problematic since its introduction as a clinical imaging modality. While sensitivity to particle motion or blood flow can be used to provide useful image contrast, bulk motion presents a considerable problem in the majority of clinical applications. It is one of the most frequent sources of artifacts. Over 30 years of research have produced numerous methods to mitigate or correct for motion artifacts, but no single method can be applied in all imaging situations. Instead, a “toolbox” of methods exists, where each tool is suitable for some tasks, but not for others. This article reviews the origins of motion artifacts and presents current mitigation and correction methods. In some imaging situations, the currently available motion correction tools are highly effective; in other cases, appropriate tools still need to be developed. It seems likely that this multifaceted approach will be what eventually solves the motion sensitivity problem in MRI, rather than a single solution that is effective in all situations. This review places a strong emphasis on explaining the physics behind the occurrence of such artifacts, with the aim of aiding artifact detection and mitigation in particular clinical situations. J. Magn. Reson. Imaging 2015;42:887–901.",
"Motion of an object degrades MR images, as the acquisition is time-dependent, and thus k-space is inconsistently sampled. This causes ghosts. Current motion correction methods make restrictive assumptions on the type of motions, for example, that it is a translation or rotation, and use special properties of k-space for these transformations. Such methods, however, cannot be generalized easily to nonrigid types of motions, and even rotations in multiple shots can be a problem. Here, a method is presented that can handle general nonrigid motion models. A general matrix equation gives the corrupted image from the ideal object. Thus, inversion of this system allows us to get the ideal image from the corrupted one. This inversion is possible by efficient methods mixing Fourier transforms with the conjugate gradient method. A faster but empirical inversion is discussed as well as methods to determine the motion. Simulated three-dimensional affine data and two-dimensional pulsation data and in vivo nonrigid data are used for demonstration. All examples are multishot images where the object moves between shots. The results indicate that it is now possible to correct for nonrigid types of motion that are representative of many types of patient motion, although computation times remain an issue.",
"A new method for correction of MRI motion artifacts induced by corrupted k-space data, acquired by multiple receiver coils such as phased arrays, is presented. In our approach, a projections onto convex sets (POCS)-based method for reconstruction of sensitivity encoded MRI data (POCSENSE) is employed to identify corrupted k-space samples. After the erroneous data are discarded from the dataset, the artifact-free images are restored from the remaining data using coil sensitivity profiles. The error detection and data restoration are based on informational redundancy of phased-array data and may be applied to full and reduced datasets. An important advantage of the new POCS-based method is that, in addition to multicoil data redundancy, it can use a priori known properties about the imaged object for improved MR image artifact correction. The use of such information was shown to improve significantly k-space error detection and image artifact correction. The method was validated on data corrupted by simulated and real motion such as head motion and pulsatile flow. Magn Reson Med 63:1104–1110, 2010. © 2010",
"A method has been developed using techniques from partially parallel imaging (PPI) to detect localized inconsistencies in k-space that are caused by certain types of motion. The inconsistent data are discarded and consistent data regenerated from the remaining data using PPI techniques. The price is a small decrease in signal-to-noise ratio (SNR) and additional postprocessing. An iterative scheme is presented which does not require separately acquired coil sensitivity information for the PPI reconstructions. This method has been found to reduce artifact levels in phantom and in vivo test studies. Magn Reson Med 47:677–686, 2002. © 2002 Wiley-Liss, Inc.",
"Abstract Purpose To assess the prevalence, severity, and cost estimates associated with motion artifacts identified on clinical MR examinations, with a focus on the neuroaxis. Methods A retrospective review of 1 randomly selected full calendar week of MR examinations (April 2014) was conducted for the detection of significant motion artifacts in examinations performed at a single institution on 3 different MR scanners. A base-case cost estimate was computed from recently available institutional data, and correlated with sequence time and severity of motion artifacts. Results A total of 192 completed clinical examinations were reviewed. Significant motion artifacts were identified on sequences in 7.5% of outpatient and 29.4% of inpatient and/or emergency department MR examinations. The prevalence of repeat sequences was 19.8% of total MRI examinations. The base-case cost estimate suggested that a potential cost to the hospital of $115,000 per scanner per year may be incurred owing to motion artifacts (univariate sensitivity analysis suggested a lower bound of $139,000). Conclusions Motion artifacts represent a frequent cause of MR image degradation, particularly for inpatient and emergency department patients, resulting in substantial costs to the radiology department. Greater attention and resources should be directed toward providing practical solutions to this dilemma."
]
} |
1902.07430 | 2916814360 | Multishot Magnetic Resonance Imaging (MRI) is a promising imaging modality that can produce a high-resolution image with relatively little data acquisition time. The downside of multishot MRI is that it is very sensitive to subject motion, and even small amounts of motion during the scan can produce artifacts in the final MR image that may cause misdiagnosis. Numerous efforts have been made to address this issue; however, all of these proposals are limited in terms of how much motion they can correct and the required computational time. In this paper, we propose a novel generative-network-based conjugate gradient SENSE (CG-SENSE) reconstruction framework for motion correction in multishot MRI. The proposed framework first employs CG-SENSE reconstruction to produce the motion-corrupted image, and then a generative adversarial network (GAN) is used to correct the motion artifacts. The proposed method has been rigorously evaluated on synthetically corrupted data with varying degrees of motion, numbers of shots, and encoding trajectories. Our analyses (both quantitative and qualitative visual analysis) establish that the proposed method is significantly robust, outperforms state-of-the-art motion correction techniques, and reduces computational time severalfold. | @cite_12 proposed a joint reconstruction and motion correction technique that iteratively searches for the motion trajectory. A gradient-based optimization approach is adopted to efficiently explore the search space. The same authors extended their work in @cite_16 by partitioning the image into small windows that contain local rigid motion and using their own forward model to construct an objective function that optimizes the unknown motion parameters. Similarly, @cite_20 proposed the use of a forward model to correct motion artifacts. 
However, this technique utilises the full reconstruction inverse to integrate the information from multiple coils for the estimation and correction of motion. In another study @cite_9 , the authors extended their framework to correct three-dimensional motion (i.e., in-plane and through-plane motion); through-plane motion is corrected by sampling the slices in an overlapped manner. | {
"cite_N": [
"@cite_9",
"@cite_16",
"@cite_20",
"@cite_12"
],
"mid": [
"2626006036",
"1754483689",
"2297651635",
"2160857949"
],
"abstract": [
"Purpose To introduce a methodology for the reconstruction of multi-shot, multi-slice magnetic resonance imaging able to cope with both within-plane and through-plane rigid motion and to describe its application in structural brain imaging. Theory and Methods The method alternates between motion estimation and reconstruction using a common objective function for both. Estimates of three-dimensional motion states for each shot and slice are gradually refined by improving on the fit of current reconstructions to the partial k-space information from multiple coils. Overlapped slices and super-resolution allow recovery of through-plane motion and outlier rejection discards artifacted shots. The method is applied to T2 and T1 brain scans acquired in different views. Results The procedure has greatly diminished artifacts in a database of 1883 neonatal image volumes, as assessed by image quality metrics and visual inspection. Examples showing the ability to correct for motion and robustness against damaged shots are provided. Combination of motion corrected reconstructions for different views has shown further artifact suppression and resolution recovery. Conclusion The proposed method addresses the problem of rigid motion in multi-shot multi-slice anatomical brain scans. Tests on a large collection of potentially corrupted datasets have shown a remarkable image quality improvement. Magn Reson Med, 2017. © 2017 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.",
"Purpose Physiological nonrigid motion is inevitable when imaging, e.g., abdominal viscera, and can lead to serious deterioration of the image quality. Prospective techniques for motion correction can handle only special types of nonrigid motion, as they only allow global correction. Retrospective methods developed so far need guidance from navigator sequences or external sensors. We propose a fully retrospective nonrigid motion correction scheme that only needs raw data as an input. Methods Our method is based on a forward model that describes the effects of nonrigid motion by partitioning the image into patches with locally rigid motion. Using this forward model, we construct an objective function that we can optimize with respect to both unknown motion parameters per patch and the underlying sharp image. Results We evaluate our method on both synthetic and real data in 2D and 3D. In vivo data was acquired using standard imaging sequences. The correction algorithm significantly improves the image quality. Our compute unified device architecture (CUDA)-enabled graphic processing unit implementation ensures feasible computation times. Conclusion The presented technique is the first computationally feasible retrospective method that uses the raw data of standard imaging sequences, and allows to correct for nonrigid motion without guidance from external motion sensors. Magn Reson Med 73:1457–1468, 2015. © 2014 Wiley Periodicals, Inc.",
"This paper introduces a framework for the reconstruction of magnetic resonance images in the presence of rigid motion. The rationale behind our proposal is to make use of the partial k-space information provided by multiple receiver coils in order to estimate the position of the imaged object throughout the shots that contribute to the image. The estimated motion is incorporated into the reconstruction model in an iterative manner to obtain a motion-free image. The method is parameter-free, does not assume any prior model for the image to be reconstructed, avoids blurred images due to resampling, does not make use of external sensors, and does not require modifications in the acquisition sequence. Validation is performed using synthetically corrupted data to study the limits for full motion-recovered reconstruction in terms of the amount of motion, encoding trajectories, number of shots and availability of prior information, and to compare with the state of the art. Quantitative and visual results of its application to a highly challenging volumetric brain imaging cohort of @math neonates are also presented, showing the ability of the proposed reconstruction to generally improve the quality of reconstructed images, as evaluated by both sparsity and gradient entropy based metrics.",
"Purpose Subject motion can severely degrade MR images. A retrospective motion correction algorithm, Gradient-based motion correction, which significantly reduces ghosting and blurring artifacts due to subject motion was proposed. The technique uses the raw data of standard imaging sequences; no sequence modifications or additional equipment such as tracking devices are required. Rigid motion is assumed. Methods The approach iteratively searches for the motion trajectory yielding the sharpest image as measured by the entropy of spatial gradients. The vast space of motion parameters is efficiently explored by gradient-based optimization with a convergence guarantee. Results The method has been evaluated on both synthetic and real data in two and three dimentions using standard imaging techniques. MR images are consistently improved over different kinds of motion trajectories. Using a graphics processing unit implementation, computation times are in the order of a few minutes for a full three-dimentional volume. Conclusion The presented technique can be an alternative or a complement to prospective motion correction methods and is able to improve images with strong motion artifacts from standard imaging sequences without requiring additional data. Magn Reson Med 70:1608–1618, 2013. © 2013 Wiley Periodicals, Inc."
]
} |
1902.07403 | 2256543725 | The objective of the system presented in this paper is to give users tactile feedback while walking in a virtual world through an anthropomorphic finger motion interface. We determined that the synchrony between the first-person perspective and proprioceptive information, together with the motor activity of the user's fingers, is able to induce an illusory feeling equivalent to the sense of ownership of the invisible avatar's legs. Under this condition, the perception of the ground under the virtual avatar's foot is felt through the user's fingertip. The experiments indicated that, using our method, the scale of tactile perception of texture roughness was extended, and that the enlargement ratio was proportional to the avatar's body (foot) size. In order to display the target tactile perception to the users, we have to control only the virtual avatar's body (foot) size and the roughness of the tactile texture. Our results suggest that, in terms of tactile perception, fingers can be a replacement for legs in locomotion interfaces. | Multiple locomotion interfaces have been proposed, such as gamepads and treadmills. However, no natural, general-purpose locomotion interface exists. What is most needed for locomotion in virtual environments (VEs)? Body-based information about the translational and rotational components of movement helps users to perform a navigational search task @cite_1 . @cite_27 examined the body-based cues resulting from active movements that facilitate the acquisition of spatial knowledge. Full-body locomotion is possible on walking simulators. For example, the Omnidirectional Treadmill @cite_3 and the Torus Treadmill @cite_12 enable users to move virtually in any direction while their position in the real world remains fixed. An advantage of this approach, which supports full-body motion, is that it facilitates tactile or force feedback directly to the user's soles. 
The haptic experience that corresponds to the rendering of floor attributes and ground properties is key to natural locomotion. However, devices supporting full-body motion tend to be large and complicated. | {
"cite_N": [
"@cite_27",
"@cite_1",
"@cite_12",
"@cite_3"
],
"mid": [
"1973595287",
"2133882256",
"2101895037",
"2158085556"
],
"abstract": [
"Previous research has shown that inertial cues resulting from passive transport through a large environment do not necessarily facilitate acquiring knowledge about its layout. Here we examine whether the additional body-based cues that result from active movement facilitate the acquisition of spatial knowledge. Three groups of participants learned locations along an 840-m route. One group walked the route during learning, allowing access to body-based cues (i.e., vestibular, proprioceptive, and efferent information). Another group learned by sitting in the laboratory, watching videos made from the first group. A third group watched a specially made video that minimized potentially confusing head-on-trunk rotations of the viewpoint. All groups were tested on their knowledge of directions in the environment as well as on its configural properties. Having access to body-based information reduced pointing error by a small but significant amount. Regardless of the sensory information available during learning, participants exhibited strikingly common biases.",
"Navigation is the most common interactive task performed in three-dimensional virtual environments (VEs), but it is also a task that users often find difficult. We investigated how body-based information about the translational and rotational components of movement helped participants to perform a navigational search task (finding targets hidden inside boxes in a room-sized space). When participants physically walked around the VE while viewing it on a head-mounted display (HMD), they then performed 90% of trials perfectly, comparable to participants who had performed an equivalent task in the real world during a previous study. By contrast, participants performed less than 50% of trials perfectly if they used a tethered HMD (move by physically turning but pressing a button to translate) or a desktop display (no body-based information). This is the most complex navigational task in which a real-world level of performance has been achieved in a VE. Behavioral data indicates that both translational and rotational body-based information are required to accurately update one's position during navigation, and participants who walked tended to avoid obstacles, even though collision detection was not implemented and feedback not provided. A walking interface would bring immediate benefits to a number of VE applications.",
"This paper describes experiments regarding navigation performance using a new locomotion interface for walking through virtual space. Although traveling on foot is the most intuitive style of locomotion, proprioceptive feedback from walking is not provided in most applications of virtual environments. We developed an infinite surface driven by actuators for enabling a sense of walking. Torus-shaped surfaces are selected to realize the locomotion interface. The device employs twelve sets of treadmills, connected side by side and driven in perpendicular directions. The virtual infinite surface is generated by the motion of the treadmills. A walker can go in any direction while his her position is fixed in the real world. The device is called a Torus Treadmill. Navigation performance was measured by path-reproduction tests. Subjects were immersed in a virtual grass-covered plain on which a cone-shaped target object was placed. The subjects first traveled to the target object. After they reached it, the target object disappeared and the rehomed subjects were asked to return to the place where the target object was placed. We also set two target objects, and the subject traveled along a bent path. We compared two locomotion modes: walking on the Torus Treadmill and moving purely by joystick operation. The results of the bent-path experiment showed that the accuracy of the path reproduction in the Torus Treadmill mode is better than that of joystick mode.",
"The Omni-Directional Treadmill (ODT) is a revolutionary device for locomotion in large-scale virtual environments. The device allows its user to walk or jog in any direction of travel. It is the third generation in a series of devices built for this purpose for the U.S. Army’s Dismounted Infantry Training Program. We first describe the device in terms of its construction and operating characteristics. We then report on an analysis consisting of a series of locomotion and maneuvering tasks on the ODT. We observed user motions and system responses to those motions from the perspective of the user. Each task is described in terms of what causes certain motions to trigger unpredictable responses causing loss of balance or at least causing the user to become consciously aware of their movements. We conclude that the two primary shortcomings in the ODT are its tracking system and machine control mechanism for centering the user on the treads."
]
} |
1902.07403 | 2256543725 | The objective of the system presented in this paper is to give users tactile feedback while walking in a virtual world through an anthropomorphic finger motion interface. We determined that the synchrony between the first-person perspective and proprioceptive information, together with the motor activity of the user's fingers, is able to induce an illusory feeling equivalent to the sense of ownership of the invisible avatar's legs. Under this condition, the perception of the ground under the virtual avatar's foot is felt through the user's fingertip. The experiments indicated that, using our method, the scale of tactile perception of texture roughness was extended, and that the enlargement ratio was proportional to the avatar's body (foot) size. In order to display the target tactile perception to the users, we have to control only the virtual avatar's body (foot) size and the roughness of the tactile texture. Our results suggest that, in terms of tactile perception, fingers can be a replacement for legs in locomotion interfaces. | Instead of simulating full-body locomotion, several interaction techniques using a full-body metaphor have been presented. The Walking-in-Place technique @cite_2 avoids relocating the user in the real world: the user walks by performing walk-like gestures instead of actually walking. The Step WIM technique @cite_21 allows the user to interact with VEs through a hand-held miniature copy of the scene. Other techniques that transform the VE or the user's motion by rotation or scaling include redirected walking @cite_6 , scaled translational gain @cite_17 @cite_9 , seven-league boots @cite_10 , and motion compression @cite_24 . Generally, these metaphor techniques lack kinesthetic feedback. | {
"cite_N": [
"@cite_9",
"@cite_21",
"@cite_6",
"@cite_24",
"@cite_2",
"@cite_10",
"@cite_17"
],
"mid": [
"2617723522",
"2093956463",
"2129149254",
"2041186355",
"2031144777",
"2111432142",
"2107822562"
],
"abstract": [
"",
"This paper presents a set of interaction techniques for hands-free multi-scale navigation through virtual environments. We believe that hands-free navigation, unlike the majority of navigation techniques based on hand motions, has the greatest potential for maximizing the interactivity of virtual environments since navigation modes are offloaded from modal hand gestures to more direct motions of the feet and torso. Not only are the users’ hands freed to perform tasks such as modeling, notetaking and object manipulation, but we also believe that foot and torso movements may inherently be more natural for some navigation tasks. The particular interactions that we developed include a leaning technique for moving small and medium distances, a foot-gesture controlled Step WIM that acts as a floor map for moving larger distances, and a viewing technique that enables a user to view a full 360 degrees in only a three-walled semi-immersive environment by subtly amplifying the mapping between their torso rotation and the virtual world. We formatively designed and evaluated our techniques in existing projects related to archaeological reconstructions, free-form modeling, and interior design. In each case, our informal observations have indicated that motions such as walking and leaning are both appropriate for navigation and are effective in cognitively simplifying complex virtual environment interactions since functionality is more evenly distributed across the body.",
"This paper describes a method for allowing people to virtually move around a CAVE™ without ever having to turn to face the missing back wall. We describe the method, and report a pilot study of 28 participants, half of whom moved through the virtual world using a hand-held controller, and the other half used the new technique called 'Redirected Walking in Place' (RWP). The results show that the current instantiation of the RWP technique does not result in a lower frequency of looking towards the missing wall. However, the results also show that the sense of presence in the virtual environment is significantly and negatively correlated with the amount that the back wall is seen. There is evidence that RWP does reduce the chance of seeing the blank wall for some participants. The increased sense of presence through never having to face the blank wall, and the results of this pilot study show the RWP has promise and merits further development.",
"Telepresent walking allows visits to remote places such as museums, exhibitions, architecture, or industrial sites with a high degree of realism. While walking freely around in the user environment, the user sees the remote environment through the \"eyes\" of a remote mobile teleoperator. For that purpose, the user's motion is tracked and transferred to the teleoperator. Without additional processing of the motion data, the size of the remote environment to be explored is limited to the size of the user environment. This paper proposes an extension of telepresent walking to arbitrarily large remote or virtual spaces based on compressing wide-area motion into the available user space. Motion compression is a novel approach and does not make use of scaling or walking-in-place metaphors. Rather, motion compression introduces some deviation of curvature between user motion and teleoperator motion. An optimization approach is used to find the user path of minimum curvature deviation with respect to a given predicted teleoperator path that fits inside the boundaries of the user environment. Turning angles and travel distances are mapped with a 1:1 ratio to provide the desired impression of realistic self-locomotion in the teleoperator's environment. The effects of the curvature deviation on inconsistent perception of locomotion are studied in two experiments.",
"This article presents an interactive technique for moving through an immersive virtual environment (or “virtual reality”). The technique is suitable for applications where locomotion is restricted to ground level. The technique is derived from the idea that presence in virtual environments may be enhanced the stronger the match between proprioceptive information from human body movements and sensory feedback from the computer-generated displays. The technique is an attempt to simulate body movements associated with walking. The participant “walks in place” to move through the virtual environment across distances greater than the physical limitations imposed by the electromagnetic tracking devices. A neural network is used to analyze the stream of coordinates from the head-mounted display, to determine whether or not the participant is walking on the spot. Whenever it determines the walking behavior, the participant is moved through virtual space in the direction of his or her gaze. We discuss two experimental studies to assess the impact on presence of this method in comparison to the usual hand-pointing method of navigation in virtual reality. The studies suggest that subjective rating of presence is enhanced by the walking method provided that participants associate subjectively with the virtual body provided in the environment. An application of the technique to climbing steps and ladders is also presented.",
"When an immersive virtual environment represents a space that is larger than the available space within which a user can travel by directly walking, it becomes necessary to consider alternative methods for traveling through that space. The traditional solution is to require the user to travel 'indirectly', using a device that changes his viewpoint in the environment without actually requiring him to move - for example, a joystick. However, other solutions involving variations on direct walking are also possible. In this paper, we present a new metaphor for natural, augmented direct locomotion through moderately large-scale immersive virtual environments (IVEs) presented via head mounted display systems, which we call seven league boots. The key characteristic of this method is that it involves determining a user's intended direction of travel and then augmenting only the component of his or her motion that is aligned with that direction. After reviewing previously proposed methods for enabling intuitive locomotion through large IVEs, we begin by describing the technical implementation details of our novel method, discussing the various alternative options that we explored and parameters that we varied in an attempt to attain optimal performance. We then present the results of a pilot observer experiment that we conducted in an attempt to obtain objective, qualitative insight into the relative strengths and weaknesses of our new method, in comparison to the three most commonly used alternative locomotion methods: flying, via use of a wand; normal walking, with a uniform gain applied to the output of the tracker; and normal walking without gain, but with the location and orientation of the larger virtual environment periodically adjusted relative to position of the participant in the real environment. 
In this study we found, among other things, that for travel down a long, straight virtual hallway, participants overwhelmingly preferred the seven league boots method to the other methods, overall",
"Navigating through large virtual environments using a head-mounted display (HMD) is difficult due to the spatial limitations of the tracking system. We conducted two experiments to examine methods of exploring large virtual spaces with an HMD under translation conditions different than normal walking. Experiment 1 compares locomotion in the virtual environment using two different motor actions to translate the subject. The study contrasts user learning and orientation of two different translational gains of bipedal locomotion (not scaled and scaled by ten) with joystick locomotion, where rotation in both locomotion interfaces is accomplished by physically turning. Experiment 2 looks further at the effects of increasing the translational gain of bipedal locomotion in a virtual environment. A subject's spatial learning and orientation were evaluated in three gain conditions where each physical step was: not scaled, scaled by two, or scaled by ten (1:1, 2:1, 10:1, respectively). A sub-study of this experiment compared the performance of people who played video games against people who did not."
]
} |
1902.07146 | 2973958128 | We consider the full shift @math where @math , @math being a finite alphabet. For a class of potentials which contains in particular potentials @math with variation decreasing like @math for some @math , we prove that their corresponding equilibrium state @math satisfies a Gaussian concentration bound. Namely, we prove that there exists a constant @math such that, for all @math and for all separately Lipschitz functions @math , the exponential moment of @math is bounded by @math . The crucial point is that @math is independent of @math and @math . We then derive various consequences of this inequality. For instance, we obtain bounds on the fluctuations of the empirical frequency of blocks, the speed of convergence of the empirical measure, and speed of Markov approximation of @math . We also derive an almost-sure central limit theorem. | When @math is Lipschitz, Theorem was proved in @cite_7 . The main goal of @cite_7 was then to deal with nonuniformly hyperbolic systems modeled by a Young tower. For maps of the unit interval, one can have for instance maps with an indifferent fixed point. When the tower has exponential tails, the authors of @cite_7 proved a Gaussian concentration bound. When the tower has polynomial tails, they proved moment concentration bounds. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2126252439"
],
"abstract": [
"For dynamical systems modeled by a Young tower with exponential tails, we prove an exponential concentration inequality for all separately Lipschitz observables of n variables. When tails are polynomial, we prove polynomial concentration inequalities. Those inequalities are optimal. We give some applications of such inequalities to specific systems and specific observables."
]
} |
1902.07146 | 2973958128 | We consider the full shift @math where @math , @math being a finite alphabet. For a class of potentials which contains in particular potentials @math with variation decreasing like @math for some @math , we prove that their corresponding equilibrium state @math satisfies a Gaussian concentration bound. Namely, we prove that there exists a constant @math such that, for all @math and for all separately Lipschitz functions @math , the exponential moment of @math is bounded by @math . The crucial point is that @math is independent of @math and @math . We then derive various consequences of this inequality. For instance, we obtain bounds on the fluctuations of the empirical frequency of blocks, the speed of convergence of the empirical measure, and speed of Markov approximation of @math . We also derive an almost-sure central limit theorem. | The novelty here is to prove the Gaussian concentration bound for potentials with a variation decaying subexponentially. Let us now briefly explain what this means geometrically. It is well-known that a uniformly expanding map @math of the unit interval with a finite Markov partition which is piecewise @math , for some @math , can be coded by a subshift of finite type @math over a finite alphabet. Then, @math induces a potential @math on @math which is Lipschitz. The pullback of @math is then the unique absolutely continuous invariant probability measure for @math . In @cite_8 , the authors showed that, given @math which is not Lipschitz, one can construct a uniformly expanding map of the unit interval with a finite Markov partition which is piecewise @math , but not piecewise @math for any @math , and such that the pullback of @math is the Lebesgue measure. | {
"cite_N": [
"@cite_8"
],
"mid": [
"1994286529"
],
"abstract": [
"We show how to construct a topological Markov map of the interval whose invariant probability measure is the stationary law of a given stochastic chain of infinite order. In particular we characterize the maps corresponding to stochastic chains with memory of variable length. The problem treated here is the converse of the classical construction of the Gibbs formalism for Markov expanding maps of the interval."
]
} |
1902.07146 | 2973958128 | We consider the full shift @math where @math , @math being a finite alphabet. For a class of potentials which contains in particular potentials @math with variation decreasing like @math for some @math , we prove that their corresponding equilibrium state @math satisfies a Gaussian concentration bound. Namely, we prove that there exists a constant @math such that, for all @math and for all separately Lipschitz functions @math , the exponential moment of @math is bounded by @math . The crucial point is that @math is independent of @math and @math . We then derive various consequences of this inequality. For instance, we obtain bounds on the fluctuations of the empirical frequency of blocks, the speed of convergence of the empirical measure, and speed of Markov approximation of @math . We also derive an almost-sure central limit theorem. | Let us also mention the paper @cite_2 in which the authors prove a Gaussian concentration bound for a @math which is attractive and of summable variation (whereas we need a bit more than summable). Their proof is based on coupling. However, they consider functions @math on @math , not on @math as in this paper. For such functions, the analogue of @math is @math . It is clear that a Gaussian concentration bound for functions @math implies a Gaussian concentration bound for functions @math , but the converse is not true. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2116579833"
],
"abstract": [
"We prove that uniqueness of the stationary chain, or equivalently, of the @math -measure, compatible with an attractive regular probability kernel is equivalent to either one of the following two assertions for this chain: (1) it is a finitary coding of an i.i.d. process with countable alphabet, (2) the concentration of measure holds at exponential rate. We show in particular that if a stationary chain is uniquely defined by a kernel that is continuous and attractive, then this chain can be sampled using a coupling-from-the-past algorithm. For the original Bramson-Kalikow model we further prove that there exists a unique compatible chain if and only if the chain is a finitary coding of a finite alphabet i.i.d. process. Finally, we obtain some partial results on conditions for phase transition for general chains of infinite order."
]
} |
1902.07110 | 2952432259 | While reinforcement learning can effectively improve language generation models, it often suffers from generating incoherent and repetitive phrases paulus2017deep . In this paper, we propose a novel repetition normalized adversarial reward to mitigate these problems. Our repetition penalized reward can greatly reduce the repetition rate and adversarial training mitigates generating incoherent phrases. Our model significantly outperforms the baseline model on ROUGE-1 (+3.24), ROUGE-L (+2.25), and a decreased repetition-rate (-4.98). | Deep learning methods were first applied to two sentence-level abstractive summarization tasks, on the DUC-2004 and Gigaword datasets @cite_13 , with an encoder-decoder model. This model is further extended by hierarchical networks @cite_16 , variational autoencoders @cite_10 , a coarse-to-fine approach @cite_22 and minimum risk training @cite_5 . As long summaries become more important, the CNN/Daily Mail dataset was released in @cite_16 . A pointer-generator with coverage loss @cite_12 is proposed to approach the task by enabling the model to copy unknown words from the article and penalizing repetition with a coverage mechanism. @cite_14 proposes deep communicating agents for representing a long document for abstractive summarization. There are more papers focusing on extractive summarization @cite_21 @cite_0 . Memory Networks @cite_6 @cite_1 , which can incorporate external knowledge, might also be included in summarization models. | {
"cite_N": [
"@cite_13",
"@cite_14",
"@cite_22",
"@cite_21",
"@cite_1",
"@cite_6",
"@cite_0",
"@cite_5",
"@cite_16",
"@cite_10",
"@cite_12"
],
"mid": [
"1843891098",
"2794945088",
"2741438349",
"2952138241",
"",
"",
"",
"2735492478",
"",
"2951652470",
"2606974598"
],
"abstract": [
"Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.",
"We present deep communicating agents in an encoder-decoder architecture to address the challenges of representing a long document for abstractive summarization. With deep communicating agents, the task of encoding a long text is divided across multiple collaborating agents, each in charge of a subsection of the input text. These encoders are connected to a single decoder, trained end-to-end using reinforcement learning to generate a focused and coherent summary. Empirical results demonstrate that multiple communicating encoders lead to a higher quality summary compared to several strong baselines, including those based on a single encoder or multiple non-communicating encoders.",
"",
"We present SummaRuNNer, a Recurrent Neural Network (RNN) based sequence model for extractive summarization of documents and show that it achieves performance better than or comparable to state-of-the-art. Our model has the additional advantage of being very interpretable, since it allows visualization of its predictions broken up by abstract features such as information content, salience and novelty. Another novel contribution of our work is abstractive training of our extractive model that can train on human generated reference summaries alone, eliminating the need for sentence-level extractive labels.",
"",
"",
"",
"Recently, neural models have been proposed for headline generation by learning to map documents to headlines with recurrent neural network. In this work, we give a detailed introduction and comparison of existing work and recent improvements in neural headline generation, with particular attention on how encoders, decoders and neural model training strategies alter the overall performance of the headline generation system. Furthermore, we perform quantitative analysis of most existing neural headline generation systems and summarize several key factors that impact the performance of headline generation systems. Meanwhile, we carry on detailed error analysis to typical neural headline generation systems in order to gain more comprehension. Our results and conclusions are hoped to benefit future research studies.",
"",
"In this work we explore deep generative models of text in which the latent representation of a document is itself drawn from a discrete language model distribution. We formulate a variational auto-encoder for inference in this model and apply it to the task of compressing sentences. In this application the generative model first draws a latent summary sentence from a background language model, and then subsequently draws the observed sentence conditioned on this latent summary. In our empirical evaluation we show that generative formulations of both abstractive and extractive compression yield state-of-the-art results when trained on a large amount of supervised data. Further, we explore semi-supervised compression scenarios where we show that it is possible to achieve performance competitive with previously proposed supervised models while training on a fraction of the supervised data.",
""
]
} |
1902.07071 | 2912192808 | Playing back vibrotactile signals through actuators is commonly used to simulate tactile feelings of virtual textured surfaces. However, there is often a small mismatch between the simulated tactile feelings and intended tactile feelings by tactile designers. Thus, a method of modulating the vibrotactile perception is required. We focus on fine roughness perception and we propose a method using a pseudo-haptic effect to modulate fine roughness perception of vibrotactile texture. Specifically, we visually modify the pointer's position on the screen slightly, which indicates the touch position on textured surfaces. We hypothesized that if users receive vibrational feedback watching the pointer visually oscillating back forth and left right, users would believe the vibrotactile surfaces more uneven. We also hypothesized that as the size of visual oscillation is getting larger, the amount of modification of roughness perception of vibrotactile surfaces would be larger. We conducted user studies to test the hypotheses. Results of first user study suggested that users felt vibrotactile texture with our method rougher than they did without our method at a high probability. Results of second user study suggested that users felt different roughness for vibrational texture in response to the size of visual oscillation. These results confirmed our hypotheses and they suggested that our method was effective. Also, the same effect could potentially be applied to the visual movement of virtual hands or fingertips when users are interacting with virtual surfaces using their hands. | Previous studies have applied additional vibrotactile signals to users and modulated fine roughness perception. @cite_15 presented users with two surfaces, one stationary and one vibrating. They found that users tended to perceive the vibrating surface as rougher than the stationary one. 
@cite_9 proposed a method of selectively modifying the roughness sensations of real materials by applying additional vibrotactile stimuli to users' finger pads. They verified through user studies that their method successfully modulated roughness perception. | {
"cite_N": [
"@cite_9",
"@cite_15"
],
"mid": [
"2070813945",
"1974642054"
],
"abstract": [
"In this study, we developed vibrotactile display methods that can assist designers in product design. In order to achieve realistic sensations required for such designing purposes, we used real materials such as cloth, paper, wood, and leather and applied vibrotactile stimuli to modify the roughness sensations of these materials. This approach allowed us to present textures of various virtual materials with a strong sense of reality. We verified that our proposed methods could selectively modify the fine and macro-roughness sensations of real materials. The methods are expected to aid product designers in deciding tactile sensations suitable for their products.",
"According to the duplex theory of tactile texture perception, detection of cutaneous vibrations produced when the exploring finger moves across a surface contributes importantly to the perception of fine textures. If this is true, a vibrating surface should feel different from a stationary one. To test this prediction, experiments were conducted in which subjects examined two identical surfaces, one of which was surreptitiously made to vibrate, and judged which of the two was smoother. In experiment 1, the vibrating surface was less and less often judged smoother as the amplitude of (150 Hz) vibration increased. The effect was comparable in subjects who realized the surface was vibrating and those who did not. Experiment 2 showed that different frequencies (150–400 Hz) were equally effective in eliciting the effect when equated in sensation level (dB SL). The results suggest that vibrotaction contributes to texture perception, and that, at least within the Pacinian channel, it does so by means of an inten..."
]
} |
1902.07071 | 2912192808 | Playing back vibrotactile signals through actuators is commonly used to simulate tactile feelings of virtual textured surfaces. However, there is often a small mismatch between the simulated tactile feelings and intended tactile feelings by tactile designers. Thus, a method of modulating the vibrotactile perception is required. We focus on fine roughness perception and we propose a method using a pseudo-haptic effect to modulate fine roughness perception of vibrotactile texture. Specifically, we visually modify the pointer's position on the screen slightly, which indicates the touch position on textured surfaces. We hypothesized that if users receive vibrational feedback watching the pointer visually oscillating back forth and left right, users would believe the vibrotactile surfaces more uneven. We also hypothesized that as the size of visual oscillation is getting larger, the amount of modification of roughness perception of vibrotactile surfaces would be larger. We conducted user studies to test the hypotheses. Results of first user study suggested that users felt vibrotactile texture with our method rougher than they did without our method at a high probability. Results of second user study suggested that users felt different roughness for vibrational texture in response to the size of visual oscillation. These results confirmed our hypotheses and they suggested that our method was effective. Also, the same effect could potentially be applied to the visual movement of virtual hands or fingertips when users are interacting with virtual surfaces using their hands. | Cross-modal visuo-haptic interaction has been studied for a long time. Studies on it are based on the key idea that when visual and other senses conflict, vision often dominates in multisensory integration, so sensory input can be distorted in favor of vision. For example, in their classic experiment, Rock and Victor @cite_10 asked users to look at and touch an object. 
They created a conflict between vision and touch by distorting the visually perceived shape from the actual shape perceived by touch. As a result, users reported that the object felt the way it looked, suggesting that the conflict between vision and touch was completely resolved in favor of vision, and users were unaware of the conflict. | {
"cite_N": [
"@cite_10"
],
"mid": [
"1967342442"
],
"abstract": [
"Observers were presented with an object whose visual shape, because of optical distortion, differed considerably from its tactual shape. After simultaneously grasping and viewing the object, the observers were required to indicate their impression of it by drawing it or by matching another object to it. The results reveal that vision is strongly dominant, often without the observer's being aware of a conflict."
]
} |
1902.06797 | 2949977994 | Time-aligned lyrics can enrich the music listening experience by enabling karaoke, text-based song retrieval and intra-song navigation, and other applications. Compared to text-to-speech alignment, lyrics alignment remains highly challenging, despite many attempts to combine numerous sub-modules including vocal separation and detection in an effort to break down the problem. Furthermore, training required fine-grained annotations to be available in some form. Here, we present a novel system based on a modified Wave-U-Net architecture, which predicts character probabilities directly from raw audio using learnt multi-scale representations of the various signal components. There are no sub-modules whose interdependencies need to be optimized. Our training procedure is designed to work with weak, line-level annotations available in the real world. With a mean alignment error of 0.35s on a standard dataset our system outperforms the state-of-the-art by an order of magnitude. | Several approaches make additional assumptions to further simplify the problem. For example, the method presented in @cite_18 assumes that chord labels are attached to the lyrics and exploits them during the alignment process. Other approaches assume that the lyrics are pre-aligned at a line or phrase level so that the method only needs to refine the alignment within these sections @cite_25 @cite_11 @cite_10 . Since music often contains repeated segments, some methods additionally analyze and compare the musical structure in a recording and in corresponding lyrics @cite_3 @cite_27 . | {
"cite_N": [
"@cite_18",
"@cite_3",
"@cite_27",
"@cite_10",
"@cite_25",
"@cite_11"
],
"mid": [
"",
"",
"2052675777",
"2707788252",
"2057745663",
"2482558056"
],
"abstract": [
"",
"",
"Transcribing lyrics from musical audio is a challenging research problem which has not benefited from many advances made in the related field of automatic speech recognition, owing to the prevalent musical accompaniment and differences between the spoken and sung voice. However, one aspect of this problem which has yet to be exploited by researchers is that significant portions of the lyrics will be repeated throughout the song. In this paper we investigate how this information can be leveraged to form a consensus transcription with improved consistency and accuracy. Our results show that improvements can be gained using a variety of techniques, and that relative gains are largest under the most challenging and realistic experimental conditions.",
"The massive amount of digital music data available necessitates automated methods for processing, classifying and organizing large volumes of songs. As music discovery and interactive music applications become commonplace, the ability to synchronize lyric text information with an audio recording has gained interest. This paper presents an approach for lyric-audio alignment by comparing synthesized speech with a vocal track removed from an instrument mixture using source separation. We take a hierarchical approach to solve the problem, assuming a set of paragraph-music segment pairs is given and focus on within-segment lyric alignment at the word level. A synthesized speech signal is generated to reflect the properties of the music signal by controlling the speech rate and gender. Dynamic time warping finds the shortest path between the synthesized speech and separated vocal. The resulting path is used to calculate the timestamps of words in the original signal. The system results in approximately half a second of misalignment error on average. Finally, we discuss the challenges and suggest improvements to the method.",
"The paper considers the task of recognizing phonemes and words from a singing input by using a phonetic hidden Markov model recognizer. The system is targeted to both monophonic singing and singing in polyphonic music. A vocal separation algorithm is applied to separate the singing from polyphonic music. Due to the lack of annotated singing databases, the recognizer is trained using speech and linearly adapted to singing. Global adaptation to singing is found to improve singing recognition performance. Further improvement is obtained by gender-specific adaptation. We also study adaptation with multiple base classes defined by either phonetic or acoustic similarity. We test phoneme-level and word-level n-gram language models. The phoneme language models are trained on the speech database text. The large-vocabulary word-level language model is trained on a database of textual lyrics. Two applications are presented. The recognizer is used to align textual lyrics to vocals in polyphonic music, obtaining an average error of 0.94 seconds for line-level alignment. A query-by-singing retrieval application based on the recognized words is also constructed; in 57% of the cases, the first retrieved song is the correct one.",
"This study addresses the task of aligning lyrics with accompanied singing recordings. With a vowel-only representation of lyric syllables, our approach evaluates likelihood scores of vowel types with glottal pulse shapes and formant frequencies extracted from a small set of singing examples. The proposed vowel likelihood model is used in conjunction with a prior model of frame-wise syllable sequence in determining an optimal evolution of syllabic position. In lyrics alignment experiments, we optimized numerical parameters on two independent development sets and then tested the optimized system on two other datasets. New objective performance measures are introduced in the evaluation to provide further insight into the quality of alignment. Use of glottal pulse shapes and formant frequencies is shown by a controlled experiment to account for a 0.07 difference in average normalized alignment error. Another controlled experiment demonstrates that, with a difference of 0.03, F0-invariant glottal pulse shape gives a lower average normalized alignment error than does F0-invariant spectrum envelope, the latter being assumed by MFCC-based timbre models."
]
} |
1902.06797 | 2949977994 | Time-aligned lyrics can enrich the music listening experience by enabling karaoke, text-based song retrieval and intra-song navigation, and other applications. Compared to text-to-speech alignment, lyrics alignment remains highly challenging, despite many attempts to combine numerous sub-modules including vocal separation and detection in an effort to break down the problem. Furthermore, training required fine-grained annotations to be available in some form. Here, we present a novel system based on a modified Wave-U-Net architecture, which predicts character probabilities directly from raw audio using learnt multi-scale representations of the various signal components. There are no sub-modules whose interdependencies need to be optimized. Our training procedure is designed to work with weak, line-level annotations available in the real world. With a mean alignment error of 0.35s on a standard dataset our system outperforms the state-of-the-art by an order of magnitude. | Furthermore, many systems rely on rather complex training or parameter optimization procedures, which can affect the training duration or reliability. For example, the phoneme detectors mentioned above require a fine-grained phoneme labelling during training. As such a dataset is not available for music, the system in @cite_22 periodically re-calculates an alignment between the lyrics and recordings in the training dataset (Viterbi forced alignment) and continues a frame-wise training based on the results. This procedure is a variant of Viterbi training @cite_6 , which was found to accelerate convergence in some cases but which often also led to inferior model performance as the hard-alignment can bias the training towards solutions that generalize less well compared to approaches using soft-alignments (Baum-Welch training) @cite_16 . 
Finally, systems often consist of multiple complex stages @cite_24 @cite_7 @cite_11 @cite_10 @cite_23 , introducing many parameters that are not optimized jointly, so that errors tend to propagate between stages. In contrast, all parameters in our system are trained jointly on polyphonic music; we only require weak alignment annotations at the level of lyrical lines and employ a "soft alignment" during training to stabilize the model performance. | {
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_6",
"@cite_24",
"@cite_23",
"@cite_16",
"@cite_10",
"@cite_11"
],
"mid": [
"2577008904",
"2134387846",
"2128652941",
"",
"2964150074",
"2009570821",
"2707788252",
"2482558056"
],
"abstract": [
"",
"This paper describes a system that can automatically synchronize polyphonic musical audio signals with their corresponding lyrics. Although methods for synchronizing monophonic speech signals and corresponding text transcriptions by using Viterbi alignment techniques have been proposed, these methods cannot be applied to vocals in CD recordings because vocals are often overlapped by accompaniment sounds. In addition to a conventional method for reducing the influence of the accompaniment sounds, we therefore developed four methods to overcome this problem: a method for detecting vocal sections, a method for constructing robust phoneme networks, a method for detecting fricative sounds, and a method for adapting a speech-recognizer phone model to segregated vocal signals. We then report experimental results for each of these methods and also describe our music playback interface that utilizes our system for synchronizing music and lyrics.",
"A hybrid method for continuous-speech recognition which combines hidden Markov models (HMMs) and a connectionist technique called connectionist Viterbi training (CVT) is presented. CVT can be run iteratively and can be applied to large-vocabulary recognition tasks. Successful completion of training the connectionist component of the system, despite the large network size and volume of training data, depends largely on several measures taken to reduce learning time. The system is trained and tested on the TI NBS speaker-independent continuous-digits database. Performance on test data for unknown-length strings is 98.5% word accuracy and 95.0% string accuracy. Several improvements to the current system are expected to increase these accuracies significantly.",
"",
"Spoken content processing (such as retrieval and browsing) is maturing, but the singing content is still almost completely left out. Songs are human voice carrying plenty of semantic information just as speech, and may be considered as a special type of speech with highly flexible prosody. The various problems in song audio, for example the significantly changing phone duration over highly flexible pitch contours, make the recognition of lyrics from song audio much more difficult. This paper reports an initial attempt towards this goal. We collected music-removed version of English songs directly from commercial singing content. The best results were obtained by TDNN-BLSTM with data augmentation with 3-fold speed perturbation plus some special approaches. The WER achieved (73.90%) was significantly lower than the baseline (96.21%), but still relatively high.",
"Probabilistic models are becoming increasingly important in analyzing the huge amount of data being produced by large-scale DNA-sequencing efforts such as the Human Genome Project. For example, hidden Markov models are used for analyzing biological sequences, linguistic-grammar-based probabilistic models for identifying RNA secondary structure, and probabilistic evolutionary models for inferring phylogenies of sequences from different organisms. This book gives a unified, up-to-date and self-contained account, with a Bayesian slant, of such methods, and more generally to probabilistic methods of sequence analysis. Written by an interdisciplinary team of authors, it is accessible to molecular biologists, computer scientists, and mathematicians with no formal knowledge of the other fields, and at the same time presents the state of the art in this new and important field.",
"The massive amount of digital music data available necessitates automated methods for processing, classifying and organizing large volumes of songs. As music discovery and interactive music applications become commonplace, the ability to synchronize lyric text information with an audio recording has gained interest. This paper presents an approach for lyric-audio alignment by comparing synthesized speech with a vocal track removed from an instrument mixture using source separation. We take a hierarchical approach to solve the problem, assuming a set of paragraph-music segment pairs is given and focus on within-segment lyric alignment at the word level. A synthesized speech signal is generated to reflect the properties of the music signal by controlling the speech rate and gender. Dynamic time warping finds the shortest path between the synthesized speech and separated vocal. The resulting path is used to calculate the timestamps of words in the original signal. The system results in approximately half a second of misalignment error on average. Finally, we discuss the challenges and suggest improvements to the method.",
"This study addresses the task of aligning lyrics with accompanied singing recordings. With a vowel-only representation of lyric syllables, our approach evaluates likelihood scores of vowel types with glottal pulse shapes and formant frequencies extracted from a small set of singing examples. The proposed vowel likelihood model is used in conjunction with a prior model of frame-wise syllable sequence in determining an optimal evolution of syllabic position. In lyrics alignment experiments, we optimized numerical parameters on two independent development sets and then tested the optimized system on two other datasets. New objective performance measures are introduced in the evaluation to provide further insight into the quality of alignment. Use of glottal pulse shapes and formant frequencies is shown by a controlled experiment to account for a 0.07 difference in average normalized alignment error. Another controlled experiment demonstrates that, with a difference of 0.03, F0-invariant glottal pulse shape gives a lower average normalized alignment error than does F0-invariant spectrum envelope, the latter being assumed by MFCC-based timbre models."
]
} |
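The Viterbi-training versus Baum-Welch distinction drawn in the related-work text above can be made concrete with a small sketch. The toy two-state HMM, its parameters, and the observation sequence below are invented for illustration; the point is only that the hard E-step (Viterbi) commits to a single state path, while the soft E-step (forward-backward) keeps a per-frame posterior over states.

```python
import numpy as np

def forward_backward(A, B, pi, obs):
    """Soft E-step (Baum-Welch): posterior state occupancies gamma[t, s]."""
    T, S = len(obs), len(pi)
    alpha = np.zeros((T, S))
    beta = np.zeros((T, S))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

def viterbi(A, B, pi, obs):
    """Hard E-step (Viterbi training): single best state path."""
    T, S = len(obs), len(pi)
    delta = np.zeros((T, S))
    psi = np.zeros((T, S), dtype=int)
    delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(A)  # scores[i, j]: from state i to j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path

# Toy 2-state HMM over 3 discrete emission symbols (all values invented).
A = np.array([[0.9, 0.1], [0.1, 0.9]])            # transition probabilities
B = np.array([[0.7, 0.2, 0.1], [0.1, 0.2, 0.7]])  # emission probabilities
pi = np.array([0.5, 0.5])                         # initial distribution
obs = np.array([0, 0, 0, 2, 2])                   # observed symbol sequence

gamma = forward_backward(A, B, pi, obs)  # soft: each frame spread over states
path = viterbi(A, B, pi, obs)            # hard: exactly one state per frame
```

In Viterbi training the M-step re-estimates parameters from the one-hot path, whereas Baum-Welch weights each frame by `gamma`; keeping that per-frame uncertainty is what makes the soft alignment less prone to the biased, poorly generalizing solutions mentioned above.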
1902.06740 | 2915180734 | A common technique to improve speed and robustness of learning in deep reinforcement learning (DRL) and many other machine learning algorithms is to run multiple learning agents in parallel. A neglected component in the development of these algorithms has been how best to arrange the learning agents involved to better facilitate distributed search. Here we draw upon results from the networked optimization and collective intelligence literatures suggesting that arranging learning agents in less than fully connected topologies (the implicit way agents are commonly arranged in) can improve learning. We explore the relative performance of four popular families of graphs and observe that one such family (Erdos-Renyi random graphs) empirically outperforms the standard fully-connected communication topology across several DRL benchmark tasks. We observe that 1000 learning agents arranged in an Erdos-Renyi graph can perform as well as 3000 agents arranged in the standard fully-connected topology, showing the large learning improvement possible when carefully designing the topology over which agents communicate. We complement these empirical results with a preliminary theoretical investigation of why less than fully connected topologies can perform better. Overall, our work suggests that distributed machine learning algorithms could be made more efficient if the communication topology between learning agents was optimized. | There have been many variants of Evolution Strategies over the years, such as CMA-ES @cite_18 , which also updates the covariance matrix of the Gaussian distribution; Natural Evolution Strategies @cite_24 , where the inverse of the Fisher Information Matrix of the search distribution is used in the gradient update rule; and the Evolution Strategies of @cite_17 (on which we build), which were modified for scalability in DRL.
However, in all the approaches described above, agents are organized in an implicit fully-connected centralized topology. | {
"cite_N": [
"@cite_24",
"@cite_18",
"@cite_17"
],
"mid": [
"2151965738",
"1503296932",
"2596367596"
],
"abstract": [
"This paper presents Natural Evolution Strategies (NES), a recent family of black-box optimization algorithms that use the natural gradient to update a parameterized search distribution in the direction of higher expected fitness. We introduce a collection of techniques that address issues of convergence, robustness, sample complexity, computational complexity and sensitivity to hyperparameters. This paper explores a number of implementations of the NES family, such as general-purpose multi-variate normal distributions and separable distributions tailored towards search in high dimensional spaces. Experimental results show best published performance on various standard benchmarks, as well as competitive performance on others.",
"In this paper we introduce a restart-CMA-evolution strategy, where the population size is increased for each restart (IPOP). By increasing the population size the search characteristic becomes more global after each restart. The IPOP-CMA-ES is evaluated on the test suit of 25 functions designed for the special session on real-parameter optimization of CEC 2005. Its performance is compared to a local restart strategy with constant small population size. On unimodal functions the performance is similar. On multi-modal functions the local restart strategy significantly outperforms IPOP in 4 test cases whereas IPOP performs significantly better in 29 out of 60 tested cases.",
"We explore the use of Evolution Strategies (ES), a class of black box optimization algorithms, as an alternative to popular MDP-based RL techniques such as Q-learning and Policy Gradients. Experiments on MuJoCo and Atari show that ES is a viable solution strategy that scales extremely well with the number of CPUs available: By using a novel communication strategy based on common random numbers, our ES implementation only needs to communicate scalars, making it possible to scale to over a thousand parallel workers. This allows us to solve 3D humanoid walking in 10 minutes and obtain competitive results on most Atari games after one hour of training. In addition, we highlight several advantages of ES as a black box optimization technique: it is invariant to action frequency and delayed rewards, tolerant of extremely long horizons, and does not need temporal discounting or value function approximation."
]
} |
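As an illustrative aside on the scalable ES variant referenced above ( @cite_17 ): the sketch below estimates a search gradient from mirrored Gaussian perturbations and applies a plain gradient-ascent step. The toy objective, population size, noise scale, and learning rate are placeholder choices for this sketch, not those of any cited paper.

```python
import numpy as np

def es_step(theta, fitness, rng, npop=50, sigma=0.1, alpha=0.02):
    """One ES update: perturb theta with mirrored Gaussian noise, weight each
    noise vector by its (normalized) fitness, and step along the estimate."""
    eps = rng.standard_normal((npop, theta.size))
    eps = np.concatenate([eps, -eps])  # antithetic sampling reduces variance
    rewards = np.array([fitness(theta + sigma * e) for e in eps])
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    grad = eps.T @ rewards / (len(eps) * sigma)  # search-gradient estimate
    return theta + alpha * grad

# Toy black-box objective (an assumption for illustration): maximize the
# negative squared distance to a fixed target vector.
target = np.array([1.0, -2.0, 0.5])
fitness = lambda x: -np.sum((x - target) ** 2)

rng = np.random.default_rng(42)
theta = np.zeros(3)
for _ in range(200):
    theta = es_step(theta, fitness, rng)
# theta drifts toward `target` using only fitness evaluations, no gradients.
```

In the distributed setting of the cited work, each worker evaluates a slice of the perturbations, and shared random seeds mean only scalar rewards need to be communicated between workers.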
1902.06740 | 2915180734 | A common technique to improve speed and robustness of learning in deep reinforcement learning (DRL) and many other machine learning algorithms is to run multiple learning agents in parallel. A neglected component in the development of these algorithms has been how best to arrange the learning agents involved to better facilitate distributed search. Here we draw upon results from the networked optimization and collective intelligence literatures suggesting that arranging learning agents in less than fully connected topologies (the implicit way agents are commonly arranged in) can improve learning. We explore the relative performance of four popular families of graphs and observe that one such family (Erdos-Renyi random graphs) empirically outperforms the standard fully-connected communication topology across several DRL benchmark tasks. We observe that 1000 learning agents arranged in an Erdos-Renyi graph can perform as well as 3000 agents arranged in the standard fully-connected topology, showing the large learning improvement possible when carefully designing the topology over which agents communicate. We complement these empirical results with a preliminary theoretical investigation of why less than fully connected topologies can perform better. Overall, our work suggests that distributed machine learning algorithms could be made more efficient if the communication topology between learning agents was optimized. | A focus of recent DRL has been the ability to run more and more agents in parallel (i.e., scalability). An early example is the Gorila framework @cite_8 that collects experiences in parallel from many agents. Another is A3C @cite_0 that we discussed earlier. IMPALA @cite_12 is a recent algorithm that solves many tasks with a single parameter set. Population Based Training @cite_28 optimizes both learning weights and hyperparameters.
Again, these algorithms implicitly use a fully-connected topology between learning agents. | {
"cite_N": [
"@cite_0",
"@cite_28",
"@cite_12",
"@cite_8"
],
"mid": [
"2260756217",
"2770298516",
"2786036274",
"1658008008"
],
"abstract": [
"We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.",
"Neural networks dominate the modern machine learning landscape, but their training and success still suffer from sensitivity to empirical choices of hyperparameters such as model architecture, loss function, and optimisation algorithm. In this work we present , a simple asynchronous optimisation algorithm which effectively utilises a fixed computational budget to jointly optimise a population of models and their hyperparameters to maximise performance. Importantly, PBT discovers a schedule of hyperparameter settings rather than following the generally sub-optimal strategy of trying to find a single fixed set to use for the whole course of training. With just a small modification to a typical distributed hyperparameter training framework, our method allows robust and reliable training of models. We demonstrate the effectiveness of PBT on deep reinforcement learning problems, showing faster wall-clock convergence and higher final performance of agents by optimising over a suite of hyperparameters. In addition, we show the same method can be applied to supervised learning for machine translation, where PBT is used to maximise the BLEU score directly, and also to training of Generative Adversarial Networks to maximise the Inception score of generated images. In all cases PBT results in the automatic discovery of hyperparameter schedules and model selection which results in stable training and better final performance.",
"In this work we aim to solve a large collection of tasks using a single reinforcement learning agent with a single set of parameters. A key challenge is to handle the increased amount of data and extended training time, which is already a problem in single task learning. We have developed a new distributed agent IMPALA (Importance-Weighted Actor Learner Architecture) that can scale to thousands of machines and achieve a throughput rate of 250,000 frames per second. We achieve stable learning at high throughput by combining decoupled acting and learning with a novel off-policy correction method called V-trace, which was critical for achieving learning stability. We demonstrate the effectiveness of IMPALA for multi-task reinforcement learning on DMLab-30 (a set of 30 tasks from the DeepMind Lab environment (, 2016)) and Atari-57 (all available Atari games in Arcade Learning Environment (, 2013a)). Our results show that IMPALA is able to achieve better performance than previous agents, use less data and crucially exhibits positive transfer between tasks as a result of its multi-task approach.",
"We present the first massively distributed architecture for deep reinforcement learning. This architecture uses four main components: parallel actors that generate new behaviour; parallel learners that are trained from stored experience; a distributed neural network to represent the value function or behaviour policy; and a distributed store of experience. We used our architecture to implement the Deep Q-Network algorithm (DQN). Our distributed algorithm was applied to 49 games from Atari 2600 games from the Arcade Learning Environment, using identical hyperparameters. Our performance surpassed non-distributed DQN in 41 of the 49 games and also reduced the wall-time required to achieve these results by an order of magnitude on most games."
]
} |
1902.06740 | 2915180734 | A common technique to improve speed and robustness of learning in deep reinforcement learning (DRL) and many other machine learning algorithms is to run multiple learning agents in parallel. A neglected component in the development of these algorithms has been how best to arrange the learning agents involved to better facilitate distributed search. Here we draw upon results from the networked optimization and collective intelligence literatures suggesting that arranging learning agents in less than fully connected topologies (the implicit way agents are commonly arranged in) can improve learning. We explore the relative performance of four popular families of graphs and observe that one such family (Erdos-Renyi random graphs) empirically outperforms the standard fully-connected communication topology across several DRL benchmark tasks. We observe that 1000 learning agents arranged in an Erdos-Renyi graph can perform as well as 3000 agents arranged in the standard fully-connected topology, showing the large learning improvement possible when carefully designing the topology over which agents communicate. We complement these empirical results with a preliminary theoretical investigation of why less than fully connected topologies can perform better. Overall, our work suggests that distributed machine learning algorithms could be made more efficient if the communication topology between learning agents was optimized. | There has also been work in the multi-agent reinforcement learning literature focusing on how independent agents can solve competitive and collaborative problems. For example, recent work investigated the role of communication topology, but focused on agents solving different tasks @cite_6 . One recent study @cite_19 investigated the effect of communication network topology, but only as an aside and on very small networks; they also observe improvements when using networks that are not fully connected. | {
"cite_N": [
"@cite_19",
"@cite_6"
],
"mid": [
"2765172389",
"2788115019"
],
"abstract": [
"We propose a multiagent distributed actor-critic algorithm for multitask reinforcement learning (MRL), named Diff-DAC. The agents are connected, forming a (possibly sparse) network. Each agent is assigned a task and has access to data from this local task only. During the learning process, the agents are able to communicate some parameters to their neighbors. Since the agents incorporate their neighbors' parameters into their own learning rules, the information is diffused across the network, and they can learn a common policy that generalizes well across all tasks. Diff-DAC is scalable since the computational complexity and communication overhead per agent grow with the number of neighbors, rather than with the total number of agents. Moreover, the algorithm is fully distributed in the sense that agents self-organize, with no need for coordinator node. Diff-DAC follows an actor-critic scheme where the value function and the policy are approximated with deep neural networks, being able to learn expressive policies from raw data. As a by-product of Diff-DAC's derivation from duality theory, we provide novel insights into the standard actor-critic framework, showing that it is actually an instance of the dual ascent method to approximate the solution of a linear program. Experiments illustrate the performance of the algorithm in the cart-pole, inverted pendulum, and swing-up cart-pole environments.",
"We consider the problem of multi-agent reinforcement learning (MARL), where the agents are located at the nodes of a time-varying communication network. Specifically, we assume that the reward functions of the agents might correspond to different tasks, and are only known to the corresponding agent. Moreover, each agent makes individual decisions based on both the information observed locally and the messages received from its neighbors over the network. Within this setting, the collective goal of the agents is to maximize the globally averaged return over the network through exchanging information with their neighbors. To this end, we propose two decentralized actor-critic algorithms with function approximation, which are applicable to large-scale MARL problems where both the number of states and the number of agents are massively large. Under the decentralized structure, the actor step is performed individually by each agent with no need to infer the policies of others. For the critic step, we propose a consensus update via communication over the network. Our algorithms are fully incremental and can be implemented in an online fashion. Convergence analyses of the algorithms are provided when the value functions are approximated within the class of linear functions. Extensive simulation results with both linear and nonlinear function approximations are presented to validate the proposed algorithms. Our work appears to be the first study of fully decentralized MARL algorithms for networked agents with function approximation, with provable convergence guarantees."
]
} |
1902.06740 | 2915180734 | A common technique to improve speed and robustness of learning in deep reinforcement learning (DRL) and many other machine learning algorithms is to run multiple learning agents in parallel. A neglected component in the development of these algorithms has been how best to arrange the learning agents involved to better facilitate distributed search. Here we draw upon results from the networked optimization and collective intelligence literatures suggesting that arranging learning agents in less than fully connected topologies (the implicit way agents are commonly arranged in) can improve learning. We explore the relative performance of four popular families of graphs and observe that one such family (Erdos-Renyi random graphs) empirically outperforms the standard fully-connected communication topology across several DRL benchmark tasks. We observe that 1000 learning agents arranged in an Erdos-Renyi graph can perform as well as 3000 agents arranged in the standard fully-connected topology, showing the large learning improvement possible when carefully designing the topology over which agents communicate. We complement these empirical results with a preliminary theoretical investigation of why less than fully connected topologies can perform better. Overall, our work suggests that distributed machine learning algorithms could be made more efficient if the communication topology between learning agents was optimized. | On the other hand, work in the networked optimization literature has demonstrated that the network structure of communication between nodes significantly affects the convergence rate and accuracy of multi-agent learning @cite_31 @cite_23 @cite_25 . However, this work has focused on solving global objective functions that are the sum (or average) of private, local node-based objective functions, which is not always an appropriate framework for deep reinforcement learning.
In the collective intelligence literature, alternative network structures have been shown to result in increased exploration, higher overall maximum reward, and higher diversity of solutions in both simulated high-dimensional optimization @cite_15 and human experiments @cite_10 . | {
"cite_N": [
"@cite_15",
"@cite_23",
"@cite_31",
"@cite_10",
"@cite_25"
],
"mid": [
"2097141357",
"",
"2760049195",
"2409848655",
"2130263842"
],
"abstract": [
"Whether as team members brainstorming or cultures experimenting with new technologies, problem solvers communicate and share ideas. This paper examines how the structure of communication networks among actors can affect system-level performance. We present an agent-based computer simulation model of information sharing in which the less successful emulate the more successful. Results suggest that when agents are dealing with a complex problem, the more efficient the network at disseminating information, the better the short-run but the lower the long-run performance of the system. The dynamic underlying this result is that an inefficient network maintains diversity in the system and is thus better for exploration than an efficient network, supporting a more thorough search for solutions in the long run. For intermediate time frames, there is an inverted-U relationship between connectedness and performance, in which both poorly and well-connected systems perform badly, and moderately connected systems perf...",
"",
"In decentralized optimization, nodes cooperate to minimize an overall objective function that is the sum (or average) of per-node private objective functions. Algorithms interleave local computations with communication among all or a subset of the nodes. Motivated by a variety of applications---distributed estimation in sensor networks, fitting models to massive data sets, and distributed control of multi-robot systems, to name a few---significant advances have been made towards the development of robust, practical algorithms with theoretical performance guarantees. This paper presents an overview of recent work in this area. In general, rates of convergence depend not only on the number of nodes involved and the desired level of accuracy, but also on the structure and nature of the network over which nodes communicate (e.g., whether links are directed or undirected, static or time-varying). We survey the state-of-the-art algorithms and their analyses tailored to these different scenarios, highlighting the role of the network topology.",
"Previous studies have disagreed over whether efficient or inefficient network structures should be more effective in promoting group performance. Here, Barkoczi and Galesic demonstrate that which structure is superior depends on the social learning strategy used by individuals in the network.",
"We consider a distributed multi-agent network system where each agent has its own convex objective function, which can be evaluated with stochastic errors. The problem consists of minimizing the sum of the agent functions over a commonly known constraint set, but without a central coordinator and without agents sharing the explicit form of their objectives. We propose an asynchronous broadcast-based algorithm where the communications over the network are subject to random link failures. We investigate the convergence properties of the algorithm for a diminishing (random) stepsize and a constant stepsize, where each agent chooses its own stepsize independently of the other agents. Under some standard conditions on the gradient errors, we establish almost sure convergence of the method to an optimal point for diminishing stepsize. For constant stepsize, we establish some error bounds on the expected distance from the optimal point and the expected function value. We also provide numerical results."
]
} |
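To make the topology discussion above concrete, the following sketch builds an Erdős–Rényi communication graph and runs a few rounds of neighbor-only (gossip-style) parameter averaging, a standard primitive in the networked optimization literature. The connection probability and the uniform averaging rule are illustrative assumptions, not the exact scheme of any cited work.

```python
import numpy as np

def erdos_renyi(n, p, rng):
    """Symmetric adjacency matrix of an Erdos-Renyi random graph G(n, p)."""
    upper = rng.random((n, n)) < p
    adj = np.triu(upper, k=1)      # keep strict upper triangle, no self-loops
    return adj | adj.T             # mirror to make the graph undirected

def gossip_average(params, adj):
    """Each agent replaces its parameters with the mean over itself and its
    neighbors -- information spreads only along graph edges."""
    n = len(params)
    mix = adj.astype(float) + np.eye(n)    # include each agent's own parameters
    mix /= mix.sum(axis=1, keepdims=True)  # row-stochastic mixing matrix
    return mix @ params

rng = np.random.default_rng(0)
n_agents, dim = 100, 4
adj = erdos_renyi(n_agents, p=0.1, rng=rng)    # sparse, not fully connected
params = rng.standard_normal((n_agents, dim))  # each agent's parameter vector

for _ in range(20):
    params = gossip_average(params, adj)
# On a connected graph, repeated local averaging drives the agents toward
# consensus; sparser graphs mix more slowly, preserving diversity longer.
spread = params.std(axis=0).max()
```

Replacing the mixing matrix of this sparse graph with that of the complete graph recovers the fully connected averaging that the surveyed methods use implicitly.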
1902.06768 | 2950750876 | Mobile robots need to create high-definition 3D maps of the environment for applications such as remote surveillance and infrastructure mapping. Accurate semantic processing of the acquired 3D point cloud is critical for allowing the robot to obtain a high-level understanding of the surrounding objects and perform context-aware decision making. Existing techniques for point cloud semantic segmentation are mostly applied on a single-frame or offline basis, with no way to integrate the segmentation results over time. This paper proposes an online method for mobile robots to incrementally build a semantically-rich 3D point cloud of the environment. The proposed deep neural network, MCPNet, is trained to predict class labels and object instance labels for each point in the scanned point cloud in an incremental fashion. A multi-view context pooling (MCP) operator is used to combine point features obtained from multiple viewpoints to improve the classification accuracy. The proposed architecture was trained and evaluated on ray-traced scans derived from the Stanford 3D Indoor Spaces dataset. Results show that the proposed approach led to 15% improvement in point-wise accuracy and 7% improvement in NMI compared to the next best online method, with only a 6% drop in accuracy compared to the PointNet-based offline approach. | Semantic processing of point cloud data acquired from a mobile robot can be carried out in several forms such as clustering and classification. The clustering process aims to subdivide a large point cloud into smaller chunks that form semantically cohesive units. Clustering methods usually rely on generating seed points, then performing a region growing procedure to create a point cloud cluster around the seed point @cite_16 . Cluster assignment is propagated to neighboring points based on criteria such as distance, similarity of normal vectors, and similarity of color @cite_1 .
More sophisticated methods for clustering use deep learning frameworks to infer a point feature embedding that can be used to predict point grouping proposals, for example, Similarity Group Proposal Networks (SGPN) @cite_23 . In this case, the distance between points in the learned embedding space is used as the criterion for grouping points together. | {
"cite_N": [
"@cite_16",
"@cite_1",
"@cite_23"
],
"mid": [
"2794149315",
"",
"2769312834"
],
"abstract": [
"Localization in 3-D point clouds is a highly challenging task due to the complexity associated with extracting information from 3-D data. This letter proposes an incremental approach addressing this problem efficiently. The presented method first accumulates the measurements in a dynamic voxel grid and selectively updates the point normals affected by the insertion. An incremental segmentation algorithm, based on region growing, tracks the evolution of single segments, which enables an efficient recognition strategy using partitioning and caching of geometric consistencies. We show that the incremental method can perform global localization at 10 Hz in an urban driving environment, a speedup of @math 7.1 over the compared batch solution. The efficiency of the method makes it suitable for applications where real-time localization is required and enables its usage on cheaper low-energy systems. Our implementation is available open source along with instructions for running the system. (The implementation is available at https://github.com/ethz-asl/segmatch and a video demonstration is available at https://youtu.be/cHfs3HLzc2Y .)",
"",
"We introduce Similarity Group Proposal Network (SGPN), a simple and intuitive deep learning framework for 3D object instance segmentation on point clouds. SGPN uses a single network to predict point grouping proposals and a corresponding semantic class for each proposal, from which we can directly extract instance segmentation results. Important to the effectiveness of SGPN is its novel representation of 3D instance segmentation results in the form of a similarity matrix that indicates the similarity between each pair of points in embedded feature space, thus producing an accurate grouping proposal for each point. Experimental results on various 3D scenes show the effectiveness of our method on 3D instance segmentation, and we also evaluate the capability of SGPN to improve 3D object detection and semantic segmentation results. We also demonstrate its flexibility by seamlessly incorporating 2D CNN features into the framework to boost performance."
]
} |
1902.06768 | 2950750876 | Mobile robots need to create high-definition 3D maps of the environment for applications such as remote surveillance and infrastructure mapping. Accurate semantic processing of the acquired 3D point cloud is critical for allowing the robot to obtain a high-level understanding of the surrounding objects and perform context-aware decision making. Existing techniques for point cloud semantic segmentation are mostly applied on a single-frame or offline basis, with no way to integrate the segmentation results over time. This paper proposes an online method for mobile robots to incrementally build a semantically-rich 3D point cloud of the environment. The proposed deep neural network, MCPNet, is trained to predict class labels and object instance labels for each point in the scanned point cloud in an incremental fashion. A multi-view context pooling (MCP) operator is used to combine point features obtained from multiple viewpoints to improve the classification accuracy. The proposed architecture was trained and evaluated on ray-traced scans derived from the Stanford 3D Indoor Spaces dataset. Results show that the proposed approach led to a 15% improvement in point-wise accuracy and a 7% improvement in NMI compared to the next best online method, with only a 6% drop in accuracy compared to the PointNet-based offline approach. | On the other hand, classification of point cloud data can be carried out at the point level, such that each point has an individual label. Some methods project the 3D point cloud into a 2D form and perform segmentation on the resulting image @cite_18 @cite_9 . Other methods use 3D convolutions to operate on a voxel grid and compute features in a layered fashion @cite_2 @cite_10 . 
Due to the poor computational scalability of 3D convolutions, an alternative is to compute both point features and scene features derived from pooling operations, which are then concatenated to predict class probabilities for each point @cite_5 @cite_12 . Further advancements to this line of work use superpoint graphs @cite_20 , recurrent networks @cite_15 , or coarse-to-fine hierarchies @cite_17 to incorporate neighborhood information and local dependencies into the prediction stage. However, these methods are usually applied in the offline setting, i.e. as a post-processing step after complete point cloud data is obtained, and do not take into account occlusion effects that come into play with robotic scanning. | {
"cite_N": [
"@cite_18",
"@cite_9",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_20",
"@cite_10",
"@cite_12",
"@cite_17"
],
"mid": [
"2766577666",
"2798468281",
"2211722331",
"2560609797",
"2770046775",
"2769473888",
"2556802233",
"2624503621",
"2777356020"
],
"abstract": [
"In this paper, we address semantic segmentation of road-objects from 3D LiDAR point clouds. In particular, we wish to detect and categorize instances of interest, such as cars, pedestrians and cyclists. We formulate this problem as a point-wise classification problem, and propose an end-to-end pipeline called SqueezeSeg based on convolutional neural networks (CNN): the CNN takes a transformed LiDAR point cloud as input and directly outputs a point-wise label map, which is then refined by a conditional random field (CRF) implemented as a recurrent layer. Instance-level labels are then obtained by conventional clustering algorithms. Our CNN model is trained on LiDAR point clouds from the KITTI dataset, and our point-wise segmentation labels are derived from 3D bounding boxes from KITTI. To obtain extra training data, we built a LiDAR simulator into Grand Theft Auto V (GTA-V), a popular video game, to synthesize large amounts of realistic training data. Our experiments show that SqueezeSeg achieves high accuracy with astonishingly fast and stable runtime (8.7 ms per frame), highly desirable for autonomous driving applications. Furthermore, additionally training on synthesized data boosts validation accuracy on real-world data. Our source code and synthesized data will be open-sourced.",
"For applications such as autonomous driving, self-localization camera pose estimation and scene parsing are crucial technologies. In this paper, we propose a unified framework to tackle these two problems simultaneously. The uniqueness of our design is a sensor fusion scheme which integrates camera videos, motion sensors (GPS IMU), and a 3D semantic map in order to achieve robustness and efficiency of the system. Specifically, we first have an initial coarse camera pose obtained from consumer-grade GPS IMU, based on which a label map can be rendered from the 3D semantic map. Then, the rendered label map and the RGB image are jointly fed into a pose CNN, yielding a corrected camera pose. In addition, to incorporate temporal information, a multi-layer recurrent neural network (RNN) is further deployed to improve the pose accuracy. Finally, based on the pose from RNN, we render a new label map, which is fed together with the RGB image into a segment CNN which produces per-pixel semantic label. In order to validate our approach, we build a dataset with registered 3D point clouds and video camera images. Both the point clouds and the images are semantically-labeled. Each video frame has ground truth pose from highly accurate motion sensors. We show that practically, pose estimation solely relying on images like PoseNet may fail due to street view confusion, and it is important to fuse multiple sensors. Finally, various ablation studies are performed, which demonstrate the effectiveness of the proposed system. In particular, we show that scene parsing and pose estimation are mutually beneficial to achieve a more robust and accurate system.",
"Robust object recognition is a crucial skill for robots operating autonomously in real world environments. Range sensors such as LiDAR and RGBD cameras are increasingly found in modern robotic systems, providing a rich source of 3D information that can aid in this task. However, many current systems do not fully utilize this information and have trouble efficiently dealing with large amounts of point cloud data. In this paper, we propose VoxNet, an architecture to tackle this problem by integrating a volumetric Occupancy Grid representation with a supervised 3D Convolutional Neural Network (3D CNN). We evaluate our approach on publicly available benchmarks using LiDAR, RGBD, and CAD data. VoxNet achieves accuracy beyond the state of the art while labeling hundreds of instances per second.",
"Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.",
"Deep learning approaches have made tremendous progress in the field of semantic segmentation over the past few years. However, most current approaches operate in the 2D image space. Direct semantic segmentation of unstructured 3D point clouds is still an open research problem. The recently proposed PointNet architecture presents an interesting step ahead in that it can operate on unstructured point clouds, achieving encouraging segmentation results. However, it subdivides the input points into a grid of blocks and processes each such block individually. In this paper, we investigate the question how such an architecture can be extended to incorporate larger-scale spatial context. We build upon PointNet and propose two extensions that enlarge the receptive field over the 3D scene. We evaluate the proposed strategies on challenging indoor and outdoor datasets and show improved results in both scenarios.",
"We propose a novel deep learning-based framework to tackle the challenge of semantic segmentation of large-scale point clouds of millions of points. We argue that the organization of 3D point clouds can be efficiently captured by a structure called superpoint graph (SPG), derived from a partition of the scanned scene into geometrically homogeneous elements. SPGs offer a compact yet rich representation of contextual relationships between object parts, which is then exploited by a graph convolutional network. Our framework sets a new state of the art for segmenting outdoor LiDAR scans (+11.9 and +8.8 mIoU points for both Semantic3D test sets), as well as indoor scans (+12.4 mIoU points for the S3DIS dataset).",
"We present OctNet, a representation for deep learning with sparse 3D data. In contrast to existing models, our representation enables 3D convolutional networks which are both deep and high resolution. Towards this goal, we exploit the sparsity in the input data to hierarchically partition the space using a set of unbalanced octrees where each leaf node stores a pooled feature representation. This allows to focus memory allocation and computation to the relevant dense regions and enables deeper networks without compromising resolution. We demonstrate the utility of our OctNet representation by analyzing the impact of resolution on several 3D tasks including 3D object classification, orientation estimation and point cloud labeling.",
"Few prior works study deep learning on point sets. PointNet is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds.",
"We introduce ScanComplete, a novel data-driven approach for taking an incomplete 3D scan of a scene as input and predicting a complete 3D model along with per-voxel semantic labels. The key contribution of our method is its ability to handle large scenes with varying spatial extent, managing the cubic growth in data size as scene size increases. To this end, we devise a fully-convolutional generative 3D CNN model whose filter kernels are invariant to the overall scene size. The model can be trained on scene subvolumes but deployed on arbitrarily large scenes at test time. In addition, we propose a coarse-to-fine inference strategy in order to produce high-resolution output while also leveraging large input context sizes. In an extensive series of experiments, we carefully evaluate different model design choices, considering both deterministic and probabilistic models for completion and semantic inference. Our results show that we outperform other methods not only in the size of the environments handled and processing efficiency, but also with regard to completion quality and semantic segmentation performance by a significant margin."
]
} |
1902.06768 | 2950750876 | Mobile robots need to create high-definition 3D maps of the environment for applications such as remote surveillance and infrastructure mapping. Accurate semantic processing of the acquired 3D point cloud is critical for allowing the robot to obtain a high-level understanding of the surrounding objects and perform context-aware decision making. Existing techniques for point cloud semantic segmentation are mostly applied on a single-frame or offline basis, with no way to integrate the segmentation results over time. This paper proposes an online method for mobile robots to incrementally build a semantically-rich 3D point cloud of the environment. The proposed deep neural network, MCPNet, is trained to predict class labels and object instance labels for each point in the scanned point cloud in an incremental fashion. A multi-view context pooling (MCP) operator is used to combine point features obtained from multiple viewpoints to improve the classification accuracy. The proposed architecture was trained and evaluated on ray-traced scans derived from the Stanford 3D Indoor Spaces dataset. Results show that the proposed approach led to a 15% improvement in point-wise accuracy and a 7% improvement in NMI compared to the next best online method, with only a 6% drop in accuracy compared to the PointNet-based offline approach. | In contrast, for robotics applications, point cloud data of a scene is usually incrementally obtained in separate scans as the robot moves to different points around the site of interest. To make use of multi-view information, features from multiple viewpoints can be combined using operations such as global view pooling @cite_22 , grouped pooling @cite_8 , or joint viewpoint prediction and categorization @cite_6 . However, these methods perform classification at the object level in the offline setting, where the views are combined with complete point cloud information. 
For classification at the point level, several works @cite_3 @cite_24 use a method where point features are computed repeatedly for each observation and merged to determine the final classification. However, the final view merging process is still performed offline using computationally-heavy methods such as conditional random fields. | {
"cite_N": [
"@cite_22",
"@cite_8",
"@cite_3",
"@cite_6",
"@cite_24"
],
"mid": [
"",
"2799162093",
"2892346030",
"2964342398",
"2415351208"
],
"abstract": [
"",
"3D shape recognition has attracted much attention recently. Its recent advances advocate the usage of deep features and achieve the state-of-the-art performance. However, existing deep features for 3D shape recognition are restricted to a view-to-shape setting, which learns the shape descriptor from the view-level feature directly. Despite the exciting progress on view-based 3D shape description, the intrinsic hierarchical correlation and discriminability among views have not been well exploited, which is important for 3D shape representation. To tackle this issue, in this paper, we propose a group-view convolutional neural network (GVCNN) framework for hierarchical correlation modeling towards discriminative 3D shape description. The proposed GVCNN framework is composed of a hierarchical view-group-shape architecture, i.e., from the view level, the group level and the shape level, which are organized using a grouping strategy. Concretely, we first use an expanded CNN to extract a view level descriptor. Then, a grouping module is introduced to estimate the content discrimination of each view, based on which all views can be splitted into different groups according to their discriminative level. A group level description can be further generated by pooling from view descriptors. Finally, all group level descriptors are combined into the shape level descriptor according to their discriminative weights. Experimental results and comparison with state-of-the-art methods show that our proposed GVCNN method can achieve a significant performance gain on both the 3D shape classification and retrieval tasks.",
"Applications that provide location related services need to understand the environment in which humans live such that verbal references and human interaction are possible. We formulate this semantic labelling task as the problem of learning the semantic labels from the perceived 3D structure. In this contribution we propose a batch approach and a novel multi-view frame fusion technique to exploit multiple views for improving the semantic labelling results. The batch approach works offline and is the direct application of an existing single-view method to scene reconstructions with multiple views. The multi-view frame fusion works in an incremental fashion accumulating the single-view results, hence allowing the online multi-view semantic segmentation of single frames and the offline reconstruction of semantic maps. Our experiments show the superiority of the approaches based on our fusion scheme, which leads to a more accurate semantic labelling.",
"We propose a Convolutional Neural Network (CNN)-based model \"RotationNet,\" which takes multi-view images of an object as input and jointly estimates its pose and object category. Unlike previous approaches that use known viewpoint labels for training, our method treats the viewpoint labels as latent variables, which are learned in an unsupervised manner during the training using an unaligned object dataset. RotationNet is designed to use only a partial set of multi-view images for inference, and this property makes it useful in practical scenarios where only partial views are available. Moreover, our pose alignment strategy enables one to obtain view-specific feature representations shared across classes, which is important to maintain high accuracy in both object categorization and pose estimation. Effectiveness of RotationNet is demonstrated by its superior performance to the state-of-the-art methods of 3D object classification on 10- and 40-class ModelNet datasets. We also show that RotationNet, even trained without known poses, achieves the state-of-the-art performance on an object pose estimation dataset.",
"While the main trend of 3D object recognition has been to infer object detection from single views of the scene — i.e., 2.5D data — this work explores the direction on performing object recognition on 3D data that is reconstructed from multiple viewpoints, under the conjecture that such data can improve the robustness of an object recognition system. To achieve this goal, we propose a framework which is able (i) to carry out incremental real-time segmentation of a 3D scene while being reconstructed via Simultaneous Localization And Mapping (SLAM), and (ii) to simultaneously and incrementally carry out 3D object recognition and pose estimation on the reconstructed and segmented 3D representations. Experimental results demonstrate the advantages of our approach with respect to traditional single view-based object recognition and pose estimation approaches, as well as its usefulness in robotic perception and augmented reality applications."
]
} |
1902.06937 | 2916059710 | The key idea of Bayesian optimization is replacing an expensive target function with a cheap surrogate model. By selection of an acquisition function for Bayesian optimization, we trade off between exploration and exploitation. The acquisition function typically depends on the mean and the variance of the surrogate model at a given point. The most common Gaussian process-based surrogate model assumes that the target with fixed parameters is a realization of a Gaussian process. However, often the target function doesn't satisfy this approximation. Here we consider target functions that come from the binomial distribution with the parameter that depends on inputs. Typically we can vary how many Bernoulli samples we obtain during each evaluation. We propose a general Gaussian process model that takes into account Bernoulli outputs. To make things work we consider a simple acquisition function based on Expected Improvement and a heuristic strategy to choose the number of samples at each point thus taking into account precision of the obtained output. | Bayesian optimization, known under different names in different fields, has a wide range of applications. A recent overview is provided in @cite_5 ; see that article and the references therein. Below we cover some issues related to our specific applications. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2192203593"
],
"abstract": [
"Big Data applications are typically associated with systems involving large numbers of users, massive complex software systems, and large-scale heterogeneous computing and storage architectures. The construction of such systems involves many distributed design choices. The end products (e.g., recommendation systems, medical analysis tools, real-time game engines, speech recognizers) thus involve many tunable configuration parameters. These parameters are often specified and hard-coded into the software by various developers or teams. If optimized jointly, these parameters can result in significant improvements. Bayesian optimization is a powerful tool for the joint optimization of design choices that is gaining great popularity in recent years. It promises greater automation so as to increase both product quality and human productivity. This review paper introduces Bayesian optimization, highlights some of its methodological aspects, and showcases a wide range of applications."
]
} |
1902.06937 | 2916059710 | The key idea of Bayesian optimization is replacing an expensive target function with a cheap surrogate model. By selection of an acquisition function for Bayesian optimization, we trade off between exploration and exploitation. The acquisition function typically depends on the mean and the variance of the surrogate model at a given point. The most common Gaussian process-based surrogate model assumes that the target with fixed parameters is a realization of a Gaussian process. However, often the target function doesn't satisfy this approximation. Here we consider target functions that come from the binomial distribution with the parameter that depends on inputs. Typically we can vary how many Bernoulli samples we obtain during each evaluation. We propose a general Gaussian process model that takes into account Bernoulli outputs. To make things work we consider a simple acquisition function based on Expected Improvement and a heuristic strategy to choose the number of samples at each point thus taking into account precision of the obtained output. | We start with a range of applications where the output is binomial, e.g. hyperparameter tuning and AutoML: in @cite_10 the authors propose an early stopping criterion combined with a modification of the EI acquisition function, in which evaluation of a configuration is stopped if its predicted performance is worse than that of the current best configuration. Bayesian optimization was used to tune the hyperparameters of AlphaGo @cite_7 as well as those of other deep learning based systems @cite_22 . Also see @cite_20 and @cite_16 for applications in high energy physics. | {
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_16",
"@cite_10",
"@cite_20"
],
"mid": [
"2408019865",
"2903717821",
"2777915048",
"2266822037",
"2784115570"
],
"abstract": [
"Bayesian optimization has become a successful tool for hyperparameter optimization of machine learning algorithms, such as support vector machines or deep neural networks. Despite its success, for large datasets, training and validating a single configuration often takes hours, days, or even weeks, which limits the achievable performance. To accelerate hyperparameter optimization, we propose a generative model for the validation error as a function of training set size, which is learned during the optimization process and allows exploration of preliminary configurations on small subsets, by extrapolating to the full dataset. We construct a Bayesian optimization procedure, dubbed Fabolas, which models loss and training time as a function of dataset size and automatically trades off high information gain about the global optimum against computational cost. Experiments optimizing support vector machines and deep neural networks show that Fabolas often finds high-quality solutions 10 to 100 times faster than other state-of-the-art Bayesian optimization methods or the recently proposed bandit strategy Hyperband.",
"During the development of AlphaGo, its many hyper-parameters were tuned with Bayesian optimization multiple times. This automatic tuning process resulted in substantial improvements in playing strength. For example, prior to the match with Lee Sedol, we tuned the latest AlphaGo agent and this improved its win-rate from 50% to 66.5% in self-play games. This tuned version was deployed in the final match. Of course, since we tuned AlphaGo many times during its development cycle, the compounded contribution was even higher than this percentage. It is our hope that this brief case study will be of interest to Go fans, and also provide Bayesian optimization practitioners with some insights and inspiration.",
"The SHiP experiment is designed to search for very weakly interacting particles beyond the Standard Model which are produced in a 400 GeV/c proton beam dump at the CERN SPS. The critical challenge for this experiment is to keep the Standard Model background level negligible. In the beam dump, around 10^11 muons will be produced per second. The muon rate in the spectrometer has to be reduced by at least four orders of magnitude to avoid muon-induced backgrounds. It is demonstrated that a new improved active muon shield may be used to magnetically deflect the muons out of the acceptance of the spectrometer.",
"Deep neural networks (DNNs) show very strong performance on many machine learning problems, but they are very sensitive to the setting of their hyperparameters. Automated hyperparameter optimization methods have recently been shown to yield settings competitive with those found by human experts, but their widespread adoption is hampered by the fact that they require more computational resources than human experts. Humans have one advantage: when they evaluate a poor hyperparameter setting they can quickly detect (after a few steps of stochastic gradient descent) that the resulting network performs poorly and terminate the corresponding evaluation to save time. In this paper, we mimic the early termination of bad runs using a probabilistic model that extrapolates the performance from the first part of a learning curve. Experiments with a broad range of neural network architectures on various prominent object recognition benchmarks show that our resulting approach speeds up state-of-the-art hyperparameter optimization methods for DNNs roughly twofold, enabling them to find DNN settings that yield better performance than those chosen by human experts.",
""
]
} |
1902.06937 | 2916059710 | The key idea of Bayesian optimization is replacing an expensive target function with a cheap surrogate model. By selection of an acquisition function for Bayesian optimization, we trade off between exploration and exploitation. The acquisition function typically depends on the mean and the variance of the surrogate model at a given point. The most common Gaussian process-based surrogate model assumes that the target with fixed parameters is a realization of a Gaussian process. However, often the target function doesn't satisfy this approximation. Here we consider target functions that come from the binomial distribution with the parameter that depends on inputs. Typically we can vary how many Bernoulli samples we obtain during each evaluation. We propose a general Gaussian process model that takes into account Bernoulli outputs. To make things work we consider a simple acquisition function based on Expected Improvement and a heuristic strategy to choose the number of samples at each point thus taking into account precision of the obtained output. | As mentioned above, values of a black box can follow a binomial distribution. This means that exact Bayesian inference fails, since the likelihood is not Gaussian. The same problem arises when adapting Gaussian processes to the task of classification @cite_21 or to robust regression with a Laplace or Cauchy likelihood @cite_13 . | {
"cite_N": [
"@cite_21",
"@cite_13"
],
"mid": [
"2157826563",
"2036084078"
],
"abstract": [
"We provide a comprehensive overview of many recent algorithms for approximate inference in Gaussian process models for probabilistic binary classification. The relationships between several approaches are elucidated theoretically, and the properties of the different algorithms are corroborated by experimental results. We examine both 1) the quality of the predictive distributions and 2) the suitability of the different marginal likelihood approximations for model selection (selecting hyperparameters) and compare to a gold standard based on MCMC. Interestingly, some methods produce good predictive distributions although their marginal likelihood approximations are poor. Strong conclusions are drawn about the methods: The Expectation Propagation algorithm is almost always the method of choice unless the computational budget is very tight. We also extend existing methods in various ways, and provide unifying code implementing all approaches.",
"The variational approximation of posterior distributions by multivariate gaussians has been much less popular in the machine learning community compared to the corresponding approximation by factorizing distributions. This is for a good reason: the gaussian approximation is in general plagued by an O(N^2) number of variational parameters to be optimized, N being the number of random variables. In this letter, we discuss the relationship between the Laplace and the variational approximation, and we show that for models with gaussian priors and factorizing likelihoods, the number of variational parameters is actually O(N). The approach is applied to gaussian process regression with nongaussian likelihoods.",
]
} |
1902.06937 | 2916059710 | The key idea of Bayesian optimization is replacing an expensive target function with a cheap surrogate model. By selection of an acquisition function for Bayesian optimization, we trade off between exploration and exploitation. The acquisition function typically depends on the mean and the variance of the surrogate model at a given point. The most common Gaussian process-based surrogate model assumes that the target with fixed parameters is a realization of a Gaussian process. However, often the target function doesn't satisfy this approximation. Here we consider target functions that come from the binomial distribution with the parameter that depends on inputs. Typically we can vary how many Bernoulli samples we obtain during each evaluation. We propose a general Gaussian process model that takes into account Bernoulli outputs. To make things work we consider a simple acquisition function based on Expected Improvement and a heuristic strategy to choose the number of samples at each point thus taking into account precision of the obtained output. | To use these models, one can approximate the non-Gaussian posterior by a Gaussian distribution. Many approaches are used in this area @cite_11 , to name a few: Markov chain Monte Carlo @cite_21 , the Laplace approximation @cite_17 , mean-field variational inference @cite_2 , and expectation propagation @cite_4 . GP models like the GP classifier, GP counter, or GP regression use different observation likelihoods: Bernoulli, Poisson, Gaussian, binomial, etc. All these distributions are members of the exponential family. The aim of @cite_8 is to show how to create a framework that unifies all existing GP models and makes it easier to create new ones using distributions from the exponential family. | {
"cite_N": [
"@cite_11",
"@cite_4",
"@cite_8",
"@cite_21",
"@cite_2",
"@cite_17"
],
"mid": [
"1663973292",
"1934021597",
"2125901936",
"2157826563",
"2225156818",
"1746819321"
],
"abstract": [
"Christopher M. Bishop. Information Science and Statistics. Springer 2006, 738 pages. As the author writes in the preface of the book, pattern recognition has its origin in engineering, whereas machine learning grew out of computer science. However, these activities can be viewed as two facets of the same field, and they have undergone substantial development over the past years. Bayesian methods are widely used, while graphical models have emerged as a general framework for describing and applying probabilistic models. Similarly, new models based on kernels have had significant impact on both algorithms and applications. This textbook reflects these recent developments while providing a comprehensive introduction to the fields of pattern recognition and machine learning. It is aimed at advanced undergraduate or first year PhD students, as well as researchers and practitioners. It can be considered as an introductory course to the subject. The first four chapters are devoted to the concepts of Probability and Statistics that are needed for reading the rest of the book, so we can imagine that the speed is high in order to get from zero to infinity. I believe that it is better to study the book after a previous course on Probability and Statistics. On the other hand, a basic knowledge of linear algebra and multivariate calculus is assumed. The other chapters give to a classic probabilist or statistician a point of view on some applications that are very interesting but far from his usual world. In all the text the mathematical aspects are at the second level in relation with the ideas and intuitions that the author wants to communicate. The book is supported by a great deal of additional material, including lecture slides as well as the complete set of figures used in it, and the reader is encouraged to visit the book web site for the latest information. So it can be very useful for a course or a talk about the subject.",
"This paper presents a new deterministic approximation technique in Bayesian networks. This method, \"Expectation Propagation,\" unifies two previous techniques: assumed-density filtering, an extension of the Kalman filter, and loopy belief propagation, an extension of belief propagation in Bayesian networks. Loopy belief propagation, because it propagates exact belief states, is useful for a limited class of belief networks, such as those which are purely discrete. Expectation Propagation approximates the belief states by only retaining expectations, such as mean and variance, and iterates until these expectations are consistent throughout the network. This makes it applicable to hybrid networks with discrete and continuous nodes. Experiments with Gaussian mixture models show Expectation Propagation to be convincingly better than methods with similar computational cost: Laplace's method, variational Bayes, and Monte Carlo. Expectation Propagation also provides an efficient algorithm for training Bayes point machine classifiers.",
"A generalized Gaussian process model (GGPM) is a unifying framework that encompasses many existing Gaussian process (GP) models, such as GP regression, classification, and counting. In the GGPM framework, the observation likelihood of the GP model is itself parameterized using the exponential family distribution (EFD). In this paper, we consider efficient algorithms for approximate inference on GGPMs using the general form of the EFD. A particular GP model and its associated inference algorithms can then be formed by changing the parameters of the EFD, thus greatly simplifying its creation for task-specific output domains. We demonstrate the efficacy of this framework by creating several new GP models for regressing to non-negative reals and to real intervals. We also consider a closed-form Taylor approximation for efficient inference on GGPMs, and elaborate on its connections with other model-specific heuristic closed-form approximations. Finally, we present a comprehensive set of experiments to compare approximate inference algorithms on a wide variety of GGPMs.",
"We provide a comprehensive overview of many recent algorithms for approximate inference in Gaussian process models for probabilistic binary classification. The relationships between several approaches are elucidated theoretically, and the properties of the different algorithms are corroborated by experimental results. We examine both 1) the quality of the predictive distributions and 2) the suitability of the different marginal likelihood approximations for model selection (selecting hyperparameters) and compare to a gold standard based on MCMC. Interestingly, some methods produce good predictive distributions although their marginal likelihood approximations are poor. Strong conclusions are drawn about the methods: The Expectation Propagation algorithm is almost always the method of choice unless the computational budget is very tight. We also extend existing methods in various ways, and provide unifying code implementing all approaches.",
"One of the core problems of modern statistics is to approximate difficult-to-compute probability densities. This problem is especially important in Bayesian statistics, which frames all inference about unknown quantities as a calculation involving the posterior density. In this article, we review variational inference (VI), a method from machine learning that approximates probability densities through optimization. VI has been used in many applications and tends to be faster than classical methods, such as Markov chain Monte Carlo sampling. The idea behind VI is to first posit a family of densities and then to find a member of that family which is close to the target density. Closeness is measured by Kullback–Leibler divergence. We review the ideas behind mean-field variational inference, discuss the special case of VI applied to exponential family models, present a full example with a Bayesian mixture of Gaussians, and derive a variant that uses stochastic optimization to scale up to massive data...",
"Gaussian processes (GPs) provide a principled, practical, probabilistic approach to learning in kernel machines. GPs have received increased attention in the machine-learning community over the past decade, and this book provides a long-needed systematic and unified treatment of theoretical and practical aspects of GPs in machine learning. The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics.The book deals with the supervised-learning problem for both regression and classification, and includes detailed algorithms. A wide variety of covariance (kernel) functions are presented and their properties discussed. Model selection is discussed both from a Bayesian and a classical perspective. Many connections to other well-known techniques from machine learning and statistics are discussed, including support-vector machines, neural networks, splines, regularization networks, relevance vector machines and others. Theoretical issues including learning curves and the PAC-Bayesian framework are treated, and several approximation methods for learning with large datasets are discussed. The book contains illustrative examples and exercises, and code and datasets are available on the Web. Appendixes provide mathematical background and a discussion of Gaussian Markov processes."
]
} |
1902.06457 | 2952785670 | Compared to the standard success (coverage) probability, the meta distribution of the signal-to-interference ratio (SIR) provides much more fine-grained information about the network performance. We consider general heterogeneous cellular networks (HCNs) with base station tiers modeled by arbitrary stationary and ergodic non-Poisson point processes. The exact analysis of non-Poisson network models is notoriously difficult, even in terms of the standard success probability, let alone the meta distribution. Hence we propose a simple approach to approximate the SIR meta distribution for non-Poisson networks based on the ASAPPP ("approximate SIR analysis based on the Poisson point process") method. We prove that the asymptotic horizontal gap @math between its standard success probability and that for the Poisson point process exactly characterizes the gap between the @math th moment of the conditional success probability, as the SIR threshold goes to @math . The gap @math allows two simple approximations of the meta distribution for general HCNs: 1) the per-tier approximation by applying the shift @math to each tier and 2) the effective gain approximation by directly shifting the meta distribution for the homogeneous independent Poisson network. Given the generality of the model considered and the fine-grained nature of the meta distribution, these approximations work surprisingly well. | The works in @cite_7 @cite_11 @cite_13 obtained analytically tractable results for the HIP model. For HCNs with non-Poisson deployments, it is often the case that it is hard to perform an exact mathematical analysis of key performance metrics such as the SIR distribution (sometimes called the coverage probability). Even if an exact expression of the SIR distribution exists, it is available in a complex form that does not help gain insights about the performance of the network for different network parameters @cite_22 @cite_21 @cite_23 @cite_18 @cite_10 . | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_7",
"@cite_10",
"@cite_21",
"@cite_23",
"@cite_13",
"@cite_11"
],
"mid": [
"2750003873",
"2076773434",
"2149170915",
"2964267089",
"1584905095",
"1989097819",
"2058477717",
"2059973889"
],
"abstract": [
"The Poisson point process (PPP) has been widely employed to model wireless networks and analyze their performance. The PPP has the property that nodes are conditionally independent from each other. As such, it may not be a suitable model for many networks, where there exists repulsion among the nodes. In order to address this limitation, we adopt a Poisson hardcore process (PHCP), in which no two nodes can be closer than a repulsion radius from one another. We consider two-tier heterogeneous networks, where the spatial distributions of transmitters in the first-tier and the second-tier networks follow a PHCP and a PPP, respectively. To alleviate inter-tier interference, we consider a guard zone for the first-tier network and presume that the second-tier transmitters located in the zone are deactivated. Under this setup, the activated second-tier transmitters form a Poisson hard-core hole process. We first derive exact computable expressions of the coverage probability and introduce a method to efficiently evaluate the expressions. Then, we provide approximations of the coverage probability, which have lower computational complexities. In addition, as a special case, we investigate the coverage probability of single-tier networks by modeling the locations of transmitters as a PHCP.",
"We consider spatial stochastic models of downlink heterogeneous cellular networks (HCNs) with multiple tiers, where the base stations (BSs) of each tier have a particular spatial density, transmission power and path-loss exponent. Prior works on such spatial models of HCNs assume, due to its tractability, that the BSs are deployed according to homogeneous Poisson point processes. This means that the BSs are located independently of each other and their spatial correlation is ignored. In the current paper, we propose two spatial models for the analysis of downlink HCNs, in which the BSs are deployed according to @a-Ginibre point processes. The @a-Ginibre point processes constitute a class of determinantal point processes and account for the repulsion between the BSs. Besides, the degree of repulsion is adjustable according to the value of @a ∈ (0,1]. In one proposed model, the BSs of different tiers are deployed according to mutually independent @a-Ginibre processes, where the @a can take different values for the different tiers. In the other model, all the BSs are deployed according to an @a-Ginibre point process and they are classified into multiple tiers by mutually independent marks. For these proposed models, we derive computable representations for the coverage probability of a typical user, i.e., the probability that the downlink signal-to-interference-plus-noise ratio for the typical user achieves a target threshold. We exhibit the results of some numerical experiments and compare the proposed models and the Poisson based model.",
"Cellular networks are in a major transition from a carefully planned set of large tower-mounted base-stations (BSs) to an irregular deployment of heterogeneous infrastructure elements that often additionally includes micro, pico, and femtocells, as well as distributed antennas. In this paper, we develop a tractable, flexible, and accurate model for a downlink heterogeneous cellular network (HCN) consisting of K tiers of randomly located BSs, where each tier may differ in terms of average transmit power, supported data rate and BS density. Assuming a mobile user connects to the strongest candidate BS, the resulting Signal-to-Interference-plus-Noise-Ratio (SINR) is greater than 1 when in coverage, Rayleigh fading, we derive an expression for the probability of coverage (equivalently outage) over the entire network under both open and closed access, which assumes a strikingly simple closed-form in the high SINR regime and is accurate down to -4 dB even under weaker assumptions. For external validation, we compare against an actual LTE network (for tier 1) with the other K-1 tiers being modeled as independent Poisson Point Processes. In this case as well, our model is accurate to within 1-2 dB. We also derive the average rate achieved by a randomly located mobile and the average load on each tier of BSs. One interesting observation for interference-limited open access networks is that at a given , adding more tiers and/or BSs neither increases nor decreases the probability of coverage or outage when all the tiers have the same target-SINR.",
"The growing complexity of heterogeneous cellular networks (HetNets) has necessitated a variety of user and base station (BS) configurations to be considered for realistic performance evaluation and system design. This is directly reflected in the HetNet simulation models proposed by standardization bodies, such as the 3rd Generation Partnership Project (3GPP). Complementary to these simulation models, stochastic geometry-based approach, modeling the locations of the users, and the @math tiers of BSs as independent and homogeneous Poisson point processes (PPPs), has gained prominence in the past few years. Despite its success in revealing useful insights, this PPP-based @math -tier HetNet model is not rich enough to capture spatial coupling between user and BS locations that exists in real-world HetNet deployments and is included in 3GPP simulation models. In this paper, we demonstrate that modeling a fraction of users and arbitrary number of BS tiers alternatively with a Poisson cluster process (PCP) captures the aforementioned coupling, thus bridging the gap between the 3GPP simulation models and the PPP-based analytic model for HetNets. We further show that the downlink coverage probability of a typical user under maximum signal-to-interference-ratio ( @math ) association can be expressed in terms of the sum-product functionals over PPP, PCP, and its associated offspring point process, which are all characterized as a part of our analysis. We also show that the proposed model converges to the PPP-based HetNet model as the cluster size of the PCPs tends to infinity. Finally, we specialize our analysis based on general PCPs for Thomas and Matern cluster processes. Special instances of the proposed model closely resemble the different configurations for BS and user locations considered in 3GPP simulations.",
"Due to its tractability, a multitier model of mutually independent Poisson point processes (PPPs) for heterogeneous cellular networks (HCNs) has recently been attracting much attention. However, in reality, the locations of the BSs, within each tier and across tiers, are not fully independent. Accordingly, in this paper, we propose two HCN models with inter-tier dependence (Case 1) and intra-tier dependence (Case 2), respectively. In Case 1, the macro-base station (MBS) and the pico-base station (PBS) deployments follow a Poisson point process (PPP) and a Poisson hole process (PHP), respectively. Under this setup and conditioning on a fixed serving distance (distance between a user and its nearest serving BS), we derive bounds on the outage probabilities of both macro and pico users. We also use a fitted Poisson cluster process to approximate the PHP, which is shown to provide a good approximation of the interference and outage statistics. In Case 2, the MBSs and the PBSs follow a PPP and an independent Matern cluster process, respectively. Explicit expressions of the interference and the outage probability are derived first for fixed serving distance and second with random distance, and we derive the outage performance, the per-user capacity, and the area spectral efficiency (ASE) for both cases. It turns out that the proposed Case 2 model is a more appropriate and accurate model for a HCN with hotspot regions than the multitier independent PPP model since the latter underestimates some key performance metrics, such as the per-user capacity and the ASE, by a factor of 1.5 to 2. Overall, the two models proposed provide good tradeoffs between the accuracy, tractability, and practicability.",
"Future mobile networks are visualized as networks that consist of more than one type of base station to cope with rising user demands. Such networks are referred to as heterogeneous networks. There have been various attempts at modeling and optimization of such networks using spatial point processes, some of which are alluded to (later) in this paper. We model a heterogeneous network consisting of two types of base stations by using a particular Poisson cluster process model. The main contributions are two-fold. First, a complete description of the interference in heterogeneous networks is derived in the form of its Laplace functional. Second, using an asymptotic convergence result which was shown in our previous work, we derive the expressions for the mean and variance of the distribution to which the interference converges. The utility of this framework is discussed for both the contributions.",
"Motivated by the ongoing discussion on coordinated multipoint in wireless cellular standard bodies, this paper considers the problem of base station cooperation in the downlink of heterogeneous cellular networks. The focus of this paper is the joint transmission scenario, where an ideal backhaul network allows a set of randomly located base stations, possibly belonging to different network tiers, to jointly transmit data, to mitigate intercell interference and hence improve coverage and spectral efficiency. Using tools from stochastic geometry, an integral expression for the network coverage probability is derived in the scenario where the typical user located at an arbitrary location, i.e., the general user, receives data from a pool of base stations that are selected based on their average received power levels. An expression for the coverage probability is also derived for the typical user located at the point equidistant from three base stations, which we refer to as the worst case user. In the special case where cooperation is limited to two base stations, numerical evaluations illustrate absolute gains in coverage probability of up to 17% for the general user and 24% for the worst case user compared with the noncooperative case. It is also shown that no diversity gain is achieved using noncoherent joint transmission, whereas full diversity gain can be achieved at the receiver if the transmitting base stations have channel state information.",
"The Signal to Interference Plus Noise Ratio (SINR) on a wireless link is an important basis for consideration of outage, capacity, and throughput in a cellular network. It is therefore important to understand the SINR distribution within such networks, and in particular heterogeneous cellular networks, since these are expected to dominate future network deployments . Until recently the distribution of SINR in heterogeneous networks was studied almost exclusively via simulation, for selected scenarios representing pre-defined arrangements of users and the elements of the heterogeneous network such as macro-cells, femto-cells, etc. However, the dynamic nature of heterogeneous networks makes it difficult to design a few representative simulation scenarios from which general inferences can be drawn that apply to all deployments. In this paper, we examine the downlink of a heterogeneous cellular network made up of multiple tiers of transmitters (e.g., macro-, micro-, pico-, and femto-cells) and provide a general theoretical analysis of the distribution of the SINR at an arbitrarily-located user. Using physically realistic stochastic models for the locations of the base stations (BSs) in the tiers, we can compute the general SINR distribution in closed form. We illustrate a use of this approach for a three-tier network by calculating the probability of the user being able to camp on a macro-cell or an open-access (OA) femto-cell in the presence of Closed Subscriber Group (CSG) femto-cells. We show that this probability depends only on the relative densities and transmit powers of the macro- and femto-cells, the fraction of femto-cells operating in OA vs. Closed Subscriber Group (CSG) mode, and on the parameters of the wireless channel model. For an operator considering a femto overlay on a macro network, the parameters of the femto deployment can be selected from a set of universal curves."
]
} |
1902.06457 | 2952785670 | Compared to the standard success (coverage) probability, the meta distribution of the signal-to-interference ratio (SIR) provides much more fine-grained information about the network performance. We consider general heterogeneous cellular networks (HCNs) with base station tiers modeled by arbitrary stationary and ergodic non-Poisson point processes. The exact analysis of non-Poisson network models is notoriously difficult, even in terms of the standard success probability, let alone the meta distribution. Hence we propose a simple approach to approximate the SIR meta distribution for non-Poisson networks based on the ASAPPP ("approximate SIR analysis based on the Poisson point process") method. We prove that the asymptotic horizontal gap @math between its standard success probability and that for the Poisson point process exactly characterizes the gap between the @math th moment of the conditional success probability, as the SIR threshold goes to @math . The gap @math allows two simple approximations of the meta distribution for general HCNs: 1) the per-tier approximation by applying the shift @math to each tier and 2) the effective gain approximation by directly shifting the meta distribution for the homogeneous independent Poisson network. Given the generality of the model considered and the fine-grained nature of the meta distribution, these approximations work surprisingly well. | The meta distribution of the SIR for cellular networks was proposed in @cite_24 , where the focus was on the downlink of the Poisson cellular network. Furthermore, the meta distribution of the SIR was calculated for both the downlink and the uplink of the Poisson cellular network with power control in @cite_28 , for the downlink Poisson cellular network underlaid with a device-to-device (D2D) network in @cite_16 , for the non-orthogonal multiple access (NOMA) network in @cite_20 , and with base station cooperation in @cite_2 . 
For general cellular networks with a multi-slope path loss model, @cite_17 gave a scaling law involving the parameters of BS and user point processes ( e.g. , the density of the point process) that keeps the meta distribution of the SIR the same. For the HIP-based @math -tier HCN, @cite_3 calculated the SIR meta distribution with cell range expansion. | {
"cite_N": [
"@cite_28",
"@cite_3",
"@cite_24",
"@cite_2",
"@cite_16",
"@cite_20",
"@cite_17"
],
"mid": [
"2963262497",
"2962926283",
"1640283668",
"2769047447",
"2605261135",
"2962815534",
"2962780424"
],
"abstract": [
"The meta distribution of the signal-to-interference ratio (SIR) provides fine-grained information about the performance of individual links in a wireless network. This paper focuses on the analysis of the meta distribution of the SIR for both the cellular network uplink and downlink with fractional power control. For the uplink scenario, an approximation of the interfering user point process with a non-homogeneous Poisson point process is used. The moments of the meta distribution for both scenarios are calculated. Some bounds, the analytical expression, the mean local delay, and the beta approximation of the meta distribution are provided. The results give interesting insights into the effect of the power control in both the uplink and downlink. Detailed simulations show that the approximations made in the analysis are well justified.",
"Heterogeneous cellular networks (HCNs) constitute a necessary step in the evolution of cellular networks. In this paper, we apply the signal-to-interference ratio (SIR) meta distribution framework for a refined SIR performance analysis of HCNs, focusing on @math -tier heterogeneous cellular networks based on the homogeneous independent Poisson point process (PPP) model, with range expansion bias (offloading bias) in each tier. Expressions for the @math -th moment of the conditional success probability for both the entire network and each tier are derived, based on which the exact meta distributions and the beta approximations are evaluated and compared. Key performance metrics, including the mean success probability, the variance of the conditional success probability, the mean local delay, and the asymptotic SIR gains of each tier are obtained. The results show that the biases are detrimental to the overall mean success probability of the whole network and that the @math -th moment curve of the conditional success probability of each tier can be tightly approximated by the horizontal shifted versions of the first moment curve of the single-tier PPP network. We also provide lower bounds for the region of the active probabilities of the base stations to keep the mean local delay of each tier finite.",
"The calculation of the SIR distribution at the typical receiver (or, equivalently, the success probability of transmissions over the typical link) in Poisson bipolar and cellular networks with Rayleigh fading is relatively straightforward, but it only provides limited information on the success probabilities of the individual links. This paper focuses on the meta distribution of the SIR, which is the distribution of the conditional success probability @math given the point process, and provides bounds, an exact analytical expression, and a simple approximation for it. The meta distribution provides fine-grained information on the SIR and answers questions such as “What fraction of users in a Poisson cellular network achieve 90% link reliability if the required SIR is 5 dB?” Interestingly, in the bipolar model, if the transmit probability @math is reduced while increasing the network density @math such that the density of concurrent transmitters @math stays constant as @math , @math degenerates to a constant, i.e., all links have exactly the same success probability in the limit, which is the one of the typical link. In contrast, in the cellular case, if the interfering base stations are active independently with probability @math , the variance of @math approaches a non-zero constant when @math is reduced to 0 while keeping the mean success probability constant.",
"The meta distribution provides fine-grained information on the signal-to-interference ratio (SIR) compared with the SIR distribution at the typical user. This paper first derives the meta distribution of the SIR in heterogeneous cellular networks with downlink coordinated multipoint transmission/reception, including joint transmission (JT), dynamic point blanking (DPB), and dynamic point selection/dynamic point blanking (DPS/DPB), for the general typical user and the worst-case user (the typical user located at the Voronoi vertex in a single-tier network). A more general scheme called JT-DPB, which is the combination of JT and DPB, is studied. The moments of the conditional success probability are derived for the calculation of the meta distribution and the mean local delay. An exact analytical expression, the beta approximation, and simulation results of the meta distribution are provided. From the theoretical results, we gain insights on the benefits of different cooperation schemes and the impact of the number of cooperating base stations and other network parameters.",
"We study the performance of device-to-device (D2D) communication underlaying cellular wireless network in terms of the meta distribution of the signal-to-interference ratio (SIR), which is the distribution of the conditional SIR distribution given the locations of the wireless nodes. Modeling D2D transmitters and base stations as Poisson point processes (PPPs), moments of the conditional SIR distribution are derived in order to calculate analytical expressions for the meta distribution and the mean local delay of the typical D2D receiver and cellular downlink user. It turns out that for D2D users, the total interference from the D2D interferers and base stations is equal in distribution to that of a single PPP, while for downlink users, the effect of the interference from the D2D network is more complicated. We also derive the region of transmit probabilities for the D2D users and base stations that result in a finite mean local delay and give a simple inner bound on that region. Finally, the impact of increasing the base station density on the mean local delay, the meta distribution, and the density of users reliably served is investigated with numerical results.",
"We develop an analytical framework to derive the meta distribution and moments of the conditional success probability (CSP), which is defined as success probability for a given realization of the transmitters, in large-scale co-channel uplink and downlink non-orthogonal multiple access (NOMA) networks with one NOMA cluster per cell. The moments of CSP translate to various network performance metrics such as the standard success or signal-to-interference ratio (SIR) coverage probability (which is the 1-st moment), the mean local delay (which is the −1st moment in a static network setting), and the meta distribution (which is the complementary cumulative distribution function of the success or SIR coverage probability and can be approximated by using the 1st and 2nd moments). For the uplink NOMA network, to make the framework tractable, we propose two point process models for the spatial locations of the inter-cell interferers by utilizing the base station (BS) user pair correlation function. We validate the proposed models by comparing the second moment measure of each model with that of the actual point process for the inter-cluster (or inter-cell) interferers obtained via simulations. For downlink NOMA, we derive closed-form solutions for the moments of the CSP, success (or coverage) probability, mean local delay, and meta distribution for the users. As an application of the developed analytical framework, we use the closed-form expressions to optimize the power allocations for downlink NOMA users in order to maximize the success probability of a given NOMA user with and without latency constraints. Closed-form optimal solutions for the transmit powers are obtained for two-user NOMA scenario. We note that maximizing the success probability with latency constraints can significantly impact the optimal power solutions for low SIR thresholds and favor orthogonal multiple access.",
"In this letter, we introduce a general cellular network model, where: 1) users and base stations (BSs) are distributed as two general point processes that may be coupled; 2) pathloss is assumed to follow a multi-slope power-law pathloss model; and 3) fading (power) is assumed to be independent across all wireless links. For this setup, we first obtain a set of contours representing the same meta distribution of signal-to-interference ratio , which is the distribution of the conditional coverage probability given the point process, for different values of the parameters of the pathloss function and BS and user point processes. This general result is then specialized to 3GPP-inspired user and BS configurations obtained by combining Poisson point process and Poisson cluster process."
]
} |
1902.06317 | 2912032057 | Many real-world services can be provided through multiple VNF graphs, corresponding, e.g., to high- and low-quality variants of the service itself. Based on this observation, we extend the concept of service scaling in network orchestration to service shifting, i.e., switching between the VNF graphs implementing the same service. Service shifting can serve multiple goals, from reducing operational costs to reacting to infrastructure problems. Furthermore, it enhances the flexibility of service-level agreements between network operators and third party content providers ("verticals"). In this paper, we introduce and describe the service shifting concept, its benefits, and the associated challenges, with special reference to how service shifting can be integrated within real-world 5G architectures and implementations. We conclude that existing network orchestration frameworks can be easily extended to support service shifting, and its adoption has the potential to make 5G network slices easier for the operators to manage under high-load conditions, while still meeting the verticals' requirements. | A first group of works, including @cite_0 @cite_7 , focus on establishing a link between the new use cases for 5G network, their requirements (e.g., the need to concurrently support multiple vertical services), and network slicing. Specifically, @cite_0 focuses on cloud Radio Access Network (RAN) scenarios, and remarks how network slicing is able to simplify the management of user mobility across access networks and the associated resource allocation decisions. The authors of @cite_7 take the viewpoint of a network operator, and discuss how network slicing can simplify the creation of multiple, virtualized access networks with different speed, latency, and reliability requirements. Taking the same viewpoint, @cite_2 compares the main options for the management of network slices, i.e., provider-managed and tenant-managed slices. | {
"cite_N": [
"@cite_0",
"@cite_7",
"@cite_2"
],
"mid": [
"2604174486",
"2605961225",
""
],
"abstract": [
"5G networks are expected to be able to satisfy users' different QoS requirements. Network slicing is a promising technology for 5G networks to provide services tailored for users' specific QoS demands. Driven by the increased massive wireless data traffic from different application scenarios, efficient resource allocation schemes should be exploited to improve the flexibility of network resource allocation and capacity of 5G networks based on network slicing. Due to the diversity of 5G application scenarios, new mobility management schemes are greatly needed to guarantee seamless handover in network-slicing-based 5G systems. In this article, we introduce a logical architecture for network-slicing-based 5G systems, and present a scheme for managing mobility between different access networks, as well as a joint power and subchannel allocation scheme in spectrum-sharing two-tier systems based on network slicing, where both the co-tier interference and cross-tier interference are taken into account. Simulation results demonstrate that the proposed resource allocation scheme can flexibly allocate network resources between different slices in 5G systems. Finally, several open issues and challenges in network-slicing-based 5G networks are discussed, including network reconstruction, network slicing management, and cooperation with other 5G technologies.",
"We argue for network slicing as an efficient solution that addresses the diverse requirements of 5G mobile networks, thus providing the necessary flexibility and scalability associated with future network implementations. We elaborate on the challenges that emerge when designing 5G networks based on network slicing. We focus on the architectural aspects associated with the coexistence of dedicated as well as shared slices in the network. In particular, we analyze the realization options of a flexible radio access network with focus on network slicing and their impact on the design of 5G mobile networks. In addition to the technical study, this article provides an investigation of the revenue potential of network slicing, where the applications that originate from this concept and the profit capabilities from the network operator�s perspective are put forward.",
""
]
} |
1902.06317 | 2912032057 | Many real-world services can be provided through multiple VNF graphs, corresponding, e.g., to high- and low-quality variants of the service itself. Based on this observation, we extend the concept of service scaling in network orchestration to service shifting, i.e., switching between the VNF graphs implementing the same service. Service shifting can serve multiple goals, from reducing operational costs to reacting to infrastructure problems. Furthermore, it enhances the flexibility of service-level agreements between network operators and third party content providers ("verticals"). In this paper, we introduce and describe the service shifting concept, its benefits, and the associated challenges, with special reference to how service shifting can be integrated within real-world 5G architectures and implementations. We conclude that existing network orchestration frameworks can be easily extended to support service shifting, and its adoption has the potential to make 5G network slices easier for the operators to manage under high-load conditions, while still meeting the verticals' requirements. | A substantial body of works is dedicated to the decisions required in network slicing scenarios, i.e., the network orchestration problem. The work in @cite_10 identifies the unique algorithmic challenges associated with network slicing, including the need to account for different types of constraints -- from end-to-end delays to multi-tenancy and isolation issues. Furthermore, it presents a low-complexity solution concept for real-time network slicing, based on monitoring and forecasting the state of the network, and on an efficient, two-phase, online optimization. The study in @cite_8 focuses on a core network as a service (CNaaS) scenario, where multiple verticals share their virtual EPC (vEPC) instances. 
The high-level objective is to satisfy all the verticals' requirements with the smallest possible number of vEPC instances, hence, the lowest cost for the operator. To this end, the authors resort to cooperative game theory, and study how to build coalitions of verticals sharing the same vEPC instance. | {
"cite_N": [
"@cite_10",
"@cite_8"
],
"mid": [
"2744111766",
"2790027353"
],
"abstract": [
"Network slicing is a technique for flexible resource provisioning in future wireless networks. With the powerful SDN and NFV technologies available, network slices can be quickly deployed and centrally managed, leading to simplified management, better resource utilization, and cost efficiency by commoditization of resources. Departing from the one-type-fits-all design philosophy, future wireless networks will employ the network slicing methodology in order to accommodate applications with widely diverse requirements over the same physical network. On the other hand, deciding how to efficiently allocate, manage, and control the slice resources in real time is very challenging. This article focuses on the algorithmic challenges that emerge in efficient network slicing, necessitating novel techniques from the communities of operation research, networking, and computer science.",
"Many ongoing research activities relevant to 5G mobile systems concern the virtualization of the mobile core network, including the evolved packet core (EPC) elements, aiming for system scalability, elasticity, flexibility, and cost-efficiency. Virtual EPC (vEPC) 5G core will principally rely on some key technologies, such as network function virtualization, software defined networking, and cloud computing, enabling the concept of mobile carrier cloud. The key idea beneath this concept, also known as core network as a service, consists in deploying virtual instances (i.e., virtual machines or containers) of key core network functions [i.e., virtual network functions (VNF) of 4G or 5G], such as the mobility management entity (MME), Serving GateWay (SGW), Packet Data network gateWay (PGW), access and mobility management function (AMF), session management function (SMF), authentication server function (AUSF), and user plane functions, over a federated cloud. In this vein, an efficient VNF placement algorithm is highly needed to sustain the quality of service (QoS) while reducing the deployment cost. Our contribution in this paper is twofold. First, we devise an algorithm that derives the optimal number of virtual instances of 4G (MME, SGW, and PGW) or 5G (AMF, SMF, and AUSF) core network elements to meet the requirements of a specific mobile traffic. Second, we propose an algorithm for the placement of these virtual instances over a federated cloud. While the first algorithm is based on mixed integer linear programming, the second is based on coalition formation game, wherein the aim is to build coalitions of cloud networks to host the virtual instances of the vEPC 5G core elements. The obtained results clearly indicate the advantages of the proposed algorithms in ensuring QoS given a fixed cost for vEPC 5G core deployment, while maximizing the profits of cloud operators."
]
} |
1902.06317 | 2912032057 | Many real-world services can be provided through multiple VNF graphs, corresponding, e.g., to high- and low-quality variants of the service itself. Based on this observation, we extend the concept of service scaling in network orchestration to service shifting, i.e., switching between the VNF graphs implementing the same service. Service shifting can serve multiple goals, from reducing operational costs to reacting to infrastructure problems. Furthermore, it enhances the flexibility of service-level agreements between network operators and third party content providers ("verticals"). In this paper, we introduce and describe the service shifting concept, its benefits, and the associated challenges, with special reference to how service shifting can be integrated within real-world 5G architectures and implementations. We conclude that existing network orchestration frameworks can be easily extended to support service shifting, and its adoption has the potential to make 5G network slices easier for the operators to manage under high-load conditions, while still meeting the verticals' requirements. | Many works, including @cite_14 @cite_13 seek to jointly make the decisions required for network orchestration, i.e., VNF placement, VNF resource assignment, and traffic routing. In both @cite_14 @cite_13 , the rationale is that such decisions impact each other, and it is thus necessary to account for their interaction. The two works have different underlying assumptions (as an example, the CPU assigned to each VNF is static in @cite_14 and dynamic in @cite_13 ) and use different methodologies (namely, graph theory in @cite_14 and queuing theory in @cite_13 ). Finally, several works propose algorithmic approaches tailored to a specific application of network slicing: examples include @cite_3 , which focuses on Internet-of-things (IoT) scenarios and seeks to make energy-efficient orchestration decisions. | {
"cite_N": [
"@cite_14",
"@cite_13",
"@cite_3"
],
"mid": [
"2963926316",
"2792251914",
"2895100588"
],
"abstract": [
"To adapt to continuously changing workloads in networks, components of the running network services may need to be replicated ( scaling the network service) and allocated to physical resources ( placement ) dynamically, also necessitating dynamic re-routing of flows between service components. In this paper, we propose joint optimization of scaling, placement, and routing (JASPER), a fully automated approach to jointly optimizing scaling, placement, and routing for complex network services, consisting of multiple (virtualized) components. JASPER handles multiple network services that share the same substrate network; services can be dynamically added or removed and dynamic workload changes are handled. Our approach lets service designers specify their services on a high level of abstraction using service templates . JASPER automatically makes scaling, placement and routing decisions, enabling quick reaction to changes. We formalize the problem, analyze its complexity, and develop two algorithms to solve it. Extensive empirical results show the applicability and effectiveness of the proposed approach.",
"Thanks to network slicing, 5G networks will support a variety of services in a flexible and swift manner. In this context, we seek to make high-quality, joint optimal decisions concerning the placement of VNFs across the physical hosts for realizing the services, and the allocation of CPU resources in VNFs sharing a host. To this end, we present a queuing-based system model, accounting for all the entities involved in 5G networks. Then, we propose a fast and efficient solution strategy yielding near-optimal decisions. We evaluate our approach in multiple scenarios that well represent real-world services, and find it to consistently outperform state-of-the-art alternatives and closely match the optimum.",
"The next-generation mobile network anticipates integrated heterogeneous fronthaul and backhaul technologies referred to as a unified crosshaul architecture. The crosshaul enables a flexible and cost-efficient infrastructure for handling mobile data tsunami from dense Internet of things (IoT). However, stabilization, energy efficiency, and latency have not been jointly considered in the optimization of crosshaul performance. To overcome these issues, we propose an orchestration scheme referred to as the stabilized green crosshaul orchestration (SGCO). SGCO utilizes a Lyapunov-theory-based drift-plus-penalty policy to determine the optimal amount of offloaded data that should be processed either at the eastbound or westbound computing platforms to minimize energy consumption. To achieve system stability, the cache buffer is considered as the main constraint in developing the optimization process. Moreover, the amount of offloaded data transmitted via crosshaul links is selected by adopting the binary min-knapsack problem. Accordingly, a lightweight heuristic algorithm is proposed. As the cache buffer is stabilized and the computations are controlled, the SGCO ensures adjustable computing latency threshold for various IoT services. The performance analysis shows that the proposed SGCO scheme exposes effective energy consumption compared to other existing schemes while maintaining system stability considering latency."
]
} |
1902.06317 | 2912032057 | Many real-world services can be provided through multiple VNF graphs, corresponding, e.g., to high- and low-quality variants of the service itself. Based on this observation, we extend the concept of service scaling in network orchestration to service shifting, i.e., switching between the VNF graphs implementing the same service. Service shifting can serve multiple goals, from reducing operational costs to reacting to infrastructure problems. Furthermore, it enhances the flexibility of service-level agreements between network operators and third party content providers ("verticals"). In this paper, we introduce and describe the service shifting concept, its benefits, and the associated challenges, with special reference to how service shifting can be integrated within real-world 5G architectures and implementations. We conclude that existing network orchestration frameworks can be easily extended to support service shifting, and its adoption has the potential to make 5G network slices easier for the operators to manage under high-load conditions, while still meeting the verticals' requirements. | Especially relevant to our study are those works that take into account reliability and survivability in 5G networks. Among these, @cite_9 focuses on a vehicular scenario where multiple access networks are available, e.g., mmWave and Wi-Fi. In such a context, the reliability of individual wireless links is estimated, and mission-critical traffic is routed through the link or links whose aggregate reliability matches the requirements. @cite_1 studies how to combine unreliable individual VNFs into a reliable service chain. The basic approach is to enhance reliability through duplication, e.g., deploying two instances of the same VNF so that if one fails the other can take over. However, this can lead to unused resources and higher-than-necessary cost. 
To counter this, the authors formulate an optimization problem yielding the minimum-cost duplication decisions consistent with reliability targets. @cite_4 takes an opposite approach to a similar problem, and aims at augmenting the VNF graph, e.g., by duplicating some parts thereof, to obtain the required reliability level. | {
"cite_N": [
"@cite_9",
"@cite_1",
"@cite_4"
],
"mid": [
"2793438365",
"2790045047",
"2792336262"
],
"abstract": [
"Network softwarization is a major paradigm shift, which enables programmable and flexible system operation in challenging use cases. In the fifth-generation (5G) mobile networks, the more advanced scenarios envision transfer of high-rate mission-critical traffic. Achieving end-to-end reliability of these stringent sessions requires support from multiple radio access technologies and calls for dynamic orchestration of resources across both radio access and core network segments. Emerging 5G systems can already offer network slicing, multi-connectivity, and end-to-end quality provisioning mechanisms for critical data transfers within a single software-controlled network. Whereas these individual enablers are already in active development, a holistic perspective on how to construct a unified, service-ready system as well as understand the implications of critical traffic on serving other user sessions is not yet available. Against this background, this paper first introduces a softwarized 5G architecture for end-to-end reliability of the mission-critical traffic. Then, a mathematical framework is contributed to model the process of critical session transfers in a softwarized 5G access network, and the corresponding impact on other user sessions is quantified. Finally, a prototype hardware implementation is completed to investigate the practical effects of supporting mission-critical data in a softwarized 5G core network, as well as substantiate the key system design choices.",
"Network Function Virtualization (NFV) has revolutionized service provisioning in cloud datacenter networks. It enables the complete decoupling of Network Functions (NFs) from the physical hardware middle boxes that network operators deploy for implementing service-specific and strictly ordered NF chains. Precisely, NFV allows for dispatching NFs as instances of plain software called virtual network functions (VNFs) running on virtual machines hosted by one or more industry standard physical machines. Nevertheless, NF softwarization introduces processing vulnerability ( e.g. , failures caused by hardware or software, and so on). Since any failure of VNFs could break down an entire service chain, thus interrupting the service, the functionality of an NFV-enabled network will require a higher reliability compared with traditional networks. This paper encloses an in-depth investigation of a reliability-aware joint VNF chain placement and flow routing optimization. In order to guarantee the required reliability, an incremental approach is proposed to determine the number of required VNF backups. Through illustration, it is shown herein that the formulated single path routing model can be easily extended to support resource sharing between adjacent backup VNF instances. This paper advocates the absolute existence of a share-resource-based VNF assignment strategy that is capable of trading off all of the reliability, bandwidth, and computing resources consumption of a given service chain. A heuristic is proposed to work around the complexity of the presently formulated integer linear programming (ILP). Thorough numerical analysis and simulations are conducted in order to verify and assert the validity, correctness, and effectiveness of this proposed heuristic reflecting its ability to achieve very close results to those obtained through the resolution of the complex ILP within a negligible amount of time. 
Above and beyond, the proposed resource-sharing-based VNF placement scheme outperforms existing resource-sharing agnostic schemes by 15.6% and 14.7% in terms of bandwidth and CPU utilization respectively.",
"A key challenge in network virtualization is to efficiently map a virtual network (VN) on a substrate network (SN), while accounting for possible substrate failures. This is known as the survivable VN embedding (SVNE) problem. The state-of-the-art literature has studied the SVNE problem from infrastructure providers’ (InPs’) perspective, i.e., provisioning backup resources in the SN. A rather unexplored solution spectrum is to augment the VN with sufficient spare backup capacity to survive substrate failures and embed the resulting VN accordingly. Such augmentation enables InPs to offload failure recovery decisions to the VN operator, thus providing more flexible VN management. In this paper, we study the problem of jointly optimizing spare capacity allocation in a VN and embedding the VN to guarantee full bandwidth in the presence of multiple substrate link failures. We formulate the optimal solution to this problem as a quadratic integer program that we transform into an integer linear program. We also propose a heuristic algorithm to solve larger instances of the problem. Based on analytical study and simulation, our key findings are: 1) provisioning shared backup resources in the VN can yield 33 more resource efficient embedding compared to doing the same at the SN level and 2) our heuristic allocates 21 extra resources compared to the optimal, while executing several orders of magnitude faster."
]
} |
1902.06317 | 2912032057 | Many real-world services can be provided through multiple VNF graphs, corresponding, e.g., to high- and low-quality variants of the service itself. Based on this observation, we extend the concept of service scaling in network orchestration to service shifting, i.e., switching between the VNF graphs implementing the same service. Service shifting can serve multiple goals, from reducing operational costs to reacting to infrastructure problems. Furthermore, it enhances the flexibility of service-level agreements between network operators and third party content providers ("verticals"). In this paper, we introduce and describe the service shifting concept, its benefits, and the associated challenges, with special reference to how service shifting can be integrated within real-world 5G architectures and implementations. We conclude that existing network orchestration frameworks can be easily extended to support service shifting, and its adoption has the potential to make 5G network slices easier for the operators to manage under high-load conditions, while still meeting the verticals' requirements. | With the exception of @cite_4 , all the above works assume that VNF graphs are given and immutable; furthermore, @cite_4 itself envisions to perform some operations on the one VNF graph given as an input, as opposed to having multiple VNF graphs providing the same service. It is also worth remarking that our service shifting approach can be used to pursue any goal, be it cost minimization (as in @cite_8 @cite_14 @cite_5 @cite_3 ), reliability survivability (as in @cite_9 @cite_1 @cite_4 ), or a combination of the two. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_3",
"@cite_5"
],
"mid": [
"2963926316",
"2792336262",
"2790027353",
"2793438365",
"2790045047",
"2895100588",
"2896316144"
],
"abstract": [
"To adapt to continuously changing workloads in networks, components of the running network services may need to be replicated ( scaling the network service) and allocated to physical resources ( placement ) dynamically, also necessitating dynamic re-routing of flows between service components. In this paper, we propose joint optimization of scaling, placement, and routing (JASPER), a fully automated approach to jointly optimizing scaling, placement, and routing for complex network services, consisting of multiple (virtualized) components. JASPER handles multiple network services that share the same substrate network; services can be dynamically added or removed and dynamic workload changes are handled. Our approach lets service designers specify their services on a high level of abstraction using service templates . JASPER automatically makes scaling, placement and routing decisions, enabling quick reaction to changes. We formalize the problem, analyze its complexity, and develop two algorithms to solve it. Extensive empirical results show the applicability and effectiveness of the proposed approach.",
"A key challenge in network virtualization is to efficiently map a virtual network (VN) on a substrate network (SN), while accounting for possible substrate failures. This is known as the survivable VN embedding (SVNE) problem. The state-of-the-art literature has studied the SVNE problem from infrastructure providers’ (InPs’) perspective, i.e., provisioning backup resources in the SN. A rather unexplored solution spectrum is to augment the VN with sufficient spare backup capacity to survive substrate failures and embed the resulting VN accordingly. Such augmentation enables InPs to offload failure recovery decisions to the VN operator, thus providing more flexible VN management. In this paper, we study the problem of jointly optimizing spare capacity allocation in a VN and embedding the VN to guarantee full bandwidth in the presence of multiple substrate link failures. We formulate the optimal solution to this problem as a quadratic integer program that we transform into an integer linear program. We also propose a heuristic algorithm to solve larger instances of the problem. Based on analytical study and simulation, our key findings are: 1) provisioning shared backup resources in the VN can yield 33 more resource efficient embedding compared to doing the same at the SN level and 2) our heuristic allocates 21 extra resources compared to the optimal, while executing several orders of magnitude faster.",
"Many ongoing research activities relevant to 5G mobile systems concern the virtualization of the mobile core network, including the evolved packet core (EPC) elements, aiming for system scalability, elasticity, flexibility, and cost-efficiency. Virtual EPC (vEPC) 5G core will principally rely on some key technologies, such as network function virtualization, software defined networking, and cloud computing, enabling the concept of mobile carrier cloud. The key idea beneath this concept, also known as core network as a service, consists in deploying virtual instances (i.e., virtual machines or containers) of key core network functions [i.e., virtual network functions (VNF) of 4G or 5G], such as the mobility management entity (MME), Serving GateWay (SGW), Packet Data network gateWay (PGW), access and mobility management function (AMF), session management function (SMF), authentication server function (AUSF), and user plane functions, over a federated cloud. In this vein, an efficient VNF placement algorithm is highly needed to sustain the quality of service (QoS) while reducing the deployment cost. Our contribution in this paper is twofold. First, we devise an algorithm that derives the optimal number of virtual instances of 4G (MME, SGW, and PGW) or 5G (AMF, SMF, and AUSF) core network elements to meet the requirements of a specific mobile traffic. Second, we propose an algorithm for the placement of these virtual instances over a federated cloud. While the first algorithm is based on mixed integer linear programming, the second is based on coalition formation game, wherein the aim is to build coalitions of cloud networks to host the virtual instances of the vEPC 5G core elements. The obtained results clearly indicate the advantages of the proposed algorithms in ensuring QoS given a fixed cost for vEPC 5G core deployment, while maximizing the profits of cloud operators.",
"Network softwarization is a major paradigm shift, which enables programmable and flexible system operation in challenging use cases. In the fifth-generation (5G) mobile networks, the more advanced scenarios envision transfer of high-rate mission-critical traffic. Achieving end-to-end reliability of these stringent sessions requires support from multiple radio access technologies and calls for dynamic orchestration of resources across both radio access and core network segments. Emerging 5G systems can already offer network slicing, multi-connectivity, and end-to-end quality provisioning mechanisms for critical data transfers within a single software-controlled network. Whereas these individual enablers are already in active development, a holistic perspective on how to construct a unified, service-ready system as well as understand the implications of critical traffic on serving other user sessions is not yet available. Against this background, this paper first introduces a softwarized 5G architecture for end-to-end reliability of the mission-critical traffic. Then, a mathematical framework is contributed to model the process of critical session transfers in a softwarized 5G access network, and the corresponding impact on other user sessions is quantified. Finally, a prototype hardware implementation is completed to investigate the practical effects of supporting mission-critical data in a softwarized 5G core network, as well as substantiate the key system design choices.",
"Network Function Virtualization (NFV) has revolutionized service provisioning in cloud datacenter networks. It enables the complete decoupling of Network Functions (NFs) from the physical hardware middle boxes that network operators deploy for implementing service-specific and strictly ordered NF chains. Precisely, NFV allows for dispatching NFs as instances of plain software called virtual network functions (VNFs) running on virtual machines hosted by one or more industry standard physical machines. Nevertheless, NF softwarization introduces processing vulnerability ( e.g. , failures caused by hardware or software, and so on). Since any failure of VNFs could break down an entire service chain, thus interrupting the service, the functionality of an NFV-enabled network will require a higher reliability compared with traditional networks. This paper encloses an in-depth investigation of a reliability-aware joint VNF chain placement and flow routing optimization. In order to guarantee the required reliability, an incremental approach is proposed to determine the number of required VNF backups. Through illustration, it is shown herein that the formulated single path routing model can be easily extended to support resource sharing between adjacent backup VNF instances. This paper advocates the absolute existence of a share-resource-based VNF assignment strategy that is capable of trading off all of the reliability, bandwidth, and computing resources consumption of a given service chain. A heuristic is proposed to work around the complexity of the presently formulated integer linear programming (ILP). Thorough numerical analysis and simulations are conducted in order to verify and assert the validity, correctness, and effectiveness of this proposed heuristic reflecting its ability to achieve very close results to those obtained through the resolution of the complex ILP within a negligible amount of time. 
Above and beyond, the proposed resource-sharing-based VNF placement scheme outperforms existing resource-sharing agnostic schemes by 15.6% and 14.7% in terms of bandwidth and CPU utilization respectively.",
"The next-generation mobile network anticipates integrated heterogeneous fronthaul and backhaul technologies referred to as a unified crosshaul architecture. The crosshaul enables a flexible and cost-efficient infrastructure for handling mobile data tsunami from dense Internet of things (IoT). However, stabilization, energy efficiency, and latency have not been jointly considered in the optimization of crosshaul performance. To overcome these issues, we propose an orchestration scheme referred to as the stabilized green crosshaul orchestration (SGCO). SGCO utilizes a Lyapunov-theory-based drift-plus-penalty policy to determine the optimal amount of offloaded data that should be processed either at the eastbound or westbound computing platforms to minimize energy consumption. To achieve system stability, the cache buffer is considered as the main constraint in developing the optimization process. Moreover, the amount of offloaded data transmitted via crosshaul links is selected by adopting the binary min-knapsack problem. Accordingly, a lightweight heuristic algorithm is proposed. As the cache buffer is stabilized and the computations are controlled, the SGCO ensures adjustable computing latency threshold for various IoT services. The performance analysis shows that the proposed SGCO scheme exposes effective energy consumption compared to other existing schemes while maintaining system stability considering latency.",
"As a crucial step moving towards the next generation of super-fast wireless networks, recently the fifth-generation (5G) mobile wireless networks have received a plethora of research attention and efforts from both the academia and industry. The 5G mobile wireless networks are expected to provision distinct delay-bounded quality of service (QoS) guarantees for a wide range of multimedia services, applications, and users with extremely diverse requirements. However, how to efficiently support multimedia services over 5G wireless networks has imposed many new challenging issues not encountered before in the fourth-generation wireless networks. To overcome these new challenges, we propose a novel network-function virtualization and mobile-traffic offloading based software-defined network (SDN) architecture for heterogeneous statistical QoS provisioning over 5G multimedia mobile wireless networks. Specifically, we develop the novel SDN architecture to scalably virtualize wireless resources and physical infrastructures, based on user’s locations and requests, into three types of virtual wireless networks: virtual networks without offloading, virtual networks with WiFi offloading, and virtual networks with device-to-device offloading. We derive the optimal transmit power allocation schemes to maximize the aggregate effective capacity, overall spectrum efficiency, and other related performances for these three types of virtual wireless networks. We also derive the scalability improvements of our proposed three integrated virtual networks. Finally, we validate and evaluate our developed schemes through numerical analyses, showing significant performance improvements as compared with other existing schemes."
]
} |
1902.06550 | 2914118323 | While modern convolutional neural networks achieve outstanding accuracy on many image classification tasks, they are, compared to humans, much more sensitive to image degradation. Here, we describe a variant of Batch Normalization, LocalNorm, that regularizes the normalization layer in the spirit of Dropout while dynamically adapting to the local image intensity and contrast at test-time. We show that the resulting deep neural networks are much more resistant to noise-induced image degradation, improving accuracy by up to three times, while achieving the same or slightly better accuracy on non-degraded classical benchmarks. In computational terms, LocalNorm adds negligible training cost and little or no cost at inference time, and can be applied to already-trained networks in a straightforward manner. | Lighting and noise conditions can vary wildly over images, and various pre-processing steps are typically included in an image-processing pipeline to adjust color and reduce noise. In traditional computer vision, different filters and probabilistic models for image denoising are applied @cite_4 . Modern approaches for noise removal include deep neural networks, like Noise2Noise @cite_16 , DURR @cite_13 , and a denoising AutoEncoder @cite_22 where the network is trained on a combination of noisy and original images to improve its performance on noisy datasets, thus increasing the network's robustness to image noise and also training a better classifier. However, as noted in @cite_9 , training DNNs on images that include one type of noise does not generalize to other types of noise. | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_9",
"@cite_16",
"@cite_13"
],
"mid": [
"199564985",
"2145094598",
"2888339491",
"2793146153",
"2804167378"
],
"abstract": [
"Removing noise from the original signal is still a challenging problem for researchers. There have been several published algorithms and each approach has its assumptions, advantages, and limitations. This paper presents a review of some significant work in the area of image denoising. After a brief introduction, some popular approaches are classified into different groups and an overview of various algorithms and analysis is provided. Insights and potential future trends in the area of denoising are also discussed.",
"We explore an original strategy for building deep networks, based on stacking layers of denoising autoencoders which are trained locally to denoise corrupted versions of their inputs. The resulting algorithm is a straightforward variation on the stacking of ordinary autoencoders. It is however shown on a benchmark of classification problems to yield significantly lower classification error, thus bridging the performance gap with deep belief networks (DBN), and in several cases surpassing it. Higher level representations learnt in this purely unsupervised fashion also help boost the performance of subsequent SVM classifiers. Qualitative experiments show that, contrary to ordinary autoencoders, denoising autoencoders are able to learn Gabor-like edge detectors from natural image patches and larger stroke detectors from digit images. This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations.",
"We compare the robustness of humans and current convolutional deep neural networks (DNNs) on object recognition under twelve different types of image degradations. First, using three well known DNNs (ResNet-152, VGG-19, GoogLeNet) we find the human visual system to be more robust to nearly all of the tested image manipulations, and we observe progressively diverging classification error-patterns between humans and DNNs when the signal gets weaker. Secondly, we show that DNNs trained directly on distorted images consistently surpass human performance on the exact distortion types they were trained on, yet they display extremely poor generalisation abilities when tested on other distortion types. For example, training on salt-and-pepper noise does not imply robustness on uniform white noise and vice versa. Thus, changes in the noise distribution between training and testing constitute a crucial challenge to deep learning vision systems that can be systematically addressed in a lifelong machine learning approach. Our new dataset consisting of 83K carefully measured human psychophysical trials provides a useful reference for lifelong robustness against image degradations set by the human visual system.",
"We apply basic statistical reasoning to signal reconstruction by machine learning -- learning to map corrupted observations to clean signals -- with a simple and powerful conclusion: under certain common circumstances, it is possible to learn to restore signals without ever observing clean ones, at performance close or equal to training using clean exemplars. We show applications in photographic noise removal, denoising of synthetic Monte Carlo images, and reconstruction of MRI scans from undersampled inputs, all based on only observing corrupted data.",
"In this paper, we propose a new control framework called the moving endpoint control to restore images corrupted by different degradation levels in one model. The proposed control problem contains a restoration dynamics which is modeled by an RNN. The moving endpoint, which is essentially the terminal time of the associated dynamics, is determined by a policy network. We call the proposed model the dynamically unfolding recurrent restorer (DURR). Numerical experiments show that DURR is able to achieve state-of-the-art performances on blind image denoising and JPEG image deblocking. Furthermore, DURR can well generalize to images with higher degradation levels that are not included in the training stage."
]
} |
1902.06550 | 2914118323 | While modern convolutional neural networks achieve outstanding accuracy on many image classification tasks, they are, compared to humans, much more sensitive to image degradation. Here, we describe a variant of Batch Normalization, LocalNorm, that regularizes the normalization layer in the spirit of Dropout while dynamically adapting to the local image intensity and contrast at test-time. We show that the resulting deep neural networks are much more resistant to noise-induced image degradation, improving accuracy by up to three times, while achieving the same or slightly better accuracy on non-degraded classical benchmarks. In computational terms, LocalNorm adds negligible training cost and little or no cost at inference time, and can be applied to already-trained networks in a straightforward manner. | Normalization is typically used to rescale the dynamic range of an image. This idea has also been applied to deep learning in various guises, and notably Batch Normalization (BatchNorm) @cite_21 was introduced to renormalize the mean and standard deviation of neural activations using an end-to-end trainable parametrization. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2949117887"
],
"abstract": [
"Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters."
]
} |
1902.06550 | 2914118323 | While modern convolutional neural networks achieve outstanding accuracy on many image classification tasks, they are, compared to humans, much more sensitive to image degradation. Here, we describe a variant of Batch Normalization, LocalNorm, that regularizes the normalization layer in the spirit of Dropout while dynamically adapting to the local image intensity and contrast at test-time. We show that the resulting deep neural networks are much more resistant to noise-induced image degradation, improving accuracy by up to three times, while achieving the same or slightly better accuracy on non-degraded classical benchmarks. In computational terms, LocalNorm adds negligible training cost and little or no cost at inference time, and can be applied to already-trained networks in a straightforward manner. | Group Normalization (GroupNorm) @cite_0 was designed to enable the use of larger batches. In general, the use of larger batch sizes improves the generalization ability of the network and accelerates the training process @cite_12 @cite_2 . Large batch sizes, however, are typically limited by the locally available computational resources. Group normalization computes summarizing statistics only over a subset of channels (the group; Figure(c)), normalizing the computational group along the @math axes. The computational group for GroupNorm is thus defined as @math . Instance Normalization (InstaNorm) @cite_15 @cite_20 was created for style transfer and quality improvement. InstaNorm normalizes pixels of one sample in a single channel (Figure(d)). The InstaNorm computational group is defined as @math . | {
"cite_N": [
"@cite_0",
"@cite_2",
"@cite_15",
"@cite_12",
"@cite_20"
],
"mid": [
"2795783309",
"2622263826",
"2502312327",
"2963702144",
"2572730214"
],
"abstract": [
"Batch Normalization (BN) is a milestone technique in the development of deep learning, enabling various networks to train. However, normalizing along the batch dimension introduces problems --- BN's error increases rapidly when the batch size becomes smaller, caused by inaccurate batch statistics estimation. This limits BN's usage for training larger models and transferring features to computer vision tasks including detection, segmentation, and video, which require small batches constrained by memory consumption. In this paper, we present Group Normalization (GN) as a simple alternative to BN. GN divides the channels into groups and computes within each group the mean and variance for normalization. GN's computation is independent of batch sizes, and its accuracy is stable in a wide range of batch sizes. On ResNet-50 trained in ImageNet, GN has 10.6% lower error than its BN counterpart when using a batch size of 2; when using typical batch sizes, GN is comparably good with BN and outperforms other normalization variants. Moreover, GN can be naturally transferred from pre-training to fine-tuning. GN can outperform its BN-based counterparts for object detection and segmentation in COCO, and for video classification in Kinetics, showing that GN can effectively replace the powerful BN in a variety of tasks. GN can be easily implemented by a few lines of code in modern libraries.",
"Deep learning thrives with large neural networks and large datasets. However, larger networks and larger datasets result in longer training times that impede research and development progress. Distributed synchronous SGD offers a potential solution to this problem by dividing SGD minibatches over a pool of parallel workers. Yet to make this scheme efficient, the per-worker workload must be large, which implies nontrivial growth in the SGD minibatch size. In this paper, we empirically show that on the ImageNet dataset large minibatches cause optimization difficulties, but when these are addressed the trained networks exhibit good generalization. Specifically, we show no loss of accuracy when training with large minibatch sizes up to 8192 images. To achieve this result, we adopt a hyper-parameter-free linear scaling rule for adjusting learning rates as a function of minibatch size and develop a new warmup scheme that overcomes optimization challenges early in training. With these simple techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of 8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using commodity hardware, our implementation achieves 90% scaling efficiency when moving from 8 to 256 GPUs. Our findings enable training visual recognition models on internet-scale data with high efficiency.",
"In this paper we revisit the fast stylization method introduced in Ulyanov et al. (2016). We show how a small change in the stylization architecture results in a significant qualitative improvement in the generated images. The change is limited to swapping batch normalization with instance normalization, and to apply the latter both at training and testing times. The resulting method can be used to train high-performance architectures for real-time image generation. The code is made available on github at this https URL. Full paper can be found at arXiv:1701.02096.",
"It is common practice to decay the learning rate. Here we show one can usually obtain the same learning curve on both training and test sets by instead increasing the batch size during training. This procedure is successful for stochastic gradient descent (SGD), SGD with momentum, Nesterov momentum, and Adam. It reaches equivalent test accuracies after the same number of training epochs, but with fewer parameter updates, leading to greater parallelism and shorter training times. We can further reduce the number of parameter updates by increasing the learning rate @math and scaling the batch size @math . Finally, one can increase the momentum coefficient @math and scale @math , although this tends to slightly reduce the test accuracy. Crucially, our techniques allow us to repurpose existing training schedules for large batch training with no hyper-parameter tuning. We train Inception-ResNet-V2 on ImageNet to @math validation accuracy in under 2500 parameter updates, efficiently utilizing training batches of 65536 images.",
"The recent work of , who characterized the style of an image by the statistics of convolutional neural network filters, ignited a renewed interest in the texture generation and image stylization problems. While their image generation technique uses a slow optimization process, recently several authors have proposed to learn generator neural networks that can produce similar outputs in one quick forward pass. While generator networks are promising, they are still inferior in visual quality and diversity compared to generation-by-optimization. In this work, we advance them in two significant ways. First, we introduce an instance normalization module to replace batch normalization with significant improvements to the quality of image stylization. Second, we improve diversity by introducing a new learning formulation that encourages generators to sample unbiasedly from the Julesz texture ensemble, which is the equivalence class of all images characterized by certain filter responses. Together, these two improvements take feed forward texture synthesis and image stylization much closer to the quality of generation-via-optimization, while retaining the speed advantage."
]
} |
1902.06550 | 2914118323 | While modern convolutional neural networks achieve outstanding accuracy on many image classification tasks, they are, compared to humans, much more sensitive to image degradation. Here, we describe a variant of Batch Normalization, LocalNorm, that regularizes the normalization layer in the spirit of Dropout while dynamically adapting to the local image intensity and contrast at test-time. We show that the resulting deep neural networks are much more resistant to noise-induced image degradation, improving accuracy by up to three times, while achieving the same or slightly better accuracy on non-degraded classical benchmarks. In computational terms, LocalNorm adds negligible training cost and little or no cost at inference time, and can be applied to already-trained networks in a straightforward manner. | Switchable Normalization (SwitchNorm) @cite_17 was proposed as the linear combination of BatchNorm, LayerNorm and InstaNorm: in the SwitchNorm layer, the relative weighing of each kind of normalization method is adjusted during the training process. This allows the network to learn the right type of normalization at the right place in the network to improve performance; this does come however at the expense of a sizable increase in parameters and computation. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2811135961"
],
"abstract": [
"We address a learning-to-normalize problem by proposing Switchable Normalization (SN), which learns to select different normalizers for different normalization layers of a deep neural network. SN employs three distinct scopes to compute statistics (means and variances) including a channel, a layer, and a minibatch. SN switches between them by learning their importance weights in an end-to-end manner. It has several good properties. First, it adapts to various network architectures and tasks (see Fig.1). Second, it is robust to a wide range of batch sizes, maintaining high performance even when small minibatch is presented (e.g. 2 images/GPU). Third, SN does not have sensitive hyper-parameter, unlike group normalization that searches the number of groups as a hyper-parameter. Without bells and whistles, SN outperforms its counterparts on various challenging benchmarks, such as ImageNet, COCO, CityScapes, ADE20K, and Kinetics. Analyses of SN are also presented. We hope SN will help ease the usage and understand the normalization techniques in deep learning. The code of SN has been made available in this https URL."
]
} |
1902.06158 | 2949305370 | Proximal gradient method has been playing an important role to solve many machine learning tasks, especially for the nonsmooth problems. However, in some machine learning problems such as the bandit model and the black-box learning problem, proximal gradient method could fail because the explicit gradients of these problems are difficult or infeasible to obtain. The gradient-free (zeroth-order) method can address these problems because only the objective function values are required in the optimization. Recently, the first zeroth-order proximal stochastic algorithm was proposed to solve the nonconvex nonsmooth problems. However, its convergence rate is @math for the nonconvex problems, which is significantly slower than the best convergence rate @math of the zeroth-order stochastic algorithm, where @math is the iteration number. To fill this gap, in the paper, we propose a class of faster zeroth-order proximal stochastic methods with the variance reduction techniques of SVRG and SAGA, which are denoted as ZO-ProxSVRG and ZO-ProxSAGA, respectively. In theoretical analysis, we address the main challenge that an unbiased estimate of the true gradient does not hold in the zeroth-order case, which was required in previous theoretical analysis of both SVRG and SAGA. Moreover, we prove that both ZO-ProxSVRG and ZO-ProxSAGA algorithms have @math convergence rates. Finally, the experimental results verify that our algorithms have a faster convergence rate than the existing zeroth-order proximal stochastic algorithm. | The above zeroth-order methods mainly focus on the (strongly) convex problems. In fact, there exist many nonconvex machine learning tasks, whose explicit gradients are not available, such as the nonconvex black-box learning problems . Thus, several recent works have begun to study the zeroth-order stochastic methods for the nonconvex optimization. 
For example, @cite_4 proposed the randomized stochastic gradient-free (RSGF) method, i.e., a zeroth-order stochastic gradient method. To accelerate optimization, more recently, @cite_8 @cite_5 proposed the zeroth-order stochastic variance reduction gradient (ZO-SVRG) methods. Moreover, to solve the large-scale machine learning problems, some asynchronous parallel stochastic zeroth-order algorithms have been proposed in . | {
"cite_N": [
"@cite_5",
"@cite_4",
"@cite_8"
],
"mid": [
"2807044327",
"2963470657",
"2803300317"
],
"abstract": [
"Derivative-free optimization has become an important technique used in machine learning for optimizing black-box models. To conduct updates without explicitly computing gradient, most current approaches iteratively sample a random search direction from Gaussian distribution and compute the estimated gradient along that direction. However, due to the variance in the search direction, the convergence rates and query complexities of existing methods suffer from a factor of @math , where @math is the problem dimension. In this paper, we introduce a novel Stochastic Zeroth-order method with Variance Reduction under Gaussian smoothing (SZVR-G) and establish the complexity for optimizing non-convex problems. With variance reduction on both sample space and search space, the complexity of our algorithm is sublinear to @math and is strictly better than current approaches, in both smooth and non-smooth cases. Moreover, we extend the proposed method to the mini-batch version. Our experimental results demonstrate the superior performance of the proposed method over existing derivative-free optimization techniques. Furthermore, we successfully apply our method to conduct a universal black-box attack to deep neural networks and present some interesting results.",
"In this paper, we introduce a new stochastic approximation type algorithm, namely, the randomized stochastic gradient (RSG) method, for solving an important class of nonlinear (possibly nonconvex) stochastic programming problems. We establish the complexity of this method for computing an approximate stationary point of a nonlinear programming problem. We also show that this method possesses a nearly optimal rate of convergence if the problem is convex. We discuss a variant of the algorithm which consists of applying a postoptimization phase to evaluate a short list of solutions generated by several independent runs of the RSG method, and we show that such modification allows us to improve significantly the large-deviation properties of the algorithm. These methods are then specialized for solving a class of simulation-based optimization problems in which only stochastic zeroth-order information is available.",
"As application demands for zeroth-order (gradient-free) optimization accelerate, the need for variance reduced and faster converging approaches is also intensifying. This paper addresses these challenges by presenting: a) a comprehensive theoretical analysis of variance reduced zeroth-order (ZO) optimization, b) a novel variance reduced ZO algorithm, called ZO-SVRG, and c) an experimental evaluation of our approach in the context of two compelling applications, black-box chemical material classification and generation of adversarial examples from black-box deep neural network models. Our theoretical analysis uncovers an essential difficulty in the analysis of ZO-SVRG: the unbiased assumption on gradient estimates no longer holds. We prove that compared to its first-order counterpart, ZO-SVRG with a two-point random gradient estimator could suffer an additional error of order @math , where @math is the mini-batch size. To mitigate this error, we propose two accelerated versions of ZO-SVRG utilizing variance reduced gradient estimators, which achieve the best rate known for ZO stochastic optimization (in terms of iterations). Our extensive experimental results show that our approaches outperform other state-of-the-art ZO algorithms, and strike a balance between the convergence rate and the function query complexity."
]
} |
1902.06158 | 2949305370 | Proximal gradient method has been playing an important role to solve many machine learning tasks, especially for the nonsmooth problems. However, in some machine learning problems such as the bandit model and the black-box learning problem, proximal gradient method could fail because the explicit gradients of these problems are difficult or infeasible to obtain. The gradient-free (zeroth-order) method can address these problems because only the objective function values are required in the optimization. Recently, the first zeroth-order proximal stochastic algorithm was proposed to solve the nonconvex nonsmooth problems. However, its convergence rate is @math for the nonconvex problems, which is significantly slower than the best convergence rate @math of the zeroth-order stochastic algorithm, where @math is the iteration number. To fill this gap, in the paper, we propose a class of faster zeroth-order proximal stochastic methods with the variance reduction techniques of SVRG and SAGA, which are denoted as ZO-ProxSVRG and ZO-ProxSAGA, respectively. In theoretical analysis, we address the main challenge that an unbiased estimate of the true gradient does not hold in the zeroth-order case, which was required in previous theoretical analysis of both SVRG and SAGA. Moreover, we prove that both ZO-ProxSVRG and ZO-ProxSAGA algorithms have @math convergence rates. Finally, the experimental results verify that our algorithms have a faster convergence rate than the existing zeroth-order proximal stochastic algorithm. | Although the above zeroth-order stochastic methods can effectively solve the nonconvex optimization, there are few zeroth-order stochastic methods for the composite optimization except the RSPGF method presented in . In addition, @cite_5 have also studied the zeroth-order algorithm for solving the nonconvex nonsmooth problem, which is different from problem . | {
"cite_N": [
"@cite_5"
],
"mid": [
"2807044327"
],
"abstract": [
"Derivative-free optimization has become an important technique used in machine learning for optimizing black-box models. To conduct updates without explicitly computing gradient, most current approaches iteratively sample a random search direction from Gaussian distribution and compute the estimated gradient along that direction. However, due to the variance in the search direction, the convergence rates and query complexities of existing methods suffer from a factor of @math , where @math is the problem dimension. In this paper, we introduce a novel Stochastic Zeroth-order method with Variance Reduction under Gaussian smoothing (SZVR-G) and establish the complexity for optimizing non-convex problems. With variance reduction on both sample space and search space, the complexity of our algorithm is sublinear to @math and is strictly better than current approaches, in both smooth and non-smooth cases. Moreover, we extend the proposed method to the mini-batch version. Our experimental results demonstrate the superior performance of the proposed method over existing derivative-free optimization techniques. Furthermore, we successfully apply our method to conduct a universal black-box attack to deep neural networks and present some interesting results."
]
} |
1902.06015 | 2913010492 | We consider learning two layer neural networks using stochastic gradient descent. The mean-field description of this learning dynamics approximates the evolution of the network weights by an evolution in the space of probability distributions in @math (where @math is the number of parameters associated to each neuron). This evolution can be defined through a partial differential equation or, equivalently, as the gradient flow in the Wasserstein space of probability distributions. Earlier work shows that (under some regularity assumptions), the mean field description is accurate as soon as the number of hidden units is much larger than the dimension @math . In this paper we establish stronger and more general approximation guarantees. First of all, we show that the number of hidden units only needs to be larger than a quantity dependent on the regularity properties of the data, and independent of the dimensions. Next, we generalize this analysis to the case of unbounded activation functions, which was not covered by earlier bounds. We extend our results to noisy stochastic gradient descent. Finally, we show that kernel ridge regression can be recovered as a special limit of the mean field analysis. | As mentioned above, classical approximation theory already uses (either implicitly or explicitly) the idea of lifting the class of @math -neurons neural networks, cf. Eq. , to the infinite-dimensional space parametrized by probability distributions @math , see e.g. @cite_8 @cite_12 @cite_1 @cite_13 . This idea was exploited algorithmically, e.g. in @cite_15 @cite_18 . | {
"cite_N": [
"@cite_18",
"@cite_8",
"@cite_1",
"@cite_15",
"@cite_13",
"@cite_12"
],
"mid": [
"2771061327",
"",
"2099579348",
"",
"1542886316",
"2166116275"
],
"abstract": [
"The superior performance of ensemble methods with infinite models are well known. Most of these methods are based on optimization problems in infinite-dimensional spaces with some regularization, for instance, boosting methods and convex neural networks use @math -regularization with the non-negative constraint. However, due to the difficulty of handling @math -regularization, these problems require early stopping or a rough approximation to solve it inexactly. In this paper, we propose a new ensemble learning method that performs in a space of probability measures, that is, our method can handle the @math -constraint and the non-negative constraint in a rigorous way. Such an optimization is realized by proposing a general purpose stochastic optimization method for learning probability measures via parameterization using transport maps on base models. As a result of running the method, a transport map to output an infinite ensemble is obtained, which forms a residual-type network. From the perspective of functional gradient methods, we give a convergence rate as fast as that of a stochastic optimization method for finite dimensional nonconvex problems. Moreover, we show an interior optimality property of a local optimality condition used in our analysis.",
"",
"Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A^3 √((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.",
"",
"This important work describes recent theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems, and addresses the key statistical and computational questions. Chapters survey research on pattern classification with binary-output networks, including a discussion of the relevance of the Vapnik Chervonenkis dimension, and of estimates of the dimension for several neural network models. In addition, Anthony and Bartlett develop a model of classification by real-output networks, and demonstrate the usefulness of classification with a \"large margin.\" The authors explain the role of scale-sensitive versions of the Vapnik Chervonenkis dimension in large margin classification, and in real prediction. Key chapters also discuss the computational complexity of neural network learning, describing a variety of hardness results, and outlining two efficient, constructive learning algorithms. The book is self-contained and accessible to researchers and graduate students in computer science, engineering, and mathematics.",
"Approximation properties of a class of artificial neural networks are established. It is shown that feedforward networks with one layer of sigmoidal nonlinearities achieve integrated squared error of order O(1/n), where n is the number of nodes. The approximated function is assumed to have a bound on the first moment of the magnitude distribution of the Fourier transform. The nonlinear parameters associated with the sigmoidal nodes, as well as the parameters of linear combination, are adjusted in the approximation. In contrast, it is shown that for series expansions with n terms, in which only the parameters of linear combination are adjusted, the integrated squared approximation error cannot be made smaller than order 1/n^(2/d) uniformly for functions satisfying the same smoothness assumption, where d is the dimension of the input to the function. For the class of functions examined, the approximation rate and the parsimony of the parameterization of the networks are shown to be advantageous in high-dimensional settings."
]
} |
1902.06015 | 2913010492 | We consider learning two layer neural networks using stochastic gradient descent. The mean-field description of this learning dynamics approximates the evolution of the network weights by an evolution in the space of probability distributions in @math (where @math is the number of parameters associated to each neuron). This evolution can be defined through a partial differential equation or, equivalently, as the gradient flow in the Wasserstein space of probability distributions. Earlier work shows that (under some regularity assumptions), the mean field description is accurate as soon as the number of hidden units is much larger than the dimension @math . In this paper we establish stronger and more general approximation guarantees. First of all, we show that the number of hidden units only needs to be larger than a quantity dependent on the regularity properties of the data, and independent of the dimensions. Next, we generalize this analysis to the case of unbounded activation functions, which was not covered by earlier bounds. We extend our results to noisy stochastic gradient descent. Finally, we show that kernel ridge regression can be recovered as a special limit of the mean field analysis. | Only very recently (stochastic) gradient descent was proved to converge (for large enough number of neurons) to the infinite-dimensional evolution @cite_16 @cite_10 @cite_17 @cite_20 . In particular, @cite_16 proves quantitative bounds to approximate SGD by the mean-field dynamics. Our work is mainly motivated by the objective to obtain a better scaling with dimension and to allow for unbounded second-layer coefficients. | {
"cite_N": [
"@cite_16",
"@cite_10",
"@cite_20",
"@cite_17"
],
"mid": [
"2963095610",
"2798986185",
"2952469083",
"2798826368"
],
"abstract": [
"Multilayer neural networks are among the most powerful models in machine learning, yet the fundamental reasons for this success defy mathematical understanding. Learning a neural network requires optimizing a nonconvex high-dimensional objective (risk function), a problem that is usually attacked using stochastic gradient descent (SGD). Does SGD converge to a global optimum of the risk or only to a local optimum? In the former case, does this happen because local minima are absent or because SGD somehow avoids them? In the latter, why do local minima reached by SGD have good generalization properties? In this paper, we consider a simple case, namely two-layer neural networks, and prove that—in a suitable scaling limit—SGD dynamics is captured by a certain nonlinear partial differential equation (PDE) that we call distributional dynamics (DD). We then consider several specific examples and show how DD can be used to prove convergence of SGD to networks with nearly ideal generalization error. This description allows for “averaging out” some of the complexities of the landscape of neural networks and can be used to prove a general convergence result for noisy SGD.",
"Neural networks, a central tool in machine learning, have demonstrated remarkable, high fidelity performance on image recognition and classification tasks. These successes evince an ability to accurately represent high dimensional functions, potentially of great use in computational and applied mathematics. That said, there are few rigorous results about the representation error and trainability of neural networks. Here we characterize both the error and the scaling of the error with the size of the network by reinterpreting the standard optimization algorithm used in machine learning applications, stochastic gradient descent, as the evolution of a particle system with interactions governed by a potential related to the objective or \"loss\" function used to train the network. We show that, when the number @math of parameters is large, the empirical distribution of the particles descends on a convex landscape towards a minimizer at a rate independent of @math . We establish a Law of Large Numbers and a Central Limit Theorem for the empirical distribution, which together show that the approximation error of the network universally scales as @math . Remarkably, these properties do not depend on the dimensionality of the domain of the function that we seek to represent. Our analysis also quantifies the scale and nature of the noise introduced by stochastic gradient descent and provides guidelines for the step size and batch size to use when training a neural network. We illustrate our findings on examples in which we train neural network to learn the energy function of the continuous 3-spin model on the sphere. The approximation error scales as our analysis predicts in as high a dimension as @math .",
"Many tasks in machine learning and signal processing can be solved by minimizing a convex function of a measure. This includes sparse spikes deconvolution or training a neural network with a single hidden layer. For these problems, we study a simple minimization method: the unknown measure is discretized into a mixture of particles and a continuous-time gradient descent is performed on their weights and positions. This is an idealization of the usual way to train neural networks with a large hidden layer. We show that, when initialized correctly and in the many-particle limit, this gradient flow, although non-convex, converges to global minimizers. The proof involves Wasserstein gradient flows, a by-product of optimal transport theory. Numerical experiments show that this asymptotic behavior is already at play for a reasonable number of particles, even in high dimension.",
"Machine learning, and in particular neural network models, have revolutionized fields such as image, text, and speech recognition. Today, many important real-world applications in these areas are driven by neural networks. There are also growing applications in engineering, robotics, medicine, and finance. Despite their immense success in practice, there is limited mathematical understanding of neural networks. This paper illustrates how neural networks can be studied via stochastic analysis, and develops approaches for addressing some of the technical challenges which arise. We analyze one-layer neural networks in the asymptotic regime of simultaneously (A) large network sizes and (B) large numbers of stochastic gradient descent training iterations. We rigorously prove that the empirical distribution of the neural network parameters converges to the solution of a nonlinear partial differential equation. This result can be considered a law of large numbers for neural networks. In addition, a consequence of our analysis is that the trained parameters of the neural network asymptotically become independent, a property which is commonly called \"propagation of chaos\"."
]
} |
1902.06015 | 2913010492 | We consider learning two layer neural networks using stochastic gradient descent. The mean-field description of this learning dynamics approximates the evolution of the network weights by an evolution in the space of probability distributions in @math (where @math is the number of parameters associated to each neuron). This evolution can be defined through a partial differential equation or, equivalently, as the gradient flow in the Wasserstein space of probability distributions. Earlier work shows that (under some regularity assumptions), the mean field description is accurate as soon as the number of hidden units is much larger than the dimension @math . In this paper we establish stronger and more general approximation guarantees. First of all, we show that the number of hidden units only needs to be larger than a quantity dependent on the regularity properties of the data, and independent of the dimensions. Next, we generalize this analysis to the case of unbounded activation functions, which was not covered by earlier bounds. We extend our results to noisy stochastic gradient descent. Finally, we show that kernel ridge regression can be recovered as a special limit of the mean field analysis. | The mean-field description was exploited in several papers to establish global convergence results. In @cite_16 global convergence was proved in special examples, and in a general setting for noisy SGD. The papers @cite_10 @cite_20 studied global convergence by exploiting the homogeneity properties of Eq. . In particular, @cite_20 proves a general global convergence result. For initial conditions @math with full support, the PDE converges to a global minimum provided activations are homogeneous in the parameters. Notice that the presence of unbounded second layer coefficients is crucial in order to achieve homogeneity. Unfortunately, the results of @cite_20 do not provide quantitative approximation bounds relating the PDE to finite- @math SGD. 
The present paper fills this gap by establishing approximation bounds that apply to the setting of @cite_20 . | {
"cite_N": [
"@cite_16",
"@cite_10",
"@cite_20"
],
"mid": [
"2963095610",
"2798986185",
"2952469083"
],
"abstract": [
"Multilayer neural networks are among the most powerful models in machine learning, yet the fundamental reasons for this success defy mathematical understanding. Learning a neural network requires optimizing a nonconvex high-dimensional objective (risk function), a problem that is usually attacked using stochastic gradient descent (SGD). Does SGD converge to a global optimum of the risk or only to a local optimum? In the former case, does this happen because local minima are absent or because SGD somehow avoids them? In the latter, why do local minima reached by SGD have good generalization properties? In this paper, we consider a simple case, namely two-layer neural networks, and prove that—in a suitable scaling limit—SGD dynamics is captured by a certain nonlinear partial differential equation (PDE) that we call distributional dynamics (DD). We then consider several specific examples and show how DD can be used to prove convergence of SGD to networks with nearly ideal generalization error. This description allows for “averaging out” some of the complexities of the landscape of neural networks and can be used to prove a general convergence result for noisy SGD.",
"Neural networks, a central tool in machine learning, have demonstrated remarkable, high fidelity performance on image recognition and classification tasks. These successes evince an ability to accurately represent high dimensional functions, potentially of great use in computational and applied mathematics. That said, there are few rigorous results about the representation error and trainability of neural networks. Here we characterize both the error and the scaling of the error with the size of the network by reinterpreting the standard optimization algorithm used in machine learning applications, stochastic gradient descent, as the evolution of a particle system with interactions governed by a potential related to the objective or \"loss\" function used to train the network. We show that, when the number @math of parameters is large, the empirical distribution of the particles descends on a convex landscape towards a minimizer at a rate independent of @math . We establish a Law of Large Numbers and a Central Limit Theorem for the empirical distribution, which together show that the approximation error of the network universally scales as @math . Remarkably, these properties do not depend on the dimensionality of the domain of the function that we seek to represent. Our analysis also quantifies the scale and nature of the noise introduced by stochastic gradient descent and provides guidelines for the step size and batch size to use when training a neural network. We illustrate our findings on examples in which we train neural network to learn the energy function of the continuous 3-spin model on the sphere. The approximation error scales as our analysis predicts in as high a dimension as @math .",
"Many tasks in machine learning and signal processing can be solved by minimizing a convex function of a measure. This includes sparse spikes deconvolution or training a neural network with a single hidden layer. For these problems, we study a simple minimization method: the unknown measure is discretized into a mixture of particles and a continuous-time gradient descent is performed on their weights and positions. This is an idealization of the usual way to train neural networks with a large hidden layer. We show that, when initialized correctly and in the many-particle limit, this gradient flow, although non-convex, converges to global minimizers. The proof involves Wasserstein gradient flows, a by-product of optimal transport theory. Numerical experiments show that this asymptotic behavior is already at play for a reasonable number of particles, even in high dimension."
]
} |
1902.06015 | 2913010492 | We consider learning two layer neural networks using stochastic gradient descent. The mean-field description of this learning dynamics approximates the evolution of the network weights by an evolution in the space of probability distributions in @math (where @math is the number of parameters associated to each neuron). This evolution can be defined through a partial differential equation or, equivalently, as the gradient flow in the Wasserstein space of probability distributions. Earlier work shows that (under some regularity assumptions), the mean field description is accurate as soon as the number of hidden units is much larger than the dimension @math . In this paper we establish stronger and more general approximation guarantees. First of all, we show that the number of hidden units only needs to be larger than a quantity dependent on the regularity properties of the data, and independent of the dimensions. Next, we generalize this analysis to the case of unbounded activation functions, which was not covered by earlier bounds. We extend our results to noisy stochastic gradient descent. Finally, we show that kernel ridge regression can be recovered as a special limit of the mean field analysis. | Finally, a recent stream of works @cite_26 @cite_14 @cite_25 @cite_11 @cite_27 argues that, as @math two-layers networks are actually performing a type of kernel ridge regression. As shown in @cite_6 , this phenomenon is not limited to neural network, but generic for a broad class of models. As expected, the kernel regime can indeed be recovered as a special limit of the mean-field dynamics , cf. Section . Let us emphasize that here we focus on the population rather than the empirical risk. | {
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_6",
"@cite_27",
"@cite_25",
"@cite_11"
],
"mid": [
"2907047316",
"2950743785",
"2904838594",
"2899748887",
"2894604724",
"2899790086"
],
"abstract": [
"The advent of deep learning is a breakthrough in artificial intelligence, for which a theoretical understanding is lacking. Supervised deep learning involves the training of neural networks with a large number @math of parameters. For large enough @math , in the so-called over-parametrized regime, one can essentially fit the training data points. Sparsity-based arguments would suggest that the generalization error increases as @math grows past a certain threshold @math . Instead, empirical studies have shown that in the over-parametrized regime, generalization error keeps decreasing with @math . We resolve this paradox, through a new framework. We rely on the so-called Neural Tangent Kernel, which connects large neural nets to kernel methods, to show that the initialization causes finite-size random fluctuations @math of the neural net output function @math around its expectation @math . These affect the generalization error @math for classification: under natural assumptions, it decays to a plateau value @math in a power-law fashion @math . This description breaks down at a so-called jamming transition @math . At this threshold, we argue that @math diverges. This result leads to a plausible explanation for the cusp in test error known to occur at @math . Our results are confirmed by extensive empirical observations on the MNIST and CIFAR image datasets. Our analysis finally suggests that, given a computational envelope, it is best to use several nets of intermediate sizes, just beyond @math , and to average their outputs.",
"At initialization, artificial neural networks (ANNs) are equivalent to Gaussian processes in the infinite-width limit, thus connecting them to kernel methods. We prove that the evolution of an ANN during training can also be described by a kernel: during gradient descent on the parameters of an ANN, the network function @math (which maps input vectors to output vectors) follows the kernel gradient of the functional cost (which is convex, in contrast to the parameter cost) w.r.t. a new kernel: the Neural Tangent Kernel (NTK). This kernel is central to describe the generalization features of ANNs. While the NTK is random at initialization and varies during training, in the infinite-width limit it converges to an explicit limiting kernel and it stays constant during training. This makes it possible to study the training of ANNs in function space instead of parameter space. Convergence of the training can then be related to the positive-definiteness of the limiting NTK. We prove the positive-definiteness of the limiting NTK when the data is supported on the sphere and the non-linearity is non-polynomial. We then focus on the setting of least-squares regression and show that in the infinite-width limit, the network function @math follows a linear differential equation during training. The convergence is fastest along the largest kernel principal components of the input data with respect to the NTK, hence suggesting a theoretical motivation for early stopping. Finally we study the NTK numerically, observe its behavior for wide networks, and compare it to the infinite-width limit.",
"In a series of recent theoretical works, it has been shown that strongly over-parameterized neural networks trained with gradient-based methods could converge linearly to zero training loss, with their parameters hardly varying. In this note, our goal is to exhibit the simple structure that is behind these results. In a simplified setting, we prove that \"lazy training\" essentially solves a kernel regression. We also show that this behavior is not so much due to over-parameterization than to a choice of scaling, often implicit, that allows to linearize the model around its initialization. These theoretical results complemented with simple numerical experiments make it seem unlikely that \"lazy training\" is behind the many successes of neural networks in high dimensional tasks.",
"Deep neural networks (DNNs) have demonstrated dominating performance in many fields; since AlexNet, networks used in practice are going wider and deeper. On the theoretical side, a long line of works has been focusing on training neural networks with one hidden layer. The theory of multi-layer networks remains largely unsettled. In this work, we prove why stochastic gradient descent (SGD) can find @math on the training objective of DNNs in @math . We only make two assumptions: the inputs are non-degenerate and the network is over-parameterized. The latter means the network width is sufficiently large: @math in @math , the number of layers and in @math , the number of samples. Our key technique is to derive that, in a sufficiently large neighborhood of the random initialization, the optimization landscape is almost-convex and semi-smooth even with ReLU activations. This implies an equivalence between over-parameterized neural networks and neural tangent kernel (NTK) in the finite (and polynomial) width setting. As concrete examples, starting from randomly initialized weights, we prove that SGD can attain 100% training accuracy in classification tasks, or minimize regression loss in linear convergence speed, with running time polynomial in @math . Our theory applies to the widely-used but non-smooth ReLU activation, and to any smooth and possibly non-convex loss functions. In terms of network architectures, our theory at least applies to fully-connected neural networks, convolutional neural networks (CNN), and residual neural networks (ResNet)."
"One of the mysteries in the success of neural networks is randomly initialized first order methods like gradient descent can achieve zero training loss even though the objective function is non-convex and non-smooth. This paper demystifies this surprising phenomenon for two-layer fully connected ReLU activated neural networks. For an @math hidden node shallow neural network with ReLU activation and @math training data, we show as long as @math is large enough and no two inputs are parallel, randomly initialized gradient descent converges to a globally optimal solution at a linear convergence rate for the quadratic loss function. Our analysis relies on the following observation: over-parameterization and random initialization jointly restrict every weight vector to be close to its initialization for all iterations, which allows us to exploit a strong convexity-like property to show that gradient descent converges at a global linear rate to the global optimum. We believe these insights are also useful in analyzing deep models and other first order methods.",
"Gradient descent finds a global minimum in training deep neural networks despite the objective function being non-convex. The current paper proves gradient descent achieves zero training loss in polynomial time for a deep over-parameterized neural network with residual connections (ResNet). Our analysis relies on the particular structure of the Gram matrix induced by the neural network architecture. This structure allows us to show the Gram matrix is stable throughout the training process and this stability implies the global optimality of the gradient descent algorithm. We further extend our analysis to deep residual convolutional neural networks and obtain a similar convergence result."
]
} |
1902.06015 | 2913010492 | We consider learning two layer neural networks using stochastic gradient descent. The mean-field description of this learning dynamics approximates the evolution of the network weights by an evolution in the space of probability distributions in @math (where @math is the number of parameters associated to each neuron). This evolution can be defined through a partial differential equation or, equivalently, as the gradient flow in the Wasserstein space of probability distributions. Earlier work shows that (under some regularity assumptions), the mean field description is accurate as soon as the number of hidden units is much larger than the dimension @math . In this paper we establish stronger and more general approximation guarantees. First of all, we show that the number of hidden units only needs to be larger than a quantity dependent on the regularity properties of the data, and independent of the dimensions. Next, we generalize this analysis to the case of unbounded activation functions, which was not covered by earlier bounds. We extend our results to noisy stochastic gradient descent. Finally, we show that kernel ridge regression can be recovered as a special limit of the mean field analysis. | A discussion of the difference between the kernel and mean-field regimes was recently presented in @cite_3 . However, @cite_3 argues that the difference between kernel and mean-field behaviors is due to different initializations of the coefficients @math 's. We show instead that, for a suitable scaling of the initialization, kernel and mean field regimes appear at different time scales. Namely, the kernel behavior arises at the beginning of the dynamics, and mean field characterizes longer time scales. It is also worth mentioning that the connection between mean field dynamics and kernel boosting with a time-varying data-dependent kernel was already present (somewhat implicitly) in @cite_10 . | {
"cite_N": [
"@cite_10",
"@cite_3"
],
"mid": [
"2798986185",
"2912321142"
],
"abstract": [
"Neural networks, a central tool in machine learning, have demonstrated remarkable, high fidelity performance on image recognition and classification tasks. These successes evince an ability to accurately represent high dimensional functions, potentially of great use in computational and applied mathematics. That said, there are few rigorous results about the representation error and trainability of neural networks. Here we characterize both the error and the scaling of the error with the size of the network by reinterpreting the standard optimization algorithm used in machine learning applications, stochastic gradient descent, as the evolution of a particle system with interactions governed by a potential related to the objective or \"loss\" function used to train the network. We show that, when the number @math of parameters is large, the empirical distribution of the particles descends on a convex landscape towards a minimizer at a rate independent of @math . We establish a Law of Large Numbers and a Central Limit Theorem for the empirical distribution, which together show that the approximation error of the network universally scales as @math . Remarkably, these properties do not depend on the dimensionality of the domain of the function that we seek to represent. Our analysis also quantifies the scale and nature of the noise introduced by stochastic gradient descent and provides guidelines for the step size and batch size to use when training a neural network. We illustrate our findings on examples in which we train neural network to learn the energy function of the continuous 3-spin model on the sphere. The approximation error scales as our analysis predicts in as high a dimension as @math .",
"Consider the problem: given the data pair @math drawn from a population with @math , specify a neural network model and run gradient flow on the weights over time until reaching any stationarity. How does @math , the function computed by the neural network at time @math , relate to @math , in terms of approximation and representation? What are the provable benefits of the adaptive representation by neural networks compared to the pre-specified fixed basis representation in the classical nonparametric literature? We answer the above questions via a dynamic reproducing kernel Hilbert space (RKHS) approach indexed by the training process of neural networks. Firstly, we show that when reaching any local stationarity, gradient flow learns an adaptive RKHS representation and performs the global least-squares projection onto the adaptive RKHS, simultaneously. Secondly, we prove that as the RKHS is data-adaptive and task-specific, the residual for @math lies in a subspace that is potentially much smaller than the orthogonal complement of the RKHS. The result formalizes the representation and approximation benefits of neural networks. Lastly, we show that the neural network function computed by gradient flow converges to the kernel ridgeless regression with an adaptive kernel, in the limit of vanishing regularization. The adaptive kernel viewpoint provides new angles of studying the approximation, representation, generalization, and optimization advantages of neural networks."
]
} |
1902.06090 | 2912894365 | A mobile agent has to find an inert treasure hidden in the plane. Both the agent and the treasure are modeled as points. This is a variant of the task known as treasure hunt. The treasure is at a distance at most @math from the initial position of the agent, and the agent finds the treasure when it gets at distance @math from it, called the vision radius . However, the agent does not know the location of the treasure and does not know the parameters @math and @math . The cost of finding the treasure is the length of the trajectory of the agent. We investigate the tradeoffs between the amount of information held a priori by the agent and the cost of treasure hunt. Following the well-established paradigm of algorithms with advice , this information is given to the agent in advance as a binary string, by an oracle cooperating with the agent and knowing the location of the treasure and the initial position of the agent. The size of advice given to the agent is the length of this binary string. For any size @math of advice and any @math and @math , let @math be the optimal cost of finding the treasure for parameters @math , @math and @math , if the agent has only an advice string of length @math as input. We design treasure hunt algorithms working with advice of size @math at cost @math whenever @math or @math . For intermediate values of @math , i.e., @math , the treasure can be found at cost @math . | Algorithms with advice. The paradigm of algorithms with advice was used predominantly for tasks in graphs. Providing arbitrary items of information that can be used to increase efficiency of solutions to network problems has been proposed in @cite_29 @cite_1 @cite_15 @cite_2 @cite_0 @cite_13 @cite_14 @cite_22 @cite_7 @cite_18 @cite_26 @cite_10 @cite_23 @cite_5 @cite_3 @cite_8 . This approach was referred to as algorithms with advice . 
The advice, in the form of an arbitrary binary string, is given by a cooperating omniscient oracle either to the nodes of the network or to mobile agents performing some task in it. In the first case, instead of advice, the term informative labeling schemes is sometimes used, if different nodes can get different information. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_26",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_10",
"@cite_29",
"@cite_1",
"@cite_3",
"@cite_0",
"@cite_23",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_13"
],
"mid": [
"2034501275",
"1975595616",
"1983693678",
"2072269078",
"2251394987",
"2045446569",
"2006162221",
"1990376056",
"2046334554",
"2174013141",
"1971694274",
"2056295140",
"1975011672",
"599153350",
"2109659895",
"2038319432"
],
"abstract": [
"We consider the problem of labeling the nodes of a graph in a way that will allow one to compute the distance between any two nodes directly from their labels (without using any additional information). Our main interest is in the minimal length of labels needed in different cases. We obtain upper and lower bounds for several interesting families of graphs. In particular, our main results are the following. For general graphs, we show that the length needed is Θ(n). For trees, we show that the length needed is Θ(log² n). For planar graphs, we show an upper bound of O(√n log n) and a lower bound of Ω(n^(1/3)). For bounded degree graphs, we show a lower bound of Ω(√n). The upper bounds for planar graphs and for trees follow by a more general upper bound for graphs with a r(n)-separator. The two lower bounds, however, are obtained by two different arguments that may be interesting in their own right. We also show some lower bounds on the length of the labels, even if it is only required that distances be approximated to a multiplicative factor s. For example, we show that for general graphs the required length is Ω(n) for every s < 3. We also consider the problem of the time complexity of the distance function once the labels are computed. We show that there are graphs with optimal labels of length 3 log n, such that if we use any labels with fewer than n bits per label, computing the distance function requires exponential time. A similar result is obtained for planar and bounded degree graphs.",
"We use the recently introduced advising scheme framework for measuring the difficulty of locally distributively computing a Minimum Spanning Tree (MST). An (m,t)-advising scheme for a distributed problem P is a way, for every possible input I of P, to provide an \"advice\" (i.e., a bit string) about I to each node so that: (1) the maximum size of the advices is at most m bits, and (2) the problem P can be solved distributively in at most t rounds using the advices as inputs. In case of MST, the output returned by each node of a weighted graph G is the edge leading to its parent in some rooted MST T of G. Clearly, there is a trivial (log n,0)-advising scheme for MST (each node is given the local port number of the edge leading to the root of some MST T), and it is known that any (0,t)-advising scheme satisfies t ≥ Ω (√n). Our main result is the construction of an (O(1),O(log n))-advising scheme for MST. That is, by only giving a constant number of bits of advice to each node, one can decrease exponentially the distributed computation time of MST in arbitrary graph, compared to algorithms dealing with the problem in absence of any a priori information. We also consider the average size of the advices. On the one hand, we show that any (m,0)-advising scheme for MST gives advices of average size Ω(log n). On the other hand we design an (m,1)-advising scheme for MST with advices of constant average size, that is one round is enough to decrease the average size of the advices from log(n) to constant.",
"We study deterministic broadcasting in radio networks in the recently introduced framework of network algorithms with advice. We concentrate on the problem of trade-offs between the number of bits of information (size of advice) available to nodes and the time in which broadcasting can be accomplished. In particular, we ask what is the minimum number of bits of information that must be available to nodes of the network, in order to broadcast very fast. For networks in which constant time broadcast is possible under a complete knowledge of the network we give a tight answer to the above question: O(n) bits of advice are sufficient but o(n) bits are not, in order to achieve constant broadcasting time in all these networks. This is in sharp contrast with geometric radio networks of constant broadcasting time: we show that in these networks a constant number of bits suffices to broadcast in constant time. For arbitrary radio networks we present a broadcasting algorithm whose time is inverse-proportional to the size of the advice.",
"We study the problem of the amount of information required to perform fast broadcasting in tree networks. The source located at the root of a tree has to disseminate a message to all nodes. In each round each informed node can transmit to one child. Nodes do not know the topology of the tree but an oracle knowing it can give a string of bits of advice to the source which can then pass it down the tree with the source message. The quality of a broadcasting algorithm with advice is measured by its competitive ratio: the worst case ratio, taken over n-node trees, between the time of this algorithm and the optimal broadcasting time in the given tree. Our goal is to find a trade-off between the size of advice and the best competitive ratio of a broadcasting algorithm for n-node trees. We establish such a trade-off with an approximation factor of O(n^ε), for an arbitrarily small positive constant ε. This is the first communication problem for which a trade-off between the size of advice and the efficiency of the solution is shown for arbitrary size of advice.",
"In topology recognition, each node of an anonymous network has to deterministically produce an isomorphic copy of the underlying graph, with all ports correctly marked. This task is usually unfeasible without any a priori information. Such information can be provided to nodes as advice. An oracle knowing the network can give a (possibly different) string of bits to each node, and all nodes must reconstruct the network using this advice, after a given number of rounds of communication. During each round each node can exchange arbitrary messages with all its neighbors and perform arbitrary local computations. The time of completing topology recognition is the number of rounds it takes, and the size of advice is the maximum length of a string given to nodes. We investigate tradeoffs between the time in which topology recognition is accomplished and the minimum size of advice that has to be given to nodes. We provide upper and lower bounds on the minimum size of advice that is sufficient to perform topology recognition in a given time, in the class of all graphs of size n and diameter D ≤ αn, for any constant α < 1. In most cases, our bounds are asymptotically tight.",
"Let G = (V,E) be an undirected weighted graph with |V| = n and |E| = m. Let k ≥ 1 be an integer. We show that G = (V,E) can be preprocessed in O(km n^(1/k)) expected time, constructing a data structure of size O(k n^(1+1/k)), such that any subsequent distance query can be answered, approximately, in O(k) time. The approximate distance returned is of stretch at most 2k−1, that is, the quotient obtained by dividing the estimated distance by the actual distance lies between 1 and 2k−1. A 1963 girth conjecture of Erdős implies that Ω(n^(1+1/k)) space is needed in the worst case for any real stretch strictly smaller than 2k+1. The space requirement of our algorithm is, therefore, essentially optimal. The most impressive feature of our data structure is its constant query time, hence the name \"oracle\". Previously, data structures that used only O(n^(1+1/k)) space had a query time of Ω(n^(1/k)). Our algorithms are extremely simple and easy to implement efficiently. They also provide faster constructions of sparse spanners of weighted graphs, and improved tree covers and distance labelings of weighted or unweighted graphs.",
"This paper studies labeling schemes for flow and connectivity functions. A flow labeling scheme using @math -bit labels is presented for general n-vertex graphs with maximum (integral) capacity @math . This is shown to be asymptotically optimal. For edge-connectivity, this yields a tight bound of @math bits. A k-vertex connectivity labeling scheme is then given for general n-vertex graphs using at most 3 log n bits for k = 2, 5 log n bits for k = 3, and 2k log n bits for k > 3. Finally, a lower bound of @math is established for k -vertex connectivity on n-vertex graphs, where k is polylogarithmic in n.",
"We consider the following problem. Given a rooted tree T, label the nodes of T in the most compact way such that given the labels of two nodes one can determine in constant time, by looking only at the labels, if one node is an ancestor of the other. The best known labeling scheme is rather straightforward and uses labels of size at most 2 log n, where n is the number of vertices in the tree. Our main result in this paper is a labeling scheme with maximum label size close to (3/2) log n. Our motivation for studying this problem is enhancing the performance of Web search engines. In the context of this application each indexed document is a tree and the labels of all trees are maintained in main memory. Therefore even small improvements in the maximum label size are important. There are no lower bounds known for this problem except for an obvious lower bound of log n that follows from the fact that different vertices must have different labels. The question whether one can find even shorter labels remains an intriguing open question.",
"We study the problem of the amount of information required to draw a complete or a partial map of a graph with unlabeled nodes and arbitrarily labeled ports. A mobile agent, starting at any node of an unknown connected graph and walking in it, has to accomplish one of the following tasks: draw a complete map of the graph, i.e., find an isomorphic copy of it including port numbering, or draw a partial map, i.e., a spanning tree, again with port numbering. The agent executes a deterministic algorithm and cannot mark visited nodes in any way. None of these map drawing tasks is feasible without any additional information, unless the graph is a tree. Hence we investigate the minimum number of bits of information (minimum size of advice) that has to be given to the agent to complete these tasks. It turns out that this minimum size of advice depends on the number n of nodes or the number m of edges of the graph, and on a crucial parameter μ, called the multiplicity of the graph, which measures the number of nodes that have an identical view of the graph. We give bounds on the minimum size of advice for both above tasks. For μ=1 our bounds are asymptotically tight for both tasks and show that the minimum size of advice is very small. For μ>1 the minimum size of advice increases abruptly. In this case our bounds are asymptotically tight for topology recognition and asymptotically almost tight for spanning tree construction.",
"[L. Blin, P. Fraigniaud, N. Nisse, S. Vial, Distributed chasing of network intruders, in: 13th Colloquium on Structural Information and Communication Complexity, SIROCCO, in: LNCS, vol. 4056, Springer-Verlag, 2006, pp. 70-84] introduced a new measure of difficulty for a distributed task in a network. The smallest number of bits of advice of a distributed problem is the smallest number of bits of information that has to be available to nodes in order to accomplish the task efficiently. Our paper deals with the number of bits of advice required to perform efficiently the graph searching problem in a distributed setting. In this variant of the problem, all searchers are initially placed at a particular node of the network. The aim of the team of searchers is to clear a contaminated graph in a monotone connected way, i.e., the cleared part of the graph is permanently connected, and never decreases while the search strategy is executed. Moreover, the clearing of the graph must be performed using the optimal number of searchers, i.e. the minimum number of searchers sufficient to clear the graph in a monotone connected way in a centralized setting. We show that the minimum number of bits of advice permitting the monotone connected and optimal clearing of a network in a distributed setting is Θ(n log n), where n is the number of nodes of the network. More precisely, we first provide a labelling of the vertices of any graph G, using a total of O(n log n) bits, and a protocol using this labelling that enables the optimal number of searchers to clear G in a monotone connected distributed way. Then, we show that this number of bits of advice is optimal: any distributed protocol requires Ω(n log n) bits of advice to clear a network in a monotone connected way, using an optimal number of searchers.",
"We study the amount of knowledge about a communication network that must be given to its nodes in order to efficiently disseminate information. Our approach is quantitative: we investigate the minimum total number of bits of information (minimum size of advice) that has to be available to nodes, regardless of the type of information provided. We compare the size of advice needed to perform broadcast and wakeup (the latter is a broadcast in which nodes can transmit only after getting the source information), both using a linear number of messages (which is optimal). We show that the minimum size of advice permitting the wakeup with a linear number of messages in an n-node network is Θ(n log n), while the broadcast with a linear number of messages can be achieved with advice of size O(n). We also show that the latter size of advice is almost optimal: no advice of size o(n) can permit to broadcast with a linear number of messages. Thus an efficient wakeup requires strictly more information about the network than an efficient broadcast.",
"This paper addresses the problem of locally verifying global properties. Several natural questions are studied, such as “how expensive is local verification?” and more specifically, “how expensive is local verification compared to computation?” A suitable model is introduced in which these questions are studied in terms of the number of bits a vertex needs to communicate. The model includes the definition of a proof labeling scheme (a pair of algorithms- one to assign the labels, and one to use them to verify that the global property holds). In addition, approaches are presented for the efficient construction of schemes, and upper and lower bounds are established on the bit complexity of schemes for multiple basic problems. The paper also studies the role and cost of unique identities in terms of impossibility and complexity, in the context of proof labeling schemes. Previous studies on related questions deal with distributed algorithms that simultaneously compute a configuration and verify that this configuration has a certain desired property. It turns out that this combined approach enables the verification to be less costly sometimes, since the configuration is typically generated so as to be easily verifiable. In contrast, our approach separates the configuration design from the verification. That is, it first generates the desired configuration without bothering with the need to verify it, and then handles the task of constructing a suitable verification scheme. Our approach thus allows for a more modular design of algorithms, and has the potential to aid in verifying properties even when the original design of the structures for maintaining them was done without verification in mind.",
"We study the problem of the amount of information (advice) about a graph that must be given to its nodes in order to achieve fast distributed computations. The required size of the advice enables to measure the information sensitivity of a network problem. A problem is information sensitive if little advice is enough to solve the problem rapidly (i.e., much faster than in the absence of any advice), whereas it is information insensitive if it requires giving a lot of information to the nodes in order to ensure fast computation of the solution. In this paper, we study the information sensitivity of distributed graph coloring.",
"In rendezvous, two agents traverse network edges in synchronous rounds and have to meet at some node. In treasure hunt, a single agent has to find a stationary target situated at an unknown node of the network. We study tradeoffs between the amount of information (advice) available a priori to the agents and the cost (number of edge traversals) of rendezvous and treasure hunt. Our goal is to find the smallest size of advice which enables the agents to solve these tasks at some cost C in a network with e edges. This size turns out to depend on the initial distance D and on the ratio e/C, which is the relative cost gain due to advice. For arbitrary graphs, we give upper and lower bounds of O(D log(D · e/C) + log log e) and Ω(D log(e/C)), respectively, on the optimal size of advice. For the class of trees, we give nearly tight upper and lower bounds of O(D log(e/C) + log log e) and Ω(D log(e/C)), respectively. In rendezvous, two agents traverse edges in rounds and have to meet at some node. In treasure hunt, an agent must find a fixed target at some node of the network. Objective: tradeoffs between the advice available to the agents and the cost. Results: bounds on the size of advice to achieve a given cost.",
"We consider a model for online computation in which the online algorithm receives, together with each request, some information regarding the future, referred to as advice. The advice is a function, defined by the online algorithm, of the whole request sequence. The advice provided to the online algorithm may allow an improvement in its performance, compared to the classical model of complete lack of information regarding the future. We are interested in the impact of such advice on the competitive ratio, and in particular, in the relation between the size b of the advice, measured in terms of bits of information per request, and the (improved) competitive ratio. Since b=0 corresponds to the classical online model, and b=⌈log |A|⌉, where A is the algorithm's action space, corresponds to the optimal (offline) one, our model spans a spectrum of settings ranging from classical online algorithms to offline ones. In this paper we propose the above model and illustrate its applicability by considering two of the most extensively studied online problems, namely, metrical task systems (MTS) and the k-server problem. For MTS we establish tight (up to constant factors) upper and lower bounds on the competitive ratio of deterministic and randomized online algorithms with advice for any choice of 1 ≤ b ≤ Θ(log n), where n is the number of states in the system: we prove that any randomized online algorithm for MTS has competitive ratio Ω(log(n)/b) and we present a deterministic online algorithm for MTS with competitive ratio O(log(n)/b). For the k-server problem we construct a deterministic online algorithm for general metric spaces with competitive ratio k^(O(1/b)) for any choice of Θ(1) ≤ b ≤ log k.",
"We study the amount of knowledge about the network that is required in order to efficiently solve a task concerning this network. The impact of available information on the efficiency of solving network problems, such as communication or exploration, has been investigated before but assumptions concerned availability of particular items of information about the network, such as the size, the diameter, or a map of the network. In contrast, our approach is quantitative: we investigate the minimum number of bits of information (bits of advice) that has to be given to an algorithm in order to perform a task with given efficiency. We illustrate this quantitative approach to available knowledge by the task of tree exploration. A mobile entity (robot) has to traverse all edges of an unknown tree, using as few edge traversals as possible. The quality of an exploration algorithm A is measured by its competitive ratio, i.e., by comparing its cost (number of edge traversals) to the length of the shortest path containing all edges of the tree. Depth-First-Search has competitive ratio 2 and, in the absence of any information about the tree, no algorithm can beat this value. We determine the minimum number of bits of advice that has to be given to an exploration algorithm in order to achieve competitive ratio strictly smaller than 2. Our main result establishes an exact threshold number of bits of advice that turns out to be roughly log log D, where D is the diameter of the tree. More precisely, for any constant c, we construct an exploration algorithm with competitive ratio smaller than 2, using at most log log D − c bits of advice, and we show that every algorithm using log log D − g(D) bits of advice, for any function g unbounded from above, has competitive ratio at least 2."
]
} |
1902.06231 | 2913531876 | Due to globalization, geographic boundaries no longer serve as effective shields for the spread of infectious diseases. In order to aid bio-surveillance analysts in disease tracking, recent research has been devoted to developing information retrieval and analysis methods utilizing the vast corpora of publicly available documents on the internet. In this work, we present methods for the automated retrieval and classification of documents related to active public health events. We demonstrate classification performance on an auto-generated corpus, using recurrent neural network, TF-IDF, and Naive Bayes log count ratio document representations. By jointly modeling the title and description of a document, we achieve 97% recall and 93.3% accuracy with our best performing bio-surveillance event classification model: logistic regression on the combined output from a pair of bidirectional recurrent neural networks. | Joint modeling using multiple representations has demonstrated reliable performance gains for several neural network document classification models. create separate document models from two sets of pre-trained word vectors using texts from the classification target domain and a large unlabeled corpus. They found vectorizing the word embedding matrices with separate convolutional networks performed better than using combined matrices as input to a single convolutional network. use GloVe @cite_9 word embeddings in tandem with pre-trained word embeddings from the English-to-German translation task. improve performance using joint bidirectional LSTM representations of a document; one network has weights fixed after unsupervised pre-training while the other is learned from labeled documents. They showed similar gains using this same semi-supervised set-up but with convolutional networks in place of LSTMs. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2250539671"
],
"abstract": [
"Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global log-bilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition."
]
} |
1902.06007 | 2914225466 | Deep reinforcement learning has seen great success across a breadth of tasks such as in game playing and robotic manipulation. However, the modern practice of attempting to learn tabula rasa disregards the logical structure of many domains and the wealth of readily-available human domain experts' knowledge that could help warm start'' the learning process. Further, learning from demonstration techniques are not yet sufficient to infer this knowledge through sampling-based mechanisms in large state and action spaces, or require immense amounts of data. We present a new reinforcement learning architecture that can encode expert knowledge, in the form of propositional logic, directly into a neural, tree-like structure of fuzzy propositions that are amenable to gradient descent. We show that our novel architecture is able to outperform reinforcement and imitation learning techniques across an array of canonical challenge problems for artificial intelligence. | There has been an increase in researchers investigating ways to improve the initialization of deep neural networks, particularly in the RL domain where agents can spend a tremendous amount of time without learning anything meaningful before getting fortunate enough to start accruing useful reward signals. Warm starts have been used for RL in healthcare @cite_21 , as well as in supervised learning for NLP @cite_33 or other classification tasks @cite_0 . While these works have provided interesting insight into the efficacy of warm starts in various domains, they either involve large labeled datasets, or they require RL agents to solve the same domain repeatedly. In domains where an RL agent will struggle to ever find a solution without a warm start, this is not a practical assumption, nor it is always possible to acquire a large labeled dataset for new domains. | {
"cite_N": [
"@cite_0",
"@cite_21",
"@cite_33"
],
"mid": [
"2311110368",
"2606868178",
"2741271950"
],
"abstract": [
"Combining deep neural networks with structured logic rules is desirable to harness flexibility and reduce uninterpretability of the neural models. We propose a general framework capable of enhancing various types of neural networks (e.g., CNNs and RNNs) with declarative first-order logic rules. Specifically, we develop an iterative distillation method that transfers the structured information of logic rules into the weights of neural networks. We deploy the framework on a CNN for sentiment analysis, and an RNN for named entity recognition. With a few highly intuitive rules, we obtain substantial improvements and achieve state-of-the-art or comparable results to previous best-performing systems.",
"Online reinforcement learning (RL) is increasingly popular for the personalized mobile health (mHealth) intervention. It is able to personalize the type and dose of interventions according to user's ongoing statuses and changing needs. However, at the beginning of online learning, there are usually too few samples to support the RL updating, which leads to poor performances. A delay in good performance of the online learning algorithms can be especially detrimental in the mHealth, where users tend to quickly disengage with the mHealth app. To address this problem, we propose a new online RL methodology that focuses on an effective warm start. The main idea is to make full use of the data accumulated and the decision rule achieved in a former study. As a result, we can greatly enrich the data size at the beginning of online learning in our method. Such case accelerates the online learning process for new users to achieve good performances not only at the beginning of online learning but also through the whole online learning process. Besides, we use the decision rules achieved in a previous study to initialize the parameter in our online RL model for new users. It provides a good initialization for the proposed online RL algorithm. Experiment results show that promising improvements have been achieved by our method compared with the state-of-the-art method.",
"Text classification is a fundamental task in NLP applications. Most existing work relied on either explicit or implicit text representation to address this problem. While these techniques work well for sentences, they can not easily be applied to short text because of its shortness and sparsity. In this paper, we propose a framework based on convolutional neural networks that combines explicit and implicit representations of short text for classification. We first conceptualize a short text as a set of relevant concepts using a large taxonomy knowledge base. We then obtain the embedding of short text by coalescing the words and relevant concepts on top of pre-trained word vectors. We further incorporate character level features into our model to capture fine-grained subword information. Experimental results on five commonly used datasets show that our proposed method significantly outperforms state-of-the-art methods."
]
} |
1902.06007 | 2914225466 | Deep reinforcement learning has seen great success across a breadth of tasks such as in game playing and robotic manipulation. However, the modern practice of attempting to learn tabula rasa disregards the logical structure of many domains and the wealth of readily-available human domain experts' knowledge that could help warm start'' the learning process. Further, learning from demonstration techniques are not yet sufficient to infer this knowledge through sampling-based mechanisms in large state and action spaces, or require immense amounts of data. We present a new reinforcement learning architecture that can encode expert knowledge, in the form of propositional logic, directly into a neural, tree-like structure of fuzzy propositions that are amenable to gradient descent. We show that our novel architecture is able to outperform reinforcement and imitation learning techniques across an array of canonical challenge problems for artificial intelligence. | Most closely aligned with our work is the deep jointly-informed neural networks (DJINN) approach @cite_29 , which uses a decision tree to initialize a deep network for classification while preserving the decision tree rules for immediately accurate prediction from the network. While DJINN uses a decision tree trained on a target dataset for network initialization, our approach instead seeks to translate an expert policy into a network. This means that our approach does not require a supervised training set in order to construct a decision tree for initialization, we can instead convert a set of propositional logical rules into a set of neural network weights. The ability to begin with a set of arbitrary rules rather than a pretrained decision tree is important, particularly in RL, because a large dataset of state-action pairs may be unavailable, unreliable, or misleading given the covariate shift. | {
"cite_N": [
"@cite_29"
],
"mid": [
"2810818231"
],
"abstract": [
"In this paper, a novel, automated process for constructing and initializing deep feedforward neural networks based on decision trees is presented. The proposed algorithm maps a collection of decision trees trained on the data into a collection of initialized neural networks with the structures of the networks determined by the structures of the trees. The tree-informed initialization acts as a warm-start to the neural network training process, resulting in efficiently trained, accurate networks. These models, referred to as “deep jointly informed neural networks” (DJINN), demonstrate high predictive performance for a variety of regression and classification data sets and display comparable performance to Bayesian hyperparameter optimization at a lower computational cost. By combining the user-friendly features of decision tree models with the flexibility and scalability of deep neural networks, DJINN is an attractive algorithm for training predictive models on a wide range of complex data sets."
]
} |
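The tree-to-network initialization discussed in the row above can be illustrated with a small hand-rolled sketch. This is not the actual DJINN mapping, and all names and numbers are invented for illustration: a one-split decision rule is written directly into the weights of a tiny two-layer network, so the untrained network already reproduces the rule and gradient descent only has to refine it.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_from_rule(threshold=0.5, sharpness=50.0):
    """Set weights so the untrained network mimics the one-split tree rule
    'predict class 1 iff x[0] > threshold'. A steep sigmoid stands in for
    the hard threshold, keeping the rule differentiable for later tuning."""
    W1 = np.array([[sharpness, 0.0]])        # hidden unit reads only x[0]
    b1 = np.array([-sharpness * threshold])  # fires when x[0] > threshold
    W2 = np.array([[10.0]])                  # push the output toward 0 or 1
    b2 = np.array([-5.0])
    return W1, b1, W2, b2

def predict(params, x):
    W1, b1, W2, b2 = params
    h = sigmoid(W1 @ x + b1)
    return float(sigmoid(W2 @ h + b2))

params = init_from_rule()
print(predict(params, np.array([0.9, 0.3])))  # close to 1: the rule fires
print(predict(params, np.array([0.1, 0.3])))  # close to 0: it does not
```

Training from these weights, rather than from a random initialization, is the kind of "warm start" the surrounding rows describe.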
1902.06007 | 2914225466 | Deep reinforcement learning has seen great success across a breadth of tasks such as in game playing and robotic manipulation. However, the modern practice of attempting to learn tabula rasa disregards the logical structure of many domains and the wealth of readily-available human domain experts' knowledge that could help ``warm start'' the learning process. Further, learning from demonstration techniques are not yet sufficient to infer this knowledge through sampling-based mechanisms in large state and action spaces, or require immense amounts of data. We present a new reinforcement learning architecture that can encode expert knowledge, in the form of propositional logic, directly into a neural, tree-like structure of fuzzy propositions that are amenable to gradient descent. We show that our novel architecture is able to outperform reinforcement and imitation learning techniques across an array of canonical challenge problems for artificial intelligence. | Research in RL has revealed the value of efficient exploration @cite_20 . Recent work has even found that exploration for its own sake can yield improved results in maximizing extrinsic reward @cite_30 . However, much of the advancement in RL has involved building out new approaches that scale well with millions of samples @cite_2 @cite_14 , as advances in compute capability have provided researchers with the ability to cheaply gather hundreds of years' worth of samples in mere days @cite_3 . We introduce a method that seeks to reduce the number of samples needed to begin effective learning in complex domains, and that does not require supervision. Rather than spending hundreds of CPU years of experience taking random actions and losing without making positive steps toward reward, our approach can begin with a plausible policy and explore from initial success, allowing ProLoNet agents to meaningfully learn from the first episode. | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_3",
"@cite_2",
"@cite_20"
],
"mid": [
"2899205164",
"2786036274",
"",
"2766447205",
"2939519298"
],
"abstract": [
"We introduce an exploration bonus for deep reinforcement learning methods that is easy to implement and adds minimal overhead to the computation performed. The bonus is the error of a neural network predicting features of the observations given by a fixed randomly initialized neural network. We also introduce a method to flexibly combine intrinsic and extrinsic rewards. We find that the random network distillation (RND) bonus combined with this increased flexibility enables significant progress on several hard exploration Atari games. In particular we establish state of the art performance on Montezuma's Revenge, a game famously difficult for deep reinforcement learning methods. To the best of our knowledge, this is the first method that achieves better than average human performance on this game without using demonstrations or having access to the underlying state of the game, and occasionally completes the first level.",
"In this work we aim to solve a large collection of tasks using a single reinforcement learning agent with a single set of parameters. A key challenge is to handle the increased amount of data and extended training time, which is already a problem in single task learning. We have developed a new distributed agent IMPALA (Importance-Weighted Actor Learner Architecture) that can scale to thousands of machines and achieve a throughput rate of 250,000 frames per second. We achieve stable learning at high throughput by combining decoupled acting and learning with a novel off-policy correction method called V-trace, which was critical for achieving learning stability. We demonstrate the effectiveness of IMPALA for multi-task reinforcement learning on DMLab-30 (a set of 30 tasks from the DeepMind Lab environment (, 2016)) and Atari-57 (all available Atari games in Arcade Learning Environment (, 2013a)). Our results show that IMPALA is able to achieve better performance than previous agents, use less data and crucially exhibits positive transfer between tasks as a result of its multi-task approach.",
"",
"Starting from zero knowledge and without human data, AlphaGo Zero was able to teach itself to play Go and to develop novel strategies that provide new insights into the oldest of games.",
"Localizing moments in untrimmed videos via language queries is a new and interesting task that requires the ability to accurately ground language into video. Previous works have approached this task by processing the entire video, often more than once, to localize relevant activities. In the real world applications that this task lends itself to, such as surveillance, efficiency is a pivotal trait of a system. In this paper, we present TripNet, an end-to-end system that uses a gated attention architecture to model fine-grained textual and visual representations in order to align text and video content. Furthermore, TripNet uses reinforcement learning to efficiently localize relevant activity clips in long videos, by learning how to intelligently skip around the video. It extracts visual features for fewer frames to perform activity classification. In our evaluation over Charades-STA, ActivityNet Captions and the TACoS dataset, we find that TripNet achieves high accuracy and saves processing time by only looking at 32-41 of the entire video."
]
} |
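The random network distillation bonus cited in the row above admits a compact sketch. What follows is an illustrative linear version, not the paper's deep implementation, and all sizes are arbitrary: a fixed, randomly initialized "target" network is imitated by a trained predictor, and the prediction error serves as the exploration bonus, shrinking on states the agent visits often.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, feat = 8, 4
target = rng.normal(size=(feat, dim))   # fixed, randomly initialized network
predictor = np.zeros((feat, dim))       # trained online to imitate the target

def bonus(state):
    """Exploration bonus = squared prediction error against the fixed target."""
    err = target @ state - predictor @ state
    return float(err @ err)

def update(state, lr=0.01):
    """One gradient step on the predictor's squared error at this state."""
    global predictor
    err = target @ state - predictor @ state
    predictor += lr * np.outer(err, state)

familiar = rng.normal(size=dim)
novel = rng.normal(size=dim)
for _ in range(500):
    update(familiar)                    # the agent keeps revisiting this state

print(bonus(familiar) < bonus(novel))   # familiar states pay a smaller bonus
```

Adding such a bonus to the extrinsic reward is one way "exploration for its own sake" can be operationalized.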
1902.06007 | 2914225466 | Deep reinforcement learning has seen great success across a breadth of tasks such as in game playing and robotic manipulation. However, the modern practice of attempting to learn tabula rasa disregards the logical structure of many domains and the wealth of readily-available human domain experts' knowledge that could help ``warm start'' the learning process. Further, learning from demonstration techniques are not yet sufficient to infer this knowledge through sampling-based mechanisms in large state and action spaces, or require immense amounts of data. We present a new reinforcement learning architecture that can encode expert knowledge, in the form of propositional logic, directly into a neural, tree-like structure of fuzzy propositions that are amenable to gradient descent. We show that our novel architecture is able to outperform reinforcement and imitation learning techniques across an array of canonical challenge problems for artificial intelligence. | Imitation learning in the RL domain, however, often requires large batch datasets on which to train. Methods such as DAgger @cite_32 and ILPO @cite_26 require large labeled datasets, and approaches that combine imitating and exploring, such as LOKI @cite_10 , require a pre-trained policy or heuristic to act as an oracle. Even with a pre-trained policy, LOKI still requires extensive domain experience before beginning the reinforcement learning stage. A human can also act as an oracle for imitation learning, but it is not reasonable to expect a human to patiently label replay data for the entirety of an imitation-learning agent's life @cite_4 . While there are many methods for extracting policies or general ``rules of thumb'' from humans @cite_9 @cite_25 @cite_19 , these heuristics or rules must be translated into oracles which can be used to provide labels for imitation learning systems, and then these oracles must be run over large amounts of data. Our approach can leverage the same human factors research for extracting policies from humans, though we translate them directly into an RL agent's policy and begin RL immediately, sidestepping the imitation learning phase. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_9",
"@cite_32",
"@cite_19",
"@cite_10",
"@cite_25"
],
"mid": [
"2804020409",
"1501005121",
"1991691398",
"2962957031",
"1969897050",
"2804930149",
"1664374833"
],
"abstract": [
"In this paper, we describe a novel approach to imitation learning that infers latent policies directly from state observations. We introduce a method that characterizes the causal effects of latent actions on observations while simultaneously predicting their likelihood. We then outline an action alignment procedure that leverages a small amount of environment interactions to determine a mapping between the latent and real-world actions. We show that this corrected labeling can be used for imitating the observed behavior, even though no expert actions are given. We evaluate our approach within classic control environments and a platform game and demonstrate that it performs better than standard approaches. Code for this work is available at this https URL.",
"Intelligent systems that learn interactively from their end-users are quickly becoming widespread. Until recently, this progress has been fueled mostly by advances in machine learning; however, more and more researchers are realizing the importance of studying users of these systems. In this article we promote this approach and demonstrate how it can result in better user experiences and more effective learning systems. We present a number of case studies that characterize the impact of interactivity, demonstrate ways in which some existing systems fail to account for the user, and explore new ways for learning systems to interact with their users. We argue that the design process for interactive machine learning systems should involve users at all stages: explorations that reveal human interaction patterns and inspire novel interaction methods, as well as refinement stages to tune details of the interface and choose among alternatives. After giving a glimpse of the progress that has been made so far, we discuss the challenges that we face in moving the field forward.",
"Contents: Preface. An Applied Information-Processing Psychology. Part I: Science Base. The Human Information-Processor. Part II: Text-Editing. System and User Variability. An Exercise in Task Analysis. The GOMS Model of Manuscript Editing. Extensions of the GOMS Analysis. Models of Devices for Text Selection. Part III: Engineering Models. The Keystroke-Level Model. The Unit-Task Level of Analysis. Part IV: Extensions and Generalizations. An Exploration into Circuit Design. Cognitive Skill. Applying Psychology to Design Reprise.",
"Sequential prediction problems such as imitation learning, where future observations depend on previous predictions (actions), violate the common i.i.d. assumptions made in statistical learning. This leads to poor performance in theory and often in practice. Some recent approaches (Daumé III et al., 2009; Ross and Bagnell, 2010) provide stronger guarantees in this setting, but remain somewhat unsatisfactory as they train either non-stationary or stochastic policies and require a large number of iterations. In this paper, we propose a new iterative algorithm, which trains a stationary deterministic policy, that can be seen as a no regret algorithm in an online learning setting. We show that any such no regret algorithm, combined with additional reduction assumptions, must find a policy with good performance under the distribution of observations it induces in such sequential settings. We demonstrate that this new approach outperforms previous approaches on two challenging imitation learning problems and a benchmark sequence labeling problem.",
"We have developed learning and interaction algorithms to support a human teaching hierarchical task models to a robot using a single demonstration in the context of a mixed-initiative interaction with bi-directional communication. In particular, we have identified and implemented two important heuristics for suggesting task groupings based on the physical structure of the manipulated artifact and on the data flow between tasks. We have evaluated our algorithms with users in a simulated environment and shown both that the overall approach is usable and that the grouping suggestions significantly improve the learning and interaction.",
"Imitation learning (IL) consists of a set of tools that leverage expert demonstrations to quickly learn policies. However, if the expert is suboptimal, IL can yield policies with inferior performance compared to reinforcement learning (RL). In this paper, we aim to provide an algorithm that combines the best aspects of RL and IL. We accomplish this by formulating several popular RL and IL algorithms in a common mirror descent framework, showing that these algorithms can be viewed as a variation on a single approach. We then propose LOKI, a strategy for policy learning that first performs a small but random number of IL iterations before switching to a policy gradient RL method. We show that if the switching time is properly randomized, LOKI can learn to outperform a suboptimal expert and converge faster than running policy gradient from scratch. Finally, we evaluate the performance of LOKI experimentally in several simulated environments.",
"Cognitive Task Analysis (CTA) helps researchers understand how cognitive skills and strategies make it possible for people to act effectively and get things done. CTA can yield information people need -- employers faced with personnel issues, market researchers who want to understand the thought processes of consumers, trainers and others who design instructional systems, health care professionals who want to apply lessons learned from errors and accidents, systems analysts developing user specifications, and many other professionals. CTA can show what makes the workplace work -- and what keeps it from working as well as it might. Working Minds is a true handbook, offering a set of tools for doing CTA: methods for collecting data about cognitive processes and events, analyzing them, and communicating them effectively. It covers both the \"why\" and the \"how\" of CTA methods, providing examples, guidance, and stories from the authors' own experiences as CTA practitioners. Because effective use of CTA depends on some conceptual grounding in cognitive theory and research -- on knowing what a cognitive perspective can offer -- the book also offers an overview of current research on cognition. The book provides detailed guidance for planning and carrying out CTA, with chapters on capturing knowledge and capturing the way people reason. It discusses studying cognition in real-world settings and the challenges of rapidly changing technology. And it describes key issues in applying CTA findings in a variety of fields. Working Minds makes the methodology of CTA accessible and the skills involved attainable."
]
} |
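DAgger, mentioned in the row above, can be sketched in a few lines on a toy problem; every helper name here is invented for illustration. The key idea is that the states visited by the learner's own policy are relabeled by the expert and aggregated into an ever-growing training set before the learner is refit.

```python
def dagger(visit_states, expert, fit, n_iters=3):
    """Minimal DAgger loop: roll out the current policy, have the expert
    label the states it actually visits, aggregate, and refit."""
    dataset = []
    policy = lambda s: 0                 # arbitrary initial policy
    for _ in range(n_iters):
        for s in visit_states(policy):   # states reached under current policy
            dataset.append((s, expert(s)))
        policy = fit(dataset)
    return policy

# Toy task: 10 discrete states; the expert acts 1 on the upper half.
expert = lambda s: 1 if s >= 5 else 0

def fit(data):
    """A trivial learner: memorize the aggregated (state, action) pairs."""
    table = dict(data)
    return lambda s: table.get(s, 0)

visit_states = lambda policy: range(10)  # real rollouts would depend on policy

learned = dagger(visit_states, expert, fit)
print([learned(s) for s in range(10)])   # matches the expert on every state
```

The loop's reliance on an ever-available expert labeler is exactly the cost the surrounding row argues against.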
1902.06000 | 2914081143 | Semantic parsing using hierarchical representations has recently been proposed for task oriented dialog with promising results [ 2018]. In this paper, we present three different improvements to the model: contextualized embeddings, ensembling, and pairwise re-ranking based on a language model. We taxonomize the errors possible for the hierarchical representation, such as wrong top intent, missing spans or split spans, and show that the three approaches correct different kinds of errors. The best model combines the three techniques and gives 6.4% better exact match accuracy than the state-of-the-art, with an error reduction of 33%, resulting in a new state-of-the-art result on the Task Oriented Parsing (TOP) dataset. | Our work builds on top of two related but distinct directions of research. At one end, there has been a large literature on language understanding for task oriented dialog, such as the work that tackles the ATIS and DSTC datasets @cite_8 @cite_16 . Most work in this area assumes that the utterance is not compositional. The current state-of-the-art @cite_9 frames the problem as one of non-recursive intent and slot tagging, and assumes that the NLU output is passed along to a dialog manager in order to be executed. There has also been work on end-to-end task oriented dialog @cite_11 , but there too, the problem is usually framed as one of selecting a single API call and its arguments, as opposed to compositional API calls. | {
"cite_N": [
"@cite_9",
"@cite_16",
"@cite_11",
"@cite_8"
],
"mid": [
"2964117975",
"2963974889",
"2964210218",
"2399456070"
],
"abstract": [
"This paper investigates the framework of encoder-decoder with attention for sequence labelling based spoken language understanding. We introduce Bidirectional Long Short Term Memory - Long Short Term Memory networks (BLSTM-LSTM) as the encoder-decoder model to fully utilize the power of deep learning. In the sequence labelling task, the input and output sequences are aligned word by word, while the attention mechanism cannot provide the exact alignment. To address this limitation, we propose a novel focus mechanism for encoder-decoder framework. Experiments on the standard ATIS dataset showed that BLSTM-LSTM with focus mechanism defined the new state-of-the-art by outperforming standard BLSTM and attention based encoder-decoder. Further experiments also show that the proposed model is more robust to speech recognition errors.",
"",
"Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (, 2014a). We show similar result patterns on data extracted from an online concierge service.",
"One of the key problems in spoken language understanding (SLU) is the task of slot filling. In light of the recent success of applying deep neural network technologies in domain detection and intent identification, we carried out an in-depth investigation on the use of recurrent neural networks for the more difficult task of slot filling involving sequence discrimination. In this work, we implemented and compared several important recurrent-neural-network architectures, including the Elman-type and Jordan-type recurrent networks and their variants. To make the results easy to reproduce and compare, we implemented these networks on the common Theano neural network toolkit, and evaluated them on the ATIS benchmark. We also compared our results to a conditional random fields (CRF) baseline. Our results show that on this task, both types of recurrent networks outperform the CRF baseline substantially, and a bi-directional Jordantype network that takes into account both past and future dependencies among slots works best, outperforming a CRFbased baseline by 14 in relative error reduction."
]
} |
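The flat, non-recursive intent-and-slot framing described in the row above (as used for datasets like ATIS) is commonly realized as BIO sequence tagging. A minimal decoder for such tags might look as follows; the slot names are illustrative only:

```python
def decode_bio(tokens, tags):
    """Collect (slot_name, text) spans from flat BIO tags."""
    spans, name, words = [], None, []
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):                    # a new slot begins
            if name is not None:
                spans.append((name, " ".join(words)))
            name, words = tag[2:], [tok]
        elif tag.startswith("I-") and name == tag[2:]:
            words.append(tok)                       # continue the open slot
        else:                                       # "O" or a broken I- tag
            if name is not None:
                spans.append((name, " ".join(words)))
            name, words = None, []
    if name is not None:
        spans.append((name, " ".join(words)))
    return spans

tokens = "show flights from boston to new york".split()
tags = ["O", "O", "O", "B-fromloc", "O", "B-toloc", "I-toloc"]
print(decode_bio(tokens, tags))  # [('fromloc', 'boston'), ('toloc', 'new york')]
```

Because each token carries exactly one tag, nested or compositional requests cannot be represented this way, which is the limitation the hierarchical TOP representation addresses.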
1902.06000 | 2914081143 | Semantic parsing using hierarchical representations has recently been proposed for task oriented dialog with promising results [ 2018]. In this paper, we present three different improvements to the model: contextualized embeddings, ensembling, and pairwise re-ranking based on a language model. We taxonomize the errors possible for the hierarchical representation, such as wrong top intent, missing spans or split spans, and show that the three approaches correct different kinds of errors. The best model combines the three techniques and gives 6.4% better exact match accuracy than the state-of-the-art, with an error reduction of 33%, resulting in a new state-of-the-art result on the Task Oriented Parsing (TOP) dataset. | Within both of these areas, neural approaches have supplanted previous feature-engineering based approaches in recent years @cite_22 @cite_17 . In the context of tree-structured semantic parsing, some other interesting approaches include Seq2Tree @cite_12 which modifies the standard Seq2Seq decoder to better output trees; SCANNER @cite_23 @cite_13 which extends the RNNG formulation specifically for semantic parsing such that the output is no longer coupled with the input; and TRANX @cite_7 and Abstract Syntax Network @cite_2 which generate code along a programming language schema. For graph-structured semantic parsing @cite_15 @cite_3 , SLING @cite_4 produces graph-structured parses by modeling semantic parsing as a neural transition parsing problem with a more expressive transition tag set. While graph structures can provide more detailed semantics, they are more difficult to parse and can be overkill for understanding task oriented utterances. | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_3",
"@cite_23",
"@cite_2",
"@cite_15",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"2766841988",
"2473329891",
"2890867094",
"2740765036",
"2963960541",
"2962728167",
"2252123671",
"2610301736",
"2224454470",
"2611818442"
],
"abstract": [
"We describe SLING, a framework for parsing natural language into semantic frames. SLING supports general transition-based, neural-network parsing with bidirectional LSTM input encoding and a Transition Based Recurrent Unit (TBRU) for output decoding. The parsing model is trained end-to-end using only the text tokens as input. The transition system has been designed to output frame graphs directly without any intervening symbolic representation. The SLING framework includes an efficient and scalable frame store implementation as well as a neural network JIT compiler for fast inference during parsing. SLING is implemented in C++ and it is available for download on GitHub.",
"Sequence-to-sequence deep learning has recently emerged as a new paradigm in supervised learning for spoken language understanding. However, most of the previous studies explored this framework for building single domain models for each task, such as slot filling or domain classification, comparing deep learning based approaches with conventional ones like conditional random fields. This paper proposes a holistic multi-domain, multi-task (i.e. slot filling, domain and intent detection) modeling approach to estimate complete semantic frames for all user utterances addressed to a conversational system, demonstrating the distinctive power of deep learning methods, namely bi-directional recurrent neural network (RNN) with long-short term memory (LSTM) cells (RNN-LSTM) to handle such complexity. The contributions of the presented work are three-fold: (i) we propose an RNN-LSTM architecture for joint modeling of slot filling, intent determination, and domain classification; (ii) we build a joint multi-domain model enabling multi-task deep learning where the data from each domain reinforces each other; (iii) we investigate alternative architectures for modeling lexical context in spoken language understanding. In addition to the simplicity of the single model framework, experimental results show the power of such an approach on Microsoft Cortana real user data over alternative methods based on single domain task deep learning.",
"",
"",
"This article describes a neural semantic parser that maps natural language utterances onto logical forms that can be executed against a task-specific environment, such as a knowledge base or a data...",
"",
"We describe Abstract Meaning Representation (AMR), a semantic representation language in which we are writing down the meanings of thousands of English sentences. We hope that a sembank of simple, whole-sentence semantic structures will spur new work in statistical natural language understanding and generation, like the Penn Treebank encouraged work on statistical parsing. This paper gives an overview of AMR and tools associated with it.",
"We introduce a neural semantic parser that converts natural language utterances to intermediate representations in the form of predicate-argument structures, which are induced with a transition system and subsequently mapped to target domains. The semantic parser is trained end-to-end using annotated logical forms or their denotations. We obtain competitive results on various datasets. The induced predicate-argument structures shed light on the types of representations useful for semantic parsing and how these are different from linguistically motivated ones.",
"Semantic parsing aims at mapping natural language to machine interpretable meaning representations. Traditional approaches rely on high-quality lexicons, manually-built templates, and linguistic features which are either domain- or representation-specific. In this paper we present a general method based on an attention-enhanced encoder-decoder model. We encode input utterances into vector representations, and generate their logical forms by conditioning the output sequences or trees on the encoding vectors. Experimental results on four datasets show that our approach performs competitively without using hand-engineered features and is easy to adapt across domains and meaning representations.",
"We present an approach to rapidly and easily build natural language interfaces to databases for new domains, whose performance improves over time based on user feedback, and requires minimal intervention. To achieve this, we adapt neural sequence models to map utterances directly to SQL with its full expressivity, bypassing any intermediate meaning representations. These models are immediately deployed online to solicit feedback from real users to flag incorrect queries. Finally, the popularity of SQL facilitates gathering annotations for incorrect predictions using the crowd, which is directly used to improve our models. This complete feedback loop, without intermediate representations or database specific engineering, opens up new ways of building high quality semantic parsers. Experiments suggest that this approach can be deployed quickly for any new target domain, as we show by learning a semantic parser for an online academic database from scratch."
]
} |
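The tree-structured outputs discussed in the row above (TOP-style intent/slot trees) are typically serialized as bracketed strings. A small recursive reader could look like the sketch below; the example labels follow the TOP naming convention but are chosen for illustration:

```python
def parse_top(serialized):
    """Read a bracketed tree like '[IN:X ... [SL:Y ... ] ]' into
    (label, children) tuples; plain words stay as leaf strings."""
    tokens = serialized.replace("[", " [ ").replace("]", " ] ").split()

    def read(i):
        assert tokens[i] == "[", "tree must start with '['"
        label, i = tokens[i + 1], i + 2
        children = []
        while tokens[i] != "]":
            if tokens[i] == "[":
                child, i = read(i)      # recurse into a nested intent/slot
            else:
                child, i = tokens[i], i + 1
            children.append(child)
        return (label, children), i + 1

    tree, _ = read(0)
    return tree

tree = parse_top("[IN:GET_DIRECTIONS drive to [SL:DESTINATION the stadium ] ]")
print(tree[0])        # 'IN:GET_DIRECTIONS'
print(tree[1][2][0])  # 'SL:DESTINATION'
```

Exact-match accuracy, the metric reported in the abstract, simply compares two such trees for equality.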
1902.06228 | 2911261584 | Ride-sourcing services are now reshaping the way people travel by effectively connecting drivers and passengers through the mobile internet. Online matching between idle drivers and waiting passengers is one of the key components in a ride-sourcing system. The average pickup distance or time is an important measurement of system efficiency since it affects both passengers' waiting time and drivers' utilization rate. It is naturally expected that a more effective bipartite matching (with smaller average pickup time) can be implemented if the platform accumulates more idle drivers and waiting passengers in the matching pool. A specific passenger request can also benefit from a delayed matching since he/she may be matched with closer idle drivers after waiting for a few seconds. Motivated by the potential benefits of delayed matching, this paper establishes a two-stage framework which incorporates combinatorial optimization and multi-agent deep reinforcement learning methods. The multi-agent reinforcement learning methods are used to dynamically determine the delay time for each passenger request (or the time at which each request enters the matching pool), while the combinatorial optimization conducts an optimal bipartite matching between idle drivers and waiting passengers in the matching pool. Two reinforcement learning methods, spatio-temporal multi-agent deep Q learning (ST-M-DQN) and spatio-temporal multi-agent actor-critic (ST-M-A2C), are developed. Through extensive empirical experiments with a well-designed simulator, we show that the proposed framework is able to remarkably improve system performance. | Taxi dispatch, or driver dispatch, is a term usually referring to the process of matching vacant drivers with passengers’ requests using some algorithms to maximize the system’s performance. Traditional dispatch systems maximize the driver acceptance rate for each individual order by sequentially dispatching taxis to riders. @cite_0 proposed to dispatch taxis to serve multiple bookings at the same time, thus maximizing the global success rate. @cite_2 considered the individual participant’s benefit and proposed a notion of a stable match. @cite_20 constructed an end-to-end framework to predict the future supply and demand in order to optimally schedule the drivers in advance. @cite_0 investigated the preferred service and proposed a recommendation system to enhance the prediction accuracy and reduce the user’s effort in finding the desired service. | {
"cite_N": [
"@cite_0",
"@cite_20",
"@cite_2"
],
"mid": [
"",
"2614121823",
"1817264369"
],
"abstract": [
"",
"The online car-hailing service has gained great popularity all over the world. As more passengers and more drivers use the service, it becomes increasingly important for the car-hailing service providers to effectively schedule the drivers to minimize the waiting time of passengers and maximize the driver utilization, thus improving the overall user experience. In this paper, we study the problem of predicting the real-time car-hailing supply-demand, which is one of the most important components of an effective scheduling system. Our objective is to predict the gap between the car-hailing supply and demand in a certain area in the next few minutes. Based on the prediction, we can balance the supply-demands by scheduling the drivers in advance. We present an end-to-end framework called Deep Supply-Demand (DeepSD) using a novel deep neural network structure. Our approach can automatically discover complicated supply-demand patterns from the car-hailing service data while requiring only a minimal amount of hand-crafted features. Moreover, our framework is highly flexible and extendable. Based on our framework, it is very easy to utilize multiple data sources (e.g., car-hailing orders, weather and traffic data) to achieve high accuracy. We conduct extensive experimental evaluations, which show that our framework provides more accurate prediction results than the existing methods.",
"Dynamic ride-sharing systems enable people to share rides and increase the efficiency of urban transportation by connecting riders and drivers on short notice. Automated systems that establish ride-share matches with minimal input from participants provide the most convenience and the most potential for system-wide performance improvement, such as reduction in total vehicle-miles traveled. Indeed, such systems may be designed to match riders and drivers to maximize system performance improvement. However, system-optimal matches may not provide the maximum benefit to each individual participant. In this paper we consider a notion of stability for ride-share matches and present several mathematical programming methods to establish stable or nearly-stable matches, where we note that ride-share matching optimization is performed over time with incomplete information. Our numerical experiments using travel demand data for the metropolitan Atlanta region show that we can significantly increase the stability of ride-share matching solutions at the cost of only a small degradation in system-wide performance."
]
} |
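The stability notion in the ride-share matching abstract above is the classical one from two-sided matching theory. As an illustrative aside (rider/driver names invented; the paper itself uses mathematical programming over time, not this algorithm), a minimal deferred-acceptance (Gale-Shapley) sketch produces a stable one-to-one matching:

```python
# Hypothetical sketch: riders propose to drivers in preference order;
# a driver tentatively holds its best proposal so far. The result is
# a stable matching (no rider-driver pair prefers each other to their
# assigned partners).
def gale_shapley(rider_prefs, driver_prefs):
    rank = {d: {r: i for i, r in enumerate(p)} for d, p in driver_prefs.items()}
    match = {}                        # driver -> currently held rider
    free = list(rider_prefs)          # riders still unmatched
    nxt = {r: 0 for r in rider_prefs} # next driver each rider proposes to
    while free:
        r = free.pop()
        d = rider_prefs[r][nxt[r]]
        nxt[r] += 1
        if d not in match:
            match[d] = r
        elif rank[d][r] < rank[d][match[d]]:
            free.append(match[d])     # driver trades up; old rider freed
            match[d] = r
        else:
            free.append(r)            # proposal rejected
    return match

riders = {"r1": ["d1", "d2"], "r2": ["d1", "d2"]}
drivers = {"d1": ["r2", "r1"], "d2": ["r1", "r2"]}
m = gale_shapley(riders, drivers)
assert m == {"d1": "r2", "d2": "r1"}
```

Both riders prefer d1, but d1 prefers r2, so r1 ends up with d2; no blocking pair exists.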
1902.06155 | 2972267247 | Sum-Product Networks (SPNs) are hierarchical, probabilistic graphical models capable of fast and exact inference that can be trained directly from high-dimensional, noisy data. Traditionally, SPNs struggle with capturing relationships in complex spatial data such as images. To this end, we introduce Deep Generalized Convolutional Sum-Product Networks (DGC-SPNs), which encode spatial features through products and sums with scopes corresponding to local receptive fields. As opposed to existing convolutional SPNs, DGC-SPNs allow for overlapping convolution patches through a novel parameterization of dilation and strides, resulting in significantly improved feature coverage and feature resolution. DGC-SPNs substantially outperform other convolutional and non-convolutional SPN approaches across several visual datasets and for both generative and discriminative tasks, including image completion and image classification. In addition, we demonstrate a modification to hard EM learning that further improves the generative performance of DGC-SPNs. While fully probabilistic and versatile, our model is scalable and straightforward to apply in practical applications in place of traditional deep models. Our implementation is tensorized, employs efficient GPU-accelerated optimization techniques, and is available as part of an open-source library based on TensorFlow. | In @cite_19 , image completion was performed on the Olivetti @cite_4 and Caltech @cite_6 datasets, showing superior results compared to other generative models. | {
"cite_N": [
"@cite_19",
"@cite_4",
"@cite_6"
],
"mid": [
"2040370888",
"2103560185",
"2155904486"
],
"abstract": [
"The key limiting factor in graphical model inference and learning is the complexity of the partition function. We thus ask the question: what are the most general conditions under which the partition function is tractable? The answer leads to a new kind of deep architecture, which we call sum-product networks (SPNs) and will present in this abstract.",
"Recent work on face identification using continuous density Hidden Markov Models (HMMs) has shown that stochastic modelling can be used successfully to encode feature information. When frontal images of faces are sampled using top-bottom scanning, there is a natural order in which the features appear and this can be conveniently modelled using a top-bottom HMM. However, a top-bottom HMM is characterised by different parameters, the choice of which has so far been based on subjective intuition. This paper presents a set of experimental results in which various HMM parameterisations are analysed.",
"Current computational approaches to learning visual object categories require thousands of training images, are slow, cannot learn in an incremental manner and cannot incorporate prior information into the learning process. In addition, no algorithm presented in the literature has been tested on more than a handful of object categories. We present a method for learning object categories from just a few training images. It is quick and it uses prior information in a principled way. We test it on a dataset composed of images of objects belonging to 101 widely varied categories. Our proposed method is based on making use of prior information, assembled from (unrelated) object categories which were previously learnt. A generative probabilistic model is used, which represents the shape and appearance of a constellation of features belonging to the object. The parameters of the model are learnt incrementally in a Bayesian manner. Our incremental algorithm is compared experimentally to an earlier batch Bayesian algorithm, as well as to one based on maximum-likelihood. The incremental and batch versions have comparable classification performance on small training sets, but incremental learning is significantly faster, making real-time learning feasible. Both Bayesian methods outperform maximum likelihood on small training sets."
]
} |
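As a hypothetical aside to make the SPN row above concrete (all names invented, not the DGC-SPN implementation): a sum-product network composes weighted sum nodes (mixtures), product nodes over disjoint variable scopes, and univariate distribution leaves, so computing an exact probability is a single bottom-up pass:

```python
import math

def bernoulli_leaf(var, p):
    # Leaf over one binary variable: returns P(assignment[var]).
    return lambda a: p if a[var] else 1.0 - p

def product_node(children):
    # Children must have disjoint scopes (here: different variables).
    return lambda a: math.prod(c(a) for c in children)

def sum_node(weights, children):
    # Weights sum to 1, giving a mixture of child distributions.
    return lambda a: sum(w * c(a) for w, c in zip(weights, children))

# Toy SPN over two binary variables (x0, x1): a 2-component mixture
# of product (fully factorized) distributions.
spn = sum_node(
    [0.6, 0.4],
    [product_node([bernoulli_leaf(0, 0.9), bernoulli_leaf(1, 0.9)]),
     product_node([bernoulli_leaf(0, 0.1), bernoulli_leaf(1, 0.1)])])

# Validity check: the four assignment probabilities sum to 1.
total = sum(spn((x0, x1)) for x0 in (0, 1) for x1 in (0, 1))
assert abs(total - 1.0) < 1e-9
```

Each evaluation visits every node once, which is why inference in SPNs is linear in network size.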
1902.06273 | 2967589260 | The integration of visual-tactile stimulus is common while humans perform daily tasks. In contrast, using unimodal visual or tactile perception limits the perceivable dimensionality of a subject. However, it remains a challenge to integrate the visual and tactile perception to facilitate robotic tasks. In this paper, we propose a novel framework for the cross-modal sensory data generation for visual and tactile perception. Taking texture perception as an example, we apply conditional generative adversarial networks to generate pseudo visual images or tactile outputs from data of the other modality. Extensive experiments on the ViTac dataset of cloth textures show that the proposed method can produce realistic outputs from other sensory inputs. We adopt the structural similarity index to evaluate the similarity of the generated output and real data, and the results show that realistic data have been generated. Classification evaluation has also been performed to show that the inclusion of generated data can improve the perception performance. The proposed framework has the potential to expand datasets for classification tasks, generate sensory outputs that are not easy to access, and also advance integrated visual-tactile perception. | Vision and touch sensing are two important modalities in perception. Both have been widely applied in robot tasks, usually with only one modality used @cite_10 @cite_19 @cite_30 . It is still challenging to combine vision and touch modalities to facilitate robot operations due to their different sensing principles and data structures. In @cite_7 , vision and tactile samples are paired to classify materials using dimensionality reduction techniques. In @cite_32 , tactile contacts are localized in a visual map by matching the tactile features with visual features.
Vision and touch data are combined to reconstruct a point cloud representation, with no learning of the key features of the two modalities, in @cite_1 . Deep neural networks have also been used to extract adjective features from both vision and tactile data @cite_15 @cite_28 . In more recent work @cite_31 , a cross-modal framework is proposed for visuo-tactile object recognition. Different from prior works on learning a subspace of vision and touch, we take a step further to generate new tactile-visual data. | {
"cite_N": [
"@cite_30",
"@cite_31",
"@cite_7",
"@cite_28",
"@cite_1",
"@cite_32",
"@cite_19",
"@cite_15",
"@cite_10"
],
"mid": [
"2295747340",
"2737661303",
"2114128011",
"",
"2737380187",
"1560331997",
"",
"2963654998",
"2963263790"
],
"abstract": [
"This letter presents a strategy to represent data from a tactile array sensor and match it to an object’s geometric features. Using that representation, a method is presented to localise a grasped object within a robot hand. The method consists of computing the covariance matrix in the tactile sensors’ pressure data and computing the eigenbasis from its principal axes. A global search is carried out to find a pose in which the object’s local geometry in the vicinity of the contact is coherent with that basis, i.e., is aligned with the principal axes and has similar variances. This approach, which can be used as a measurement model for tactile sensors, is compared and outperforms methods using the distance between the tactile sensor elements and object surface.",
"In this work, we propose a framework to deal with cross-modal visuo-tactile object recognition. By cross-modal visuo-tactile object recognition, we mean that the object recognition algorithm is trained only with visual data and is able to recognize objects leveraging only tactile perception. The proposed cross-modal framework is constituted by three main elements. The first is a unified representation of visual and tactile data, which is suitable for cross-modal perception. The second is a set of features able to encode the chosen representation for classification applications. The third is a supervised learning algorithm, which takes advantage of the chosen descriptor. In order to show the results of our approach, we performed experiments with 15 objects common in domestic and industrial environments. Moreover, we compare the performance of the proposed framework with the performance of 10 humans in a simple cross-modal recognition task.",
"Dynamic tactile sensing is a fundamental ability to recognize materials and objects. However, while humans are born with partially developed dynamic tactile sensing and quickly master this skill, today's robots remain in their infancy. The development of such a sense requires not only better sensors but the right algorithms to deal with these sensors' data as well. For example, when classifying a material based on touch, the data are noisy, high-dimensional, and contain irrelevant signals as well as essential ones. Few classification methods from machine learning can deal with such problems. In this paper, we propose an efficient approach to infer suitable lower dimensional representations of the tactile data. In order to classify materials based on only the sense of touch, these representations are autonomously discovered using visual information of the surfaces during training. However, accurately pairing vision and tactile samples in real-robot applications is a difficult problem. The proposed approach, therefore, works with weak pairings between the modalities. Experiments show that the resulting approach is very robust and yields significantly higher classification performance based on only dynamic tactile sensing.",
"",
"We present an object-tracking framework that fuses point cloud information from an RGB-D camera with tactile information from a GelSight contact sensor. GelSight can be treated as a source of dense local geometric information, which we incorporate directly into a conventional point-cloud-based articulated object tracker based on signed-distance functions. Our implementation runs at 12 Hz using an online depth reconstruction algorithm for GelSight and a modified second-order update for the tracking algorithm. We present data from hardware experiments demonstrating that the addition of contact-based geometric information significantly improves the pose accuracy during contact, and provides robustness to occlusions of small objects by the robot's end effector.",
"This paper presents a novel framework for integration of vision and tactile sensing by localizing tactile readings in a visual object map. Intuitively, there are some correspondences, e.g., prominent features, between visual and tactile object identification. To apply it in robotics, we propose to localize tactile readings in visual images by sharing same sets of feature descriptors through two sensing modalities. It is then treated as a probabilistic estimation problem solved in a framework of recursive Bayesian filtering. Feature-based measurement model and Gaussian based motion model are thus built. In our tests, a tactile array sensor is utilized to generate tactile images during interaction with objects and the results have proven the feasibility of our proposed framework.",
"",
"Robots which interact with the physical world will benefit from a fine-grained tactile understanding of objects and surfaces. Additionally, for certain tasks, robots may need to know the haptic properties of an object before touching it. To enable better tactile understanding for robots, we propose a method of classifying surfaces with haptic adjectives (e.g., compressible or smooth) from both visual and physical interaction data. Humans typically combine visual predictions and feedback from physical interactions to accurately predict haptic properties and interact with the world. Inspired by this cognitive pattern, we propose and explore a purely visual haptic prediction model. Purely visual models enable a robot to “feel” without physical interaction. Furthermore, we demonstrate that using both visual and physical interaction signals together yields more accurate haptic classification. Our models take advantage of recent advances in deep neural networks by employing a unified approach to learning features for physical interaction and visual observations. Even though we employ little domain specific knowledge, our model still achieves better results than methods based on hand-designed features.",
"Touch sensing can help robots understand their surrounding environment, and in particular the objects they interact with. To this end, roboticists have, in the last few decades, developed several tactile sensing solutions, extensively reported in the literature. Research into interpreting the conveyed tactile information has also started to attract increasing attention in recent years. However, a comprehensive study on this topic is yet to be reported. In an effort to collect and summarize the major scientific achievements in the area, this survey extensively reviews current trends in robot tactile perception of object properties. Available tactile sensing technologies are briefly presented before an extensive review on tactile recognition of object properties. The object properties that are targeted by this review are shape, surface material and object pose. The role of touch sensing in combination with other sensing sources is also discussed. In this review, open issues are identified and future directions for applying tactile sensing in different tasks are suggested."
]
} |
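The structural similarity index used in the row above is usually computed over local windows and averaged; as a simplified, hypothetical sketch, the global (whole-image) form of the SSIM formula compares means, variances, and covariance:

```python
import numpy as np

# Simplified global SSIM (the standard metric uses sliding windows;
# this whole-image version is for illustration only). L is the dynamic
# range of the pixel values; k1, k2 are the usual small constants.
def ssim_global(x, y, L=1.0, k1=0.01, k2=0.03):
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

a = np.random.default_rng(0).random((8, 8))
assert abs(ssim_global(a, a) - 1.0) < 1e-9  # identical images score 1
assert ssim_global(a, 1.0 - a) < 1.0        # dissimilar images score lower
```

A score near 1 indicates the generated output closely matches the real data in luminance, contrast, and structure.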
1902.05624 | 2911436395 | In recent years, Generative Adversarial Networks (GANs) have demonstrated significant progress in generating authentic-looking data. In this work we introduce our simple method to exploit the advancements in well-established image-based GANs to synthesise single-channel time series data. We implement Wasserstein GANs (WGANs) with gradient penalty due to their stability in training to synthesise three different types of data: sinusoidal data, photoplethysmograph (PPG) data and electrocardiograph (ECG) data. The length of the returned time series data is limited only by the image resolution; we use an image size of 64x64 pixels, which yields 4096 data points. We present both visual and quantitative evidence that our novel method can successfully generate time series data using image-based GANs. | Few studies have used GANs to produce time series data, as they have mainly been developed for the generation of images. Some recent results showed promise in synthesising time series data @cite_4 @cite_5 . @cite_5 used channel FCC4h recorded from a 128-electrode electroencephalograph (EEG) system, down-sampled to 250 Hz, as training samples for their EEG-GAN framework. With this they demonstrated the ability of their EEG-GAN to generate time series EEG data of up to 768 time samples. | {
"cite_N": [
"@cite_5",
"@cite_4"
],
"mid": [
"2806591294",
"2622068151"
],
"abstract": [
"Generative adversarial networks (GANs) are recently highly successful in generative applications involving images and start being applied to time series data. Here we describe EEG-GAN as a framework to generate electroencephalographic (EEG) brain signals. We introduce a modification to the improved training of Wasserstein GANs to stabilize training and investigate a range of architectural choices critical for time series generation (most notably up- and down-sampling). For evaluation we consider and compare different metrics such as Inception score, Frechet inception distance and sliced Wasserstein distance, together showing that our EEG-GAN framework generated naturalistic EEG examples. It thus opens up a range of new generative application scenarios in the neuroscientific and neurological context, such as data augmentation in brain-computer interfacing tasks, EEG super-sampling, or restoration of corrupted data segments. The possibility to generate signals of a certain class and or with specific properties may also open a new avenue for research into the underlying structure of brain signals.",
"Generative Adversarial Networks (GANs) have shown remarkable success as a framework for training models to produce realistic-looking data. In this work, we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to produce realistic real-valued multi-dimensional time series, with an emphasis on their application to medical data. RGANs make use of recurrent neural networks (RNNs) in the generator and the discriminator. In the case of RCGANs, both of these RNNs are conditioned on auxiliary information. We demonstrate our models in a set of toy datasets, where we show visually and quantitatively (using sample likelihood and maximum mean discrepancy) that they can successfully generate realistic time-series. We also describe novel evaluation methods for GANs, where we generate a synthetic labelled training dataset, and evaluate on a real test set the performance of a model trained on the synthetic data, and vice-versa. We illustrate with these metrics that RCGANs can generate time-series data useful for supervised training, with only minor degradation in performance on real test data. This is demonstrated on digit classification from ‘serialised’ MNIST and by training an early warning system on a medical dataset of 17,000 patients from an intensive care unit. We further discuss and analyse the privacy concerns that may arise when using RCGANs to generate realistic synthetic medical time series data, and demonstrate results from differentially private training of the RCGAN."
]
} |
1902.05624 | 2911436395 | In recent years, Generative Adversarial Networks (GANs) have demonstrated significant progress in generating authentic-looking data. In this work we introduce our simple method to exploit the advancements in well-established image-based GANs to synthesise single-channel time series data. We implement Wasserstein GANs (WGANs) with gradient penalty due to their stability in training to synthesise three different types of data: sinusoidal data, photoplethysmograph (PPG) data and electrocardiograph (ECG) data. The length of the returned time series data is limited only by the image resolution; we use an image size of 64x64 pixels, which yields 4096 data points. We present both visual and quantitative evidence that our novel method can successfully generate time series data using image-based GANs. | An important advance was introduced by @cite_4 : a method of synthesising time series using recurrent conditional generative adversarial networks (RCGAN). They synthesised both time series sinusoidal data and physiological data: oxygen saturation, heart rate, respiratory rate and mean arterial pressure. The data was gathered from the eICU Collaborative Research Database. The authors state the length of their generated data sequences as 30 data points. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2622068151"
],
"abstract": [
"Generative Adversarial Networks (GANs) have shown remarkable success as a framework for training models to produce realistic-looking data. In this work, we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to produce realistic real-valued multi-dimensional time series, with an emphasis on their application to medical data. RGANs make use of recurrent neural networks (RNNs) in the generator and the discriminator. In the case of RCGANs, both of these RNNs are conditioned on auxiliary information. We demonstrate our models in a set of toy datasets, where we show visually and quantitatively (using sample likelihood and maximum mean discrepancy) that they can successfully generate realistic time-series. We also describe novel evaluation methods for GANs, where we generate a synthetic labelled training dataset, and evaluate on a real test set the performance of a model trained on the synthetic data, and vice-versa. We illustrate with these metrics that RCGANs can generate time-series data useful for supervised training, with only minor degradation in performance on real test data. This is demonstrated on digit classification from ‘serialised’ MNIST and by training an early warning system on a medical dataset of 17,000 patients from an intensive care unit. We further discuss and analyse the privacy concerns that may arise when using RCGANs to generate realistic synthetic medical time series data, and demonstrate results from differentially private training of the RCGAN."
]
} |
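A minimal sketch of the core trick in the rows above: treating a 1-D signal as a 64x64 "image" so an off-the-shelf image GAN such as WGAN-GP can model it. The GAN itself is omitted; only the lossless reshaping between a 4096-sample series and a 64x64 array is shown (function names are invented for illustration):

```python
import numpy as np

def series_to_image(series, side=64):
    """Reshape a 1-D time series row-by-row into a (side, side) array."""
    assert series.size == side * side
    return series.reshape(side, side)

def image_to_series(image):
    """Flatten a generated (side, side) array back into a 1-D series."""
    return image.reshape(-1)

# Toy sinusoidal training signal: 64*64 = 4096 data points.
t = np.linspace(0, 8 * np.pi, 4096)
x = np.sin(t)
img = series_to_image(x)
assert img.shape == (64, 64)
assert np.allclose(image_to_series(img), x)  # round trip is lossless
```

Generated 64x64 outputs from the image GAN are flattened the same way to recover synthetic time series.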
1902.05909 | 2914527235 | We consider the problem of electing a committee of @math candidates, subject to some constraints as to what this committee is supposed to look like. In our framework, the pool of candidates is divided into tribes, and constraints of the form "at least @math candidates must be elected from tribe @math " and "there must be at least as many members of tribe @math as of @math " are considered. In the case of a committee scoring rule this becomes a constrained optimisation problem and in the case of weakly separable rules we show the existence of a polynomial time solution in the case of tree-like constraints, and prove NP-hardness in the general case. | Committee scoring rules were first introduced by @cite_8 , in which the authors identify the classes of weakly separable and representation-focused rules, and study the properties committee selection rules might be expected to satisfy with respect to three possible applications. Weakly separable rules are found to be tractable for reasonable underlying single-winner functions, while representation-focused rules in general are @math -hard, following from the results of @cite_4 @cite_6 @cite_12 . | {
"cite_N": [
"@cite_4",
"@cite_12",
"@cite_6",
"@cite_8"
],
"mid": [
"2131891143",
"1238745702",
"2124837549",
"2949245628"
],
"abstract": [
"We demonstrate that winner selection in two prominent proportional representation voting systems is a computationally intractable problem—implying that these systems are impractical when the assembly is large. On a different note, in settings where the size of the assembly is constant, we show that the problem can be solved in polynomial time.",
"We develop a general framework for social choice problems in which a limited number of alternatives can be recommended to an agent population. In our budgeted social choice model, this limit is determined by a budget, capturing problems that arise naturally in a variety of contexts, and spanning the continuum from pure consensus decision making (i.e., standard social choice) to fully personalized recommendation. Our approach applies a form of segmentation to social choice problems-- requiring the selection of diverse options tailored to different agent types--and generalizes certain multiwinner election schemes. We show that standard rank aggregation methods perform poorly, and that optimization in our model is NP-complete; but we develop fast greedy algorithms with some theoretical guarantees. Experiments on real-world datasets demonstrate the effectiveness of our algorithms.",
"We investigate two systems of fully proportional representation suggested by Chamberlin & Courant and Monroe. Both systems assign a representative to each voter so that the \"sum of misrepresentations\" is minimized. The winner determination problem for both systems is known to be NP-hard, hence this work aims at investigating whether there are variants of the proposed rules and or specific electorates for which these problems can be solved efficiently. As a variation of these rules, instead of minimizing the sum of misrepresentations, we considered minimizing the maximal misrepresentation introducing effectively two new rules. In the general case these \"minimax\" versions of classical rules appeared to be still NP-hard. We investigated the parameterized complexity of winner determination of the two classical and two new rules with respect to several parameters. Here we have a mixture of positive and negative results: e.g., we proved fixed-parameter tractability for the parameter the number of candidates but fixed-parameter intractability for the number of winners. For single-peaked electorates our results are overwhelmingly positive: we provide polynomial-time algorithms for most of the considered problems. The only rule that remains NP-hard for single-peaked electorates is the classical Monroe rule.",
"The goal of this paper is to propose and study properties of multiwinner voting rules which can be consider as generalisations of single-winner scoring voting rules. We consider SNTV, Bloc, k-Borda, STV, and several variants of Chamberlin--Courant's and Monroe's rules and their approximations. We identify two broad natural classes of multiwinner score-based rules, and show that many of the existing rules can be captured by one or both of these approaches. We then formulate a number of desirable properties of multiwinner rules, and evaluate the rules we consider with respect to these properties."
]
} |
1902.05909 | 2914527235 | We consider the problem of electing a committee of @math candidates, subject to some constraints as to what this committee is supposed to look like. In our framework, the pool of candidates is divided into tribes, and constraints of the form "at least @math candidates must be elected from tribe @math " and "there must be at least as many members of tribe @math as of @math " are considered. In the case of a committee scoring rule this becomes a constrained optimisation problem and in the case of weakly separable rules we show the existence of a polynomial time solution in the case of tree-like constraints, and prove NP-hardness in the general case. | A third class, the top- @math counting rules, was introduced by @cite_1 in the context of finding a multiwinner analogue of the fixed-majority criterion. Ordered weighted average operators were introduced by @cite_0 , which led to the superclass of ordered weighted average rules @cite_14 , and the relationship between these classes and their axiomatic properties was studied by @cite_10 . | {
"cite_N": [
"@cite_0",
"@cite_10",
"@cite_14",
"@cite_1"
],
"mid": [
"2174198470",
"",
"2952061453",
"2289435862"
],
"abstract": [
"Positional scoring rules in voting compute the score of an alternative by summing the scores for the alternative induced by every vote. This summation principle ensures that all votes contribute equally to the score of an alternative. We relax this assumption and, instead, aggregate scores by taking into account the rank of a score in the ordered list of scores obtained from the votes. This defines a new family of voting rules, rank-dependent scoring rules (RDSRs), based on ordered weighted average (OWA) operators, which include all scoring rules, and many others, most of which are new. We study some properties of these rules, and show, empirically, that certain RDSRs are less manipulable than Borda voting, across a variety of statistical cultures.",
"",
"We consider the following problem: There is a set of items (e.g., movies) and a group of agents (e.g., passengers on a plane); each agent has some intrinsic utility for each of the items. Our goal is to pick a set of @math items that maximize the total derived utility of all the agents (i.e., in our example we are to pick @math movies that we put on the plane's entertainment system). However, the actual utility that an agent derives from a given item is only a fraction of its intrinsic one, and this fraction depends on how the agent ranks the item among the chosen, available, ones. We provide a formal specification of the model and provide concrete examples and settings where it is applicable. We show that the problem is hard in general, but we show a number of tractability results for its natural special cases.",
"We characterize the class of committee scoring rules that satisfy the fixed-majority criterion. In some sense, the committee scoring rules in this class are multiwinner analogues of the single-winner Plurality rule, which is uniquely characterized as the only single-winner scoring rule that satisfies the simple majority criterion. We find that, for most of the rules in our new class, the complexity of winner determination is high (i.e., the problem of computing the winners is NP-hard), but we also show some examples of polynomial-time winner determination procedures, exact and approximate."
]
} |
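As an illustrative sketch of the constrained committee-election setting in the rows above (candidate names and votes invented): for a weakly separable rule such as k-Borda, disjoint tribes, and only lower-bound quotas "at least l_i from tribe i", a natural greedy baseline first satisfies each quota with that tribe's best candidates and then fills the remaining seats with the best candidates left over. The paper's tree-like inter-tribe constraints require more care; this shows only the simplest case:

```python
def borda_scores(candidates, votes):
    # Each vote is a full ranking; a candidate in position p of a ranking
    # over m candidates earns m - 1 - p points (Borda).
    m = len(candidates)
    score = {c: 0 for c in candidates}
    for ranking in votes:
        for pos, c in enumerate(ranking):
            score[c] += m - 1 - pos
    return score

def constrained_committee(score, tribes, lower, k):
    chosen = set()
    for t, members in tribes.items():       # satisfy each quota first
        best = sorted(members, key=lambda c: -score[c])[:lower.get(t, 0)]
        chosen.update(best)
    rest = sorted((c for c in score if c not in chosen),
                  key=lambda c: -score[c])
    chosen.update(rest[:k - len(chosen)])   # fill remaining seats greedily
    return chosen

candidates = ["a1", "a2", "b1", "b2"]
votes = [["a1", "a2", "b1", "b2"], ["a1", "b1", "a2", "b2"]]
score = borda_scores(candidates, votes)     # a1=6, a2=3, b1=3, b2=0
tribes = {"A": ["a1", "a2"], "B": ["b1", "b2"]}
committee = constrained_committee(score, tribes, {"B": 1}, k=2)
assert committee == {"a1", "b1"}
```

Without the quota on tribe B, the top two Borda scores could both come from tribe A; the constraint forces b1 into the committee.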
1902.05870 | 2911351991 | A robust, large-scale web service can be difficult to engineer. When demand spikes, it must configure new machines and manage load-balancing; when demand falls, it must shut down idle machines to reduce costs; and when a machine crashes, it must quickly work around the failure without losing data. In recent years, serverless computing, a new cloud computing abstraction, has emerged to help address these challenges. In serverless computing, programmers write serverless functions, and the cloud platform transparently manages the operating system, resource allocation, load-balancing, and fault tolerance. In 2014, Amazon Web Services introduced the first serverless platform, AWS Lambda, and similar abstractions are now available on all major clouds. Unfortunately, the serverless computing abstraction exposes several low-level operational details that make it hard for programmers to write and reason about their code. This paper sheds light on this problem by presenting @math , an operational semantics of the essence of serverless computing. Despite being a small core calculus (less than one column), @math models all the low-level details that serverless functions can observe. To show that @math is useful, we present three applications. First, to make it easier for programmers to reason about their code, we present a simplified semantics of serverless execution and precisely characterize when the simplified semantics and @math coincide. Second, we augment @math with a key-value store, which allows us to reason about stateful serverless functions. Third, since a handful of serverless platforms support serverless function composition, we show how to extend @math with a composition language. We have implemented this composition language and show that it outperforms prior work. | Trapeze @cite_3 presents dynamic information flow control (IFC) for serverless computing, and further sandboxes serverless functions to mediate their interactions with shared storage.
Their Coq formalization of termination-sensitive noninterference does not model several features of serverless platforms, such as warm starts and failures, that our semantics does model. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2788556107"
],
"abstract": [
"The rise of serverless computing provides an opportunity to rethink cloud security. We present an approach for securing serverless systems using a novel form of dynamic information flow control (IFC). We show that in serverless applications, the termination channel found in most existing IFC systems can be arbitrarily amplified via multiple concurrent requests, necessitating a stronger termination-sensitive non-interference guarantee, which we achieve using a combination of static labeling of serverless processes and dynamic faceted labeling of persistent data. We describe our implementation of this approach on top of JavaScript for AWS Lambda and OpenWhisk serverless platforms, and present three realistic case studies showing that it can enforce important IFC security properties with low overhead."
]
} |
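As a hypothetical illustration of the dynamic information flow control idea behind systems like Trapeze (not its actual implementation, which statically labels serverless processes and uses faceted labels on persistent data): every value carries a security label, and combining values joins their labels in a lattice so secret inputs taint derived outputs:

```python
# Minimal two-point label lattice: public ⊑ secret.
LATTICE = {"public": 0, "secret": 1}

class Labeled:
    """A value tagged with a security label."""
    def __init__(self, value, label):
        self.value, self.label = value, label

def join(a, b):
    # Least upper bound of two labels in the lattice.
    return a if LATTICE[a] >= LATTICE[b] else b

def add(x, y):
    # Any result derived from an operand is tainted by that operand's label.
    return Labeled(x.value + y.value, join(x.label, y.label))

s = add(Labeled(2, "public"), Labeled(3, "secret"))
assert s.value == 5 and s.label == "secret"
```

A monitor enforcing noninterference would then forbid writing a "secret"-labeled value to a "public" sink, such as shared storage readable by untrusted functions.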
1902.05870 | 2911351991 | A robust, large-scale web service can be difficult to engineer. When demand spikes, it must configure new machines and manage load-balancing; when demand falls, it must shut down idle machines to reduce costs; and when a machine crashes, it must quickly work around the failure without losing data. In recent years, serverless computing, a new cloud computing abstraction, has emerged to help address these challenges. In serverless computing, programmers write serverless functions, and the cloud platform transparently manages the operating system, resource allocation, load-balancing, and fault tolerance. In 2014, Amazon Web Services introduced the first serverless platform, AWS Lambda, and similar abstractions are now available on all major clouds. Unfortunately, the serverless computing abstraction exposes several low-level operational details that make it hard for programmers to write and reason about their code. This paper sheds light on this problem by presenting @math , an operational semantics of the essence of serverless computing. Despite being a small core calculus (less than one column), @math models all the low-level details that serverless functions can observe. To show that @math is useful, we present three applications. First, to make it easier for programmers to reason about their code, we present a simplified semantics of serverless execution and precisely characterize when the simplified semantics and @math coincide. Second, we augment @math with a key-value store, which allows us to reason about stateful serverless functions. Third, since a handful of serverless platforms support serverless function composition, we show how to extend @math with a composition language. We have implemented this composition language and show that it outperforms prior work. | @math @cite_24 is a semantics for horizontally-scaled services with durable storage, which are related to serverless computing. 
A key difference between our calculus and @math is that our calculus models warm-starts, which occur when a serverless platform runs a new request on an old function instance without resetting its state. Warm-starts make it hard to reason about correctness, but this paper presents an approach to do so. Both our calculus and @math present weak bisimulations between detailed and naive semantics. However, our naive semantics processes a single request at a time, whereas @math's idealized semantics has concurrency. We use our calculus to specify a protocol that ensures serverless functions are idempotent and fault tolerant. However, @math also presents a compiler that automatically ensures that these properties hold for C# and F# code. We believe that approach would work for our calculus as well. spl extends our calculus with new primitives, which we then implement and evaluate. | {
"cite_N": [
"@cite_24"
],
"mid": [
"2114707853"
],
"abstract": [
"Building distributed services and applications is challenging due to the pitfalls of distribution such as process and communication failures. A natural solution to these problems is to detect potential failures, and retry the failed computation and or resend messages. Ensuring correctness in such an environment requires distributed services and applications to be idempotent. In this paper, we study the inter-related aspects of process failures, duplicate messages, and idempotence. We first introduce a simple core language (based on lambda calculus inspired by modern distributed computing platforms. This language formalizes the notions of a service, duplicate requests, process failures, data partitioning, and local atomic transactions that are restricted to a single store. We then formalize a desired (generic) correctness criterion for applications written in this language, consisting of idempotence (which captures the desired safety properties) and failure-freedom (which captures the desired progress properties). We then propose language support in the form of a monad that automatically ensures failfree idempotence. A key characteristic of our implementation is that it is decentralized and does not require distributed coordination. We show that the language support can be enriched with other useful constructs, such as compensations, while retaining the coordination-free decentralized nature of the implementation. We have implemented the idempotence monad (and its variants) in F# and C# and used our implementation to build realistic applications on Windows Azure. We find that the monad has low runtime overheads and leads to more declarative applications."
]
} |
1902.05870 | 2911351991 | A robust, large-scale web service can be difficult to engineer. When demand spikes, it must configure new machines and manage load-balancing; when demand falls, it must shut down idle machines to reduce costs; and when a machine crashes, it must quickly work around the failure without losing data. In recent years, serverless computing, a new cloud computing abstraction, has emerged to help address these challenges. In serverless computing, programmers write serverless functions, and the cloud platform transparently manages the operating system, resource allocation, load-balancing, and fault tolerance. In 2014, Amazon Web Services introduced the first serverless platform, AWS Lambda, and similar abstractions are now available on all major clouds. Unfortunately, the serverless computing abstraction exposes several low-level operational details that make it hard for programmers to write and reason about their code. This paper sheds light on this problem by presenting @math , an operational semantics of the essence of serverless computing. Despite being a small core calculus (less than one column), @math models all the low-level details that serverless functions can observe. To show that @math is useful, we present three applications. First, to make it easier for programmers to reason about their code, we present a simplified semantics of serverless execution and precisely characterize when the simplified semantics and @math coincide. Second, we augment @math with a key-value store, which allows us to reason about stateful serverless functions. Third, since a handful of serverless platforms support serverless function composition, we show how to extend @math with a composition language. We have implemented this composition language and show that it outperforms prior work. | Whip @cite_1 and ucheck @cite_10 are tools that check properties of microservice-based applications at run-time. These works are complementary to ours. 
For example, our paper identifies several important properties of serverless functions, which could then be checked using Whip or ucheck. | {
"cite_N": [
"@cite_10",
"@cite_1"
],
"mid": [
"2736340806",
"2752252621"
],
"abstract": [
"Many large applications are now built using collections of microservices, each of which is deployed in isolated containers and which interact with each other through the use of remote procedure calls (RPCs). The use of microservices improves scalability -- each component of an application can be scaled independently -- and deployability. However, such applications are inherently distributed and current tools do not provide mechanisms to reason about and ensure their global behavior. In this paper we argue that recent advances in formal methods and software packet processing pave the path towards building mechanisms that can ensure correctness for such systems, both when they are being built and at runtime. These techniques impose minimal runtime overheads and are amenable to production deployments.",
"Modern service-oriented applications forgo semantically rich protocols and middleware when composing services. Instead, they embrace the loosely-coupled development and deployment of services that communicate via simple network protocols. Even though these applications do expose interfaces that are higher-order in spirit, the simplicity of the network protocols forces them to rely on brittle low-level encodings. To bridge the apparent semantic gap, programmers introduce ad-hoc and error-prone defensive code. Inspired by Design by Contract, we choose a different route to bridge this gap. We introduce Whip, a contract system for modern services. Whip (i) provides programmers with a higher-order contract language tailored to the needs of modern services; and (ii) monitors services at run time to detect services that do not live up to their advertised interfaces. Contract monitoring is local to a service. Services are treated as black boxes, allowing heterogeneous implementation languages without modification to services' code. Thus, Whip does not disturb the loosely coupled nature of modern services."
]
} |
1902.05870 | 2911351991 | A robust, large-scale web service can be difficult to engineer. When demand spikes, it must configure new machines and manage load-balancing; when demand falls, it must shut down idle machines to reduce costs; and when a machine crashes, it must quickly work around the failure without losing data. In recent years, serverless computing, a new cloud computing abstraction, has emerged to help address these challenges. In serverless computing, programmers write serverless functions, and the cloud platform transparently manages the operating system, resource allocation, load-balancing, and fault tolerance. In 2014, Amazon Web Services introduced the first serverless platform, AWS Lambda, and similar abstractions are now available on all major clouds. Unfortunately, the serverless computing abstraction exposes several low-level operational details that make it hard for programmers to write and reason about their code. This paper sheds light on this problem by presenting @math , an operational semantics of the essence of serverless computing. Despite being a small core calculus (less than one column), @math models all the low-level details that serverless functions can observe. To show that @math is useful, we present three applications. First, to make it easier for programmers to reason about their code, we present a simplified semantics of serverless execution and precisely characterize when the simplified semantics and @math coincide. Second, we augment @math with a key-value store, which allows us to reason about stateful serverless functions. Third, since a handful of serverless platforms support serverless function composition, we show how to extend @math with a composition language. We have implemented this composition language and show that it outperforms prior work. 
| Ballerina @cite_40 is a language for managing cloud environments; Engage @cite_12 is a deployment manager that supports inter-machine dependencies; and Pulumi @cite_18 is an embedded DSL for writing programs that configure and run in the cloud. In contrast, our calculus is a semantics of serverless computing. spl uses our calculus to design and implement a language for composing serverless functions that runs within a serverless platform. | {
"cite_N": [
"@cite_40",
"@cite_18",
"@cite_12"
],
"mid": [
"2886620722",
"",
"2159567171"
],
"abstract": [
"Ballerina is a new language for solving integration problems. It is based on insights and best practices derived from languages like BPEL, BPMN, Go, and Java, but also cloud infrastructure systems like Kubernetes. Integration problems were traditionally addressed by dedicated middleware systems such as enterprise service buses, workflow systems and message brokers. However, such systems lack agility required by current integration scenarios, especially for cloud based deployments. This paper discusses how Ballerina solves this problem by bringing integration features into a general purpose programming language.",
"",
"Many modern applications are built by combining independently developed packages and services that are distributed over many machines with complex inter-dependencies. The assembly, installation, and management of such applications is hard, and usually performed either manually or by writing customized scripts. We present Engage, a system for configuring, installing, and managing complex application stacks. Engage consists of three components: a domain-specific model to describe component metadata and inter-component dependencies; a constraint-based algorithm that takes a partial installation specification and computes a full installation plan; and a runtime system that co-ordinates the deployment of the application across multiple machines and manages the deployed system. By explicitly modeling configuration metadata and inter-component dependencies, Engage enables static checking of application configurations and automated, constraint-driven, generation of installation plans across multiple machines. This reduces the tedious manual process of application configuration, installation, and management. We have implemented Engage and we have used it to successfully host a number of applications. We describe our experiences in using Engage to manage a generic platform that hosts Django applications in the cloud or on premises."
]
} |
1902.05870 | 2911351991 | A robust, large-scale web service can be difficult to engineer. When demand spikes, it must configure new machines and manage load-balancing; when demand falls, it must shut down idle machines to reduce costs; and when a machine crashes, it must quickly work around the failure without losing data. In recent years, serverless computing, a new cloud computing abstraction, has emerged to help address these challenges. In serverless computing, programmers write serverless functions, and the cloud platform transparently manages the operating system, resource allocation, load-balancing, and fault tolerance. In 2014, Amazon Web Services introduced the first serverless platform, AWS Lambda, and similar abstractions are now available on all major clouds. Unfortunately, the serverless computing abstraction exposes several low-level operational details that make it hard for programmers to write and reason about their code. This paper sheds light on this problem by presenting @math , an operational semantics of the essence of serverless computing. Despite being a small core calculus (less than one column), @math models all the low-level details that serverless functions can observe. To show that @math is useful, we present three applications. First, to make it easier for programmers to reason about their code, we present a simplified semantics of serverless execution and precisely characterize when the simplified semantics and @math coincide. Second, we augment @math with a key-value store, which allows us to reason about stateful serverless functions. Third, since a handful of serverless platforms support serverless function composition, we show how to extend @math with a composition language. We have implemented this composition language and show that it outperforms prior work. 
| There is a large body of work on verification, testing, and modular programming for distributed systems and algorithms (e.g., @cite_5 @cite_38 @cite_2 @cite_28 @cite_7 @cite_44 @cite_29 @cite_42 @cite_14). The serverless computation model is more constrained than arbitrary distributed systems and algorithms. This paper presents a formal semantics of serverless computing, with an emphasis on the low-level details that are observable by programs and thus hard for programmers to get right. To demonstrate that the semantics is useful, we present three applications that employ it and extend it in several ways. This paper does not address verification for serverless computing, but our semantics could be used as a foundation for future verification work. | {
"cite_N": [
"@cite_38",
"@cite_14",
"@cite_7",
"@cite_28",
"@cite_29",
"@cite_42",
"@cite_44",
"@cite_2",
"@cite_5"
],
"mid": [
"2285931649",
"2091776255",
"2167814583",
"2776248742",
"2898712586",
"2898806643",
"2899000846",
"",
"2894638380"
],
"abstract": [
"Fault-tolerant distributed algorithms play an important role in many critical high-availability applications. These algorithms are notoriously difficult to implement correctly, due to asynchronous communication and the occurrence of faults, such as the network dropping messages or computers crashing. We introduce PSync, a domain specific language based on the Heard-Of model, which views asynchronous faulty systems as synchronous ones with an adversarial environment that simulates asynchrony and faults by dropping messages. We define a runtime system for PSync that efficiently executes on asynchronous networks. We formalise the relation between the runtime system and PSync in terms of observational refinement. The high-level lockstep abstraction introduced by PSync simplifies the design and implementation of fault-tolerant distributed algorithms and enables automated formal verification. We have implemented an embedding of PSync in the Scala programming language with a runtime system for partially synchronous networks. We show the applicability of PSync by implementing several important fault-tolerant distributed algorithms and we compare the implementation of consensus algorithms in PSync against implementations in other languages in terms of code size, runtime efficiency, and verification.",
"Distributed systems are notorious for harboring subtle bugs. Verification can, in principle, eliminate these bugs a priori, but verification has historically been difficult to apply at full-program scale, much less distributed-system scale. We describe a methodology for building practical and provably correct distributed systems based on a unique blend of TLA-style state-machine refinement and Hoare-logic verification. We demonstrate the methodology on a complex implementation of a Paxos-based replicated state machine library and a lease-based sharded key-value store. We prove that each obeys a concise safety specification, as well as desirable liveness requirements. Each implementation achieves performance competitive with a reference system. With our methodology and lessons learned, we aim to raise the standard for distributed systems from \"tested\" to \"correct.\"",
"Distributed systems are difficult to implement correctly because they must handle both concurrency and failures: machines may crash at arbitrary points and networks may reorder, drop, or duplicate packets. Further, their behavior is often too complex to permit exhaustive testing. Bugs in these systems have led to the loss of critical data and unacceptable service outages. We present Verdi, a framework for implementing and formally verifying distributed systems in Coq. Verdi formalizes various network semantics with different faults, and the developer chooses the most appropriate fault model when verifying their implementation. Furthermore, Verdi eases the verification burden by enabling the developer to first verify their system under an idealized fault model, then transfer the resulting correctness guarantees to a more realistic fault model without any additional proof burden. To demonstrate Verdi's utility, we present the first mechanically checked proof of linearizability of the Raft state machine replication algorithm, as well as verified implementations of a primary-backup replication system and a key-value store. These verified systems provide similar performance to unverified equivalents.",
"Distributed systems play a crucial role in modern infrastructure, but are notoriously difficult to implement correctly. This difficulty arises from two main challenges: (a) correctly implementing core system components (e.g., two-phase commit), so all their internal invariants hold, and (b) correctly composing standalone system components into functioning trustworthy applications (e.g., persistent storage built on top of a two-phase commit instance). Recent work has developed several approaches for addressing (a) by means of mechanically verifying implementations of core distributed components, but no methodology exists to address (b) by composing such verified components into larger verified applications. As a result, expensive verification efforts for key system components are not easily reusable, which hinders further verification efforts. In this paper, we present Disel, the first framework for implementation and compositional verification of distributed systems and their clients, all within the mechanized, foundational context of the Coq proof assistant. In Disel, users implement distributed systems using a domain specific language shallowly embedded in Coq and providing both high-level programming constructs as well as low-level communication primitives. Components of composite systems are specified in Disel as protocols, which capture system-specific logic and disentangle system definitions from implementation details. By virtue of Disel's dependent type system, well-typed implementations always satisfy their protocols' invariants and never go wrong, allowing users to verify system implementations interactively using Disel's Hoare-style program logic, which extends state-of-the-art techniques for concurrency verification to the distributed setting. By virtue of the substitution principle and frame rule provided by Disel's logic, system components can be composed leading to modular, reusable verified distributed systems. 
We describe Disel, illustrate its use with a series of examples, outline its logic and metatheory, and report on our experience using it as a framework for implementing, specifying, and verifying distributed systems.",
"We introduce canonical sequentialization, a new approach to verifying unbounded, asynchronous, message-passing programs at compile-time. Our approach builds upon the following observation: due the combinatorial explosion in complexity, programmers do not reason about their systems by case-splitting over all the possible execution orders. Instead, correct programs tend to be well-structured so that the programmer can reason about a small number of representative executions, which we call the program's canonical sequentialization. We have implemented our approach in a tool called Brisk that synthesizes canonical sequentializations for programs written in Haskell, and evaluated it on a wide variety of distributed systems including benchmarks from the literature and implementations of MapReduce, two-phase commit, and a version of the Disco distributed file-system. We show that unlike model checking, which gets prohibitively slow with just 10 processes Brisk verifies the unbounded versions of the benchmarks in tens of milliseconds, yielding the first concurrency verification tool that is fast enough to be integrated into a design-implement-check cycle.",
"Data replication is used in distributed systems to maintain up-to-date copies of shared data across multiple computers in a network. However, despite decades of research, algorithms for achieving consistency in replicated systems are still poorly understood. Indeed, many published algorithms have later been shown to be incorrect, even some that were accompanied by supposed mechanised proofs of correctness. In this work, we focus on the correctness of Conflict-free Replicated Data Types (CRDTs), a class of algorithm that provides strong eventual consistency guarantees for replicated data. We develop a modular and reusable framework in the Isabelle HOL interactive proof assistant for verifying the correctness of CRDT algorithms. We avoid correctness issues that have dogged previous mechanised proofs in this area by including a network model in our formalisation, and proving that our theorems hold in all possible network behaviours. Our axiomatic network model is a standard abstraction that accurately reflects the behaviour of real-world computer networks. Moreover, we identify an abstract convergence theorem, a property of order relations, which provides a formal definition of strong eventual consistency. We then obtain the first machine-checked correctness theorems for three concrete CRDTs: the Replicated Growable Array, the Observed-Remove Set, and an Increment-Decrement Counter. We find that our framework is highly reusable, developing proofs of correctness for the latter two CRDTs in a few hours and with relatively little CRDT-specific code.",
"",
"",
"A real-world distributed system is rarely implemented as a standalone monolithic system. Instead, it is composed of multiple independent interacting components that together ensure the desired system-level specification. One can scale systematic testing to large, industrial-scale implementations by decomposing the system-level testing problem into a collection of simpler component-level testing problems. This paper proposes techniques for compositional programming and testing of distributed systems with two central contributions: (1) We propose a module system based on the theory of compositional trace refinement for dynamic systems consisting of asynchronously-communicating state machines, where state machines can be dynamically created, and communication topology of the existing state machines can change at runtime; (2) We present ModP, a programming system that implements our module system to enable compositional reasoning (assume-guarantee) of distributed systems. We demonstrate the efficacy of our framework by building two practical fault-tolerant distributed systems, a transaction-commit service and a replicated hash-table. ModP helps implement these systems modularly and validate them via compositional testing. We empirically demonstrate that the abstraction-based compositional reasoning approach helps amplify the coverage during testing and scale it to real-world distributed systems. The distributed services built using ModP achieve performance comparable to open-source equivalents."
]
} |
1902.05660 | 2913187365 | Despite significant progress in Visual Question Answering over the years, robustness of today's VQA models leaves much to be desired. We introduce a new evaluation protocol and associated dataset (VQA-Rephrasings) and show that state-of-the-art VQA models are notoriously brittle to linguistic variations in questions. VQA-Rephrasings contains 3 human-provided rephrasings for 40k questions spanning 40k images from the VQA v2.0 validation dataset. As a step towards improving robustness of VQA models, we propose a model-agnostic framework that exploits cycle consistency. Specifically, we train a model to not only answer a question, but also generate a question conditioned on the answer, such that the answer predicted for the generated question is the same as the ground truth answer to the original question. Without the use of additional annotations, we show that our approach is significantly more robust to linguistic variations than state-of-the-art VQA models, when evaluated on the VQA-Rephrasings dataset. In addition, our approach outperforms state-of-the-art approaches on the standard VQA and Visual Question Generation tasks on the challenging VQA v2.0 dataset. | Robustness of VQA models has been studied in several contexts @cite_27 @cite_33 @cite_50. For example, @cite_27 studies the robustness of VQA models to changes in the answer distributions across training and test settings; @cite_7 analyzes the extent of visual grounding in VQA models by studying robustness of VQA models to meaningful semantic changes in images; @cite_6 shows that despite the use of an advanced attention mechanism, it is easy to fool a VQA model with very minor changes in the image. Our work, however, aims to complete the study of robustness by benchmarking and improving robustness of VQA models to linguistic and compositional variations in questions in the form of rephrasings.
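The cycle-consistency scheme this row's paper proposes (answer a question, generate a question conditioned on the answer, and require that the answer to the generated question match the ground truth) can be sketched with toy stand-ins. `F`, `G`, and the lookup tables below are illustrative assumptions, not the paper's neural models:

```python
# Illustrative sketch of the cycle-consistency check with toy stand-ins for
# the VQA model F: (image, question) -> answer and the question generator
# G: (image, answer) -> rephrased question; in the paper both are neural nets.

def F(image, question):
    # Toy "VQA model": answers from a lookup table keyed by (image, question).
    table = {
        ("img1", "what color is the bus"): "red",
        ("img1", "what is the color of the bus"): "red",
        ("img1", "is the bus red"): "yes",
    }
    return table.get((image, question.lower()), "unknown")

def G(image, answer):
    # Toy "question generator": produces a rephrasing conditioned on the answer.
    return {"red": "what is the color of the bus"}.get(answer, "what is shown")

def cycle_consistent(image, question, gt_answer):
    # Train-time signal: the answer predicted for the generated rephrasing
    # must match the ground-truth answer to the original question.
    predicted = F(image, question)
    rephrased = G(image, gt_answer)
    return F(image, rephrased) == gt_answer and predicted == gt_answer

assert cycle_consistent("img1", "What color is the bus", "red")
```

In training, a failed check would contribute a consistency loss term rather than a hard assertion; the sketch only shows the round trip being scored.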
Robustness has also been studied in natural language processing (NLP) systems @cite_5 @cite_3, in the contexts of bias @cite_51 @cite_35, domain shift @cite_44 and syntactic variations @cite_15. To counter these issues in NLP systems, solutions such as linguistically motivated data augmentation @cite_44 and adversarial training @cite_15 have been proposed. We study robustness in the context of visual question answering, a multi-modal task that grounds language in the visual world. | {
"cite_N": [
"@cite_35",
"@cite_33",
"@cite_7",
"@cite_6",
"@cite_3",
"@cite_44",
"@cite_27",
"@cite_50",
"@cite_5",
"@cite_15",
"@cite_51"
],
"mid": [
"",
"",
"2963890019",
"2799244840",
"",
"2740839465",
"",
"",
"2768037715",
"2798139452",
"1965623190"
],
"abstract": [
"",
"",
"The complex compositional structure of language makes problems at the intersection of vision and language challenging. But language also provides a strong prior that can result in good superficial performance, without the underlying models truly understanding the visual content. This can hinder progress in pushing state of art in the computer vision aspects of multi-modal AI. In this paper, we address binary Visual Question Answering (VQA) on abstract scenes. We formulate this problem as visual verification of concepts inquired in the questions. Specifically, we convert the question to a tuple that concisely summarizes the visual concept to be detected in the image. If the concept can be found in the image, the answer to the question is \"yes\", and otherwise \"no\". Abstract scenes play two roles (1) They allow us to focus on the highlevel semantics of the VQA task as opposed to the low-level recognition problems, and perhaps more importantly, (2) They provide us the modality to balance the dataset such that language priors are controlled, and the role of vision is essential. In particular, we collect fine-grained pairs of scenes for every question, such that the answer to the question is \"yes\" for one scene, and \"no\" for the other for the exact same question. Indeed, language priors alone do not perform better than chance on our balanced dataset. Moreover, our proposed approach matches the performance of a state-of-the-art VQA approach on the unbalanced dataset, and outperforms it on the balanced dataset.",
"Adversarial attacks are known to succeed on classifiers, but it has been an open question whether more complex vision systems are vulnerable. In this paper, we study adversarial examples for vision and language models, which incorporate natural language understanding and complex structures such as attention, localization, and modular architectures. In particular, we investigate attacks on a dense captioning model and on two visual question answering (VQA) models. Our evaluation shows that we can generate adversarial examples with a high success rate (i.e., > 90 ) for these models. Our work sheds new light on understanding adversarial attacks on vision systems which have a language component and shows that attention, bounding box localization, and compositional internal structures are vulnerable to adversarial attacks. These observations will inform future work towards building effective defenses.",
"",
"",
"",
"",
"This paper presents a summary of the first Workshop on Building Linguistically Generalizable Natural Language Processing Systems, and the associated Build It Break It, The Language Edition shared task. The goal of this workshop was to bring together researchers in NLP and linguistics with a shared task aimed at testing the generalizability of NLP systems beyond the distributions of their training data. We describe the motivation, setup, and participation of the shared task, provide discussion of some highlighted results, and discuss lessons learned.",
"We propose syntactically controlled paraphrase networks (SCPNs) and use them to generate adversarial examples. Given a sentence and a target syntactic form (e.g., a constituency parse), SCPNs are trained to produce a paraphrase of the sentence with the desired syntax. We show it is possible to create training data for this task by first doing backtranslation at a very large scale, and then using a parser to label the syntactic transformations that naturally occur during this process. Such data allows us to train a neural encoder-decoder model with extra inputs to specify the target syntax. A combination of automated and human evaluations show that SCPNs generate paraphrases that follow their target specifications without decreasing paraphrase quality when compared to baseline (uncontrolled) paraphrase systems. Furthermore, they are more capable of generating syntactically adversarial examples that both (1) \"fool\" pretrained models and (2) improve the robustness of these models to syntactic variation when used to augment their training data.",
"Practical natural language understanding systems used to be concerned with very small miniature domains only: They knew exactly what potential text might be about, and what kind of sentence structures to expect. This optimistic assumption is no longer feasible if NLU is to scale up to deal with text that naturally occurs in the \"real world\". The key issue is robustness: The system needs to be prepared for cases where the input data does not correspond to the expectations encoded in the grammar. In this paper, we survey the approaches towards the robustness problem that have been developed throughout the last decade. We inspect techniques to overcome both syntactically and semantically ill-formed input in sentence parsing and then look briefly into more recent ideas concerning the extraction of information from texts, and the related question of the role that linguistic research plays in this game. Finally, the robust sentence parsing schemes are classified on a more abstract level of analysis."
]
} |
1902.05660 | 2913187365 | Despite significant progress in Visual Question Answering over the years, robustness of today's VQA models leaves much to be desired. We introduce a new evaluation protocol and associated dataset (VQA-Rephrasings) and show that state-of-the-art VQA models are notoriously brittle to linguistic variations in questions. VQA-Rephrasings contains 3 human-provided rephrasings for 40k questions spanning 40k images from the VQA v2.0 validation dataset. As a step towards improving robustness of VQA models, we propose a model-agnostic framework that exploits cycle consistency. Specifically, we train a model to not only answer a question, but also generate a question conditioned on the answer, such that the answer predicted for the generated question is the same as the ground truth answer to the original question. Without the use of additional annotations, we show that our approach is significantly more robust to linguistic variations than state-of-the-art VQA models, when evaluated on the VQA-Rephrasings dataset. In addition, our approach outperforms state-of-the-art approaches on the standard VQA and Visual Question Generation tasks on the challenging VQA v2.0 dataset. | Cycle-consistency has been used extensively to regularize the training of models in object tracking @cite_22, machine translation @cite_39, unpaired image-to-image translation @cite_9 and text-based question answering @cite_1. Consistency enables learning of robust models by regularizing the transformations that map one interconnected modality or domain to the other. While cycle consistency has been widely used in domains involving a single modality (text-only or image-only), it has not been explored in the context of multi-modal tasks like VQA. Cycle-consistency in VQA can also be thought of as an online data-augmentation technique in which the model is trained on several generated rephrasings of the same question. | {
"cite_N": [
"@cite_9",
"@cite_1",
"@cite_22",
"@cite_39"
],
"mid": [
"2962793481",
"2803595284",
"1530781137",
"2546938941"
],
"abstract": [
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.",
"",
"Dense and accurate motion tracking is an important requirement for many video feature extraction algorithms. In this paper we provide a method for computing point trajectories based on a fast parallel implementation of a recent optical flow algorithm that tolerates fast motion. The parallel implementation of large displacement optical flow runs about 78× faster than the serial C++ version. This makes it practical to use in a variety of applications, among them point tracking. In the course of obtaining the fast implementation, we also proved that the fixed point matrix obtained in the optical flow technique is positive semi-definite. We compare the point tracking to the most commonly used motion tracker - the KLT tracker - on a number of sequences with ground truth motion. Our resulting technique tracks up to three orders of magnitude more points and is 46 more accurate than the KLT tracker. It also provides a tracking density of 48 and has an occlusion error of 3 compared to a density of 0.1 and occlusion error of 8 for the KLT tracker. Compared to the Particle Video tracker, we achieve 66 better accuracy while retaining the ability to handle large displacements while running an order of magnitude faster.",
"While neural machine translation (NMT) is making good progress in the past two years, tens of millions of bilingual sentence pairs are needed for its training. However, human labeling is very costly. To tackle this training data bottleneck, we develop a dual-learning mechanism, which can enable an NMT system to automatically learn from unlabeled data through a dual-learning game. This mechanism is inspired by the following observation: any machine translation task has a dual task, e.g., English-to-French translation (primal) versus French-to-English translation (dual); the primal and dual tasks can form a closed loop, and generate informative feedback signals to train the translation models, even if without the involvement of a human labeler. In the dual-learning mechanism, we use one agent to represent the model for the primal task and the other agent to represent the model for the dual task, then ask them to teach each other through a reinforcement learning process. Based on the feedback signals generated during this process (e.g., the language-model likelihood of the output of a model, and the reconstruction error of the original sentence after the primal and dual translations), we can iteratively update the two models until convergence (e.g., using the policy gradient methods). We call the corresponding approach to neural machine translation dual-NMT. Experiments show that dual-NMT works very well on English ↔ French translation; especially, by learning from monolingual data (with 10 bilingual data for warm start), it achieves a comparable accuracy to NMT trained from the full bilingual data for the French-to-English translation task."
]
} |
1902.05577 | 2914165540 | Advances in deep neural networks (DNN) and computer vision (CV) algorithms have made it feasible to extract meaningful insights from large-scale deployments of urban cameras. Tracking an object of interest across the camera network in near real-time is a canonical problem. However, current tracking frameworks have two key limitations: 1) They are monolithic, proprietary, and lack the ability to rapidly incorporate sophisticated tracking models; and 2) They are less responsive to dynamism across wide-area computing resources that include edge, fog and cloud abstractions. We address these gaps using Anveshak, a runtime platform for composing and coordinating distributed tracking applications. It provides a domain-specific dataflow programming model to intuitively compose a tracking application, supporting contemporary CV advances like query fusion and re-identification, and enabling dynamic scoping of the camera-network's search space to avoid wasted computation. We also offer tunable batching and data-dropping strategies for dataflow blocks deployed on distributed resources to respond to network and compute variability. These balance the tracking accuracy, its real-time performance and the active camera-set size. We illustrate the concise expressiveness of the programming model for 4 tracking applications. Our detailed experiments for a network of 1000 camera-feeds on modest resources exhibit the tunable scalability, performance and quality trade-offs enabled by our dynamic tracking, batching and dropping strategies. | Video Surveillance Systems. Intelligent video surveillance systems allow automated analytics over camera feeds @cite_28 . They enable wide-area surveillance from multiple static or PTZ cameras, with distributed or master-slave controls @cite_2 . @cite_9 supports tracking, crowd counting, and behavioral analysis over camera feeds from train stations to assist human operators. 
However, these are pre-defined applications, run centrally in a private data center, and they process all camera feeds all the time. @cite_44 is a proprietary platform for video data management, analysis and real-time alerts. While it offers limited composability using different modules, it too executes the applications centrally and does not consider performance optimizations. Early works examine edge computing for basic pre-processing @cite_40 . But the edge logic is statically defined, with the rest of the analytics done centrally and over dedicated networks. | {
"cite_N": [
"@cite_28",
"@cite_9",
"@cite_44",
"@cite_40",
"@cite_2"
],
"mid": [
"2403019673",
"1576760065",
"1521832956",
"2132357372",
"1177360271"
],
"abstract": [
"",
"ADVISOR is an automated visual surveillance system for metro stations which was developed as part of the project ADVISOR, involving 3 academic and 3 industrial project partners. The ADVISOR system aims at making public transport safer by automatically detecting at an early stage dangerous situations which may lead to accidents, violence or vandalism. In order to achieve this people are tracked across the station and their behaviours analysed. Additional measurements on crowd density and movement are also obtained. Warnings are generated and displayed to human operators for possible intervention. The article explores the main difficulties encountered during the design and implementation of ADVISOR and describes the ways in which they were solved. A prototype system has been built and extensively tested, proving the feasibility of automated visual surveillance systems. An analysis of test runs at a metro station in Barcelona and several individual experiments show that the system copes with many difficult image analysis problems. The analysis also points the way for future development and ways of deployment of the techniques used in the system.",
"As smart surveillance technology becomes a critical component in security infrastructures, the system architecture assumes a critical importance. This paper considers the example of smart surveillance in an airport environment. We start with a threat model for airports and use this to derive the security requirements. These requirements are used to motivate an open-standards based architecture for surveillance. We discuss the critical aspects of this architecture and its implementation in the IBM S3 smart surveillance system. Demo results from a pilot deployment in Hawthorne, NY are presented.",
"Graduates of computer science (CS) and software engineering (SE) programs are typically employed to develop industry-strength software. Computer engineering (CE) programs focus primarily on computing-system design, often with significant software components. These three programs have different emphases: development of new algorithms versus development of large, complex software systems versus development of small embedded software and device drivers. All three areas require good SE practices.",
"The use of multiple heterogeneous cameras is becoming more common in today's surveillance systems. In order to perform surveillance tasks, effective coordination and control in multi-camera systems is very important, and is catching significant research attention these days. This survey aims to provide researchers with a state-of-the-art overview of various techniques for multi-camera coordination and control (MC3) that have been adopted in surveillance systems. The existing literature on MC3 is presented through several classifications based on the applicable architectures, frameworks and the associated surveillance tasks. Finally, a discussion on the open problems in surveillance area that can be solved effectively using MC3 and the future directions in MC3 research is presented"
]
} |
1902.05577 | 2914165540 | Advances in deep neural networks (DNN) and computer vision (CV) algorithms have made it feasible to extract meaningful insights from large-scale deployments of urban cameras. Tracking an object of interest across the camera network in near real-time is a canonical problem. However, current tracking frameworks have two key limitations: 1) They are monolithic, proprietary, and lack the ability to rapidly incorporate sophisticated tracking models; and 2) They are less responsive to dynamism across wide-area computing resources that include edge, fog and cloud abstractions. We address these gaps using Anveshak, a runtime platform for composing and coordinating distributed tracking applications. It provides a domain-specific dataflow programming model to intuitively compose a tracking application, supporting contemporary CV advances like query fusion and re-identification, and enabling dynamic scoping of the camera-network's search space to avoid wasted computation. We also offer tunable batching and data-dropping strategies for dataflow blocks deployed on distributed resources to respond to network and compute variability. These balance the tracking accuracy, its real-time performance and the active camera-set size. We illustrate the concise expressiveness of the programming model for 4 tracking applications. Our detailed experiments for a network of 1000 camera-feeds on modest resources exhibit the tunable scalability, performance and quality trade-offs enabled by our dynamic tracking, batching and dropping strategies. | @cite_42 supports the definition of distributed analytics on the edge and over a WAN. They use a publish-subscribe model with hierarchical brokers to route video and event streams between analytics deployed on edge devices. They illustrate a multi-camera, single-person tracking application similar to ours. 
However, their platform design resembles a general-purpose event-driven middleware, without any specific model support or runtime optimizations for video analytics. We offer programming support for tracking applications, and batching and dropping strategies that are tunable to dynamism. Others exclusively focus on offline analysis over video feeds from a many-camera network, along with other data sources, for spatio-temporal association studies @cite_48 . | {
"cite_N": [
"@cite_48",
"@cite_42"
],
"mid": [
"2625270925",
"2537825474"
],
"abstract": [
"Video surveillance system has become a critical part in the security and protection system of modem cities, since smart monitoring cameras equipped with intelligent video analytics techniques can monitor and pre-alarm abnormal behaviors or events. However, with the expansion of the surveillance network, massive surveillance video data poses huge challenges to the analytics, storage and retrieval in the Big Data era. This paper presents a novel intelligent processing and utilization solution to big surveillance video data based on the event detection and alarming messages from front-end smart cameras. The method includes three parts: the intelligent pre-alarming for abnormal events, smart storage for surveillance video and rapid retrieval for evidence videos, which fully explores the temporal-spatial association analysis with respect to the abnormal events in different monitoring sites. Experimental results reveal that our proposed approach can reliably pre-alarm security risk events, substantially reduce storage space of recorded video and significantly speed up the evidence video retrieval associated with specific suspects.",
"Despite significant interest in the research community, the development of multi-camera applications is still quite challenging. This paper presents Ella - a dedicated publish subscribe middleware system that facilitates distribution, component reuse and communication for heterogeneous multi-camera applications. We present the key components of this middleware system and demonstrate its applicability based on an autonomous multi-camera person tracking application. Ella is able to run on resource-limited and heterogeneous VSNs. We present performance measurements on different hardware platforms as well as operating systems."
]
} |
1902.05577 | 2914165540 | Advances in deep neural networks (DNN) and computer vision (CV) algorithms have made it feasible to extract meaningful insights from large-scale deployments of urban cameras. Tracking an object of interest across the camera network in near real-time is a canonical problem. However, current tracking frameworks have two key limitations: 1) They are monolithic, proprietary, and lack the ability to rapidly incorporate sophisticated tracking models; and 2) They are less responsive to dynamism across wide-area computing resources that include edge, fog and cloud abstractions. We address these gaps using Anveshak, a runtime platform for composing and coordinating distributed tracking applications. It provides a domain-specific dataflow programming model to intuitively compose a tracking application, supporting contemporary CV advances like query fusion and re-identification, and enabling dynamic scoping of the camera-network's search space to avoid wasted computation. We also offer tunable batching and data-dropping strategies for dataflow blocks deployed on distributed resources to respond to network and compute variability. These balance the tracking accuracy, its real-time performance and the active camera-set size. We illustrate the concise expressiveness of the programming model for 4 tracking applications. Our detailed experiments for a network of 1000 camera-feeds on modest resources exhibit the tunable scalability, performance and quality trade-offs enabled by our dynamic tracking, batching and dropping strategies. | @cite_27 is specifically for video surveillance using wireless networks, which have limited bandwidth. They assume an Edge computing node (ECN) is co-located with the cameras and is used to reduce redundant data from being sent to the Cloud. The authors assign a utility score to each frame to ascertain its importance, similar to our flag. 
Our model and platform offer more active control over the logic running on the ECN, as well as over the runtime tuning. | {
"cite_N": [
"@cite_27"
],
"mid": [
"2086402015"
],
"abstract": [
"Internet-enabled cameras pervade daily life, generating a huge amount of data, but most of the video they generate is transmitted over wires and analyzed offline with a human in the loop. The ubiquity of cameras limits the amount of video that can be sent to the cloud, especially on wireless networks where capacity is at a premium. In this paper, we present Vigil, a real-time distributed wireless surveillance system that leverages edge computing to support real-time tracking and surveillance in enterprise campuses, retail stores, and across smart cities. Vigil intelligently partitions video processing between edge computing nodes co-located with cameras and the cloud to save wireless capacity, which can then be dedicated to Wi-Fi hotspots, offsetting their cost. Novel video frame prioritization and traffic scheduling algorithms further optimize Vigil's bandwidth utilization. We have deployed Vigil across three sites in both whitespace and Wi-Fi networks. Depending on the level of activity in the scene, experimental results show that Vigil allows a video surveillance system to support a geographical area of coverage between five and 200 times greater than an approach that simply streams video over the wireless network. For a fixed region of coverage and bandwidth, Vigil outperforms the default equal throughput allocation strategy of Wi-Fi by delivering up to 25 more objects relevant to a user's query."
]
} |
1902.05577 | 2914165540 | Advances in deep neural networks (DNN) and computer vision (CV) algorithms have made it feasible to extract meaningful insights from large-scale deployments of urban cameras. Tracking an object of interest across the camera network in near real-time is a canonical problem. However, current tracking frameworks have two key limitations: 1) They are monolithic, proprietary, and lack the ability to rapidly incorporate sophisticated tracking models; and 2) They are less responsive to dynamism across wide-area computing resources that include edge, fog and cloud abstractions. We address these gaps using Anveshak, a runtime platform for composing and coordinating distributed tracking applications. It provides a domain-specific dataflow programming model to intuitively compose a tracking application, supporting contemporary CV advances like query fusion and re-identification, and enabling dynamic scoping of the camera-network's search space to avoid wasted computation. We also offer tunable batching and data-dropping strategies for dataflow blocks deployed on distributed resources to respond to network and compute variability. These balance the tracking accuracy, its real-time performance and the active camera-set size. We illustrate the concise expressiveness of the programming model for 4 tracking applications. Our detailed experiments for a network of 1000 camera-feeds on modest resources exhibit the tunable scalability, performance and quality trade-offs enabled by our dynamic tracking, batching and dropping strategies. | The @cite_57 is designed to efficiently deploy DNN models on the edge. It provides a JavaScript API for users to specify the parameters of the DNNs to be used, with the actual DNN implementation abstracted from the end user. While it caters to a wider class of analytics applications, it lacks composability and domain-specific patterns for tracking applications. 
It offers performance optimizations for the DNN model, but does not consider distributed-systems issues such as batching, dropping, and the variability of network and compute that we emphasize. Also, not all tracking applications use DNNs, and classic CV algorithms are still relevant @cite_53 . | {
"cite_N": [
"@cite_57",
"@cite_53"
],
"mid": [
"2807214292",
"2732951378"
],
"abstract": [
"Deep learning with Deep Neural Networks (DNNs) can achieve much higher accuracy on many computer vision tasks than classic machine learning algorithms. Because of the high demand for both computation and storage resources, DNNs are often deployed in the cloud. Unfortunately, executing deep learning inference in the cloud, especially for real-time video analysis, often incurs high bandwidth consumption, high latency, reliability issues, and privacy concerns. Moving the DNNs close to the data source with an edge computing paradigm is a good approach to address those problems. The lack of an open source framework with a high-level API also complicates the deployment of deep learning-enabled service at the Internet edge. This paper presents EdgeEye, an edge-computing framework for real-time intelligent video analytics applications. EdgeEye provides a high-level, task-specific API for developers so that they can focus solely on application logic. EdgeEye does so by enabling developers to transform models trained with popular deep learning frameworks to deployable components with minimal effort. It leverages the optimized inference engines from industry to achieve the optimized inference performance and efficiency.",
"This paper develops a novel tree-based algorithm, called Bonsai, for efficient prediction on IoT devices - such as those based on the Arduino Uno board having an 8 bit ATmega328P microcontroller operating at 16 MHz with no native floating point support, 2 KB RAM and 32 KB read-only flash. Bonsai maintains prediction accuracy while minimizing model size and prediction costs by: (a) developing a tree model which learns a single, shallow, sparse tree with powerful nodes; (b) sparsely projecting all data into a low-dimensional space in which the tree is learnt; and (c) jointly learning all tree and projection parameters. Experimental results on multiple benchmark datasets demonstrate that Bonsai can make predictions in milliseconds even on slow microcontrollers, can fit in KB of memory, has lower battery consumption than all other algorithms while achieving prediction accuracies that can be as much as 30 higher than state-of-the-art methods for resource-efficient machine learning. Bonsai is also shown to generalize to other resource constrained settings beyond IoT by generating significantly better search results as compared to Bing's L3 ranker when the model size is restricted to 300 bytes. Bonsai's code can be downloaded from (BonsaiCode)."
]
} |
1902.05577 | 2914165540 | Advances in deep neural networks (DNN) and computer vision (CV) algorithms have made it feasible to extract meaningful insights from large-scale deployments of urban cameras. Tracking an object of interest across the camera network in near real-time is a canonical problem. However, current tracking frameworks have two key limitations: 1) They are monolithic, proprietary, and lack the ability to rapidly incorporate sophisticated tracking models; and 2) They are less responsive to dynamism across wide-area computing resources that include edge, fog and cloud abstractions. We address these gaps using Anveshak, a runtime platform for composing and coordinating distributed tracking applications. It provides a domain-specific dataflow programming model to intuitively compose a tracking application, supporting contemporary CV advances like query fusion and re-identification, and enabling dynamic scoping of the camera-network's search space to avoid wasted computation. We also offer tunable batching and data-dropping strategies for dataflow blocks deployed on distributed resources to respond to network and compute variability. These balance the tracking accuracy, its real-time performance and the active camera-set size. We illustrate the concise expressiveness of the programming model for 4 tracking applications. Our detailed experiments for a network of 1000 camera-feeds on modest resources exhibit the tunable scalability, performance and quality trade-offs enabled by our dynamic tracking, batching and dropping strategies. | @cite_54 is a video analytics system designed with the goals of approximation and delay tolerance. It schedules a mix of video analytics query workloads on a cluster of machines, where each query has a deadline and priority. Video Storm is capable of tuning knobs in the query, such as the resolution or the framerate, in order to support fluctuating workloads, at the cost of quality. 
VideoEdge @cite_10 extends this to support scheduling on a hierarchy of Edge, Fog and Cloud resources. Both provide tuning knobs which, at a high level, are similar to our Tuning Triangle. However, the key distinction is that they offer many degrees of freedom but also require the specification of an objective function to define the impact of the knobs on metrics. This makes them challenging to use out of the box if the interactions are not well-defined. Our domain-sensitive Tuning Triangle takes a more prescriptive approach: it intuitively captures the impact of the three well-defined knobs we offer on the three metrics that most impact tracking applications. | {
"cite_N": [
"@cite_54",
"@cite_10"
],
"mid": [
"2599379624",
"2896225285"
],
"abstract": [
"Video cameras are pervasively deployed for security and smart city scenarios, with millions of them in large cities worldwide. Achieving the potential of these cameras requires efficiently analyzing the live videos in real-time. We describe VideoStorm, a video analytics system that processes thousands of video analytics queries on live video streams over large clusters. Given the high costs of vision processing, resource management is crucial. We consider two key characteristics of video analytics: resource-quality tradeoff with multi-dimensional configurations, and variety in quality and lag goals. VideoStorm's offline profiler generates query resource-quality profile, while its online scheduler allocates resources to queries to maximize performance on quality and lag, in contrast to the commonly used fair sharing of resources in clusters. Deployment on an Azure cluster of 101 machines shows improvement by as much as 80 in quality of real-world queries and 7× better lag, processing video from operational traffic cameras.",
"Organizations deploy a hierarchy of clusters - cameras, private clusters, public clouds - for analyzing live video feeds from their cameras. Video analytics queries have many implementation options which impact their resource demands and accuracy of outputs. Our objective is to select the \"query plan\" - implementations (and their knobs) - and place it across the hierarchy of clusters, and merge common components across queries to maximize the average query accuracy. This is a challenging task, because we have to consider multi-resource (network and compute) demands and constraints in the hierarchical cluster and search in an exponentially large search space for plans, placements, and merging. We propose VideoEdge, a system that introduces dominant demand to identify the best tradeoff between multiple resources and accuracy, and narrows the search space by identifying a \"Pareto band\" of promising configurations. VideoEdge also balances the resource benefits and accuracy penalty of merging queries. Deployment results show that VideoEdge improves accuracy by 25:4 and 5:4 compared to fair allocation of resources and a recent solution for video query planning (VideoStorm), respectively, and is within 6 of optimum."
]
} |
1902.05577 | 2914165540 | Advances in deep neural networks (DNN) and computer vision (CV) algorithms have made it feasible to extract meaningful insights from large-scale deployments of urban cameras. Tracking an object of interest across the camera network in near real-time is a canonical problem. However, current tracking frameworks have two key limitations: 1) They are monolithic, proprietary, and lack the ability to rapidly incorporate sophisticated tracking models; and 2) They are less responsive to dynamism across wide-area computing resources that include edge, fog and cloud abstractions. We address these gaps using Anveshak, a runtime platform for composing and coordinating distributed tracking applications. It provides a domain-specific dataflow programming model to intuitively compose a tracking application, supporting contemporary CV advances like query fusion and re-identification, and enabling dynamic scoping of the camera-network's search space to avoid wasted computation. We also offer tunable batching and data-dropping strategies for dataflow blocks deployed on distributed resources to respond to network and compute variability. These balance the tracking accuracy, its real-time performance and the active camera-set size. We illustrate the concise expressiveness of the programming model for 4 tracking applications. Our detailed experiments for a network of 1000 camera-feeds on modest resources exhibit the tunable scalability, performance and quality trade-offs enabled by our dynamic tracking, batching and dropping strategies. | Big Data platforms and DSL. Generic stream processing platforms like Apache Storm, Flink and Spark Streaming @cite_24 @cite_34 @cite_16 offer flexible dataflow composition. But defining a dataflow pattern for tracking applications, as we do, offers users a frame of reference for designing distributed video analytics applications with modular user-defined tasks. | {
"cite_N": [
"@cite_24",
"@cite_34",
"@cite_16"
],
"mid": [
"2189465200",
"",
"2566979091"
],
"abstract": [
"MapReduce and its variants have been highly successful in implementing large-scale data-intensive applications on commodity clusters. However, most of these systems are built around an acyclic data flow model that is not suitable for other popular applications. This paper focuses on one such class of applications: those that reuse a working set of data across multiple parallel operations. This includes many iterative machine learning algorithms, as well as interactive data analysis tools. We propose a new framework called Spark that supports these applications while retaining the scalability and fault tolerance of MapReduce. To achieve these goals, Spark introduces an abstraction called resilient distributed datasets (RDDs). An RDD is a read-only collection of objects partitioned across a set of machines that can be rebuilt if a partition is lost. Spark can outperform Hadoop by 10x in iterative machine learning jobs, and can be used to interactively query a 39 GB dataset with sub-second response time.",
"",
"Modern enterprise applications are currently undergoing a complete paradigm shift away from traditional transactional processing to combined analytical and transactional processing. This challenge of combining two opposing query types in a single database management system results in additional requirements for transaction management as well. In this paper, we discuss our approach to achieve high throughput for transactional query processing while allowing concurrent analytical queries. We present our approach to distributed snapshot isolation and optimized two-phase commit protocols."
]
} |
1902.05577 | 2914165540 | Advances in deep neural networks (DNN) and computer vision (CV) algorithms have made it feasible to extract meaningful insights from large-scale deployments of urban cameras. Tracking an object of interest across the camera network in near real-time is a canonical problem. However, current tracking frameworks have two key limitations: 1) They are monolithic, proprietary, and lack the ability to rapidly incorporate sophisticated tracking models; and 2) They are less responsive to dynamism across wide-area computing resources that include edge, fog and cloud abstractions. We address these gaps using Anveshak, a runtime platform for composing and coordinating distributed tracking applications. It provides a domain-specific dataflow programming model to intuitively compose a tracking application, supporting contemporary CV advances like query fusion and re-identification, and enabling dynamic scoping of the camera-network's search space to avoid wasted computation. We also offer tunable batching and data-dropping strategies for dataflow blocks deployed on distributed resources to respond to network and compute variability. These balance the tracking accuracy, its real-time performance and the active camera-set size. We illustrate the concise expressiveness of the programming model for 4 tracking applications. Our detailed experiments for a network of 1000 camera-feeds on modest resources exhibit the tunable scalability, performance and quality trade-offs enabled by our dynamic tracking, batching and dropping strategies. | Google's @cite_36 is a domain-specific programming model for defining DNNs and CV algorithms. It provides support to deploy trained models for inference. However, it is not meant for composing arbitrary modules together. The tasks take a Tensor as input and give a Tensor as output, and there are no native patterns such as Map and Reduce that big data frameworks like MapReduce and Spark offer. 
Such pre-defined APIs allow users to better reason about the operations being performed on the data, and map to well-defined implementations that save users effort. We make a similar effort for tracking analytics. | {
"cite_N": [
"@cite_36"
],
"mid": [
"2402144811"
],
"abstract": [
"TensorFlow is a machine learning system that operates at large scale and in heterogeneous environments. Tensor-Flow uses dataflow graphs to represent computation, shared state, and the operations that mutate that state. It maps the nodes of a dataflow graph across many machines in a cluster, and within a machine across multiple computational devices, including multicore CPUs, general-purpose GPUs, and custom-designed ASICs known as Tensor Processing Units (TPUs). This architecture gives flexibility to the application developer: whereas in previous \"parameter server\" designs the management of shared state is built into the system, TensorFlow enables developers to experiment with novel optimizations and training algorithms. TensorFlow supports a variety of applications, with a focus on training and inference on deep neural networks. Several Google services use TensorFlow in production, we have released it as an open-source project, and it has become widely used for machine learning research. In this paper, we describe the TensorFlow dataflow model and demonstrate the compelling performance that TensorFlow achieves for several real-world applications."
]
} |
1902.05577 | 2914165540 | Advances in deep neural networks (DNN) and computer vision (CV) algorithms have made it feasible to extract meaningful insights from large-scale deployments of urban cameras. Tracking an object of interest across the camera network in near real-time is a canonical problem. However, current tracking frameworks have two key limitations: 1) They are monolithic, proprietary, and lack the ability to rapidly incorporate sophisticated tracking models; and 2) They are less responsive to dynamism across wide-area computing resources that include edge, fog and cloud abstractions. We address these gaps using Anveshak, a runtime platform for composing and coordinating distributed tracking applications. It provides a domain-specific dataflow programming model to intuitively compose a tracking application, supporting contemporary CV advances like query fusion and re-identification, and enabling dynamic scoping of the camera-network's search space to avoid wasted computation. We also offer tunable batching and data-dropping strategies for dataflow blocks deployed on distributed resources to respond to network and compute variability. These balance the tracking accuracy, its real-time performance and the active camera-set size. We illustrate the concise expressiveness of the programming model for 4 tracking applications. Our detailed experiments for a network of 1000 camera-feeds on modest resources exhibit the tunable scalability, performance and quality trade-offs enabled by our dynamic tracking, batching and dropping strategies. | Google's MillWheel @cite_21 uses the concept of low watermarks to determine the progress of the system, defined as the timestamp of the oldest unprocessed event in the system. It guarantees that no event older than the watermark may enter the system. Watermarks can thus be used to safely trigger computations such as window operations.
While our batching and drop strategies are similar, watermarks cannot determine the time left for a message in the pipeline and have no notion of user-defined latency. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2153972927"
],
"abstract": [
"MillWheel is a framework for building low-latency data-processing applications that is widely used at Google. Users specify a directed computation graph and application code for individual nodes, and the system manages persistent state and the continuous flow of records, all within the envelope of the framework's fault-tolerance guarantees. This paper describes MillWheel's programming model as well as its implementation. The case study of a continuous anomaly detector in use at Google serves to motivate how many of MillWheel's features are used. MillWheel's programming model provides a notion of logical time, making it simple to write time-based aggregations. MillWheel was designed from the outset with fault tolerance and scalability in mind. In practice, we find that MillWheel's unique combination of scalability, fault tolerance, and a versatile programming model lends itself to a wide variety of problems at Google."
]
} |
1902.05577 | 2914165540 | Advances in deep neural networks (DNN) and computer vision (CV) algorithms have made it feasible to extract meaningful insights from large-scale deployments of urban cameras. Tracking an object of interest across the camera network in near real-time is a canonical problem. However, current tracking frameworks have two key limitations: 1) They are monolithic, proprietary, and lack the ability to rapidly incorporate sophisticated tracking models; and 2) They are less responsive to dynamism across wide-area computing resources that include edge, fog and cloud abstractions. We address these gaps using Anveshak, a runtime platform for composing and coordinating distributed tracking applications. It provides a domain-specific dataflow programming model to intuitively compose a tracking application, supporting contemporary CV advances like query fusion and re-identification, and enabling dynamic scoping of the camera-network's search space to avoid wasted computation. We also offer tunable batching and data-dropping strategies for dataflow blocks deployed on distributed resources to respond to network and compute variability. These balance the tracking accuracy, its real-time performance and the active camera-set size. We illustrate the concise expressiveness of the programming model for 4 tracking applications. Our detailed experiments for a network of 1000 camera-feeds on modest resources exhibit the tunable scalability, performance and quality trade-offs enabled by our dynamic tracking, batching and dropping strategies. | Aurora @cite_3 introduced the concept of load shedding, which is conceptually the same as data drops. They define QoS as a multidimensional function, including attributes such as response time (similar to our maximum tolerable latency) and tuple drops. Given this function, the objective is to maximize the QoS. Borealis @cite_19 extended this to a distributed setup.
Anveshak uses multiple drop points even within a task, which offers it fine-grained control and robustness. Features like "do not drop" and resilience to the clock skews found in WAN resources are other domain- and system-specific optimizations. | {
"cite_N": [
"@cite_19",
"@cite_3"
],
"mid": [
"2115503987",
"2149576945"
],
"abstract": [
"Borealis is a second-generation distributed stream processing engine that is being developed at Brandeis University, Brown University, and MIT. Borealis inherits core stream processing functionality from Aurora [14] and distribution functionality from Medusa [51]. Borealis modifies and extends both systems in non-trivial and critical ways to provide advanced capabilities that are commonly required by newly-emerging stream processing applications. In this paper, we outline the basic design and functionality of Borealis. Through sample real-world applications, we motivate the need for dynamically revising query results and modifying query specifications. We then describe how Borealis addresses these challenges through an innovative set of features, including revision records, time travel, and control lines. Finally, we present a highly flexible and scalable QoS-based optimization model that operates across server and sensor networks and a new fault-tolerance model with flexible consistency-availability trade-offs.",
"Abstract.This paper describes the basic processing model and architecture of Aurora, a new system to manage data streams for monitoring applications. Monitoring applications differ substantially from conventional business data processing. The fact that a software system must process and react to continual inputs from many sources (e.g., sensors) rather than from human operators requires one to rethink the fundamental architecture of a DBMS for this application area. In this paper, we present Aurora, a new DBMS currently under construction at Brandeis University, Brown University, and M.I.T. We first provide an overview of the basic Aurora model and architecture and then describe in detail a stream-oriented set of operators."
]
} |
1902.05623 | 2950136818 | In the age of Big Data, releasing protected sensitive data at a future point in time is critical for various applications. Such self-emerging data release requires the data to be protected until a prescribed data release time and be automatically released to the recipient at the release time, even if the data sender goes offline. While straight-forward centralized approaches provide a basic solution to the problem, unfortunately they are limited to a single point of trust and involve a single point of control. This paper presents decentralized techniques for supporting self-emerging data using smart contracts in Ethereum blockchain networks. We design a credible and enforceable smart contract for supporting self-emerging data release. The smart contract employs a set of Ethereum peers to jointly follow the proposed timed-release service protocol allowing the participating peers to earn the remuneration paid by the service users. We model the problem as an extensive-form game with imperfect information to protect against possible adversarial attacks including some peers destroying the private data (drop attack) or secretly releasing the private data before the release time (release-ahead attack). We demonstrate the efficacy and attack-resilience of the proposed techniques through rigorous analysis and experimental evaluation. Our implementation and experimental evaluation on the Ethereum official test network demonstrate the low monetary cost and the low time overhead associated with the proposed approach and validate its guaranteed security properties. | Our preliminary work on decentralized self-emerging data has studied the problem in the context of Distributed Hash Table (DHT) networks @cite_2 . The idea behind these techniques is to leverage the scalability and distributed features of DHT P2P networks to keep messages securely hidden before the release time.
In contrast to such DHT-based solutions, which do not offer guaranteed resilience to potential misbehaviors, the decentralized self-emerging data release techniques presented in this paper employ a blockchain infrastructure that offers more robust and attractive features, including higher protocol enforceability through incentives and security deposits. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2734727053"
],
"abstract": [
"Releasing private data to the future is a challenging problem. Making private data accessible at a future point in time requires mechanisms to keep data secure and undiscovered so that protected data is not available prior to the legitimate release time and the data appears automatically at the expected release time. In this paper, we develop new mechanisms to support self-emerging data storage that securely hide keys of encrypted data in a Distributed Hash Table (DHT) network that makes the encryption keys automatically appear at the predetermined release time so that the protected encrypted private data can be decrypted at the release time. We show that a straight-forward approach of privately storing keys in a DHT is prone to a number of attacks that could either make the hidden data appear before the prescribed release time (release-ahead attack) or destroy the hidden data altogether (drop attack). We develop a suite of self-emerging key routing mechanisms for securely storing and routing encryption keys in the DHT. We show that the proposed scheme is resilient to both release-ahead attack and drop attack as well as to attacks that arise due to traditional churn issues in DHT networks. Our experimental evaluation demonstrates the performance of the proposed schemes in terms of attack resilience and churn resilience."
]
} |
1902.05770 | 2914049912 | With the promising progress of deep neural networks, layer aggregation has been used to fuse information across layers in various fields, such as computer vision and machine translation. However, most of the previous methods combine layers in a static fashion in that their aggregation strategy is independent of specific hidden states. Inspired by recent progress on capsule networks, in this paper we propose to use routing-by-agreement strategies to aggregate layers dynamically. Specifically, the algorithm learns the probability of a part (individual layer representations) assigned to a whole (aggregated representations) in an iterative way and combines parts accordingly. We implement our algorithm on top of the state-of-the-art neural machine translation model TRANSFORMER and conduct experiments on the widely-used WMT14 English-German and WMT17 Chinese-English translation datasets. Experimental results across language pairs show that the proposed approach consistently outperforms the strong baseline model and a representative static aggregation model. | Exploiting deep representations has been studied by various communities, from computer vision to natural language processing. he2016deep propose a residual learning framework, combining layers and encouraging gradient flow through simple short-cut connections. Huang:2017:CVPR extend the idea by introducing densely connected layers, which better strengthen feature propagation and encourage feature reuse. Deep layer aggregation @cite_30 designs an architecture to fuse information iteratively and hierarchically. | {
"cite_N": [
"@cite_30"
],
"mid": [
"2963323244"
],
"abstract": [
"Visual recognition requires rich representations that span levels from low to high, scales from small to large, and resolutions from fine to coarse. Even with the depth of features in a convolutional network, a layer in isolation is not enough: compounding and aggregating these representations improves inference of what and where. Architectural efforts are exploring many dimensions for network backbones, designing deeper or wider architectures, but how to best aggregate layers and blocks across a network deserves further attention. Although skip connections have been incorporated to combine layers, these connections have been \"shallow\" themselves, and only fuse by simple, one-step operations. We augment standard architectures with deeper aggregation to better fuse information across layers. Our deep layer aggregation structures iteratively and hierarchically merge the feature hierarchy to make networks with better accuracy and fewer parameters. Experiments across architectures and tasks show that deep layer aggregation improves recognition and resolution compared to existing branching and merging schemes."
]
} |
1902.05770 | 2914049912 | With the promising progress of deep neural networks, layer aggregation has been used to fuse information across layers in various fields, such as computer vision and machine translation. However, most of the previous methods combine layers in a static fashion in that their aggregation strategy is independent of specific hidden states. Inspired by recent progress on capsule networks, in this paper we propose to use routing-by-agreement strategies to aggregate layers dynamically. Specifically, the algorithm learns the probability of a part (individual layer representations) assigned to a whole (aggregated representations) in an iterative way and combines parts accordingly. We implement our algorithm on top of the state-of-the-art neural machine translation model TRANSFORMER and conduct experiments on the widely-used WMT14 English-German and WMT17 Chinese-English translation datasets. Experimental results across language pairs show that the proposed approach consistently outperforms the strong baseline model and a representative static aggregation model. | The idea of dynamic routing was first proposed by Sabour:2017:NIPS, which aims to address the representational limitations of convolutional and recurrent neural networks for image classification. The iterative routing procedure is further improved by using an Expectation-Maximization algorithm to better estimate the agreement between capsules @cite_28 . In the computer vision community, xi2017capsule explore its application on CIFAR data with higher dimensionality. lalonde2018capsules apply capsule networks to the object segmentation task. | {
"cite_N": [
"@cite_28"
],
"mid": [
"2785994986"
],
"abstract": [
"A capsule is a group of neurons whose outputs represent different properties of the same entity. Each layer in a capsule network contains many capsules [a group of capsules forms a capsule layer and can be used in place of a traditional layer in a neural net]. We describe a version of capsules in which each capsule has a logistic unit to represent the presence of an entity and a 4x4 matrix which could learn to represent the relationship between that entity and the viewer (the pose). A capsule in one layer votes for the pose matrix of many different capsules in the layer above by multiplying its own pose matrix by trainable viewpoint-invariant transformation matrices that could learn to represent part-whole relationships. Each of these votes is weighted by an assignment coefficient. These coefficients are iteratively updated for each image using the Expectation-Maximization algorithm such that the output of each capsule is routed to a capsule in the layer above that receives a cluster of similar votes. The transformation matrices are trained discriminatively by backpropagating through the unrolled iterations of EM between each pair of adjacent capsule layers. On the smallNORB benchmark, capsules reduce the number of test errors by 45 compared to the state-of-the-art. Capsules also show far more resistance to white box adversarial attack than our baseline convolutional neural network."
]
} |
1902.05873 | 2913506340 | There exists a plethora of consensus protocols in literature. The reason is that there is no one-size-fits-all solution, since every protocol is unique and its performance is directly tied to the deployment settings and workload configurations. Some protocols are well suited for geographical scale environments, e.g., leaderless, while others provide high performance under workloads with high contention, e.g., single leader-based. Thus, existing protocols seldom adapt to changing workload conditions. To overcome this limitation, we propose Spectrum, a consensus framework that is able to switch consensus protocols at run-time, to enable a dynamic reaction to changes in the workload characteristics and deployment scenarios. With this framework, we provide transparent instantiation of various consensus protocols, and a completely asynchronous switching mechanism with zero downtime. We assess the effectiveness of Spectrum via an extensive experimental evaluation, which shows that Spectrum is able to limit the increase of the user perceived latency when switching among consensus protocols. | Composition of consensus protocols has been investigated in @cite_38 . Abstract is an abstraction for designing and reconfiguring generalized state machines, by leveraging the idea of composing instances of different fault-tolerant consensus protocols. The idea is to build simpler consensus protocols each tolerating particular system conditions such as fault models and contention, and compose them together to achieve a robust system. The downside of the Abstract approach is that it requires the candidate protocols to implement specific interfaces. Specifically, the candidates must be able to export (import, respectively) internal state outside (into, respectively) the protocol. This means that existing as well as new protocols must be rethought to accommodate to the abstraction. | {
"cite_N": [
"@cite_38"
],
"mid": [
"1965159636"
],
"abstract": [
"Modern Byzantine fault-tolerant state machine replication (BFT) protocols involve about 20,000 lines of challenging C++ code encompassing synchronization, networking and cryptography. They are notoriously difficult to develop, test and prove. We present a new abstraction to simplify these tasks. We treat a BFT protocol as a composition of instances of our abstraction. Each instance is developed and analyzed independently. To illustrate our approach, we first show how our abstraction can be used to obtain the benefits of a state-of-the-art BFT protocol with much less pain. Namely, we develop AZyzzyva, a new protocol that mimics the behavior of Zyzzyva in best-case situations (for which Zyzzyva was optimized) using less than 24 of the actual code of Zyzzyva. To cover worst-case situations, our abstraction enables to use in AZyzzyva any existing BFT protocol, typically, a classical one like PBFT which has been tested and proved correct. We then present Aliph, a new BFT protocol that outperforms previous BFT protocols both in terms of latency (by up to 30 ) and throughput (by up to 360 ). The development of Aliph required two new instances of our abstraction. Each instance contains less than 25 of the code needed to develop state-of-the-art BFT protocols."
]
} |
1902.05873 | 2913506340 | There exists a plethora of consensus protocols in literature. The reason is that there is no one-size-fits-all solution, since every protocol is unique and its performance is directly tied to the deployment settings and workload configurations. Some protocols are well suited for geographical scale environments, e.g., leaderless, while others provide high performance under workloads with high contention, e.g., single leader-based. Thus, existing protocols seldom adapt to changing workload conditions. To overcome this limitation, we propose Spectrum, a consensus framework that is able to switch consensus protocols at run-time, to enable a dynamic reaction to changes in the workload characteristics and deployment scenarios. With this framework, we provide transparent instantiation of various consensus protocols, and a completely asynchronous switching mechanism with zero downtime. We assess the effectiveness of Spectrum via an extensive experimental evaluation, which shows that Spectrum is able to limit the increase of the user perceived latency when switching among consensus protocols. | TAS @cite_0 is an approach for automating the elastic scaling of replicated in-memory transactional systems. Spectrum can benefit from its performance predictor, which relies on the combined usage of analytical modeling and machine learning, since it is able to forecast the effects of data contention. For the same reason, the machine learning-based model of MorphR @cite_5 can be exploited by Spectrum, which finds the optimal transactional replication protocols according to the conflicts in the system. MorphR is able to choose between blocking (i.e., 2-PC) and non-blocking (i.e., total order) protocols, but it does not focus on the optimal switching mechanisms among non-blocking protocols, e.g., different consensus protocols. | {
"cite_N": [
"@cite_0",
"@cite_5"
],
"mid": [
"2118677985",
"1979867838"
],
"abstract": [
"In this article, we introduce TAS (Transactional Auto Scaler), a system for automating the elastic scaling of replicated in-memory transactional data grids, such as NoSQL data stores or Distributed Transactional Memories. Applications of TAS range from online self-optimization of in-production applications to the automatic generation of QoS cost-driven elastic scaling policies, as well as to support for what-if analysis on the scalability of transactional applications. In this article, we present the key innovation at the core of TAS, namely, a novel performance forecasting methodology that relies on the joint usage of analytical modeling and machine learning. By exploiting these two classically competing approaches in a synergic fashion, TAS achieves the best of the two worlds, namely, high extrapolation power and good accuracy, even when faced with complex workloads deployed over public cloud infrastructures. We demonstrate the accuracy and feasibility of TAS’s performance forecasting methodology via an extensive experimental study based on a fully fledged prototype implementation integrated with a popular open-source in-memory transactional data grid (Red Hat’s Infinispan) and industry-standard benchmarks generating a breadth of heterogeneous workloads.",
"Replication plays an essential role for in-memory distributed transactional platforms, such as NoSQL data grids, given that it represents the primary mean to ensure data durability. Unfortunately, no single replication technique can ensure optimal performance across a wide range of workloads and system configurations. This paper tackles this problem by presenting MORPHR, a framework that allows to automatically adapt the replication protocol of in-memory transactional platforms according to the current operational conditions. MORPHR presents two key innovative aspects. On one hand, it allows to plug in, in a modular fashion, specialized algorithms to regulate the switching between arbitrary replication protocols. On the other hand, MORPHR relies on state of the art machine learning techniques to autonomously determine the optimal replication in face of varying workloads. We integrated MORPHR in a popular open-source in-memory NoSQL data grid, and evaluated it by means of an extensive experimental study. The results highlight that MORPHR is accurate in identifying the optimal replication strategy in presence of complex, realistic workloads, and does so with minimal overhead."
]
} |
1902.05428 | 2914704821 | Estimation of quantiles is one of the most fundamental real-time analysis tasks. Most real-time data streams vary dynamically with time and incremental quantile estimators document state-of-the art performance to track quantiles of such data streams. However, most are not able to make joint estimates of multiple quantiles in a consistent manner, and estimates may violate the monotone property of quantiles. In this paper we propose the general concept of *conditional quantiles* that can extend incremental estimators to jointly track multiple quantiles. We apply the concept to propose two new estimators. Extensive experimental results, on both synthetic and real-life data, show that the new estimators clearly outperform legacy state-of-the-art joint quantile tracking algorithm and achieve faster adaptivity in dynamically varying data streams. | Given a dynamically varying data stream, two main problems are considered, namely to i) dynamically update estimates of the quantiles of all data received from the stream so far, or ii) estimate the quantiles of the current distribution of the data stream (tracking). To address problem i), histogram-based methods form an important class of memory-efficient methods. A representative work in this direction is due to Schmeiser and Deutsch @cite_22 , who proposed to use equidistant bins whose boundaries are adjusted online. @cite_17 take a different approach than equidistant bins, attempting to maintain bins in a manner that maximizes the entropy of the corresponding estimate of the historical data distribution; the bin boundaries are again adjusted in an online manner. Nevertheless, histogram-based methods have problems addressing problem ii), i.e., tracking the quantiles of the current data stream distribution @cite_24 . | {
"cite_N": [
"@cite_24",
"@cite_22",
"@cite_17"
],
"mid": [
"1984639724",
"2042616284",
""
],
"abstract": [
"Network monitoring in cellular networks requires the tracking of quantiles for data distributions of many evolving network measurements (e.g. number of high signaling subscribers per minute). Most quantile estimation algorithms are based on a summary of the empirical data distribution, using either a representative sample or a global approximation of the entire distribution. In contrast, by viewing data as a quantity from a random distribution, the stochastic approximation (SA) for quantile estimation does not keep a global approximation, but rather local approximations at the quantiles of interest, and therefore uses negligible memory even for estimating tail quantiles. However, the current stochastic approximation algorithm for quantile estimation tracks each quantile separately, and this may lead to a violation of the monotone property of quantiles. In this paper, we propose a stochastic approximation technique that enables the simultaneous tracking of multiple quantiles. Our technique maintains the monotone property of different quantiles, and is adaptive to changes in the data distribution. We evaluate its performance using real cellular provider datasets. Our results show that the technique is very efficient.",
"Data are often collected in histogram form, especially in the context of computer simulation. While requiring less memory and computation than saving all observations, the grouping of observations in the histogram cells complicates statistical estimation of parameters of interest. In this paper the mean and variance of the cell midpoint estimator of the pth quantile are analyzed in terms of distribution, cell width, and sample size. Three idiosyncrasies of using cell midpoints to estimate quantiles are illustrated. The results tend to run counter to previously published results for grouped data.",
""
]
} |
1902.05431 | 2914441490 | In this paper, we propose a novel method for highly efficient follicular segmentation of thyroid cytopathological WSIs. Firstly, we propose a hybrid segmentation architecture, which integrates a classifier into Deeplab V3 by adding a branch. A large amount of the WSI segmentation time is saved by skipping the irrelevant areas using the classification branch. Secondly, we merge the low scale fine features into the original atrous spatial pyramid pooling (ASPP) in Deeplab V3 to accurately represent the details in cytopathological images. Thirdly, our hybrid model is trained by a criterion-oriented adaptive loss function, which leads the model converging much faster. Experimental results on a collection of thyroid patches demonstrate that the proposed model reaches 80.9 on the segmentation accuracy. Besides, 93 time is reduced for the WSI segmentation by using our proposed method, and the WSI-level accuracy achieves 53.4 . | Traditional machine learning methods @cite_8 @cite_13 and deep learning methods @cite_9 @cite_0 greatly improve the accuracy of automatic lesion classification in medical areas. @cite_8 apply a support vector machine (SVM) and achieve a diagnostic accuracy of 96.7%. Traditional semantic segmentation methods @cite_17 learn the representation from hand-crafted features instead of semantic features. Recently, CNN-based methods have largely improved performance. FCN @cite_18 is the pioneering work on semantic segmentation, modifying the fully connected layers of classification networks into convolution layers. DeepLab @cite_6 @cite_5 @cite_2 uses dilated convolutions to provide dense labeling and enlarge the receptive field. Semantic segmentation methods have already been used in pathological image segmentation: @cite_19 propose a fully automated segmentation framework to identify placental candidate pixels, and @cite_4 introduce an image segmentation method based on a recurrent neural network. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_8",
"@cite_9",
"@cite_6",
"@cite_0",
"@cite_19",
"@cite_2",
"@cite_5",
"@cite_13",
"@cite_17"
],
"mid": [
"1903029394",
"2524644797",
"2169245092",
"2739315424",
"2964288706",
"2327598497",
"2476575773",
"2630837129",
"2412782625",
"2072569807",
"1978085585"
],
"abstract": [
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.",
"Deep neural networks have demonstrated very promising performance on accurate segmentation of challenging organs (e.g., pancreas) in abdominal CT and MRI scans. The current deep learning approaches conduct pancreas segmentation by processing sequences of 2D image slices independently through deep, dense per-pixel masking for each image, without explicitly enforcing spatial consistency constraint on segmentation of successive slices. We propose a new convolutional recurrent neural network architecture to address the contextual learning and segmentation consistency problem. A deep convolutional sub-network is first designed and pre-trained from scratch. The output layer of this network module is then connected to recurrent layers and can be fine-tuned for contextual learning, in an end-to-end manner. Our recurrent sub-network is a type of Long short-term memory (LSTM) network that performs segmentation on an image by integrating its neighboring slice segmentation predictions, in the form of a dependent sequence processing. Additionally, a novel segmentation-direct loss function (named Jaccard Loss) is proposed and deep networks are trained to optimize Jaccard Index (JI) directly. Extensive experiments are conducted to validate our proposed deep models, on quantitative pancreas segmentation using both CT and MRI scans. Our method outperforms the state-of-the-art work on CT [11] and MRI pancreas segmentation [1], respectively.",
"Objective: The aim of this study was to develop an automated computer-aided diagnostic system for diagnosis of thyroid cancer pattern in fine needle aspiration cytology (FNAC) microscopic images with high degree of sensitivity and specificity using statistical texture features and a Support Vector Machine classifier (SVM). Materials and Methods: A training set of 40 benign and 40 malignant FNAC images and a testing set of 10 benign and 20 malignant FNAC images were used to perform the diagnosis of thyroid cancer. Initially, segmentation of region of interest (ROI) was performed by region-based morphology segmentation. The developed diagnostic system utilized statistical texture features derived from the segmented images using a Gabor filter bank at various wavelengths and angles. Finally, the SVM was used as a machine learning algorithm to identify benign and malignant states of thyroid nodules. Results: The SVM achieved a diagnostic accuracy of 96.7% with sensitivity and specificity of 95% and 100%, respectively, at a wavelength of 4 and an angle of 45. Conclusion: The results show that the diagnosis of thyroid cancer in FNAC images can be effectively performed using statistical texture information derived with Gabor filters in association with an SVM.",
"Fine needle aspiration cytology is commonly used for diagnosis of breast cancer, with traditional practice being based on the subjective visual assessment of the breast cytopathology cell samples under a microscope to evaluate the state of various cytological features. Therefore, there are many challenges in maintaining consistency and reproducibility of findings. However, digital imaging and computational aid in diagnosis can improve the diagnostic accuracy and reduce the effective workload of pathologists. This paper presents a deep convolutional neural network (CNN) based classification approach for the diagnosis of the cell samples using their microscopic high-magnification multi-views. The proposed approach has been tested using GoogLeNet architecture of CNN on an image dataset of 37 breast cytopathology samples (24 benign and 13 malignant), where the network was trained using images of 54 cell samples and tested on the rest, achieving 89.7% mean accuracy in 8-fold validation.",
"Abstract: Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6% IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.",
"Cytopathology is the study of disease at the cellular level and often used as a screening tool for cancer. Thyroid cytopathology is a branch of pathology that studies the diagnosis of thyroid lesions and diseases. A pathologist views cell images that may have high visual variance due to different anatomical structures and pathological characteristics. To assist the physician with identifying and searching through images, we propose a deep semantic mobile application. Our work augments recent advances in the digitization of pathology and machine learning techniques, where there are transformative opportunities for computers to assist pathologists. Our system uses a custom thyroid ontology that can be augmented with multimedia metadata extracted from images using deep machine learning techniques. We describe the utilization of a particular methodology, deep convolutional neural networks, to the application of cytopathology classification. Our method is able to leverage networks that have been trained on millions of generic images, to medical scenarios where only hundreds or thousands of images exist. We demonstrate the benefits of our framework through both quantitative and qualitative results.",
"Recently, magnetic resonance imaging has revealed to be important for the evaluation of placenta’s health during pregnancy. Quantitative assessment of the placenta requires a segmentation, which proves to be challenging because of the high variability of its position, orientation, shape and appearance. Moreover, image acquisition is corrupted by motion artifacts from both fetal and maternal movements. In this paper we propose a fully automatic segmentation framework of the placenta from structural T2-weighted scans of the whole uterus, as well as an extension in order to provide an intuitive pre-natal view into this vital organ. We adopt a 3D multi-scale convolutional neural network to automatically identify placental candidate pixels. The resulting classification is subsequently refined by a 3D dense conditional random field, so that a high resolution placental volume can be reconstructed from multiple overlapping stacks of slices. Our segmentation framework has been tested on 66 subjects at gestational ages 20–38 weeks achieving a Dice score of (71.95 ± 19.79)% for healthy fetuses with a fixed scan sequence and (66.89 ± 15.35)% for a cohort mixed with cases of intrauterine fetal growth restriction using varying scan parameters.",
"In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter's field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed 'DeepLabv3' system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.",
"In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.",
"An automated medical diagnosis system has been developed to discriminate benign and malignant thyroid nodules in multi-stained fine needle aspiration biopsy (FNAB) images using multiple classifier fusion and presented in this paper. First, thyroid cell regions are extracted from the auto-cropped sub-image by implementing mathematical morphology segmentation method. Subsequently, statistical features are extracted by two-level wavelet decomposition based on texture characteristics of the thyroid cells. After that, decision tree (DT), k-nearest neighbor (k-NN), Elman neural network (ENN) and support vector machine (SVM) classifiers are used separately to classify thyroid nodules into benign and malignant. The four individual classifier outputs are then fused together using majority voting rule and linear combination rules to improve the performance of the diagnostic system. The classification results of ENN and SVM classifiers show an overall diagnostic accuracy (DA) of 90%, sensitivity (Se) of 85% and 100% ...",
"Image segmentation is the process of clustering pixels into salient image regions (i.e) regions corresponding to individual surfaces, objects or natural parts of objects. Image segmentation plays a vital role in image analysis and computer vision applications. Several general-purpose algorithms and techniques have been developed for image segmentation. Segmentation process should be stopped when region of interest is separated from the input image. Based on the application, region of interest may differ and hence none of the segmentation algorithm satisfies the global applications. Thus segmentation still remains a challenging area for researchers. This paper presents a comparison of some literature on color image segmentation based on region growing and merging algorithm. Finally an automatic seeded region growing algorithm is proposed for segmenting color images."
]
} |
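Several abstracts in the row above hinge on atrous (dilated) convolution: sampling the input at spaced-out positions enlarges the receptive field without adding parameters. Below is a minimal 1-D NumPy sketch of this idea, written for illustration only (it is not code from any of the cited papers; function and variable names are our own):

```python
import numpy as np

def dilated_conv1d(x, w, dilation=1):
    """Valid-mode 1-D convolution (cross-correlation) with a dilated kernel.

    Inserting `dilation - 1` gaps between kernel taps enlarges the
    receptive field to (len(w) - 1) * dilation + 1 samples while the
    number of weights stays fixed -- the core idea behind atrous
    convolution in the DeepLab family.
    """
    k = len(w)
    span = (k - 1) * dilation + 1          # effective receptive field
    n_out = len(x) - span + 1
    out = np.empty(n_out)
    for i in range(n_out):
        taps = x[i : i + span : dilation]  # sample input at dilated positions
        out[i] = np.dot(taps, w)
    return out

x = np.arange(10, dtype=float)
w = np.array([1.0, 1.0, 1.0])

dense  = dilated_conv1d(x, w, dilation=1)  # receptive field of 3 samples
atrous = dilated_conv1d(x, w, dilation=2)  # receptive field of 5, same 3 weights
```

With the same three weights, `dilation=2` covers a five-sample window, which is why stacking a few atrous rates (as in ASPP) captures context at multiple scales cheaply.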
1902.05546 | 2913988491 | Contemporary sensorimotor learning approaches typically start with an existing complex agent (e.g., a robotic arm), which they learn to control. In contrast, this paper investigates a modular co-evolution strategy: a collection of primitive agents learns to dynamically self-assemble into composite bodies while also learning to coordinate their behavior to control these bodies. Each primitive agent consists of a limb with a motor attached at one end. Limbs may choose to link up to form collectives. When a limb initiates a link-up action and there is another limb nearby, the latter is magnetically connected to the 'parent' limb's motor. This forms a new single agent, which may further link with other agents. In this way, complex morphologies can emerge, controlled by a policy whose architecture is in explicit correspondence with the morphology. We evaluate the performance of these 'dynamic' and 'modular' agents in simulated environments. We demonstrate better generalization to test-time changes both in the environment, as well as in the agent morphology, compared to static and monolithic baselines. Project videos and code are available at this https URL | The idea of modular and self-assembling agents goes back at least to Von Neumann's Theory of Self-Reproducing Automata @cite_1 . In robotics, such systems have been termed "self-reconfiguring modular robots" @cite_33 @cite_24 . There has been a lot of work in modular robotics to design real hardware robotic modules that can be docked together to form complex robotic morphologies @cite_3 @cite_7 @cite_32 @cite_30 @cite_9 . We approach this problem from a learning perspective, in particular deep RL, and study the resulting generalization properties. | {
"cite_N": [
"@cite_30",
"@cite_33",
"@cite_7",
"@cite_9",
"@cite_1",
"@cite_32",
"@cite_3",
"@cite_24"
],
"mid": [
"2013549602",
"167755092",
"2146262966",
"2754550828",
"",
"2079218888",
"2101392651",
""
],
"abstract": [
"We describe the design, implementation and programming of a set of robots that, starting from an amorphous arrangement, can be assembled into arbitrary shapes and then commanded to self-disassemble in an organized manner to obtain a goal shape. We present custom hardware, distributed algorithms and experimental results from hundreds of trials which show the system successfully forming complex 3D shapes. Each of the 28 modules in the system is implemented as a 1.8-inch autonomous cube-shaped robot able to connect to and communicate with its immediate neighbors. Embedded microprocessors control each module's magnetic connection mechanisms and infrared communication interfaces. When assembled into a structure, the modules form a system that can be virtually sculpted using a computer interface and a distributed process. The group of modules collectively decides which elements are a part of the final shape and which are not using algorithms that minimize information transmission and storage. Finally, the modules not in the structure disengage their magnetic couplings and fall away under the influence of an external force: in this case, gravity.",
"Self-reconfigurable robots are constructed of robotic modules that can be connected in many different ways. These modules move in relationship to each other, which allows the robot as a whole to change shape. This shapeshifting makes it possible for the robots to adapt and optimize their shapes for different tasks. Thus, a self-reconfigurable robot can first assume the shape of a rolling track to cover distance quickly, then the shape of a snake to explore a narrow space, and finally the shape of a hexapod to carry an artifact back to the starting point. The field of self-reconfigurable robots has seen significant progress over the last twenty years, and this book collects and synthesizes existing research previously only available in widely scattered individual papers, offering an accessible guide to the latest information on self-reconfigurable robots for researchers and students interested in the field. Self-Reconfigurable Robots focuses on conveying the intuition behind the design and control of self-reconfigurable robots rather than technical details. Suggestions for further reading refer readers to the underlying sources of technical information. The book includes descriptions of existing robots and a brief history of the field; discussion of module design considerations, including module geometry, connector design, and computing and communication infrastructure; an in-depth presentation of strategies for controlling self-reconfiguration and locomotion; and exploration of future research challenges.",
"Many factors such as size, power, and weight constrain the design of modular snake robots. Meeting these constraints requires implementing a complex mechanical and electrical architecture. Here we present our solution, which involves the construction of sixteen aluminum modules and creation of the Super Servo, a modified hobby servo. To create the Super Servo, we have replaced the electronics in a hobby servo, adding such components as sensors to monitor current and temperature, a communications bus, and a programmable microcontroller. Any robust solution must also protect components from hazardous environments such as sand and brush. To resolve this problem we insert the robots into skins that cover their surface. Functions such as climbing the inside and outside of a pipe add a new dimension of interaction. Thus we attach a compliant, high-friction material to every module, which assists in tasks that require gripping. This combination of the mechanical and electrical architectures results in a robust and versatile robot.",
"The theoretical ability of modular robots to reconfigure in response to complex tasks in a priori unknown environments has frequently been cited as an advantage and remains a major motivator for work in the field. We present a modular robot system capable of autonomously completing high-level tasks by reactively reconfiguring to meet the needs of a perceived, a priori unknown environment. The system integrates perception, high-level planning, and modular hardware and is validated in three hardware demonstrations. Given a high-level task specification, a modular robot autonomously explores an unknown environment, decides when and how to reconfigure, and manipulates objects to complete its task. The system architecture balances distributed mechanical elements with centralized perception, planning, and control. By providing an example of how a modular robot system can be designed to leverage reactive reconfigurability in unknown environments, we have begun to lay the groundwork for modular self-reconfigurable robots to address tasks in the real world.",
"",
"In this paper, we describe a novel self-assembling, self-reconfiguring cubic robot that uses pivoting motions to change its intended geometry. Each individual module can pivot to move linearly on a substrate of stationary modules. The modules can use the same operation to perform convex and concave transitions to change planes. Each module can also move independently to traverse planar unstructured environments. The modules achieve these movements by quickly transferring angular momentum accumulated in a self-contained flywheel to the body of the robot. The system provides a simplified realization of the modular actions required by the sliding cube model using pivoting. We describe the principles, the unit-module hardware, and extensive experiments with a system of eight modules.",
"Modular, self-reconfigurable robots show the promise of great versatility, robustness and low cost. The paper presents examples and issues in realizing those promises. PolyBot is a modular, self-reconfigurable system that is being used to explore the hardware reality of a robot with a large number of interchangeable modules. PolyBot has demonstrated the versatility promise, by implementing locomotion over a variety of terrain and manipulation versatility with a variety of objects. PolyBot is the first robot to demonstrate sequentially two topologically distinct locomotion modes by self-reconfiguration. PolyBot has raised issues regarding software scalability and hardware dependency and as the design evolves the issues of low cost and robustness will be resolved while exploring the potential of modular, self-reconfigurable robots.",
""
]
} |
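The row above describes limbs that link into tree-structured collectives, with the policy architecture mirroring the resulting morphology. A minimal sketch of such a linking data structure is below; the class and method names are our own illustration, not code from the cited paper:

```python
class Limb:
    """A primitive agent: one limb that may magnetically link to a parent."""

    def __init__(self, name):
        self.name = name
        self.parent = None      # at most one parent motor to attach to
        self.children = []      # limbs attached to this limb's motor

    def link(self, other):
        """Attach `other` as a child, merging two assemblies into one agent."""
        assert other.parent is None, "a limb can have at most one parent"
        other.parent = self
        self.children.append(other)

    def root(self):
        """The root limb identifies the composite agent this limb belongs to."""
        node = self
        while node.parent is not None:
            node = node.parent
        return node

    def size(self):
        """Number of limbs in the subtree rooted here."""
        return 1 + sum(c.size() for c in self.children)

a, b, c = Limb("a"), Limb("b"), Limb("c")
a.link(b)   # b snaps onto a's motor -> one two-limb agent
b.link(c)   # c extends the chain -> all three limbs form one agent
```

Because each link records a parent-child edge, the assembled tree directly defines where a modular policy's messages would flow, which is the correspondence between morphology and policy architecture that the abstract describes.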
1902.05546 | 2913988491 | Contemporary sensorimotor learning approaches typically start with an existing complex agent (e.g., a robotic arm), which they learn to control. In contrast, this paper investigates a modular co-evolution strategy: a collection of primitive agents learns to dynamically self-assemble into composite bodies while also learning to coordinate their behavior to control these bodies. Each primitive agent consists of a limb with a motor attached at one end. Limbs may choose to link up to form collectives. When a limb initiates a link-up action and there is another limb nearby, the latter is magnetically connected to the 'parent' limb's motor. This forms a new single agent, which may further link with other agents. In this way, complex morphologies can emerge, controlled by a policy whose architecture is in explicit correspondence with the morphology. We evaluate the performance of these 'dynamic' and 'modular' agents in simulated environments. We demonstrate better generalization to test-time changes both in the environment, as well as in the agent morphology, compared to static and monolithic baselines. Project videos and code are available at this https URL | A variety of alternative approaches have also been proposed to optimize agent morphologies, including genetic algorithms that search over a generative grammar @cite_31 , as well as directly optimizing controllers by minimizing energy-based objectives @cite_6 @cite_0 . A learning-based alternative is to condition the policy on several hardware configurations to ensure robustness @cite_18 . One key difference between these approaches and our own is that we achieve morphogenesis via linking actions, which agents take during their lifetimes, whereas the past approaches treat morphology as an optimization target to be updated between generations or episodes.
Since the physical morphology also defines the connectivity of the policy net, our proposed algorithm can also be viewed as performing a kind of neural architecture search @cite_23 in physical agents. | {
"cite_N": [
"@cite_18",
"@cite_6",
"@cite_0",
"@cite_23",
"@cite_31"
],
"mid": [
"2889970038",
"",
"2099824239",
"2553303224",
"2117085697"
],
"abstract": [
"Deep reinforcement learning could be used to learn dexterous robotic policies but it is extremely challenging to transfer them to new robots with vastly different hardware properties. It is also prohibitively expensive to learn a new policy from scratch for each robot hardware due to the high sample complexity of modern state-of-the-art algorithms. We propose a novel approach in which we train a universal policy conditioned on a vector representation of robot hardware. We considered robots in simulation with varied dynamics, kinematic structure, kinematic lengths and degrees-of-freedom. First, we use the kinematic structure directly as the hardware encoding and show great zero-shot transfer to completely novel robots not seen during training. For robots with lower zero-shot success rate, we also demonstrate that fine-tuning the policy network is significantly more sample-efficient than training a model from scratch. In tasks where knowing the agent dynamics is crucial for success, we learn an embedding for robot hardware and show that policies conditioned on the encoding of hardware tend to generalize and transfer well.",
"",
"We present a fully automatic method for generating gaits and morphologies for legged animal locomotion. Given a specific animal's shape we can determine an efficient gait with which it can move. Similarly, we can also adapt the animal's morphology to be optimal for a specific locomotion task. We show that determining such gaits is possible without the need to specify a good initial motion, and without manually restricting the allowed gaits of each animal. Our approach is based on a hybrid optimization method which combines an efficient derivative-aware spacetime constraints optimization with a derivative-free approach able to find non-local solutions in high-dimensional discontinuous spaces. We demonstrate the effectiveness of this approach by synthesizing dynamic locomotions of bipeds, a quadruped, and an imaginary five-legged creature.",
"Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.",
"This paper describes a novel system for creating virtual creatures that move and behave in simulated three-dimensional physical worlds. The morphologies of creatures and the neural systems for controlling their muscle forces are both generated automatically using genetic algorithms. Different fitness evaluation functions are used to direct simulated evolutions towards specific behaviors such as swimming, walking, jumping, and following. A genetic language is presented that uses nodes and connections as its primitive elements to represent directed graphs, which are used to describe both the morphology and the neural circuitry of these creatures. This genetic language defines a hyperspace containing an indefinite number of possible creatures with behaviors, and when it is searched using optimization techniques, a variety of successful and interesting locomotion strategies emerge, some of which would be difficult to invent or build by design."
]
} |
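The row above contrasts lifetime linking with the older approach of treating morphology as an optimization target updated between generations (genetic algorithms, energy-based objectives). A toy (1+1) evolution-strategy sketch of that generational loop is below; the morphology encoding and fitness function are stand-ins of our own, not taken from any cited work:

```python
import random

def evolve(fitness, n_params=4, generations=200, sigma=0.1, seed=0):
    """Toy (1+1) evolution strategy over a morphology parameter vector.

    The vector (e.g., limb lengths) is mutated between generations and the
    mutant is kept only if it is at least as fit -- morphology as an
    optimization target, in contrast to lifetime linking actions.
    """
    rng = random.Random(seed)
    parent = [rng.uniform(-1, 1) for _ in range(n_params)]
    best = fitness(parent)
    for _ in range(generations):
        child = [p + rng.gauss(0, sigma) for p in parent]  # mutate morphology
        f = fitness(child)
        if f >= best:                                      # generational selection
            parent, best = child, f
    return parent, best

# Stand-in fitness: prefer morphology parameters near 0.5 (illustrative only;
# a real system would score a simulated rollout of the resulting body).
target = lambda m: -sum((p - 0.5) ** 2 for p in m)

morph, score = evolve(target)
```

Each candidate morphology here is evaluated and replaced only between episodes of the outer loop, which is exactly the between-generations update the related-work passage distinguishes from linking actions taken during an agent's lifetime.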