| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1708.02254 | 2744942019 | Questions play a prominent role in social interactions, performing rhetorical functions that go beyond that of simple informational exchange. The surface form of a question can signal the intention and background of the person asking it, as well as the nature of their relation with the interlocutor. While the informational nature of questions has been extensively examined in the context of question-answering applications, their rhetorical aspects have been largely understudied. In this work we introduce an unsupervised methodology for extracting surface motifs that recur in questions, and for grouping them according to their latent rhetorical role. By applying this framework to the setting of question sessions in the UK parliament, we show that the resulting typology encodes key aspects of the political discourse---such as the bifurcation in questioning behavior between government and opposition parties---and reveals new insights into the effects of a legislator's tenure and political career ambitions. | Question-answering Computationally, questions have received considerable attention in the context of question-answering (QA) systems---for a survey see ---with an emphasis on understanding their information need @cite_25 . Techniques have been developed to categorize questions based on the nature of these information needs in the context of the TREC QA challenge @cite_3 , and to identify questions asking for similar information @cite_39 @cite_35 @cite_17 ; questions have also been classified by topic @cite_24 and quality @cite_27 @cite_5 . In contrast, our work is not concerned with the information need central to QA applications, and instead focuses on the rhetorical aspect of questions. | {
"cite_N": [
"@cite_35",
"@cite_3",
"@cite_39",
"@cite_24",
"@cite_27",
"@cite_5",
"@cite_25",
"@cite_17"
],
"mid": [
"2604420197",
"",
"2151280665",
"1975809876",
"2400323868",
"2286400365",
"2103247590",
"2045411013"
],
"abstract": [
"Programming community-based question-answering (PCQA) websites such as Stack Overflow enable programmers to find working solutions to their questions. Despite detailed posting guidelines, duplicate questions that have been answered are frequently created. To tackle this problem, Stack Overflow provides a mechanism for reputable users to manually mark duplicate questions. This is a laborious effort, and leads to many duplicate questions remain undetected. Existing duplicate detection methodologies from traditional community based question-answering (CQA) websites are difficult to be adopted directly to PCQA, as PCQA posts often contain source code which is linguistically very different from natural languages. In this paper, we propose a methodology designed for the PCQA domain to detect duplicate questions. We model the detection as a classification problem over question pairs. To extract features for question pairs, our methodology leverages continuous word vectors from the deep learning literature, topic model features and phrases pairs that co-occur frequently in duplicate questions mined using machine translation systems. These features capture semantic similarities between questions and produce a strong performance for duplicate detection. Experiments on a range of real-world datasets demonstrate that our method works very well; in some cases over 30% improvement compared to state-of-the-art benchmarks. As a product of one of the proposed features, the association score feature, we have mined a set of associated phrases from duplicate questions on Stack Overflow and open the dataset to the public.",
"",
"Community-based Question Answering sites, such as Yahoo! Answers or Baidu Zhidao, allow users to get answers to complex, detailed and personal questions from other users. However, since answering a question depends on the ability and willingness of users to address the asker's needs, a significant fraction of the questions remain unanswered. We measured that in Yahoo! Answers, this fraction represents 15% of all incoming English questions. At the same time, we discovered that around 25% of questions in certain categories are recurrent, at least at the question-title level, over a period of one year. We attempt to reduce the rate of unanswered questions in Yahoo! Answers by reusing the large repository of past resolved questions, openly available on the site. More specifically, we estimate the probability whether certain new questions can be satisfactorily answered by a best answer from the past, using a statistical model specifically trained for this task. We leverage concepts and methods from query-performance prediction and natural language processing in order to extract a wide range of features for our model. The key challenge here is to achieve a level of quality similar to the one provided by the best human answerers. We evaluated our algorithm on offline data extracted from Yahoo! Answers, but more interestingly, also on online data by using three \"live\" answering robots that automatically provide past answers to new questions when a certain degree of confidence is reached. We report the success rate of these robots in three active Yahoo! Answers categories in terms of both accuracy, coverage and askers' satisfaction. This work presents a first attempt, to the best of our knowledge, of automatic question answering to questions of social nature, by reusing past answers of high quality.",
"Community Question Answering (CQA) has emerged as a popular type of service where users ask and answer questions and access historical question-answer pairs. CQA archives contain very large volumes of questions organized into a hierarchy of categories. As an essential function of CQA services, question retrieval in a CQA archive aims to retrieve historical question-answer pairs that are relevant to a query question. In this paper, we present a new approach to exploiting category information of questions for improving the performance of question retrieval, and we apply the approach to existing question retrieval models, including a state-of-the-art question retrieval model. Experiments conducted on real CQA data demonstrate that the proposed techniques are capable of outperforming a variety of baseline methods significantly.",
"",
"Asking the right question in the right way is an art (and a science). In a community question-answering setting, a good question is not just one that is found to be useful by other people: a question is good if it is also presented clearly and shows prior research. Using a community question-answering site that allows voting over the questions, we show that there is a notion of question quality that goes beyond mere popularity. We present techniques using latent topic models to automatically predict the quality of questions based on their content. Our best system achieves a prediction accuracy of 72%, beating out strong baselines by a significant amount. We also examine the effect of question quality on the dynamics of user behavior and the longevity of questions.",
"Question Answering (QA) is a specific type of information retrieval. Given a set of documents, a Question Answering system attempts to find the correct answer to a question posed in natural language. Question answering is multidisciplinary: it involves information technology, artificial intelligence, natural language processing, knowledge and database management and cognitive science. From the technological perspective, question answering uses natural or statistical language processing, information retrieval, and knowledge representation and reasoning as potential building blocks; it involves text classification, information extraction and summarization technologies. In general, a question answering system (QAS) has three components: question classification, information retrieval, and answer extraction, each of which plays an essential role. Question classification categorizes the question based upon the type of its entity; information retrieval identifies and extracts relevant candidate answers; and answer extraction, an emerging topic in QAS, ranks and validates candidate answers. Most Question Answering systems consist of three main modules: question processing, document processing and answer processing. Question processing plays an important part in QA systems; if this module does not work correctly, it will create problems for the other sections. Techniques aiming at discovering short and precise answers are often based on semantic classification. QA systems give the ability to answer questions posed in natural language by extracting, from a repository of documents, fragments of documents that contain material relevant to the answer.",
"A large number of question and answer pairs can be collected from question and answer boards and FAQ pages on the Web. This paper proposes an automatic method of finding the questions that have the same meaning. The method can detect semantically similar questions that have little word overlap because it calculates question-question similarities by using the corresponding answers as well as the questions. We develop two different similarity measures based on language modeling and compare them with the traditional similarity measures. Experimental results show that semantically similar questions pairs can be effectively found with the proposed similarity measures."
]
} |
1708.02254 | 2744942019 | Questions play a prominent role in social interactions, performing rhetorical functions that go beyond that of simple informational exchange. The surface form of a question can signal the intention and background of the person asking it, as well as the nature of their relation with the interlocutor. While the informational nature of questions has been extensively examined in the context of question-answering applications, their rhetorical aspects have been largely understudied. In this work we introduce an unsupervised methodology for extracting surface motifs that recur in questions, and for grouping them according to their latent rhetorical role. By applying this framework to the setting of question sessions in the UK parliament, we show that the resulting typology encodes key aspects of the political discourse---such as the bifurcation in questioning behavior between government and opposition parties---and reveals new insights into the effects of a legislator's tenure and political career ambitions. | Question types To facilitate retrieval of frequently asked questions, manually developed a typology of surface question forms (e.g., 'what'- and 'why'-questions) starting from Lehnert's conceptual question categories @cite_29. Question types were also hand-annotated for dialog-act labeling, distinguishing between yes-no, wh-, open-ended and rhetorical questions @cite_13. To complement this line of work, this paper introduces a completely unsupervised methodology to automatically build a domain-tailored question typology, bypassing the need for human annotation. | {
"cite_N": [
"@cite_29",
"@cite_13"
],
"mid": [
"2110099183",
"209307208"
],
"abstract": [
"One useful way to find the answer to a question is to search a library of previously-answered questions. This is the idea behind FAQFinder, a Web-based natural language question-answering system which uses Frequently Asked Questions (FAQ) files to answer users’ questions. FAQFinder tries to answer a user’s question by retrieving a similar FAQ question, if one exists, and its answer. FAQFinder uses several metrics to judge the similarity of user and FAQ questions. In this paper, we discuss a metric based on question type, which we recently added to the system. We discuss the taxonomy of question types used, and present experimental results which indicate that the incorporation of question type information has substantially improved FAQFinder’s performance.",
"Abstract : This labeling guide is adapted from work on the Switchboard recordings and the accompanying manual ( 1997). The Switchboard-DAMSL (SWBD-DAMSL) manual for labeling one-on-one phone conversations provided a useful starting point for the types of dialog acts (DAs) that arose in the ICSI meeting corpus. However, the tagset for labeling meetings presented here has been modified as necessary to better reflect the types of interaction we observed in multiparty face-to-face meetings. This guide consists of five major sections: Quick Reference Information, Segmentation, How to Label, Adjacency Pairs, and Tag Descriptions. The first section supplies definitions for terms used throughout this guide and contains the correspondence of the Meeting Recorder DA (MRDA) tagset, which is the tagset detailed within this guide, to the SWBD-DAMSL tagset. This section also contains the entire MRDA tagset organized into groups according to syntactic, semantic, pragmatic, and functional similarities of the utterances they mark. The section entitled Segmentation, as its name indicates, details the rules and guidelines governing what constitutes an utterance along with how to determine utterance boundaries. The third section, How to Label, provides instruction regarding label construction, the management of utterances requiring additional DAs or containing quotes, and the use of the annotation software. The section entitled Adjacency Pairs details how adjacency pairs are constructed and the rules governing their usage. The section entitled Tag Descriptions provides explanations of each tag within the MRDA tagset. Two appendices are also found within this guide. The first provides a labeled portion of a meeting and the second contains information regarding tags used for a select number of meetings."
]
} |
1708.02254 | 2744942019 | Questions play a prominent role in social interactions, performing rhetorical functions that go beyond that of simple informational exchange. The surface form of a question can signal the intention and background of the person asking it, as well as the nature of their relation with the interlocutor. While the informational nature of questions has been extensively examined in the context of question-answering applications, their rhetorical aspects have been largely understudied. In this work we introduce an unsupervised methodology for extracting surface motifs that recur in questions, and for grouping them according to their latent rhetorical role. By applying this framework to the setting of question sessions in the UK parliament, we show that the resulting typology encodes key aspects of the political discourse---such as the bifurcation in questioning behavior between government and opposition parties---and reveals new insights into the effects of a legislator's tenure and political career ambitions. | Pragmatic dimensions One important pragmatic dimension of questions that has been previously studied computationally is their level of politeness @cite_8 @cite_0; in the specific context of making requests, politeness was shown to correlate with the social status of the asker. studied another rhetorical aspect by examining linguistic attributes distinguishing serviceable requests addressed to police on social media from general conversation. Previous research has also been directed at identifying rhetorical questions @cite_26 and understanding the motivations of their 'askers' @cite_7. Using the relationship between questions and answers, our work examines the rhetorical and social aspects of questions without predefining a pragmatic dimension and without relying on labeled data. We also complement these efforts by analyzing a broader range of situations in which questions may be posed without an information-seeking intent. | {
"cite_N": [
"@cite_0",
"@cite_26",
"@cite_7",
"@cite_8"
],
"mid": [
"2530299326",
"",
"2571944438",
"2157163421"
],
"abstract": [
"We present an interpretable neural network approach to predicting and understanding politeness in natural language requests. Our models are based on simple convolutional neural networks directly on raw text, avoiding any manual identification of complex sentiment or syntactic features, while performing better than such feature-based models from previous work. More importantly, we use the challenging task of politeness prediction as a testbed to next present a much-needed understanding of what these successful networks are actually learning. For this, we present several network visualizations based on activation clusters, first derivative saliency, and embedding space transformations, helping us automatically identify several subtle linguistics markers of politeness theories. Further, this analysis reveals multiple novel, high-scoring politeness strategies which, when added back as new features, reduce the accuracy gap between the original featurized system and the neural model, thus providing a clear quantitative interpretation of the success of these neural networks.",
"",
"Social media provides a platform for seeking information from a large user base. Information seeking in social media, however, occurs simultaneously with users expressing their viewpoints by making statements. Rhetorical questions have the form of a question but serve the function of a statement and might mislead platforms assisting information seeking in social media. It becomes difficult to identify rhetorical questions as they are not syntactically different from other questions. In this paper, we develop a framework to identify rhetorical questions by modeling the motivations of the users to post them. We focus on one motivation of the users drawing from linguistic theories, to implicitly convey a message. We develop a framework from this motivation to identify rhetorical questions in social media and evaluate the framework using questions posted on Twitter. This is the first framework to model the motivations for posting rhetorical questions to identify them on social media.",
"We propose a computational framework for identifying linguistic aspects of politeness. Our starting point is a new corpus of requests annotated for politeness, which we use to evaluate aspects of politeness theory and to uncover new interactions between politeness markers and context. These findings guide our construction of a classifier with domain-independent lexical and syntactic features operationalizing key components of politeness theory, such as indirection, deference, impersonalization and modality. Our classifier achieves close to human performance and is effective across domains. We use our framework to study the relationship between politeness and social power, showing that polite Wikipedia editors are more likely to achieve high status through elections, but, once elevated, they become less polite. We see a similar negative correlation between politeness and power on Stack Exchange, where users at the top of the reputation scale are less polite than those at the bottom. Finally, we apply our classifier to a preliminary analysis of politeness variation by gender and community."
]
} |
1708.02190 | 2744921630 | Intrinsically motivated spontaneous exploration is a key enabler of autonomous lifelong learning in human children. It allows them to discover and acquire large repertoires of skills through self-generation, self-selection, self-ordering and self-experimentation of learning goals. We present the unsupervised multi-goal reinforcement learning formal framework as well as an algorithmic approach called intrinsically motivated goal exploration processes (IMGEP) to enable similar properties of autonomous learning in machines. The IMGEP algorithmic architecture relies on several principles: 1) self-generation of goals as parameterized reinforcement learning problems; 2) selection of goals based on intrinsic rewards; 3) exploration with parameterized time-bounded policies and fast incremental goal-parameterized policy search; 4) systematic reuse of information acquired when targeting a goal for improving other goals. We present a particularly efficient form of IMGEP that uses a modular representation of goal spaces as well as intrinsic rewards based on learning progress. We show how IMGEPs automatically generate a learning curriculum within an experimental setup where a real humanoid robot can explore multiple spaces of goals with several hundred continuous dimensions. While no particular target goal is provided to the system beforehand, this curriculum allows the discovery of skills of increasing complexity, that act as stepping stone for learning more complex skills (like nested tool use). We show that learning several spaces of diverse problems can be more efficient for learning complex skills than only trying to directly learn these complex skills. We illustrate the computational efficiency of IMGEPs as these robotic experiments use a simple memory-based low-level policy representations and search algorithm, enabling the whole system to learn online and incrementally on a Raspberry Pi 3. 
| Several lines of results have shown that intrinsically motivated exploration and learning mechanisms are particularly useful in the context of learning to solve reinforcement learning problems with sparse or deceptive rewards. For example, several state-of-the-art performances of Deep Reinforcement Learning algorithms, such as letting a machine learn how to solve complex video games, have been achieved by complementing the extrinsic rewards (number of points won) with an intrinsic reward pushing the learner to explore for improving its predictions of the world dynamics @cite_41 @cite_44 . An even more radical approach for solving problems with rare or deceptive extrinsic rewards has been to completely ignore extrinsic rewards, and let the machine explore the environment for the sole purpose of learning to predict the consequences of its actions @cite_16 @cite_13 or of learning to control self-generated goals @cite_40 @cite_2 , or to generate novel outcomes @cite_21 . This was shown for example to allow robots to learn tool use @cite_43 or to learn how to play some video games @cite_31 without ever observing the extrinsic reward. | {
"cite_N": [
"@cite_41",
"@cite_21",
"@cite_44",
"@cite_43",
"@cite_40",
"@cite_2",
"@cite_31",
"@cite_16",
"@cite_13"
],
"mid": [
"2419612459",
"2151083897",
"",
"",
"2004303440",
"2000514530",
"2614839826",
"",
"2101524054"
],
"abstract": [
"We consider an agent's uncertainty about its environment and the problem of generalizing this uncertainty across observations. Specifically, we focus on the problem of exploration in non-tabular reinforcement learning. Drawing inspiration from the intrinsic motivation literature, we use density models to measure uncertainty, and propose a novel algorithm for deriving a pseudo-count from an arbitrary density model. This technique enables us to generalize count-based exploration algorithms to the non-tabular case. We apply our ideas to Atari 2600 games, providing sensible pseudo-counts from raw pixels. We transform these pseudo-counts into intrinsic rewards and obtain significantly improved exploration in a number of hard games, including the infamously difficult Montezuma's Revenge.",
"In evolutionary computation, the fitness function normally measures progress toward an objective in the search space, effectively acting as an objective function. Through deception, such objective functions may actually prevent the objective from being reached. While methods exist to mitigate deception, they leave the underlying pathology untreated: Objective functions themselves may actively misdirect search toward dead ends. This paper proposes an approach to circumventing deception that also yields a new perspective on open-ended evolution. Instead of either explicitly seeking an objective or modeling natural evolution to capture open-endedness, the idea is to simply search for behavioral novelty. Even in an objective-based problem, such novelty search ignores the objective. Because many points in the search space collapse to a single behavior, the search for novelty is often feasible. Furthermore, because there are only so many simple behaviors, the search for novelty leads to increasing complexity. By decoupling open-ended search from artificial life worlds, the search for novelty is applicable to real world problems. Counterintuitively, in the maze navigation and biped walking tasks in this paper, novelty search significantly outperforms objective-based search, suggesting the strange conclusion that some problems are best solved by methods that ignore the objective. The main lesson is the inherent limitation of the objective-based paradigm and the unexploited opportunity to guide search through other means.",
"",
"",
"We introduce the Self-Adaptive Goal Generation Robust Intelligent Adaptive Curiosity (SAGG-RIAC) architecture as an intrinsically motivated goal exploration mechanism which allows active learning of inverse models in high-dimensional redundant robots. This allows a robot to efficiently and actively learn distributions of parameterized motor skills policies that solve a corresponding distribution of parameterized tasks goals. The architecture makes the robot sample actively novel parameterized tasks in the task space, based on a measure of competence progress, each of which triggers low-level goal-directed learning of the motor policy parameters that allow to solve it. For both learning and generalization, the system leverages regression techniques which allow to infer the motor policy parameters corresponding to a given novel parameterized task, and based on the previously learnt correspondences between policy and task parameters. We present experiments with high-dimensional continuous sensorimotor spaces in three different robotic setups: (1) learning the inverse kinematics in a highly-redundant robotic arm, (2) learning omnidirectional locomotion with motor primitives in a quadruped robot, and (3) an arm learning to control a fishing rod with a flexible wire. We show that (1) exploration in the task space can be a lot faster than exploration in the actuator space for learning inverse models in redundant robots; (2) selecting goals maximizing competence progress creates developmental trajectories driving the robot to progressively focus on tasks of increasing complexity and is statistically significantly more efficient than selecting tasks randomly, as well as more efficient than different standard active motor babbling methods; (3) this architecture allows the robot to actively discover which parts of its task space it can learn to reach and which part it cannot.",
"Intrinsic motivation, the causal mechanism for spontaneous exploration and curiosity, is a central concept in developmental psychology. It has been argued to be a crucial mechanism for open-ended cognitive development in humans, and as such has gathered a growing interest from developmental roboticists in the recent years. The goal of this paper is threefold. First, it provides a synthesis of the different approaches of intrinsic motivation in psychology. Second, by interpreting these approaches in a computational reinforcement learning framework, we argue that they are not operational and even sometimes inconsistent. Third, we set the ground for a systematic operational study of intrinsic motivation by presenting a formal typology of possible computational approaches. This typology is partly based on existing computational models, but also presents new ways of conceptualizing intrinsic motivation. We argue that this kind of computational typology might be useful for opening new avenues for research both in psychology and developmental robotics.",
"In many real-world scenarios, rewards extrinsic to the agent are extremely sparse, or absent altogether. In such cases, curiosity can serve as an intrinsic reward signal to enable the agent to explore its environment and learn skills that might be useful later in its life. We formulate curiosity as the error in an agent's ability to predict the consequence of its own actions in a visual feature space learned by a self-supervised inverse dynamics model. Our formulation scales to high-dimensional continuous state spaces like images, bypasses the difficulties of directly predicting pixels, and, critically, ignores the aspects of the environment that cannot affect the agent. The proposed approach is evaluated in two environments: VizDoom and Super Mario Bros. Three broad settings are investigated: 1) sparse extrinsic reward, where curiosity allows for far fewer interactions with the environment to reach the goal; 2) exploration with no extrinsic reward, where curiosity pushes the agent to explore more efficiently; and 3) generalization to unseen scenarios (e.g. new levels of the same game) where the knowledge gained from earlier experience helps the agent explore new places much faster than starting from scratch. Demo video and code available at this https URL",
"",
"Exploratory activities seem to be intrinsically rewarding for children and crucial for their cognitive development. Can a machine be endowed with such an intrinsic motivation system? This is the question we study in this paper, presenting a number of computational systems that try to capture this drive towards novel or curious situations. After discussing related research coming from developmental psychology, neuroscience, developmental robotics, and active learning, this paper presents the mechanism of Intelligent Adaptive Curiosity, an intrinsic motivation system which pushes a robot towards situations in which it maximizes its learning progress. This drive makes the robot focus on situations which are neither too predictable nor too unpredictable, thus permitting autonomous mental development. The complexity of the robot's activities autonomously increases and complex developmental sequences self-organize without being constructed in a supervised manner. Two experiments are presented illustrating the stage-like organization emerging with this mechanism. In one of them, a physical robot is placed on a baby play mat with objects that it can learn to manipulate. Experimental results show that the robot first spends time in situations which are easy to learn, then shifts its attention progressively to situations of increasing difficulty, avoiding situations in which nothing can be learned. Finally, these various results are discussed in relation to more complex forms of behavioral organization and data coming from developmental psychology"
]
} |
1708.02190 | 2744921630 | Intrinsically motivated spontaneous exploration is a key enabler of autonomous lifelong learning in human children. It allows them to discover and acquire large repertoires of skills through self-generation, self-selection, self-ordering and self-experimentation of learning goals. We present the unsupervised multi-goal reinforcement learning formal framework as well as an algorithmic approach called intrinsically motivated goal exploration processes (IMGEP) to enable similar properties of autonomous learning in machines. The IMGEP algorithmic architecture relies on several principles: 1) self-generation of goals as parameterized reinforcement learning problems; 2) selection of goals based on intrinsic rewards; 3) exploration with parameterized time-bounded policies and fast incremental goal-parameterized policy search; 4) systematic reuse of information acquired when targeting a goal for improving other goals. We present a particularly efficient form of IMGEP that uses a modular representation of goal spaces as well as intrinsic rewards based on learning progress. We show how IMGEPs automatically generate a learning curriculum within an experimental setup where a real humanoid robot can explore multiple spaces of goals with several hundred continuous dimensions. While no particular target goal is provided to the system beforehand, this curriculum allows the discovery of skills of increasing complexity, that act as stepping stone for learning more complex skills (like nested tool use). We show that learning several spaces of diverse problems can be more efficient for learning complex skills than only trying to directly learn these complex skills. We illustrate the computational efficiency of IMGEPs as these robotic experiments use a simple memory-based low-level policy representations and search algorithm, enabling the whole system to learn online and incrementally on a Raspberry Pi 3. 
| Some approaches to intrinsically motivated exploration have used intrinsic rewards to value visited actions and states through measuring their novelty or the improvement of predictions that they provide, e.g. @cite_5 @cite_0 @cite_16 @cite_13 or more recently @cite_41 @cite_7 @cite_31 . However, organizing intrinsically motivated exploration at the higher level of goals (conceptualized as parameterized RL problems), by sampling goals according to measures such as competence progress @cite_2 , has been proposed and shown to be more efficient in contexts with high-dimensional continuous action spaces and strong time constraints for interaction with the environment @cite_40 . | {
"cite_N": [
"@cite_7",
"@cite_41",
"@cite_0",
"@cite_40",
"@cite_2",
"@cite_5",
"@cite_31",
"@cite_16",
"@cite_13"
],
"mid": [
"2417786368",
"2419612459",
"",
"2004303440",
"2000514530",
"1491843047",
"2614839826",
"",
"2101524054"
],
"abstract": [
"Scalable and effective exploration remains a key challenge in reinforcement learning (RL). While there are methods with optimality guarantees in the setting of discrete state and action spaces, these methods cannot be applied in high-dimensional deep RL scenarios. As such, most contemporary RL relies on simple heuristics such as epsilon-greedy exploration or adding Gaussian noise to the controls. This paper introduces Variational Information Maximizing Exploration (VIME), an exploration strategy based on maximization of information gain about the agent's belief of environment dynamics. We propose a practical implementation, using variational inference in Bayesian neural networks which efficiently handles continuous state and action spaces. VIME modifies the MDP reward function, and can be applied with several different underlying RL algorithms. We demonstrate that VIME achieves significantly better performance compared to heuristic exploration methods across a variety of continuous control tasks and algorithms, including tasks with very sparse rewards.",
"We consider an agent's uncertainty about its environment and the problem of generalizing this uncertainty across observations. Specifically, we focus on the problem of exploration in non-tabular reinforcement learning. Drawing inspiration from the intrinsic motivation literature, we use density models to measure uncertainty, and propose a novel algorithm for deriving a pseudo-count from an arbitrary density model. This technique enables us to generalize count-based exploration algorithms to the non-tabular case. We apply our ideas to Atari 2600 games, providing sensible pseudo-counts from raw pixels. We transform these pseudo-counts into intrinsic rewards and obtain significantly improved exploration in a number of hard games, including the infamously difficult Montezuma's Revenge.",
"",
"We introduce the Self-Adaptive Goal Generation Robust Intelligent Adaptive Curiosity (SAGG-RIAC) architecture as an intrinsically motivated goal exploration mechanism which allows active learning of inverse models in high-dimensional redundant robots. This allows a robot to efficiently and actively learn distributions of parameterized motor skills policies that solve a corresponding distribution of parameterized tasks goals. The architecture makes the robot sample actively novel parameterized tasks in the task space, based on a measure of competence progress, each of which triggers low-level goal-directed learning of the motor policy parameters that allow to solve it. For both learning and generalization, the system leverages regression techniques which allow to infer the motor policy parameters corresponding to a given novel parameterized task, and based on the previously learnt correspondences between policy and task parameters. We present experiments with high-dimensional continuous sensorimotor spaces in three different robotic setups: (1) learning the inverse kinematics in a highly-redundant robotic arm, (2) learning omnidirectional locomotion with motor primitives in a quadruped robot, and (3) an arm learning to control a fishing rod with a flexible wire. We show that (1) exploration in the task space can be a lot faster than exploration in the actuator space for learning inverse models in redundant robots; (2) selecting goals maximizing competence progress creates developmental trajectories driving the robot to progressively focus on tasks of increasing complexity and is statistically significantly more efficient than selecting tasks randomly, as well as more efficient than different standard active motor babbling methods; (3) this architecture allows the robot to actively discover which parts of its task space it can learn to reach and which part it cannot.",
"Intrinsic motivation, the causal mechanism for spontaneous exploration and curiosity, is a central concept in developmental psychology. It has been argued to be a crucial mechanism for open-ended cognitive development in humans, and as such has gathered a growing interest from developmental roboticists in the recent years. The goal of this paper is threefold. First, it provides a synthesis of the different approaches of intrinsic motivation in psychology. Second, by interpreting these approaches in a computational reinforcement learning framework, we argue that they are not operational and even sometimes inconsistent. Third, we set the ground for a systematic operational study of intrinsic motivation by presenting a formal typology of possible computational approaches. This typology is partly based on existing computational models, but also presents new ways of conceptualizing intrinsic motivation. We argue that this kind of computational typology might be useful for opening new avenues for research both in psychology and developmental robotics.",
"This paper extends previous work with Dyna, a class of architectures for intelligent systems based on approximating dynamic programming methods. Dyna architectures integrate trial-and-error (reinforcement) learning and execution-time planning into a single process operating alternately on the world and on a learned model of the world. In this paper, I present and show results for two Dyna architectures. The Dyna-PI architecture is based on dynamic programming's policy iteration method and can be related to existing AI ideas such as evaluation functions and universal plans (reactive systems). Using a navigation task, results are shown for a simple Dyna-PI system that simultaneously learns by trial and error, learns a world model, and plans optimal routes using the evolving world model. The Dyna-Q architecture is based on Watkins's Q-learning, a new kind of reinforcement learning. Dyna-Q uses a less familiar set of data structures than does Dyna-PI, but is arguably simpler to implement and use. We show that Dyna-Q architectures are easy to adapt for use in changing environments.",
"In many real-world scenarios, rewards extrinsic to the agent are extremely sparse, or absent altogether. In such cases, curiosity can serve as an intrinsic reward signal to enable the agent to explore its environment and learn skills that might be useful later in its life. We formulate curiosity as the error in an agent's ability to predict the consequence of its own actions in a visual feature space learned by a self-supervised inverse dynamics model. Our formulation scales to high-dimensional continuous state spaces like images, bypasses the difficulties of directly predicting pixels, and, critically, ignores the aspects of the environment that cannot affect the agent. The proposed approach is evaluated in two environments: VizDoom and Super Mario Bros. Three broad settings are investigated: 1) sparse extrinsic reward, where curiosity allows for far fewer interactions with the environment to reach the goal; 2) exploration with no extrinsic reward, where curiosity pushes the agent to explore more efficiently; and 3) generalization to unseen scenarios (e.g. new levels of the same game) where the knowledge gained from earlier experience helps the agent explore new places much faster than starting from scratch. Demo video and code available at this https URL",
"",
"Exploratory activities seem to be intrinsically rewarding for children and crucial for their cognitive development. Can a machine be endowed with such an intrinsic motivation system? This is the question we study in this paper, presenting a number of computational systems that try to capture this drive towards novel or curious situations. After discussing related research coming from developmental psychology, neuroscience, developmental robotics, and active learning, this paper presents the mechanism of Intelligent Adaptive Curiosity, an intrinsic motivation system which pushes a robot towards situations in which it maximizes its learning progress. This drive makes the robot focus on situations which are neither too predictable nor too unpredictable, thus permitting autonomous mental development. The complexity of the robot's activities autonomously increases and complex developmental sequences self-organize without being constructed in a supervised manner. Two experiments are presented illustrating the stage-like organization emerging with this mechanism. In one of them, a physical robot is placed on a baby play mat with objects that it can learn to manipulate. Experimental results show that the robot first spends time in situations which are easy to learn, then shifts its attention progressively to situations of increasing difficulty, avoiding situations in which nothing can be learned. Finally, these various results are discussed in relation to more complex forms of behavioral organization and data coming from developmental psychology"
]
} |
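The competence-progress goal sampling discussed in the related-work passage above can be sketched in a few lines. This is an illustrative simplification, not the cited IMGEP implementation: the class name, window length, and the epsilon-greedy floor are my assumptions.

```python
import random
from collections import deque

class LPGoalSampler:
    """Sample goal modules in proportion to absolute learning progress,
    i.e. the change in recent competence (a simplified IMGEP-style sketch)."""

    def __init__(self, modules, window=10, eps=0.1):
        # Keep the last 2*window competence values per goal module.
        self.history = {m: deque(maxlen=2 * window) for m in modules}
        self.window = window
        self.eps = eps  # epsilon-greedy floor so stalled modules are revisited

    def record(self, module, competence):
        self.history[module].append(competence)

    def progress(self, module):
        """Absolute difference between old and recent mean competence."""
        h = list(self.history[module])
        if len(h) < 2 * self.window:
            return 1.0  # optimistic value for rarely tried modules
        old = sum(h[: self.window]) / self.window
        new = sum(h[self.window:]) / self.window
        return abs(new - old)

    def sample(self):
        mods = list(self.history)
        if random.random() < self.eps:
            return random.choice(mods)
        weights = [self.progress(m) for m in mods]
        total = sum(weights)
        if total == 0:
            return random.choice(mods)
        r = random.uniform(0, total)
        acc = 0.0
        for m, w in zip(mods, weights):
            acc += w
            if r <= acc:
                return m
        return mods[-1]
```

A module whose competence is rising (or collapsing) gets sampled more often; modules that are already mastered or hopeless show near-zero progress and are visited only through the epsilon floor.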
1708.02190 | 2744921630 | Intrinsically motivated spontaneous exploration is a key enabler of autonomous lifelong learning in human children. It allows them to discover and acquire large repertoires of skills through self-generation, self-selection, self-ordering and self-experimentation of learning goals. We present the unsupervised multi-goal reinforcement learning formal framework as well as an algorithmic approach called intrinsically motivated goal exploration processes (IMGEP) to enable similar properties of autonomous learning in machines. The IMGEP algorithmic architecture relies on several principles: 1) self-generation of goals as parameterized reinforcement learning problems; 2) selection of goals based on intrinsic rewards; 3) exploration with parameterized time-bounded policies and fast incremental goal-parameterized policy search; 4) systematic reuse of information acquired when targeting a goal for improving other goals. We present a particularly efficient form of IMGEP that uses a modular representation of goal spaces as well as intrinsic rewards based on learning progress. We show how IMGEPs automatically generate a learning curriculum within an experimental setup where a real humanoid robot can explore multiple spaces of goals with several hundred continuous dimensions. While no particular target goal is provided to the system beforehand, this curriculum allows the discovery of skills of increasing complexity, that act as stepping stone for learning more complex skills (like nested tool use). We show that learning several spaces of diverse problems can be more efficient for learning complex skills than only trying to directly learn these complex skills. We illustrate the computational efficiency of IMGEPs as these robotic experiments use a simple memory-based low-level policy representations and search algorithm, enabling the whole system to learn online and incrementally on a Raspberry Pi 3. 
| @cite_3 , @cite_10 and @cite_26 proposed methods that can be framed as IMGEPs; however, they considered notions of goals restricted to the reaching of states or direct sensory measurements, did not consider goal-parameterized rewards that can be computed for any goal, used different intrinsic rewards, and did not evaluate these algorithms in robotic setups. The notion of auxiliary tasks is also related to IMGEPs in the sense that it allows a learner to acquire tasks with rare rewards by adding several other objectives that increase the density of information obtained from the environment @cite_42 . Another line of related work @cite_4 proposed a theoretical framework for the automatic generation of problem sequences for machine learners; however, it focused on theoretical considerations and experiments on abstract problems. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_42",
"@cite_3",
"@cite_10"
],
"mid": [
"2963262099",
"2046578104",
"2950872548",
"2556477470",
"2952578114"
],
"abstract": [
"Learning goal-directed behavior in environments with sparse feedback is a major challenge for reinforcement learning algorithms. One of the key difficulties is insufficient exploration, resulting in an agent being unable to learn robust policies. Intrinsically motivated agents can explore new behavior for their own sake rather than to directly solve external goals. Such intrinsic behaviors could eventually help the agent solve tasks posed by the environment. We present hierarchical-DQN (h-DQN), a framework to integrate hierarchical action-value functions, operating at different temporal scales, with goal-driven intrinsically motivated deep reinforcement learning. A top-level q-value function learns a policy over intrinsic goals, while a lower-level function learns a policy over atomic actions to satisfy the given goals. h-DQN allows for flexible goal specifications, such as functions over entities and relations. This provides an efficient space for exploration in complicated environments. We demonstrate the strength of our approach on two problems with very sparse and delayed feedback: (1) a complex discrete stochastic decision process with stochastic transitions, and (2) the classic ATARI game -'Montezuma's Revenge'.",
"Like a scientist or a playing child, PowerPlay (Schmidhuber, 2011) not only learns new skills to solve given problems, but also invents new interesting problems by itself. By design, it continually comes up with the fastest to find, initially novel, but eventually solvable tasks. It also continually simplifies or compresses or speeds up solutions to previous tasks. Here we describe first experiments with PowerPlay. A self-delimiting recurrent neural network SLIM RNN (Schmidhuber, 2012) is used as a general computational problem solving architecture. Its connection weights can encode arbitrary, self-delimiting, halting or non-halting programs affecting both environment (through effectors) and internal states encoding abstractions of event sequences. Our PowerPlay-driven SLIM RNN learns to become an increasingly general solver of self-invented problems, continually adding new problem solving procedures to its growing skill repertoire. Extending a recent conference paper (Srivastava, Steunebrink, Stollenga, & Schmidhuber, 2012), we identify interesting, emerging, developmental stages of our open-ended system. We also show how it automatically self-modularizes, frequently re-using code for previously invented skills, always trying to invent novel tasks that can be quickly validated because they do not require too many weight changes affecting too many previous tasks.",
"Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880 expert human performance, and a challenging suite of first-person, three-dimensional tasks leading to a mean speedup in learning of 10 @math and averaging 87 expert human performance on Labyrinth.",
"In this paper we introduce a new unsupervised reinforcement learning method for discovering the set of intrinsic options available to an agent. This set is learned by maximizing the number of different states an agent can reliably reach, as measured by the mutual information between the set of options and option termination states. To this end, we instantiate two policy gradient based algorithms, one that creates an explicit embedding space of options and one that represents options implicitly. The algorithms also provide an explicit measure of empowerment in a given state that can be used by an empowerment maximizing agent. The algorithm scales well with function approximation and we demonstrate the applicability of the algorithm on a range of tasks.",
"We present an approach to sensorimotor control in immersive environments. Our approach utilizes a high-dimensional sensory stream and a lower-dimensional measurement stream. The cotemporal structure of these streams provides a rich supervisory signal, which enables training a sensorimotor control model by interacting with the environment. The model is trained using supervised learning techniques, but without extraneous supervision. It learns to act based on raw sensory input from a complex three-dimensional environment. The presented formulation enables learning without a fixed goal at training time, and pursuing dynamically changing goals at test time. We conduct extensive experiments in three-dimensional simulations based on the classical first-person game Doom. The results demonstrate that the presented approach outperforms sophisticated prior formulations, particularly on challenging tasks. The results also show that trained models successfully generalize across environments and goals. A model trained using the presented approach won the Full Deathmatch track of the Visual Doom AI Competition, which was held in previously unseen environments."
]
} |
1708.02190 | 2744921630 | Intrinsically motivated spontaneous exploration is a key enabler of autonomous lifelong learning in human children. It allows them to discover and acquire large repertoires of skills through self-generation, self-selection, self-ordering and self-experimentation of learning goals. We present the unsupervised multi-goal reinforcement learning formal framework as well as an algorithmic approach called intrinsically motivated goal exploration processes (IMGEP) to enable similar properties of autonomous learning in machines. The IMGEP algorithmic architecture relies on several principles: 1) self-generation of goals as parameterized reinforcement learning problems; 2) selection of goals based on intrinsic rewards; 3) exploration with parameterized time-bounded policies and fast incremental goal-parameterized policy search; 4) systematic reuse of information acquired when targeting a goal for improving other goals. We present a particularly efficient form of IMGEP that uses a modular representation of goal spaces as well as intrinsic rewards based on learning progress. We show how IMGEPs automatically generate a learning curriculum within an experimental setup where a real humanoid robot can explore multiple spaces of goals with several hundred continuous dimensions. While no particular target goal is provided to the system beforehand, this curriculum allows the discovery of skills of increasing complexity, that act as stepping stone for learning more complex skills (like nested tool use). We show that learning several spaces of diverse problems can be more efficient for learning complex skills than only trying to directly learn these complex skills. We illustrate the computational efficiency of IMGEPs as these robotic experiments use a simple memory-based low-level policy representations and search algorithm, enabling the whole system to learn online and incrementally on a Raspberry Pi 3. 
| In machine learning, the concept of curriculum learning @cite_20 has most often been used in the context of training neural networks to solve prediction problems. Many approaches have used hand-designed learning curricula @cite_34 , but recently it was shown how learning progress can be used to automate intrinsically motivated curriculum learning in LSTMs @cite_27 . However, these approaches have not considered curriculum learning over sets of reinforcement learning problems, which is central to the IMGEP framework, and have assumed the pre-existence of a database of learning exemplars to sample from. In recent related work, @cite_29 studied how intrinsic rewards based on learning progress could also be used to automatically generate a learning curriculum with discrete sets of reinforcement learning problems, but did not consider high-dimensional modular parameterized RL problems. The concept of "curriculum learning" has also been called "developmental trajectories" in prior work on computational modelling of intrinsically motivated exploration @cite_13 , and in particular on the topic of intrinsically motivated goal exploration @cite_40 @cite_18 . | {
"cite_N": [
"@cite_18",
"@cite_29",
"@cite_27",
"@cite_40",
"@cite_34",
"@cite_13",
"@cite_20"
],
"mid": [
"",
"2731828881",
"2605801332",
"2004303440",
"1581407678",
"2101524054",
""
],
"abstract": [
"",
"We propose Teacher-Student Curriculum Learning (TSCL), a framework for automatic curriculum learning, where the Student tries to learn a complex task and the Teacher automatically chooses subtasks from a given set for the Student to train on. We describe a family of Teacher algorithms that rely on the intuition that the Student should practice more those tasks on which it makes the fastest progress, i.e. where the slope of the learning curve is highest. In addition, the Teacher algorithms address the problem of forgetting by also choosing tasks where the Student's performance is getting worse. We demonstrate that TSCL matches or surpasses the results of carefully hand-crafted curricula in two tasks: addition of decimal numbers with LSTM and navigation in Minecraft. Using our automatically generated curriculum enabled to solve a Minecraft maze that could not be solved at all when training directly on solving the maze, and the learning was an order of magnitude faster than uniform sampling of subtasks.",
"We introduce a method for automatically selecting the path, or syllabus, that a neural network follows through a curriculum so as to maximise learning efficiency. A measure of the amount that the network learns from each data sample is provided as a reward signal to a nonstationary multi-armed bandit algorithm, which then determines a stochastic syllabus. We consider a range of signals derived from two distinct indicators of learning progress: rate of increase in prediction accuracy, and rate of increase in network complexity. Experimental results for LSTM networks on three curricula demonstrate that our approach can significantly accelerate learning, in some cases halving the time required to attain a satisfactory performance level.",
"We introduce the Self-Adaptive Goal Generation Robust Intelligent Adaptive Curiosity (SAGG-RIAC) architecture as an intrinsically motivated goal exploration mechanism which allows active learning of inverse models in high-dimensional redundant robots. This allows a robot to efficiently and actively learn distributions of parameterized motor skills policies that solve a corresponding distribution of parameterized tasks goals. The architecture makes the robot sample actively novel parameterized tasks in the task space, based on a measure of competence progress, each of which triggers low-level goal-directed learning of the motor policy parameters that allow to solve it. For both learning and generalization, the system leverages regression techniques which allow to infer the motor policy parameters corresponding to a given novel parameterized task, and based on the previously learnt correspondences between policy and task parameters. We present experiments with high-dimensional continuous sensorimotor spaces in three different robotic setups: (1) learning the inverse kinematics in a highly-redundant robotic arm, (2) learning omnidirectional locomotion with motor primitives in a quadruped robot, and (3) an arm learning to control a fishing rod with a flexible wire. We show that (1) exploration in the task space can be a lot faster than exploration in the actuator space for learning inverse models in redundant robots; (2) selecting goals maximizing competence progress creates developmental trajectories driving the robot to progressively focus on tasks of increasing complexity and is statistically significantly more efficient than selecting tasks randomly, as well as more efficient than different standard active motor babbling methods; (3) this architecture allows the robot to actively discover which parts of its task space it can learn to reach and which part it cannot.",
"Recurrent Neural Networks (RNNs) with Long Short-Term Memory units (LSTM) are widely used because they are expressive and are easy to train. Our interest lies in empirically evaluating the expressiveness and the learnability of LSTMs in the sequence-to-sequence regime by training them to evaluate short computer programs, a domain that has traditionally been seen as too complex for neural networks. We consider a simple class of programs that can be evaluated with a single left-to-right pass using constant memory. Our main result is that LSTMs can learn to map the character-level representations of such programs to their correct outputs. Notably, it was necessary to use curriculum learning, and while conventional curriculum learning proved ineffective, we developed a new variant of curriculum learning that improved our networks' performance in all experimental conditions. The improved curriculum had a dramatic impact on an addition problem, making it possible to train an LSTM to add two 9-digit numbers with 99 accuracy.",
"Exploratory activities seem to be intrinsically rewarding for children and crucial for their cognitive development. Can a machine be endowed with such an intrinsic motivation system? This is the question we study in this paper, presenting a number of computational systems that try to capture this drive towards novel or curious situations. After discussing related research coming from developmental psychology, neuroscience, developmental robotics, and active learning, this paper presents the mechanism of Intelligent Adaptive Curiosity, an intrinsic motivation system which pushes a robot towards situations in which it maximizes its learning progress. This drive makes the robot focus on situations which are neither too predictable nor too unpredictable, thus permitting autonomous mental development. The complexity of the robot's activities autonomously increases and complex developmental sequences self-organize without being constructed in a supervised manner. Two experiments are presented illustrating the stage-like organization emerging with this mechanism. In one of them, a physical robot is placed on a baby play mat with objects that it can learn to manipulate. Experimental results show that the robot first spends time in situations which are easy to learn, then shifts its attention progressively to situations of increasing difficulty, avoiding situations in which nothing can be learned. Finally, these various results are discussed in relation to more complex forms of behavioral organization and data coming from developmental psychology",
""
]
} |
1708.02196 | 2745060371 | This paper presents a joint trajectory smoothing and tracking framework for a specific class of targets with smooth motion. We model the target trajectory by a continuous function of time (FoT), which leads to a curve fitting approach that finds a trajectory FoT fitting the sensor data in a sliding time-window. A simulation study is conducted to demonstrate the effectiveness of our approach in tracking a maneuvering target, in comparison with the conventional filters and smoothers. Note to Practitioners —Estimation, such as automatically tracking and predicting the movement of an aircraft, a train, or a bus, plays a key role in our daily life. In this paper, we provide a new approach for the online estimation of the target trajectory function by means of fitting the time-series observation, which accommodates the lack of quantifiable knowledge about the target motion and of the statistical property of the sensor observation noise. The resulting trajectory function can be used to infer either the past or the present state of the target. Engineering-friendly strategies are provided for computationally efficient implementation. The proposed approach is particularly appealing to a broad range of real-world targets that move in smooth courses, such as passenger aircraft and ships. | Target trajectory estimation and analysis not only allow recording the history of past locations and predicting future ones, but also provide a means to identify behavioral patterns or relationships between locations observed in time series and to guide target detection in future frames @cite_17 , to name a few applications. Most existing works, however, are based on either a deterministic or a stochastic HMM assumption about the target motion and require the statistical properties of the observations, which forms the key difference from our approach. In addition, no existing attempts explicitly unify the tasks of smoothing, filtering, tracking and forecasting fully on the basis of data-fitting learning.
| {
"cite_N": [
"@cite_17"
],
"mid": [
"2148442626"
],
"abstract": [
"We present a novel approach for multi-object tracking which considers object detection and spacetime trajectory estimation as a coupled optimization problem. It is formulated in a hypothesis selection framework and builds upon a state-of-the-art pedestrian detector. At each time instant, it searches for the globally optimal set of spacetime trajectories which provides the best explanation for the current image and for all evidence collected so far, while satisfying the constraints that no two objects may occupy the same physical space, nor explain the same image pixels at any point in time. Successful trajectory hypotheses are fed back to guide object detection in future frames. The optimization procedure is kept efficient through incremental computation and conservative hypothesis pruning. The resulting approach can initialize automatically and track a large and varying number of persons over long periods and through complex scenes with clutter, occlusions, and large-scale background changes. Also, the global optimization framework allows our system to recover from mismatches and temporarily lost tracks. We demonstrate the feasibility of the proposed approach on several challenging video sequences."
]
} |
1708.02196 | 2745060371 | This paper presents a joint trajectory smoothing and tracking framework for a specific class of targets with smooth motion. We model the target trajectory by a continuous function of time (FoT), which leads to a curve fitting approach that finds a trajectory FoT fitting the sensor data in a sliding time-window. A simulation study is conducted to demonstrate the effectiveness of our approach in tracking a maneuvering target, in comparison with the conventional filters and smoothers. Note to Practitioners —Estimation, such as automatically tracking and predicting the movement of an aircraft, a train, or a bus, plays a key role in our daily life. In this paper, we provide a new approach for the online estimation of the target trajectory function by means of fitting the time-series observation, which accommodates the lack of quantifiable knowledge about the target motion and of the statistical property of the sensor observation noise. The resulting trajectory function can be used to infer either the past or the present state of the target. Engineering-friendly strategies are provided for computationally efficient implementation. The proposed approach is particularly appealing to a broad range of real-world targets that move in smooth courses, such as passenger aircraft and ships. | More relevant to our approach, efforts have been devoted to continuous-time trajectory estimation via data fitting in different disciplines. Indeed, signal processing stems from the interpolation and extrapolation of a sequence of observations @cite_82 . Data fitting is a self-contained mathematical problem and a prosperous research theme in its own right, which has proven to be a powerful and universal method for pattern learning and time-series prediction, especially when adequate analytical solutions do not exist. Moreover, the recursive LS algorithm reformulated in state-space form was recognized as a special case of the Kalman filter (KF) @cite_48 @cite_21 .
| {
"cite_N": [
"@cite_48",
"@cite_21",
"@cite_82"
],
"mid": [
"2170081783",
"2169135280",
""
],
"abstract": [
"This discussion is directed to least-squares estimation theory, from its inception by Gauss1 to its modern form, as developed by Kalman.2 To aid in furnishing the desired perspective, the contributions and insights provided by Gauss are described and related to developments that have appeared more recently (that is, in the 20th century). In the author's opinion, it is enlightening to consider just how far (or how little) we have advanced since the initial developments and to recognize the truth in the saying that we stand on the shoulders of giants.''",
"Adaptive filtering algorithms fall into four main groups: recursive least squares (RLS) algorithms and the corresponding fast versions; QR- and inverse QR-least squares algorithms; least squares lattice (LSL) and QR decomposition-based least squares lattice (QRD-LSL) algorithms; and gradient-based algorithms such as the least-mean square (LMS) algorithm. Our purpose in this article is to present yet another approach, for the sake of achieving two important goals. The first one is to show how several different variants of the recursive least-squares algorithm can be directly related to the widely studied Kalman filtering problem of estimation and control. Our second important goal is to present all the different versions of the RLS algorithm in computationally convenient square-root forms: a prearray of numbers has to be triangularized by a rotation, or a sequence of elementary rotations, in order to yield a postarray of numbers. The quantities needed to form the next prearray can then be read off from the entries of the postarray, and the procedure can be repeated; the explicit forms of the rotation matrices are not needed in most cases. >",
""
]
} |
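The row above states that recursive least squares (RLS) in state-space form is a special case of the Kalman filter. As a minimal sketch of that connection (ours, not code from any cited paper): RLS for a linear model y = w0 + w1*x is exactly a Kalman filter with identity state transition, zero process noise, and measurement row [1, x].

```python
# Hypothetical illustration: RLS viewed as a Kalman filter with a static
# state (F = I, Q = 0, H = [1, x], R = lam).
def rls(samples, lam=1.0):
    w = [0.0, 0.0]                          # weight estimate (the "state")
    P = [[1e6, 0.0], [0.0, 1e6]]            # large (vague) prior covariance
    for x, y in samples:
        h = [1.0, x]                        # regressor = measurement row
        Ph = [P[0][0]*h[0] + P[0][1]*h[1],  # P @ h
              P[1][0]*h[0] + P[1][1]*h[1]]
        s = lam + h[0]*Ph[0] + h[1]*Ph[1]   # innovation variance
        k = [Ph[0]/s, Ph[1]/s]              # Kalman gain
        e = y - (h[0]*w[0] + h[1]*w[1])     # innovation (residual)
        w = [w[0] + k[0]*e, w[1] + k[1]*e]  # state update
        hP = [h[0]*P[0][0] + h[1]*P[1][0],  # h^T P
              h[0]*P[0][1] + h[1]*P[1][1]]
        P = [[P[0][0] - k[0]*hP[0], P[0][1] - k[0]*hP[1]],
             [P[1][0] - k[1]*hP[0], P[1][1] - k[1]*hP[1]]]
    return w

data = [(x, 2.0 + 3.0*x) for x in range(6)]  # noiseless line y = 2 + 3x
w = rls(data)                                # converges to about [2, 3]
```

With a large prior covariance and noiseless data the recursion recovers the batch least-squares solution, which is the point of the cited equivalence.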
1708.02196 | 2745060371 | This paper presents a joint trajectory smoothing and tracking framework for a specific class of targets with smooth motion. We model the target trajectory by a continuous function of time (FoT), which leads to a curve fitting approach that finds a trajectory FoT fitting the sensor data in a sliding time-window. A simulation study is conducted to demonstrate the effectiveness of our approach in tracking a maneuvering target, in comparison with the conventional filters and smoothers. Note to Practitioners —Estimation, such as automatically tracking and predicting the movement of an aircraft, a train, or a bus, plays a key role in our daily life. In this paper, we provide a new approach for the online estimation of the target trajectory function by means of fitting the time-series observation, which accommodates the lack of quantifiable knowledge about the target motion and of the statistical property of the sensor observation noise. The resulting trajectory function can be used to infer either the past or the present state of the target. Engineering-friendly strategies are provided for computationally efficient implementation. The proposed approach is particularly appealing to a broad range of real-world targets that move in smooth courses, such as passenger aircraft and ships. | However, most existing works operate in a batch manner based on either MLE @cite_3 or Bayesian inference @cite_81 @cite_1 , or serve as an extra scheme on top of a recursive filtering algorithm @cite_52 @cite_20 @cite_35 . In @cite_3 , directional bearing data from one or multiple sensors are investigated, where Cardinal splines (i.e., splines with equally spaced knots) of different dimensions are fit to the data in the MLE manner; this is one of the earliest of the few attempts that assume a spatio-temporal trajectory for tracking.
In @cite_81 , the trajectory is approximated by a cubic spline with an unknown number of knots in the 2D state plane, and the function estimate is determined from positional measurements which are assumed to be received in batches at irregular time intervals. For data drawn from an exponential family, the spline knot configurations (number and locations) are changed by reversible-jump Markov chain Monte Carlo @cite_89 . At much greater complexity, artificial neural networks were considered as a parametric non-linear model in @cite_11 , which is computationally unaffordable for online estimation. | {
"cite_N": [
"@cite_35",
"@cite_1",
"@cite_52",
"@cite_3",
"@cite_89",
"@cite_81",
"@cite_20",
"@cite_11"
],
"mid": [
"2053931670",
"",
"2106669646",
"1996130934",
"2068120653",
"2100687146",
"2165465194",
"2187460249"
],
"abstract": [
"Maneuvering target tracking is a challenge. Target’s sudden speed or direction changing would make the common filtering tracker divergence. To improve the accuracy of maneuvering target tracking, we propose a tracking algorithm based on spline fitting. Curve fitting, based on historical point trace, reflects the mobility information. The innovation of this paper is assuming that there is no dynamic motion model, and prediction is only based on the curve fitting over the measured data. Monte Carlo simulation results show that, when sea targets are maneuvering, the proposed algorithm has better accuracy than the conventional Kalman filter algorithm and the interactive multiple model filtering algorithm, maintaining simple structure and small amount of storage.",
"",
"In underwater target tracking applications, measurement uncertainty and inaccuracies are usually modeled as additive Gaussian noise. The Gaussian model of noise may not be appropriate in many practical systems. The non-Gaussian noise and the model non-linearity arising in a tracking system will seriously affect the tracking performance. This paper discusses one way to create a robust version of the extended Kalman filter for enhanced underwater target tracking. State estimation in the filter is done through the robust regression approach and Welsch's proposal is used in the regression process. Monte Carlo simulation results with heavy-tailed contaminated observation noise demonstrate the robustness of the proposed estimation procedure. >",
"Abstract In many applications, bearings are measured to a moving object with the goal of estimating the object's course of movement. If movement is appropriately modeled as a smooth deterministic curve in the plane, then a cubic spline is a reasonable representation of the curve. Maximum likelihood estimators are presented for parameters of regression splines, assuming that observation errors follow a Von Mises distribution. Location estimates are obtainable even when data are sparse. Path estimation error, number and placement of knots, and outlier detection are discussed. Examples, including both simulated paths and observations from a wildlife radio-tracking study, are presented.",
"We describe a Bayesian method, for fitting curves to data drawn from an exponential family, that uses splines for which the number and locations of knots are free parameters. The method uses reversible jump Markov chain Monte Carlo to change the knot configurations and a locality heuristic to speed up mixing. For nonnormal models, we approximate the integrated likelihood ratios needed to compute acceptance probabilities by using the Bayesian information criterion, BIC, under priors that make this approximation accurate. Our technique is based on a marginalised chain on the knot number and locations, but we provide methods for inference about the regression coefficients, and functions of them, in both normal and nonnormal models. Simulation results suggest that the method performs well, and we illustrate the method in two neuroscience applications.",
"A curve fitting algorithm for batch ship trajectory estimation that employs Bayesian statistical inference for non-parametric regression is presented. It assumes no knowledge about the ship motion model while only assuming standard ship maneuvers. The trajectory is thought to be well represented by a cubic spline with an unknown number of knots in two-dimensional Euclidean plane. The function estimate is determined from positional measurements which are assumed to be received in batches at irregular time intervals. As the measurements are often delivered by different sensors the measurement errors are assumed to be heteroscedastic and correlated. A fully Bayesian approach is adopted by defining the prior distributions on all unknown parameters: the spline coefficients as well as the number and the locations of knots. The quality of the estimator algorithm is evaluated statistically using several simulated scenarios. The results suggest that the algorithm represents efficient methodology for trajectory estimation in maritime surveillance, especially in the absence of prior knowledge of the motion model.",
"Tracking of highly maneuvering targets with unknown behavior is a difficult problem in state estimation. This paper presents an interacting multiple model algorithm (IMM) utilizing adaptive turn rate models to track a maneuvering target. The turn rate is calculated at each step form the estimator of velocity and the radius of curvature of the trajectory of the target by using Least Square (LS) and curve fitting theory. Simulation in different scenario proves that the turn-rate estimation techniques in this adaptive framework can significantly solve the problem of tracking maneuvering targets.",
"Ground-based aircraft trajectory prediction is a critical issue for air traffic management. A safe and efficient prediction is a prerequisite for the implementation of automated tools that detect and solve conflicts between trajectories. Moreover, regarding the safety constraints, it could be more reasonable to predict intervals rather than precise aircraft positions . In this paper, a standard point-mass model and statistical regression method is used to predict the altitude of climbing aircraft. In addition to the standard linear regression model, two common non-linear regression methods, neural networks and Loess are used. A dataset is extracted from two months of radar and meteorological recordings, and several potential explanatory variables are computed for every sampled climb segment. A Principal Component Analysis allows us to reduce the dimensionality of the problems, using only a subset of principal components as input to the regression methods. The prediction models are scored by performing a 10-fold cross-validation. Statistical regression results method appears promising. The experiment part shows that the proposed regression models are much more efficient than the standard point-mass model. The prediction intervals obtained by our methods have the advantage of being more reliable and narrower than those found by point-mass model."
]
} |
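The paper's core idea in the rows above — fit a trajectory function of time (FoT) to a sliding window of observations, then evaluate it at past or future times — can be sketched minimally. This is our illustrative example (a quadratic FoT fitted by least squares via the normal equations), not the paper's actual model or code:

```python
# Hypothetical sketch: trajectory as x(t) = c0 + c1*t + c2*t^2, fitted by
# least squares over a sliding window; the fitted FoT then serves both
# smoothing (past t) and prediction (future t).
def fit_quadratic(ts, xs):
    # Normal equations A c = b for the monomial basis [1, t, t^2].
    A = [[sum(t**(i + j) for t in ts) for j in range(3)] for i in range(3)]
    b = [sum(x * t**i for t, x in zip(ts, xs)) for i in range(3)]
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    c = [0.0, 0.0, 0.0]                      # back substitution
    for i in (2, 1, 0):
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, 3))) / A[i][i]
    return c

def traj(t):                                 # ground-truth smooth motion
    return 1.0 + 0.5 * t - 0.2 * t * t

ts = [0, 1, 2, 3, 4]                         # sliding window of timestamps
c = fit_quadratic(ts, [traj(t) for t in ts])
pred = c[0] + c[1] * 5 + c[2] * 25           # extrapolate the FoT to t = 5
```

With noiseless quadratic data the fit is exact, so the extrapolated position matches the true trajectory; with noisy data the same machinery performs the joint smoothing-and-prediction role described in the abstract.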
1708.02255 | 2952032703 | Generative statistical models of chord sequences play crucial roles in music processing. To capture syntactic similarities among certain chords (e.g. in C major key, between G and G7 and between F and Dm), we study hidden Markov models and probabilistic context-free grammar models with latent variables describing syntactic categories of chord symbols and their unsupervised learning techniques for inducing the latent grammar from data. Surprisingly, we find that these models often outperform conventional Markov models in predictive power, and the self-emergent categories often correspond to traditional harmonic functions. This implies the need for chord categories in harmony models from the informatics perspective. | In Ref. @cite_0 , unsupervised learning was applied to a variant of PCFG models for chord symbols in which nonterminals describe harmonic functions. In this study, the nonterminals and the production rules were manually determined in accordance with existing data of harmony analysis and only the probabilities were learned unsupervisedly. Our PCFG models can be considered as extensions of this model as we allow larger numbers of nonterminals and they have no prior labels or restricted production rules. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2550442014"
],
"abstract": [
"While there is a growing body of work proposing grammars for music, there is little work testing analytical grammars in a generative setting. We explore the process of learning production probabilities for musical grammars from musical corpora and test the results using Kulitta, a recently developed framework for automated composition. To do this, we extend a well-known algorithm for learning production probabilities for context-free grammars (CFGs) to support various musical CFGs as well as an additional category of grammars called probabilistic temporal graph grammars."
]
} |
1708.02255 | 2952032703 | Generative statistical models of chord sequences play crucial roles in music processing. To capture syntactic similarities among certain chords (e.g. in C major key, between G and G7 and between F and Dm), we study hidden Markov models and probabilistic context-free grammar models with latent variables describing syntactic categories of chord symbols and their unsupervised learning techniques for inducing the latent grammar from data. Surprisingly, we find that these models often outperform conventional Markov models in predictive power, and the self-emergent categories often correspond to traditional harmonic functions. This implies the need for chord categories in harmony models from the informatics perspective. | Self-emergent HMMs have been applied for unsupervised POS tagging @cite_22 @cite_4 . In Ref. @cite_4 , Bayesian HMMs were learned using GS and they were shown to yield significantly higher tagging accuracies than HMMs trained by the EM algorithm. In Ref. @cite_22 , it was shown that the performance of HMMs learned by the EM algorithm is largely influenced by the random initialisation and when the size of the state space is small (e.g. 25), they yielded a similar tagging accuracy as the Bayesian HMMs. Since the tagging accuracy was the focus of these studies, the perplexities of the output symbols (words) were not measured. | {
"cite_N": [
"@cite_4",
"@cite_22"
],
"mid": [
"2099873701",
"1570013475"
],
"abstract": [
"Unsupervised learning of linguistic structure is a difficult problem. A common approach is to define a generative model and maximize the probability of the hidden structure given the observed data. Typically, this is done using maximum-likelihood estimation (MLE) of the model parameters. We show using part-of-speech tagging that a fully Bayesian approach can greatly improve performance. Rather than estimating a single set of parameters, the Bayesian approach integrates over all possible parameter values. This difference ensures that the learned structure will have high probability over a range of possible parameters, and permits the use of priors favoring the sparse distributions that are typical of natural language. Our model has the structure of a standard trigram HMM, yet its accuracy is closer to that of a state-of-the-art discriminative model (Smith and Eisner, 2005), up to 14 percentage points better than MLE. We find improvements both when training from data alone, and using a tagging dictionary.",
"This paper investigates why the HMMs estimated by Expectation-Maximization (EM) produce such poor results as Part-of-Speech (POS) taggers. We find that the HMMs estimated by EM generally assign a roughly equal number of word tokens to each hidden state, while the empirical distribution of tokens to POS tags is highly skewed. This motivates a Bayesian approach using a sparse prior to bias the estimator toward such a skewed distribution. We investigate Gibbs Sampling (GS) and Variational Bayes (VB) estimators and show that VB converges faster than GS for this task and that VB significantly improves 1-to-1 tagging accuracy over EM. We also show that EM does nearly as well as VB when the number of hidden HMM states is dramatically reduced. We also point out the high variance in all of these estimators, and that they require many more iterations to approach convergence than usually thought."
]
} |
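The rows above compare models by the perplexity of their output symbols. For concreteness, here is a toy sketch (ours, not from the cited papers) of that measure on a first-order Markov (bigram) chord model with add-one smoothing; the chord vocabulary and sequences are made up for illustration:

```python
# Illustrative only: per-symbol perplexity 2^{-(1/N) sum log2 P(cur | prev)}
# of a smoothed bigram model, the quantity used to rank the grammar models.
import math
from collections import Counter

def train_bigram(seq, vocab):
    counts = Counter(zip(seq, seq[1:]))      # bigram counts
    ctx = Counter(seq[:-1])                  # context (previous-symbol) counts
    V = len(vocab)
    # Add-one smoothed conditional probability P(cur | prev).
    return lambda prev, cur: (counts[(prev, cur)] + 1) / (ctx[prev] + V)

vocab = ["C", "F", "G", "Am"]
train = ["C", "F", "G", "C", "Am", "F", "G", "C", "F", "G", "C"]
test = ["C", "F", "G", "C"]

p = train_bigram(train, vocab)
logp = sum(math.log2(p(a, b)) for a, b in zip(test, test[1:]))
perplexity = 2 ** (-logp / (len(test) - 1))  # lower = better predictive power
```

A latent-class model (HMM or PCFG with self-emergent categories) would be scored the same way, replacing the bigram probability with the marginal probability of the symbol sequence.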
1708.02255 | 2952032703 | Generative statistical models of chord sequences play crucial roles in music processing. To capture syntactic similarities among certain chords (e.g. in C major key, between G and G7 and between F and Dm), we study hidden Markov models and probabilistic context-free grammar models with latent variables describing syntactic categories of chord symbols and their unsupervised learning techniques for inducing the latent grammar from data. Surprisingly, we find that these models often outperform conventional Markov models in predictive power, and the self-emergent categories often correspond to traditional harmonic functions. This implies the need for chord categories in harmony models from the informatics perspective. | Unsupervised learning of PCFG models was studied in the context of grammar induction and parsing @cite_6 . The maximum likelihood estimation using the EM algorithm and Bayesian estimation using GS were compared and the latter method was shown to give higher accuracies in some cases. It was argued that even though for both learning schemes fully unsupervised grammar induction using simple PCFG models is difficult, Bayesian learning seems to induce more meaningful grammar owing to its ability to prefer sparse grammars. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2164151151"
],
"abstract": [
"This paper presents two Markov chain Monte Carlo (MCMC) algorithms for Bayesian inference of probabilistic context free grammars (PCFGs) from terminal strings, providing an alternative to maximum-likelihood estimation using the Inside-Outside algorithm. We illustrate these methods by estimating a sparse grammar describing the morphology of the Bantu language Sesotho, demonstrating that with suitable priors Bayesian techniques can infer linguistic structure in situations where maximum likelihood methods such as the Inside-Outside algorithm only produce a trivial grammar."
]
} |
1708.02255 | 2952032703 | Generative statistical models of chord sequences play crucial roles in music processing. To capture syntactic similarities among certain chords (e.g. in C major key, between G and G7 and between F and Dm), we study hidden Markov models and probabilistic context-free grammar models with latent variables describing syntactic categories of chord symbols and their unsupervised learning techniques for inducing the latent grammar from data. Surprisingly, we find that these models often outperform conventional Markov models in predictive power, and the self-emergent categories often correspond to traditional harmonic functions. This implies the need for chord categories in harmony models from the informatics perspective. | Unsupervised learning has also been applied to adapt statistical language models for particular data. For NLP, there exist corpora of syntactic trees that are based on widely accepted linguistic theories (e.g. @cite_27 ), from which production probabilities can be obtained by supervised learning. On the other hand, studies have shown that extending the standard annotation symbols (e.g. NP) to incorporate context dependence and subcategories of words, often called symbol refinement, improves the accuracy of parsing @cite_12 . Unsupervised learning of variants of PCFG models has been successfully applied to find optimal refinement of nonterminal symbols from data @cite_23 . | {
"cite_N": [
"@cite_27",
"@cite_23",
"@cite_12"
],
"mid": [
"",
"2152561660",
"1551104980"
],
"abstract": [
"",
"This paper defines a generative probabilistic model of parse trees, which we call PCFG-LA. This model is an extension of PCFG in which non-terminal symbols are augmented with latent variables. Fine-grained CFG rules are automatically induced from a parsed corpus by training a PCFG-LA model using an EM-algorithm. Because exact parsing with a PCFG-LA is NP-hard, several approximations are described and empirically compared. In experiments using the Penn WSJ corpus, our automatically trained model gave a performance of 86.6 (F1, sentences ≤ 40 words), which is comparable to that of an unlexicalized PCFG parser created using extensive manual feature selection.",
"The kinds of tree representations used in a treebank corpus can have a dramatic effect on performance of a parser based on the PCFG estimated from that corpus, causing the estimated likelihood of a tree to differ substantially from its frequency in the training corpus. This paper points out that the Penn II treebank representations are of the kind predicted to have such an effect, and describes a simple node relabeling transformation that improves a treebank PCFG-based parser's average precision and recall by around 8 , or approximately half of the performance difference between a simple PCFG model and the best broad-coverage parsers available today. This performance variation comes about because any PCFG, and hence the corpus of trees from which the PCFG is induced, embodies independence assumptions about the distribution of words and phrases. The particular independence assumptions implicit in a tree representation can be studied theoretically and investigated empirically by means of a tree transformation detransformation process."
]
} |
1708.02179 | 2743330515 | Human pose analysis is presently dominated by deep convolutional networks trained with extensive manual annotations of joint locations and beyond. To avoid the need for expensive labeling, we exploit spatiotemporal relations in training videos for self-supervised learning of pose embeddings. The key idea is to combine temporal ordering and spatial placement estimation as auxiliary tasks for learning pose similarities in a Siamese convolutional network. Since the self-supervised sampling of both tasks from natural videos can result in ambiguous and incorrect training labels, our method employs a curriculum learning idea that starts training with the most reliable data samples and gradually increases the difficulty. To further refine the training process we mine repetitive poses in individual videos which provide reliable labels while removing inconsistencies. Our pose embeddings capture visual characteristics of human pose that can boost existing supervised representations in human pose estimation and retrieval. We report quantitative and qualitative results on these tasks in Olympic Sports, Leeds Pose Sports and MPII Human Pose datasets. | Pose estimation aims at finding locations of body joints, whereas pose retrieval or embedding finds a metric that can retrieve the most similar poses and discriminate samples according to their pose information, without localizing joints directly. With the advancements in convolutional neural networks @cite_24 , pose estimation is also dominated by deep learning-based methods. Toshev and Szegedy @cite_28 estimated joint locations by direct regression in a CNN architecture. Instead of simply regressing joint locations, Chen and Yuille @cite_9 learned pairwise part relations by combining CNNs with graphical models. Tompson @cite_12 exploited CNNs to model relationships between body parts with a cascaded refinement.
A recent work by Newell @cite_2 used fully convolutional networks in a bottom-up top-down manner to predict heatmaps for joint locations. | {
"cite_N": [
"@cite_28",
"@cite_9",
"@cite_24",
"@cite_2",
"@cite_12"
],
"mid": [
"2113325037",
"2155394491",
"",
"2950762923",
"2952422028"
],
"abstract": [
"We propose a method for human pose estimation based on Deep Neural Networks (DNNs). The pose estimation is formulated as a DNN-based regression problem towards body joints. We present a cascade of such DNN regres- sors which results in high precision pose estimates. The approach has the advantage of reasoning about pose in a holistic fashion and has a simple but yet powerful formula- tion which capitalizes on recent advances in Deep Learn- ing. We present a detailed empirical analysis with state-of- art or better performance on four academic benchmarks of diverse real-world images.",
"We present a method for estimating articulated human pose from a single static image based on a graphical model with novel pairwise relations that make adaptive use of local image measurements. More precisely, we specify a graphical model for human pose which exploits the fact the local image measurements can be used both to detect parts (or joints) and also to predict the spatial relationships between them (Image Dependent Pairwise Relations). These spatial relationships are represented by a mixture model. We use Deep Convolutional Neural Networks (DCNNs) to learn conditional probabilities for the presence of parts and their spatial relationships within image patches. Hence our model combines the representational flexibility of graphical models with the efficiency and statistical power of DCNNs. Our method significantly outperforms the state of the art methods on the LSP and FLIC datasets and also performs very well on the Buffy dataset without any training.",
"",
"This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a \"stacked hourglass\" network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.",
"This paper proposes a new hybrid architecture that consists of a deep Convolutional Network and a Markov Random Field. We show how this architecture is successfully applied to the challenging problem of articulated human pose estimation in monocular images. The architecture can exploit structural domain constraints such as geometric relationships between body joint locations. We show that joint training of these two model paradigms improves performance and allows us to significantly outperform existing state-of-the-art techniques."
]
} |
1708.02179 | 2743330515 | Human pose analysis is presently dominated by deep convolutional networks trained with extensive manual annotations of joint locations and beyond. To avoid the need for expensive labeling, we exploit spatiotemporal relations in training videos for self-supervised learning of pose embeddings. The key idea is to combine temporal ordering and spatial placement estimation as auxiliary tasks for learning pose similarities in a Siamese convolutional network. Since the self-supervised sampling of both tasks from natural videos can result in ambiguous and incorrect training labels, our method employs a curriculum learning idea that starts training with the most reliable data samples and gradually increases the difficulty. To further refine the training process we mine repetitive poses in individual videos which provide reliable labels while removing inconsistencies. Our pose embeddings capture visual characteristics of human pose that can boost existing supervised representations in human pose estimation and retrieval. We report quantitative and qualitative results on these tasks in Olympic Sports, Leeds Pose Sports and MPII Human Pose datasets. | The first Siamese-type architecture @cite_14 was proposed to learn a similarity metric for signature verification. Similarity learning was also applied in human pose analysis. In @cite_8 and @cite_25 , body joint locations are used to create similar and dissimilar pairs of instances from annotated human pose datasets. @cite_25 also transferred a learned pose embedding to action recognition.
"cite_N": [
"@cite_14",
"@cite_25",
"@cite_8"
],
"mid": [
"2171590421",
"2428123325",
"789003102"
],
"abstract": [
"This paper describes the development of an algorithm for verification of signatures written on a touch-sensitive pad. The signature verification algorithm is based on an artificial neural network. The novel network presented here, called a “Siamese” time delay neural network, consists of two identical networks joined at their output. During training the network learns to measure the similarity between pairs of signatures. When used for verification, only one half of the Siamese network is evaluated. The output of this half network is the feature vector for the input signature. Verification consists of comparing this feature vector with a stored feature vector for the signer. Signatures closer than a chosen threshold to this stored representation are accepted, all other signatures are rejected as forgeries. System performance is illustrated with experiments performed in the laboratory.",
"We address the problem of learning a pose-aware, compact embedding that projects images with similar human poses to be placed close-by in the embedding space. The embedding function is built on a deep convolutional network, and trained with triplet-based rank constraints on real image data. This architecture allows us to learn a robust representation that captures differences in human poses by effectively factoring out variations in clothing, background, and imaging conditions in the wild. For a variety of pose-related tasks, the proposed pose embedding provides a cost-efficient and natural alternative to explicit pose estimation, circumventing challenges of localizing body joints. We demonstrate the efficacy of the embedding on pose-based image retrieval and action recognition problems.",
"We present a method for learning an embedding that places images of humans in similar poses nearby. This embedding can be used as a direct method of comparing images based on human pose, avoiding potential challenges of estimating body joint positions. Pose embedding learning is formulated under a triplet-based distance criterion. A deep architecture is used to allow learning of a representation capable of making distinctions between different poses. Experiments on human pose matching and retrieval from video data demonstrate the potential of the method."
]
} |
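The Siamese similarity learning described in the rows above trains a metric so that similar poses land close in embedding space. One common objective for such pairs is the margin-based contrastive loss; the sketch below is ours for illustration and is not necessarily the loss used in the cited works:

```python
# Hypothetical sketch of a contrastive loss on embedding pairs:
# similar pairs are pulled together, dissimilar pairs are pushed apart
# until their distance exceeds the margin.
import math

def contrastive_loss(e1, e2, similar, margin=1.0):
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(e1, e2)))
    if similar:
        return 0.5 * d * d                       # pull similar pairs together
    return 0.5 * max(0.0, margin - d) ** 2       # push dissimilar pairs apart

# Identical embeddings of a similar pair incur zero loss; a dissimilar
# pair already farther apart than the margin also incurs zero loss.
l_sim = contrastive_loss([0.1, 0.2], [0.1, 0.2], similar=True)
l_far = contrastive_loss([0.0, 0.0], [2.0, 0.0], similar=False)
l_near = contrastive_loss([0.0, 0.0], [0.5, 0.0], similar=False)
```

In a Siamese network, `e1` and `e2` would be the outputs of two weight-shared branches, and this loss (summed over mined pairs) drives the shared weights.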
1708.02179 | 2743330515 | Human pose analysis is presently dominated by deep convolutional networks trained with extensive manual annotations of joint locations and beyond. To avoid the need for expensive labeling, we exploit spatiotemporal relations in training videos for self-supervised learning of pose embeddings. The key idea is to combine temporal ordering and spatial placement estimation as auxiliary tasks for learning pose similarities in a Siamese convolutional network. Since the self-supervised sampling of both tasks from natural videos can result in ambiguous and incorrect training labels, our method employs a curriculum learning idea that starts training with the most reliable data samples and gradually increases the difficulty. To further refine the training process we mine repetitive poses in individual videos which provide reliable labels while removing inconsistencies. Our pose embeddings capture visual characteristics of human pose that can boost existing supervised representations in human pose estimation and retrieval. We report quantitative and qualitative results on these tasks in Olympic Sports, Leeds Pose Sports and MPII Human Pose datasets. | These works in pose estimation and similarity learning exploited large amounts of annotations (body joints or labeling of similar/dissimilar postures). However, unsupervised learning methods that do not use labels have shown promising performance in various learning tasks in the last decade. Self-supervised learning is now very popular, alongside classical unsupervised methods such as clustering, autoencoders @cite_1 , and restricted Boltzmann machines @cite_5 . The availability of big data motivated the community to investigate alternative sources of supervision such as ego-motion @cite_29 @cite_20 , colorization @cite_32 , image generation @cite_10 , and spatial @cite_3 @cite_6 or temporal cues @cite_27 @cite_26 .
As our approach belongs to the class of self-supervised methods using spatial and temporal information, we describe these methods in detail. | {
"cite_N": [
"@cite_26",
"@cite_29",
"@cite_1",
"@cite_32",
"@cite_3",
"@cite_6",
"@cite_27",
"@cite_5",
"@cite_10",
"@cite_20"
],
"mid": [
"",
"2951590555",
"",
"",
"2950187998",
"",
"219040644",
"2157629899",
"2173520492",
""
],
"abstract": [
"",
"The dominant paradigm for feature learning in computer vision relies on training neural networks for the task of object recognition using millions of hand labelled images. Is it possible to learn useful features for a diverse set of visual tasks using any other form of supervision? In biology, living organisms developed the ability of visual perception for the purpose of moving and acting in the world. Drawing inspiration from this observation, in this work we investigate if the awareness of egomotion can be used as a supervisory signal for feature learning. As opposed to the knowledge of class labels, information about egomotion is freely available to mobile agents. We show that given the same number of training images, features learnt using egomotion as supervision compare favourably to features learnt using class-label as supervision on visual tasks of scene recognition, object recognition, visual odometry and keypoint matching.",
"",
"",
"This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. We demonstrate that the feature representation learned using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations.",
"",
"Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNN. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have similar visual representation in deep feature space since they probably belong to same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52% mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4%. We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation.",
"When a vision system creates an interpretation of some input data, it assigns truth values or probabilities to internal hypotheses about the world. We present a non-deterministic method for assigning truth values that avoids many of the problems encountered by existing relaxation methods. Instead of representing probabilities with real numbers, we use a more direct encoding in which the probability associated with a hypothesis is represented by the probability that it is in one of two states, true or false. We give a particular nondeterministic operator, based on statistical mechanics, for updating the truth values of hypotheses. The operator ensures that the probability of discovering a particular combination of hypotheses is a simple function of how good that combination is. We show that there is a simple relationship between this operator and Bayesian inference, and we describe a learning rule which allows a parallel system to converge on a set of weights that optimizes its perceptual inferences.",
"In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.",
""
]
} |
1708.02179 | 2743330515 | Human pose analysis is presently dominated by deep convolutional networks trained with extensive manual annotations of joint locations and beyond. To avoid the need for expensive labeling, we exploit spatiotemporal relations in training videos for self-supervised learning of pose embeddings. The key idea is to combine temporal ordering and spatial placement estimation as auxiliary tasks for learning pose similarities in a Siamese convolutional network. Since the self-supervised sampling of both tasks from natural videos can result in ambiguous and incorrect training labels, our method employs a curriculum learning idea that starts training with the most reliable data samples and gradually increases the difficulty. To further refine the training process we mine repetitive poses in individual videos which provide reliable labels while removing inconsistencies. Our pose embeddings capture visual characteristics of human pose that can boost existing supervised representations in human pose estimation and retrieval. We report quantitative and qualitative results on these tasks in Olympic Sports, Leeds Pose Sports and MPII Human Pose datasets. | Wang and Gupta @cite_27 exploited videos by detecting interesting regions with SURF keypoints and tracking them. Then, they used a Siamese-triplet architecture with a ranking loss together with random negative selection and hard negative mining. However, tracking is not the best solution in the challenging context of pose analysis due to the non-rigid deformations of person patches which are in low resolution and contain too few keypoints to detect parts and track them precisely. | {
"cite_N": [
"@cite_27"
],
"mid": [
"219040644"
],
"abstract": [
"Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNN. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have similar visual representation in deep feature space since they probably belong to same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52% mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4%. We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation."
]
} |
1708.02179 | 2743330515 | Human pose analysis is presently dominated by deep convolutional networks trained with extensive manual annotations of joint locations and beyond. To avoid the need for expensive labeling, we exploit spatiotemporal relations in training videos for self-supervised learning of pose embeddings. The key idea is to combine temporal ordering and spatial placement estimation as auxiliary tasks for learning pose similarities in a Siamese convolutional network. Since the self-supervised sampling of both tasks from natural videos can result in ambiguous and incorrect training labels, our method employs a curriculum learning idea that starts training with the most reliable data samples and gradually increases the difficulty. To further refine the training process we mine repetitive poses in individual videos which provide reliable labels while removing inconsistencies. Our pose embeddings capture visual characteristics of human pose that can boost existing supervised representations in human pose estimation and retrieval. We report quantitative and qualitative results on these tasks in Olympic Sports, Leeds Pose Sports and MPII Human Pose datasets. | In order to learn a better representation, we argue that temporal cues which aim to learn whether given inputs are from temporally close windows or not are a more effective approach. Local proximity in data (slow feature analysis, SFA) was first proposed by Becker and Hinton @cite_22 . The most recent spatial and temporal self-supervised learning methods are inspired by SFA. Goroshin @cite_17 drew a connection between slowness and metric learning via temporal coherence. Motivated by temporal smoothness in feature space, Jayaraman and Grauman @cite_16 exploited higher-order coherence, which they referred to as steadiness, in various tasks. 
The slowness or steadiness criterion can introduce significant drawbacks, mostly because of limited motion and the repetitive nature of human actions. Thus, we learn auxiliary tasks in relatively small temporal windows which do not contain more than a single cycle of an action. Moreover, the use of curriculum learning @cite_33 and repetition mining refines and guides our self-supervised tasks to learn stronger temporal features. | {
"cite_N": [
"@cite_16",
"@cite_22",
"@cite_33",
"@cite_17"
],
"mid": [
"2285336231",
"2063971957",
"",
"2950760473"
],
"abstract": [
"How can unlabeled video augment visual learning? Existing methods perform \"slow\" feature analysis, encouraging the representations of temporally close frames to exhibit only small differences. While this standard approach captures the fact that high-level visual signals change slowly over time, it fails to capture *how* the visual content changes. We propose to generalize slow feature analysis to \"steady\" feature analysis. The key idea is to impose a prior that higher order derivatives in the learned feature space must be small. To this end, we train a convolutional neural network with a regularizer on tuples of sequential frames from unlabeled video. It encourages feature changes over time to be smooth, i.e., similar to the most recent changes. Using five diverse datasets, including unlabeled YouTube and KITTI videos, we demonstrate our method's impact on object, scene, and action recognition tasks. We further show that our features learned from unlabeled video can even surpass a standard heavily supervised pretraining approach.",
"The standard form of back-propagation learning is implausible as a model of perceptual learning because it requires an external teacher to specify the desired output of the network. We show how the external teacher can be replaced by internally derived teaching signals. These signals are generated by using the assumption that different parts of the perceptual input have common causes in the external world. Small modules that look at separate but related parts of the perceptual input discover these common causes by striving to produce outputs that agree with each other (Fig. 1a). The modules may look at different modalities (such as vision and touch), or the same modality at different times (for example, the consecutive two-dimensional views of a rotating three-dimensional object), or even spatially adjacent parts of the same image. Our simulations show that when our learning procedure is applied to adjacent patches of two-dimensional images, it allows a neural network that has no prior knowledge of the third dimension to discover depth in random dot stereograms of curved surfaces.",
"",
"Current state-of-the-art classification and detection algorithms rely on supervised training. In this work we study unsupervised feature learning in the context of temporally coherent video data. We focus on feature learning from unlabeled video data, using the assumption that adjacent video frames contain semantically similar information. This assumption is exploited to train a convolutional pooling auto-encoder regularized by slowness and sparsity. We establish a connection between slow feature learning to metric learning and show that the trained encoder can be used to define a more temporally and semantically coherent metric."
]
} |
1708.01956 | 2743101122 | We aim to tackle a novel vision task called Weakly Supervised Visual Relation Detection (WSVRD) to detect "subject-predicate-object" relations in an image with object relation groundtruths available only at the image level. This is motivated by the fact that it is extremely expensive to label the combinatorial relations between objects at the instance level. Compared to the extensively studied problem, Weakly Supervised Object Detection (WSOD), WSVRD is more challenging as it needs to examine a large set of regions pairs, which is computationally prohibitive and more likely stuck in a local optimal solution such as those involving wrong spatial context. To this end, we present a Parallel, Pairwise Region-based, Fully Convolutional Network (PPR-FCN) for WSVRD. It uses a parallel FCN architecture that simultaneously performs pair selection and classification of single regions and region pairs for object and relation detection, while sharing almost all computation shared over the entire image. In particular, we propose a novel position-role-sensitive score map with pairwise RoI pooling to efficiently capture the crucial context associated with a pair of objects. We demonstrate the superiority of PPR-FCN over all baselines in solving the WSVRD challenge by using results of extensive experiments over two visual relation benchmarks. | . A recent trend in deep networks is to replace fully-connected (fc) layers with convolutions, as in ResNets @cite_43 and GoogLeNet @cite_5 . Unlike fc layers, whose input and output sizes are fixed, an FCN can output dense predictions from arbitrary-sized inputs. Therefore, FCNs are widely used in segmentation @cite_38 @cite_6 , image restoration @cite_29 , and dense prediction of object detection windows @cite_3 . In particular, our PPR-FCN is inspired by another benefit of FCN exploited in R-FCN @cite_16 : per-RoI computation can be shared by convolutions. 
This is appealing because the expensive computation of pairwise RoIs is replaced by almost cost-free pooling. | {
"cite_N": [
"@cite_38",
"@cite_29",
"@cite_6",
"@cite_3",
"@cite_43",
"@cite_5",
"@cite_16"
],
"mid": [
"2952632681",
"2154815154",
"2557993245",
"2613718673",
"2949650786",
"2950179405",
"2950800384"
],
"abstract": [
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.",
"Photographs taken through a window are often compromised by dirt or rain present on the window surface. Common cases of this include pictures taken from inside a vehicle, or outdoor security cameras mounted inside a protective enclosure. At capture time, defocus can be used to remove the artifacts, but this relies on achieving a shallow depth-of-field and placement of the camera close to the window. Instead, we present a post-capture image processing solution that can remove localized rain and dirt artifacts from a single image. We collect a dataset of clean/corrupted image pairs which are then used to train a specialized form of convolutional neural network. This learns how to map corrupted image patches to clean ones, implicitly capturing the characteristic appearance of dirt and water droplets in natural images. Our models demonstrate effective removal of dirt and rain in outdoor test conditions.",
"Surveillance video parsing, which segments the video frames into several labels, e.g., face, pants, left-leg, has wide applications [41, 8]. However, pixel-wisely annotating all frames is tedious and inefficient. In this paper, we develop a Single frame Video Parsing (SVP) method which requires only one labeled frame per video in training stage. To parse one particular frame, the video segment preceding the frame is jointly considered. SVP (i) roughly parses the frames within the video segment, (ii) estimates the optical flow between frames and (iii) fuses the rough parsing results warped by optical flow to produce the refined parsing result. The three components of SVP, namely frame parsing, optical flow estimation and temporal fusion are integrated in an end-to-end manner. Experimental results on two surveillance video datasets show the superiority of SVP over state-of-the-arts. The collected video parsing datasets can be downloaded via http://liusi-group.com/projects/SVP for the further studies.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2% mAP) and 2012 (70.4% mAP) using 300 proposals per image. Code is available at https://github.com/ShaoqingRen/faster_rcnn.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.",
"We present region-based, fully convolutional networks for accurate and efficient object detection. In contrast to previous region-based detectors such as Fast/Faster R-CNN that apply a costly per-region subnetwork hundreds of times, our region-based detector is fully convolutional with almost all computation shared on the entire image. To achieve this goal, we propose position-sensitive score maps to address a dilemma between translation-invariance in image classification and translation-variance in object detection. Our method can thus naturally adopt fully convolutional image classifier backbones, such as the latest Residual Networks (ResNets), for object detection. We show competitive results on the PASCAL VOC datasets (e.g., 83.6% mAP on the 2007 set) with the 101-layer ResNet. Meanwhile, our result is achieved at a test-time speed of 170ms per image, 2.5-20x faster than the Faster R-CNN counterpart. Code is made publicly available at: this https URL"
]
} |
1708.01956 | 2743101122 | We aim to tackle a novel vision task called Weakly Supervised Visual Relation Detection (WSVRD) to detect "subject-predicate-object" relations in an image with object relation groundtruths available only at the image level. This is motivated by the fact that it is extremely expensive to label the combinatorial relations between objects at the instance level. Compared to the extensively studied problem, Weakly Supervised Object Detection (WSOD), WSVRD is more challenging as it needs to examine a large set of regions pairs, which is computationally prohibitive and more likely stuck in a local optimal solution such as those involving wrong spatial context. To this end, we present a Parallel, Pairwise Region-based, Fully Convolutional Network (PPR-FCN) for WSVRD. It uses a parallel FCN architecture that simultaneously performs pair selection and classification of single regions and region pairs for object and relation detection, while sharing almost all computation shared over the entire image. In particular, we propose a novel position-role-sensitive score map with pairwise RoI pooling to efficiently capture the crucial context associated with a pair of objects. We demonstrate the superiority of PPR-FCN over all baselines in solving the WSVRD challenge by using results of extensive experiments over two visual relation benchmarks. | . As there are no instance-level bounding boxes for training, the key challenge of WSOD is to localize and classify candidate RoIs simultaneously @cite_15 @cite_20 @cite_31 @cite_32 . The parallel architecture in PPR-FCN is inspired by the two-branch network of Bilen and Vedaldi @cite_35 , where the final detection score is a product of the scores from the parallel localization and classification branches. Similar structures can also be found in @cite_40 @cite_1 . 
Such parallel design is different from MIL @cite_19 in a fundamental way as regions are selected by a localization branch, which is independent of the classification branch. In this manner, it helps to avoid one of the pitfalls of MIL, namely the tendency of the method to get stuck in local optima. | {
"cite_N": [
"@cite_35",
"@cite_31",
"@cite_1",
"@cite_32",
"@cite_19",
"@cite_40",
"@cite_15",
"@cite_20"
],
"mid": [
"2101611867",
"2952072685",
"2247513039",
"2951270658",
"2154318594",
"2519284461",
"2133324800",
"318792885"
],
"abstract": [
"Weakly supervised learning of object detection is an important problem in image understanding that still does not have a satisfactory solution. In this paper, we address this problem by exploiting the power of deep convolutional neural networks pre-trained on large-scale image-level classification tasks. We propose a weakly supervised deep detection architecture that modifies one such network to operate at the level of image regions, performing simultaneously region selection and classification. Trained as an image classifier, the architecture implicitly learns object detectors that are better than alternative weakly supervised detection systems on the PASCAL VOC data. The model, which is a simple and elegant end-to-end architecture, outperforms standard data augmentation and fine-tuning techniques for the task of image-level classification as well.",
"Learning to localize objects with minimal supervision is an important problem in computer vision, since large fully annotated datasets are extremely costly to obtain. In this paper, we propose a new method that achieves this goal with only image-level labels of whether the objects are present or not. Our approach combines a discriminative submodular cover problem for automatically discovering a set of positive object windows with a smoothed latent SVM formulation. The latter allows us to leverage efficient quasi-Newton optimization techniques. Our experiments demonstrate that the proposed approach provides a 50% relative improvement in mean average precision over the current state-of-the-art on PASCAL VOC 2007 detection.",
"Grounding (i.e. localizing) arbitrary, free-form textual phrases in visual content is a challenging problem with many applications for human-computer interaction and image-text reference resolution. Few datasets provide the ground truth spatial localization of phrases, thus it is desirable to learn from data with no or little grounding supervision. We propose a novel approach which learns grounding by reconstructing a given phrase using an attention mechanism, which can be either latent or optimized directly. During training our approach encodes the phrase using a recurrent network language model and then learns to attend to the relevant image region in order to reconstruct the input phrase. At test time, the correct attention, i.e., the grounding, is evaluated. If grounding supervision is available it can be directly applied via a loss over the attention mechanism. We demonstrate the effectiveness of our approach on the Flickr30k Entities and ReferItGame datasets with different levels of supervision, ranging from no supervision over partial supervision to full supervision. Our supervised variant improves by a large margin over the state-of-the-art on both datasets.",
"Most existing weakly supervised localization (WSL) approaches learn detectors by finding positive bounding boxes based on features learned with image-level supervision. However, those features do not contain spatial location related information and usually provide poor-quality positive samples for training a detector. To overcome this issue, we propose a deep self-taught learning approach, which makes the detector learn the object-level features reliable for acquiring tight positive samples and afterwards re-train itself based on them. Consequently, the detector progressively improves its detection ability and localizes more informative positive samples. To implement such self-taught learning, we propose a seed sample acquisition method via image-to-object transferring and dense subgraph discovery to find reliable positive samples for initializing the detector. An online supportive sample harvesting scheme is further proposed to dynamically select the most confident tight positive samples and train the detector in a mutual boosting way. To prevent the detector from being trapped in poor optima due to overfitting, we propose a new relative improvement of predicted CNN scores for guiding the self-taught learning process. Extensive experiments on PASCAL 2007 and 2012 show that our approach outperforms the state-of-the-arts, strongly validating its effectiveness.",
"Multiple-instance learning is a variation on supervised learning, where the task is to learn a concept given positive and negative bags of instances. Each bag may contain many instances, but a bag is labeled positive even if only one of the instances in it falls within the concept. A bag is labeled negative only if all the instances in it are negative. We describe a new general framework, called Diverse Density, for solving multiple-instance learning problems. We apply this framework to learn a simple description of a person from a series of images (bags) containing that person, to a stock selection problem, and to the drug activity prediction problem.",
"We aim to localize objects in images using image-level supervision only. Previous approaches to this problem mainly focus on discriminative object regions and often fail to locate precise object boundaries. We address this problem by introducing two types of context-aware guidance models, additive and contrastive models, that leverage their surrounding context regions to improve localization. The additive model encourages the predicted object region to be supported by its surrounding context region. The contrastive model encourages the predicted object region to be outstanding from its surrounding context region. Our approach benefits from the recent success of convolutional neural networks for object recognition and extends Fast R-CNN to weakly supervised object localization. Extensive experimental evaluation on the PASCAL VOC 2007 and 2012 benchmarks shows that our context-aware approach significantly improves weakly supervised localization and detection.",
"Object category localization is a challenging problem in computer vision. Standard supervised training requires bounding box annotations of object instances. This time-consuming annotation process is sidestepped in weakly supervised learning. In this case, the supervised information is restricted to binary labels that indicate the absence/presence of object instances in the image, without their locations. We follow a multiple-instance learning approach that iteratively trains the detector and infers the object locations in the positive training images. Our main contribution is a multi-fold multiple instance learning procedure, which prevents training from prematurely locking onto erroneous object locations. This procedure is particularly important when using high-dimensional representations, such as Fisher vectors and convolutional neural network features. We also propose a window refinement method, which improves the localization accuracy by incorporating an objectness prior. We present a detailed experimental evaluation using the PASCAL VOC 2007 dataset, which verifies the effectiveness of our approach.",
"Localizing objects in cluttered backgrounds is a challenging task in weakly supervised localization. Due to large object variations in cluttered images, objects have large ambiguity with backgrounds. However, backgrounds contain useful latent information, e.g., the sky for aeroplanes. If we can learn this latent information, object-background ambiguity can be reduced to suppress the background. In this paper, we propose the latent category learning (LCL), which is an unsupervised learning problem given only image-level class labels. Firstly, inspired by the latent semantic discovery, we use the typical probabilistic Latent Semantic Analysis (pLSA) to learn the latent categories, which can represent objects, object parts or backgrounds. Secondly, to determine which category contains the target object, we propose a category selection method evaluating each category’s discrimination. We evaluate the method on the PASCAL VOC 2007 database and ILSVRC 2013 detection challenge. On VOC 2007, the proposed method yields the annotation accuracy of 48 , which outperforms previous results by 10 . More importantly, we achieve the detection average precision of 30.9 , which improves previous results by 8 and can be competitive with the supervised deformable part model (DPM) 5.0 baseline 33.7 . On ILSVRC 2013 detection, the method yields the precision of 6.0 , which is also competitive with the DPM 5.0."
]
} |
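The multi-fold multiple-instance learning procedure summarized in the second abstract above (re-localizing windows in a held-out fold using a detector trained on the remaining folds, so training never locks onto an image's own erroneous locations) can be sketched in a few lines. This is a toy illustration, not the authors' implementation: `train_detector` is a stand-in linear scorer (difference of class means) and the candidate-window features are synthetic.

```python
import random

def train_detector(pos_feats, neg_feats):
    # Toy linear "detector": difference of class means as the weight vector.
    d = len(neg_feats[0])
    mu_pos = [sum(f[i] for f in pos_feats) / len(pos_feats) for i in range(d)]
    mu_neg = [sum(f[i] for f in neg_feats) / len(neg_feats) for i in range(d)]
    return [p - n for p, n in zip(mu_pos, mu_neg)]

def best_window(w, windows):
    # Index of the highest-scoring candidate window under detector w.
    return max(range(len(windows)),
               key=lambda j: sum(a * b for a, b in zip(w, windows[j])))

def multifold_mil(pos_images, neg_feats, k=3, iters=5, seed=0):
    # pos_images[i] is a list of candidate-window feature vectors for image i.
    rng = random.Random(seed)
    sel = [rng.randrange(len(wins)) for wins in pos_images]  # random init
    folds = [list(range(i, len(pos_images), k)) for i in range(k)]
    for _ in range(iters):
        for fold in folds:
            # Train on the selected windows of the *other* folds only...
            train_idx = [i for i in range(len(pos_images)) if i not in fold]
            w = train_detector([pos_images[i][sel[i]] for i in train_idx],
                               neg_feats)
            # ...then re-localize the held-out fold, so no image is scored
            # by a detector trained on its own current window.
            for i in fold:
                sel[i] = best_window(w, pos_images[i])
    return sel
```

On a synthetic setup where every image contains one "object" window and one background-like window, the loop settles on the object windows within a couple of passes.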
1708.02281 | 2742674378 | We consider Berry's random planar wave model (1977) for a positive Laplace eigenvalue @math , both in the real and complex case, and prove limit theorems for the nodal statistics associated with a smooth compact domain, in the high-energy limit ( @math ). Our main result is that both the nodal length (real case) and the number of nodal intersections (complex case) verify a Central Limit Theorem, which is in sharp contrast with the non-Gaussian behaviour observed for real and complex arithmetic random waves on the flat @math -torus, see (2016) and (2016). Our findings can be naturally reformulated in terms of the nodal statistics of a single random wave restricted to a compact domain diverging to the whole plane. As such, they can be fruitfully combined with the recent results by Canzani and Hanin (2016), in order to show that, at any point of isotropic scaling and for energy levels diverging sufficently fast, the nodal length of any Gaussian pullback monochromatic wave verifies a central limit theorem with the same scaling as Berry's model. As a remarkable byproduct of our analysis, we rigorously confirm the asymptotic behaviour for the variances of the nodal length and of the number of nodal intersections of isotropic random waves, as derived in Berry (2002). | Phase singularities of complex arithmetic random waves . For @math , let @math indicate an independent copy of the arithmetic random wave @math defined in the previous paragraph. In @cite_41 , the distribution of the cardinality @math of the set of nodal intersections @math was investigated. One has that @math while the asymptotic variance, as @math , is @math Also in this case the asymptotic distribution is non-Gaussian (and non-universal), indeed for @math such that @math and @math , one has that where @math , @math and @math are independent random variables such that @math while @math (where @math are i.i.d. standard Gaussian random variables). | {
"cite_N": [
"@cite_41"
],
"mid": [
"2510611518"
],
"abstract": [
"Complex arithmetic random waves are stationary Gaussian complex-valued solutions of the Helmholtz equation on the two-dimensional flat torus. We use Wiener-It ^o chaotic expansions in order to derive a complete characterization of the second order high-energy behaviour of the total number of phase singularities of these functions. Our main result is that, while such random quantities verify a universal law of large numbers, they also exhibit non-universal and non-central second order fluctuations that are dictated by the arithmetic nature of the underlying spectral measures. Such fluctuations are qualitatively consistent with the cancellation phenomena predicted by Berry (2002) in the case of complex random waves on compact planar domains. Our results extend to the complex setting recent pathbreaking findings by Rudnick and Wigman (2008), Krishnapur, Kurlberg and Wigman (2013) and Marinucci, Peccati, Rossi and Wigman (2016). The exact asymptotic characterization of the variance is based on a fine analysis of the Kac-Rice kernel around the origin, as well as on a novel use of combinatorial moment formulae for controlling long-range weak correlations."
]
} |
1708.02281 | 2742674378 | We consider Berry's random planar wave model (1977) for a positive Laplace eigenvalue @math , both in the real and complex case, and prove limit theorems for the nodal statistics associated with a smooth compact domain, in the high-energy limit ( @math ). Our main result is that both the nodal length (real case) and the number of nodal intersections (complex case) verify a Central Limit Theorem, which is in sharp contrast with the non-Gaussian behaviour observed for real and complex arithmetic random waves on the flat @math -torus, see (2016) and (2016). Our findings can be naturally reformulated in terms of the nodal statistics of a single random wave restricted to a compact domain diverging to the whole plane. As such, they can be fruitfully combined with the recent results by Canzani and Hanin (2016), in order to show that, at any point of isotropic scaling and for energy levels diverging sufficently fast, the nodal length of any Gaussian pullback monochromatic wave verifies a central limit theorem with the same scaling as Berry's model. As a remarkable byproduct of our analysis, we rigorously confirm the asymptotic behaviour for the variances of the nodal length and of the number of nodal intersections of isotropic random waves, as derived in Berry (2002). | Nodal length of random spherical harmonics . The Laplacian eigenvalues on the two-dimensional unit sphere are of the form @math , where @math , and the multiplicity of the @math -th eigenvalue is @math . The @math -th random eigenfunction (random spherical harmonic) on @math is a centered Gaussian field whose covariance kernel is @math where @math denotes the @math -th Legendre polynomial and @math the geodesic distance between the two points @math and @math (see @cite_11 ). 
The mean of the nodal length @math was computed in @cite_19 as @math while the asymptotic behaviour of the variance was derived in @cite_26 : as @math , @math The second order fluctuations of @math are Gaussian; more precisely, in @cite_0 it was shown that @math where @math is a standard Gaussian random variable. | {
"cite_N": [
"@cite_0",
"@cite_19",
"@cite_26",
"@cite_11"
],
"mid": [
"2616288315",
"2561781306",
"1808899688",
"1530353871"
],
"abstract": [
"We study the asymptotic behaviour of the nodal length of random @math -spherical harmonics @math of high degree @math , i.e. the length of their zero set @math . It is found that the nodal lengths are asymptotically equivalent, in the @math -sense, to the \"sample trispectrum\", i.e., the integral of @math , the fourth-order Hermite polynomial of the values of @math . A particular by-product of this is a Quantitative Central Limit Theorem (in Wasserstein distance) for the nodal length, in the high energy limit.",
"",
"Using the multiplicities of the Laplace eigenspace on the sphere (the space of spherical harmonics) we endow the space with Gaussian probability measure. This induces a notion of random Gaussian spherical harmonics of degree n having Laplace eigenvalue E = n(n + 1). We study the length distribution of the nodal lines of random spherical harmonics.",
"Preface 1. Introduction 2. Background results in representation theory 3. Representations of SO(3) and harmonic analysis on S2 4. Background results in probability and graphical methods 5. Spectral representations 6. Characterizations of isotropy 7. Limit theorems for Gaussian subordinated random fields 8. Asymptotics for the sample power spectrum 9. Asymptotics for sample bispectra 10. Spherical needlets and their asymptotic properties 11. Needlets estimation of power spectrum and bispectrum 12. Spin random fields Appendix Bibliography Index."
]
} |
1708.01910 | 2744663655 | Although the definition of what empathetic preferences exactly are is still evolving, there is a general consensus in the psychology, science and engineering communities that the evolution toward players' behaviors in interactive decision-making problems will be accompanied by the exploitation of their empathy, sympathy, compassion, antipathy, spitefulness, selfishness, altruism, and self-abnegating states in the payoffs. In this article, we study one-shot bimatrix games from a psychological game theory viewpoint. A new empathetic payoff model is calculated to fit empirical observations and both pure and mixed equilibria are investigated. For a realized empathy structure, the bimatrix game is categorized among four generic class of games. Number of interesting results are derived. A notable level of involvement can be observed in the empathetic one-shot game compared the non-empathetic one and this holds even for games with dominated strategies. Partial altruism can help in breaking symmetry, in reducing payoff-inequality and in selecting social welfare and more efficient outcomes. By contrast, partial spite and self-abnegating may worsen payoff equity. Empathetic evolutionary game dynamics are introduced to capture the resulting empathetic evolutionarily stable strategies under wide range of revision protocols including Brown-von Neumann-Nash, Smith, imitation, replicator, and hybrid dynamics. Finally, mutual support and Berge solution are investigated and their connection with empathetic preferences are established. We show that pure altruism is logically inconsistent, only by balancing it with some partial selfishness does it create a consistent psychology. 
| In the 1880s, Edgeworth [pages 102-104] introduced the idea of other-regarding payoff transformations as follows: player @math maximizes the payoff function @math where @math Here @math and @math represent relative weights that @math assigns to @math (own) and @math (the other player's) non-empathetic payoff, respectively. The work in @cite_9 proposed an interesting model of partial altruism as an explanation for the results of public good contribution games, where a player's utility is a linear function of both the player's own monetary payoff and the other players' payoffs. The work in @cite_4 @cite_2 proposed a model that uses both spite and altruism, where the adjusted utility of a player reflects the player's own utility and his regard for other players. A model of fairness is proposed in @cite_10 where in addition to purely selfish players, there are players who dislike inequitable outcomes. | {
"cite_N": [
"@cite_10",
"@cite_9",
"@cite_4",
"@cite_2"
],
"mid": [
"1514016864",
"1556452705",
"2099758346",
"2292444317"
],
"abstract": [
"We consider a problem at the intersection of distributed computing and game theory, namely: Is it possible to achieve the \"windfall of malice\" even without the actual presence of malicious players? Our answer to this question is \"Yes and No\". Our positive result is that for the virus inoculation game, it is possible to achieve the windfall of malice by use of a mediator. Our negative result is that for symmetric congestion games that are known to have a windfall of malice, it is not possible to design a mediator that achieves this windfall. In proving these two results, we develop novel techniques for mediator design that we believe will be helpful for creating non-trivial mediators to improve social welfare in a large class of games.",
"Environments with public goods are a wonderful playground for those interested in delicate experimental problems, serious theoretical challenges, and difficult mechanism design issues. A review is made of various public goods experiments. It is found that the public goods environment is a very sensitive one with much that can affect outcomes but are difficult to control. The many factors interact with each other in unknown ways. Nothing is known for sure. Environments with public goods present a serious challenge even to skilled experimentalists and many opportunities for imaginative work.",
"We examine a simple theory of altruism in which players' payoffs are linear in their own monetary income and their opponents. The weight on the opponent's income is private information and varies in the population, depending, moreover, on what the opponent's coefficient is believed to be. Using results of ultimatum experiments and the final round of a centipede experiment, we are able to pin down relatively accurately what the distribution of altruism (and spite) in the population is. This distribution is then used with a reasonable degree of success to explain the results of the earlier rounds of centipede and the results of some public goods contribution games. In addition, we show that in a market game where the theory of selfish players does quite well, the theory of altruism makes exactly the same predictions as the theory of selfish players. (Copyright: Elsevier)",
"With the explosive growth in wireless usage due in part to smart phones, tablets and a growing applications developer base, wireless operators are constantly looking for ways to increase the spectral efficiency of their networks with low power. Multiple input multiple output (MIMO) technology, which made its first broad commercial appearance in IEEE 802.11 systems, is now gaining substantial attention in mobile wireless wide area network with the launch of interoperable implementation of IEEE 802.16 and Long-Term Evolution networks. MIMO is a key technology in these networks which substantially improves network throughput, capacity and coverage. In this paper we investigate throughput sharing strategies in distributed massive MIMO network games where the users take into consideration not only their own throughout and risk but also the throughput of their neighborhood and subnetwork users. We provide equilibrium analysis and deduce the ex-post performance and network fairness. We show that, in presence of altruistic users, the sharing strategies improve the throughput fairness when the geodesic distance of the network is not large."
]
} |
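The other-regarding payoff transformation discussed in this related-work section (a player's adjusted payoff is a weighted mix of their own and the other player's material payoff, with positive cross-weights for altruism and negative ones for spite) can be illustrated on a bimatrix game. The formulas are masked as @math above, so the following is an illustrative instantiation: the empathy matrix `lam` and the prisoner's-dilemma payoffs are hypothetical values, not taken from the paper.

```python
def empathetic_payoffs(A, B, lam):
    """Edgeworth-style transformation: each player's payoff is mixed with the
    other player's, weighted by lam = [[l11, l12], [l21, l22]]
    (l12 > 0: row player's altruism, l12 < 0: spite)."""
    n, m = len(A), len(A[0])
    A2 = [[lam[0][0] * A[i][j] + lam[0][1] * B[i][j] for j in range(m)]
          for i in range(n)]
    B2 = [[lam[1][1] * B[i][j] + lam[1][0] * A[i][j] for j in range(m)]
          for i in range(n)]
    return A2, B2

def pure_nash(A, B):
    """Pure Nash equilibria: cells where each player's strategy is a best
    response to the other's."""
    n, m = len(A), len(A[0])
    return [(i, j) for i in range(n) for j in range(m)
            if A[i][j] >= max(A[k][j] for k in range(n))
            and B[i][j] >= max(B[i][l] for l in range(m))]
```

With selfish weights (identity `lam`) the unique pure equilibrium of a prisoner's dilemma is mutual defection; with partial altruism (off-diagonal weight 0.9) it shifts to mutual cooperation, consistent with the section's observation that partial altruism can select more efficient outcomes.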
1708.02191 | 2743242403 | Despite rapid advances in face recognition, there remains a clear gap between the performance of still image-based face recognition and video-based face recognition, due to the vast difference in visual quality between the domains and the difficulty of curating diverse large-scale video datasets. This paper addresses both of those challenges, through an image to video feature-level domain adaptation approach, to learn discriminative video frame representations. The framework utilizes large-scale unlabeled video data to reduce the gap between different domains while transferring discriminative knowledge from large-scale labeled still images. Given a face recognition network that is pretrained in the image domain, the adaptation is achieved by (i) distilling knowledge from the network to a video adaptation network through feature matching, (ii) performing feature restoration through synthetic data augmentation and (iii) learning a domain-invariant feature through a domain adversarial discriminator. We further improve performance through a discriminator-guided feature fusion that boosts high-quality frames while eliminating those degraded by video domain-specific factors. Experiments on the YouTube Faces and IJB-A datasets demonstrate that each module contributes to our feature-level domain adaptation framework and substantially improves video face recognition performance to achieve state-of-the-art accuracy. We demonstrate qualitatively that the network learns to suppress diverse artifacts in videos such as pose, illumination or occlusion without being explicitly trained for them. | Our work falls into the class of problems on unsupervised domain adaptation @cite_41 @cite_1 @cite_39 @cite_5 that concerns adapting a classifier trained on a source domain (e.g., web images) to a target domain (e.g., video) where there is no labeled training data for the target domain to fine-tune the classifier.
Among those, feature space alignment and domain adversarial learning methods are closely related to our approach. | {
"cite_N": [
"@cite_41",
"@cite_5",
"@cite_1",
"@cite_39"
],
"mid": [
"2096943734",
"1882958252",
"",
"1565327149"
],
"abstract": [
"Transfer learning is established as an effective technology in computer vision for leveraging rich labeled data in the source domain to build an accurate classifier for the target domain. However, most prior methods have not simultaneously reduced the difference in both the marginal distribution and conditional distribution between domains. In this paper, we put forward a novel transfer learning approach, referred to as Joint Distribution Adaptation (JDA). Specifically, JDA aims to jointly adapt both the marginal distribution and conditional distribution in a principled dimensionality reduction procedure, and construct new feature representation that is effective and robust for substantial distribution difference. Extensive experiments verify that JDA can significantly outperform several state-of-the-art methods on four types of cross-domain image classification problems.",
"Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of \"deep\" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets.",
"",
"Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark. Fine-tuning deep models in a new domain can require a significant amount of data, which for many applications is simply not available. We propose a new CNN architecture which introduces an adaptation layer and an additional domain confusion loss, to learn a representation that is both semantically meaningful and domain invariant. We additionally show that a domain confusion metric can be used for model selection to determine the dimension of an adaptation layer and the best position for the layer in the CNN architecture. Our proposed adaptation method offers empirical performance which exceeds previously published results on a standard benchmark visual domain adaptation task."
]
} |
1708.02191 | 2743242403 | Despite rapid advances in face recognition, there remains a clear gap between the performance of still image-based face recognition and video-based face recognition, due to the vast difference in visual quality between the domains and the difficulty of curating diverse large-scale video datasets. This paper addresses both of those challenges, through an image to video feature-level domain adaptation approach, to learn discriminative video frame representations. The framework utilizes large-scale unlabeled video data to reduce the gap between different domains while transferring discriminative knowledge from large-scale labeled still images. Given a face recognition network that is pretrained in the image domain, the adaptation is achieved by (i) distilling knowledge from the network to a video adaptation network through feature matching, (ii) performing feature restoration through synthetic data augmentation and (iii) learning a domain-invariant feature through a domain adversarial discriminator. We further improve performance through a discriminator-guided feature fusion that boosts high-quality frames while eliminating those degraded by video domain-specific factors. Experiments on the YouTube Faces and IJB-A datasets demonstrate that each module contributes to our feature-level domain adaptation framework and substantially improves video face recognition performance to achieve state-of-the-art accuracy. We demonstrate qualitatively that the network learns to suppress diverse artifacts in videos such as pose, illumination or occlusion without being explicitly trained for them. | The basic idea of feature space alignment is to minimize the distance between domains in the feature space through learning a transformation of source to target features @cite_1 @cite_36 @cite_24 @cite_7 @cite_10 , or a joint adaptation layer that embeds features into a new domain-invariant space @cite_41 @cite_39 . 
Specifically, @cite_39 use two CNNs for the source and target domains with shared weights, and the network is optimized for the classification loss in the source domain as well as the domain difference measured by the maximum mean discrepancy (MMD) metric. @cite_38 consider a similar network architecture for cross-modality supervision transfer. | {
"cite_N": [
"@cite_38",
"@cite_7",
"@cite_36",
"@cite_41",
"@cite_1",
"@cite_39",
"@cite_24",
"@cite_10"
],
"mid": [
"2951874610",
"",
"1722318740",
"2096943734",
"",
"1565327149",
"2104068492",
""
],
"abstract": [
"In this work we propose a technique that transfers supervision between images from different modalities. We use learned representations from a large labeled modality as a supervisory signal for training representations for a new unlabeled paired modality. Our method enables learning of rich representations for unlabeled modalities and can be used as a pre-training procedure for new modalities with limited labeled data. We show experimental results where we transfer supervision from labeled RGB images to unlabeled depth and optical flow images and demonstrate large improvements for both these cross modal supervision transfers. Code, data and pre-trained models are available at this https URL",
"",
"Domain adaptation is an important emerging topic in computer vision. In this paper, we present one of the first studies of domain shift in the context of object recognition. We introduce a method that adapts object models acquired in a particular visual domain to new imaging conditions by learning a transformation that minimizes the effect of domain-induced changes in the feature distribution. The transformation is learned in a supervised manner and can be applied to categories for which there are no labeled examples in the new domain. While we focus our evaluation on object recognition tasks, the transform-based adaptation technique we develop is general and could be applied to nonimage data. Another contribution is a new multi-domain object database, freely available for download. We experimentally demonstrate the ability of our method to improve recognition on categories with few or no target domain labels and moderate to large changes in the imaging conditions.",
"Transfer learning is established as an effective technology in computer vision for leveraging rich labeled data in the source domain to build an accurate classifier for the target domain. However, most prior methods have not simultaneously reduced the difference in both the marginal distribution and conditional distribution between domains. In this paper, we put forward a novel transfer learning approach, referred to as Joint Distribution Adaptation (JDA). Specifically, JDA aims to jointly adapt both the marginal distribution and conditional distribution in a principled dimensionality reduction procedure, and construct new feature representation that is effective and robust for substantial distribution difference. Extensive experiments verify that JDA can significantly outperform several state-of-the-art methods on four types of cross-domain image classification problems.",
"",
"Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark. Fine-tuning deep models in a new domain can require a significant amount of data, which for many applications is simply not available. We propose a new CNN architecture which introduces an adaptation layer and an additional domain confusion loss, to learn a representation that is both semantically meaningful and domain invariant. We additionally show that a domain confusion metric can be used for model selection to determine the dimension of an adaptation layer and the best position for the layer in the CNN architecture. Our proposed adaptation method offers empirical performance which exceeds previously published results on a standard benchmark visual domain adaptation task.",
"In this paper, we introduce a new domain adaptation (DA) algorithm where the source and target domains are represented by subspaces described by eigenvectors. In this context, our method seeks a domain adaptation solution by learning a mapping function which aligns the source subspace with the target one. We show that the solution of the corresponding optimization problem can be obtained in a simple closed form, leading to an extremely fast algorithm. We use a theoretical result to tune the unique hyper parameter corresponding to the size of the subspaces. We run our method on various datasets and show that, despite its intrinsic simplicity, it outperforms state of the art DA methods.",
""
]
} |
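The MMD metric mentioned in the related-work text above (used to measure the difference between source and target feature distributions) has a simple biased empirical estimator. A sketch with an RBF kernel; the kernel choice and bandwidth `gamma` are illustrative, not the setting used in the cited work:

```python
import numpy as np

def mmd2(X, Y, gamma=1.0):
    """Biased empirical estimate of squared Maximum Mean Discrepancy between
    samples X (e.g., source features) and Y (e.g., target features), with the
    RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def k(A, B):
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
        return np.exp(-gamma * d)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()
```

Two samples from the same distribution give an estimate near zero, while a shifted distribution gives a clearly larger value; minimizing this quantity over a learned feature map is the alignment idea described above.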
1708.02191 | 2743242403 | Despite rapid advances in face recognition, there remains a clear gap between the performance of still image-based face recognition and video-based face recognition, due to the vast difference in visual quality between the domains and the difficulty of curating diverse large-scale video datasets. This paper addresses both of those challenges, through an image to video feature-level domain adaptation approach, to learn discriminative video frame representations. The framework utilizes large-scale unlabeled video data to reduce the gap between different domains while transferring discriminative knowledge from large-scale labeled still images. Given a face recognition network that is pretrained in the image domain, the adaptation is achieved by (i) distilling knowledge from the network to a video adaptation network through feature matching, (ii) performing feature restoration through synthetic data augmentation and (iii) learning a domain-invariant feature through a domain adversarial discriminator. We further improve performance through a discriminator-guided feature fusion that boosts high-quality frames while eliminating those degraded by video domain-specific factors. Experiments on the YouTube Faces and IJB-A datasets demonstrate that each module contributes to our feature-level domain adaptation framework and substantially improves video face recognition performance to achieve state-of-the-art accuracy. We demonstrate qualitatively that the network learns to suppress diverse artifacts in videos such as pose, illumination or occlusion without being explicitly trained for them. | For feature-level domain adaptation using adversarial learning, domain adversarial neural network (DANN) @cite_5 appends domain classifier to high-level features and introduces a gradient reversal layer for end-to-end learning via backpropagation while avoiding cumbersome minimax optimization of adversarial training. 
The goal of DANN is to transfer a discriminative classifier from the source to the target domain, which implicitly assumes the label spaces of the two domains are equivalent (or at least that the label space of the target domain is a subset of that of the source domain). Our work instead transfers discriminative representations, and hence there is no such restriction on the label space definition. In addition, we propose domain-specific synthetic data augmentation to further enhance the performance of domain adaptation, and use discriminator outputs for feature fusion. | {
"cite_N": [
"@cite_5"
],
"mid": [
"1882958252"
],
"abstract": [
"Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of \"deep\" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets."
]
} |
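The gradient reversal layer described in the DANN abstract above (an identity map in the forward pass whose gradient is multiplied by a negative constant in the backward pass) is simple enough to sketch without a deep-learning framework. A schematic stand-in, not the original implementation:

```python
class GradReverse:
    """Gradient reversal layer: identity forward, gradient scaled by -lam
    backward. Placed between the feature extractor and the domain classifier,
    it makes the extractor *maximize* the domain-confusion loss that the
    classifier minimizes, without explicit minimax optimization."""

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # identity in the forward pass

    def backward(self, grad_output):
        return [-self.lam * g for g in grad_output]  # reversed, scaled gradient
```

Because the reversal happens inside backpropagation, the whole network can be trained end-to-end with standard SGD, which is exactly the convenience noted in the abstract.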
1903.04933 | 2922386270 | Autoregressive generative models of images tend to be biased towards capturing local structure, and as a result they often produce samples which are lacking in terms of large-scale coherence. To address this, we propose two methods to learn discrete representations of images which abstract away local detail. We show that autoregressive models conditioned on these representations can produce high-fidelity reconstructions of images, and that we can train autoregressive priors on these representations that produce samples with large-scale coherence. We can recursively apply the learning procedure, yielding a hierarchy of progressively more abstract image representations. We train hierarchical class-conditional autoregressive models on the ImageNet dataset and demonstrate that they are able to generate realistic images at resolutions of 128 @math 128 and 256 @math 256 pixels. | train a hierarchical autoregressive model of musical audio signals by stacking autoregressive discrete autoencoders. However, the autoencoders are trained end-to-end, which makes them prone to the issues we identified in . This makes training the second level autoencoder cumbersome and fragile, requiring expensive population-based training @cite_3 or alternative quantisation strategies to succeed. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2770298516"
],
"abstract": [
"Neural networks dominate the modern machine learning landscape, but their training and success still suffer from sensitivity to empirical choices of hyperparameters such as model architecture, loss function, and optimisation algorithm. In this work we present , a simple asynchronous optimisation algorithm which effectively utilises a fixed computational budget to jointly optimise a population of models and their hyperparameters to maximise performance. Importantly, PBT discovers a schedule of hyperparameter settings rather than following the generally sub-optimal strategy of trying to find a single fixed set to use for the whole course of training. With just a small modification to a typical distributed hyperparameter training framework, our method allows robust and reliable training of models. We demonstrate the effectiveness of PBT on deep reinforcement learning problems, showing faster wall-clock convergence and higher final performance of agents by optimising over a suite of hyperparameters. In addition, we show the same method can be applied to supervised learning for machine translation, where PBT is used to maximise the BLEU score directly, and also to training of Generative Adversarial Networks to maximise the Inception score of generated images. In all cases PBT results in the automatic discovery of hyperparameter schedules and model selection which results in stable training and better final performance."
]
} |
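The population-based training cited above (@cite_3) combines an exploit step (copy a better worker) with an explore step (perturb the copied hyperparameters). A toy sketch under stated assumptions: the population, the single scalar "hyperparameter", and the scoring objective are all hypothetical simplifications, not the published implementation.

```python
import random

def pbt_step(population, explore_scale=1.2):
    """One simplified, synchronous PBT sweep: the bottom half of the
    population exploits (copies) the top half, then explores by
    perturbing the copied hyperparameter up or down."""
    # population: list of dicts with 'hyper' (e.g. a learning rate) and 'score'
    ranked = sorted(population, key=lambda w: w["score"], reverse=True)
    half = len(ranked) // 2
    for loser, winner in zip(ranked[half:], ranked[:half]):
        loser["hyper"] = winner["hyper"]  # exploit: copy the better worker
        factor = explore_scale if random.random() < 0.5 else 1.0 / explore_scale
        loser["hyper"] *= factor          # explore: perturb the hyperparameter
    return ranked

# toy run: score peaks when the "learning rate" is near 1e-2
population = [{"hyper": 10 ** random.uniform(-4, -1), "score": 0.0} for _ in range(4)]
for step in range(3):
    for w in population:
        w["score"] = -abs(w["hyper"] - 1e-2)
    population = pbt_step(population)
```

The real algorithm runs asynchronously and also copies model weights alongside hyperparameters; this sketch only illustrates why a *schedule* of hyperparameters emerges rather than a single fixed setting.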
1903.04933 | 2922386270 | Autoregressive generative models of images tend to be biased towards capturing local structure, and as a result they often produce samples which are lacking in terms of large-scale coherence. To address this, we propose two methods to learn discrete representations of images which abstract away local detail. We show that autoregressive models conditioned on these representations can produce high-fidelity reconstructions of images, and that we can train autoregressive priors on these representations that produce samples with large-scale coherence. We can recursively apply the learning procedure, yielding a hierarchy of progressively more abstract image representations. We train hierarchical class-conditional autoregressive models on the ImageNet dataset and demonstrate that they are able to generate realistic images at resolutions of 128 @math 128 and 256 @math 256 pixels. | Masked self-prediction is closely related to representation learning methods such as context prediction @cite_19 and context encoders @cite_38 , which also rely on predicting pixels from other nearby pixels. Contrastive predictive coding @cite_39 on the other hand relies on prediction in the feature domain to extract structure that varies predictably across longer ranges. Although the motivation behind approaches such as these is usually to extract high-level, semantically meaningful features, our goal is different: we want to remove some of the local detail to make the task of modelling large-scale structure easier. Our representations also need to balance abstraction with reconstruction, so they need to retain enough information from the input. | {
"cite_N": [
"@cite_19",
"@cite_39",
"@cite_38"
],
"mid": [
"2950187998",
"2842511635",
"2342877626"
],
"abstract": [
"This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. We demonstrate that the feature representation learned using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations.",
"While supervised learning has enabled great progress in many applications, unsupervised learning has not seen such widespread adoption, and remains an important and challenging endeavor for artificial intelligence. In this work, we propose a universal unsupervised learning approach to extract useful representations from high-dimensional data, which we call Contrastive Predictive Coding. The key insight of our model is to learn such representations by predicting the future in latent space by using powerful autoregressive models. We use a probabilistic contrastive loss which induces the latent space to capture information that is maximally useful to predict future samples. It also makes the model tractable by using negative sampling. While most prior work has focused on evaluating representations for a particular modality, we demonstrate that our approach is able to learn useful representations achieving strong performance on four distinct domains: speech, images, text and reinforcement learning in 3D environments.",
"We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders -- a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods."
]
} |
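The contrastive predictive coding objective discussed in the row above (@cite_39) reduces to an InfoNCE loss: the context embedding should score its true future sample higher than negative samples. A minimal pure-Python sketch; the list-based "embeddings" are toy stand-ins for real encoder outputs.

```python
import math

def info_nce(context, positive, negatives):
    """Toy InfoNCE loss: softmax cross-entropy over dot-product scores,
    with the positive (true future) sample as the target class."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    scores = [dot(context, positive)] + [dot(context, n) for n in negatives]
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    return -math.log(exps[0] / sum(exps))
```

Minimising this loss pushes the context representation to capture whatever is predictable about future samples, which is the sense in which CPC extracts slowly varying structure.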
1903.04820 | 2959808785 | In the last few years there has been a growing interest in Human Activity Recognition (HAR) topic. Sensor-based HAR approaches, in particular, has been gaining more popularity owing to their privacy preserving nature. Furthermore, due to the widespread accessibility of the internet, a broad range of streaming-based applications such as online HAR, has emerged over the past decades. However, proposing sufficiently robust online activity recognition approach in smart environment setting is still considered as a remarkable challenge. This paper presents a novel online application of Hierarchical Hidden Markov Model in order to detect the current activity on the live streaming of sensor events. Our method consists of two phases. In the first phase, data stream is segmented based on the beginning and ending of the activity patterns. Also, on-going activity is reported with every receiving observation. This phase is implemented using Hierarchical Hidden Markov models. The second phase is devoted to the correction of the provided label for the segmented data stream based on statistical features. The proposed model can also discover the activities that happen during another activity - so-called interrupted activities. After detecting the activity pane, the predicted label will be corrected utilizing statistical features such as time of day at which the activity happened and the duration of the activity. We validated our proposed method by testing it against two different smart home datasets and demonstrated its effectiveness, which is competing with the state-of-the-art methods. | To address this requirement, the authors in @cite_33 proposed a robust sensor model that extracts different features such as the duration of the activity. In this generalized model, the problem is independent of the sensor environment and can be implemented for different users. Moreover, evaluating such models inevitably requires enough correctly labeled data.
Nevertheless, manual labeling of sensor data is prohibitive, as it is highly time-consuming and often inaccurate. | {
"cite_N": [
"@cite_33"
],
"mid": [
"2021361613"
],
"abstract": [
"Smart home activity recognition systems can learn generalized models for common activities that span multiple environment settings and resident types."
]
} |
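The statistical features this row highlights (activity duration, time of day) are straightforward to extract from a segmented activity pane. A hypothetical sketch over (timestamp, sensor) event tuples; the field names and input format are illustrative assumptions, not the cited model's actual interface.

```python
from datetime import datetime

def segment_features(events):
    """Extract toy statistical features from one segmented activity pane.
    `events` is a list of (iso_timestamp, sensor_id) tuples."""
    times = [datetime.fromisoformat(t) for t, _ in events]
    start, end = min(times), max(times)
    return {
        "duration_s": (end - start).total_seconds(),  # activity duration
        "hour_of_day": start.hour,                    # coarse time-of-day bin
        "n_events": len(events),
        "sensors": sorted({s for _, s in events}),    # which sensors fired
    }
```

Features like these are environment-independent, which is what makes the generalized model transferable across homes and residents.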
1903.04820 | 2959808785 | In the last few years there has been a growing interest in Human Activity Recognition (HAR) topic. Sensor-based HAR approaches, in particular, has been gaining more popularity owing to their privacy preserving nature. Furthermore, due to the widespread accessibility of the internet, a broad range of streaming-based applications such as online HAR, has emerged over the past decades. However, proposing sufficiently robust online activity recognition approach in smart environment setting is still considered as a remarkable challenge. This paper presents a novel online application of Hierarchical Hidden Markov Model in order to detect the current activity on the live streaming of sensor events. Our method consists of two phases. In the first phase, data stream is segmented based on the beginning and ending of the activity patterns. Also, on-going activity is reported with every receiving observation. This phase is implemented using Hierarchical Hidden Markov models. The second phase is devoted to the correction of the provided label for the segmented data stream based on statistical features. The proposed model can also discover the activities that happen during another activity - so-called interrupted activities. After detecting the activity pane, the predicted label will be corrected utilizing statistical features such as time of day at which the activity happened and the duration of the activity. We validated our proposed method by testing it against two different smart home datasets and demonstrated its effectiveness, which is competing with the state-of-the-art methods. | Moreover, the way annotation has been done is often ignored, even though it introduces bias into the data. In the most common annotation approach, residents of the smart environment are asked to perform an activity, and the annotation is then done based on the activated sensors. Nonetheless, this approach may not be practical in all situations.
The labeling mechanism presented in @cite_27 is one example of existing solutions for automatically annotating sensor data. | {
"cite_N": [
"@cite_27"
],
"mid": [
"1604589174"
],
"abstract": [
"The pervasive sensing technologies found in smart homes offer unprecedented opportunities for providing health monitoring and assistance to individuals experiencing difficulties living independently at home. In order to monitor the functional health of smart home residents, we need to design technologies that recognize and track the activities that people perform at home. Machine learning techniques can perform this task, but the software algorithms rely upon large amounts of sample data that is correctly labeled with the corresponding activity. Labeling, or annotating, sensor data with the corresponding activity can be time consuming, may require input from the smart home resident, and is often inaccurate. Therefore, in this paper we investigate four alternative mechanisms for annotating sensor data with a corresponding activity label. We evaluate the alternative methods along the dimensions of annotation time, resident burden, and accuracy using sensor data collected in a real smart apartment."
]
} |
1903.04820 | 2959808785 | In the last few years there has been a growing interest in Human Activity Recognition (HAR) topic. Sensor-based HAR approaches, in particular, has been gaining more popularity owing to their privacy preserving nature. Furthermore, due to the widespread accessibility of the internet, a broad range of streaming-based applications such as online HAR, has emerged over the past decades. However, proposing sufficiently robust online activity recognition approach in smart environment setting is still considered as a remarkable challenge. This paper presents a novel online application of Hierarchical Hidden Markov Model in order to detect the current activity on the live streaming of sensor events. Our method consists of two phases. In the first phase, data stream is segmented based on the beginning and ending of the activity patterns. Also, on-going activity is reported with every receiving observation. This phase is implemented using Hierarchical Hidden Markov models. The second phase is devoted to the correction of the provided label for the segmented data stream based on statistical features. The proposed model can also discover the activities that happen during another activity - so-called interrupted activities. After detecting the activity pane, the predicted label will be corrected utilizing statistical features such as time of day at which the activity happened and the duration of the activity. We validated our proposed method by testing it against two different smart home datasets and demonstrated its effectiveness, which is competing with the state-of-the-art methods. | Much research on activity recognition has been done using pre-segmented data, meaning the beginning and end of each activity is pre-determined in the dataset @cite_12 @cite_32 .
Such approaches are far from realistic real-world settings and are not applicable to online applications, since the beginning and end of activities are not determined in a stream of data. Researchers in @cite_14 @cite_24 developed methods for sensor stream segmentation which bring activity recognition systems closer to those of the actual world. | {
"cite_N": [
"@cite_24",
"@cite_14",
"@cite_32",
"@cite_12"
],
"mid": [
"2086385378",
"",
"2157261647",
"2050836750"
],
"abstract": [
"Approaches and algorithms for activity recognition have recently made substantial progress due to advancements in pervasive and mobile computing, smart environments and ambient assisted living. Nevertheless, it is still difficult to achieve real-time continuous activity recognition as sensor data segmentation remains a challenge. This paper presents a novel approach to real-time sensor data segmentation for continuous activity recognition. Central to the approach is a dynamic segmentation model, based on the notion of varied time windows, which can shrink and expand the segmentation window size by using temporal information of sensor data and activities as well as the state of activity recognition. The paper first analyzes the characteristics of activities of daily living from which the segmentation model that is applicable to a wide range of activity recognition scenarios is motivated and developed. It then describes the working mechanism and relevant algorithms of the model in the context of knowledge-driven activity recognition based on ontologies. The presented approach has been implemented in a prototype system and evaluated in a number of experiments. Results have shown average recognition accuracy above 83% in all experiments for real time activity recognition, which proves the approach and the underlying model.",
"",
"By 2050, about a third of the French population will be over 65. To face this modification of the population, the current studies of our laboratory focus on the monitoring of elderly people at home. This aims at detect, as early as possible, a loss of autonomy by objectivizing criterions such as the international ADL or the French AGGIR scales implementing automatic classification of the different Activities of Daily Living. A Health Smart Home is used to achieve this goal. This flat includes different sensors. The data from the various sensors were used to classify each temporal frame into one of the activities of daily living that has been previously learnt (seven activities: hygiene, toilets, eating, resting, sleeping, communication and dressing undressing). This is done using Support Vector Machines. We performed an experimentation with 13 young and healthy subjects to learn the model of activities and then we tested the classification algorithm (cross-validation) on real data.",
"Although researchers have developed robust approaches for estimating, location, and user identity, estimating user activities has proven much more challenging. Human activities are so complex and dynamic that it's often unclear what information is even relevant for modeling activities. Robust approaches to recognize user activities requires identifying the relevant information to be sensed and the appropriate sensing technologies. In our effort to develop an approach for automatically estimating hospital-staff activities, we trained a discrete hidden Markov model (HMM) to map contextual information to a user activity. We trained the model and evaluated it using data captured from almost 200 hours of detailed observation and documentation of hospital workers. In this article, we discuss our approach, the results, and how activity recognition could empower our vision of the hospital as a smart environment."
]
} |
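The varied-time-window idea from @cite_24 above can be approximated by letting a segment grow while events keep arriving and closing it when the inter-event gap becomes too large. This is a toy sketch only: the single `max_gap_s` rule is an assumption standing in for the paper's richer shrink/expand model.

```python
def dynamic_segments(events, max_gap_s=60.0):
    """Segment a sensor event stream by temporal proximity.
    `events` is a time-ordered list of (timestamp_s, sensor_id) tuples."""
    segments, current = [], []
    for ts, sensor in events:
        if current and ts - current[-1][0] > max_gap_s:
            segments.append(current)  # gap too large: close the window
            current = []
        current.append((ts, sensor))
    if current:
        segments.append(current)      # flush the trailing open window
    return segments
```

Unlike a fixed-size sliding window, the window length here adapts to each activity's own event density, which is the property that makes dynamic segmentation suitable for streaming recognition.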
1903.04820 | 2959808785 | In the last few years there has been a growing interest in Human Activity Recognition (HAR) topic. Sensor-based HAR approaches, in particular, has been gaining more popularity owing to their privacy preserving nature. Furthermore, due to the widespread accessibility of the internet, a broad range of streaming-based applications such as online HAR, has emerged over the past decades. However, proposing sufficiently robust online activity recognition approach in smart environment setting is still considered as a remarkable challenge. This paper presents a novel online application of Hierarchical Hidden Markov Model in order to detect the current activity on the live streaming of sensor events. Our method consists of two phases. In the first phase, data stream is segmented based on the beginning and ending of the activity patterns. Also, on-going activity is reported with every receiving observation. This phase is implemented using Hierarchical Hidden Markov models. The second phase is devoted to the correction of the provided label for the segmented data stream based on statistical features. The proposed model can also discover the activities that happen during another activity - so-called interrupted activities. After detecting the activity pane, the predicted label will be corrected utilizing statistical features such as time of day at which the activity happened and the duration of the activity. We validated our proposed method by testing it against two different smart home datasets and demonstrated its effectiveness, which is competing with the state-of-the-art methods. | Despite all the research on HAR, open challenges such as overlapping or concurrent activities have yet to be solved. Overlap denotes the phenomenon in which different activity classes activate the same set of sensor events, which makes overlapping activities hard to discriminate based only on the types of sensor events they have triggered @cite_21 .
AALO is an activity recognition system for the presence of overlapped activities which works based on Active Learning. It can recognize overlapped activities through a preprocessing step and an item-set mining phase @cite_25 . A key limitation of this work is that it cannot recognize overlapping activities that happen in the same location. In addition, it does not operate on an online data stream. | {
"cite_N": [
"@cite_21",
"@cite_25"
],
"mid": [
"2051332471",
"2105414191"
],
"abstract": [
"We propose an efficient frequent activity patterns mining in smart environments.We build an accurate activity classifier based on the mined frequent patterns.We distinguish overlapped activities with global and local weights of sensor events.We use publicly available dataset of smart environments to validate our methods. In the past decades, activity recognition has aroused a great interest for the research groups majoring in context-awareness computing and human behaviours monitoring. However, the correlations between the activities and their frequent patterns have never been directly addressed by traditional activity recognition techniques. As a result, activities that trigger the same set of sensors are difficult to differentiate, even though they present different patterns such as different frequencies of the sensor events. In this paper, we propose an efficient association rule mining technique to find the association rules between the activities and their frequent patterns, and build an activity classifier based on these association rules. We also address the classification of overlapped activities by incorporating the global and local weight of the patterns. The experiment results using publicly available dataset demonstrate that our method is able to achieve better performance than traditional recognition methods such as Decision Tree, Naive Bayesian and HMM. Comparison studies show that the proposed association rule mining method is efficient, and we can further improve the activity recognition accuracy by considering global and local weight of frequent patterns of activities.",
"We present AALO: a novel Activity recognition system for single person smart homes using Active Learning in the presence of Overlapped activities. AALO applies data mining techniques to cluster in-home sensor firings so that each cluster represents instances of the same activity. Users only need to label each cluster as an activity as opposed to labeling all instances of all activities. Once the clusters are associated to their corresponding activities, our system can recognize future activities. To improve the activity recognition accuracy, our system preprocesses raw sensor data by identifying overlapping activities. The evaluation of activity recognition performance on a 26-day dataset shows that compared to Naive Bayesian (NB), Hidden Markov Model (HMM), and Hidden Semi Markov Model (HSMM) based activity recognition systems, our average time slice error (24.15%) is much lower than NB (53.04%), and similar to HMM (29.97%) and HSMM (26.29%). Thus, our active learning based approach performs as good as the state of the art supervised techniques (HMM and HSMM)."
]
} |
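The global/local weighting idea from @cite_21 above is analogous to TF-IDF over sensor events: a sensor that fires often within one activity (local weight) but appears in few activities overall (global weight) is the most discriminative for that activity. A toy sketch with hypothetical activity traces; the exact weighting formula in the cited work may differ.

```python
import math
from collections import Counter

def sensor_weights(activity_traces):
    """Compute TF-IDF-style weights for sensor events per activity.
    `activity_traces` maps activity name -> list of fired sensor ids."""
    n_acts = len(activity_traces)
    # global weight: in how many activities does each sensor appear?
    df = Counter(s for trace in activity_traces.values() for s in set(trace))
    weights = {}
    for act, trace in activity_traces.items():
        tf = Counter(trace)  # local weight: frequency within this activity
        weights[act] = {s: (tf[s] / len(trace)) * math.log(n_acts / df[s])
                        for s in tf}
    return weights
```

A sensor shared by every activity class gets weight zero, which is exactly why such weighting helps separate activities that trigger the same sensor set at different frequencies.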
1903.04820 | 2959808785 | In the last few years there has been a growing interest in Human Activity Recognition (HAR) topic. Sensor-based HAR approaches, in particular, has been gaining more popularity owing to their privacy preserving nature. Furthermore, due to the widespread accessibility of the internet, a broad range of streaming-based applications such as online HAR, has emerged over the past decades. However, proposing sufficiently robust online activity recognition approach in smart environment setting is still considered as a remarkable challenge. This paper presents a novel online application of Hierarchical Hidden Markov Model in order to detect the current activity on the live streaming of sensor events. Our method consists of two phases. In the first phase, data stream is segmented based on the beginning and ending of the activity patterns. Also, on-going activity is reported with every receiving observation. This phase is implemented using Hierarchical Hidden Markov models. The second phase is devoted to the correction of the provided label for the segmented data stream based on statistical features. The proposed model can also discover the activities that happen during another activity - so-called interrupted activities. After detecting the activity pane, the predicted label will be corrected utilizing statistical features such as time of day at which the activity happened and the duration of the activity. We validated our proposed method by testing it against two different smart home datasets and demonstrated its effectiveness, which is competing with the state-of-the-art methods. | Moreover, researchers in @cite_1 suggest a two-phase method based on emerging patterns which can recognize complex activities. In the first phase, the method extracts emerging patterns for distinct activities. In the second phase, it segments streaming sensor data, then uses the time dependency between segments to concatenate the relevant ones.
Segment concatenation lets the method recognize complex activities. The authors of @cite_16 proposed a real-time method for recognizing interleaved activities based on Fuzzy Logic and Recurrent Neural Networks. | {
"cite_N": [
"@cite_16",
"@cite_1"
],
"mid": [
"2898388504",
"2728787983"
],
"abstract": [
"In this paper, we present a methodology for Real-Time Activity Recognition of Interleaved Activities based on Fuzzy Logic and Recurrent Neural Networks. Firstly, we propose a representation of binary-sensor activations based on multiple Fuzzy Temporal Windows. Secondly, an ensemble of activity-based classifiers for balanced training and selection of relevant sensors is proposed. Each classifier is configured as a Long Short-Term Memory with self-reliant detection of interleaved activities. The proposed approach was evaluated using well-known interleaved binary-sensor datasets comprised of activities of daily living.",
"New healthcare technologies are emerging with the increasing age of the society, where the development of smart homes for monitoring the elders’ activities is in the center of them. Identifying the resident’s activities in an apartment is an important module in such systems. Dense sensing approach aims to embed sensors in the environment to report the detected events continuously. The events are segmented and analyzed via classifiers to identify the corresponding activity. Although several methods were introduced in recent years for detecting simple activities, the recognition of complex ones requires more effort. Due to the different time duration and event density of each activity, finding the best size of the segments is one of the challenges in detecting the activity. Also, using appropriate classifiers that are capable of detecting simple and interleaved activities is the other issue. In this paper, we devised a two-phase approach called CARER (Complex Activity Recognition using Emerging patterns and Random forest). In the first phase, the emerging patterns are mined, and various features of the activities are extracted to build a model using the Random Forest technique. In the second phase, the sequences of events are segmented dynamically by considering their recency and sensor correlation. Then, the segments are analyzed by the generated model from the previous phase to recognize both simple and complex activities. We examined the performance of the devised approach using the CASAS dataset. To do this, first we investigated several classifiers. The outcome showed that the combination of emerging patterns and the random forest provide a higher degree of accuracy. Then, we compared CARER with the static window approach, which used Hidden Markov Model. To have a fair comparison, we replaced the dynamic segmentation module of CARER with the static one. The results showed more than 12% improvement in f-measure. Finally, we compared our work with Dynamic sensor segmentation for real-time activity recognition, which used dynamic segmentation. The f-measure metric demonstrated up to 12.73% improvement."
]
} |
1903.04820 | 2959808785 | In the last few years there has been a growing interest in Human Activity Recognition (HAR) topic. Sensor-based HAR approaches, in particular, has been gaining more popularity owing to their privacy preserving nature. Furthermore, due to the widespread accessibility of the internet, a broad range of streaming-based applications such as online HAR, has emerged over the past decades. However, proposing sufficiently robust online activity recognition approach in smart environment setting is still considered as a remarkable challenge. This paper presents a novel online application of Hierarchical Hidden Markov Model in order to detect the current activity on the live streaming of sensor events. Our method consists of two phases. In the first phase, data stream is segmented based on the beginning and ending of the activity patterns. Also, on-going activity is reported with every receiving observation. This phase is implemented using Hierarchical Hidden Markov models. The second phase is devoted to the correction of the provided label for the segmented data stream based on statistical features. The proposed model can also discover the activities that happen during another activity - so-called interrupted activities. After detecting the activity pane, the predicted label will be corrected utilizing statistical features such as time of day at which the activity happened and the duration of the activity. We validated our proposed method by testing it against two different smart home datasets and demonstrated its effectiveness, which is competing with the state-of-the-art methods. | Authors in @cite_30 @cite_11 studied the problem of handling the large proportion of available data that are not categorized in predefined classes and addressed it by discovering patterns in them and segmenting it into learnable classes. These kinds of data usually belong to the sensors that are not exclusively involved in the predefined class of activities. | {
"cite_N": [
"@cite_30",
"@cite_11"
],
"mid": [
"1987522330",
"2746958487"
],
"abstract": [
"Activity recognition has received increasing attention from the machine learning community. Of particular interest is the ability to recognize activities in real time from streaming data, but this presents a number of challenges not faced by traditional offline approaches. Among these challenges is handling the large amount of data that does not belong to a predefined class. In this paper, we describe a method by which activity discovery can be used to identify behavioral patterns in observational data. Discovering patterns in the data that does not belong to a predefined class aids in understanding this data and segmenting it into learnable classes. We demonstrate that activity discovery not only sheds light on behavioral patterns, but it can also boost the performance of recognition algorithms. We introduce this partnership between activity discovery and online activity recognition in the context of the CASAS smart home project and validate our approach using CASAS data sets.",
"We present a novel unsupervised approach, UnADevs, for discovering activity clusters corresponding to periodic and stationary activities in streaming sensor data. Such activities usually last for some time, which is exploited by our method; it includes mechanisms to regulate sensitivity to brief outliers and can discover multiple clusters overlapping in time to better deal with deviations from nominal behaviour. The method was evaluated on two activity datasets containing large number of activities (14 and 33 respectively) against online agglomerative clustering and DBSCAN. In a multi-criteria evaluation, our approach achieved significantly better performance on majority of the measures, with the advantages that: (i) it does not require to specify the number of clusters beforehand (it is open ended); (ii) it is online and can find clusters in real time; (iii) it has constant time complexity; (iv) and it is memory efficient as it does not keep the data samples in memory. Overall, it has managed to discover 616 of the total 717 activities. Because it discovers clusters of activities in real time, it is ideal to work alongside an active learning system."
]
} |
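The open-ended activity discovery described above (@cite_30 @cite_11) can be sketched as leader-style online clustering: an incoming segment joins the nearest existing cluster if it is close enough, otherwise it opens a new cluster, so the number of clusters is never fixed in advance. The distance threshold and feature vectors here are illustrative assumptions, not the cited methods' actual parameters.

```python
def online_discovery(samples, threshold=1.0):
    """Open-ended online clustering of activity feature vectors.
    Returns a cluster label per sample and the final centroids."""
    centroids, counts, labels = [], [], []
    for x in samples:
        dists = [sum((a - b) ** 2 for a, b in zip(x, c)) ** 0.5 for c in centroids]
        if dists and min(dists) <= threshold:
            i = dists.index(min(dists))
            counts[i] += 1
            # incremental running-mean update of the matched centroid
            centroids[i] = [c + (a - c) / counts[i] for a, c in zip(x, centroids[i])]
        else:
            centroids.append(list(x)); counts.append(1); i = len(centroids) - 1
        labels.append(i)
    return labels, centroids
```

Because each sample is processed once with constant memory per cluster, this style of discovery fits streaming settings and can hand its clusters to an active learning loop for labeling.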
1903.04820 | 2959808785 | In the last few years there has been a growing interest in Human Activity Recognition (HAR) topic. Sensor-based HAR approaches, in particular, has been gaining more popularity owing to their privacy preserving nature. Furthermore, due to the widespread accessibility of the internet, a broad range of streaming-based applications such as online HAR, has emerged over the past decades. However, proposing sufficiently robust online activity recognition approach in smart environment setting is still considered as a remarkable challenge. This paper presents a novel online application of Hierarchical Hidden Markov Model in order to detect the current activity on the live streaming of sensor events. Our method consists of two phases. In the first phase, data stream is segmented based on the beginning and ending of the activity patterns. Also, on-going activity is reported with every receiving observation. This phase is implemented using Hierarchical Hidden Markov models. The second phase is devoted to the correction of the provided label for the segmented data stream based on statistical features. The proposed model can also discover the activities that happen during another activity - so-called interrupted activities. After detecting the activity pane, the predicted label will be corrected utilizing statistical features such as time of day at which the activity happened and the duration of the activity. We validated our proposed method by testing it against two different smart home datasets and demonstrated its effectiveness, which is competing with the state-of-the-art methods. | Several Machine Learning approaches have been examined in the domain of Activity Recognition. 
Ensemble methods @cite_13 , non-parametric models @cite_17 , temporal frequent pattern mining @cite_18 , SVM-based models @cite_12 , Recurrent Neural Networks @cite_15 , and probabilistic models such as the Hidden Markov Model and the Markov Random Field @cite_4 @cite_20 @cite_28 have been exploited in the literature. Nonetheless, less attention has been paid to the domain of Online Activity Recognition, which deals with processing streams of sensor data, in contrast to conventional approaches that utilize pre-segmented data. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_28",
"@cite_15",
"@cite_20",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"2804524357",
"2286276551",
"2652699843",
"2748923651",
"",
"2025707815",
"2050836750",
""
],
"abstract": [
"One of the most important problems that arises during the knowledge discovery from data and data mining process in many new emerging technologies is mining data with temporal dependencies. One such application is activity recognition and prediction. Activity recognition is used in many real world settings, such as assisted living systems. Although activity recognition has been vastly studied by many researchers, the temporal features that constitute an activity, which can provide useful insights for activity models, have not been exploited to their full potentials by mining algorithms. In this paper, we utilize temporal features for activity recognition and prediction in assisted living settings. We discover temporal relations such as the order of activities, as well as their corresponding start time and duration features. Analysis of real data collected from smart homes was used to validate the proposed method.",
"Activities of Daily Livings (ADLs) refer to the activities that are carried out by an individual for everyday living. Recognition of ADLs is key element for building intelligent and pervasive environments. We propose a two-layer HMM to build a ADLs recognition model that can represent the mapping between low-level sensor data and high-level activity based on the binary sensor data. We used embedded sensor with appliances or object to get object used sequence data as well as object name, type, interaction time, and location. In the first layer, we use location data of object used sensor to predict the activity class and in the second layer object used sequence data to determine the exact activity. We perform comparison with other activity recognition models using three real datasets to validate the proposed model. The results show that the proposed model achieves significantly better recognition performance than other models.",
"Real time (online) recognition of complex activities remains a challenging and active area of research. In this paper, we propose a sliding window based activity recognition (AR) method by integrating Latent Dirichlet allocation (LDA) model and Bayes theorem on real time sensor streaming. In the proposed method, we first learn offline the feature pattern of activity from activity window sequences using LDA model. We then embed a Bayes estimation of the activity probability distribution for a given sliding window in the feature extracting stage based on the learned activity-feature pattern. Finally, the probability distribution prediction as a subset of features in the sliding window is further fed into the classifier model to generate the final class result for the sliding window. We validate our approach using smart home datasets CASAS. The results of the evaluation indicate that the proposed method achieves a high accuracy of the classifier model while maintains low time cost.",
"Human activity recognition using smart home sensors is one of the bases of ubiquitous computing in smart environments and a topic undergoing intense research in the field of ambient assisted living. The increasingly large amount of data sets calls for machine learning methods. In this paper, we introduce a deep learning model that learns to classify human activities without using any prior knowledge. For this purpose, a Long Short Term Memory (LSTM) Recurrent Neural Network was applied to three real world smart home datasets. The results of these experiments show that the proposed approach outperforms the existing ones in terms of accuracy and performance.",
"",
"Application of sensor-based technology within activity monitoring systems is becoming a popular technique within the smart environment paradigm. Nevertheless, the use of such an approach generates complex constructs of data, which subsequently requires the use of intricate activity recognition techniques to automatically infer the underlying activity. This paper explores a cluster-based ensemble method as a new solution for the purposes of activity recognition within smart environments. With this approach activities are modelled as collections of clusters built on different subsets of features. A classification process is performed by assigning a new instance to its closest cluster from each collection. Two different sensor data representations have been investigated, namely numeric and binary. Following the evaluation of the proposed methodology it has been demonstrated that the cluster-based ensemble method can be successfully applied as a viable option for activity recognition. Results following exposure to data collected from a range of activities indicated that the ensemble method had the ability to perform with accuracies of 94.2 and 97.5 for numeric and binary data, respectively. These results outperformed a range of single classifiers considered as benchmarks.",
"Although researchers have developed robust approaches for estimating, location, and user identity, estimating user activities has proven much more challenging. Human activities are so complex and dynamic that it's often unclear what information is even relevant for modeling activities. Robust approaches to recognize user activities requires identifying the relevant information to be sensed and the appropriate sensing technologies. In our effort to develop an approach for automatically estimating hospital-staff activities, we trained a discrete hidden Markov model (HMM) to map contextual information to a user activity. We trained the model and evaluated it using data captured from almost 200 hours of detailed observation and documentation of hospital workers. In this article, we discuss our approach, the results, and how activity recognition could empower our vision of the hospital as a smart environment.",
""
]
} |
1903.04820 | 2959808785 | In the last few years there has been a growing interest in Human Activity Recognition (HAR) topic. Sensor-based HAR approaches, in particular, has been gaining more popularity owing to their privacy preserving nature. Furthermore, due to the widespread accessibility of the internet, a broad range of streaming-based applications such as online HAR, has emerged over the past decades. However, proposing sufficiently robust online activity recognition approach in smart environment setting is still considered as a remarkable challenge. This paper presents a novel online application of Hierarchical Hidden Markov Model in order to detect the current activity on the live streaming of sensor events. Our method consists of two phases. In the first phase, data stream is segmented based on the beginning and ending of the activity patterns. Also, on-going activity is reported with every receiving observation. This phase is implemented using Hierarchical Hidden Markov models. The second phase is devoted to the correction of the provided label for the segmented data stream based on statistical features. The proposed model can also discover the activities that happen during another activity - so-called interrupted activities. After detecting the activity pane, the predicted label will be corrected utilizing statistical features such as time of day at which the activity happened and the duration of the activity. We validated our proposed method by testing it against two different smart home datasets and demonstrated its effectiveness, which is competing with the state-of-the-art methods. | Most of the presented solutions for streaming data processing are based on sliding window technique @cite_24 @cite_8 @cite_5 @cite_7 @cite_0 . The sliding window approach, briefly named as , mainly considers the temporal relation or number of sensors for framing data. One of the key bottlenecks of this approach is fine-tuning the window size. 
One basic solution is to employ a constant, pre-determined window size @cite_20 @cite_8 . However, since the number of activated sensors varies across activities, applying a dynamic window size has been explored by many researchers @cite_6 @cite_3 @cite_7 @cite_24 . | {
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_6",
"@cite_3",
"@cite_24",
"@cite_0",
"@cite_5",
"@cite_20"
],
"mid": [
"2513324077",
"",
"2059094808",
"2790762320",
"2086385378",
"2765171540",
"1638512605",
""
],
"abstract": [
"Determining the appropriate data window size for online sensor data streams to recognize a specific activity is still a challenging task. In particular, when new sensor events are recorded. This paper proposes a windowing algorithm which presents promising results to recognize complex activities, e.g., in a smart home environment. The underlying basic idea is to analyze the sensor data in order to identify the set of “best fitting sensors”: it contains those sensors that most contribute to the recognition task, and therefore should be considered in a window. To validate our approach, we applied it on the CASAS data set which is an international data set for activity recognition. Based on the promising results, we believe that this algorithm can assist to detect human activities. Thus, our approach might be used in Active and Assisted Living Environments (AAL), where activity recognition is required to distinguish the types of help, a person needs to master his her daily life activities.",
"",
"Activity recognition is fundamental to many of the services envisaged in pervasive computing and ambient intelligence scenarios. However, delivering sufficiently robust activity recognition systems that could be deployed with confidence in an arbitrary real-world setting remains an outstanding challenge. Developments in wireless, inexpensive and unobtrusive sensing devices have enabled the capture of large data volumes, upon which a variety of machine learning techniques have been applied in order to facilitate interpretation of and inference upon such data. Much of the documented research in this area has in the main focused on recognition across pre-segmented sensor data. Such approaches are insufficient for near real-time analysis as is required for many services, such as those envisaged by ambient assisted living. This paper presents a novel near real-time sensor segmentation approach that incorporates the notions of both sensor and time correlation.",
"Usually, approaches driven by data proposed in literature for sensor-based activity recognition use the begin label and the end label of each activity in the dataset, fixing a temporal window with sensor data events to identify the activity carried out in this window. This type of approach cannot be carried out in real time because it is not possible to predict the start time of an activity, i.e., the class of the future activity that an inhabitant will perform, neither when he she will begin to carry out this activity. However, an activity can be marked as finished in real time only with the previous observations. Therefore, there is a need of online activity recognition approaches that classify activities using only the end label of the activity. In this paper, we propose and evaluate a new approach for online activity recognition with three temporal sub-windows that uses only the end label of the activity. The advantage of our approach is that the temporal sub-windows keep a partial order in the sensor data stream from the end time of the activity in a short-term, medium-term, long-term. The experiments conducted to evaluate our approach suggest the importance of the use of temporal sub-windows versus a single temporal window in terms of accuracy, using only the end time of the activity. The use of temporal sub-windows has improved the accuracy in the 98.95 of experiments carried out.",
"Approaches and algorithms for activity recognition have recently made substantial progress due to advancements in pervasive and mobile computing, smart environments and ambient assisted living. Nevertheless, it is still difficult to achieve real-time continuous activity recognition as sensor data segmentation remains a challenge. This paper presents a novel approach to real-time sensor data segmentation for continuous activity recognition. Central to the approach is a dynamic segmentation model, based on the notion of varied time windows, which can shrink and expand the segmentation window size by using temporal information of sensor data and activities as well as the state of activity recognition. The paper first analyzes the characteristics of activities of daily living from which the segmentation model that is applicable to a wide range of activity recognition scenarios is motivated and developed. It then describes the working mechanism and relevant algorithms of the model in the context of knowledge-driven activity recognition based on ontologies. The presented approach has been implemented in a prototype system and evaluated in a number of experiments. Results have shown average recognition accuracy above 83 in all experiments for real time activity recognition, which proves the approach and the underlying model.",
"In active and assisted living environments, a major service that can be provided is the automated assessment of elderly people’s well-being. Therefore, activity recognition is required to detect what types of help disabled persons need to support them in their daily life activities. Unfortunately, it is still a difficult task to estimate the size of the required window for online sensor data streams to recognize a specific activity, especially when new sensor events are recorded. This paper proposes a windowing algorithm, which presents promising results to recognize complex human activities for multi-resident homes. The approach is based on the analysis of the sensor data to identify the best fitting sensors that should be considered in a specified window. Moreover, the second part of this paper proposes a set of different statistical spatio-temporal features to recognize human activities. In order to check the overall performance, this approach is tested using the CASAS data set and artificially generated laboratory data using our HBMS simulator. The results show high performance based on different evaluation metrics compared to other approaches. We believe that the proposed windowing approach provides a good approximation of the required window size in order to robustly detect human activities in comparison to other windowing approaches.",
"An online recognition system must analyze the changes in the sensing data and at any significant detection; it has to decide if there is a change in the activity performed by the person. Such a system can use the previous sensor readings for decision-making (decide which activity is performed), without the need to wait for future ones. This paper proposes an approach of human activity recognition on online sensor data. We present four methods used to extract features from the sequence of sensor events. Our experimental results on public smart home data show an improvement of effectiveness in classification accuracy.",
""
]
} |
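The fixed sliding-window segmentation that the passages above treat as the baseline can be sketched in a few lines. The window size, step, and the sensor-count feature are illustrative choices, not taken from any one cited paper.

```python
# Sketch of fixed-size event windowing over a stream of (time, sensor_id)
# events: slide a window of `size` events forward by `step` events and
# emit a per-window feature vector (here, counts of each sensor firing).
from collections import Counter

def event_windows(events, size, step):
    """Yield a sensor-count feature vector for each sliding window."""
    for start in range(0, len(events) - size + 1, step):
        window = events[start:start + size]
        yield Counter(sensor for _, sensor in window)

events = [(0, "M1"), (1, "M1"), (2, "M2"), (3, "M2"), (4, "M3"), (5, "M3")]
feats = list(event_windows(events, size=4, step=2))
```

The difficulty the surveyed dynamic-window methods address is visible even here: a fixed `size` of 4 events suits some activities but splits or merges others, since different activities activate different numbers of sensors.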
1903.04820 | 2959808785 | In the last few years there has been a growing interest in Human Activity Recognition (HAR) topic. Sensor-based HAR approaches, in particular, has been gaining more popularity owing to their privacy preserving nature. Furthermore, due to the widespread accessibility of the internet, a broad range of streaming-based applications such as online HAR, has emerged over the past decades. However, proposing sufficiently robust online activity recognition approach in smart environment setting is still considered as a remarkable challenge. This paper presents a novel online application of Hierarchical Hidden Markov Model in order to detect the current activity on the live streaming of sensor events. Our method consists of two phases. In the first phase, data stream is segmented based on the beginning and ending of the activity patterns. Also, on-going activity is reported with every receiving observation. This phase is implemented using Hierarchical Hidden Markov models. The second phase is devoted to the correction of the provided label for the segmented data stream based on statistical features. The proposed model can also discover the activities that happen during another activity - so-called interrupted activities. After detecting the activity pane, the predicted label will be corrected utilizing statistical features such as time of day at which the activity happened and the duration of the activity. We validated our proposed method by testing it against two different smart home datasets and demonstrated its effectiveness, which is competing with the state-of-the-art methods. | Authors in @cite_7 @cite_0 present a novel probabilistic method to determine the window size. A different window size is initialized regarding each class of activity based on prior estimation and it is getting updated by the upcoming sensor events. | {
"cite_N": [
"@cite_0",
"@cite_7"
],
"mid": [
"2765171540",
"2513324077"
],
"abstract": [
"In active and assisted living environments, a major service that can be provided is the automated assessment of elderly people’s well-being. Therefore, activity recognition is required to detect what types of help disabled persons need to support them in their daily life activities. Unfortunately, it is still a difficult task to estimate the size of the required window for online sensor data streams to recognize a specific activity, especially when new sensor events are recorded. This paper proposes a windowing algorithm, which presents promising results to recognize complex human activities for multi-resident homes. The approach is based on the analysis of the sensor data to identify the best fitting sensors that should be considered in a specified window. Moreover, the second part of this paper proposes a set of different statistical spatio-temporal features to recognize human activities. In order to check the overall performance, this approach is tested using the CASAS data set and artificially generated laboratory data using our HBMS simulator. The results show high performance based on different evaluation metrics compared to other approaches. We believe that the proposed windowing approach provides a good approximation of the required window size in order to robustly detect human activities in comparison to other windowing approaches.",
"Determining the appropriate data window size for online sensor data streams to recognize a specific activity is still a challenging task. In particular, when new sensor events are recorded. This paper proposes a windowing algorithm which presents promising results to recognize complex activities, e.g., in a smart home environment. The underlying basic idea is to analyze the sensor data in order to identify the set of “best fitting sensors”: it contains those sensors that most contribute to the recognition task, and therefore should be considered in a window. To validate our approach, we applied it on the CASAS data set which is an international data set for activity recognition. Based on the promising results, we believe that this algorithm can assist to detect human activities. Thus, our approach might be used in Active and Assisted Living Environments (AAL), where activity recognition is required to distinguish the types of help, a person needs to master his her daily life activities."
]
} |
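The per-activity window-size idea described above (a size initialized per class from a prior estimate and updated by incoming events) can be sketched with a simple per-class smoothing rule. The exponential moving average used here is an illustrative assumption, not the probabilistic update of the cited papers.

```python
# Sketch of a per-activity adaptive window size: each activity class keeps
# its own size estimate, initialized from the first observation and nudged
# toward the lengths of newly observed activity instances.

def update_window(sizes, activity, observed_len, alpha=0.3):
    prev = sizes.get(activity, observed_len)  # prior estimate, or first obs
    sizes[activity] = (1 - alpha) * prev + alpha * observed_len
    return sizes[activity]

sizes = {"cooking": 40.0}                # prior estimate for a known class
update_window(sizes, "cooking", 60.0)    # shifts toward the new observation
update_window(sizes, "sleeping", 200.0)  # first estimate for a new class
```

The point of keeping one estimate per class is the same as in the cited work: short activities are not forced into the long windows that other activities need.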
1903.04820 | 2959808785 | In the last few years there has been a growing interest in Human Activity Recognition (HAR) topic. Sensor-based HAR approaches, in particular, has been gaining more popularity owing to their privacy preserving nature. Furthermore, due to the widespread accessibility of the internet, a broad range of streaming-based applications such as online HAR, has emerged over the past decades. However, proposing sufficiently robust online activity recognition approach in smart environment setting is still considered as a remarkable challenge. This paper presents a novel online application of Hierarchical Hidden Markov Model in order to detect the current activity on the live streaming of sensor events. Our method consists of two phases. In the first phase, data stream is segmented based on the beginning and ending of the activity patterns. Also, on-going activity is reported with every receiving observation. This phase is implemented using Hierarchical Hidden Markov models. The second phase is devoted to the correction of the provided label for the segmented data stream based on statistical features. The proposed model can also discover the activities that happen during another activity - so-called interrupted activities. After detecting the activity pane, the predicted label will be corrected utilizing statistical features such as time of day at which the activity happened and the duration of the activity. We validated our proposed method by testing it against two different smart home datasets and demonstrated its effectiveness, which is competing with the state-of-the-art methods. | Researchers in @cite_4 presented a multi-stage classification method. The first stage is to cluster the activities using a Hidden Markov Model based on location data and then in the next stage, another HMM classifies the exact activity using a sequence of sensor data. 
The major weakness of this method is that it makes no attempt to specify the boundaries of activities, which negatively affects its performance. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2286276551"
],
"abstract": [
"Activities of Daily Livings (ADLs) refer to the activities that are carried out by an individual for everyday living. Recognition of ADLs is key element for building intelligent and pervasive environments. We propose a two-layer HMM to build a ADLs recognition model that can represent the mapping between low-level sensor data and high-level activity based on the binary sensor data. We used embedded sensor with appliances or object to get object used sequence data as well as object name, type, interaction time, and location. In the first layer, we use location data of object used sensor to predict the activity class and in the second layer object used sequence data to determine the exact activity. We perform comparison with other activity recognition models using three real datasets to validate the proposed model. The results show that the proposed model achieves significantly better recognition performance than other models."
]
} |
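The HMM machinery underlying the two-layer model described above boils down to recursive Bayesian filtering over hidden activities. Below is a minimal forward-filter step for a toy two-activity model; the transition and emission probabilities are made-up illustrative values, and this single-layer filter only stands in for the layered structure of the cited model.

```python
# Minimal HMM forward filter: update the posterior over hidden activities
# online, one discretized sensor observation at a time.

def forward_step(belief, obs, trans, emit):
    """One Bayes filter update: predict with `trans`, correct with `emit`."""
    n = len(belief)
    pred = [sum(belief[i] * trans[i][j] for i in range(n)) for j in range(n)]
    post = [pred[j] * emit[j][obs] for j in range(n)]
    z = sum(post)
    return [p / z for p in post]

trans = [[0.9, 0.1], [0.2, 0.8]]   # activities tend to persist
emit = [[0.8, 0.2], [0.3, 0.7]]    # P(sensor reading | activity)
belief = [0.5, 0.5]
for obs in [0, 0, 1]:              # stream of discretized sensor events
    belief = forward_step(belief, obs, trans, emit)
```

Because the update needs only the previous belief and the current observation, it runs online over a live sensor stream, which is what makes HMM variants attractive for the streaming setting discussed in this paper.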
1903.04820 | 2959808785 | In the last few years there has been a growing interest in Human Activity Recognition (HAR) topic. Sensor-based HAR approaches, in particular, has been gaining more popularity owing to their privacy preserving nature. Furthermore, due to the widespread accessibility of the internet, a broad range of streaming-based applications such as online HAR, has emerged over the past decades. However, proposing sufficiently robust online activity recognition approach in smart environment setting is still considered as a remarkable challenge. This paper presents a novel online application of Hierarchical Hidden Markov Model in order to detect the current activity on the live streaming of sensor events. Our method consists of two phases. In the first phase, data stream is segmented based on the beginning and ending of the activity patterns. Also, on-going activity is reported with every receiving observation. This phase is implemented using Hierarchical Hidden Markov models. The second phase is devoted to the correction of the provided label for the segmented data stream based on statistical features. The proposed model can also discover the activities that happen during another activity - so-called interrupted activities. After detecting the activity pane, the predicted label will be corrected utilizing statistical features such as time of day at which the activity happened and the duration of the activity. We validated our proposed method by testing it against two different smart home datasets and demonstrated its effectiveness, which is competing with the state-of-the-art methods. | Another method to tackle online recognition is introduced by @cite_9 . They proposed cumulative fixed sliding windows for real time activity recognition. Their segmentation method consists of several fixed time length windows which have overlapped with each other. 
These overlapping windows are considered together as a single window, and its information is used to detect the on-going activities. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2889810448"
],
"abstract": [
"Robust activity recognition in near real-time is a prerequisite for delivering the smartness intrinsic to the pragmatic realisation of smart homes, environments and so forth. Many of the physical devices necessary for equipping a smart home are already available as consumer electronic devices and certified for use by the public. Yet activity recognition remains the preserve of the research community, despite the array of machine learning and other AI techniques currently available. To-date, research has been dominated by the use of pre-segmented data, resulting in the recognition of an arbitrary activity subsequent to its completion. For assistive paradigms dependent on smart technologies, for example Ambient Assisted Living, such approaches are insufficient. The overall objective must be the identification of an activity within an appropriative confidence level as soon as possible after activity commencement. This paper presents a novel approach, COBRA (Cumulatively Overlapping windowing approach for AmBient Recognition of Activities), for near real-time activity recognition, specifically within 60 seconds of the commencement of an activity. COBRA utilises an innovative combination of sliding windows augmented with a logistic regression model. The approach is evaluated using the well-established, open, CASAS dataset."
]
} |
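The cumulative overlapping windowing described above can be caricatured as follows: short fixed-length windows overlap, and each step reasons over everything accumulated up to the current window's end. This is a toy stand-in only, not the cited COBRA model; window length, step, and the cumulative pooling rule are assumptions.

```python
# Toy cumulatively overlapping windowing: overlapping fixed-length windows
# over an event stream, where each step exposes the cumulative pool of
# events seen up to that window's end, enabling a prediction soon after an
# activity starts rather than only after it completes.

def cumulative_windows(events, win, step):
    """Yield the cumulative event pool after each overlapping window."""
    for start in range(0, max(len(events) - win, 0) + 1, step):
        yield list(events[:start + win])  # everything up to this window's end

events = ["M1", "M1", "M2", "M2", "M3"]
pools = list(cumulative_windows(events, win=3, step=1))
```

A classifier (a logistic regression model, in the cited work) would be applied to features of each pool, so a label is available after the first short window instead of at activity completion.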
1903.04566 | 2922368466 | Despite huge success, deep networks are unable to learn effectively in sequential multitask learning settings as they forget the past learned tasks after learning new tasks. Inspired from complementary learning systems theory, we address this challenge by learning a generative model that couples the current task to the past learned tasks through a discriminative embedding space. We learn an abstract level generative distribution in the embedding that allows the generation of data points to represent the experience. We sample from this distribution and utilize experience replay to avoid forgetting and simultaneously accumulate new knowledge to the abstract distribution in order to couple the current task with past experience. We demonstrate theoretically and empirically that our framework learns a distribution in the embedding that is shared across all task and as a result tackles catastrophic forgetting. | Past works have addressed catastrophic forgetting using two main approaches: model consolidation @cite_1 and experience replay @cite_17 . Both approaches implement a notion of memory to enable a network to remember the distributions of past learned tasks. | {
"cite_N": [
"@cite_1",
"@cite_17"
],
"mid": [
"2081484203",
"2116522068"
],
"abstract": [
"NK cell cytotoxicity is controlled by numerous NK inhibitory and activating receptors. Most of the inhibitory receptors bind MHC class I proteins and are expressed in a variegated fashion. It was recently shown that TIGIT, a new protein expressed by T and NK cells binds to PVR and PVR-like receptors and inhibits T cell activity indirectly through the manipulation of DC activity. Here, we show that TIGIT is expressed by all human NK cells, that it binds PVR and PVRL2 but not PVRL3 and that it inhibits NK cytotoxicity directly through its ITIM. Finally, we show that TIGIT counter inhibits the NK-mediated killing of tumor cells and protects normal cells from NK-mediated cytoxicity thus providing an “alternative self” mechanism for MHC class I inhibition.",
"This paper reviews the problem of catastrophic forgetting (the loss or disruption of previously learned information when new information is learned) in neural networks, and explores rehearsal mechanisms (the retraining of some of the previously learned information as the new information is added) as a potential solution. We replicate some of the experiments described by Ratcliff (1990), including those relating to a simple 'recency' based rehearsal regime. We then develop further rehearsal regimes which are more effective than recency rehearsal. In particular, 'sweep rehearsal' is very successful at minimizing catastrophic forgetting. One possible limitation of rehearsal in general, however, is that previously learned information may not be available for retraining. We describe a solution to this problem, 'pseudorehearsal', a method which provides the advantages of rehearsal without actually requiring any access to the previously learned information (the original training population) itself. We then sugge..."
]
} |
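The experience-replay ("rehearsal") mechanism named in the related-work passage above amounts to keeping a small memory of past-task samples and mixing them into each new-task batch. The sketch below is a generic toy buffer under assumed choices (FIFO eviction, uniform sampling), not the specific method of either cited work.

```python
# Toy experience-replay buffer: store a bounded memory of samples from
# earlier tasks and interleave some of them with new-task data, so training
# on the new task keeps rehearsing the old distributions.
import random

class ReplayBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def add(self, sample):
        if len(self.items) >= self.capacity:
            self.items.pop(0)  # FIFO eviction when the memory is full
        self.items.append(sample)

    def mixed_batch(self, new_samples, k):
        """New-task samples plus k uniformly rehearsed old ones."""
        return list(new_samples) + random.sample(self.items, min(k, len(self.items)))

buf = ReplayBuffer(capacity=3)
for s in ["t1_a", "t1_b", "t1_c", "t1_d"]:  # task-1 stream overflows the buffer
    buf.add(s)
batch = buf.mixed_batch(["t2_a"], k=2)       # task-2 batch with rehearsal
```

Pseudorehearsal, also discussed in the cited work, replaces the stored samples with samples drawn from a generative model, which is the direction the present paper's abstract takes.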
1903.04666 | 2942009982 | Features in machine learning problems are often time-varying and may be related to outputs in an algebraic or dynamical manner. The dynamic nature of these machine learning problems renders current higher order accelerated gradient descent methods unstable or weakens their convergence guarantees. Inspired by methods employed in adaptive control, this paper proposes new algorithms for the case when time-varying features are present, and demonstrates provable performance guarantees. In particular, we develop a unified variational perspective within a continuous time algorithm. This variational perspective includes higher order learning concepts and normalization, both of which stem from adaptive control, and allows stability to be established for dynamical machine learning problems where time-varying features are present. These higher order algorithms are also examined for provably correct learning in adaptive control and identification. Simulations are provided to verify the theoretical results. | Learning for dynamical systems has been an active area of research within the machine learning community, especially within the area of reinforcement learning @cite_51 @cite_10 @cite_23 @cite_34 . There has also been a large increase in recent work studying learning and control for unknown linear dynamical systems: least squares @cite_18 , linear quadratic regulator, robust control @cite_56 @cite_46 @cite_17 , and spectral filtering @cite_54 @cite_50 . One major difference between these works and the one presented here is that our algorithm is streaming, with the exception of @cite_54 @cite_50 . Control techniques have also been leveraged in the opposite direction, treating gradient descent explicitly as a dynamical system, and leveraging tools from robust control theory @cite_44 @cite_41 @cite_35 @cite_6 @cite_22 . It is an exciting time to be studying problems at the intersection of machine learning and control. | {
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_22",
"@cite_41",
"@cite_54",
"@cite_6",
"@cite_56",
"@cite_44",
"@cite_23",
"@cite_50",
"@cite_51",
"@cite_46",
"@cite_34",
"@cite_10",
"@cite_17"
],
"mid": [
"2546420264",
"2963412706",
"1965324089",
"",
"2963787546",
"",
"2963681938",
"",
"2964084913",
"2964326601",
"2160779730",
"2761923184",
"2822752092",
"2121863487",
"2893059165"
],
"abstract": [
"We present an accelerated gradient method for nonconvex optimization problems with Lipschitz continuous first and second derivatives. In a time @math , the method finds an @math -stationary point, meaning a point @math such that @math . The method improves upon the @math complexity of gradient descent and provides the additional second-order guarantee that @math for the computed @math . Furthermore, our method is Hessian free, i.e., it only requires gradient computations, and is therefore suitable for large-scale applications.",
"We prove that the ordinary least-squares (OLS) estimator attains nearly minimax optimal performance for the identification of linear dynamical systems from a single observed trajectory. Our upper bound analysis relies on a generalization of Mendelson’s small-ball method to dependent data, eschewing the use of standard mixing-time arguments. Our lower bounds reveal that these upper bounds match up to logarithmic factors. In particular, we capture the correct signal-to-noise behavior of the problem, showing that more unstable linear systems are easier to estimate. This behavior is qualitatively different from arguments which rely on mixing-time calculations that suggest that unstable systems are more difficult to estimate. We generalize our technique to provide bounds for a more general class of linear response time-series.",
"The book treats system identification in the theoretical area that has a direct impact on the understanding and practical application of the various identification methods. ...",
"",
"We present an efficient and practical algorithm for the online prediction of discrete-time linear dynamical systems with a symmetric transition matrix. We circumvent the non-convex optimization problem using improper learning: carefully overparameterize the class of LDSs by a polylogarithmic factor, in exchange for convexity of the loss functions. From this arises a polynomial-time algorithm with a near-optimal regret guarantee, with an analogous sample complexity bound for agnostic learning. Our algorithm is based on a novel filtering technique, which may be of independent interest: we convolve the time series with the eigenvectors of a certain Hankel matrix.",
"",
"We consider adaptive control of the Linear Quadratic Regulator (LQR), where an unknown linear system is controlled subject to quadratic costs. Leveraging recent developments in the estimation of linear systems and in robust controller synthesis, we present the first provably polynomial time algorithm that achieves sub-linear regret on this problem. We further study the interplay between regret minimization and parameter estimation by proving a lower bound on the expected regret in terms of the exploration schedule used by any algorithm. Finally, we conduct a numerical study comparing our robust adaptive algorithm to other methods from the adaptive LQR literature, and demonstrate the flexibility of our proposed method by extending it to a demand forecasting problem subject to state constraints.",
"",
"",
"We give a polynomial-time algorithm for learning latent-state linear dynamical systems without system identification, and without assumptions on the spectral radius of the system's transition matrix. The algorithm extends the recently introduced technique of spectral filtering, previously applied only to systems with a symmetric transition matrix, using a novel convex relaxation to allow for the efficient identification of phases.",
"A neural network based approach is presented for controlling two distinct types of nonlinear systems. The first corresponds to nonlinear systems with parametric uncertainties where the parameters occur nonlinearly. The second corresponds to systems for which stabilizing control structures cannot be determined. The proposed neural controllers are shown to result in closed-loop system stability under certain conditions.",
"This paper addresses the optimal control problem known as the Linear Quadratic Regulator in the case when the dynamics are unknown. We propose a multi-stage procedure, called Coarse-ID control, that estimates a model from a few experimental trials, estimates the error in that model with respect to the truth, and then designs a controller using both the model and uncertainty estimate. Our technique uses contemporary tools from random matrix theory to bound the error in the estimation procedure. We also employ a recently developed approach to control synthesis called System Level Synthesis that enables robust control design by solving a convex optimization problem. We provide end-to-end bounds on the relative error in control cost that are nearly optimal in the number of parameters and that highlight salient properties of the system to be controlled such as closed-loop sensitivity and optimal control magnitude. We show experimentally that the Coarse-ID approach enables efficient computation of a stabilizing controller in regimes where simple control schemes that do not take the model uncertainty into account fail to stabilize the true system.",
"This article surveys reinforcement learning from the perspective of optimization and control, with a focus on continuous control applications. It reviews the general formulation, terminology, and t...",
"Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.",
"We study the constrained linear quadratic regulator with unknown dynamics, addressing the tension between safety and exploration in data-driven control techniques. We present a framework which allows for system identification through persistent excitation, while maintaining safety by guaranteeing the satisfaction of state and input constraints. This framework involves a novel method for synthesizing robust constraint-satisfying feedback controllers, leveraging newly developed tools from system level synthesis. We connect statistical results with cost sub-optimality bounds to give non-asymptotic guarantees on both estimation and controller performance."
]
} |
1903.04666 | 2942009982 | Features in machine learning problems are often time-varying and may be related to outputs in an algebraic or dynamical manner. The dynamic nature of these machine learning problems renders current higher order accelerated gradient descent methods unstable or weakens their convergence guarantees. Inspired by methods employed in adaptive control, this paper proposes new algorithms for the case when time-varying features are present, and demonstrates provable performance guarantees. In particular, we develop a unified variational perspective within a continuous time algorithm. This variational perspective includes higher order learning concepts and normalization, both of which stem from adaptive control, and allows stability to be established for dynamical machine learning problems where time-varying features are present. These higher order algorithms are also examined for provably correct learning in adaptive control and identification. Simulations are provided to verify the theoretical results. | This work continues in the tradition of @cite_26 and @cite_55 whereby insight is gained into accelerated gradient descent methods through a continuous lens. Future work will be to obtain discrete time implementations of our algorithms with matching rates @cite_48 @cite_27 , and to connect those back to classic discrete time adaptive control algorithms @cite_19 @cite_11 @cite_30 . Other fruitful directions forward would be to study these accelerated algorithms within the context of output feedback control, and to rigorously prove convergence rates when the regressor vectors are persistently exciting @cite_38 @cite_25 . | {
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_26",
"@cite_48",
"@cite_55",
"@cite_19",
"@cite_27",
"@cite_25",
"@cite_11"
],
"mid": [
"1569320505",
"2963610705",
"2528062157",
"2893838930",
"2963430672",
"2147873335",
"2787367827",
"",
"2162725061"
],
"abstract": [
"This unified survey focuses on linear discrete-time systems and explores the natural extensions to nonlinear systems. In keeping with the importance of computers to practical applications, the authors emphasize discrete-time systems. Their approach summarizes the theoretical and practical aspects of a large class of adaptive algorithms. 1984 edition.",
"The convergence properties of adaptive systems in terms of excitation conditions on the regressor vector are well known. With persistent excitation of the regressor vector in model reference adaptive control the state error and the adaptation error are globally exponentially stable or, equivalently, exponentially stable in the large. When the excitation condition, however, is imposed on the reference input or the reference model state, it is often incorrectly concluded that the persistent excitation in those signals also implies exponential stability in the large. The definition of persistent excitation is revisited so as to address some possible confusion in the adaptive control literature. It is then shown that persistent excitation of the reference model only implies local persistent excitation (weak persistent excitation). Weak persistent excitation of the regressor is still sufficient for uniform asymptotic stability in the large, but not exponential stability in the large. We show that there exists ...",
"We derive a second-order ordinary differential equation (ODE) which is the limit of Nesterov's accelerated gradient method. This ODE exhibits approximate equivalence to Nesterov's scheme and thus can serve as a tool for analysis. We show that the continuous time ODE allows for a better understanding of Nesterov's scheme. As a byproduct, we obtain a family of schemes with similar convergence rates. The ODE interpretation also suggests restarting Nesterov's scheme leading to an algorithm, which can be rigorously proven to converge at a linear rate whenever the objective is strongly convex.",
"Author(s): Wilson, Ashia | Advisor(s): Jordan, Michael I; Recht, Benjamin | Abstract: Optimization is among the richest modeling languages in science. In statistics and machine learning, for instance, inference is typically posed as an optimization problem. While there are many algorithms designed to solve optimization problems, and a seemingly greater number of convergence proofs, essentially all proofs follow a classical approach from dynamical systems theory: they present a Lyapunov function and show it decreases. The primary goal of this thesis is to demonstrate that making the Lyapunov argument explicit greatly simplifies, clarifies, and to a certain extent, unifies, convergence theory for optimization. The central contributions of this thesis are the following results: we (i) present several variational principles whereby we obtain continuous-time dynamical systems useful for optimization; (ii) introduce Lyapunov functions for both the continuous-time dynamical systems and discrete-time algorithms and demonstrate how to move between these Lyapunov functions; and (iii) utilize the Lyapunov framework as well as numerical analysis and integration techniques to obtain upper bounds for several novel discrete-time methods for optimization, a few of which have matching lower bounds.",
"Accelerated gradient methods play a central role in optimization, achieving optimal rates in many settings. Although many generalizations and extensions of Nesterov’s original acceleration method have been proposed, it is not yet clear what is the natural scope of the acceleration concept. In this paper, we study accelerated methods from a continuous-time perspective. We show that there is a Lagrangian functional that we call the Bregman Lagrangian, which generates a large class of accelerated methods in continuous time, including (but not limited to) accelerated gradient descent, its non-Euclidean extension, and accelerated higher-order gradient methods. We show that the continuous-time limit of all of these methods corresponds to traveling the same curve in spacetime at different speeds. From this perspective, Nesterov’s technique and many of its generalizations can be viewed as a systematic way to go from the continuous-time curves generated by the Bregman Lagrangian to a family of discrete-time accelerated algorithms.",
"This paper establishes global convergence for a class of adaptive control algorithms applied to discrete-time multiinput multioutput deterministic linear systems. It is shown that the algorithms will ensure that the systems inptus and outputs remain bounded for all time and that the output tracking error converges to zero.",
"Accelerated gradient methods have had significant impact in machine learning -- in particular the theoretical side of machine learning -- due to their ability to achieve oracle lower bounds. But their heuristic construction has hindered their full integration into the practical machine-learning algorithmic toolbox, and has limited their scope. In this paper we build on recent work which casts acceleration as a phenomenon best explained in continuous time, and we augment that picture by providing a systematic methodology for converting continuous-time dynamics into discrete-time algorithms while retaining oracle rates. Our framework is based on ideas from Hamiltonian dynamical systems and symplectic integration. These ideas have had major impact in many areas in applied mathematics, but have not yet been seen to have a relationship with optimization.",
"",
"This paper establishes global convergence of a stochastic adaptive control algorithm for discrete time linear systems. It is shown that, with probability one, the algorithm will ensure the system inputs and outputs are sample mean square bounded and the conditional mean square output tracking error achieves its global minimum possible value for linear feedback control. Thus, asymptotically, the adaptive control algorithm achieves the same performance as could be achieved if the system parameters were known."
]
} |
1903.04752 | 2934806712 | Concatenation of the deep network representations extracted from different facial patches helps to improve face recognition performance. However, the concatenated facial template increases in size and contains redundant information. Previous solutions aim to reduce the dimensionality of the facial template without considering the occlusion pattern of the facial patches. In this paper, we propose an occlusion-guided compact template learning (OGCTL) approach that only uses the information from visible patches to construct the compact template. The compact face representation is not sensitive to the number of patches that are used to construct the facial template and is more suitable for incorporating the information from different view angles for image-set based face recognition. Instead of using occlusion masks in face matching (e.g., DPRFS [38]), the proposed method uses occlusion masks in template construction and achieves significantly better image-set based face verification performance on a challenging database with a template size that is an order-of-magnitude smaller than DPRFS. | A compact template can be further compressed by principal component analysis (PCA) @cite_32 , product quantization @cite_29 , or hashing-based binarization approaches @cite_17 @cite_12 . PCA is a well-known dimensionality reduction approach which has been applied in @cite_8 @cite_3 to reduce the facial template size from 76.8 to 0.625 . Wang @cite_13 employs Product Quantization @cite_29 to convert a 1.25 into a binary 64 Bytes template for large-scale face retrieval. Recently, deep hashing-based approaches were used to generate a binary template from the output of CNN with hashing layers @cite_17 @cite_12 . The common point of all these approaches is that they require a floating point face representation as the starting point to derive a compact facial template. 
The contribution of our approach is to provide a method to generate a compact face representation (0.5 to 1 ) as a good candidate for the aforementioned studies @cite_32 @cite_29 @cite_17 @cite_12 . This template is robust to facial self-occlusion caused by large head pose variations. | {
"cite_N": [
"@cite_8",
"@cite_29",
"@cite_32",
"@cite_3",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"1998808035",
"2124509324",
"2148694408",
"",
"2464026376",
"2740912563",
"2530345236"
],
"abstract": [
"This paper proposes to learn a set of high-level feature representations through deep learning, referred to as Deep hidden IDentity features (DeepID), for face verification. We argue that DeepID can be effectively learned through challenging multi-class face identification tasks, whilst they can be generalized to other tasks (such as verification) and new identities unseen in the training set. Moreover, the generalization capability of DeepID increases as more face classes are to be predicted at training. DeepID features are taken from the last hidden layer neuron activations of deep convolutional networks (ConvNets). When learned as classifiers to recognize about 10,000 face identities in the training set and configured to keep reducing the neuron numbers along the feature extraction hierarchy, these deep ConvNets gradually form compact identity-related features in the top layers with only a small number of hidden neurons. The proposed features are extracted from various face regions to form complementary and over-complete representations. Any state-of-the-art classifiers can be learned based on these high-level representations for face verification. 97.45% verification accuracy on LFW is achieved with only weakly aligned faces.",
"This paper introduces a product quantization-based approach for approximate nearest neighbor search. The idea is to decompose the space into a Cartesian product of low-dimensional subspaces and to quantize each subspace separately. A vector is represented by a short code composed of its subspace quantization indices. The euclidean distance between two vectors can be efficiently estimated from their codes. An asymmetric version increases precision, as it computes the approximate distance between a vector and a code. Experimental results show that our approach searches for nearest neighbors efficiently, in particular in combination with an inverted file system. Results for SIFT and GIST image descriptors show excellent search accuracy, outperforming three state-of-the-art approaches. The scalability of our approach is validated on a data set of two billion vectors.",
"Introduction * Properties of Population Principal Components * Properties of Sample Principal Components * Interpreting Principal Components: Examples * Graphical Representation of Data Using Principal Components * Choosing a Subset of Principal Components or Variables * Principal Component Analysis and Factor Analysis * Principal Components in Regression Analysis * Principal Components Used with Other Multivariate Techniques * Outlier Detection, Influential Observations and Robust Estimation * Rotation and Interpretation of Principal Components * Principal Component Analysis for Time Series and Other Non-Independent Data * Principal Component Analysis for Special Types of Data * Generalizations and Adaptations of Principal Component Analysis",
"",
"Given the prevalence of social media websites, one challenge facing computer vision researchers is to devise methods to search for persons of interest among the billions of shared photos on these websites. Despite significant progress in face recognition, searching a large collection of unconstrained face images remains a difficult problem. To address this challenge, we propose a face search system which combines a fast search procedure, coupled with a state-of-the-art commercial off the shelf (COTS) matcher, in a cascaded framework. Given a probe face, we first filter the large gallery of photos to find the top- @math most similar faces using features learned by a convolutional neural network. The @math retrieved candidates are re-ranked by combining similarities based on deep features and those output by the COTS matcher. We evaluate the proposed face search system on a gallery containing @math million web-downloaded face images. Experimental results demonstrate that while the deep features perform worse than the COTS matcher on a mugshot dataset (93.7 percent versus 98.6 percent TAR@FAR of 0.01 percent), fusing the deep features with the COTS matcher improves the overall performance ( @math percent TAR@FAR of 0.01 percent). This shows that the learned deep features provide complementary information over representations used in state-of-the-art face matchers. On the unconstrained face image benchmarks, the performance of the learned deep features is competitive with reported accuracies. LFW database: @math percent accuracy under the standard protocol and @math percent TAR@FAR of @math percent under the BLUFR protocol; IJB-A benchmark: @math percent TAR@FAR of @math percent (verification), rank 1 retrieval of @math percent (closed-set search), @math percent FNIR@FAR of @math percent (open-set search). The proposed face search system offers an excellent trade-off between accuracy and scalability on galleries with millions of images. 
Additionally, in a face search experiment involving photos of the Tsarnaev brothers, convicted of the Boston Marathon bombing, the proposed cascade face search system could find the younger brother's (Dzhokhar Tsarnaev) photo at rank @math in @math second on a @math M gallery and at rank @math in @math seconds on an @math M gallery.",
"",
"Retrieving faces from large mess of videos is an attractive research topic with wide range of applications. Its challenging problems are large intra-class variations, and tremendous time and space complexity. In this paper, we develop a new deep convolutional neural network (deep CNN) to learn discriminative and compact binary representations of faces for face video retrieval. The network integrates feature extraction and hash learning into a unified optimization framework for the optimal compatibility of feature extractor and hash functions. In order to better initialize the network, the low-rank discriminative binary hashing is proposed to pre-learn hash functions during the training procedure. Our method achieves excellent performances on two challenging TV-Series datasets."
]
} |
1903.04772 | 2935276166 | Machine learning is advancing towards a data-science approach, implying a necessity to a line of investigation to divulge the knowledge learnt by deep neuronal networks. Limiting the comparison among networks merely to a predefined intelligent ability, according to ground truth, does not suffice; it should be associated with innate similarity of these artificial entities. Here, we analysed multiple instances of an identical architecture trained to classify objects in static images (CIFAR and ImageNet data sets). We evaluated the performance of the networks under various distortions and compared it to the intrinsic similarity between their constituent kernels. While we expected a close correspondence between these two measures, we observed a puzzling phenomenon. Pairs of networks whose kernels' weights are over 99.9% correlated can exhibit significantly different performances, yet other pairs with no correlation can reach quite compatible levels of performance. We show implications of this for transfer learning, and argue its importance in our general understanding of what intelligence is, whether natural or artificial. | This article attempts to discuss the question in the context of visual information processing. Previous works on this topic could be broadly summarised into three groups. A large body of literature is devoted to visualising internal units of DNNs, e.g., @cite_16 @cite_14 @cite_15 . Despite their genuine usefulness in giving an idea of what kernels respond to, they are of a qualitative nature and should be complemented with quantitative techniques. Another set of papers investigates transferability of knowledge across networks, data sets and tasks, e.g., @cite_18 @cite_10 . Although they empirically demonstrate the crucial hierarchical characteristics of layers, i.e., the transition from generic to specific in deeper layers, thus far no account of feature invariance has been provided. 
Further works attempt to interpret the intrinsic behaviour of kernels by analysing their activation patterns, e.g., @cite_8 @cite_21 @cite_4 . These techniques successfully exhibit the existence of selectivity among kernels, similar to biological neurons @cite_25 . However, the causality of these kernels for a specific function remains to be demonstrated. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_8",
"@cite_21",
"@cite_15",
"@cite_16",
"@cite_10",
"@cite_25"
],
"mid": [
"",
"2952186574",
"2951308125",
"2610018085",
"2890474147",
"2949987032",
"2962851944",
"",
""
],
"abstract": [
"",
"Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.",
"This paper proposes a method to modify traditional convolutional neural networks (CNNs) into interpretable CNNs, in order to clarify knowledge representations in high conv-layers of CNNs. In an interpretable CNN, each filter in a high conv-layer represents a certain object part. We do not need any annotations of object parts or textures to supervise the learning process. Instead, the interpretable CNN automatically assigns each filter in a high conv-layer with an object part during the learning process. Our method can be applied to different types of CNNs with different structures. The clear knowledge representation in an interpretable CNN can help people understand the logics inside a CNN, i.e., based on which patterns the CNN makes the decision. Experiments showed that filters in an interpretable CNN were more semantically meaningful than those in traditional CNNs.",
"We propose a general framework called Network Dissection for quantifying the interpretability of latent representations of CNNs by evaluating the alignment between individual hidden units and a set of semantic concepts. Given any CNN model, the proposed method draws on a broad data set of visual concepts to score the semantics of hidden units at each intermediate convolutional layer. The units with semantics are given labels across a range of objects, parts, scenes, textures, materials, and colors. We use the proposed method to test the hypothesis that interpretability of units is equivalent to random linear combinations of units, then we apply our method to compare the latent representations of various networks when trained to solve different supervised and self-supervised training tasks. We further analyze the effect of training iterations, compare networks trained with different initializations, examine the impact of network depth and width, and measure the effect of dropout and batch normalization on the interpretability of deep visual representations. We demonstrate that the proposed method can shed light on characteristics of CNN models and training methods that go beyond measurements of their discriminative power.",
"Contrast is a crucial factor in visual information processing. It is desired for a visual system - irrespective of being biological or artificial - to \"perceive\" the world robustly under large potential changes in illumination. In this work, we studied the responses of deep neural networks (DNN) to identical images at different levels of contrast. We analysed the activation of kernels in the convolutional layers of eight prominent networks with distinct architectures (e.g. VGG and Inception). The results of our experiments indicate that those networks with a higher tolerance to alteration of contrast have more than one convolutional layer prior to the first max-pooling operator. It appears that the last convolutional layer before the first max-pooling acts as a mitigator of contrast variation in input images. In our investigation, interestingly, we observed many similarities between the mechanisms of these DNNs and biological visual systems. These comparisons allow us to understand more profoundly the underlying mechanisms of a visual system that is grounded on the basis of \"data-analysis\".",
"Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of them remains limited. In this paper we conduct a direct analysis of the visual information contained in representations by asking the following question: given an encoding of an image, to which extent is it possible to reconstruct the image itself? To answer this question we contribute a general framework to invert representations. We show that this method can invert representations such as HOG and SIFT more accurately than recent alternatives while being applicable to CNNs too. We then use this technique to study the inverse of recent state-of-the-art CNN image representations for the first time. Among our findings, we show that several layers in CNNs retain photographically accurate information about the image, with different degrees of geometric and photometric invariance.",
"This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [5], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [13].",
"",
""
]
} |
1903.04611 | 2926767350 | High performance multi-GPU computing becomes an inevitable trend due to the ever-increasing demand on computation capability in emerging domains such as deep learning, big data and planet-scale simulations. However, the lack of deep understanding on how modern GPUs can be connected and the real impact of state-of-the-art interconnect technology on multi-GPU application performance become a hurdle. In this paper, we fill the gap by conducting a thorough evaluation on five latest types of modern GPU interconnects: PCIe, NVLink-V1, NVLink-V2, NVLink-SLI and NVSwitch, from six high-end servers and HPC platforms: NVIDIA P100-DGX-1, V100-DGX-1, DGX-2, OLCF's SummitDev and Summit supercomputers, as well as an SLI-linked system with two NVIDIA Turing RTX-2080 GPUs. Based on the empirical evaluation, we have observed four new types of GPU communication network NUMA effects: three are triggered by NVLink's topology, connectivity and routing, while one is caused by a PCIe chipset design issue. These observations indicate that, for an application running in a multi-GPU node, choosing the right GPU combination can impose considerable impact on GPU communication efficiency, as well as the application's overall performance. Our evaluation can be leveraged in building practical multi-GPU performance models, which are vital for GPU task allocation, scheduling and migration in a shared environment (e.g., AI cloud and HPC centers), as well as communication-oriented performance tuning. | @cite_52 analyzed the NUMA effects in a multi-GPU node and provided optimization guidance. @cite_12 proposed to rely on hybrid-memory-cubes (HMCs) to build a memory network for simplifying multi-GPU memory management and improving programmability. @cite_35 presented a design to realize GPU-Aware MPI to support data communication among intra-node GPUs with standard MPI. 
Ben- @cite_24 described an automatic multi-GPU partition framework to distribute workload based on their memory access patterns. @cite_8 showed a software solution, including programming interfaces, compiler support and runtime, to partition GPU kernels for multi-GPU execution in a single node. Finally, @cite_54 evaluated the potential performance benefit and tradeoffs of AMD's (ROC) platform for (HSA). | {
"cite_N": [
"@cite_35",
"@cite_8",
"@cite_54",
"@cite_52",
"@cite_24",
"@cite_12"
],
"mid": [
"2025388037",
"",
"2795994638",
"1980759071",
"1988307172",
"2007511350"
],
"abstract": [
"Designing high-performance and scalable applications on GPU clusters requires tackling several challenges. The key challenge is the separate host memory and device memory, which requires programmers to use multiple programming models, such as CUDA and MPI, to operate on data in different memory spaces. This challenge becomes more difficult to tackle when non-contiguous data in multidimensional structures is used by real-world applications. These challenges limit the programming productivity and the application performance. We propose the GPU-Aware MPI to support data communication from GPU to GPU using standard MPI. It unifies the separate memory spaces, and avoids explicit CPU-GPU data movement and CPU GPU buffer management. It supports all MPI datatypes on device memory with two algorithms: a GPU datatype vectorization algorithm and a vector based GPU kernel data pack and unpack algorithm. A pipeline is designed to overlap the non-contiguous data packing and unpacking on GPUs, the data movement on the PCIe, and the RDMA data transfer on the network. We incorporate our design with the open-source MPI library MVAPICH2 and optimize a production application: the multiphase 3D LBM. Besides the increase of programming productivity, we observe up to 19.9 percent improvement in application-level performance on 64 GPUs of the Oakley supercomputer.",
"",
"GPUs have been shown to deliver impressive computing performance, while also providing high energy efficiency, across a wide range of high-performance and embedded system workloads. However, limited support for efficient communication and synchronization between the CPU and the GPU impacts our ability to fully exploit the benefits of heterogeneous systems. Recently, the Heterogeneous System Architecture (HSA) was introduced to address these issues with synchronization and communication, but given the low-level nature of HSA, it was not easily adopted by the broader programming community. In 2016, AMD described the Radeon Open Compute (ROC) platform that brings high-level programming frameworks such as OpenCL, HC++, and HIP to end users. These high-level programming frameworks offer a simpler programming experience by wrapping complex HSA APIs, while still delivering the power of HSA. To date, there has been little evaluation of the potential performance benefits and trade-offs of leveraging the ROC platform. In this work, we evaluate the performance of the ROC platform using the Hetero-Mark and DNNMark benchmark suites. Equipped with Hetero-Mark, we compare the performance of different programming frameworks, including OpenCL, HC++, and HIP on both integrated APUs and discrete GPUs. We also present three new CPU-GPU collaborative patterns and employ three new benchmarks to evaluate system-level atomics. With DNNMark and a new DNN Face Detection benchmark, we evaluate the performance of ROC libraries including rocBLAS and MIOpen. We also provide guidance on best practices to programmers when developing applications leveraging the ROC platform.",
"As system architects strive for increased density and power efficiency, the traditional compute node is being augmented with an increasing number of graphics processing units (GPUs). The integration of multiple GPUs per node introduces complex performance phenomena including non-uniform memory access (NUMA) and contention for shared system resources. Utilizing the Keeneland system, this paper quantifies these effects and presents some guidance on programming strategies to maximize performance in multi-GPU environments.",
"With the increased popularity of multi-GPU nodes in modern HPC clusters, it is imperative to develop matching programming paradigms for their efficient utilization. In order to take advantage of the local GPUs and the low-latency high-throughput interconnects that link them, programmers need to meticulously adapt parallel applications with respect to load balancing, boundary conditions and device synchronization. This paper presents MAPS-Multi, an automatic multi-GPU partitioning framework that distributes the workload based on the underlying memory access patterns. The framework consists of host- and device-level APIs that allow programs to efficiently run on a variety of GPU and multi-GPU architectures. The framework implements several layers of code optimization, device abstraction, and automatic inference of inter-GPU memory exchanges. The paper demonstrates that the performance of MAPS-Multi achieves near-linear scaling on fundamental computational operations, as well as real-world applications in deep learning and multivariate analysis.",
"GPUs are being widely used to accelerate different workloads and multi-GPU systems can provide higher performance with multiple discrete GPUs interconnected together. However, there are two main communication bottlenecks in multi-GPU systems -- accessing remote GPU memory and the communication between GPU and the host CPU. Recent advances in multi-GPU programming, including unified virtual addressing and unified memory from NVIDIA, has made programming simpler but the costly remote memory access still makes multi-GPU programming difficult. In order to overcome the communication limitations, we propose to leverage the memory network based on hybrid memory cubes (HMCs) to simplify multi-GPU memory management and improve programmability. In particular, we propose scalable kernel execution (SKE) where multiple GPUs are viewed as a single virtual GPU as a single kernel can be executed across multiple GPUs without modifying the source code. To fully enable the benefits of SKE, we explore alternative memory network designs in a multi-GPU system. We propose a GPU memory network (GMN) to simplify data sharing between the discrete GPUs while a CPU memory network (CMN) is used to simplify data communication between the host CPU and the discrete GPUs. These two types of networks can be combined to create a unified memory network (UMN) where the communication bottleneck in multi-GPU can be significantly minimized as both the CPU and GPU share the memory network. We evaluate alternative network designs and propose a sliced flattened butterfly topology for the memory network that scales better than previously proposed alternative topologies by removing local HMC channels. In addition, we propose an overlay network organization for unified memory network to minimize the latency for CPU access while providing high bandwidth for the GPUs. 
We evaluate trade-offs between the different memory network organization and show how UMN significantly reduces the communication bottleneck in multi-GPU systems."
]
} |
1903.04611 | 2926767350 | High performance multi-GPU computing becomes an inevitable trend due to the ever-increasing demand on computation capability in emerging domains such as deep learning, big data and planet-scale simulations. However, the lack of deep understanding on how modern GPUs can be connected and the real impact of state-of-the-art interconnect technology on multi-GPU application performance become a hurdle. In this paper, we fill the gap by conducting a thorough evaluation on five latest types of modern GPU interconnects: PCIe, NVLink-V1, NVLink-V2, NVLink-SLI and NVSwitch, from six high-end servers and HPC platforms: NVIDIA P100-DGX-1, V100-DGX-1, DGX-2, OLCF's SummitDev and Summit supercomputers, as well as an SLI-linked system with two NVIDIA Turing RTX-2080 GPUs. Based on the empirical evaluation, we have observed four new types of GPU communication network NUMA effects: three are triggered by NVLink's topology, connectivity and routing, while one is caused by PCIe chipset design issue. These observations indicate that, for an application running in a multi-GPU node, choosing the right GPU combination can impose considerable impact on GPU communication efficiency, as well as the application's overall performance. Our evaluation can be leveraged in building practical multi-GPU performance models, which are vital for GPU task allocation, scheduling and migration in a shared environment (e.g., AI cloud and HPC centers), as well as communication-oriented performance tuning. | . For MPI-based multi-node GPU computing, @cite_6 introduced a MPI design that integrates CUDA data movement transparently with MPI. @cite_57 proposed a hardware approach to overlap computation and communication in a GPU cluster. @cite_20 analyzed the exascale proxy applications on their communication patterns and proposed a matching algorithm for GPUs to comply with MPI constraints. 
@cite_53 proposed a pipelined chain design for MPI broadcast collective operations on multi-GPU nodes to facilitate various deep learning frameworks. | {
"cite_N": [
"@cite_57",
"@cite_53",
"@cite_20",
"@cite_6"
],
"mid": [
"2557367833",
"2953200226",
"2733525365",
""
],
"abstract": [
"Over the last decade, CUDA and the underlying GPU hardware architecture have continuously gained popularity in various high-performance computing application domains such as climate modeling, computational chemistry, or machine learning. Despite this popularity, we lack a single coherent programming model for GPU clusters. We therefore introduce the dCUDA programming model, which implements device-side remote memory access with target notification. To hide instruction pipeline latencies, CUDA programs over-decompose the problem and over-subscribe the device by running many more threads than there are hardware execution units. Whenever a thread stalls, the hardware scheduler immediately proceeds with the execution of another thread ready for execution. This latency hiding technique is key to make best use of the available hardware resources. With dCUDA, we apply latency hiding at cluster scale to automatically overlap computation and communication. Our benchmarks demonstrate perfect overlap for memory bandwidth-bound tasks and good overlap for compute-bound tasks.",
"Dense Multi-GPU systems have recently gained a lot of attention in the HPC arena. Traditionally, MPI runtimes have been primarily designed for clusters with a large number of nodes. However, with the advent of MPI+CUDA applications and CUDA-Aware MPI runtimes like MVAPICH2 and OpenMPI, it has become important to address efficient communication schemes for such dense Multi-GPU nodes. This coupled with new application workloads brought forward by Deep Learning frameworks like Caffe and Microsoft CNTK pose additional design constraints due to very large message communication of GPU buffers during the training phase. In this context, special-purpose libraries like NVIDIA NCCL have been proposed for GPU-based collective communication on dense GPU systems. In this paper, we propose a pipelined chain (ring) design for the MPI_Bcast collective operation along with an enhanced collective tuning framework in MVAPICH2-GDR that enables efficient intra-/inter-node multi-GPU communication. We present an in-depth performance landscape for the proposed MPI_Bcast schemes along with a comparative analysis of NVIDIA NCCL Broadcast and NCCL-based MPI_Bcast. The proposed designs for MVAPICH2-GDR enable up to 14X and 16.6X improvement, compared to NCCL-based solutions, for intra- and inter-node broadcast latency, respectively. In addition, the proposed designs provide up to 7% improvement over NCCL-based solutions for data parallel training of the VGG network on 128 GPUs using Microsoft CNTK.",
"Accelerators, such as GPUs, have proven to be highly successful in reducing execution time and power consumption of compute-intensive applications. Even though they are already used pervasively, they are typically supervised by general-purpose CPUs, which results in frequent control flow switches and data transfers as CPUs are handling all communication tasks. However, we observe that accelerators are recently being augmented with peer-to-peer communication capabilities that allow for autonomous traffic sourcing and sinking. While appropriate hardware support is becoming available, it seems that the right communication semantics are yet to be identified. Maintaining the semantics of existing communication models, such as the Message Passing Interface (MPI), seems problematic as they have been designed for the CPU’s execution model, which inherently differs from such specialized processors. In this paper, we analyze the compatibility of traditional message passing with massively parallel Single Instruction Multiple Thread (SIMT) architectures, as represented by GPUs, and focus on the message matching problem. We begin with a fully MPI-compliant set of guarantees, including tag and source wildcards and message ordering. Based on an analysis of exascale proxy applications, we start relaxing these guarantees to adapt message passing to the GPU’s execution model. We present suitable algorithms for message matching on GPUs that can yield matching rates of 60M and 500M matches/s, depending on the constraints that are being relaxed. We discuss our experiments and create an understanding of the mismatch of current message passing protocols and the architecture and execution model of SIMT processors.",
""
]
} |
1903.04360 | 2920851513 | Ontology learning is a critical task in industry, dealing with identifying and extracting concepts captured in text data such that these concepts can be used in different tasks, e.g. information retrieval. Ontology learning is non-trivial due to several reasons with limited amount of prior research work that automatically learns a domain specific ontology from data. In our work, we propose a two-stage classification system to automatically learn an ontology from unstructured text data. We first collect candidate concepts, which are classified into concepts and irrelevant collocates by our first classifier. The concepts from the first classifier are further classified by the second classifier into different concept types. The proposed system is deployed as a prototype at a company and its performance is validated by using complaint and repair verbatim data collected in automotive industry from different data sources. | In our work, we also propose a new approach to disambiguate abbreviations. There are several related works. @cite_2 extracted features such as concept unique identifiers and then built a classification model. @cite_9 identified context based features for classification, but they assumed an ambiguous phrase only has one correct expansion in the same article. @cite_3 proposed a word embedding based approach to select the expansion from all possible expansions with largest embedding similarity. There are two major differences between our approach and these works. First, we propose a new model which combines the statistical approach (TF-IDF) and machine learning approach (Naive Bayes model) together, i.e. we measure the importance of each collocate by TF-IDF and estimate the posterior probability of each possible expansion, while in their work they either only applied machine learning classification models or simply calculated statistical similarity between abbreviation and possible expansions. 
Second, in these works strong assumptions were made, such as each phrase only has one expansion in the same article and features are conditionally independent, while we do not have any assumptions for our model and therefore it is more robust. | {
"cite_N": [
"@cite_9",
"@cite_3",
"@cite_2"
],
"mid": [
"2095697172",
"2210308568",
"2061478295"
],
"abstract": [
"A process that attempts to solve abbreviation ambiguity is presented. Various context-related features and statistical features have been explored. Almost all features are domain independent and language independent. The application domain is Jewish Law documents written in Hebrew. Such documents are known to be rich in ambiguous abbreviations. Various implementations of the one sense per discourse hypothesis are used, improving the features with new variants. An accuracy of 96.09% has been achieved by SVM.",
"According to the website AcronymFinder.com which is one of the world's largest and most comprehensive dictionaries of acronyms, an average of 37 new human-edited acronym definitions are added every day. There are 379,918 acronyms with 4,766,899 definitions on that site up to now, and each acronym has 12.5 definitions on average. It is a very important research topic to identify what exactly an acronym means in a given context for document comprehension as well as for document retrieval. In this paper, we propose two word embedding based models for acronym disambiguation. Word embedding is to represent words in a continuous and multidimensional vector space, so that it is easy to calculate the semantic similarity between words by calculating the vector distance. We evaluate the models on MSH Dataset and ScienceWISE Dataset, and both models outperform the state-of-art methods on accuracy. The experimental results show that word embedding helps to improve acronym disambiguation.",
"Abbreviations are common in biomedical documents and many are ambiguous in the sense that they have several potential expansions. Identifying the correct expansion is necessary for language understanding and important for applications such as document retrieval. Identifying the correct expansion can be viewed as a Word Sense Disambiguation (WSD) problem. A WSD system that uses a variety of knowledge sources, including two types of information specific to the biomedical domain, is also described. This system was tested on a corpus of ambiguous abbreviations, created by automatically identifying the correct expansion in Medline abstracts, and found to identify the correct expansion with up to 99 accuracy."
]
} |
1903.04240 | 2921194745 | As deployment of automated vehicles increases, so does the rate at which they are exposed to critical traffic situations. Such situations, e.g. a late detected pedestrian in the vehicle path, require operation at the handling limits in order to maximize the capacity to avoid an accident. Also, the physical limitations of the vehicle typically vary in time due to local road and weather conditions. In this paper, we tackle the problem of trajectory planning and control at the limits of handling under time varying constraints, by adapting to local traction limitations. The proposed method is based on Real Time Iteration Sequential Quadratic Programming (RTI-SQP) augmented with state space sampling, which we call Sampling Augmented Adaptive RTI-SQP (SAA-SQP). Through extensive numerical simulations we demonstrate that our method increases the vehicle's capacity to avoid late detected obstacles compared to the traditional planning tracking approaches, as a direct consequence of safe operating constraint adaptation in real time. | Research in motion planning and control at the handling limits was influenced by research in the racing community. Through the use of nonlinear programming, @cite_28 were able to compute the time-optimal speed profile and racing line for an entire race track, although computational limits required the trajectories to be computed offline. Kapania and Gerdes @cite_34 presented an experimentally validated algorithm that reduced computational expense by breaking down the combined lateral longitudinal vehicle control problem into two sequential subproblems that were solved iteratively. | {
"cite_N": [
"@cite_28",
"@cite_34"
],
"mid": [
"1976729497",
"2335306738"
],
"abstract": [
"The minimum-lap-time optimal control problem for a Formula One race car is solved using direct transcription and nonlinear programming. Features of this work include significantly reduced full-lap solution times and the simultaneous optimisation of the driven line, the driver controls and multiple car set-up parameters. It is shown that significant reductions in the driven lap time can be obtained from track-specific set-up parameter optimisation. Reduced computing times are achieved using a combination of a track description based on curvilinear coordinates, analytical derivatives and model non-dimensionalisation. The curvature of the track centre line is found by solving an auxiliary optimal control problem that negates the difficulties associated with integration drift and trajectory closure.",
"The problem of maneuvering a vehicle through a race course in minimum time requires computation of both longitudinal (brake and throttle) and lateral (steering wheel) control inputs. Unfortunately, solving the resulting nonlinear optimal control problem is typically computationally expensive and infeasible for real-time trajectory planning. This paper presents an iterative algorithm that divides the path generation task into two sequential subproblems that are significantly easier to solve. Given an initial path through the race track, the algorithm runs a forward-backward integration scheme to determine the minimum-time longitudinal speed profile, subject to tire friction constraints. With this fixed speed profile, the algorithm updates the vehicle's path by solving a convex optimization problem that minimizes the resulting path curvature while staying within track boundaries and obeying affine, time-varying vehicle dynamics constraints. This two-step process is repeated iteratively until the predicted lap time no longer improves. While providing no guarantees of convergence or a globally optimal solution, the approach performs very well when validated on the Thunderhill Raceway course in Willows, CA. The predicted lap time converges after four to five iterations, with each iteration over the full 4.5 km race course requiring only thirty seconds of computation time on a laptop computer. The resulting trajectory is experimentally driven at the race circuit with an autonomous Audi TTS test vehicle, and the resulting lap time and racing line is comparable to both a nonlinear gradient descent solution and a trajectory recorded from a professional racecar driver. The experimental results indicate that the proposed method is a viable option for online trajectory planning in the near future."
]
} |
1903.04240 | 2921194745 | As deployment of automated vehicles increases, so does the rate at which they are exposed to critical traffic situations. Such situations, e.g. a late detected pedestrian in the vehicle path, require operation at the handling limits in order to maximize the capacity to avoid an accident. Also, the physical limitations of the vehicle typically vary in time due to local road and weather conditions. In this paper, we tackle the problem of trajectory planning and control at the limits of handling under time varying constraints, by adapting to local traction limitations. The proposed method is based on Real Time Iteration Sequential Quadratic Programming (RTI-SQP) augmented with state space sampling, which we call Sampling Augmented Adaptive RTI-SQP (SAA-SQP). Through extensive numerical simulations we demonstrate that our method increases the vehicle's capacity to avoid late detected obstacles compared to the traditional planning tracking approaches, as a direct consequence of safe operating constraint adaptation in real time. | Hence for practicality, state space sampling methods such as those presented by @cite_29 are widely used in industry for collision avoidance. The core concept of the method is as follows. A grid is defined in the terminal state of the planning horizon and a set of two point boundary value problems are solved between the initial state and each sampled terminal state, generating a trajectory set. Dynamic constraints are not considered in the generation of the trajectory set. Instead, a dynamic feasibility check is done in conjunction with the collision check for each trajectory. extended the method in @cite_9 by generating the trajectory set in a road-aligned coordinate frame, @cite_17 , and in @cite_22 by introducing a terminal manifold to improve the selection of terminal states. It has been shown that the method is well suited for planning in scenarios including discrete decisions. 
However, even though it reliably produces feasible maneuvers, they are suboptimal w.r.t. the physical capabilities of the vehicle. | {
"cite_N": [
"@cite_9",
"@cite_29",
"@cite_22",
"@cite_17"
],
"mid": [
"2107338474",
"2041006327",
"1983948265",
"32443946"
],
"abstract": [
"Safe handling of dynamic highway and inner city scenarios with autonomous vehicles involves the problem of generating traffic-adapted trajectories. In order to account for the practical requirements of the holistic autonomous system, we propose a semi-reactive trajectory generation method, which can be tightly integrated into the behavioral layer. The method realizes long-term objectives such as velocity keeping, merging, following, stopping, in combination with a reactive collision avoidance by means of optimal-control strategies within the Frenet-Frame [12] of the street. The capabilities of this approach are demonstrated in the simulation of a typical high-speed highway scenario.",
"Sampling in the space of controls or actions is a well-established method for ensuring feasible local motion plans. However, as mobile robots advance in performance and competence in complex environments, this classical motion-planning technique ceases to be effective. When environmental constraints severely limit the space of acceptable motions or when global motion planning expresses strong preferences, a state space sampling strategy is more effective. Although this has been evident for some time, the practical question is how to achieve it while also satisfying the severe constraints of vehicle dynamic feasibility. The paper presents an effective algorithm for state space sampling utilizing a model-based trajectory generation approach. This method enables high-speed navigation in highly constrained and/or partially known environments such as trails, roadways, and dense off-road obstacle fields.",
"This paper deals with the trajectory generation problem faced by an autonomous vehicle in moving traffic. Being given the predicted motion of the traffic flow, the proposed semi-reactive planning strategy realizes all required long-term maneuver tasks (lane-changing, merging, distance-keeping, velocity-keeping, precise stopping, etc.) while providing short-term collision avoidance. The key to comfortable, human-like as well as physically feasible trajectories is the combined optimization of the lateral and longitudinal movements in street-relative coordinates with carefully chosen cost functionals and terminal state sets (manifolds). The performance of the approach is demonstrated in simulated traffic scenarios.",
"Through two different approaches, this report proposes two general controllers for unicycle-type and two-steering-wheels mobile robots. For both systems, conditions for asymptotical convergence to a predefined path are established and simulation results are presented. Rather than writing the systems' equations with respect to a fixed reference frame, the robot state is here parametrized relative to the followed path, in terms of distance and orientation."
]
} |
1903.04240 | 2921194745 | As deployment of automated vehicles increases, so does the rate at which they are exposed to critical traffic situations. Such situations, e.g. a late detected pedestrian in the vehicle path, require operation at the handling limits in order to maximize the capacity to avoid an accident. Also, the physical limitations of the vehicle typically vary in time due to local road and weather conditions. In this paper, we tackle the problem of trajectory planning and control at the limits of handling under time varying constraints, by adapting to local traction limitations. The proposed method is based on Real Time Iteration Sequential Quadratic Programming (RTI-SQP) augmented with state space sampling, which we call Sampling Augmented Adaptive RTI-SQP (SAA-SQP). Through extensive numerical simulations we demonstrate that our method increases the vehicle's capacity to avoid late detected obstacles compared to the traditional planning tracking approaches, as a direct consequence of safe operating constraint adaptation in real time. | An intuitive way to reduce suboptimality of the trajectories in the set is to solve the two-point boundary value problem offline, using a dynamic model. This method has been demonstrated successfully in several previous works @cite_3 @cite_16 . However, this approach prohibits online model adaptation, since the trajectories in the pre-computed library assumes a static vehicle model. | {
"cite_N": [
"@cite_16",
"@cite_3"
],
"mid": [
"2896214412",
"1578293866"
],
"abstract": [
"Highly automated road vehicles need the capability of stopping safely in a situation that disrupts continued normal operation, e.g. due to internal system faults. Motion planning for safe stop differs from nominal motion planning, since there is not a specific goal location. Rather, the desired behavior is that the vehicle should reach a stopped state, preferably outside of active lanes. Also, the functionality to stop safely needs to be of high integrity. The first contribution of this paper is to formulate the safe stop problem as a benchmark optimal control problem, which can be solved by dynamic programming. However, this solution method cannot be used in real-time. The second contribution is to develop a real-time safe stop trajectory planning algorithm, based on selection from a precomputed set of trajectories. By exploiting the particular properties of the safe stop problem, the cardinality of the set is decreased, making the algorithm computationally efficient. Furthermore, a monitoring based architecture concept is proposed, that ensures dependability of the safe stop function. Finally, a proof of concept simulation using the proposed architecture and the safe stop trajectory planner is presented.",
"This paper describes autonomous racing of RC race cars based on mathematical optimization. Using a dynamical model of the vehicle, control inputs are computed by receding horizon based controllers, where the objective is to maximize progress on the track subject to the requirement of staying on the track and avoiding opponents. Two different control formulations are presented. The first controller employs a two-level structure, consisting of a path planner and a nonlinear model predictive controller (NMPC) for tracking. The second controller combines both tasks in one nonlinear optimization problem (NLP) following the ideas of contouring control. Linear time varying models obtained by linearization are used to build local approximations of the control NLPs in the form of convex quadratic programs (QPs) at each sampling time. The resulting QPs have a typical MPC structure and can be solved in the range of milliseconds by recent structure exploiting solvers, which is key to the real-time feasibility of the overall control scheme. Obstacle avoidance is incorporated by means of a high-level corridor planner based on dynamic programming, which generates convex constraints for the controllers according to the current position of opponents and the track layout. The control performance is investigated experimentally using 1:43 scale RC race cars, driven at speeds of more than 3 m/s and in operating regions with saturated rear tire forces (drifting). The algorithms run at 50 Hz sampling rate on embedded computing platforms, demonstrating the real-time feasibility and high performance of optimization-based approaches for autonomous racing."
]
} |
1903.04191 | 2952221552 | Suppose one is faced with the challenge of tissue segmentation in MR images, without annotators at their center to provide labeled training data. One option is to go to another medical center for a trained classifier. Sadly, tissue classifiers do not generalize well across centers due to voxel intensity shifts caused by center-specific acquisition protocols. However, certain aspects of segmentations, such as spatial smoothness, remain relatively consistent and can be learned separately. Here we present a smoothness prior that is fit to segmentations produced at another medical center. This informative prior is presented to an unsupervised Bayesian model. The model clusters the voxel intensities, such that it produces segmentations that are similarly smooth to those of the other medical center. In addition, the unsupervised Bayesian model is extended to a semi-supervised variant, which needs no visual interpretation of clusters into tissues. | In transfer learning and domain adaptation, a model learns from a domain and aims to generalize to a differently distributed domain @cite_1 @cite_9 . In Bayesian transfer learning, the source domain can be interpreted as prior knowledge for the target task @cite_6 @cite_14 . For instance, in natural language processing, a document classification task can be performed using a Bayesian linear classifier trained on a bag-of-word encoding of the document @cite_6 . Instead of imposing a weakly informative prior on how important each word of the dictionary is for the document classification task, one could fit the prior on data from Wikipedia. That produces a stronger, more informative prior over how important each word is. To our knowledge, no Bayesian transfer learning models have been proposed for medical imaging tasks. Our interest is to study what forms of prior knowledge can be obtained from large open access labeled data sets, and how that knowledge can be exploited for a specific task. | {
"cite_N": [
"@cite_9",
"@cite_14",
"@cite_1",
"@cite_6"
],
"mid": [
"2908696593",
"2104936489",
"2165698076",
"2165744911"
],
"abstract": [
"Domain adaptation has become a prominent problem setting in machine learning and related fields. This review asks the questions: when and how a classifier can learn from a source domain and generalize to a target domain. As for when, we review conditions that allow for cross-domain generalization error bounds. As for how, we present a categorization of approaches, divided into, what we refer to as, sample-based, feature-based and inference-based methods. Sample-based methods focus on weighting individual observations during training based on their importance to the target domain. Feature-based methods focus on mapping, projecting and representing features such that a source classifier performs well on the target domain and inference-based methods focus on alternative estimators, such as robust, minimax or Bayesian. Our categorization highlights recurring ideas and raises a number of questions important to further research.",
"Multi-task learning is the problem of maximizing the performance of a system across a number of related tasks. When applied to multiple domains for the same task, it is similar to domain adaptation, but symmetric, rather than limited to improving performance on a target domain. We present a more principled, better performing model for this problem, based on the use of a hierarchical Bayesian prior. Each domain has its own domain-specific parameter for each feature but, rather than a constant prior over these parameters, the model instead links them via a hierarchical Bayesian global prior. This prior encourages the features to have similar weights across domains, unless there is good evidence to the contrary. We show that the method of (Daume III, 2007), which was presented as a simple \"preprocessing step,\" is actually equivalent, except our representation explicitly separates hyperparameters which were tied in his work. We demonstrate that allowing different values for these hyperparameters significantly improves performance over both a strong baseline and (Daume III, 2007) within both a conditional random field sequence model for named entity recognition and a discriminatively trained dependency parser.",
"A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"Many applications of supervised learning require good generalization from limited labeled data. In the Bayesian setting, we can try to achieve this goal by using an informative prior over the parameters, one that encodes useful domain knowledge. Focusing on logistic regression, we present an algorithm for automatically constructing a multivariate Gaussian prior with a full covariance matrix for a given supervised learning task. This prior relaxes a commonly used but overly simplistic independence assumption, and allows parameters to be dependent. The algorithm uses other \"similar\" learning problems to estimate the covariance of pairs of individual parameters. We then use a semidefinite program to combine these estimates and learn a good prior for the current learning task. We apply our methods to binary text classification, and demonstrate a 20 to 40% test error reduction over a commonly used prior."
]
} |
1903.04191 | 2952221552 | Suppose one is faced with the challenge of tissue segmentation in MR images, without annotators at their center to provide labeled training data. One option is to go to another medical center for a trained classifier. Sadly, tissue classifiers do not generalize well across centers due to voxel intensity shifts caused by center-specific acquisition protocols. However, certain aspects of segmentations, such as spatial smoothness, remain relatively consistent and can be learned separately. Here we present a smoothness prior that is fit to segmentations produced at another medical center. This informative prior is presented to an unsupervised Bayesian model. The model clusters the voxel intensities, such that it produces segmentations that are similarly smooth to those of the other medical center. In addition, the unsupervised Bayesian model is extended to a semi-supervised variant, which needs no visual interpretation of clusters into tissues. | Hidden Markov Random Field (MRF) models are a form of Bayesian models for image segmentation. They pose a hidden state for each voxel that accounts for some intrinsic latent structure of the image @cite_13 . For tissue segmentation, the latent state is assumed to be the tissue of the voxel, while the observed voxel intensity value is a sample from a probabilistic observation model. The observation model specifies the causal relations between the latent image and the observed image @cite_19 @cite_12 . Such assumptions are not unreasonable for the case of MR imaging, where T1 relaxation times depend on the tissue of the voxel. | {
"cite_N": [
"@cite_19",
"@cite_13",
"@cite_12"
],
"mid": [
"2136573752",
"2114358147",
""
],
"abstract": [
"The finite mixture (FM) model is the most commonly used model for statistical segmentation of brain magnetic resonance (MR) images because of its simple mathematical form and the piecewise constant nature of ideal brain MR images. However, being a histogram-based model, the FM has an intrinsic limitation-no spatial information is taken into account. This causes the FM model to work only on well-defined images with low levels of noise; unfortunately, this is often not the the case due to artifacts such as partial volume effect and bias field distortion. Under these conditions, FM model-based methods produce unreliable results. Here, the authors propose a novel hidden Markov random field (HMRF) model, which is a stochastic process generated by a MRF whose state sequence cannot be observed directly but which can be indirectly estimated through observations. Mathematically, it can be shown that the FM model is a degenerate version of the HMRF model. The advantage of the HMRF model derives from the way in which the spatial information is encoded through the mutual influences of neighboring sites. Although MRF modeling has been employed in MR image segmentation by other researchers, most reported methods are limited to using MRF as a general prior in an FM model-based approach. To fit the HMRF model, an EM algorithm is used. The authors show that by incorporating both the HMRF model and the EM algorithm into a HMRF-EM framework, an accurate and robust segmentation can be achieved. More importantly, the HMRF-EM framework can easily be combined with other techniques. As an example, the authors show how the bias field correction algorithm of Guillemaud and Brady (1997) can be incorporated into this framework to achieve a three-dimensional fully automated approach for brain MR image segmentation.",
"In this paper, we present a comprehensive survey of Markov Random Fields (MRFs) in computer vision and image understanding, with respect to the modeling, the inference and the learning. While MRFs were introduced into the computer vision field about two decades ago, they started to become a ubiquitous tool for solving visual perception problems around the turn of the millennium following the emergence of efficient inference methods. During the past decade, a variety of MRF models as well as inference and learning methods have been developed for addressing numerous low, mid and high-level vision problems. While most of the literature concerns pairwise MRFs, in recent years we have also witnessed significant progress in higher-order MRFs, which substantially enhances the expressiveness of graph-based models and expands the domain of solvable problems. This survey provides a compact and informative summary of the major literature in this research topic.",
""
]
} |
1903.04227 | 2921562060 | Most image completion methods produce only one result for each masked input, although there may be many reasonable possibilities. In this paper, we present an approach for -- the task of generating multiple and diverse plausible solutions for image completion. A major challenge faced by learning-based approaches is that usually only one ground truth training instance per label. As such, sampling from conditional VAEs still leads to minimal diversity. To overcome this, we propose a novel and probabilistically principled framework with two parallel paths. One is a reconstructive path that utilizes the only one given ground truth to get prior distribution of missing parts and rebuild the original image from this distribution. The other is a generative path for which the conditional prior is coupled to the distribution obtained in the reconstructive path. Both are supported by GANs. We also introduce a new short+long term attention layer that exploits distant relations among decoder and encoder features, improving appearance consistency. When tested on datasets with buildings (Paris), faces (CelebA-HQ), and natural images (ImageNet), our method not only generated higher-quality completion results, but also with multiple and diverse plausible outputs. | To generate semantically new content, inter-image completion borrows information from a large dataset. Hays and Efros @cite_24 presented an image completion method using millions of images, in which the image most similar to the masked input is retrieved, and corresponding regions are transferred. However, this requires a high contextual match, which is not always available. Recently, learning-based approaches were proposed. Initial works @cite_16 @cite_39 focused on small and thin holes. Context encoders (CE) @cite_46 handled 64 @math 64-sized holes using GANs @cite_14 . 
This was followed by several CNN-based methods, which included combining global and local discriminators as adversarial loss @cite_51 , identifying closest features in the latent space of masked images @cite_22 , utilizing semantic labels to guide the completion network @cite_17 , introducing additional face parsing loss for face completion @cite_34 , and designing particular convolutions to address irregular holes @cite_45 @cite_29 . A common drawback of these methods is that they often create distorted structures and blurry textures inconsistent with the visible regions, especially for large holes. | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_29",
"@cite_34",
"@cite_39",
"@cite_24",
"@cite_45",
"@cite_46",
"@cite_16",
"@cite_51",
"@cite_17"
],
"mid": [
"2099471712",
"2963917315",
"",
"2611104282",
"",
"2171011251",
"2798365772",
"2963420272",
"345598540",
"2738588019",
"2801693445"
],
"abstract": [
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"Semantic image inpainting is a challenging task where large missing regions have to be filled based on the available visual data. Existing methods which extract information from only a single image generally produce unsatisfactory results due to the lack of high level context. In this paper, we propose a novel method for semantic image inpainting, which generates the missing content by conditioning on the available data. Given a trained generative model, we search for the closest encoding of the corrupted image in the latent image manifold using our context and prior losses. This encoding is then passed through the generative model to infer the missing content. In our method, inference is possible irrespective of how the missing content is structured, while the state-of-the-art learning based method requires specific information about the holes in the training phase. Experiments on three datasets show that our method successfully predicts information in large missing regions and achieves pixel-level photorealism, significantly outperforming the state-of-the-art methods.",
"",
"In this paper, we propose an effective face completion algorithm using a deep generative model. Different from well-studied background completion, the face completion task is more challenging as it often requires to generate semantically new pixels for the missing key components (e.g., eyes and mouths) that contain large appearance variations. Unlike existing nonparametric algorithms that search for patches to synthesize, our algorithm directly generates contents for missing regions based on a neural network. The model is trained with a combination of a reconstruction loss, two adversarial losses and a semantic parsing loss, which ensures pixel faithfulness and local-global contents consistency. With extensive experimental results, we demonstrate qualitatively and quantitatively that our model is able to deal with a large area of missing pixels in arbitrary shapes and generate realistic face completion results.",
"",
"What can you do with a million images? In this paper we present a new image completion algorithm powered by a huge database of photographs gathered from the Web. The algorithm patches up holes in images by finding similar image regions in the database that are not only seamless but also semantically valid. Our chief insight is that while the space of images is effectively infinite, the space of semantically differentiable scenes is actually not that large. For many image completion tasks we are able to find similar scenes which contain image fragments that will convincingly complete the image. Our algorithm is entirely data-driven, requiring no annotations or labelling by the user. Unlike existing image completion methods, our algorithm can generate a diverse set of results for each input image and we allow users to select among them. We demonstrate the superiority of our algorithm over existing image completion approaches.",
"Existing deep learning based image inpainting methods use a standard convolutional network over the corrupted image, using convolutional filter responses conditioned on both valid pixels as well as the substitute values in the masked holes (typically the mean value). This often leads to artifacts such as color discrepancy and blurriness. Post-processing is usually used to reduce such artifacts, but are expensive and may fail. We propose the use of partial convolutions, where the convolution is masked and renormalized to be conditioned on only valid pixels. We further include a mechanism to automatically generate an updated mask for the next layer as part of the forward pass. Our model outperforms other methods for irregular masks. We show qualitative and quantitative comparisons with other methods to validate our approach.",
"We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders – a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.",
"Most inpainting approaches require a good image model to infer the unknown pixels. In this work, we directly learn a mapping from image patches, corrupted by missing pixels, onto complete image patches. This mapping is represented as a deep neural network that is automatically trained on a large image data set. In particular, we are interested in the question whether it is helpful to exploit the shape information of the missing regions, i.e. the masks, which is something commonly ignored by other approaches. In comprehensive experiments on various images, we demonstrate that our learning-based approach is able to use this extra information and can achieve state-of-the-art inpainting results. Furthermore, we show that training with such extra information is useful for blind inpainting, where the exact shape of the missing region might be uncertain, for instance due to aliasing effects.",
"We present a novel approach for image completion that results in images that are both locally and globally consistent. With a fully-convolutional neural network, we can complete images of arbitrary resolutions by filling-in missing regions of any shape. To train this image completion network to be consistent, we use global and local context discriminators that are trained to distinguish real images from completed ones. The global discriminator looks at the entire image to assess if it is coherent as a whole, while the local discriminator looks only at a small area centered at the completed region to ensure the local consistency of the generated patches. The image completion network is then trained to fool the both context discriminator networks, which requires it to generate images that are indistinguishable from real ones with regard to overall consistency as well as in details. We show that our approach can be used to complete a wide variety of scenes. Furthermore, in contrast with the patch-based approaches such as PatchMatch, our approach can generate fragments that do not appear elsewhere in the image, which allows us to naturally complete the images of objects with familiar and highly specific structures, such as faces.",
"In this paper, we focus on image inpainting task, aiming at recovering the missing area of an incomplete image given the context information. Recent development in deep generative models enables an efficient end-to-end framework for image synthesis and inpainting tasks, but existing methods based on generative models don't exploit the segmentation information to constrain the object shapes, which usually lead to blurry results on the boundary. To tackle this problem, we propose to introduce the semantic segmentation information, which disentangles the inter-class difference and intra-class variation for image inpainting. This leads to much clearer recovered boundary between semantically different regions and better texture within semantically consistent segments. Our model factorizes the image inpainting process into segmentation prediction (SP-Net) and segmentation guidance (SG-Net) as two steps, which predict the segmentation labels in the missing area first, and then generate segmentation guided inpainting results. Experiments on multiple public datasets show that our approach outperforms existing methods in optimizing the image inpainting quality, and the interactive segmentation guidance provides possibilities for multi-modal predictions of image inpainting."
]
} |
1903.04227 | 2921562060 | Most image completion methods produce only one result for each masked input, although there may be many reasonable possibilities. In this paper, we present an approach for -- the task of generating multiple and diverse plausible solutions for image completion. A major challenge faced by learning-based approaches is that usually only one ground truth training instance per label. As such, sampling from conditional VAEs still leads to minimal diversity. To overcome this, we propose a novel and probabilistically principled framework with two parallel paths. One is a reconstructive path that utilizes the only one given ground truth to get prior distribution of missing parts and rebuild the original image from this distribution. The other is a generative path for which the conditional prior is coupled to the distribution obtained in the reconstructive path. Both are supported by GANs. We also introduce a new short+long term attention layer that exploits distant relations among decoder and encoder features, improving appearance consistency. When tested on datasets with buildings (Paris), faces (CelebA-HQ), and natural images (ImageNet), our method not only generated higher-quality completion results, but also with multiple and diverse plausible outputs. | To overcome the above problems, Yang et al. @cite_10 proposed multi-scale neural patch synthesis, which generates high-frequency details by copying patches from mid-layer features. However, this optimization is computationally costly. More recently, several works @cite_55 @cite_40 @cite_9 exploited spatial attention @cite_47 @cite_35 to get high-frequency details. Yu et al. @cite_55 presented a contextual attention layer to copy similar features from visible regions to the holes. Yan et al. @cite_32 and Song et al. @cite_37 proposed PatchMatch-like ideas in the feature domain.
However, these methods identify similar features by comparing features of holes and features of visible regions, which is somewhat contradictory as feature transfer is unnecessary when two features are very similar, but when needed the features are too different to be matched easily. Furthermore, distant information is not used for new content that differs from visible regions. Our model will solve this problem by extending self-attention @cite_27 to harness abundant context. | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_55",
"@cite_9",
"@cite_32",
"@cite_40",
"@cite_27",
"@cite_47",
"@cite_10"
],
"mid": [
"",
"2796286534",
"2784790939",
"",
"2963270367",
"",
"2950893734",
"2184016288",
"2557414982"
],
"abstract": [
"",
"We study the task of image inpainting, which is to fill in the missing region of an incomplete image with plausible contents. To this end, we propose a learning-based approach to generate visually coherent completion given a high-resolution image with missing components. In order to overcome the difficulty to directly learn the distribution of high-dimensional image data, we divide the task into inference and translation as two separate steps and model each step with a deep neural network. We also use simple heuristics to guide the propagation of local textures from the boundary to the hole. We show that, by using such techniques, inpainting reduces to the problem of learning two image-feature translation functions in much smaller space and hence easier to train. We evaluate our method on several public datasets and show that we generate results of better visual quality than previous state-of-the-art methods.",
"Recent deep learning based approaches have shown promising results on image inpainting for the challenging task of filling in large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create distorted structures or blurry textures inconsistent with surrounding areas. This is mainly due to ineffectiveness of convolutional neural networks in explicitly borrowing or copying information from distant spatial locations. On the other hand, traditional texture and patch synthesis approaches are particularly suitable when it needs to borrow textures from the surrounding regions. Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions. The model is a feed-forward, fully convolutional neural network which can process images with multiple holes at arbitrary locations and with variable sizes during the test time. Experiments on multiple datasets including faces, textures and natural images demonstrate that the proposed approach generates higher-quality inpainting results than existing ones. Code and trained models will be released.",
"",
"Deep convolutional networks (CNNs) have exhibited their potential in image inpainting for producing plausible results. However, in most existing methods, e.g., context encoder, the missing parts are predicted by propagating the surrounding convolutional features through a fully connected layer, which intends to produce semantically plausible but blurry result. In this paper, we introduce a special shift-connection layer to the U-Net architecture, namely Shift-Net, for filling in missing regions of any shape with sharp structures and fine-detailed textures. To this end, the encoder feature of the known region is shifted to serve as an estimation of the missing parts. A guidance loss is introduced on decoder feature to minimize the distance between the decoder feature after fully connected layer and the ground-truth encoder feature of the missing parts. With such constraint, the decoder feature in missing region can be used to guide the shift of encoder feature in known region. An end-to-end learning algorithm is further developed to train the Shift-Net. Experiments on the Paris StreetView and Places datasets demonstrate the efficiency and effectiveness of our Shift-Net in producing sharper, fine-detailed, and visually plausible results. The codes and pre-trained models are available at https: github.com Zhaoyi-Yan Shift-Net.",
"",
"In this paper, we propose the Self-Attention Generative Adversarial Network (SAGAN) which allows attention-driven, long-range dependency modeling for image generation tasks. Traditional convolutional GANs generate high-resolution details as a function of only spatially local points in lower-resolution feature maps. In SAGAN, details can be generated using cues from all feature locations. Moreover, the discriminator can check that highly detailed features in distant portions of the image are consistent with each other. Furthermore, recent work has shown that generator conditioning affects GAN performance. Leveraging this insight, we apply spectral normalization to the GAN generator and find that this improves training dynamics. The proposed SAGAN achieves the state-of-the-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing Frechet Inception distance from 27.62 to 18.65 on the challenging ImageNet dataset. Visualization of the attention layers shows that the generator leverages neighborhoods that correspond to object shapes rather than local regions of fixed shape.",
"Deep learning has recently been introduced to the field of low-level computer vision and image processing. Promising results have been obtained in a number of tasks including super-resolution, inpainting, deconvolution, filtering, etc. However, previously adopted neural network approaches such as convolutional neural networks and sparse auto-encoders are inherently with translation invariant operators. We found this property prevents the deep learning approaches from outperforming the state-of-the-art if the task itself requires translation variant interpolation (TVI). In this paper, we draw on Shepard interpolation and design Shepard Convolutional Neural Networks (ShCNN) which efficiently realizes end-to-end trainable TVI operators in the network. We show that by adding only a few feature maps in the new Shepard layers, the network is able to achieve stronger results than a much deeper architecture. Superior performance on both image in-painting and super-resolution is obtained where our system outperforms previous ones while keeping the running time competitive.",
"Recent advances in deep learning have shown exciting promise in filling large holes in natural images with semantically plausible and context aware details, impacting fundamental image manipulation tasks such as object removal. While these learning-based methods are significantly more effective in capturing high-level features than prior techniques, they can only handle very low-resolution inputs due to memory limitations and difficulty in training. Even for slightly larger images, the inpainted regions would appear blurry and unpleasant boundaries become visible. We propose a multi-scale neural patch synthesis approach based on joint optimization of image content and texture constraints, which not only preserves contextual structures but also produces high-frequency details by matching and adapting patches with the most similar mid-layer feature correlations of a deep classification network. We evaluate our method on the ImageNet and Paris Streetview datasets and achieved state-of-the-art inpainting accuracy. We show our approach produces sharper and more coherent results than prior methods, especially for high-resolution images."
]
} |
1903.04227 | 2921562060 | Most image completion methods produce only one result for each masked input, although there may be many reasonable possibilities. In this paper, we present an approach for pluralistic image completion -- the task of generating multiple and diverse plausible solutions for image completion. A major challenge faced by learning-based approaches is that there is usually only one ground truth training instance per label. As such, sampling from conditional VAEs still leads to minimal diversity. To overcome this, we propose a novel and probabilistically principled framework with two parallel paths. One is a reconstructive path that utilizes the single given ground truth to get the prior distribution of missing parts and rebuild the original image from this distribution. The other is a generative path for which the conditional prior is coupled to the distribution obtained in the reconstructive path. Both are supported by GANs. We also introduce a new short+long term attention layer that exploits distant relations among decoder and encoder features, improving appearance consistency. When tested on datasets with buildings (Paris), faces (CelebA-HQ), and natural images (ImageNet), our method not only generated higher-quality completion results, but also produced multiple and diverse plausible outputs. | Image generation has progressed significantly using methods such as VAEs @cite_44 and GANs @cite_14 . These have been applied to conditional image generation tasks, such as image translation @cite_31 , synthetic-to-realistic translation @cite_28 , future prediction @cite_18 , and 3D models @cite_7 . Perhaps most relevant are conditional VAEs (CVAE) @cite_15 @cite_53 and CVAE-GAN @cite_13 , but these were not specifically targeted at image completion. CVAE-based methods are most useful when the conditional labels are few and discrete, and there are sufficient training instances per label. 
Some recent work utilizing these in image translation can produce diverse outputs @cite_4 @cite_50 , but in such situations the condition-to-sample mappings are more local (e.g., pixel-to-pixel), and only change the visual appearance. This is untrue for image completion, where the conditional label is itself the masked image, with only one training instance of the original holes. In @cite_19 , different outputs were obtained for face completion by specifying facial attributes (e.g., smile), but this method is very domain specific, requiring targeted attributes. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_15",
"@cite_28",
"@cite_53",
"@cite_44",
"@cite_19",
"@cite_50",
"@cite_31",
"@cite_13"
],
"mid": [
"2248556341",
"2099471712",
"2768959015",
"2598591334",
"2188365844",
"2950594409",
"2470142083",
"",
"2784649957",
"2885192629",
"2963073614",
"2963426391"
],
"abstract": [
"Learning to predict future images from a video sequence involves the construction of an internal representation that models the image evolution accurately, and therefore, to some degree, its content and dynamics. This is why pixel-space video prediction may be viewed as a promising avenue for unsupervised feature learning. In addition, while optical flow has been a very studied problem in computer vision for a long time, future frame prediction is rarely approached. Still, many vision applications could benefit from the knowledge of the next frames of videos, that does not require the complexity of tracking every pixel trajectories. In this work, we train a convolutional network to generate future frames given an input sequence. To deal with the inherently blurry predictions obtained from the standard Mean Squared Error (MSE) loss function, we propose three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function. We compare our predictions to different published results based on recurrent neural networks on the UCF101 dataset",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"Many image-to-image translation problems are ambiguous, as a single input image may correspond to multiple possible outputs. In this work, we aim to model a distribution of possible outputs in a conditional generative modeling setting. The ambiguity of the mapping is distilled in a low-dimensional latent vector, which can be randomly sampled at test time. A generator learns to map the given input, combined with this latent code, to the output. We explicitly encourage the connection between output and the latent code to be invertible. This helps prevent a many-to-one mapping from the latent code to the output during training, also known as the problem of mode collapse, and produces more diverse results. We explore several variants of this approach by employing different training objectives, network architectures, and methods of injecting the latent code. Our proposed method encourages bijective consistency between the latent encoding and output modes. We present a systematic comparison of our method and other variants on both perceptual realism and diversity.",
"We present a transformation-grounded image generation network for novel 3D view synthesis from a single image. Our approach first explicitly infers the parts of the geometry visible both in the input and novel views and then casts the remaining synthesis problem as image completion. Specifically, we both predict a flow to move the pixels from the input to the novel view along with a novel visibility map that helps deal with occlusion/disocclusion. Next, conditioned on those intermediate results, we hallucinate (infer) parts of the object invisible in the input image. In addition to the new network structure, training with a combination of adversarial and perceptual loss results in a reduction in common artifacts of novel view synthesis such as distortions and holes, while successfully generating high frequency details and preserving visual aspects of the input image. We evaluate our approach on a wide range of synthetic and real examples. Both qualitative and quantitative results show our method achieves significantly better results compared to existing methods.",
"Supervised deep learning has been successfully applied to many recognition problems. Although it can approximate a complex many-to-one function well when a large amount of training data is provided, it is still challenging to model complex structured output representations that effectively perform probabilistic inference and make diverse predictions. In this work, we develop a deep conditional generative model for structured output prediction using Gaussian latent variables. The model is trained efficiently in the framework of stochastic gradient variational Bayes, and allows for fast prediction using stochastic feed-forward inference. In addition, we provide novel strategies to build robust structured prediction algorithms, such as input noise-injection and multi-scale prediction objective at training. In experiments, we demonstrate the effectiveness of our proposed algorithm in comparison to the deterministic deep neural network counterparts in generating diverse but realistic structured output predictions using stochastic inference. Furthermore, the proposed training methods are complementary, which leads to strong pixel-level object segmentation and semantic labeling performance on Caltech-UCSD Birds 200 and the subset of Labeled Faces in the Wild dataset.",
"Current methods for single-image depth estimation use training datasets with real image-depth pairs or stereo pairs, which are not easy to acquire. We propose a framework, trained on synthetic image-depth pairs and unpaired real images, that comprises an image translation network for enhancing realism of input images, followed by a depth prediction network. A key idea is having the first network act as a wide-spectrum input translator, taking in either synthetic or real images, and ideally producing minimally modified realistic images. This is done via a reconstruction loss when the training input is real, and GAN loss when synthetic, removing the need for heuristic self-regularization. The second network is trained on a task loss for synthetic image-depth pairs, with extra GAN loss to unify real and synthetic feature distributions. Importantly, the framework can be trained end-to-end, leading to good results, even surpassing early deep-learning methods that use real paired data.",
"In a given scene, humans can easily predict a set of immediate future events that might happen. However, pixel-level anticipation in computer vision is difficult because machine learning struggles with the ambiguity in predicting the future. In this paper, we focus on predicting the dense trajectory of pixels in a scene—what will move in the scene, where it will travel, and how it will deform over the course of one second. We propose a conditional variational autoencoder as a solution to this problem. In this framework, direct inference from the image shapes the distribution of possible trajectories while latent variables encode information that is not available in the image. We show that our method predicts events in a variety of scenes and can produce multiple different predictions for an ambiguous future. We also find that our method learns a representation that is applicable to semantic vision tasks.",
"",
"We present a deep learning approach for high resolution face completion with multiple controllable attributes (e.g., male and smiling) under arbitrary masks. Face completion entails understanding both structural meaningfulness and appearance consistency locally and globally to fill in \"holes\" whose content does not appear elsewhere in an input image. It is a challenging task with the difficulty level increasing significantly with respect to high resolution, the complexity of \"holes\" and the controllable attributes of filled-in fragments. Our system addresses the challenges by learning a fully end-to-end framework that trains generative adversarial networks (GANs) progressively from low resolution to high resolution with conditional vectors encoding controllable attributes. We design novel network architectures to exploit information across multiple scales effectively and efficiently. We introduce new loss functions encouraging sharp completion. We show that our system can complete faces with large structural and appearance variations using a single feed-forward pass of computation with mean inference time of 0.007 seconds for images at 1024 x 1024 resolution. We also perform a pilot human study that shows our approach outperforms state-of-the-art face completion methods in terms of rank analysis. The code will be released upon publication.",
"Image-to-image translation aims to learn the mapping between two visual domains. There are two main challenges for many applications: (1) the lack of aligned training pairs and (2) multiple possible outputs from a single input image. In this work, we present an approach based on disentangled representation for producing diverse outputs without paired training images. To achieve diversity, we propose to embed images onto two spaces: a domain-invariant content space capturing shared information across domains and a domain-specific attribute space. Using the disentangled features as inputs greatly reduces mode collapse. To handle unpaired training data, we introduce a novel cross-cycle consistency loss. Qualitative results show that our model can generate diverse and realistic images on a wide range of tasks. We validate the effectiveness of our approach through extensive evaluation.",
"We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Moreover, since the release of the pix2pix software associated with this paper, hundreds of twitter users have posted their own artistic experiments using our system. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without handengineering our loss functions either.",
"We present variational generative adversarial networks, a general learning framework that combines a variational auto-encoder with a generative adversarial network, for synthesizing images in fine-grained categories, such as faces of a specific person or objects in a category. Our approach models an image as a composition of label and latent attributes in a probabilistic model. By varying the fine-grained category label fed into the resulting generative model, we can generate images in a specific category with randomly drawn values on a latent attribute vector. Our approach has two novel aspects. First, we adopt a cross entropy loss for the discriminative and classifier network, but a mean discrepancy objective for the generative network. This kind of asymmetric loss function makes the GAN training more stable. Second, we adopt an encoder network to learn the relationship between the latent space and the real image space, and use pairwise feature matching to keep the structure of generated images. We experiment with natural images of faces, flowers, and birds, and demonstrate that the proposed models are capable of generating realistic and diverse samples with fine-grained category labels. We further show that our models can be applied to other tasks, such as image inpainting, super-resolution, and data augmentation for training better face recognition models."
]
} |
1903.04454 | 2921876925 | We study the asymptotic behavior of Masur-Veech volumes as the genus goes to infinity. We show the existence of a complete asymptotic expansion of these volumes that depends only on the genus and the number of singularities. The computation of the first term of this asymptotic expansion was a long-standing problem. This problem was recently solved by Aggarwal using purely combinatorial arguments, and then by D. Chen, M. Moeller, D. Zagier and the author using algebro-geometric insights. Our proof relies on a combination of both methods. | Recently, the large genus asymptotics of numerical invariants associated to moduli spaces of curves have interested geometers and physicists. For example, the asymptotic expansion of Weil-Petersson volumes was studied in @cite_11 and @cite_8 , while the asymptotic behavior of Gromov-Witten invariants and Hurwitz numbers was studied in @cite_13 or @cite_1 . | {
"cite_N": [
"@cite_1",
"@cite_8",
"@cite_13",
"@cite_11"
],
"mid": [
"2906679210",
"2466470134",
"2745614482",
"2117489802"
],
"abstract": [
"",
"We establish the asymptotic expansion of certain integrals of ψ classes on moduli spaces of curves ℳ¯g,n, when either g or n goes to infinity. Our main tools are cut-join type recursion formulae from the Witten–Kontsevich theorem, as well as asymptotics of solutions to the first Painleve equation. We also raise a conjecture on large genus asymptotics for n-point functions of ψ classes and partially verify the positivity of coefficients in generalized Mirzakhani’s formula of higher Weil–Petersson volumes.",
"The purpose of this note is to share some observations and speculations concerning the asymptotic behavior of Gromov-Witten invariants. They may be indicative of some deep phenomena in symplectic topology that in full generality are outside of the reach of current techniques. On the other hand, many interesting cases can perhaps be treated via combinatorial techniques.",
"We explicitly compute the diverging factor in the large genus asymptotics of the Weil–Petersson volumes of the moduli spaces of n-pointed complex algebraic curves. Modulo a universal multiplicative constant we prove the existence of a complete asymptotic expansion of the Weil–Petersson volumes in the inverse powers of the genus with coefficients that are polynomials in n. This is done by analyzing various recursions for the more general intersection numbers of tautological classes, whose large genus asymptotic behavior is also extensively studied."
]
} |
1903.04454 | 2921876925 | We study the asymptotic behavior of Masur-Veech volumes as the genus goes to infinity. We show the existence of a complete asymptotic expansion of these volumes that depends only on the genus and the number of singularities. The computation of the first term of this asymptotic expansion was a long-standing problem. This problem was recently solved by Aggarwal using purely combinatorial arguments, and then by D. Chen, M. Moeller, D. Zagier and the author using algebro-geometric insights. Our proof relies on a combination of both methods. | These problems often admit a counterpart in terms of asymptotic dynamical properties of random surfaces (see @cite_3 or @cite_7 ), or in string theory: one can either consider systems with a large number of particles (see @cite_4 ) or the asymptotic behavior of perturbative expansions (see @cite_5 ). | {
"cite_N": [
"@cite_5",
"@cite_4",
"@cite_7",
"@cite_3"
],
"mid": [
"2544712211",
"2557781035",
"1982041505",
"2766415863"
],
"abstract": [
"The quest to find a nonperturbative formulation of topological string theory has recently seen two unrelated developments. On the one hand, via quantization of the mirror curve associated to a toric Calabi-Yau background, it has been possible to give a nonperturbative definition of the topological-string partition function. On the other hand, using techniques of resurgence and transseries, it has been possible to extend the string (asymptotic) perturbative expansion into a transseries involving nonperturbative instanton sectors. Within the specific example of the local P2 toric Calabi-Yau threefold, the present work shows how the Borel-Pade-Ecalle resummation of this resurgent transseries, alongside occurrence of Stokes phenomenon, matches the string-theoretic partition function obtained via quantization of the mirror curve. This match is highly non-trivial, given the unrelated nature of both nonperturbative frameworks, signaling at the existence of a consistent underlying structure.",
"We consider massless string scattering amplitudes in a limit where the number of external particles becomes very large, while the energy of each particle remains small. Using the growth of the volume of the relevant moduli space, and by means of independent numerical evidence, we argue that string perturbation theory breaks down in this limit. We discuss some remarkable implications for the information paradox.",
"",
"We prove Poisson approximation results for the bottom part of the length spectrum of a random closed hyperbolic surface of large genus. Here, a random hyperbolic surface is a surface picked at random using the Weil-Petersson volume form on the corresponding moduli space. As an application of our result, we compute the large genus limit of the expected systole."
]
} |
1903.04454 | 2921876925 | We study the asymptotic behavior of Masur-Veech volumes as the genus goes to infinity. We show the existence of a complete asymptotic expansion of these volumes that depends only on the genus and the number of singularities. The computation of the first term of this asymptotic expansion was a long-standing problem. This problem was recently solved by Aggarwal using purely combinatorial arguments, and then by D. Chen, M. Moeller, D. Zagier and the author using algebro-geometric insights. Our proof relies on a combination of both methods. | Finally, let us mention that A. Eskin and A. Zorich proposed a series of four conjectures about the limits of numerical invariants of strata of abelian differentials: they considered Masur-Veech volumes, area Siegel-Veech constants, and refinements of these two functions according to spin parity (see @cite_14 ). All conjectures were solved in @cite_6 , @cite_0 and @cite_9 . Using the arguments of the present text, one can show that these four functions admit asymptotic expansions in the spirit of Theorem . However, we only consider the Masur-Veech volumes to keep the presentation short and clear. | {
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_14",
"@cite_6"
],
"mid": [
"2897234425",
"2907635706",
"1413288907",
"2797574323"
],
"abstract": [
"In this paper we consider the large genus asymptotics for two classes of Siegel-Veech constants associated with an arbitrary connected stratum @math of Abelian differentials. The first is the saddle connection Siegel-Veech constant @math counting saddle connections between two distinct, fixed zeros of prescribed orders @math and @math , and the second is the area Siegel-Veech constant @math counting maximal cylinders weighted by area. By combining a combinatorial analysis of explicit formulas of Eskin-Masur-Zorich that express these constants in terms of Masur-Veech strata volumes, with a recent result for the large genus asymptotics of these volumes, we show that @math and @math , both as @math tends to @math . The former result confirms a prediction of Zorich and the latter confirms one of Eskin-Zorich in the case of connected strata.",
"We show that the Masur-Veech volumes and area Siegel-Veech constants can be obtained by intersection numbers on the strata of Abelian differentials with prescribed orders of zeros. As applications, we evaluate their large genus limits and compute the saddle connection Siegel-Veech constants for all strata. We also show that the same results hold for the spin and hyper-elliptic components of the strata.",
"We state conjectures on the asymptotic behavior of the volumes of moduli spaces of Abelian differentials and their Siegel–Veech constants as genus tends to infinity. We provide certain numerical evidence, describe recent advances and the state of the art towards proving these conjectures.",
"In this paper we consider the large genus asymptotics for Masur-Veech volumes of arbitrary strata of Abelian differentials. Through a combinatorial analysis of an algorithm proposed in 2002 by Eskin-Okounkov to exactly evaluate these quantities, we show that the volume @math of a stratum indexed by a partition @math is @math as @math tends to @math . This confirms a prediction of Eskin-Zorich and generalizes some of the recent results of Chen-Moeller-Zagier and Sauvaget, who established these limiting statements in the special cases @math and @math , respectively. We also include an Appendix by Anton Zorich that uses our main result to deduce the large genus asymptotics for Siegel-Veech constants that count certain types of saddle connections."
]
} |
1903.04337 | 2921251356 | Preventing early progression of epilepsy and so the severity of seizures requires an effective diagnosis. Epileptic transients indicate the ability to develop seizures, but humans overlook such brief events in an electroencephalogram (EEG), which compromises patient treatment. Traditionally, training of EEG event detection algorithms has relied on ground truth labels obtained from the consensus of the majority of labelers. In this work, we go beyond labeler consensus on EEG data. Our event descriptor integrates EEG signal features with a one-hot encoded labeler category, which is key to improved generalization performance. Notably, boosted decision trees take advantage of singly-labeled but more varied training sets. Our quantitative experiments show the proposed labeler-hot epileptic event detector consistently outperforms a consensus-trained detector and maintains confidence bounds of the detection. The results on our infant EEG recordings suggest datasets can gain higher event variety faster, and thus better performance, by shifting available human effort from consensus-oriented to separate labeling when labels include both the event and the labeler category. | Different approaches to learning from noisy labels have been studied before. Ground truth can be estimated from multiple noisy labels using crowdsourcing. Besides naive majority voting, more sophisticated algorithms based on EM and labeler reliability estimation were proposed @cite_5 @cite_24 @cite_25 but require high redundancy of labels @cite_12 . Recently, to overcome the high-redundancy requirement, an EM algorithm used predicted labels as ground truth to estimate the labeler confusion matrix @cite_26 . There are also results specifically in the area of time series labeling, which are more related to EEG annotations than image labeling @cite_21 @cite_27 . 
Another line of work tweaks the loss function to incorporate assumptions about a uniform noise process disturbing the labels @cite_6 @cite_13 @cite_2 . There has also been a significant amount of work in the area of active learning @cite_16 @cite_15 , which asks for more labels for inconsistent examples. | {
"cite_N": [
"@cite_26",
"@cite_21",
"@cite_6",
"@cite_24",
"@cite_27",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_13",
"@cite_25",
"@cite_12"
],
"mid": [
"2772357980",
"2506591631",
"2003630567",
"",
"2897725862",
"",
"9014458",
"",
"",
"",
"",
"2538903535"
],
"abstract": [
"Supervised learning depends on annotated examples, which are taken to be the ground truth. But these labels often come from noisy crowdsourcing platforms, like Amazon Mechanical Turk. Practitioners typically collect multiple labels per example and aggregate the results to mitigate noise (the classic crowdsourcing problem). Given a fixed annotation budget and unlimited unlabeled data, redundant annotation comes at the expense of fewer labeled examples. This raises two fundamental questions: (1) How can we best learn from noisy workers? (2) How should we allocate our labeling budget to maximize the performance of a classifier? We propose a new algorithm for jointly modeling labels and worker quality from noisy crowd-sourced data. The alternating minimization proceeds in rounds, estimating worker quality from disagreement with the current model and then updating the model by optimizing a loss function that accounts for the current estimate of worker quality. Unlike previous approaches, even with only one annotation per example, our algorithm can estimate worker quality. We establish a generalization error bound for models learned with our algorithm and establish theoretically that it's better to label many examples once (vs less multiply) when worker quality is above a threshold. Experiments conducted on both ImageNet (with simulated noisy workers) and MS-COCO (using the real crowdsourced labels) confirm our algorithm's benefits.",
"Studies of time-continuous human behavioral phenomena often rely on ratings from multiple annotators. Since the ground truth of the target construct is often latent, the standard practice is to use ad-hoc metrics (such as averaging annotator ratings). Despite being easy to compute, such metrics may not provide accurate representations of the underlying construct. In this paper, we present a novel method for modeling multiple time series annotations over a continuous variable that computes the ground truth by modeling annotator specific distortions. We condition the ground truth on a set of features extracted from the data and further assume that the annotators provide their ratings as modification of the ground truth, with each annotator having specific distortion tendencies. We train the model using an Expectation-Maximization based algorithm and evaluate it on a study involving natural interaction between a child and a psychologist, to predict confidence ratings of the children’s smiles. We compare and analyze the model against two baselines where: (i) the ground truth in considered to be framewise mean of ratings from various annotators and, (ii) each annotator is assumed to bear a distinct time delay in annotation and their annotations are aligned before computing the framewise mean.",
"Summary: Epilepsy is associated with a two- to three-fold increase in mortality. Studies of cause-specific mortality show that deaths may be classified into those that are directly or indirectly related to epilepsy, those that are related to the underlying pathology giving rise to epilepsy, and those that are unrelated to both epilepsy and its causes. Overall, direct epilepsy related deaths are infrequent. Pneumonia, especially in the elderly, central nervous system (CNS) and non-CNS neoplasias, and cerebrovascular disease are frequent causes of death. Suicides, accidental deaths, and ischemic heart disease do not appear to be significant contributors to mortality in community-based studies. In hospital institution-based analyses, epilepsy-related deaths are common and sudden unexpected death in epilepsy (SUDEP) may account for up to 17% of all deaths in epilepsy. A small proportion of these deaths may be witnessed and most such witnessed deaths occur in relation to convulsive seizures. The exact pathogenetic mechanisms are unknown although it is very probable that lack of seizure control is an important risk factor. Patients who continue to suffer seizures appear to have an almost 40 times higher risk of mortality than those in remission. Key Words: Mortality—Epilepsy.",
"",
"Emotions are often perceived by humans through a series of multimodal cues, such as verbal expressions, facial expressions and gestures. In order to recognise emotions automatically, reliable emotional labels are required to learn a mapping from human expressions to corresponding emotions. Dimensional emotion models have become popular and have been widely applied for annotating emotions continuously in the time domain. However, the statistical relationship between emotional dimensions is rarely studied. This paper provides a solution to automatic emotion recognition for the Audio Visual Emotion Challenge (AVEC) 2018. The objective is to find a robust way to detect emotions using more reliable emotion annotations in the valence and arousal dimensions. The two main contributions of this paper are: 1) the proposal of a new approach capable of generating more dependable emotional ratings for both arousal and valence from multiple annotators by extracting consistent annotation features; 2) the exploration of the valence and arousal distribution using outlier detection methods, which shows a specific oblique elliptic shape. With the learned distribution, we are able to detect the prediction outliers based on their local density deviations and correct them towards the learned distribution. The proposed method's performance is evaluated on the RECOLA database containing audio, video and physiological recordings. Our results show that a moving average filter is sufficient to remove the incidental errors in annotations. The unsupervised dimensionality reduction approaches could be used to determine gold standard annotations from multiple annotations. Compared with the baseline model of AVEC 2018, our approach improved the arousal and valence prediction of concordance correlation coefficient significantly to respectively 0.821 and 0.589.",
"",
"In compiling a patient record many facets are subject to errors of measurement. A model is presented which allows individual error-rates to be estimated for polytomous facets even when the patient's \"true\" response is not available. The EM algorithm is shown to provide a slow but sure way of obtaining maximum likelihood estimates of the parameters of interest. Some preliminary experience is reported and the limitations of the method are described.",
"",
"",
"",
"",
"Requesters on crowdsourcing platforms, such as Amazon Mechanical Turk, routinely insert gold questions to verify that a worker is diligent and is providing high-quality answers. However, there is no clear understanding of when and how many gold questions to insert. Typically, requesters mix a flat 10-30% of gold questions into the task stream of every worker. This static policy is arbitrary and wastes valuable budget --- the exact percentage is often chosen with little experimentation, and, more importantly, it does not adapt to individual workers, the current mixture of spamming vs. diligent workers, or the number of tasks workers perform before quitting. We formulate the problem of balancing between (1) testing workers to determine their accuracy and (2) actually getting work performed as a partially-observable Markov decision process (POMDP) and apply reinforcement learning to dynamically calculate the best policy. Evaluations on both synthetic data and with real Mechanical Turk workers show that our agent learns adaptive testing policies that produce up to 111% more reward than the non-adaptive policies used by most requesters. Furthermore, our method is fully automated, easy to apply, and runs mostly out of the box."
]
} |
1903.04337 | 2921251356 | Preventing early progression of epilepsy and so the severity of seizures requires an effective diagnosis. Epileptic transients indicate the ability to develop seizures, but humans overlook such brief events in an electroencephalogram (EEG), which compromises patient treatment. Traditionally, training of the EEG event detection algorithms has relied on ground truth labels, obtained from the consensus of the majority of labelers. In this work, we go beyond labeler consensus on EEG data. Our event descriptor integrates EEG signal features with a one-hot encoded labeler category that is a key to improved generalization performance. Notably, boosted decision trees take advantage of singly-labeled but more varied training sets. Our quantitative experiments show the proposed labeler-hot epileptic event detector consistently outperforms a consensus-trained detector and maintains confidence bounds of the detection. The results on our infant EEG recordings suggest datasets can gain higher event variety faster and thus better performance by shifting available human effort from consensus-oriented to separate labeling when labels include both the event and the labeler category. | Detection of epileptiform EEG discharges has been addressed before as well. Detection has usually used time domain features (e.g. amplitude, duration, curvature sharpness, complication, fractal dimension, sample entropy), frequency domain features (e.g. spectral power density, Hjorth's parameters, phase congruence) or wavelet domain features (e.g. wavelet coefficients). Different classification algorithms were applied: SVM, logistic regression, boosted trees, random forests. There were also experiments with clustering and anomaly detection methods and dynamic time warping. For a comprehensive review see @cite_19 @cite_20 . Spike detection has also attracted attention in the deep learning community @cite_4 . Commercial implementations also exist @cite_14 . | {
"cite_N": [
"@cite_19",
"@cite_14",
"@cite_4",
"@cite_20"
],
"mid": [
"2130947713",
"2553360915",
"2402787549",
""
],
"abstract": [
"For algorithm developers, this review details recent approaches to the problem, compares the accuracy of various algorithms, identifies common testing issues and proposes some solutions. For the algorithm user, e.g. electroencephalograph (EEG) technician or neurologist, this review provides an estimate of algorithm accuracy and comparison to that of human experts. Manuscripts dated from 1975 are reviewed. Progress since Frost's 1985 review of the state of the art is discussed. Twenty-five manuscripts are reviewed. Many novel methods have been proposed including neural networks and high-resolution frequency methods. Algorithm accuracy is less than that of experts, but the accuracy of experts is probably less than what is commonly believed. Larger record sets will be required for expert-level detection algorithms.",
"Abstract Objective Compare the spike detection performance of three skilled humans and three computer algorithms. Methods 40 prolonged EEGs, 35 containing reported spikes, were evaluated. Spikes and sharp waves were marked by the humans and algorithms. Pairwise sensitivity and false positive rates were calculated for each human–human and algorithm-human pair. Differences in human pairwise performance were calculated and compared to the range of algorithm versus human performance differences as a type of statistical Turing test. Results 5474 individual spike events were marked by the humans. Mean, pairwise human sensitivities and false positive rates were 40.0 , 42.1 , and 51.5 , and 0.80, 0.97, and 1.99 min. Only the Persyst 13 (P13) algorithm was comparable to humans – 43.9 and 1.65 min. Evaluation of pairwise differences in sensitivity and false positive rate demonstrated that P13 met statistical noninferiority criteria compared to the humans. Conclusion Humans had only a fair level of agreement in spike marking. The P13 algorithm was statistically noninferior to the humans. Significance This was the first time that a spike detection algorithm and humans performed similarly. The performance comparison methodology utilized here is generally applicable to problems in which skilled human performance is the desired standard and no external gold standard exists.",
"The EEG of epileptic patients often contains sharp waveforms called \"spikes\", occurring between seizures. Detecting such spikes is crucial for diagnosing epilepsy. In this paper, we develop a convolutional neural network (CNN) for detecting spikes in EEG of epileptic patients in an automated fashion. The CNN has a convolutional architecture with filters of various sizes applied to the input layer, leaky ReLUs as activation functions, and a sigmoid output layer. Balanced mini-batches were applied to handle the imbalance in the data set. Leave-one-patient-out cross-validation was carried out to test the CNN and benchmark models on EEG data of five epilepsy patients. We achieved 0.947 AUC for the CNN, while the best performing benchmark model, Support Vector Machines with Gaussian kernel, achieved an AUC of 0.912.",
""
]
} |
1903.04192 | 2921318724 | Machine learning, especially deep neural networks, has been rapidly developed in fields including computer vision, speech recognition and reinforcement learning. Although Mini-batch SGD is one of the most popular stochastic optimization methods in training deep networks, it shows a slow convergence rate due to the large noise in gradient approximation. In this paper, we attempt to remedy this problem by building a more efficient batch selection method based on typicality sampling, which reduces the error of gradient estimation in conventional Minibatch SGD. We analyze the convergence rate of the resulting typical batch SGD algorithm and compare convergence properties between Minibatch SGD and the algorithm. Experimental results demonstrate that our batch selection scheme works well and more complex Minibatch SGD variants can benefit from the proposed batch selection strategy. | Many works have proposed using non-uniform batch selection methods for the optimization process in machine learning problems. The first remarkable attempt is curriculum learning , which processes the samples in an order of easiness and suggests that easy data points should be provided to the network at an early stage. @cite_9 propose self-paced learning that uses the loss on data to quantify the easiness, which makes the algorithm more accessible when dealing with real-world datasets. However, the measurements of easiness mentioned in these two works ignore the basic spatial distribution information of training set samples, making them hard to generalize to broader learning scenarios. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2132984949"
],
"abstract": [
"Latent variable models are a powerful tool for addressing several tasks in machine learning. However, the algorithms for learning the parameters of latent variable models are prone to getting stuck in a bad local optimum. To alleviate this problem, we build on the intuition that, rather than considering all samples simultaneously, the algorithm should be presented with the training data in a meaningful order that facilitates learning. The order of the samples is determined by how easy they are. The main challenge is that often we are not provided with a readily computable measure of the easiness of samples. We address this issue by proposing a novel, iterative self-paced learning algorithm where each iteration simultaneously selects easy samples and learns a new parameter vector. The number of samples selected is governed by a weight that is annealed until the entire training data has been considered. We empirically demonstrate that the self-paced learning algorithm outperforms the state of the art method for learning a latent structural SVM on four applications: object localization, noun phrase coreference, motif finding and handwritten digit recognition."
]
} |
1903.04192 | 2921318724 | Machine learning, especially deep neural networks, has been rapidly developed in fields including computer vision, speech recognition and reinforcement learning. Although Mini-batch SGD is one of the most popular stochastic optimization methods in training deep networks, it shows a slow convergence rate due to the large noise in gradient approximation. In this paper, we attempt to remedy this problem by building a more efficient batch selection method based on typicality sampling, which reduces the error of gradient estimation in conventional Minibatch SGD. We analyze the convergence rate of the resulting typical batch SGD algorithm and compare convergence properties between Minibatch SGD and the algorithm. Experimental results demonstrate that our batch selection scheme works well and more complex Minibatch SGD variants can benefit from the proposed batch selection strategy. | The approaches described in @cite_11 and @cite_3 take advantage of importance sampling to accelerate the training process. The first one proposes an online batch selection strategy that evaluates the importance by ranking all data points with respect to their latest loss value, while the latter one exhibits an unbiased estimate of the gradient with minimum variance by sampling proportional to the norm of the gradient. In practice, calculating the gradient norm of each sample needs a feed-forward process on all data at each iteration, which leads to quite considerable computational cost. Importance sampling with the loss value may be able to alleviate this issue, but it is not a proper approximation of the gradient norm. | {
"cite_N": [
"@cite_3",
"@cite_11"
],
"mid": [
"2177410802",
"2174940656"
],
"abstract": [
"Humans are able to accelerate their learning by selecting training materials that are the most informative and at the appropriate level of difficulty. We propose a framework for distributing deep learning in which one set of workers search for the most informative examples in parallel while a single worker updates the model on examples selected by importance sampling. This leads the model to update using an unbiased estimate of the gradient which also has minimum variance when the sampling proposal is proportional to the L2-norm of the gradient. We show experimentally that this method reduces gradient variance even in a context where the cost of synchronization across machines cannot be ignored, and where the factors for importance sampling are not updated instantly across the training set.",
"Deep neural networks are commonly trained using stochastic non-convex optimization procedures, which are driven by gradient information estimated on fractions (batches) of the dataset. While it is commonly accepted that batch size is an important parameter for offline tuning, the benefits of online selection of batches remain poorly understood. We investigate online batch selection strategies for two state-of-the-art methods of stochastic gradient-based optimization, AdaDelta and Adam. As the loss function to be minimized for the whole dataset is an aggregation of loss functions of individual datapoints, intuitively, datapoints with the greatest loss should be considered (selected in a batch) more frequently. However, the limitations of this intuition and the proper control of the selection pressure over time are open questions. We propose a simple strategy where all datapoints are ranked w.r.t. their latest known loss value and the probability to be selected decays exponentially as a function of rank. Our experimental results on the MNIST dataset suggest that selecting batches speeds up both AdaDelta and Adam by a factor of about 5."
]
} |
1903.04192 | 2921318724 | Machine learning, especially deep neural networks, has been rapidly developed in fields including computer vision, speech recognition and reinforcement learning. Although Mini-batch SGD is one of the most popular stochastic optimization methods in training deep networks, it shows a slow convergence rate due to the large noise in gradient approximation. In this paper, we attempt to remedy this problem by building a more efficient batch selection method based on typicality sampling, which reduces the error of gradient estimation in conventional Minibatch SGD. We analyze the convergence rate of the resulting typical batch SGD algorithm and compare convergence properties between Minibatch SGD and the algorithm. Experimental results demonstrate that our batch selection scheme works well and more complex Minibatch SGD variants can benefit from the proposed batch selection strategy. | Instead of manually designing a batch selection scheme, researchers have focused on training neural networks to select samples for the target network. The authors in @cite_1 construct a MentorNet to supervise the training process of the base deep networks through building a dynamic curriculum at each iteration. @cite_2 propose a deep reinforcement learning framework to develop an adaptive data selection method which filters important data points automatically. Although these two approaches show promising experimental results, both of them lack solid theoretical analysis and guarantees for speedup. Moreover, the training of an extra neural network is quite computationally expensive when applied to large-scale datasets. | {
"cite_N": [
"@cite_1",
"@cite_2"
],
"mid": [
"2885593519",
"2594061220"
],
"abstract": [
"Recent deep networks are capable of memorizing the entire data even when the labels are completely random. To overcome the overfitting on corrupted labels, we propose a novel technique of learning another neural network, called MentorNet, to supervise the training of the base deep networks, namely, StudentNet. During training, MentorNet provides a curriculum (sample weighting scheme) for StudentNet to focus on the sample the label of which is probably correct. Unlike the existing curriculum that is usually predefined by human experts, MentorNet learns a data-driven curriculum dynamically with StudentNet. Experimental results demonstrate that our approach can significantly improve the generalization performance of deep networks trained on corrupted training data. Notably, to the best of our knowledge, we achieve the best-published result on WebVision, a large benchmark containing 2.2 million images of real-world noisy labels. The code are at this https URL",
"Machine learning is essentially the sciences of playing with data. An adaptive data selection strategy, enabling to dynamically choose different data at various training stages, can reach a more effective model in a more efficient way. In this paper, we propose a deep reinforcement learning framework, which we call eural ata ilter (), to explore automatic and adaptive data selection in the training process. In particular, NDF takes advantage of a deep neural network to adaptively select and filter important data instances from a sequential stream of training data, such that the future accumulative reward (e.g., the convergence speed) is maximized. In contrast to previous studies in data selection that is mainly based on heuristic strategies, NDF is quite generic and thus can be widely suitable for many machine learning tasks. Taking neural network training with stochastic gradient descent (SGD) as an example, comprehensive experiments with respect to various neural network modeling (e.g., multi-layer perceptron networks, convolutional neural networks and recurrent neural networks) and several applications (e.g., image classification and text understanding) demonstrate that NDF powered SGD can achieve comparable accuracy with standard SGD process by using less data and fewer iterations."
]
} |
1903.04192 | 2921318724 | Machine learning, especially deep neural networks, has been rapidly developed in fields including computer vision, speech recognition and reinforcement learning. Although Mini-batch SGD is one of the most popular stochastic optimization methods in training deep networks, it shows a slow convergence rate due to the large noise in gradient approximation. In this paper, we attempt to remedy this problem by building a more efficient batch selection method based on typicality sampling, which reduces the error of gradient estimation in conventional Minibatch SGD. We analyze the convergence rate of the resulting typical batch SGD algorithm and compare convergence properties between Minibatch SGD and the algorithm. Experimental results demonstrate that our batch selection scheme works well and more complex Minibatch SGD variants can benefit from the proposed batch selection strategy. | More closely related to our work, @cite_6 resort to using a stratified sampling strategy for Minibatch SGD training. The authors first utilize a clustering algorithm to divide the training set into several groups, and then perform SRS in each group separately. This work is similar to ours, but differs significantly in two aspects: i) instead of revealing the training set structure by dividing it into clusters with k-means, we apply the t-SNE embedding algorithm to convert the training set into a low-dimensional space, which is better at capturing both local and global structure information of high-dimensional data while keeping the computational cost low. ii) we do not need the corresponding label of each sample. For our approach, we distinguish the typicality of each training sample by its contribution to the true gradient, which is then transformed into the form of density information in the practical implementation. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2113097878"
],
"abstract": [
"Stochastic Gradient Descent (SGD) is a popular optimization method which has been applied to many important machine learning tasks such as Support Vector Machines and Deep Neural Networks. In order to parallelize SGD, minibatch training is often employed. The standard approach is to uniformly sample a minibatch at each step, which often leads to high variance. In this paper we propose a stratified sampling strategy, which divides the whole dataset into clusters with low within-cluster variance; we then take examples from these clusters using a stratified sampling technique. It is shown that the convergence rate can be significantly improved by the algorithm. Encouraging experimental results confirm the effectiveness of the proposed method."
]
} |
1903.04188 | 2921289133 | We propose an application-tailored data-driven fully automated method for functional approximation of combinational circuits. We demonstrate how an application-level error metric such as the classification accuracy can be translated to a component-level error metric needed for an efficient and fast search in the space of approximate low-level components that are used in the application. This is possible by employing a weighted mean error distance (WMED) metric for steering the circuit approximation process which is conducted by means of genetic programming. WMED introduces a set of weights (calculated from the data distribution measured on a selected signal in a given application) determining the importance of each input vector for the approximation process. The method is evaluated using synthetic benchmarks and application-specific approximate MAC (multiply-and-accumulate) units that are designed to provide the best trade-offs between the classification accuracy and power consumption of two image classifiers based on neural networks. | Approximations have been introduced to circuits described at the transistor, gate @cite_14 @cite_6 , register-transfer and behavioral @cite_3 levels. Many authors have introduced approximate operations directly at the level of abstract circuit representations such as binary decision diagrams and and-inverter graphs @cite_15 . Basic functional approximation principles are: (i) truncation, which is based on reducing bit widths of registers and all operations of the data path; (ii) pruning, which lies in removing some parts of the circuit; (iii) component replacement, in which exact components are replaced with approximate components available in a library of approximate components; (iv) re-synthesis, in which the original logic function is replaced by a cheaper implementation; (v) other techniques such as table lookup etc. | {
"cite_N": [
"@cite_15",
"@cite_14",
"@cite_3",
"@cite_6"
],
"mid": [
"2537354715",
"1996431812",
"2140145164",
""
],
"abstract": [
"Approximation circuits offer superior performance (speed and area) compared to traditional circuits at the cost of computational accuracy. The accuracy of the results in approximation circuits is evaluated based on several error metrics such as worst-case error, bit-flip error, or error-rate. Several applications have varied requirements in error metrics, i.e., all the error criteria have to be met together at a time, or in combinations. Nevertheless, all applications benefit from improved delay and area. An automated synthesis approach with formal guarantees on error metrics is very helpful in generating circuits that meet these criteria. Furthermore, each of these metrics are independent quantities (value of one metric does not correlate with the other), and automated synthesis can discover opportunities to trade off one or more of the relaxed metrics with a strict requirement on the other, resulting in better performance. In this paper, we present an automatic synthesis approach using And-Inverter Graphs (AIGs) based rewriting that not only improves the performance but also guarantees the bounds of approximation errors introduced. Our synthesis approach is evaluated on a wide range of designs and standard benchmark circuits to show the usefulness and applicability. In particular, we show that our synthesis results are even comparable with the optimization achieved with hand crafted adhoc approximation circuits such as approximation adders in a case study on image compression.",
"Approximate computing has emerged as a new design paradigm that exploits the inherent error resilience of a wide range of application domains by allowing hardware implementations to forsake exact Boolean equivalence with algorithmic specifications. A slew of manual design techniques for approximate computing have been proposed in recent years, but very little effort has been devoted to design automation. We propose SALSA, a Systematic methodology for Automatic Logic Synthesis of Approximate circuits. Given a golden RTL specification of a circuit and a quality constraint that defines the amount of error that may be introduced in the implementation, SALSA synthesizes an approximate version of the circuit that adheres to the pre-specified quality bounds. We make two key contributions: (i) the rigorous formulation of the problem of approximate logic synthesis, enabling the generation of circuits that are correct by construction, and (ii) mapping the problem of approximate synthesis into an equivalent traditional logic synthesis problem, thereby allowing the capabilities of existing synthesis tools to be fully utilized for approximate logic synthesis. In order to achieve these benefits, SALSA encodes the quality constraints using logic functions called Q-functions, and captures the flexibility that they engender as Approximation Don't Cares (ADCs), which are used for circuit simplification using traditional don't care based optimization techniques. We have implemented SALSA using two off-the-shelf logic synthesis tools - SIS and Synopsys Design Compiler. 
We automatically synthesize approximate circuits ranging from arithmetic building blocks (adders, multipliers, MAC) to entire datapaths (DCT, FIR, IIR, SAD, FFT Butterfly, Euclidean distance), demonstrating scalability and significant improvements in area (1.1X to 1.85X for tight error constraints, and 1.2X to 4.75X for relaxed error constraints) and power (1.15X to 1.75X for tight error constraints, and 1.3X to 5.25X for relaxed error constraints).",
"Many classes of applications, especially in the domains of signal and image processing, computer graphics, computer vision, and machine learning, are inherently tolerant to inaccuracies in their underlying computations. This tolerance can be exploited to design approximate circuits that perform within acceptable accuracies but have much lower power consumption and smaller area footprints (and often better run times) than their exact counterparts. In this paper, we propose a new class of automated synthesis methods for generating approximate circuits directly from behavioral-level descriptions. In contrast to previous methods that operate at the Boolean level or use custom modifications, our automated behavioral synthesis method enables a wider range of possible approximations and can operate on arbitrary designs. Our method first creates an abstract synthesis tree (AST) from the input behavioral description, and then applies variant operators to the AST using an iterative stochastic greedy approach to identify the optimal inexact designs in an efficient way. Our method is able to identify the optimal designs that represent the Pareto frontier trade-off between accuracy and power consumption. Our methodology is developed into a tool we call ABACUS, which we integrate with a standard ASIC experimental flow based on industrial tools. We validate our methods on three realistic Verilog-based benchmarks from three different domains - signal processing, computer vision and machine learning. Our tool automatically discovers optimal designs, providing area and power savings of up to 50 while maintaining good accuracy.",
""
]
} |
1903.04188 | 2921289133 | We propose an application-tailored data-driven fully automated method for functional approximation of combinational circuits. We demonstrate how an application-level error metric such as the classification accuracy can be translated to a component-level error metric needed for an efficient and fast search in the space of approximate low-level components that are used in the application. This is possible by employing a weighted mean error distance (WMED) metric for steering the circuit approximation process which is conducted by means of genetic programming. WMED introduces a set of weights (calculated from the data distribution measured on a selected signal in a given application) determining the importance of each input vector for the approximation process. The method is evaluated using synthetic benchmarks and application-specific approximate MAC (multiply-and-accumulate) units that are designed to provide the best trade-offs between the classification accuracy and power consumption of two image classifiers based on neural networks. | The automated approximation methods are often constructed as iterative methods in which many candidate approximate circuits have to be generated and evaluated. This is, in fact, a multi-objective search process. Examples of elementary circuit modifications (i.e. steps in the search space) are replacing a gate by another one, reconnecting an internal signal or reconnecting a circuit output. It has been shown that this kind of search can effectively be performed by means of Cartesian genetic programming @cite_8 @cite_2 @cite_7 . Details on CGP will be given in . | {
"cite_N": [
"@cite_2",
"@cite_7",
"@cite_8"
],
"mid": [
"2612139336",
"2533121491",
""
],
"abstract": [
"Approximate circuits and approximate circuit design methodologies attracted a significant attention of researchers as well as industry in recent years. In order to accelerate the approximate circuit and system design process and to support a fair benchmarking of circuit approximation methods, we propose a library of approximate adders and multipliers called EvoApprox8b. This library contains 430 non-dominated 8-bit approximate adders created from 13 conventional adders and 471 non-dominated 8-bit approximate multipliers created from 6 conventional multipliers. These implementations were evolved by a multi-objective Cartesian genetic programming. The EvoApprox8b library provides Verilog, Matlab and C models of all approximate circuits. In addition to standard circuit parameters, the error is given for seven different error metrics. The EvoApprox8b library is available at: www.fit.vutbr.cz research groups ehw approxlib",
"Artificial neural networks (NN) have shown a significant promise in difficult tasks like image classification or speech recognition. Even well-optimized hardware implementations of digital NNs show significant power consumption. It is mainly due to non-uniform pipeline structures and inherent redundancy of numerous arithmetic operations that have to be performed to produce each single output vector. This paper provides a methodology for the design of well-optimized power-efficient NNs with a uniform structure suitable for hardware implementation. An error resilience analysis was performed in order to determine key constraints for the design of approximate multipliers that are employed in the resulting structure of NN. By means of a search based approximation method, approximate multipliers showing desired tradeoffs between the accuracy and implementation cost were created. Resulting approximate NNs, containing the approximate multipliers, were evaluated using standard benchmarks (MNIST dataset) and a real-world classification problem of Street-View House Numbers. Significant improvement in power efficiency was obtained in both cases with respect to regular NNs. In some cases, 91 power reduction of multiplication led to classification accuracy degradation of less than 2.80 . Moreover, the paper showed the capability of the back propagation learning algorithm to adapt with NNs containing the approximate multipliers.",
""
]
} |
1903.04188 | 2921289133 | We propose an application-tailored data-driven fully automated method for functional approximation of combinational circuits. We demonstrate how an application-level error metric such as the classification accuracy can be translated to a component-level error metric needed for an efficient and fast search in the space of approximate low-level components that are used in the application. This is possible by employing a weighted mean error distance (WMED) metric for steering the circuit approximation process which is conducted by means of genetic programming. WMED introduces a set of weights (calculated from the data distribution measured on a selected signal in a given application) determining the importance of each input vector for the approximation process. The method is evaluated using synthetic benchmarks and application-specific approximate MAC (multiply-and-accumulate) units that are designed to provide the best trade-offs between the classification accuracy and power consumption of two image classifiers based on neural networks. | With the rapid development of artificial intelligence methods based on deep CNNs, a lot of attention has been focused on efficient hardware implementations of neural networks @cite_1 . CNNs employ multiple layers of computational elements performing the convolution operation, pooling (selection subsampling), non-linear transformations and the final classification based on a common multi-layer perceptron (MLP). | {
"cite_N": [
"@cite_1"
],
"mid": [
"2604319603"
],
"abstract": [
"Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without sacrificing application accuracy or increasing hardware cost are critical to the wide deployment of DNNs in AI systems. This article aims to provide a comprehensive tutorial and survey about the recent advances toward the goal of enabling efficient processing of DNNs. Specifically, it will provide an overview of DNNs, discuss various hardware platforms and architectures that support DNNs, and highlight key trends in reducing the computation cost of DNNs either solely via hardware design changes or via joint hardware design and DNN algorithm changes. It will also summarize various development resources that enable researchers and practitioners to quickly get started in this field, and highlight important benchmarking metrics and design considerations that should be used for evaluating the rapidly growing number of DNN hardware designs, optionally including algorithmic codesigns, being proposed in academia and industry. The reader will take away the following concepts from this article: understand the key design considerations for DNNs; be able to evaluate different DNN hardware implementations with benchmarks and comparison metrics; understand the tradeoffs between various hardware architectures and platforms; be able to evaluate the utility of various DNN design techniques for efficient processing; and understand recent implementation trends and opportunities."
]
} |
1903.04188 | 2921289133 | We propose an application-tailored data-driven fully automated method for functional approximation of combinational circuits. We demonstrate how an application-level error metric such as the classification accuracy can be translated to a component-level error metric needed for an efficient and fast search in the space of approximate low-level components that are used in the application. This is possible by employing a weighted mean error distance (WMED) metric for steering the circuit approximation process which is conducted by means of genetic programming. WMED introduces a set of weights (calculated from the data distribution measured on a selected signal in a given application) determining the importance of each input vector for the approximation process. The method is evaluated using synthetic benchmarks and application-specific approximate MAC (multiply-and-accumulate) units that are designed to provide the best trade-offs between the classification accuracy and power consumption of two image classifiers based on neural networks. | One of the key challenges in this area is to provide fast and energy efficient inference (i.e. the application of an already trained network). The reason is that trained CNNs are employed in embedded systems and have to process enormous volumes of data in a real-time scenario. As CNNs are highly error resilient, a good strategy is to reduce the bit width for all involved operations and storage elements. This approach has been taken by the Tensor Processing Unit (TPU), where only 8-bit operations are implemented in MAC units. The highly parallel processing enabled by TPU exploits a systolic array composed of 65,536 8-bit MAC units @cite_9 . | {
"cite_N": [
"@cite_9"
],
"mid": [
"2606722458"
],
"abstract": [
"Many architects believe that major improvements in cost-energy-performance must now come from domain-specific hardware. This paper evaluates a custom ASIC---called a Tensor Processing Unit (TPU) --- deployed in datacenters since 2015 that accelerates the inference phase of neural networks (NN). The heart of the TPU is a 65,536 8-bit MAC matrix multiply unit that offers a peak throughput of 92 TeraOps/second (TOPS) and a large (28 MiB) software-managed on-chip memory. The TPU's deterministic execution model is a better match to the 99th-percentile response-time requirement of our NN applications than are the time-varying optimizations of CPUs and GPUs that help average throughput more than guaranteed latency. The lack of such features helps explain why, despite having myriad MACs and a big memory, the TPU is relatively small and low power. We compare the TPU to a server-class Intel Haswell CPU and an Nvidia K80 GPU, which are contemporaries deployed in the same datacenters. Our workload, written in the high-level TensorFlow framework, uses production NN applications (MLPs, CNNs, and LSTMs) that represent 95% of our datacenters' NN inference demand. Despite low utilization for some applications, the TPU is on average about 15X -- 30X faster than its contemporary GPU or CPU, with TOPS/Watt about 30X -- 80X higher. Moreover, using the GPU's GDDR5 memory in the TPU would triple achieved TOPS and raise TOPS/Watt to nearly 70X the GPU and 200X the CPU."
]
} |
1903.04188 | 2921289133 | We propose an application-tailored data-driven fully automated method for functional approximation of combinational circuits. We demonstrate how an application-level error metric such as the classification accuracy can be translated to a component-level error metric needed for an efficient and fast search in the space of approximate low-level components that are used in the application. This is possible by employing a weighted mean error distance (WMED) metric for steering the circuit approximation process which is conducted by means of genetic programming. WMED introduces a set of weights (calculated from the data distribution measured on a selected signal in a given application) determining the importance of each input vector for the approximation process. The method is evaluated using synthetic benchmarks and application-specific approximate MAC (multiply-and-accumulate) units that are designed to provide the best trade-offs between the classification accuracy and power consumption of two image classifiers based on neural networks. | Approximation techniques developed for circuit implementations of NNs were surveyed in @cite_10 . In the case of approximate multipliers for NNs, they are implemented either as multiplier-less multipliers @cite_4 , truncated multipliers @cite_9 or application-specific multipliers @cite_7 . For example, approximate multipliers have been developed that perform exact multiplication by zero (which is important, as many weights are zero and thus no error is distributed to subsequent processing layers), while deep approximations are allowed for all the remaining operand values @cite_7 . On two benchmark problems, this strategy provided better trade-offs (energy vs. accuracy) than the multiplier-less multipliers @cite_4 @cite_7 . | {
"cite_N": [
"@cite_9",
"@cite_10",
"@cite_7",
"@cite_4"
],
"mid": [
"2606722458",
"2395491504",
"2533121491",
""
],
"abstract": [
"Many architects believe that major improvements in cost-energy-performance must now come from domain-specific hardware. This paper evaluates a custom ASIC---called a Tensor Processing Unit (TPU) --- deployed in datacenters since 2015 that accelerates the inference phase of neural networks (NN). The heart of the TPU is a 65,536 8-bit MAC matrix multiply unit that offers a peak throughput of 92 TeraOps/second (TOPS) and a large (28 MiB) software-managed on-chip memory. The TPU's deterministic execution model is a better match to the 99th-percentile response-time requirement of our NN applications than are the time-varying optimizations of CPUs and GPUs that help average throughput more than guaranteed latency. The lack of such features helps explain why, despite having myriad MACs and a big memory, the TPU is relatively small and low power. We compare the TPU to a server-class Intel Haswell CPU and an Nvidia K80 GPU, which are contemporaries deployed in the same datacenters. Our workload, written in the high-level TensorFlow framework, uses production NN applications (MLPs, CNNs, and LSTMs) that represent 95% of our datacenters' NN inference demand. Despite low utilization for some applications, the TPU is on average about 15X -- 30X faster than its contemporary GPU or CPU, with TOPS/Watt about 30X -- 80X higher. Moreover, using the GPU's GDDR5 memory in the TPU would triple achieved TOPS and raise TOPS/Watt to nearly 70X the GPU and 200X the CPU.",
"Neuromorphic algorithms are being increasingly deployed across the entire computing spectrum from data centers to mobile and wearable devices to solve problems involving recognition, analytics, search and inference. For example, large-scale artificial neural networks (popularly called deep learning) now represent the state-of-the-art in a wide and ever-increasing range of video/image/audio/text recognition problems. However, the growth in data sets and network complexities have led to deep learning becoming one of the most challenging workloads across the computing spectrum. We posit that approximate computing can play a key role in the quest for energy-efficient neuromorphic systems. We show how the principles of approximate computing can be applied to the design of neuromorphic systems at various layers of the computing stack. At the algorithm level, we present techniques to significantly scale down the computational requirements of a neural network with minimal impact on its accuracy. At the circuit level, we show how approximate logic and memory can be used to implement neurons and synapses in an energy-efficient manner, while still meeting accuracy requirements. A fundamental limitation to the efficiency of neuromorphic computing in traditional implementations (software and custom hardware alike) is the mismatch between neuromorphic algorithms and the underlying computing models such as von Neumann architecture and Boolean logic. To overcome this limitation, we describe how emerging spintronic devices can offer highly efficient, approximate realization of the building blocks of neuromorphic computing systems.",
"Artificial neural networks (NN) have shown a significant promise in difficult tasks like image classification or speech recognition. Even well-optimized hardware implementations of digital NNs show significant power consumption. It is mainly due to non-uniform pipeline structures and inherent redundancy of numerous arithmetic operations that have to be performed to produce each single output vector. This paper provides a methodology for the design of well-optimized power-efficient NNs with a uniform structure suitable for hardware implementation. An error resilience analysis was performed in order to determine key constraints for the design of approximate multipliers that are employed in the resulting structure of NN. By means of a search based approximation method, approximate multipliers showing desired tradeoffs between the accuracy and implementation cost were created. Resulting approximate NNs, containing the approximate multipliers, were evaluated using standard benchmarks (MNIST dataset) and a real-world classification problem of Street-View House Numbers. Significant improvement in power efficiency was obtained in both cases with respect to regular NNs. In some cases, 91% power reduction of multiplication led to classification accuracy degradation of less than 2.80%. Moreover, the paper showed the capability of the back propagation learning algorithm to adapt with NNs containing the approximate multipliers.",
""
]
} |
1903.04413 | 2921882310 | Robots need to understand their environment to perform their task. If it is possible to pre-program a visual scene analysis process in closed environments, robots operating in an open environment would benefit from the ability to learn it through their interaction with their environment. This ability furthermore opens the way to the acquisition of affordances maps in which the action capabilities of the robot structure its visual scene understanding. We propose an approach to build such affordances maps by relying on an interactive perception approach and an online classification. In the proposed formalization of affordances, actions and effects are related to visual features, not objects, and they can be combined. We have tested the approach on three action primitives and on a real PR2 robot. | Affordances have raised a lot of interest in the developmental robotics community over the last ten years, as shown by the numerous reviews and surveys dedicated to this topic @cite_36 @cite_7 @cite_0 @cite_20 @cite_17 . | {
"cite_N": [
"@cite_7",
"@cite_36",
"@cite_0",
"@cite_20",
"@cite_17"
],
"mid": [
"73339090",
"2023758701",
"2528967817",
"2500440871",
"2754217830"
],
"abstract": [
"In this paper, we consider the influence of Gibson's affordance theory on the design of robotic agents. Affordance theory (and the ecological approach to agent design in gen- eral) has in many cases contributed to the development of successful robotic systems; we provide a brief survey of AI research in this area. However, there remain signifi- cant issues that complicate discussions on this topic, particularly in the exchange of ideas between researchers in artificial intelligence and ecological psychology. We identify some of these issues, specifically the lack of a generally accepted definition of \"affordance\" and fundamental differences in the current approaches taken in AI and ecological psychology. While we consider reconciliation between these fields to be possible and mutually beneficial, it will require some flexibility on the issue of direct perception.",
"The concept of affordances was introduced by J. J. Gibson to explain how inherent \"values\" and \"meanings\" of things in the environment can be directly perceived and how this information can be linked to the action possibilities offered to the organism by the environment. Although introduced in psychology, the concept influenced studies in other fields ranging from human-computer interaction to autonomous robotics. In this article, we first introduce the concept of affordances as conceived by J. J. Gibson and review the use of the term in different fields, with particular emphasis on its use in autonomous robotics. Then, we summarize four of the major formalization proposals for the affordance term. We point out that there are three, not one, perspectives from which to view affordances and that much of the confusion regarding discussions on the concept has arisen from this. We propose a new formalism for affordances and discuss its implications for autonomous robot control. We report preliminary results obtained with robots and link them with these implications.",
"Affordances capture the relationships between a robot and the environment in terms of the actions that the robot is able to perform. The notable characteristic of affordance-based perception is that an object is perceived by what it affords (e.g., graspable and rollable), instead of identities (e.g., name, color, and shape). Affordances play an important role in basic robot capabilities such as recognition, planning, and prediction. The key challenges in affordance research are: 1) how to automatically discover the distinctive features that specify an affordance in an online and incremental manner and 2) how to generalize these features to novel environments. This survey provides an entry point for interested researchers, including: 1) a general overview; 2) classification and critical analysis of existing work; 3) discussion of how affordances are useful in developmental robotics; 4) some open questions about how to use the affordance concept; and 5) a few promising research directions.",
"The concept of affordances appeared in psychology during the late 60s as an alternative perspective on the visual perception of the environment. It was revolutionary in the intuition that the way living beings perceive the world is deeply influenced by the actions they are able to perform. Then, across the last 40 years, it has influenced many applied fields, e.g., design, human-computer interaction, computer vision, and robotics. In this paper, we offer a multidisciplinary perspective on the notion of affordances. We first discuss the main definitions and formalizations of the affordance theory, then we report the most significant evidence in psychology and neuroscience that support it, and finally we review the most relevant applications of this concept in robotics.",
"J. J. Gibson’s concept of affordance, one of the central pillars of ecological psychology, is a truly remarkable idea that provides a concise theory of animal perception predicated on environmental..."
]
} |
1903.04413 | 2921882310 | Robots need to understand their environment to perform their task. If it is possible to pre-program a visual scene analysis process in closed environments, robots operating in an open environment would benefit from the ability to learn it through their interaction with their environment. This ability furthermore opens the way to the acquisition of affordances maps in which the action capabilities of the robot structure its visual scene understanding. We propose an approach to build such affordances maps by relying on an interactive perception approach and an online classification. In the proposed formalization of affordances, actions and effects are related to visual features, not objects, and they can be combined. We have tested the approach on three action primitives and on a real PR2 robot. | According to a recent survey @cite_17 , among 146 reviewed papers, 104 papers consider learning affordances directly from a meso level, i.e. considering objects as a whole, while only 27 papers consider it from a global level, i.e. by considering the whole environment, and only 15 papers from a local level. With the global level, considering the whole environment allows the learning system to integrate the context. The context is important to predict or recognize high-level affordances. Most papers on affordances use the meso level because, for most actions, having a complete model of an object is practical. For instance, for successful grasps, object states such as orientation, position, or shape are important information. Learning affordances at a local level allows the system to perceive them directly, which is in line with Gibson's view. Moreover, considering the local level is simpler and is thus suitable to bootstrap the system. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2754217830"
],
"abstract": [
"J. J. Gibson’s concept of affordance, one of the central pillars of ecological psychology, is a truly remarkable idea that provides a concise theory of animal perception predicated on environmental..."
]
} |
1903.04413 | 2921882310 | Robots need to understand their environment to perform their task. If it is possible to pre-program a visual scene analysis process in closed environments, robots operating in an open environment would benefit from the ability to learn it through their interaction with their environment. This ability furthermore opens the way to the acquisition of affordances maps in which the action capabilities of the robot structure its visual scene understanding. We propose an approach to build such affordances maps by relying on an interactive perception approach and an online classification. In the proposed formalization of affordances, actions and effects are related to visual features, not objects, and they can be combined. We have tested the approach on three action primitives and on a real PR2 robot. | Uğur et al. @cite_37 proposed a method for learning the "traversability" affordance with a wheeled mobile robot that explores a simulated environment. The robot tries to go through different obstacles: lying cylinders, upright cylinders, rectangular boxes, and spheres. The lying cylinders and spheres are traversable, while boxes and upright cylinders are not. The robot is equipped with a 3D sensor and collects data after each action, labeled with the success of going through the object. The sample data are extracted using a simulated RGB-D camera. Then, an online SVM ( @cite_9 ) is trained on the collected data. The resulting model predicts the "traversability" of objects based on local features. To drive the exploration, an uncertainty measure is computed based on the soft margin of the model's decision hyperplane. Finally, they tested their method on a navigation problem, on real robots and in a realistic environment.
They demonstrate, by using the model learned in simulation, that the robot is able to navigate through a room full of boxes, spherical objects and cylindrical objects like trash bins without colliding with non-traversable objects. | {
"cite_N": [
"@cite_9",
"@cite_37"
],
"mid": [
"2135106139",
"2099816052"
],
"abstract": [
"Very high dimensional learning systems become theoretically possible when training examples are abundant. The computing cost then becomes the limiting factor. Any efficient learning algorithm should at least take a brief look at each example. But should all examples be given equal attention? This contribution proposes an empirical answer. We first present an online SVM algorithm based on this premise. LASVM yields competitive misclassification rates after a single pass over the training examples, outspeeding state-of-the-art SVM solvers. Then we show how active example selection can yield faster training, higher accuracies, and simpler models, using only a fraction of the training example labels.",
"The concept of affordances, as proposed by J.J. Gibson, refers to the relationship between the organism and its environment and has become popular in autonomous robot control. The learning of affordances in autonomous robots, however, typically requires a large set of training data obtained from the interactions of the robot with its environment. Therefore, the learning process is not only time-consuming and costly but is also risky since some of the interactions may inflict damage on the robot. In this paper, we study the learning of traversability affordance on a mobile robot and investigate how the number of interactions required can be minimized with minimal degradation on the learning process. Specifically, we propose a two step learning process which consists of bootstrapping and curiosity-based learning phases. In the bootstrapping phase, a small set of initial interaction data are used to find the relevant perceptual features for the affordance, and a support vector machine (SVM) classifier is trained. In the curiosity-driven learning phase, a curiosity band around the decision hyperplane of the SVM is used to decide whether a given interaction opportunity is worth exploring or not. Specifically, if the output of the SVM for a given percept lies within the curiosity band, indicating that the classifier is not so certain about the hypothesized effect of the interaction, the robot goes ahead with the interaction, and skips if not. Our studies within a physics-based robot simulator show that the robot can achieve better learning with the proposed curiosity-driven learning method for a fixed number of interactions. The results also show that, for optimum performance, there exists a minimum number of initial interactions to be used for bootstrapping. Finally, the trained classifier with the proposed learning method was also successfully tested on the real robot."
]
} |
1903.04413 | 2921882310 | Robots need to understand their environment to perform their task. If it is possible to pre-program a visual scene analysis process in closed environments, robots operating in an open environment would benefit from the ability to learn it through their interaction with their environment. This ability furthermore opens the way to the acquisition of affordances maps in which the action capabilities of the robot structure its visual scene understanding. We propose an approach to build such affordances maps by relying on an interactive perception approach and an online classification. In the proposed formalization of affordances, actions and effects are related to visual features, not objects, and they can be combined. We have tested the approach on three action primitives and on a real PR2 robot. | Kim and Sukhatme @cite_21 , with a similar idea, seek to learn pushable objects in a simulated environment using a PR2 with an RGB-D camera. The objects are blocks the size of the robot. They are either pushable in one or two directions, or not pushable. The PR2 uses its two arms to try to push the blocks. The learning process relies on a logistic regression classifier, and a Markov random field is used to spatially smooth the predictions. The robot then explores the environment and collects data by trying to push the blocks. The outcome of the framework is what they call an affordance map, indicating the probability of pushability of a block. While in the work of Uğur et al. @cite_37 the learning is done in a continuous space, in the work of Kim and Sukhatme @cite_21 the environment is discretized into a grid with cells the size of a block; thus, the learning space is discrete. Finally, they use an exploration strategy based on uncertainty reduction to select the next block to interact with. | {
"cite_N": [
"@cite_37",
"@cite_21"
],
"mid": [
"2099816052",
"805016335"
],
"abstract": [
"The concept of affordances, as proposed by J.J. Gibson, refers to the relationship between the organism and its environment and has become popular in autonomous robot control. The learning of affordances in autonomous robots, however, typically requires a large set of training data obtained from the interactions of the robot with its environment. Therefore, the learning process is not only time-consuming and costly but is also risky since some of the interactions may inflict damage on the robot. In this paper, we study the learning of traversability affordance on a mobile robot and investigate how the number of interactions required can be minimized with minimal degradation on the learning process. Specifically, we propose a two step learning process which consists of bootstrapping and curiosity-based learning phases. In the bootstrapping phase, a small set of initial interaction data are used to find the relevant perceptual features for the affordance, and a support vector machine (SVM) classifier is trained. In the curiosity-driven learning phase, a curiosity band around the decision hyperplane of the SVM is used to decide whether a given interaction opportunity is worth exploring or not. Specifically, if the output of the SVM for a given percept lies within the curiosity band, indicating that the classifier is not so certain about the hypothesized effect of the interaction, the robot goes ahead with the interaction, and skips if not. Our studies within a physics-based robot simulator show that the robot can achieve better learning with the proposed curiosity-driven learning method for a fixed number of interactions. The results also show that, for optimum performance, there exists a minimum number of initial interactions to be used for bootstrapping. Finally, the trained classifier with the proposed learning method was also successfully tested on the real robot.",
"We describe a technique to build an affordance map interactively for robotic tasks. Affordances are predicted by a trained classifier using geometric features extracted from objects. Based on 2D occupancy grid, a Markov Random Field (MRF) model builds an affordance map with relational affordance with neighboring cells. The quality of the affordance map is refined by sequences of interactive manipulations selected from the model to yield the highest reduction in uncertainty."
]
} |
1903.04413 | 2921882310 | Robots need to understand their environment to perform their task. If it is possible to pre-program a visual scene analysis process in closed environments, robots operating in an open environment would benefit from the ability to learn it through their interaction with their environment. This ability furthermore opens the way to the acquisition of affordances maps in which the action capabilities of the robot structure its visual scene understanding. We propose an approach to build such affordances maps by relying on an interactive perception approach and an online classification. In the proposed formalization of affordances, actions and effects are related to visual features, not objects, and they can be combined. We have tested the approach on three action primitives and on a real PR2 robot. | From a more developmental perspective, @cite_2 proposed a framework to learn composite affordances by starting from low-level affordances. Their approach is split into 3 steps: first, the robot explores its environment with a reactive behavior, like a grasp reflex, and collects visual data consisting of SIFT features. Then, in a second step, basic affordances are learned with simple actions such as pushing or gripping. Finally, in the third step, the robot learns composite affordances based on a combination of the basic actions used in the previous step. For instance, this combination of actions allows the robot to achieve stacking. They validate their framework with a mobile robot equipped with a stereo camera and a magnetized end-effector. In a real environment, the robot tries to learn to identify objects that are liftable with its magnetized end-effector. | {
"cite_N": [
"@cite_2"
],
"mid": [
"1599132676"
],
"abstract": [
"Recently, the aspect of visual perception has been explored in the context of Gibson's concept of affordances [1] in various ways. We focus in this work on the importance of developmental learning and the perceptual cueing for an agent's anticipation of opportunities for interaction, in extension to functional views on visual feature representations. The concept for the incremental learning of abstract from basic affordances is presented in relation to learning of complex affordance features. In addition, the work proposes that the originally defined representational concept for the perception of affordances - in terms of using either motion or 3D cues - should be generalized towards using arbitrary visual feature representations. We demonstrate the learning of causal relations between visual cues and associated anticipated interactions by reinforcement learning of predictive perceptual states. We pursue a recently presented framework for cueing and recognition of affordance-based visual entities that obviously plays an important role in robot control architectures, in analogy to human perception. We experimentally verify the concept within a real world robot scenario by learning predictive visual cues using reinforcement signals, proving that features were selected for their relevance in predicting opportunities for interaction."
]
} |
1903.04413 | 2921882310 | Robots need to understand their environment to perform their task. If it is possible to pre-program a visual scene analysis process in closed environments, robots operating in an open environment would benefit from the ability to learn it through their interaction with their environment. This ability furthermore opens the way to the acquisition of affordances maps in which the action capabilities of the robot structure its visual scene understanding. We propose an approach to build such affordances maps by relying on an interactive perception approach and an online classification. In the proposed formalization of affordances, actions and effects are related to visual features, not objects, and they can be combined. We have tested the approach on three action primitives and on a real PR2 robot. | These works @cite_37 @cite_21 @cite_2 are close to the work presented in this paper. They combine, in a single study, affordance learning, online learning, an exploration process, and interactive perception. The affordance map of Kim and Sukhatme @cite_21 is close to our relevance map in that they both segment elements of interest for the agent, but exploration and learning were conducted in simulation only, in simple environments and setups, and only one affordance was learnt. The study proposed by @cite_2 can learn several affordances in simulation, but it was tested in reality with only one action. The approach proposed in this article is based on similar principles, but it allows the system to learn relevance maps relative to several affordances in more complex and realistic environments, in real-world experiments. | {
"cite_N": [
"@cite_37",
"@cite_21",
"@cite_2"
],
"mid": [
"2099816052",
"805016335",
"1599132676"
],
"abstract": [
"The concept of affordances, as proposed by J.J. Gibson, refers to the relationship between the organism and its environment and has become popular in autonomous robot control. The learning of affordances in autonomous robots, however, typically requires a large set of training data obtained from the interactions of the robot with its environment. Therefore, the learning process is not only time-consuming, and costly but is also risky since some of the interactions may inflict damage on the robot. In this paper, we study the learning of traversability affordance on a mobile robot and investigate how the number of interactions required can be minimized with minimial degradation on the learning process. Specifically, we propose a two step learning process which consists of bootstrapping and curiosity-based learning phases. In the bootstrapping phase, a small set of initial interaction data are used to find the relevant perceptual features for the affordance, and a support vector machine (SVM) classifier is trained. In the curiosity-driven learning phase, a curiosity band around the decision hyperplane of the SVM is used to decide whether a given interaction opportunity is worth exploring or not. Specifically, if the output of the SVM for a given percept lies within curiosity band, indicating that the classifier is not so certain about the hypothesized effect of the interaction, the robot goes ahead with the interaction, and skips if not. Our studies within a physics-based robot simulator show that the robot can achieve better learning with the proposed curiosity-driven learning method for a fixed number of interactions. The results also show that, for optimum performance, there exists a minimum number of initial interactions to be used for bootstrapping. Finally, the trained classifier with the proposed learning method was also successfully tested on the real robot.",
"We describe a technique to build an affordance map interactively for robotic tasks. Affordances are predicted by a trained classifier using geometric features extracted from objects. Based on 2D occupancy grid, a Markov Random Field (MRF) model builds an affordance map with relational affordance with neighboring cells. The quality of the affordance map is refined by sequences of interactive manipulations selected from the model to yield the highest reduction in uncertainty.",
"Recently, the aspect of visual perception has been explored in the context of Gibson's concept of affordances [1] in various ways. We focus in this work on the importance of developmental learning and the perceptual cueing for an agent's anticipation of opportunities for interaction, in extension to functional views on visual feature representations. The concept for the incremental learning of abstract from basic affordances is presented in relation to learning of complex affordance features. In addition, the work proposes that the originally defined representational concept for the perception of affordances - in terms of using either motion or 3D cues - should be generalized towards using arbitrary visual feature representations. We demonstrate the learning of causal relations between visual cues and associated anticipated interactions by reinforcement learning of predictive perceptual states. We pursue a recently presented framework for cueing and recognition of affordance-based visual entities that obviously plays an important role in robot control architectures, in analogy to human perception. We experimentally verify the concept within a real world robot scenario by learning predictive visual cues using reinforcement signals, proving that features were selected for their relevance in predicting opportunities for interaction."
]
} |
1903.04235 | 2950001461 | Data similarity is a key concept in many data-driven applications. Many algorithms are sensitive to similarity measures. To tackle this fundamental problem, automatically learning of similarity information from data via self-expression has been developed and successfully applied in various models, such as low-rank representation, sparse subspace learning, semi-supervised learning. However, it just tries to reconstruct the original data and some valuable information, e.g., the manifold structure, is largely ignored. In this paper, we argue that it is beneficial to preserve the overall relations when we extract similarity information. Specifically, we propose a novel similarity learning framework by minimizing the reconstruction error of kernel matrices, rather than the reconstruction error of original data adopted by existing work. Taking the clustering task as an example to evaluate our method, we observe considerable improvements compared to other state-of-the-art methods. More importantly, our proposed framework is very general and provides a novel and fundamental building block for many other similarity-based tasks. Besides, our proposed kernel preserving opens up a large number of possibilities to embed high-dimensional data into low-dimensional space. | In a similar spirit to LPP, for each data point @math , all the data points @math can be regarded as neighbors of @math with probability @math . To some extent, @math represents the similarity between @math and @math @cite_1 . The smaller the distance @math is, the greater the probability @math is. Rather than prespecifying @math with a deterministic neighborhood relation as LPP does, one can adaptively learn @math from the data set by solving an optimization problem: where @math is the regularization parameter. Recently, a variety of algorithms have been developed by using Eq. ) to learn a similarity matrix.
Applications include clustering @cite_1 , NMF @cite_14 , and feature selection @cite_28 . This approach can effectively capture local structure information. | {
"cite_N": [
"@cite_28",
"@cite_14",
"@cite_1"
],
"mid": [
"2073161435",
"",
"1979089718"
],
"abstract": [
"The problem of feature selection has raised considerable interests in the past decade. Traditional unsupervised methods select the features which can faithfully preserve the intrinsic structures of data, where the intrinsic structures are estimated using all the input features of data. However, the estimated intrinsic structures are unreliable inaccurate when the redundant and noisy features are not removed. Therefore, we face a dilemma here: one need the true structures of data to identify the informative features, and one need the informative features to accurately estimate the true structures of data. To address this, we propose a unified learning framework which performs structure learning and feature selection simultaneously. The structures are adaptively learned from the results of feature selection, and the informative features are reselected to preserve the refined structures of data. By leveraging the interactions between these two essential tasks, we are able to capture accurate structures and select more informative features. Experimental results on many benchmark data sets demonstrate that the proposed method outperforms many state of the art unsupervised feature selection methods.",
"",
"Many clustering methods partition the data groups based on the input data similarity matrix. Thus, the clustering results highly depend on the data similarity learning. Because the similarity measurement and data clustering are often conducted in two separated steps, the learned data similarity may not be the optimal one for data clustering and lead to the suboptimal results. In this paper, we propose a novel clustering model to learn the data similarity matrix and clustering structure simultaneously. Our new model learns the data similarity matrix by assigning the adaptive and optimal neighbors for each data point based on the local distances. Meanwhile, the new rank constraint is imposed to the Laplacian matrix of the data similarity matrix, such that the connected components in the resulted similarity matrix are exactly equal to the cluster number. We derive an efficient algorithm to optimize the proposed challenging problem, and show the theoretical analysis on the connections between our method and the K-means clustering, and spectral clustering. We also further extend the new clustering model for the projected clustering to handle the high-dimensional data. Extensive empirical results on both synthetic data and real-world benchmark data sets show that our new clustering methods consistently outperforms the related clustering approaches."
]
} |
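The adaptive-neighbor problem described in the related_work field above (minimizing a weighted distance term plus a quadratic regularizer over the probability simplex) admits a closed-form per-point solution: project the negated, scaled distance vector onto the simplex. The sketch below is illustrative only; the function names and the sort-based simplex projection are our own choices, not taken from the cited papers.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {p : p >= 0, sum(p) = 1}
    (standard sort-based algorithm)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    ks = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - (css - 1.0) / ks > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def adaptive_neighbors(X, gamma=1.0):
    """For each point x_i (rows of X), solve
        min_p  sum_j d_ij * p_j + gamma * ||p||^2,  p on the simplex,
    with d_ij = ||x_i - x_j||^2. The minimizer is the projection of
    -d_i / (2 * gamma) onto the simplex, so nearer points get larger p_ij."""
    n = X.shape[0]
    P = np.zeros((n, n))
    for i in range(n):
        mask = np.arange(n) != i          # exclude the point itself
        d = np.sum((X[mask] - X[i]) ** 2, axis=1)
        P[i, mask] = project_simplex(-d / (2.0 * gamma))
    return P
```

A large gamma spreads mass over many neighbors, while a small gamma concentrates it on the nearest points, matching the role of the regularization parameter in the text.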
1903.04235 | 2950001461 | Data similarity is a key concept in many data-driven applications. Many algorithms are sensitive to similarity measures. To tackle this fundamental problem, automatically learning of similarity information from data via self-expression has been developed and successfully applied in various models, such as low-rank representation, sparse subspace learning, semi-supervised learning. However, it just tries to reconstruct the original data and some valuable information, e.g., the manifold structure, is largely ignored. In this paper, we argue that it is beneficial to preserve the overall relations when we extract similarity information. Specifically, we propose a novel similarity learning framework by minimizing the reconstruction error of kernel matrices, rather than the reconstruction error of original data adopted by existing work. Taking the clustering task as an example to evaluate our method, we observe considerable improvements compared to other state-of-the-art methods. More importantly, our proposed framework is very general and provides a novel and fundamental building block for many other similarity-based tasks. Besides, our proposed kernel preserving opens up a large number of possibilities to embed high-dimensional data into low-dimensional space. | The so-called self-expression is to approximate each data point as a linear combination of the other data points, i.e., @math . The rationale here is that if @math and @math are similar, the weight @math should be large. Therefore, @math also behaves like a similarity matrix. This shares a similar spirit with LLE, except that we do not predetermine the neighborhood. Its corresponding learning problem is: where @math is a regularizer of @math . Two commonly used assumptions about @math are low-rank and sparse. Hence, in many domains, we also refer to @math as the low-dimensional representation of @math .
Through this procedure, the individual pairwise similarity information hidden in the data is explored @cite_1 and the "most informative neighbors" for each data point are automatically chosen. | {
"cite_N": [
"@cite_1"
],
"mid": [
"1979089718"
],
"abstract": [
"Many clustering methods partition the data groups based on the input data similarity matrix. Thus, the clustering results highly depend on the data similarity learning. Because the similarity measurement and data clustering are often conducted in two separated steps, the learned data similarity may not be the optimal one for data clustering and lead to the suboptimal results. In this paper, we propose a novel clustering model to learn the data similarity matrix and clustering structure simultaneously. Our new model learns the data similarity matrix by assigning the adaptive and optimal neighbors for each data point based on the local distances. Meanwhile, the new rank constraint is imposed to the Laplacian matrix of the data similarity matrix, such that the connected components in the resulted similarity matrix are exactly equal to the cluster number. We derive an efficient algorithm to optimize the proposed challenging problem, and show the theoretical analysis on the connections between our method and the K-means clustering, and spectral clustering. We also further extend the new clustering model for the projected clustering to handle the high-dimensional data. Extensive empirical results on both synthetic data and real-world benchmark data sets show that our new clustering methods consistently outperforms the related clustering approaches."
]
} |
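As an illustration of the self-expression idea above, the ridge-regularized variant (often called least-squares regression, LSR) has a closed-form solution. This sketch is ours, not code from the cited papers; the symmetrization of |C| into a similarity matrix is a conventional post-processing choice, not prescribed by the text.

```python
import numpy as np

def self_expressive_similarity(X, lam=0.1):
    """Ridge-regularized self-expression:
        C = argmin ||X - X C||_F^2 + lam * ||C||_F^2
          = (X^T X + lam * I)^{-1} X^T X,
    where X is d x n with samples as columns. A large |C_ij| indicates that
    x_j is an informative neighbor of x_i, so |C| acts as a learned similarity."""
    n = X.shape[1]
    G = X.T @ X                            # Gram matrix of the samples
    C = np.linalg.solve(G + lam * np.eye(n), G)
    W = 0.5 * (np.abs(C) + np.abs(C.T))    # symmetrize into a similarity matrix
    np.fill_diagonal(W, 0.0)               # drop the trivial self-representation
    return C, W
```

On two well-separated clusters, the representation coefficients concentrate within each cluster, which is the behavior the low-rank and sparse regularizers mentioned in the text are designed to encourage.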
1903.04235 | 2950001461 | Data similarity is a key concept in many data-driven applications. Many algorithms are sensitive to similarity measures. To tackle this fundamental problem, automatically learning of similarity information from data via self-expression has been developed and successfully applied in various models, such as low-rank representation, sparse subspace learning, semi-supervised learning. However, it just tries to reconstruct the original data and some valuable information, e.g., the manifold structure, is largely ignored. In this paper, we argue that it is beneficial to preserve the overall relations when we extract similarity information. Specifically, we propose a novel similarity learning framework by minimizing the reconstruction error of kernel matrices, rather than the reconstruction error of original data adopted by existing work. Taking the clustering task as an example to evaluate our method, we observe considerable improvements compared to other state-of-the-art methods. More importantly, our proposed framework is very general and provides a novel and fundamental building block for many other similarity-based tasks. Besides, our proposed kernel preserving opens up a large number of possibilities to embed high-dimensional data into low-dimensional space. | Moreover, this learned @math can not only reveal low-dimensional structure of data, but also be robust to data scale @cite_11 . Therefore, this approach has drawn significant attention and achieved impressive performance in a number of applications, including face recognition @cite_22 , subspace clustering @cite_7 @cite_19 , semi-supervised learning @cite_8 . In many real-world applications, data often present complex structures. Nevertheless, the first term in Eq. ) simply minimizes the reconstruction error. Some important manifold structure information, such as overall relations, could be lost during this process. 
Preserving relation information has been shown to be important for feature selection @cite_0 . In @cite_0 , a new feature vector @math is obtained by maximizing @math , where @math is the refined similarity matrix derived from the original kernel matrix @math with element @math . In this paper, we propose a novel model to preserve the overall relations of the original data and simultaneously learn the similarity matrix. | {
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_0",
"@cite_19",
"@cite_11"
],
"mid": [
"2132467081",
"",
"2616803684",
"2132379769",
"",
"2406581873"
],
"abstract": [
"As a recently proposed technique, sparse representation based classification (SRC) has been widely used for face recognition (FR). SRC first codes a testing sample as a sparse linear combination of all the training samples, and then classifies the testing sample by evaluating which class leads to the minimum representation error. While the importance of sparsity is much emphasized in SRC and many related works, the use of collaborative representation (CR) in SRC is ignored by most literature. However, is it really the l 1 -norm sparsity that improves the FR accuracy? This paper devotes to analyze the working mechanism of SRC, and indicates that it is the CR but not the l1-norm sparsity that makes SRC powerful for face classification. Consequently, we propose a very simple yet much more efficient face classification scheme, namely CR based classification with regularized least square (CRC_RLS). The extensive experiments clearly show that CRC_RLS has very competitive classification results, while it has significantly less complexity than SRC.",
"",
"In the literature, most existing graph-based semi-supervised learning methods only use the label information of observed samples in the label propagation stage, while ignoring such valuable information when learning the graph. In this paper, we argue that it is beneficial to consider the label information in the graph learning stage. Specifically, by enforcing the weight of edges between labeled samples of different classes to be zero, we explicitly incorporate the label information into the state-of-the-art graph learning methods, such as the low-rank representation (LRR), and propose a novel semi-supervised graph learning method called semi-supervised low-rank representation. This results in a convex optimization problem with linear constraints, which can be solved by the linearized alternating direction method. Though we take LRR as an example, our proposed method is in fact very general and can be applied to any self-representation graph learning methods. Experiment results on both synthetic and real data sets demonstrate that the proposed graph learning method can better capture the global geometric structure of the data, and therefore is more effective for semi-supervised learning tasks.",
"In the literature of feature selection, different criteria have been proposed to evaluate the goodness of features. In our investigation, we notice that a number of existing selection criteria implicitly select features that preserve sample similarity, and can be unified under a common framework. We further point out that any feature selection criteria covered by this framework cannot handle redundant features, a common drawback of these criteria. Motivated by these observations, we propose a new \"Similarity Preserving Feature Selection” framework in an explicit and rigorous way. We show, through theoretical analysis, that the proposed framework not only encompasses many widely used feature selection criteria, but also naturally overcomes their common weakness in handling feature redundancy. In developing this new framework, we begin with a conventional combinatorial optimization formulation for similarity preserving feature selection, then extend it with a sparse multiple-output regression formulation to improve its efficiency and effectiveness. A set of three algorithms are devised to efficiently solve the proposed formulations, each of which has its own advantages in terms of computational complexity and selection performance. As exhibited by our extensive experimental study, the proposed framework achieves superior feature selection performance and attractive properties.",
"",
"The Laplacian matrix of a graph can be used in many areas of mathematical research and has a physical interpretation in various theories. However, there are a few open issues in the Laplacian graph construction: (i) Selecting the appropriate scale of analysis, (ii) Selecting the appropriate number of neighbors, (iii) Handling multiscale data, and, (iv) Dealing with noise and outliers. In this paper, we propose that the affinity between pairs of samples could be computed using sparse representation with proper constraints. This parameter free setting automatically produces the Laplacian graph, leads to significant reduction in computation cost and robustness to the outliers and noise. We further provide an efficient algorithm to solve the difficult optimization problem based on improvement of existing algorithms. To demonstrate our motivation, we conduct spectral clustering experiments with benchmark methods. Empirical experiments on 9 data sets demonstrate the effectiveness of our method."
]
} |
1903.04267 | 2921071932 | Historically, battery is the power source for mobile, embedded and remote system applications. However, the development of battery techniques does not follow the Moore's Law. The large physical size, limited electric quantity and high-cost replacement process always restrict the performance of the application such as embedded systems, wireless sensors networks and lower-power electronics. Energy harvesting, a technique which enables the applications to scavenge energy from RF signal from TV towers, solar energy, piezoelectric driven by motion of people and thermal energy from the temperature difference, which could dramatically extend the operating lifetime of applications. Thus, energy harvesting is important for the sustainable operations of an application. | The Ultra-Low Power Sensor Evaluation Kit (ULPSEK) @cite_4 , designed for the evaluation of biomedical sensors and monitoring applications, is a wearable, multi-parameter health sensor powered by an efficient body heat harvester. ULPSEK can measure and process electrocardiogram, respiration, motion and body temperature. The key component of ULPSEK is the thermal harvester, which is placed at the forearm or the chest. The harvester consists of a heat sink, a thermal-electric module and a DC-DC converter circuit. As for mechanical energy, a Piezoelectric Energy Harvester (PEH) is proposed to be used in a cantilever configuration for Structural Health Monitoring (SHM) @cite_10 . It harvests energy from bridge vibrations caused by passing vehicles, acting as a concrete vibration sensor for reinforced concrete structures. RF energy scavenging is a field attracting a great deal of researchers' attention, especially in low-power wireless sensor networks. Researchers from the University of Tokyo propose a design of low-cost sensor nodes harvesting energy from TV broadcast signals and storing excess power in capacitors @cite_9 . | {
"cite_N": [
"@cite_10",
"@cite_9",
"@cite_4"
],
"mid": [
"2802083347",
"2083613462",
"2620300161"
],
"abstract": [
"Piezoelectric energy harvesting from bridge vibrations has attracted many researchers not because it provides a clean and autonomous solution to power portable electronic devices, in addition, it helps in making a smart city. This paper focuses on energy harvesting from low-frequency bridge vibrations which includes vibrations measurements from a city flyover and laboratory experiment using traditional rectifier circuit at low frequency and small amplitude vibrations for storage. The typical practical issues have been addressed associated with PEH from bridge vibrations and electrical circuitry.",
"In this paper, we present a software control method that maximizes the sensing rate of wireless sensor networks (WSN) that are solely powered by ambient RF power. Different from all other energy harvesting WSN systems, RF powered systems present a new challenge for the energy management. A WSN node repeatedly charges and discharges at short intervals depending on the energy intake. A capacitor is used for an energy storage in the energy harvesting system because of its efficient charge and discharge performance and infinite recharge cycles. When this charging time is too short, the node is more likely to experience an energy shortage. On the contrary, if it is too long, more energy is lost because of the leakage in the capacitor. Therefore, we implemented an adaptive duty cycle control scheme that is optimized for RF energy harvesting. This method maximizes the sensing rate considering the leakage problem, a factor that has never previously been studied in this context. Our control scheme improves efficiency by aggregate evaluation of the operation reliability and leakage reduction.",
"Wearable health sensors are about to change our health system. While several technological improvements have been presented to enhance performance and energy-efficiency, battery runtime is still a critical concern for practical use of wearable biomedical sensor systems. The runtime limitation is directly related to the battery size, which is another concern regarding practicality and customer acceptance. We introduced ULPSEK—Ultra-Low-Power Sensor Evaluation Kit—for evaluation of biomedical sensors and monitoring applications ( http: ulpsek.com ). ULPSEK includes a multiparameter sensor measuring and processing electrocardiogram, respiration, motion, body temperature, and photoplethysmography. Instead of a battery, ULPSEK is powered using an efficient body heat harvester. The harvester produced 171 @math W on average, which was sufficient to power the sensor below 25 @math C ambient temperature. We present design issues regarding the power supply and the power distribution network of the ULPSEK sensor platform. Due to the security aspect of self-powered health sensors, we suggest a hybrid solution consisting of a battery charged by a harvester."
]
} |
1903.04414 | 2921034270 | We investigate the stability condition of redundancy- @math multi-server systems. Each server has its own queue and implements popular scheduling disciplines such as First-Come-First-Serve (FCFS), Processor Sharing (PS), and Random Order of Service (ROS). New jobs arrive according to a Poisson process and copies of each job are sent to @math servers chosen uniformly at random. The service times of jobs are assumed to be exponentially distributed. A job departs as soon as one of its copies finishes service. Under the assumption that all @math copies are i.i.d., we show that for PS and ROS (for FCFS it is already known) sending redundant copies does not reduce the stability region. Under the assumption that the @math copies are identical, we show that (i) ROS does not reduce the stability region, (ii) FCFS reduces the stability region, which can be characterized through an associated saturated system, and (iii) PS severely reduces the stability region, which coincides with the system where all copies have to be served. The proofs are based on careful characterizations of scaling limits of the underlying stochastic process. Through simulations we obtain interesting insights on the system's performance for non-exponential service time distributions and heterogeneous server speeds. | In redundancy systems with cancel-on-complete ( @math , as considered in this paper), once one of the copies has completed service, the other copies are deleted and the job is said to have received service. Most of the recent literature on redundancy has focused on @math and i.i.d. copies with FCFS as the service policy implemented at the servers. For example, under these assumptions, a thorough performance analysis has been carried out by @cite_15 @cite_21 , and as mentioned in the introduction, the stability condition has been fully characterized in @cite_15 @cite_22 .
In @cite_15 , the authors consider a class-based model where redundant copies of an arriving job type are dispatched to a type-specific subset of servers, and show that the steady-state distribution has a product form. In @cite_21 , the previous result is applied to analyze a multi-server model with homogeneous servers where incoming jobs are dispatched to randomly selected @math servers. An important insight obtained there is that stability is not affected by @math and that the mean job delay in the system reduces as the redundancy degree @math increases. | {
"cite_N": [
"@cite_15",
"@cite_21",
"@cite_22"
],
"mid": [
"2462681966",
"2604490148",
"2611196571"
],
"abstract": [
"Recent computer systems research has proposed using redundant requests to reduce latency. The idea is to run a request on multiple servers and wait for the first completion (discarding all remaining copies of the request). However, there is no exact analysis of systems with redundancy. This paper presents the first exact analysis of systems with redundancy. We allow for any number of classes of redundant requests, any number of classes of non-redundant requests, any degree of redundancy, and any number of heterogeneous servers. In all cases we derive the limiting distribution of the state of the system. In small (two or three server) systems, we derive simple forms for the distribution of response time of both the redundant classes and non-redundant classes, and we quantify the \"gain\" to redundant classes and \"pain\" to non-redundant classes caused by redundancy. We find some surprising results. First, the response time of a fully redundant class follows a simple exponential distribution and that of the non-redundant class follows a generalized hyperexponential. Second, fully redundant classes are \"immune\" to any pain caused by other classes becoming redundant. We also compare redundancy with other approaches for reducing latency, such as optimal probabilistic splitting of a class among servers (Opt-Split) and join-the-shortest-queue (JSQ) routing of a class. We find that, in many cases, redundancy outperforms JSQ and Opt-Split with respect to overall response time, making it an attractive solution.",
"Redundancy is an important strategy for reducing response time in multi-server distributed queueing systems. This strategy has been used in a variety of settings, but only recently have researchers begun analytical studies. The idea behind redundancy is that customers can greatly reduce response time by waiting in multiple queues at the same time, thereby experiencing the minimum time across queues. Redundancy has been shown to produce significant response time improvements in applications ranging from organ transplant waitlists to Google’s BigTable service. However, despite the growing body of theoretical and empirical work on the benefits of redundancy, there is little work addressing the questions of how many copies one needs to make to achieve a response time benefit, and the magnitude of the potential gains. In this paper we propose a theoretical model and dispatching policy to evaluate these questions. Our system consists of k servers, each with its own queue. We introduce the Redundancy-d policy, u...",
"We represent a computer cluster as a multi-server queue with some arbitrary graph of compatibilities between jobs and servers. Each server processes its jobs sequentially in FCFS order. The service rate of a job at any given time is the sum of the service rates of all servers processing this job. We show that the corresponding queue is quasi-reversible and use this property to design a scheduling algorithm achieving balanced fair sharing of the computing resources."
]
} |
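The redundancy-d dynamics described in this row (i.i.d. exponential copies, FCFS servers, cancel-on-complete) can be explored with a small event-driven simulation. The sketch below is our own, not code from the cited papers; it assumes a total arrival rate of lam*K (i.e., load lam per server) and exploits memorylessness, so that each busy server completes its in-service copy at rate mu and jumps can be drawn Gillespie-style.

```python
import random

def simulate_redundancy_d(lam, mu, K, d, n_jobs, seed=0):
    """Simulate a redundancy-d system: Poisson(lam*K) arrivals, K FCFS
    servers of rate mu; each job sends i.i.d. exponential copies to d
    servers chosen uniformly at random and departs when the first copy
    finishes (cancel-on-complete). Exactness of the rate-based jumps
    relies on the service times being exponential (memoryless).
    Returns the mean response time over the first n_jobs completions."""
    rng = random.Random(seed)
    queues = [[] for _ in range(K)]   # FIFO queue of job ids per server
    servers_of = {}                   # job id -> servers holding a copy
    arrival_time = {}
    t, next_id, done, total_resp = 0.0, 0, 0, 0.0
    while done < n_jobs:
        busy = [s for s in range(K) if queues[s]]
        rate = lam * K + mu * len(busy)
        t += rng.expovariate(rate)
        if rng.random() < (lam * K) / rate:
            # arrival: replicate the job to d distinct random servers
            chosen = rng.sample(range(K), d)
            for s in chosen:
                queues[s].append(next_id)
            servers_of[next_id] = chosen
            arrival_time[next_id] = t
            next_id += 1
        else:
            # a uniformly random busy server finishes its head-of-line copy
            s = rng.choice(busy)
            job = queues[s][0]
            for srv in servers_of.pop(job):  # cancel all remaining copies
                queues[srv].remove(job)
            total_resp += t - arrival_time.pop(job)
            done += 1
    return total_resp / n_jobs
```

For d=1 this reduces to K independent M/M/1 queues (mean response 1/(mu-lam)), and increasing d should lower the mean delay under the i.i.d.-copies assumption, consistent with the insight attributed to @cite_21 above.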
1903.04414 | 2921034270 | We investigate the stability condition of redundancy- @math multi-server systems. Each server has its own queue and implements popular scheduling disciplines such as First-Come-First-Serve (FCFS), Processor Sharing (PS), and Random Order of Service (ROS). New jobs arrive according to a Poisson process and copies of each job are sent to @math servers chosen uniformly at random. The service times of jobs are assumed to be exponentially distributed. A job departs as soon as one of its copies finishes service. Under the assumption that all @math copies are i.i.d., we show that for PS and ROS (for FCFS it is already known) sending redundant copies does not reduce the stability region. Under the assumption that the @math copies are identical, we show that (i) ROS does not reduce the stability region, (ii) FCFS reduces the stability region, which can be characterized through an associated saturated system, and (iii) PS severely reduces the stability region, which coincides with the system where all copies have to be served. The proofs are based on careful characterizations of scaling limits of the underlying stochastic process. Through simulations we obtain interesting insights on the system's performance for non-exponential service time distributions and heterogeneous server speeds. | In a recent study, @cite_18 , the impact of the scheduling policy employed in the server is investigated for i.i.d. copies and exponential service. The authors show that for FCFS the performance might not improve as the number of redundant copies increases, while for other policies as proposed in that paper, redundancy does improve the performance. | {
"cite_N": [
"@cite_18"
],
"mid": [
"2912554127"
],
"abstract": [
"Abstract Redundancy is an increasingly popular technique for reducing response times in computer systems, and there is a growing body of theoretical work seeking to analyze performance in systems with redundancy. The idea is to dispatch a job to multiple servers at the same time and wait for the first copy to complete service. Redundancy can help reduce response time because redundant jobs get to experience the shortest of multiple queueing times and potentially of multiple service times—but it can hurt jobs that are not redundant and must wait behind the redundant jobs’ extra copies. Thus in designing redundancy systems it is critical to find ways to leverage the potential benefits without incurring the potential costs. Scheduling represents one tool for maximizing the benefits of redundancy. In this paper we study three scheduling policies: First-Come First-Served (FCFS), Least Redundant First (LRF, under which less-redundant jobs have priority over more-redundant jobs), and Primaries First (PF, under which each job designates a “primary” copy, and all other copies have lowest priority). Our goal for each of these policies is to understand the marginal impact of redundancy: how much redundancy is needed to get the biggest benefit? We study this question analytically for LRF and FCFS, and via simulation for all three policies. One of our primary contributions is a surprisingly intricate proof that mean response time is convex as well as decreasing as the proportion of jobs that are redundant increases under LRF for exponential services. While response time under PF is also decreasing and appears to be convex as well, we find that, surprisingly, FCFS may be neither decreasing nor convex, depending on the parameter values. Thus, the scheduling policy is key in determining both whether redundancy helps and the marginal effects of adding more redundancy to the system."
]
} |
1903.04414 | 2921034270 | We investigate the stability condition of redundancy- @math multi-server systems. Each server has its own queue and implements popular scheduling disciplines such as First-Come-First-Serve (FCFS), Processor Sharing (PS), and Random Order of Service (ROS). New jobs arrive according to a Poisson process and copies of each job are sent to @math servers chosen uniformly at random. The service times of jobs are assumed to be exponentially distributed. A job departs as soon as one of its copies finishes service. Under the assumption that all @math copies are i.i.d., we show that for PS and ROS (for FCFS it is already known) sending redundant copies does not reduce the stability region. Under the assumption that the @math copies are identical, we show that (i) ROS does not reduce the stability region, (ii) FCFS reduces the stability region, which can be characterized through an associated saturated system, and (iii) PS severely reduces the stability region, which coincides with the system where all copies have to be served. The proofs are based on careful characterizations of scaling limits of the underlying stochastic process. Through simulations we obtain interesting insights on the system's performance for non-exponential service time distributions and heterogeneous server speeds. | Very recently, preliminary results on redundancy without the i.i.d. assumption have been published. @cite_28 propose a model in which the service time of a redundant copy is decoupled into two components, one related to the inherent job size of the task, and the other related to the server's slowdown. The paper also proposes a load balancing scheme that dispatches only one copy per job when all servers are busy. Assuming the dispatcher has information on the status of the servers, such a dispatching policy is stable under the condition @math . 
Hellemans and van Houdt @cite_8 consider identical copies and FCFS, and develop a numerical method to compute the workload and response time distribution as the number of servers tends to infinity. For this method to work, the system needs to be stable, but the stability condition is not characterized. | {
"cite_N": [
"@cite_28",
"@cite_8"
],
"mid": [
"2757172027",
"2912987741"
],
"abstract": [
"Recent computer systems research has proposed using redundant requests to reduce latency. The idea is to replicate a request so that it joins the queue at multiple servers. The request is considered complete as soon as any one of its copies completes. Redundancy allows us to overcome server-side variability–the fact that a server might be temporarily slow due to factors such as background load, network interrupts, and garbage collection to reduce response time. In the past few years, queueing theorists have begun to study redundancy, first via approximations, and, more recently, via exact analysis. Unfortunately, for analytical tractability, most existing theoretical analysis has assumed an Independent Runtimes (IR) model, wherein the replicas of a job each experience independent runtimes (service times) at different servers. The IR model is unrealistic and has led to theoretical results that can be at odds with computer systems implementation results. This paper introduces a much more realistic model of redundancy. Our model decouples the inherent job size ( @math ) from the server-side slowdown ( @math ), where we track both @math and @math for each job. Analysis within the @math model is, of course, much more difficult. Nevertheless, we design a dispatching policy, Redundant-to-Idle-Queue, which is both analytically tractable within the @math model and has provably excellent performance.",
"Queueing systems with redundancy have received considerable attention recently. The idea of redundancy is to reduce latency by replicating each incoming job a number of times and to assign these replicas to a set of randomly selected servers. As soon as one replica completes service the remaining replicas are cancelled. Most prior work on queueing systems with redundancy assumes that the job durations of the different replicas are i.i.d., which yields insights that can be misleading for computer system design. In this paper we develop a differential equation, using the cavity method, to assess the workload and response time distribution in a large homogeneous system with redundancy without the need to rely on this independence assumption. More specifically, we assume that the duration of each replica of a single job is identical across the servers and follows a general service time distribution. Simulation results suggest that the differential equation yields exact results as the system size tends to infinity and can be used to study the stability of the system."
]
} |
1903.04414 | 2921034270 | We investigate the stability condition of redundancy- @math multi-server systems. Each server has its own queue and implements popular scheduling disciplines such as First-Come-First-Serve (FCFS), Processor Sharing (PS), and Random Order of Service (ROS). New jobs arrive according to a Poisson process and copies of each job are sent to @math servers chosen uniformly at random. The service times of jobs are assumed to be exponentially distributed. A job departs as soon as one of its copies finishes service. Under the assumption that all @math copies are i.i.d., we show that for PS and ROS (for FCFS it is already known) sending redundant copies does not reduce the stability region. Under the assumption that the @math copies are identical, we show that (i) ROS does not reduce the stability region, (ii) FCFS reduces the stability region, which can be characterized through an associated saturated system, and (iii) PS severely reduces the stability region, which coincides with the system where all copies have to be served. The proofs are based on careful characterizations of scaling limits of the underlying stochastic process. Through simulations we obtain interesting insights on the system's performance for non-exponential service time distributions and heterogeneous server speeds. | As opposed to @math , in redundancy systems with cancel-on-start ( @math ), once one of the copies starts being served, the other copies are deleted. Until now, @math has received far less attention than @math . The main reason is that, in practice, redundancy aims at exploiting server speed variability, a task that @math achieves better. From the stability point of view, @math does not bring any extra work to the system, and thus its stability region is the same as in the non-redundant system. 
The steady-state distribution of @math has recently been analyzed in @cite_13 , and the equivalence of the @math redundancy model with two other parallel-service models has been shown in @cite_3 . A thorough analysis of @math in the mean-field regime has been derived by Hellemans and van Houdt @cite_26 . | {
"cite_N": [
"@cite_26",
"@cite_13",
"@cite_3"
],
"mid": [
"2786117165",
"2971432649",
""
],
"abstract": [
"Motivated by distributed schedulers that combine the power-of-d-choices with late binding and systems that use replication with cancellation-on-start, we study the performance of the LL(d) policy which assigns a job to a server that currently has the least workload among d randomly selected servers in large-scale homogeneous clusters. We consider general job size distributions and propose a partial integro-differential equation to describe the evolution of the system. This equation relies on the earlier proven ansatz for LL(d) which asserts that the workload distribution of any finite set of queues becomes independent of one another as the number of servers tends to infinity. Based on this equation we propose a fixed point iteration for the limiting workload distribution and study its convergence.",
"Abstract In this paper, we present a unifying analysis for redundancy systems with cancel-on-start ( c . o . s . ) and cancel-on-complete ( c . o . c . ) with exponentially distributed service requirements. With c . o . s . ( c . o . c . ) all redundant copies are removed as soon as one of the copies starts (completes) service. As a consequence, c . o . s . does not waste any computing resources, as opposed to c . o . c . We show that the c . o . s . model is equivalent to a queueing system with multi-type jobs and servers, which was analyzed in , (2012), and show that c . o . c . (under the assumption of i.i.d. copies) can be analyzed by a generalization of , (2012) where state-dependent departure rates are permitted. This allows us to show that the stationary distribution for both the c . o . c . and c . o . s . models has a product form. We give a detailed first-time analysis for c . o . s and derive a closed form expression for important metrics like mean number of jobs in the system, and probability of waiting. We also note that the c . o . s . model is equivalent to Join-Shortest-Work queue with power of d (JSW( d )). In the latter, an incoming job is dispatched to the server with smallest workload among d randomly chosen ones. Thus, all our results apply mutatis-mutandis to JSW( d ). Comparing the performance of c . o . s . with that of c . o . c . with i.i.d. copies gives the unexpected conclusion (since c . o . s . does not waste any resources) that c . o . s . is worse in terms of mean number of jobs. As part of ancillary results, we illustrate that this is primarily due to the assumption of i.i.d. copies in case of c . o . c . (together with exponentially distributed requirements) and that such assumptions might lead to conclusions that are qualitatively different from that observed in practice.",
""
]
} |
1903.04442 | 2921651847 | We propose that intelligently combining models from the domains of Artificial Intelligence or Machine Learning with Physical and Expert models will yield a more "trustworthy" model than any one model from a single domain, given a complex and narrow enough problem. Based on mean-variance portfolio theory and bias-variance trade-off analysis, we prove combining models from various domains produces a model that has lower risk, increasing user trust. We call such combined models - physics enhanced artificial intelligence (PEAI), and suggest use cases for PEAI. | Other ways to improve user trust in and adoption of AI include providing tractable performance improvements compared to non-AI systems in controlled data sets and experiments. This is normally achieved by building metrics to quantify performance @cite_7 . However, any quantitative comparison will likely tie the performance to the test data sets, with known data bias being an issue for AI systems @cite_4 . Another practical way to restore user trust in AI services is to provide supplementary documentation such as a supplier's declaration of conformity (SDoC) for the AI services provided @cite_16 . | {
"cite_N": [
"@cite_16",
"@cite_4",
"@cite_7"
],
"mid": [
"2911599914",
"2909334458",
""
],
"abstract": [
"Accuracy is an important concern for suppliers of artificial intelligence (AI) services, but considerations beyond accuracy, such as safety (which includes fairness and explainability), security, and provenance, are also critical elements to engender consumers' trust in a service. Many industries use transparent, standardized, but often not legally required documents called supplier's declarations of conformity (SDoCs) to describe the lineage of a product along with the safety and performance testing it has undergone. SDoCs may be considered multi-dimensional fact sheets that capture and quantify various aspects of the product and its development to make it worthy of consumers' trust. Inspired by this practice, we propose FactSheets to help increase trust in AI services. We envision such documents to contain purpose, performance, safety, security, and provenance information to be completed by AI service providers for examination by consumers. We suggest a comprehensive set of declaration items tailored to AI and provide examples for two fictitious AI services in the appendix of the paper.",
"Datasets often contain biases which unfairly disadvantage certain groups, and classifiers trained on such datasets can inherit these biases. In this paper, we provide a mathematical formulation of how this bias can arise. We do so by assuming the existence of underlying, unknown, and unbiased labels which are overwritten by an agent who intends to provide accurate labels but may have biases against certain groups. Despite the fact that we only observe the biased labels, we are able to show that the bias may nevertheless be corrected by re-weighting the data points without changing the labels. We show, with theoretical guarantees, that training on the re-weighted dataset corresponds to training on the unobserved but unbiased labels, thus leading to an unbiased machine learning classifier. Our procedure is fast and robust and can be used with virtually any learning algorithm. We evaluate on a number of standard machine learning fairness datasets and a variety of fairness notions, finding that our method outperforms standard approaches in achieving fair classification.",
""
]
} |
1903.04254 | 2921626067 | Product categorization using text data for eCommerce is a very challenging extreme classification problem with several thousands of classes and several millions of products to classify. Even though multi-class text classification is a well studied problem both in academia and industry, most approaches either deal with treating product content as a single pile of text, or only consider a few product attributes for modelling purposes. Given the variety of products sold on popular eCommerce platforms, it is hard to consider all available product attributes as part of the modeling exercise, considering that products possess their own unique set of attributes based on category. In this paper, we compare hierarchical models to flat models and show that in specific cases, flat models perform better. We explore two Deep Learning based models that extract features from individual pieces of unstructured data from each product and then combine them to create a product signature. We also propose a novel idea of using structured attributes and their values together in an unstructured fashion along with convolutional filters such that the ordering of the attributes and the differing attributes by product categories no longer becomes a modelling challenge. This approach is also more robust to the presence of faulty product attribute names and values and can elegantly generalize to use both closed list and open list attributes. | Weigend et al. @cite_2 used a hierarchical classification scheme with one meta-classifier to determine the broad topic followed by individual topic-level classifiers to distinguish nuances within each topic. They also observed that using neural networks in place of standard logistic regression resulted in improved performance. Shen et al. @cite_13 reformulated this into a two-level classification problem, ignoring the prior hierarchy and distributing the leaf nodes fairly evenly across top-level categories. 
They used a kNN classifier at the top level followed by individual SVM classifiers for each second-level node. | {
"cite_N": [
"@cite_13",
"@cite_2"
],
"mid": [
"2143774383",
"1487445520"
],
"abstract": [
"This paper studies the problem of leveraging computationally intensive classification algorithms for large scale text categorization problems. We propose a hierarchical approach which decomposes the classification problem into a coarse level task and a fine level task. A simple yet scalable classifier is applied to perform the coarse level classification while a more sophisticated model is used to separate classes at the fine level. However, instead of relying on a human-defined hierarchy to decompose the problem, we we use a graph algorithm to discover automatically groups of highly similar classes. As an illustrative example, we apply our approach to real-world industrial data from eBay, a major e-commerce site where the goal is to classify live items into a large taxonomy of categories. In such industrial setting, classification is very challenging due to the number of classes, the amount of training data, the size of the feature space and the real-world requirements on the response time. We demonstrate through extensive experimental evaluation that (1) the proposed hierarchical approach is superior to flat models, and (2) the data-driven extraction of latent groups works significantly better than the existing human-defined hierarchy.",
"With the recent dramatic increase in electronic access to documents, text categorization—the task of assigning topics to a given document—has moved to the center of the information sciences and knowledge management. This article uses the structure that is present in the semantic space of topics in order to improve performance in text categorization: according to their meaning, topics can be grouped together into “meta-topics”, e.g., gold, silver, and copper are all met als. The proposed architecture matches the hierarchical structure of the topic space, as opposed to a flat model that ignores the structure. It accommodates both single and multiple topic assignments for each document. Its probabilistic interpretation allows its predictions to be combined in a principled way with information from other sources. The first level of the architecture predicts the probabilities of the meta-topic groups. This allows the individual models for each topic on the second level to focus on finer discriminations within the group. Evaluating the performance of a two-level implementation on the Reuters-22173 testbed of newswire articles shows the most significant improvement for rare classes."
]
} |
1903.04254 | 2921626067 | Product categorization using text data for eCommerce is a very challenging extreme classification problem with several thousands of classes and several millions of products to classify. Even though multi-class text classification is a well studied problem both in academia and industry, most approaches either deal with treating product content as a single pile of text, or only consider a few product attributes for modelling purposes. Given the variety of products sold on popular eCommerce platforms, it is hard to consider all available product attributes as part of the modeling exercise, considering that products possess their own unique set of attributes based on category. In this paper, we compare hierarchical models to flat models and show that in specific cases, flat models perform better. We explore two Deep Learning based models that extract features from individual pieces of unstructured data from each product and then combine them to create a product signature. We also propose a novel idea of using structured attributes and their values together in an unstructured fashion along with convolutional filters such that the ordering of the attributes and the differing attributes by product categories no longer becomes a modelling challenge. This approach is also more robust to the presence of faulty product attribute names and values and can elegantly generalize to use both closed list and open list attributes. | Yundi Li et al. @cite_11 used a Machine Translation approach to generate a root-to-leaf path in a product taxonomy. Gupta et al. @cite_1 trained path-wise, node-wise and depth-wise models (with respect to the taxonomy tree) and trained an ensemble model based on the outputs of these models. Zahavy et al. @cite_3 use decision-level fusion networks for multi-modal classification using text and image inputs. | {
"cite_N": [
"@cite_1",
"@cite_3",
"@cite_11"
],
"mid": [
"2467946607",
"2559721862",
"2904728018"
],
"abstract": [
"Product classification is the task of automatically predicting a taxonomy path for a product in a predefined taxonomy hierarchy given a textual product description or title. For efficient product classification we require a suitable representation for a document (the textual description of a product) feature vector and efficient and fast algorithms for prediction. To address the above challenges, we propose a new distributional semantics representation for document vector formation. We also develop a new two-level ensemble approach utilizing (with respect to the taxonomy tree) a path-wise, node-wise and depth-wise classifiers for error reduction in the final product classification. Our experiments show the effectiveness of the distributional representation and the ensemble approach on data sets from a leading e-commerce platform and achieve better results on various evaluation metrics compared to earlier approaches.",
"Classifying products into categories precisely and efficiently is a major challenge in modern e-commerce. The high traffic of new products uploaded daily and the dynamic nature of the categories raise the need for machine learning models that can reduce the cost and time of human editors. In this paper, we propose a decision level fusion approach for multi-modal product classification using text and image inputs. We train input specific state-of-the-art deep neural networks for each input source, show the potential of forging them together into a multi-modal architecture and train a novel policy network that learns to choose between them. Finally, we demonstrate that our multi-modal network improves the top-1 accuracy @math over both networks on a real-world large-scale product classification dataset that we collected from Walmart.com. While we focus on image-text fusion that characterizes e-commerce domains, our algorithms can be easily applied to other modalities such as audio, video, physical sensors, etc.",
"E-commerce platforms categorize their products into a multi-level taxonomy tree with thousands of leaf categories. Conventional methods for product categorization are typically based on machine learning classification algorithms. These algorithms take product information as input (e.g., titles and descriptions) to classify a product into a leaf category. In this paper, we propose a new paradigm based on machine translation. In our approach, we translate a product's natural language description into a sequence of tokens representing a root-to-leaf path in a product taxonomy. In our experiments on two large real-world datasets, we show that our approach achieves better predictive accuracy than a state-of-the-art classification system for product categorization. In addition, we demonstrate that our machine translation models can propose meaningful new paths between previously unconnected nodes in a taxonomy tree, thereby transforming the taxonomy into a directed acyclic graph (DAG). We discuss how the resultant taxonomy DAG promotes user-friendly navigation, and how it is more adaptable to new products."
]
} |
1903.04254 | 2921626067 | Product categorization using text data for eCommerce is a very challenging extreme classification problem with several thousands of classes and several millions of products to classify. Even though multi-class text classification is a well studied problem both in academia and industry, most approaches either deal with treating product content as a single pile of text, or only consider a few product attributes for modelling purposes. Given the variety of products sold on popular eCommerce platforms, it is hard to consider all available product attributes as part of the modeling exercise, considering that products possess their own unique set of attributes based on category. In this paper, we compare hierarchical models to flat models and show that in specific cases, flat models perform better. We explore two Deep Learning based models that extract features from individual pieces of unstructured data from each product and then combine them to create a product signature. We also propose a novel idea of using structured attributes and their values together in an unstructured fashion along with convolutional filters such that the ordering of the attributes and the differing attributes by product categories no longer becomes a modelling challenge. This approach is also more robust to the presence of faulty product attribute names and values and can elegantly generalize to use both closed list and open list attributes. | All of the above approaches use unstructured product attributes like title, description, etc. to perform classification. Ha et al. @cite_7 used a limited set of structured attributes but employed an independent LSTM block for each structured attribute, which does not scale when each category of products contains a different set of attributes and there are thousands of product attributes overall that need to be considered. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2510938745"
],
"abstract": [
"Precise item categorization is a key issue in e-commerce domains. However, it still remains a challenging problem due to data size, category skewness, and noisy metadata. Here, we demonstrate a successful report on a deep learning-based item categorization method, i.e., deep categorization network (DeepCN), in an e-commerce website. DeepCN is an end-to-end model using multiple recurrent neural networks (RNNs) dedicated to metadata attributes for generating features from text metadata and fully connected layers for classifying item categories from the generated features. The categorization errors are propagated back through the fully connected layers to the RNNs for weight update in the learning process. This deep learning-based approach allows diverse attributes to be integrated into a common representation, thus overcoming sparsity and scalability problems. We evaluate DeepCN on large-scale real-world data including more than 94 million items with approximately 4,100 leaf categories from a Korean e-commerce website. Experiment results show our method improves the categorization accuracy compared to the model using single RNN as well as a standard classification model using unigram-based bag-of-words. Furthermore, we investigate how much the model parameters and the used attributes influence categorization performances."
]
} |
1903.04147 | 2949254051 | We aim to study the multi-scale receptive fields of a single convolutional neural network to detect faces of varied scales. This paper presents our Multi-Scale Receptive Field Face Detector (MSFD), which has superior performance on detecting faces at different scales and enjoys real-time inference speed. MSFD agglomerates context and texture by hierarchical structure. More additional information and rich receptive field bring significant improvement but generate marginal time consumption. We simultaneously propose an anchor assignment strategy which can cover faces with a wide range of scales to improve the recall rate of small faces and rotated faces. To reduce the false positive rate, we train our detector with focal loss which keeps the easy samples from overwhelming. As a result, MSFD reaches superior results on the FDDB, Pascal-Faces and WIDER FACE datasets, and can run at 31 FPS on GPU for VGA-resolution images. | As a fundamental problem in computer vision, face detection has been extensively studied in recent years. The Viola-Jones detection framework @cite_16 is a groundbreaking work that uses Haar features and AdaBoost to train a cascade classifier, achieving fairly good results. Since then, researchers have focused on designing more powerful hand-crafted features @cite_2 @cite_21 @cite_29 @cite_32 @cite_34 @cite_31 to improve the performance. However, those traditional approaches rely heavily on the effectiveness of hand-crafted features and optimize each component separately, leaving the whole pipeline sub-optimal. | {
"cite_N": [
"@cite_29",
"@cite_21",
"@cite_32",
"@cite_34",
"@cite_2",
"@cite_31",
"@cite_16"
],
"mid": [
"2041497292",
"2247274765",
"204612701",
"2047508432",
"1966822758",
"1849007038",
"2137401668"
],
"abstract": [
"Face detection has drawn much attention in recent decades since the seminal work by Viola and Jones. While many subsequences have improved the work with more powerful learning algorithms, the feature representation used for face detection still can’t meet the demand for effectively and efficiently handling faces with large appearance variance in the wild. To solve this bottleneck, we borrow the concept of channel features to the face detection domain, which extends the image channel to diverse types like gradient magnitude and oriented gradient histograms and therefore encodes rich information in a simple form. We adopt a novel variant called aggregate channel features, make a full exploration of feature design, and discover a multiscale version of features with better performance. To deal with poses of faces in the wild, we propose a multi-view detection approach featuring score re-ranking and detection adjustment. Following the learning pipelines in ViolaJones framework, the multi-view face detector using aggregate channel features surpasses current state-of-the-art detectors on AFW and FDDB testsets, while runs at 42 FPS",
"We propose a method to address challenges in unconstrained face detection, such as arbitrary pose variations and occlusions. First, a new image feature called Normalized Pixel Difference (NPD) is proposed. NPD feature is computed as the difference to sum ratio between two pixel values, inspired by the Weber Fraction in experimental psychology. The new feature is scale invariant, bounded, and is able to reconstruct the original image. Second, we propose a deep quadratic tree to learn the optimal subset of NPD features and their combinations, so that complex face manifolds can be partitioned by the learned rules. This way, only a single soft-cascade classifier is needed to handle unconstrained face detection. Furthermore, we show that the NPD features can be efficiently obtained from a look up table, and the detection template can be easily scaled, making the proposed face detector very fast. Experimental results on three public face datasets (FDDB, GENKI, and CMU-MIT) show that the proposed method achieves state-of-the-art performance in detecting unconstrained faces with arbitrary pose variations and occlusions in cluttered scenes.",
"We present a new state-of-the-art approach for face detection. The key idea is to combine face alignment with detection, observing that aligned face shapes provide better features for face classification. To make this combination more effective, our approach learns the two tasks jointly in the same cascade framework, by exploiting recent advances in face alignment. Such joint learning greatly enhances the capability of cascade detection and still retains its realtime performance. Extensive experiments show that our approach achieves the best accuracy on challenging datasets, where all existing solutions are either inaccurate or too slow.",
"We present a unified model for face detection, pose estimation, and landmark estimation in real-world, cluttered images. Our model is based on a mixtures of trees with a shared pool of parts; we model every facial landmark as a part and use global mixtures to capture topological changes due to viewpoint. We show that tree-structured models are surprisingly effective at capturing global elastic deformation, while being easy to optimize unlike dense graph structures. We present extensive results on standard face benchmarks, as well as a new “in the wild” annotated dataset, that suggests our system advances the state-of-the-art, sometimes considerably, for all three tasks. Though our model is modestly trained with hundreds of faces, it compares favorably to commercial systems trained with billions of examples (such as Google Picasa and face.com).",
"Despite the fact that face detection has been studied intensively over the past several decades, the problem is still not completely solved. Challenging conditions, such as extreme pose, lighting, and occlusion, have historically hampered traditional, model-based methods. In contrast, exemplar-based face detection has been shown to be effective, even under these challenging conditions, primarily because a large exemplar database is leveraged to cover all possible visual variations. However, relying heavily on a large exemplar database to deal with the face appearance variations makes the detector impractical due to the high space and time complexity. We construct an efficient boosted exemplar-based face detector which overcomes the defect of the previous work by being faster, more memory efficient, and more accurate. In our method, exemplars as weak detectors are discriminatively trained and selectively assembled in the boosting framework which largely reduces the number of required exemplars. Notably, we propose to include non-face images as negative exemplars to actively suppress false detections to further improve the detection accuracy. We verify our approach over two public face detection benchmarks and one personal photo album, and achieve significant improvement over the state-of-the-art algorithms in terms of both accuracy and efficiency.",
"Face detection is a mature problem in computer vision. While diverse high performing face detectors have been proposed in the past, we present two surprising new top performance results. First, we show that a properly trained vanilla DPM reaches top performance, improving over commercial and research systems. Second, we show that a detector based on rigid templates - similar in structure to the Viola&Jones detector - can reach similar top performance on this task. Importantly, we discuss issues with existing evaluation benchmark and propose an improved procedure.",
"This paper describes a face detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a simple and efficient classifier which is built using the AdaBoost learning algorithm (Freund and Schapire, 1995) to select a small number of critical visual features from a very large set of potential features. The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions. A set of experiments in the domain of face detection is presented. The system yields face detection performance comparable to the best previous systems (Sung and Poggio, 1998; , 1998; Schneiderman and Kanade, 2000; , 2000). Implemented on a conventional desktop, face detection proceeds at 15 frames per second."
]
} |
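The Viola-Jones abstract in the record above hinges on the "Integral Image", which reduces any rectangular pixel sum to at most four array lookups. A minimal NumPy sketch of that idea (an illustration only; the function names are ours, not from the paper):

```python
import numpy as np

def integral_image(img):
    # ii[r, c] = sum of img[0:r+1, 0:c+1]; cumulative sums over rows, then columns.
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    # Sum of img[r0:r1, c0:c1] recovered from at most four integral-image lookups.
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
assert rect_sum(ii, 1, 1, 3, 3) == img[1:3, 1:3].sum()  # inner 2x2 block
```

Haar-like features in the cascade are signed combinations of a few such rectangle sums, which is why they can be evaluated so quickly.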
1903.04147 | 2949254051 | We aim to study the multi-scale receptive fields of a single convolutional neural network to detect faces of varied scales. This paper presents our Multi-Scale Receptive Field Face Detector (MSFD), which has superior performance on detecting faces at different scales and enjoys real-time inference speed. MSFD agglomerates context and texture by hierarchical structure. More additional information and rich receptive field bring significant improvement but generate marginal time consumption. We simultaneously propose an anchor assignment strategy which can cover faces with a wide range of scales to improve the recall rate of small faces and rotated faces. To reduce the false positive rate, we train our detector with focal loss which keeps the easy samples from overwhelming. As a result, MSFD reaches superior results on the FDDB, Pascal-Faces and WIDER FACE datasets, and can run at 31 FPS on GPU for VGA-resolution images. | In recent years, as deep learning techniques, especially convolutional neural networks (CNNs), have gained popularity and produced remarkable results on numerous computer vision tasks, CNN-based face detectors have become mainstream. Among these, CascadeCNN @cite_27 and MTCNN @cite_11 both train a cascade structure for detection, while the latter uses multi-task CNNs to solve detection and alignment jointly. Yang et al. @cite_10 train multiple CNNs for facial attributes to enhance the detection of occluded faces. | {
"cite_N": [
"@cite_27",
"@cite_10",
"@cite_11"
],
"mid": [
"1934410531",
"2209882149",
"2341528187"
],
"abstract": [
"In real-world face detection, large visual variations, such as those due to pose, expression, and lighting, demand an advanced discriminative model to accurately differentiate faces from the backgrounds. Consequently, effective models for the problem tend to be computationally prohibitive. To address these two conflicting challenges, we propose a cascade architecture built on convolutional neural networks (CNNs) with very powerful discriminative capability, while maintaining high performance. The proposed CNN cascade operates at multiple resolutions, quickly rejects the background regions in the fast low resolution stages, and carefully evaluates a small number of challenging candidates in the last high resolution stage. To improve localization effectiveness, and reduce the number of candidates at later stages, we introduce a CNN-based calibration stage after each of the detection stages in the cascade. The output of each calibration stage is used to adjust the detection window position for input to the subsequent stage. The proposed method runs at 14 FPS on a single CPU core for VGA-resolution images and 100 FPS using a GPU, and achieves state-of-the-art detection performance on two public face detection benchmarks.",
"In this paper, we propose a novel deep convolutional network (DCN) that achieves outstanding performance on FDDB, PASCAL Face, and AFW. Specifically, our method achieves a high recall rate of 90.99 on the challenging FDDB benchmark, outperforming the state-of-the-art method [23] by a large margin of 2.91 . Importantly, we consider finding faces from a new perspective through scoring facial parts responses by their spatial structure and arrangement. The scoring mechanism is carefully formulated considering challenging cases where faces are only partially visible. This consideration allows our network to detect faces under severe occlusion and unconstrained pose variation, which are the main difficulty and bottleneck of most existing face detection approaches. We show that despite the use of DCN, our network can achieve practical runtime speed.",
"Face detection and alignment in unconstrained environment are challenging due to various poses, illuminations, and occlusions. Recent studies show that deep learning approaches can achieve impressive performance on these two tasks. In this letter, we propose a deep cascaded multitask framework that exploits the inherent correlation between detection and alignment to boost up their performance. In particular, our framework leverages a cascaded architecture with three stages of carefully designed deep convolutional networks to predict face and landmark location in a coarse-to-fine manner. In addition, we propose a new online hard sample mining strategy that further improves the performance in practice. Our method achieves superior accuracy over the state-of-the-art techniques on the challenging face detection dataset and benchmark and WIDER FACE benchmarks for face detection, and annotated facial landmarks in the wild benchmark for face alignment, while keeps real-time performance."
]
} |
1903.04147 | 2949254051 | We aim to study the multi-scale receptive fields of a single convolutional neural network to detect faces of varied scales. This paper presents our Multi-Scale Receptive Field Face Detector (MSFD), which has superior performance on detecting faces at different scales and enjoys real-time inference speed. MSFD agglomerates context and texture by hierarchical structure. More additional information and rich receptive field bring significant improvement but generate marginal time consumption. We simultaneously propose an anchor assignment strategy which can cover faces with a wide range of scales to improve the recall rate of small faces and rotated faces. To reduce the false positive rate, we train our detector with focal loss which keeps the easy samples from overwhelming. As a result, MSFD reaches superior results on the FDDB, Pascal-Faces and WIDER FACE datasets, and can run at 31 FPS on GPU for VGA-resolution images. | Naturally, face detection can be regarded as a special case of generic object detection, whose frameworks can also be transferred to the face detection task. Faster R-CNN @cite_3 is one of the state-of-the-art detection pipelines, composed of two stages. Based on it, Jiang et al. @cite_1 build a face detector whose performance is fairly good. Wan et al. @cite_20 and Sun et al. @cite_12 both add effective strategies, including hard example mining and feature fusion, to Faster R-CNN in order to achieve better results, while CMS-RCNN @cite_7 attaches body contextual information as well. Moreover, Wang et al. @cite_26 adopt another two-stage framework, R-FCN, to build their detector and achieve state-of-the-art results on the FDDB dataset @cite_15. | {
"cite_N": [
"@cite_26",
"@cite_7",
"@cite_1",
"@cite_3",
"@cite_15",
"@cite_12",
"@cite_20"
],
"mid": [
"2754700116",
"",
"2964095005",
"2953106684",
"182571476",
"2952671661",
"2477332545"
],
"abstract": [
"Face detection has achieved great success using the region-based methods. In this report, we propose a region-based face detector applying deep networks in a fully convolutional fashion, named Face R-FCN. Based on Region-based Fully Convolutional Networks (R-FCN), our face detector is more accurate and computational efficient compared with the previous R-CNN based face detectors. In our approach, we adopt the fully convolutional Residual Network (ResNet) as the backbone network. Particularly, We exploit several new techniques including position-sensitive average pooling, multi-scale training and testing and on-line hard example mining strategy to improve the detection accuracy. Over two most popular and challenging face detection benchmarks, FDDB and WIDER FACE, Face R-FCN achieves superior performance over state-of-the-arts.",
"",
"While deep learning based methods for generic object detection have improved rapidly in the last two years, most approaches to face detection are still based on the R-CNN framework [11], leading to limited accuracy and processing speed. In this paper, we investigate applying the Faster RCNN [26], which has recently demonstrated impressive results on various object detection benchmarks, to face detection. By training a Faster R-CNN model on the large scale WIDER face dataset [34], we report state-of-the-art results on the WIDER test set as well as two other widely used face detection benchmarks, FDDB and the recently released IJB-A.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.",
"Despite the maturity of face detection research, it remains difficult to compare different algorithms for face detection. This is partly due to the lack of common evaluation schemes. Also, existing data sets for evaluating face detection algorithms do not capture some aspects of face appearances that are manifested in real-world scenarios. In this work, we address both of these issues. We present a new data set of face images with more faces and more accurate annotations for face regions than in previous data sets. We also propose two rigorous and precise methods for evaluating the performance of face detection algorithms. We report results of several standard algorithms on the new benchmark.",
"In this report, we present a new face detection scheme using deep learning and achieve the state-of-the-art detection performance on the well-known FDDB face detetion benchmark evaluation. In particular, we improve the state-of-the-art faster RCNN framework by combining a number of strategies, including feature concatenation, hard negative mining, multi-scale training, model pretraining, and proper calibration of key parameters. As a consequence, the proposed scheme obtained the state-of-the-art face detection performance, making it the best model in terms of ROC curves among all the published methods on the FDDB benchmark.",
"Recently significant performance improvement in face detection was made possible by deeply trained convolutional networks. In this report, a novel approach for training state-of-the-art face detector is described. The key is to exploit the idea of hard negative mining and iteratively update the Faster R-CNN based face detector with the hard negatives harvested from a large set of background examples. We demonstrate that our face detector outperforms state-of-the-art detectors on the FDDB dataset, which is the de facto standard for evaluating face detection algorithms."
]
} |
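Several of the systems in the record above (RPN's anchor assignment, the hard example mining in @cite_20 and @cite_12) rest on intersection-over-union between boxes. A generic sketch of that computation (not code from any of the cited detectors):

```python
def iou(box_a, box_b):
    # Boxes as (x1, y1, x2, y2); returns intersection-over-union in [0, 1].
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

assert iou((0, 0, 2, 2), (0, 0, 2, 2)) == 1.0  # identical boxes
```

In an RPN-style pipeline, anchors whose IoU with a ground-truth face exceeds a threshold become positives, and the remaining overlaps supply the pool that hard example mining draws negatives from.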
1903.04147 | 2949254051 | We aim to study the multi-scale receptive fields of a single convolutional neural network to detect faces of varied scales. This paper presents our Multi-Scale Receptive Field Face Detector (MSFD), which has superior performance on detecting faces at different scales and enjoys real-time inference speed. MSFD agglomerates context and texture by hierarchical structure. More additional information and rich receptive field bring significant improvement but generate marginal time consumption. We simultaneously propose an anchor assignment strategy which can cover faces with a wide range of scales to improve the recall rate of small faces and rotated faces. To reduce the false positive rate, we train our detector with focal loss which keeps the easy samples from overwhelming. As a result, MSFD reaches superior results on the FDDB, Pascal-Faces and WIDER FACE datasets, and can run at 31 FPS on GPU for VGA-resolution images. | Single-stage detectors, including SSD @cite_23 and YOLO @cite_4, form another popular detection pipeline that performs classification and regression simultaneously. SSH @cite_17 is a typical single-stage detector with context modules. Inspired by SSD and RPN @cite_3, Zhang et al. propose S @math FD @cite_33 with an anchor matching strategy and a max-out background label, ensuring state-of-the-art performance on WIDER FACE @cite_22 at real-time speed. In this work, we develop a superior face detector with real-time speed that adopts a pyramidal feature hierarchy and aggregates multi-scale features with a Context-Texture module. | {
"cite_N": [
"@cite_4",
"@cite_33",
"@cite_22",
"@cite_3",
"@cite_23",
"@cite_17"
],
"mid": [
"2963037989",
"2750317406",
"2963566548",
"2953106684",
"2193145675",
"2747648373"
],
"abstract": [
"We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.",
"This paper presents a real-time face detector, named Single Shot Scale-invariant Face Detector (S @math FD), which performs superiorly on various scales of faces with a single deep neural network, especially for small faces. Specifically, we try to solve the common problem that anchor-based detectors deteriorate dramatically as the objects become smaller. We make contributions in the following three aspects: 1) proposing a scale-equitable face detection framework to handle different scales of faces well. We tile anchors on a wide range of layers to ensure that all scales of faces have enough features for detection. Besides, we design anchor scales based on the effective receptive field and a proposed equal proportion interval principle; 2) improving the recall rate of small faces by a scale compensation anchor matching strategy; 3) reducing the false positive rate of small faces via a max-out background label. As a consequence, our method achieves state-of-the-art detection performance on all the common face detection benchmarks, including the AFW, PASCAL face, FDDB and WIDER FACE datasets, and can run at 36 FPS on a Nvidia Titan X (Pascal) for VGA-resolution images.",
"Face detection is one of the most studied topics in the computer vision community. Much of the progresses have been made by the availability of face detection benchmark datasets. We show that there is a gap between current face detection performance and the real world requirements. To facilitate future face detection research, we introduce the WIDER FACE dataset1, which is 10 times larger than existing datasets. The dataset contains rich annotations, including occlusions, poses, event categories, and face bounding boxes. Faces in the proposed dataset are extremely challenging due to large variations in scale, pose and occlusion, as shown in Fig. 1. Furthermore, we show that WIDER FACE dataset is an effective training source for face detection. We benchmark several representative detection systems, providing an overview of state-of-the-art performance and propose a solution to deal with large scale variation. Finally, we discuss common failure cases that worth to be further investigated.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.",
"We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.",
"We introduce the Single Stage Headless (SSH) face detector. Unlike two stage proposal-classification detectors, SSH detects faces in a single stage directly from the early convolutional layers in a classification network. SSH is headless. That is, it is able to achieve state-of-the-art results while removing the \"head\" of its underlying classification network -- i.e. all fully connected layers in the VGG-16 which contains a large number of parameters. Additionally, instead of relying on an image pyramid to detect faces with various scales, SSH is scale-invariant by design. We simultaneously detect faces with different scales in a single forward pass of the network, but from different layers. These properties make SSH fast and light-weight. Surprisingly, with a headless VGG-16, SSH beats the ResNet-101-based state-of-the-art on the WIDER dataset. Even though, unlike the current state-of-the-art, SSH does not use an image pyramid and is 5X faster. Moreover, if an image pyramid is deployed, our light-weight network achieves state-of-the-art on all subsets of the WIDER dataset, improving the AP by 2.5 . SSH also reaches state-of-the-art results on the FDDB and Pascal-Faces datasets while using a small input size, leading to a runtime of 50 ms image on a GPU. The code is available at this https URL."
]
} |
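The MSFD abstract above says the detector is trained "with focal loss which keeps the easy samples from overwhelming". A sketch of the standard binary focal loss, FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t); note the exact hyperparameters used in MSFD are an assumption here, gamma=2 and alpha=0.25 being the common defaults:

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    # p: predicted foreground probability in (0, 1); y: label in {0, 1}.
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# A confidently correct (easy) positive contributes far less than an uncertain one.
assert focal_loss(0.9, 1) < focal_loss(0.5, 1)
```

With gamma = 0 and alpha = 1 the expression reduces to plain cross-entropy, which makes the down-weighting of easy samples explicit.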
1903.03905 | 2921091109 | How can we generate semantically meaningful adversarial examples? We propose to answer this question by restricting the search for adversarial examples in the low dimensional manifold of the data using variational autoencoders. First, we introduce a stochastic variational inference method to learn the manifold, in the presence of continuous latent variables, with minimal assumptions about the parametric form of the encoder network. Then, we apply Gram-Schmidt orthogonalization to partition the span of representative local points of the manifold, and use the basis vectors as directions along which we perturb the points with learned noise. In doing so, we encourage the perturbed points to remain in the manifold. Finally, we map these points onto the input space to generate adversarial examples. Experiments on a number of image and text classification tasks and a pilot study show the effectiveness of our approach in producing coherent adversarial examples capable of evading defenses known traditionally to be resilient to adversarial attacks. | VAEs are generally used to learn manifolds @cite_7 @cite_15 @cite_8 by maximizing the ELBO of the data log-likelihood @cite_24 @cite_0 . Optimizing the ELBO requires reparameterizing the encoder to a simple distribution such as a Gaussian @cite_30 . The Gaussian prior assumption is, however, too restrictive @cite_12 and may lead to learning the manifold of the data poorly @cite_35 . To alleviate this issue, one can instead optimize the divergence between the encoder and the posterior inference as in @cite_25 . Although our work and @cite_25 are similar in that we both use recognition networks, our approach is more general. In our case, @math generates the particles, which are model instances, from which we can sample infinitely many latent codes rather than finite pointwise estimates @cite_25 . Also, given that our particles are Bayesian, we learn to better capture the uncertainty inherent in encoding data with VAEs. | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_7",
"@cite_8",
"@cite_24",
"@cite_0",
"@cite_15",
"@cite_25",
"@cite_12"
],
"mid": [
"",
"2622563070",
"2887212531",
"2439880944",
"2766227112",
"2753672091",
"2817444259",
"2753293176",
"299440670"
],
"abstract": [
"",
"A key advance in learning generative models is the use of amortized inference distributions that are jointly trained with the models. We find that existing training objectives for variational autoencoders can lead to inaccurate amortized inference distributions and, in some cases, improving the objective provably degrades the inference quality. In addition, it has been observed that variational autoencoders tend to ignore the latent variables when combined with a decoding distribution that is too flexible. We again identify the cause in existing training criteria and propose a new class of objectives (InfoVAE) that mitigate these problems. We show that our model can significantly improve the quality of the variational posterior and can make effective use of the latent features regardless of the flexibility of the decoding distribution. Through extensive qualitative and quantitative analyses, we demonstrate that our models outperform competing approaches on multiple performance metrics.",
"The ever-increasing size of modern datasets combined with the difficulty of obtaining label information has made semi-supervised learning of significant practical importance in modern machine learning applications. Compared with supervised learning, the key difficulty in semi-supervised learning is how to make full use of the unlabeled data. In order to utilize manifold information provided by unlabeled data, we propose a novel regularization called the tangent-normal adversarial regularization, which is composed by two parts. The two terms complement with each other and jointly enforce the smoothness along two different directions that are crucial for semi-supervised learning. One is applied along the tangent space of the data manifold, aiming to enforce local invariance of the classifier on the manifold, while the other is performed on the normal space orthogonal to the tangent space, intending to impose robustness on the classifier against the noise causing the observed data deviating from the underlying data manifold. Both of the two regularizers are achieved by the strategy of virtual adversarial training. Our method has achieved state-of-the-art performance on semi-supervised learning tasks on both artificial dataset and FashionMNIST dataset.",
"Automated discovery of early visual concepts from raw image data is a major open challenge in AI research. Addressing this problem, we propose an unsupervised approach for learning disentangled representations of the underlying factors of variation. We draw inspiration from neuroscience, and show how this can be achieved in an unsupervised generative model by applying the same learning pressures as have been suggested to act in the ventral visual stream in the brain. By enforcing redundancy reduction, encouraging statistical independence, and exposure to data with transform continuities analogous to those to which human infants are exposed, we obtain a variational autoencoder (VAE) framework capable of learning disentangled factors. Our approach makes few assumptions and works well across a wide variety of datasets. Furthermore, our solution has useful emergent properties, such as zero-shot inference and an intuitive understanding of \"objectness\".",
"We present an information-theoretic framework for understanding trade-offs in unsupervised learning of deep latent-variables models using variational inference. This framework emphasizes the need to consider latent-variable models along two dimensions: the ability to reconstruct inputs (distortion) and the communication cost (rate). We derive the optimal frontier of generative models in the two-dimensional rate-distortion plane, and show how the standard evidence lower bound objective is insufficient to select between points along this frontier. However, by performing targeted optimization to learn generative models with different rates, we are able to learn many models that can achieve similar generative performance but make vastly different trade-offs in terms of the usage of the latent variable. Through experiments on MNIST and Omniglot with a variety of architectures, we show how our framework sheds light on many recent proposed extensions to the variational autoencoder family.",
"A new form of the variational autoencoder (VAE) is proposed, based on the symmetric Kullback-Leibler divergence. It is demonstrated that learning of the resulting symmetric VAE (sVAE) has close connections to previously developed adversarial-learning methods. This relationship helps unify the previously distinct techniques of VAE and adversarially learning, and provides insights that allow us to ameliorate shortcomings with some previously developed adversarial methods. In addition to an analysis that motivates and explains the sVAE, an extensive set of experiments validate the utility of the approach.",
"The manifold hypothesis states that many kinds of high-dimensional data are concentrated near a low-dimensional manifold. If the topology of this data manifold is non-trivial, a continuous en-coder network cannot embed it in a one-to-one manner without creating holes of low density in the latent space. This is at odds with the Gaussian prior assumption typically made in Variational Auto-Encoders (VAEs), because the density of a Gaussian concentrates near a blob-like manifold. In this paper we investigate the use of manifold-valued latent variables. Specifically, we focus on the important case of continuously differen-tiable symmetry groups (Lie groups), such as the group of 3D rotations SO(3). We show how a VAE with SO(3)-valued latent variables can be constructed, by extending the reparameterization trick to compact connected Lie groups. Our exper-iments show that choosing manifold-valued latent variables that match the topology of the latent data manifold, is crucial to preserve the topological structure and learn a well-behaved latent space.",
"A new method for learning variational autoencoders (VAEs) is developed, based on Stein variational gradient descent. A key advantage of this approach is that one need not make parametric assumptions about the form of the encoder distribution. Performance is further enhanced by integrating the proposed encoder with importance sampling. Excellent performance is demonstrated across multiple unsupervised and semi-supervised problems, including semi-supervised analysis of the ImageNet data, demonstrating the scalability of the model to large datasets.",
"The choice of approximate posterior distribution is one of the core problems in variational inference. Most applications of variational inference employ simple families of posterior approximations in order to allow for efficient inference, focusing on mean-field or other simple structured approximations. This restriction has a significant impact on the quality of inferences made using variational methods. We introduce a new approach for specifying flexible, arbitrarily complex and scalable approximate posterior distributions. Our approximations are distributions constructed through a normalizing flow, whereby a simple initial density is transformed into a more complex one by applying a sequence of invertible transformations until a desired level of complexity is attained. We use this view of normalizing flows to develop categories of finite and infinitesimal flows and provide a unified view of approaches for constructing rich posterior approximations. We demonstrate that the theoretical advantages of having posteriors that better match the true posterior, combined with the scalability of amortized variational approaches, provides a clear improvement in performance and applicability of variational inference."
]
} |
1903.04154 | 2922003464 | In a graph convolutional network, we assume that the graph @math is generated with respect to some observation noise. We make small random perturbations @math of the graph and try to improve generalization. Based on quantum information geometry, we can have quantitative measurements on the scale of @math . We try to maximize the intrinsic scale of the perturbation with a small budget while minimizing the loss based on the perturbed @math . Our proposed model can consistently improve graph convolutional networks on semi-supervised node classification tasks with reasonable computational overhead. We present two different types of geometry on the manifold of graphs: one is for measuring the intrinsic change of a graph; the other is for measuring how such changes can affect externally a graph neural network. These new analytical tools will be useful in developing a good understanding of graph neural networks and fostering new techniques. | Graph embeddings @cite_8 @cite_9 capture structural similarities in the graph. DeepWalk @cite_8 takes advantage of simulated localized walks in the node proximity, which are then forwarded to a language-modeling neural network to form the node context. Node2Vec @cite_9 interpolates between breadth- and depth-first sampling strategies to aggregate over different types of neighborhood. MoNet @cite_4 generalizes the notion of coordinate spaces by learning a set of parameters of Gaussian functions to encode some distance for the node embedding, e.g., the difference between the degrees of a pair of nodes. GAT @cite_18 learns such weights via a self-attention mechanism. Jumping Knowledge Networks @cite_24 also target the notion of node locality. Experiments on JK-Net show that the notion of subgraph neighborhood varies with the graph topology: random walks progress at different rates in different graphs. Thus, JK-Net aggregates over various neighborhoods and so considers multiple node localities. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_8",
"@cite_9",
"@cite_24"
],
"mid": [
"2963858333",
"2558460151",
"2154851992",
"2962756421",
"2804057010"
],
"abstract": [
"We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods' features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems. Our GAT models have achieved or matched state-of-the-art results across four established transductive and inductive graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as a protein-protein interaction dataset (wherein test graphs remain unseen during training).",
"Deep learning has achieved a remarkable performance breakthrough in several fields, most notably in speech recognition, natural language processing, and computer vision. In particular, convolutional neural network (CNN) architectures currently produce state-of-the-art performance on a variety of image analysis tasks such as object detection and recognition. Most of deep learning research has so far focused on dealing with 1D, 2D, or 3D Euclidean-structured data such as acoustic signals, images, or videos. Recently, there has been an increasing interest in geometric deep learning, attempting to generalize deep learning methods to non-Euclidean structured data such as graphs and manifolds, with a variety of applications from the domains of network analysis, computational social science, or computer graphics. In this paper, we propose a unified framework allowing to generalize CNN architectures to non-Euclidean domains (graphs and manifolds) and learn local, stationary, and compositional task-specific features. We show that various non-Euclidean CNN methods previously proposed in the literature can be considered as particular instances of our framework. We test the proposed method on standard tasks from the realms of image-, graph- and 3D shape analysis and show that it consistently outperforms previous approaches.",
"We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10% higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60% less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.",
"Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.",
"Recent deep learning approaches for representation learning on graphs follow a neighborhood aggregation procedure. We analyze some important properties of these models, and propose a strategy to overcome those. In particular, the range of \"neighboring\" nodes that a node's representation draws from strongly depends on the graph structure, analogous to the spread of a random walk. To adapt to local neighborhood properties and tasks, we explore an architecture -- jumping knowledge (JK) networks -- that flexibly leverages, for each node, different neighborhood ranges to enable better structure-aware representation. In a number of experiments on social, bioinformatics and citation networks, we demonstrate that our model achieves state-of-the-art performance. Furthermore, combining the JK framework with models like Graph Convolutional Networks, GraphSAGE and Graph Attention Networks consistently improves those models' performance."
]
} |
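To make the DeepWalk step described in the related-work text above concrete, here is a minimal sketch of the truncated random walks it generates before handing them to a skip-gram language model. The function name and the toy graph are illustrative assumptions, not the cited implementation.

```python
import random

def truncated_random_walks(adj, num_walks=2, walk_length=5, seed=0):
    """Simulate short random walks from every node, DeepWalk-style.

    `adj` maps each node to a list of its neighbors. The returned walks
    play the role of "sentences" that would normally be fed to a
    skip-gram model (e.g. word2vec) to learn node embeddings.
    """
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for start in adj:
            walk = [start]
            while len(walk) < walk_length:
                nbrs = adj[walk[-1]]
                if not nbrs:
                    break  # dead end: stop this walk early
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks

# A tiny 4-node cycle graph.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
walks = truncated_random_walks(adj)
print(len(walks))  # num_walks * num_nodes = 8
```

Node2Vec differs only in how the next neighbor is sampled: it biases `rng.choice` with return and in-out parameters to interpolate between breadth-first and depth-first exploration.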
1903.04154 | 2922003464 | In a graph convolutional network, we assume that the graph @math is generated with respect to some observation noise. We make small random perturbations @math of the graph and try to improve generalization. Based on quantum information geometry, we can have quantitative measurements on the scale of @math . We try to maximize the intrinsic scale of the perturbation with a small budget while minimizing the loss based on the perturbed @math . Our proposed model can consistently improve graph convolutional networks on semi-supervised node classification tasks with reasonable computational overhead. We present two different types of geometry on the manifold of graphs: one is for measuring the intrinsic change of a graph; the other is for measuring how such changes can affect externally a graph neural network. These new analytical tools will be useful in developing a good understanding of graph neural networks and fostering new techniques. | * Adversarial Learning The role of adversarial learning is to generate difficult-to-classify data samples by identifying them along the decision boundary and 'pushing' them over this boundary. In the recent DeepFool approach @cite_16 , a cumulative sparse adversarial pattern is learned to maximally confuse predictions on the training dataset. Such an adversarial pattern generalizes well to confuse predictions on test data. Adversarial learning is directly connected to sampling strategies, in particular sampling hard negatives (obtaining the most difficult samples), and it has long been investigated in the community, especially in the shallow setting @cite_10 @cite_20 @cite_7 . | {
"cite_N": [
"@cite_16",
"@cite_10",
"@cite_7",
"@cite_20"
],
"mid": [
"2243397390",
"2187013920",
"2107397716",
"2109300365"
],
"abstract": [
"State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, well sought, perturbations of the images. Despite the importance of this phenomenon, no effective methods have been proposed to accurately compute the robustness of state-of-the-art deep classifiers to such perturbations on large-scale datasets. In this paper, we fill this gap and propose the DeepFool algorithm to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers. Extensive experimental results show that our approach outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust.",
"In adversarial classification tasks like spam filtering and intrusion detection, malicious adversaries may manipulate data to thwart the outcome of an automatic analysis. Thus, besides achieving good classification performances, machine learning algorithms have to be robust against adversarial data manipulation to successfully operate in these tasks. While support vector machines (SVMs) have shown to be a very successful approach in classification problems, their effectiveness in adversarial classification tasks has not been extensively investigated yet. In this paper we present a preliminary investigation of the robustness of SVMs against adversarial data manipulation. In particular, we assume that the adversary has control over some training data, and aims to subvert the SVM learning process. Within this assumption, we show that this is indeed possible, and propose a strategy to improve the robustness of SVMs to training data manipulation based on a simple kernel matrix correction.",
"Machine learning algorithms are increasingly being applied in security-related tasks such as spam and malware detection, although their security properties against deliberate attacks have not yet been widely understood. Intelligent and adaptive attackers may indeed exploit specific vulnerabilities exposed by machine learning techniques to violate system security. Being robust to adversarial data manipulation is thus an important, additional requirement for machine learning algorithms to successfully operate in adversarial settings. In this work, we evaluate the security of Support Vector Machines (SVMs) to well-crafted, adversarial label noise attacks. In particular, we consider an attacker that aims to maximize the SVM's classification error by flipping a number of labels in the training data. We formalize a corresponding optimal attack strategy, and solve it by means of heuristic approaches to keep the computational complexity tractable. We report an extensive experimental analysis on the effectiveness of the considered attacks against linear and non-linear SVMs, both on synthetic and real-world datasets. We finally argue that our approach can also provide useful insights for developing more secure SVM learning algorithms, and also novel techniques in a number of related research areas, such as semi-supervised and active learning.",
"Many data mining applications, such as spam filtering and intrusion detection, are faced with active adversaries. In all these applications, the future data sets and the training data set are no longer from the same population, due to the transformations employed by the adversaries. Hence a main assumption for the existing classification techniques no longer holds and initially successful classifiers degrade easily. This becomes a game between the adversary and the data miner: The adversary modifies its strategy to avoid being detected by the current classifier; the data miner then updates its classifier based on the new threats. In this paper, we investigate the possibility of an equilibrium in this seemingly never ending game, where neither party has an incentive to change. Modifying the classifier causes too many false positives with too little increase in true positives; changes by the adversary decrease the utility of the false negative items that are not detected. We develop a game theoretic framework where equilibrium behavior of adversarial classification applications can be analyzed, and provide solutions for finding an equilibrium point. A classifier's equilibrium performance indicates its eventual success or failure. The data miner could then select attributes based on their equilibrium performance, and construct an effective classifier. A case study on online lending data demonstrates how to apply the proposed game theoretic framework to a real application."
]
} |
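The DeepFool idea referenced in the related-work text above has a closed form for an affine binary classifier f(x) = w·x + b: the minimal perturbation is the orthogonal projection of x onto the decision boundary. A small numpy sketch of that linear case (the function name is an illustrative assumption; for nonlinear networks DeepFool iterates this linearization):

```python
import numpy as np

def deepfool_linear(x, w, b):
    """Minimal perturbation pushing x onto the decision boundary of an
    affine binary classifier f(x) = w.x + b: r = -f(x) * w / ||w||^2."""
    f = w @ x + b
    return -f * w / (w @ w)

w = np.array([1.0, 2.0])
b = -1.0
x = np.array([3.0, 1.0])   # f(x) = 4, so x is on the positive side
r = deepfool_linear(x, w, b)
print(w @ (x + r) + b)     # x + r lands exactly on the boundary: 0.0
```

In practice the perturbation is scaled by a small factor (1 + eta) so the sample actually crosses the boundary instead of sitting on it.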
1903.04154 | 2922003464 | In a graph convolutional network, we assume that the graph @math is generated with respect to some observation noise. We make small random perturbations @math of the graph and try to improve generalization. Based on quantum information geometry, we can have quantitative measurements on the scale of @math . We try to maximize the intrinsic scale of the perturbation with a small budget while minimizing the loss based on the perturbed @math . Our proposed model can consistently improve graph convolutional networks on semi-supervised node classification tasks with reasonable computational overhead. We present two different types of geometry on the manifold of graphs: one is for measuring the intrinsic change of a graph; the other is for measuring how such changes can affect externally a graph neural network. These new analytical tools will be useful in developing a good understanding of graph neural networks and fostering new techniques. | Adversarial attacks under the FIM @cite_19 propose to carry out perturbations in the spectral domain. Given a quadratic form of the FIM, the optimal adversarial perturbation is given by the eigenvector corresponding to the largest eigenvalue. The larger the eigenvalues of the FIM, the greater the susceptibility of the classification approach to attacks along the corresponding eigenvectors. | {
"cite_N": [
"@cite_19"
],
"mid": [
"2963671728"
],
"abstract": [
"Many deep learning models are vulnerable to the adversarial attack, i.e., imperceptible but intentionally-designed perturbations to the input can cause incorrect output of the networks. In this paper, using information geometry, we provide a reasonable explanation for the vulnerability of deep learning models. By considering the data space as a non-linear space with the Fisher information metric induced from a neural network, we first propose an adversarial attack algorithm termed one-step spectral attack (OSSA). The method is described by a constrained quadratic form of the Fisher information matrix, where the optimal adversarial perturbation is given by the first eigenvector, and the vulnerability is reflected by the eigenvalues. The larger an eigenvalue is, the more vulnerable the model is to be attacked by the corresponding eigenvector. Taking advantage of the property, we also propose an adversarial detection method with the eigenvalues serving as characteristics. Both our attack and detection algorithms are numerically optimized to work efficiently on large datasets. Our evaluations show superior performance compared with other methods, implying that the Fisher information is a promising approach to investigate the adversarial attacks and defenses."
]
} |
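A minimal numpy sketch of the one-step spectral idea above: build the Fisher information of the model with respect to the input and take the leading eigenvector as the attack direction. To keep the FIM in closed form, this sketch assumes a simple logistic model p(y=1|x) = sigmoid(w·x); the function names and the model are illustrative assumptions, not the cited implementation.

```python
import numpy as np

def input_fisher(w, x):
    """Input-space Fisher information of p(y=1|x) = sigmoid(w.x):
    F = sum_y p(y|x) * grad_x log p(y|x) grad_x log p(y|x)^T."""
    p = 1.0 / (1.0 + np.exp(-w @ x))
    g1 = (1.0 - p) * w   # score vector for y = 1
    g0 = -p * w          # score vector for y = 0
    return p * np.outer(g1, g1) + (1.0 - p) * np.outer(g0, g0)

def leading_eigvec(F, iters=100):
    """Power iteration: direction of maximal Fisher sensitivity,
    i.e. the one-step spectral attack direction."""
    v = np.ones(F.shape[0]) / np.sqrt(F.shape[0])
    for _ in range(iters):
        v = F @ v
        v /= np.linalg.norm(v)
    return v

w = np.array([3.0, 4.0])
x = np.array([0.1, -0.2])
F = input_fisher(w, x)
v = leading_eigvec(F)
# For this logistic model F = p(1-p) * w w^T is rank one, so the
# attack direction aligns with w (up to sign).
print(v)
```

The same recipe, with the FIM estimated from the network's output distribution, is what makes the largest Fisher eigenvalues a measure of vulnerability.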
1903.04154 | 2922003464 | In a graph convolutional network, we assume that the graph @math is generated with respect to some observation noise. We make small random perturbations @math of the graph and try to improve generalization. Based on quantum information geometry, we can have quantitative measurements on the scale of @math . We try to maximize the intrinsic scale of the perturbation with a small budget while minimizing the loss based on the perturbed @math . Our proposed model can consistently improve graph convolutional networks on semi-supervised node classification tasks with reasonable computational overhead. We present two different types of geometry on the manifold of graphs: one is for measuring the intrinsic change of a graph; the other is for measuring how such changes can affect externally a graph neural network. These new analytical tools will be useful in developing a good understanding of graph neural networks and fostering new techniques. | Our work is related in that we also construct an FIM, for the purpose of extrinsically parameterizing the graph Laplacian. We perform a maximization over these parameters to flatten the FIM around its local optimum, which corresponds to decreasing the eigenvalues in @cite_19 , thus making our approach well regularized in the sense of flattening the largest curvature associated with the FIM (improved smoothness). With increased smoothness, classification performance typically degrades; see the impact of smoothness on kernel representations @cite_1 . Indeed, the study in @cite_6 further shows that there is a fundamental trade-off between high accuracy and adversarial robustness. | {
"cite_N": [
"@cite_19",
"@cite_1",
"@cite_6"
],
"mid": [
"2963671728",
"2123872146",
"2964116600"
],
"abstract": [
"Many deep learning models are vulnerable to the adversarial attack, i.e., imperceptible but intentionally-designed perturbations to the input can cause incorrect output of the networks. In this paper, using information geometry, we provide a reasonable explanation for the vulnerability of deep learning models. By considering the data space as a non-linear space with the Fisher information metric induced from a neural network, we first propose an adversarial attack algorithm termed one-step spectral attack (OSSA). The method is described by a constrained quadratic form of the Fisher information matrix, where the optimal adversarial perturbation is given by the first eigenvector, and the vulnerability is reflected by the eigenvalues. The larger an eigenvalue is, the more vulnerable the model is to be attacked by the corresponding eigenvector. Taking advantage of the property, we also propose an adversarial detection method with the eigenvalues serving as characteristics. Both our attack and detection algorithms are numerically optimized to work efficiently on large datasets. Our evaluations show superior performance compared with other methods, implying that the Fisher information is a promising approach to investigate the adversarial attacks and defenses.",
"An important goal in visual recognition is to devise image representations that are invariant to particular transformations. In this paper, we address this goal with a new type of convolutional neural network (CNN) whose invariance is encoded by a reproducing kernel. Unlike traditional approaches where neural networks are learned either to represent data or for solving a classification task, our network learns to approximate the kernel feature map on training data. Such an approach enjoys several benefits over classical ones. First, by teaching CNNs to be invariant, we obtain simple network architectures that achieve a similar accuracy to more complex ones, while being easy to train and robust to overfitting. Second, we bridge a gap between the neural network literature and kernels, which are natural tools to model invariance. We evaluate our methodology on visual recognition tasks where CNNs have proven to perform well, e.g., digit recognition with the MNIST dataset, and the more challenging CIFAR-10 and STL-10 datasets, where our accuracy is competitive with the state of the art.",
""
]
} |
1903.04154 | 2922003464 | In a graph convolutional network, we assume that the graph @math is generated with respect to some observation noise. We make small random perturbations @math of the graph and try to improve generalization. Based on quantum information geometry, we can have quantitative measurements on the scale of @math . We try to maximize the intrinsic scale of the perturbation with a small budget while minimizing the loss based on the perturbed @math . Our proposed model can consistently improve graph convolutional networks on semi-supervised node classification tasks with reasonable computational overhead. We present two different types of geometry on the manifold of graphs: one is for measuring the intrinsic change of a graph; the other is for measuring how such changes can affect externally a graph neural network. These new analytical tools will be useful in developing a good understanding of graph neural networks and fostering new techniques. | However, our min-max formulation seeks the most effective perturbations (according to @cite_19 ), which simultaneously prevents unnecessary degradation of the decision boundary. With robust regularization for medium-size datasets, we avoid overfitting, which boosts our classification performance, as demonstrated in section . | {
"cite_N": [
"@cite_19"
],
"mid": [
"2963671728"
],
"abstract": [
"Many deep learning models are vulnerable to the adversarial attack, i.e., imperceptible but intentionally-designed perturbations to the input can cause incorrect output of the networks. In this paper, using information geometry, we provide a reasonable explanation for the vulnerability of deep learning models. By considering the data space as a non-linear space with the Fisher information metric induced from a neural network, we first propose an adversarial attack algorithm termed one-step spectral attack (OSSA). The method is described by a constrained quadratic form of the Fisher information matrix, where the optimal adversarial perturbation is given by the first eigenvector, and the vulnerability is reflected by the eigenvalues. The larger an eigenvalue is, the more vulnerable the model is to be attacked by the corresponding eigenvector. Taking advantage of the property, we also propose an adversarial detection method with the eigenvalues serving as characteristics. Both our attack and detection algorithms are numerically optimized to work efficiently on large datasets. Our evaluations show superior performance compared with other methods, implying that the Fisher information is a promising approach to investigate the adversarial attacks and defenses."
]
} |
1903.04154 | 2922003464 | In a graph convolutional network, we assume that the graph @math is generated with respect to some observation noise. We make small random perturbations @math of the graph and try to improve generalization. Based on quantum information geometry, we can have quantitative measurements on the scale of @math . We try to maximize the intrinsic scale of the perturbation with a small budget while minimizing the loss based on the perturbed @math . Our proposed model can consistently improve graph convolutional networks on semi-supervised node classification tasks with reasonable computational overhead. We present two different types of geometry on the manifold of graphs: one is for measuring the intrinsic change of a graph; the other is for measuring how such changes can affect externally a graph neural network. These new analytical tools will be useful in developing a good understanding of graph neural networks and fostering new techniques. | Natural gradient @cite_21 @cite_30 @cite_11 @cite_37 @cite_27 is a second-order optimization procedure which takes the steepest-descent direction in the Riemannian geometry defined by the FIM, taking small steps along the directions in which the FIM is large. This also suggests that the largest eigenvectors of the FIM are the most susceptible to attacks. | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_21",
"@cite_27",
"@cite_11"
],
"mid": [
"2888674590",
"2962739609",
"2485135680",
"2739741052",
"2964309400"
],
"abstract": [
"The parameter space of a deep neural network is a Riemannian manifold, where the metric is defined by the Fisher information matrix. The natural gradient method uses the steepest descent direction in a Riemannian manifold, but it requires inversion of the Fisher matrix, however, which is practically difficult. The present paper uses statistical neurodynamical method to reveal the properties of the Fisher information matrix in a net of random connections. We prove that the Fisher information matrix is unit-wise block diagonal supplemented by small order terms of off-block-diagonal elements. We further prove that the Fisher information matrix of a single unit has a simple reduced form, a sum of a diagonal matrix and a rank 2 matrix of weight-bias correlations. We obtain the inverse of Fisher information explicitly. We then have an explicit form of the approximate natural gradient, without relying on the matrix inversion.",
"",
"This is the first comprehensive book on information geometry, written by the founder of the field. It begins with an elementary introduction to dualistic geometry and proceeds to a wide range of applications, covering information science, engineering, and neuroscience. It consists of four parts, which on the whole can be read independently. A manifold with a divergence function is first introduced, leading directly to dualistic structure, the heart of information geometry. This part (Part I) can be apprehended without any knowledge of differential geometry. An intuitive explanation of modern differential geometry then follows in Part II, although the book is for the most part understandable without modern differential geometry. Information geometry of statistical inference, including time series analysis and semiparametric estimation (the Neyman-Scott problem), is demonstrated concisely in Part III. Applications addressed in Part IV include hot current topics in machine learning, signal processing, optimization, and neural networks. The book is interdisciplinary, connecting mathematics, information sciences, physics, and neurosciences, inviting readers to a new world of information and geometry. This book is highly recommended to graduate students and researchers who seek new mathematical methods and tools useful in their own fields.",
"Fisher information and natural gradient provided deep insights and powerful tools to artificial neural networks. However related analysis becomes more and more difficult as the learner's structure turns large and complex. This paper makes a preliminary step towards anew direction. We extract a local component from a large neural system, and define its relative Fisher information metric that describes accurately this small component, and is invariant to the other parts of the system. This concept is important because the geometry structure is much simplified and it can be easily applied to guide the learning of neural networks. We provide an analysis on a list of commonly used components, and demonstrate how to use this concept to further improve optimization.",
"We evaluate natural gradient, an algorithm originally proposed in Amari (1997), for learning deep models. The contributions of this paper are as follows. We show the connection between natural gradient and three other recently proposed methods for training deep models: Hessian-Free (Martens, 2010), Krylov Subspace Descent (Vinyals and Povey, 2012) and TONGA (Le , 2008). We describe how one can use unlabeled data to improve the generalization error obtained by natural gradient and empirically evaluate the robustness of the algorithm to the ordering of the training set compared to stochastic gradient descent. Finally we extend natural gradient to incorporate second order information alongside the manifold information and provide a benchmark of the new algorithm using a truncated Newton approach for inverting the metric matrix instead of using a diagonal approximation of it."
]
} |
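The natural gradient update described above can be sketched in a few lines: precondition the ordinary gradient by the inverse FIM, so that directions with large Fisher eigenvalues receive correspondingly small steps. This is a minimal numpy illustration with an assumed diagonal Fisher matrix and a small damping term for numerical stability, not any of the cited implementations.

```python
import numpy as np

def natural_gradient_step(theta, grad, F, lr=0.1, damping=1e-4):
    """One natural gradient step: theta <- theta - lr * F^{-1} grad,
    i.e. steepest descent in the Riemannian geometry defined by F."""
    precond = np.linalg.solve(F + damping * np.eye(len(theta)), grad)
    return theta - lr * precond

theta = np.array([1.0, 1.0])
grad = np.array([1.0, 1.0])
# Anisotropic Fisher: the first direction is 100x more sensitive.
F = np.diag([100.0, 1.0])
new_theta = natural_gradient_step(theta, grad, F)
# The step along the high-Fisher axis is about 100x smaller,
# matching the intuition that sensitive directions get small moves.
print(new_theta)
```

The same preconditioning viewpoint links natural gradient to the attack direction above: both single out the eigenvectors of the FIM with the largest eigenvalues as the most sensitive directions.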
1903.04154 | 2922003464 | In a graph convolutional network, we assume that the graph @math is generated with respect to some observation noise. We make small random perturbations @math of the graph and try to improve generalization. Based on quantum information geometry, we can have quantitative measurements on the scale of @math . We try to maximize the intrinsic scale of the perturbation with a small budget while minimizing the loss based on the perturbed @math . Our proposed model can consistently improve graph convolutional networks on semi-supervised node classification tasks with reasonable computational overhead. We present two different types of geometry on the manifold of graphs: one is for measuring the intrinsic change of a graph; the other is for measuring how such changes can affect externally a graph neural network. These new analytical tools will be useful in developing a good understanding of graph neural networks and fostering new techniques. | The Bethe Hessian @cite_31 , or deformed Laplacian, was shown to improve the performance of spectral clustering on a par with non-symmetric and higher-dimensional operators, while retaining the advantages of a symmetric positive-definite representation. Our graph Laplacian parameterisation also draws on this view. | {
"cite_N": [
"@cite_31"
],
"mid": [
"2964029874"
],
"abstract": [
"Spectral clustering is a standard approach to label nodes on a graph by studying the (largest or lowest) eigenvalues of a symmetric real matrix such as e.g. the adjacency or the Laplacian. Recently, it has been argued that using instead a more complicated, non-symmetric and higher dimensional operator, related to the non-backtracking walk on the graph, leads to improved performance in detecting clusters, and even to optimal performance for the stochastic block model. Here, we propose to use instead a simpler object, a symmetric real matrix known as the Bethe Hessian operator, or deformed Laplacian. We show that this approach combines the performances of the non-backtracking operator, thus detecting clusters all the way down to the theoretical limit in the stochastic block model, with the computational, theoretical and memory advantages of real symmetric matrices."
]
} |
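The deformed Laplacian referenced above has a simple explicit form, H(r) = (r^2 - 1) I - r A + D, with A the adjacency matrix and D the diagonal degree matrix; at r = 1 it reduces to the combinatorial Laplacian D - A. A small numpy sketch (the function name and toy graph are illustrative):

```python
import numpy as np

def bethe_hessian(A, r):
    """Bethe Hessian (deformed Laplacian): H(r) = (r^2 - 1) I - r A + D."""
    n = A.shape[0]
    D = np.diag(A.sum(axis=1))
    return (r**2 - 1.0) * np.eye(n) - r * A + D

# Adjacency matrix of a 4-node cycle.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
H1 = bethe_hessian(A, 1.0)
# At r = 1 the Bethe Hessian is the combinatorial Laplacian D - A.
print(np.allclose(H1, np.diag(A.sum(axis=1)) - A))
```

Being real and symmetric for every r, H(r) keeps the computational and memory advantages of ordinary Laplacian spectral methods, which is the property the related-work text highlights.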