Dataset schema: aid (string, 9-15 chars), mid (string, 7-10 chars), abstract (string, 78-2.56k chars), related_work (string, 92-1.77k chars), ref_abstract (dict).
1906.01102
2948682407
We posit that hippocampal place cells encode information about future locations under a transition distribution observed as an agent explores a given (physical or conceptual) space. The encoding of information about the current location, usually associated with place cells, then emerges as a necessary step to achieve this broader goal. We formally derive a biologically-inspired neural network from Nystrom kernel approximations and empirically demonstrate that the network successfully approximates transition distributions. The proposed network yields representations that, just like place cells, soft-tile the input space with highly sparse and localized receptive fields. Additionally, we show that the proposed computational motif can be extended to handle supervised problems, creating class-specific place cells while exhibiting low sample complexity.
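To make the Nyström construction mentioned in the abstract above concrete, here is a minimal sketch of a Nyström kernel approximation in Python. The RBF kernel, the uniform landmark choice, and the names (rbf_kernel, nystrom_features) are illustrative assumptions, not the paper's actual derivation of its biologically-inspired network.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared distances, then the Gaussian (RBF) kernel.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)

def nystrom_features(X, n_landmarks=50, gamma=1.0, seed=0):
    # Pick landmark points, then map each input to the feature vector
    # K(x, landmarks) @ K(landmarks, landmarks)^{-1/2}, so that inner
    # products of features approximate the full kernel matrix.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), n_landmarks, replace=False)
    L = X[idx]
    K_mm = rbf_kernel(L, L, gamma)
    K_nm = rbf_kernel(X, L, gamma)
    # Symmetric inverse square root via eigendecomposition (with jitter).
    w, V = np.linalg.eigh(K_mm + 1e-8 * np.eye(n_landmarks))
    K_mm_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    return K_nm @ K_mm_inv_sqrt  # features Phi with Phi Phi^T ~ K

X = np.random.rand(500, 2)   # e.g. agent positions in a 2-D space
Phi = nystrom_features(X)    # each column responds to one landmark
print(Phi.shape)             # (500, 50)
```

Each feature column responds most strongly near its landmark, which is the flavor of localized, soft-tiling receptive field the abstract describes.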
@cite_25 show that localized receptive fields emerge in similarity-preserving networks of rectifying neurons. These networks learn to represent low-dimensional manifolds populated by sensory inputs and yield localized receptive fields tiling these manifolds.
{ "cite_N": [ "@cite_25" ], "mid": [ "2805950715" ], "abstract": [ "Many neurons in the brain, such as place cells in the rodent hippocampus, have localized receptive fields, i.e. they respond to a small neighborhood of stimulus space. What is the functional significance of such representations and how can they arise? Here, we propose that localized receptive fields emerge in similarity-preserving networks of rectifying neurons that learn low-dimensional manifolds populated by sensory inputs. Numerical simulations of such networks on standard datasets yield manifold-tiling localized receptive fields. More generally, we show analytically that, for data lying on symmetric manifolds, optimal solutions of objectives, from which similarity-preserving networks are derived, have localized receptive fields. Therefore, nonnegative similarity-preserving mapping (NSM) implemented by neural networks can model representations of continuous manifolds in the brain." ] }
1906.01155
2951111176
This paper introduces a large-scale, validated database for Persian called Sharif Emotional Speech Database (ShEMO). The database includes 3000 semi-natural utterances, equivalent to 3 hours and 25 minutes of speech data extracted from online radio plays. The ShEMO covers speech samples of 87 native-Persian speakers for five basic emotions including anger, fear, happiness, sadness and surprise, as well as neutral state. Twelve annotators label the underlying emotional state of utterances and majority voting is used to decide on the final labels. According to the kappa measure, the inter-annotator agreement is 64% which is interpreted as "substantial agreement". We also present benchmark results based on common classification methods in speech emotion detection task. According to the experiments, support vector machine achieves the best results for both gender-independent (58.2%) and gender-dependent models (female=59.4%, male=57.6%). The ShEMO is available in this http URL for academic purposes free of charge to provide a baseline for further research on Persian emotional speech.
Another type of emotional speech database is semi-natural, which is built using either a scenario-based or an acting-based approach. In the scenario-based approach @cite_61 , the affective state of speakers is first evoked by some method: for instance, speakers recall memories or read sentences describing a scenario in order to become emotional. They are then asked to read a pre-written text in a particular emotion that aligns with their provoked affective state. Persian Emotional Speech Database @cite_25 is an instance of this type. In the acting-based approach, emotional utterances are extracted from movies or radio plays. To illustrate, Chinese Emotional Speech Database @cite_58 includes 721 utterances extracted from teleplays, and Giannakopoulos @cite_21 uses English movies to collect 1500 affective speech samples.
{ "cite_N": [ "@cite_61", "@cite_58", "@cite_21", "@cite_25" ], "mid": [ "2018338387", "2108581428", "2105768069", "1995642379" ], "abstract": [ "This research examines the correspondence between theoretical predictions on vocal expression patterns in naturally occurring emotions (as based on the component process theory of emotion; Scherer, 1986) and empirical data on the acoustic characteristics of actors' portrayals. Two male and two female professional radio actors portrayed anger, sadness, joy, fear, and disgust based on realistic scenarios of emotion-eliciting events. A series of judgment studies was conducted to assess the degree to which judges are able to recognize the intended emotion expressions. Disgust was relatively poorly recognized; average recognition accuracy for the other emotions attained 62.8 across studies. A set of portrayals reaching a satisfactory level of recognition accuracy underwent digital acoustic analysis. The results for the acoustic parameters extracted from the speech signal show a number of significant differences between emotions, generally confirming the theoretical predictions.", "This paper describes an experimental study on the detection of emotion from speech. As computer-based characters such as avatars and virtual chat faces become more common, the use of emotion to drive the expression of the virtual characters becomes more important. This study utilizes a corpus containing emotional speech with 721 short utterances expressing four emotions: anger, happiness, sadness, and the neutral (unemotional) state, which were captured manually from movies and teleplays. We introduce a new concept to evaluate emotions in speech. Emotions are so complex that most speech sentences cannot be precisely assigned to a particular emotion category; however, most emotional states nevertheless can be described as a mixture of multiple emotions. Based on this concept we have trained SVMs (support vector machines) to recognize utterances within these four categories and developed an agent that can recognize and express emotions.", "In this paper we present a novel method for extracting affective information from movies, based on speech data. The method is based on a 2-D representation of speech emotions (Emotion Wheel). The goal is twofold. First, to investigate whether the Emotion Wheel offers a good representation for emotions associated with speech signals. To this end, several humans have manually annotated speech data from movies using the Emotion Wheel and the level of disagreement has been computed as a measure of representation quality. The results indicate that the emotion wheel is a good representation of emotions in speech data. Second, a regression approach is adopted, in order to predict the location of an unknown speech segment in the Emotion Wheel. Each speech segment is represented by a vector of ten audio features. The results indicate that the resulting architecture can estimate emotion states of speech from movies, with sufficient accuracy.", "Research on emotional speech often requires valid stimuli for assessing perceived emotion through prosody and lexical content. To date, no comprehensive emotional speech database for Persian is officially available. The present article reports the process of designing, compiling, and evaluating a comprehensive emotional speech database for colloquial Persian. 
The database contains a set of 90 validated novel Persian sentences classified in five basic emotional categories (anger, disgust, fear, happiness, and sadness), as well as a neutral category. These sentences were validated in two experiments by a group of 1,126 native Persian speakers. The sentences were articulated by two native Persian speakers (one male, one female) in three conditions: (1) congruent (emotional lexical content articulated in a congruent emotional voice), (2) incongruent (neutral sentences articulated in an emotional voice), and (3) baseline (all emotional and neutral sentences articulated in neutral voice). The speech materials comprise about 470 sentences. The validity of the database was evaluated by a group of 34 native speakers in a perception test. Utterances recognized better than five times chance performance (71.4%) were regarded as valid portrayals of the target emotions. Acoustic analysis of the valid emotional utterances revealed differences in pitch, intensity, and duration, attributes that may help listeners to correctly classify the intended emotion. The database is designed to be used as a reliable material source (for both text and speech) in future cross-cultural or cross-linguistic studies of emotional speech, and it is available for academic research purposes free of charge. To access the database, please contact the first author." ] }
1906.01155
2951111176
This paper introduces a large-scale, validated database for Persian called Sharif Emotional Speech Database (ShEMO). The database includes 3000 semi-natural utterances, equivalent to 3 hours and 25 minutes of speech data extracted from online radio plays. The ShEMO covers speech samples of 87 native-Persian speakers for five basic emotions including anger, fear, happiness, sadness and surprise, as well as neutral state. Twelve annotators label the underlying emotional state of utterances and majority voting is used to decide on the final labels. According to the kappa measure, the inter-annotator agreement is 64% which is interpreted as "substantial agreement". We also present benchmark results based on common classification methods in speech emotion detection task. According to the experiments, support vector machine achieves the best results for both gender-independent (58.2%) and gender-dependent models (female=59.4%, male=57.6%). The ShEMO is available in this http URL for academic purposes free of charge to provide a baseline for further research on Persian emotional speech.
Emotional speech databases can also be differentiated in terms of speakers. In most cases, professional actors are recruited to read pre-written sentences in target emotions (e.g. Berlin Database of Emotional Speech). However, some databases use semi-professional actors (e.g. Danish Emotional Speech Database) or ordinary people (e.g. Sahand Emotional Speech Database) to avoid exaggerated emotion expression. Furthermore, the utterances of some datasets (e.g. Berlin Database of Emotional Speech) are uniformly distributed over emotions, while the distribution of emotions in other datasets is unbalanced and may reflect their frequency in the real world (e.g. Chinese Emotional Speech Database). Another important factor is the availability of databases. While the majority of emotional speech databases are private (e.g. MPEG-4 @cite_43 ), some datasets are available for public use (e.g. FERMUS III @cite_18 , RAVDESS @cite_50 ).
{ "cite_N": [ "@cite_43", "@cite_18", "@cite_50" ], "mid": [ "2169295472", "1509579602", "2135776491" ], "abstract": [ "Emotion recognition grows to an important factor in future media retrieval and man machine interfaces. However, even human deciders often experience problems realizing one's emotion, especially of strangers. In this work we strive to recognize emotion independent of the person concentrating on the speech channel. Single feature relevance of acoustic features is a critical point, which we address by filter-based gain ratio calculation starting at a basis of 276 features. As optimization of a minimum set as a whole in general saves more extraction effort, we furthermore apply an SVM-SFFS wrapper based search. For a more robust estimation we also integrate spoken content information by a Bayesian net analysis of ASR outputs. Overall classification is realized in an early feature fusion by stacked ensembles of diverse base classifiers. Tests ran on a 3,947 movie and automotive interaction dialog-turns database consisting of 35 speakers. Remarkable overall performance can be reported in the discrimination of the seven discrete emotions named in the MPEG-4 standard with added neutrality", "In this paper new results in instantaneous recognition of emotion in non-verbal speech are presented. As classification method dynamic programming with dynamic time warp or Bakis-hidden-Markov-models with vector quantization or Gaussian mixtures are used to analyze the pitch and energy contour of a speech signal. As emotional states joy, anger, fear, sadness, disgust, irritation, and an additional neutral user state have been evaluated. As rather unusual innovative user states the influences of tiredness and alcohol consumption of a speaker on his speech have been analyzed by use of the same methods. One of the main goals of the presented work is to keep the models applicable for new users and to keep recognition simple for real-time evaluation. Finally the observed results are presented and discussed.", "We have recorded a new corpus of emotionally coloured conversations. Users were recorded while holding conversations with an operator who adopts in sequence four roles designed to evoke emotional reactions. The operator and the user are seated in separate rooms; they see each other through teleprompter screens, and hear each other through speakers. To allow high quality recording, they are recorded by five high-resolution, high framerate cameras, and by four microphones. All sensor information is recorded synchronously, with an accuracy of 25 μs. In total, we have recorded 20 participants, for a total of 100 character conversational and 50 non-conversational recordings of approximately 5 minutes each. All recorded conversations have been fully transcribed and annotated for five affective dimensions and partially annotated for 27 other dimensions. The corpus has been made available to the scientific community through a web-accessible database." ] }
1906.01290
2948818258
Exploiting relationships among objects has achieved remarkable progress in interpreting images or videos by natural language. Most existing methods resort to first detecting objects and their relationships, and then generating textual descriptions, which heavily depends on pre-trained detectors and leads to performance drop when facing problems of heavy occlusion, tiny-size objects and long-tail in object detection. In addition, the separate procedure of detecting and captioning results in semantic inconsistency between the pre-defined object relation categories and the target lexical words. We exploit prior human commonsense knowledge for reasoning relationships between objects without any pre-trained detectors and reaching semantic coherency within one image or video in captioning. The prior knowledge (e.g., in the form of knowledge graph) provides commonsense semantic correlation and constraint between objects that are not explicit in the image and video, serving as useful guidance to build semantic graph for sentence generation. Particularly, we present a joint reasoning method that incorporates 1) commonsense reasoning for embedding image or video regions into semantic space to build semantic graph and 2) relational reasoning for encoding semantic graph to generate sentences. Extensive experiments on the MS-COCO image captioning benchmark and the MSVD video captioning benchmark validate the superiority of our method on leveraging prior commonsense knowledge to enhance relational reasoning for visual captioning.
Exploiting relationships between objects for image captioning has gained increasing attention over the past year. @cite_29 employ two Graph Convolutional Networks (GCNs) to reason about the semantic and spatial correlations among visual features of the detected objects and relationships, and feed them to a language model to boost image captioning. @cite_6 generate scene graphs of images using detectors, and build a hierarchical attention-based model to reason about visual relationships for image captioning. @cite_21 incorporate language inductive bias into a GCN-based image captioning model, so as to not only reason about relationships via the GCN but also represent visual information in the language domain via a scene graph auto-encoder for easier translation. The above methods explicitly exploit high-level semantic concepts for image captioning with the pre-defined scene graph of each image and the annotations of object and relationship locations in the image. Different from them, our method leverages prior knowledge to generate graphs of latent semantic concepts in images or videos without any pre-trained detectors. This enables scene graph generation and visual captioning to be trained in an end-to-end manner, and alleviates the semantic inconsistency in vision-to-language translation.
{ "cite_N": [ "@cite_29", "@cite_21", "@cite_6" ], "mid": [ "2890531016", "2902243109", "2913618459" ], "abstract": [ "It is always well believed that modeling relationships between objects would be helpful for representing and eventually describing an image. Nevertheless, there has not been evidence in support of the idea on image description generation. In this paper, we introduce a new design to explore the connections between objects for image captioning under the umbrella of attention-based encoder-decoder framework. Specifically, we present Graph Convolutional Networks plus Long Short-Term Memory (dubbed as GCN-LSTM) architecture that novelly integrates both semantic and spatial object relationships into image encoder. Technically, we build graphs over the detected objects in an image based on their spatial and semantic connections. The representations of each region proposed on objects are then refined by leveraging graph structure through GCN. With the learnt region-level features, our GCN-LSTM capitalizes on LSTM-based captioning framework with attention mechanism for sentence generation. Extensive experiments are conducted on COCO image captioning dataset, and superior results are reported when comparing to state-of-the-art approaches. More remarkably, GCN-LSTM increases CIDEr-D performance from 120.1 to 128.7 on COCO testing set.", "We propose Scene Graph Auto-Encoder (SGAE) that incorporates the language inductive bias into the encoder-decoder image captioning framework for more human-like captions. Intuitively, we humans use the inductive bias to compose collocations and contextual inference in discourse. For example, when we see the relation person on bike', it is natural to replace on' with ride' and infer person riding bike on a road' even the road' is not evident. Therefore, exploiting such bias as a language prior is expected to help the conventional encoder-decoder models less likely overfit to the dataset bias and focus on reasoning. Specifically, we use the scene graph --- a directed graph ( @math ) where an object node is connected by adjective nodes and relationship nodes --- to represent the complex structural layout of both image ( @math ) and sentence ( @math ). In the textual domain, we use SGAE to learn a dictionary ( @math ) that helps to reconstruct sentences in the @math pipeline, where @math encodes the desired language prior; in the vision-language domain, we use the shared @math to guide the encoder-decoder in the @math pipeline. Thanks to the scene graph representation and shared dictionary, the inductive bias is transferred across domains in principle. We validate the effectiveness of SGAE on the challenging MS-COCO image captioning benchmark, e.g., our SGAE-based single-model achieves a new state-of-the-art @math CIDEr-D on the Karpathy split, and a competitive @math CIDEr-D (c40) on the official server even compared to other ensemble models.", "Automatically describing the content of an image has been attracting considerable research attention in the multimedia field. To represent the content of an image, many approaches directly utilize convolutional neural networks (CNNs) to extract visual representations, which are fed into recurrent neural networks to generate natural language. Recently, some approaches have detected semantic concepts from images and then encoded them into high-level representations. 
Although substantial progress has been achieved, most of the previous methods treat entities in images individually, thus lacking structured information that provides important cues for image captioning. In this paper, we propose a framework based on scene graphs for image captioning. Scene graphs contain abundant structured information because they not only depict object entities in images but also present pairwise relationships. To leverage both visual features and semantic knowledge in structured scene graphs, we extract CNN features from the bounding box offsets of object entities for visual representations, and extract semantic relationship features from triples (e.g., 'man riding bike') for semantic representations. After obtaining these features, we introduce a hierarchical-attention-based module to learn discriminative features for word generation at each time step. The experimental results on benchmark datasets demonstrate the superiority of our method compared with several state-of-the-art methods." ] }
1906.01290
2948818258
Exploiting relationships among objects has achieved remarkable progress in interpreting images or videos by natural language. Most existing methods resort to first detecting objects and their relationships, and then generating textual descriptions, which heavily depends on pre-trained detectors and leads to performance drop when facing problems of heavy occlusion, tiny-size objects and long-tail in object detection. In addition, the separate procedure of detecting and captioning results in semantic inconsistency between the pre-defined object relation categories and the target lexical words. We exploit prior human commonsense knowledge for reasoning relationships between objects without any pre-trained detectors and reaching semantic coherency within one image or video in captioning. The prior knowledge (e.g., in the form of knowledge graph) provides commonsense semantic correlation and constraint between objects that are not explicit in the image and video, serving as useful guidance to build semantic graph for sentence generation. Particularly, we present a joint reasoning method that incorporates 1) commonsense reasoning for embedding image or video regions into semantic space to build semantic graph and 2) relational reasoning for encoding semantic graph to generate sentences. Extensive experiments on the MS-COCO image captioning benchmark and the MSVD video captioning benchmark validate the superiority of our method on leveraging prior commonsense knowledge to enhance relational reasoning for visual captioning.
Some recent methods apply an external knowledge graph for image captioning. In @cite_33 , commonsense reasoning is used to detect the scene description graph (SDG) of an image, and the SDG can be directly translated into a sentence via a template-based language model. CNet-NIC @cite_10 incorporates knowledge graphs to augment the information extracted from images for captioning. Different from these methods, which directly add explicit high-level semantic concepts from external knowledge, our method uses external knowledge to reason about relationships between semantic concepts via joint commonsense and relation reasoning, without facing the "hallucinating" problem stated in @cite_22 .
{ "cite_N": [ "@cite_22", "@cite_10", "@cite_33" ], "mid": [ "2962735233", "2914629512", "2226480163" ], "abstract": [ "", "We explore the use of a knowledge graphs, that capture general or commonsense knowledge, to augment the information extracted from images by the state-of-the-art methods for image captioning. We compare the performance of image captioning systems that as measured by CIDEr-D, a performance measure that is explicitly designed for evaluating image captioning systems, on several benchmark data sets such as MS COCO. The results of our experiments show that the variants of the state-of-the-art methods for image captioning that make use of the information extracted from knowledge graphs can substantially outperform those that rely solely on the information extracted from images.", "In this paper we propose the construction of linguistic descriptions of images. This is achieved through the extraction of scene description graphs (SDGs) from visual scenes using an automatically constructed knowledge base. SDGs are constructed using both vision and reasoning. Specifically, commonsense reasoning is applied on (a) detections obtained from existing perception methods on given images, (b) a \"commonsense\" knowledge base constructed using natural language processing of image annotations and (c) lexical ontological knowledge from resources such as WordNet. Amazon Mechanical Turk(AMT)-based evaluations on Flickr8k, Flickr30k and MS-COCO datasets show that in most cases, sentences auto-constructed from SDGs obtained by our method give a more relevant and thorough description of an image than a recent state-of-the-art image caption based approach. Our Image-Sentence Alignment Evaluation results are also comparable to that of the recent state-of-the art approaches." ] }
1906.01290
2948818258
Exploiting relationships among objects has achieved remarkable progress in interpreting images or videos by natural language. Most existing methods resort to first detecting objects and their relationships, and then generating textual descriptions, which heavily depends on pre-trained detectors and leads to performance drop when facing problems of heavy occlusion, tiny-size objects and long-tail in object detection. In addition, the separate procedure of detecting and captioning results in semantic inconsistency between the pre-defined object relation categories and the target lexical words. We exploit prior human commonsense knowledge for reasoning relationships between objects without any pre-trained detectors and reaching semantic coherency within one image or video in captioning. The prior knowledge (e.g., in the form of knowledge graph) provides commonsense semantic correlation and constraint between objects that are not explicit in the image and video, serving as useful guidance to build semantic graph for sentence generation. Particularly, we present a joint reasoning method that incorporates 1) commonsense reasoning for embedding image or video regions into semantic space to build semantic graph and 2) relational reasoning for encoding semantic graph to generate sentences. Extensive experiments on the MS-COCO image captioning benchmark and the MSVD video captioning benchmark validate the superiority of our method on leveraging prior commonsense knowledge to enhance relational reasoning for visual captioning.
Some Visual Question Answering (VQA) methods @cite_0 @cite_15 @cite_32 @cite_28 apply commonsense or relational reasoning. However, conducting reasoning for visual captioning is more challenging than for VQA: in visual captioning the semantic graph must be extracted solely from the input visual cues, whereas in VQA almost the entire semantic graph is given by the question sentences. In this paper, we resort to prior knowledge to tackle the reasoning problem in visual captioning via a newly proposed joint reasoning method.
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_28", "@cite_32" ], "mid": [ "2252136820", "2090243146", "2908791737", "2964092725" ], "abstract": [ "In this paper, we train a semantic parser that scales up to Freebase. Instead of relying on annotated logical forms, which is especially expensive to obtain at large scale, we learn from question-answer pairs. The main challenge in this setting is narrowing down the huge number of possible logical predicates for a given question. We tackle this problem in two ways: First, we build a coarse mapping from phrases to predicates using a knowledge base and a large text corpus. Second, we use a bridging operation to generate additional predicates based on neighboring predicates. On the dataset of Cai and Yates (2013), despite not having annotated logical forms, our system outperforms their state-of-the-art parser. Additionally, we collected a more realistic and challenging dataset of question-answer pairs and improves over a natural baseline.", "We consider the problem of open-domain question answering (Open QA) over massive knowledge bases (KBs). Existing approaches use either manually curated KBs like Freebase or KBs automatically extracted from unstructured text. In this paper, we present OQA, the first approach to leverage both curated and extracted KBs. A key technical challenge is designing systems that are robust to the high variability in both natural language questions and massive KBs. OQA achieves robustness by decomposing the full Open QA problem into smaller sub-problems including question paraphrasing and query reformulation. OQA solves these sub-problems by mining millions of rules from an unlabeled question corpus and across multiple KBs. OQA then learns to integrate these rules by performing discriminative training on question-answer pairs using a latent-variable structured perceptron algorithm. We evaluate OQA on three benchmark question sets and demonstrate that it achieves up to twice the precision and recall of a state-of-the-art Open QA system.", "", "Visual question answering (VQA) requires joint comprehension of images and natural language questions, where many questions can't be directly or clearly answered from visual content but require reasoning from structured human knowledge with confirmation from visual content. This paper proposes visual knowledge memory network (VKMN) to address this issue, which seamlessly incorporates structured human knowledge and deep visual features into memory networks in an end-to-end learning framework. Comparing to existing methods for leveraging external knowledge for supporting VQA, this paper stresses more on two missing mechanisms. First is the mechanism for integrating visual contents with knowledge facts. VKMN handles this issue by embedding knowledge triples (subject, relation, target) and deep visual features jointly into the visual knowledge features. Second is the mechanism for handling multiple knowledge facts expanding from question and answer pairs. VKMN stores joint embedding using key-value pair structure in the memory networks so that it is easy to handle multiple facts. Experiments show that the proposed method achieves promising results on both VQA v1.0 and v2.0 benchmarks, while outperforms state-of-the-art methods on the knowledge-reasoning related questions." ] }
1709.03551
2755176770
Multilayer network analysis has become a vital tool for understanding different relationships and their interactions in a complex system, where each layer in a multilayer network depicts the topological structure of a group of nodes corresponding to a particular relationship. The interactions among different layers reflect how different relations interplay with the topology of each layer. For a single-layer network, network embedding methods have been proposed to project the nodes in a network into a continuous vector space with a relatively small number of dimensions, where the space embeds the social representations among nodes. These algorithms have been proved to have a better performance on a variety of regular graph analysis tasks, such as link prediction, or multi-label classification. In this paper, by extending a standard graph mining into multilayer network, we have proposed three methods ("network aggregation," "results aggregation" and "layer co-analysis") to project a multilayer network into a continuous vector space. From the evaluation, we have proved that, compared with regular link prediction methods, "layer co-analysis" achieved the best performance on most of the datasets, while "network aggregation" and "results aggregation" also have better performance than regular link prediction methods.
However, both methods have limitations. Because DeepWalk uses uniform random walks, it provides no control over the explored neighborhoods. In contrast, LINE proposes a breadth-first strategy that samples nodes and optimizes the likelihood independently over 1-hop and 2-hop neighbors, but it offers no flexibility in exploring nodes at greater depths. To address both limitations, node2vec @cite_13 provides a flexible and controllable strategy for exploring network neighborhoods through the parameters @math and @math . From a practical standpoint, node2vec is scalable and robust to perturbations. However, none of these methods can deal with random walk samples that intelligently consider traversals between the layers of multilayer networks. One of our methods (layer co-analysis) is therefore a natural progression of the literature, extending the capabilities of random walk sampling.
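Since the paragraph above hinges on how node2vec's two parameters steer the walk, the following sketch shows the second-order walk rule under the usual reading of the return parameter p and in-out parameter q. It is a simplified illustration (no alias-table sampling, networkx assumed), not the reference implementation.

```python
import random
import networkx as nx

def node2vec_step(G, prev, cur, p=1.0, q=1.0):
    # One step of node2vec's second-order walk: a neighbor of `cur` is
    # weighted 1/p if it returns to `prev`, 1 if it is at distance 1
    # from `prev`, and 1/q otherwise (pushing the walk outward).
    nbrs = list(G.neighbors(cur))
    weights = []
    for x in nbrs:
        if x == prev:
            weights.append(1.0 / p)
        elif G.has_edge(x, prev):
            weights.append(1.0)
        else:
            weights.append(1.0 / q)
    return random.choices(nbrs, weights=weights, k=1)[0]

def node2vec_walk(G, start, length=10, p=1.0, q=1.0):
    # First step is uniform; subsequent steps depend on the previous node.
    walk = [start, random.choice(list(G.neighbors(start)))]
    while len(walk) < length:
        walk.append(node2vec_step(G, walk[-2], walk[-1], p, q))
    return walk

G = nx.karate_club_graph()
print(node2vec_walk(G, start=0, p=0.5, q=2.0))  # low p: BFS-like; high q: stay local
```

Small p biases the walk back toward already-visited nodes (breadth-first flavor), while small q drives it outward (depth-first flavor); this is the controllability that DeepWalk and LINE lack.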
{ "cite_N": [ "@cite_13" ], "mid": [ "2366141641" ], "abstract": [ "Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks." ] }
1709.03714
2754560064
In this paper, we propose a recurrent neural network (RNN) with residual attention (RRA) to learn long-range dependencies from sequential data. We propose to add residual connections across timesteps to RNN, which explicitly enhances the interaction between current state and hidden states that are several timesteps apart. This also allows training errors to be directly back-propagated through residual connections and effectively alleviates gradient vanishing problem. We further reformulate an attention mechanism over residual connections. An attention gate is defined to summarize the individual contribution from multiple previous hidden states in computing the current state. We evaluate RRA on three tasks: the adding problem, pixel-by-pixel MNIST classification and sentiment analysis on the IMDB dataset. Our experiments demonstrate that RRA yields better performance, faster convergence and more stable training compared to a standard LSTM network. Furthermore, RRA shows highly competitive performance to the state-of-the-art methods.
RNN is a powerful network architecture for processing sequential data and has been widely used in natural language processing @cite_20 , speech recognition @cite_27 and handwriting recognition @cite_8 in recent years. An RNN allows cyclical connections and reuses the same weights across different instances of neurons, each associated with a different time step. This design lets the network carry the entire history of previous states forward and map it into the current state, so an RNN can map a sequence of arbitrary length to a fixed-length vector. However, RNNs are known to be difficult to train due to the vanishing gradient problem.
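A few lines of numpy make the weight-reuse property described above explicit: the same parameters are applied at every time step, folding an arbitrary-length sequence into a fixed-length state. This is a plain Elman-style RNN sketch for illustration, not the RRA model.

```python
import numpy as np

def rnn_forward(x_seq, W_h, W_x, b, h0):
    # The same weights (W_h, W_x, b) are reused at every time step:
    # h_t = tanh(W_h h_{t-1} + W_x x_t + b). Repeated multiplication by
    # W_h is also why gradients shrink (or blow up) over long spans.
    h = h0
    for x_t in x_seq:
        h = np.tanh(W_h @ h + W_x @ x_t + b)
    return h  # fixed-length summary of an arbitrary-length sequence

d_in, d_h = 4, 8
rng = np.random.default_rng(0)
x_seq = rng.normal(size=(20, d_in))           # a length-20 input sequence
h = rnn_forward(x_seq,
                rng.normal(size=(d_h, d_h)) * 0.1,
                rng.normal(size=(d_h, d_in)) * 0.1,
                np.zeros(d_h), np.zeros(d_h))
print(h.shape)  # (8,)
```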
{ "cite_N": [ "@cite_27", "@cite_20", "@cite_8" ], "mid": [ "2950689855", "1423339008", "" ], "abstract": [ "Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates , which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7 on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score.", "Recursive structure is commonly found in the inputs of different modalities such as natural scene images or natural language sentences. Discovering this recursive structure helps us to not only identify the units that an image or sentence contains but also how they interact to form a whole. We introduce a max-margin structure prediction architecture based on recursive neural networks that can successfully recover such structure both in complex scene images as well as sentences. The same algorithm can be used both to provide a competitive syntactic parser for natural language sentences from the Penn Treebank and to outperform alternative approaches for semantic scene segmentation, annotation and classification. For segmentation and annotation our algorithm obtains a new level of state-of-the-art performance on the Stanford background dataset (78.1 ). The features from the image parse tree outperform Gist descriptors for scene classification by 4 .", "" ] }
1709.03714
2754560064
In this paper, we propose a recurrent neural network (RNN) with residual attention (RRA) to learn long-range dependencies from sequential data. We propose to add residual connections across timesteps to RNN, which explicitly enhances the interaction between current state and hidden states that are several timesteps apart. This also allows training errors to be directly back-propagated through residual connections and effectively alleviates gradient vanishing problem. We further reformulate an attention mechanism over residual connections. An attention gate is defined to summarize the individual contribution from multiple previous hidden states in computing the current state. We evaluate RRA on three tasks: the adding problem, pixel-by-pixel MNIST classification and sentiment analysis on the IMDB dataset. Our experiments demonstrate that RRA yields better performance, faster convergence and more stable training compared to a standard LSTM network. Furthermore, RRA shows highly competitive performance to the state-of-the-art methods.
The vanishing gradient problem was originally identified in @cite_6 , and LSTM (Long Short-Term Memory) was subsequently proposed to prevent gradients from vanishing during training. Compared to the traditional RNN, LSTM can therefore learn long-term dependencies between inputs and outputs, and it has become very popular in machine translation @cite_5 , speech recognition @cite_27 and sequence learning @cite_10 . Another special type of RNN is the Gated Recurrent Unit (GRU) @cite_5 , which simplifies LSTM by removing the memory cell and provides a different way to prevent the vanishing gradient problem. Our work falls into this category and aims to alleviate vanishing gradients when learning ultra-long dependencies.
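As a concrete illustration of the GRU variant mentioned above, here is one gated update step under the standard GRU equations (biases omitted for brevity). The parameter names are placeholders; this is a sketch of the general mechanism, not this paper's cell.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h, params):
    # GRU update without a separate memory cell: the update gate z and
    # reset gate r control how much of the previous state survives,
    # giving gradients a near-linear path through time when z is small.
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1 - z) * h + z * h_tilde

d = 6
rng = np.random.default_rng(0)
params = [rng.normal(size=(d, d)) * 0.1 for _ in range(6)]
h = gru_step(rng.normal(size=d), np.zeros(d), params)
print(h.shape)  # (6,)
```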
{ "cite_N": [ "@cite_5", "@cite_27", "@cite_10", "@cite_6" ], "mid": [ "2950635152", "2950689855", "2949888546", "" ], "abstract": [ "In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.", "Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates , which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7 on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score.", "Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. 
Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.", "" ] }
1709.03714
2754560064
In this paper, we propose a recurrent neural network (RNN) with residual attention (RRA) to learn long-range dependencies from sequential data. We propose to add residual connections across timesteps to RNN, which explicitly enhances the interaction between current state and hidden states that are several timesteps apart. This also allows training errors to be directly back-propagated through residual connections and effectively alleviates gradient vanishing problem. We further reformulate an attention mechanism over residual connections. An attention gate is defined to summarize the individual contribution from multiple previous hidden states in computing the current state. We evaluate RRA on three tasks: the adding problem, pixel-by-pixel MNIST classification and sentiment analysis on the IMDB dataset. Our experiments demonstrate that RRA yields better performance, faster convergence and more stable training compared to a standard LSTM network. Furthermore, RRA shows highly competitive performance to the state-of-the-art methods.
Previous work @cite_9 @cite_3 has shown that network depth is of crucial importance to neural network architectures, but deeper networks are more challenging to train. Residual learning @cite_4 paves the way for training such networks: the residual mapping between layers enables networks to be substantially deep (e.g. with hundreds of layers), leads to more efficient optimization and, most importantly, yields better performance. Short-cut skip connections across multiple layers force a direct information flow in both the forward and backward passes, so feedforward signals as well as feedback errors can propagate easily. Adding residual connections across layers has shown powerful capability in computer vision @cite_4 @cite_22 . Inspired by this, our work incorporates residual connections across multiple processing steps to learn long and complex dependencies from sequential data.
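The residual mapping discussed above fits in two lines: the block output is the input plus a learned correction, so the identity path carries signals (and, in the backward pass, gradients) straight through. The ReLU MLP standing in for the residual function F here is an arbitrary illustrative choice.

```python
import numpy as np

def residual_block(x, W1, W2):
    # y = x + F(x). Since dy/dx = I + dF/dx, the gradient always keeps
    # an identity component, which is what keeps deep stacks trainable.
    return x + W2 @ np.maximum(0.0, W1 @ x)  # F: a small ReLU MLP

x = np.random.randn(16)
W1 = np.random.randn(16, 16) * 0.1
W2 = np.random.randn(16, 16) * 0.1
print(residual_block(x, W1, W2).shape)  # (16,)
```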
{ "cite_N": [ "@cite_9", "@cite_4", "@cite_22", "@cite_3" ], "mid": [ "2962835968", "2949650786", "2274287116", "2950179405" ], "abstract": [ "Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question of whether there are any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. 
We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual and one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the ImageNet classification (CLS) challenge", "We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection." ] }
1709.03714
2754560064
In this paper, we propose a recurrent neural network (RNN) with residual attention (RRA) to learn long-range dependencies from sequential data. We propose to add residual connections across timesteps to RNN, which explicitly enhances the interaction between current state and hidden states that are several timesteps apart. This also allows training errors to be directly back-propagated through residual connections and effectively alleviates gradient vanishing problem. We further reformulate an attention mechanism over residual connections. An attention gate is defined to summarize the individual contribution from multiple previous hidden states in computing the current state. We evaluate RRA on three tasks: the adding problem, pixel-by-pixel MNIST classification and sentiment analysis on the IMDB dataset. Our experiments demonstrate that RRA yields better performance, faster convergence and more stable training compared to a standard LSTM network. Furthermore, RRA shows highly competitive performance to the state-of-the-art methods.
Attention in neural networks @cite_1 is designed to assign weights to different inputs, instead of treating all input sequences equally as the original neural networks do. It can be seen as an additional network and is now widely incorporated into different neural networks, leading to a new variety of models @cite_30 @cite_26 @cite_7 @cite_19 . Formally, an attention model takes @math arguments, e.g. @math ,..., @math , and a context @math . It returns a weighted output @math that summarizes the arguments according to how each is related to the context @math . The weights correspond to the relevance between each @math and @math and sum to 1 (e.g. the weights @math in Figure (c)); they determine the relative contribution of each @math to the final output. However, current state-of-the-art attention methods are either layer- or network-based, and attention has not been well studied in a recurrent manner. This work reformulates attention over residual connections in a recurrent network.
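The formal description above (arguments scored against a context, weights summing to 1, weighted summary returned) corresponds to the following dot-product sketch. The scoring function is an assumption for illustration; real models use various learned compatibility functions.

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max())  # shift for numerical stability
    return e / e.sum()

def attend(ys, c):
    # Score each argument y_i against the context c (dot product here),
    # normalize the scores into weights that sum to 1, and return the
    # weighted summary plus the weights themselves.
    scores = np.array([y @ c for y in ys])
    alphas = softmax(scores)                 # relevance of each y_i to c
    return sum(a * y for a, y in zip(alphas, ys)), alphas

ys = [np.random.rand(8) for _ in range(5)]   # n = 5 arguments
c = np.random.rand(8)                        # context vector
z, alphas = attend(ys, c)
print(alphas.sum())                          # 1.0
```

In the RRA setting described by the abstract, the arguments would be hidden states reachable through residual connections, but that specific wiring is the paper's contribution and is not reproduced here.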
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_7", "@cite_1", "@cite_19" ], "mid": [ "2950178297", "2964036520", "2951527505", "2964308564", "1850742715" ], "abstract": [ "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.", "Abstract: We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.", "Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so.", "Abstract: Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. 
Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.", "This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye." ] }
1709.03528
2755278809
For distributed computing environments, we consider the canonical machine learning problem of empirical risk minimization (ERM) with quadratic regularization, and we propose a distributed and communication-efficient Newton-type optimization method. At every iteration, each worker locally finds an Approximate NewTon (ANT) direction, and then it sends this direction to the main driver. The driver, then, averages all the ANT directions received from workers to form a Globally Improved ANT (GIANT) direction. GIANT naturally exploits the trade-offs between local computations and global communications in that more local computations result in fewer overall rounds of communications. GIANT is highly communication efficient in that, for @math -dimensional data uniformly distributed across @math workers, it has @math or @math rounds of communication and @math communication complexity per iteration. Theoretically, we show that GIANT's convergence rate is faster than first-order methods and existing distributed Newton-type methods. From a practical point-of-view, a highly beneficial feature of GIANT is that it has only one tuning parameter---the iterations of the local solver for computing an ANT direction. This is indeed in sharp contrast with many existing distributed Newton-type methods, as well as popular first-order methods, which have several tuning parameters, and whose performance can be greatly affected by the specific choices of such parameters. In this light, we empirically demonstrate the superior performance of GIANT compared with other competing methods.
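A rough sketch of one GIANT-style iteration for ridge regression (quadratic loss assumed, equal-sized partitions) may help: each worker forms a local Approximate Newton direction against the averaged global gradient, and the driver averages those directions. The exact local solve below stands in for the paper's inexact local solver, and the step-size search is omitted; all names are illustrative.

```python
import numpy as np

def giant_iteration(w, X_parts, y_parts, lam):
    # Round 1 of communication: average the workers' local gradients.
    grads = []
    for X, y in zip(X_parts, y_parts):
        r = X @ w - y                          # local least-squares residual
        grads.append(X.T @ r / len(y) + lam * w)
    g = np.mean(grads, axis=0)                 # global gradient
    # Round 2: each worker solves its local Newton system H_i p_i = g
    # (the ANT direction); the driver averages into the GIANT direction.
    dirs = []
    for X, y in zip(X_parts, y_parts):
        H = X.T @ X / len(y) + lam * np.eye(len(w))
        dirs.append(np.linalg.solve(H, g))
    return w - np.mean(dirs, axis=0)

rng = np.random.default_rng(0)
d = 5
w_true = rng.normal(size=d)
X_parts = [rng.normal(size=(100, d)) for _ in range(4)]   # 4 workers
y_parts = [X @ w_true for X in X_parts]
w = np.zeros(d)
for _ in range(3):
    w = giant_iteration(w, X_parts, y_parts, lam=0.1)
print(np.linalg.norm(w - w_true))  # shrinks across iterations
```

Only the d-dimensional vectors g and p_i cross the network, which is the communication-efficiency point the abstract makes.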
Among the existing distributed second-order optimization methods, the most notable are DANE @cite_4 , AIDE @cite_18 , and DiSCO @cite_31 . Another related method is CoCoA @cite_15 @cite_40 @cite_26 , which resembles second-order methods in that its sub-problems are local quadratic approximations to the dual objective function. However, although CoCoA makes use of the smoothness condition, it does not exploit any explicit second-order information.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_4", "@cite_15", "@cite_40", "@cite_31" ], "mid": [ "2510078447", "2556660792", "2963992805", "", "2952676558", "1697545848" ], "abstract": [ "In this paper, we present two new communication-efficient methods for distributed minimization of an average of functions. The first algorithm is an inexact variant of the DANE algorithm that allows any local algorithm to return an approximate solution to a local subproblem. We show that such a strategy does not affect the theoretical guarantees of DANE significantly. In fact, our approach can be viewed as a robustification strategy since the method is substantially better behaved than DANE on data partition arising in practice. It is well known that DANE algorithm does not match the communication complexity lower bounds. To bridge this gap, we propose an accelerated variant of the first method, called AIDE, that not only matches the communication lower bounds but can also be implemented using a purely first-order oracle. Our empirical results show that AIDE is superior to other communication efficient algorithms in settings that naturally arise in machine learning applications.", "The scale of modern datasets necessitates the development of efficient distributed optimization methods for machine learning. We present a general-purpose framework for distributed computing environments, CoCoA, that has an efficient communication scheme and is applicable to a wide variety of problems in machine learning and signal processing. We extend the framework to cover general non-strongly-convex regularizers, including L1-regularized problems like lasso, sparse logistic regression, and elastic net regularization, and show how earlier work can be derived as a special case. We provide convergence guarantees for the class of convex regularized loss minimization objectives, leveraging a novel approach in handling non-strongly-convex regularizers and non-smooth loss functions. The resulting framework has markedly improved performance over state-of-the-art methods, as we illustrate with an extensive set of experiments on real distributed datasets.", "We present a novel Newton-type method for distributed optimization, which is particularly well suited for stochastic optimization and learning problems. For quadratic objectives, the method enjoys a linear rate of convergence which provably improves with the data size, requiring an essentially constant number of iterations under reasonable assumptions. We provide theoretical and empirical evidence of the advantages of our method compared to other approaches, such as one-shot parameter averaging and ADMM.", "", "We address the statistical and optimization impacts of the classical sketch and Hessian sketch used to approximately solve the Matrix Ridge Regression (MRR) problem. Prior research has quantified the effects of classical sketch on the strictly simpler least squares regression (LSR) problem. We establish that classical sketch has a similar effect upon the optimization properties of MRR as it does on those of LSR: namely, it recovers nearly optimal solutions. By contrast, Hessian sketch does not have this guarantee, instead, the approximation error is governed by a subtle interplay between the \"mass\" in the responses and the optimal objective value. For both types of approximation, the regularization in the sketched MRR problem results in significantly different statistical properties from those of the sketched LSR problem. 
In particular, there is a bias-variance trade-off in sketched MRR that is not present in sketched LSR. We provide upper and lower bounds on the bias and variance of sketched MRR, these bounds show that classical sketch significantly increases the variance, while Hessian sketch significantly increases the bias. Empirically, sketched MRR solutions can have risks that are higher by an order-of-magnitude than those of the optimal MRR solutions. We establish theoretically and empirically that model averaging greatly decreases the gap between the risks of the true and sketched solutions to the MRR problem. Thus, in parallel or distributed settings, sketching combined with model averaging is a powerful technique that quickly obtains near-optimal solutions to the MRR problem while greatly mitigating the increased statistical risk incurred by sketching.", "We propose a new distributed algorithm for empirical risk minimization in machine learning. The algorithm is based on an inexact damped Newton method, where the inexact Newton steps are computed by a distributed preconditioned conjugate gradient method. We analyze its iteration complexity and communication efficiency for minimizing self-concordant empirical loss functions, and discuss the results for distributed ridge regression, logistic regression and binary classification with a smoothed hinge loss. In a standard setting for supervised learning, where the n data points are i.i.d. sampled and when the regularization parameter scales as 1 √n show that the proposed algorithm is communication efficient: the required round of communication does not increase with the sample size n, and only grows slowly with the number of machines." ] }
1709.03528
2755278809
For distributed computing environments, we consider the canonical machine learning problem of empirical risk minimization (ERM) with quadratic regularization, and we propose a distributed and communication-efficient Newton-type optimization method. At every iteration, each worker locally finds an Approximate NewTon (ANT) direction, and then it sends this direction to the main driver. The driver, then, averages all the ANT directions received from workers to form a Globally Improved ANT (GIANT) direction. GIANT naturally exploits the trade-offs between local computations and global communications in that more local computations result in fewer overall rounds of communications. GIANT is highly communication efficient in that, for @math -dimensional data uniformly distributed across @math workers, it has @math or @math rounds of communication and @math communication complexity per iteration. Theoretically, we show that GIANT's convergence rate is faster than first-order methods and existing distributed Newton-type methods. From a practical point-of-view, a highly beneficial feature of GIANT is that it has only one tuning parameter---the iterations of the local solver for computing an ANT direction. This is indeed in sharp contrast with many existing distributed Newton-type methods, as well as popular first-order methods, which have several tuning parameters, and whose performance can be greatly affected by the specific choices of such parameters. In this light, we empirically demonstrate the superior performance of GIANT compared with other competing methods.
In Table , we compare the communication costs with those of other methods for the ridge regression problem: @math . For general convex problems, it is hard to present such a comparison in an easily digestible way, which is why we do not compare convergence for general convex optimization. The communication cost of GIANT has a mere logarithmic dependence on the condition number @math ; in contrast, the other methods have at least a square-root dependence on @math . Even if @math is assumed to be small, say @math , an assumption made by @cite_31 , GIANT's bound is better than those of the compared methods in its dependence on the number of partitions, @math .
{ "cite_N": [ "@cite_31" ], "mid": [ "1697545848" ], "abstract": [ "We propose a new distributed algorithm for empirical risk minimization in machine learning. The algorithm is based on an inexact damped Newton method, where the inexact Newton steps are computed by a distributed preconditioned conjugate gradient method. We analyze its iteration complexity and communication efficiency for minimizing self-concordant empirical loss functions, and discuss the results for distributed ridge regression, logistic regression and binary classification with a smoothed hinge loss. In a standard setting for supervised learning, where the n data points are i.i.d. sampled and when the regularization parameter scales as 1 √n show that the proposed algorithm is communication efficient: the required round of communication does not increase with the sample size n, and only grows slowly with the number of machines." ] }
1709.03528
2755278809
For distributed computing environments, we consider the canonical machine learning problem of empirical risk minimization (ERM) with quadratic regularization, and we propose a distributed and communication-efficient Newton-type optimization method. At every iteration, each worker locally finds an Approximate NewTon (ANT) direction, and then it sends this direction to the main driver. The driver, then, averages all the ANT directions received from workers to form a Globally Improved ANT (GIANT) direction. GIANT naturally exploits the trade-offs between local computations and global communications in that more local computations result in fewer overall rounds of communications. GIANT is highly communication efficient in that, for @math -dimensional data uniformly distributed across @math workers, it has @math or @math rounds of communication and @math communication complexity per iteration. Theoretically, we show that GIANT's convergence rate is faster than first-order methods and existing distributed Newton-type methods. From a practical point-of-view, a highly beneficial feature of GIANT is that it has only one tuning parameter---the iterations of the local solver for computing an ANT direction. This is indeed in sharp contrast with many existing distributed Newton-type methods, as well as popular first-order methods, which have several tuning parameters, and whose performance can be greatly affected by the specific choices of such parameters. In this light, we empirically demonstrate the superior performance of GIANT compared with other competing methods.
Our GIANT method is motivated by the subsampled Newton method @cite_3 @cite_29 @cite_12 . Later we realized that a very similar idea had been proposed by DANE @cite_4 (GIANT and DANE are identical for quadratic programming but differ for general convex problems) and FADL @cite_30 , but we show better convergence bounds. Mahajan et al. @cite_30 conducted comprehensive empirical studies and concluded that the local quadratic approximation, which is very similar to GIANT, is the method they ultimately recommend.
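As a companion to the distributed sketch above, here is a hedged illustration of the sub-sampled Newton idea that motivates GIANT: the gradient is exact, but the Hessian is formed from a uniform row sample. The sample size s, the uniform sampling scheme, and the l2-regularized logistic objective are assumptions of this sketch, not any one cited paper's exact recipe.

```python
# Illustrative sub-sampled Newton step for l2-regularized logistic
# regression with 0/1 labels, in the spirit of the sub-sampled methods
# cited above; the uniform row sample of size s is an assumption.
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def subsampled_newton_step(X, y, w, lam, s, rng):
    n, d = X.shape
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / n + lam * w           # exact full gradient
    idx = rng.choice(n, size=s, replace=False)   # uniform row sample
    Xs, Ds = X[idx], p[idx] * (1.0 - p[idx])     # logistic curvature weights
    H = (Xs * Ds[:, None]).T @ Xs / s + lam * np.eye(d)  # sampled Hessian
    return w - np.linalg.solve(H, grad)          # approximate Newton step
```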
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_29", "@cite_3", "@cite_12" ], "mid": [ "", "2963992805", "2467074172", "2295465721", "2963060476" ], "abstract": [ "", "We present a novel Newton-type method for distributed optimization, which is particularly well suited for stochastic optimization and learning problems. For quadratic objectives, the method enjoys a linear rate of convergence which provably improves with the data size, requiring an essentially constant number of iterations under reasonable assumptions. We provide theoretical and empirical evidence of the advantages of our method compared to other approaches, such as one-shot parameter averaging and ADMM.", "We consider the problem of finding the minimizer of a convex function @math of the form @math where a low-rank factorization of @math is readily available. We consider the regime where @math . As second-order methods prove to be effective in finding the minimizer to a high-precision, in this work, we propose randomized Newton-type algorithms that exploit sub-sampling of @math , as well as inexact updates, as means to reduce the computational complexity. Two non-uniform sampling distributions based on block norm squares and block partial leverage scores are considered in order to capture important terms among @math . We show that at each iteration non-uniformly sampling at most @math terms from @math is sufficient to achieve a linear-quadratic convergence rate in @math when a suitable initial point is provided. In addition, we show that our algorithms achieve a lower computational complexity and exhibit more robustness and better dependence on problem specific quantities, such as the condition number, compared to similar existing methods, especially the ones based on uniform sampling. Finally, we empirically demonstrate that our methods are at least twice as fast as Newton's methods with ridge logistic regression on several real datasets.", "Many data-fitting applications require the solution of an optimization problem involving a sum of large number of functions of high dimensional parameter. Here, we consider the problem of minimizing a sum of @math functions over a convex constraint set @math where both @math and @math are large. In such problems, sub-sampling as a way to reduce @math can offer great amount of computational efficiency. Within the context of second order methods, we first give quantitative local convergence results for variants of Newton's method where the Hessian is uniformly sub-sampled. Using random matrix concentration inequalities, one can sub-sample in a way that the curvature information is preserved. Using such sub-sampling strategy, we establish locally Q-linear and Q-superlinear convergence rates. We also give additional convergence results for when the sub-sampled Hessian is regularized by modifying its spectrum or Levenberg-type regularization. Finally, in addition to Hessian sub-sampling, we consider sub-sampling the gradient as way to further reduce the computational complexity per iteration. We use approximate matrix multiplication results from randomized numerical linear algebra (RandNLA) to obtain the proper sampling strategy and we establish locally R-linear convergence rates. In such a setting, we also show that a very aggressive sample size increase results in a R-superlinearly convergent algorithm. While the sample size depends on the condition number of the problem, our convergence rates are problem-independent, i.e., they do not depend on the quantities related to the problem. 
Hence, our analysis here can be used to complement the results of our basic framework from the companion paper, [38], by exploring algorithmic trade-offs that are important in practice.", "We propose a randomized second-order method for optimization known as the Newton sketch: it is based on performing an approximate Newton step using a randomly projected Hessian. For self-concordant functions, we prove that the algorithm has superlinear convergence with exponentially high probability, with convergence and complexity guarantees that are independent of condition numbers and related problem-dependent quantities. Given a suitable initialization, similar guarantees also hold for strongly convex and smooth objectives without self-concordance. When implemented using randomized projections based on a subsampled Hadamard basis, the algorithm typically has substantially lower complexity than Newton's method. We also describe extensions of our methods to programs involving convex constraints that are equipped with self-concordant barriers. We discuss and illustrate applications to linear programs, quadratic programs with convex constraints, logistic regression, and other generalized linear models, as..." ] }
1709.03655
2754254578
For the two-stream style methods in action recognition, fusing the two streams' predictions is always by the weighted averaging scheme. This fusion method with fixed weights lacks of pertinence to different action videos and always needs trial and error on the validation set. In order to enhance the adaptability of two-stream ConvNets and improve its performance, an end-to-end trainable gated fusion method, namely gating ConvNet, for the two-stream ConvNets is proposed in this paper based on the MoE (Mixture of Experts) theory. The gating ConvNet takes the combination of feature maps from the same layer of the spatial and the temporal nets as input and adopts ReLU (Rectified Linear Unit) as the gating output activation function. To reduce the over-fitting of gating ConvNet caused by the redundancy of parameters, a new multi-task learning method is designed, which jointly learns the gating fusion weights for the two streams and learns the gating ConvNet for action classification. With our gated fusion method and multi-task learning approach, a high accuracy of 94.5 is achieved on the dataset UCF101.
Multi-task learning. Multi-task learning is a useful regularizer that can reduce over-fitting and improve performance in deep learning. In object detection, Faster R-CNN @cite_18 employs multi-task learning both in RPN (Region Proposal Network) training and in Fast R-CNN @cite_31 training to perform object classification and bounding-box regression simultaneously. In action recognition, the two-stream ConvNets @cite_21 method combines different action datasets @cite_19 @cite_34 in the training stage and back-propagates gradients through two different classification branches that share the same input feature. In this way, over-fitting is reduced by increasing the amount of training data. In this work, a different multi-task learning approach is proposed for action recognition, namely, jointly learning the gating fusion weights for the two streams and learning the gating ConvNet for action classification. With this multi-task learning approach, the accuracy of our MoE is improved by more than 0.1
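To illustrate the mechanism just described, here is a hedged PyTorch sketch of gated two-stream fusion trained jointly with classification. The paper adopts ReLU as the gating output activation; this sketch uses a softmax gate for a simple convex combination, and all layer sizes, names, and the auxiliary-loss weight are illustrative assumptions.

```python
# A hedged sketch of gated two-stream fusion in the MoE spirit: a gating
# net produces per-sample weights for the spatial and temporal "experts",
# and the fused prediction is trained jointly with classification.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedFusion(nn.Module):
    def __init__(self, feat_dim):
        super().__init__()
        # Gating net consumes concatenated features from both streams.
        self.gate = nn.Sequential(nn.Linear(2 * feat_dim, 64),
                                  nn.ReLU(), nn.Linear(64, 2))

    def forward(self, spatial_logits, temporal_logits, spatial_feat, temporal_feat):
        g = F.softmax(self.gate(torch.cat([spatial_feat, temporal_feat], dim=1)), dim=1)
        # Per-sample convex combination of the two experts' predictions.
        return g[:, :1] * spatial_logits + g[:, 1:] * temporal_logits

# Joint (multi-task) objective: classify the fused output while also
# classifying each stream, so the gate and the classifier are learned together.
def joint_loss(fused, spatial_logits, temporal_logits, labels, aux_weight=0.5):
    return (F.cross_entropy(fused, labels)
            + aux_weight * F.cross_entropy(spatial_logits, labels)
            + aux_weight * F.cross_entropy(temporal_logits, labels))
```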
{ "cite_N": [ "@cite_18", "@cite_21", "@cite_19", "@cite_31", "@cite_34" ], "mid": [ "2613718673", "2952186347", "24089286", "", "2126579184" ], "abstract": [ "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.", "We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multi-task learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.", "We introduce UCF101 which is currently the largest dataset of human actions. It consists of 101 action classes, over 13k clips and 27 hours of video data. The database consists of realistic user uploaded videos containing camera motion and cluttered background. Additionally, we provide baseline action recognition results on this new dataset using standard bag of words approach with overall performance of 44.5 . To the best of our knowledge, UCF101 is currently the most challenging dataset of actions due to its large number of classes, large number of clips and also unconstrained nature of such clips.", "", "With nearly one billion online videos viewed everyday, an emerging new frontier in computer vision research is recognition and search in video. While much effort has been devoted to the collection and annotation of large scalable static image datasets containing thousands of image categories, human action datasets lag far behind. Current action recognition databases contain on the order of ten different action categories collected under fairly controlled conditions. 
State-of-the-art performance on these datasets is now near ceiling and thus there is a need for the design and creation of new benchmarks. To address this issue we collected the largest action video database to-date with 51 action categories, which in total contain around 7,000 manually annotated clips extracted from a variety of sources ranging from digitized movies to YouTube. We use this database to evaluate the performance of two representative computer vision systems for action recognition and explore the robustness of these methods under various conditions such as camera motion, viewpoint, video quality and occlusion." ] }
1709.03856
2755957574
We present StarSpace, a general-purpose neural embedding model that can solve a wide variety of problems: labeling tasks such as text classification, ranking tasks such as information retrieval web search, collaborative filtering-based or content-based recommendation, embedding of multi-relational graphs, and learning word, sentence or document level embeddings. In each case the model works by embedding those entities comprised of discrete features and comparing them against each other -- learning similarities dependent on the task. Empirical results on a number of tasks show that StarSpace is highly competitive with existing methods, whilst also being generally applicable to new cases where those methods are not.
In the domain of supervised embeddings, SSI @cite_24 and WSABIE @cite_23 are early approaches that showed promise in NLP and information retrieval tasks ( @cite_30 , @cite_6 ). Several more recent works including @cite_3 , @cite_19 , @cite_26 , TagSpace @cite_21 and fastText @cite_18 have yielded good results on classification tasks such as sentiment analysis or hashtag prediction.
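The common thread of these supervised embedding models can be illustrated with a short sketch: entities are bags of discrete features embedded into a shared space, and a margin ranking loss pushes a (query, positive) pair above sampled negatives. The embedding dimension, margin, dot-product similarity, and negative-sampling scheme below are assumptions, not any one paper's exact recipe.

```python
# A minimal sketch of a StarSpace-style training signal: entities are
# embedded as sums of feature embeddings, and a hinge (margin ranking)
# loss scores a positive pair above k sampled negatives.
import torch
import torch.nn as nn

emb = nn.EmbeddingBag(num_embeddings=50_000, embedding_dim=100, mode="sum")

def starspace_loss(query_ids, pos_ids, neg_ids_list, margin=0.2):
    q = emb(query_ids)                      # embed the query entity (bag of ids)
    pos = (q * emb(pos_ids)).sum(dim=1)     # dot-product similarity to positive
    loss = 0.0
    for neg_ids in neg_ids_list:            # k sampled negative entities
        neg = (q * emb(neg_ids)).sum(dim=1)
        loss = loss + torch.clamp(margin - pos + neg, min=0).mean()
    return loss
```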
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_26", "@cite_21", "@cite_3", "@cite_6", "@cite_24", "@cite_19", "@cite_23" ], "mid": [ "1792926363", "2468328197", "2413904250", "2113552117", "", "", "2129921015", "1775434803", "21006490" ], "abstract": [ "This paper proposes a novel approach for relation extraction from free text which is trained to jointly use information from the text and from existing knowledge. Our model is based on scoring functions that operate by learning low-dimensional embeddings of words, entities and relationships from a knowledge base. We empirically show on New York Times articles aligned with Freebase relations that our approach is able to efficiently use the extra information provided by a large subset of Freebase data (4M entities, 23k relationships) to improve over methods that rely on text features alone.", "This paper explores a simple and efficient baseline for text classification. Our experiments show that our fast text classifier fastText is often on par with deep learning classifiers in terms of accuracy, and many orders of magnitude faster for training and evaluation. We can train fastText on more than one billion words in less than ten minutes using a standard multicore CPU, and classify half a million sentences among 312K classes in less than a minute.", "The dominant approach for many NLP tasks are recurrent neural networks, in particular LSTMs, and convolutional neural networks. However, these architectures are rather shallow in comparison to the deep convolutional networks which are very successful in computer vision. We present a new architecture for text processing which operates directly on the character level and uses only small convolutions and pooling operations. We are able to show that the performance of this model increases with the depth: using up to 29 convolutional layers, we report significant improvements over the state-of-the-art on several public text classification tasks. To the best of our knowledge, this is the first time that very deep convolutional nets have been applied to NLP.", "We describe a convolutional neural network that learns feature representations for short textual posts using hashtags as a supervised signal. The proposed approach is trained on up to 5.5 billion words predicting 100,000 possible hashtags. As well as strong performance on the hashtag prediction task itself, we show that its learned representation of text (ignoring the hashtag labels) is useful for other tasks as well. To that end, we present results on a document recommendation task, where it also outperforms a number of baselines.", "", "", "In this article we propose Supervised Semantic Indexing (SSI), an algorithm that is trained on (query, document) pairs of text documents to predict the quality of their match. Like Latent Semantic Indexing (LSI), our models take account of correlations between words (synonymy, polysemy). However, unlike LSI our models are trained with a supervised signal directly on the ranking task of interest, which we argue is the reason for our superior results. As the query and target texts are modeled separately, our approach is easily generalized to different retrieval tasks, such as online advertising placement. Dealing with models on all pairs of words features is computationally challenging. We propose several improvements to our basic model for addressing this issue, including low rank (but diagonal preserving) representations, and correlated feature hashing (CFH). 
We provide an empirical study of all these methods on retrieval tasks based on Wikipedia documents as well as an Internet advertisement task. We obtain state-of-the-art performance while providing realistically scalable methods.", "This article demontrates that we can apply deep learning to text understanding from character-level inputs all the way up to abstract text concepts, using temporal convolutional networks (ConvNets). We apply ConvNets to various large-scale datasets, including ontology classification, sentiment analysis, and text categorization. We show that temporal ConvNets can achieve astonishing performance without the knowledge of words, phrases, sentences and any other syntactic or semantic structures with regards to a human language. Evidence shows that our models can work for both English and Chinese.", "Image annotation datasets are becoming larger and larger, with tens of millions of images and tens of thousands of possible annotations. We propose a strongly performing method that scales to such datasets by simultaneously learning to optimize precision at the top of the ranked list of annotations for a given image and learning a low-dimensional joint embedding space for both images and annotations. Our method, called WSABIE, both outperforms several baseline methods and is faster and consumes less memory." ] }
1709.03749
2753778655
In this paper we introduce a natural image prior that directly represents a Gaussian-smoothed version of the natural image distribution. We include our prior in a formulation of image restoration as a Bayes estimator that also allows us to solve noise-blind image restoration problems. We show that the gradient of our prior corresponds to the mean-shift vector on the natural image distribution. In addition, we learn the mean-shift vector field using denoising autoencoders, and use it in a gradient descent approach to perform Bayes risk minimization. We demonstrate competitive results for noise-blind deblurring, super-resolution, and demosaicing.
@cite_26 designed a prior model that is also implemented by a denoiser, but that does not build on a proximal formulation such as ADMM. Interestingly, the gradient of their regularization term boils down to the residual of the denoiser, that is, the difference between its input and output, which is the same as in our approach. However, their framework does not establish the connection between the prior and the natural image probability distribution, as we do. Finally, Bigdeli and Zwicker @cite_30 formulate an energy function in which a Denoising Autoencoder (DAE) network serves as the prior, similar to our approach, but they do not address the case of noise-blind restoration.
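Since both our approach and @cite_26 use the denoiser residual as the gradient of the regularizer, the shared computational pattern can be sketched as plain gradient descent. Here `degrade`, `degrade_adj`, and `denoise` are hypothetical stand-ins for the degradation operator, its adjoint, and a pre-trained DAE; the step size and prior weight are illustrative assumptions.

```python
# A hedged NumPy sketch of restoration with a denoiser-residual prior:
# gradient descent on a data term plus a prior whose gradient is the
# residual x - denoise(x) (the mean-shift vector discussed above).
import numpy as np

def restore(y, degrade, degrade_adj, denoise, steps=200, lr=0.1, weight=0.5):
    x = y.copy()
    for _ in range(steps):
        data_grad = degrade_adj(degrade(x) - y)   # grad of 0.5*||A x - y||^2
        prior_grad = x - denoise(x)               # denoiser residual as prior grad
        x -= lr * (data_grad + weight * prior_grad)
    return x
```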
{ "cite_N": [ "@cite_30", "@cite_26" ], "mid": [ "2951811868", "2952159414" ], "abstract": [ "We propose to leverage denoising autoencoder networks as priors to address image restoration problems. We build on the key observation that the output of an optimal denoising autoencoder is a local mean of the true data density, and the autoencoder error (the difference between the output and input of the trained autoencoder) is a mean shift vector. We use the magnitude of this mean shift vector, that is, the distance to the local mean, as the negative log likelihood of our natural image prior. For image restoration, we maximize the likelihood using gradient descent by backpropagating the autoencoder error. A key advantage of our approach is that we do not need to train separate networks for different image restoration tasks, such as non-blind deconvolution with different kernels, or super-resolution at different magnification factors. We demonstrate state of the art results for non-blind deconvolution and super-resolution using the same autoencoding prior.", "Removal of noise from an image is an extensively studied problem in image processing. Indeed, the recent advent of sophisticated and highly effective denoising algorithms lead some to believe that existing methods are touching the ceiling in terms of noise removal performance. Can we leverage this impressive achievement to treat other tasks in image processing? Recent work has answered this question positively, in the form of the Plug-and-Play Prior ( @math ) method, showing that any inverse problem can be handled by sequentially applying image denoising steps. This relies heavily on the ADMM optimization technique in order to obtain this chained denoising interpretation. Is this the only way in which tasks in image processing can exploit the image denoising engine? In this paper we provide an alternative, more powerful and more flexible framework for achieving the same goal. As opposed to the @math method, we offer Regularization by Denoising (RED): using the denoising engine in defining the regularization of the inverse problem. We propose an explicit image-adaptive Laplacian-based regularization functional, making the overall objective functional clearer and better defined. With a complete flexibility to choose the iterative optimization procedure for minimizing the above functional, RED is capable of incorporating any image denoising algorithm, treat general inverse problems very effectively, and is guaranteed to converge to the globally optimal result. We test this approach and demonstrate state-of-the-art results in the image deblurring and super-resolution problems." ] }
1709.03749
2753778655
In this paper we introduce a natural image prior that directly represents a Gaussian-smoothed version of the natural image distribution. We include our prior in a formulation of image restoration as a Bayes estimator that also allows us to solve noise-blind image restoration problems. We show that the gradient of our prior corresponds to the mean-shift vector on the natural image distribution. In addition, we learn the mean-shift vector field using denoising autoencoders, and use it in a gradient descent approach to perform Bayes risk minimization. We demonstrate competitive results for noise-blind deblurring, super-resolution, and demosaicing.
Kernel-blind deconvolution has seen the most effort recently, whereas we support the fully (noise- and kernel-) blind setting. Noise-blind deblurring is usually performed by first estimating the noise level and then restoring with the estimated noise. @cite_1 proposed a Bayes risk formulation that can perform deblurring by adaptively changing the regularization, without needing a noise variance estimate. @cite_16 @cite_35 explored a spatially-adaptive sparse prior and a scale-space formulation to handle noise- or kernel-blind deconvolution. These methods, however, are tailored specifically to image deconvolution. Moreover, they handle only the noise- or kernel-blind case, not the fully blind one.
{ "cite_N": [ "@cite_35", "@cite_16", "@cite_1" ], "mid": [ "", "2125566565", "2741183886" ], "abstract": [ "", "Typical blur from camera shake often deviates from the standard uniform convolutional assumption, in part because of problematic rotations which create greater blurring away from some unknown center point. Consequently, successful blind deconvolution for removing shake artifacts requires the estimation of a spatially-varying or non-uniform blur operator. Using ideas from Bayesian inference and convex analysis, this paper derives a simple non-uniform blind deblurring algorithm with a spatially-adaptive image penalty. Through an implicit normalization process, this penalty automatically adjust its shape based on the estimated degree of local blur and image structure such that regions with large blur or few prominent edges are discounted. Remaining regions with modest blur and revealing edges therefore dominate on average without explicitly incorporating structure-selection heuristics. The algorithm can be implemented using an optimization strategy that is virtually tuning-parameter free and simpler than existing methods, and likely can be applied in other settings such as dictionary learning. Detailed theoretical analysis and empirical comparisons on real images serve as validation.", "We present a novel approach to noise-blind deblurring, the problem of deblurring an image with known blur, but unknown noise level. We introduce an efficient and robust solution based on a Bayesian framework using a smooth generalization of the 0-1 loss. A novel bound allows the calculation of very high-dimensional integrals in closed form. It avoids the degeneracy of Maximum a-Posteriori (MAP) estimates and leads to an effective noise-adaptive scheme. Moreover, we drastically accelerate our algorithm by using Majorization Minimization (MM) without introducing any approximation or boundary artifacts. We further speed up convergence by turning our algorithm into a neural network termed GradNet, which is highly parallelizable and can be efficiently trained. We demonstrate that our noise-blind formulation can be integrated with different priors and significantly improves existing deblurring algorithms in the noise-blind and in the known-noise case. Furthermore, GradNet leads to state-of-the-art performance across different noise levels, while retaining high computational efficiency." ] }
1709.03652
2754207934
Android embodies security mechanisms at both OS and application level. In this platform application security is built primarily upon a system of permissions which specify restrictions on the operations a particular process can perform. The critical role of these security mechanisms makes them a prime target for (formal) verification. We present an idealized model of a reference monitor of the novel mechanisms of Android 6 (and further), where it is possible to grant permissions at run time. Using the programming language of the proof-assistant Coq we have developed a functional implementation of the reference validation mechanism and certified its correctness with respect to the specified reference monitor. Several properties concerning the permission model of Android 6 and its security mechanisms have been formally formulated and proved. Applying the program extraction mechanism provided by Coq we have also derived a certified Haskell prototype of the reference validation mechanism.
Several analyses have been carried out concerning the security of the Android system @cite_15 @cite_8 @cite_0 @cite_6 @cite_10 . Few works, however, pay attention to the formal aspects of the permission-enforcing framework. In particular, Shin @cite_19 @cite_22 build a formal framework that represents the Android permission system and that, like ours, is developed in Coq . However, that work does not consider several aspects of the platform covered in our model, namely, the different types of components, the interaction between a running instance and the system, the read/write (R/W) operations on a content provider, the semantics of the permission delegation mechanism and novel aspects of the security model, such as the management of runtime permissions.
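For intuition only (and not the certified Coq development itself), the run-time permission behavior being modeled can be sketched as a small executable check; the permission names and the two-map state below are hypothetical simplifications.

```python
# Executable sketch of a reference validation function in the style of
# Android 6+: an action is allowed only if the permission was declared
# in the manifest and, for "dangerous" permissions, granted at run time.
DANGEROUS = {"READ_CONTACTS", "CAMERA", "ACCESS_FINE_LOCATION"}

def check_permission(app, perm, declared, granted):
    """declared: perms in the app manifest; granted: runtime user grants."""
    if perm not in declared.get(app, set()):
        return False                            # undeclared: always deny
    if perm in DANGEROUS:
        return perm in granted.get(app, set())  # needs a runtime grant
    return True                                 # normal perms: install-time grant

declared = {"mail": {"READ_CONTACTS", "INTERNET"}}
granted = {"mail": set()}                       # user has granted nothing yet
assert not check_permission("mail", "READ_CONTACTS", declared, granted)
granted["mail"].add("READ_CONTACTS")            # user grants at run time
assert check_permission("mail", "READ_CONTACTS", declared, granted)
```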
{ "cite_N": [ "@cite_22", "@cite_8", "@cite_6", "@cite_0", "@cite_19", "@cite_15", "@cite_10" ], "mid": [ "192281934", "2105412867", "1912565424", "201827489", "2156368183", "", "2150587160" ], "abstract": [ "", "Android is the first mass-produced consumer-market open source mobile platform that allows developers to easily create applications and users to readily install them. However, giving users the ability to install third-party applications poses serious security concerns. While the existing security mechanism in Android allows a mobile phone user to see which resources an application requires, she has no choice but to allow access to all the requested permissions if she wishes to use the applications. There is no way of granting some permissions and denying others. Moreover, there is no way of restricting the usage of resources based on runtime constraints such as the location of the device or the number of times a resource has been previously used. In this paper, we present Apex -- a policy enforcement framework for Android that allows a user to selectively grant permissions to applications as well as impose constraints on the usage of resources. We also describe an extended package installer that allows the user to set these constraints through an easy-to-use interface. Our enforcement framework is implemented through a minimal change to the existing Android code base and is backward compatible with the current security mechanism.", "Modern browsers and smartphone operating systems treat applications as mutually untrusting, potentially malicious principals. Applications are (1) isolated except for explicit IPC or inter-application communication channels and (2) unprivileged by default, requiring user permission for additional privileges. Although inter-application communication supports useful collaboration, it also introduces the risk of permission redelegation. Permission re-delegation occurs when an application with permissions performs a privileged task for an application without permissions. This undermines the requirement that the user approve each application's access to privileged devices and data. We discuss permission re-delegation and demonstrate its risk by launching real-world attacks on Android system applications; several of the vulnerabilities have been confirmed as bugs. We discuss possible ways to address permission redelegation and present IPC Inspection, a new OS mechanism for defending against permission re-delegation. IPC Inspection prevents opportunities for permission redelegation by reducing an application's permissions after it receives communication from a less privileged application. We have implemented IPC Inspection for a browser and Android, and we show that it prevents the attacks we found in the Android system applications.", "The widespread adoption of Android devices has attracted the attention of a growing computer security audience. Fundamental weaknesses and subtle design flaws of the Android architecture have been identified, studied and fixed, mostly through techniques from data-flow analysis, runtime protection mechanisms, or changes to the operating system. This paper complements this research by developing a framework for the analysis of Android applications based on typing techniques. We introduce a formal calculus for reasoning on the Android inter-component communication API and a type-and-effect system to statically prevent privilege escalation attacks on well-typed components. 
Drawing on our abstract framework, we develop a prototype implementation of Lintent, a security type-checker for Android applications integrated with the Android Development Tools suite. We finally discuss preliminary experiences with our tool, which highlight real attacks on existing applications.", "This paper proposes a formal model of the Android permission scheme. We describe the scheme specifying entities and relationships, and provide a state-based model which includes the behavior specification of permission authorization and the interactions between application components. We also show how we can logically confirm the security of the specified system. Utilizing a theorem prover, we can verify security with given security requirements based on mechanically checked proofs. The proposed model can be used as a reference model when the scheme is implemented in a different embedded platform, or when we extend the current scheme with additional constraints or elements. We demonstrate the use of the verifiable specification through finding a security vulnerability in the Android system. To our knowledge, this is the first formalization of the permission scheme enforced by the Android framework.", "", "Several works have recently shown that Android’s security architecture cannot prevent many undesired behaviors that compromise the integrity of applications and the privacy of their data. This paper makes two main contributions to the body of research on Android security: first, it develops a formal framework for analyzing Android-style security mechanisms; and, second, it describes the design and implementation of Sorbet, an enforcement system that enables developers to use permissions to specify secrecy and integrity policies. Our formal framework is composed of an abstract model with several specific instantiations. The model enables us to formally define some desired security properties, which we can prove hold on Sorbet but not on Android. We implement Sorbet on top of Android 2.3.7, test it on a Nexus S phone, and demonstrate its usefulness through a case study." ] }
1709.03675
2754738066
The gap between sensing patterns of different face modalities remains a challenging problem in heterogeneous face recognition (HFR). This paper proposes an adversarial discriminative feature learning framework to close the sensing gap via adversarial learning on both raw-pixel space and compact feature space. This framework integrates cross-spectral face hallucination and discriminative feature learning into an end-to-end adversarial network. In the pixel space, we make use of generative adversarial networks to perform cross-spectral face hallucination. An elaborate two-path model is introduced to alleviate the lack of paired images, which gives consideration to both global structures and local textures. In the feature space, an adversarial loss and a high-order variance discrepancy loss are employed to measure the global and local discrepancy between two heterogeneous distributions respectively. These two losses enhance domain-invariant feature learning and modality independent noise removing. Experimental results on three NIR-VIS databases show that our proposed approach outperforms state-of-the-art HFR methods, without requiring of complex network or large-scale training dataset.
One class of approaches uses data synthesis to map data from one modality into another, so that the similarity of heterogeneous data from different domains can be measured. In @cite_31 , a local-geometry-preserving nonlinear method is proposed to generate a pseudo-sketch from a face photo. In @cite_4 , a canonical correlation analysis (CCA) based multi-variate mapping algorithm is proposed to reconstruct a 3D model from a single 2D NIR image. In @cite_34 , multi-scale Markov Random Field (MRF) models are extended to synthesize a sketch drawing from a given face photo and vice versa. In @cite_40 , a cross-spectrum face mapping method is proposed to transform NIR and VIS data into each other's type. Many works @cite_2 @cite_29 resort to coupled or joint dictionary learning to reconstruct face images and then perform face recognition. However, large amounts of pairwise multi-view data are essential for these synthesis-based methods, making training images very difficult to collect. In @cite_38 , a patch mining strategy is designed to collect aligned image patches, and VIS faces are then produced from NIR images through a deep learning approach.
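The CCA-based mapping in @cite_4 can be illustrated with a short, self-contained sketch: learn maximally correlated projections of paired samples from two modalities, then carry a new sample into the shared space. The whitening-based formulation and the regularizer eps are standard CCA choices assumed here, not the paper's exact procedure.

```python
# An illustrative NumPy sketch of CCA between two paired modalities
# (e.g., NIR features X and 3D/VIS features Y): whiten each view, take
# the SVD of the whitened cross-covariance, and keep the top-k pairs.
import numpy as np

def cca(X, Y, k, eps=1e-6):
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Cxx = Xc.T @ Xc / len(X) + eps * np.eye(X.shape[1])  # regularized covariances
    Cyy = Yc.T @ Yc / len(Y) + eps * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / len(X)
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx)).T        # whitening transforms
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy)).T
    U, _, Vt = np.linalg.svd(Wx.T @ Cxy @ Wy)
    return Wx @ U[:, :k], Wy @ Vt.T[:, :k]               # projections A, B

# Usage: given paired training sets X, Y, compute A, B = cca(X, Y, k);
# a new sample x is mapped as (x - X.mean(0)) @ A and matched against
# samples of the other modality projected with B.
```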
{ "cite_N": [ "@cite_38", "@cite_4", "@cite_29", "@cite_40", "@cite_2", "@cite_31", "@cite_34" ], "mid": [ "2953112575", "2117113028", "1913277147", "1607277929", "2120824855", "2141345255", "2153288431" ], "abstract": [ "Surveillance cameras today often capture NIR (near infrared) images in low-light environments. However, most face datasets accessible for training and verification are only collected in the VIS (visible light) spectrum. It remains a challenging problem to match NIR to VIS face images due to the different light spectrum. Recently, breakthroughs have been made for VIS face recognition by applying deep learning on a huge amount of labeled VIS face samples. The same deep learning approach cannot be simply applied to NIR face recognition for two main reasons: First, much limited NIR face images are available for training compared to the VIS spectrum. Second, face galleries to be matched are mostly available only in the VIS spectrum. In this paper, we propose an approach to extend the deep learning breakthrough for VIS face recognition to the NIR spectrum, without retraining the underlying deep models that see only VIS faces. Our approach consists of two core components, cross-spectral hallucination and low-rank embedding, to optimize respectively input and output of a VIS deep model for cross-spectral face recognition. Cross-spectral hallucination produces VIS faces from NIR images through a deep learning approach. Low-rank embedding restores a low-rank structure for faces deep features across both NIR and VIS spectrum. We observe that it is often equally effective to perform hallucination to input NIR images or low-rank embedding to output deep features for a VIS deep model for cross-spectral recognition. When hallucination and low-rank embedding are deployed together, we observe significant further improvement; we obtain state-of-the-art accuracy on the CASIA NIR-VIS v2.0 benchmark, without the need at all to re-train the recognition system.", "In this paper, we propose a new approach for face shape recovery from a single image. A single near infrared (NIR) image is used as the input, and a mapping from the NIR tensor space to 3D tensor space, learned by using statistical learning, is used for the shape recovery. In the learning phase, the two tensor models are constructed for NIR and 3D images respectively, and a canonical correlation analysis (CCA) based multi-variate mapping from NIR to 3D faces is learned from a given training set of NIR-3D face pairs. In the reconstruction phase, given an NIR face image, the depth map is computed directly using the learned mapping with the help of tensor models. Experimental results are provided to evaluate the accuracy and speed of the method. The work provides a practical solution for reliable and fast shape recovery and modeling of 3D objects.", "A lot of real-world data is spread across multiple domains. Handling such data has been a challenging task. Heterogeneous face biometrics has begun to receive attention in recent years. In real-world scenarios, many surveillance cameras capture data in the NIR (near infrared) spectrum. However, most datasets accessible to law enforcement have been collected in the VIS (visible light) domain. Thus, there exists a need to match NIR to VIS face images. In this paper, we approach the problem by developing a method to reconstruct VIS images in the NIR domain and vice-versa. 
This approach is more applicable to real-world scenarios since it does not involve having to project millions of VIS database images into learned common subspace for subsequent matching. We present a cross-spectral joint l 0 minimization based dictionary learning approach to learn a mapping function between the two domains. One can then use the function to reconstruct facial images between the domains. Our method is open set and can reconstruct any face not present in the training data. We present results on the CASIA NIR-VIS v2.0 database and report state-of-the-art results.", "Face images captured in different spectral bands, e.g. , in visual (VIS) and near infrared (NIR), are said to be heterogeneous. Although a person's face looks different in heterogeneous images, it should be classified as being from the same individual. In this paper, we present a new method, called face analogy , in the analysis-by-synthesis framework, for heterogeneous face mapping, that is, transforming face images from one type to another, and thereby performing heterogeneous face matching. Experiments show promising results.", "In various computer vision applications, often we need to convert an image in one style into another style for better visualization, interpretation and recognition; for examples, up-convert a low resolution image to a high resolution one, and convert a face sketch into a photo for matching, etc. A semi-coupled dictionary learning (SCDL) model is proposed in this paper to solve such cross-style image synthesis problems. Under SCDL, a pair of dictionaries and a mapping function will be simultaneously learned. The dictionary pair can well characterize the structural domains of the two styles of images, while the mapping function can reveal the intrinsic relationship between the two styles' domains. In SCDL, the two dictionaries will not be fully coupled, and hence much flexibility can be given to the mapping function for an accurate conversion across styles. Moreover, clustering and image nonlocal redundancy are introduced to enhance the robustness of SCDL. The proposed SCDL model is applied to image super-resolution and photo-sketch synthesis, and the experimental results validated its generality and effectiveness in cross-style image synthesis.", "Most face recognition systems focus on photo-based face recognition. In this paper, we present a face recognition system based on face sketches. The proposed system contains two elements: pseudo-sketch synthesis and sketch recognition. The pseudo-sketch generation method is based on local linear preserving of geometry between photo and sketch images, which is inspired by the idea of locally linear embedding. The nonlinear discriminate analysis is used to recognize the probe sketch from the synthesized pseudo-sketches. Experimental results on over 600 photo-sketch pairs show that the performance of the proposed method is encouraging.", "In this paper, we propose a novel face photo-sketch synthesis and recognition method using a multiscale Markov Random Fields (MRF) model. Our system has three components: 1) given a face photo, synthesizing a sketch drawing; 2) given a face sketch drawing, synthesizing a photo; and 3) searching for face photos in the database based on a query sketch drawn by an artist. It has useful applications for both digital entertainment and law enforcement. We assume that faces to be studied are in a frontal pose, with normal lighting and neutral expression, and have no occlusions. 
To synthesize sketch photo images, the face region is divided into overlapping patches for learning. The size of the patches decides the scale of local face structures to be learned. From a training set which contains photo-sketch pairs, the joint photo-sketch model is learned at multiple scales using a multiscale MRF model. By transforming a face photo to a sketch (or transforming a sketch to a photo), the difference between photos and sketches is significantly reduced, thus allowing effective matching between the two in face sketch recognition. After the photo-sketch transformation, in principle, most of the proposed face photo recognition approaches can be applied to face sketch recognition in a straightforward way. Extensive experiments are conducted on a face sketch database including 606 faces, which can be downloaded from our Web site (http: mmlab.ie.cuhk.edu.hk facesketch.html)." ] }
1709.03675
2754738066
The gap between sensing patterns of different face modalities remains a challenging problem in heterogeneous face recognition (HFR). This paper proposes an adversarial discriminative feature learning framework to close the sensing gap via adversarial learning on both raw-pixel space and compact feature space. This framework integrates cross-spectral face hallucination and discriminative feature learning into an end-to-end adversarial network. In the pixel space, we make use of generative adversarial networks to perform cross-spectral face hallucination. An elaborate two-path model is introduced to alleviate the lack of paired images, which gives consideration to both global structures and local textures. In the feature space, an adversarial loss and a high-order variance discrepancy loss are employed to measure the global and local discrepancy between two heterogeneous distributions respectively. These two losses enhance domain-invariant feature learning and modality independent noise removing. Experimental results on three NIR-VIS databases show that our proposed approach outperforms state-of-the-art HFR methods, without requiring of complex network or large-scale training dataset.
As mentioned before, our work is also related to adversarial learning. GANs @cite_13 have achieved great success in many computer vision applications, including image style transfer @cite_32 @cite_6 , image generation @cite_1 @cite_26 , image super-resolution @cite_37 , and object detection @cite_42 @cite_15 . Adversarial learning provides a simple yet efficient way to fit a target distribution via the min-max two-player game between a generator and a discriminator. Motivated by this, we introduce adversarial learning into NIR-VIS face hallucination and domain-invariant feature learning, aiming to close the sensing gap of heterogeneous data in pixel space and feature space simultaneously.
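The feature-space side of this min-max game can be sketched as follows: a domain discriminator tries to tell NIR features from VIS features while the feature extractor is updated to fool it. The placeholder architectures below are assumptions, and the paper's variance-discrepancy term is omitted from this sketch.

```python
# A hedged PyTorch sketch of domain-adversarial feature alignment:
# the discriminator classifies the domain of a feature, and the
# feature extractor is trained with the flipped labels to fool it.
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_net = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 64))
domain_disc = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

def adversarial_losses(x_nir, x_vis):
    f_nir, f_vis = feat_net(x_nir), feat_net(x_vis)
    logits = torch.cat([domain_disc(f_nir), domain_disc(f_vis)]).squeeze(1)
    labels = torch.cat([torch.zeros(len(x_nir)), torch.ones(len(x_vis))])
    d_loss = F.binary_cross_entropy_with_logits(logits, labels)      # discriminator
    g_loss = F.binary_cross_entropy_with_logits(logits, 1 - labels)  # fool it
    return d_loss, g_loss

# In training one would alternate: step the discriminator on d_loss
# (with features detached), then step the feature extractor on g_loss.
```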
{ "cite_N": [ "@cite_37", "@cite_26", "@cite_42", "@cite_1", "@cite_32", "@cite_6", "@cite_15", "@cite_13" ], "mid": [ "2523714292", "2964337551", "2951649776", "2567101557", "2962793481", "2552465644", "2952815469", "2099471712" ], "abstract": [ "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.", "Photorealistic frontal view synthesis from a single face image has a wide range of applications in the field of face recognition. Although data-driven deep learning methods have been proposed to address this problem by seeking solutions from ample face data, this problem is still challenging because it is intrinsically ill-posed. This paper proposes a Two-Pathway Generative Adversarial Network (TP-GAN) for photorealistic frontal view synthesis by simultaneously perceiving global structures and local details. Four landmark located patch networks are proposed to attend to local textures in addition to the commonly used global encoderdecoder network. Except for the novel architecture, we make this ill-posed problem well constrained by introducing a combination of adversarial loss, symmetry loss and identity preserving loss. The combined loss function leverages both frontal face distribution and pre-trained discriminative deep face models to guide an identity preserving inference of frontal views from profiles. Different from previous deep learning methods that mainly rely on intermediate features for recognition, our method directly leverages the synthesized identity preserving image for downstream tasks like face recognition and attribution estimation. 
Experimental results demonstrate that our method not only presents compelling perceptual results but also outperforms state-of-the-art results on large pose face recognition.", "Detecting small objects is notoriously challenging due to their low resolution and noisy representation. Existing object detection pipelines usually detect small objects through learning representations of all the objects at multiple scales. However, the performance gain of such ad hoc architectures is usually limited to pay off the computational cost. In this work, we address the small object detection problem by developing a single architecture that internally lifts representations of small objects to \"super-resolved\" ones, achieving similar characteristics as large objects and thus more discriminative for detection. For this purpose, we propose a new Perceptual Generative Adversarial Network (Perceptual GAN) model that improves small object detection through narrowing representation difference of small objects from the large ones. Specifically, its generator learns to transfer perceived poor representations of the small objects to super-resolved ones that are similar enough to real large objects to fool a competing discriminator. Meanwhile its discriminator competes with the generator to identify the generated representation and imposes an additional perceptual requirement - generated representations of small objects must be beneficial for detection purpose - on the generator. Extensive evaluations on the challenging Tsinghua-Tencent 100K and the Caltech benchmark well demonstrate the superiority of Perceptual GAN in detecting small objects, including traffic signs and pedestrians, over well-established state-of-the-arts.", "With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator's output using unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training: (i) a 'self-regularization' term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data.", "Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples.
Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.", "We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.", "How do we learn an object detector that is invariant to occlusions and deformations? Our current solution is to use a data-driven strategy -- collect large-scale datasets which have object instances under different conditions. The hope is that the final classifier can use these examples to learn invariances. But is it really possible to see all the occlusions in a dataset? We argue that like categories, occlusions and object deformations also follow a long-tail. Some occlusions and deformations are so rare that they hardly happen; yet we want to learn a model invariant to such occurrences. In this paper, we propose an alternative solution. We propose to learn an adversarial network that generates examples with occlusions and deformations. The goal of the adversary is to generate examples that are difficult for the object detector to classify. In our framework both the original detector and adversary are learned in a joint manner. Our experimental results indicate a 2.3% mAP boost on VOC07 and a 2.6% mAP boost on VOC2012 object detection challenge compared to the Fast-RCNN pipeline. We also release the code for this paper.", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation.
There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples." ] }
1709.03741
2754490690
Predicting macroscopic influences of drugs on the human body, such as efficacy and toxicity, is a central problem of small-molecule based drug discovery. Molecules can be represented as undirected graphs, and we can utilize graph convolution networks to predict molecular properties. However, graph convolutional networks and other graph neural networks all focus on learning node-level representations rather than graph-level representations. Previous works simply sum the feature vectors of all nodes in the graph to obtain the graph feature vector for drug prediction. In this paper, we introduce a dummy super node that is connected with all nodes in the graph by directed edges as the representation of the graph, and modify the graph operation to help the dummy super node learn graph-level features. Thus, we can handle graph-level classification and regression in the same way as node-level classification and regression. In addition, we apply focal loss to address class imbalance in drug datasets. Experiments on MoleculeNet show that our method can effectively improve the performance of molecular property prediction.
@cite_5 propose a graph convolutional network that operates directly on graphs and can learn better representations than conventional methods like ECFP. @cite_21 combine a graph convolutional network with residual LSTM embedding for one-shot learning on drug discovery. @cite_22 @cite_10 apply neural networks to predict molecular energies, reducing the prediction error to 1 kcal/mol. @cite_11 propose an atomic neural network to predict the binding free energy of a subset of protein-ligand complexes found in the PDBBind dataset. @cite_20 apply massively multitask neural architectures to synthesize information from many distinct biological sources. @cite_6 introduce MoleculeNet, a large-scale benchmark for molecular machine learning, which curates multiple public datasets, establishes metrics for evaluation, and offers high-quality open-source implementations.
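As a concrete illustration of the readout step at issue in this record, the following minimal NumPy sketch contrasts the sum readout used by previous works with routing information through a dummy super node; the single message-passing step, the array names, and the weight matrices are illustrative assumptions of ours, not the paper's actual implementation.

    import numpy as np

    def sum_readout(node_feats):
        # Baseline readout: sum all node feature vectors into one graph vector.
        return node_feats.sum(axis=0)

    def super_node_readout(node_feats, adj, w_node, w_super):
        # One illustrative message-passing step over the real nodes (ReLU).
        h = np.maximum(adj @ node_feats @ w_node, 0.0)
        # A dummy super node receives a directed edge from every real node,
        # so it aggregates over all of them; its state is the graph feature.
        return np.maximum(h.sum(axis=0) @ w_super, 0.0)

    rng = np.random.default_rng(0)
    x = rng.normal(size=(5, 8))                   # 5 nodes, 8-d features
    a = (rng.random((5, 5)) < 0.4).astype(float)  # random adjacency matrix
    print(sum_readout(x).shape)                   # (8,)
    print(super_node_readout(x, a, rng.normal(size=(8, 8)),
                             rng.normal(size=(8, 8))).shape)  # (8,)

Either readout yields a fixed-size graph vector, which is what lets graph-level classification and regression reuse node-level machinery.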
{ "cite_N": [ "@cite_22", "@cite_21", "@cite_6", "@cite_5", "@cite_10", "@cite_20", "@cite_11" ], "mid": [ "2620906374", "", "2949858440", "2406128552", "", "1738019091", "2604306554" ], "abstract": [ "Neural networks are being used to make new types of empirical chemical models as inexpensive as force fields, but with accuracy similar to the ab initio methods used to build them. In this work, we present a neural network that predicts the energies of molecules as a sum of intrinsic bond energies. The network learns the total energies of the popular GDB9 database to a competitive MAE of 0.94 kcal mol on molecules outside of its training set, is naturally linearly scaling, and applicable to molecules consisting of thousands of bonds. More importantly, it gives chemical insight into the relative strengths of bonds as a function of their molecular environment, despite only being trained on total energy information. We show that the network makes predictions of relative bond strengths in good agreement with measured trends and human predictions. A Bonds-in-Molecules Neural Network (BIM-NN) learns heuristic relative bond strengths like expert synthetic chemists, and compares well with ab initio bond order mea...", "", "Molecular machine learning has been maturing rapidly over the last few years. Improved methods and the presence of larger datasets have enabled machine learning algorithms to make increasingly accurate predictions about molecular properties. However, algorithmic progress has been limited due to the lack of a standard benchmark to compare the efficacy of proposed methods; most new algorithms are benchmarked on different datasets making it challenging to gauge the quality of proposed methods. This work introduces MoleculeNet, a large scale benchmark for molecular machine learning. MoleculeNet curates multiple public datasets, establishes metrics for evaluation, and offers high quality open-source implementations of multiple previously proposed molecular featurization and learning algorithms (released as part of the DeepChem open source library). MoleculeNet benchmarks demonstrate that learnable representations are powerful tools for molecular machine learning and broadly offer the best performance. However, this result comes with caveats. Learnable representations still struggle to deal with complex tasks under data scarcity and highly imbalanced classification. For quantum mechanical and biophysical datasets, the use of physics-aware featurizations can be more important than choice of particular learning algorithm.", "Numerous important problems can be framed as learning from graph data. We propose a framework for learning convolutional neural networks for arbitrary graphs. These graphs may be undirected, directed, and with both discrete and continuous node and edge attributes. Analogous to image-based convolutional networks that operate on locally connected regions of the input, we present a general approach to extracting locally connected regions from graphs. Using established benchmark data sets, we demonstrate that the learned feature representations are competitive with state of the art graph kernels and that their computation is highly efficient.", "", "Massively multitask neural architectures provide a learning framework for drug discovery that synthesizes information from many distinct biological sources. 
To train these architectures at scale, we gather large amounts of data from public sources to create a dataset of nearly 40 million measurements across more than 200 biological targets. We investigate several aspects of the multitask framework by performing a series of empirical studies and obtain some interesting results: (1) massively multitask networks obtain predictive accuracies significantly better than single-task methods, (2) the predictive power of multitask networks improves as additional tasks and data are added, (3) the total amount of data and the total number of tasks both contribute significantly to multitask improvement, and (4) multitask networks afford limited transferability to tasks not in the training set. Our results underscore the need for greater data sharing and further algorithmic innovation to accelerate the drug discovery process.", "Empirical scoring functions based on either molecular force fields or cheminformatics descriptors are widely used, in conjunction with molecular docking, during the early stages of drug discovery to predict potency and binding affinity of a drug-like molecule to a given target. These models require expert-level knowledge of physical chemistry and biology to be encoded as hand-tuned parameters or features rather than allowing the underlying model to select features in a data-driven procedure. Here, we develop a general 3-dimensional spatial convolution operation for learning atomic-level chemical interactions directly from atomic coordinates and demonstrate its application to structure-based bioactivity prediction. The atomic convolutional neural network is trained to predict the experimentally determined binding affinity of a protein-ligand complex by direct calculation of the energy associated with the complex, protein, and ligand given the crystal structure of the binding pose. Non-covalent interactions present in the complex that are absent in the protein-ligand sub-structures are identified and the model learns the interaction strength associated with these features. We test our model by predicting the binding free energy of a subset of protein-ligand complexes found in the PDBBind dataset and compare with state-of-the-art cheminformatics and machine learning-based approaches. We find that all methods achieve experimental accuracy and that atomic convolutional networks either outperform or perform competitively with the cheminformatics based methods. Unlike all previous protein-ligand prediction systems, atomic convolutional networks are end-to-end and fully-differentiable. They represent a new data-driven, physics-based deep learning model paradigm that offers a strong foundation for future improvements in structure-based bioactivity prediction." ] }
1709.03741
2754490690
Predicting macroscopic influences of drugs on the human body, such as efficacy and toxicity, is a central problem of small-molecule based drug discovery. Molecules can be represented as undirected graphs, and we can utilize graph convolution networks to predict molecular properties. However, graph convolutional networks and other graph neural networks all focus on learning node-level representations rather than graph-level representations. Previous works simply sum the feature vectors of all nodes in the graph to obtain the graph feature vector for drug prediction. In this paper, we introduce a dummy super node that is connected with all nodes in the graph by directed edges as the representation of the graph, and modify the graph operation to help the dummy super node learn graph-level features. Thus, we can handle graph-level classification and regression in the same way as node-level classification and regression. In addition, we apply focal loss to address class imbalance in drug datasets. Experiments on MoleculeNet show that our method can effectively improve the performance of molecular property prediction.
Graph-based neural networks were previously introduced in @cite_24 @cite_4 as a form of recurrent neural network. @cite_17 modify the graph neural network by using gated recurrent units and modern optimization techniques, and then extend it to output sequences. Spectral graph convolutional neural networks were introduced by @cite_25 and later extended by @cite_18 with fast localized convolutions. @cite_8 introduce a number of simplifications to spectral graph convolutional neural networks that improve scalability and classification performance in large-scale networks. @cite_1 propose a novel model for learning graph representations, which generates a low-dimensional vector representation for each vertex by capturing the graph structural information. @cite_29 propose a framework for learning convolutional neural networks for arbitrary graphs and apply it to molecule classification.
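For concreteness, the simplification of @cite_8 mentioned above reduces each spectral convolution to the propagation rule

    H^{(l+1)} = \sigma( \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} H^{(l)} W^{(l)} ), \qquad \tilde{A} = A + I,

where A is the adjacency matrix, \tilde{D} is the degree matrix of \tilde{A}, H^{(l)} holds the node features at layer l, and W^{(l)} is a learned weight matrix. The dummy super node introduced in this paper modifies exactly this kind of node-level update so that one designated node accumulates a graph-level feature.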
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_8", "@cite_29", "@cite_1", "@cite_24", "@cite_25", "@cite_17" ], "mid": [ "", "2116341502", "2519887557", "", "2415243320", "1501856433", "1662382123", "2244807774" ], "abstract": [ "", "Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs. In this paper, we propose a new neural network model, called graph neural network (GNN) model, that extends existing neural network methods for processing the data represented in graph domains. This GNN model, which can directly process most of the practically useful types of graphs, e.g., acyclic, cyclic, directed, and undirected, implements a function tau(G,n) isin IRm that maps a graph G and one of its nodes n into an m-dimensional Euclidean space. A supervised learning algorithm is derived to estimate the parameters of the proposed GNN model. The computational cost of the proposed algorithm is also considered. Some experimental results are shown to validate the proposed learning algorithm, and to demonstrate its generalization capabilities.", "We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.", "", "In this paper, we propose a novel model for learning graph representations, which generates a low-dimensional vector representation for each vertex by capturing the graph structural information. Different from other previous research efforts, we adopt a random surfing model to capture graph structural information directly, instead of using the sampling-based method for generating linear sequences proposed by (2014). The advantages of our approach will be illustrated from both theorical and empirical perspectives. We also give a new perspective for the matrix factorization method proposed by Levy and Goldberg (2014), in which the pointwise mutual information (PMI) matrix is considered as an analytical solution to the objective function of the skip-gram model with negative sampling proposed by (2013). Unlike their approach which involves the use of the SVD for finding the low-dimensitonal projections from the PMI matrix, however, the stacked denoising autoencoder is introduced in our model to extract complex features and model non-linearities. To demonstrate the effectiveness of our model, we conduct experiments on clustering and visualization tasks, employing the learned vertex representations as features. Empirical results on datasets of varying sizes show that our model outperforms other stat-of-the-art models in such tasks.", "In several applications the information is naturally represented by graphs. Traditional approaches cope with graphical data structures using a preprocessing phase which transforms the graphs into a set of flat vectors. 
However, in this way, important topological information may be lost and the achieved results may heavily depend on the preprocessing stage. This paper presents a new neural model, called graph neural network (GNN), capable of directly processing graphs. GNNs extend recursive neural networks and can be applied to most of the practically useful kinds of graphs, including directed, undirected, labelled and cyclic graphs. A learning algorithm for GNNs is proposed and some experiments are discussed which assess the properties of the model.", "Convolutional Neural Networks are extremely efficient architectures in image and audio recognition tasks, thanks to their ability to exploit the local translational invariance of signal classes over their domain. In this paper we consider possible generalizations of CNNs to signals defined on more general domains without the action of a translation group. In particular, we propose two constructions, one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian. We show through experiments that for low-dimensional graphs it is possible to learn convolutional layers with a number of parameters independent of the input size, resulting in efficient deep architectures.", "Graph-structured data appears frequently in domains including chemistry, natural language semantics, social networks, and knowledge bases. In this work, we study feature learning techniques for graph-structured inputs. Our starting point is previous work on Graph Neural Networks (, 2009), which we modify to use gated recurrent units and modern optimization techniques and then extend to output sequences. The result is a flexible and broadly useful class of neural network models that has favorable inductive biases relative to purely sequence-based models (e.g., LSTMs) when the problem is graph-structured. We demonstrate the capabilities on some simple AI (bAbI) and graph algorithm learning tasks. We then show it achieves state-of-the-art performance on a problem from program verification, in which subgraphs need to be matched to abstract data structures." ] }
1709.03851
2754125855
Humans focus attention on different face regions when recognizing face attributes. Most existing face attribute classification methods use the whole image as input. Moreover, some of these methods rely on fiducial landmarks to provide defined face parts. In this paper, we propose a cascade network that simultaneously learns to localize face regions specific to attributes and performs attribute classification without alignment. First, a weakly-supervised face region localization network is designed to automatically detect regions (or parts) specific to attributes. Then multiple part-based networks and a whole-image-based network are separately constructed and combined together by the region switch layer and attribute relation layer for final attribute classification. A multi-net learning method and hint-based model compression are further proposed to obtain an effective localization model and a compact classification model, respectively. Our approach achieves significantly better performance than state-of-the-art methods on the unaligned CelebA dataset, reducing the classification error by 30.9%.
Despite being trained with only image-level labels, deep Convolutional Neural Networks (CNNs) have shown remarkable object localization ability in recent works @cite_19 @cite_24 @cite_16 . Zhou et al. (2016) proposed a class activation mapping method to localize objects with class labels only. The design of our face region localization network is motivated by this work. However, to fully utilize the correlations among different face attributes, our localization network is designed in a multi-task learning framework.
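The class activation mapping construction referenced above has a compact form worth restating: with f_k(x, y) the activation of channel k of the last convolutional layer at location (x, y), and w_k^c the weight connecting channel k (after global average pooling) to class c, the map for class c is

    M_c(x, y) = \sum_k w_k^c f_k(x, y),

so the spatial regions with large M_c are the ones most responsible for predicting class c; in this setting, those are the face regions specific to an attribute.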
{ "cite_N": [ "@cite_24", "@cite_19", "@cite_16" ], "mid": [ "2950328304", "1994488211", "2133324800" ], "abstract": [ "In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that can be applied to a variety of tasks. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1 top-5 error for object localization on ILSVRC 2014, which is remarkably close to the 34.2 top-5 error achieved by a fully supervised CNN approach. We demonstrate that our network is able to localize the discriminative image regions on a variety of tasks despite not being trained for them", "Successful methods for visual object recognition typically rely on training datasets containing lots of richly annotated images. Detailed image annotation, e.g. by object bounding boxes, however, is both expensive and often subjective. We describe a weakly supervised convolutional neural network (CNN) for object classification that relies only on image-level labels, yet can learn from cluttered scenes containing multiple objects. We quantify its object classification and object location prediction performance on the Pascal VOC 2012 (20 object classes) and the much larger Microsoft COCO (80 object classes) datasets. We find that the network (i) outputs accurate image-level labels, (ii) predicts approximate locations (but not extents) of objects, and (iii) performs comparably to its fully-supervised counterparts using object bounding box annotation for training.", "Object category localization is a challenging problem in computer vision. Standard supervised training requires bounding box annotations of object instances. This time-consuming annotation process is sidestepped in weakly supervised learning. In this case, the supervised information is restricted to binary labels that indicate the absence presence of object instances in the image, without their locations. We follow a multiple-instance learning approach that iteratively trains the detector and infers the object locations in the positive training images. Our main contribution is a multi-fold multiple instance learning procedure, which prevents training from prematurely locking onto erroneous object locations. This procedure is particularly important when using high-dimensional representations, such as Fisher vectors and convolutional neural network features. We also propose a window refinement method, which improves the localization accuracy by incorporating an objectness prior. We present a detailed experimental evaluation using the PASCAL VOC 2007 dataset, which verifies the effectiveness of our approach." ] }
1709.03851
2754125855
Humans focus attention on different face regions when recognizing face attributes. Most existing face attribute classification methods use the whole image as input. Moreover, some of these methods rely on fiducial landmarks to provide defined face parts. In this paper, we propose a cascade network that simultaneously learns to localize face regions specific to attributes and performs attribute classification without alignment. First, a weakly-supervised face region localization network is designed to automatically detect regions (or parts) specific to attributes. Then multiple part-based networks and a whole-image-based network are separately constructed and combined together by the region switch layer and attribute relation layer for final attribute classification. A multi-net learning method and hint-based model compression are further proposed to obtain an effective localization model and a compact classification model, respectively. Our approach achieves significantly better performance than state-of-the-art methods on the unaligned CelebA dataset, reducing the classification error by 30.9%.
To obtain a compact model, several methods, including network distillation @cite_0 and parameter pruning @cite_27 , have been proposed. Recently, knowledge distillation @cite_2 has been shown to be very effective for teaching a small student model. However, it cannot be directly applied to our problem: the teacher net uses soft labels, which carry rich information about class ambiguity, to supervise the student net, whereas for attribute classification the output has only one logit for each attribute. Thus, a new loss function based on hints is proposed to replace soft-label supervision.
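For context, the soft-label supervision of @cite_2 that is argued to be inapplicable here trains the student on a temperature-softened teacher distribution, typically with a loss of the form

    L = (1 - \alpha) \mathrm{CE}(y, \sigma(z_s)) + \alpha T^2 \mathrm{KL}( \sigma(z_t / T) \| \sigma(z_s / T) ),

where z_s and z_t are the student and teacher logits, T is the temperature, \sigma is the softmax, and \alpha is a mixing weight (the exact weighting convention varies across implementations). With only one logit per attribute there is no class distribution to soften, which is why a hint-based loss on intermediate representations is proposed instead.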
{ "cite_N": [ "@cite_0", "@cite_27", "@cite_2" ], "mid": [ "", "2114766824", "1821462560" ], "abstract": [ "", "We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application.", "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel." ] }
1709.03480
2755694566
A commonly used technique for managing AI complexity in real-time strategy (RTS) games is to use action and/or state abstractions. High-level abstractions can often lead to good strategic decision making, but tactical decision quality may suffer due to lost details. A competing method is to sample the search space which often leads to good tactical performance in simple scenarios, but poor high-level planning. We propose to use a deep convolutional neural network (CNN) to select among a limited set of abstract action choices, and to utilize the remaining computation time for game tree search to improve low level tactics. The CNN is trained by supervised learning on game states labelled by Puppet Search, a strategic search algorithm that uses action abstractions. The network is then used to select a script --- an abstract action --- to produce low level actions for all units. Subsequently, the game tree search algorithm improves the tactical actions of a subset of units using a limited view of the game state only considering units close to opponent units. Experiments in the microRTS game show that the combined algorithm results in higher win-rates than either of its two independent components and other state-of-the-art microRTS agents. To the best of our knowledge, this is the first successful application of a convolutional network to play a full RTS game on standard game maps, as previous work has focused on sub-problems, such as combat, or on very small maps.
These results have sparked interest in applying deep learning to games with larger state and action spaces. Some limited success has been found in micromanagement tasks for RTS games @cite_10 , where a deep network managed to slightly outperform a set of baseline heuristics. Additional encouraging results were achieved for the task of evaluating RTS game states @cite_3 . The network significantly outperforms other state-of-the-art approaches at predicting game outcomes. When it is used in adversarial search algorithms, they perform significantly better than when using simpler evaluation functions that are three to four orders of magnitude faster.
{ "cite_N": [ "@cite_10", "@cite_3" ], "mid": [ "2518713116", "2589422499" ], "abstract": [ "We consider scenarios from the real-time strategy game StarCraft as new benchmarks for reinforcement learning algorithms. We propose micromanagement tasks, which present the problem of the short-term, low-level control of army members during a battle. From a reinforcement learning point of view, these scenarios are challenging because the state-action space is very large, and because there is no obvious feature representation for the state-action evaluation function. We describe our approach to tackle the micromanagement scenarios with deep neural network controllers from raw state features given by the game engine. In addition, we present a heuristic reinforcement learning algorithm which combines direct exploration in the policy space and backpropagation. This algorithm allows for the collection of traces for learning using deterministic policies, which appears much more efficient than, for example, -greedy exploration. Experiments show that with this algorithm, we successfully learn non-trivial strategies for scenarios with armies of up to 15 agents, where both Q-learning and REINFORCE struggle.", "Real-time strategy (RTS) games, such as Blizzard's StarCraft, are fast paced war simulation games in which players have to manage economies, control many dozens of units, and deal with uncertainty about opposing unit locations in real-time. Even in perfect information settings, constructing strong AI systems has been difficult due to enormous state and action spaces and the lack of good state evaluation functions and high-level action abstractions. To this day, good human players are still handily defeating the best RTS game AI systems, but this may change in the near future given the recent success of deep convolutional neural networks (CNNs) in computer Go, which demonstrated how networks can be used for evaluating complex game states accurately and to focus look-ahead search. In this paper we present a CNN for RTS game state evaluation that goes beyond commonly used material based evaluations by also taking spatial relations between units into account. We evaluate the CNN's performance by comparing it with various other evaluation functions by means of tournaments played by several state-of-the-art search algorithms. We find that, despite its much slower evaluation speed, on average the CNN based search performs significantly better compared to simpler but faster evaluations. These promising initial results together with recent advances in hierarchical search suggest that dominating human players in RTS games may not be far off." ] }
1709.03332
2755850239
Distributed Stream Processing Systems (DSPS) like Apache Storm and Spark Streaming enable composition of continuous dataflows that execute persistently over data streams. They are used by Internet of Things (IoT) applications to analyze sensor data from Smart City cyber-infrastructure, and make active utility management decisions. As the ecosystem of such IoT applications that leverage shared urban sensor streams continues to grow, applications will perform duplicate pre-processing and analytics tasks. This offers the opportunity to collaboratively reuse the outputs of overlapping dataflows, thereby improving resource efficiency. In this paper, we propose an approach that, given a submitted dataflow, identifies the intersection of reusable tasks and streams from a collection of running dataflows to form a merged dataflow. Similar algorithms to unmerge dataflows when they are removed are also proposed. We implement these algorithms for the popular Apache Storm DSPS, and validate their performance and resource savings for 35 synthetic dataflows based on public OPMW workflows with diverse arrival and departure distributions, and on 21 real IoT dataflows from RIoTBench.
There are two broad categories of "reuse" research that are relevant to our problem: @cite_28 and @cite_14 .
{ "cite_N": [ "@cite_28", "@cite_14" ], "mid": [ "2003272624", "1994326726" ], "abstract": [ "With the widespread adoption of location tracking technologies like GPS, the domain of intelligent transportation services has seen growing interest in the last few years. Services in this domain make use of real-time location-based data from a variety of sources, combine this data with static location-based data such as maps and points of interest databases, and provide useful information to end-users. Some of the major challenges in this domain include i) scalability, in terms of processing large volumes of real-time and static data; ii) extensibility, in terms of being able to add new kinds of analyses on the data rapidly, and iii) user interaction, in terms of being able to support different kinds of one-time and continuous queries from the end-user. In this paper, we demonstrate the use of IBM InfoSphere Streams, a scalable stream processing platform, for tackling these challenges. We describe a prototype system that generates dynamic, multi-faceted views of transportation information for the city of Stockholm, using real vehicle GPS and road-network data. The system also continuously derives current traffic statistics, and provides useful value-added information such as shortest-time routes from real-time observed and inferred traffic conditions. Our performance experiments illustrate the scalability of the system. For instance, our system can process over 120000 incoming GPS points per second, combine it with a map containing over 600,000 links, continuously generate different kinds of traffic statistics and answer user queries.", "Data management is growing in complexity as large-scale applications take advantage of the loosely coupled resources brought together by grid middleware and by abundant storage capacity. Metadata describing the data products used in and generated by these applications is essential to disambiguate the data and enable reuse. Data provenance, one kind of metadata, pertains to the derivation history of a data product starting from its original sources.In this paper we create a taxonomy of data provenance characteristics and apply it to current research efforts in e-science, focusing primarily on scientific workflow approaches. The main aspect of our taxonomy categorizes provenance systems based on why they record provenance, what they describe, how they represent and store provenance, and ways to disseminate it. The survey culminates with an identification of open research problems in the field." ] }
1709.03332
2755850239
Distributed Stream Processing Systems (DSPS) like Apache Storm and Spark Streaming enable composition of continuous dataflows that execute persistently over data streams. They are used by Internet of Things (IoT) applications to analyze sensor data from Smart City cyber-infrastructure, and make active utility management decisions. As the ecosystem of such IoT applications that leverage shared urban sensor streams continues to grow, applications will perform duplicate pre-processing and analytics tasks. This offers the opportunity to collaboratively reuse the outputs of overlapping dataflows, thereby improving resource efficiency. In this paper, we propose an approach that, given a submitted dataflow, identifies the intersection of reusable tasks and streams from a collection of running dataflows to form a merged dataflow. Similar algorithms to unmerge dataflows when they are removed are also proposed. We implement these algorithms for the popular Apache Storm DSPS, and validate their performance and resource savings for 35 synthetic dataflows based on public OPMW workflows with diverse arrival and departure distributions, and on 21 real IoT dataflows from RIoTBench.
Prior works @cite_18 @cite_24 explore the problem of composing streaming applications in a wide-area P2P network, along with the reuse of streams and tasks. Their DAGs of tasks carry ontologically unique names for streams; newly submitted DAGs have their stream names matched against the existing streams, and identical streams are reused. Rather than just a lookup by stream names, we offer a more rigorous graph-based approach to distinctively identify equivalent tasks and their output streams. We also limit our work to a local cluster rather than wide-area networks, and hence do not require the distributed probing mechanism they use to propagate state and connectivity. We can also centrally coordinate the reuse within the data center. Lastly, they do not adequately examine the removal of a submitted DAG -- as we saw, demerging can cause a cascading impact on the deployed DAGs.
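To make the contrast with name-based lookup concrete, here is a minimal sketch of the graph-based equivalence idea; this is our own illustration under simplified assumptions (hashable op names and configs, a static DAG), not the paper's actual algorithm. Two tasks are treated as interchangeable only if their operation and configuration match and all of their upstream inputs are recursively equivalent, which a flat stream-name lookup cannot guarantee.

    class Task:
        # A task in a dataflow DAG: an operation, its config, upstream tasks.
        def __init__(self, op, config, inputs=()):
            self.op, self.config, self.inputs = op, config, tuple(inputs)

    def signature(task):
        # Recursively fingerprint op, config, and all input signatures, so
        # equal signatures imply structurally equivalent upstream subgraphs.
        return (task.op, task.config,
                tuple(sorted(signature(t) for t in task.inputs)))

    def find_reusable(running_tasks, new_dag_sinks):
        # Map tasks of a newly submitted DAG onto equivalent running tasks.
        index = {signature(t): t for t in running_tasks}
        reuse = {}
        def visit(task):
            sig = signature(task)
            if sig in index:
                reuse[task] = index[sig]   # reuse this task's output stream
            else:
                for upstream in task.inputs:
                    visit(upstream)        # parts of the subgraph may match
        for sink in new_dag_sinks:
            visit(sink)
        return reuse

A dictionary keyed on such structural signatures gives a centralized coordinator a direct way to compute the intersection between a submitted dataflow and the running collection.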
{ "cite_N": [ "@cite_24", "@cite_18" ], "mid": [ "1513872491", "2140078503" ], "abstract": [ "Many emerging on-line data analysis applications require applying continuous query operations such as correlation, aggregation, and filtering to data streams in real-time. Distributed stream processing systems allow in-network stream processing to achieve better scalability and quality-of-service (QoS) provision. In this paper we present Synergy, a distributed stream processing middleware that provides sharing-aware component composition. Synergy enables efficient reuse of both data streams and processing components, while composing distributed stream processing applications with QoS demands. Synergy provides a set of fully distributed algorithms to discover and evaluate the reusability of available data streams and processing components when instantiating new stream applications. For QoS provision, Synergy performs QoS impact projection to examine whether the shared processing can cause QoS violations on currently running applications. We have implemented a prototype of the Synergy middleware and evaluated its performance on both PlanetLab and simulation testbeds. The experimental results show that Synergy can achieve much better resource utilization and QoS provision than previously proposed schemes, by judiciously sharing streams and processing components during application composition.", "Many emerging online data analysis applications require applying continuous query operations such as correlation, aggregation, and filtering to data streams in real time. Distributed stream processing systems allow in-network stream processing to achieve better scalability and quality-of-service (QoS) provision. In this paper, we present Synergy, a novel distributed stream processing middleware that provides automatic sharing-aware component composition capability. Synergy enables efficient reuse of both result streams and processing components, while composing distributed stream processing applications with QoS demands. It provides a set of fully distributed algorithms to discover and evaluate the reusability of available result streams and processing components when instantiating new stream applications. Specifically, Synergy performs QoS impact projection to examine whether the shared processing can cause QoS violations on currently running applications. The QoS impact projection algorithm can handle different types of streams including both regular traffic and bursty traffic. If no existing processing components can be reused, Synergy dynamically deploys new components at strategic locations to satisfy new application requests. We have implemented a prototype of the Synergy middleware and evaluated its performance on both PlanetLab and simulation testbeds. The experimental results show that Synergy can achieve much better resource utilization and QoS provisioning than previously proposed schemes, by judiciously sharing streams and components during application composition." ] }
1709.03426
1989506746
Estimation of model parameters in a dynamic system can be significantly improved with the choice of experimental trajectory. For general nonlinear dynamic systems, finding globally “best” trajectories is typically not feasible; however, given an initial estimate of the model parameters and an initial trajectory, we present a continuous-time optimization method that produces a locally optimal trajectory for parameter estimation in the presence of measurement noise. The optimization algorithm is formulated to find system trajectories that improve a norm on the Fisher information matrix (FIM). A double-pendulum cart apparatus is used to numerically and experimentally validate this technique. In simulation, the optimized trajectory increases the minimum eigenvalue of the FIM by three orders of magnitude, compared with the initial trajectory. Experimental results show that this optimized trajectory translates to an order-of-magnitude improvement in the parameter estimate error in practice.
Since the design of an experimental trajectory has a wide range of potential uses, there have been a number of contributions to the area from different fields. A large amount of literature on optimal experimental design exists in the fields of biology @cite_30 @cite_11 @cite_15 , chemistry @cite_8 , and systems @cite_31 @cite_28 @cite_25 @cite_35 . Many of these results focus on particular applications to experiments specific to their respective fields; however, the underlying principles of information theory remain the same.
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_31", "@cite_8", "@cite_28", "@cite_15", "@cite_25", "@cite_11" ], "mid": [ "2167485946", "2029867572", "2039992026", "2036407535", "1988210542", "2021344274", "2169207653", "2066233746" ], "abstract": [ "To obtain a systems-level understanding of a biological system, the authors conducted quantitative dynamic experiments from which the system structure and the parameters have to be deduced. Since biological systems have to cope with different environmental conditions, certain properties are often robust with respect to variations in some of the parameters. Hence, it is important to use optimal experimental design considerations in advance of the experiments to improve the information content of the measurements. Using the MAP-Kinase pathway as an example, the authors present a simulation study investigating the application of different optimality criteria. It is demonstrated that experimental design significantly improves the parameter estimation accuracy and also reveals difficulties in parameter estimation due to robustness.", "This article reviews the development of experiment design in the field of identification of dynamical systems, from the early work of the seventies on input design for open loop identification to the developments of the last decade that were spurred by the research on identification for control. While the early work focused entirely on criteria based on the asymptotic parameter covariance, the results of the last decade aim at minimizing a wide range of possible criteria, including measures of the estimated transfer function, or of functions of this estimated transfer function. Two important recent developments are the solution of the experiment design problem for closed loop identification, and the formulation and solution of the dual optimal design problem in which the cost of identification is minimized subject to a quality constraint on the estimated model. We shall conclude this survey with new results on the optimal closed loop experiment design problem, where the optimization is performed jointly with respect to the controller and the spectrum of the external excitation.", "This article presents advances in optimal experiment design, which are intended to improve the parameter identification of nonlinear state space models. Instead of using a sequence of samples from one or just a few coherent sequences, the idea of identifying nonlinear dynamic models at distinct points in the state space is considered. In this way, the placement of the experiment points is fully flexible with respect to the set of reachable points. Also, a method for model-based generation of prediction errors is proposed, which is used to compute an a-priori estimate of the sample covariance of the prediction error. This covariance matrix may be used to approximate the Fisher information matrix a-priori. The availability of the Fisher matrix a-priori is a prerequisite for experiment optimization with respect to covariance in the parameter estimates. This work is driven by the problem of parameter identification of hydraulic models. There are methods for hydraulic systems regarding the estimation of parameters from experimental data, but the choice of experiments has not been treated adequately yet. 
A hydraulic servo system actuating a Stewart platform serves as an illustrative example to which the methods above are applied.", "Due to the wide use and key importance of mathematical models in process engineering, experiment design is becoming an essential tool for the rapid building and validation of these mechanistic models. Several experiment design techniques have been developed in the past and applied successfully to a wide range of systems. This paper is focused on the so-called model-based design of experiments (DOE) and aims at presenting an up-to-date state of the art in this important field. In order to provide an adequate and thorough background to this technique, a detailed description of the key elements of a model identification procedure (the model itself, the experiment, the statistical tools, etc.) and the major steps of a model-building strategy are introduced before focusing on the experiment design for parameter precision, which is the topic of this survey. An overview and critical analysis of the state of the art in this sector are proposed. The main contributions to model-based experiment design procedures in terms of novel criteria, mathematical formulations and numerical implementations are highlighted. A list of the most recent applications of these techniques in various fields (from chemical kinetics to biological modelling) is then presented highlighting the key role of model-based DOE in the process engineering area.", "It is well known that if we intend to use a minimum variance control strategy, which is designed based on a model obtained from an identification experiment, the best experiment which can be performed on the system to determine such a model (subject to output power constraints, or for some specific model structures) is to use the true minimum variance controller. This result has been derived under several circumstances, first using asymptotic (in model order) variance expressions but also more recently for ARMAX models of finite order. In this paper we re-approach this problem using a recently developed expression for the variance of parametric frequency function estimates. This allows a geometric analysis of the problem and the generalization of the aforementioned finite model order ARMAX results to general linear model structures.", "Abstract The investigation of enzyme kinetics is increasingly important, especially for finding active substances and understanding intracellular behaviors. Therefore, the determination of an enzyme's kinetic parameters is crucial. For this a systematic experimental design procedure is necessary to avoid wasting time and resources. The parameter estimation error of a Michaelis–Menten enzyme kinetic process is analysed analytically to reduce the search area as well as numerically to specify the optimum for parameter estimation. From analytical analysis of the Fisher information matrix the fact is obtained, that an enzyme feed will not improve the estimation process, but substrate feeding is favorable with small volume flow. Unconstrained and constrained process conditions are considered. If substrate fed-batch process design is used instead of pure batch experiments the improvements of the Cramer–Rao lower bound of the variance of parameter estimation error reduces to 82% for μ_max and to 60% for K_m of the batch values on average.", "We consider the problem of choosing a set of k sensor measurements, from a set of m possible or potential sensor measurements, that minimizes the error in estimating some parameters.
Solving this problem by evaluating the performance for each of the C(m, k) possible choices of sensor measurements is not practical unless m and k are small. In this paper, we describe a heuristic, based on convex optimization, for approximately solving this problem. Our heuristic gives a subset selection as well as a bound on the best performance that can be achieved by any selection of k sensor measurements. There is no guarantee that the gap between the performance of the chosen subset and the performance bound is always small; but numerical experiments suggest that the gap is small in many cases. Our heuristic method requires on the order of m^3 operations; for m = 1000 possible sensors, we can carry out sensor selection in a few seconds on a 2-GHz personal computer.", "A method of optimal experimental design for parameter estimation in unstructured growth models is presented. The approach is based on a method suggested by Munack (1991) for application in fed-batch processes. In a critical analysis of this method, special emphasis is given to the model validity, because unstructured growth models often are not valid under transient conditions. In consequence, a combined object function has been introduced, which considers model validity and the accuracy of the kinetic parameters to be estimated. The application of this method for fed-batch processes leads to satisfactory results. Investigations of different fed-batch strategies regarding model validity and the quality of parameter estimation are presented. In addition, an experimental verification has been performed with fermentations of the yeast Trichosporon cutaneum." ] }
1709.03426
1989506746
Estimation of model parameters in a dynamic system can be significantly improved with the choice of experimental trajectory. For general nonlinear dynamic systems, finding globally “best” trajectories is typically not feasible; however, given an initial estimate of the model parameters and an initial trajectory, we present a continuous-time optimization method that produces a locally optimal trajectory for parameter estimation in the presence of measurement noise. The optimization algorithm is formulated to find system trajectories that improve a norm on the Fisher information matrix (FIM). A double-pendulum cart apparatus is used to numerically and experimentally validate this technique. In simulation, the optimized trajectory increases the minimum eigenvalue of the FIM by three orders of magnitude, compared with the initial trajectory. Experimental results show that this optimized trajectory translates to an order-of-magnitude improvement in the parameter estimate error in practice.
A common metric used in these areas of experimental design---also the key metric in this paper---is the Fisher information matrix computed from observations of the system trajectory @cite_20 . Metrics on the Fisher information are used as a cost function in many optimization problems, including work by Swevers on "exciting" trajectories @cite_23 . This work, as well as related works @cite_33 @cite_10 , synthesizes trajectories for nonlinear systems that can be recast as linear systems with respect to the parameters.
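To fix notation for this metric: for a trajectory observed through a measurement model y_k = h(x_k; \theta) + w_k with zero-mean Gaussian noise w_k ~ N(0, \Sigma) (a standard setting, stated here as an assumption), the Fisher information matrix is

    I(\theta) = \sum_k ( \partial h(x_k; \theta) / \partial \theta )^\top \Sigma^{-1} ( \partial h(x_k; \theta) / \partial \theta ),

and scalar criteria on I(\theta) such as its minimum eigenvalue (E-optimality), trace (A-optimality), or log-determinant (D-optimality) are the kinds of norms on the FIM that these trajectory optimizations maximize.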
{ "cite_N": [ "@cite_10", "@cite_23", "@cite_33", "@cite_20" ], "mid": [ "2159537373", "2078856056", "2102154372", "2802577565" ], "abstract": [ "A common way to identify the inertial parameters of robots is to use a linear model in relation to the parameters and standard least-squares (LS) techniques. This article presents a method to generate exciting identification trajectories in order to minimize the effect of noise and error modeling on the LS solution. Using nonlinear optimization techniques, the condition number of a matrix W obtained from the energy model is minimized, and the scaling of its terms is carried out. An example of a three-degree-of-freedom robot is presented.", "This paper discusses experimental robot identification based on a statistical framework. It presents a new approach toward the design of optimal robot excitation trajectories, and formulates the maximum-likelihood estimation of dynamic robot model parameters. The differences between the new design approach and the existing approaches lie in the parameterization of the excitation trajectory and in the optimization criterion. The excitation trajectory for each joint is a finite Fourier series. This approach guarantees periodic excitation which is advantageous because it allows: 1) time-domain data averaging; 2) estimation of the characteristics of the measurement noise, which is valuable in the case of maximum-likelihood parameter estimation. In addition, the use of finite Fourier series allows calculation of the joint velocities and acceleration in an analytic way from the measured position response, and allows specification of the bandwidth of the excitation trajectories. The optimization criterion is the uncertainty on the estimated parameters or a lower bound for it, instead of the often used condition of the parameter estimation problem. Simulations show that this criterion yields parameter estimates with smaller uncertainty bounds than trajectories optimized according to the classical criterion. Experiments on an industrial robot show that the presented trajectory design and maximum-likelihood parameter estimation approaches complement each other to make a practicable robot identification technique which yields accurate robot models.", "When designing an identification experiment for a system described by nonlinear functions such as those of manipulator dynamics, it is necessary to consider whether the excitation is sufficient to provide an accurate estimate of the parameters in the presence of experimental noise. It is shown that the convergence rate and noise immunity of a parameter identifi cation experiment depend directly on the condition number of the input correlation matrix, a measure of excitation. The sensitivity of an identification experiment to unmodeled dynamics is also studied; a dimensionless measure of this sensitivity—bias susceptibility—is proposed and related to excitation. The issue of how exciting a trajectory may be is addressed, and a method is presented to maximize the exci tation. Two identification experiments reported in the litera ture are studied; analysis of these experiments shows that intuitively selected trajectories may provide poor excitation, and considerable improvement results from employing the opt...", "Preparations. Unbiasedness. Equivariance. Global properties. Large-sample theory. Asymptotic optimality. Author index. Subject index." ] }
1709.03426
1989506746
Estimation of model parameters in a dynamic system can be significantly improved with the choice of experimental trajectory. For general nonlinear dynamic systems, finding globally “best” trajectories is typically not feasible; however, given an initial estimate of the model parameters and an initial trajectory, we present a continuous-time optimization method that produces a locally optimal trajectory for parameter estimation in the presence of measurement noise. The optimization algorithm is formulated to find system trajectories that improve a norm on the Fisher information matrix (FIM). A double-pendulum cart apparatus is used to numerically and experimentally validate this technique. In simulation, the optimized trajectory increases the minimum eigenvalue of the FIM by three orders of magnitude, compared with the initial trajectory. Experimental results show that this optimized trajectory translates to an order-of-magnitude improvement in the parameter estimate error in practice.
Further research has resulted in optimal design methods for general nonlinear systems. In work by Emery @cite_17 , least-squares and maximum-likelihood estimation techniques are combined with Fisher information to optimize the experimental trajectories. In this case and a number of others, the dynamics are handled as a discretized, constrained optimization problem @cite_22 @cite_0 .
{ "cite_N": [ "@cite_0", "@cite_22", "@cite_17" ], "mid": [ "2033574792", "2021436740", "2088102099" ], "abstract": [ "This paper is concerned with the input design problem for a class of structured nonlinear models. This class contains models described by an interconnection of known linear dynamic systems and unknown static nonlinearities. Many widely used model structures are included in this class. The model class considered naturally accommodates a priori knowledge in terms of signal interconnections. Under certain structural conditions, the identification problem for this model class reduces to standard least squares. We treat the input design problem in this situation. An expression for the expected estimate variance is derived. A method for synthesizing an informative input sequence that minimizes an upper bound on this variance is developed. This reduces to a convex optimization problem. Features of the solution include parameterization of the expected estimate variance by the input distribution, and a graph-based method for input generation.", "This paper surveys the field of optimal input design for parameter estimation as it has developed over the last two decades. Many of the developments covered are only recent and have not appeared in the open literature elsewhere. After a brief introduction, the paper discusses the historical background of the subject both in the engineering and in the statistical literature. The concepts of optimality and input design are then discussed, followed by a derivation of the Fisher information matrix for multiinput multioutput systems with process noise. The design procedures are divided into the categories of time-domain methods and frequency-domain methods, with the former being more general, but also more time consuming (computationally). Several extensions to state constraints, continuous-time systems, etc., are discussed. A number of examples are given to illustrate the nature of optimal inputs. The results on time-domain synthesis with state constraints and their relationship to \"dual control\" are new.", "Optimal experiment design is the definition of the conditions under which an experiment is to be conducted in order to maximize the accuracy with which the results are obtained. This paper summarizes a number of methods by which the parameters of the mathematical model of the system are estimated and describes the application of the Fisher information matrix. Examples are given for thermal property estimation in which the estimation is affected both by measurement noise, which is present during any experiment, but also by uncertainties in the parameters of the model used to describe the system." ] }
1709.03426
1989506746
Estimation of model parameters in a dynamic system can be significantly improved with the choice of experimental trajectory. For general nonlinear dynamic systems, finding globally “best” trajectories is typically not feasible; however, given an initial estimate of the model parameters and an initial trajectory, we present a continuous-time optimization method that produces a locally optimal trajectory for parameter estimation in the presence of measurement noise. The optimization algorithm is formulated to find system trajectories that improve a norm on the Fisher information matrix (FIM). A double-pendulum cart apparatus is used to numerically and experimentally validate this technique. In simulation, the optimized trajectory increases the minimum eigenvalue of the FIM by three orders of magnitude, compared with the initial trajectory. Experimental results show that this optimized trajectory translates to an order-of-magnitude improvement in the parameter estimate error in practice.
Discretization of the dynamics has several problems. First and foremost, an arbitrary choice about the discretization needs to be made. Second, adaptive time-stepping methods cannot be used, and a discretization appropriate for the initial trajectory cannot be expected to be appropriate for the final trajectory. Lastly, discretization can lead to high-dimensional constrained optimizations (dimensions of @math to @math are common in practical problems) that are impractical to solve numerically. To avoid discretizing the continuous dynamics, a class of methods has been developed that relies on sets of basis functions to synthesize optimal controls for the system @cite_2 @cite_9 @cite_34 . These methods allow the full trajectory to be optimized on a continuous-time domain; however, the optimization is still restricted to a finite set of basis-function coefficients. One example is the Fourier basis, which is used to create a class of trajectories over which the optimization can be performed @cite_4 .
{ "cite_N": [ "@cite_9", "@cite_34", "@cite_4", "@cite_2" ], "mid": [ "2151186798", "2141537501", "2035587171", "2100101211" ], "abstract": [ "In this paper we adressed the problem of finding exciting trajectories for the identification of manipulator link inertia parameters. This can be formulated as a constraint nonlinear optimization problem. The new approach in the presented method is the parameterization of the trajectories with optimized B-splines. Experiments are carried out on a 7 joint Light-Weight robot with torque sensoring in each joint. Thus, unmodeled joint friction and noisy motor current measurements must not be taken into account. The estimated dynamic model is verified on a different validation trajectory. The results show a clear improvement of the estimated dynamic model compared to a CAD-valued model.", "This paper concerns the problem of dynamic parameter identification of robot manipulators and proposes a closed-loop identification procedure using modified Fourier series (MFS) as exciting trajectories. First, a static continuous friction model is involved to model joint friction for realizable friction compensation in controller design. Second, MFS satisfying the boundary conditions are firstly designed as periodic exciting trajectories. To minimize the sensitivity to measurement noise, the coefficients of MFS are optimized according to the condition number criterion. Moreover, to obtain accurate parameter estimates, the maximum likelihood estimation (MLE) method considering the influence of measurement noise is adopted. The proposed identification procedure has been implemented on the first three axes of the QIANJIANG-I 6-DOF robot manipulator. Experiment results verify the effectiveness of the proposed approach, and comparison between identification using MFS and that using finite Fourier series (FFS)...", "This paper describes a new approach to the parameterization of robot excitation trajectories for optimal robot identification. The trajectory parameterization is based on a combined Fourier series and polynomial functions. The coefficients of the Fourier series are optimized for minimal sensitivity of the identification to measurement disturbances, which is measured as the d-optimality criterion, taking into account motion constraints in joint and Cartesian space. This parameterization satisfies both the guarantees of convergence by adding terms and the matching of the boundary conditions. Application of the method for the identification of the CRS A465 industrial robot proves the validity of the proposed approach.", "This paper considers a recently proposed framework for experiment design in system identification for control. We study model based control design methods, such as Model Predictive Control, where the model is obtained by means of a prediction error system identification method. The degradation in control performance due to uncertainty in the model estimate is specified by an application cost function. The objective is to find a minimum variance input signal, to be used in system identification experiment, such that the control application specification is guaranteed with a given probability when using the estimated model in the control design. We provide insight in the potentials of this approach by finite impulse response model examples, for which it is possible to analytically solve the optimal input problem. The examples show how the control specifications directly affect the excitation conditions in the system identification experiment." ] }
1709.03426
1989506746
Estimation of model parameters in a dynamic system can be significantly improved with the choice of experimental trajectory. For general nonlinear dynamic systems, finding globally “best” trajectories is typically not feasible; however, given an initial estimate of the model parameters and an initial trajectory, we present a continuous-time optimization method that produces a locally optimal trajectory for parameter estimation in the presence of measurement noise. The optimization algorithm is formulated to find system trajectories that improve a norm on the Fisher information matrix (FIM). A double-pendulum cart apparatus is used to numerically and experimentally validate this technique. In simulation, the optimized trajectory increases the minimum eigenvalue of the FIM by three orders of magnitude, compared with the initial trajectory. Experimental results show that this optimized trajectory translates to an order-of-magnitude improvement in the parameter estimate error in practice.
In the robotics field, trajectory optimization and parameter identification algorithms have been developed for special classes of robotic systems. For serial robot arms and similarly connected systems, chain-based techniques and linear separation of parameters can be used @cite_12 @cite_36 . Techniques have also been adapted for parallel robots and manipulators @cite_5 @cite_3 @cite_27 . While these techniques perform well for the intended class of robots, we seek an algorithm that has the ability to work on general nonlinear systems, only requiring differentiability of the dynamics and some form of control authority.
{ "cite_N": [ "@cite_36", "@cite_3", "@cite_27", "@cite_5", "@cite_12" ], "mid": [ "2121287102", "2080405136", "2031704815", "2135879303", "2018911853" ], "abstract": [ "Dynamic Parameter Identification is a useful tool for developing and evaluating robot control strategies. However, a multi degree of freedom robot arm has many parameters, and the process of determining them is challenging. Much research has been done in this area and experimental methods have been applied on several robot arms. To our knowledge, there is currently no set of inertial parameters, either by modelling or by estimation, available for the CRS A460 A465 arm, a popular laboratory table top robot. In this paper we review and compare a number of methods for dynamic parameter identification and for generating trajectories suitable for estimating the identifiable dynamic parameters of a given robot. We then present a step by step process for dynamic parameter identification of a serial manipulator, and demonstrate this process by experimentally identifying the dynamic parameters of the CRS A460 robot.", "Utilizing the virtual work principle, this paper presents a method for the dynamic formulation of redundant and non-redundant parallel manipulators for dynamic parameter identification. In modeling, the selection of pivotal point and the computation of inertia force and moment about the pivotal point are more crucial. The selection principle of pivotal point and force transmission on a rigid body are studied. In order to validate the method, the linear form of dynamic models of a 3-DOF parallel manipulator with actuation redundancy and its corresponding non-redundant parallel manipulator are derived.", "Abstract In this paper, the dynamic parameters, both inertial and frictional, of a 3-DOF RPS parallel manipulator are identified considering two important issues: the physical feasibility of the identified inertial parameters and the use of nonlinear friction models in the identification process in order to model the friction phenomenon at robot joints. The dynamic model of the parallel manipulator is obtained starting from the Gibbs–Appell equations of motion along with the Gauss principle of Least Action, and these equations of motion are rewritten in a their linear form with respect to the inertial parameters of the mechanical system. At this point, in accordance with the friction model considered, either linear or nonlinear, two types of dynamic models are dealt with: the totally and the partially linear with respect to the parameters to be identified. In order to solve the identification problem when nonlinear friction models are included, a nonlinear constrained optimization problem will be formulated and solved, instead of the Least Square Method, which is valid only for linear identification problems. It must be mentioned that the above-mentioned optimization problem will include the physical feasibility of the identified parameters in its formulation. The proposed procedure will be verified against a virtual parallel manipulator and finally, experimental identification processes are carried out over an actual parallel manipulator and a comparison is made between the LSM and the optimization process in the case of linear friction models, and between the linear and nonlinear friction models in the optimization process.", "This paper deals with the experimental identification of the dynamic parameters of parallel machines. 
The dynamic parameters are estimated by using the weighted least squares solution of an over determined linear system obtained from the sampling of the dynamic model along a closed loop exciting trajectory. Experimental results are exhibited for the H4 robot, a fully parallel structure providing 3 degrees of freedom (DOF) in translation and 1 DOF in rotation. A comparative study is performed depending on the available measurements, i.e., different sensor locations (motor, end effector).", "The determination of the minimum set of inertial parameters of robots contributes to the reduction of the computational cost of the dynamic models and simplifies the identification of the inertial parameters. These parameters can be obtained from the classical inertial parameters by eliminating those that have no effect on the dynamic model and by regrouping some others. A direct method is presented for determining the minimum set of inertial parameters of serial robots. The method permits determination of most of the regrouped parameters by means of closed-form relations. >" ] }
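To make the "linear separation of parameters" property for serial chains concrete (a toy instance of ours, not taken from the cited papers): a damped pendulum obeys τ = (ml²)q̈ + c q̇ + (mgl)sin(q), which is linear in θ = [ml², c, mgl], so identification reduces to least squares on a regressor matrix built from the trajectory.

```python
# Least-squares identification of a damped pendulum's lumped parameters.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 5, 500)
q, qd, qdd = np.sin(t), np.cos(t), -np.sin(t)      # assumed measured trajectory
th_true = np.array([0.50, 0.10, 2.45])             # [m*l^2, c, m*g*l]

Y = np.column_stack([qdd, qd, np.sin(q)])          # regressor matrix
tau = Y @ th_true + 0.01 * rng.standard_normal(len(t))  # noisy torque data

th_hat, *_ = np.linalg.lstsq(Y, tau, rcond=None)
print(np.round(th_hat, 3))  # close to th_true; accuracy depends on excitation
```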
1709.03410
2754560798
Low-shot learning methods for image classification support learning from sparse data. We extend these techniques to support dense semantic image segmentation. Specifically, we train a network that, given a small set of annotated images, produces parameters for a Fully Convolutional Network (FCN). We use this FCN to perform dense pixel-level prediction on a test image for the new semantic class. Our architecture shows a 25% relative meanIoU improvement compared to the best baseline methods for one-shot segmentation on unseen classes in the PASCAL VOC 2012 dataset and is at least 3 times faster.
Weak and semi-supervised methods for Semantic Segmentation reduce the need for expensive pixel-level annotations, thus attracting recent interest. Weak supervision refers to training from coarse annotations like bounding boxes @cite_2 or image labels @cite_37 @cite_47 @cite_7 . A notable example is co-segmentation, where the goal is to find and segment co-occurring objects in images from the same semantic class @cite_42 @cite_23 . Many co-segmentation algorithms @cite_12 @cite_13 @cite_45 assume that object appearances in a batch are similar and either rely on hand-tuned low-level features or on high-level CNN features trained for different tasks or objects @cite_9 . In contrast, we meta-learn a network to produce a high-level representation of a new semantic class given a single labeled example. Semi-supervised approaches @cite_47 @cite_5 @cite_46 combine weak labels with a small set of pixel-level annotations. However, they assume a large set of weak labels for each of the desired objects. For instance, Pathak et al. @cite_24 use image-level annotations for all classes and images in the PASCAL 2012 training set @cite_6 , while we exclude all annotations of the testing classes from the PASCAL training set.
{ "cite_N": [ "@cite_37", "@cite_7", "@cite_46", "@cite_9", "@cite_42", "@cite_6", "@cite_24", "@cite_45", "@cite_23", "@cite_2", "@cite_5", "@cite_47", "@cite_13", "@cite_12" ], "mid": [ "1961881037", "1931270512", "2949847866", "2002754212", "2157244733", "", "2952004933", "", "", "2949086864", "2257483379", "1529410181", "", "1948751323" ], "abstract": [ "", "Multiple instance learning (MIL) can reduce the need for costly annotation in tasks such as semantic segmentation by weakening the required degree of supervision. We propose a novel MIL formulation of multi-class semantic segmentation learning by a fully convolutional network. In this setting, we seek to learn a semantic segmentation model from just weak image-level labels. The model is trained end-to-end to jointly optimize the representation while disambiguating the pixel-image label assignment. Fully convolutional training accepts inputs of any size, does not need object proposal pre-processing, and offers a pixelwise loss map for selecting latent instances. Our multi-class MIL loss exploits the further supervision given by images with multiple labels. We evaluate this approach through preliminary experiments on the PASCAL VOC segmentation challenge.", "We propose a novel deep neural network architecture for semi-supervised semantic segmentation using heterogeneous annotations. Contrary to existing approaches posing semantic segmentation as a single task of region-based classification, our algorithm decouples classification and segmentation, and learns a separate network for each task. In this architecture, labels associated with an image are identified by classification network, and binary segmentation is subsequently performed for each identified label in segmentation network. The decoupled architecture enables us to learn classification and segmentation networks separately based on the training data with image-level and pixel-wise class labels, respectively. It facilitates to reduce search space for segmentation effectively by exploiting class-specific activation maps obtained from bridging layers. Our algorithm shows outstanding performance compared to other semi-supervised approaches even with much less training images with strong annotations in PASCAL VOC dataset.", "There have been some recent efforts to build visual knowledge bases from Internet images. But most of these approaches have focused on bounding box representation of objects. In this paper, we propose to enrich these knowledge bases by automatically discovering objects and their segmentations from noisy Internet images. Specifically, our approach combines the power of generative modeling for segmentation with the effectiveness of discriminative models for detection. The key idea behind our approach is to learn and exploit top-down segmentation priors based on visual subcategories. The strong priors learned from these visual subcategories are then combined with discriminatively trained detectors and bottom up cues to produce clean object segmentations. Our experimental results indicate state-of-the-art performance on the difficult dataset introduced by [29] We have integrated our algorithm in NEIL for enriching its knowledge base [5]. As of 14th April 2014, NEIL has automatically generated approximately 500K segmentations using web data.", "We introduce the term cosegmentation which denotes the task of segmenting simultaneously the common parts of an image pair. A generative model for cosegmentation is presented. 
Inference in the model leads to minimizing an energy with an MRF term encoding spatial coherency and a global constraint which attempts to match the appearance histograms of the common parts. This energy has not been proposed previously and its optimization is challenging and NP-hard. For this problem a novel optimization scheme which we call trust region graph cuts is presented. We demonstrate that this framework has the potential to improve a wide range of research: Object driven image retrieval, video tracking and segmentation, and interactive image editing. The power of the framework lies in its generality, the common part can be a rigid non-rigid object (or scene), observed from different viewpoints or even similar objects of the same class.", "", "We present an approach to learn a dense pixel-wise labeling from image-level tags. Each image-level tag imposes constraints on the output labeling of a Convolutional Neural Network (CNN) classifier. We propose Constrained CNN (CCNN), a method which uses a novel loss function to optimize for any set of linear constraints on the output space (i.e. predicted label distribution) of a CNN. Our loss formulation is easy to optimize and can be incorporated directly into standard stochastic gradient descent optimization. The key idea is to phrase the training objective as a biconvex optimization for linear models, which we then relax to nonlinear deep networks. Extensive experiments demonstrate the generality of our new learning framework. The constrained loss yields state-of-the-art results on weakly supervised semantic image segmentation. We further demonstrate that adding slightly more supervision can greatly improve the performance of the learning algorithm.", "", "", "Recent leading approaches to semantic segmentation rely on deep convolutional networks trained with human-annotated, pixel-level segmentation masks. Such pixel-accurate supervision demands expensive labeling effort and limits the performance of deep networks that usually benefit from more training data. In this paper, we propose a method that achieves competitive accuracy but only requires easily obtained bounding box annotations. The basic idea is to iterate between automatically generating region proposals and training convolutional networks. These two steps gradually recover segmentation masks for improving the networks, and vise versa. Our method, called BoxSup, produces competitive results supervised by boxes only, on par with strong baselines fully supervised by masks under the same setting. By leveraging a large amount of bounding boxes, BoxSup further unleashes the power of deep convolutional networks and yields state-of-the-art results on PASCAL VOC 2012 and PASCAL-CONTEXT.", "We propose a novel weakly-supervised semantic segmentation algorithm based on Deep Convolutional Neural Network (DCNN). Contrary to existing weakly-supervised approaches, our algorithm exploits auxiliary segmentation annotations available for different categories to guide segmentations on images with only image-level class labels. To make segmentation knowledge transferrable across categories, we design a decoupled encoder-decoder architecture with attention model. In this architecture, the model generates spatial highlights of each category presented in images using an attention model, and subsequently performs binary segmentation for each highlighted region using decoder. 
Combining attention model, the decoder trained with segmentation annotations in different categories boosts accuracy of weakly-supervised semantic segmentation. The proposed algorithm demonstrates substantially improved performance compared to the state-of-theart weakly-supervised techniques in PASCAL VOC 2012 dataset when our model is trained with the annotations in 60 exclusive categories in Microsoft COCO dataset.", "Deep convolutional neural networks (DCNNs) trained on a large number of images with strong pixel-level annotations have recently significantly pushed the state-of-art in semantic image segmentation. We study the more challenging problem of learning DCNNs for semantic image segmentation from either (1) weakly annotated training data such as bounding boxes or image-level labels or (2) a combination of few strongly labeled and many weakly labeled images, sourced from one or multiple datasets. We develop Expectation-Maximization (EM) methods for semantic image segmentation model training under these weakly supervised and semi-supervised settings. Extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive results on the challenging PASCAL VOC 2012 image segmentation benchmark, while requiring significantly less annotation effort. We share source code implementing the proposed system at this https URL", "", "Recognition algorithms based on convolutional networks (CNNs) typically use the output of the last layer as a feature representation. However, the information in this layer may be too coarse spatially to allow precise localization. On the contrary, earlier layers may be precise in localization but will not capture semantics. To get the best of both worlds, we define the hypercolumn at a pixel as the vector of activations of all CNN units above that pixel. Using hypercolumns as pixel descriptors, we show results on three fine-grained localization tasks: simultaneous detection and segmentation [22], where we improve state-of-the-art from 49.7 mean APr [22] to 60.0, keypoint localization, where we get a 3.3 point boost over [20], and part labeling, where we show a 6.6 point gain over a strong baseline." ] }
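A rough sketch of the parameter-producing idea in the abstract of this record (our own heavily simplified construction; the layer sizes and the single 1x1-conv head are assumptions, not the paper's architecture): a conditioning branch embeds the annotated support image's features and emits the weights of a per-image 1x1 classifier applied densely to query features.

```python
# Toy "network that produces parameters for a segmentation head".
import torch
import torch.nn as nn

class OneShotHead(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.embed = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                   nn.Linear(feat_dim, feat_dim + 1))  # w and b

    def forward(self, support_feats, query_feats):
        wb = self.embed(support_feats)          # (B, feat_dim + 1)
        w, b = wb[:, :-1], wb[:, -1]
        # a per-image 1x1 convolution is an inner product with generated weights
        logits = torch.einsum('bchw,bc->bhw', query_feats, w) + b[:, None, None]
        return logits                           # dense foreground scores

head = OneShotHead()
s, qf = torch.randn(2, 64, 16, 16), torch.randn(2, 64, 32, 32)
print(head(s, qf).shape)  # torch.Size([2, 32, 32])
```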
1709.03247
2754793361
Convolutional highways are deep networks based on multiple stacked convolutional layers for feature preprocessing. We introduce an evolutionary algorithm (EA) for optimization of the structure and hyperparameters of convolutional highways and demonstrate the potential of this optimization setting on the well-known MNIST data set. The (1+1)-EA employs Rechenberg's mutation rate control, which adapts the optimization approach, and a niching mechanism to overcome local optima. An experimental study shows that the EA is capable of improving the state-of-the-art network contribution and of evolving highway networks from scratch.
The line of research on neuroevolution began in the nineties with many interesting approaches, most of which concentrated on the number of neurons and the structure of MLPs. One of the most famous contributions in this line of research is NEAT @cite_14 , which is able to evolve MLPs employing techniques like augmenting topologies and niching. Its successor HyperNEAT @cite_8 is able to evolve networks, but does not achieve state-of-the-art performance. Compositional pattern-producing networks (CPPN) @cite_16 assume that the general network structure is predefined but that its components are independent of each other. Fernando et al. @cite_3 extend CPPNs for autoencoders with a Lamarckian approach that inherits the learned weights. Loshchilov and Hutter @cite_13 employ CMA-ES to evolve the hyperparameters of convolutional networks, optimizing dropout and learning rates, batch sizes, numbers of filters, and numbers of units in dense layers. Suganuma et al. @cite_7 propose a genetic programming (GP) approach for designing convolutional networks, achieving results competitive with state-of-the-art convolutional networks. Recently, Real et al. (Google) @cite_10 invested exhaustive evolutionary search to evolve convolutional networks for image classification on CIFAR. LSTM cells have also been the subject of evolutionary architecture search by Józefowicz et al. (Google) @cite_11 .
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_8", "@cite_10", "@cite_3", "@cite_16", "@cite_13", "@cite_11" ], "mid": [ "2148872333", "2606006859", "2119814172", "", "", "2020399841", "2344075909", "581956982" ], "abstract": [ "", "The convolutional neural network (CNN), which is one of the deep learning models, has seen much success in a variety of computer vision tasks. However, designing CNN architectures still requires expert knowledge and a lot of trial and error. In this paper, we attempt to automatically construct CNN architectures for an image classification task based on Cartesian genetic programming (CGP). In our method, we adopt highly functional modules, such as convolutional blocks and tensor concatenation, as the node functions in CGP. The CNN structure and connectivity represented by the CGP encoding method are optimized to maximize the validation accuracy. To evaluate the proposed method, we constructed a CNN architecture for the image classification task with the CIFAR-10 dataset. The experimental result shows that the proposed method can be used to automatically find the competitive CNN architecture compared with state-of-the-art models.", "Research in neuroevolution---that is, evolving artificial neural networks (ANNs) through evolutionary algorithms---is inspired by the evolution of biological brains, which can contain trillions of connections. Yet while neuroevolution has produced successful results, the scale of natural brains remains far beyond reach. This article presents a method called hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) that aims to narrow this gap. HyperNEAT employs an indirect encoding called connective compositional pattern-producing networks (CPPNs) that can produce connectivity patterns with symmetries and repeating motifs by interpreting spatial patterns generated within a hypercube as connectivity patterns in a lower-dimensional space. This approach can exploit the geometry of the task by mapping its regularities onto the topology of the network, thereby shifting problem difficulty away from dimensionality to the underlying problem structure. Furthermore, connective CPPNs can represent the same connectivity pattern at any resolution, allowing ANNs to scale to new numbers of inputs and outputs without further evolution. HyperNEAT is demonstrated through visual discrimination and food-gathering tasks, including successful visual discrimination networks containing over eight million connections. The main conclusion is that the ability to explore the space of regular connectivity patterns opens up a new class of complex high-dimensional tasks to neuroevolution.", "", "", "Natural DNA can encode complexity on an enormous scale. Researchers are attempting to achieve the same representational efficiency in computers by implementing developmental encodings, i.e. encodings that map the genotype to the phenotype through a process of growth from a small starting point to a mature form. A major challenge in in this effort is to find the right level of abstraction of biological development to capture its essential properties without introducing unnecessary inefficiencies. In this paper, a novel abstraction of natural development, called Compositional Pattern Producing Networks (CPPNs), is proposed. 
Unlike currently accepted abstractions such as iterative rewrite systems and cellular growth simulations, CPPNs map to the phenotype without local interaction, that is, each individual component of the phenotype is determined independently of every other component. Results produced with CPPNs through interactive evolution of two-dimensional images show that such an encoding can nevertheless produce structural motifs often attributed to more conventional developmental abstractions, suggesting that local interaction may not be essential to the desirable properties of natural encoding in the way that is usually assumed.", "Hyperparameters of deep neural networks are often optimized by grid search, random search or Bayesian optimization. As an alternative, we propose to use the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), which is known for its state-of-the-art performance in derivative-free optimization. CMA-ES has some useful invariance properties and is friendly to parallel evaluations of solutions. We provide a toy example comparing CMA-ES and state-of-the-art Bayesian optimization algorithms for tuning the hyperparameters of a convolutional neural network for the MNIST dataset on 30 GPUs in parallel.", "The Recurrent Neural Network (RNN) is an extremely powerful sequence model that is often difficult to train. The Long Short-Term Memory (LSTM) is a specific RNN architecture whose design makes it much easier to train. While wildly successful in practice, the LSTM's architecture appears to be ad-hoc so it is not clear if it is optimal, and the significance of its individual components is unclear. In this work, we aim to determine whether the LSTM architecture is optimal or whether much better architectures exist. We conducted a thorough architecture search where we evaluated over ten thousand different RNN architectures, and identified an architecture that outperforms both the LSTM and the recently-introduced Gated Recurrent Unit (GRU) on some but not all tasks. We found that adding a bias of 1 to the LSTM's forget gate closes the gap between the LSTM and the GRU." ] }
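Since the abstract of this record names a (1+1)-EA with Rechenberg's mutation rate control, here is a minimal generic sketch of that algorithm (applied to a toy continuous objective of our choosing, not to network architectures; the 1/5th-success-rule constants are the textbook ones, not values from the paper):

```python
# (1+1)-EA with Rechenberg's 1/5th success rule for step-size adaptation.
import numpy as np

def one_plus_one_ea(f, x0, sigma=0.5, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    x, fx, successes = np.asarray(x0, float), f(x0), 0
    for i in range(1, iters + 1):
        y = x + sigma * rng.standard_normal(x.shape)   # Gaussian mutation
        fy = f(y)
        if fy <= fx:                                   # (1+1) elitist selection
            x, fx, successes = y, fy, successes + 1
        if i % 20 == 0:                                # Rechenberg adaptation:
            sigma *= 1.22 if successes / 20 > 0.2 else 0.82  # grow/shrink step
            successes = 0
    return x, fx

x, fx = one_plus_one_ea(lambda v: np.sum(np.asarray(v) ** 2), x0=[3.0, -2.0])
print(np.round(x, 4), round(fx, 6))   # converges toward the origin
```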
1709.03247
2754793361
Convolutional highways are deep networks based on multiple stacked convolutional layers for feature preprocessing. We introduce an evolutionary algorithm (EA) for optimization of the structure and hyperparameters of convolutional highways and demonstrate the potential of this optimization setting on the well-known MNIST data set. The (1+1)-EA employs Rechenberg's mutation rate control, which adapts the optimization approach, and a niching mechanism to overcome local optima. An experimental study shows that the EA is capable of improving the state-of-the-art network contribution and of evolving highway networks from scratch.
Recent related approaches by Bello et al. (Google) @cite_9 and Baker et al. (MIT) @cite_12 employ reinforcement learning for evolving deep convolutional networks. Machine learning pipelines can also be evolved with EAs, for example with the tree-based pipeline optimization tool (TPOT) by Olson et al. @cite_4 , or for kernel PCA pipelines @cite_1 , for which an integer-based representation has been used.
{ "cite_N": [ "@cite_1", "@cite_9", "@cite_4", "@cite_12" ], "mid": [ "2754607647", "2619307294", "2953359974", "2556833785" ], "abstract": [ "This paper introduces an evolutionary tuning approach for a pipeline of preprocessing methods and kernel principal component analysis (PCA) employing evolution strategies (ES). A simple (1+1)-ES adapts the imputation method, various preprocessing steps like normalization and standardization, and optimizes the parameters of kernel PCA. A small experimental study on a benchmark data set with missing values demonstrates that the evolutionary kernel PCA pipeline can be tuned with relatively few optimization steps, which makes evolutionary tuning applicable to scenarios with very large data sets.", "We present an approach to automate the process of discovering optimization methods, with a focus on deep learning architectures. We train a Recurrent Neural Network controller to generate a string in a domain specific language that describes a mathematical update equation based on a list of primitive functions, such as the gradient, running average of the gradient, etc. The controller is trained with Reinforcement Learning to maximize the performance of a model after a few epochs. On CIFAR-10, our method discovers several update rules that are better than many commonly used optimizers, such as Adam, RMSProp, or SGD with and without Momentum on a ConvNet model. We introduce two new optimizers, named PowerSign and AddSign, which we show transfer well and improve training on a variety of different tasks and architectures, including ImageNet classification and Google's neural machine translation system.", "As the field of data science continues to grow, there will be an ever-increasing demand for tools that make machine learning accessible to non-experts. In this paper, we introduce the concept of tree-based pipeline optimization for automating one of the most tedious parts of machine learning---pipeline design. We implement an open source Tree-based Pipeline Optimization Tool (TPOT) in Python and demonstrate its effectiveness on a series of simulated and real-world benchmark data sets. In particular, we show that TPOT can design machine learning pipelines that provide a significant improvement over a basic machine learning analysis while requiring little to no input nor prior knowledge from the user. We also address the tendency for TPOT to design overly complex pipelines by integrating Pareto optimization, which produces compact pipelines without sacrificing classification accuracy. As such, this work represents an important step toward fully automating machine learning pipeline design.", "At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using @math -learning with an @math -greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. 
On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods that use more complex layer types. We also outperform existing meta-modeling approaches for network design on image classification tasks." ] }
1709.03008
2754281896
Power grids are critical infrastructure assets that face non-technical losses (NTL) such as electricity theft or faulty meters. NTL may range up to 40% of the total electricity distributed in emerging countries. Industrial NTL detection systems are still largely based on expert knowledge when deciding whether to carry out costly on-site inspections of customers. Electricity providers are reluctant to move to large-scale deployments of automated systems that learn NTL profiles from data due to the latter's propensity to suggest a large number of unnecessary inspections. In this paper, we propose a novel system that combines automated statistical decision making with expert knowledge. First, we propose a machine learning framework that classifies customers into NTL or non-NTL using a variety of features derived from the customers' consumption data. The methodology used is specifically tailored to the level of noise in the data. Second, in order to allow human experts to feed their knowledge in the decision loop, we propose a method for visualizing prediction results at various granularity levels in a spatial hologram. Our approach allows domain experts to put the classification results into the context of the data and to incorporate their knowledge for making the final decisions of which customers to inspect. This work has resulted in appreciable results on a real-world data set of 3.6M customers. Our system is being deployed in a commercial NTL detection software.
In the literature, different approaches for the visualization of NTL are reported. In order to support decision making, the visualization of the network topology on the feeder level, as well as of load curves on the transformer level, is proposed in @cite_6 . In addition, the density of NTL in a 2D map is visualized in @cite_8 . For analytics in power grids as a whole, the need for novel and more powerful visualization techniques is argued for in @cite_13 . The proposed approaches include heat maps and risk maps. All methods for visualization of NTL proposed in the literature focus only on 2D representations.
{ "cite_N": [ "@cite_13", "@cite_6", "@cite_8" ], "mid": [ "", "2038992224", "2011944723" ], "abstract": [ "", "The distribution systems need to adjust and intensify the use of new available technologies and incorporate intelligence to combat non-technical loss processes. Its detection is one of the greatest challenges to energy distribution companies. Studies and research involving load modeling or load estimation (LM LE) on power system distribution combined with smart grid functionality will help control commercial loses in electric power distribution systems. The use of algorithms is proposed for LM LE using monthly energy consumption data converted to hourly demand. This information and the remote monitoring consumption data via smart meters will be processed by a computational tool that will indicate the feeder with the greater probability of presenting non-technical losses, with the objective of locating energy fraud, directing company's investment on combating non-technical losses.", "In this work we expose a software solution proposal to support the detection process of non-technical losses of electricity in the Energy Company of the Quind´io region in Colombia (EDEQ by its acronym in Spanish). In the development of this prototype we used approaches for data integration and software engineering in order to implement a technical solution to improve the activities for analysis and visualization of electricity losses, according with the specifications and expectatives identified in the roles that participate in these process." ] }
1709.02940
2756042011
Training triplet networks with large-scale data is challenging in face recognition. Because the number of possible triplets explodes with the number of samples, previous studies adopt online hard negative mining (OHNM) to handle it. However, as the number of identities becomes extremely large, the training will suffer from bad local minima because effective hard triplets are difficult to find. To solve the problem, in this paper, we propose training triplet networks with subspace learning, which splits the space of all identities into subspaces consisting of only similar identities. Combined with the batch OHNM, hard triplets can be found much more easily. Experiments on the large-scale MS-Celeb-1M challenge with 100K identities demonstrate that the proposed method can largely improve the performance. In addition, to deal with heavy noise and large-scale retrieval, we also make some efforts on robust noise removal and efficient image retrieval, which are used jointly with the subspace learning to obtain the state-of-the-art performance on the MS-Celeb-1M competition (without external data in Challenge1).
In this part, we introduce some previous studies on how to accelerate the training of triplet networks. One big difficulty is that the number of possible triplets scales cubically with the number of training samples. To avoid directly searching the whole space, some researchers @cite_7 @cite_8 @cite_14 convert the triplet loss into a form of softmax loss. Sankaranarayanan et al. @cite_14 propose to convert the Euclidean distance between positive and negative pairs into a probabilistic similarity, and they use low-dimensional embeddings for fast retrieval. Similar to @cite_14 , Zhuang et al. @cite_7 convert the retrieval problem into a multi-label classification problem with binary codes, which is optimized by a binary quadratic algorithm to achieve faster retrieval. To simplify the optimization, Hoffer et al. @cite_8 propose a Siamese-like triplet network that recasts the retrieval problem as a @math -class classification problem. These methods have shown promising speedups, but they do not consider hard triplets, which results in inferior performance.
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_8" ], "mid": [ "", "2296324964", "1839408879" ], "abstract": [ "", "In this paper, we aim to learn a mapping (or embedding) from images to a compact binary space in which Hamming distances correspond to a ranking measure for the image retrieval task. We make use of a triplet loss because this has been shown to be most effective for ranking problems. However, training in previous works can be prohibitively expensive due to the fact that optimization is directly performed on the triplet space, where the number of possible triplets for training is cubic in the number of training examples. To address this issue, we propose to formulate high-order binary codes learning as a multi-label classification problem by explicitly separating learning into two interleaved stages. To solve the first stage, we design a large-scale high-order binary codes inference algorithm to reduce the high-order objective to a standard binary quadratic problem such that graph cuts can be used to efficiently infer the binary code which serve as the label of each training datum. In the second stage we propose to map the original image to compact binary codes via carefully designed deep convolutional neural networks (CNNs) and the hashing function fitting can be solved by training binary CNN classifiers. An incremental interleaved optimization strategy is proffered to ensure that these two steps are interactive with each other during training for better accuracy. We conduct experiments on several benchmark datasets, which demonstrate both improved training time (by as much as two orders of magnitude) as well as producing state-of-the-art hashing for various retrieval tasks.", "Deep learning has proven itself as a successful set of models for learning useful semantic representations of data. These, however, are mostly implicitly learned as part of a classification task. In this paper we propose the triplet network model, which aims to learn useful representations by distance comparisons. A similar model was defined by (2014), tailor made for learning a ranking for image information retrieval. Here we demonstrate using various datasets that our model learns a better representation than that of its immediate competitor, the Siamese network. We also discuss future possible usage as a framework for unsupervised learning." ] }
1709.02940
2756042011
Training triplet networks with large-scale data is challenging in face recognition. Because the number of possible triplets explodes with the number of samples, previous studies adopt online hard negative mining (OHNM) to handle it. However, as the number of identities becomes extremely large, the training will suffer from bad local minima because effective hard triplets are difficult to find. To solve the problem, in this paper, we propose training triplet networks with subspace learning, which splits the space of all identities into subspaces consisting of only similar identities. Combined with the batch OHNM, hard triplets can be found much more easily. Experiments on the large-scale MS-Celeb-1M challenge with 100K identities demonstrate that the proposed method can largely improve the performance. In addition, to deal with heavy noise and large-scale retrieval, we also make some efforts on robust noise removal and efficient image retrieval, which are used jointly with the subspace learning to obtain the state-of-the-art performance on the MS-Celeb-1M competition (without external data in Challenge1).
Inspired by the efficiency of classification, some studies @cite_10 @cite_2 @cite_13 @cite_4 combine the advantages of classification and hard triplets. Wang et al. @cite_10 use a pretrained classification model to select possible hard triplets offline, but the offline selection is fixed because the classification model is not updated. To achieve faster training and handle varying triplets, Parkhi et al. @cite_2 train a classification network that is further fine-tuned with the triplet loss. They use online hard negative mining (OHNM), wherein only the triplets violating the margin constraint are considered the hard ones for learning. Instead of fine-tuning with only the triplet loss, Chen et al. @cite_13 propose to train networks jointly with softmax and triplet losses to preserve both inter-class and intra-class information, and they also adopt OHNM in training. To apply OHNM to large-scale data, Schroff et al. propose FaceNet @cite_4 , which trains triplet networks with @math identities; training takes a few months to finish with a large batch size of @math . One limitation of OHNM is that triplets are predefined in the batch, which misses possible hard negative samples contained in the batch.
{ "cite_N": [ "@cite_13", "@cite_10", "@cite_4", "@cite_2" ], "mid": [ "", "1975517671", "2096733369", "2325939864" ], "abstract": [ "", "Learning fine-grained image similarity is a challenging task. It needs to capture between-class and within-class image differences. This paper proposes a deep ranking model that employs deep learning techniques to learn similarity metric directly from images. It has higher learning capability than models based on hand-crafted features. A novel multiscale network structure has been developed to describe the images effectively. An efficient triplet sampling algorithm is also proposed to learn the model with distributed asynchronized stochastic gradient. Extensive experiments show that the proposed algorithm outperforms models based on hand-crafted visual features and deep classification models.", "Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors.", "The goal of this paper is face recognition – from either a single photograph or from a set of faces tracked in a video. Recent progress in this area has been due to two factors: (i) end to end learning for the task using a convolutional neural network (CNN), and (ii) the availability of very large scale training datasets. We make two contributions: first, we show how a very large scale dataset (2.6M images, over 2.6K people) can be assembled by a combination of automation and human in the loop, and discuss the trade off between data purity and time; second, we traverse through the complexities of deep network training and face recognition to present methods and procedures to achieve comparable state of the art results on the standard LFW and YTF face benchmarks." ] }
1709.02940
2756042011
Training triplet networks with large-scale data is challenging in face recognition. Because the number of possible triplets explodes with the number of samples, previous studies adopt online hard negative mining (OHNM) to handle it. However, as the number of identities becomes extremely large, the training will suffer from bad local minima because effective hard triplets are difficult to find. To solve the problem, in this paper, we propose training triplet networks with subspace learning, which splits the space of all identities into subspaces consisting of only similar identities. Combined with the batch OHNM, hard triplets can be found much more easily. Experiments on the large-scale MS-Celeb-1M challenge with 100K identities demonstrate that the proposed method can largely improve the performance. In addition, to deal with heavy noise and large-scale retrieval, we also make some efforts on robust noise removal and efficient image retrieval, which are used jointly with the subspace learning to obtain the state-of-the-art performance on the MS-Celeb-1M competition (without external data in Challenge1).
To make full use of the batch, some studies @cite_1 generate hard triplets online within the batch. Hermans et al. @cite_1 propose the batch OHNM, in which negative samples are searched for within the batch based on their distance to the anchor, and the nearest ones are taken as candidate hard negative samples. In this way, more hard triplets can be found easily, and the best performance is obtained in person re-identification with @math identities. Due to the small scale of that setting, the probability of sampling similar identities with a typical batch size ( @math or @math ) is large. However, in the large-scale case, randomly sampling similar identities is much more difficult, so the batch OHNM fails. In this paper, we focus on how to find effective hard triplets in large-scale face recognition.
{ "cite_N": [ "@cite_1" ], "mid": [ "2598634450" ], "abstract": [ "In the past few years, the field of computer vision has gone through a revolution fueled mainly by the advent of large datasets and the adoption of deep convolutional neural networks for end-to-end learning. The person re-identification subfield is no exception to this. Unfortunately, a prevailing belief in the community seems to be that the triplet loss is inferior to using surrogate losses (classification, verification) followed by a separate metric learning step. We show that, for models trained from scratch as well as pretrained ones, using a variant of the triplet loss to perform end-to-end deep metric learning outperforms most other published methods by a large margin." ] }
1709.03043
2755176086
In recent years, designing distributed optimization algorithms for empirical risk minimization (ERM) has become an active research topic, mainly because of the practical need to deal with the huge volume of data. In this paper, we propose a general framework for training an ERM model by solving its dual problem in parallel over multiple machines. Viewed as special cases of our framework, several existing methods can be better understood. Our method provides a versatile approach for many large-scale machine learning problems, including linear binary/multi-class classification, regression, and structured prediction. We show that our method, compared with existing approaches, enjoys global linear convergence for a broader class of problems and achieves faster empirical performance.
Our algorithm can be viewed from two different perspectives. If we consider solving , then it is similar to proximal (quasi-)Newton methods with some specific choice of the second-order approximation. A generalization of it appears as the block coordinate descent method , where the proximal quasi-Newton method is the special case in which there is only one block of variables. One thing worth noticing is that @cite_10 requires the matrix in to be positive definite with a positive lower bound on its smallest eigenvalue over all iterations. We relax this condition to allow @math to be indefinite or positive semidefinite when the @math term is strongly convex. This condition is used when Assumption holds, and in this case we do not need to add a damping term to our second-order approximation. Namely, we can set @math in . The convergence theory of the inexact version of our framework follows from @cite_9 , in the line of inexact variable metric methods for regularized optimization. The analysis of the exact version, derived independently of the theory in @cite_9 , is applicable to a broader class of problems, at the price of a slightly worse convergence rate.
{ "cite_N": [ "@cite_9", "@cite_10" ], "mid": [ "1842705303", "2039050532" ], "abstract": [ "Training machine learning models sometimes needs to be done on large amounts of data that exceed the capacity of a single machine, motivating recent works on developing algorithms that train in a distributed fashion. This paper proposes an efficient box-constrained quadratic optimization algorithm for distributedly training linear support vector machines (SVMs) with large data. Our key technical contribution is an analytical solution to the problem of computing the optimal step size at each iteration, using an efficient method that requires only O(1) communication cost to ensure fast convergence. With this optimal step size, our approach is superior to other methods by possessing global linear convergence, or, equivalently, O(log(1 e)) iteration complexity for an e-accurate solution, for distributedly solving the non-strongly-convex linear SVM dual problem. Experiments also show that our method is significantly faster than state-of-the-art distributed linear SVM algorithms including DSVM-AVE, DisDCA and TRON.", "We consider the problem of minimizing the sum of a smooth function and a separable convex function. This problem includes as special cases bound-constrained optimization and smooth optimization with l1-regularization. We propose a (block) coordinate gradient descent method for solving this class of nonsmooth separable problems. We establish global convergence and, under a local Lipschitzian error bound assumption, linear convergence for this method. The local Lipschitzian error bound holds under assumptions analogous to those for constrained smooth optimization, e.g., the convex function is polyhedral and the smooth function is (nonconvex) quadratic or is the composition of a strongly convex function with a linear mapping. We report numerical experience with solving the l1-regularization of unconstrained optimization problems from in ACM Trans. Math. Softw. 7, 17–41, 1981 and from the CUTEr set (Gould and Orban in ACM Trans. Math. Softw. 29, 373–394, 2003). Comparison with L-BFGS-B and MINOS, applied to a reformulation of the l1-regularized problem as a bound-constrained optimization problem, is also reported." ] }
1709.03043
2755176086
In recent years, designing distributed optimization algorithms for empirical risk minimization (ERM) has become an active research topic, mainly because of the practical need to deal with the huge volume of data. In this paper, we propose a general framework for training an ERM model by solving its dual problem in parallel over multiple machines. Viewed as special cases of our framework, several existing methods can be better understood. Our method provides a versatile approach for many large-scale machine learning problems, including linear binary/multi-class classification, regression, and structured prediction. We show that our method, compared with existing approaches, enjoys global linear convergence for a broader class of problems and achieves faster empirical performance.
On the other hand, our focus is on how to devise a good approximation of the Hessian matrix of the smooth term to work efficiently for distributed optimization. Works focusing on this direction for dual ERM problems include @cite_2 @cite_16 @cite_1 . @cite_2 discusses how to solve the SVM dual problem in a distributed manner; this problem is a special case of , see Section for more details. They proposed a method that iteratively solves with @math defined in with @math to obtain the update direction, while the step size @math is fixed to @math . Though no theoretical convergence guarantee is provided in @cite_2 , the reasoning behind this choice can be seen from the following observation. In the case of SVM, the objective is quadratic, with @math . Thus one can easily see that , with equality holding when @math are identical and @math is a multiple of the vector of ones. Therefore, taking @math in and plugging in the bound in , we see that, since @math in the SVM case, minimizing with a step size of @math leads to a decrease of the objective value.
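To make the descent argument above concrete, here is the standard calculation in placeholder notation (the referenced equations are elided, so all symbols below are illustrative). Suppose the dual objective is quadratic, f(\alpha) = \tfrac{1}{2}\alpha^\top Q \alpha + b^\top \alpha with Q \succeq 0, and the K machines produce local directions d_1, \dots, d_K. Convexity of x \mapsto x^\top Q x gives

\Big(\sum_{k=1}^{K} d_k\Big)^{\top} Q \Big(\sum_{k=1}^{K} d_k\Big) \;\le\; K \sum_{k=1}^{K} d_k^\top Q d_k ,

with equality when all the d_k coincide. Hence, for the averaged step h = \tfrac{1}{K}\sum_{k} d_k,

f(\alpha + h) \;\le\; f(\alpha) + \frac{1}{K}\sum_{k=1}^{K}\Big( \nabla f(\alpha)^\top d_k + \tfrac{1}{2}\, d_k^\top Q d_k \Big),

so whenever every local quadratic model attains a negative value, the aggregated update with step size 1/K strictly decreases the objective, which is the reasoning behind the fixed step size described above.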
{ "cite_N": [ "@cite_16", "@cite_1", "@cite_2" ], "mid": [ "2123154536", "1697545848", "2129727551" ], "abstract": [ "We present and study a distributed optimization algorithm by employing a stochastic dual coordinate ascent method. Stochastic dual coordinate ascent methods enjoy strong theoretical guarantees and often have better performances than stochastic gradient descent methods in optimizing regularized loss minimization problems. It still lacks of efforts in studying them in a distributed framework. We make a progress along the line by presenting a distributed stochastic dual coordinate ascent algorithm in a star network, with an analysis of the tradeoff between computation and communication. We verify our analysis by experiments on real data sets. Moreover, we compare the proposed algorithm with distributed stochastic gradient descent methods and distributed alternating direction methods of multipliers for optimizing SVMs in the same distributed framework, and observe competitive performances.", "We propose a new distributed algorithm for empirical risk minimization in machine learning. The algorithm is based on an inexact damped Newton method, where the inexact Newton steps are computed by a distributed preconditioned conjugate gradient method. We analyze its iteration complexity and communication efficiency for minimizing self-concordant empirical loss functions, and discuss the results for distributed ridge regression, logistic regression and binary classification with a smoothed hinge loss. In a standard setting for supervised learning, where the n data points are i.i.d. sampled and when the regularization parameter scales as 1/√n, we show that the proposed algorithm is communication efficient: the required round of communication does not increase with the sample size n, and only grows slowly with the number of machines.", "A key task for connectionist research is the development and analysis of learning algorithms. An examination is made of several supervised learning algorithms for single-cell and network models. The heart of these algorithms is the pocket algorithm, a modification of perceptron learning that makes perceptron learning well-behaved with nonseparable training data, even if the data are noisy and contradictory. Features of these algorithms include speed algorithms fast enough to handle large sets of training data; network scaling properties, i.e. network methods scale up almost as well as single-cell models when the number of inputs is increased; analytic tractability, i.e. upper bounds on classification error are derivable; online learning, i.e. some variants can learn continually, without referring to previous data; and winner-take-all groups or choice groups, i.e. algorithms can be adapted to select one out of a number of possible classifications. These learning algorithms are suitable for applications in machine learning, pattern recognition, and connectionist expert systems." ] }
1709.03043
2755176086
In recent years, designing distributed optimization algorithms for empirical risk minimization (ERM) has become an active research topic, mainly because of the practical need to deal with the huge volume of data. In this paper, we propose a general framework for training an ERM model by solving its dual problem in parallel over multiple machines. Viewed as special cases of our framework, several existing methods can be better understood. Our method provides a versatile approach for many large-scale machine learning problems, including linear binary/multi-class classification, regression, and structured prediction. We show that our method, compared with existing approaches, enjoys global linear convergence for a broader class of problems and achieves faster empirical performance.
The second approach in @cite_16 , called the practical variant, considers and takes unit step sizes. Similarly to our discussion above for the basic variant, @math in this case is also an upper bound on the function value decrease when the step size is fixed to @math , and we can expect this method to work better than the basic variant, as the approximation is closer to the real Hessian and the scaling factor is closer to one. Empirical results show that this variant is, as expected, faster than the basic variant, despite the lack of a theoretical convergence guarantee in @cite_16 .
{ "cite_N": [ "@cite_16" ], "mid": [ "2123154536" ], "abstract": [ "We present and study a distributed optimization algorithm by employing a stochastic dual coordinate ascent method. Stochastic dual coordinate ascent methods enjoy strong theoretical guarantees and often have better performances than stochastic gradient descent methods in optimizing regularized loss minimization problems. It still lacks of efforts in studying them in a distributed framework. We make a progress along the line by presenting a distributed stochastic dual coordinate ascent algorithm in a star network, with an analysis of the tradeoff between computation and communication. We verify our analysis by experiments on real data sets. Moreover, we compare the proposed algorithm with distributed stochastic gradient descent methods and distributed alternating direction methods of multipliers for optimizing SVMs in the same distributed framework, and observe competitive performances." ] }
1709.03110
2756272696
This paper proposes a general system for compute-intensive graph mining tasks that find from a big graph all subgraphs that satisfy certain requirements (e.g., graph matching and community detection). Due to the broad range of applications of such tasks, many single-threaded algorithms have been proposed. However, graphs such as online social networks and knowledge graphs often have billions of vertices and edges, which require distributed processing in order to scale. Unfortunately, existing distributed graph processing systems such as Pregel and GraphLab are designed for data-intensive analytics, and are inefficient for compute-intensive graph mining tasks since computation over any data is coupled with the data's access that involves network transmission. We propose a distributed graph mining framework, called G-thinker, which is designed for compute-intensive graph mining workloads. G-thinker provides an intuitive graph-exploration API for the convenient implementation of various graph mining algorithms, and the runtime engine provides efficient execution with bounded memory consumption, light network communication, and parallelism between computation and communication. Extensive experiments were conducted, which demonstrate that G-thinker is orders of magnitude faster than existing solution, and can scale to graphs that are two orders of magnitude larger given the same hardware resources.
Vertex-Centric In-Memory Solutions. Most vertex-centric systems are in-memory systems, where vertices (along with their adjacency lists) are partitioned among different machines in a cluster and kept in memory @cite_11 @cite_13 @cite_4 @cite_21 @cite_20 @cite_5 . Vertices communicate with each other by message passing, and messages are also buffered in memory to avoid slow disk access.
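To ground the "think like a vertex" model shared by these systems, the following is a minimal single-process sketch of synchronous supersteps with in-memory message buffers, instantiated for connected components via min-label propagation; all names are illustrative and do not correspond to any particular system's API.

def hash_min(adj):
    """Connected components by synchronous min-label propagation.

    adj maps each vertex id to a list of neighbor ids (undirected graph).
    Each loop iteration plays the role of one superstep: consume the
    inbox, update local state, and buffer outgoing messages in memory.
    """
    value = {v: v for v in adj}          # each vertex starts with its own id
    # Superstep 0: every vertex broadcasts its label to its neighbors.
    inbox = {v: [] for v in adj}
    for v, nbrs in adj.items():
        for u in nbrs:
            inbox[u].append(value[v])
    # Later supersteps: adopt the smallest label seen; re-broadcast on change.
    while any(inbox.values()):
        outbox = {v: [] for v in adj}
        for v, msgs in inbox.items():
            if msgs and min(msgs) < value[v]:
                value[v] = min(msgs)
                for u in adj[v]:
                    outbox[u].append(value[v])
        inbox = outbox
    return value

# Example with two components: hash_min({0: [1], 1: [0, 2], 2: [1], 3: []})
# returns {0: 0, 1: 0, 2: 0, 3: 3}. In a distributed engine, value/inbox
# would be partitioned across machines and outbox shipped over the network.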
{ "cite_N": [ "@cite_4", "@cite_21", "@cite_5", "@cite_13", "@cite_20", "@cite_11" ], "mid": [ "78077100", "1448681276", "2949679549", "2962740062", "1969970763", "2296407087" ], "abstract": [ "Large-scale graph-structured computation is central to tasks ranging from targeted advertising to natural language processing and has led to the development of several graph-parallel abstractions including Pregel and GraphLab. However, the natural graphs commonly found in the real-world have highly skewed power-law degree distributions, which challenge the assumptions made by these abstractions, limiting performance and scalability. In this paper, we characterize the challenges of computation on natural graphs in the context of existing graph-parallel abstractions. We then introduce the PowerGraph abstraction which exploits the internal structure of graph programs to address these challenges. Leveraging the PowerGraph abstraction we introduce a new approach to distributed graph placement and representation that exploits the structure of power-law graphs. We provide a detailed analysis and experimental evaluation comparing PowerGraph to two popular graph-parallel systems. Finally, we describe three different implementation strategies for PowerGraph and discuss their relative merits with empirical evaluations on large-scale real-world problems demonstrating order of magnitude gains.", "In pursuit of graph processing performance, the systems community has largely abandoned general-purpose distributed dataflow frameworks in favor of specialized graph processing systems that provide tailored programming abstractions and accelerate the execution of iterative graph algorithms. In this paper we argue that many of the advantages of specialized graph processing systems can be recovered in a modern general-purpose distributed dataflow system. We introduce GraphX, an embedded graph processing framework built on top of Apache Spark, a widely used distributed dataflow system. GraphX presents a familiar composable graph abstraction that is sufficient to express existing graph APIs, yet can be implemented using only a few basic dataflow operators (e.g., join, map, group-by). To achieve performance parity with specialized graph systems, GraphX recasts graph-specific optimizations as distributed join optimizations and materialized view maintenance. By leveraging advances in distributed dataflow frameworks, GraphX brings low-cost fault tolerance to graph processing. We evaluate GraphX on real workloads and demonstrate that GraphX achieves an order of magnitude performance gain over the base dataflow framework and matches the performance of specialized graph processing systems while enabling a wider range of computation.", "Massive graphs, such as online social networks and communication networks, have become common today. To efficiently analyze such large graphs, many distributed graph computing systems have been developed. These systems employ the \"think like a vertex\" programming paradigm, where a program proceeds in iterations and at each iteration, vertices exchange messages with each other. However, using Pregel's simple message passing mechanism, some vertices may send/receive significantly more messages than others due to either the high degree of these vertices or the logic of the algorithm used. This forms the communication bottleneck and leads to imbalanced workload among machines in the cluster. In this paper, we propose two effective message reduction techniques: (1) vertex mirroring with message combining, and (2) an additional request-respond API. These techniques not only reduce the total number of messages exchanged through the network, but also bound the number of messages sent/received by any single vertex. We theoretically analyze the effectiveness of our techniques, and implement them on top of our open-source Pregel implementation called Pregel+. Our experiments on various large real graphs demonstrate that our message reduction techniques significantly improve the performance of distributed graph computation.", "", "GPS (for Graph Processing System) is a complete open-source system we developed for scalable, fault-tolerant, and easy-to-program execution of algorithms on extremely large graphs. This paper serves the dual role of describing the GPS system, and presenting techniques and experimental results for graph partitioning in distributed graph-processing systems like GPS. GPS is similar to Google's proprietary Pregel system, with three new features: (1) an extended API to make global computations more easily expressed and more efficient; (2) a dynamic repartitioning scheme that reassigns vertices to different workers during the computation, based on messaging patterns; and (3) an optimization that distributes adjacency lists of high-degree vertices across all compute nodes to improve performance. In addition to presenting the implementation of GPS and its novel features, we also present experimental results on the performance effects of both static and dynamic graph partitioning schemes, and we describe the compilation of a high-level domain-specific programming language to GPS, enabling easy expression of complex algorithms.", "Analyzing large graphs provides valuable insights for social networking and web companies in content ranking and recommendations. While numerous graph processing systems have been developed and evaluated on available benchmark graphs of up to 6.6B edges, they often face significant difficulties in scaling to much larger graphs. Industry graphs can be two orders of magnitude larger - hundreds of billions or up to one trillion edges. In addition to scalability challenges, real world applications often require much more complex graph processing workflows than previously evaluated. In this paper, we describe the usability, performance, and scalability improvements we made to Apache Giraph, an open-source graph processing system, in order to use it on Facebook-scale graphs of up to one trillion edges. We also describe several key extensions to the original Pregel model that make it possible to develop a broader range of production graph applications and workflows as well as improve code reuse. Finally, we report on real-world operations as well as performance characteristics of several large-scale production applications." ] }
1709.03110
2756272696
This paper proposes a general system for compute-intensive graph mining tasks that find from a big graph all subgraphs that satisfy certain requirements (e.g., graph matching and community detection). Due to the broad range of applications of such tasks, many single-threaded algorithms have been proposed. However, graphs such as online social networks and knowledge graphs often have billions of vertices and edges, which require distributed processing in order to scale. Unfortunately, existing distributed graph processing systems such as Pregel and GraphLab are designed for data-intensive analytics, and are inefficient for compute-intensive graph mining tasks since computation over any data is coupled with the data's access that involves network transmission. We propose a distributed graph mining framework, called G-thinker, which is designed for compute-intensive graph mining workloads. G-thinker provides an intuitive graph-exploration API for the convenient implementation of various graph mining algorithms, and the runtime engine provides efficient execution with bounded memory consumption, light network communication, and parallelism between computation and communication. Extensive experiments were conducted, which demonstrate that G-thinker is orders of magnitude faster than existing solution, and can scale to graphs that are two orders of magnitude larger given the same hardware resources.
However, the vertex-centric API is not suitable for subgraph finding: each vertex @math needs to communicate with its surrounding vertices in a breadth-first manner (one more hop per iteration) to gather their information for constructing @math . This solution cannot scale to large graphs, since the total volume of the (possibly overlapping) decomposed subgraphs may easily exceed the memory capacity of a cluster. Vertex-centric systems also provide no mechanism for decomposed subgraphs to share a common vertex's information. (Pregel's message combiner @cite_9 does not help, since it aggregates messages towards the same target vertex, while here we need to get information from the same source vertex.)
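A rough calculation shows why materializing every decomposed subgraph can overwhelm memory. Writing G_v for the decomposed subgraph seeded at vertex v (notation introduced here purely for illustration), and assuming a graph with n vertices, average degree \bar{d}, and that each G_v contains the k-hop neighborhood of v, a tree-like expansion that ignores overlap saturation gives

\sum_{v} |G_v| \;\approx\; n \cdot \bar{d}^{\,k},

so for, say, n = 10^9, \bar{d} = 100 and k = 2, the decomposed subgraphs would hold on the order of 10^{13} vertex copies, far beyond the aggregate memory of a typical cluster.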
{ "cite_N": [ "@cite_9" ], "mid": [ "2170616854" ], "abstract": [ "Many practical computing problems concern large graphs. Standard examples include the Web graph and various social networks. The scale of these graphs - in some cases billions of vertices, trillions of edges - poses challenges to their efficient processing. In this paper we present a computational model suitable for this task. Programs are expressed as a sequence of iterations, in each of which a vertex can receive messages sent in the previous iteration, send messages to other vertices, and modify its own state and that of its outgoing edges or mutate graph topology. This vertex-centric approach is flexible enough to express a broad set of algorithms. The model has been designed for efficient, scalable and fault-tolerant implementation on clusters of thousands of commodity computers, and its implied synchronicity makes reasoning about programs easier. Distribution-related details are hidden behind an abstract API. The result is a framework for processing large graphs that is expressive and easy to program." ] }
1709.03110
2756272696
This paper proposes a general system for compute-intensive graph mining tasks that find from a big graph all subgraphs that satisfy certain requirements (e.g., graph matching and community detection). Due to the broad range of applications of such tasks, many single-threaded algorithms have been proposed. However, graphs such as online social networks and knowledge graphs often have billions of vertices and edges, which require distributed processing in order to scale. Unfortunately, existing distributed graph processing systems such as Pregel and GraphLab are designed for data-intensive analytics, and are inefficient for compute-intensive graph mining tasks since computation over any data is coupled with the data's access that involves network transmission. We propose a distributed graph mining framework, called G-thinker, which is designed for compute-intensive graph mining workloads. G-thinker provides an intuitive graph-exploration API for the convenient implementation of various graph mining algorithms, and the runtime engine provides efficient execution with bounded memory consumption, light network communication, and parallelism between computation and communication. Extensive experiments were conducted, which demonstrate that G-thinker is orders of magnitude faster than existing solution, and can scale to graphs that are two orders of magnitude larger given the same hardware resources.
The key problem is, nevertheless, that vertex-centric computation is designed mainly for data-intensive workloads, and it generates a large number of messages to transmit for subgraph finding (e.g., for pulling vertices to construct decomposed subgraphs). We call this problem communication-in-the-chain . In fact, @cite_30 indicates that a vertex-centric program is most scalable if each iteration requires linear computation and communication cost and the program runs for a small number of iterations. This essentially implies that vertex-centric systems are suited only for graph problems with a low computational complexity.
{ "cite_N": [ "@cite_30" ], "mid": [ "2219156867" ], "abstract": [ "Graphs in real life applications are often huge, such as the Web graph and various social networks. These massive graphs are often stored and processed in distributed sites. In this paper, we study graph algorithms that adopt Google's Pregel, an iterative vertex-centric framework for graph processing in the Cloud. We first identify a set of desirable properties of an efficient Pregel algorithm, such as linear space, communication and computation cost per iteration, and logarithmic number of iterations. We define such an algorithm as a practical Pregel algorithm (PPA). We then propose PPAs for computing connected components (CCs), biconnected components (BCCs) and strongly connected components (SCCs). The PPAs for computing BCCs and SCCs use the PPAs of many fundamental graph problems as building blocks, which are of interest by themselves. Extensive experiments over large real graphs verified the efficiency of our algorithms." ] }
1709.03110
2756272696
This paper proposes a general system for compute-intensive graph mining tasks that find from a big graph all subgraphs that satisfy certain requirements (e.g., graph matching and community detection). Due to the broad range of applications of such tasks, many single-threaded algorithms have been proposed. However, graphs such as online social networks and knowledge graphs often have billions of vertices and edges, which require distributed processing in order to scale. Unfortunately, existing distributed graph processing systems such as Pregel and GraphLab are designed for data-intensive analytics, and are inefficient for compute-intensive graph mining tasks since computation over any data is coupled with the data's access that involves network transmission. We propose a distributed graph mining framework, called G-thinker, which is designed for compute-intensive graph mining workloads. G-thinker provides an intuitive graph-exploration API for the convenient implementation of various graph mining algorithms, and the runtime engine provides efficient execution with bounded memory consumption, light network communication, and parallelism between computation and communication. Extensive experiments were conducted, which demonstrate that G-thinker is orders of magnitude faster than existing solution, and can scale to graphs that are two orders of magnitude larger given the same hardware resources.
Vertex-Centric Disk-Based Solutions. The prohibitive memory requirement can be eliminated using a disk-based system. For example, MapReduce @cite_16 can be used to simulate vertex-centric graph computation (e.g., message sending & receiving) @cite_18 , and Pregelix @cite_8 translates a vertex-centric program into a dataflow execution plan for out-of-memory processing. However, the large amount of intermediate data (including messages and subgraphs) needs to be dumped to disk and then loaded back for each iteration of synchronous computation, making the running time prohibitive. We call this problem disk-in-the-chain , which adds to the communication-in-the-chain problem already suffered by a vertex-centric model. In fact, MapReduce even writes intermediate data to the Hadoop Distributed File System (HDFS), which is much slower than local disk writes since HDFS replicates each data block on three machines for fault tolerance (termed the remote write problem ).
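The following single-process sketch shows how one vertex-centric superstep is typically simulated as a MapReduce round, reusing the min-label example from the earlier sketch (all names are again illustrative). The map phase re-emits every vertex record together with its outgoing messages, and the shuffle regroups both by destination; the point is that the entire graph state plus all messages becomes shuffle data, which a disk-based engine writes out and reads back on every iteration.

from collections import defaultdict

def map_phase(records, first):
    """records: iterable of (vid, (value, neighbors, inbox_msgs))."""
    for vid, (value, nbrs, msgs) in records:
        new_value = min([value] + msgs)
        yield vid, ('state', new_value, nbrs)   # the vertex itself is shuffled too
        if first or new_value < value:          # broadcast own id / improved label
            for u in nbrs:
                yield u, ('msg', new_value)

def reduce_phase(groups):
    """Reattach each vertex record to the messages addressed to it."""
    for vid, items in groups.items():
        _, value, nbrs = next(x for x in items if x[0] == 'state')
        msgs = [x[1] for x in items if x[0] == 'msg']
        yield vid, (value, nbrs, msgs)

def superstep(records, first=False):
    groups = defaultdict(list)   # in a real engine: disk (or HDFS) plus network
    for key, item in map_phase(records, first):
        groups[key].append(item)
    return list(reduce_phase(groups))

def run(adj):
    records = [(v, (v, nbrs, [])) for v, nbrs in adj.items()]
    records = superstep(records, first=True)
    while any(msgs for _, (_, _, msgs) in records):
        records = superstep(records)
    return {v: value for v, (value, _, _) in records}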
{ "cite_N": [ "@cite_18", "@cite_16", "@cite_8" ], "mid": [ "1993219230", "2173213060", "2574229471" ], "abstract": [ "MapReduce has become one of the most popular parallel computing paradigms in cloud, due to its high scalability, reliability, and fault-tolerance achieved for a large variety of applications in big data processing. In the literature, there are MapReduce Class MRC and Minimal MapReduce Class MMC to define the memory consumption, communication cost, CPU cost, and number of MapReduce rounds for an algorithm to execute in MapReduce. However, neither of them is designed for big graph processing in MapReduce, since the constraints in MMC can be hardly achieved simultaneously on graphs and the conditions in MRC may induce scalability problems when processing big graph data. In this paper, we study scalable big graph processing in MapReduce. We introduce a Scalable Graph processing Class SGC by relaxing some constraints in MMC to make it suitable for scalable graph processing. We define two graph join operators in SGC, namely, EN join and NE join, using which a wide range of graph algorithms can be designed, including PageRank, breadth first search, graph keyword search, Connected Component (CC) computation, and Minimum Spanning Forest (MSF) computation. Remarkably, to the best of our knowledge, for the two fundamental graph problems CC and MSF computation, this is the first work that can achieve O(log(n)) MapReduce rounds with @math total communication cost in each round and constant memory consumption on each machine, where @math and @math are the number of nodes and edges in the graph respectively. We conducted extensive performance studies using two web-scale graphs Twitter and Friendster with different graph characteristics. The experimental results demonstrate that our algorithms can achieve high scalability in big graph processing.", "MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day.", "Due to the growing need to process large graph and network datasets created by modern applications, recent years have witnessed a surging interest in developing big graph platforms. Tens of such big graph systems have already been developed, but there lacks a systematic categorization and comparison of these systems. This article provides a timely and comprehensive survey of existing big graph systems, and summarizes their key ideas and technical contributions from various aspects. In addition to the popular vertex-centric systems which espouse a think-like-a-vertex paradigm for developing parallel graph applications, this survey also covers other programming and computation models, contrasts those against each other, and provides a vision for the future research on big graph analytics platforms. This survey aims to help readers get a systematic picture of the landscape of recent big graph systems, focusing not just on the systems themselves, but also on the key innovations and design philosophies underlying them." ] }
1709.03110
2756272696
This paper proposes a general system for compute-intensive graph mining tasks that find from a big graph all subgraphs that satisfy certain requirements (e.g., graph matching and community detection). Due to the broad range of applications of such tasks, many single-threaded algorithms have been proposed. However, graphs such as online social networks and knowledge graphs often have billions of vertices and edges, which require distributed processing in order to scale. Unfortunately, existing distributed graph processing systems such as Pregel and GraphLab are designed for data-intensive analytics, and are inefficient for compute-intensive graph mining tasks since computation over any data is coupled with the data's access that involves network transmission. We propose a distributed graph mining framework, called G-thinker, which is designed for compute-intensive graph mining workloads. G-thinker provides an intuitive graph-exploration API for the convenient implementation of various graph mining algorithms, and the runtime engine provides efficient execution with bounded memory consumption, light network communication, and parallelism between computation and communication. Extensive experiments were conducted, which demonstrate that G-thinker is orders of magnitude faster than existing solution, and can scale to graphs that are two orders of magnitude larger given the same hardware resources.
Systems with Subgraph-Based API. Recently, NScale @cite_32 and Arabesque @cite_0 attempted to attack subgraph finding problems through a subgraph-based API rather than a vertex-centric one. Albeit more user-friendly, these systems' execution engines still perform data-intensive processing like the vertex-centric solutions mentioned above, and they actually introduce new performance issues.
{ "cite_N": [ "@cite_0", "@cite_32" ], "mid": [ "1996229963", "2146838049" ], "abstract": [ "Distributed data processing platforms such as MapReduce and Pregel have substantially simplified the design and deployment of certain classes of distributed graph analytics algorithms. However, these platforms do not represent a good match for distributed graph mining problems, as for example finding frequent subgraphs in a graph. Given an input graph, these problems require exploring a very large number of subgraphs and finding patterns that match some \"interestingness\" criteria desired by the user. These algorithms are very important for areas such as social networks, semantic web, and bioinformatics. In this paper, we present Arabesque, the first distributed data processing platform for implementing graph mining algorithms. Arabesque automates the process of exploring a very large number of subgraphs. It defines a high-level filter-process computational model that simplifies the development of scalable graph mining algorithms: Arabesque explores subgraphs and passes them to the application, which must simply compute outputs and decide whether the subgraph should be further extended. We use Arabesque's API to produce distributed solutions to three fundamental graph mining problems: frequent subgraph mining, counting motifs, and finding cliques. Our implementations require a handful of lines of code, scale to trillions of subgraphs, and represent in some cases the first available distributed solutions.", "There is an increasing interest in executing complex analyses over large graphs, many of which require processing a large number of multi-hop neighborhoods or subgraphs. Examples include ego network analysis, motif counting, finding social circles, personalized recommendations, link prediction, anomaly detection, analyzing influence cascades, and others. These tasks are not well served by existing vertex-centric graph processing frameworks, where user programs are only able to directly access the state of a single vertex at a time, resulting in high communication, scheduling, and memory overheads in executing such tasks. Further, most existing graph processing frameworks ignore the challenges in extracting the relevant portions of the graph that an analysis task is interested in, and loading those onto distributed memory. This paper introduces NScale, a novel end-to-end graph processing framework that enables the distributed execution of complex subgraph-centric analytics over large-scale graphs in the cloud. NScale enables users to write programs at the level of subgraphs rather than at the level of vertices. Unlike most previous graph processing frameworks, which apply the user program to the entire graph, NScale allows users to declaratively specify subgraphs of interest. Our framework includes a novel graph extraction and packing (GEP) module that utilizes a cost-based optimizer to partition and pack the subgraphs of interest into memory on as few machines as possible. The distributed execution engine then takes over and runs the user program in parallel on those subgraphs, restricting the scope of the execution appropriately, and utilizes novel techniques to minimize memory consumption by exploiting overlaps among the subgraphs. We present a comprehensive empirical evaluation comparing against three state-of-the-art systems, namely Giraph, GraphLab, and GraphX, on several real-world datasets and a variety of analysis tasks. Our experimental results show orders-of-magnitude improvements in performance and drastic reductions in the cost of analytics compared to vertex-centric approaches." ] }
1709.03110
2756272696
This paper proposes a general system for compute-intensive graph mining tasks that find from a big graph all subgraphs that satisfy certain requirements (e.g., graph matching and community detection). Due to the broad range of applications of such tasks, many single-threaded algorithms have been proposed. However, graphs such as online social networks and knowledge graphs often have billions of vertices and edges, which require distributed processing in order to scale. Unfortunately, existing distributed graph processing systems such as Pregel and GraphLab are designed for data-intensive analytics, and are inefficient for compute-intensive graph mining tasks since computation over any data is coupled with the data's access that involves network transmission. We propose a distributed graph mining framework, called G-thinker, which is designed for compute-intensive graph mining workloads. G-thinker provides an intuitive graph-exploration API for the convenient implementation of various graph mining algorithms, and the runtime engine provides efficient execution with bounded memory consumption, light network communication, and parallelism between computation and communication. Extensive experiments were conducted, which demonstrate that G-thinker is orders of magnitude faster than existing solution, and can scale to graphs that are two orders of magnitude larger given the same hardware resources.
NScale @cite_32 uses the MapReduce solution we mentioned above, and it brings additional overheads. NScale only supports the top-level decomposed subgraphs, and there is no mechanism to balance workload through recursive decomposition. Assuming that each @math spans the @math -hop neighborhood around @math , NScale first constructs all decomposed subgraphs using @math rounds of MapReduce. The large number of decomposed subgraphs are then packed into larger compact subgraphs, each of which can fit in the memory of a reducer. Vertices common to multiple decomposed subgraphs are stored only once in their packed subgraph. Finally, each compact subgraph is distributed to a reducer, which processes, in memory, all the decomposed subgraphs packed in it. Obviously, NScale suffers from all the performance issues of a MapReduce-based vertex-centric solution; moreover, NScale further packs decomposed subgraphs through expensive disk-based computation, and it is very likely that the cost of packing @math already surpasses that of processing @math right after it is constructed in memory.
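For concreteness, the packing step can be thought of as a bin-packing pass like the hypothetical first-fit sketch below (NScale's actual GEP module uses a cost-based optimizer, so this is a deliberate simplification); even this simple pass must scan and union all decomposed subgraphs, which is exactly the disk-heavy work argued above to potentially dominate.

def pack_first_fit(subgraphs, budget):
    """Greedily pack decomposed subgraphs (sets of vertex ids) into
    compact bins so that shared vertices are stored only once.

    budget: max number of distinct vertices one reducer can hold.
    Returns a list of (bin_vertex_set, member_subgraph_indices).
    Simplification: a subgraph larger than budget still gets its own bin.
    """
    bins = []
    for i, sg in enumerate(subgraphs):
        for vset, members in bins:
            if len(vset | sg) <= budget:   # fits after de-duplication
                vset |= sg
                members.append(i)
                break
        else:
            bins.append((set(sg), [i]))
    return bins

# Example: pack_first_fit([{0, 1, 2}, {1, 2, 3}, {7, 8, 9}], 4)
# returns [({0, 1, 2, 3}, [0, 1]), ({7, 8, 9}, [2])] -- the first two
# overlapping neighborhoods share the storage for vertices 1 and 2.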
{ "cite_N": [ "@cite_32" ], "mid": [ "2146838049" ], "abstract": [ "There is an increasing interest in executing complex analyses over large graphs, many of which require processing a large number of multi-hop neighborhoods or subgraphs. Examples include ego network analysis, motif counting, finding social circles, personalized recommendations, link prediction, anomaly detection, analyzing influence cascades, and others. These tasks are not well served by existing vertex-centric graph processing frameworks, where user programs are only able to directly access the state of a single vertex at a time, resulting in high communication, scheduling, and memory overheads in executing such tasks. Further, most existing graph processing frameworks ignore the challenges in extracting the relevant portions of the graph that an analysis task is interested in, and loading those onto distributed memory. This paper introduces NScale, a novel end-to-end graph processing framework that enables the distributed execution of complex subgraph-centric analytics over large-scale graphs in the cloud. NScale enables users to write programs at the level of subgraphs rather than at the level of vertices. Unlike most previous graph processing frameworks, which apply the user program to the entire graph, NScale allows users to declaratively specify subgraphs of interest. Our framework includes a novel graph extraction and packing (GEP) module that utilizes a cost-based optimizer to partition and pack the subgraphs of interest into memory on as few machines as possible. The distributed execution engine then takes over and runs the user program in parallel on those subgraphs, restricting the scope of the execution appropriately, and utilizes novel techniques to minimize memory consumption by exploiting overlaps among the subgraphs. We present a comprehensive empirical evaluation comparing against three state-of-the-art systems, namely Giraph, GraphLab, and GraphX, on several real-world datasets and a variety of analysis tasks. Our experimental results show orders-of-magnitude improvements in performance and drastic reductions in the cost of analytics compared to vertex-centric approaches." ] }
1709.03110
2756272696
This paper proposes a general system for compute-intensive graph mining tasks that find from a big graph all subgraphs that satisfy certain requirements (e.g., graph matching and community detection). Due to the broad range of applications of such tasks, many single-threaded algorithms have been proposed. However, graphs such as online social networks and knowledge graphs often have billions of vertices and edges, which require distributed processing in order to scale. Unfortunately, existing distributed graph processing systems such as Pregel and GraphLab are designed for data-intensive analytics, and are inefficient for compute-intensive graph mining tasks since computation over any data is coupled with the data's access that involves network transmission. We propose a distributed graph mining framework, called G-thinker, which is designed for compute-intensive graph mining workloads. G-thinker provides an intuitive graph-exploration API for the convenient implementation of various graph mining algorithms, and the runtime engine provides efficient execution with bounded memory consumption, light network communication, and parallelism between computation and communication. Extensive experiments were conducted, which demonstrate that G-thinker is orders of magnitude faster than existing solution, and can scale to graphs that are two orders of magnitude larger given the same hardware resources.
Arabesque @cite_0 proposed an embedding-centric model where an embedding is a subgraph of the input graph @math . Arabesque requires the entire @math to reside in the memory of every machine, and constructs and processes subgraphs iteratively. In the @math -th iteration, it grows the set of embeddings with @math edges/vertices by one adjacent edge/vertex, to construct embeddings with @math edges/vertices for processing. New embeddings that pass a filtering condition are further processed and then passed to the next iteration. For example, to find cliques, the filtering condition checks whether an embedding @math is a clique; if so, @math is reported and passed to the next iteration to grow larger clique candidates.
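Instantiated for clique finding, this filter-process loop can be sketched as follows (a single-process illustration with made-up names; Arabesque runs the same expansion distributedly with the input graph replicated on every machine, and uses a proper embedding-canonicality scheme, which is approximated here by extending only with vertex ids above the current maximum).

def grow_cliques(adj, max_size):
    """Grow embeddings one vertex per round, keeping only cliques.

    adj maps each vertex id to the set of its neighbors.
    """
    embeddings = [(v,) for v in adj]      # round 0: single-vertex embeddings
    cliques = list(embeddings)
    for _ in range(max_size - 1):
        grown = []
        for emb in embeddings:
            # Extend only with ids above the largest member so that each
            # clique is enumerated exactly once (a cheap canonicality check).
            for u in adj[emb[-1]]:
                if u > emb[-1] and all(u in adj[w] for w in emb):
                    grown.append(emb + (u,))   # filter passed: still a clique
        cliques.extend(grown)
        embeddings = grown                     # process: feed the next round
    return cliques

# Example on a triangle plus a pendant edge:
# grow_cliques({0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1}}, 3) yields,
# among others, (0, 1), (0, 2), (1, 2), (1, 3) and the triangle (0, 1, 2).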
{ "cite_N": [ "@cite_0" ], "mid": [ "1996229963" ], "abstract": [ "Distributed data processing platforms such as MapReduce and Pregel have substantially simplified the design and deployment of certain classes of distributed graph analytics algorithms. However, these platforms do not represent a good match for distributed graph mining problems, as for example finding frequent subgraphs in a graph. Given an input graph, these problems require exploring a very large number of subgraphs and finding patterns that match some \"interestingness\" criteria desired by the user. These algorithms are very important for areas such as social networks, semantic web, and bioinformatics. In this paper, we present Arabesque, the first distributed data processing platform for implementing graph mining algorithms. Arabesque automates the process of exploring a very large number of subgraphs. It defines a high-level filter-process computational model that simplifies the development of scalable graph mining algorithms: Arabesque explores subgraphs and passes them to the application, which must simply compute outputs and decide whether the subgraph should be further extended. We use Arabesque's API to produce distributed solutions to three fundamental graph mining problems: frequent subgraph mining, counting motifs, and finding cliques. Our implementations require a handful of lines of code, scale to trillions of subgraphs, and represent in some cases the first available distributed solutions." ] }
1709.03110
2756272696
This paper proposes a general system for compute-intensive graph mining tasks that find from a big graph all subgraphs that satisfy certain requirements (e.g., graph matching and community detection). Due to the broad range of applications of such tasks, many single-threaded algorithms have been proposed. However, graphs such as online social networks and knowledge graphs often have billions of vertices and edges, which require distributed processing in order to scale. Unfortunately, existing distributed graph processing systems such as Pregel and GraphLab are designed for data-intensive analytics, and are inefficient for compute-intensive graph mining tasks since computation over any data is coupled with the data's access that involves network transmission. We propose a distributed graph mining framework, called G-thinker, which is designed for compute-intensive graph mining workloads. G-thinker provides an intuitive graph-exploration API for the convenient implementation of various graph mining algorithms, and the runtime engine provides efficient execution with bounded memory consumption, light network communication, and parallelism between computation and communication. Extensive experiments were conducted, which demonstrate that G-thinker is orders of magnitude faster than existing solution, and can scale to graphs that are two orders of magnitude larger given the same hardware resources.
Other Systems. Blogel @cite_6 and Giraph++ @cite_31 proposed a block-centric model which partitions a graph into disjoint subgraphs called blocks to be distributed among machines for iterative processing, eliminating the need for communication within each block. However, these systems do not target subgraph finding problems, but rather the acceleration of vertex-centric models.
{ "cite_N": [ "@cite_31", "@cite_6" ], "mid": [ "217817341", "2259576664" ], "abstract": [ "To meet the challenge of processing rapidly growing graph and network data created by modern applications, a number of distributed graph processing systems have emerged, such as Pregel and GraphLab. All these systems divide input graphs into partitions, and employ a \"think like a vertex\" programming model to support iterative graph computation. This vertex-centric model is easy to program and has been proved useful for many graph algorithms. However, this model hides the partitioning information from the users, thus prevents many algorithm-specific optimizations. This often results in longer execution time due to excessive network messages (e.g. in Pregel) or heavy scheduling overhead to ensure data consistency (e.g. in GraphLab). To address this limitation, we propose a new \"think like a graph\" programming paradigm. Under this graph-centric model, the partition structure is opened up to the users, and can be utilized so that communication within a partition can bypass the heavy message passing or scheduling machinery. We implemented this model in a new system, called Giraph++, based on Apache Giraph, an open source implementation of Pregel. We explore the applicability of the graph-centric model to three categories of graph algorithms, and demonstrate its flexibility and superior performance, especially on well-partitioned data. For example, on a web graph with 118 million vertices and 855 million edges, the graph-centric version of connected component detection algorithm runs 63X faster and uses 204X fewer network messages than its vertex-centric counterpart.", "The rapid growth in the volume of many real-world graphs (e.g., social networks, web graphs, and spatial networks) has led to the development of various vertex-centric distributed graph computing systems in recent years. However, real-world graphs from different domains have very different characteristics, which often create bottlenecks in vertex-centric parallel graph computation. We identify three such important characteristics from a wide spectrum of real-world graphs, namely (1)skewed degree distribution, (2)large diameter, and (3)(relatively) high density. Among them, only (1) has been studied by existing systems, but many real-world power-law graphs also exhibit the characteristics of (2) and (3). In this paper, we propose a block-centric framework, called Blogel, which naturally handles all the three adverse graph characteristics. Blogel programmers may think like a block and develop efficient algorithms for various graph problems. We propose parallel algorithms to partition an arbitrary graph into blocks efficiently, and block-centric programs are then run over these blocks. Our experiments on large real-world graphs verified that Blogel is able to achieve orders of magnitude performance improvements over the state-of-the-art distributed graph computing systems." ] }
1709.02833
2754876573
In this paper, we examine the problem of robotic manipulation of granular media. We evaluate multiple predictive models used to infer the dynamics of scooping and dumping actions. These models are evaluated on a task that involves manipulating the media in order to deform it into a desired shape. Our best performing model is based on a highly-tailored convolutional network architecture with domain-specific optimizations, which we show accurately models the physical interaction of the robotic scoop with the underlying media. We empirically demonstrate that explicitly predicting physical mechanics results in a policy that out-performs both a hand-crafted dynamics baseline, and a "value-network", which must otherwise implicitly predict the same mechanics in order to produce accurate value estimates.
In recent years, there has been some work in robotics in areas related to interaction and manipulation of granular media. For example, there has been a significant amount of work on legged locomotion over granular media @cite_22 @cite_1 @cite_0 . There has also been work on automated operation of construction equipment for scooping @cite_9 @cite_19 @cite_32 @cite_14 . Additionally, much of the work related to robotic pouring has utilized granular media rather than liquids @cite_24 @cite_27 @cite_12 @cite_4 @cite_25 . Recent work by Xu and Cakmak @cite_20 explored the ability of robots to clean, among other things, granular media from surfaces using a low-cost tool. In contrast to this prior work, here we directly tackle the problem of manipulating granular media in a robust, flexible manner.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_22", "@cite_9", "@cite_1", "@cite_32", "@cite_0", "@cite_19", "@cite_24", "@cite_27", "@cite_20", "@cite_25", "@cite_12" ], "mid": [ "2041111443", "2570797809", "1992878443", "2145451962", "1991866013", "179585674", "1974461875", "2082907074", "2081857580", "2158582431", "2050363886", "2419438630", "2215032184" ], "abstract": [ "As a part of research on autonomous loading system by wheel loader, a method for appropriate arrangement of bucket trajectory for scooping motion will be proposed. In this paper, relation between resistance force and advancing direction of the bucket is analyzed theoretically. Advancing direction is dominant factor for resistance force on the bucket during scooping. Based on this analysis, simple rules for bucket trajectory arrangement are proposed. In scooping procedure, scooped volume is estimated using 3D model obtained with stereo-vision system. The shape of the pile is measured and converted into pile model prior to scooping. The developed method and system are installed on experimental scale model of wheel loader. The results show good performance for different condition of pile.", "We explore differential dynamic programming for dynamical systems that form a directed graph structure. This planning method is applicable to complicated tasks where sub-tasks are sequentially connected and different skills are selected according to the situation. A pouring task is an example: it involves grasping and moving a container, and selection of skills, e.g. tipping and shaking. Our method can handle these situations; we plan the continuous parameters of each subtask and skill, as well as select skills. Our method is based on stochastic differential dynamic programming. We use stochastic neural networks to learn dynamical systems when they are unknown. Our method is a form of reinforcement learning. On the other hand, we use ideas from artificial intelligence, such as graph-structured dynamical systems, and frame-and-slots to represent a large state-action vector. This work is a partial unification of these different fields. We demonstrate our method in a simulated pouring task, where we show that our method generalizes over material property and container shape. Accompanying video: https: youtu.be _ECmnG2BLE8.", "Legged locomotion on flowing ground (e.g., granular media) is unlike locomotion on hard ground because feet experience both solid- and fluid-like forces during surface penetration. Recent bioinspired legged robots display speed relative to body size on hard ground comparable with high-performing organisms like cockroaches but suffer significant performance loss on flowing materials like sand. In laboratory experiments, we study the performance (speed) of a small (2.3 kg) 6-legged robot, SandBot, as it runs on a bed of granular media (1-mm poppy seeds). For an alternating tripod gait on the granular bed, standard gait control parameters achieve speeds at best 2 orders of magnitude smaller than the 2 body lengths s (≈60 cm s) for motion on hard ground. However, empirical adjustment of these control parameters away from the hard ground settings restores good performance, yielding top speeds of 30 cm s. Robot speed depends sensitively on the packing fraction φ and the limb frequency ω, and a dramatic transition from rotary walking to slow swimming occurs when φ becomes small enough and or ω large enough. 
We propose a kinematic model of the rotary walking mode based on generic features of penetration and slip of a curved limb in granular media. The model captures the dependence of robot speed on limb frequency and the transition between walking and swimming modes but highlights the need for a deeper understanding of the physics of granular media.", "Mining operations have great potential for automation, from a worker safety viewpoint as well as productivity and efficiency. Among the operations that can be automated are loading, guiding, and unloading of LHD loaders. This paper concerns automatic loading of these units, based on modeling a loader as a robot manipulator. For this purpose, analysis of the forces concerned in the process of scooping, and information about the kinematics of motion become essential. This work is about the trajectory of motion during scooping. Based an the results of a preliminary study on the nature of the forces involved, and bearing in mind the preference for simplicity in the control action, an easy to follow trajectory is determined for the cutting edge of the bucket of a loader, which is regarded as the end-effector in a robot manipulator. The way to find a minimum energy consuming trajectory for each individual bucket and for a particular material to be loaded is discussed. Since following this trajectory is more efficient it can be used for faster execution of bucket loading, if desired. >", "Achieving effective locomotion on diverse terrestrial substrates can require subtle changes of limb kinematics. Biologically inspired legged robots (physical models of organisms) have shown impressive mobility on hard ground but suffer performance loss on unconsolidated granular materials like sand. Because comprehensive limb–ground interaction models are lacking, optimal gaits on complex yielding terrain have been determined empirically. To develop predictive models for legged devices and to provide hypotheses for biological locomotors, we systematically study the performance of SandBot, a small legged robot, on granular media as a function of gait parameters. High performance occurs only in a small region of parameter space. A previously introduced kinematic model of the robot combined with a new anisotropic granular penetration force law predicts the speed. Performance on granular media is maximized when gait parameters utilize solidification features of the granular medium and minimize limb interference.", "A controller for a miniature wheel loader is developed to scoop rock piles autonomously. During a scooping task operation, the load of the bucket varies momentarily according to the phase of scooping. Before the insertion of the bucket into the rock pile, the load is just the bucket weight, but after the insertion the reaction force from the rock piles is applied to the bucket. The values of the reaction forces changes significantly and they can not be identified in advance. To achieve autonomous loading, it is required for the bucket controller to work stably in both cases with the same algorithm. Therefore, a disturbance observer mechanism is installed into a miniature wheel loader \"Yamazumi 3\" to adapt various amounts of loads. Its effectiveness is verified by fundamental experiments.", "The theories of aero- and hydrodynamics predict animal movement and device design in air and water through the computation of lift, drag, and thrust forces. 
Although models of terrestrial legged locomotion have focused on interactions with solid ground, many animals move on substrates that flow in response to intrusion. However, locomotor-ground interaction models on such flowable ground are often unavailable. We developed a force model for arbitrarily-shaped legs and bodies moving freely in granular media, and used this “terradynamics” to predict a small legged robot’s locomotion on granular media using various leg shapes and stride frequencies. Our study reveals a complex but generic dependence of stresses in granular media on intruder depth, orientation, and movement direction and gives insight into the effects of leg morphology and kinematics on movement.", "There is a demand of mining automation by heavy equipment in an open mine. Authors aim to automization of motion for wheel loader in scooping and loading operation. In this paper, we applied the optimization method by Genetic Algorithm to the path planning for the wheel loader on scooping and loading operation. We showed the quasi-optimized path and demonstrated by the miniature wheel loader robot.", "Robot learning from demonstration faces new challenges when applied to tasks in which forces play a key role. Pouring liquid from a bottle into a glass is one such task, where not just a motion with a certain force profile needs to be learned, but the motion is subtly conditioned by the amount of liquid in the bottle. In this paper, the pouring skill is taught to a robot as follows. In a training phase, the human teleoperates the robot using a haptic device, and data from the demonstrations are statistically encoded by a parametric hidden Markov model, which compactly encapsulates the relation between the task parameter (dependent on the bottle weight) and the force-torque traces. Gaussian mixture regression is then used at the reproduction stage for retrieving the suitable robot actions based on the force perceptions. Computational and experimental results show that the robot is able to learn to pour drinks using the proposed framework, outperforming other approaches such as the classical hidden Markov models in that it requires less training, yields more compact encodings and shows better generalization capabilities.", "Programming new skills on a robot should take minimal time and effort. One approach to achieve this goal is to allow the robot to ask questions. This idea, called Active Learning, has recently caught a lot of attention in the robotics community. However, it has not been explored from a human-robot interaction perspective. In this paper, we identify three types of questions (label, demonstration and feature queries) and discuss how a robot can use these while learning new skills. Then, we present an experiment on human question asking which characterizes the extent to which humans use these question types. Finally, we evaluate the three question types within a human-robot teaching interaction. We investigate the ease with which different types of questions are answered and whether or not there is a general preference of one type of question over another. Based on our findings from both experiments we provide guidelines for designing question asking behaviors on a robot learner.", "Robots that can reliably manipulate human tools can do a diverse range of useful tasks in human environments. However, these tools are often difficult to manipulate, particularly given force requirements for applying the tool. 
This is often due to the mismatch between the robot's gripper and the tool handle designed for human hands. In this paper, we present the design of a low-cost universal tool attachment that makes the tool gripper-friendly. We demonstrate the performance gain provided by the attachment on 10 different tools in the three stages of tool use: grasping the tool, applying the tool, and placing the tool. Our experiments demonstrate that the attachment performs significantly better in all three stages of tool use.", "We explore a model-based approach to reinforcement learning where partially or totally unknown dynamics are learned and explicit planning is performed. We learn dynamics with neural networks, and plan behaviors with differential dynamic programming (DDP). In order to handle complicated dynamics, such as manipulating liquids (pouring), we consider temporally decomposed dynamics. We start from our recent work [1] where we used locally weighted regression (LWR) to model dynamics. The major contribution of this paper is making use of deep learning in the form of neural networks with stochastic DDP, and showing the advantages of neural networks over LWR. For this purpose, we extend neural networks for: (1) modeling prediction error and output noise, (2) computing an output probability distribution for a given input distribution, and (3) computing gradients of output expectation with respect to an input. Since neural networks have nonlinear activation functions, these extensions were not easy. We provide an analytic solution for these extensions using some simplifying assumptions. We verified this method in pouring simulation experiments. The learning performance with neural networks was better than that of LWR. The amount of spilled materials was reduced. We also present early results of robot experiments using a PR2. Accompanying video: https://youtu.be/aM3hE1J5W98", "We explore a temporal decomposition of dynamics in order to enhance policy learning with unknown dynamics. There are model-free methods and model-based methods for policy learning with unknown dynamics, but both approaches have problems: in general, model-free methods have less generalization ability, while model-based methods are often limited by the assumed model structure or need to gather many samples to make models. We consider a temporal decomposition of dynamics to make learning models easier. To obtain a policy, we apply differential dynamic programming (DDP). A feature of our method is that we consider decomposed dynamics even when there is no action to be taken, which allows us to decompose dynamics more flexibly. Consequently learned dynamics become more accurate. Our DDP is a first-order gradient descent algorithm with a stochastic evaluation function. In DDP with learned models, typically there are many local maxima. In order to avoid them, we consider multiple criteria evaluation functions. In addition to the stochastic evaluation function, we use a reference value function. This method was verified with pouring simulation experiments where we created complicated dynamics. The results show that we can optimize actions with DDP while learning dynamics models." ] }
1709.02833
2754876573
In this paper, we examine the problem of robotic manipulation of granular media. We evaluate multiple predictive models used to infer the dynamics of scooping and dumping actions. These models are evaluated on a task that involves manipulating the media in order to deform it into a desired shape. Our best performing model is based on a highly-tailored convolutional network architecture with domain-specific optimizations, which we show accurately models the physical interaction of the robotic scoop with the underlying media. We empirically demonstrate that explicitly predicting physical mechanics results in a policy that out-performs both a hand-crafted dynamics baseline, and a "value-network", which must otherwise implicitly predict the same mechanics in order to produce accurate value estimates.
One of the main types of models that the robot learns in this paper is a predictive model using ConvNets. Recent work in robotics has shown how ConvNets can learn pixel-wise predictions for a video prediction task @cite_23 , as well as the dynamics of objects when subjected to robotic poke actions @cite_2 . However, work by Byravan and Fox @cite_8 showed that for dense, unstructured state spaces (in their case raw point clouds from a depth camera), it is often necessary to enforce structure in the network to achieve maximal performance. In this paper, we compare a standard unstructured network to a structured network for learning dense predictions involving granular media.
{ "cite_N": [ "@cite_8", "@cite_23", "@cite_2" ], "mid": [ "2410156224", "2953317238", "2473208550" ], "abstract": [ "We introduce SE3-Nets, which are deep networks designed to model rigid body motion from raw point cloud data. Based only on pairs of depth images along with an action vector and point wise data associations, SE3-Nets learn to segment effected object parts and predict their motion resulting from the applied force. Rather than learning point wise flow vectors, SE3-Nets predict SE3 transformations for different parts of the scene. Using simulated depth data of a table top scene and a robot manipulator, we show that the structure underlying SE3-Nets enables them to generate a far more consistent prediction of object motion than traditional flow based networks.", "A key challenge in scaling up robot learning to many skills and environments is removing the need for human supervision, so that robots can collect their own data and improve their own performance without being limited by the cost of requesting human feedback. Model-based reinforcement learning holds the promise of enabling an agent to learn to predict the effects of its actions, which could provide flexible predictive models for a wide range of tasks and environments, without detailed human supervision. We develop a method for combining deep action-conditioned video prediction models with model-predictive control that uses entirely unlabeled training data. Our approach does not require a calibrated camera, an instrumented training set-up, nor precise sensing and actuation. Our results show that our method enables a real robot to perform nonprehensile manipulation -- pushing objects -- and can handle novel objects not seen during training.", "We investigate an experiential learning paradigm for acquiring an internal model of intuitive physics. Our model is evaluated on a real-world robotic manipulation task that requires displacing objects to target locations by poking. The robot gathered over 400 hours of experience by executing more than 100K pokes on different objects. We propose a novel approach based on deep neural networks for modeling the dynamics of robot's interactions directly from images, by jointly estimating forward and inverse models of dynamics. The inverse model objective provides supervision to construct informative visual features, which the forward model can then predict and in turn regularize the feature space for the inverse model. The interplay between these two objectives creates useful, accurate models that can then be used for multi-step decision making. This formulation has the additional benefit that it is possible to learn forward models in an abstract feature space and thus alleviate the need of predicting pixels. Our experiments show that this joint modeling approach outperforms alternative methods." ] }
1709.02833
2754876573
In this paper, we examine the problem of robotic manipulation of granular media. We evaluate multiple predictive models used to infer the dynamics of scooping and dumping actions. These models are evaluated on a task that involves manipulating the media in order to deform it into a desired shape. Our best performing model is based on a highly-tailored convolutional network architecture with domain-specific optimizations, which we show accurately models the physical interaction of the robotic scoop with the underlying media. We empirically demonstrate that explicitly predicting physical mechanics results in a policy that out-performs both a hand-crafted dynamics baseline, and a "value-network", which must otherwise implicitly predict the same mechanics in order to produce accurate value estimates.
Similar to this work, ConvNets have recently been utilized for learning intuition of physical interactions by mimicking physics simulators @cite_28 @cite_17 , for detecting and pouring liquids @cite_6 @cite_16 , and for explicit modeling of rigid body dynamics @cite_8 @cite_18 or fluid dynamics @cite_15 . To our knowledge, this work is the first to use ConvNets to predict the dynamics of granular media.
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_28", "@cite_6", "@cite_15", "@cite_16", "@cite_17" ], "mid": [ "2613576910", "2410156224", "2951384764", "2483265793", "", "2531280530", "2952115723" ], "abstract": [ "", "We introduce SE3-Nets, which are deep networks designed to model rigid body motion from raw point cloud data. Based only on pairs of depth images along with an action vector and point wise data associations, SE3-Nets learn to segment effected object parts and predict their motion resulting from the applied force. Rather than learning point wise flow vectors, SE3-Nets predict SE3 transformations for different parts of the scene. Using simulated depth data of a table top scene and a robot manipulator, we show that the structure underlying SE3-Nets enables them to generate a far more consistent prediction of object motion than traditional flow based networks.", "Wooden blocks are a common toy for infants, allowing them to develop motor skills and gain intuition about the physical behavior of the world. In this paper, we explore the ability of deep feed-forward models to learn such intuitive physics. Using a 3D game engine, we create small towers of wooden blocks whose stability is randomized and render them collapsing (or remaining upright). This data allows us to train large convolutional network models which can accurately predict the outcome, as well as estimating the block trajectories. The models are also able to generalize in two important ways: (i) to new physical scenarios, e.g. towers with an additional block and (ii) to images of real wooden blocks, where it obtains a performance comparable to human subjects.", "Recent advances in AI and robotics have claimed many incredible results with deep learning, yet no work to date has applied deep learning to the problem of liquid perception and reasoning. In this paper, we apply fully-convolutional deep neural networks to the tasks of detecting and tracking liquids. We evaluate three models: a single-frame network, multi-frame network, and a LSTM recurrent network. Our results show that the best liquid detection results are achieved when aggregating data over multiple frames and that the LSTM network outperforms the other two in both tasks. This suggests that LSTM-based neural networks have the potential to be a key component for enabling robots to handle liquids using robust, closed-loop controllers.", "", "Pouring a specific amount of liquid is a challenging task. In this paper we develop methods for robots to use visual feedback to perform closed-loop control for pouring liquids. We propose both a model-based and a model-free method utilizing deep learning for estimating the volume of liquid in a container. Our results show that the model-free method is better able to estimate the volume. We combine this with a simple PID controller to pour specific amounts of liquid, and show that the robot is able to achieve an average 38ml deviation from the target amount. To our knowledge, this is the first use of raw visual feedback to pour liquids in robotics.", "We present the Neural Physics Engine (NPE), a framework for learning simulators of intuitive physics that naturally generalize across variable object count and different scene configurations. We propose a factorization of a physical scene into composable object-based representations and a neural network architecture whose compositional structure factorizes object dynamics into pairwise interactions. 
Like a symbolic physics engine, the NPE is endowed with generic notions of objects and their interactions; realized as a neural network, it can be trained via stochastic gradient descent to adapt to specific object properties and dynamics of different worlds. We evaluate the efficacy of our approach on simple rigid body dynamics in two-dimensional worlds. By comparing to less structured architectures, we show that the NPE's compositional representation of the structure in physical interactions improves its ability to predict movement, generalize across variable object count and different scene configurations, and infer latent properties of objects such as mass." ] }
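The closed-loop pouring work cited above (@cite_16) pairs a visual volume estimator with a simple PID controller. Below is a minimal, self-contained sketch of that control idea on an invented plant: the linear tilt-to-flow model, the gains, and the limits are all assumptions for illustration, not the cited system.

def pour_sim(target_ml, kp=0.8, ki=0.05, kd=0.2, dt=0.05, steps=400):
    # PID on the volume error drives the tilt angle; the flow is a toy
    # linear function of angle, standing in for a learned volume estimate.
    poured, integ, prev_err = 0.0, 0.0, target_ml
    for _ in range(steps):
        err = target_ml - poured
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        angle = max(0.0, min(90.0, kp * err + ki * integ + kd * deriv))
        poured += 0.2 * angle * dt  # assumed flow model: 0.2 ml/s per degree
    return poured

print(pour_sim(100.0))  # settles near the 100 ml target under the toy plant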
1709.02888
2755530029
Markov Chain Monte Carlo (MCMC) sampling methods are widely used but often encounter either slow convergence or biased sampling when applied to multimodal high dimensional distributions. In this paper, we present a general framework for improving classical MCMC samplers by employing a global optimization method. The global optimization method first reduces a high dimensional search to a one-dimensional geodesic to find a starting point close to a local mode. The search is accelerated and completed by using a local search method such as BFGS. We modify the target distribution by extracting a local Gaussian distribution around the found mode. The process is repeated to find all the modes during sampling on the fly. We integrate the optimization algorithm into the Wormhole Hamiltonian Monte Carlo (WHMC) method. Experimental results show that, when applied to high dimensional, multimodal Gaussian mixture models and the network sensor localization problem, the proposed method achieves much faster convergence, with relative error from the mean improved by about an order of magnitude over WHMC in some cases.
Attempts have been made to tune HMC parameters adaptively. The No-U-Turn Sampler (NUTS) uses a recursive algorithm to propose candidate points in a wide region of the target distribution and stops the trajectory as it makes a U-turn. The step sizes can also be adjusted adaptively in NUTS. The Adaptive Hamiltonian and Riemann Monte Carlo (AHRMC) sampler @cite_0 uses a Bayesian optimization method to tune HMC parameters.
{ "cite_N": [ "@cite_0" ], "mid": [ "1545319692" ], "abstract": [ "The paper proposes Metropolis adjusted Langevin and Hamiltonian Monte Carlo sampling methods defined on the Riemann manifold to resolve the shortcomings of existing Monte Carlo algorithms when sampling from target densities that may be high dimensional and exhibit strong correlations. The methods provide fully automated adaptation mechanisms that circumvent the costly pilot runs that are required to tune proposal densities for Metropolis–Hastings or indeed Hamiltonian Monte Carlo and Metropolis adjusted Langevin algorithms. This allows for highly efficient sampling even in very high dimensions where different scalings may be required for the transient and stationary phases of the Markov chain. The methodology proposed exploits the Riemann geometry of the parameter space of statistical models and thus automatically adapts to the local structure when simulating paths across this manifold, providing highly efficient convergence and exploration of the target density. The performance of these Riemann manifold Monte Carlo methods is rigorously assessed by performing inference on logistic regression models, log-Gaussian Cox point processes, stochastic volatility models and Bayesian estimation of dynamic systems described by non-linear differential equations. Substantial improvements in the time-normalized effective sample size are reported when compared with alternative sampling approaches. MATLAB code that is available from http: www.ucl.ac.uk statistics research rmhmc allows replication of all the results reported." ] }
1709.02707
2769022801
Consider the following estimation problem: there are @math entities, each with an unknown parameter @math , and we observe @math independent random variables, @math , with @math Binomial @math . How accurately can one recover the "histogram" (i.e. cumulative density function) of the @math 's? While the empirical estimates would recover the histogram to earth mover distance @math (equivalently, @math distance between the CDFs), we show that, provided @math is sufficiently large, we can achieve error @math which is information theoretically optimal. We also extend our results to the multi-dimensional parameter case, capturing settings where each member of the population has multiple associated parameters. Beyond the theoretical results, we demonstrate that the recovery algorithm performs well in practice on a variety of datasets, providing illuminating insights into several domains, including politics, sports analytics, and variation in the gender ratio of offspring.
The seminal paper of Charles Stein @cite_5 was one of the earliest papers to identify the surprising possibility of leveraging the availability of independent data reflecting a large number of parameters of interest, to partially compensate for having little information about each parameter. The specific setting examined was the problem of estimating a list of unknown means, @math , given access to @math independent Gaussian random variables, @math , with @math . Stein showed, perhaps surprisingly, that there is an estimator for the list of parameters @math that has smaller expected squared error than the naive unbiased empirical estimates of @math . This improved estimator "shrinks" the empirical estimates towards the average of the @math 's. In our setting, the process of recovering the set (histogram) of unknown @math 's and then leveraging this recovered set as a prior to correct the empirical estimates of each @math can be viewed as an analog of Stein's "shrinkage", and will have the property that the empirical estimates are shifted (in a non-linear fashion) towards the average of the @math 's.
{ "cite_N": [ "@cite_5" ], "mid": [ "1598266570" ], "abstract": [ "Abstract : If one observes the real random variables Xi, Xn independently normally distributed with unknown means xi...x in and variance 1, it is customary to estimate xi by Xi. If the loss is the sum of squares of the errors, this estimator is admissible for n or equal to 2, but inadmissible for n more than or equal to 3. Since the usual estimator is best among those which transform correctly under translation, any admissible estimator for n equals more than or equal to 3 involves an arbitrary choice. While the results of this paper are not in a form suitable for immediate practical application, the possible improvement over the usual estimator seems to be large enough to be of practical importance if n is large." ] }
1709.02707
2769022801
Consider the following estimation problem: there are @math entities, each with an unknown parameter @math , and we observe @math independent random variables, @math , with @math Binomial @math . How accurately can one recover the "histogram" (i.e. cumulative density function) of the @math 's? While the empirical estimates would recover the histogram to earth mover distance @math (equivalently, @math distance between the CDFs), we show that, provided @math is sufficiently large, we can achieve error @math which is information theoretically optimal. We also extend our results to the multi-dimensional parameter case, capturing settings where each member of the population has multiple associated parameters. Beyond the theoretical results, we demonstrate that the recovery algorithm performs well in practice on a variety of datasets, providing illuminating insights into several domains, including politics, sports analytics, and variation in the gender ratio of offspring.
More closely related to the problem considered in this paper is the work on recovering an approximation to the unlabeled multiset of probabilities of domain elements, given independent draws from a distribution of large discrete support (see e.g. @cite_12 @cite_13 @cite_14 @cite_19 @cite_8 ). Instead of learning the distribution, these works considered the alternate goal of simply returning an approximation to the multiset of probabilities with which the domain elements arise but without specifying which element occurs with which probability. Such a multiset can be used to estimate useful properties of the distribution that do not depend on the labels of the domain of the distribution, such as the entropy or support size of the distribution, or the number of elements likely to be observed in a new, larger sample @cite_15 @cite_1 . The benefit of pursuing this weaker goal of returning the unlabeled multiset is that it can be learned to significantly higher accuracy for a given sample size---essentially as accurate as the empirical distribution of a sample that is a logarithmic factor larger @cite_14 @cite_1 .
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_1", "@cite_19", "@cite_15", "@cite_13", "@cite_12" ], "mid": [ "2127090196", "2951753179", "2419099043", "2762763087", "", "2163450622", "" ], "abstract": [ "We introduce a new approach to characterizing the unobserved portion of a distribution, which provides sublinear--sample estimators achieving arbitrarily small additive constant error for a class of properties that includes entropy and distribution support size. Additionally, we show new matching lower bounds. Together, this settles the longstanding question of the sample complexities of these estimation problems, up to constant factors. Our algorithm estimates these properties up to an arbitrarily small additive constant, using O(n log n) samples, where n is a bound on the support size, or in the case of estimating the support size, 1 n is a lower bound on the probability of any element of the domain. Previously, no explicit sublinear--sample algorithms for either of these problems were known. Our algorithm is also computationally extremely efficient, running in time linear in the number of samples used. In the second half of the paper, we provide a matching lower bound of Ω(n log n) samples for estimating entropy or distribution support size to within an additive constant. The previous lower-bounds on these sample complexities were n 2O(√log n). To show our lower bound, we prove two new and natural multivariate central limit theorems (CLTs); the first uses Stein's method to relate the sum of independent distributions to the multivariate Gaussian of corresponding mean and covariance, under the earthmover distance metric (also known as the Wasserstein metric). We leverage this central limit theorem to prove a stronger but more specific central limit theorem for \"generalized multinomial\" distributions---a large class of discrete distributions, parameterized by matrices, that represents sums of independent binomial or multinomial distributions, and describes many distributions encountered in computer science. Convergence here is in the strong sense of statistical distance, which immediately implies that any algorithm with input drawn from a generalized multinomial distribution behaves essentially as if the input were drawn from a discretized Gaussian with the same mean and covariance. Such tools in the multivariate setting are rare, and we hope this new tool will be of use to the community.", "The advent of data science has spurred interest in estimating properties of distributions over large alphabets. Fundamental symmetric properties such as support size, support coverage, entropy, and proximity to uniformity, received most attention, with each property estimated using a different technique and often intricate analysis tools. We prove that for all these properties, a single, simple, plug-in estimator---profile maximum likelihood (PML)---performs as well as the best specialized techniques. This raises the possibility that PML may optimally estimate many other symmetric properties.", "We consider the following basic learning task: given independent draws from an unknown distribution over a discrete support, output an approximation of the distribution that is as accurate as possible in L1 distance (equivalently, total variation distance, or \"statistical distance\"). 
Perhaps surprisingly, it is often possible to \"de-noise\" the empirical distribution of the samples to return an approximation of the true distribution that is significantly more accurate than the empirical distribution, without relying on any prior assumptions on the distribution. We present an instance optimal learning algorithm which optimally performs this de-noising for every distribution for which such a de-noising is possible. More formally, given n independent draws from a distribution p, our algorithm returns a labelled vector whose expected distance from p is equal to the minimum possible expected error that could be obtained by any algorithm, even one that is given the true unlabeled vector of probabilities of distribution p and simply needs to assign labels---up to an additive subconstant term that is independent of p and goes to zero as n gets large. This somewhat surprising result has several conceptual implications, including the fact that, for any large sample from a distribution over discrete support, prior knowledge of the rates of decay of the tails of the distribution (e.g. power-law type assumptions) is not significantly helpful for the task of learning the distribution. As a consequence of our techniques, we also show that given a set of n samples from an arbitrary distribution, one can accurately estimate the expected number of distinct elements that will be observed in a sample of any size up to n log n. This sort of extrapolation is practically relevant, particularly to domains such as genomics where it is important to understand how much more might be discovered given larger sample sizes, and we are optimistic that our approach is practically viable.", "We show that a class of statistical properties of distributions, which includes such practically relevant properties as entropy, the number of distinct elements, and distance metrics between pairs of distributions, can be estimated given a sublinear sized sample. Specifically, given a sample consisting of independent draws from any distribution over at most k distinct elements, these properties can be estimated accurately using a sample of size O(k log k). For these estimation tasks, this performance is optimal, to constant factors. Complementing these theoretical results, we also demonstrate that our estimators perform exceptionally well, in practice, for a variety of estimation tasks, on a variety of natural distributions, for a wide range of parameters. The key step in our approach is to first use the sample to characterize the “unseen” portion of the distribution—effectively reconstructing this portion of the distribution as accurately as if one had a logarithmic factor larger sample. This goes beyond such tools as the Good-Turing frequency estimation scheme, which estimates the total probability mass of the unobserved portion of the distribution: We seek to estimate the shape of the unobserved portion of the distribution. This work can be seen as introducing a robust, general, and theoretically principled framework that, for many practical applications, essentially amplifies the sample size by a logarithmic factor; we expect that it may be fruitfully used as a component within larger machine learning and statistical analysis systems.", "", "We derive some general sufficient conditions for the uniformity of the Pattern Maximum Likelihood distribution (PML). We also provide upper bounds on the support size of a class of patterns, and mention some recent results about the PML of 1112234.", "" ] }
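The "unlabeled multiset" view discussed above is usually operationalized through the sample's fingerprint: for each frequency k, the number of distinct domain elements observed exactly k times. A small sketch of the fingerprint and of the naive plug-in entropy it feeds into (the cited estimators go further and correct the plug-in's bias):

import math
from collections import Counter

def fingerprint(sample):
    # frequency k -> number of distinct elements seen exactly k times
    counts = Counter(sample)
    return Counter(counts.values())

def plugin_entropy(sample):
    # Naive empirical-distribution entropy; biased downward for small samples.
    n = len(sample)
    return -sum(c / n * math.log(c / n) for c in Counter(sample).values())

sample = ["a", "b", "a", "c", "d", "a", "b", "e"]
print(fingerprint(sample))     # {1: 3, 2: 1, 3: 1}, up to ordering
print(plugin_entropy(sample))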
1709.02707
2769022801
Consider the following estimation problem: there are @math entities, each with an unknown parameter @math , and we observe @math independent random variables, @math , with @math Binomial @math . How accurately can one recover the "histogram" (i.e. cumulative density function) of the @math 's? While the empirical estimates would recover the histogram to earth mover distance @math (equivalently, @math distance between the CDFs), we show that, provided @math is sufficiently large, we can achieve error @math which is information theoretically optimal. We also extend our results to the multi-dimensional parameter case, capturing settings where each member of the population has multiple associated parameters. Beyond the theoretical results, we demonstrate that the recovery algorithm performs well in practice on a variety of datasets, providing illuminating insights into several domains, including politics, sports analytics, and variation in the gender ratio of offspring.
Building on the above work, the recent work @cite_9 considered the problem of recovering the "frequency spectrum" of rare genetic variants. This problem is similar to the problem we consider, but focuses on a rather different regime. Specifically, the model considered posits that each location @math in the genome has some probability @math of being mutated in a given individual. Given the sequences of @math individuals, the goal is to recover the set of @math 's. The work @cite_9 focused on the regime in which many of the @math 's are significantly less than @math , and hence correspond to mutations that have never been observed; one conclusion of that work was that one can accurately estimate the number of such rare mutations that would be discovered in larger sequencing cohorts. Our work, in contrast, focuses on the regime where the @math 's are constant, and do not scale as a function of @math , and the results are incomparable.
{ "cite_N": [ "@cite_9" ], "mid": [ "2545606300" ], "abstract": [ "Accurate estimations of the frequency distribution of rare variants are needed to quantify the discovery power and guide large-scale human sequencing projects. This study describes an algorithm called UnseenEst to estimate the distribution of genetic variations using tens of thousands of exomes." ] }
1709.02457
2751686959
The last decade has seen a surge of interest in adaptive learning algorithms for data stream classification, with applications ranging from predicting ozone level peaks, learning stock market indicators, to detecting computer security violations. In addition, a number of methods have been developed to detect concept drifts in these streams. Consider a scenario where we have a number of classifiers with diverse learning styles and different drift detectors. Intuitively, the current ‘best’ (classifier, detector) pair is application dependent and may change as a result of the stream evolution. Our research builds on this observation. We introduce the Tornado framework that implements a reservoir of diverse classifiers, together with a variety of drift detection algorithms. In our framework, all (classifier, detector) pairs proceed, in parallel, to construct models against the evolving data streams. At any point in time, we select the pair which currently yields the best performance. To this end, we introduce the CAR measure, which is employed to balance classification, adaptation and resource utilization requirements. We further incorporate two novel stacking-based drift detection methods, namely the FHDDMS and FHDDMS_add approaches. The experimental evaluation confirms that the current ‘best’ (classifier, detector) pair is not only heavily dependent on the characteristics of the stream, but also that this selection evolves as the stream flows. Further, our FHDDMS variants detect concept drifts accurately in a timely fashion while outperforming the state-of-the-art.
Researchers agree that the evaluation of data stream algorithms is a complex task. This fact is due to many challenges, including the presence of concept drift, limited processing time in real-world applications and the need for time-oriented evaluation, amongst others. The error-rate (or accuracy) is most often used as the defining measure of classification performance for evaluating learning algorithms in most streaming studies. The error-rate is calculated incrementally using either the prequential or hold-out evaluation procedures. The interplay between the error-rate and other factors, such as memory usage and runtime considerations, has received limited attention. @cite_6 considered the memory, time and accuracy measures separately, in order to compare the performances of ensembles of classifiers. @cite_12 further introduced the RAM-Hour measure, where every RAM-Hour equals 1 GB of RAM occupied for one hour, to compare the performances of three versions of perceptron-based Hoeffding Trees. @cite_11 introduced the EMR measure which combines error-rate, memory usage and runtime for evaluating and ranking learning algorithms.
{ "cite_N": [ "@cite_12", "@cite_6", "@cite_11" ], "mid": [ "1577823635", "2016159616", "2523646960" ], "abstract": [ "Mining of data streams must balance three evaluation dimensions: accuracy, time and memory Excellent accuracy on data streams has been obtained with Naive Bayes Hoeffding Trees—Hoeffding Trees with naive Bayes models at the leaf nodes—albeit with increased runtime compared to standard Hoeffding Trees In this paper, we show that runtime can be reduced by replacing naive Bayes with perceptron classifiers, while maintaining highly competitive accuracy We also show that accuracy can be increased even further by combining majority vote, naive Bayes, and perceptrons We evaluate four perceptron-based learning strategies and compare them against appropriate baselines: simple perceptrons, Perceptron Hoeffding Trees, hybrid Naive Bayes Perceptron Trees, and bagged versions thereof We implement a perceptron that uses the sigmoid activation function instead of the threshold activation function and optimizes the squared error, with one perceptron per class value We test our methods by performing an evaluation study on synthetic and real-world datasets comprising up to ten million examples.", "Advanced analysis of data streams is quickly becoming a key area of data mining research as the number of applications demanding such processing increases. Online mining when such data streams evolve over time, that is when concepts drift or change completely, is becoming one of the core issues. When tackling non-stationary concepts, ensembles of classifiers have several advantages over single classifier methods: they are easy to scale and parallelize, they can adapt to change quickly by pruning under-performing parts of the ensemble, and they therefore usually also generate more accurate concept descriptions. This paper proposes a new experimental data stream framework for studying concept drift, and two new variants of Bagging: ADWIN Bagging and Adaptive-Size Hoeffding Tree (ASHT) Bagging. Using the new experimental framework, an evaluation study on synthetic and real-world datasets comprising up to ten million examples shows that the new ensemble methods perform very well compared to several known methods.", "Adaptive online learning algorithms have been successfully applied to fast-evolving data streams. Such streams are susceptible to concept drift, which implies that the most suitable type of classifier often changes over time. In this setting, a system that is able to seamlessly select the type of learner that presents the current “best” model holds much value. For example, in a scenario such as user profiling for security applications, model adaptation is of the utmost importance. We have implemented a multi-strategy framework, the so-called Tornado environment, which is able to run multiple and diverse classifiers simultaneously for decision making. In our framework, the current learner with the highest performance, at a specific point in time, is selected and the corresponding model is then provided to the user. In our implementation, we employ an Error-Memory-Runtime (EMR) measure which combines the error-rate, the memory usage and the runtime of classifiers as a performance indicator. We conducted experiments on synthetic and real-world datasets with the Hoeffding Tree, Naive Bayes, Perceptron, K-Nearest Neighbours and Decision Stumps algorithms. 
Our results indicate that our environment is able to adapt to changes and to continuously select the best current type of classifier, as the data evolve." ] }
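The prequential procedure mentioned above interleaves testing and training: each arriving instance is first used to test the current model and only then to update it. A minimal sketch with a fading factor follows; the majority-class learner and the factor value are placeholders, not any of the cited systems.

def prequential(stream, learner, alpha=0.999):
    # Test-then-train; the fading factor discounts old performance.
    num, den = 0.0, 0.0
    for x, y in stream:
        correct = 1.0 if learner.predict(x) == y else 0.0
        num = alpha * num + correct
        den = alpha * den + 1.0
        learner.learn(x, y)
        yield num / den  # running (faded) accuracy

class MajorityClass:
    def __init__(self):
        self.counts = {}
    def predict(self, x):
        return max(self.counts, key=self.counts.get) if self.counts else None
    def learn(self, x, y):
        self.counts[y] = self.counts.get(y, 0) + 1

stream = [(None, 0)] * 500 + [(None, 1)] * 500  # abrupt drift at t = 500
accs = list(prequential(stream, MajorityClass()))
print(accs[499], accs[-1])  # accuracy collapses after the drift: the static
                            # learner never adapts, which is why detectors matter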
1709.02457
2751686959
The last decade has seen a surge of interest in adaptive learning algorithms for data stream classification, with applications ranging from predicting ozone level peaks, learning stock market indicators, to detecting computer security violations. In addition, a number of methods have been developed to detect concept drifts in these streams. Consider a scenario where we have a number of classifiers with diverse learning styles and different drift detectors. Intuitively, the current ‘best’ (classifier, detector) pair is application dependent and may change as a result of the stream evolution. Our research builds on this observation. We introduce the Tornado framework that implements a reservoir of diverse classifiers, together with a variety of drift detection algorithms. In our framework, all (classifier, detector) pairs proceed, in parallel, to construct models against the evolving data streams. At any point in time, we select the pair which currently yields the best performance. To this end, we introduce the CAR measure, which is employed to balance classification, adaptation and resource utilization requirements. We further incorporate two novel stacking-based drift detection methods, namely the FHDDMS and FHDDMS_add approaches. The experimental evaluation confirms that the current ‘best’ (classifier, detector) pair is not only heavily dependent on the characteristics of the stream, but also that this selection evolves as the stream flows. Further, our FHDDMS variants detect concept drifts accurately in a timely fashion while outperforming the state-of-the-art.
@cite_4 introduced the return on investment (ROI) measure to determine whether the adaptation of a learning algorithm is beneficial. They concluded that adaptation should only take place if the expected gain in performance, measured by accuracy, exceeds the cost of other resources (e.g. memory and time) required for adaptation. In their work, the ROI measure was used to indicate whether an adaptation to a concept drift is beneficial, over time. @cite_2 extended the above-mentioned ROI measure, in order to dynamically adapt the number of base learners in online bagging ensembles. @cite_10 proposed an approach to count the true positives (TP), false positives (FP), and false negatives (FN) of drift detection, in order to evaluate the performances of concept drift detectors. They introduced the acceptable delay length notion as a threshold that determines how far a detected drift may be from the real location of a drift and still be considered a true positive.
{ "cite_N": [ "@cite_10", "@cite_4", "@cite_2" ], "mid": [ "2517990807", "2015121669", "2461118250" ], "abstract": [ "Decision makers increasingly require near-instant models to make sense of fast evolving data streams. Learning from such evolving environments is, however, a challenging task. This challenge is partially due to the fact that the distribution of data often changes over time, thus potentially leading to degradation in the overall performance. In particular, classification algorithms need to adapt their models after facing such distributional changes (also referred to as concept drifts). Usually, drift detection methods are utilized in order to accomplish this task. It follows that detecting concept drifts as soon as possible, while resulting in fewer false positives and false negatives, is a major objective of drift detectors. To this end, we introduce the Fast Hoeffding Drift Detection Method (FHDDM) which detects the drift points using a sliding window and Hoeffding’s inequality. FHDDM detects a drift when a significant difference between the maximum probability of correct predictions and the most recent probability of correct predictions is observed. Experimental results confirm that FHDDM detects drifts with less detection delay, less false positive and less false negative, when compared to the state-of-the-art.", "In the era of \"big\" data, data analysis algorithms need to be efficient. Traditionally researchers would tackle this problem by considering \"small\" data algorithms, and investigating how to make them computationally more efficient for big data applications. The main means to achieve computational efficiency would be to revise the necessity and order of subroutines, or to approximate calculations. This paper presents a viewpoint that in order to be able to cope with the new challenges of the growing digital universe, research needs to take a combined view towards data analysis algorithm design and hardware design, and discusses a potential research direction in taking an intreated approach in terms of algorithm design and hardware design. Analyzing how data mining algorithms operate at the elementary operations level can help do design more specialized and dedicated hardware, that, for instance, would be more energy efficient. In turn, understanding hardware design can help to develop more effective algorithms.", "Online ensemble methods have been very successful to create accurate models against data streams that are susceptible to concept drift. The success of data stream mining has allowed diverse users to analyse their data in multiple domains, ranging from monitoring stock markets to analysing network traffic and exploring ATM transactions. Increasingly, data stream mining applications are running on mobile devices, utilizing the variety of data generated by sensors and network technologies. Subsequently, there has been a surge in interest in mobile or so-called pocket data stream mining, aiming to construct near real-time models. However, it follows that the computational resources are limited and that there is a need to adapt analytics to map the resource usage requirements. In this context, the resultant models produced by such algorithms should thus not only be highly accurate and be able to swiftly adapt to changes. Rather, the data mining techniques should also be fast, scalable, and efficient in terms of resource allocation. It then becomes important to consider Return on Investment ROI issues such as storage space needs and memory utilization. 
This paper introduces the Adaptive Ensemble Size (AES) algorithm, an extension of the Online Bagging method, to address this issue. Our AES method dynamically adapts the sizes of ensembles, based on the most recent memory usage requirements. Our results when comparing our AES algorithm with the state-of-the-art indicate that we are able to obtain a high Return on Investment (ROI) without compromising on the accuracy of the results." ] }
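The acceptable delay length evaluation described above reduces to a small matching routine: a detection counts as a true positive only if it falls within `delay` instances after a real drift, each detection is used at most once, and everything unmatched becomes a false positive or a false negative. A sketch under exactly those assumptions:

def score_detections(true_drifts, detections, delay):
    # TP/FP/FN under an acceptable delay length, one detection per drift.
    pending = sorted(detections)
    used = set()
    tp = 0
    for d in sorted(true_drifts):
        hit = next((t for t in pending if d <= t <= d + delay and t not in used), None)
        if hit is not None:
            used.add(hit)
            tp += 1
    fp = len(pending) - len(used)
    fn = len(true_drifts) - tp
    return tp, fp, fn

print(score_detections([100, 300], [104, 180, 308], delay=10))  # (2, 1, 0)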
1709.02529
2750875638
Many applications need to process massive streams of spatio-textual data in real-time against continuous spatio-textual queries. For example, in location-aware ad targeting publish subscribe systems, it is required to disseminate millions of ads and promotions to millions of users based on the locations and textual profiles of users. In this paper, we study indexing of continuous spatio-textual queries. There exist several related spatio-textual indexes that typically integrate a spatial index with a textual index. However, these indexes usually have a high demand for main-memory and assume that the entire vocabulary of keywords is known in advance. Also, these indexes do not successfully capture the variations in the frequencies of keywords across different spatial regions and treat frequent and infrequent keywords in the same way. Moreover, existing indexes do not adapt to the changes in workload over space and time. For example, some keywords may be trending at certain times in certain locations and this may change as time passes. This affects the indexing and searching performance of existing indexes significantly. In this paper, we introduce FAST, a Frequency-Aware Spatio-Textual index for continuous spatio-textual queries. FAST is a main-memory index that requires up to one third of the memory needed by the state-of-the-art index. FAST does not assume prior knowledge of the entire vocabulary of indexed objects. FAST adaptively accounts for the difference in the frequencies of keywords within their corresponding spatial regions to automatically choose the best indexing approach that optimizes the insertion and search times. Extensive experimental evaluation using real and synthetic datasets demonstrates that FAST is up to 3x faster in search time and 5x faster in insertion time than the state-of-the-art indexes.
Spatio-Textual Indexing. Recently, several spatio-textual indexes have been proposed to answer snapshot queries over spatio-textual data. Examples of these queries include the filter, top-k, and collective group queries. @cite_22 surveys spatio-textual indexes and benchmarks their performance under various spatio-textual queries. The most relevant indexes are the IQ-tree @cite_8 and the R @math -tree @cite_13 . These indexes are mainly disk-based and have been outperformed by the AP-tree @cite_6 .
{ "cite_N": [ "@cite_13", "@cite_22", "@cite_6", "@cite_8" ], "mid": [ "2159393864", "2167275936", "2006307108", "2169307587" ], "abstract": [ "Location-based services have become widely available on mobile devices. Existing methods employ a pull model or user-initiated model, where a user issues a query to a server which replies with location-aware answers. To provide users with instant replies, a push model or server-initiated model is becoming an inevitable computing model in the next-generation location-based services. In the push model, subscribers register spatio-textual subscriptions to capture their interests, and publishers post spatio-textual messages. This calls for a high-performance location-aware publish subscribe system to deliver publishers' messages to relevant subscribers.In this paper, we address the research challenges that arise in designing a location-aware publish subscribe system. We propose an rtree based index structure by integrating textual descriptions into rtree nodes. We devise efficient filtering algorithms and develop effective pruning techniques to improve filtering efficiency. Experimental results show that our method achieves high performance. For example, our method can filter 500 tweets in a second for 10 million registered subscriptions on a commodity computer.", "Geo-textual indices play an important role in spatial keyword querying. The existing geo-textual indices have not been compared systematically under the same experimental framework. This makes it difficult to determine which indexing technique best supports specific functionality. We provide an all-around survey of 12 state-of-the-art geo-textual indices. We propose a benchmark that enables the comparison of the spatial keyword query performance. We also report on the findings obtained when applying the benchmark to the indices, thus uncovering new insights that may guide index selection as well as further research.", "Many applications require finding objects closest to a specified location that contains a set of keywords. For example, online yellow pages allow users to specify an address and a set of keywords. In return, the user obtains a list of businesses whose description contains these keywords, ordered by their distance from the specified address. The problems of nearest neighbor search on spatial data and keyword search on text data have been extensively studied separately. However, to the best of our knowledge there is no efficient method to answer spatial keyword queries, that is, queries that specify both a location and a set of keywords. In this work, we present an efficient method to answer top-k spatial keyword queries. To do so, we introduce an indexing structure called IR2-Tree (Information Retrieval R-Tree) which combines an R-Tree with superimposed text signatures. We present algorithms that construct and maintain an IR2-Tree, and use it to answer top-k spatial keyword queries. Our algorithms are experimentally compared to current methods and are shown to have superior performance and excellent scalability.", "Geographic web search engines allow users to constrain and order search results in an intuitive manner by focusing a query on a particular geographic region. Geographic search technology, also called local search, has recently received significant interest from major search engine companies. Academic research in this area has focused primarily on techniques for extracting geographic knowledge from the web. 
In this paper, we study the problem of efficient query processing in scalable geographic search engines. Query processing is a major bottleneck in standard web search engines, and the main reason for the thousands of machines used by the major engines. Geographic search engine query processing is different in that it requires a combination of text and spatial data processing techniques. We propose several algorithms for efficient query processing in geographic search engines, integrate them into an existing web search query processor, and evaluate them on large sets of real data and query traces." ] }
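To make the frequency-aware idea concrete, here is a toy per-cell structure in the spirit of FAST; the design is a drastic simplification for illustration, not the paper's index. Each grid cell scans a plain list for queries whose rarest keyword is still infrequent, and keeps dedicated posting lists once a keyword has been observed often enough.

from collections import defaultdict

class Cell:
    def __init__(self, threshold=16):
        self.threshold = threshold
        self.scan_list = []                # (keywords, query_id), scanned linearly
        self.postings = defaultdict(list)  # keyword -> [(keywords, query_id)]
        self.freq = defaultdict(int)       # observed keyword frequencies in this cell

    def insert(self, keywords, query_id):
        for kw in keywords:
            self.freq[kw] += 1
        rare = min(keywords, key=lambda kw: self.freq[kw])
        if self.freq[rare] < self.threshold:
            self.scan_list.append((set(keywords), query_id))
        else:
            self.postings[rare].append((set(keywords), query_id))

    def match(self, obj_keywords):
        # Return queries whose keywords are all contained in the object's text.
        obj = set(obj_keywords)
        hits = [q for kws, q in self.scan_list if kws <= obj]
        for kw in obj:
            hits += [q for kws, q in self.postings.get(kw, []) if kws <= obj]
        return hits

A real index would also rebalance as keyword frequencies drift over time, which is precisely the adaptivity the section argues existing indexes lack.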
1709.02529
2750875638
Many applications need to process massive streams of spatio-textual data in real-time against continuous spatio-textual queries. For example, in location-aware ad targeting publish subscribe systems, it is required to disseminate millions of ads and promotions to millions of users based on the locations and textual profiles of users. In this paper, we study indexing of continuous spatio-textual queries. There exist several related spatio-textual indexes that typically integrate a spatial index with a textual index. However, these indexes usually have a high demand for main-memory and assume that the entire vocabulary of keywords is known in advance. Also, these indexes do not successfully capture the variations in the frequencies of keywords across different spatial regions and treat frequent and infrequent keywords in the same way. Moreover, existing indexes do not adapt to the changes in workload over space and time. For example, some keywords may be trending at certain times in certain locations and this may change as time passes. This affects the indexing and searching performance of existing indexes significantly. In this paper, we introduce FAST, a Frequency-Aware Spatio-Textual index for continuous spatio-textual queries. FAST is a main-memory index that requires up to one third of the memory needed by the state-of-the-art index. FAST does not assume prior knowledge of the entire vocabulary of indexed objects. FAST adaptively accounts for the difference in the frequencies of keywords within their corresponding spatial regions to automatically choose the best indexing approach that optimizes the insertion and search times. Extensive experimental evaluation using real and synthetic datasets demonstrates that FAST is up to 3x faster in search time and 5x faster in insertion time than the state-of-the-art indexes.
Publish Subscribe Systems. One main use case of FAST is in location-aware publish subscribe systems. Publish subscribe systems maintain subscriptions for long durations and match incoming messages against stored subscriptions. Publish subscribe systems can be categorized according to their matching approach into the following categories: (1) content-based @cite_19 , (2) TopK-similarity-based @cite_18 , and (3) location-aware @cite_2 . These publish subscribe systems do not simultaneously account for the spatial and textual properties of subscriptions and messages. Recently, several spatio-textual publish subscribe systems @cite_6 @cite_26 have been proposed. To the best of our knowledge, the AP-tree @cite_6 is the most relevant work for indexing continuous spatio-textual queries in a streaming environment.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_6", "@cite_19", "@cite_2" ], "mid": [ "2117071212", "1488414089", "2006307108", "2048198841", "2014093272" ], "abstract": [ "Social content, such as Twitter updates, often have the quickest first-hand reports of news events, as well as numerous commentaries that are indicative of public view of such events. As such, social updates provide a good complement to professionally written news articles. In this paper we consider the problem of automatically annotating news stories with social updates (tweets), at a news website serving high volume of pageviews. The high rate of both the pageviews (millions to billions a day) and of the incoming tweets (more than 100 millions a day) make real-time indexing of tweets ineffective, as this requires an index that is both queried and updated extremely frequently. The rate of tweet updates makes caching techniques almost unusable since the cache would become stale very quickly. We propose a novel architecture where each story is treated as a subscription for tweets relevant to the story's content, and new algorithms that efficiently match tweets to stories, proactively maintaining the top-k tweets for each story. Such top-k pub-sub consumes only a small fraction of the resource cost of alternative solutions, and can be applicable to other large scale content-based publish-subscribe problems. We demonstrate the effectiveness of our approach on realworld data: a corpus of news stories from Yahoo! News and a log of Twitter updates.", "With the rapid progress of mobile Internet and the growing popularity of smartphones, location-aware publish subscribe systems have recently attracted significant attention. Different from traditional content-based publish subscribe, subscriptions registered by subscribers and messages published by publishers include both spatial information and textual descriptions, and messages should be delivered to relevant subscribers whose subscriptions have high relevancy to the messages. To evaluate the relevancy between spatio-textual messages and subscriptions, we should combine the spatial proximity and textual relevancy. Since subscribers have different preferences - some subscribers prefer messages with high spatial proximity and some subscribers pay more attention to messages with high textual relevancy, it calls for new location-aware publish subscribe techniques to meet various needs from different subscribers. In this paper, we allow subscribers to parameterize their subscriptions and study the location-aware publish subscribe problem on parameterized spatio-textual subscriptions. One big challenge is to achieve high performance. To meet this requirement, we propose a filter-verification framework to efficiently deliver messages to relevant subscribers. In the filter step, we devise effective filters to prune large numbers of irreverent results and obtain some candidates. In the verification step, we verify the candidates to generate the answers. We propose three effective filters by integrating prefix filtering and spatial pruning techniques. Experimental results show our method achieves higher performance and better quality than baseline approaches.", "Many applications require finding objects closest to a specified location that contains a set of keywords. For example, online yellow pages allow users to specify an address and a set of keywords. 
In return, the user obtains a list of businesses whose description contains these keywords, ordered by their distance from the specified address. The problems of nearest neighbor search on spatial data and keyword search on text data have been extensively studied separately. However, to the best of our knowledge there is no efficient method to answer spatial keyword queries, that is, queries that specify both a location and a set of keywords. In this work, we present an efficient method to answer top-k spatial keyword queries. To do so, we introduce an indexing structure called IR2-Tree (Information Retrieval R-Tree) which combines an R-Tree with superimposed text signatures. We present algorithms that construct and maintain an IR2-Tree, and use it to answer top-k spatial keyword queries. Our algorithms are experimentally compared to current methods and are shown to have superior performance and excellent scalability.", "The advance in wireless Internet and mobile computing brought the booming of intelligent Location-Based Services(LBS), which can actively push location-dependent information to mobile users according to their predefined interest. The successful development of push-based LBS applications relies on the existence of a publish subscribe middleware that can handle spatial relationship. This paper presents an efficient spatial publish subscribe system that can serve as the middleware for intelligent LBS applications. The basic models, including spatial event model, spatial subscription model and notification model, are introduced and the over-all architecture is presented. Two kinds of spatial predicate that can meet most common requirement of intelligent location aware applications are also discussed. Furthermore, we propose a novel spatial event processing approach that dispatches the spatial subscriptions to self-positioning mobile devices. By leveraging client-side computing resource and decreasing the communication times, the server-side workload is relieved and the communication cost is reduced. Experimental results clearly demonstrate the efficiency of our approach.", "This paper presents the Geo Feed system, a location-aware news feed system that provides a new platform for its users to get spatially related message updates from either their friends or favorite news sources. Geo Feed distinguishes itself from all existing news feed systems in that it takes into account the spatial extents of messages and user locations when deciding upon the selected news feed. Geo Feed is equipped with three different approaches for delivering the news feed to its users, namely, spatial pull, spatial push, and shared push. Then, the main challenge of Geo Feed is to decide on when to use each of these three approaches to which users. Geo Feed is equipped with a smart decision model that decides about using these approaches in a way that: (a) minimizes the system overhead for delivering the location-aware news feed, and (b) guarantees a certain response time for each user to obtain the requested location-aware news feed. Experimental results, based on real and synthetic data, show that Geo Feed outperforms existing news feed systems in terms of response time and maintenance cost." ] }
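The matching semantics shared by the systems above can be stated as a brute-force baseline in a few lines; index structures such as the AP-tree exist to avoid exactly this linear scan. The circular range predicate and the keyword-subset predicate are assumptions of this sketch.

import math

def match_message(msg_loc, msg_keywords, subscriptions):
    # subscriptions: list of (center, radius, keywords) tuples.
    msg_kws = set(msg_keywords)
    hits = []
    for sid, (center, radius, kws) in enumerate(subscriptions):
        if math.dist(msg_loc, center) <= radius and set(kws) <= msg_kws:
            hits.append(sid)
    return hits

subs = [((0.0, 0.0), 5.0, ["pizza"]), ((10.0, 10.0), 1.0, ["pizza", "deal"])]
print(match_message((1.0, 1.0), ["pizza", "deal"], subs))  # [0]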
1709.02529
2750875638
Many applications need to process massive streams of spatio-textual data in real-time against continuous spatio-textual queries. For example, in location-aware ad targeting publish subscribe systems, it is required to disseminate millions of ads and promotions to millions of users based on the locations and textual profiles of users. In this paper, we study indexing of continuous spatio-textual queries. There exist several related spatio-textual indexes that typically integrate a spatial index with a textual index. However, these indexes usually have a high demand for main-memory and assume that the entire vocabulary of keywords is known in advance. Also, these indexes do not successfully capture the variations in the frequencies of keywords across different spatial regions and treat frequent and infrequent keywords in the same way. Moreover, existing indexes do not adapt to the changes in workload over space and time. For example, some keywords may be trending at certain times in certain locations and this may change as time passes. This affects the indexing and searching performance of existing indexes significantly. In this paper, we introduce FAST, a Frequency-Aware Spatio-Textual index for continuous spatio-textual queries. FAST is a main-memory index that requires up to one third of the memory needed by the state-of-the-art index. FAST does not assume prior knowledge of the entire vocabulary of indexed objects. FAST adaptively accounts for the difference in the frequencies of keywords within their corresponding spatial regions to automatically choose the best indexing approach that optimizes the insertion and search times. Extensive experimental evaluation using real and synthetic datasets demonstrates that FAST is up to 3x faster in search time and 5x faster in insertion time than the state-of-the-art indexes.
Superset Containment Search. AKI addresses the problem of superset containment search, where the goal is to retrieve indexed items whose keywords are fully contained in the search keywords. Several indexes have been proposed to address the superset containment problem, e.g., @cite_23 @cite_7 @cite_15 @cite_10 . OKT @cite_7 and RIL @cite_23 are the most widely adopted structures for superset containment search @cite_0 . @cite_15 @cite_10 present two further structures for superset containment search; however, these structures are mainly disk-based and require prior knowledge of the frequencies of the entire vocabulary. AKI, in contrast, is a main-memory index and does not assume prior knowledge of keyword frequencies (a minimal matching sketch follows this record).
{ "cite_N": [ "@cite_7", "@cite_0", "@cite_23", "@cite_15", "@cite_10" ], "mid": [ "2054311069", "1976360682", "", "2140314681", "1483240827" ], "abstract": [ "The number, size, and user population of bibliographic and full-text document databases are rapidly growing. With a high document arrival rate, it becomes essential for users of such databases to have access to the very latest documents; yet the high document arrival rate also makes it difficult for users to keep themselves updated. It is desirable to allow users to submit profiles, i.e., queries that are constantly evaluated, so that they will be automatically informed of new additions that may be of interest. Such service is traditionally called Selective Dissemination of Information (SDI). The high document arrival rate, the huge number of users, and the timeliness requirement of the service pose a challenge in achieving efficient SDL. In this article, we propose several index structures for indexing profiles and algorithms that efficiently match documents against large number of profiles. We also present analysis and simulation results to compare their performance under different scenarios.", "The explosion of published information on the Web leads to the emergence of a Web syndication paradigm, which transforms the passive reader into an active information collector. Information consumers subscribe to RSS Atom feeds and are notified whenever a piece of news (item) is published. The success of this Web syndication now offered on Web sites, blogs, and social media, however raises scalability issues. There is a vital need for efficient real-time filtering methods across feeds, to allow users to follow effectively personally interesting information. We investigate in this paper three indexing techniques for users' subscriptions based on inverted lists or on an ordered trie. We present analytical models for memory requirements and matching time and we conduct a thorough experimental evaluation to exhibit the impact of critical workload parameters on these structures.", "", "Consider a text filtering server that monitors a stream of incoming documents for a set of users, who register their interests in the form of continuous text search queries. The task of the server is to constantly maintain for each query a ranked result list, comprising the recent documents (drawn from a sliding window) with the highest similarity to the query. Such a system underlies many text monitoring applications that need to cope with heavy document traffic, such as news and email monitoring. In this paper, we propose the first solution for processing continuous text queries efficiently. Our objective is to support a large number of user queries while sustaining high document arrival rates. Our solution indexes the streamed documents in main memory with a structure based on the principles of the inverted file, and processes document arrival and expiration events with an incremental threshold-based method. We distinguish between two versions of the monitoring algorithm, an eager and a lazy one, which differ in how aggressively they manage the thresholds on the inverted index. 
Using benchmark queries over a stream of real documents, we experimentally verify the efficiency of our methodology; both its versions are at least an order of magnitude faster than a competitor constructed from existing techniques, with lazy being the best approach overall.", "Many web documents refer to specific geographic localities and many people include geographic context in queries to web search engines. Standard web search engines treat the geographical terms in the same way as other terms. This can result in failure to find relevant documents that refer to the place of interest using alternative related names, such as those of included or nearby places. This can be overcome by associating text indexing with spatial indexing methods that exploit geo-tagging procedures to categorise documents with respect to geographic space. We describe three methods for spatio-textual indexing based on multiple spatially indexed text indexes, attaching spatial indexes to the document occurrences of a text index, and merging text index access results with results of access to a spatial index of documents. These schemes are compared experimentally with a conventional text index search engine, using a collection of geo-tagged web documents, and are shown to be able to compete in speed and storage performance with pure text indexing." ] }
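To make the superset containment semantics above concrete, the following is a minimal sketch of inverted-list matching: a subscription matches a message when all of its keywords appear among the message keywords. The `InvertedIndex` class and its method names are illustrative placeholders, not the actual structures of AKI, OKT, or RIL.

```python
from collections import defaultdict

class InvertedIndex:
    """Toy superset-containment matcher over keyword sets."""

    def __init__(self):
        self.postings = defaultdict(list)  # keyword -> [subscription ids]
        self.sizes = {}                    # subscription id -> #distinct keywords

    def insert(self, sub_id, keywords):
        self.sizes[sub_id] = len(set(keywords))
        for kw in set(keywords):
            self.postings[kw].append(sub_id)

    def match(self, message_keywords):
        # Count how many of each subscription's keywords the message covers;
        # a subscription matches when the count reaches its keyword-set size.
        counts = defaultdict(int)
        for kw in set(message_keywords):
            for sub_id in self.postings.get(kw, ()):
                counts[sub_id] += 1
        return [s for s, c in counts.items() if c == self.sizes[s]]

idx = InvertedIndex()
idx.insert("s1", ["coffee", "wifi"])
idx.insert("s2", ["coffee", "vegan", "wifi"])
print(idx.match(["coffee", "wifi", "cheap"]))  # -> ['s1']
```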
1709.02448
2751932958
We propose a neural embedding algorithm called Network Vector, which learns distributed representations of nodes and the entire networks simultaneously. By embedding networks in a low-dimensional space, the algorithm allows us to compare networks in terms of structural similarity and to solve outstanding predictive problems. Unlike alternative approaches that focus on node level features, we learn a continuous global vector that captures each node's global context by maximizing the predictive likelihood of random walk paths in the network. Our algorithm is scalable to real world graphs with many nodes. We evaluate our algorithm on datasets from diverse domains, and compare it with state-of-the-art techniques in node classification, role discovery and concept analogy tasks. The empirical results show the effectiveness and the efficiency of our algorithm.
By exchanging the notions of nodes in a network and words in a document, recent research @cite_0 @cite_26 @cite_22 @cite_11 attempts to learn node representations in a network in the same way that neural language models learn word embeddings. Our work follows this line of approaches, in which nodes in the same neighborhood obtain similar embeddings in vector space. Different node sampling strategies have been explored for characterizing the neighborhood structure. For example, DeepWalk @cite_0 samples node sequences from a network using a stream of short first-order random walks and models them with neural embeddings, just like word sequences in documents (see the sketch after this record). LINE @cite_26 samples nodes in a pairwise manner and models the first-order and second-order proximity between them. GraRep @cite_22 extends LINE to exploit structural information beyond second-order proximity. To offer a flexible node sampling scheme, node2vec @cite_11 uses second-order random walks that combine Depth-First Search (DFS) and Breadth-First Search (BFS) strategies to explore the local neighborhood structure.
{ "cite_N": [ "@cite_0", "@cite_26", "@cite_22", "@cite_11" ], "mid": [ "2154851992", "", "2090891622", "2366141641" ], "abstract": [ "We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10 higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60 less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.", "", "In this paper, we present GraRep , a novel model for learning vertex representations of weighted graphs. This model learns low dimensional vectors to represent vertices appearing in a graph and, unlike existing work, integrates global structural information of the graph into the learning process. We also formally analyze the connections between our work and several previous research efforts, including the DeepWalk model of as well as the skip-gram model with negative sampling of We conduct experiments on a language network, a social network as well as a citation network and show that our learned global representations can be effectively used as features in tasks such as clustering, classification and visualization. Empirical results demonstrate that our representation significantly outperforms other state-of-the-art methods in such tasks.", "Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. 
We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks." ] }
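As a concrete illustration of the random-walk-plus-skip-gram recipe used by DeepWalk, here is a minimal sketch. It assumes networkx and gensim 4.x are installed; the number of walks, walk length, and embedding dimensionality are illustrative defaults rather than values from the cited papers.

```python
import random
import networkx as nx
from gensim.models import Word2Vec

def random_walks(graph, num_walks=10, walk_length=20):
    """Sample short first-order random walks, one pass per start node."""
    walks = []
    nodes = list(graph.nodes())
    for _ in range(num_walks):
        random.shuffle(nodes)
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                neighbors = list(graph.neighbors(walk[-1]))
                if not neighbors:
                    break
                walk.append(random.choice(neighbors))
            walks.append([str(n) for n in walk])  # gensim expects tokens
    return walks

G = nx.karate_club_graph()
model = Word2Vec(random_walks(G), vector_size=64, window=5, sg=1, min_count=1)
print(model.wv["0"].shape)  # 64-dimensional embedding of node 0
```

Biased second-order walks, as in node2vec, would replace `random.choice` with a transition distribution that depends on the previously visited node.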
1709.02508
2753276256
We present a novel deep neural network architecture for unsupervised subspace clustering. This architecture is built upon deep auto-encoders, which non-linearly map the input data into a latent space. Our key idea is to introduce a novel self-expressive layer between the encoder and the decoder to mimic the "self-expressiveness" property that has proven effective in traditional subspace clustering. Being differentiable, our new self-expressive layer provides a simple but effective way to learn pairwise affinities between all data points through a standard back-propagation procedure. Being nonlinear, our neural-network based method is able to cluster data points having complex (often nonlinear) structures. We further propose pre-training and fine-tuning strategies that let us effectively learn the parameters of our subspace clustering networks. Our experiments show that the proposed method significantly outperforms the state-of-the-art unsupervised subspace clustering methods.
Auto-encoders (AEs) can non-linearly transform data into a latent space. When this latent space has a lower dimension than the original one @cite_36 , the transformation can be viewed as a form of non-linear PCA (a minimal sketch follows this record). An auto-encoder typically consists of an encoder and a decoder, which together define the data reconstruction cost. With the success of deep learning @cite_11 , deep (or stacked) AEs have become popular for unsupervised learning. For instance, deep AEs have proven useful for dimensionality reduction @cite_36 and image denoising @cite_51 . Recently, deep AEs have also been used to initialize deep embedding networks for unsupervised clustering @cite_27 . A convolutional version of deep AEs was also applied to extract hierarchical features and to initialize convolutional neural networks (CNNs) @cite_52 .
{ "cite_N": [ "@cite_36", "@cite_52", "@cite_27", "@cite_51", "@cite_11" ], "mid": [ "2100495367", "2136655611", "2173649752", "2145094598", "" ], "abstract": [ "High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.", "We present a novel convolutional auto-encoder (CAE) for unsupervised feature learning. A stack of CAEs forms a convolutional neural network (CNN). Each CAE is trained using conventional on-line gradient descent without additional regularization terms. A max-pooling layer is essential to learn biologically plausible features consistent with those found by previous approaches. Initializing a CNN with filters of a trained CAE stack yields superior performance on a digit (MNIST) and an object recognition (CIFAR10) benchmark.", "Clustering is central to many data-driven application domains and has been studied extensively in terms of distance functions and grouping algorithms. Relatively little work has focused on learning representations for clustering. In this paper, we propose Deep Embedded Clustering (DEC), a method that simultaneously learns feature representations and cluster assignments using deep neural networks. DEC learns a mapping from the data space to a lower-dimensional feature space in which it iteratively optimizes a clustering objective. Our experimental evaluations on image and text corpora show significant improvement over state-of-the-art methods.", "We explore an original strategy for building deep networks, based on stacking layers of denoising autoencoders which are trained locally to denoise corrupted versions of their inputs. The resulting algorithm is a straightforward variation on the stacking of ordinary autoencoders. It is however shown on a benchmark of classification problems to yield significantly lower classification error, thus bridging the performance gap with deep belief networks (DBN), and in several cases surpassing it. Higher level representations learnt in this purely unsupervised fashion also help boost the performance of subsequent SVM classifiers. Qualitative experiments show that, contrary to ordinary autoencoders, denoising autoencoders are able to learn Gabor-like edge detectors from natural image patches and larger stroke detectors from digit images. This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations.", "" ] }
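The bottleneck view of an auto-encoder described above is easy to state in code. The following PyTorch sketch uses an assumed configuration (layer sizes, MSE loss, Adam), not the architecture of any cited work; the low-dimensional code z plays the role of the non-linear PCA projection.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, in_dim=784, latent_dim=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, in_dim))

    def forward(self, x):
        z = self.encoder(x)            # low-dimensional code ("non-linear PCA")
        return self.decoder(z), z

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784)                # one dummy mini-batch
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)  # reconstruction cost
opt.zero_grad(); loss.backward(); opt.step()
```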
1709.02508
2753276256
We present a novel deep neural network architecture for unsupervised subspace clustering. This architecture is built upon deep auto-encoders, which non-linearly map the input data into a latent space. Our key idea is to introduce a novel self-expressive layer between the encoder and the decoder to mimic the "self-expressiveness" property that has proven effective in traditional subspace clustering. Being differentiable, our new self-expressive layer provides a simple but effective way to learn pairwise affinities between all data points through a standard back-propagation procedure. Being nonlinear, our neural-network based method is able to cluster data points having complex (often nonlinear) structures. We further propose pre-training and fine-tuning strategies that let us effectively learn the parameters of our subspace clustering networks. Our experiments show that the proposed method significantly outperforms the state-of-the-art unsupervised subspace clustering methods.
There has been little work in the literature combining deep learning with subspace clustering. To the best of our knowledge, the only exception is @cite_29 , which first extracts SIFT @cite_17 or HOG @cite_37 features from the images and feeds them to a fully connected deep auto-encoder with a sparse subspace clustering (SSC) @cite_21 prior. The final clustering is then obtained by applying k-means or SSC to the learned auto-encoder features. In essence, @cite_29 can be thought of as a subspace clustering method based on k-means or SSC with deep auto-encoder features. Our method differs significantly from @cite_29 in that our network is designed to directly learn the affinities, thanks to our new self-expressive layer (sketched after this record).
{ "cite_N": [ "@cite_29", "@cite_21", "@cite_37", "@cite_17" ], "mid": [ "2571899125", "1993962865", "2161969291", "2151103935" ], "abstract": [ "Subspace clustering aims to cluster unlabeled samples into multiple groups by implicitly seeking a subspace to fit each group. Most of existing methods are based on a shallow linear model, which may fail in handling data with nonlinear structure. In this paper, we propose a novel subspace clustering method -- deeP subspAce clusteRing with sparsiTY prior (PARTY) -- based on a new deep learning architecture. PARTY explicitly learns to progressively transform input data into nonlinear latent space and to be adaptive to the local and global subspace structure simultaneously. In particular, considering local structure, PARTY learns representation for the input data with minimal reconstruction error. Moreover, PARTY incorporates a prior sparsity information into the hidden representation learning to preserve the sparse reconstruction relation over the whole data set. To the best of our knowledge, PARTY is the first deep learning based subspace clustering method. Extensive experiments verify the effectiveness of our method.", "Many real-world problems deal with collections of high-dimensional data, such as images, videos, text, and web documents, DNA microarray data, and more. Often, such high-dimensional data lie close to low-dimensional structures corresponding to several classes or categories to which the data belong. In this paper, we propose and study an algorithm, called sparse subspace clustering, to cluster data points that lie in a union of low-dimensional subspaces. The key idea is that, among the infinitely many possible representations of a data point in terms of other points, a sparse representation corresponds to selecting a few points from the same subspace. This motivates solving a sparse optimization program whose solution is used in a spectral clustering framework to infer the clustering of the data into subspaces. Since solving the sparse optimization program is in general NP-hard, we consider a convex relaxation and show that, under appropriate conditions on the arrangement of the subspaces and the distribution of the data, the proposed minimization program succeeds in recovering the desired sparse representations. The proposed algorithm is efficient and can handle data points near the intersections of subspaces. Another key advantage of the proposed algorithm with respect to the state of the art is that it can deal directly with data nuisances, such as noise, sparse outlying entries, and missing entries, by incorporating the model of the data into the sparse optimization program. We demonstrate the effectiveness of the proposed algorithm through experiments on synthetic data as well as the two real-world problems of motion segmentation and face clustering.", "We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. 
The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.", "This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance." ] }
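The "self-expressiveness" property that the new layer mimics can be sketched directly as an optimization over a coefficient matrix C with zero diagonal, so that latent codes satisfy Z ≈ CZ. The snippet below is a simplified stand-alone version of that idea, not the authors' exact network or training schedule; the regularization weight is illustrative.

```python
import torch

n, d = 100, 10
Z = torch.randn(n, d)                      # latent codes from some encoder
C = torch.zeros(n, n, requires_grad=True)  # self-expression coefficients
opt = torch.optim.Adam([C], lr=1e-2)

for _ in range(200):
    C_nodiag = C - torch.diag(torch.diag(C))  # forbid trivial self-reconstruction
    loss = ((C_nodiag @ Z - Z) ** 2).sum() + 1e-2 * (C_nodiag ** 2).sum()
    opt.zero_grad(); loss.backward(); opt.step()

# |C| + |C|^T then serves as the affinity matrix for spectral clustering.
affinity = C.detach().abs() + C.detach().abs().T
```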
1709.02600
2512422503
Forward-looking sonar can capture high resolution images of underwater scenes, but their interpretation is complex. Generic object detection in such images has not been solved, specially in cases of small and unknown objects. In comparison, detection proposal algorithms have produced top performing object detectors in real-world color images. In this work we develop a Convolutional Neural Network that can reliably score objectness of image windows in forward-looking sonar images and by thresholding objectness, we generate detection proposals. In our dataset of marine garbage objects, we obtain 94 recall, generating around 60 proposals per image. The biggest strength of our method is that it can generalize to previously unseen objects. We show this by detecting chain links, walls and a wrench without previous training in such objects. We strongly believe our method can be used for class-independent object detection, with many real-world applications such as chain following and mine detection.
Detection Proposals @cite_21 are class-agnostic object detectors. The basic idea is to extract all candidate object bounding boxes from an image and to compute an objectness score @cite_0 that can be used to rank the boxes and determine interesting objects for subsequent classification (a minimal proposal-generation sketch follows this record).
{ "cite_N": [ "@cite_0", "@cite_21" ], "mid": [ "2066624635", "1555385401" ], "abstract": [ "We present a generic objectness measure, quantifying how likely it is for an image window to contain an object of any class. We explicitly train it to distinguish objects with a well-defined boundary in space, such as cows and telephones, from amorphous background elements, such as grass and road. The measure combines in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary. These include an innovative cue to measure the closed boundary characteristic. In experiments on the challenging PASCAL VOC 07 dataset, we show this new cue to outperform a state-of-the-art saliency measure, and the combined objectness measure to perform better than any cue alone. We also compare to interest point operators, a HOG detector, and three recent works aiming at automatic object segmentation. Finally, we present two applications of objectness. In the first, we sample a small numberof windows according to their objectness probability and give an algorithm to employ them as location priors for modern class-specific object detectors. As we show experimentally, this greatly reduces the number of windows evaluated by the expensive class-specific model. In the second application, we use objectness as a complementary score in addition to the class-specific model, which leads to fewer false positives. As shown in several recent papers, objectness can act as a valuable focus of attention mechanism in many other applications operating on image windows, including weakly supervised learning of object categories, unsupervised pixelwise segmentation, and object tracking in video. Computing objectness is very efficient and takes only about 4 sec. per image.", "We propose a category-independent method to produce a bag of regions and rank them, such that top-ranked regions are likely to be good segmentations of different objects. Our key objectives are completeness and diversity: every object should have at least one good proposed region, and a diverse set should be top-ranked. Our approach is to generate a set of segmentations by performing graph cuts based on a seed region and a learned affinity function. Then, the regions are ranked using structured learning based on various cues. Our experiments on BSDS and PASCAL VOC 2008 demonstrate our ability to find most objects within a small bag of proposed regions." ] }
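A hedged sketch of how thresholded objectness scores typically become detection proposals: keep windows above a score threshold, then greedily suppress near-duplicates with non-maximum suppression (NMS). The thresholds are arbitrary placeholders, and this is not the paper's exact post-processing.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter)

def propose(boxes, scores, score_thresh=0.5, iou_thresh=0.7):
    """Threshold objectness, then greedy NMS; returns kept box indices."""
    order = [i for i in np.argsort(scores)[::-1] if scores[i] >= score_thresh]
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thresh]
    return keep
```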
1709.02600
2512422503
Forward-looking sonar can capture high resolution images of underwater scenes, but their interpretation is complex. Generic object detection in such images has not been solved, specially in cases of small and unknown objects. In comparison, detection proposal algorithms have produced top performing object detectors in real-world color images. In this work we develop a Convolutional Neural Network that can reliably score objectness of image windows in forward-looking sonar images and by thresholding objectness, we generate detection proposals. In our dataset of marine garbage objects, we obtain 94 recall, generating around 60 proposals per image. The biggest strength of our method is that it can generalize to previously unseen objects. We show this by detecting chain links, walls and a wrench without previous training in such objects. We strongly believe our method can be used for class-independent object detection, with many real-world applications such as chain following and mine detection.
Many methods exist to extract detection proposals in color images. @cite_13 uses a cascade of objectness features to detect category-independent objects. @cite_0 use different cues to score objectness, such as saliency, color contrast and edge density (an edge-density sketch follows this record).
{ "cite_N": [ "@cite_0", "@cite_13" ], "mid": [ "2066624635", "2161198271" ], "abstract": [ "We present a generic objectness measure, quantifying how likely it is for an image window to contain an object of any class. We explicitly train it to distinguish objects with a well-defined boundary in space, such as cows and telephones, from amorphous background elements, such as grass and road. The measure combines in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary. These include an innovative cue to measure the closed boundary characteristic. In experiments on the challenging PASCAL VOC 07 dataset, we show this new cue to outperform a state-of-the-art saliency measure, and the combined objectness measure to perform better than any cue alone. We also compare to interest point operators, a HOG detector, and three recent works aiming at automatic object segmentation. Finally, we present two applications of objectness. In the first, we sample a small numberof windows according to their objectness probability and give an algorithm to employ them as location priors for modern class-specific object detectors. As we show experimentally, this greatly reduces the number of windows evaluated by the expensive class-specific model. In the second application, we use objectness as a complementary score in addition to the class-specific model, which leads to fewer false positives. As shown in several recent papers, objectness can act as a valuable focus of attention mechanism in many other applications operating on image windows, including weakly supervised learning of object categories, unsupervised pixelwise segmentation, and object tracking in video. Computing objectness is very efficient and takes only about 4 sec. per image.", "Cascades are a popular framework to speed up object detection systems. Here we focus on the first layers of a category independent object detection cascade in which we sample a large number of windows from an objectness prior, and then discriminatively learn to filter these candidate windows by an order of magnitude. We make a number of contributions to cascade design that substantially improve over the state of the art: (i) our novel objectness prior gives much higher recall than competing methods, (ii) we propose objectness features that give high performance with very low computational cost, and (iii) we make use of a structured output ranking approach to learn highly effective, but inexpensive linear feature combinations by directly optimizing cascade performance. Thorough evaluation on the PASCAL VOC data set shows consistent improvement over the current state of the art, and over alternative discriminative learning strategies." ] }
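One of the cues mentioned above, edge density, is simple enough to sketch: the fraction of edge pixels inside a candidate window. The version below uses OpenCV's Canny detector with arbitrary thresholds; it illustrates the cue but is not the cited implementation.

```python
import cv2
import numpy as np

def edge_density(gray_image, box):
    """Fraction of Canny edge pixels inside the window (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    edges = cv2.Canny(gray_image, 100, 200)  # thresholds are placeholders
    window = edges[y1:y2, x1:x2]
    return float((window > 0).mean())

img = (np.random.rand(128, 128) * 255).astype(np.uint8)  # dummy gray image
print(edge_density(img, (10, 10, 60, 60)))
```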
1709.02600
2512422503
Forward-looking sonar can capture high resolution images of underwater scenes, but their interpretation is complex. Generic object detection in such images has not been solved, specially in cases of small and unknown objects. In comparison, detection proposal algorithms have produced top performing object detectors in real-world color images. In this work we develop a Convolutional Neural Network that can reliably score objectness of image windows in forward-looking sonar images and by thresholding objectness, we generate detection proposals. In our dataset of marine garbage objects, we obtain 94 recall, generating around 60 proposals per image. The biggest strength of our method is that it can generalize to previously unseen objects. We show this by detecting chain links, walls and a wrench without previous training in such objects. We strongly believe our method can be used for class-independent object detection, with many real-world applications such as chain following and mine detection.
Selective search by @cite_11 uses a large number of engineered features and superpixel segmentation to generate proposals in color images, achieving 99 recall. @cite_15 use edge information to score proposals from a sliding window in a color image. @cite_5 use a data-driven approach where regions are matched over a large annotated dataset and objectness is computed from segment properties (a recall-evaluation sketch follows this record).
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_11" ], "mid": [ "", "7746136", "2088049833" ], "abstract": [ "", "The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96 object recall at overlap threshold of 0.5 and over 75 recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy.", "This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99 recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http: disi.unitn.it uijlings SelectiveSearch.html )." ] }
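Recall figures like the ones quoted above are usually computed by checking, for every ground-truth box, whether some proposal overlaps it beyond an IoU threshold. A minimal sketch that reuses the iou() helper defined in the NMS sketch two records above:

```python
def recall(gt_boxes, proposals, iou_thresh=0.5):
    """Fraction of ground-truth boxes covered by at least one proposal.

    Assumes the iou() helper from the earlier NMS sketch is in scope.
    """
    hit = sum(1 for gt in gt_boxes
              if any(iou(gt, p) >= iou_thresh for p in proposals))
    return hit / float(len(gt_boxes))
```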
1709.02600
2512422503
Forward-looking sonar can capture high resolution images of underwater scenes, but their interpretation is complex. Generic object detection in such images has not been solved, specially in cases of small and unknown objects. In comparison, detection proposal algorithms have produced top performing object detectors in real-world color images. In this work we develop a Convolutional Neural Network that can reliably score objectness of image windows in forward-looking sonar images and by thresholding objectness, we generate detection proposals. In our dataset of marine garbage objects, we obtain 94 recall, generating around 60 proposals per image. The biggest strength of our method is that it can generalize to previously unseen objects. We show this by detecting chain links, walls and a wrench without previous training in such objects. We strongly believe our method can be used for class-independent object detection, with many real-world applications such as chain following and mine detection.
@cite_10 show how to learn objectness with a CNN in order to rerank proposals generated by EdgeBoxes @cite_15 , with improved detection performance. An extensive evaluation of many proposal algorithms is given in @cite_17 (the average recall metric it introduces is sketched after this record).
{ "cite_N": [ "@cite_15", "@cite_10", "@cite_17" ], "mid": [ "7746136", "1578066333", "1958328135" ], "abstract": [ "The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96 object recall at overlap threshold of 0.5 and over 75 recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy.", "Existing object proposal approaches use primarily bottom-up cues to rank proposals, while we believe that \"objectness\" is in fact a high level construct. We argue for a data-driven, semantic approach for ranking object proposals. Our framework, which we call DeepBox, uses convolutional neural networks (CNNs) to rerank proposals from a bottom-up method. We use a novel four-layer CNN architecture that is as good as much larger networks on the task of evaluating objectness while being much faster. We show that DeepBox significantly improves over the bottom-up ranking, achieving the same recall with 500 proposals as achieved by bottom-up methods with 2000. This improvement generalizes to categories the CNN has never seen before and leads to a 4.5-point gain in detection mAP. Our implementation achieves this performance while running at 260 ms per image.", "Current top performing object detectors employ detection proposals to guide the search for objects, thereby avoiding exhaustive sliding window search across images. Despite the popularity and widespread use of detection proposals, it is unclear which trade-offs are made when using them during object detection. We provide an in-depth analysis of twelve proposal methods along with four baselines regarding proposal repeatability, ground truth annotation recall on PASCAL, ImageNet, and MS COCO, and their impact on DPM, R-CNN, and Fast R-CNN detection performance. Our analysis shows that for object detection improving proposal localisation accuracy is as important as improving recall. We introduce a novel metric, the average recall (AR), which rewards both high recall and good localisation and correlates surprisingly well with detection performance. Our findings show common strengths and weaknesses of existing methods, and provide insights and metrics for selecting and tuning proposal methods." ] }
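The average recall (AR) metric introduced by @cite_17 rewards both high recall and good localization by averaging recall over a range of IoU thresholds. A short sketch, assuming the recall() helper from the previous sketch (and its iou() dependency) is in scope; the 0.5 to 0.95 threshold grid is one common choice, not necessarily the paper's exact grid:

```python
import numpy as np

def average_recall(gt_boxes, proposals, thresholds=np.arange(0.5, 1.0, 0.05)):
    # Assumes recall(gt_boxes, proposals, t) from the previous sketch.
    return float(np.mean([recall(gt_boxes, proposals, t) for t in thresholds]))
```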
1709.02600
2512422503
Forward-looking sonar can capture high resolution images of underwater scenes, but their interpretation is complex. Generic object detection in such images has not been solved, specially in cases of small and unknown objects. In comparison, detection proposal algorithms have produced top performing object detectors in real-world color images. In this work we develop a Convolutional Neural Network that can reliably score objectness of image windows in forward-looking sonar images and by thresholding objectness, we generate detection proposals. In our dataset of marine garbage objects, we obtain 94 recall, generating around 60 proposals per image. The biggest strength of our method is that it can generalize to previously unseen objects. We show this by detecting chain links, walls and a wrench without previous training in such objects. We strongly believe our method can be used for class-independent object detection, with many real-world applications such as chain following and mine detection.
More recent proposal approaches also use CNNs, such as Fast R-CNN @cite_19 and Faster R-CNN @cite_4 . Fast R-CNN uses bounding box regression trained over a convolutional feature map that can be shared between detection and classification, but still relies on initial Selective Search proposals @cite_11 . Faster R-CNN instead uses a region proposal network to predict proposals and objectness directly from the input image, while sharing layers with a classifier and bounding box regressor in a way similar to Fast R-CNN (an anchor-generation sketch follows this record).
{ "cite_N": [ "@cite_19", "@cite_4", "@cite_11" ], "mid": [ "", "2613718673", "2088049833" ], "abstract": [ "", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.", "This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99 recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http: disi.unitn.it uijlings SelectiveSearch.html )." ] }
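The region proposal network in Faster R-CNN scores a dense grid of reference "anchors". A hedged sketch of the anchor generation step follows; the stride, scales, and aspect ratios are illustrative defaults, and the exact conventions differ between implementations.

```python
import numpy as np

def generate_anchors(feat_h, feat_w, stride=16,
                     scales=(64, 128, 256), ratios=(0.5, 1.0, 2.0)):
    """One box per (cell, scale, ratio), centred on the image location."""
    anchors = []
    for y in range(feat_h):
        for x in range(feat_w):
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride
            for s in scales:
                for r in ratios:
                    w, h = s * np.sqrt(r), s / np.sqrt(r)
                    anchors.append((cx - w / 2, cy - h / 2,
                                    cx + w / 2, cy + h / 2))
    return np.array(anchors)

print(generate_anchors(2, 2).shape)  # (2 * 2 * 9, 4) anchor boxes
```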
1709.02600
2512422503
Forward-looking sonar can capture high resolution images of underwater scenes, but their interpretation is complex. Generic object detection in such images has not been solved, specially in cases of small and unknown objects. In comparison, detection proposal algorithms have produced top performing object detectors in real-world color images. In this work we develop a Convolutional Neural Network that can reliably score objectness of image windows in forward-looking sonar images and by thresholding objectness, we generate detection proposals. In our dataset of marine garbage objects, we obtain 94 recall, generating around 60 proposals per image. The biggest strength of our method is that it can generalize to previously unseen objects. We show this by detecting chain links, walls and a wrench without previous training in such objects. We strongly believe our method can be used for class-independent object detection, with many real-world applications such as chain following and mine detection.
Object detection in sonar images is mostly done with several kinds of engineered features computed over sliding windows and fed to a machine learning classifier @cite_12 @cite_18 @cite_14 @cite_7 ; template matching @cite_1 @cite_22 is also very popular, as are computer vision techniques like a boosted cascade of weak classifiers @cite_8 . In all cases, this type of approach only produces class-specific detectors, whose generalization outside of the training set is poor (a sliding-window sketch follows this record).
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_22", "@cite_7", "@cite_8", "@cite_1", "@cite_12" ], "mid": [ "1954412909", "2079902523", "2083769217", "2165364928", "2607190376", "2096084339", "1972623374" ], "abstract": [ "Automatic target recognition (ATR) for unexploded ordnance (UXO) detection and classification using sonar data of opportunity from open oceans survey sites is an open research area. The goal here is to develop ATR spanning real-aperture and synthetic aperture sonar imagery. The classical paradigm of anomaly detection in images breaks down in cluttered and noisy environments. In this work we present an upgraded and ultimately more robust approach to object detection and classification in image sensor data. In this approach, object detection is performed using an in-situ weighted highlight-shadow detector; features are generated on geometric moments in the imaging domain; and finally, classification is performed using an Ada-boosted decision tree classifier. These techniques are demonstrated on simulated real aperture sonar data with varying noise levels.", "A new algorithm for the detection of underwater man-made objects in sonar imagery is proposed. The algorithm is made extremely fast by employing a cascaded architecture and by exploiting integral-image representations. As a result, the method makes real-time detection of objects of interest in streaming sonar data collected by an autonomous underwater vehicle feasible. No training data is required because the proposed method is adaptively tailored to the environmental characteristics of the sensed data that is collected in situ. The flexible yet rigorous approach also addresses and overcomes five major limitations that plague the most popular detection algorithms that are in common use. The power and utility of the proposed approach is demonstrated on a large, challenging data set of synthetic aperture sonar imagery collected at sea.", "Underwater chain cleaning and inspection tasks are costly and time consuming operations that must be performed periodically to guarantee the safety of the moorings. We propose a framework towards an efficient and cost-effective solution by using an autonomous underwater vehicle equipped with a forward-looking sonar. As a first step, we tackle the problem of individual chain link detection from the challenging forward-looking sonar data. To cope with occlusions and intensity variations due to viewpoint changes, the recognition problem is addressed as local pattern matching of the different link parts. We exploit the high frame-rate of the sonar to improve, by registration, the signal-to-noise ratio of the individual sonar frames and to cluster the local detections over time to increase robustness. Experiments with sonar images of a real chain are reported, showing a high percentage of correct link detections with good accuracy while potentially keeping real-time capabilities.", "The problem of automatic detection and classification for mine hunting applications is addressed. We propose a set of algorithms which are tested using a large database of real synthetic aperture sonar (SAS) images. The highlights and shadows of the objects in an SAS image are segmented using both a Markovian algorithm and the active contours algorithm. The comparison of both segmentation results is used as a feature for classification. In addition, other features are considered. 
These include geometrical shape descriptors, not only of the shadow region, but also of the object highlight, which demonstrates a significant improvement of the performance. Furthermore, a novel set of features based on the image statistics is described. Finally, we propose an optimal feature set that leads to the best classification results for the available database.", "Detection of underwater objects is a critical task for a variety of underwater applications (off-shore, archeology, marine science, mine detection). This task is traditionally carried out by a skilled human operator. However, with the appearance of Autonomous Underwater Vehicles, automated processing is now needed to tackle the large amount of data produced and to enable on the fly adaptation of the missions and near real time update of the operator. In this paper we propose a new method for object detection in sonar imagery capable of processing images extremely rapidly based on the Viola and Jones boosted classifiers cascade. Unlike most previously proposed approaches based on a model of the target, our method is based on in-situ learning of the target responses and of the local clutter. Learning the clutter is vitally important in complex terrains to obtain low false alarm rates while achieving high detection accuracy. Results obtained on real and synthetic images on a variety of challenging terrains are presented to show the discriminative power of such an approach.", "A method for classifying objects in sonar imagery is proposed. Motivated by the high-resolution achievable by modern imaging sonars, a novel template matching technique is developed that compares a target signature generated from a simple acoustic model with the actual image of an object being classified. The approach uses both the correlation with target echoes as well as projected acoustic shadow, and is tested on data obtained from a synthetic aperture sonar during experiments at sea. It is compared to two commonly used methods that are based on normalized cross-correlation, and results show that the proposed method outperforms the standard methods in terms of receiver-operating characteristic (ROC) curves as well as confusion matrices.", "The majority of existing automatic mine detection algorithms which have been developed are robust at detecting mine-like objects (MLOs) at the expense of detecting many false alarms. These objects must later be classified as mine or not-mine. The authors present a model based technique using Dempster–Shafer information theory to extend the standard mine not-mine classification procedure to provide both shape and size information on the object. A sonar simulator is used to produce synthetic realisations of mine-like object shadow regions which are compared to those of the unknown object using the Hausdorff distance. This measurement is fused with other available information from the object's shadow and highlight regions to produce a membership function for each of the considered object classes. Dempster–Shafer information theory is used to classify the objects using both mono-view and multi-view analysis. In both cases, results are presented on real data." ] }
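The classical pipeline described above (engineered features over sliding windows plus a classifier) can be sketched in a few lines. The mean/std feature and the SVM below are placeholders standing in for the engineered features and classifiers of the cited works, and the window size and step are arbitrary.

```python
import numpy as np
from sklearn.svm import SVC

def windows(image, size=32, step=16):
    """Yield (position, patch) pairs for a fixed-size sliding window."""
    for y in range(0, image.shape[0] - size + 1, step):
        for x in range(0, image.shape[1] - size + 1, step):
            yield (x, y), image[y:y + size, x:x + size]

def features(patch):
    return [patch.mean(), patch.std()]  # placeholder intensity statistics

# Dummy labelled patches: two statistics per sample, binary labels.
X_train = np.random.rand(100, 2)
y_train = np.random.randint(0, 2, 100)
clf = SVC(probability=True).fit(X_train, y_train)

image = np.random.rand(96, 96)  # dummy sonar image
detections = [(pos, clf.predict_proba([features(p)])[0, 1])
              for pos, p in windows(image)]
```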
1709.02600
2512422503
Forward-looking sonar can capture high resolution images of underwater scenes, but their interpretation is complex. Generic object detection in such images has not been solved, specially in cases of small and unknown objects. In comparison, detection proposal algorithms have produced top performing object detectors in real-world color images. In this work we develop a Convolutional Neural Network that can reliably score objectness of image windows in forward-looking sonar images and by thresholding objectness, we generate detection proposals. In our dataset of marine garbage objects, we obtain 94 recall, generating around 60 proposals per image. The biggest strength of our method is that it can generalize to previously unseen objects. We show this by detecting chain links, walls and a wrench without previous training in such objects. We strongly believe our method can be used for class-independent object detection, with many real-world applications such as chain following and mine detection.
While proposal methods have been successful on computer vision tasks, color image features are not appropriate for sonar images, due to the different interpretation of the image content. Some methods, such as EdgeBoxes @cite_15 , could be applied to sonar images, but it is well known that edges are unreliable in this kind of image due to noise and viewpoint dependence.
{ "cite_N": [ "@cite_15" ], "mid": [ "7746136" ], "abstract": [ "The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96 object recall at overlap threshold of 0.5 and over 75 recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy." ] }
1709.02601
2751875176
Convolutional Neural Networks (CNN) have revolutionized perception for color images, and their application to sonar images has also obtained good results. But in general CNNs are difficult to train without a large dataset, need manual tuning of a considerable number of hyperparameters, and require many careful decisions by a designer. In this work, we evaluate three common decisions that need to be made by a CNN designer, namely the performance of transfer learning, the effect of object image size and the relation between training set size. We evaluate three CNN models, namely one based on LeNet, and two based on the Fire module from SqueezeNet. Our findings are: Transfer learning with an SVM works very well, even when the train and transfer sets have no classes in common, and high classification performance can be obtained even when the target dataset is small. The ADAM optimizer combined with Batch Normalization can make a high accuracy CNN classifier, even with small image sizes (16 pixels). At least 50 samples per class are required to obtain @math test accuracy, and using Dropout with a small dataset helps improve performance, but Batch Normalization is better when a large dataset is available.
An evaluation of object size versus recognition accuracy in high-resolution sonars is done by @cite_0 . Their work uses a sonar image simulator and a simple PCA classifier (sketched after this record), and the authors conclude that only the highlight of the object is required to obtain low misclassification rates. However, their analysis only considers simple mine-like objects, while our dataset contains real-world marine debris, which is much more complex in shape (and often has no shadow). The analysis in @cite_0 concentrates more on the sonar sensor, while our work focuses on feature learning and the capabilities of CNNs.
{ "cite_N": [ "@cite_0" ], "mid": [ "1994188792" ], "abstract": [ "Target recognition in sonar imagery has long been an active research area in the maritime domain, especially in the mine-counter measure context. Recently it has received even more attention as new sensors with increased resolution have been developed; new threats to critical maritime assets and a new paradigm for target recognition based on autonomous platforms have emerged. With the recent introduction of Synthetic Aperture Sonar systems and high-frequency sonars, sonar resolution has dramatically increased and noise levels decreased. Sonar images are distance images but at high resolution they tend to appear visually as optical images. Traditionally algorithms have been developed specifically for imaging sonars because of their limited resolution and high noise levels. With high-resolution sonars, algorithms developed in the image processing field for natural images become applicable. However, the lack of large datasets has hampered the development of such algorithms. Here we present a fast and realistic sonar simulator enabling development and evaluation of such algorithms. We develop a classifier and then analyse its performances using our simulated synthetic sonar images. Finally, we discuss sensor resolution requirements to achieve effective classification of various targets and demonstrate that with high resolution sonars target highlight analysis is the key for target recognition." ] }
1709.02458
2753202988
We present an end-to-end system for detecting and clustering faces by identity in full-length movies. Unlike works that start with a predefined set of detected faces, we consider the end-to-end problem of detection and clustering together. We make three separate contributions. First, we combine a state-of-the-art face detector with a generic tracker to extract high quality face tracklets. We then introduce a novel clustering method, motivated by the classic graph theory results of Erdős and Renyi. It is based on the observations that large clusters can be fully connected by joining just a small fraction of their point pairs, while just a single connection between two different people can lead to poor clustering results. This suggests clustering using a verification system with very few false positives but perhaps moderate recall. We introduce a novel verification method, rank-1 counts verification, that has this property, and use it in a link-based clustering scheme. Finally, we define a novel end-to-end detection and clustering evaluation metric allowing us to assess the accuracy of the entire end-to-end system. We present state-of-the-art results on multiple video data sets and also on standard face databases.
Recent work on robust face tracking @cite_11 @cite_19 @cite_28 has gradually expanded the length of face tracklets, starting from face detection results. Ozerov et al. @cite_28 merge results from different detectors by clustering based on spatio-temporal similarity. Clusters are then merged, interpolated, and smoothed for face tracklet creation. Similarly, @cite_19 generate low-level tracklets by merging detection results, form high-level tracklets by linking low-level tracklets, and apply the Hungarian algorithm to form even longer tracklets. @cite_11 improve on @cite_19 by removing false-positive tracklets.
{ "cite_N": [ "@cite_28", "@cite_19", "@cite_11" ], "mid": [ "1986730198", "2150090564", "1977573448" ], "abstract": [ "Automatic extraction of face tracks is a key component of systems that analyse people in audio-visual content such as TV programs and movies. Due to the lack of properly annotated content of this type, popular algorithms for extracting face tracks have not been fully assessed in the literature. We introduce and make publicly available a new dataset, based on the full annotation of a feature movie, to help fill this gap. We show in particular that, thanks to this dataset, state-of-art tracking metrics can now be exploited to evaluate face tracks used by, e.g., automatic character naming systems. We conduct such an evaluation on different variants of a novel system that we introduce as a generalization of existing ones.", "We propose an approach for multi-pose face tracking by association of face detection responses in two stages using multiple cues. The low-level stage uses a two-threshold strategy to merge detection responses based on location, size and pose, resulting in short but reliable tracklets. The high-level stage uses different cues for computing a joint similarity measure between tracklets. The facial cue compares facial features of the most frontal face detections in pairs of tracklets. The classifier cue learns a discriminative appearance model for each tracklet, using detection pairs within reliable tracklets and between overlapping tracklets as training data. The constraint cue observes the compatibility of motion of two tracklets. The association of tracklets is globally optimized with the Hungarian algorithm. We validate our approach on two challenging episodes of two TV series and report a Multiple Object Tracking Accuracy (MOTA) of 82 and 68.2 , respectively.", "Automatic person identification in TV series has gained popularity over the years. While most of the works rely on using face-based recognition, errors during tracking such as false positive face tracks are typically ignored. We propose a variety of methods to remove false positive face tracks and categorize the methods into confidence- and context-based. We evaluate our methods on a large TV series data set and show that up to 75 of the false positive face tracks are removed at the cost of 3.6 true positive tracks. We further show that the proposed method is general and applicable to other detectors or trackers." ] }
1709.02458
2753202988
We present an end-to-end system for detecting and clustering faces by identity in full-length movies. Unlike works that start with a predefined set of detected faces, we consider the end-to-end problem of detection and clustering together. We make three separate contributions. First, we combine a state-of-the-art face detector with a generic tracker to extract high quality face tracklets. We then introduce a novel clustering method, motivated by the classic graph theory results of Erdős and Renyi. It is based on the observations that large clusters can be fully connected by joining just a small fraction of their point pairs, while just a single connection between two different people can lead to poor clustering results. This suggests clustering using a verification system with very few false positives but perhaps moderate recall. We introduce a novel verification method, rank-1 counts verification, that has this property, and use it in a link-based clustering scheme. Finally, we define a novel end-to-end detection and clustering evaluation metric allowing us to assess the accuracy of the entire end-to-end system. We present state-of-the-art results on multiple video data sets and also on standard face databases.
With the development of multi-face tracking techniques, the problem of naming TV characters has also been widely studied @cite_26 @cite_38 @cite_27 @cite_2 @cite_17 @cite_36 @cite_22 . (A related problem is person re-identification @cite_32 @cite_46 @cite_6 , in which the goal is to tell whether a person of interest seen in one camera has been observed by another camera. Re-identification typically uses the whole body on short time scales, while naming TV characters focuses on faces over a longer period of time.) Given precomputed face tracklets, the goal is to assign a name or an ID to a group of face tracklets with the same identity. @cite_17 @cite_36 iteratively cluster face tracklets and link clusters into longer tracks in a bootstrapping manner. @cite_22 train classifiers to find thresholds for joining tracklets in two stages: within a scene and across scenes. Similarly, we aim to generate face clusters in a fully unsupervised manner.
{ "cite_N": [ "@cite_38", "@cite_26", "@cite_22", "@cite_36", "@cite_32", "@cite_6", "@cite_27", "@cite_2", "@cite_46", "@cite_17" ], "mid": [ "2400416707", "", "2055622086", "1969014310", "", "", "2168996682", "2093153344", "", "2150469677" ], "abstract": [ "Person identification in TV series has been a popular research topic over the last decade. In this area, most approaches either use manually annotated data or extract character supervision from a combination of subtitles and transcripts. However, both approaches have key drawbacks that hinder application of these methods at a large scale — manual annotation is expensive and transcripts are often hard to obtain. We investigate the topic of automatically labeling all character appearances in TV series using information obtained solely from subtitles. This task is extremely difficult as the dialogs between characters provide very sparse and weakly supervised data. We address these challenges by exploiting recent advances in face descriptors and Multiple Instance Learning methods. We propose methods to create MIL bags and evaluate and discuss several MIL techniques. The best combination achieves an average precision over 80 on three diverse TV series. We demonstrate that only using subtitles provides good results on identifying characters in TV series and wish to encourage the community towards this problem.", "", "The goal of this paper is unsupervised face clustering in edited video material – where face tracks arising from different people are assigned to separate clusters, with one cluster for each person. In particular we explore the extent to which faces can be clustered automatically without making an error. This is a very challenging problem given the variation in pose, lighting and expressions that can occur, and the similarities between different people. The novelty we bring is three fold: first, we show that a form of weak supervision is available from the editing structure of the material – the shots, threads and scenes that are standard in edited video; second, we show that by first clustering within scenes the number of face tracks can be significantly reduced with almost no errors; third, we propose an extension of the clustering method to entire episodes using exemplar SVMs based on the negative training data automatically harvested from the editing structure. The method is demonstrated on multiple episodes from two very different TV series, Scrubs and Buffy. For both series it is shown that we move towards our goal, and also outperform a number of baselines from previous works.", "In this paper, we focus on face clustering in videos. Given the detected faces from real-world videos, we partition all faces into K disjoint clusters. Different from clustering on a collection of facial images, the faces from videos are organized as face tracks and the frame index of each face is also provided. As a result, many pair wise constraints between faces can be easily obtained from the temporal and spatial knowledge of the face tracks. These constraints can be effectively incorporated into a generative clustering model based on the Hidden Markov Random Fields (HMRFs). Within the HMRF model, the pair wise constraints are augmented by label-level and constraint-level local smoothness to guide the clustering process. The parameters for both the unary and the pair wise potential functions are learned by the simulated field algorithm, and the weights of constraints can be easily adjusted. 
We further introduce an efficient clustering framework specially for face clustering in videos, considering that faces in adjacent frames of the same face track are very similar. The framework is applicable to other clustering algorithms to significantly reduce the computational cost. Experiments on two face data sets from real-world videos demonstrate the significantly improved performance of our algorithm over state-of-the art algorithms.", "", "", "We investigate the problem of automatically labelling appearances of characters in TV or film material. This is tremendously challenging due to the huge variation in imaged appearance of each character and the weakness and ambiguity of available annotation. However, we demonstrate that high precision can be achieved by combining multiple sources of information, both visual and textual. The principal novelties that we introduce are: (i) automatic generation of time stamped character annotation by aligning subtitles and transcripts; (ii) strengthening the supervisory information by identifying when characters are speaking; (iii) using complementary cues of face matching and clothing matching to propose common annotations for face tracks. Results are presented on episodes of the TV series “Buffy the Vampire Slayer”.", "We address the problem of person identification in TV series. We propose a unified learning framework for multi-class classification which incorporates labeled and unlabeled data, and constraints between pairs of features in the training. We apply the framework to train multinomial logistic regression classifiers for multi-class face recognition. The method is completely automatic, as the labeled data is obtained by tagging speaking faces using subtitles and fan transcripts of the videos. We demonstrate our approach on six episodes each of two diverse TV series and achieve state-of-the-art performance.", "", "We describe a novel method that simultaneously clusters and associates short sequences of detected faces (termed as face track lets) in videos. The rationale of our method is that face track let clustering and linking are related problems that can benefit from the solutions of each other. Our method is based on a hidden Markov random field model that represents the joint dependencies of cluster labels and track let linking associations. We provide an efficient algorithm based on constrained clustering and optimal matching for the simultaneous inference of cluster labels and track let associations. We demonstrate significant improvements on the state-of-the-art results in face tracking and clustering performances on several video datasets." ] }
1709.02458
2753202988
We present an end-to-end system for detecting and clustering faces by identity in full-length movies. Unlike works that start with a predefined set of detected faces, we consider the end-to-end problem of detection and clustering together. We make three separate contributions. First, we combine a state-of-the-art face detector with a generic tracker to extract high quality face tracklets. We then introduce a novel clustering method, motivated by the classic graph theory results of Erdős and Renyi. It is based on the observations that large clusters can be fully connected by joining just a small fraction of their point pairs, while just a single connection between two different people can lead to poor clustering results. This suggests clustering using a verification system with very few false positives but perhaps moderate recall. We introduce a novel verification method, rank-1 counts verification, that has this property, and use it in a link-based clustering scheme. Finally, we define a novel end-to-end detection and clustering evaluation metric allowing us to assess the accuracy of the entire end-to-end system. We present state-of-the-art results on multiple video data sets and also on standard face databases.
Though solving this problem may yield a better result for face tracking, some forms of supervision specific to the video or characters in the test data can improve performance. @cite_26 perform face recognition, clothing clustering and speaker identification, where face models and speaker models are first trained on other videos containing the same main characters as in the test set. In @cite_27 @cite_2 , subtitles and transcripts are used to obtain weak labels for face tracks. More recently, @cite_38 solve the problem without transcripts by resolving name references only in subtitles. Our approach is more broadly applicable because it does not use subtitles, transcripts, or any other supervision related to the identities in the test data, unlike these other works @cite_26 @cite_38 @cite_27 @cite_2 .
{ "cite_N": [ "@cite_38", "@cite_27", "@cite_26", "@cite_2" ], "mid": [ "2400416707", "2168996682", "", "2093153344" ], "abstract": [ "Person identification in TV series has been a popular research topic over the last decade. In this area, most approaches either use manually annotated data or extract character supervision from a combination of subtitles and transcripts. However, both approaches have key drawbacks that hinder application of these methods at a large scale — manual annotation is expensive and transcripts are often hard to obtain. We investigate the topic of automatically labeling all character appearances in TV series using information obtained solely from subtitles. This task is extremely difficult as the dialogs between characters provide very sparse and weakly supervised data. We address these challenges by exploiting recent advances in face descriptors and Multiple Instance Learning methods. We propose methods to create MIL bags and evaluate and discuss several MIL techniques. The best combination achieves an average precision over 80 on three diverse TV series. We demonstrate that only using subtitles provides good results on identifying characters in TV series and wish to encourage the community towards this problem.", "We investigate the problem of automatically labelling appearances of characters in TV or film material. This is tremendously challenging due to the huge variation in imaged appearance of each character and the weakness and ambiguity of available annotation. However, we demonstrate that high precision can be achieved by combining multiple sources of information, both visual and textual. The principal novelties that we introduce are: (i) automatic generation of time stamped character annotation by aligning subtitles and transcripts; (ii) strengthening the supervisory information by identifying when characters are speaking; (iii) using complementary cues of face matching and clothing matching to propose common annotations for face tracks. Results are presented on episodes of the TV series “Buffy the Vampire Slayer”.", "", "We address the problem of person identification in TV series. We propose a unified learning framework for multi-class classification which incorporates labeled and unlabeled data, and constraints between pairs of features in the training. We apply the framework to train multinomial logistic regression classifiers for multi-class face recognition. The method is completely automatic, as the labeled data is obtained by tagging speaking faces using subtitles and fan transcripts of the videos. We demonstrate our approach on six episodes each of two diverse TV series and achieve state-of-the-art performance." ] }
1709.02458
2753202988
We present an end-to-end system for detecting and clustering faces by identity in full-length movies. Unlike works that start with a predefined set of detected faces, we consider the end-to-end problem of detection and clustering together. We make three separate contributions. First, we combine a state-of-the-art face detector with a generic tracker to extract high quality face tracklets. We then introduce a novel clustering method, motivated by the classic graph theory results of Erdős and Renyi. It is based on the observations that large clusters can be fully connected by joining just a small fraction of their point pairs, while just a single connection between two different people can lead to poor clustering results. This suggests clustering using a verification system with very few false positives but perhaps moderate recall. We introduce a novel verification method, rank-1 counts verification, that has this property, and use it in a link-based clustering scheme. Finally, we define a novel end-to-end detection and clustering evaluation metric allowing us to assess the accuracy of the entire end-to-end system. We present state-of-the-art results on multiple video data sets and also on standard face databases.
As in the proposed verification system, some existing work @cite_5 @cite_9 uses reference images. For example, index code methods @cite_9 map each image to a code based upon a set of reference images, and then compare these codes. On the other hand, our method compares the relative distance of two images with the distance of one of the images to the reference set, which is different. In addition, we use the newly defined rank-1 counts similarity, rather than the traditional Euclidean or Mahalanobis distance measures used to compare images in @cite_5 @cite_9 .
{ "cite_N": [ "@cite_5", "@cite_9" ], "mid": [ "2519769969", "2140293521" ], "abstract": [ "Clustering faces in movies or videos is extremely challenging since characters’ appearance can vary drastically under different scenes. In addition, the various cinematic styles make it difficult to learn a universal face representation for all videos. Unlike previous methods that assume fixed handcrafted features for face clustering, in this work, we formulate a joint face representation adaptation and clustering approach in a deep learning framework. The proposed method allows face representation to gradually adapt from an external source domain to a target video domain. The adaptation of deep representation is achieved without any strong supervision but through iteratively discovered weak pairwise identity constraints derived from potentially noisy face clustering result. Experiments on three benchmark video datasets demonstrate that our approach generates character clusters with high purity compared to existing video face clustering methods, which are either based on deep face representation (without adaptation) or carefully engineered features.", "In a biometric identification system, the identity corresponding to the input data (probe) is typically determined by comparing it against the templates of all identities in a database (gallery). Exhaustive matching against a large number of identities increases the response time of the system and may also reduce the accuracy of identification. One way to reduce the response time is by designing biometric templates that allow for rapid matching, as in the case of IrisCodes. An alternative approach is to limit the number of identities against which matching is performed based on criteria that are fast to evaluate. We propose a method for generating fixed-length codes for indexing biometric databases. An index code is constructed by computing match scores between a biometric image and a fixed set of reference images. Candidate identities are retrieved based on the similarity between the index code of the probe image and those of the identities in the database. The proposed technique can be easily extended to retrieve pertinent identities from multimodal databases. Experiments on a chimeric face and fingerprint bimodal database resulted in an 84 average reduction in the search space at a hit rate of 100 . These results suggest that the proposed indexing scheme has the potential to substantially reduce the response time without compromising the accuracy of identification." ] }
1709.02150
2751799747
Matching sonar images with high accuracy has been a problem for a long time, as sonar images are inherently hard to model due to reflections, noise and viewpoint dependence. Autonomous Underwater Vehicles require good sonar image matching capabilities for tasks such as tracking, simultaneous localization and mapping (SLAM) and some cases of object detection recognition. We propose the use of Convolutional Neural Networks (CNN) to learn a matching function that can be trained from labeled sonar data, after pre-processing to generate matching and non-matching pairs. In a dataset of 39K training pairs, we obtain 0.91 Area under the ROC Curve (AUC) for a CNN that outputs a binary classification matching decision, and 0.89 AUC for another CNN that outputs a matching score. In comparison, classical keypoint matching methods like SIFT, SURF, ORB and AKAZE obtain AUC 0.61 to 0.68. Alternative learning methods obtain similar results, with a Random Forest Classifier obtaining AUC 0.79, and a Support Vector Machine resulting in AUC 0.66.
A large portion of the research on matching sonar images is devoted to registration and mosaicing @cite_3 @cite_8 . Both processes require many assumptions about the kind of images and their content, especially when considering non-uniform insonification and simple transformations between images.
{ "cite_N": [ "@cite_3", "@cite_8" ], "mid": [ "2036674525", "2069257422" ], "abstract": [ "An algorithm for image registration and mosaicing on underwater sonar image sequences characterised by a high noise level, inhomogeneous illumination and low frame rate is presented. Imaging geometry of acoustic cameras is significantly different from that of pinhole cameras. For a planar surface viewed through a pinhole camera undergoing translational and rotational motion, registration can be obtained via a projective transformation. For an acoustic camera, it is shown that, under the same conditions, an affine transformation is a good approximation. A novel image fusion method, which maximises the signal-to-noise ratio of the mosaic image is proposed. The full procedure includes illumination correction, feature based transformation estimation, and image fusion for mosaicing.", "This paper presents a method to build large-scale mosaics adapted to underwater sonar imagery. By assuming a simplified imaging model, we propose to address the registrations between images using Fourier-based methods which, unlike feature-based methods, prove well suited to handle the characteristics of forward-looking sonar images, such as low resolution, noise, occlusions and moving shadows. The registration between spatially and temporally distant images resulting from loop-closing situations or registrations in featureless areas are feasible, overcoming the main difficulties of feature-based methods. The problem is cast as a pose-based graph optimization, taking into account the uncertainties of the pairwise registrations and being able to incorporate navigation information. After the optimization, a consistent mosaic from different tracklines is generated with increased resolution and higher signal-to-noise ratio than the original images, while the vehicle motion in x,y and heading is also estimated." ] }
1709.02150
2751799747
Matching sonar images with high accuracy has been a problem for a long time, as sonar images are inherently hard to model due to reflections, noise and viewpoint dependence. Autonomous Underwater Vehicles require good sonar image matching capabilities for tasks such as tracking, simultaneous localization and mapping (SLAM) and some cases of object detection recognition. We propose the use of Convolutional Neural Networks (CNN) to learn a matching function that can be trained from labeled sonar data, after pre-processing to generate matching and non-matching pairs. In a dataset of 39K training pairs, we obtain 0.91 Area under the ROC Curve (AUC) for a CNN that outputs a binary classification matching decision, and 0.89 AUC for another CNN that outputs a matching score. In comparison, classical keypoint matching methods like SIFT, SURF, ORB and AKAZE obtain AUC 0.61 to 0.68. Alternative learning methods obtain similar results, with a Random Forest Classifier obtaining AUC 0.79, and a Support Vector Machine resulting in AUC 0.66.
Zbontar and LeCun @cite_18 also use CNNs for stereo matching, improving over the state of the art on several datasets. These recent results using CNNs motivate us to explore such algorithms for matching sonar images. CNNs have several advantages when applied to sonar imaging: they can learn sonar-specific information directly from raw data, they do not require feature engineering or specific data preprocessing, and they make few assumptions about the input data.
{ "cite_N": [ "@cite_18" ], "mid": [ "2963502507" ], "abstract": [ "We present a method for extracting depth information from a rectified image pair. Our approach focuses on the first stage of many stereo algorithms: the matching cost computation. We approach the problem by learning a similarity measure on small image patches using a convolutional neural network. Training is carried out in a supervised manner by constructing a binary classification data set with examples of similar and dissimilar pairs of patches. We examine two network architectures for this task: one tuned for speed, the other for accuracy. The output of the convolutional neural network is used to initialize the stereo matching cost. A series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, a left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter. We evaluate our method on the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it outperforms other approaches on all three data sets." ] }
1709.02285
2750762556
We present a processing technique for a robust reconstruction of motion properties for single points in large scale, dynamic environments. We assume that the acquisition camera is moving and that there are other independently moving agents in a large environment, like road scenarios. The separation of direction and magnitude of the reconstructed motion allows for robust reconstruction of the dynamic state of the objects in situations, where conventional binocular systems fail due to a small signal (disparity) from the images due to a constant detection error, and where structure from motion approaches fail due to unobserved motion of other agents between the camera frames. We present the mathematical framework and the sensitivity analysis for the resulting system.
Recently, multiple approaches have been published that combine structure-from-motion and optical flow @cite_5 @cite_3 @cite_6 @cite_8 . The current top method on the KITTI-2012 benchmark @cite_5 calculates the fundamental matrix and computes the epipolar lines of the flow; this computation is limited to rigid scenes. A similar calculation, based on the fundamental matrix and a regularization of the optical flow to align with the epipolar lines, can be found in @cite_6 . Independent motion in the scene is detected by relating it back to the optical flow of the entire scene. Roussos @cite_9 finds a solution for the depth and motion parameters of moving objects in the scene through batch processing over a sequence of about 30 frames. There have been multiple approaches to motion segmentation of scenes into regions corresponding to independently moving objects by exploiting 3D motion cues and epipolar constraints @cite_13 @cite_10 @cite_17 .
{ "cite_N": [ "@cite_8", "@cite_10", "@cite_9", "@cite_6", "@cite_3", "@cite_5", "@cite_13", "@cite_17" ], "mid": [ "1951289974", "2155191357", "2063759133", "1977194745", "1588809898", "", "", "1974306404" ], "abstract": [ "We propose a novel approach for optical flow estimation, targeted at large displacements with significant occlusions. It consists of two steps: i) dense matching by edge-preserving interpolation from a sparse set of matches; ii) variational energy minimization initialized with the dense matches. The sparse-to-dense interpolation relies on an appropriate choice of the distance, namely an edge-aware geodesic distance. This distance is tailored to handle occlusions and motion boundaries - two common and difficult issues for optical flow computation. We also propose an approximation scheme for the geodesic distance to allow fast computation without loss of performance. Subsequent to the dense interpolation step, standard one-level variational energy minimization is carried out on the dense matches to obtain the final flow estimation. The proposed approach, called Edge-Preserving Interpolation of Correspondences (EpicFlow) is fast and robust to large displacements. It significantly outperforms the state of the art on MPI-Sintel and performs on par on Kitti and Middlebury.", "We present an algorithm for identifying and tracking independently moving rigid objects from optical flow. Some previous attempts at segmentation via optical flow have focused on finding discontinuities in the flow field. While discontinuities do indicate a change in scene depth, they do not in general signal a boundary between two separate objects. The proposed method uses the fact that each independently moving object has a unique epipolar constraint associated with its motion. Thus motion discontinuities based on self-occlusion can be distinguished from those due to separate objects. The use of epipolar geometry allows for the determination of individual motion parameters for each object as well as the recovery of relative depth for each point on the object. The algorithm assumes an affine camera where perspective effects are limited to changes in overall scale. No camera calibration parameters are required. A Kalman filter based approach is used for tracking motion parameters with time.", "Existing approaches to camera tracking and reconstruction from a single handheld camera for Augmented Reality (AR) focus on the reconstruction of static scenes. However, most real world scenarios are dynamic and contain multiple independently moving rigid objects. This paper addresses the problem of simultaneous segmentation, motion estimation and dense 3D reconstruction of dynamic scenes. We propose a dense solution to all three elements of this problem: depth estimation, motion label assignment and rigid transformation estimation directly from the raw video by optimizing a single cost function using a hill-climbing approach. We do not require prior knowledge of the number of objects present in the scene — the number of independent motion models and their parameters are automatically estimated. The resulting inference method combines the best techniques in discrete and continuous optimization: a state of the art variational approach is used to estimate the dense depth maps while the motion segmentation is achieved using discrete graph-cut based optimization. 
For the rigid motion estimation of the independently moving objects we propose a novel tracking approach designed to cope with the small fields of view they induce and agile motion. Our experimental results on real sequences show how accurate segmentations and dense depth maps can be obtained in a completely automated way and used in marker-free AR applications.", "The accurate estimation of motion in image sequences is of central importance to numerous computer vision applications. Most competitive algorithms compute flow fields by minimizing an energy made of a data and a regularity term. To date, the best performing methods rely on rather simple purely geometric regularizes favoring smooth motion. In this paper, we revisit regularization and show that appropriate adaptive regularization substantially improves the accuracy of estimated motion fields. In particular, we systematically evaluate regularizes which adoptively favor rigid body motion (if supported by the image data) and motion field discontinuities that coincide with discontinuities of the image structure. The proposed algorithm relies on sequential convex optimization, is real-time capable and outperforms all previously published algorithms by more than one average rank on the Middlebury optic flow benchmark.", "Traditional estimation methods for the fundamental matrix rely on a sparse set of point correspondences that have been established by matching salient image features between two images. Recovering the fundamental matrix from dense correspondences has not been extensively researched until now. In this paper we propose a new variational model that recovers the fundamental matrix from a pair of uncalibrated stereo images, and simultaneously estimates an optical flow field that is consistent with the corresponding epipolar geometry. The model extends the highly accurate optical flow technique of (2004) by taking the epipolar constraint into account. In experiments we demonstrate that our approach is able to produce excellent estimates for the fundamental matrix and that the optical flow computation is on par with the best techniques to date.", "", "", "The detection of moving objects is important in many tasks. This paper examines moving object detection based primarily on optical flow. We conclude that in realistic situations, detection using visual information alone is quite difficult, particularly when the camera may also be moving. The availability of additional information about camera motion and or scene structure greatly simplifies the problem. Two general classes of techniques are examined. The first is based upon the motion epipolar constraint—translational motion produces a flow field radially expanding from a “focus of expansion” (FOE). Epipolar methods depend on knowing at least partial information about camera translation and or rotation. The second class of methods is based on comparison of observed optical flow with other information about depth, for example from stereo vision. Examples of several of these techniques are presented." ] }
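The epipolar-consistency test underlying several of these methods can be sketched in NumPy: under pure camera motion, a flow correspondence x -> x' satisfies x'^T F x ≈ 0 for the fundamental matrix F, so points with a large point-to-epipolar-line distance are candidates for independent motion. The tolerance value below is an assumption.

```python
import numpy as np

def epipolar_residuals(F, pts, flow):
    """F: (3, 3) fundamental matrix; pts, flow: (n, 2) pixel positions and flow."""
    n = len(pts)
    x = np.hstack([pts, np.ones((n, 1))])            # homogeneous points in frame 1
    xp = np.hstack([pts + flow, np.ones((n, 1))])    # flow-displaced points in frame 2
    lines = (F @ x.T).T                              # epipolar lines l = F x
    num = np.abs(np.sum(xp * lines, axis=1))         # |x'^T F x|
    den = np.linalg.norm(lines[:, :2], axis=1)       # normalize to pixel distance
    return num / np.maximum(den, 1e-9)

def moving_mask(F, pts, flow, tol=1.0):
    return epipolar_residuals(F, pts, flow) > tol    # True = likely independent motion
```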
1709.02285
2750762556
We present a processing technique for a robust reconstruction of motion properties for single points in large scale, dynamic environments. We assume that the acquisition camera is moving and that there are other independently moving agents in a large environment, like road scenarios. The separation of direction and magnitude of the reconstructed motion allows for robust reconstruction of the dynamic state of the objects in situations, where conventional binocular systems fail due to a small signal (disparity) from the images due to a constant detection error, and where structure from motion approaches fail due to unobserved motion of other agents between the camera frames. We present the mathematical framework and the sensitivity analysis for the resulting system.
This approach goes beyond the problem of clustering independent motion components. It provides a framework for motion estimation in dynamic scenes using an extension of the time-to-collision approach presented in @cite_14 . It is interesting to see that, under some restricted conditions, the system is able to reconstruct depth relations based entirely on the pixel information of single points in the images. The remainder of the paper is structured as follows. In Section 2, the new method of depth calculation for point features is presented. In Section 3, the error propagation in the presented framework is analyzed. We conclude with an evaluation of the achieved results.
{ "cite_N": [ "@cite_14" ], "mid": [ "2136071851" ], "abstract": [ "Time-to-contact is an important quantity for controlling activities which involve the timing of interactions with objects and surfaces in motion relative to an observer. Two alternative means for obtaining perceptual information that might be used to obtain the time-to-contact required to correctly time an interaction have been contrasted: a method based on the perception of distance and velocity, and a method due to Lee involving a perceptual variable called tau. A monocular version of the first method is presented and shown to place a highly unrealistic and arbitrary limitation on the capabilities of the visual system. The second method is reviewed and its limitations discussed. Several means by which these limitations can be overcome are presented. Recently reported results from experiments which involved catching self-luminous balls in the dark are interpreted in terms of timing information available to the subject, and the notions of intermodal and multimodal timing information are introduced. Finall..." ] }
1709.02142
2741691423
It is possible to associate a highly constrained subset of relative 6 DoF poses between two 3D shapes, as long as the local surface orientation, the normal vector, is available at every surface point. Local shape features can be used to find putative point correspondences between the models due to their ability to handle noisy and incomplete data. However, this correspondence set is usually contaminated by outliers in practical scenarios, which has led to many past contributions based on robust detectors such as the Hough transform or RANSAC. The key insight of our work is that a single correspondence between oriented points on the two models is constrained to cast votes in a 1 DoF rotational subgroup of the full group of poses, SE(3). Kernel density estimation allows combining the set of votes efficiently to determine a full 6 DoF candidate pose between the models. This modal pose with the highest density is stable under challenging conditions, such as noise, clutter, and occlusions, and provides the output estimate of our method. We first analyze the robustness of our method in relation to noise and show that it handles high outlier rates much better than RANSAC for the task of 6 DoF pose estimation. We then apply our method to four state of the art data sets for 3D object recognition that contain occluded and cluttered scenes. Our method achieves perfect recall on two LIDAR data sets and outperforms competing methods on two RGB-D data sets, thus setting a new standard for general 3D object recognition using point cloud data.
A very different class of methods relies on 2.5D data from RGB-D sensors. The best-known method is arguably LINEMOD @cite_28 , which allowed for real-time matching of thousands of object templates in RGB-D data. Many competing methods using template-based approaches were introduced afterward, including @cite_29 @cite_4 @cite_13 and most recently @cite_10 @cite_24 @cite_25 . The last three achieved very high detection rates by learning an intermediate feature layer with either convolutional or autoencoder neural networks.
{ "cite_N": [ "@cite_13", "@cite_4", "@cite_28", "@cite_29", "@cite_24", "@cite_10", "@cite_25" ], "mid": [ "1022526533", "2050966058", "", "132147841", "2488101876", "", "1909903157" ], "abstract": [ "In this paper we propose a novel framework, Latent-Class Hough Forests, for 3D object detection and pose estimation in heavily cluttered and occluded scenes. Firstly, we adapt the state-of-the-art template matching feature, LINEMOD [14], into a scale-invariant patch descriptor and integrate it into a regression forest using a novel template-based split function. In training, rather than explicitly collecting representative negative samples, our method is trained on positive samples only and we treat the class distributions at the leaf nodes as latent variables. During the inference process we iteratively update these distributions, providing accurate estimation of background clutter and foreground occlusions and thus a better detection rate. Furthermore, as a by-product, the latent class distributions can provide accurate occlusion aware segmentation masks, even in the multi-instance scenario. In addition to an existing public dataset, which contains only single-instance sequences with large amounts of clutter, we have collected a new, more challenging, dataset for multiple-instance detection containing heavy 2D and 3D clutter as well as foreground occlusions. We evaluate the Latent-Class Hough Forest on both of these datasets where we outperform state-of-the art methods.", "In this paper we propose a new method for detecting multiple specific 3D objects in real time. We start from the template-based approach based on the LINE2D LINEMOD representation introduced recently by , yet extend it in two ways. First, we propose to learn the templates in a discriminative fashion. We show that this can be done online during the collection of the example images, in just a few milliseconds, and has a big impact on the accuracy of the detector. Second, we propose a scheme based on cascades that speeds up detection. Since detection of an object is fast, new objects can be added with very low cost, making our approach scale well. In our experiments, we easily handle 10-30 3D objects at frame rates above 10fps using a single CPU core. We outperform the state-of-the-art both in terms of speed as well as in terms of accuracy, as validated on 3 different datasets. This holds both when using monocular color images (with LINE2D) and when using RGBD images (with LINEMOD). Moreover, we propose a challenging new dataset made of 12 objects, for future competing methods on monocular color images.", "", "This work addresses the problem of estimating the 6D Pose of specific objects from a single RGB-D image. We present a flexible approach that can deal with generic objects, both textured and texture-less. The key new concept is a learned, intermediate representation in form of a dense 3D object coordinate labelling paired with a dense class labelling. We are able to show that for a common dataset with texture-less objects, where template-based techniques are suitable and state of the art, our approach is slightly superior in terms of accuracy. We also demonstrate the benefits of our approach, compared to template-based techniques, in terms of robustness with respect to varying lighting conditions. Towards this end, we contribute a new ground truth dataset with 10k images of 20 objects captured each under three different lighting conditions. 
We demonstrate that our approach scales well with the number of objects and has capabilities to run fast.", "We present a 3D object detection method that uses regressed descriptors of locally-sampled RGB-D patches for 6D vote casting. For regression, we employ a convolutional auto-encoder that has been trained on a large collection of random local patches. During testing, scene patch descriptors are matched against a database of synthetic model view patches and cast 6D object votes which are subsequently filtered to refined hypotheses. We evaluate on three datasets to show that our method generalizes well to previously unseen input data, delivers robust detection results that compete with and surpass the state-of-the-art while being scalable in the number of objects.", "", "Detecting poorly textured objects and estimating their 3D pose reliably is still a very challenging problem. We introduce a simple but powerful approach to computing descriptors for object views that efficiently capture both the object identity and 3D pose. By contrast with previous manifold-based approaches, we can rely on the Euclidean distance to evaluate the similarity between descriptors, and therefore use scalable Nearest Neighbor search methods to efficiently handle a large number of objects under a large range of poses. To achieve this, we train a Convolutional Neural Network to compute these descriptors by enforcing simple similarity and dissimilarity constraints between the descriptors. We show that our constraints nicely untangle the images from different objects and different views into clusters that are not only well-separated but also structured as the corresponding sets of poses: The Euclidean distance between descriptors is large when the descriptors are from different objects, and directly related to the distance between the poses when the descriptors are from the same object. These important properties allow us to outperform state-of-the-art object views representations on challenging RGB and RGB-D data." ] }
1709.02063
2753574082
When selecting ideas or trying to find inspiration, designers often must sift through hundreds or thousands of ideas. This paper provides an algorithm to rank design ideas such that the ranked list simultaneously maximizes the quality and diversity of recommended designs. To do so, we first define and compare two diversity measures using Determinantal Point Processes (DPP) and additive sub-modular functions. We show that DPPs are more suitable for items expressed as text and that a greedy algorithm diversifies rankings with both theoretical guarantees and empirical performance on what is otherwise an NP-Hard problem. To produce such rankings, this paper contributes a novel way to extend quality and diversity metrics from sets to permutations of ranked lists. These rank metrics open up the use of multi-objective optimization to describe trade-offs between diversity and quality in ranked lists. We use such trade-off fronts to help designers select rankings using indifference curves. However, we also show that rankings on trade-off front share a number of top-ranked items; this means reviewing items (for a given depth like the top 10) from across the entire diversity-to-quality front incurs only a marginal increase in the number of designs considered. While the proposed techniques are general purpose enough to be used across domains, we demonstrate concrete performance on selecting items in an online design community (OpenIDEO), where our approach reduces the time required to review diverse, high-quality ideas from around 25 hours to 90 minutes. This makes evaluation of crowd-generated ideas tractable for a single designer. Our code is publicly accessible for further research.
The main research questions within both recommender systems and information retrieval are two-fold: (1) how do we represent this diminishing marginal utility, and, once we do, (2) how do we optimize over it efficiently? For the former question, researchers have proposed alternative scoring methods to diversify rankings. An early exemplar of this was Ziegler et al. @cite_14 , who modeled the topics in text documents and then tried to balance the topics within recommended lists. Their large-scale user survey showed that a user's overall satisfaction with lists depended on both accuracy and the perceived diversity of list items. Approaches that followed largely centered around the notion that a diverse set should somehow cover a space of items well. The main differentiators of past approaches are how this coverage is measured and then combined with other objectives such as document relevance.
{ "cite_N": [ "@cite_14" ], "mid": [ "2155912844" ], "abstract": [ "In this work we present topic diversification, a novel method designed to balance and diversify personalized recommendation lists in order to reflect the user's complete spectrum of interests. Though being detrimental to average accuracy, we show that our method improves user satisfaction with recommendation lists, in particular for lists generated using the common item-based collaborative filtering algorithm.Our work builds upon prior research on recommender systems, looking at properties of recommendation lists as entities in their own right rather than specifically focusing on the accuracy of individual recommendations. We introduce the intra-list similarity metric to assess the topical diversity of recommendation lists and the topic diversification approach for decreasing the intra-list similarity. We evaluate our method using book recommendation data, including offline analysis on 361, !, 349 ratings and an online study involving more than 2, !, 100 subjects." ] }
1709.02249
2751423104
In this paper, we propose an uncertainty-aware learning from demonstration method by presenting a novel uncertainty estimation method utilizing a mixture density network appropriate for modeling complex and noisy human behaviors. The proposed uncertainty acquisition can be done with a single forward path without Monte Carlo sampling and is suitable for real-time robotics applications. The properties of the proposed uncertainty measure are analyzed through three different synthetic examples, absence of data, heavy measurement noise, and composition of functions scenarios. We show that each case can be distinguished using the proposed uncertainty measure and presented an uncertainty-aware learning from demonstration method of an autonomous driving using this property. The proposed uncertainty-aware learning from demonstration method outperforms other compared methods in terms of safety using a complex real-world driving dataset.
Kendall and Gal @cite_15 decomposed the predictive uncertainty into two major types, epistemic uncertainty and aleatoric uncertainty. First, epistemic uncertainty captures our ignorance about the predictive model. It is often referred to as reducible uncertainty, as this type of uncertainty can be reduced as we collect more training data from diverse scenarios. On the other hand, aleatoric uncertainty captures irreducible aspects of the predictive variance, such as the randomness inherent in coin flipping. To this end, Kendall and Gal utilized a density network similar to @cite_25 but used a slightly different cost function for numerical stability. The variance output directly by the density network indicates heteroscedastic aleatoric uncertainty, and the overall predictive uncertainty of the output @math given an input @math is approximated from the sample statistics of the stochastic forward paths, where @math are @math samples of the mean and variance functions of a density network.
{ "cite_N": [ "@cite_15", "@cite_25" ], "mid": [ "2950517871", "2560321925" ], "abstract": [ "There are two major types of uncertainty one can model. Aleatoric uncertainty captures noise inherent in the observations. On the other hand, epistemic uncertainty accounts for uncertainty in the model -- uncertainty which can be explained away given enough data. Traditionally it has been difficult to model epistemic uncertainty in computer vision, but with new Bayesian deep learning tools this is now possible. We study the benefits of modeling epistemic vs. aleatoric uncertainty in Bayesian deep learning models for vision tasks. For this we present a Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty. We study models under the framework with per-pixel semantic segmentation and depth regression tasks. Further, our explicit uncertainty formulation leads to new loss functions for these tasks, which can be interpreted as learned attenuation. This makes the loss more robust to noisy data, also giving new state-of-the-art results on segmentation and depth regression benchmarks.", "Deep neural networks (NNs) are powerful black box predictors that have recently achieved impressive performance on a wide spectrum of tasks. Quantifying predictive uncertainty in NNs is a challenging and yet unsolved problem. Bayesian NNs, which learn a distribution over weights, are currently the state-of-the-art for estimating predictive uncertainty; however these require significant modifications to the training procedure and are computationally expensive compared to standard (non-Bayesian) NNs. We propose an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates. Through a series of experiments on classification and regression benchmarks, we demonstrate that our method produces well-calibrated uncertainty estimates which are as good or better than approximate Bayesian NNs. To assess robustness to dataset shift, we evaluate the predictive uncertainty on test examples from known and unknown distributions, and show that our method is able to express higher uncertainty on out-of-distribution examples. We demonstrate the scalability of our method by evaluating predictive uncertainty estimates on ImageNet." ] }
1709.02249
2751423104
In this paper, we propose an uncertainty-aware learning from demonstration method by presenting a novel uncertainty estimation method utilizing a mixture density network appropriate for modeling complex and noisy human behaviors. The proposed uncertainty acquisition can be done with a single forward path without Monte Carlo sampling and is suitable for real-time robotics applications. The properties of the proposed uncertainty measure are analyzed through three different synthetic examples, absence of data, heavy measurement noise, and composition of functions scenarios. We show that each case can be distinguished using the proposed uncertainty measure and presented an uncertainty-aware learning from demonstration method of an autonomous driving using this property. The proposed uncertainty-aware learning from demonstration method outperforms other compared methods in terms of safety using a complex real-world driving dataset.
Modeling and incorporating uncertainty in predictions have been widely used in robotics, mostly to ensure safety in the training phase of reinforcement learning @cite_13 or to avoid false classification by a learned cost function @cite_17 . In @cite_13 , an uncertainty-aware collision prediction method is proposed by training multiple deep neural networks using bootstrapping and dropout. Once multiple networks are trained, the sample mean and variance of multiple stochastic forward paths through the different networks are used to compute the predictive variance. Once the predictive variance is higher than a certain threshold, a risk-averse cost function is used instead of the learned cost function, leading to low-speed control. This approach can be seen as extending @cite_20 by adding additional randomness from bootstrapping. However, as multiple networks are required, the computational complexity of both the training and test phases increases.
{ "cite_N": [ "@cite_13", "@cite_20", "@cite_17" ], "mid": [ "2586067474", "582134693", "2734960757" ], "abstract": [ "Reinforcement learning can enable complex, adaptive behavior to be learned automatically for autonomous robotic platforms. However, practical deployment of reinforcement learning methods must contend with the fact that the training process itself can be unsafe for the robot. In this paper, we consider the specific case of a mobile robot learning to navigate an a priori unknown environment while avoiding collisions. In order to learn collision avoidance, the robot must experience collisions at training time. However, high-speed collisions, even at training time, could damage the robot. A successful learning method must therefore proceed cautiously, experiencing only low-speed collisions until it gains confidence. To this end, we present an uncertainty-aware model-based learning algorithm that estimates the probability of collision together with a statistical estimate of uncertainty. By formulating an uncertainty-dependent cost function, we show that the algorithm naturally chooses to proceed cautiously in unfamiliar environments, and increases the velocity of the robot in settings where it has high confidence. Our predictive model is based on bootstrapped neural networks using dropout, allowing it to process raw sensory inputs from high-bandwidth sensors such as cameras. Our experimental evaluation demonstrates that our method effectively minimizes dangerous collisions at training time in an obstacle avoidance task for a simulated and real-world quadrotor, and a real-world RC car. Videos of the experiments can be found at this https URL.", "Deep learning tools have gained tremendous attention in applied machine learning. However such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs -- extracting information from existing models that has been thrown away so far. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and non-linearities are assessed on tasks of regression and classification, using MNIST as an example. We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning.", "" ] }
1709.02184
2901719872
Our work presented in this paper focuses on the translation of terminological expressions represented in semantically structured resources, like ontologies or knowledge graphs. The challenge of translating ontology labels or terminological expressions documented in knowledge bases lies in the highly specific vocabulary and the lack of contextual information, which could guide a machine translation system to translate ambiguous words into the targeted domain. Due to these challenges, we evaluate the translation quality of domain-specific expressions in the medical and financial domains with statistical as well as neural machine translation methods and experiment with domain adaptation of the translation models using terminological expressions only. Furthermore, we perform experiments on the injection of external terminological expressions into the translation systems. Through these experiments, we observed a significant advantage from domain adaptation on the domain-specific resources in the medical and financial domains, as well as the benefit of subword models over word-based neural machine translation models for terminology translation.
Most of the previous work on translating knowledge resources, e.g. ontologies or taxonomies, tackled this problem by accessing multilingual lexical resources, such as EuroWordNet or IATE @cite_27 @cite_22 . This work focuses on identifying the lexical overlap between the ontology labels and the multilingual resource. Since replacing the source vocabulary with the target vocabulary from such lexicons guarantees high precision but low recall, external translation services, such as BabelFish, the SDL FreeTranslation tool or Google Translate, were used to overcome this issue @cite_13 @cite_21 . Additionally, ontology label disambiguation has been performed in prior work, where the structure of the ontology along with existing multilingual ontologies was used to annotate the labels with their semantic senses. Furthermore, positive effects have been shown for different domain adaptation techniques for automatic ontology translation, i.e., using Web resources as additional bilingual knowledge, re-scoring translations with Explicit Semantic Analysis (ESA), and language model adaptation. A different approach to ontology label disambiguation identified relevant in-domain parallel sentences and used them to train an ontology-specific SMT system.
{ "cite_N": [ "@cite_27", "@cite_21", "@cite_13", "@cite_22" ], "mid": [ "162052402", "2251753547", "1503434654", "1887118224" ], "abstract": [ "We describe the integration of some multilingual language resources in ontological descriptions, with the purpose of providing ontologies, which are normally using concept labels in just one (natural) language, with multilingual facility in their design and use in the context of Semantic Web applications, supporting both the semantic annotation of textual documents with multilingual ontology labels and ontology extraction from multilingual text sources.", "This publication has emanated from research supported in part by a research grant from Science Foundation Ireland (SFI) under Grant Number SFI 12 RC 2289 (Insight) and the European Union supported projects LIDER (ICT-2013.4.1-610782) and MixedEmotions (H2020-644632).", "Ontologies are at the heart of knowledge management and make use of information that is not only written in English but also in many other natural languages. In order to enable knowledge discovery, sharing and reuse of these multilingual ontologies, it is necessary to support ontology mapping despite natural language barriers. This paper examines the soundness of a generic approach that involves machine translation tools and monolingual ontology matching techniques in cross-lingual ontology mapping scenarios. In particular, experimental results collected from case studies which engage mappings of independent ontologies that are labeled in English and Chinese are presented. Based on findings derived from these studies, limitations of this generic approach are discussed. It is shown with evidence that appropriate translations of conceptual labels in ontologies are of crucial importance when applying monolingual matching techniques in cross-lingual ontology mapping. Finally, to address the identified challenges, a semantic-oriented cross-lingual ontology mapping (SOCOM) framework is proposed and discussed.", "We revisit the notion of ontology localization, propose a new definition and clearly specify the layers of an ontology that can be affected by the process of localizing it. We also work out a number of dimensions that allow to characterize the type of ontology localization performed and to predict the layers that will be affected. Overall our aim is to contribute to a better understanding of the task of localizing an ontology." ] }
1709.02260
2725001169
We study embedded Binarized Neural Networks (eBNNs) with the aim of allowing current binarized neural networks (BNNs) in the literature to perform feedforward inference efficiently on small embedded devices. We focus on minimizing the required memory footprint, given that these devices often have memory as small as tens of kilobytes (KB). Beyond minimizing the memory required to store weights, as in a BNN, we show that it is essential to minimize the memory used for temporaries which hold intermediate results between layers in feedforward inference. To accomplish this, eBNN reorders the computation of inference while preserving the original BNN structure, and uses just a single floating-point temporary for the entire neural network. All intermediate results from a layer are stored as binary values, as opposed to floating-points used in current BNN implementations, leading to a 32x reduction in required temporary space. We provide empirical evidence that our proposed eBNN approach allows efficient inference (10s of ms) on devices with severely limited memory (10s of KB). For example, eBNN achieves 95% accuracy on the MNIST dataset running on an Intel Curie with only 15 KB of usable memory with an inference runtime of under 50 ms per sample. To ease the development of applications in embedded contexts, we make our source code available that allows users to train and discover eBNN models for a learning task at hand, which fit within the memory constraint of the target device.
In 2015, BinaryConnect was proposed, a method of training DNNs where all propagations (both the forward and backward steps) use binary weights @cite_9 . In 2016, the same authors expanded on this work with BinaryNet and formally introduced Binarized Neural Networks (BNNs) @cite_8 . The second paper provides implementation details on how to efficiently perform binary matrix multiplication, used in both fully connected and convolutional layers, through the use of bit operations (xnor and popcount). In BNNs, all weights of filters in a layer must be @math or @math (stored as @math and @math respectively) instead of 32-bit floating-point values. This representation leads to much more space-efficient models compared to standard floating-point DNNs. A key to the success of BNNs is the binary activation function, which clamps all negative inputs to @math and all positive inputs to @math . XNOR-Net provides a different network structure for BNNs where pooling occurs before binary activation @cite_6 . In this paper, we use the BNN formulation described in the BinaryNet paper.
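As an illustration of the bit-operation trick mentioned above, the following sketch computes a {-1, +1} dot product with xnor and popcount, packing +1 as bit 1 and -1 as bit 0; it is a toy Python rendition of the idea, not the BinaryNet kernels.

```python
def pack(values):
    """Pack a list of +/-1 values into an int bitfield (+1 -> 1, -1 -> 0)."""
    word = 0
    for i, v in enumerate(values):
        if v == 1:
            word |= 1 << i
    return word

def binary_dot(a_bits, w_bits, n):
    """Dot product of two {-1,+1}^n vectors via xnor + popcount."""
    xnor = ~(a_bits ^ w_bits) & ((1 << n) - 1)   # 1 where the signs agree
    matches = bin(xnor).count("1")               # popcount
    return 2 * matches - n                       # agreements minus disagreements

a, w = [1, -1, 1, 1], [1, 1, -1, 1]
assert binary_dot(pack(a), pack(w), 4) == sum(x * y for x, y in zip(a, w))
```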
{ "cite_N": [ "@cite_9", "@cite_6", "@cite_8" ], "mid": [ "2963114950", "2951978180", "2260663238" ], "abstract": [ "Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and power-hungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.", "We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values resulting in 32x memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58x faster convolutional operations and 32x memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is only 2.9 less than the full-precision AlexNet (in top-1 measure). We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than 16 in top-1 accuracy.", "" ] }
1709.02260
2725001169
We study embedded Binarized Neural Networks (eBNNs) with the aim of allowing current binarized neural networks (BNNs) in the literature to perform feedforward inference efficiently on small embedded devices. We focus on minimizing the required memory footprint, given that these devices often have memory as small as tens of kilobytes (KB). Beyond minimizing the memory required to store weights, as in a BNN, we show that it is essential to minimize the memory used for temporaries which hold intermediate results between layers in feedforward inference. To accomplish this, eBNN reorders the computation of inference while preserving the original BNN structure, and uses just a single floating-point temporary for the entire neural network. All intermediate results from a layer are stored as binary values, as opposed to floating-points used in current BNN implementations, leading to a 32x reduction in required temporary space. We provide empirical evidence that our proposed eBNN approach allows efficient inference (10s of ms) on devices with severely limited memory (10s of KB). For example, eBNN achieves 95% accuracy on the MNIST dataset running on an Intel Curie with only 15 KB of usable memory with an inference runtime of under 50 ms per sample. To ease the development of applications in embedded contexts, we make our source code available that allows users to train and discover eBNN models for a learning task at hand, which fit within the memory constraint of the target device.
These networks have been shown to achieve similar performance on several standard community datasets when compared to traditional deep networks that use float precision. Research on BNNs thus far has primarily focused on improving the classification performance of these binary network structures and reducing the training time of the networks on GPUs. While the 32x memory reduction from floats to bits for the weights makes BNNs an obvious candidate for low-power embedded systems, current BNN implementations target large GPUs and are written in one of several popular GPU frameworks (Theano, Torch) @cite_2 @cite_3 . However, the computational model of GPUs is organized for high parallelism by reusing large temporary buffers efficiently. This computational model is a poor fit for embedded devices, which have no hardware-supported parallelism and only a relatively small amount of memory. We show that the optimal order of computation changes drastically when transitioning from a GPU environment with large memory and high parallelism to an embedded environment with small memory and no parallelism. Our implementation optimizations based on computation reordering are general and can be applied to other BNN structures.
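A rough sketch of the reordering idea, under the assumption of bit-packed inputs and weights as in the xnor example above: each output neuron is accumulated into a single scalar temporary and binarized immediately, so only bits are stored between layers. This is an illustrative rendition, not the paper's implementation.

```python
def binary_fc_layer(in_bits, weight_rows, n_in):
    """Binary FC layer over bit-packed inputs; one scalar temporary in total."""
    mask = (1 << n_in) - 1
    out_bits = 0
    for j, w_bits in enumerate(weight_rows):     # one packed row per neuron
        xnor = ~(in_bits ^ w_bits) & mask
        acc = 2 * bin(xnor).count("1") - n_in    # the single accumulator
        if acc >= 0:                             # sign activation
            out_bits |= 1 << j                   # intermediate kept as one bit
    return out_bits                              # bit-packed layer output
```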
{ "cite_N": [ "@cite_3", "@cite_2" ], "mid": [ "1548328233", "2384495648" ], "abstract": [ "Keywords: learning Reference EPFL-REPORT-82802 URL: http: publications.idiap.ch downloads reports 2002 rr02-46.pdf Record created on 2006-03-10, modified on 2017-05-10", "Theano is a Python library that allows to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. Since its introduction, it has been one of the most used CPU and GPU mathematical compilers - especially in the machine learning community - and has shown steady performance improvements. Theano is being actively and continuously developed since 2008, multiple frameworks have been built on top of it and it has been used to produce many state-of-the-art machine learning models. The present article is structured as follows. Section I provides an overview of the Theano software and its community. Section II presents the principal features of Theano and how to use them, and compares them with other similar projects. Section III focuses on recently-introduced functionalities and improvements. Section IV compares the performance of Theano against Torch7 and TensorFlow on several machine learning models. Section V discusses current limitations of Theano and potential ways of improving it." ] }
1709.02169
2753635132
In outdoor environments, mobile robots are required to navigate through terrain with varying characteristics, some of which might significantly affect the integrity of the platform. Ideally, the robot should be able to identify areas that are safe for navigation based on its own percepts about the environment while avoiding damage to itself. Bayesian optimisation (BO) has been successfully applied to the task of learning a model of terrain traversability while guiding the robot through more traversable areas. An issue, however, is that localisation uncertainty can end up guiding the robot to unsafe areas and distort the model being learnt. In this paper, we address this problem and present a novel method that allows BO to consider localisation uncertainty by applying a Gaussian process model for uncertain inputs as a prior. We evaluate the proposed method in simulation and in experiments with a real robot navigating over rough terrain and compare it to standard BO methods which assume deterministic inputs.
@cite_11 presented a method to learn a GP model of terrain roughness from vehicle experience. The authors applied Bayesian optimisation (BO) @cite_3 in an active perception approach to reduce the vibration experienced during navigation while learning the model from IMU measurements online. In the BO framework, the terrain roughness model is learnt online as the algorithm drives the robot around, selecting locations to visit by balancing a trade-off between exploration and exploitation @cite_3 @cite_11 . Nevertheless, BO usually considers deterministic query locations within its search space, as BO is typically used in problems where a fixed number of parameters have to be optimised @cite_25 @cite_17 .
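For illustration, a minimal BO step with a GP surrogate and an upper-confidence-style acquisition could look as follows; the candidate set, the `kappa` trade-off parameter, and the use of scikit-learn's GaussianProcessRegressor are assumptions for this sketch, not the cited implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def bo_step(gp, X_visited, y_roughness, candidates, kappa=2.0):
    """Pick the next location, trading off exploration and exploitation."""
    gp.fit(X_visited, y_roughness)
    mu, std = gp.predict(candidates, return_std=True)
    acquisition = -mu + kappa * std   # low predicted roughness or high variance
    return candidates[np.argmax(acquisition)]

gp = GaussianProcessRegressor()
X = np.array([[0.0, 0.0], [1.0, 2.0]])       # visited locations
y = np.array([0.3, 0.8])                     # measured roughness (IMU proxy)
candidates = np.random.uniform(0.0, 3.0, size=(100, 2))
next_location = bo_step(gp, X, y, candidates)
```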
{ "cite_N": [ "@cite_17", "@cite_25", "@cite_3", "@cite_11" ], "mid": [ "2145957964", "2950182411", "2950338507", "2009906883" ], "abstract": [ "Recently, Bayesian Optimization (BO) has been used to successfully optimize parametric policies in several challenging Reinforcement Learning (RL) applications. BO is attractive for this problem because it exploits Bayesian prior information about the expected return and exploits this knowledge to select new policies to execute. Effectively, the BO framework for policy search addresses the exploration-exploitation tradeoff. In this work, we show how to more effectively apply BO to RL by exploiting the sequential trajectory information generated by RL agents. Our contributions can be broken into two distinct, but mutually beneficial, parts. The first is a new Gaussian process (GP) kernel for measuring the similarity between policies using trajectory data generated from policy executions. This kernel can be used in order to improve posterior estimates of the expected return thereby improving the quality of exploration. The second contribution, is a new GP mean function which uses learned transition and reward functions to approximate the surface of the objective. We show that the model-based approach we develop can recover from model inaccuracies when good transition and reward models cannot be learned. We give empirical results in a standard set of RL benchmarks showing that both our model-based and model-free approaches can speed up learning compared to competing methods. Further, we show that our contributions can be combined to yield synergistic improvement in some domains.", "Machine learning algorithms frequently require careful tuning of model hyperparameters, regularization terms, and optimization parameters. Unfortunately, this tuning is often a \"black art\" that requires expert experience, unwritten rules of thumb, or sometimes brute-force search. Much more appealing is the idea of developing automatic approaches which can optimize the performance of a given learning algorithm to the task at hand. In this work, we consider the automatic tuning problem within the framework of Bayesian optimization, in which a learning algorithm's generalization performance is modeled as a sample from a Gaussian process (GP). The tractable posterior distribution induced by the GP leads to efficient use of the information gathered by previous experiments, enabling optimal choices about what parameters to try next. Here we show how the effects of the Gaussian process prior and the associated inference procedure can have a large impact on the success or failure of Bayesian optimization. We show that thoughtful choices can lead to results that exceed expert-level performance in tuning machine learning algorithms. We also describe new algorithms that take into account the variable cost (duration) of learning experiments and that can leverage the presence of multiple cores for parallel experimentation. We show that these proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization on a diverse set of contemporary algorithms including latent Dirichlet allocation, structured SVMs and convolutional neural networks.", "", "A key challenge for long-term autonomy is to enable a robot to automatically model properties of the environment while actively searching for better decisions to accomplish its task. This amounts to the problem of exploration-exploitation in the context of active perception. 
This paper addresses active perception and presents a technique to incrementally model the roughness of the terrain a robot navigates on while actively searching for paths that reduce the overall vibration experienced during travel. The approach employs Gaussian processes in conjunction with Bayesian optimisation for decision making. The algorithms are executed in real-time on the robot while it explores the environment. We present experiments with an outdoor vehicle navigating over several types of terrains demonstrating the properties and effectiveness of the approach." ] }
1709.02169
2753635132
In outdoor environments, mobile robots are required to navigate through terrain with varying characteristics, some of which might significantly affect the integrity of the platform. Ideally, the robot should be able to identify areas that are safe for navigation based on its own percepts about the environment while avoiding damage to itself. Bayesian optimisation (BO) has been successfully applied to the task of learning a model of terrain traversability while guiding the robot through more traversable areas. An issue, however, is that localisation uncertainty can end up guiding the robot to unsafe areas and distort the model being learnt. In this paper, we address this problem and present a novel method that allows BO to consider localisation uncertainty by applying a Gaussian process model for uncertain inputs as a prior. We evaluate the proposed method in simulation and in experiments with a real robot navigating over rough terrain and compare it to standard BO methods which assume deterministic inputs.
Recent work @cite_23 presented a method to apply BO to problems where the execution of a query is uncertain, such as robotic grasping. The authors propose querying BO's surrogate model with a Gaussian distribution by applying the unscented transform @cite_22 . In this way, even with uncertainty in the execution of the query, the algorithm chooses to sample the objective function at locations where interesting values are more likely to be observed, instead of trying to reach a narrow peak. Despite that, @cite_23 still applies a deterministic-inputs GP model as a prior for BO. In navigation problems, the robot is usually able to obtain a probability distribution estimating its location from a localisation system. Making use of a GP model that takes such distributions as inputs should then allow BO to learn a better model of the true underlying objective function.
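A small sketch of the unscented-transform querying idea: the acquisition function is evaluated at the sigma points of the Gaussian over the executed query and averaged, so narrow peaks that are unlikely to be hit under execution noise score poorly. The sigma-point construction follows standard UT conventions; the parameter names are assumptions, not the cited paper's code.

```python
import numpy as np

def sigma_points(mu, cov, kappa=1.0):
    """Standard unscented-transform sigma points and weights."""
    n = len(mu)
    L = np.linalg.cholesky((n + kappa) * cov)
    pts = [mu] + [mu + L[:, i] for i in range(n)] + [mu - L[:, i] for i in range(n)]
    weights = np.array([kappa / (n + kappa)] + [0.5 / (n + kappa)] * (2 * n))
    return np.array(pts), weights

def unscented_acquisition(acq, mu, cov):
    """Expected acquisition value under a Gaussian over the executed query."""
    pts, w = sigma_points(mu, cov)
    return float(np.sum(w * np.array([acq(p) for p in pts])))

# A narrow peak scores worse than a broad plateau under execution noise.
value = unscented_acquisition(lambda x: np.exp(-50 * x @ x),
                              mu=np.zeros(2), cov=0.1 * np.eye(2))
```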
{ "cite_N": [ "@cite_22", "@cite_23" ], "mid": [ "1749494163", "2963481418" ], "abstract": [ "This paper points out the flaws in using the extended Kalman filter (EKE) and introduces an improvement, the unscented Kalman filter (UKF), proposed by Julier and Uhlman (1997). A central and vital operation performed in the Kalman filter is the propagation of a Gaussian random variable (GRV) through the system dynamics. In the EKF the state distribution is approximated by a GRV, which is then propagated analytically through the first-order linearization of the nonlinear system. This can introduce large errors in the true posterior mean and covariance of the transformed GRV, which may lead to sub-optimal performance and sometimes divergence of the filter. The UKF addresses this problem by using a deterministic sampling approach. The state distribution is again approximated by a GRV, but is now represented using a minimal set of carefully chosen sample points. These sample points completely capture the true mean and covariance of the GRV, and when propagated through the true nonlinear system, captures the posterior mean and covariance accurately to the 3rd order (Taylor series expansion) for any nonlinearity. The EKF in contrast, only achieves first-order accuracy. Remarkably, the computational complexity of the UKF is the same order as that of the EKF. Julier and Uhlman demonstrated the substantial performance gains of the UKF in the context of state-estimation for nonlinear control. Machine learning problems were not considered. We extend the use of the UKF to a broader class of nonlinear estimation problems, including nonlinear system identification, training of neural networks, and dual estimation problems. In this paper, the algorithms are further developed and illustrated with a number of additional examples.", "Safe and robust grasping of unknown objects is a major challenge in robotics, which has no general solution yet. A promising approach relies on haptic exploration, where active optimization strategies can be employed to reduce the number of exploration trials. One critical problem is that certain optimal grasps discoverd by the optimization procedure may be very sensitive to small deviations of the parameters from their nominal values: we call these unsafe grasps because small errors during motor execution may turn optimal grasps into bad grasps. To reduce the risk of grasp failure, safe grasps should be favoured. Therefore, we propose a new algorithm, unscented Bayesian optimization, that performs efficient optimization while considering uncertainty in the input space, leading to the discovery of safe optima. The results highlight how our method outperforms the classical Bayesian optimization both in synthetic problems and in realistic robot grasp simulations, finding robust and safe grasps after a few exploration trials." ] }
1709.01921
2734927154
We propose distributed deep neural networks (DDNNs) over distributed computing hierarchies, consisting of the cloud, the edge (fog) and end devices. While being able to accommodate inference of a deep neural network (DNN) in the cloud, a DDNN also allows fast and localized inference using shallow portions of the neural network at the edge and end devices. When supported by a scalable distributed computing hierarchy, a DDNN can scale up in neural network size and scale out in geographical span. Due to its distributed nature, DDNNs enhance sensor fusion, system fault tolerance and data privacy for DNN applications. In implementing a DDNN, we map sections of a DNN onto a distributed computing hierarchy. By jointly training these sections, we minimize communication and resource usage for devices and maximize usefulness of extracted features which are utilized in the cloud. The resulting system has built-in support for automatic sensor fusion and fault tolerance. As a proof of concept, we show a DDNN can exploit geographical diversity of sensors to improve object recognition accuracy and reduce communication cost. In our experiment, compared with the traditional method of offloading raw sensor data to be processed in the cloud, DDNN locally processes most sensor data on end devices while achieving high accuracy and is able to reduce the communication cost by a factor of over 20x.
The framework of a large-scale distributed computing hierarchy has assumed new significance in the emerging era of IoT. It is widely expected that most of the data generated by the massive number of IoT devices must be processed locally at the devices or at the edge, for otherwise the total amount of sensor data sent to a centralized cloud would overwhelm the communication network bandwidth. In addition, a distributed computing hierarchy offers opportunities for system scalability, data security and privacy, as well as shorter response times (see, e.g., @cite_4 @cite_1 ). For example, in @cite_22 , a face recognition application shows that a reduced response time is achieved when a smartphone's photos are processed by the edge (fog) as opposed to the cloud. In this paper, we show that DDNN can systematically exploit the inherent advantages of a distributed computing hierarchy for DNN applications and achieve similar benefits.
{ "cite_N": [ "@cite_1", "@cite_4", "@cite_22" ], "mid": [ "", "2208484250", "2245189809" ], "abstract": [ "", "The paper considers the conceptual approach for organization of the vertical hierarchical links between the scalable distributed computing paradigms: Cloud Computing, Fog Computing and Dew Computing. In this paper, the Dew Computing is described and recognized as a new structural layer in the existing distributed computing hierarchy. In the existing computing hierarchy, the Dew computing is positioned as the ground level for the Cloud and Fog computing paradigms. Vertical, complementary, hierarchical division from Cloud to Dew Computing satisfies the needs of high- and low-end computing demands in everyday life and work. These new computing paradigms lower the cost and improve the performance, particularly for concepts and applications such as the Internet of Things (IoT) and the Internet of Everything (IoE). In addition, the Dew computing paradigm will require new programming models that will efficiently reduce the complexity and improve the productivity and usability of scalable distributed computing, following the principles of High-Productivity computing.", "Despite the broad utilization of cloud computing, some applications and services still cannot benefit from this popular computing paradigm due to inherent problems of cloud computing such as unacceptable latency, lack of mobility support and location-awareness. As a result, fog computing, has emerged as a promising infrastructure to provide elastic resources at the edge of network. In this paper, we have discussed current definitions of fog computing and similar concepts, and proposed a more comprehensive definition. We also analyzed the goals and challenges in fog computing platform, and presented platform design with several exemplar applications. We finally implemented and evaluated a prototype fog computing platform." ] }
1709.01921
2734927154
We propose distributed deep neural networks (DDNNs) over distributed computing hierarchies, consisting of the cloud, the edge (fog) and end devices. While being able to accommodate inference of a deep neural network (DNN) in the cloud, a DDNN also allows fast and localized inference using shallow portions of the neural network at the edge and end devices. When supported by a scalable distributed computing hierarchy, a DDNN can scale up in neural network size and scale out in geographical span. Due to its distributed nature, DDNNs enhance sensor fusion, system fault tolerance and data privacy for DNN applications. In implementing a DDNN, we map sections of a DNN onto a distributed computing hierarchy. By jointly training these sections, we minimize communication and resource usage for devices and maximize usefulness of extracted features which are utilized in the cloud. The resulting system has built-in support for automatic sensor fusion and fault tolerance. As a proof of concept, we show a DDNN can exploit geographical diversity of sensors to improve object recognition accuracy and reduce communication cost. In our experiment, compared with the traditional method of offloading raw sensor data to be processed in the cloud, DDNN locally processes most sensor data on end devices while achieving high accuracy and is able to reduce the communication cost by a factor of over 20x.
Binarized neural networks (BNNs) are a recent type of neural network, where the weights in linear and convolutional layers are constrained to @math (stored as @math and @math respectively). This representation has been shown to achieve classification accuracy similar to that of a standard floating-point neural network for some datasets such as MNIST and CIFAR-10 @cite_15 , while using less memory and reduced computation due to the binary format @cite_16 . Embedded binarized neural networks (eBNNs) extend BNNs to allow the network to fit on embedded devices by reducing floating-point temporaries through reordering the operations in inference @cite_8 . These compact models are especially attractive in end-device settings, where memory can be a limiting factor and low power consumption is required. In DDNN, we use BNNs, eBNNs and the like to accommodate the end devices, so that they can be jointly trained with the NN layers in the edge and cloud.
{ "cite_N": [ "@cite_15", "@cite_16", "@cite_8" ], "mid": [ "2951978180", "2963114950", "2725001169" ], "abstract": [ "We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values resulting in 32x memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58x faster convolutional operations and 32x memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is only 2.9 less than the full-precision AlexNet (in top-1 measure). We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than 16 in top-1 accuracy.", "Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and power-hungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.", "We study embedded Binarized Neural Networks (eBNNs) with the aim of allowing current binarized neural networks (BNNs) in the literature to perform feedforward inference efficiently on small embedded devices. We focus on minimizing the required memory footprint, given that these devices often have memory as small as tens of kilobytes (KB). Beyond minimizing the memory required to store weights, as in a BNN, we show that it is essential to minimize the memory used for temporaries which hold intermediate results between layers in feedforward inference. To accomplish this, eBNN reorders the computation of inference while preserving the original BNN structure, and uses just a single floating-point temporary for the entire neural network. All intermediate results from a layer are stored as binary values, as opposed to floating-points used in current BNN implementations, leading to a 32x reduction in required temporary space. We provide empirical evidence that our proposed eBNN approach allows efficient inference (10s of ms) on devices with severely limited memory (10s of KB). 
For example, eBNN achieves 95 accuracy on the MNIST dataset running on an Intel Curie with only 15 KB of usable memory with an inference runtime of under 50 ms per sample. To ease the development of applications in embedded contexts, we make our source code available that allows users to train and discover eBNN models for a learning task at hand, which fit within the memory constraint of the target device." ] }
1709.01921
2734927154
We propose distributed deep neural networks (DDNNs) over distributed computing hierarchies, consisting of the cloud, the edge (fog) and end devices. While being able to accommodate inference of a deep neural network (DNN) in the cloud, a DDNN also allows fast and localized inference using shallow portions of the neural network at the edge and end devices. When supported by a scalable distributed computing hierarchy, a DDNN can scale up in neural network size and scale out in geographical span. Due to its distributed nature, DDNNs enhance sensor fusion, system fault tolerance and data privacy for DNN applications. In implementing a DDNN, we map sections of a DNN onto a distributed computing hierarchy. By jointly training these sections, we minimize communication and resource usage for devices and maximize usefulness of extracted features which are utilized in the cloud. The resulting system has built-in support for automatic sensor fusion and fault tolerance. As a proof of concept, we show a DDNN can exploit geographical diversity of sensors to improve object recognition accuracy and reduce communication cost. In our experiment, compared with the traditional method of offloading raw sensor data to be processed in the cloud, DDNN locally processes most sensor data on end devices while achieving high accuracy and is able to reduce the communication cost by a factor of over 20x.
BranchyNet proposed a solution for classifying samples at earlier points in a neural network, called early exit points, through the use of an entropy-based confidence criterion @cite_6 . If, at an early exit point, a sample is deemed confident based on the entropy of the computed probability vector over target classes, then it is classified and no further computation is performed by the higher NN layers. In DDNN, exit points are placed at physical boundaries (e.g., between the last NN layer on an end device and the first NN layer in the next higher layer of the distributed computing hierarchy, such as the edge or the cloud). Input samples that can already be classified early will exit locally, thereby achieving lower response latency and saving communication to the next physical boundary. With similar objectives, SACT @cite_20 allocates computation on a per-region basis in an image and exits each region independently when it is deemed to be of sufficient quality.
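As a concrete illustration of the entropy-based criterion, a local exit decision could be sketched as follows; the threshold value is illustrative, not taken from the cited papers.

```python
import numpy as np

def should_exit(logits, threshold=0.5):
    """Exit locally if the softmax entropy is below the threshold."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    entropy = -np.sum(p * np.log(p + 1e-12))
    return entropy < threshold, int(np.argmax(p))

confident, label = should_exit(np.array([4.0, 0.1, -1.0]))
# confident -> classify on the end device; otherwise send features onward
```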
{ "cite_N": [ "@cite_20", "@cite_6" ], "mid": [ "2952922798", "2610140147" ], "abstract": [ "This paper proposes a deep learning architecture based on Residual Network that dynamically adjusts the number of executed layers for the regions of the image. This architecture is end-to-end trainable, deterministic and problem-agnostic. It is therefore applicable without any modifications to a wide range of computer vision problems such as image classification, object detection and image segmentation. We present experimental results showing that this model improves the computational efficiency of Residual Networks on the challenging ImageNet classification and COCO object detection datasets. Additionally, we evaluate the computation time maps on the visual saliency dataset cat2000 and find that they correlate surprisingly well with human eye fixation positions.", "Deep neural networks are state of the art methods for many learning tasks due to their ability to extract increasingly better features at each network layer. However, the improved performance of additional layers in a deep network comes at the cost of added latency and energy usage in feedforward inference. As networks continue to get deeper and larger, these costs become more prohibitive for real-time and energy-sensitive applications. To address this issue, we present BranchyNet, a novel deep network architecture that is augmented with additional side branch classifiers. The architecture allows prediction results for a large portion of test samples to exit the network early via these branches when samples can already be inferred with high confidence. BranchyNet exploits the observation that features learned at an early layer of a network may often be sufficient for the classification of many data points. For more difficult samples, which are expected less frequently, BranchyNet will use further or all network layers to provide the best likelihood of correct prediction. We study the BranchyNet architecture using several well-known networks (LeNet, AlexNet, ResNet) and datasets (MNIST, CIFAR10) and show that it can both improve accuracy and significantly reduce the inference time of the network." ] }
1709.01779
2750671411
Over the last few years, deep learning has revolutionized the field of machine learning by dramatically improving the state-of-the-art in various domains. However, as the size of supervised artificial neural networks grows, typically so does the need for larger labeled datasets. Recently, crowdsourcing has established itself as an efficient and cost-effective solution for labeling large sets of data in a scalable manner, but it often requires aggregating labels from multiple noisy contributors with different levels of expertise. In this paper, we address the problem of learning deep neural networks from crowds. We begin by describing an EM algorithm for jointly learning the parameters of the network and the reliabilities of the annotators. Then, a novel general-purpose crowd layer is proposed, which allows us to train deep neural networks end-to-end, directly from the noisy labels of multiple annotators, using only backpropagation. We empirically show that the proposed approach is able to internally capture the reliability and biases of different annotators and achieve new state-of-the-art results for various crowdsourced datasets across different settings, namely classification, regression and sequence labeling.
The increasing popularity of crowdsourcing as a way to label large collections of data in an inexpensive and scalable manner has led to much interest within the machine learning community in developing methods to address the noise and trustworthiness issues associated with it. In this direction, one of the key early contributions is the work of Dawid and Skene, who proposed an EM algorithm to obtain point estimates of the error rates of patients given repeated but conflicting responses to medical questions. This work was the basis for many other variants for aggregating labels from multiple annotators with different levels of expertise, such as the one proposed in @cite_12 , which further extends Dawid and Skene's model by also accounting for item difficulty in the context of image classification. Similarly, others have proposed using Dawid and Skene's approach to extract a single quality score for each worker that allows low-quality workers to be pruned. The approach proposed in our paper contrasts with this line of work by allowing neural networks to be trained directly on the noisy labels of multiple annotators, thereby avoiding the need to resort to prior label aggregation schemes.
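For concreteness, a compact and purely illustrative numpy rendition of one Dawid-Skene-style EM iteration is given below; the array shapes, the `-1` missing-label convention, and the variable names are assumptions for the sketch.

```python
import numpy as np

def em_step(labels, pi, rho):
    """labels[i, r]: annotator r's label for item i (-1 if missing).
    pi[r, k, l]: P(annotator r answers l | true class k); rho: class prior."""
    N, R = labels.shape
    post = np.tile(rho, (N, 1))                       # E-step: posterior over
    for i in range(N):                                # the true label of item i
        for r in range(R):
            if labels[i, r] >= 0:
                post[i] *= pi[r, :, labels[i, r]]
        post[i] /= post[i].sum()
    rho_new = post.mean(axis=0)                       # M-step: class prior
    pi_new = np.zeros_like(pi)                        # M-step: confusion matrices
    for r in range(R):
        for i in range(N):
            if labels[i, r] >= 0:
                pi_new[r, :, labels[i, r]] += post[i]
        pi_new[r] /= pi_new[r].sum(axis=1, keepdims=True) + 1e-12
    return pi_new, rho_new
```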
{ "cite_N": [ "@cite_12" ], "mid": [ "2142518823" ], "abstract": [ "Modern machine learning-based approaches to computer vision require very large databases of hand labeled images. Some contemporary vision systems already require on the order of millions of images for training (e.g., Omron face detector [9]). New Internet-based services allow for a large number of labelers to collaborate around the world at very low cost. However, using these services brings interesting theoretical and practical challenges: (1) The labelers may have wide ranging levels of expertise which are unknown a priori, and in some cases may be adversarial; (2) images may vary in their level of difficulty; and (3) multiple labels for the same image must be combined to provide an estimate of the actual label of the image. Probabilistic approaches provide a principled way to approach these problems. In this paper we present a probabilistic model and use it to simultaneously infer the label of each image, the expertise of each labeler, and the difficulty of each image. On both simulated and real data, we demonstrate that the model outperforms the commonly used \"Majority Vote\" heuristic for inferring image labels, and is robust to both noisy and adversarial labelers." ] }
1709.01779
2750671411
Over the last few years, deep learning has revolutionized the field of machine learning by dramatically improving the state-of-the-art in various domains. However, as the size of supervised artificial neural networks grows, typically so does the need for larger labeled datasets. Recently, crowdsourcing has established itself as an efficient and cost-effective solution for labeling large sets of data in a scalable manner, but it often requires aggregating labels from multiple noisy contributors with different levels of expertise. In this paper, we address the problem of learning deep neural networks from crowds. We begin by describing an EM algorithm for jointly learning the parameters of the network and the reliabilities of the annotators. Then, a novel general-purpose crowd layer is proposed, which allows us to train deep neural networks end-to-end, directly from the noisy labels of multiple annotators, using only backpropagation. We empirically show that the proposed approach is able to internally capture the reliability and biases of different annotators and achieve new state-of-the-art results for various crowdsourced datasets across different settings, namely classification, regression and sequence labeling.
Recently, @cite_15 also proposed an approach for training deep neural networks that exploits information about the annotators. The idea is to model the multiple experts individually in the neural network and then, while keeping their predictions fixed, independently learn averaging weights for combining them using backpropagation. Like our proposed approach, this two-stage procedure does not require an EM algorithm to estimate the annotator weights. However, while our approach has the ability to capture the biases of the different annotators (e.g., confusing class 2 with class 4) and correct them, the approach in @cite_15 only learns how to combine the predicted answers of multiple annotators by weighting them differently. Moreover, its two-stage learning procedure increases the computational complexity of training, whereas that of our proposed approach is kept the same. Lastly, while the work in @cite_15 focuses only on classification, we consider regression and structured prediction problems as well.
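To make the contrast tangible, here is an illustrative forward pass of a crowd-layer-style mapping, in which a per-annotator matrix transforms the shared bottleneck output; the renormalization and the identity initialization shown here are simplifications for the sketch, not the exact formulation of the paper.

```python
import numpy as np

def crowd_layer_forward(p, W):
    """p: (K,) bottleneck class probabilities; W: (R, K, K) annotator matrices.
    Returns one predicted answer distribution per annotator."""
    out = np.einsum('rkl,l->rk', W, p)        # per-annotator transformation
    out /= out.sum(axis=1, keepdims=True)     # renormalize (illustrative)
    return out

R, K = 5, 4
W = np.stack([np.eye(K) for _ in range(R)])   # identity initialization
p = np.array([0.7, 0.1, 0.1, 0.1])
per_annotator = crowd_layer_forward(p, W)     # loss vs. observed noisy labels
```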
{ "cite_N": [ "@cite_15" ], "mid": [ "2604132367" ], "abstract": [ "Data are often labeled by many different experts with each expert only labeling a small fraction of the data and each data point being labeled by several experts. This reduces the workload on individual experts and also gives a better estimate of the unobserved ground truth. When experts disagree, the standard approaches are to treat the majority opinion as the correct label or to model the correct label as a distribution. These approaches, however, do not make any use of potentially valuable information about which expert produced which label. To make use of this extra information, we propose modeling the experts individually and then learning averaging weights for combining them, possibly in sample-specific ways. This allows us to give more weight to more reliable experts and take advantage of the unique strengths of individual experts at classifying certain types of data. Here we show that our approach leads to improvements in computer-aided diagnosis of diabetic retinopathy. We also show that our method performs better than competing algorithms by Welinder and Perona (2010), and by Mnih and Hinton (2012). Our work offers an innovative approach for dealing with the myriad real-world settings that use expert opinions to define labels for training." ] }
1709.01779
2750671411
Over the last few years, deep learning has revolutionized the field of machine learning by dramatically improving the state-of-the-art in various domains. However, as the size of supervised artificial neural networks grows, typically so does the need for larger labeled datasets. Recently, crowdsourcing has established itself as an efficient and cost-effective solution for labeling large sets of data in a scalable manner, but it often requires aggregating labels from multiple noisy contributors with different levels of expertise. In this paper, we address the problem of learning deep neural networks from crowds. We begin by describing an EM algorithm for jointly learning the parameters of the network and the reliabilities of the annotators. Then, a novel general-purpose crowd layer is proposed, which allows us to train deep neural networks end-to-end, directly from the noisy labels of multiple annotators, using only backpropagation. We empirically show that the proposed approach is able to internally capture the reliability and biases of different annotators and achieve new state-of-the-art results for various crowdsourced datasets across different settings, namely classification, regression and sequence labeling.
Regarding application areas for multiple-annotator learning, some of the most popular ones are: image classification @cite_14 @cite_4 , computer-aided diagnosis in radiology @cite_19 @cite_22 , object detection @cite_18 , text classification @cite_21 , natural language processing @cite_1 and speech-related tasks @cite_10 . In this paper, we will use data from some of these areas to evaluate different approaches. Given that these are precisely some of the areas that have seen the most dramatic improvements due to recent contributions in deep learning @cite_2 @cite_5 , developing novel efficient algorithms for learning deep neural networks from crowds is of great importance to the field.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_4", "@cite_22", "@cite_21", "@cite_1", "@cite_19", "@cite_2", "@cite_5", "@cite_10" ], "mid": [ "2605991684", "2144660879", "2149273804", "2345010043", "", "1970381522", "2134305421", "", "2076063813", "2185757438" ], "abstract": [ "", "In remote sensing applications \"ground-truth\" data is often used as the basis for training pattern recognition algorithms to generate thematic maps or to detect objects of interest. In practical situations, experts may visually examine the images and provide a subjective noisy estimate of the truth. Calibrating the reliability and bias of expert labellers is a non-trivial problem. In this paper we discuss some of our recent work on this topic in the context of detecting small volcanoes in Magellan SAR images of Venus. Empirical results (using the Expectation-Maximization procedure) suggest that accounting for subjective noise can be quite significant in terms of quantifying both human and algorithm detection performance.", "Distributing labeling tasks among hundreds or thousands of annotators is an increasingly important method for annotating large datasets. We present a method for estimating the underlying value (e.g. the class) of each image from (noisy) annotations provided by multiple annotators. Our method is based on a model of the image formation and annotation process. Each image has different characteristics that are represented in an abstract Euclidean space. Each annotator is modeled as a multidimensional entity with variables representing competence, expertise and bias. This allows the model to discover and represent groups of annotators that have different sets of skills and knowledge, as well as groups of images that differ qualitatively. We find that our model predicts ground truth labels on both synthetic and real data more accurately than state of the art methods. Experiments also show that our model, starting from a set of binary labels, may discover rich information, such as different \"schools of thought\" amongst the annotators, and can group together images belonging to separate categories.", "The papers in this special section focus on the technology and applications supported by deep learning. Deep learning is a growing trend in general data analysis and has been termed one of the 10 breakthrough technologies of 2013. Deep learning is an improvement of artificial neural networks, consisting of more layers that permit higher levels of abstraction and improved predictions from data. To date, it is emerging as the leading machine-learning tool in the general imaging and computer vision domains. In particular, convolutional neural networks (CNNs) have proven to be powerful tools for a broad range of computer vision tasks. Deep CNNs automatically learn mid-level and high-level abstractions obtained from raw data (e.g., images). Recent results indicate that the generic descriptors extracted from CNNs are extremely effective in object recognition and localization in natural images. Medical image analysis groups across the world are quickly entering the field and applying CNNs and other deep learning methodologies to a wide variety of applications.", "", "Human linguistic annotation is crucial for many natural language processing tasks but can be expensive and time-consuming. We explore the use of Amazon's Mechanical Turk system, a significantly cheaper and faster method for collecting annotations from a broad base of paid non-expert contributors over the Web. 
We investigate five tasks: affect recognition, word similarity, recognizing textual entailment, event temporal ordering, and word sense disambiguation. For all five, we show high agreement between Mechanical Turk non-expert annotations and existing gold standard labels provided by expert labelers. For the task of affect recognition, we also show that using non-expert labels for training machine learning algorithms can be as effective as using gold standard annotations from experts. We propose a technique for bias correction that significantly improves annotation quality on two tasks. We conclude that many large labeling tasks can be effectively designed and carried out in this method at a fraction of the usual expense.", "For many supervised learning tasks it may be infeasible (or very expensive) to obtain objective and reliable labels. Instead, we can collect subjective (possibly noisy) labels from multiple experts or annotators. In practice, there is a substantial amount of disagreement among the annotators, and hence it is of great practical interest to address conventional supervised learning problems in this scenario. In this paper we describe a probabilistic approach for supervised learning when we have multiple annotators providing (possibly noisy) labels but no absolute gold standard. The proposed algorithm evaluates the different experts and also gives an estimate of the actual hidden labels. Experimental results indicate that the proposed method is superior to the commonly used majority voting baseline.", "", "In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.", "This paper examines the literature on the use of crowdsourcing for speech-related tasks: speech acquisition, transcription and annotation as well as the assessment of speech technology. 29 papers were found, representing, 37 different experiments, which were annotated and analyzed to find trends in the field. The paper focuses on the different techniques used for quality control and the variety of sources of “crowds”. Finally, we propose several challenges for the future of crowdsourcing for speech processing." ] }
1709.01613
2753735207
This paper presents the EACare project, an ambitious multi-disciplinary collaboration with the aim to develop an embodied system, capable of carrying out neuropsychological tests to detect early signs of dementia, e.g., due to Alzheimer's disease. The system will use methods from Machine Learning and Social Robotics, and be trained with examples of recorded clinician-patient interactions. The interaction will be developed using a participatory design approach. We describe the scope and method of the project, and report on a first Wizard of Oz prototype.
Several efforts have been made to investigate the acceptance of social robots by the elderly @cite_20 , some of them with a particular focus on home environments @cite_23 @cite_18 . For example, using a series of questionnaires and interviews to probe the attitudes and preferences of older adults regarding robot assistance with everyday tasks, @cite_18 found that this user group is quite open to having robots perform a wide range of tasks in their homes. Surprisingly, for tasks such as chores or information management (e.g., reminders and monitoring of daily activity), older adults reported preferring robot assistance over human assistance.
{ "cite_N": [ "@cite_18", "@cite_23", "@cite_20" ], "mid": [ "1964686234", "2037270833", "2137577228" ], "abstract": [ "The population of older adults in America is expected to reach an unprecedented level in the near future. Some of them have difficulties with performing daily tasks and caregivers may not be able to match pace with the increasing need for assistance. Robots, especially mobile manipulators, have the potential for assisting older adults with daily tasks enabling them to live independently in their homes. However, little is known about their views of robot assistance in the home. Twenty-one independently living older Americans (65–93 years old) were asked about their preferences for and attitudes toward robot assistance via a structured group interview and questionnaires. In the group interview, they generated a diverse set of 121 tasks they would want a robot to assist them with in their homes. These data, along with their questionnaire responses, suggest that the older adults were generally open to robot assistance but were discriminating in their acceptance of assistance for different tasks. They preferred robot assistance over human assistance for tasks related to chores, manipulating objects, and information management. In contrast, they preferred human assistance to robot assistance for tasks related to personal care and leisure activities. Our study provides insights into older adults’ attitudes and preferences for robot assistance with everyday living tasks in the home which may inform the design of robots that will be more likely accepted by older adults.", "For elders who remain independent in their homes, the home becomes more than just a place to eat and sleep. The home becomes a place where people care for each other, and it gradually subsumes all activities. This article reports on an ethnographic study of aging adults who live independently in their homes. Seventeen elders aged 60 through 90 were interviewed and observed in their homes in 2 Midwestern cities. The goal is to understand how robotic products might assist these people, helping them to stay independent and active longer. The experience of aging is described as an ecology of aging made up of people, products, and activities taking place in a local environment of the home and the surrounding community. In this environment, product successes and failures often have a dramatic impact on the ecology, throwing off a delicate balance. When a breakdown occurs, family members and other caregivers have to intervene, threatening elders' independence and identity. This article highlights the interest in how the elder ecology can be supported by new robotic products that are conceived of as a part of this interdependent system. It is recommended that the design of these products fit the ecology as part of the system, support elders' values, and adapt to all of the members of the ecology who will interact with them.", "This paper proposes a model of technology acceptance that is specifically developed to test the acceptance of assistive social agents by elderly users. The research in this paper develops and tests an adaptation and theoretical extension of the Unified Theory of Acceptance and Use of Technology (UTAUT) by explaining intent to use not only in terms of variables related to functional evaluation like perceived usefulness and perceived ease of use, but also variables that relate to social interaction. 
The new model was tested using controlled experiment and longitudinal data collected regarding three different social agents at elderly care facilities and at the homes of older adults. The model was strongly supported accounting for 59–79 of the variance in usage intentions and 49–59 of the variance in actual use. These findings contribute to our understanding of how elderly users accept assistive social agents." ] }
1709.01613
2753735207
This paper presents the EACare project, an ambitious multi-disciplinary collaboration with the aim to develop an embodied system, capable of carrying out neuropsychological tests to detect early signs of dementia, e.g., due to Alzheimer's disease. The system will use methods from Machine Learning and Social Robotics, and be trained with examples of recorded clinician-patient interactions. The interaction will be developed using a participatory design approach. We describe the scope and method of the project, and report on a first Wizard of Oz prototype.
One of the main motivations for developing assistive robots for the elderly is that this technology can promote longer independent living @cite_23 . However, in-home experiments are still scarce because of privacy concerns and limited robot autonomy. One of the few exceptions is the work by de Graaf and colleagues @cite_22 , who investigated reasons for technology abandonment in a study where 70 autonomous robots were deployed in people’s homes for about six months. Regardless of age (their participant pool ranged from 8 to 77 years), they found that the main reasons why people stopped using the robot were a lack of enjoyment and a lack of perceived usefulness. These findings indicate that involving the target users from the early stages of development can be crucial for the success and acceptance of social robots.
{ "cite_N": [ "@cite_22", "@cite_23" ], "mid": [ "2592799411", "2037270833" ], "abstract": [ "Research on why people refuse or abandon the use of technology in general, and robots specifically, is still scarce. Consequently, the academic understanding of people's underlying reasons for non-use remains weak. Thus, vital information about the design of these robots including their acceptance and refusal or abandonment by its users is needed. We placed 70 autonomous robots within people's homes for a period of six months and collected reasons for refusal and abandonment through questionnaires and interviews. Based on our findings, the challenge for robot designers is to create robots that are enjoyable and easy to use to capture users in the short-term, and functionally-relevant to keep those users in the longer-term. understanding the thoughts and motives behind non-use may help to identify obstacles for acceptance, and therefore enable developers to better adapt technological designs to the benefit of the users.", "For elders who remain independent in their homes, the home becomes more than just a place to eat and sleep. The home becomes a place where people care for each other, and it gradually subsumes all activities. This article reports on an ethnographic study of aging adults who live independently in their homes. Seventeen elders aged 60 through 90 were interviewed and observed in their homes in 2 Midwestern cities. The goal is to understand how robotic products might assist these people, helping them to stay independent and active longer. The experience of aging is described as an ecology of aging made up of people, products, and activities taking place in a local environment of the home and the surrounding community. In this environment, product successes and failures often have a dramatic impact on the ecology, throwing off a delicate balance. When a breakdown occurs, family members and other caregivers have to intervene, threatening elders' independence and identity. This article highlights the interest in how the elder ecology can be supported by new robotic products that are conceived of as a part of this interdependent system. It is recommended that the design of these products fit the ecology as part of the system, support elders' values, and adapt to all of the members of the ecology who will interact with them." ] }
1709.01613
2753735207
This paper presents the EACare project, an ambitious multi-disciplinary collaboration with the aim to develop an embodied system, capable of carrying out neuropsychological tests to detect early signs of dementia, e.g., due to Alzheimer's disease. The system will use methods from Machine Learning and Social Robotics, and be trained with examples of recorded clinician-patient interactions. The interaction will be developed using a participatory design approach. We describe the scope and method of the project, and report on a first Wizard of Oz prototype.
Perhaps most similar to our work, both in terms of the interaction modality (language-based) and the condition of the user group (older adults diagnosed with dementia), @cite_25 conducted a laboratory Wizard of Oz experiment to evaluate the challenges in speech recognition and dialogue between participants and a personal assistant robot while the participants performed daily household tasks such as making tea. Their results suggest that autonomous language-based interactions in this setting can be challenging not only because of speech recognition errors but also because robots will often need to proactively employ conversational repair strategies during moments of confusion.
{ "cite_N": [ "@cite_25" ], "mid": [ "2294978157" ], "abstract": [ "Increases in the prevalence of dementia and Alzheimer’s disease (AD) are a growing challenge in many nations where healthcare infrastructures are ill-prepared for the upcoming demand for personal caregiving. To help individuals with AD live at home for longer, we are developing a mobile robot, called ED, intended to assist with activities of daily living through visual monitoring and verbal prompts in cases of difficulty. In a series of experiments, we study speech-based interactions between ED and each of 10 older adults with AD as the latter complete daily tasks in a simulated home environment. Traditional automatic speech recognition is evaluated in this environment, along with rates of verbal behaviors that indicate confusion or trouble with the conversation. Analysis reveals that speech recognition remains a challenge in this setup, especially during household tasks with individuals with AD. Across the verbal behaviors that indicate confusion, older adults with AD are very likely to simply ignore the robot, which accounts for over 40p of all such behaviors when interacting with the robot. This work provides a baseline assessment of the types of technical and communicative challenges that will need to be overcome for robots to be used effectively in the home for speech-based assistance with daily living." ] }