aid stringlengths 9 15 | mid stringlengths 7 10 | abstract stringlengths 78 2.56k | related_work stringlengths 92 1.77k | ref_abstract dict |
|---|---|---|---|---|
1710.05627 | 2766207971 | How can a delivery robot navigate reliably to a destination in a new office building, with minimal prior information? To tackle this challenge, this paper introduces a two-level hierarchical approach, which integrates model-free deep learning and model-based path planning. At the low level, a neural-network motion controller, called the intention-net, is trained end-to-end to provide robust local navigation. The intention-net maps images from a single monocular camera and "intentions" directly to robot controls. At the high level, a path planner uses a crude map, e.g., a 2-D floor plan, to compute a path from the robot's current location to the goal. The planned path provides intentions to the intention-net. Preliminary experiments suggest that the learned motion controller is robust against perceptual uncertainty and, by integrating with a path planner, it generalizes effectively to new environments and goals. | Hierarchies are crucial for coping with computational complexity, both in learning and in planning. Combining learning and planning in the same hierarchy is, however, less common. Kaelbling et al. proposed to learn composable models of parameterized skills with pre- and post-conditions so that a high-level planner can compose these skills to complete complex tasks @cite_15 . Our hierarchical method shares similar thinking, but specializes to navigation tasks. It uses intentions instead of general pre- and post-conditions to bind together hierarchical layers and achieves high efficiency. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2736337778"
],
"abstract": [
"There has been a great deal of work on learning new robot skills, but very little consideration of how these newly acquired skills can be integrated into an overall intelligent system. A key aspect of such a system is compositionality: newly learned abilities have to be characterized in a form that will allow them to be flexibly combined with existing abilities, affording a (good!) combinatorial explosion in the robot's abilities. In this paper, we focus on learning models of the preconditions and effects of new parameterized skills, in a form that allows those actions to be combined with existing abilities by a generative planning and execution system."
]
} |
1710.05627 | 2766207971 | How can a delivery robot navigate reliably to a destination in a new office building, with minimal prior information? To tackle this challenge, this paper introduces a two-level hierarchical approach, which integrates model-free deep learning and model-based path planning. At the low level, a neural-network motion controller, called the intention-net, is trained end-to-end to provide robust local navigation. The intention-net maps images from a single monocular camera and "intentions" directly to robot controls. At the high level, a path planner uses a crude map, e.g., a 2-D floor plan, to compute a path from the robot's current location to the goal. The planned path provides intentions to the intention-net. Preliminary experiments suggest that the learned motion controller is robust against perceptual uncertainty and, by integrating with a path planner, it generalizes effectively to new environments and goals. | Navigation is one of the most important robotic tasks. One classic approach consists of three steps: build a map of the environment through, e.g., SLAM @cite_8 , plan a path using the map, and finally execute the path. High-fidelity geometric maps make path planning and execution easier. However, building such maps is time-consuming. Further, even small environmental changes may render them partially invalid. Alternatively, the optical flow approach does not use maps at all and relies on the visual perception of the local environment for navigation @cite_24 . While this avoids the difficulty of building and maintaining accurate geometric maps, it is difficult to achieve effective goal-directed global navigation in complex geometric environments without maps. Our approach uses crude maps, e.g., 2-D floor plans, and sits between the two extremes. Floor plans are widely available for many indoor environments. One may also sketch them by hand. | {
"cite_N": [
"@cite_24",
"@cite_8"
],
"mid": [
"2150839555",
"207116529"
],
"abstract": [
"Surveys the developments of the last 20 years in the area of vision for mobile robot navigation. Two major components of the paper deal with indoor navigation and outdoor navigation. For each component, we have further subdivided our treatment of the subject on the basis of structured and unstructured environments. For indoor robots in structured environments, we have dealt separately with the cases of geometrical and topological models of space. For unstructured environments, we have discussed the cases of navigation using optical flows, using methods from the appearance-based paradigm, and by recognition of specific objects in the environment.",
"This paper surveys the most recent published techniques in the field of Simultaneous Localization and Mapping (SLAM). In particular it is focused on the existing techniques available to speed up the process, with the purpose to handle large scale scenarios. The main research field we plan to investigate is the filtering algorithms as a way of reducing the amount of data. It seems that almost all the current approaches can not perform consistent maps for large areas, mainly due to the increase of the computational cost and due to the uncertainties that become prohibitive when the scenario becomes larger."
]
} |
1710.05604 | 2767083185 | Research Objects (ROs) are semantically enhanced aggregations of resources associated with scientific experiments, such as data, the provenance of these data, the scientific workflow used to run the experiment, intermediate results, logs and the interpretation of the results. As the number of ROs increases, it is becoming difficult to find ROs to be used, reused or re-purposed. New search and retrieval techniques are required to find the most appropriate ROs for a given researcher, while providing an intuitive user interface. In this paper we present CollabSpheres, a user interface that provides a new visual metaphor to find ROs by means of a recommendation system that takes advantage of the social aspects of ROs. The experimental evaluation of this tool shows that users perceive high values of usability, user satisfaction, usefulness and ease of use. From the analysis of these results we argue that users perceive the simplicity, intuitiveness and cleanness of this tool, and that it increases collaboration and reuse of research objects. | The rationale behind the Collaboration Spheres is closely related to the circles of collaborative search introduced by Russell-Rose @cite_16 . The definition of the circles of collaborative search provides a holistic view of collaboration in search by defining a three-circle model composed of the inner and intermediate social circles and the outer circle. The inner and intermediate circles represent explicit, intentional collaboration between individuals who share some degree of social connectedness, and form the nucleus of collaboration. The outer circle comprises implicit, unintentional collaboration. This is the case in our Collaboration Spheres for the statistical data extracted from the preferences of the whole research community in collaborative filtering. However, our approach goes further, because we show an additional circle of recommendations based on the context (explicit preferences) of the user.
| {
"cite_N": [
"@cite_16"
],
"mid": [
"635810450"
],
"abstract": [
"Search is not just a box and ten blue links. Search is a journey: an exploration where what we encounter along the way changes what we seek. But in order to guide people along this journey, designers must understand both the art and science of search. In Designing the Search Experience, authors Tony Russell-Rose and Tyler Tate weave together the theories of information seeking with the practice of user interface design. Understand how people search, and how the concepts of information seeking, information foraging, and sensemaking underpin the search process. Apply the principles of user-centered design to the search box, search results, faceted navigation, mobile interfaces, social search, and much more. Design the cross-channel search experiences of tomorrow that span desktop, tablet, mobile, and other devices. Table of Contents PART 1 A Framework for Search and Discovery Chapter 1 The User Chapter 2 Information Seeking Chapter 3 Context Chapter 4 Modes of Search and Discovery PART 2 Design Solutions Chapter 5 Formulating the Query Chapter 6 Displaying and Manipulating Results Chapter 7 Faceted Search Chapter 8 Mobile Search Chapter 9 Social Search Part 3 Designing the Future Chapter 10 Cross-Channel Information Interaction"
]
} |
1710.04981 | 2766927912 | There have been several attempts at modeling context in robots. However, these attempts either assume a fixed number of contexts or use a rule-based approach to determine when to increment the number of contexts. In this paper, we pose the task of when to increment as a learning problem, which we solve using a Recurrent Neural Network. We show that the network successfully (with 98% testing accuracy) learns to predict when to increment, and demonstrate, in a scene modeling problem (where the correct number of contexts is not known), that the robot increments the number of contexts in an expected manner (i.e., the entropy of the system is reduced). We also present how the incremental model can be used for various scene reasoning tasks. | Scene modeling is the task of modeling what is in the scene. Such scene models are critical in robots since they allow reasoning about the scene and the objects in it. Many models have been proposed in computer vision and robotics using probabilistic models such as Markov Random Fields @cite_24 @cite_16 , Bayesian Networks @cite_2 @cite_14 , Latent Dirichlet Allocation variants @cite_23 @cite_0 , predicate logic @cite_17 @cite_9 , and Scene Graphs @cite_6 . There have also been many attempts at ontology-based scene modeling where objects and various types of relations are modeled @cite_9 @cite_25 @cite_20 . | {
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_6",
"@cite_24",
"@cite_0",
"@cite_23",
"@cite_2",
"@cite_16",
"@cite_25",
"@cite_20",
"@cite_17"
],
"mid": [
"2667996719",
"1549213380",
"2287823145",
"2022153165",
"",
"2149035855",
"2166978545",
"2143927289",
"1718499378",
"",
"2282626006"
],
"abstract": [
"This paper presents ongoing research in the SWARMs project towards facilitating context awareness in underwater robots. In particular, the focus of this paper is put on the context reasoning part. The underwater environment introduces uncertainties in context data which lead to difficulties in the context reasoning phase. As probability is the best well-known formalism for computational scientific reasoning under uncertainties, the emerging and effective probabilistic reasoning method, namely, Multi-Entity Bayesian Network (MEBN), is explored for its feasibility to reason under uncertainties in the SWARMs project. A simple use case for oil spill monitoring is used to verify the usefulness of MEBN. The results show that the MEBN is a promising approach to reason about context in the presence of uncertainties in the underwater robot field.",
"This paper introduces Multi-layered Context Ontology Framework (MLCOF) for comprehensive, integrated robot context modeling and reasoning for object recognition. MLCOF consists of six knowledge layers (KLayer) including rules such as an image layer, three geometry (1D, 2D and 3D geometry) layers, an object layer, and a space layer. For each KLayer, we use a 6-tuple ontology structure including concepts, relations, relational functions, concept hierarchies, relation hierarchies and axioms. The axioms specify the semantics of concepts and relational constraints between ontological elements at each KLayer. The rules are used to specify or infer the relationships between ontological elements at different KLayers. Thus, MLCOF enables to model integrated robot context information from a low level image to high level object and space semantics. With the integrated context knowledge, a robot can understand objects not only through unidirectional reasoning between two adjacent layers but also through bidirectional reasoning among several layers even with partial information.",
"Robot world model representations are a vital part of robotic applications. However, there is no support for such representations in model-driven engineering tool chains. This work proposes a novel Domain Specific Language (DSL) for robotic world models that are based on the Robot Scene Graph (RSG) approach. The RSG-DSL can express (a) application specific scene configurations, (b) semantic scene structures and (c) inputs and outputs for the computational entities that are loaded into an instance of a world model.",
"RGB-D cameras, which give an RGB image together with depths, are becoming increasingly popular for robotic perception. In this paper, we address the task of detecting commonly found objects in the three-dimensional (3D) point cloud of indoor scenes obtained from such cameras. Our method uses a graphical model that captures various features and contextual relations, including the local visual appearance and shape cues, object co-occurrence relationships and geometric relationships. With a large number of object classes and relations, the model's parsimony becomes important and we address that by using multiple types of edge potentials. We train the model using a maximum-margin learning approach. In our experiments concerning a total of 52 3D scenes of homes and offices (composed from about 550 views), we get a performance of 84.06% and 73.38% in labeling office and home scenes respectively for 17 object classes each. We also present a method for a robot to search for an object using the learned model and the contextual information available from the current labelings of the scene. We applied this algorithm successfully on a mobile robot for the task of finding 12 object classes in 10 different offices and achieved a precision of 97.56% with 78.43% recall.",
"",
"In recent years, the language model Latent Dirichlet Allocation (LDA), which clusters co-occurring words into topics, has been widely applied in the computer vision field. However, many of these applications have difficulty with modeling the spatial and temporal structure among visual words, since LDA assumes that a document is a \"bag-of-words\". It is also critical to properly design \"words\" and \"documents\" when using a language model to solve vision problems. In this paper, we propose a topic model Spatial Latent Dirichlet Allocation (SLDA), which better encodes spatial structures among visual words that are essential for solving many vision problems. The spatial information is not encoded in the values of visual words but in the design of documents. Instead of knowing the partition of words into documents a priori, the word-document assignment becomes a random hidden variable in SLDA. There is a generative procedure, where knowledge of spatial structure can be flexibly added as a prior, grouping visual words which are close in space into the same document. We use SLDA to discover objects from a collection of images, and show it achieves better performance than LDA.",
"Accurate detection of moving objects is an important precursor to stable tracking or recognition. In this paper, we present an object detection scheme that has three innovations over existing approaches. First, the model of the intensities of image pixels as independent random variables is challenged and it is asserted that useful correlation exists in intensities of spatially proximal pixels. This correlation is exploited to sustain high levels of detection accuracy in the presence of dynamic backgrounds. By using a nonparametric density estimation method over a joint domain-range representation of image pixels, multimodal spatial uncertainties and complex dependencies between the domain (location) and range (color) are directly modeled. We propose a model of the background as a single probability density. Second, temporal persistence is proposed as a detection criterion. Unlike previous approaches to object detection which detect objects by building adaptive models of the background, the foregrounds modeled to augment the detection of objects (without explicit tracking) since objects detected in the preceding frame contain substantial evidence for detection in the current frame. Finally, the background and foreground models are used competitively in a MAP-MRF decision framework, stressing spatial context as a condition of detecting interesting objects and the posterior function is maximized efficiently by finding the minimum cut of a capacitated graph. Experimental validation of the proposed method is performed and presented on a diverse set of dynamic scenes.",
"It is now widely accepted that concepts and conceptualization are key elements towards achieving cognition on a humanoid robot. An important problem on this path is the grounded representation of individual concepts and the relationships between them. In this article, we propose a probabilistic method based on Markov Random Fields to model a concept web on a humanoid robot where individual concepts and the relations between them are captured. In this web, each individual concept is represented using a prototype-based conceptualization method that we proposed in our earlier work. Relations between concepts are linked to the cooccurrences of concepts in interactions. By conveying input from perception, action, and language, the concept web forms rich, structured, grounded information about objects, their affordances, words, etc. We demonstrate that, given an interaction, a word, or the perceptual information from an object, the corresponding concepts in the web are activated, much the same way as they are in humans. Moreover, we show that the robot can use these activations in its concept web for several tasks to disambiguate its understanding of the scene.",
"In this paper we introduce a knowledge engine, which learns and shares knowledge representations, for robots to carry out a variety of tasks. Building such an engine brings with it the challenge of dealing with multiple data modalities including symbols, natural language, haptic senses, robot trajectories, visual features and many others. The knowledge stored in the engine comes from multiple sources including physical interactions that robots have while performing tasks (perception, planning and control), knowledge bases from the Internet and learned representations from several robotics research groups. We discuss various technical aspects and associated challenges such as modeling the correctness of knowledge, inferring latent information and formulating different robotic tasks as queries to the knowledge engine. We describe the system architecture and how it supports different mechanisms for users and robots to interact with the engine. Finally, we demonstrate its use in three important research areas: grounding natural language, perception, and planning, which are the key building blocks for many robotic tasks. This knowledge engine is a collaborative effort and we call it RoboBrain.",
"",
"Ubiquitous Robotics is a novel paradigm aimed at addressing the coordinated behaviour of robots in environments that are intelligent per se. To this aim, suitable methods to enforce cooperative activities must be assessed. In this article, a formalism to encode spatio-temporal situations whose occurrences must be detected by a context-aware system is introduced. The Situation Definition Language is a tool used to specify relationships among classes of sensory data in distributed systems (such as those adhering to the Ubiquitous Robotics paradigm), without posing any assumption on how data themselves are acquired. The capabilities offered by the language are discussed with respect to a real-world scenario, where a team of mobile robots cooperates with an intelligent environment to perform service tasks. Specifically, the article focuses on the system ability to combine in a centralized representation information originating from distributed sources, either mobile (i.e., the robots) or fixed (i.e., the intelligent devices in the network)."
]
} |
1710.04981 | 2766927912 | There have been several attempts at modeling context in robots. However, these attempts either assume a fixed number of contexts or use a rule-based approach to determine when to increment the number of contexts. In this paper, we pose the task of when to increment as a learning problem, which we solve using a Recurrent Neural Network. We show that the network successfully (with 98% testing accuracy) learns to predict when to increment, and demonstrate, in a scene modeling problem (where the correct number of contexts is not known), that the robot increments the number of contexts in an expected manner (i.e., the entropy of the system is reduced). We also present how the incremental model can be used for various scene reasoning tasks. | Although context has been widely studied in other fields, it has not received sufficient attention in the robotics community, except for, e.g., @cite_24 , who used spatial relations between objects as a contextual prior for object detection in a Markov Random Field; @cite_8 , who adapted Latent Dirichlet Allocation on top of object concepts for rule-based incremental context modeling; and @cite_14 , who proposed using a variant of Bayesian Networks for context modeling in underwater robots. | {
"cite_N": [
"@cite_24",
"@cite_14",
"@cite_8"
],
"mid": [
"2022153165",
"2667996719",
"2152608974"
],
"abstract": [
"RGB-D cameras, which give an RGB image together with depths, are becoming increasingly popular for robotic perception. In this paper, we address the task of detecting commonly found objects in the three-dimensional (3D) point cloud of indoor scenes obtained from such cameras. Our method uses a graphical model that captures various features and contextual relations, including the local visual appearance and shape cues, object co-occurrence relationships and geometric relationships. With a large number of object classes and relations, the model's parsimony becomes important and we address that by using multiple types of edge potentials. We train the model using a maximum-margin learning approach. In our experiments concerning a total of 52 3D scenes of homes and offices (composed from about 550 views), we get a performance of 84.06% and 73.38% in labeling office and home scenes respectively for 17 object classes each. We also present a method for a robot to search for an object using the learned model and the contextual information available from the current labelings of the scene. We applied this algorithm successfully on a mobile robot for the task of finding 12 object classes in 10 different offices and achieved a precision of 97.56% with 78.43% recall.",
"This paper presents ongoing research in the SWARMs project towards facilitating context awareness in underwater robots. In particular, the focus of this paper is put on the context reasoning part. The underwater environment introduces uncertainties in context data which lead to difficulties in the context reasoning phase. As probability is the best well-known formalism for computational scientific reasoning under uncertainties, the emerging and effective probabilistic reasoning method, namely, Multi-Entity Bayesian Network (MEBN), is explored for its feasibility to reason under uncertainties in the SWARMs project. A simple use case for oil spill monitoring is used to verify the usefulness of MEBN. The results show that the MEBN is a promising approach to reason about context in the presence of uncertainties in the underwater robot field.",
"In this paper, we formalize and model context in terms of a set of concepts grounded in the sensorimotor interactions of a robot. The concepts are modeled as a web using Markov Random Field (MRF), inspired from the concept web hypothesis for representing concepts in humans. On this concept web, we treat context as a latent variable of Latent Dirichlet Allocation (LDA), which is a widely-used method in computational linguistics for modeling topics in texts. We extend the standard LDA method in order to make it incremental so that: 1) it does not relearn everything from scratch given new interactions (i.e., it is online); and 2) it can discover and add a new context into its model when necessary. We demonstrate on the iCub platform that, partly owing to modeling context on top of the concept web, our approach is adaptive, online, and robust: it is adaptive and online since it can learn and discover a new context from new interactions. It is robust since it is not affected by irrelevant stimuli and it can discover contexts after a few interactions only. Moreover, we show how to use the context learned in such a model for two important tasks: object recognition and planning."
]
} |
1710.04981 | 2766927912 | There have been several attempts at modeling context in robots. However, these attempts either assume a fixed number of contexts or use a rule-based approach to determine when to increment the number of contexts. In this paper, we pose the task of when to increment as a learning problem, which we solve using a Recurrent Neural Network. We show that the network successfully (with 98% testing accuracy) learns to predict when to increment, and demonstrate, in a scene modeling problem (where the correct number of contexts is not known), that the robot increments the number of contexts in an expected manner (i.e., the entropy of the system is reduced). We also present how the incremental model can be used for various scene reasoning tasks. | There have been some attempts at incremental context or topic modeling, in text modeling @cite_22 , in computer vision @cite_19 , and in robotics @cite_3 @cite_8 . However, these attempts are generally rule-based approaches, looking at the errors or the entropy (perplexity) of the system to determine when to increment. Since these rules are hand-crafted by intuition, they may fail to capture the underlying structure of the data for determining when to increment. There are also methods such as Hierarchical Dirichlet Processes @cite_10 or its nested version @cite_18 that assume the availability of all data to estimate the number of topics or assume an infinite number of topics, both of which are unrealistic for robots continually interacting with the environment and getting into new contexts throughout their lifetime. | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_8",
"@cite_3",
"@cite_19",
"@cite_10"
],
"mid": [
"2005564522",
"159230833",
"2152608974",
"2033272336",
"",
"2158266063"
],
"abstract": [
"We develop a nested hierarchical Dirichlet process (nHDP) for hierarchical topic modeling. The nHDP generalizes the nested Chinese restaurant process (nCRP) to allow each word to follow its own path to a topic node according to a per-document distribution over the paths on a shared tree. This alleviates the rigid, single-path formulation assumed by the nCRP, allowing documents to easily express complex thematic borrowings. We derive a stochastic variational inference algorithm for the model, which enables efficient inference for massive collections of text documents. We demonstrate our algorithm on 1.8 million documents from The New York Times and 2.7 million documents from Wikipedia .",
"Inference algorithms for topic models are typically designed to be run over an entire collection of documents after they have been observed. However, in many applications of these models, the collection grows over time, making it infeasible to run batch algorithms repeatedly. This problem can be addressed by using online algorithms, which update estimates of the topics as each document is observed. We introduce two related RaoBlackwellized online inference algorithms for the latent Dirichlet allocation (LDA) model – incremental Gibbs samplers and particle filters – and compare their runtime and performance to that of existing algorithms.",
"In this paper, we formalize and model context in terms of a set of concepts grounded in the sensorimotor interactions of a robot. The concepts are modeled as a web using Markov Random Field (MRF), inspired from the concept web hypothesis for representing concepts in humans. On this concept web, we treat context as a latent variable of Latent Dirichlet Allocation (LDA), which is a widely-used method in computational linguistics for modeling topics in texts. We extend the standard LDA method in order to make it incremental so that: 1) it does not relearn everything from scratch given new interactions (i.e., it is online); and 2) it can discover and add a new context into its model when necessary. We demonstrate on the iCub platform that, partly owing to modeling context on top of the concept web, our approach is adaptive, online, and robust: it is adaptive and online since it can learn and discover a new context from new interactions. It is robust since it is not affected by irrelevant stimuli and it can discover contexts after a few interactions only. Moreover, we show how to use the context learned in such a model for two important tasks: object recognition and planning.",
"In the context of developmental robotics, a robot has to cope with complex sensorimotor spaces by reducing their dimensionality. In the case of sensor space reduction, classical approaches for pattern recognition use either hard-coded feature detection or supervised learning. We believe supervised learning and hard-coded feature extraction must be extended with unsupervised learning of feature representations. In this paper, we present an approach to learn representations using space-variant images and saccades. The saccades are driven by a measure of quantity of information in the visual scene, emerging from the activations of Restricted Boltzmann Machines (RBMs). The RBM, a generative model, is trained incrementally on locations where the system saccades. Our approach is implemented using real data captured by a NAO robot in indoor conditions.",
"",
"We consider problems involving groups of data where each observation within a group is a draw from a mixture model and where it is desirable to share mixture components between groups. We assume that the number of mixture components is unknown a priori and is to be inferred from the data. In this setting it is natural to consider sets of Dirichlet processes, one for each group, where the well-known clustering property of the Dirichlet process provides a nonparametric prior for the number of mixture components within each group. Given our desire to tie the mixture models in the various groups, we consider a hierarchical model, specifically one in which the base measure for the child Dirichlet processes is itself distributed according to a Dirichlet process. Such a base measure being discrete, the child Dirichlet processes necessarily share atoms. Thus, as desired, the mixture models in the different groups necessarily share mixture components. We discuss representations of hierarchical Dirichlet processes ..."
]
} |
1710.04890 | 2766233171 | With the advent of the Internet of Things and Industry 4.0, an enormous amount of data is produced at the edge of the network. Due to a lack of computing power, this data is currently sent to the cloud, where centralized machine learning models are trained to derive higher-level knowledge. With the recent development of specialized machine learning hardware for mobile devices, a new era of distributed learning is about to begin that raises a new research question: How can we search in distributed machine learning models? Machine learning at the edge of the network has many benefits, such as low-latency inference and increased privacy. Such distributed machine learning models can also learn in a personalized way for a human user, a specific context, or application scenario. As training data stays on the devices, control over possibly sensitive data is preserved, as it is not shared with a third party. This new form of distributed learning leads to the partitioning of knowledge between many devices, which makes access difficult. In this paper we tackle the problem of finding specific knowledge by forwarding a search request (query) to a device that can answer it best. To that end, we use an entropy-based quality metric that takes the context of a query and the learning quality of a device into account. We show that our forwarding strategy can achieve over 95% accuracy in an urban mobility scenario, where we use data from 30 000 people commuting in the city of Trento, Italy. | Information retrieval from peer-to-peer (P2P) systems and machine learning are both well-studied areas. Today, machine learning is often done in Big Data scenarios, where all training data is logically centralized. There exist many approaches to distribute the training data and machine learning models between several machines, for example on a cluster. 
These systems have the benefit of a centralized controller that actively manages how information is distributed between the different machines. In such scenarios, communication-efficient distribution of data between machines is a hard research problem of its own @cite_10 . | {
"cite_N": [
"@cite_10"
],
"mid": [
"2516174802"
],
"abstract": [
"Vertex-centric graph processing systems such as Pregel, PowerGraph, or GraphX recently gained popularity due to their superior performance of data analytics on graph-structured data. These systems exploit the graph structure to improve data access locality during computation, making use of specialized graph partitioning algorithms. Recent partitioning techniques assume a uniform and constant amount of data exchanged between graph vertices (i.e., uniform vertex traffic) and homogeneous underlying network costs. However, in real-world scenarios vertex traffic and network costs are heterogeneous. This leads to suboptimal partitioning decisions and inefficient graph processing. To this end, we designed GrapH, the first graph processing system using vertex-cut graph partitioning that considers both, diverse vertex traffic and heterogeneous network, to minimize overall communication costs. The main idea is to avoid frequent communication over expensive network links using an adaptive edge migration strategy. Our evaluations show an improvement of 60% in communication costs compared to state-of-the-art partitioning approaches."
]
} |
1710.04890 | 2766233171 | With the advent of the Internet of Things and Industry 4.0, an enormous amount of data is produced at the edge of the network. Due to a lack of computing power, this data is currently sent to the cloud, where centralized machine learning models are trained to derive higher-level knowledge. With the recent development of specialized machine learning hardware for mobile devices, a new era of distributed learning is about to begin that raises a new research question: How can we search in distributed machine learning models? Machine learning at the edge of the network has many benefits, such as low-latency inference and increased privacy. Such distributed machine learning models can also be personalized for a human user, a specific context, or an application scenario. As training data stays on the devices, control over possibly sensitive data is preserved, as it is not shared with a third party. This new form of distributed learning leads to the partitioning of knowledge between many devices, which makes access difficult. In this paper we tackle the problem of finding specific knowledge by forwarding a search request (query) to a device that can answer it best. To that end, we use an entropy-based quality metric that takes the context of a query and the learning quality of a device into account. We show that our forwarding strategy can achieve over 95% accuracy in an urban mobility scenario where we use data from 30 000 people commuting in the city of Trento, Italy. | First P2P systems such as Chord @cite_33 , CAN @cite_25 and Pastry @cite_30 tackled the problem of how to find specific data items in a distributed system. Except for CAN, most of this early work focuses on retrieval based on one unique key such as a hash value. CAN allows for multidimensional keys in Euclidean space to locate data items. All approaches, however, share the drawback that they can only retrieve items that are identified by a unique index. | {
"cite_N": [
"@cite_30",
"@cite_25",
"@cite_33"
],
"mid": [
"2167898414",
"2163059190",
"2158049821"
],
"abstract": [
"This paper presents the design and evaluation of Pastry, a scalable, distributed object location and routing substrate for wide-area peer-to-peer applications. Pastry performs application-level routing and object location in a potentially very large overlay network of nodes connected via the Internet. It can be used to support a variety of peer-to-peer applications, including global data storage, data sharing, group communication and naming. Each node in the Pastry network has a unique identifier (nodeId). When presented with a message and a key, a Pastry node efficiently routes the message to the node with a nodeId that is numerically closest to the key, among all currently live Pastry nodes. Each Pastry node keeps track of its immediate neighbors in the nodeId space, and notifies applications of new node arrivals, node failures and recoveries. Pastry takes into account network locality; it seeks to minimize the distance messages travel, according to a scalar proximity metric like the number of IP routing hops. Pastry is completely decentralized, scalable, and self-organizing; it automatically adapts to the arrival, departure and failure of nodes. Experimental results obtained with a prototype implementation on an emulated network of up to 100,000 nodes confirm Pastry's scalability and efficiency, its ability to self-organize and adapt to node failures, and its good network locality properties.",
"Hash tables - which map \"keys\" onto \"values\" - are an essential building block in modern software systems. We believe a similar functionality would be equally valuable to large distributed systems. In this paper, we introduce the concept of a Content-Addressable Network (CAN) as a distributed infrastructure that provides hash table-like functionality on Internet-like scales. The CAN is scalable, fault-tolerant and completely self-organizing, and we demonstrate its scalability, robustness and low-latency properties through simulation.",
"A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes."
]
} |
1710.04890 | 2766233171 | With the advent of the Internet of Things and Industry 4.0, an enormous amount of data is produced at the edge of the network. Due to a lack of computing power, this data is currently sent to the cloud, where centralized machine learning models are trained to derive higher-level knowledge. With the recent development of specialized machine learning hardware for mobile devices, a new era of distributed learning is about to begin that raises a new research question: How can we search in distributed machine learning models? Machine learning at the edge of the network has many benefits, such as low-latency inference and increased privacy. Such distributed machine learning models can also be personalized for a human user, a specific context, or an application scenario. As training data stays on the devices, control over possibly sensitive data is preserved, as it is not shared with a third party. This new form of distributed learning leads to the partitioning of knowledge between many devices, which makes access difficult. In this paper we tackle the problem of finding specific knowledge by forwarding a search request (query) to a device that can answer it best. To that end, we use an entropy-based quality metric that takes the context of a query and the learning quality of a device into account. We show that our forwarding strategy can achieve over 95% accuracy in an urban mobility scenario where we use data from 30 000 people commuting in the city of Trento, Italy. | The second generation of P2P systems (e.g. Mercury @cite_21 , Squid @cite_3 , and ZNet @cite_27 ) introduced support for more complex, multidimensional, and range queries. This enabled searches such as retrieving all persons between 18 and 20 years of age. These systems enable search in multidimensional space, where data locality is usually achieved by dimension reduction techniques such as space-filling curves (e.g. @cite_19 ). A general problem is that range queries might be too restrictive in cases of sparse data.
For example, if there are very few results for the above-mentioned query, results for persons slightly older than 20 years would also be interesting for the user. | {
"cite_N": [
"@cite_19",
"@cite_27",
"@cite_21",
"@cite_3"
],
"mid": [
"2133843880",
"2097455771",
"2096538410",
"2157653075"
],
"abstract": [
"Peer-to-peer systems enable access to data spread over an extremely large number of machines. Most P2P systems support only simple lookup queries. However, many new applications, such as P2P photo sharing and massively multi-player games, would benefit greatly from support for multidimensional range queries. We show how such queries may be supported in a P2P system by adapting traditional spatial-database technologies with novel P2P routing networks and load-balancing algorithms. We show how to adapt two popular spatial-database solutions - kd-trees and space-filling curves - and experimentally compare their effectiveness.",
"Today's peer-to-peer (P2P) systems are unable to cope well with range queries on multi-dimensional data. To extend existing P2P systems and thus support multidimensional range queries, one needs to consider such issues as space partitioning and mapping, efficient query processing, and load balancing. In this paper, the authors describe a scheme called ZNet, which addresses all these issues. Moreover, an extensive performance study which evaluates ZNet against several recent proposals was conducted, and results show that ZNet possesses nearly all desirable properties, while others typically fail in one or another.",
"This paper presents the design of Mercury, a scalable protocol for supporting multi-attribute range-based searches. Mercury differs from previous range-based query systems in that it supports multiple attributes as well as performs explicit load balancing. To guarantee efficient routing and load balancing, Mercury uses novel light-weight sampling mechanisms for uniformly sampling random nodes in a highly dynamic overlay network. Our evaluation shows that Mercury is able to achieve its goals of logarithmic-hop routing and near-uniform load balancing. We also show that Mercury can be used to solve a key problem for an important class of distributed applications: distributed state maintenance for distributed games. We show that the Mercury-based solution is easy to use, and that it reduces the game's messaging overhead significantly compared to a naive approach.",
"A fundamental problem in large scale, decentralized distributed systems is the efficient discovery of information. This paper presents Squid, a peer-to-peer information discovery system that supports flexible searches and provides search guarantees. The fundamental concept underlying the approach is the definition of multi-dimensional information spaces and the maintenance of locality in these spaces. The key innovation is a dimensionality reducing indexing scheme that effectively maps the multi-dimensional information space to physical peers while preserving lexical locality. Squid supports complex queries containing partial keywords, wildcards and ranges. Analytical and simulation results show that Squid is scalable and efficient."
]
} |
1710.04890 | 2766233171 | With the advent of the Internet of Things and Industry 4.0, an enormous amount of data is produced at the edge of the network. Due to a lack of computing power, this data is currently sent to the cloud, where centralized machine learning models are trained to derive higher-level knowledge. With the recent development of specialized machine learning hardware for mobile devices, a new era of distributed learning is about to begin that raises a new research question: How can we search in distributed machine learning models? Machine learning at the edge of the network has many benefits, such as low-latency inference and increased privacy. Such distributed machine learning models can also be personalized for a human user, a specific context, or an application scenario. As training data stays on the devices, control over possibly sensitive data is preserved, as it is not shared with a third party. This new form of distributed learning leads to the partitioning of knowledge between many devices, which makes access difficult. In this paper we tackle the problem of finding specific knowledge by forwarding a search request (query) to a device that can answer it best. To that end, we use an entropy-based quality metric that takes the context of a query and the learning quality of a device into account. We show that our forwarding strategy can achieve over 95% accuracy in an urban mobility scenario where we use data from 30 000 people commuting in the city of Trento, Italy. | This gap was filled by research centered around nearest neighbor queries for P2P systems, such as pSearch @cite_37 and Semantic Small World @cite_23 . The main idea is to provide nearest-neighbor search for multidimensional queries. Most work focuses on selecting important dimensions @cite_0 @cite_28 or on methods to form an overlay network that connects nodes with similar information @cite_6 .
As in this paper, some of these approaches also form a small-world topology @cite_23 with a small network diameter, which makes each node reachable within a few hops and enables efficient routing. There is also work that relies on a predefined similarity metric, e.g. the Euclidean distance, and retrieves the k nearest data items in a large collection of high-dimensional data @cite_14 @cite_11 @cite_4 @cite_28 . | {
"cite_N": [
"@cite_37",
"@cite_14",
"@cite_4",
"@cite_28",
"@cite_6",
"@cite_0",
"@cite_23",
"@cite_11"
],
"mid": [
"1974677553",
"2086179657",
"1566489255",
"2234219295",
"",
"2084949326",
"1494374563",
"1563262396"
],
"abstract": [
"We describe an efficient peer-to-peer information retrieval system, pSearch, which supports state-of-the-art content- and semantic-based full-text searches. pSearch avoids the scalability problem of existing systems that employ centralized indexing, or index query flooding. It also avoids the nondeterminism that is exhibited by heuristic-based approaches. In pSearch, documents in the network are organized around their vector representations (based on modern document ranking algorithms) such that the search space for a given query is organized around related documents, achieving both efficiency and accuracy.",
"We propose a novel approach to solving the approximate k-nearest neighbor search problem in metric spaces. The search structure is based on a navigable small world graph with vertices corresponding to the stored elements, edges to links between them, and a variation of greedy algorithm for searching. The navigable small world is created simply by keeping old Delaunay graph approximation links produced at the start of construction. The approach is very universal, defined in terms of arbitrary metric spaces and at the same time it is very simple. The algorithm handles insertions in the same way as queries: by finding approximate neighbors for the inserted element and connecting it to them. Both search and insertion can be done in parallel requiring only local information from the structure. The structure can be made distributed. The accuracy of the probabilistic k-nearest neighbor queries can be adjusted without rebuilding the structure. The performed simulation for data in the Euclidean spaces shows that the structure built using the proposed algorithm has small world navigation properties with log^2(n) insertion and search complexity at fixed accuracy, and performs well at high dimensionality. Simulation on a CoPHiR dataset revealed its high efficiency in case of large datasets (more than an order of magnitude less metric computations at fixed recall) compared to permutation indexes. Only 0.03% of the 10 million 208-dimensional vector dataset needs to be evaluated to achieve 0.999 recall (virtually exact search). For recall 0.93, a processing speed of 2800 queries/s can be achieved on a dual Intel X5675 Xeon server node with Java implementation.",
"Similarity search in metric spaces represents an important paradigm for content-based retrieval in many applications. Existing centralized search structures can speed-up retrieval, but they do not scale up to large volume of data because the response time is linearly increasing with the size of the searched file. In this article, we study the problem of executing the nearest neighbor(s) queries in a distributed metric structure, which is based on the P2P communication paradigm and the generalized hyperplane partitioning. By exploiting parallelism in a dynamic network of computers, the query execution scales up very well considering both the number of distance computations and the hop count between the peers. Results are verified by experiments on real-life data sets.",
"Peer-to-peer systems have been widely used for sharing and exchanging data and resources among numerous computer nodes. Various data objects identifiable with high dimensional feature vectors, such as text, images, genome sequences, are starting to leverage P2P technology. Most of the existing works have been focusing on queries on data objects with one or few attributes and thus are not applicable on high dimensional data objects. In this study, we investigate K nearest neighbors query (KNN) on high dimensional data objects in P2P systems. Efficient query algorithm and solutions that address various technical challenges raised by high dimensionality, such as search space resolution and incremental search space refinement, are proposed. An extensive simulation using both synthetic and real data sets demonstrates that our proposal efficiently supports KNN query on high dimensional data in P2P systems.",
"",
"The retrieval facilities of most peer-to-peer (P2P) systems are limited to queries based on a unique identifier or a small set of keywords. The techniques used for this purpose are hardly applicable for content based image retrieval (CBIR) in a P2P network. Furthermore, we will argue that the curse of dimensionality and the high communication overhead prevent the adaptation of multidimensional search trees or fast sequential scan techniques for P2P CBIR. In the present paper we will propose two compact data representations that can be distributed in a P2P network and used as the basis for a source selection. This allows for communicating with only a small fraction of all peers during query processing without deteriorating the result quality significantly. We will also present experimental results confirming our approach.",
"For a peer-to-peer (P2P) system holding massive amount of data, efficient semantic based search for resources (such as data or services) is a key determinant to its scalability. This work presents the design of an overlay network, namely semantic small world (SSW), that facilitates efficient semantic based search in P2P systems. SSW is based on three innovative ideas: 1) small world network; 2) semantic clustering; 3) dimension reduction. Peers in SSW are clustered according to the semantics of their local data and self-organized as a small world overlay network. To address the maintenance issue of high dimensional overlay networks, a dynamic dimension reduction method, called adaptive space linearization, is used to construct a one-dimensional SSW that supports operations in the high dimensional semantic space. SSW achieves a very competitive trade-off between the search latencies traffic and maintenance overheads. Through extensive simulations, we show that SSW is much more scalable to very large network sizes and very large numbers of data objects compared to pSearch, the state-of-the-art semantic-based search technique for P2P systems. In addition, SSW is adaptive to distribution of data and locality of interest; is very resilient to failures; and has good load balancing property.",
"Given a query point Q, a Reverse Nearest Neighbor (RNN) Query returns all points in the database having Q as their nearest neighbor. The problem of RNN query has received much attention in a centralized database. However, not so much work has been done on this topic in the context of Peer-to-Peer (P2P) systems. In this paper, we shall do pioneering work on supporting distributed RNN query in large distributed and dynamic P2P networks. Our proposed RNN query algorithms are based on a distributed multi-dimensional index structure, called P2PRdNN-tree, which is relying on a super-peer-based P2P overlay. The results of our performance evaluation with real spatial data sets show that our proposed algorithms are indeed practically feasible for answering distributed RNN query in P2P systems."
]
} |
1710.04890 | 2766233171 | With the advent of the Internet of Things and Industry 4.0, an enormous amount of data is produced at the edge of the network. Due to a lack of computing power, this data is currently sent to the cloud, where centralized machine learning models are trained to derive higher-level knowledge. With the recent development of specialized machine learning hardware for mobile devices, a new era of distributed learning is about to begin that raises a new research question: How can we search in distributed machine learning models? Machine learning at the edge of the network has many benefits, such as low-latency inference and increased privacy. Such distributed machine learning models can also be personalized for a human user, a specific context, or an application scenario. As training data stays on the devices, control over possibly sensitive data is preserved, as it is not shared with a third party. This new form of distributed learning leads to the partitioning of knowledge between many devices, which makes access difficult. In this paper we tackle the problem of finding specific knowledge by forwarding a search request (query) to a device that can answer it best. To that end, we use an entropy-based quality metric that takes the context of a query and the learning quality of a device into account. We show that our forwarding strategy can achieve over 95% accuracy in an urban mobility scenario where we use data from 30 000 people commuting in the city of Trento, Italy. | All these approaches have been designed to retrieve items that are explicitly defined: items that match a specific identifier (id, hash value), fall in a specific range of a set of attributes, or are close to a given query. In order to retrieve knowledge, this notion has to be extended by some sort of confidence metric that takes the quality of the available knowledge (information) into account.
Such a confidence metric needs to express the expertise of a node, reflecting for example that it holds a lot of similar information @cite_38 or can make reliable predictions. In our previous work @cite_38 we tackled this issue for knowledge modeled as N-dimensional point clouds. We proposed a point-cluster-based confidence metric that takes the variance and the number of points in each cluster into account as indicators of quality. | {
"cite_N": [
"@cite_38"
],
"mid": [
"2286959057"
],
"abstract": [
"By 2020, the Internet of Things will consist of 26 Billion connected devices. All these devices will be collecting an innumerable amount of raw observations, for example, GPS positions or communication patterns. In order to benefit from this enormous amount of information, machine learning algorithms are used to derive knowledge from the gathered observations. This benefit can be increased further, if the devices are enabled to collaborate by sharing gathered knowledge. In a massively distributed environment, this is not an easy task, as the knowledge on each device can be very heterogeneous and based on a different amount of observations in diverse contexts. In this paper, we propose two strategies to route a query for specific knowledge to a device that can answer it with high confidence. To that end, we developed a confidence metric that takes the number and variance of the observations of a device into account. Our routing strategies are based on local routing tables that can either be learned from previous queries over time or actively maintained by interchanging knowledge models. We evaluated both routing strategies on real world and synthetic data. Our evaluations show that the knowledge retrieved by the presented approaches is up to @math as accurate as the global optimum."
]
} |
1710.05104 | 2765381867 | Optic disk segmentation is a prerequisite step in automatic retinal screening systems. In this paper, we propose an algorithm for optic disk segmentation based on a local adaptive thresholding method. The location of the optic disk is validated by the intensity and average vessel width of retinal images. Then adaptive thresholding is applied to the temporal and nasal parts of the optic disk separately. Adaptive thresholding makes our algorithm robust to illumination variations and various image acquisition conditions. Moreover, experimental results on the DRIVE and KHATAM databases show promising results compared to the recent literature. In the DRIVE database, the optic disk in all images is correctly located and the mean overlap reached 43.21%. In the KHATAM database, the optic disk is correctly detected in 98% of the images, with a mean overlap of 36.32%. | There have been several approaches to optic disk segmentation. The major efforts in this domain can be divided into two categories: template-based methods, which obtain optic disk boundary approximations @cite_4 , and methods based on deformable models (snakes), which extract the optic disk boundary as exactly as possible. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2104324599"
],
"abstract": [
"Optic disc (OD) detection is an important step in developing systems for automated diagnosis of various serious ophthalmic pathologies. This paper presents a new template-based methodology for segmenting the OD from digital retinal images. This methodology uses morphological and edge detection techniques followed by the Circular Hough Transform to obtain a circular OD boundary approximation. It requires a pixel located within the OD as initial information. For this purpose, a location methodology based on a voting-type algorithm is also proposed. The algorithms were evaluated on the 1200 images of the publicly available MESSIDOR database. The location procedure succeeded in 99% of cases, taking an average computational time of 1.67 s. with a standard deviation of 0.14 s. On the other hand, the segmentation algorithm rendered an average common area overlapping between automated segmentations and true OD regions of 86%. The average computational time was 5.69 s with a standard deviation of 0.54 s. Moreover, a discussion on advantages and disadvantages of the models more generally used for OD segmentation is also presented in this paper."
]
} |
1710.05104 | 2765381867 | Optic disk segmentation is a prerequisite step in automatic retinal screening systems. In this paper, we propose an algorithm for optic disk segmentation based on a local adaptive thresholding method. The location of the optic disk is validated by the intensity and average vessel width of retinal images. Then adaptive thresholding is applied to the temporal and nasal parts of the optic disk separately. Adaptive thresholding makes our algorithm robust to illumination variations and various image acquisition conditions. Moreover, experimental results on the DRIVE and KHATAM databases show promising results compared to the recent literature. In the DRIVE database, the optic disk in all images is correctly located and the mean overlap reached 43.21%. In the KHATAM database, the optic disk is correctly detected in 98% of the images, with a mean overlap of 36.32%. | @cite_3 proposed an algorithm based on the curvelet transform and a deformable variational level set model. In this algorithm, probable optic disk areas are obtained by applying the curvelet transform and selecting the brightest area in the enhanced images. Then the region with the high-value coefficients in the modified reconstructed image is chosen as the optic disk location. Finally, blood vessels are removed by morphological operations and the optic disk boundary is extracted using a level set deformable model. This algorithm has two problems: first, applying the curvelet transform and morphological operations to the entire image is time-consuming. Also, using the @cite_8 algorithm to modify the curvelet coefficients, followed by a constant threshold, fails to detect blood vessels reliably because of vessel-like patterns such as hemorrhages and microaneurysms. Therefore, making decisions based on these results affects the final result. | {
"cite_N": [
"@cite_3",
"@cite_8"
],
"mid": [
"2127605333",
"2133251749"
],
"abstract": [
"Efficient optic disk (OD) localization and segmentation are important tasks in automated retinal screening. In this paper, we take digital curvelet transform (DCUT) of the enhanced retinal image and modify its coefficients based on the sparsity of curvelet coefficients to get probable location of OD. If there are not yellowish objects in retinal images or their size are negligible, we can then directly detect OD location by performing Canny edge detector to reconstructed image with modified coefficients. Otherwise, if the size of these objects is eminent, we can see circular regions in edge map as candidate regions for OD. In this case, we use some morphological operations to fill these circular regions and erode them to get final locations for candidate regions and remove undesired pixels in edge map. Finally, we choose the candidate region that has maximum summation of pixels in strongest edge map that obtained by performing threshold to curvelet-based enhanced image, as final location of OD. This method has been tested on different retinal image datasets and quantitative results are presented.",
"We present a new method for contrast enhancement based on the curvelet transform. The curvelet transform represents edges better than wavelets, and is therefore well-suited for multiscale edge enhancement. We compare this approach with enhancement based on the wavelet transform, and the multiscale retinex. In a range of examples, we use edge detection and segmentation, among other processing applications, to provide for quantitative comparative evaluation. Our findings are that curvelet based enhancement out-performs other enhancement methods on noisy images, but on noiseless or near noiseless images curvelet based enhancement is not remarkably better than wavelet based enhancement."
]
} |
1710.05104 | 2765381867 | Optic disk segmentation is a prerequisite step in automatic retinal screening systems. In this paper, we propose an algorithm for optic disk segmentation based on a local adaptive thresholding method. The location of the optic disk is validated by the intensity and average vessel width of retinal images. Then adaptive thresholding is applied to the temporal and nasal parts of the optic disk separately. Adaptive thresholding makes our algorithm robust to illumination variations and various image acquisition conditions. Moreover, experimental results on the DRIVE and KHATAM databases show promising results compared to the recent literature. In the DRIVE database, the optic disk in all images is correctly located and the mean overlap reached 43.21%. In the KHATAM database, the optic disk is correctly detected in 98% of the images, with a mean overlap of 36.32%. | @cite_6 proposed a method based on template matching. They used template matching to estimate the position of the optic disk. This position is also used to initialize the points of the active contour. Then, they used morphological operations to remove vessels and improve the boundary around the optic disk. Finally, the optic disk boundary is extracted by applying a snake to the derived region. @cite_1 used an iterative thresholding algorithm to approximate the center of the optic disk. Then they applied PCA (Principal Component Analysis) to the candidate regions obtained in the previous stage to find the final location of the optic disk. Finally, as in @cite_6 , a morphological method is used to eliminate blood vessels, followed by applying an active contour to extract the optic disk boundary. | {
"cite_N": [
"@cite_1",
"@cite_6"
],
"mid": [
"2542433306",
"1868104444"
],
"abstract": [
"An efficient optic disk localization and segmentation are important tasks in an automated retinal image analysis system. General-purpose edge detection algorithms often fail to segment the optic disk due to fuzzy boundaries, inconsistent image contrast or missing edge features. This paper presents a method to automatically locate and boundary detect of the optic disk. The detection procedure comprises two independent methodologies. On one hand, a location methodology obtains a pixel that belongs to the OD using iterative thresholding method followed by Principal Component Analysis techniques (PCA) and, on the other hand, a boundary segmentation methodology estimates the OD boundary by applying region-based active contour model in a variational level set formulation (RSF). The method uses an improved geometric active contour model which can not only solve the boundary leakage problem but also is less sensitive to intensity inhomogeneity The results from the RSF method were compared with conventional optic disk detection using a geometric active contour models (ACM) and later verified with hand-drawn ground truth. Results indicate 89 accuracy for identification and 95.05 average accuracy in localizing the optic disc boundary.",
"The location of the optic disc is of critical importance in retinal image analysis. In this work we improve on an approach introduced by Mendels, Heneghan and Thiran (1999) which localises an optic disc region through grey level morphology, followed by snake fitting. We propose and implement both the automatic initialisation of the snake and the application of morphology in colour space. We examine various methods of performing the morphology step (to remove the interference of blood vessels) and compare them against each other. We demonstrate that our proposed simple Lab colour morphology method is particularly suitable for the characteristics of our optic disc images. Results indicate 90.32 average accuracy in localising, the optic disc boundary."
]
} |
1710.05104 | 2765381867 | Optic disk segmentation is a prerequisite step in automatic retinal screening systems. In this paper, we propose an algorithm for optic disk segmentation based on a local adaptive thresholding method. Location of the optic disk is validated by the intensity and average vessel width of retinal images. Then an adaptive thresholding is applied on the temporal and nasal parts of the optic disk separately. Adaptive thresholding makes our algorithm robust to illumination variations and various image acquisition conditions. Moreover, experimental results on the DRIVE and KHATAM databases show promising performance compared to the recent literature. In the DRIVE database, the optic disk in all images is correctly located and the mean overlap reached 43.21%. The optic disk is correctly detected in 98% of the images with a mean overlap of 36.32% in the KHATAM database. | Active contour methods work on the gradient of the image and lock onto a homogeneous region enclosed by strong gradient information. Since the gradient along the optic disk boundary is not homogeneous, active contour methods should be modified or combined with other methods. @cite_6 and @cite_1 used morphological operations to address this problem. However, using morphological operations can also blur and change the location of the optic disk boundary; therefore, the resulting optic disk boundary segmentation is not reliable. | {
"cite_N": [
"@cite_1",
"@cite_6"
],
"mid": [
"2542433306",
"1868104444"
],
"abstract": [
"An efficient optic disk localization and segmentation are important tasks in an automated retinal image analysis system. General-purpose edge detection algorithms often fail to segment the optic disk due to fuzzy boundaries, inconsistent image contrast or missing edge features. This paper presents a method to automatically locate and boundary detect of the optic disk. The detection procedure comprises two independent methodologies. On one hand, a location methodology obtains a pixel that belongs to the OD using iterative thresholding method followed by Principal Component Analysis techniques (PCA) and, on the other hand, a boundary segmentation methodology estimates the OD boundary by applying region-based active contour model in a variational level set formulation (RSF). The method uses an improved geometric active contour model which can not only solve the boundary leakage problem but also is less sensitive to intensity inhomogeneity The results from the RSF method were compared with conventional optic disk detection using a geometric active contour models (ACM) and later verified with hand-drawn ground truth. Results indicate 89 accuracy for identification and 95.05 average accuracy in localizing the optic disc boundary.",
"The location of the optic disc is of critical importance in retinal image analysis. In this work we improve on an approach introduced by Mendels, Heneghan and Thiran (1999) which localises an optic disc region through grey level morphology, followed by snake fitting. We propose and implement both the automatic initialisation of the snake and the application of morphology in colour space. We examine various methods of performing the morphology step (to remove the interference of blood vessels) and compare them against each other. We demonstrate that our proposed simple Lab colour morphology method is particularly suitable for the characteristics of our optic disc images. Results indicate 90.32 average accuracy in localising, the optic disc boundary."
]
} |
1710.05104 | 2765381867 | Optic disk segmentation is a prerequisite step in automatic retinal screening systems. In this paper, we propose an algorithm for optic disk segmentation based on a local adaptive thresholding method. Location of the optic disk is validated by the intensity and average vessel width of retinal images. Then an adaptive thresholding is applied on the temporal and nasal parts of the optic disk separately. Adaptive thresholding makes our algorithm robust to illumination variations and various image acquisition conditions. Moreover, experimental results on the DRIVE and KHATAM databases show promising performance compared to the recent literature. In the DRIVE database, the optic disk in all images is correctly located and the mean overlap reached 43.21%. The optic disk is correctly detected in 98% of the images with a mean overlap of 36.32% in the KHATAM database. | @cite_7 introduced a modified active contour model based on estimating the optic disk boundary. All pixels are listed in descending order of gray-level; the top 13 In this paper, we propose a new algorithm for optic disk segmentation based on adaptive thresholding. We use gray-level information of the nasal and temporal parts of the optic disk to address the optic disk non-homogeneity problem. Moreover, we improve the accuracy of optic disk localization by adding vessel width information as well as the overall intensity of the region. Applying a fast vessel segmentation algorithm on a small part of the optic disk, followed by a thresholding approach, reduces the time complexity of our algorithm considerably. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2140072113"
],
"abstract": [
"This paper presents a novel deformable-model-based algorithm for fully automated detection of optic disk boundary in fundus images. The proposed method improves and extends the original snake (deforming-only technique) in two aspects: clustering and smoothing update. The contour points are first self-separated into edge-point group or uncertain-point group by clustering after each deformation, and these contour points are then updated by different criteria based on different groups. The updating process combines both the local and global information of the contour to achieve the balance of contour stability and accuracy. The modifications make the proposed algorithm more accurate and robust to blood vessel occlusions, noises, ill-defined edges and fuzzy contour shapes. The comparative results show that the proposed method can estimate the disk boundaries of 100 test images closer to the groundtruth, as measured by mean distance to closest point (MDCP) <3 pixels, with the better success rate when compared to those obtained by gradient vector flow snake (GVF-snake) and modified active shape models (ASM)"
]
} |
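The local adaptive-thresholding idea running through these related works can be made concrete with a short sketch. This is illustrative only, not the algorithm of any cited paper; the window size, the mean-based threshold, and the offset are assumptions. The point is that comparing each pixel against its neighborhood mean, rather than against one global threshold, tolerates the slow illumination drift typical of fundus images.

```python
import numpy as np

def local_adaptive_threshold(img, win=15, offset=0.0):
    """Binarize a grayscale image against a mean-based local threshold.

    Each pixel is compared with the mean intensity of its win x win
    neighborhood, so the decision adapts to slowly varying illumination.
    """
    img = np.asarray(img, dtype=float)
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    # Integral image: O(1) window sums per pixel.
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))  # zero row/column for clean indexing
    h, w = img.shape
    window_sum = (ii[win:win + h, win:win + w] - ii[:h, win:win + w]
                  - ii[win:win + h, :w] + ii[:h, :w])
    local_mean = window_sum / (win * win)
    return img > (local_mean + offset)
```

A single global threshold on an image with an intensity gradient would either swallow one side of the image or miss the other; the local mean adapts per region, which is the robustness property claimed for adaptive thresholding above.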
1710.04971 | 2964022677 | Scheduling the transmission of status updates over an error-prone communication channel is studied in order to minimize the long-term average age of information (AoI) at the destination under a constraint on the average number of transmissions at the source node. After each transmission, the source receives instantaneous ACK/NACK feedback, and decides on the next update without prior knowledge of the success of future transmissions. First, the optimal scheduling policy is studied under different feedback mechanisms when the channel statistics are known; in particular, the standard automatic repeat request (ARQ) and hybrid ARQ (HARQ) protocols are considered. Then, for an unknown environment, an average-cost reinforcement learning (RL) algorithm is proposed that learns the system parameters and the transmission policy in real time. The effectiveness of the proposed methods is verified through numerical simulations. | Most of the earlier work on AoI considers queue-based models, in which the status updates arrive at the source node randomly following a memoryless Poisson process, and are stored in a buffer before being transmitted to the destination @cite_23 @cite_9 . Instead, in the so-called generate-at-will model @cite_0 @cite_27 @cite_10 @cite_24 @cite_17 , also adopted in this paper, the status of the underlying process can be sampled at any time by the source node. | {
"cite_N": [
"@cite_9",
"@cite_0",
"@cite_24",
"@cite_27",
"@cite_23",
"@cite_10",
"@cite_17"
],
"mid": [
"",
"1570668034",
"2744483094",
"1932416753",
"2146487252",
"2588468001",
"1993918491"
],
"abstract": [
"",
"",
"Age of information is a newly proposed metric that captures delay from an application layer perspective. The age measures the amount of time that elapsed from the moment the mostly recently received update was generated until the present time. In this paper, we study an age minimization problem over a wireless broadcast network with many users, where only one user can be served at a time. We formulate a Markov decision process (MDP) to find dynamic transmission scheduling schemes, with the purpose of minimizing the long-run average age. While showing that an optimal scheduling algorithm for the MDP is a simple stationary switch-type, we propose a sequence of finite-state approximations for our infinite-state MDP and prove its convergence. We then propose both optimal off-line and online scheduling algorithms for the finite-approximate MDPs, depending on knowledge of time-varying arrivals.",
"We consider managing the freshness of status updates sent from a source (such as a sensor) to a monitoring node. The time-varying availability of energy at the sender limits the rate of update packet transmissions. At any time, the age of information is defined as the amount of time since the most recent update was successfully received. An offline solution that minimizes not only the time average age, but also the peak age for an arbitrary energy replenishment profile is derived. The related decision problem under stochastic energy arrivals at the sender is studied through a discrete time dynamic programming formulation, and the structure of the optimal policy that minimizes the expected age is shown. It is found that tracking the expected value of the current age (which is a linear operation), together with the knowledge of the current energy level at the sender side is sufficient for generating an optimal threshold policy. An effective online heuristic, Balance Updating (BU), that achieves performance close to an omniscient (offline) policy is proposed. Simulations of the policies indicate that they can significantly improve the age over greedy approaches. An extension of the formulation to stochastically formed updates is considered.",
"Emerging applications rely on wireless broadcast to disseminate time-critical information. For example, vehicular networks may exchange vehicle position and velocity information to enable safety applications. The number of nodes in one-hop communication range in such networks can be very large, leading to congestion and undesirable levels of packet collisions. Earlier work has examined such broadcasting protocols primarily from a MAC perspective and focused on selective aspects such as packet error rate. In this work, we propose a more comprehensive metric, the average system information age, which captures the requirement of such applications to maintain current state information from all other nearby nodes. We show that information age is minimized at an optimal operating point that lies between the extremes of maximum throughput and minimum delay. Further, while age can be minimized by saturating the MAC and setting the CW size to its throughput-optimal value, the same cannot be achieved without changes in existing hardware. Also, via simulations we show that simple contention window size adaptations like increasing or decreasing the window size are unsuitable for reducing age. This motivates our design of an application-layer broadcast rate adaptation algorithm. It uses local decisions at nodes in the network to adapt their messaging rate to keep the system age to a minimum. Our simulations and experiments with 300 ORBIT nodes show that the algorithm effectively adapts the messaging rates and minimizes the system age.",
"We consider a wireless broadcast network with a base station sending time-sensitive information to a number of clients. The Age of Information (AoI), namely the amount of time that elapsed since the most recently delivered packet was generated, captures the freshness of the information. We formulate a discrete-time decision problem to find a scheduling policy that minimizes the expected weighted sum AoI of the clients in the network. To the best of our knowledge, this is the first work to provide a scheduling policy that optimizes AoI in a wireless network with unreliable channels. The results are twofold: first, we show that a Greedy Policy, which transmits the packet with highest current age, is optimal for the case of symmetric networks. Then, for the general network case, we establish that the problem is indexable and obtain the Whittle Index in closed-form. Numerical results are presented to demonstrate the performance of the policies.",
"Increasingly ubiquitous communication networks and connectivity via portable devices have engendered a host of applications in which sources, for example people and environmental sensors, send updates of their status to interested recipients. These applications desire status updates at the recipients to be as timely as possible; however, this is typically constrained by limited network resources. In this paper, we employ a time-average age metric for the performance evaluation of status update systems. We derive general methods for calculating the age metric that can be applied to a broad class of service systems. We apply these methods to queue-theoretic system abstractions consisting of a source, a service facility and monitors, with the model of the service facility (physical constraints) a given. The queue discipline of first-come-first-served (FCFS) is explored. We show the existence of an optimal rate at which a source must generate its information to keep its status as timely as possible at all its monitors. This rate differs from those that maximize utilization (throughput) or minimize status packet delivery delay. While our abstractions are simpler than their real-world counterparts, the insights obtained, we believe, are a useful starting point in understanding and designing systems that support real time status updates."
]
} |
1710.04971 | 2964022677 | Scheduling the transmission of status updates over an error-prone communication channel is studied in order to minimize the long-term average age of information (AoI) at the destination under a constraint on the average number of transmissions at the source node. After each transmission, the source receives instantaneous ACK/NACK feedback, and decides on the next update without prior knowledge of the success of future transmissions. First, the optimal scheduling policy is studied under different feedback mechanisms when the channel statistics are known; in particular, the standard automatic repeat request (ARQ) and hybrid ARQ (HARQ) protocols are considered. Then, for an unknown environment, an average-cost reinforcement learning (RL) algorithm is proposed that learns the system parameters and the transmission policy in real time. The effectiveness of the proposed methods is verified through numerical simulations. | A constant packet failure probability for a status update system is investigated for the first time in @cite_26 , where status updates arrive according to a Poisson process, while the transmission time for each packet is exponentially distributed. Packet loss and large queuing delays due to old packets in the queue increase the AoI. Different scheduling decisions at the source node are investigated, including the last-come-first-served (LCFS) principle, which always transmits the most up-to-date packet, and retransmissions with preemptive priority, which preempt the current packet in service when a new packet arrives. | {
"cite_N": [
"@cite_26"
],
"mid": [
"2345637103"
],
"abstract": [
"We consider the peak age-of-information (PAoI) in an M M 1 queueing system with packet delivery error, i.e., update packets can get lost during transmissions to their destination. We focus on two types of policies, one is to adopt Last-Come-First-Served (LCFS) scheduling, and the other is to utilize retransmissions, i.e., keep transmitting the most recent packet. Both policies can effectively avoid the queueing delay of a busy channel and ensure a small PAoI. Exact PAoI expressions under both policies with different error probabilities are derived, including First-Come-First-Served (FCFS), LCFS with preemptive priority, LCFS with non-preemptive priority, Retransmission with preemptive priority, and Retransmission with non-preemptive priority. Numerical results obtained from analysis and simulation are presented to validate our results."
]
} |
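The effect of packet erasures on the age, central to the rows above, can be illustrated with a few lines of simulation. This is a hedged sketch, not the model of @cite_26 or of the main paper: slotted time, a generate-at-will source, i.i.d. erasures with instantaneous ACK/NACK, and a simple age-threshold transmission policy.

```python
import random

def average_aoi(threshold, p_fail=0.3, horizon=200_000, seed=0):
    """Long-run average AoI of an age-threshold policy over an erasure channel.

    Each slot: if the current age is at least `threshold`, the source samples
    a fresh update and transmits it; the attempt is erased with probability
    p_fail (i.i.d., with instantaneous ACK/NACK feedback). On success the
    destination's age resets to 1, otherwise it grows by one per slot.
    """
    rng = random.Random(seed)
    age, total = 1, 0
    for _ in range(horizon):
        if age >= threshold and rng.random() > p_fail:
            age = 1                     # fresh update delivered this slot
        else:
            age += 1                    # no attempt, or attempt erased
        total += age
    return total / horizon
```

Raising the threshold lowers the transmission rate at the price of a larger average age, which is precisely the freshness-versus-cost trade-off that the constrained scheduling problem formalizes.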
1710.04971 | 2964022677 | Scheduling the transmission of status updates over an error-prone communication channel is studied in order to minimize the long-term average age of information (AoI) at the destination under a constraint on the average number of transmissions at the source node. After each transmission, the source receives instantaneous ACK/NACK feedback, and decides on the next update without prior knowledge of the success of future transmissions. First, the optimal scheduling policy is studied under different feedback mechanisms when the channel statistics are known; in particular, the standard automatic repeat request (ARQ) and hybrid ARQ (HARQ) protocols are considered. Then, for an unknown environment, an average-cost reinforcement learning (RL) algorithm is proposed that learns the system parameters and the transmission policy in real time. The effectiveness of the proposed methods is verified through numerical simulations. | Broadcasting of status updates to multiple receivers over an unreliable broadcast channel is considered in @cite_10 . A low-complexity sub-optimal scheduling policy is proposed when the AoI at each receiver and the transmission error probabilities to all the receivers are known. However, only work-conserving policies, which update the information at every time slot, are considered in @cite_10 , since no constraint is imposed on the number of updates. Optimizing the scheduling decisions with multiple receivers is also investigated in @cite_24 , focusing on a perfect transmission medium, where an optimal scheduling algorithm for the MDP is shown to be of threshold type. To the best of our knowledge, @cite_24 is the only prior work in the literature which applies RL in the AoI framework. However, its goal is to learn the data arrival statistics, and it does not consider an unreliable communication link. Moreover, we employ an average-cost RL method, which has significant advantages over discounted-cost methods, as discussed in @cite_21 .
| {
"cite_N": [
"@cite_24",
"@cite_21",
"@cite_10"
],
"mid": [
"2744483094",
"2101915445",
"2588468001"
],
"abstract": [
"Age of information is a newly proposed metric that captures delay from an application layer perspective. The age measures the amount of time that elapsed from the moment the mostly recently received update was generated until the present time. In this paper, we study an age minimization problem over a wireless broadcast network with many users, where only one user can be served at a time. We formulate a Markov decision process (MDP) to find dynamic transmission scheduling schemes, with the purpose of minimizing the long-run average age. While showing that an optimal scheduling algorithm for the MDP is a simple stationary switch-type, we propose a sequence of finite-state approximations for our infinite-state MDP and prove its convergence. We then propose both optimal off-line and online scheduling algorithms for the finite-approximate MDPs, depending on knowledge of time-varying arrivals.",
"This paper presents a detailed study of average reward reinforcement learning, an undiscounted optimality framework that is more appropriate for cyclical tasks than the much better studied discounted framework. A wide spectrum of average reward algorithms are described, ranging from synchronous dynamic programming methods to several (provably convergent) asynchronous algorithms from optimal control and learning automata. A general sensitive discount optimality metric called n-discount-optimality is introduced, and used to compare the various algorithms. The overview identifies a key similarity across several asynchronous algorithms that is crucial to their convergence, namely independent estimation of the average reward and the relative values. The overview also uncovers a surprising limitation shared by the different algorithms: while several algorithms can provably generate gain-optimal policies that maximize average reward, none of them can reliably filter these to produce bias-optimal (or T-optimal) policies that also maximize the finite reward to absorbing goal states. This paper also presents a detailed empirical study of R-learning, an average reward reinforcement learning method, using two empirical testbeds: a stochastic grid world domain and a simulated robot environment. A detailed sensitivity analysis of R-learning is carried out to test its dependence on learning rates and exploration levels. The results suggest that R-learning is quite sensitive to exploration strategies, and can fall into sub-optimal limit cycles. The performance of R-learning is also compared with that of Q-learning, the best studied discounted RL method. Here, the results suggest that R-learning can be fine-tuned to give better performance than Q-learning in both domains.",
"We consider a wireless broadcast network with a base station sending time-sensitive information to a number of clients. The Age of Information (AoI), namely the amount of time that elapsed since the most recently delivered packet was generated, captures the freshness of the information. We formulate a discrete-time decision problem to find a scheduling policy that minimizes the expected weighted sum AoI of the clients in the network. To the best of our knowledge, this is the first work to provide a scheduling policy that optimizes AoI in a wireless network with unreliable channels. The results are twofold: first, we show that a Greedy Policy, which transmits the packet with highest current age, is optimal for the case of symmetric networks. Then, for the general network case, we establish that the problem is indexable and obtain the Whittle Index in closed-form. Numerical results are presented to demonstrate the performance of the policies."
]
} |
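The average-cost RL method mentioned in the last sentence above can be sketched with tabular R-learning, the average-reward Q-learning variant surveyed in the @cite_21 abstract. Every concrete choice here (the age-plus-transmission-cost objective, the state truncation, and all parameters) is an assumption for illustration, not the algorithm of the paper.

```python
import random

def r_learning_aoi(p_fail=0.3, tx_cost=2.0, steps=50_000,
                   alpha=0.1, beta=0.01, eps=0.1, max_age=20, seed=0):
    """Tabular R-learning sketch for an age-plus-transmission-cost MDP.

    State = current age (truncated at max_age); actions: 0 = wait,
    1 = transmit. Per-slot cost = age, plus tx_cost when transmitting.
    A transmission succeeds with probability 1 - p_fail and resets the
    age to 1. `rho` is the running estimate of the average cost and Q
    holds relative action values (cost-minimizing R-learning).
    """
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(max_age + 1)]
    rho, age = 0.0, 1
    for _ in range(steps):
        greedy = 0 if Q[age][0] <= Q[age][1] else 1
        a = rng.randrange(2) if rng.random() < eps else greedy
        cost = age + (tx_cost if a == 1 else 0.0)
        nxt = 1 if (a == 1 and rng.random() > p_fail) else min(age + 1, max_age)
        if a == greedy:  # average-cost estimate is updated on greedy steps only
            rho += beta * (cost - rho + min(Q[nxt]) - min(Q[age]))
        Q[age][a] += alpha * (cost - rho + min(Q[nxt]) - Q[age][a])
        age = nxt
    # Greedy policy per age; index 0 corresponds to age 1.
    policy = [0 if Q[s][0] <= Q[s][1] else 1 for s in range(1, max_age + 1)]
    return rho, policy
```

Unlike discounted Q-learning, the update subtracts the running average-cost estimate rho, so the learned values are relative and the objective is directly the long-run average cost rather than a discounted sum.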
1710.04623 | 2763435510 | Planar ornaments, a.k.a. wallpapers, are regular repetitive patterns which exhibit translational symmetry in two independent directions. There are exactly 17 distinct planar symmetry groups. We present a fully automatic method for complete analysis of planar ornaments in 13 of these groups, specifically, the groups called p6, p6m, p4g, p4m, p4, p31m, p3m, p3, cmm, pgg, pg, p2 and p1. Given the image of an ornament fragment, we present a method to simultaneously classify the input into one of the 13 groups and extract the so-called fundamental domain, the minimum region that is sufficient to reconstruct the entire ornament. A nice feature of our method is that even when the given ornament image is a small portion such that it does not contain multiple translational units, the symmetry group as well as the fundamental domain can still be defined. This is because, in contrast to the common approach, we do not attempt to first identify a global translational repetition lattice. Though the presented constructions work for quite a wide range of ornament patterns, a key assumption we make is that the perceivable motifs (shapes that repeat) alone do not provide clues for the underlying symmetries of the ornament. In this sense, our main target is the planar arrangements of asymmetric interlocking shapes, as in the symmetry art of Escher. | Ornament patterns have always been a source of curiosity and interest, not only in arts and crafts but also in other fields including mathematics, computation, cultural studies, etc. Early researchers mostly examined ornaments in cultural contexts, e.g., @cite_39 , with the goal of revealing social structures and their interaction via the dominant symmetries used in the ornament designs of individual cultures or geographical regions. In mathematics, ornament patterns are studied in terms of the groups formed by the symmetry operations. A few examples include @cite_18 @cite_20 @cite_7 .
The Dutch artist Escher took a particular interest in patterns formed by repeating asymmetric shapes and discovered a local structure leading to the same wallpaper patterns; his work on symmetry is examined in @cite_23 . Regular repetitive patterns such as Wallpaper and Frieze groups are even utilized in quite practical problems; for example, to analyze human gait @cite_35 or to achieve automatic fabric defect detection in 2D patterned textures @cite_30 @cite_3 @cite_40 . | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_18",
"@cite_7",
"@cite_3",
"@cite_39",
"@cite_40",
"@cite_23",
"@cite_20"
],
"mid": [
"2019654077",
"1575198668",
"2080504245",
"",
"2055375882",
"1989601806",
"2031577896",
"",
""
],
"abstract": [
"This paper proposes a generalized motif-based method for detecting defects in 16 out of 17 wallpaper groups in 2D patterned texture. It assumes that most patterned texture can be decomposed into lattices and their constituents-motifs. It then utilizes the symmetry property of motifs to calculate the energy of moving subtraction and its variance among different motifs. By learning the distribution of these values over a number of defect-free patterns, boundary conditions for discerning defective and defect-free patterns can be determined. This paper presents the theoretical foundation of the method, and defines the relations between motifs and lattice, from which a new concept called energy of moving subtraction is derived using norm metric measurement between a collection of circular shift matrices of motif and itself. It has been shown in this paper that the energy of moving subtraction amplifies the defect information of the defective motif. Together with its variance, an energy-variance space is further defined where decision boundaries are drawn for classifying defective and defect-free motifs. As the 16 wallpaper groups of patterned fabric can be transformed into three major groups, the proposed method is evaluated over these three major groups, from which 160 defect-free lattices samples are used for defining the decision boundaries, with 140 defect-free and 113 defective samples used for testing. An overall detection success rate of 93.32 is achieved for the proposed method. No other generalized approach can achieve this success rate has been reported before, and hence this result outperforms all other previously published approaches.",
"We analyze walking people using a gait sequence representation that bypasses the need for frame-to-frame tracking of body parts. The gait representation maps a video sequence of silhouettes into a pair of two-dimensional spatio-temporal patterns that are near-periodic along the time axis. Mathematically, such patterns are called \"frieze\" patterns and associated symmetry groups \"frieze groups\". With the help of a walking humanoid avatar, we explore variation in gait frieze patterns with respect to viewing angle, and find that the frieze groups of the gait patterns and their canonical tiles enable us to estimate viewing direction of human walking videos. In addition, analysis of periodic patterns allows us to determine the dynamic time warping and affine scaling that aligns two gait sequences from similar viewpoints. We also show how gait alignment can be used to perform human identification and model-based body part segmentation.",
"Abstract An investigation of the Moorish ornaments from the Alhambra (in Granada, Spain) shows that their symmetry groups belong to 13 different crystallographic (wallpaper) classes; this corrects several earlier enumerations and claims. The four classes of wallpaper groups missing in Alhambra (pg, p2, pgg, p3ml) have not been found in other Moorish ornaments, either. But the classification of repeating patterns by their symmetry groups is in many cases not really appropriate—account should be taken of the coloring of the patterns, of their interlace characteristics, etc. This leads to a variety of “symmetry groups”, not all of which have been fully investigated. Moreovr, the “global” approach to repeating ornaments is only of limited applicability, since it does not correspond to the way of thinking of the artisans involved, and does not cover all the possibilities of “local” order. The proper mathematical tools for the study of such structures which are only “locally orderly” remain to be developed.",
"",
"In this paper, we propose a method of automatic detection of texture-periodicity using superposition of distance matching functions (DMFs) followed by computation of their forward differences. The method has been specifically devised for automatically identifying row and column periodicities and thereby the size of periodic units from textile fabrics belonging to any of the 17 wallpaper groups and is a part of automatic fabric defect detection scheme being developed by us that needs periodicities along row and column directions. Overall row-DMF (or overall column-DMF) is obtained based on superposition of DMF of all rows (or columns) from the input image and its second forward difference is computed to get the overall maximum which is a direct measure of periodicity along row (or column) direction. Results from experiments on various near-regular textures demonstrate the capability of the proposed method for automatic periodicity extraction without the need of human intervention.",
"AcknowledgmentsIntroduction1. History and Theory of Plane Pattern Analysis2. Mathematical Principles and Terminology3. Color Symmetry4. One-Dimensional Patterns5. Two-Dimensional Patterns6. Finite Designs7. Problems in ClassificationConclusionAppendixes--1. The Four Rigid Motions of the Plane--2. The Seven Classes of One-Dimensional Design--3. Comparative Notation for the Two-Color, Two-Dimensional PatternsBibliographyIndexes",
"This paper presents a study of using ellipsoidal decision regions for motif-based patterned fabric defect detection, the result of which is found to improve the original detection success using max-min decision region of the energy-variance values. In our previous research, max-min decision region was found to be effective in distinct cases but ill detect the ambiguous false-positive and false-negative cases. To alleviate this problem, we first assume that the energy-variance values can be described by a Gaussian mixture model. Second, we apply k-means clustering to roughly identify the various clusters that make up the entire data population. Third, convex hull of each cluster is employed as a basis for fitting an ellipsoidal decision region over it. Defect detection is then based on these ellipsoidal regions. To validate the method, three wallpaper groups are evaluated using the new ellipsoidal regions, and compared with those results obtained using the max-min decision region. For the p2 group, success rate improves from 93.43 to 100 . For the pmm group, success rate improves from 95.9 to 96.72 , while the p4m group records the same success rate at 90.77 . This demonstrates the superiority of using ellipsoidal decision regions in motif-based defect detection.",
"",
""
]
} |
1710.04623 | 2763435510 | Planar ornaments, a.k.a. wallpapers, are regular repetitive patterns which exhibit translational symmetry in two independent directions. There are exactly 17 distinct planar symmetry groups. We present a fully automatic method for complete analysis of planar ornaments in 13 of these groups, specifically, the groups called p6, p6m, p4g, p4m, p4, p31m, p3m, p3, cmm, pgg, pg, p2 and p1. Given the image of an ornament fragment, we present a method to simultaneously classify the input into one of the 13 groups and extract the so-called fundamental domain, the minimum region that is sufficient to reconstruct the entire ornament. A nice feature of our method is that even when the given ornament image is a small portion such that it does not contain multiple translational units, the symmetry group as well as the fundamental domain can still be defined. This is because, in contrast to common approach, we do not attempt to first identify a global translational repetition lattice. Though the presented constructions work for quite a wide range of ornament patterns, a key assumption we make is that the perceivable motifs (shapes that repeat) alone do not provide clues for the underlying symmetries of the ornament. In this sense, our main target is the planar arrangements of asymmetric interlocking shapes, as in the symmetry art of Escher. | In the general pool of works in computational symmetry, the main focus has been finding symmetry axes in single objects. Since a single object can exhibit only mirror reflections and rotational symmetries, the efforts are heavily focused on reflections and rotations @cite_25 @cite_34 @cite_12 @cite_10 @cite_14 @cite_27 @cite_42 @cite_17 or on finding local symmetries, a.k.a. shape skeletons. To our knowledge, @cite_38 @cite_1 are the only works that address finding a glide reflection axis in an image, though their goal is to study one-dimensional arrangements of symmetry, e.g., leaves.
The works targeting shape symmetry, whether directly from an image or from a segmented region, fall outside our focus. Our focus is on the symmetries of planar patterns formed by regular repetition of shapes via four primitive geometric transformations. | {
"cite_N": [
"@cite_38",
"@cite_14",
"@cite_42",
"@cite_1",
"@cite_27",
"@cite_34",
"@cite_10",
"@cite_25",
"@cite_12",
"@cite_17"
],
"mid": [
"1554810293",
"2154063579",
"2141710021",
"2116015281",
"2161586490",
"2089841394",
"1566328901",
"2141299360",
"2119782970",
"2118798945"
],
"abstract": [
"We propose a novel, self-validating approach for detecting curved reflection symmetry patterns from real, unsegmented images. Our method benefits from the observation that any curved symmetry pattern can be approximated by a sequence of piecewise rigid reflection patterns. Pairs of symmetric feature points are first detected (including both inliers and outliers) and treated as 'particles'. Multiple-hypothesis sampling and pruning are used to sample a smooth path going through inlier particles to recover the curved reflection axis. Our approach generates an explicit supporting region of the curved reflection symmetry, which is further used for intermediate self-validation, making the detection process more robust than prior state-of-the-art algorithms. Experimental results on 200+ images demonstrate the effectiveness and superiority of the proposed approach.",
"This paper addresses the problem of detecting axes of bilateral symmetry in images. In order to achieve robustness to variation in illumination, only edge-gradient information is used. To overcome the problem of edge breaks, a potential field is developed from the edge map which spreads the information in the image plane. Pairs of points in the image plane are made to vote for their axes of symmetry with some confidence values. To make the method robust to overlapping objects, only local features in the form of Taylor coefficients are used for quantifying symmetry. We define an axis of symmetry histogram, which is used to accumulate the weighted votes for all possible axes of symmetry. To reduce the computational complexity of voting, a hashing scheme is proposed, wherein pairs of points, whose potential fields are too asymmetric, are pruned by not being counted for the vote. Experimental results indicate that the proposed method is fairly robust to edge breaks and is able to detect symmetries even when only 0.05 of the possible pairs are used for voting.",
"We present a novel and effective algorithm for rotation symmetry group detection from real-world images. We propose a frieze-expansion method that transforms rotation symmetry group detection into a simple translation symmetry detection problem. We define and construct a dense symmetry strength map from a given image, and search for potential rotational symmetry centers automatically. Frequency analysis, using discrete Fourier transform (DFT), is applied to the frieze-expansion patterns to uncover the types and the cardinality of multiple rotation symmetry groups in an image, concentric or otherwise. Furthermore, our detection algorithm can discriminate discrete versus continuous and cyclic versus dihedral symmetry groups, and identify the corresponding supporting regions in the image. Experimental results on over 80 synthetic and natural images demonstrate superior performance of our rotation detection algorithm in accuracy and in speed over the state of the art rotation detection algorithms.",
"We generalize the concept of bilateral reflection symmetry to curved glide-reflection symmetry in 2D euclidean space, such that classic reflection symmetry becomes one of its six special cases. We propose a local feature-based approach for curved glide-reflection symmetry detection from real, unsegmented 2D images. Furthermore, we apply curved glide-reflection axis detection for curved reflection surface detection in 3D images. Our method discovers, groups, and connects statistically dominant local glide-reflection axes in an Axis-Parameter-Space (APS) without preassumptions on the types of reflection symmetries. Quantitative evaluations and comparisons against state-of-the-art algorithms on a diverse 64-test-image set and 1,125 Swedish leaf-data images show a promising average detection rate of the proposed algorithm at 80 and 40 percent, respectively, and superior performance over existing reflection symmetry detection algorithms. Potential applications in computer vision, particularly biomedical imaging, include saliency detection from unsegmented images and quantification of deviations from normality. We make our 64-test-image set publicly available.",
"We present an algorithm for detecting multiple rotational symmetries in natural images. Given an image, its gradient magnitude field is computed, and information from the gradients is spread using a diffusion process in the form of a gradient vector flow (GVF) field. We construct a graph whose nodes correspond to pixels in tire image, connecting points that are likely to be rotated versions of one another The n-cycles present in tire graph are made to vote for C sub n symmetries, their votes being weighted by the errors in transformation between GVF in the neighborhood of the voting points, and the irregularity of the n-sided polygons formed by the voters. The votes are accumulated at tire centroids of possible rotational symmetries, generating a confidence map for each order of symmetry. We tested the method with several natural images.",
"A simple and fast reflectional symmetry detection algorithm has been developed in this paper. The algorithm employs only the original gray scale image and the gradient information of the image, and it is able to detect multiple reflectional symmetry axes of an object in the image. The directions of the symmetry axes are obtained from the gradient orientation histogram of the input gray scale image by using the Fourier method. Both synthetic and real images have been tested using the proposed algorithm.",
"A novel and efficient method is presented for grouping feature points on the basis of their underlying symmetry and characterising the symmetries present in an image. We show how symmetric pairs of features can be efficiently detected, how the symmetry bonding each pair is extracted and evaluated, and how these can be grouped into symmetric constellations that specify the dominant symmetries present in the image. Symmetries over all orientations and radii are considered simultaneously, and the method is able to detect local or global symmetries, locate symmetric figures in complex backgrounds, detect bilateral or rotational symmetry, and detect multiple incidences of symmetry.",
"Symmetry detection is important in the area of computer vision. A 3D symmetry detection algorithm is presented in this paper. The symmetry detection problem is converted to the correlation of the Gaussian image. Once the Gaussian image of the object has been obtained, the algorithm is independent of the input format. The algorithm can handle different kinds of images or objects. Simulated and real images have been tested in a variety of formats, and the results show that the symmetry can be determined using the Gaussian image.",
"We present an algorithm that detects rotational and reflectional symmetries of two-dimensional objects. Both symmetry types are effectively detected and analyzed using the angular correlation (AC), which measures the correlation between images in the angular direction. The AC is accurately computed using the pseudopolar Fourier transform, which rapidly computes the Fourier transform of an image on a near-polar grid. We prove that the AC of symmetric images is a periodic signal whose frequency is related to the order of the symmetry. This frequency is recovered via spectrum estimation, which is a proven technique in signal processing with a variety of efficient solutions. We also provide a novel approach for finding the center of symmetry and demonstrate the applicability of our scheme to the analysis of real images",
"We present a novel and effective algorithm for affinely skewed rotation symmetry group detection from real-world images. We define a complete skewed rotation symmetry detection problem as discovering five independent properties of a skewed rotation symmetry group: 1) the center of rotation, 2) the affine deformation, 3) the type of the symmetry group, 4) the cardinality of the symmetry group, and 5) the supporting region of the symmetry group in the image. We propose a frieze-expansion (FE) method that transforms rotation symmetry group detection into a simple, 1D translation symmetry detection problem. We define and construct a pair of rotational symmetry saliency maps, complemented by a local feature method. Frequency analysis, using Discrete Fourier Transform (DFT), is applied to the frieze-expansion patterns (FEPs) to uncover the types (cyclic, dihedral, and O(2)), the cardinalities, and the corresponding supporting regions, concentric or otherwise, of multiple rotation symmetry groups in an image. The phase information of the FEP is used to rectify affinely skewed rotation symmetry groups. Our result advances the state of the art in symmetry detection by offering a unique combination of region-based, feature-based, and frequency-based approaches. Experimental results on 170 synthetic and natural images demonstrate superior performance of our rotation symmetry detection algorithm over existing methods."
]
} |
1710.04623 | 2763435510 | Planar ornaments, a.k.a. wallpapers, are regular repetitive patterns which exhibit translational symmetry in two independent directions. There are exactly 17 distinct planar symmetry groups. We present a fully automatic method for complete analysis of planar ornaments in 13 of these groups, specifically, the groups called p6, p6m, p4g, p4m, p4, p31m, p3m, p3, cmm, pgg, pg, p2 and p1. Given the image of an ornament fragment, we present a method to simultaneously classify the input into one of the 13 groups and extract the so-called fundamental domain, the minimum region that is sufficient to reconstruct the entire ornament. A nice feature of our method is that even when the given ornament image is a small portion such that it does not contain multiple translational units, the symmetry group as well as the fundamental domain can still be defined. This is because, in contrast to common approach, we do not attempt to first identify a global translational repetition lattice. Though the presented constructions work for quite a wide range of ornament patterns, a key assumption we make is that the perceivable motifs (shapes that repeat) alone do not provide clues for the underlying symmetries of the ornament. In this sense, our main target is the planar arrangements of asymmetric interlocking shapes, as in the symmetry art of Escher. | In analyzing planar periodic patterns, translational symmetry, the most primitive repetition operation, is encountered in several works on recurring structure discovery @cite_8 @cite_9 @cite_21 @cite_6 @cite_15. The general flow of such works is to detect visual words and cluster them based on their appearance and spatial layout. Among these works, @cite_9 @cite_21 @cite_6 further perform image retrieval based on the discovered recurring structures. In @cite_9 @cite_21, instead of directly using the recurring structures for image matching, the authors first detect a translational repetition lattice of the image.
There can be multiple lattices for an image. Thus, given a query image with various detected lattices, they search a database for images with equivalent lattices. In each search, the matching score between two lattices is a product of two measurements: the similarity of the grayscale mean of the representative unit cell and the similarity of the color histograms. In @cite_2 @cite_16, detection of a deformed lattice in a given pattern is proposed. They first propose a seed lattice from detected interest points. Using those interest points, commonly occurring lattice vectors are extracted. Subsequently, the seed lattice is refined and grown outward until it covers the whole pattern. | {
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_21",
"@cite_6",
"@cite_2",
"@cite_15",
"@cite_16"
],
"mid": [
"2103442564",
"1536180355",
"2097595043",
"2013270301",
"1537492940",
"2108451171",
"2131394160"
],
"abstract": [
"We propose a novel unsupervised method for discovering recurring patterns from a single view. A key contribution of our approach is the formulation and validation of a joint assignment optimization problem where multiple visual words and object instances of a potential recurring pattern are considered simultaneously. The optimization is achieved by a greedy randomized adaptive search procedure (GRASP) with moves specifically designed for fast convergence. We have quantified systematically the performance of our approach under stressed conditions of the input (missing features, geometric distortions). We demonstrate that our proposed algorithm outperforms state of the art methods for recurring pattern discovery on a diverse set of 400+ real world and synthesized test images.",
"The chromatic polynomial @math gives the number of @math -colourings of a graph. If @math , then the graph @math is said to have a chromatic factorisation with chromatic factors @math and @math . It is known that the chromatic polynomial of any clique-separable graph has a chromatic factorisation. In this paper we construct an infinite family of graphs that have chromatic factorisations, but have chromatic polynomials that are not the chromatic polynomial of any clique-separable graph. A certificate of factorisation, that is, a sequence of rewritings based on identities for the chromatic polynomial, is given that explains the chromatic factorisations of graphs from this family. We show that the graphs in this infinite family are the only graphs that have a chromatic factorisation satisfying this certificate and having the odd cycle @math , @math , as a chromatic factor.",
"Detection of repetitive patterns in images has been studied for a long time in computer vision. This paper discusses a method for representing a lattice or line pattern by shift-invariant descriptor of the repeating element. The descriptor overcomes shift ambiguity and can be matched between different a views. The pattern matching is then demonstrated in retrieval experiment, where different images of the same buildings are retrieved solely by repetitive patterns.",
"Repeated structures such as building facades, fences or road markings often represent a significant challenge for place recognition. Repeated structures are notoriously hard for establishing correspondences using multi-view geometry. Even more importantly, they violate the feature independence assumed in the bag-of-visual-words representation which often leads to over-counting evidence and significant degradation of retrieval performance. In this work we show that repeated structures are not a nuisance but, when appropriately represented, they form an important distinguishing feature for many places. We describe a representation of repeated structures suitable for scalable retrieval. It is based on robust detection of repeated image structures and a simple modification of weights in the bag-of-visual-word model. Place recognition results are shown on datasets of street-level imagery from Pittsburgh and San Francisco demonstrating significant gains in recognition performance compared to the standard bag-of-visual-words baseline and more recently proposed burstiness weighting.",
"We introduce a novel framework for automatic detection of repeated patterns in real images. The novelty of our work is to formulate the extraction of an underlying deformed lattice as a spatial, multi-target tracking problem using a new and efficient Mean-Shift Belief Propagation (MSBP) method. Compared to existing work, our approach has multiple advantages, including: 1) incorporating higher order constraints early-on to propose highly plausible lattice points; 2) growing a lattice in multiple directions simultaneously instead of one at a time sequentially; and 3) achieving more efficient and more accurate performance than state-of-the-art algorithms. These advantages are demonstrated by quantitative experimental results on a diverse set of real world photos.",
"Structural semantics are fundamental to understanding both natural and man-made objects from languages to buildings. They are manifested as repeated structures or patterns and are often captured in images. Finding repeated patterns in images, therefore, has important applications in scene understanding, 3D reconstruction, and image retrieval as well as image compression. Previous approaches in visual-pattern mining limited themselves by looking for frequently co-occurring features within a small neighborhood in an image. However, semantics of a visual pattern are typically defined by specific spatial relationships between features regardless of the spatial proximity. In this paper, semantics are represented as visual elements and geometric relationships between them. A novel unsupervised learning algorithm finds pair-wise associations of visual elements that have consistent geometric relationships sufficiently often. The algorithms are efficient - maximal matchings are determined without combinatorial search. High-order structural semantics are extracted by mining patterns that are composed of pairwise spatially consistent associations of visual elements. We demonstrate the effectiveness of our approach for discovering repeated visual patterns on a variety of image collections.",
"We propose a novel and robust computational framework for automatic detection of deformed 2D wallpaper patterns in real-world images. The theory of 2D crystallographic groups provides a sound and natural correspondence between the underlying lattice of a deformed wallpaper pattern and a degree-4 graphical model. We start the discovery process with unsupervised clustering of interest points and voting for consistent lattice unit proposals. The proposed lattice basis vectors and pattern element contribute to the pairwise compatibility and joint compatibility (observation model) functions in a Markov random field (MRF). Thus, we formulate the 2D lattice detection as a spatial, multitarget tracking problem, solved within an MRF framework using a novel and efficient mean-shift belief propagation (MSBP) method. Iterative detection and growth of the deformed lattice are interleaved with regularized thin-plate spline (TPS) warping, which rectifies the current deformed lattice into a regular one to ensure stability of the MRF model in the next round of lattice recovery. We provide quantitative comparisons of our proposed method with existing algorithms on a diverse set of 261 real-world photos to demonstrate significant advances in accuracy and speed over the state of the art in automatic discovery of regularity in real images."
]
} |
1710.04623 | 2763435510 | Planar ornaments, a.k.a. wallpapers, are regular repetitive patterns which exhibit translational symmetry in two independent directions. There are exactly 17 distinct planar symmetry groups. We present a fully automatic method for complete analysis of planar ornaments in 13 of these groups, specifically, the groups called p6, p6m, p4g, p4m, p4, p31m, p3m, p3, cmm, pgg, pg, p2 and p1. Given the image of an ornament fragment, we present a method to simultaneously classify the input into one of the 13 groups and extract the so-called fundamental domain, the minimum region that is sufficient to reconstruct the entire ornament. A nice feature of our method is that even when the given ornament image is a small portion such that it does not contain multiple translational units, the symmetry group as well as the fundamental domain can still be defined. This is because, in contrast to common approach, we do not attempt to first identify a global translational repetition lattice. Though the presented constructions work for quite a wide range of ornament patterns, a key assumption we make is that the perceivable motifs (shapes that repeat) alone do not provide clues for the underlying symmetries of the ornament. In this sense, our main target is the planar arrangements of asymmetric interlocking shapes, as in the symmetry art of Escher. | Recently, @cite_11 combined lattice extraction and the point symmetry groups of individual motifs to analyse Islamic patterns in mosaics. This method specifically targets Islamic ornaments in which motifs such as n-stars typically provide clues to the underlying plane symmetry group. As such, it is not readily applicable if motifs cannot be robustly extracted or if motifs do not reflect the symmetries. In @cite_41, rotation groups are detected to analyze Islamic rosette patterns. | {
"cite_N": [
"@cite_0"
],
"mid": [
"1993992624"
],
"abstract": [
"The analysis of regular texture images is cast in a model comparison framework. Texel lattice hypotheses are used to define statistical models which are compared in terms of their ability to explain the images. This approach is used to estimate lattice geometry from patterns that exhibit translational symmetry (regular textures). It is also used to determine whether images consist of such regular textures. A method based on this approach is described in which lattice hypotheses are generated using analysis of peaks in the image autocorrelation function, statistical models are based on Gaussian or Gaussian mixture clusters, and model comparison is performed using the marginal likelihood as approximated by the Bayes Information Criterion (BIC). Experiments on public domain images and a commercial textile image archive demonstrate substantially improved accuracy compared to several alternative methods."
]
} |
1710.04623 | 2763435510 | Planar ornaments, a.k.a. wallpapers, are regular repetitive patterns which exhibit translational symmetry in two independent directions. There are exactly 17 distinct planar symmetry groups. We present a fully automatic method for complete analysis of planar ornaments in 13 of these groups, specifically, the groups called p6, p6m, p4g, p4m, p4, p31m, p3m, p3, cmm, pgg, pg, p2 and p1. Given the image of an ornament fragment, we present a method to simultaneously classify the input into one of the 13 groups and extract the so-called fundamental domain, the minimum region that is sufficient to reconstruct the entire ornament. A nice feature of our method is that even when the given ornament image is a small portion such that it does not contain multiple translational units, the symmetry group as well as the fundamental domain can still be defined. This is because, in contrast to common approach, we do not attempt to first identify a global translational repetition lattice. Though the presented constructions work for quite a wide range of ornament patterns, a key assumption we make is that the perceivable motifs (shapes that repeat) alone do not provide clues for the underlying symmetries of the ornament. In this sense, our main target is the planar arrangements of asymmetric interlocking shapes, as in the symmetry art of Escher. | In @cite_0, for translational symmetry, model-based lattice estimation is performed, where model comparison for hypotheses generated via peaks of the autocorrelation is implemented using an approximate marginal likelihood. | {
"cite_N": [
"@cite_41",
"@cite_11"
],
"mid": [
"1943173819",
"2075487985"
],
"abstract": [
"Abstract The purpose of this paper is to detect and characterize geometric rosettes which are among many different star-like motifs used in the most common Islamic Geometric Patterns. Based on symmetry group theory, a geometric rosette can be characterized by its center; order and group of symmetry. It is easy to observe that a rosette is formed by a central star surrounded by two types of shapes (mid-cells and outer-cells) called “ furmah” by Moroccan artisans. The center of symmetry is the center of the concentric circles circum-circling respectively the central star, mid-cells and outer-cells. The order of the rosette is connected with the number of mid-cells. To detect the center of symmetry in a rosette image, we propose an original method based on genetic algorithm that extracts the circle passing by the maximum pixels belonging to the binary rosette image. The center and the radius of the detected circle correspond respectively to the symmetry center of the rosette and the radius of its supporting concentric region. To determine the type and the order of symmetry, Frequency analysis using Discrete Fourier Transform is applied to the Frieze-expansion pattern. This later is obtained by applying the Frieze-Expansion technique to the extracted supporting concentric region. Experimental results prove that the proposed algorithm for rotational symmetry detection is rather successful because it manages to localize the real center of symmetry of any rosette pattern. The key point of this algorithm lies at the same time in its simplicity and its efficiency.",
"New method to analyse mosaics based on the mathematical principles of Symmetry Groups.The method includes a higher level of knowledge based on objects.Extraction of objects and their main features of patterns with a Wallpaper Group (WG).Classification of objects according to their shape and obtaining their isometries.The extraction of the WG of the pattern using the relationships between objects. This article presents a new method for analysing mosaics based on the mathematical principles of Symmetry Groups. This method has been developed to get the understanding present in patterns by extracting the objects that form them, their lattice, and the Wallpaper Group. The main novelty of this method resides in the creation of a higher level of knowledge based on objects, which makes it possible to classify the objects, to extract their main features (Point Group, principal axes, etc.), and the relationships between them. In order to validate the method, several tests were carried out on a set of Islamic Geometric Patterns from different sources, for which the Wallpaper Group has been successfully obtained in 85 of the cases. This method can be applied to any kind of pattern that presents a Wallpaper Group. Possible applications of this computational method include pattern classification, cataloguing of ceramic coatings, creating databases of decorative patterns, creating pattern designs, pattern comparison between different cultures, tile cataloguing, and so on."
]
} |
1710.04623 | 2763435510 | Planar ornaments, a.k.a. wallpapers, are regular repetitive patterns which exhibit translational symmetry in two independent directions. There are exactly 17 distinct planar symmetry groups. We present a fully automatic method for complete analysis of planar ornaments in 13 of these groups, specifically, the groups called p6, p6m, p4g, p4m, p4, p31m, p3m, p3, cmm, pgg, pg, p2 and p1. Given the image of an ornament fragment, we present a method to simultaneously classify the input into one of the 13 groups and extract the so-called fundamental domain, the minimum region that is sufficient to reconstruct the entire ornament. A nice feature of our method is that even when the given ornament image is a small portion such that it does not contain multiple translational units, the symmetry group as well as the fundamental domain can still be defined. This is because, in contrast to common approach, we do not attempt to first identify a global translational repetition lattice. Though the presented constructions work for quite a wide range of ornament patterns, a key assumption we make is that the perceivable motifs (shapes that repeat) alone do not provide clues for the underlying symmetries of the ornament. In this sense, our main target is the planar arrangements of asymmetric interlocking shapes, as in the symmetry art of Escher. | It is also possible to perform a continuous characterization of the ornament by comparing ornament images. This is for example encountered in @cite_29, where ornament images are classified according to a symmetry feature vector calculated based on a prior lattice extraction and yes/no questions; for lattice detection, they used the method in @cite_32. In @cite_36, ornament images are directly compared in a transformed domain after applying a global transformation.
Note that among the works addressing planar patterns, there are also several interesting works on pattern synthesis, including how to generate an ornament in a certain symmetry group, how to use a given motif to tile the plane in a certain style, or how to map a given wallpaper pattern to a curved surface @cite_4 @cite_5. | {
"cite_N": [
"@cite_4",
"@cite_36",
"@cite_29",
"@cite_32",
"@cite_5"
],
"mid": [
"",
"2056159211",
"2177024330",
"49546298",
"1904718371"
],
"abstract": [
"",
"From art to science, ornaments constructed by repeating a base motif (tiling) have been a part of human culture. These ornaments exhibit various kinds of symmetries depending on the construction process as well as the symmetries of the base motif. The scientific study of the ornaments is the study of symmetry, i.e., the repetition structure. There is, however, an artistic side of the problem too: intriguing color permutations, clever choices of asymmetric interlocking forms, several symmetry breaking ideas, all that come with the artistic freedom. In this paper, in the context of Escher's Euclidean ornaments, we study ornaments without reference to fixed symmetry groups. We search for emergent categorical relations among a collection of tiles. We explore how these relations are affected when new tiles are inserted to the collection. We ask and answer whether it is possible to code symmetry group information implicitly without explicitly extracting the repetition structure, grids and motifs.",
"The interest shown recently in the algorithmic treatment of symmetries, also known as Computational Symmetry, covers several application areas among which textile and tiles design are some of the most significant. Designers make new creations based on the symmetric repetition of motifs that they find in databases of wallpaper images. The existing methods for dealing with these images have several drawbacks because the use of heuristics achieves low recovery rates when images exhibit imperfections due to the fabrication or the handmade process.",
"A system and apparatus for visually cuing a human subject wherein an incandescent lamp or lamps (40, 41) are energizable to provide a source of visible illumination. These incandescent lamps have a predetermined rise and fall characteristic. An observation surface (12) is positioned for confronting illumination from the lamps (40, 41) including a pattern through which filtered light of a first intensity may pass, which pattern is configured for visually conveying information to the subject. The pattern is surrounded by a region (54) opaque to illumination, that region extending to a periphery (55). A peripheral surface arrangement (14, 16) is provided extending from the observation surface (12) periphery and is positioned for confronting and transmitting illumination from the lamps (40, 41) at a second intensity selected as greater than the first intensity. Lamps (40, 41) are illuminated to effect the transmission of illumination through the peripheral surface and observation surface in an intermittent fashion at a frequency selected to provide visual stimuli of predetermined temporal pause, p, to evoke a gamma effect with respect to human visual perception of the pattern.",
"In this article we outline a method that automatically transforms an Euclidean ornament into a hyperbolic one. The necessary steps are pattern recognition, symmetry detection, extraction of a Euclidean fundamental region, conformal deformation to a hyperbolic fundamental region and tessellation of the hyperbolic plane with this patch. Each of these steps has its own mathematical subtleties that are discussed in this article. In particular, it is discussed which hyperbolic symmetry groups are suitable generalizations of Euclidean wallpaper groups. Furthermore it is shown how one can take advantage of methods from discrete differential geometry in order to perform the conformal deformation of the fundamental region. Finally it is demonstrated how a reverse pixel lookup strategy can be used to obtain hyperbolic images with optimal resolution."
]
} |
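The related work above relies on extracting a translational repetition lattice (e.g. @cite_32), the very step the paper itself deliberately avoids. As a toy illustration of what lattice extraction computes, the sketch below finds the smallest horizontal and vertical translation periods of an exact 2-D pattern; real methods must cope with noisy photographs, and all names here are illustrative, not any cited paper's implementation:

```python
def smallest_period(seq):
    """Smallest p dividing len(seq) such that seq repeats with period p."""
    n = len(seq)
    for p in range(1, n + 1):
        if n % p == 0 and all(seq[i] == seq[i - p] for i in range(p, n)):
            return p
    return n

def lattice_periods(grid):
    """(horizontal, vertical) translation periods of an exact 2-D pattern.

    The horizontal period is the period of the sequence of columns;
    the vertical period is that of the sequence of rows.
    """
    rows = [tuple(r) for r in grid]
    cols = [tuple(c) for c in zip(*grid)]
    return smallest_period(cols), smallest_period(rows)
```

On a 4x4 grid whose rows and columns alternate with period two, `lattice_periods` returns `(2, 2)`; an aperiodic fragment simply reports its full extent, which is why methods like the paper's must work without assuming multiple translational units are visible.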
1710.04805 | 2963940643 | Games with large branching factors pose a significant challenge for game tree search algorithms. In this paper, we address this problem with a sampling strategy for Monte Carlo Tree Search (MCTS) algorithms called naïve sampling, based on a variant of the Multi-armed Bandit problem called Combinatorial Multi-armed Bandits (CMAB). We analyze the theoretical properties of several variants of naïve sampling, and empirically compare it against the other existing strategies in the literature for CMABs. We then evaluate these strategies in the context of real-time strategy (RTS) games, a genre of computer games characterized by their very large branching factors. Our results show that as the branching factor grows, naïve sampling outperforms the other sampling strategies. | Since the first call for research on RTS game AI by buro2003rts , a wide range of AI techniques have been explored to play RTS games. For example, reinforcement learning (RL) has been used for controlling individual units @cite_4 @cite_32 , groups of units @cite_5 @cite_13 , and even to make high-level decisions in RTS games @cite_16 . The main issue when deploying RL in RTS games is computational complexity, as the state and action spaces are very large. The aforementioned techniques address these problems by either focusing on individual units, small-scale combat, or by using domain knowledge to abstract the game state in order to simplify the problem. Although recent approaches are starting to scale up to larger and larger combat situations @cite_13 by using techniques such as deep reinforcement learning, they are still far from scaling all the way up to full-game play. | {
"cite_N": [
"@cite_4",
"@cite_32",
"@cite_5",
"@cite_16",
"@cite_13"
],
"mid": [
"",
"149149418",
"2026615874",
"2166798247",
"2518713116"
],
"abstract": [
"",
"We present CLASSQ-L (for: class Q-learning) an application of the Q-learning reinforcement learning algorithm to play complete Wargus games. Wargus is a real-time strategy game where players control armies consisting of units of different classes (e.g., archers, knights). CLASSQ-L uses a single table for each class of unit so that each unit is controlled and updates its class’ Q-table. This enables rapid learning as in Wargus there are many units of the same class. We present initial results of CLASSQ-L against a variety of opponents.",
"This paper presents an evaluation of the suitability of reinforcement learning (RL) algorithms to perform the task of micro-managing combat units in the commercial real-time strategy (RTS) game StarCraft:Broodwar (SC:BW). The applied techniques are variations of the common Q-learning and Sarsa algorithms, both simple one-step versions as well as more sophisticated versions that use eligibility traces to offset the problem of delayed reward. The aim is the design of an agent that is able to learn in an unsupervised manner in a complex environment, eventually taking over tasks that had previously been performed by non-adaptive, deterministic game AI. The preliminary results presented in this paper show the viability of the RL algorithms at learning the selected task. Depending on whether the focus lies on maximizing the reward or on the speed of learning, among the evaluated algorithms one-step Q-learning and Sarsa(λ) prove best at learning to manage combat units.",
"The goal of transfer learning is to use the knowledge acquired in a set of source tasks to improve performance in a related but previously unseen target task. In this paper, we present a multilayered architecture named CAse-Based Reinforcement Learner (CARL). It uses a novel combination of Case-Based Reasoning (CBR) and Reinforcement Learning (RL) to achieve transfer while playing against the Game AI across a variety of scenarios in MadRTSTM, a commercial Real Time Strategy game. Our experiments demonstrate that CARL not only performs well on individual tasks but also exhibits significant performance gains when allowed to transfer knowledge from previous tasks.",
"We consider scenarios from the real-time strategy game StarCraft as new benchmarks for reinforcement learning algorithms. We propose micromanagement tasks, which present the problem of the short-term, low-level control of army members during a battle. From a reinforcement learning point of view, these scenarios are challenging because the state-action space is very large, and because there is no obvious feature representation for the state-action evaluation function. We describe our approach to tackle the micromanagement scenarios with deep neural network controllers from raw state features given by the game engine. In addition, we present a heuristic reinforcement learning algorithm which combines direct exploration in the policy space and backpropagation. This algorithm allows for the collection of traces for learning using deterministic policies, which appears much more efficient than, for example, -greedy exploration. Experiments show that with this algorithm, we successfully learn non-trivial strategies for scenarios with armies of up to 15 agents, where both Q-learning and REINFORCE struggle."
]
} |
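The naïve sampling strategy summarized above treats each component of a combinatorial action as its own local multi-armed bandit, under the "naïve" assumption that the global reward roughly decomposes per component. A minimal sketch under that assumption; the `reward` callback, the epsilon-greedy schedule, and all names are illustrative choices, not the paper's exact algorithm:

```python
import random

def naive_sampling(n_components, arms_per_component, reward,
                   iters=1000, eps=0.3, seed=0):
    """Minimal sketch of naive sampling for a Combinatorial Multi-armed Bandit.

    `reward(action)` is a caller-supplied callback returning a scalar for a
    tuple of per-component arm indices. The naive assumption lets us keep
    one small value table per component instead of one per combined action.
    """
    rng = random.Random(seed)
    # per-component running value estimates and pull counts (the local MABs)
    value = [[0.0] * arms_per_component for _ in range(n_components)]
    count = [[0] * arms_per_component for _ in range(n_components)]
    best_action, best_reward = None, float("-inf")
    for _ in range(iters):
        if best_action is None or rng.random() < eps:
            # explore: sample each component's arm independently
            action = tuple(rng.randrange(arms_per_component)
                           for _ in range(n_components))
        else:
            # exploit: per-component greedy choice over local value estimates
            action = tuple(max(range(arms_per_component),
                               key=lambda a, i=i: value[i][a])
                           for i in range(n_components))
        r = reward(action)
        for i, a in enumerate(action):
            count[i][a] += 1
            value[i][a] += (r - value[i][a]) / count[i][a]  # incremental mean
        if r > best_reward:
            best_action, best_reward = action, r
    return best_action
```

With a reward that decomposes exactly over components, the per-component estimates converge to the right ranking; when it does not decompose, naïve sampling still acts as a useful bias, which is what the paper's theoretical analysis examines.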
1710.04826 | 2758003831 | The requirement for large amounts of annotated training data has become a common constraint on various deep learning systems. In this paper, we propose a weakly supervised scene text detection method (WeText) that trains robust and accurate scene text detection models by learning from unannotated or weakly annotated data. With a "light" supervised model trained on a small fully annotated dataset, we explore semi-supervised and weakly supervised learning on a large unannotated dataset and a large weakly annotated dataset, respectively. For the unsupervised learning, the light supervised model is applied to the unannotated dataset to search for more character training samples, which are further combined with the small annotated dataset to retrain a superior character detection model. For the weakly supervised learning, the character searching is guided by high-level annotations of words/text lines that are widely available and also much easier to prepare. In addition, we design a unified scene character detector by adapting regression based deep networks, which greatly relieves the error accumulation issue that widely exists in most traditional approaches. Extensive experiments across different unannotated and weakly annotated datasets show that the scene text detection performance can be clearly boosted under both scenarios, where the weakly supervised learning can achieve the state-of-the-art performance by using only 229 fully annotated scene text images. | Most existing text detection methods can be broadly classified into two categories, namely, character detection based and word detection based. The character detection based methods usually first detect multiple character candidates using various techniques, including sliding windows @cite_36 @cite_40 @cite_10 , MSERs @cite_2 @cite_43 @cite_16 @cite_39 @cite_45 @cite_31 , as well as some carefully designed stroke detectors @cite_35 @cite_22 @cite_12 @cite_30 .
The detected character candidates are filtered by a text/non-text classifier to remove false candidates. Finally, the identified characters are grouped into words/text lines by either heuristic rules @cite_44 @cite_45 @cite_31 or sophisticated clustering/grouping models @cite_32 @cite_10 . Though the initial character candidate detection can achieve very high recall, the current approach involving multiple sequential steps accumulates errors, which often degrades the final performance greatly. In particular, the intermediate text/non-text classification step requires a large amount of annotated character images, which are very time consuming and costly to prepare. | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_22",
"@cite_36",
"@cite_32",
"@cite_39",
"@cite_44",
"@cite_43",
"@cite_40",
"@cite_45",
"@cite_2",
"@cite_31",
"@cite_16",
"@cite_10",
"@cite_12"
],
"mid": [
"",
"2142159465",
"",
"2131673214",
"2131163834",
"",
"117491841",
"",
"",
"",
"2217433794",
"",
"",
"",
""
],
"abstract": [
"",
"We present a novel image operator that seeks to find the value of stroke width for each image pixel, and demonstrate its use on the task of text detection in natural images. The suggested operator is local and data dependent, which makes it fast and robust enough to eliminate the need for multi-scale computation or scanning windows. Extensive testing shows that the suggested scheme outperforms the latest published algorithms. Its simplicity allows the algorithm to detect texts in many fonts and languages.",
"",
"This paper gives an algorithm for detecting and reading text in natural images. The algorithm is intended for use by blind and visually impaired subjects walking through city scenes. We first obtain a dataset of city images taken by blind and normally sighted subjects. From this dataset, we manually label and extract the text regions. Next we perform statistical analysis of the text regions to determine which image features are reliable indicators of text and have low entropy (i.e. feature response is similar for all text images). We obtain weak classifiers by using joint probabilities for feature responses on and off text. These weak classifiers are used as input to an AdaBoost machine learning algorithm to train a strong classifier. In practice, we trained a cascade with 4 strong classifiers containing 79 features. An adaptive binarization and extension algorithm is applied to those regions selected by the cascade classifier. Commercial OCR software is used to read the text or reject it as a non-text region. The overall algorithm has a success rate of over 90 (evaluated by complete detection and reading of the text) on the test set and the unread text is typically small and distant from the viewer.",
"Text detection and localization in natural scene images is important for content-based image analysis. This problem is challenging due to the complex background, the non-uniform illumination, the variations of text font, size and line orientation. In this paper, we present a hybrid approach to robustly detect and localize texts in natural scene images. A text region detector is designed to estimate the text existing confidence and scale information in image pyramid, which help segment candidate text components by local binarization. To efficiently filter out the non-text components, a conditional random field (CRF) model considering unary component properties and binary contextual component relationships with supervised parameter learning is proposed. Finally, text components are grouped into text lines words with a learning-based energy minimization method. Since all the three stages are learning-based, there are very few parameters requiring manual tuning. Experimental results evaluated on the ICDAR 2005 competition dataset show that our approach yields higher precision and recall performance compared with state-of-the-art methods. We also evaluated our approach on a multilingual image dataset with promising results.",
"",
"Maximally Stable Extremal Regions (MSERs) have achieved great success in scene text detection. However, this low-level pixel operation inherently limits its capability for handling complex text information efficiently (e. g. connections between text or background components), leading to the difficulty in distinguishing texts from background components. In this paper, we propose a novel framework to tackle this problem by leveraging the high capability of convolutional neural network (CNN). In contrast to recent methods using a set of low-level heuristic features, the CNN network is capable of learning high-level features to robustly identify text components from text-like outliers (e.g. bikes, windows, or leaves). Our approach takes advantages of both MSERs and sliding-window based methods. The MSERs operator dramatically reduces the number of windows scanned and enhances detection of the low-quality texts. While the sliding-window with CNN is applied to correctly separate the connections of multiple characters in components. The proposed system achieved strong robustness against a number of extreme text variations and serious real-world problems. It was evaluated on the ICDAR 2011 benchmark dataset, and achieved over 78 in F-measure, which is significantly higher than previous methods.",
"",
"",
"",
"Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature globally computed from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this paper, we present a new system for scene text detection by proposing a novel text-attentional convolutional neural network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text non-text classification. In addition, a powerful low-level detector called contrast-enhancement maximally stable extremal regions (MSERs) is developed, which extends the widely used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 data set, with an F-measure of 0.82, substantially improving the state-of-the-art results.",
"",
"",
"",
""
]
} |
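The character-based pipeline described in this row ends by grouping detected character boxes into words/text lines with heuristic rules. A toy sketch of such a heuristic; the thresholds and the union-find grouping are illustrative assumptions, not any cited paper's exact rules:

```python
def group_characters(boxes, max_gap_ratio=1.0, min_overlap=0.5):
    """Toy heuristic grouping of character boxes (x1, y1, x2, y2) into lines.

    Two boxes join the same line when their vertical overlap exceeds
    `min_overlap` of the smaller height and their horizontal gap is at most
    `max_gap_ratio` times their mean height.
    """
    def linked(a, b):
        ha, hb = a[3] - a[1], b[3] - b[1]
        overlap = min(a[3], b[3]) - max(a[1], b[1])
        gap = max(a[0], b[0]) - min(a[2], b[2])  # negative if boxes overlap
        return (overlap >= min_overlap * min(ha, hb)
                and gap <= max_gap_ratio * (ha + hb) / 2)

    # union-find over pairwise links, so transitive neighbours merge
    parent = list(range(len(boxes)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if linked(boxes[i], boxes[j]):
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(boxes)):
        groups.setdefault(find(i), []).append(i)
    return sorted(sorted(g) for g in groups.values())
```

Real systems learn or tune these relations; the point of the sketch is only that hand-set thresholds like these are exactly where heuristic pipelines accumulate errors, motivating the learned grouping models cited above.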
1710.04826 | 2758003831 | The requirement for large amounts of annotated training data has become a common constraint on various deep learning systems. In this paper, we propose a weakly supervised scene text detection method (WeText) that trains robust and accurate scene text detection models by learning from unannotated or weakly annotated data. With a "light" supervised model trained on a small fully annotated dataset, we explore semi-supervised and weakly supervised learning on a large unannotated dataset and a large weakly annotated dataset, respectively. For the unsupervised learning, the light supervised model is applied to the unannotated dataset to search for more character training samples, which are further combined with the small annotated dataset to retrain a superior character detection model. For the weakly supervised learning, the character searching is guided by high-level annotations of words/text lines that are widely available and also much easier to prepare. In addition, we design a unified scene character detector by adapting regression based deep networks, which greatly relieves the error accumulation issue that widely exists in most traditional approaches. Extensive experiments across different unannotated and weakly annotated datasets show that the scene text detection performance can be clearly boosted under both scenarios, where the weakly supervised learning can achieve the state-of-the-art performance by using only 229 fully annotated scene text images. | The methods in the second category instead detect words directly @cite_8 @cite_26 @cite_28 @cite_6 @cite_5 @cite_9 @cite_19 . In @cite_1 , object region proposals are employed to first detect multiple word candidates, which are then filtered by a random forest classifier, and the word bounding boxes are finally fine-tuned with Fast R-CNN @cite_37 . An Inception-RPN word proposal network is proposed in @cite_33 , which employs Faster R-CNN @cite_3 to improve word proposal accuracy.
Gupta et al. @cite_8 introduce a Fully-Convolutional Regression Network to jointly achieve text detection and bounding-box regression at multiple image scales. Tian et al. @cite_7 propose a Connectionist Text Proposal Network that combines a CNN with a long short-term memory (LSTM) architecture to detect text lines directly. The most recent TextBoxes approach @cite_27 designs an end-to-end trainable network to output the final word boxes directly, exploiting the state-of-the-art SSD object detector @cite_34 . Though the word detection approach is simpler, it does not work well with multi-oriented texts due to the constraints on word proposals. In addition, visually defining a word boundary may not be feasible for texts in many non-Latin languages such as Chinese. | {
"cite_N": [
"@cite_37",
"@cite_26",
"@cite_33",
"@cite_7",
"@cite_8",
"@cite_28",
"@cite_9",
"@cite_1",
"@cite_6",
"@cite_3",
"@cite_19",
"@cite_27",
"@cite_5",
"@cite_34"
],
"mid": [
"",
"",
"2395360388",
"2519818067",
"2952302849",
"",
"",
"1922126009",
"",
"2953106684",
"",
"2550687635",
"",
"2193145675"
],
"abstract": [
"",
"",
"In this paper, we develop a novel unified framework called DeepText for text region proposal generation and text detection in natural images via a fully convolutional neural network (CNN). First, we propose the inception region proposal network (Inception-RPN) and design a set of text characteristic prior bounding boxes to achieve high word recall with only hundred level candidate proposals. Next, we present a powerful textdetection network that embeds ambiguous text category (ATC) information and multilevel region-of-interest pooling (MLRP) for text and non-text classification and accurate localization. Finally, we apply an iterative bounding box voting scheme to pursue high recall in a complementary manner and introduce a filtering algorithm to retain the most suitable bounding box, while removing redundant inner and outer boxes for each text instance. Our approach achieves an F-measure of 0.83 and 0.85 on the ICDAR 2011 and 2013 robust text detection benchmarks, outperforming previous state-of-the-art results.",
"We propose a novel Connectionist Text Proposal Network (CTPN) that accurately localizes text lines in natural image. The CTPN detects a text line in a sequence of fine-scale text proposals directly in convolutional feature maps. We develop a vertical anchor mechanism that jointly predicts location and text non-text score of each fixed-width proposal, considerably improving localization accuracy. The sequential proposals are naturally connected by a recurrent neural network, which is seamlessly incorporated into the convolutional network, resulting in an end-to-end trainable model. This allows the CTPN to explore rich context information of image, making it powerful to detect extremely ambiguous text. The CTPN works reliably on multi-scale and multi-language text without further post-processing, departing from previous bottom-up methods requiring multi-step post filtering. It achieves 0.88 and 0.61 F-measure on the ICDAR 2013 and 2015 benchmarks, surpassing recent results [8, 35] by a large margin. The CTPN is computationally efficient with 0.14 s image, by using the very deep VGG16 model [27]. Online demo is available: http: textdet.com .",
"In this paper we introduce a new method for text detection in natural images. The method comprises two contributions: First, a fast and scalable engine to generate synthetic images of text in clutter. This engine overlays synthetic text to existing background images in a natural way, accounting for the local 3D scene geometry. Second, we use the synthetic images to train a Fully-Convolutional Regression Network (FCRN) which efficiently performs text detection and bounding-box regression at all locations and multiple scales in an image. We discuss the relation of FCRN to the recently-introduced YOLO detector, as well as other end-to-end object detection systems based on deep learning. The resulting detection network significantly out performs current methods for text detection in natural images, achieving an F-measure of 84.2 on the standard ICDAR 2013 benchmark. Furthermore, it can process 15 images per second on a GPU.",
"",
"",
"In this work we present an end-to-end system for text spotting--localising and recognising text in natural scene images--and text based image retrieval. This system is based on a region proposal mechanism for detection and deep convolutional neural networks for recognition. Our pipeline uses a novel combination of complementary proposal generation techniques to ensure high recall, and a fast subsequent filtering stage for improving precision. For the recognition and ranking of proposals, we train very large convolutional neural networks to perform word recognition on the whole proposal region at the same time, departing from the character classifier based systems of the past. These networks are trained solely on data produced by a synthetic text generation engine, requiring no human labelled data. Analysing the stages of our pipeline, we show state-of-the-art performance throughout. We perform rigorous experiments across a number of standard end-to-end text spotting benchmarks and text-based image retrieval datasets, showing a large improvement over all previous methods. Finally, we demonstrate a real-world application of our text spotting system to allow thousands of hours of news footage to be instantly searchable via a text query.",
"",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.",
"",
"This paper presents an end-to-end trainable fast scene text detector, named TextBoxes, which detects scene text with both high accuracy and efficiency in a single network forward pass, involving no post-process except for a standard non-maximum suppression. TextBoxes outperforms competing methods in terms of text localization accuracy and is much faster, taking only 0.09s per image in a fast implementation. Furthermore, combined with a text recognizer, TextBoxes significantly outperforms state-of-the-art approaches on word spotting and end-to-end text recognition tasks.",
"",
"We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd."
]
} |
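Several of the word detectors surveyed in this row (e.g. SSD and TextBoxes in the quoted abstracts) keep only a standard greedy non-maximum suppression as post-processing over the predicted boxes. A minimal sketch of that step (boxes are `(x1, y1, x2, y2)` tuples; names are illustrative):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression; returns indices of the kept boxes.

    Boxes are visited in descending score order; a box is kept only if it
    overlaps every already-kept box by less than `thresh` IoU.
    """
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep
```

For word detection the IoU threshold matters more than usual: adjacent words legitimately sit close together, which is one reason word-box methods struggle with multi-oriented and non-Latin text as noted above.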
1710.04835 | 2736566689 | In this paper, we propose a method for cloud removal from visible light RGB satellite images by extending conditional Generative Adversarial Networks (cGANs) from RGB images to multispectral images. Satellite images have been widely utilized for various purposes, such as natural environment monitoring (pollution, forest or rivers), transportation improvement and prompt emergency response to disasters. However, the obscurity caused by clouds makes it unreliable to monitor the situation on the ground with the visible light camera. Images captured at longer wavelengths are introduced to reduce the effects of clouds. Synthetic Aperture Radar (SAR) is such an example that improves visibility even when clouds exist. On the other hand, the spatial resolution decreases as the wavelength increases. Furthermore, the images captured at long wavelengths differ considerably from those captured by visible light in terms of their appearance. Therefore, we propose a network that can remove clouds and generate visible light images from the multispectral images taken as inputs. This is achieved by extending the input channels of cGANs to be compatible with multispectral images. The networks are trained to output images that are close to the ground truth using images synthesized with clouds over the ground truth as inputs. In the available dataset, the proportion of images of the forest or the sea is very high, which will introduce bias in the training dataset if uniformly sampled from the original dataset. Thus, we utilize t-Distributed Stochastic Neighbor Embedding (t-SNE) to alleviate the problem of bias in the training dataset. Finally, we confirm the feasibility of the proposed network on a dataset of four-band images, which include three visible light bands and one near-infrared (NIR) band. | Generative Adversarial Networks (GANs) @cite_10 are the most relevant to our work.
GANs consist of two types of networks, a Generator and a Discriminator. The Generator is trained to generate images that the Discriminator cannot distinguish from the ground truth, while the Discriminator is trained to distinguish images generated by the Generator from the ground truth. The conditional version of GANs was also proposed in @cite_1 . However, training with GANs is unstable. To increase stability, Deep Convolutional Generative Adversarial Networks (DCGANs) @cite_14 are proposed, which introduce convolutional networks and Batch Normalization. | {
"cite_N": [
"@cite_1",
"@cite_14",
"@cite_10"
],
"mid": [
"2125389028",
"2173520492",
""
],
"abstract": [
"Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.",
"In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.",
""
]
} |
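The record above describes extending a cGAN's input channels from RGB to multispectral images (three visible bands plus NIR). A minimal sketch of that channel bookkeeping, under assumed conventions: images are nested `[channel][row][col]` lists, and `make_generator_input` is a hypothetical helper, not code from the paper.

```python
# Sketch (assumption, not the paper's implementation): extending a cGAN
# generator from RGB to multispectral input amounts to concatenating the
# extra bands along the channel axis, i.e. widening the first layer's
# input channels.

def make_generator_input(rgb, extra_bands):
    """Concatenate visible-light bands with extra bands (e.g. NIR)
    along the channel axis for a multispectral cGAN generator."""
    # Rough sanity check: all bands must share spatial dimensions.
    if any(len(band) != len(rgb[0]) for band in extra_bands):
        raise ValueError("all bands must share spatial dimensions")
    return rgb + extra_bands  # channel-wise concatenation

rgb = [[[0.1, 0.2], [0.3, 0.4]]] * 3   # 3 visible channels, 2x2 pixels
nir = [[[0.5, 0.6], [0.7, 0.8]]]       # 1 near-infrared channel
x = make_generator_input(rgb, nir)
print(len(x))  # 4 input channels for the extended cGAN
```

In a real network this corresponds to setting the first convolution's `in_channels` to 4 instead of 3; everything downstream is unchanged.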
1710.04835 | 2736566689 | In this paper, we propose a method for cloud removal from visible light RGB satellite images by extending the conditional Generative Adversarial Networks (cGANs) from RGB images to multispectral images. Satellite images have been widely utilized for various purposes, such as natural environment monitoring (pollution, forest or rivers), transportation improvement and prompt emergency response to disasters. However, the obscurity caused by clouds makes it unstable to monitor the situation on the ground with the visible light camera. Images captured by a longer wavelength are introduced to reduce the effects of clouds. Synthetic Aperture Radar (SAR) is such an example that improves visibility even the clouds exist. On the other hand, the spatial resolution decreases as the wavelength increases. Furthermore, the images captured by long wavelengths differs considerably from those captured by visible light in terms of their appearance. Therefore, we propose a network that can remove clouds and generate visible light images from the multispectral images taken as inputs. This is achieved by extending the input channels of cGANs to be compatible with multispectral images. The networks are trained to output images that are close to the ground truth using the images synthesized with clouds over the ground truth as inputs. In the available dataset, the proportion of images of the forest or the sea is very high, which will introduce bias in the training dataset if uniformly sampled from the original dataset. Thus, we utilize the t- Distributed Stochastic Neighbor Embedding (t-SNE) to improve the problem of bias in the training dataset. Finally, we confirm the feasibility of the proposed network on the dataset of four bands images, which include three visible light bands and one near-infrared (NIR) band. 
| Image generation based on cGANs and DCGANs has been widely applied to image restoration and to the removal of objects such as rain and snow @cite_15 @cite_4 . In particular, the method in @cite_16 can generate general, high-quality images by combining the U-Net Generator @cite_19 with the PatchGAN Discriminator @cite_17 . The U-Net Generator propagates spatial features lost in the Encoder's convolution layers to the corresponding Decoder layers through skip connections between Encoder and Decoder. PatchGAN models the high frequencies needed for sharp details by training the Discriminator on image patches. Generally, these cGAN-based methods predict the obscured regions of an image using only the surrounding unobscured information from the input RGB images. | {
"cite_N": [
"@cite_4",
"@cite_19",
"@cite_15",
"@cite_16",
"@cite_17"
],
"mid": [
"2580458810",
"1901129140",
"2963420272",
"2552465644",
"2339754110"
],
"abstract": [
"Severe weather conditions such as rain and snow adversely affect the visual quality of images captured under such conditions thus rendering them useless for further usage and sharing. In addition, such degraded images drastically affect performance of vision systems. Hence, it is important to solve the problem of single image de-raining de-snowing. However, this is a difficult problem to solve due to its inherent ill-posed nature. Existing approaches attempt to introduce prior information to convert it into a well-posed problem. In this paper, we investigate a new point of view in addressing the single image de-raining problem. Instead of focusing only on deciding what is a good prior or a good framework to achieve good quantitative and qualitative performance, we also ensure that the de-rained image itself does not degrade the performance of a given computer vision algorithm such as detection and classification. In other words, the de-rained result should be indistinguishable from its corresponding clear image to a given discriminator. This criterion can be directly incorporated into the optimization framework by using the recently introduced conditional generative adversarial networks (GANs). To minimize artifacts introduced by GANs and ensure better visual quality, a new refined loss function is introduced. Based on this, we propose a novel single image de-raining method called Image De-raining Conditional General Adversarial Network (ID-CGAN), which considers quantitative, visual and also discriminative performance into the objective function. Experiments evaluated on synthetic images and real images show that the proposed method outperforms many recent state-of-the-art single image de-raining methods in terms of quantitative and visual performance.",
"There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http: lmb.informatik.uni-freiburg.de people ronneber u-net .",
"We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders – a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.",
"We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.",
"This paper proposes Markovian Generative Adversarial Networks (MGANs), a method for training generative networks for efficient texture synthesis. While deep neural network approaches have recently demonstrated remarkable results in terms of synthesis quality, they still come at considerable computational costs (minutes of run-time for low-res images). Our paper addresses this efficiency issue. Instead of a numerical deconvolution in previous work, we precompute a feed-forward, strided convolutional network that captures the feature statistics of Markovian patches and is able to directly generate outputs of arbitrary dimensions. Such network can directly decode brown noise to realistic texture, or photos to artistic paintings. With adversarial training, we obtain quality comparable to recent neural texture synthesis methods. As no optimization is required at generation time, our run-time performance (0.25 M pixel images at 25 Hz) surpasses previous neural texture synthesizers by a significant margin (at least 500 times faster). We apply this idea to texture synthesis, style transfer, and video stylization."
]
} |
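The U-Net skip connections described in the record above can be illustrated purely as channel arithmetic: each decoder stage concatenates the feature map from its mirrored encoder stage, re-injecting spatial detail lost by downsampling. The helper below is my own toy sketch (not the cited implementation), and it assumes each decoder stage restores its skip connection's channel width.

```python
# Illustrative sketch: input channel counts of each U-Net decoder stage
# after concatenating the mirrored encoder feature map (skip connection).

def unet_channels(enc_channels):
    """Given encoder output channels per stage (shallow to deep), return
    the input channel count of each decoder stage after concatenation."""
    dec_in = []
    prev = enc_channels[-1]            # bottleneck output width
    for skip in reversed(enc_channels[:-1]):
        dec_in.append(prev + skip)     # upsampled features + skip connection
        prev = skip                    # assume decoder restores skip's width
    return dec_in

print(unet_channels([64, 128, 256, 512]))  # [768, 384, 192]
```

The doubling of decoder input widths is exactly why skip connections carry the "missing spatial features" mentioned above: half of every decoder stage's input comes straight from the encoder.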
1710.04521 | 2763955277 | Deriving insights from high-dimensional data is one of the core problems in data mining. The difficulty mainly stems from the fact that there are exponentially many variable combinations to potentially consider, and there are infinitely many if we consider weighted combinations, even for linear combinations. Hence, an obvious question is whether we can automate the search for interesting patterns and visualizations. In this paper, we consider the setting where a user wants to learn as efficiently as possible about real-valued attributes. For example, to understand the distribution of crime rates in different geographic areas in terms of other (numerical, ordinal and or categorical) variables that describe the areas. We introduce a method to find subgroups in the data that are maximally informative (in the formal Information Theoretic sense) with respect to a single or set of real-valued target attributes. The subgroup descriptions are in terms of a succinct set of arbitrarily-typed other attributes. The approach is based on the Subjective Interestingness framework FORSIED to enable the use of prior knowledge when finding most informative non-redundant patterns, and hence the method also supports iterative data mining. | Tasks similar to SD are Contrast Set Mining @cite_17 and Emerging Pattern Mining @cite_9 . Both these tasks have not been considered for multiple target attributes simultaneously, and hence differ from the current paper in that they do not directly help in understanding interactions between variables. The relationships between Contrast Set Mining, Emerging Pattern Mining, and SD are extensively described in @cite_15 . | {
"cite_N": [
"@cite_9",
"@cite_15",
"@cite_17"
],
"mid": [
"2038812321",
"2156821882",
"1549565124"
],
"abstract": [
"We introduce a new kind of patterns, called emerging patterns (EPs), for knowledge discovery from databases. EPs are defined as itemsets whose supports increase significantly from one dataset to another. EPs can capture emerging trends in timestamped databases, or useful contrasts between data classes. EPs have been proven useful: we have used them to build very powerful classifiers, which are more accurate than C4.5 and CBA, for many datasets. We believe that EPs with low to medium support, such as 1 -20 , can give useful new insights and guidance to experts, in even “well understood” applications. The efficient mining of EPs is a challenging problem, since (i) the Apriori property no longer holds for EPs, and (ii) there are usually too many candidates for high dimensional databases or for small support thresholds such as 0.5 . Naive algorithms are too costly. To solve this problem, (a) we promote the description of large collections of itemsets using their concise borders (the pair of sets of the minimal and of the maximal itemsets in the collections). (b) We design EP mining algorithms which manipulate only borders of collections (especially using our multiborder-differential algorithm), and which represent discovered EPs using borders. All EPs satisfying a constraint can be efficiently discovered by our border-based algorithms, which take the borders, derived by Max-Miner, of large itemsets as inputs. In our experiments on large and high dimensional datasets including the US census and Mushroom datasets, many EPs, including some with large cardinality, are found quickly. We also give other algorithms for discovering general or special types of EPs.",
"This paper gives a survey of contrast set mining (CSM), emerging pattern mining (EPM), and subgroup discovery (SD) in a unifying framework named supervised descriptive rule discovery. While all these research areas aim at discovering patterns in the form of rules induced from labeled data, they use different terminology and task definitions, claim to have different goals, claim to use different rule learning heuristics, and use different means for selecting subsets of induced patterns. This paper contributes a novel understanding of these subareas of data mining by presenting a unified terminology, by explaining the apparent differences between the learning tasks as variants of a unique supervised descriptive rule discovery task and by exploring the apparent differences between the approaches. It also shows that various rule learning heuristics used in CSM, EPM and SD algorithms all aim at optimizing a trade off between rule coverage and precision. The commonalities (and differences) between the approaches are showcased on a selection of best known variants of CSM, EPM and SD algorithms. The paper also provides a critical survey of existing supervised descriptive rule discovery visualization methods.",
"A fundamental task in data analysis is understanding the differences between several contrasting groups. These groups can represent different classes of objects, such as male or female students, or the same group over time, e.g. freshman students in 1993 through 1998. We present the problem of mining contrast sets: conjunctions of attributes and values that differ meaningfully in their distribution across groups. We provide a search algorithm for mining contrast sets with pruning rules that drastically reduce the computational complexity. Once the contrast sets are found, we post-process the results to present a subset that are surprising to the user given what we have already shown. We explicitly control the probability of Type I error (false positives) and guarantee a maximum error rate for the entire analysis by using Bonferroni corrections."
]
} |
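The emerging-pattern abstract in the record above defines EPs as itemsets whose support increases significantly from one dataset to another. A brute-force sketch of that definition (my own toy code; the cited work uses border-based algorithms precisely to avoid this exhaustive enumeration):

```python
# Hedged sketch of the emerging-pattern idea: enumerate small itemsets and
# keep those whose support growth rate between two datasets exceeds a
# threshold. Itemsets absent from the first dataset (infinite growth) are
# skipped here for simplicity.
from itertools import combinations

def support(itemset, transactions):
    return sum(itemset <= t for t in transactions) / len(transactions)

def emerging_patterns(d1, d2, min_growth=2.0, max_size=2):
    items = sorted(set().union(*d1, *d2))
    found = []
    for k in range(1, max_size + 1):
        for combo in combinations(items, k):
            s = frozenset(combo)
            s1, s2 = support(s, d1), support(s, d2)
            if s1 > 0 and s2 / s1 >= min_growth:
                found.append((s, s2 / s1))
    return found

d1 = [{"a"}, {"a", "b"}, {"b", "c"}, {"c"}]
d2 = [{"a", "b"}, {"a", "b"}, {"a", "b", "c"}, {"b"}]
for pattern, growth in emerging_patterns(d1, d2):
    print(sorted(pattern), round(growth, 2))  # ['b'] 2.0, then ['a', 'b'] 3.0
```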
1710.04521 | 2763955277 | Deriving insights from high-dimensional data is one of the core problems in data mining. The difficulty mainly stems from the fact that there are exponentially many variable combinations to potentially consider, and there are infinitely many if we consider weighted combinations, even for linear combinations. Hence, an obvious question is whether we can automate the search for interesting patterns and visualizations. In this paper, we consider the setting where a user wants to learn as efficiently as possible about real-valued attributes. For example, to understand the distribution of crime rates in different geographic areas in terms of other (numerical, ordinal and or categorical) variables that describe the areas. We introduce a method to find subgroups in the data that are maximally informative (in the formal Information Theoretic sense) with respect to a single or set of real-valued target attributes. The subgroup descriptions are in terms of a succinct set of arbitrarily-typed other attributes. The approach is based on the Subjective Interestingness framework FORSIED to enable the use of prior knowledge when finding most informative non-redundant patterns, and hence the method also supports iterative data mining. | by @cite_1 is the closest related work to this paper. It considers the case where a dataset contains two distinct parts, describing the same entities from two different viewpoints. Redescription Mining treats these two parts symmetrically: it seeks descriptions inducing the same subgroup, resulting in a rule of the form @math . In contrast, we consider the setting where the two parts play distinct roles: one part contains description attributes on which subgroups are defined, the other part forms the numeric data which we aim to learn about and hence on which the informativeness of subgroups is evaluated. This then results in rules of the form @math . | {
"cite_N": [
"@cite_1"
],
"mid": [
"2134495601"
],
"abstract": [
"Redescription mining is a powerful data analysis tool that is used to find multiple descriptions of the same entities. Consider geographical regions as an example. They can be characterized by the fauna that inhabits them on one hand and by their meteorological conditions on the other hand. Finding such redescriptors, a task known as niche-finding, is of much importance in biology. Current redescription mining methods cannot handle other than Boolean data. This restricts the range of possible applications or makes discretization a pre-requisite, entailing a possibly harmful loss of information. In niche-finding, while the fauna can be naturally represented using a Boolean presence absence data, the weather cannot. In this paper, we extend redescription mining to categorical and real-valued data with possibly missing values using a surprisingly simple and efficient approach. We provide extensive experimental evaluation to study the behavior of the proposed algorithm. Furthermore, we show the statistical significance of our results using recent innovations on randomization methods. © 2012 Wiley Periodicals, Inc. Statistical Analysis and Data Mining, 2012 (Part of this work was done when the author was with HIIT.)"
]
} |
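The record above contrasts redescription mining's symmetric rules (two descriptions inducing the same subgroup) with the paper's asymmetric setting. The symmetric match is commonly scored by comparing the entity sets each description induces, e.g. via Jaccard similarity. A toy sketch under assumed data (region records and predicates are mine, for illustration):

```python
# Toy redescription-mining sketch: two descriptions over different
# attribute sets (fauna vs. climate) are a good redescription when the
# entity sets they induce overlap strongly.

def induced(entities, predicate):
    return {e["id"] for e in entities if predicate(e)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

regions = [
    {"id": 1, "has_moose": True,  "mean_temp": -2.0},
    {"id": 2, "has_moose": True,  "mean_temp": 1.0},
    {"id": 3, "has_moose": False, "mean_temp": 14.0},
]
fauna_side = induced(regions, lambda r: r["has_moose"])
climate_side = induced(regions, lambda r: r["mean_temp"] < 5.0)
print(jaccard(fauna_side, climate_side))  # 1.0: a perfect redescription
```

In the paper's asymmetric setting, by contrast, only one side defines the subgroup; the other side's numeric attributes are what the subgroup is evaluated on.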
1710.04521 | 2763955277 | Deriving insights from high-dimensional data is one of the core problems in data mining. The difficulty mainly stems from the fact that there are exponentially many variable combinations to potentially consider, and there are infinitely many if we consider weighted combinations, even for linear combinations. Hence, an obvious question is whether we can automate the search for interesting patterns and visualizations. In this paper, we consider the setting where a user wants to learn as efficiently as possible about real-valued attributes. For example, to understand the distribution of crime rates in different geographic areas in terms of other (numerical, ordinal and or categorical) variables that describe the areas. We introduce a method to find subgroups in the data that are maximally informative (in the formal Information Theoretic sense) with respect to a single or set of real-valued target attributes. The subgroup descriptions are in terms of a succinct set of arbitrarily-typed other attributes. The approach is based on the Subjective Interestingness framework FORSIED to enable the use of prior knowledge when finding most informative non-redundant patterns, and hence the method also supports iterative data mining. | ' was first used in the context of Association Rule Mining @cite_13 @cite_26 . These papers formalized the prior belief of a user in a belief system, and sought association rules that contrasted with these beliefs. We base our approach on the more recent and systematic approach named FORSIED @cite_12 @cite_30 . This framework has been applied successfully to a variety of data mining problems, such as mining relational patterns @cite_27 , community detection @cite_24 , clustering @cite_4 , and dimensionality reduction @cite_23 . Maximum Entropy modeling for real-valued data has also been studied before @cite_10 , in order to compute the significance of the Weighted Relative Accuracy in SD. 
That method targets a different pattern syntax than what is introduced here and does not apply to EMM. | {
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_4",
"@cite_10",
"@cite_24",
"@cite_27",
"@cite_23",
"@cite_13",
"@cite_12"
],
"mid": [
"",
"2120943950",
"",
"2062786477",
"2229578389",
"2185538272",
"2366913357",
"2138420488",
"2076743544"
],
"abstract": [
"",
"One of the central problems in the field of knowledge discovery is the development of good measures of interestingness of discovered patterns. Such measures of interestingness are divided into objective measures - those that depend only on the structure of a pattern and the underlying data used in the discovery process, and the subjective measures - those that also depend on the class of users who examine the pattern. The purpose of this paper is to lay the groundwork for a comprehensive study of subjective measures of interestingness. In the paper, we classify these measures into actionable and unexpected, and examine the relationship between them. The unexpected measure of interestingness is defined in terms of the belief system that the user has. Interestingness of a pattern is expressed in terms of how it affects the belief system.",
"",
"Statistical assessment of the results of data mining is increasingly recognised as a core task in the knowledge discovery process. It is of key importance in practice, as results that might seem interesting at first glance can often be explained by well-known basic properties of the data. In pattern mining, for instance, such trivial results can be so overwhelming in number that filtering them out is a necessity in order to identify the truly interesting patterns. In this paper, we propose an approach for assessing results on real-valued rectangular databases. More specifically, using our analytical model we are able to statistically assess whether or not a discovered structure may be the trivial result of the row and column marginal distributions in the database. Our main approach is to use the Maximum Entropy principle to fit a background model to the data while respecting its marginal distributions. To find these distributions, we employ an MDL based histogram estimator, and we fit these in our model using efficient convex optimization techniques. Subsequently, our model can be used to calculate probabilities directly, as well as to efficiently sample data with the purpose of assessing results by means of empirical hypothesis testing. Notably, our approach is efficient, parameter-free, and naturally deals with missing values. As such, it represents a well-founded alternative to swap randomisation",
"The utility of a dense subgraph in gaining a better understanding of a graph has been formalised in numerous ways, each striking a different balance between approximating actual interestingness and computational efficiency. A difficulty in making this trade-off is that, while computational cost of an algorithm is relatively well-defined, a pattern's interestingness is fundamentally subjective. This means that this latter aspect is often treated only informally or neglected, and instead some form of density is used as a proxy. We resolve this difficulty by formalising what makes a dense subgraph pattern interesting to a given user. Unsurprisingly, the resulting measure is dependent on the prior beliefs of the user about the graph. For concreteness, in this paper we consider two cases: one case where the user only has a belief about the overall density of the graph, and another case where the user has prior beliefs about the degrees of the vertices. Furthermore, we illustrate how the resulting interestingness measure is different from previous proposals. We also propose effective exact and approximate algorithms for mining the most interesting dense subgraph according to the proposed measure. Usefully, the proposed interestingness measure and approach lend themselves well to iterative dense subgraph discovery. Contrary to most existing approaches, our method naturally allows subsequently found patterns to be overlapping. The empirical evaluation highlights the properties of the new interestingness measure given different prior belief sets, and our approach's ability to find interesting subgraphs that other methods are unable to find.",
"Local pattern mining methods are fragmented along two dimensions: the pattern syntax, and the data types on which they are applicable. Pattern syntaxes considered in the literature include subgroups, n-sets, itemsets, and many more; common data types include binary, categorical, and real-valued. Recent research on pattern mining in relational databases has shown how the aforementioned pattern syntaxes can be unified in a single framework. However, a unified understanding of how to deal with various data types is lacking, certainly for more complexly structured types such as time of day (which is circular), geographical location, terms from a taxonomy, etc. In this paper, we introduce a generic approach for mining interesting local patterns in (relational) data involving such structured data types as attributes. Importantly, we show how this can be done in a generic manner, by modelling the structure within a set of attribute values as a partial order. We then derive a measure of subjective interestingness of such patterns using Information Theory, and propose an algorithm for effectively enumerating all patterns of this syntax. Through empirical evaluation, we found that (a) the new interestingness derivation is relevant and cannot be approximated using existing tools, (b) the new tool, P-N-RMiner, finds patterns that are substantially more informative, and (c) the new enumeration algorithm is considerably faster.",
"Methods that find insightful low-dimensional projections are essential to effectively explore high-dimensional data. Principal Component Analysis is used pervasively to find low-dimensional projections, not only because it is straightforward to use, but it is also often effective, because the variance in data is often dominated by relevant structure. However, even if the projections highlight real structure in the data, not all structure is interesting to every user. If a user is already aware of, or not interested in the dominant structure, Principal Component Analysis is less effective for finding interesting components. We introduce a new method called Subjectively Interesting Component Analysis (SICA), designed to find data projections that are subjectively interesting, i.e, projections that truly surprise the end-user. It is rooted in information theory and employs an explicit model of a user's prior expectations about the data. The corresponding optimization problem is a simple eigenvalue problem, and the result is a trade-off between explained variance and novelty. We present five case studies on synthetic data, images, time-series, and spatial data, to illustrate how SICA enables users to find (subjectively) interesting projections.",
"Several pattern discovery methods proposed in the data mining literature have the drawbacks that they discover too many obvious or irrelevant patterns and that they do not leverage to a full extent valuable prior domain knowledge that decision makers have. In this paper we propose a new method of discovery that addresses these drawbacks. In particular we propose a new method of discovering unexpected patterns that takes into consideration prior background knowledge of decision makers. This prior knowledge constitutes a set of expectations or beliefs about the problem domain. Our proposed method of discovering unexpected patterns uses these beliefs to seed the search for patterns in data that contradict the beliefs. To evaluate the practicality of our approach, we applied our algorithm to consumer purchase data from a major market research company and to web logfile data tracked at an academic Web site and present our findings in the paper.",
"We formalize the data mining process as a process of information exchange, defined by the following key components. The data miner's state of mind is modeled as a probability distribution, called the background distribution, which represents the uncertainty and misconceptions the data miner has about the data. This model initially incorporates any prior (possibly incorrect) beliefs a data miner has about the data. During the data mining process, properties of the data (to which we refer as patterns) are revealed to the data miner, either in batch, one by one, or even interactively. This acquisition of information in the data mining process is formalized by updates to the background distribution to account for the presence of the found patterns. The proposed framework can be motivated using concepts from information theory and game theory. Understanding it from this perspective, it is easy to see how it can be extended to more sophisticated settings, e.g. where patterns are probabilistic functions of the data (thus allowing one to account for noise and errors in the data mining process, and allowing one to study data mining techniques based on subsampling the data). The framework then models the data mining process using concepts from information geometry, and I-projections in particular. The framework can be used to help in designing new data mining algorithms that maximize the efficiency of the information exchange from the algorithm to the data miner."
]
} |
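The record above describes scoring patterns by how much they contradict a Maximum Entropy background model of the user's prior beliefs. A loose illustration of that idea (my own simplification, not the FORSIED formalism): under a Gaussian background model, a subgroup's informativeness about a real-valued target can be proxied by the surprisal of its mean.

```python
# Hedged sketch: information content of a subgroup's target mean under a
# Gaussian prior belief, measured as the negative log of the two-sided
# tail probability of a deviation this large.
import math

def surprisal(subgroup_vals, prior_mean, prior_std):
    n = len(subgroup_vals)
    mean = sum(subgroup_vals) / n
    z = abs(mean - prior_mean) / (prior_std / math.sqrt(n))
    tail = math.erfc(z / math.sqrt(2))      # two-sided Gaussian tail
    return -math.log(max(tail, 1e-300))     # information content in nats

background = (10.0, 2.0)                    # believed mean and std of the target
print(surprisal([9.9, 10.2, 10.1], *background) <
      surprisal([14.0, 15.0, 14.5], *background))  # True: second group is more surprising
```

Under the iterative-mining view in the record, presenting a pattern to the user would then update the background model so that redundant patterns stop scoring highly.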
1710.04521 | 2763955277 | Deriving insights from high-dimensional data is one of the core problems in data mining. The difficulty mainly stems from the fact that there are exponentially many variable combinations to potentially consider, and there are infinitely many if we consider weighted combinations, even for linear combinations. Hence, an obvious question is whether we can automate the search for interesting patterns and visualizations. In this paper, we consider the setting where a user wants to learn as efficiently as possible about real-valued attributes. For example, to understand the distribution of crime rates in different geographic areas in terms of other (numerical, ordinal and or categorical) variables that describe the areas. We introduce a method to find subgroups in the data that are maximally informative (in the formal Information Theoretic sense) with respect to a single or set of real-valued target attributes. The subgroup descriptions are in terms of a succinct set of arbitrarily-typed other attributes. The approach is based on the Subjective Interestingness framework FORSIED to enable the use of prior knowledge when finding most informative non-redundant patterns, and hence the method also supports iterative data mining. | Finally, @cite_28 recently introduced a score function for single-target SD where a reduction in variance adds to the interestingness score of a subgroup. While their approach is less general and the interestingsness score arguably less principled, they do study the algorithmic complexity of the problem in detail and derive a tight-optimistic-estimator-based branch and bound algorithm to find the globally best subgroup pattern very efficiently. | {
"cite_N": [
"@cite_28"
],
"mid": [
"2586011290"
],
"abstract": [
"Existing algorithms for subgroup discovery with numerical targets do not optimize the error or target variable dispersion of the groups they find. This often leads to unreliable or inconsistent statements about the data, rendering practical applications, especially in scientific domains, futile. Therefore, we here extend the optimistic estimator framework for optimal subgroup discovery to a new class of objective functions: we show how tight estimators can be computed efficiently for all functions that are determined by subgroup size (non-decreasing dependence), the subgroup median value, and a dispersion measure around the median (non-increasing dependence). In the important special case when dispersion is measured using the mean absolute deviation from the median, this novel approach yields a linear time algorithm. Empirical evaluation on a wide range of datasets shows that, when used within branch-and-bound search, this approach is highly efficient and indeed discovers subgroups with much smaller errors."
]
} |
1710.04102 | 2762453223 | One of the most basic skills a robot should possess is predicting the effect of physical interactions with objects in the environment. This enables optimal action selection to reach a certain goal state. Traditionally, these dynamics are described by physics-based analytical models, which may however be very hard to find for complex problems. More recently, we have seen learning approaches that can predict the effect of more complex physical interactions directly from sensory input. However, it is an open question how far these models generalize beyond their training data. In this work, we analyse how analytical and learned models can be combined to leverage the best of both worlds. As physical interaction task, we use planar pushing, for which there exists a well-known analytical model and a large real-world dataset. We propose to use a neural network to convert the raw sensory data into a suitable representation that can be consumed by the analytical model and compare this approach to using neural networks for both, perception and prediction. Our results show that the combined method outperforms the purely learned version in terms of accuracy and generalization to push actions not seen during training. It also performs comparable to the analytical model applied on ground truth input values, despite using raw sensory data as input. | Many recent approaches in reinforcement learning aim to solve the so-called "pixels to torque" problem, where the network processes images to extract a representation of the state and then directly returns the required action to achieve a certain task @cite_6 @cite_12 . Others argue that the state representation learned by such methods can be improved by enforcing priors on the extracted state, which may include e.g. temporal coherence. This is an alternative way of including basic principles of physics in a learning approach, compared to what we propose here. 
While policy learning requires understanding the effect of actions, the above methods do not acquire an explicit dynamics model. We are interested in learning such an explicit model, as it enables optimal action selection (potentially over a larger time horizon). The following papers share this aim. | {
"cite_N": [
"@cite_12",
"@cite_6"
],
"mid": [
"2964161785",
"2173248099"
],
"abstract": [
"Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods.",
"We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs."
]
} |
1710.04102 | 2762453223 | One of the most basic skills a robot should possess is predicting the effect of physical interactions with objects in the environment. This enables optimal action selection to reach a certain goal state. Traditionally, these dynamics are described by physics-based analytical models, which may however be very hard to find for complex problems. More recently, we have seen learning approaches that can predict the effect of more complex physical interactions directly from sensory input. However, it is an open question how far these models generalize beyond their training data. In this work, we analyse how analytical and learned models can be combined to leverage the best of both worlds. As physical interaction task, we use planar pushing, for which there exists a well-known analytical model and a large real-world dataset. We propose to use a neural network to convert the raw sensory data into a suitable representation that can be consumed by the analytical model and compare this approach to using neural networks for both, perception and prediction. Our results show that the combined method outperforms the purely learned version in terms of accuracy and generalization to push actions not seen during training. It also performs comparable to the analytical model applied on ground truth input values, despite using raw sensory data as input. | SE3-Nets @cite_4 process dense 3D point clouds and an action to predict the next point cloud. For each object in the scene, the network predicts a segmentation mask and the parameters of an SE3 transform (linear velocity, rotation angle and axis). In newer work @cite_14 , an intermediate step is added that computes the 6D pose of each object before predicting the transforms based on this more structured state representation. The output point cloud is obtained by transforming all input pixels according to the transform for the object they correspond to. 
The resulting predictions are very sharp and the network is shown to correctly segment the objects and determine which are affected by the action. An evaluation of the generalization to new objects or forces was, however, not performed. | {
"cite_N": [
"@cite_14",
"@cite_4"
],
"mid": [
"2763676071",
"2410156224"
],
"abstract": [
"In this work, we present an approach to deep visuomotor control using structured deep dynamics models. Our deep dynamics model, a variant of SE3-Nets, learns a low-dimensional pose embedding for visuomotor control via an encoder-decoder structure. Unlike prior work, our dynamics model is structured: given an input scene, our network explicitly learns to segment salient parts and predict their pose-embedding along with their motion modeled as a change in the pose space due to the applied actions. We train our model using a pair of point clouds separated by an action and show that given supervision only in the form of point-wise data associations between the frames our network is able to learn a meaningful segmentation of the scene along with consistent poses. We further show that our model can be used for closed-loop control directly in the learned low-dimensional pose space, where the actions are computed by minimizing error in the pose space using gradient-based methods, similar to traditional model-based control. We present results on controlling a Baxter robot from raw depth data in simulation and in the real world and compare against two baseline deep networks. Our method runs in real-time, achieves good prediction of scene dynamics and outperforms the baseline methods on multiple control runs. Video results can be found at: this https URL",
"We introduce SE3-Nets, which are deep networks designed to model rigid body motion from raw point cloud data. Based only on pairs of depth images along with an action vector and point wise data associations, SE3-Nets learn to segment effected object parts and predict their motion resulting from the applied force. Rather than learning point wise flow vectors, SE3-Nets predict SE3 transformations for different parts of the scene. Using simulated depth data of a table top scene and a robot manipulator, we show that the structure underlying SE3-Nets enables them to generate a far more consistent prediction of object motion than traditional flow based networks."
]
} |
1710.04102 | 2762453223 | One of the most basic skills a robot should possess is predicting the effect of physical interactions with objects in the environment. This enables optimal action selection to reach a certain goal state. Traditionally, these dynamics are described by physics-based analytical models, which may however be very hard to find for complex problems. More recently, we have seen learning approaches that can predict the effect of more complex physical interactions directly from sensory input. However, it is an open question how far these models generalize beyond their training data. In this work, we analyse how analytical and learned models can be combined to leverage the best of both worlds. As physical interaction task, we use planar pushing, for which there exists a well-known analytical model and a large real-world dataset. We propose to use a neural network to convert the raw sensory data into a suitable representation that can be consumed by the analytical model and compare this approach to using neural networks for both, perception and prediction. Our results show that the combined method outperforms the purely learned version in terms of accuracy and generalization to push actions not seen during training. It also performs comparable to the analytical model applied on ground truth input values, despite using raw sensory data as input. | Other work is similar to @cite_4 and explores different possibilities of predicting the next frame of a sequence of actions and RGB images using recurrent neural networks. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2410156224"
],
"abstract": [
"We introduce SE3-Nets, which are deep networks designed to model rigid body motion from raw point cloud data. Based only on pairs of depth images along with an action vector and point wise data associations, SE3-Nets learn to segment effected object parts and predict their motion resulting from the applied force. Rather than learning point wise flow vectors, SE3-Nets predict SE3 transformations for different parts of the scene. Using simulated depth data of a table top scene and a robot manipulator, we show that the structure underlying SE3-Nets enables them to generate a far more consistent prediction of object motion than traditional flow based networks."
]
} |
1710.04102 | 2762453223 | One of the most basic skills a robot should possess is predicting the effect of physical interactions with objects in the environment. This enables optimal action selection to reach a certain goal state. Traditionally, these dynamics are described by physics-based analytical models, which may however be very hard to find for complex problems. More recently, we have seen learning approaches that can predict the effect of more complex physical interactions directly from sensory input. However, it is an open question how far these models generalize beyond their training data. In this work, we analyse how analytical and learned models can be combined to leverage the best of both worlds. As physical interaction task, we use planar pushing, for which there exists a well-known analytical model and a large real-world dataset. We propose to use a neural network to convert the raw sensory data into a suitable representation that can be consumed by the analytical model and compare this approach to using neural networks for both, perception and prediction. Our results show that the combined method outperforms the purely learned version in terms of accuracy and generalization to push actions not seen during training. It also performs comparable to the analytical model applied on ground truth input values, despite using raw sensory data as input. | * Combining analytical models and learning. The idea of using analytical models in combination with learning has also been explored in previous work. A differentiable physics engine for rigid body dynamics has been implemented in Theano, demonstrating how it can be used to train a neural network controller. In @cite_10 , the authors significantly improve Gaussian Process learning of inverse dynamics by using an analytical model of robot dynamics with fixed parameters as the mean function or as a feature transform inside the covariance function of the GP's kernel. Neither work, however, covers visual perception. 
Most recently, a graphics engine and a physics engine have been used to learn to extract object-based state representations in an unsupervised way: given a sequence of images, a network learns to produce a state representation that is predicted forward in time using the physics engine. The graphics engine is used to render the predicted state and its output is compared to the next image as a training signal. In contrast to the aforementioned work, we not only combine learning and analytical models, but also evaluate the advantages and limitations of this approach. | {
"cite_N": [
"@cite_10"
],
"mid": [
"1998179438"
],
"abstract": [
"In recent years, learning models from data has become an increasingly interesting tool for robotics, as it allows straightforward and accurate model approximation. However, in most robot learning approaches, the model is learned from scratch disregarding all prior knowledge about the system. For many complex robot systems, available prior knowledge from advanced physics-based modeling techniques can entail valuable information for model learning that may result in faster learning speed, higher accuracy and better generalization. In this paper, we investigate how parametric physical models (e.g., obtained from rigid body dynamics) can be used to improve the learning performance, and, especially, how semiparametric regression methods can be applied in this context. We present two possible semiparametric regression approaches, where the knowledge of the physical model can either become part of the mean function or of the kernel in a nonparametric Gaussian process regression. We compare the learning performance of these methods first on sampled data and, subsequently, apply the obtained inverse dynamics models in tracking control on a real Barrett WAM. The results show that the semiparametric models learned with rigid body dynamics as prior outperform the standard rigid body dynamics models on real data while generalizing better for unknown parts of the state space."
]
} |
1710.04073 | 2761051840 | Graph theory provides a language for studying the structure of relations, and it is often used to study interactions over time too. However, it poorly captures the both temporal and structural nature of interactions, that calls for a dedicated formalism. In this paper, we generalize graph concepts in order to cope with both aspects in a consistent way. We start with elementary concepts like density, clusters, or paths, and derive from them more advanced concepts like cliques, degrees, clustering coefficients, or connected components. We obtain a language to directly deal with interactions over time, similar to the language provided by graphs to deal with relations. This formalism is self-consistent: usual relations between different concepts are preserved. It is also consistent with graph theory: graph concepts are special cases of the ones we introduce. This makes it easy to generalize higher-level objects such as quotient graphs, line graphs, k-cores, and centralities. This paper also considers discrete versus continuous time assumptions, instantaneous links, and extensions to more complex cases. | To avoid these issues, several authors propose to encode the full information into various kinds of augmented graphs. In @cite_15 @cite_0 @cite_66 for instance, authors consider the graph of all nodes and links occurring within the data, and label each node and link with its presence times. In @cite_35 @cite_8 @cite_54 @cite_4 , the authors duplicate each node into as many copies as its number of occurrences (they assume discrete time steps); then, an interaction between two nodes at a given time is encoded by a link between the copies of these nodes at this time, and each copy of a node is connected to its copy at the next time step. In @cite_32 @cite_36 and others, the authors build reachability graphs: two nodes are linked together if they can reach each other in the stream. 
With such encodings, some key properties of the stream are equivalent to properties of the obtained graph, and so studying this graph sheds light on the original data. However, concepts like density or clusters make little sense on such objects, and authors then resort to the time slicing approach @cite_66 . | {
"cite_N": [
"@cite_35",
"@cite_4",
"@cite_8",
"@cite_36",
"@cite_54",
"@cite_32",
"@cite_0",
"@cite_15",
"@cite_66"
],
"mid": [
"1741689439",
"1152522601",
"",
"2082195599",
"2950969150",
"1990920784",
"2963883440",
"",
""
],
"abstract": [
"Graph-based models form a fundamental aspect of data representation in Data Sciences and play a key role in modeling complex networked systems. In particular, recently there is an ever-increasing interest in modeling dynamic complex networks, i.e. networks in which the topological structure (nodes and edges) may vary over time. In this context, we propose a novel model for representing finite discrete Time-Varying Graphs (TVGs), which are typically used to model dynamic complex networked systems. We analyze the data structures built from our proposed model and demonstrate that, for most practical cases, the asymptotic memory complexity of our model is in the order of the cardinality of the set of edges. Further, we show that our proposal is an unifying model that can represent several previous (classes of) models for dynamic networks found in the recent literature, which in general are unable to represent each other. In contrast to previous models, our proposal is also able to intrinsically model cyclic (i.e. periodic) behavior in dynamic networks. These representation capabilities attest the expressive power of our proposed unifying model for TVGs. We thus believe our unifying model for TVGs is a step forward in the theoretical foundations for data analysis of complex networked systems.",
"Structure of real networked systems, such as social relationship, can be modeled as temporal networks in which each edge appears only at the prescribed time. Understanding the structure of temporal networks requires quantifying the importance of a temporal vertex, which is a pair of vertex index and time. In this paper, we define two centrality measures of a temporal vertex based on the fastest temporal paths which use the temporal vertex. The definition is free from parameters and robust against the change in time scale on which we focus. In addition, we can efficiently compute these centrality values for all temporal vertices. Using the two centrality measures, we reveal that distributions of these centrality values of real-world temporal networks are heterogeneous. For various datasets, we also demonstrate that a majority of the highly central temporal vertices are located within a narrow time window around a particular time. In other words, there is a bottleneck time at which most information sent in the temporal network passes through a small number of temporal vertices, which suggests an important role of these temporal vertices in spreading phenomena.",
"",
"Real complex systems are inherently time-varying. Thanks to new communication systems and novel technologies, today it is possible to produce and analyze social and biological networks with detailed information on the time of occurrence and duration of each link. However, standard graph metrics introduced so far in complex network theory are mainly suited for static graphs, i.e., graphs in which the links do not change over time, or graphs built from time-varying systems by aggregating all the links as if they were concurrent in time. In this paper, we extend the notion of connectedness, and the definitions of node and graph components, to the case of time-varying graphs, which are represented as time-ordered sequences of graphs defined over a fixed set of nodes. We show that the problem of finding strongly connected components in a time-varying graph can be mapped into the problem of discovering the maximal-cliques in an opportunely constructed static graph, which we name the affine graph. It is, therefore, an NP-complete problem. As a practical example, we have performed a temporal component analysis of time-varying graphs constructed from three data sets of human interactions. The results show that taking time into account in the definition of graph components allows to capture important features of real systems. In particular, we observe a large variability in the size of node temporal in- and out-components. This is due to intrinsic fluctuations in the activity patterns of individuals, which cannot be detected by static graph analysis.",
"A is, informally speaking, a graph that changes with time. When time is discrete and only the relationships between the participating entities may change and not the entities themselves, a temporal graph may be viewed as a sequence @math of static graphs over the same (static) set of nodes @math . Though static graphs have been extensively studied, for their temporal generalization we are still far from having a concrete set of structural and algorithmic principles. Recent research shows that many graph properties and problems become radically different and usually substantially more difficult when an extra time dimension in added to them. Moreover, there is already a rich and rapidly growing set of modern systems and applications that can be naturally modeled and studied via temporal graphs. This, further motivates the need for the development of a temporal extension of graph theory. We survey here recent results on temporal graphs and temporal graph problems that have appeared in the Computer Science community.",
"While a natural fit for modeling and understanding mobile networks, time-varying graphs remain poorly understood. Indeed, many of the usual concepts of static graphs have no obvious counterpart in time-varying ones. In this paper, we introduce the notion of temporal reachability graphs. A (tau,delta)-reachability graph is a time-varying directed graph derived from an existing connectivity graph. An edge exists from one node to another in the reachability graph at time t if there exists a journey (i.e., a spatiotemporal path) in the connectivity graph from the first node to the second, leaving after t, with a positive edge traversal time tau, and arriving within a maximum delay delta. We make three contributions. First, we develop the theoretical framework around temporal reachability graphs. Second, we harness our theoretical findings to propose an algorithm for their efficient computation. Finally, we demonstrate the analytic power of the temporal reachability graph concept by applying it to synthetic and real-life datasets. On top of defining clear upper bounds on communication capabilities, reachability graphs highlight asymmetric communication opportunities and offloading potential.",
"In a temporal network, the presence and activity of nodes and links can change through time. To describe temporal networks we introduce the notion of temporal quantities. We define the addition and multiplication of temporal quantities in a way that can be used for the definition of addition and multiplication of temporal networks. The corresponding algebraic structures are semirings. The usual approach to (data) analysis of temporal networks is to transform the network into a sequence of time slices—static networks corresponding to selected time intervals and analyze each of them using standard methods to produce a sequence of results. The approach proposed in this paper enables us to compute these results directly. We developed fast algorithms for the proposed operations. They are available as an open source Python library TQ (Temporal Quantities) and a program Ianus. The proposed approach enables us to treat as temporal quantities also other network characteristics such as degrees, connectivity components, centrality measures, Pathfinder skeleton, etc. To illustrate the developed tools we present some results from the analysis of Franzosi’s violence network and Corman’s Reuters terror news network.",
"",
""
]
} |
1710.04073 | 2761051840 | Graph theory provides a language for studying the structure of relations, and it is often used to study interactions over time too. However, it poorly captures the both temporal and structural nature of interactions, that calls for a dedicated formalism. In this paper, we generalize graph concepts in order to cope with both aspects in a consistent way. We start with elementary concepts like density, clusters, or paths, and derive from them more advanced concepts like cliques, degrees, clustering coefficients, or connected components. We obtain a language to directly deal with interactions over time, similar to the language provided by graphs to deal with relations. This formalism is self-consistent: usual relations between different concepts are preserved. It is also consistent with graph theory: graph concepts are special cases of the ones we introduce. This makes it easy to generalize higher-level objects such as quotient graphs, line graphs, k-cores, and centralities. This paper also considers discrete versus continuous time assumptions, instantaneous links, and extensions to more complex cases. | All these approaches have a clear advantage: once the data is transformed into one or several graphs, it is possible to use graph tools and concepts to study the interactions under concern. In the same spirit, various powerful methods for graph studies are extended to cope with the dynamics. This leads for instance to algebraic approaches for temporal network analysis @cite_0 @cite_76 , dynamic stochastic block models @cite_90 @cite_88 @cite_31 @cite_51 , dynamic Markovian models @cite_18 @cite_45 @cite_64 @cite_20 , signals on temporal networks @cite_22 , adjacency tensors @cite_49 @cite_41 , temporal networks studies with walks @cite_67 @cite_69 @cite_77 , dynamic graphlets @cite_74 @cite_71 and temporal motif counting approaches @cite_23 @cite_83 . 
Clearly, these works extend higher-level methods to the temporal setting, whereas we focus here on the most basic graph concepts, in the hope that they will form a unifying ground to such works. | {
"cite_N": [
"@cite_64",
"@cite_22",
"@cite_41",
"@cite_71",
"@cite_20",
"@cite_18",
"@cite_67",
"@cite_69",
"@cite_49",
"@cite_74",
"@cite_23",
"@cite_77",
"@cite_83",
"@cite_88",
"@cite_76",
"@cite_90",
"@cite_0",
"@cite_45",
"@cite_31",
"@cite_51"
],
"mid": [
"2104725117",
"",
"2005941514",
"2949350626",
"2099815494",
"2612966155",
"",
"",
"2037360998",
"1900217900",
"",
"1904039357",
"2562676961",
"1694128711",
"",
"1923467754",
"2963883440",
"2651906205",
"2951902445",
"2499960616"
],
"abstract": [
"A class of statistical models is proposed for longitudinal network data. The dependent variable is the changing (or evolving) relation network, represented by two or more observations of a directed graph with a fixed set of actors. The network evolution is modeled as the consequence of the actors making new choices, or withdrawing existing choices, on the basis of functions, with fixed and random components, that the actors try to maximize. Individual and dyadic exogenous variables can be used as covariates. The change in the network is modeled as the stochastic result of network effects (reciprocity, transitivity, etc.) and these covariates. The existing network structure is a dynamic constraint for the evolution of the structure itself. The models are continuous-time Markov chain models that can be implemented as simulation models. The model parameters are estimated from observed data. For estimating and testing these models, statistical procedures are proposed that are based on the method of moments. The statistical procedures are implemented using a stochastic approximation algorithm based on computer simulations of the network evolution process.",
"",
"The increasing availability of temporal network data is calling for more research on extracting and characterizing mesoscopic structures in temporal networks and on relating such structure to specific functions or properties of the system. An outstanding challenge is the extension of the results achieved for static networks to time-varying networks, where the topological structure of the system and the temporal activity patterns of its components are intertwined. Here we investigate the use of a latent factor decomposition technique, non-negative tensor factorization, to extract the community-activity structure of temporal networks. The method is intrinsically temporal and allows to simultaneously identify communities and to track their activity over time. We represent the time-varying adjacency matrix of a temporal network as a three-way tensor and approximate this tensor as a sum of terms that can be interpreted as communities of nodes with an associated activity time series. We summarize known computational techniques for tensor decomposition and discuss some quality metrics that can be used to tune the complexity of the factorized representation. We subsequently apply tensor factorization to a temporal network for which a ground truth is available for both the community structure and the temporal activity patterns. The data we use describe the social interactions of students in a school, the associations between students and school classes, and the spatio-temporal trajectories of students over time. We show that non-negative tensor factorization is capable of recovering the class structure with high accuracy. In particular, the extracted tensor components can be validated either as known school classes, or in terms of correlated activity patterns, i.e., of spatial and temporal coincidences that are determined by the known school activity schedule.",
"This paper introduces a novel graph-analytic approach for detecting anomalies in network flow data called GraphPrints. Building on foundational network-mining techniques, our method represents time slices of traffic as a graph, then counts graphlets -- small induced subgraphs that describe local topology. By performing outlier detection on the sequence of graphlet counts, anomalous intervals of traffic are identified, and furthermore, individual IPs experiencing abnormal behavior are singled-out. Initial testing of GraphPrints is performed on real network data with an implanted anomaly. Evaluation shows false positive rates bounded by 2.84 at the time-interval level, and 0.05 at the IP-level with 100 true positive rates at both.",
"Abstract Stochastic actor-based models are models for network dynamics that can represent a wide variety of influences on network change, and allow to estimate parameters expressing such influences, and test corresponding hypotheses. The nodes in the network represent social actors, and the collection of ties represents a social relation. The assumptions posit that the network evolves as a stochastic process ‘driven by the actors’, i.e., the model lends itself especially for representing theories about how actors change their outgoing ties. The probabilities of tie changes are in part endogenously determined, i.e., as a function of the current network structure itself, and in part exogenously, as a function of characteristics of the nodes (‘actor covariates’) and of characteristics of pairs of nodes (‘dyadic covariates’). In an extended form, stochastic actor-based models can be used to analyze longitudinal data on social networks jointly with changing attributes of the actors: dynamics of networks and behavior. This paper gives an introduction to stochastic actor-based models for dynamics of directed networks, using only a minimum of mathematics. The focus is on understanding the basic principles of the model, understanding the results, and on sensible rules for model selection.",
"Ample theoretical work on social networks is explicitly or implicitly concerned with the role of interpersonal interaction. However, empirical studies to date mostly focus on the analysis of stable relations. This article introduces Dynamic Network Actor Models (DyNAMs) for the study of directed, interpersonal interaction through time. The presented model addresses three important aspects of interpersonal interaction. First, interactions unfold in a larger social context and depend on complex structures in social systems. Second, interactions emanate from individuals and are based on personal preferences, restricted by the available interaction opportunities. Third, sequences of interactions develop dynamically, and the timing of interactions relative to one another contains useful information. We refer to these aspects as the network nature, the actor-oriented nature, and the dynamic nature of social interaction. A case study compares the DyNAM framework to the relational event model, a widely used statistical method for the study of social interaction data.",
"",
"",
"How do we find patterns in author-keyword associations, evolving over time? Or in Data Cubes, with product-branch-customer sales information? Matrix decompositions, like principal component analysis (PCA) and variants, are invaluable tools for mining, dimensionality reduction, feature selection, rule identification in numerous settings like streaming data, text, graphs, social networks and many more. However, they have only two orders, like author and keyword, in the above example.We propose to envision such higher order data as tensors,and tap the vast literature on the topic. However, these methods do not necessarily scale up, let alone operate on semi-infinite streams. Thus, we introduce the dynamic tensor analysis (DTA) method, and its variants. DTA provides a compact summary for high-order and high-dimensional data, and it also reveals the hidden correlations. Algorithmically, we designed DTA very carefully so that it is (a) scalable, (b) space efficient (it does not need to store the past) and (c) fully automatic with no need for user defined parameters. Moreover, we propose STA, a streaming tensor analysis method, which provides a fast, streaming approximation to DTA.We implemented all our methods, and applied them in two real settings, namely, anomaly detection and multi-way latent semantic indexing. We used two real, large datasets, one on network flow data (100GB over 1 month) and one from DBLP (200MB over 25 years). Our experiments show that our methods are fast, accurate and that they find interesting patterns and outliers on the real datasets.",
"Motivation: With increasing availability of temporal real-world networks, how to efficiently study these data? One can model a temporal network as a single aggregate static network, or as a series of time-specific snapshots, each being an aggregate static network over the corresponding time window. Then, one can use established methods for static analysis on the resulting aggregate network(s), but losing in the process valuable temporal information either completely, or at the interface between different snapshots, respectively. Here, we develop a novel approach for studying a temporal network more explicitly, by capturing inter-snapshot relationships. Results: We base our methodology on well-established graphlets (subgraphs), which have been proven in numerous contexts in static network research. We develop new theory to allow for graphlet-based analyses of temporal networks. Our new notion of dynamic graphlets is different from existing dynamic network approaches that are based on temporal motifs (statistically significant subgraphs). The latter have limitations: their results depend on the choice of a null network model that is required to evaluate the significance of a subgraph, and choosing a good null model is non-trivial. Our dynamic graphlets overcome the limitations of the temporal motifs. Also, when we aim to characterize the structure and function of an entire temporal network or of individual nodes, our dynamic graphlets outperform the static graphlets. Clearly, accounting for temporal information helps. We apply dynamic graphlets to temporal age-specific molecular network data to deepen our limited knowledge about human aging. Availability and implementation: http: www.nd.edu ∼cone DG. Contact: ude.dn@oknelimt Supplementary information: Supplementary data are available at Bioinformatics online.",
"",
"Temporal networks come with a wide variety of heterogeneities, from burstiness of event sequences to correlations between timings of node and link activations. In this paper, we set to explore the latter by using temporal greedy walks as probes of temporal network structure. Given a temporal network (a sequence of contacts), temporal greedy walks proceed from node to node by always following the first available contact. Because of this, their structure is particularly sensitive to temporal-topological patterns involving repeated contacts between sets of nodes. This becomes evident in their small coverage per step taken as compared to a temporal reference model – in empirical temporal networks, greedy walks often get stuck within small sets of nodes because of correlated contact patterns. While this may also happen in static networks that have pronounced community structure, the use of the temporal reference model takes the underlying static network structure out of the equation and indicates that there is a purely temporal reason for the observations. Further analysis of the structure of greedy walks indicates that burst trains, sequences of repeated contacts between node pairs, are the dominant factor. However, there are larger patterns too, as shown with non-backtracking greedy walks. We proceed further to study the entropy rates of greedy walks, and show that the sequences of visited nodes are more structured and predictable in original data as compared to temporally uncorrelated references. Taken together, these results indicate a richness of correlated temporal-topological patterns in temporal networks.",
"Networks are a fundamental tool for modeling complex systems in a variety of domains including social and communication networks as well as biology and neuroscience. The counts of small subgraph patterns in networks, called network motifs, are crucial to understanding the structure and function of these systems. However, the role of network motifs for temporal networks, which contain many timestamped links between nodes, is not well understood. Here we develop a notion of a temporal network motif as an elementary unit of temporal networks and provide a general methodology for counting such motifs. We define temporal network motifs as induced subgraphs on sequences of edges, design several fast algorithms for counting temporal network motifs, and prove their runtime complexity. We also show that our fast algorithms achieve 1.3x to 56.5x speedups compared to a baseline method. We use our algorithms to count temporal network motifs in a variety of real-world datasets. Results show that networks from different domains have significantly different motif frequencies, whereas networks from the same domain tend to have similar motif frequencies. We also find that measuring motif counts at various time scales reveals different behavior.",
"Statistical node clustering in discrete time dynamic networks is an emerging field that raises many challenges. Here, we explore statistical properties and frequentist inference in a model that combines a stochastic block model (SBM) for its static part with independent Markov chains for the evolution of the nodes groups through time. We model binary data as well as weighted dynamic random graphs (with discrete or continuous edges values). Our approach, motivated by the importance of controlling for label switching issues across the different time steps, focuses on detecting groups characterized by a stable within group connectivity behavior. We study identifiability of the model parameters , propose an inference procedure based on a variational expectation maximization algorithm as well as a model selection criterion to select for the number of groups. We carefully discuss our initialization strategy which plays an important role in the method and compare our procedure with existing ones on synthetic datasets. We also illustrate our approach on dynamic contact networks, one of encounters among high school students and two others on animal interactions. An implementation of the method is available as a R package called dynsbm.",
"",
"Significant efforts have gone into the development of statistical models for analyzing data in the form of networks, such as social networks. Most existing work has focused on modeling static networks, which represent either a single time snapshot or an aggregate view over time. There has been recent interest in statistical modeling of dynamic networks, which are observed at multiple points in time and offer a richer representation of many complex phenomena. In this paper, we propose a state-space model for dynamic networks that extends the well-known stochastic blockmodel for static networks to the dynamic setting. We then propose a procedure to fit the model using a modification of the extended Kalman filter augmented with a local search. We apply the procedure to analyze a dynamic social network of email communication.",
"In a temporal network, the presence and activity of nodes and links can change through time. To describe temporal networks we introduce the notion of temporal quantities. We define the addition and multiplication of temporal quantities in a way that can be used for the definition of addition and multiplication of temporal networks. The corresponding algebraic structures are semirings. The usual approach to (data) analysis of temporal networks is to transform the network into a sequence of time slices—static networks corresponding to selected time intervals and analyze each of them using standard methods to produce a sequence of results. The approach proposed in this paper enables us to compute these results directly. We developed fast algorithms for the proposed operations. They are available as an open source Python library TQ (Temporal Quantities) and a program Ianus. The proposed approach enables us to treat as temporal quantities also other network characteristics such as degrees, connectivity components, centrality measures, Pathfinder skeleton, etc. To illustrate the developed tools we present some results from the analysis of Franzosi’s violence network and Corman’s Reuters terror news network.",
"Important questions in the social sciences are concerned with the circumstances under which individuals, organizations, or states mutually agree to form social network ties. Examples of these coordination ties are found in such diverse domains as scientific collaboration, international treaties, and romantic relationships and marriage. This article introduces dynamic network actor models (DyNAM) for the statistical analysis of coordination networks through time. The strength of the models is that they explicitly address five aspects about coordination networks that empirical researchers will typically want to take into account: (1) that observations are dependent, (2) that ties reflect the opportunities and preferences of both actors involved, (3) that the creation of coordination ties is a two-sided process, (4) that data might be available in a time-stamped format, and (5) that processes typically differ between tie creation and dissolution (signed processes), shorter and longer time windows (windowed p...",
"In this paper, we focus on the stochastic block model (SBM),a probabilistic tool describing interactions between nodes of a network using latent clusters. The SBM assumes that the networkhas a stationary structure, in which connections of time varying intensity are not taken into account. In other words, interactions between two groups are forced to have the same features during the whole observation time. To overcome this limitation,we propose a partition of the whole time horizon, in which interactions are observed, and develop a non stationary extension of the SBM,allowing to simultaneously cluster the nodes in a network along with fixed time intervals in which the interactions take place. The number of clusters (K for nodes, D for time intervals) as well as the class memberships are finallyobtained through maximizing the complete-data integrated likelihood by means of a greedy search approach. After showing that the model works properly with simulated data, we focus on a real data set. We thus consider the three days ACM Hypertext conference held in Turin,June 29th - July 1st 2009. Proximity interactions between attendees during the first day are modelled and an interestingclustering of the daily hours is finally obtained, with times of social gathering (e.g. coffee breaks) recovered by the approach. Applications to large networks are limited due to the computational complexity of the greedy search which is dominated bythe number @math and @math of clusters used in the initialization. Therefore,advanced clustering tools are considered to reduce the number of clusters expected in the data, making the greedy search applicable to large networks.",
"We develop a model in which interactions between nodes of a dynamic network are counted by non homogeneous Poisson processes. In a block modelling perspective, nodes belong to hidden clusters (whose number is unknown) and the intensity functions of the counting processes only depend on the clusters of nodes. In order to make inference tractable we move to discrete time by partitioning the entire time horizon in which interactions are observed in fixed-length time sub-intervals. First, we derive an exact integrated classification likelihood criterion and maximize it relying on a greedy search approach. This allows to estimate the memberships to clusters and the number of clusters simultaneously. Then a maximum-likelihood estimator is developed to estimate non parametrically the integrated intensities. We discuss the over-fitting problems of the model and propose a regularized version solving these issues. Experiments on real and simulated data are carried out in order to assess the proposed methodology."
]
} |
1710.04073 | 2761051840 | Graph theory provides a language for studying the structure of relations, and it is often used to study interactions over time too. However, it poorly captures both the temporal and structural nature of interactions, which calls for a dedicated formalism. In this paper, we generalize graph concepts in order to cope with both aspects in a consistent way. We start with elementary concepts like density, clusters, or paths, and derive from them more advanced concepts like cliques, degrees, clustering coefficients, or connected components. We obtain a language to directly deal with interactions over time, similar to the language provided by graphs to deal with relations. This formalism is self-consistent: usual relations between different concepts are preserved. It is also consistent with graph theory: graph concepts are special cases of the ones we introduce. This makes it easy to generalize higher-level objects such as quotient graphs, line graphs, k-cores, and centralities. This paper also considers discrete versus continuous time assumptions, instantaneous links, and extensions to more complex cases. | Complementary to these approaches that extend methods, some works extend various graph concepts to deal with time, in a way similar to what we do here @cite_0 @cite_25 @cite_37. | {
"cite_N": [
"@cite_0",
"@cite_37",
"@cite_25"
],
"mid": [
"2963883440",
"1874858905",
""
],
"abstract": [
"In a temporal network, the presence and activity of nodes and links can change through time. To describe temporal networks we introduce the notion of temporal quantities. We define the addition and multiplication of temporal quantities in a way that can be used for the definition of addition and multiplication of temporal networks. The corresponding algebraic structures are semirings. The usual approach to (data) analysis of temporal networks is to transform the network into a sequence of time slices—static networks corresponding to selected time intervals and analyze each of them using standard methods to produce a sequence of results. The approach proposed in this paper enables us to compute these results directly. We developed fast algorithms for the proposed operations. They are available as an open source Python library TQ (Temporal Quantities) and a program Ianus. The proposed approach enables us to treat as temporal quantities also other network characteristics such as degrees, connectivity components, centrality measures, Pathfinder skeleton, etc. To illustrate the developed tools we present some results from the analysis of Franzosi’s violence network and Corman’s Reuters terror news network.",
"Temporal networks, i.e., networks in which the interactions among a set of elementary units change over time, can be modelled in terms of time-varying graphs, which are time-ordered sequences of graphs over a set of nodes. In such graphs, the concepts of node adjacency and reachability crucially depend on the exact temporal ordering of the links. Consequently, all the concepts and metrics proposed and used for the characterisation of static complex networks have to be redefined or appropriately extended to time-varying graphs, in order to take into account the effects of time ordering on causality. In this chapter we discuss how to represent temporal networks and we review the definitions of walks, paths, connectedness and connected components valid for graphs in which the links fluctuate over time. We then focus on temporal node–node distance, and we discuss how to characterise link persistence and the temporal small-world behaviour in this class of networks. Finally, we discuss the extension of classic centrality measures, including closeness, betweenness and spectral centrality, to the case of time-varying graphs, and we review the work on temporal motifs analysis and the definition of modularity for temporal graphs.",
""
]
} |
1710.04073 | 2761051840 | Graph theory provides a language for studying the structure of relations, and it is often used to study interactions over time too. However, it poorly captures both the temporal and structural nature of interactions, which calls for a dedicated formalism. In this paper, we generalize graph concepts in order to cope with both aspects in a consistent way. We start with elementary concepts like density, clusters, or paths, and derive from them more advanced concepts like cliques, degrees, clustering coefficients, or connected components. We obtain a language to directly deal with interactions over time, similar to the language provided by graphs to deal with relations. This formalism is self-consistent: usual relations between different concepts are preserved. It is also consistent with graph theory: graph concepts are special cases of the ones we introduce. This makes it easy to generalize higher-level objects such as quotient graphs, line graphs, k-cores, and centralities. This paper also considers discrete versus continuous time assumptions, instantaneous links, and extensions to more complex cases. | In particular, path-related concepts received much attention because of their importance for spreading phenomena and communication networks, see for instance @cite_3 @cite_32 @cite_89 @cite_72. Interestingly, although paths defined in these papers are similar to those we consider here, most derived concepts remain node-oriented. For instance, most authors define the centrality of a given node and connected components as sets of nodes (without time information) @cite_0 @cite_37 @cite_66 @cite_32 @cite_36 @cite_89. In @cite_34, the authors introduce a centrality for time instants. Since the centrality of nodes may greatly change over time @cite_80, it is important to define centralities of each node at each time instant. 
Some authors did so for various kinds of centralities @cite_12 @cite_62 @cite_4 @cite_10 @cite_79 but, to the best of our knowledge, we are the first to consider paths from all nodes at all time instants to all other nodes at all other time instants. This has the advantage of fully capturing the dynamics of the data, in particular the fact that nodes are not always present. | {
"cite_N": [
"@cite_37",
"@cite_62",
"@cite_4",
"@cite_36",
"@cite_32",
"@cite_3",
"@cite_89",
"@cite_0",
"@cite_79",
"@cite_72",
"@cite_80",
"@cite_34",
"@cite_10",
"@cite_66",
"@cite_12"
],
"mid": [
"1874858905",
"2619798370",
"1152522601",
"2082195599",
"1990920784",
"1937334562",
"1986909918",
"2963883440",
"2605568719",
"2731152871",
"2292851029",
"",
"",
"",
"2963123334"
],
"abstract": [
"Temporal networks, i.e., networks in which the interactions among a set of elementary units change over time, can be modelled in terms of time-varying graphs, which are time-ordered sequences of graphs over a set of nodes. In such graphs, the concepts of node adjacency and reachability crucially depend on the exact temporal ordering of the links. Consequently, all the concepts and metrics proposed and used for the characterisation of static complex networks have to be redefined or appropriately extended to time-varying graphs, in order to take into account the effects of time ordering on causality. In this chapter we discuss how to represent temporal networks and we review the definitions of walks, paths, connectedness and connected components valid for graphs in which the links fluctuate over time. We then focus on temporal node–node distance, and we discuss how to characterise link persistence and the temporal small-world behaviour in this class of networks. Finally, we discuss the extension of classic centrality measures, including closeness, betweenness and spectral centrality, to the case of time-varying graphs, and we review the work on temporal motifs analysis and the definition of modularity for temporal graphs.",
"Abstract Centrality measures play a central role in Complex Networks Theory as much as they provide a tool to rank nodes by their relevance in the processes occurring in a network. In this paper we propose a model for the eigenvector-like centralities of temporal networks that evolve on a continuous time scale. We analytically prove that these centralities can be approximated by the centralities of temporal networks on discrete time scale.",
"Structure of real networked systems, such as social relationship, can be modeled as temporal networks in which each edge appears only at the prescribed time. Understanding the structure of temporal networks requires quantifying the importance of a temporal vertex, which is a pair of vertex index and time. In this paper, we define two centrality measures of a temporal vertex based on the fastest temporal paths which use the temporal vertex. The definition is free from parameters and robust against the change in time scale on which we focus. In addition, we can efficiently compute these centrality values for all temporal vertices. Using the two centrality measures, we reveal that distributions of these centrality values of real-world temporal networks are heterogeneous. For various datasets, we also demonstrate that a majority of the highly central temporal vertices are located within a narrow time window around a particular time. In other words, there is a bottleneck time at which most information sent in the temporal network passes through a small number of temporal vertices, which suggests an important role of these temporal vertices in spreading phenomena.",
"Real complex systems are inherently time-varying. Thanks to new communication systems and novel technologies, today it is possible to produce and analyze social and biological networks with detailed information on the time of occurrence and duration of each link. However, standard graph metrics introduced so far in complex network theory are mainly suited for static graphs, i.e., graphs in which the links do not change over time, or graphs built from time-varying systems by aggregating all the links as if they were concurrent in time. In this paper, we extend the notion of connectedness, and the definitions of node and graph components, to the case of time-varying graphs, which are represented as time-ordered sequences of graphs defined over a fixed set of nodes. We show that the problem of finding strongly connected components in a time-varying graph can be mapped into the problem of discovering the maximal-cliques in an opportunely constructed static graph, which we name the affine graph. It is, therefore, an NP-complete problem. As a practical example, we have performed a temporal component analysis of time-varying graphs constructed from three data sets of human interactions. The results show that taking time into account in the definition of graph components allows to capture important features of real systems. In particular, we observe a large variability in the size of node temporal in- and out-components. This is due to intrinsic fluctuations in the activity patterns of individuals, which cannot be detected by static graph analysis.",
"While a natural fit for modeling and understanding mobile networks, time-varying graphs remain poorly understood. Indeed, many of the usual concepts of static graphs have no obvious counterpart in time-varying ones. In this paper, we introduce the notion of temporal reachability graphs. A (tau,delta)-reachability graph is a time-varying directed graph derived from an existing connectivity graph. An edge exists from one node to another in the reachability graph at time t if there exists a journey (i.e., a spatiotemporal path) in the connectivity graph from the first node to the second, leaving after t, with a positive edge traversal time tau, and arriving within a maximum delay delta. We make three contributions. First, we develop the theoretical framework around temporal reachability graphs. Second, we harness our theoretical findings to propose an algorithm for their efficient computation. Finally, we demonstrate the analytic power of the temporal reachability graph concept by applying it to synthetic and real-life datasets. On top of defining clear upper bounds on communication capabilities, reachability graphs highlight asymmetric communication opportunities and offloading potential.",
"The power of any kind of network approach lies in the ability to simplify a complex system so that one can better understand its function as a whole. Sometimes it is beneficial, however, to include more information than in a simple graph of only nodes and links. Adding information about times of interactions can make predictions and mechanistic understanding more accurate. The drawback, however, is that there are not so many methods available, partly because temporal networks is a relatively young field, partly because it is more difficult to develop such methods compared to for static networks. In this colloquium, we review the methods to analyze and model temporal networks and processes taking place on them, focusing mainly on the last three years. This includes the spreading of infectious disease, opinions, rumors, in social networks; information packets in computer networks; various types of signaling in biology, and more. We also discuss future directions.",
"The analysis of social and technological networks has attracted a lot of attention as social networking applications and mobile sensing devices have given us a wealth of real data. Classic studies looked at analysing static or aggregated networks, i.e., networks that do not change over time or built as the results of aggregation of information over a certain period of time. Given the soaring collections of measurements related to very large, real network traces, researchers are quickly starting to realise that connections are inherently varying over time and exhibit more dimensionality than static analysis can capture. In this paper we propose new temporal distance metrics to quantify and compare the speed (delay) of information diffusion processes taking into account the evolution of a network from a global view. We show how these metrics are able to capture the temporal characteristics of time-varying graphs, such as delay, duration and time order of contacts (interactions), compared to the metrics used in the past on static graphs. We also characterise network reachability with the concepts of in- and out-components. Then, we generalise them with a global perspective by defining temporal connected components. As a proof of concept we apply these techniques to two classes of time-varying networks, namely connectivity of mobile devices and interactions on an online social network.",
"In a temporal network, the presence and activity of nodes and links can change through time. To describe temporal networks we introduce the notion of temporal quantities. We define the addition and multiplication of temporal quantities in a way that can be used for the definition of addition and multiplication of temporal networks. The corresponding algebraic structures are semirings. The usual approach to (data) analysis of temporal networks is to transform the network into a sequence of time slices—static networks corresponding to selected time intervals and analyze each of them using standard methods to produce a sequence of results. The approach proposed in this paper enables us to compute these results directly. We developed fast algorithms for the proposed operations. They are available as an open source Python library TQ (Temporal Quantities) and a program Ianus. The proposed approach enables us to treat as temporal quantities also other network characteristics such as degrees, connectivity components, centrality measures, Pathfinder skeleton, etc. To illustrate the developed tools we present some results from the analysis of Franzosi’s violence network and Corman’s Reuters terror news network.",
"Abstract The central nervous system is composed of many individual units – from cells to areas – that are connected with one another in a complex pattern of functional interactions that supports perception, action, and cognition. One natural and parsimonious representation of such a system is a graph in which nodes (units) are connected by edges (interactions). While applicable across spatiotemporal scales, species, and cohorts, the traditional graph approach is unable to address the complexity of time-varying connectivity patterns that may be critically important for an understanding of emotional and cognitive state, task-switching, adaptation and development, or aging and disease progression. Here we survey a set of tools from applied mathematics that offer measures to characterize dynamic graphs. Along with this survey, we offer suggestions for visualization and a publicly-available MATLAB toolbox to facilitate the application of these metrics to existing or yet-to-be acquired neuroimaging data. We illustrate the toolbox by applying it to a previously published data set of time-varying functional graphs, but note that the tools can also be applied to time-varying structural graphs or to other sorts of relational data entirely. Our aim is to provide the neuroimaging community with a useful set of tools, and an intuition regarding how to use them, for addressing emerging questions that hinge on accurate and creative analyses of dynamic graphs.",
"Databases recording cattle exchanges offer unique opportunities for a better understanding and fighting of disease spreading. Most studies model contacts with (sequences of) networks, but this approach neglects important dynamical features of exchanges, that are known to play a key role in spreading. We use here a fully dynamic modeling of contacts and empirically compare the spreading outbreaks obtained with it to the ones obtained with network approaches. We show that neglecting time information leads to significant over-estimates of actual sizes of spreading cascades, and that these sizes are much more heterogeneous than generally assumed. Our approach also makes it possible to study the speed of spreading, and we show that the observed speeds vary greatly, even for a same cascade size.",
"For a long time now, researchers have worked on defining different metrics able to characterize the importance of nodes in networks. Among them, centrality measures have proved to be pertinent as they relate the position of a node in the structure to its ability to diffuse an information efficiently. The case of dynamic networks, in which nodes and links appear and disappear over time, led the community to propose extensions of those classical measures. Yet, they do not investigate the fact that the network structure evolves and that node importance may evolve accordingly. In the present paper, we propose temporal extensions of notions of centrality, which take into account the paths existing at any given time, in order to study the time evolution of nodes' importance in dynamic networks. We apply this to two datasets and show that the importance of nodes does indeed vary greatly with time. We also show that in some cases it might be meaningless to try to identify nodes that are consistently important over time, thus strengthening the interest of temporal extensions of centrality measures.",
"",
"",
"",
"Numerous centrality measures have been developed to quantify the importances of nodes in time-independent networks, and many of them can be expressed as the leading eigenvector of some matrix. With the increasing availability of network data that changes in time, it is important to extend such eigenvector-based centrality measures to time-dependent networks. In this paper, we introduce a principled generalization of network centrality measures that is valid for any eigenvector-based centrality. We consider a temporal network with @math nodes as a sequence of @math layers that describe the network during different time windows, and we couple centrality matrices for the layers into a supracentrality matrix of size @math whose dominant eigenvector gives the centrality of each node @math at each time @math . We refer to this eigenvector and its components as a joint centrality, as it reflects the importances of both the node @math and the time layer @math . We also introduce the concepts of marginal and conditional..."
]
} |
1710.04073 | 2761051840 | Graph theory provides a language for studying the structure of relations, and it is often used to study interactions over time too. However, it poorly captures both the temporal and the structural nature of interactions, which calls for a dedicated formalism. In this paper, we generalize graph concepts in order to cope with both aspects in a consistent way. We start with elementary concepts like density, clusters, or paths, and derive from them more advanced concepts like cliques, degrees, clustering coefficients, or connected components. We obtain a language to directly deal with interactions over time, similar to the language provided by graphs to deal with relations. This formalism is self-consistent: usual relations between different concepts are preserved. It is also consistent with graph theory: graph concepts are special cases of the ones we introduce. This makes it easy to generalize higher-level objects such as quotient graphs, line graphs, k-cores, and centralities. This paper also considers discrete versus continuous time assumptions, instantaneous links, and extensions to more complex cases. | Some works go beyond path-related notions and study dynamics of node and link presence, link repetitions, instantaneous degree, and triadic closure @cite_27 @cite_61 @cite_18 @cite_85 @cite_86 @cite_29 @cite_9 @cite_17 @cite_14 @cite_0 . However, to the best of our knowledge, there exists no previous generalization of density, neighborhood, or clustering coefficient that avoids time slicing. Interestingly, a notion of degree very close to the one we propose here was introduced in the context of medical studies @cite_82 . A notion close to average degree is introduced in @cite_42 for dense dynamic sub-graph search. We also studied preliminary notions of density, cliques, quotient streams, and dense substreams in our own previous work @cite_11 @cite_38 @cite_84 @cite_55 @cite_26 . | {
"cite_N": [
"@cite_61",
"@cite_38",
"@cite_18",
"@cite_14",
"@cite_11",
"@cite_26",
"@cite_82",
"@cite_29",
"@cite_85",
"@cite_9",
"@cite_42",
"@cite_84",
"@cite_55",
"@cite_0",
"@cite_27",
"@cite_86",
"@cite_17"
],
"mid": [
"2336897744",
"2526011725",
"2612966155",
"2130768852",
"1890592509",
"2081313252",
"1999565474",
"2180313959",
"",
"2151078464",
"",
"",
"2290589277",
"2963883440",
"2274257646",
"2059763605",
"1556758605"
],
"abstract": [
"Characterizing the contacts between nodes is of utmost importance when evaluating mobile opportunistic networks. The most common characterization of inter-contact times is based on the study of the aggregate distribution of contacts between individual pairs of nodes, assuming an homogenous network, where contact patterns between nodes are similar. The problem with this aggregate distribution is that it is not always representative of the individual pair distributions, especially in the short term and when the number of nodes in the network is high. Thus, deriving results from this characterization can lead to inaccurate performance evaluation results.In this paper, we propose new approaches to characterize the inter-contact times distribution having a higher representativeness and, thus, increasing the accuracy of the derived performance results. Furthermore, these new characterizations require only a moderate number of contacts in order to be representative, thereby allowing to perform a temporal modelization of traffic traces. This a key issue for increasing accuracy, since real-traces can have a high variability in terms of contact patterns along time. The experiments show that the new characterizations, compared with the established one, are more precise, even using short time contact traces.",
"A link stream is a set of quadruplets (b, e, u, v) meaning that a link exists between u and v from time b to time e. Link streams model many real-world situations like contacts between individuals, connections between devices, and others. Much work is currently devoted to the generalization of classical graph and network concepts to link streams. We argue that the density is a valuable notion for understanding and characterizing links streams. We propose a method to capture specific groups of links that are structurally and temporally densely connected and show that they are meaningful for the description of link streams. To find such groups, we use classical graph community detection algorithms, and we assess obtained groups. We apply our method to several real-world contact traces (captured by sensors) and demonstrate the relevance of the obtained structures.",
"Ample theoretical work on social networks is explicitly or implicitly concerned with the role of interpersonal interaction. However, empirical studies to date mostly focus on the analysis of stable relations. This article introduces Dynamic Network Actor Models (DyNAMs) for the study of directed, interpersonal interaction through time. The presented model addresses three important aspects of interpersonal interaction. First, interactions unfold in a larger social context and depend on complex structures in social systems. Second, interactions emanate from individuals and are based on personal preferences, restricted by the available interaction opportunities. Third, sequences of interactions develop dynamically, and the timing of interactions relative to one another contains useful information. We refer to these aspects as the network nature, the actor-oriented nature, and the dynamic nature of social interaction. A case study compares the DyNAM framework to the relational event model, a widely used statistical method for the study of social interaction data.",
"This study proposes a new set of measures for longitudinal social networks (LSNs). A LSN evolves over time through the creation and or deletion of links among a set of actors (e.g., individuals or organizations). The current literature does feature some methods, such as multiagent simulation models, for studying the dynamics of LSNs. These methods have mainly been utilized to explore evolutionary changes in LSNs from one state to another and to explain the underlying mechanisms for these changes. However, they cannot quantify different aspects of a LSN. For example, these methods are unable to quantify the level of dynamicity shown by an actor in a LSN and its contribution to the overall dynamicity shown by that LSN. This article develops a set of measures for LSNs to overcome this limitation. We illustrate the benefits of these measures by applying them to an exploration of the Enron crisis. These measures successfully identify a significant but previously unobserved change in network structures (both at individual and group levels) during Enron's crisis period. © 2015 Wiley Periodicals, Inc. Complexity 21: 309–320, 2016",
"We introduce delta-cliques, that generalize graph cliques to link streams time-varying graphs.We provide a greedy algorithm to compute all delta-cliques of a link stream.Implementation available on http: www.github.com JordanV delta-cliques. A link stream is a collection of triplets ( t , u , v ) indicating that an interaction occurred between u and v at time t. We generalize the classical notion of cliques in graphs to such link streams: for a given Δ, a Δ-clique is a set of nodes and a time interval such that all pairs of nodes in this set interact at least once during each sub-interval of duration Δ. We propose an algorithm to enumerate all maximal (in terms of nodes or time interval) cliques of a link stream, and illustrate its practical relevance to a real-world contact trace.",
"Captures of IP traffic contain much information on very different kinds of activities like file transfers, users interacting with remote systems, automatic backups, or distributed computations. Identifying such activities is crucial for an appropriate analysis, modeling and monitoring of the traffic. We propose here a notion of density that captures both temporal and structural features of interactions, and generalizes the classical notion of clustering coefficient. We use it to point out important differences between distinct parts of the traffic, and to identify interesting nodes and groups of nodes in terms of roles in the network.",
"Degree centrality is considered to be one of the most basic measures of social network analysis, which has been used extensively in diverse research domains for measuring network positions of actors in respect of the connections with their immediate neighbors. In network analysis, it emphasizes the number of connections that an actor has with others. However, it does not accommodate the value of the duration of relations with other actors in a network; and, therefore, this traditional degree centrality approach regards only the presence or absence of links. Here, we introduce a time-variant approach to the degree centrality measure — time scale degree centrality (TSDC), which considers both presence and duration of links among actors within a network. We illustrate the difference between traditional and TSDC measure by applying these two approaches to explore the impact of degree attributes of a patient-physician network evolving during patient hospitalization periods on the hospital length of stay (LOS) both at a macro- and a micro-level. At a macro-level, both the traditional and time-scale approaches to degree centrality can explain the relationship between the degree attribute of the patient-physician network and LOS. However, at a micro-level or small cluster level, TSDC provides better explanation while the traditional degree centrality approach is found to be inadequate in explaining its relationship with LOS. Our proposed TSDC measure can explore time-variant relations that evolve among actors in a given social network.",
"Abstract : Network data often take the form of repeated interactions between senders and receivers tabulated over time. A primary question to ask of such data is which traits and behaviors are predictive of interaction. To answer this question, a model is introduced for treating directed interactions as a multivariate point process: a Cox multiplicative intensity model using covariates that depend on the history of the process. Consistency and asymptotic normality are proved for the resulting partial-likelihood-based estimators under suitable regularity conditions, and an efficient fitting procedure is described. Multicast interactions--those involving a single sender but multiple receivers--are treated explicitly. The resulting inferential framework is then employed to model message sending behavior in a corporate e-mail network. The analysis gives a precise quantification of which static shared traits and dynamic network effects are predictive of message recipient selection.",
"",
"We present a detailed study of network evolution by analyzing four large online social networks with full temporal information about node and edge arrivals. For the first time at such a large scale, we study individual node arrival and edge creation processes that collectively lead to macroscopic properties of networks. Using a methodology based on the maximum-likelihood principle, we investigate a wide variety of network formation strategies, and show that edge locality plays a critical role in evolution of networks. Our findings supplement earlier network models based on the inherently non-local preferential attachment. Based on our observations, we develop a complete model of network evolution, where nodes arrive at a prespecified rate and select their lifetimes. Each node then independently initiates edges according to a \"gap\" process, selecting a destination for each edge according to a simple triangle-closing model free of any parameters. We show analytically that the combination of the gap distribution with the node lifetime leads to a power law out-degree distribution that accurately reflects the true network in all four cases. Finally, we give model parameter settings that allow automatic evolution and generation of realistic synthetic networks of arbitrary scale.",
"",
"",
"Interaction traces between humans are usually rich in information concerning the patterns and habits of individuals. Such datasets have been recently made available, and more and more researchers address the new questions raised by this data. A link stream is a sequence of triplets (t, u, v) indicating that an interaction occurred between u and v at time t, and as such is a natural representation of these data. We generalize the classical notion of cliques in graphs to such link streams: for a given Δ, a Δ-clique is a set of nodes and a time interval such that all pairs of nodes in this set interact at least every Δ during this time interval. We proceed to compute the maximal Δ-cliques on a real-world dataset of contact among students, and show how it can bring new interpretation to patterns of contact.",
"In a temporal network, the presence and activity of nodes and links can change through time. To describe temporal networks we introduce the notion of temporal quantities. We define the addition and multiplication of temporal quantities in a way that can be used for the definition of addition and multiplication of temporal networks. The corresponding algebraic structures are semirings. The usual approach to (data) analysis of temporal networks is to transform the network into a sequence of time slices—static networks corresponding to selected time intervals and analyze each of them using standard methods to produce a sequence of results. The approach proposed in this paper enables us to compute these results directly. We developed fast algorithms for the proposed operations. They are available as an open source Python library TQ (Temporal Quantities) and a program Ianus. The proposed approach enables us to treat as temporal quantities also other network characteristics such as degrees, connectivity components, centrality measures, Pathfinder skeleton, etc. To illustrate the developed tools we present some results from the analysis of Franzosi’s violence network and Corman’s Reuters terror news network.",
"Today, numerous models and metrics are available to capture and characterize static properties of online social networks. When it comes to understanding their dynamics and evolution, however, research offers little interms of metrics or models. Current metrics are limited to logical time clocks, and unable to capture interactions with external factors that rely on physical time clocks. In this paper, our goal is to take initial steps towards building a set of metrics for characterizing social network dynamics based on physical time. We focus our attention on two metrics that capture the \"eagerness\" of users in building social structure. More specifically, we propose metrics of link delay and triadic closure delay, two metrics that capture the time delay between when a link or triadic closure is possible,and when they actually instantiate in the trace. Considered over time or across traces, the value of these metrics can provide insight on the speed at which users act in building and extending their social neighborhoods. We apply these metrics to two real traces of social network dynamics from the Renren and Facebook networks. We show that these metrics are generally consistent across networks, but their differences reveal interesting properties of each system. We argue that they can be attributed to factors such as network maturity, environmental and social contexts, and services offered by network provider, all factors independent of the network topology and captured by our proposed metrics. Finally, we find that triadic closure delays capture the ease of neighbor discovery in social networks, and can be strongly influenced by friend recommendation systems.",
"Connections in complex networks are inherently fluctuating over time and exhibit more dimensionality than analysis based on standard static graph measures can capture. Here, we introduce the concepts of temporal paths and distance in time-varying graphs. We define as temporal small world a time-varying graph in which the links are highly clustered in time, yet the nodes are at small average temporal distances. We explore the small-world behavior in synthetic time-varying networks of mobile agents and in real social and biological time-varying systems.",
"We study empirically the time evolution of scientific collaboration networks in physics and biology. In these networks, two scientists are considered connected if they have coauthored one or more papers together. We show that the probability of scientists collaborating increases with the number of other collaborators they have in common, and that the probability of a particular scientist acquiring new collaborators increases with the number of his or her past collaborators. These results provide experimental evidence in favor of previously conjectured mechanisms for clustering and power-law degree distributions in networks."
]
} |
1710.04203 | 2763288992 | Sentiment analysis aims to uncover emotions conveyed through information. In its simplest form, it is performed on a polarity basis, where the goal is to classify information with positive or negative emotion. Recent research has explored more nuanced ways to capture emotions that go beyond polarity. For these methods to work, they require a critical resource: a lexicon that is appropriate for the task at hand, in terms of the range and diversity of emotions it captures. In the past, sentiment analysis lexicons have been created by experts, such as linguists and behavioural scientists, with strict rules. Lexicon evaluation was also performed by experts or gold standards. In our paper, we propose a crowdsourcing method for lexicon acquisition, which is scalable, cost-effective, and doesn't require experts or gold standards. We also compare crowd and expert evaluations of the lexicon, to assess the overall lexicon quality, and the evaluation capabilities of the crowd. | According to @cite_26 , an emotion is defined with reference to a list. @cite_12 proposed the six basic emotions: joy, anger, fear, sadness, disgust, and surprise. Years later, Plutchik @cite_4 proposed the addition of trust and anticipation as basic emotions, and presented a circumplex model of emotions as seen in Figure , which defines emotional contradictions and some of the possible combinations. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_12"
],
"mid": [
"",
"2066064791",
"2143197238"
],
"abstract": [
"",
"The general psychoevolutionary theory of emotion that is presented here has a number of important characteristics. First, it provides a broad evolutionary foundation for conceptualizing the domain of emotion as seen in animals and humans. Second, it provides a structural model which describes the interrelations among emotions. Third, it has demonstrated both theoretical and empirical relations among a number of derivative domains including personality traits, diagnoses, and ego defenses. Fourth, it has provided a theoretical rationale for the construction of tests and scales for the measurement of key dimensions within these various domains. Fifth, it has stimulated a good deal of empirical research using these tools and concepts. Finally, the theory provides useful insights into the relationships among emotions, adaptations, and evolution.",
"Observers in both literate and preliterate cultures chose the predicted emotion for photographs of the face, although agreement was higher in the literate samples. These findings suggest that the pan-cultural element in facial displays of emotion is the association between facial muscular movements and discrete primary emotions, although cultures may still differ in what evokes an emotion, in rules for controlling the display of emotion, and in behavioral consequences."
]
} |
1710.04203 | 2763288992 | Sentiment analysis aims to uncover emotions conveyed through information. In its simplest form, it is performed on a polarity basis, where the goal is to classify information with positive or negative emotion. Recent research has explored more nuanced ways to capture emotions that go beyond polarity. For these methods to work, they require a critical resource: a lexicon that is appropriate for the task at hand, in terms of the range and diversity of emotions it captures. In the past, sentiment analysis lexicons have been created by experts, such as linguists and behavioural scientists, with strict rules. Lexicon evaluation was also performed by experts or gold standards. In our paper, we propose a crowdsourcing method for lexicon acquisition, which is scalable, cost-effective, and doesn't require experts or gold standards. We also compare crowd and expert evaluations of the lexicon, to assess the overall lexicon quality, and the evaluation capabilities of the crowd. | As discussed in Section 1, one of the core tasks of sentiment analysis is lexicon acquisition. A lexicon can be acquired through manual or automatic annotation. However, natural language has a very subjective nature @cite_6 , which significantly inhibits automated sentiment lexicon acquisition methods from achieving relevance equal to manual methods @cite_15 . Thus many researchers choose to manually annotate their term corpora @cite_14 , or use established lexicons such as WordNet, SentiWordNet, and various other lexicons @cite_0 . Other studies combine manual labeling or machine learning with lexicons @cite_3 . | {
"cite_N": [
"@cite_14",
"@cite_3",
"@cite_6",
"@cite_0",
"@cite_15"
],
"mid": [
"2251394420",
"1970592556",
"2251892406",
"",
"2131305515"
],
"abstract": [
"We present a novel way of extracting features from short texts, based on the activation values of an inner layer of a deep convolutional neural network. We use the extracted features in multimodal sentiment analysis of short video clips representing one sentence each. We use the combined feature vectors of textual, visual, and audio modalities to train a classifier based on multiple kernel learning, which is known to be good at heterogeneous data. We obtain 14% performance improvement over the state of the art and present a parallelizable decision-level data fusion method, which is much faster, though slightly less accurate.",
"This paper presents a new method for sentiment analysis in Facebook that, starting from messages written by users, supports: (i) to extract information about the users' sentiment polarity (positive, neutral or negative), as transmitted in the messages they write; and (ii) to model the users' usual sentiment polarity and to detect significant emotional changes. We have implemented this method in SentBuk, a Facebook application also presented in this paper. SentBuk retrieves messages written by users in Facebook and classifies them according to their polarity, showing the results to the users through an interactive interface. It also supports emotional change detection, friend's emotion finding, user classification according to their messages, and statistics, among others. The classification method implemented in SentBuk follows a hybrid approach: it combines lexical-based and machine-learning techniques. The results obtained through this approach show that it is feasible to perform sentiment analysis in Facebook with high accuracy (83.27%). In the context of e-learning, it is very useful to have information about the users' sentiments available. On one hand, this information can be used by adaptive e-learning systems to support personalized learning, by considering the user's emotional state when recommending him/her the most suitable activities to be tackled at each time. On the other hand, the students' sentiments towards a course can serve as feedback for teachers, especially in the case of online learning, where face-to-face contact is less frequent. The usefulness of this work in the context of e-learning, both for teachers and for adaptive systems, is described too.",
"This opinion paper discusses subjective natural language problems in terms of their motivations, applications, characterizations, and implications. It argues that such problems deserve increased attention because of their potential to challenge the status of theoretical understanding, problem-solving methods, and evaluation techniques in computational linguistics. The author supports a more holistic approach to such problems; a view that extends beyond opinion mining or sentiment analysis.",
"",
"The explosion of Web opinion data has made essential the need for automatic tools to analyze and understand people's sentiments toward different topics. In most sentiment analysis applications, the sentiment lexicon plays a central role. However, it is well known that there is no universally optimal sentiment lexicon since the polarity of words is sensitive to the topic domain. Even worse, in the same domain the same word may indicate different polarities with respect to different aspects. For example, in a laptop review, \"large\" is negative for the battery aspect while being positive for the screen aspect. In this paper, we focus on the problem of learning a sentiment lexicon that is not only domain specific but also dependent on the aspect in context given an unlabeled opinionated text collection. We propose a novel optimization framework that provides a unified and principled way to combine different sources of information for learning such a context-dependent sentiment lexicon. Experiments on two data sets (hotel reviews and customer feedback surveys on printers) show that our approach can not only identify new sentiment words specific to the given domain but also determine the different polarities of a word depending on the aspect in context. In further quantitative evaluation, our method is proved to be effective in constructing a high quality lexicon by comparing with a human annotated gold standard. In addition, using the learned context-dependent sentiment lexicon improved the accuracy in an aspect-level sentiment classification task."
]
} |
1710.04203 | 2763288992 | Sentiment analysis aims to uncover emotions conveyed through information. In its simplest form, it is performed on a polarity basis, where the goal is to classify information with positive or negative emotion. Recent research has explored more nuanced ways to capture emotions that go beyond polarity. For these methods to work, they require a critical resource: a lexicon that is appropriate for the task at hand, in terms of the range and diversity of emotions it captures. In the past, sentiment analysis lexicons have been created by experts, such as linguists and behavioural scientists, with strict rules. Lexicon evaluation was also performed by experts or gold standards. In our paper, we propose a crowdsourcing method for lexicon acquisition, which is scalable, cost-effective, and doesn't require experts or gold standards. We also compare crowd and expert evaluations of the lexicon, to assess the overall lexicon quality, and the evaluation capabilities of the crowd. | Manual lexicon acquisition is constrained by the number of people contributing to the task, and the number of annotations from each participant. These constraints can be eliminated by increasing the number of people involved, for instance, by using crowdsourcing @cite_18 . Amazon's Mechanical Turk (MTurk, https://www.mturk.com) is a crowdsourcing platform frequently used for polarity sentiment lexicon acquisition @cite_19 . MTurk is also used for the annotation of one thousand tweets in @cite_28 , ten thousand terms in @cite_20 with gold standards, and the annotation of ninety-five out of one thousand total emoticons found in @cite_17 , while @cite_27 had one thousand four hundred terms labelled with supervised machine learning and validated by the crowd. The challenge is to introduce a workflow that is scalable, unsupervised, and applicable to different information types. | {
"cite_N": [
"@cite_18",
"@cite_28",
"@cite_19",
"@cite_27",
"@cite_20",
"@cite_17"
],
"mid": [
"2129444086",
"2112251034",
"2752201871",
"",
"2949709688",
"2077587655"
],
"abstract": [
"Crowdsourcing is an online, distributed problem-solving and production model that has emerged in recent years. Notable examples of the model include Threadless, iStockphoto, InnoCentive, the Goldcorp Challenge, and user-generated advertising contests. This article provides an introduction to crowdsourcing, both its theoretical grounding and exemplar cases, taking care to distinguish crowdsourcing from open source production. This article also explores the possibilities for the model, its potential to exploit a crowd of innovators, and its potential for use beyond forprofit sectors. Finally, this article proposes an agenda for research into crowdsourcing.",
"Automated identification of diverse sentiment types can be beneficial for many NLP systems such as review summarization and public media analysis. In some of these systems there is an option of assigning a sentiment value to a single sentence or a very short text. In this paper we propose a supervised sentiment classification framework which is based on data from Twitter, a popular microblogging service. By utilizing 50 Twitter tags and 15 smileys as sentiment labels, this framework avoids the need for labor intensive manual annotation, allowing identification and classification of diverse sentiment types of short texts. We evaluate the contribution of different feature types for sentiment classification and show that our framework successfully identifies sentiment types of untagged sentences. The quality of the sentiment identification was also confirmed by human judges. We also explore dependencies and overlap between different sentiment types represented by smileys and Twitter hashtags.",
"This paper discusses the fourth year of the ”Sentiment Analysis in Twitter Task”. SemEval-2016 Task 4 comprises five subtasks, three of which represent a significant departure from previous editions. The first two subtasks are reruns from prior years and ask to predict the overall sentiment, and the sentiment towards a topic in a tweet. The three new subtasks focus on two variants of the basic “sentiment classification in Twitter” task. The first variant adopts a five-point scale, which confers an ordinal character to the classification task. The second variant focuses on the correct estimation of the prevalence of each class of interest, a task which has been called quantification in the supervised learning literature. The task continues to be very popular, attracting a total of 43 teams.",
"",
"In this paper, we describe how we created two state-of-the-art SVM classifiers, one to detect the sentiment of messages such as tweets and SMS (message-level task) and one to detect the sentiment of a term within a submissions stood first in both tasks on tweets, obtaining an F-score of 69.02 in the message-level task and 88.93 in the term-level task. We implemented a variety of surface-form, semantic, and sentiment features. with sentiment-word hashtags, and one from tweets with emoticons. In the message-level task, the lexicon-based features provided a gain of 5 F-score points over all others. Both of our systems can be replicated us available resources.",
"Recent years have witnessed the explosive growth of online social media. Weibo, a Twitter-like online social network in China, has attracted more than 300 million users in less than three years, with more than 1000 tweets generated in every second. These tweets not only convey the factual information, but also reflect the emotional states of the authors, which are very important for understanding user behaviors. However, a tweet in Weibo is extremely short and the words it contains evolve extraordinarily fast. Moreover, the Chinese corpus of sentiments is still very small, which prevents the conventional keyword-based methods from being used. In light of this, we build a system called MoodLens, which to our best knowledge is the first system for sentiment analysis of Chinese tweets in Weibo. In MoodLens, 95 emoticons are mapped into four categories of sentiments, i.e. angry, disgusting, joyful, and sad, which serve as the class labels of tweets. We then collect over 3.5 million labeled tweets as the corpus and train a fast Naive Bayes classifier, with an empirical precision of 64.3 . MoodLens also implements an incremental learning method to tackle the problem of the sentiment shift and the generation of new words. Using MoodLens for real-time tweets obtained from Weibo, several interesting temporal and spatial patterns are observed. Also, sentiment variations are well captured by MoodLens to effectively detect abnormal events in China. Finally, by using the highly efficient Naive Bayes classifier, MoodLens is capable of online real-time sentiment monitoring. The demo of MoodLens can be found at http: goo.gl 8DQ65."
]
} |
1710.04203 | 2763288992 | Sentiment analysis aims to uncover emotions conveyed through information. In its simplest form, it is performed on a polarity basis, where the goal is to classify information with positive or negative emotion. Recent research has explored more nuanced ways to capture emotions that go beyond polarity. For these methods to work, they require a critical resource: a lexicon that is appropriate for the task at hand, in terms of the range of emotions it captures (diversity). In the past, sentiment analysis lexicons have been created by experts, such as linguists and behavioural scientists, with strict rules. Lexicon evaluation was also performed by experts or gold standards. In our paper, we propose a crowdsourcing method for lexicon acquisition, which is scalable, cost-effective, and doesn't require experts or gold standards. We also compare crowd and expert evaluations of the lexicon, to assess the overall lexicon quality, and the evaluation capabilities of the crowd. | Regardless of manual or automated sentiment classification, in textual information scenarios, term and phrase sentiment is the main input of the classification method. In some cases the decision might be totally different from the individual term emotion, leading to relabeling of the terms themselves @cite_22 . Manually labelled classification can achieve high relevance, but it requires additional resources, and is not easily scalable. On the other hand, automated processes are scalable but with lower relevance @cite_13 . | {
"cite_N": [
"@cite_13",
"@cite_22"
],
"mid": [
"2015650888",
"2251939518"
],
"abstract": [
"Due to the amount of work needed in manual sentiment analysis of written texts, techniques in automatic sentiment analysis have been widely studied. However, compared to manual sentiment analysis, the accuracy of automatic systems range only from low to medium. In this study, we solve a sentiment analysis problem by crowdsourcing. Crowdsourcing is a problem solving approach that uses the cognitive power of people to achieve specific computational goals. It is implemented through an online platform, which can either be paid or volunteer-based. We deploy crowdsourcing applications in paid and volunteer-based platforms to classify teaching evaluation comments from students. We present a comparison of the results produced by crowdsourcing, manual sentiment analysis, and an existing automatic sentiment analysis system. Our findings show that the crowdsourced sentiment analysis in both paid and volunteer-based platforms are considerably more accurate than the automatic sentiment analysis algorithm but still fail to achieve high accuracy compared to the manual method. To improve accuracy, the effect of increasing the size of the crowd could be explored in the future.",
"Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. When trained on the new treebank, this model outperforms all previous methods on several metrics. It pushes the state of the art in single sentence positive negative classification from 80 up to 85.4 . The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7 , an improvement of 9.7 over bag of features baselines. Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases."
]
} |
1710.04280 | 2761424930 | We propose a novel approach for generating high-quality, synthetic data for domain-specific learning tasks, for which training data may not be readily available. We leverage recent progress in image-to-image translation to bridge the gap between simulated and real images, allowing us to generate realistic training data for real-world tasks using only unlabeled real-world images and a simulation. GeneSIS-RT ameliorates the burden of having to collect labeled real-world images and is a promising candidate for generating high-quality, domain-specific, synthetic data. To show the effectiveness of using GeneSIS-RT to create training data, we study two tasks: semantic segmentation and reactive obstacle avoidance. We demonstrate that learning algorithms trained using data generated by GeneSIS-RT make high-accuracy predictions and outperform systems trained on raw simulated data alone, and as well or better than those trained on real data. Finally, we use our data to train a quadcopter to fly 60 meters at speeds up to 3.4 m/s through a cluttered environment, demonstrating that our GeneSIS-RT images can be used to learn to perform mission-critical tasks. | There have been some recent efforts to train deep learning systems for real-world tasks using only simulated data. In @cite_15 , a high-budget game was used to generate data for training an object detection system, since the simulated images are already rather realistic. Though using video games for image generation is an attractive approach, games are limited in versatility, making it difficult to generate images of environments or objects not present in the game world. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2528963632"
],
"abstract": [
"Deep learning has rapidly transformed the state of the art algorithms used to address a variety of problems in computer vision and robotics. These breakthroughs have relied upon massive amounts of human annotated training data. This time consuming process has begun impeding the progress of these deep learning efforts. This paper describes a method to incorporate photo-realistic computer images from a simulation engine to rapidly generate annotated data that can be used for the training of machine learning algorithms. We demonstrate that a state of the art architecture, which is trained only using these synthetic annotations, performs better than the identical architecture trained on human annotated real-world data, when tested on the KITTI data set for vehicle detection. By training machine learning algorithms on a rich virtual world, real objects in real scenes can be learned and classified using synthetic data. This approach offers the possibility of accelerating deep learning's application to sensor-based classification problems like those that appear in self-driving cars. The source code and data to train and validate the networks described in this paper are made available for researchers."
]
} |
1710.04280 | 2761424930 | We propose a novel approach for generating high-quality, synthetic data for domain-specific learning tasks, for which training data may not be readily available. We leverage recent progress in image-to-image translation to bridge the gap between simulated and real images, allowing us to generate realistic training data for real-world tasks using only unlabeled real-world images and a simulation. GeneSIS-RT ameliorates the burden of having to collect labeled real-world images and is a promising candidate for generating high-quality, domain-specific, synthetic data. To show the effectiveness of using GeneSIS-RT to create training data, we study two tasks: semantic segmentation and reactive obstacle avoidance. We demonstrate that learning algorithms trained using data generated by GeneSIS-RT make high-accuracy predictions and outperform systems trained on raw simulated data alone, and as well or better than those trained on real data. Finally, we use our data to train a quadcopter to fly 60 meters at speeds up to 3.4 m/s through a cluttered environment, demonstrating that our GeneSIS-RT images can be used to learn to perform mission-critical tasks. | Finally, our work relies on recent progress in the domain of image translation @cite_1 and style transfer @cite_14 . There have been some promising results in this space, yet most such approaches use pairs of similar or corresponding images, e.g. two images taken from the same vantage point in both the real and simulated world. There are methods capable of relating sets of images. One such result is @cite_3 , in which the authors use a modified generative adversarial network to learn, among other things, a mapping from synthetic eyes to real eyes, and then use the resulting data for training simple tasks. Yet the construction of their network assumes low image variety and a close correspondence between the simulated and real-world images, making it difficult to apply their approach to complex scenes containing simulated objects whose shape, color or texture do not closely reflect the real world. The CycleGAN approach @cite_2 to unpaired image translation is designed with such scenarios in mind and is therefore an appropriate candidate for our work. | {
"cite_N": [
"@cite_14",
"@cite_1",
"@cite_3",
"@cite_2"
],
"mid": [
"2951413914",
"2552465644",
"2567101557",
"2605287558"
],
"abstract": [
"This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style. Our approach builds upon the recent work on painterly transfer that separates style from the content of an image by considering different layers of a neural network. However, as is, this approach is not suitable for photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. Our contribution is to constrain the transformation from the input to the output to be locally affine in colorspace, and to express this constraint as a custom fully differentiable energy term. We show that this approach successfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, including transfer of the time of day, weather, season, and artistic edits.",
"We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.",
"With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator's output using unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training: (i) a 'self-regularization' term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data.",
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain @math to a target domain @math in the absence of paired examples. Our goal is to learn a mapping @math such that the distribution of images from @math is indistinguishable from the distribution @math using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping @math and introduce a cycle consistency loss to push @math (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach."
]
} |
1710.04094 | 2490296388 | Hardware performance monitoring (HPM) is a crucial ingredient of performance analysis tools. While there are interfaces like LIKWID, PAPI or the kernel interface perf_event which provide HPM access with some additional features, many higher level tools combine event counts with results retrieved from other sources like function call traces to derive (semi-)automatic performance advice. However, although HPM is available for x86 systems since the early 90s, only a small subset of the HPM features is used in practice. Performance patterns provide a more comprehensive approach, enabling the identification of various performance-limiting effects. Patterns address issues like bandwidth saturation, load imbalance, non-local data access in ccNUMA systems, or false sharing of cache lines. This work defines HPM event sets that are best suited to identify a selection of performance patterns on the Intel Haswell processor. We validate the chosen event sets for accuracy in order to arrive at a reliable pattern detection mechanism and point out shortcomings that cannot be easily circumvented due to bugs or limitations in the hardware. | The most extensive event validation analysis was done by @cite_13 using a self-written assembly validation code. They test determinism and overcounting for the following events: retired instructions, retired branches, retired loads and stores as well as retired floating-point operations including scalar, packed, and vectorized instructions. For validating the measurements the dynamic binary instrumentation tool @cite_14 was used. The main target of that work was not to identify the right events needed to construct accurate performance metrics but to find the sources of non-determinism and over- or undercounting. It gives hints on how to reduce over- or undercounting and identify deterministic events for a set of architectures. | {
"cite_N": [
"@cite_14",
"@cite_13"
],
"mid": [
"2134633067",
"2119438786"
],
"abstract": [
"Robust and powerful software instrumentation tools are essential for program analysis tasks such as profiling, performance evaluation, and bug detection. To meet this need, we have developed a new instrumentation system called Pin. Our goals are to provide easy-to-use, portable, transparent, and efficient instrumentation. Instrumentation tools (called Pintools) are written in C C++ using Pin's rich API. Pin follows the model of ATOM, allowing the tool writer to analyze an application at the instruction level without the need for detailed knowledge of the underlying instruction set. The API is designed to be architecture independent whenever possible, making Pintools source compatible across different architectures. However, a Pintool can access architecture-specific details when necessary. Instrumentation with Pin is mostly transparent as the application and Pintool observe the application's original, uninstrumented behavior. Pin uses dynamic compilation to instrument executables while they are running. For efficiency, Pin uses several techniques, including inlining, register re-allocation, liveness analysis, and instruction scheduling to optimize instrumentation. This fully automated approach delivers significantly better instrumentation performance than similar tools. For example, Pin is 3.3x faster than Valgrind and 2x faster than DynamoRIO for basic-block counting. To illustrate Pin's versatility, we describe two Pintools in daily use to analyze production software. Pin is publicly available for Linux platforms on four architectures: IA32 (32-bit x86), EM64T (64-bit x86), Itanium®, and ARM. In the ten months since Pin 2 was released in July 2004, there have been over 3000 downloads from its website.",
"Ideal hardware performance counters provide exact deterministic results. Real-world performance monitoring unit (PMU) implementations do not always live up to this ideal. Events that should be exact and deterministic (such as retired instructions) show run-to-run variation and overcount on ×86_64 machines, even when run in strictly controlled environments. These effects are non-intuitive to casual users and cause difficulties when strict determinism is desirable, such as when implementing deterministic replay or deterministic threading libraries. We investigate eleven different x86 64 CPU implementations and discover the sources of divergence from expected count totals. Of all the counter events investigated, we find only a few that exhibit enough determinism to be used without adjustment in deterministic execution environments. We also briefly investigate ARM, IA64, POWER and SPARC systems and find that on these platforms the counter events have more determinism. We explore various methods of working around the limitations of the ×86_64 events, but in many cases this is not possible and would require architectural redesign of the underlying PMU."
]
} |
1710.04049 | 2622516189 | Microservices are used to build complex applications composed of small, independent and highly decoupled processes. Recently, microservices are often mentioned in one breath with container technologies like Docker. That is why operating system virtualization experiences a renaissance in cloud computing. These approaches shall provide horizontally scalable, easily deployable systems and a high-performance alternative to hypervisors. Nevertheless, performance impacts of containers on top of hypervisors are hardly investigated. Furthermore, microservice frameworks often come along with software defined networks. This contribution presents benchmark results to quantify the impacts of container, software defined networking and encryption on network performance. Even containers, although postulated to be lightweight, show a noteworthy impact to network performance. These impacts can be minimized on several system layers. Some design recommendations for cloud deployed systems following the microservice architecture pattern are derived. | Although container based operating system virtualization is postulated to be a scalable and high-performance alternative to hypervisors, hypervisors are the standard approach for IaaS cloud computing @cite_6 . One study provided a very detailed analysis on CPU, memory, storage and networking resources to explore the performance of traditional virtual machine deployments, and contrasted them with the use of Linux containers provided via @cite_21 . Their results indicate that benchmarks that have been run in a container show almost the same performance (floating point processing, memory transfers, network bandwidth and latencies, block I/O and database performances) as benchmarks run on "bare metal" systems. Nevertheless, that work did not analyze the impact of containers on top of hypervisors. | {
"cite_N": [
"@cite_21",
"@cite_6"
],
"mid": [
"2075174112",
"2140953464"
],
"abstract": [
"Cloud computing makes extensive use of virtual machines because they permit workloads to be isolated from one another and for the resource usage to be somewhat controlled. In this paper, we explore the performance of traditional virtual machine (VM) deployments, and contrast them with the use of Linux containers. We use KVM as a representative hypervisor and Docker as a container manager. Our results show that containers result in equal or better performance than VMs in almost all cases. Both VMs and containers require tuning to support I Ointensive applications. We also discuss the implications of our performance results for future cloud architectures.",
"Hypervisors, popularized by Xen and VMware, are quickly becoming commodity. They are appropriate for many usage scenarios, but there are scenarios that require system virtualization with high degrees of both isolation and efficiency. Examples include HPC clusters, the Grid, hosting centers, and PlanetLab. We present an alternative to hypervisors that is better suited to such scenarios. The approach is a synthesis of prior work on resource containers and security containers applied to general-purpose, time-shared operating systems. Examples of such container-based systems include Solaris 10, Virtuozzo for Linux, and Linux-VServer. As a representative instance of container-based systems, this paper describes the design and implementation of Linux-VServer. In addition, it contrasts the architecture of Linux-VServer with current generations of Xen, and shows how Linux-VServer provides comparable support for isolation and superior system efficiency."
]
} |
1710.03850 | 2763224350 | Knowledge transfer between tasks can improve the performance of learned models, but requires an accurate estimate of the inter-task relationships to identify the relevant knowledge to transfer. These inter-task relationships are typically estimated based on training data for each task, which is inefficient in lifelong learning settings where the goal is to learn each consecutive task rapidly from as little data as possible. To reduce this burden, we develop a lifelong learning method based on coupled dictionary learning that utilizes high-level task descriptions to model the inter-task relationships. We show that using task descriptors improves the performance of the learned task policies, providing both theoretical justification for the benefit and empirical demonstration of the improvement across a variety of learning problems. Given only the descriptor for a new task, the lifelong learner is also able to accurately predict a model for the new task through zero-shot learning using the coupled dictionary, eliminating the need to gather training data before addressing the task. | Multi-task learning (MTL) @cite_13 methods often model the relationships between tasks to identify similarities between their datasets or underlying models. There are many different approaches to modeling these task relationships. Bayesian approaches take a variety of forms, making use of common priors @cite_26 @cite_36 , using regularization terms that couple task parameters @cite_16 @cite_17 , and finding mixtures of experts that can be shared across tasks @cite_4 . | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_36",
"@cite_16",
"@cite_13",
"@cite_17"
],
"mid": [
"2169743339",
"2119187866",
"2134197408",
"2143104527",
"",
""
],
"abstract": [
"We consider the problem of multi-task reinforcement learning, where the agent needs to solve a sequence of Markov Decision Processes (MDPs) chosen randomly from a fixed but unknown distribution. We model the distribution over MDPs using a hierarchical Bayesian infinite mixture model. For each novel MDP, we use the previously learned distribution as an informed prior for modelbased Bayesian reinforcement learning. The hierarchical Bayesian framework provides a strong prior that allows us to rapidly infer the characteristics of new environments based on previous environments, while the use of a nonparametric model allows us to quickly adapt to environments we have not encountered before. In addition, the use of infinite mixtures allows for the model to automatically learn the number of underlying MDP components. We evaluate our approach and show that it leads to significant speedups in convergence to an optimal policy after observing only a small number of tasks.",
"Modeling a collection of similar regression or classification tasks can be improved by making the tasks 'learn from each other'. In machine learning, this subject is approached through 'multitask learning', where parallel tasks are modeled as multiple outputs of the same network. In multilevel analysis this is generally implemented through the mixed-effects linear model where a distinction is made between 'fixed effects', which are the same for all tasks, and 'random effects', which may vary between tasks. In the present article we will adopt a Bayesian approach in which some of the model parameters are shared (the same for all tasks) and others more loosely connected through a joint prior distribution that can be learned from the data. We seek in this way to combine the best parts of both the statistical multilevel approach and the neural network machinery. The standard assumption expressed in both approaches is that each task can learn equally well from any other task. In this article we extend the model by allowing more differentiation in the similarities between tasks. One such extension is to make the prior mean depend on higher-level task characteristics. More unsupervised clustering of tasks is obtained if we go from a single Gaussian prior to a mixture of Gaussians. This can be further generalized to a mixture of experts architecture with the gates depending on task characteristics. All three extensions are demonstrated through application both on an artificial data set and on two real-world problems, one a school problem and the other involving single-copy newspaper sales.",
"We consider the problem of multi-task reinforcement learning where the learner is provided with a set of tasks, for which only a small number of samples can be generated for any given policy. As the number of samples may not be enough to learn an accurate evaluation of the policy, it would be necessary to identify classes of tasks with similar structure and to learn them jointly. We consider the case where the tasks share structure in their value functions, and model this by assuming that the value functions are all sampled from a common prior. We adopt the Gaussian process temporal-difference value function model and use a hierarchical Bayesian approach to model the distribution over the value functions. We study two cases, where all the value functions belong to the same class and where they belong to an undefined number of classes. For each case, we present a hierarchical Bayesian model, and derive inference algorithms for (i) joint learning of the value functions, and (ii) efficient transfer of the information gained in (i) to assist learning the value function of a newly observed task.",
"Past empirical work has shown that learning multiple related tasks from data simultaneously can be advantageous in terms of predictive performance relative to learning these tasks independently. In this paper we present an approach to multi--task learning based on the minimization of regularization functionals similar to existing ones, such as the one for Support Vector Machines (SVMs), that have been successfully used in the past for single--task learning. Our approach allows to model the relation between tasks in terms of a novel kernel function that uses a task--coupling parameter. We implement an instance of the proposed approach similar to SVMs and test it empirically using simulated as well as real data. The experimental results show that the proposed method performs better than existing multi--task learning methods and largely outperforms single--task learning using SVMs.",
"",
""
]
} |
1710.03850 | 2763224350 | Knowledge transfer between tasks can improve the performance of learned models, but requires an accurate estimate of the inter-task relationships to identify the relevant knowledge to transfer. These inter-task relationships are typically estimated based on training data for each task, which is inefficient in lifelong learning settings where the goal is to learn each consecutive task rapidly from as little data as possible. To reduce this burden, we develop a lifelong learning method based on coupled dictionary learning that utilizes high-level task descriptions to model the inter-task relationships. We show that using task descriptors improves the performance of the learned task policies, providing both theoretical justification for the benefit and empirical demonstration of the improvement across a variety of learning problems. Given only the descriptor for a new task, the lifelong learner is also able to accurately predict a model for the new task through zero-shot learning using the coupled dictionary, eliminating the need to gather training data before addressing the task. | Both the Bayesian strategy of discovering biases and the shared spaces often used in transformation techniques are implicitly connected to methods that learn shared knowledge representations for MTL. For example, the original MTL framework developed by Caruana Caruana1997 and later variations @cite_5 capture task relationships by sharing hidden nodes in neural networks that are trained on multiple tasks. Related work in dictionary learning techniques for MTL @cite_21 @cite_39 factorize the learned models into a shared latent dictionary over the model space to facilitate transfer. Individual task models are then captured as sparse representations over this dictionary; the task relationships are captured in these sparse codes. | {
"cite_N": [
"@cite_5",
"@cite_21",
"@cite_39"
],
"mid": [
"2162888803",
"",
"2949201716"
],
"abstract": [
"A major problem in machine learning is that of inductive bias: how to choose a learner's hypothesis space so that it is large enough to contain a solution to the problem being learnt, yet small enough to ensure reliable generalization from reasonably-sized training sets. Typically such bias is supplied by hand through the skill and insights of experts. In this paper a model for automatically learning bias is investigated. The central assumption of the model is that the learner is embedded within an environment of related learning tasks. Within such an environment the learner can sample from multiple tasks, and hence it can search for a hypothesis space that contains good solutions to many of the problems in the environment. Under certain restrictions on the set of all hypothesis spaces available to the learner, we show that a hypothesis space that performs well on a sufficiently large number of training tasks will also perform well when learning novel tasks in the same environment. Explicit bounds are also derived demonstrating that learning multiple tasks within an environment of related tasks can potentially give much better generalization than learning a single task.",
"",
"In the paradigm of multi-task learning, mul- tiple related prediction tasks are learned jointly, sharing information across the tasks. We propose a framework for multi-task learn- ing that enables one to selectively share the information across the tasks. We assume that each task parameter vector is a linear combi- nation of a finite number of underlying basis tasks. The coefficients of the linear combina- tion are sparse in nature and the overlap in the sparsity patterns of two tasks controls the amount of sharing across these. Our model is based on on the assumption that task pa- rameters within a group lie in a low dimen- sional subspace but allows the tasks in differ- ent groups to overlap with each other in one or more bases. Experimental results on four datasets show that our approach outperforms competing methods."
]
} |
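The shared-latent-dictionary idea described in the row above (individual task models reconstructed as sparse combinations of a shared dictionary, in the spirit of GO-MTL/ELLA) can be illustrated with a toy sketch. The dictionary values, dimensions, and sparse codes below are invented purely for illustration and are not learned quantities from the cited work:

```python
# Toy shared dictionary: 3 latent basis models over 4 model parameters.
# (Values are illustrative, not learned.)
L = [[1.0, 0.0, 2.0, 0.0],
     [0.0, 1.0, 0.0, 1.0],
     [0.5, 0.5, 0.0, 0.0]]

def task_model(sparse_code):
    """Reconstruct a task's parameter vector as a sparse linear
    combination of the shared dictionary's latent components."""
    n_params = len(L[0])
    return [sum(s * basis[j] for s, basis in zip(sparse_code, L))
            for j in range(n_params)]

# Two tasks that reuse the first latent component; their relationship
# is captured entirely by the overlap in their sparse codes.
w_a = task_model([1.0, 0.0, 0.0])   # uses only component 0
w_b = task_model([1.0, 2.0, 0.0])   # reuses component 0, adds component 1
```

In the lifelong-learning setting, the dictionary is refined online as tasks arrive, and each new task only needs to fit a small sparse code rather than a full model.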
1710.03850 | 2763224350 | Knowledge transfer between tasks can improve the performance of learned models, but requires an accurate estimate of the inter-task relationships to identify the relevant knowledge to transfer. These inter-task relationships are typically estimated based on training data for each task, which is inefficient in lifelong learning settings where the goal is to learn each consecutive task rapidly from as little data as possible. To reduce this burden, we develop a lifelong learning method based on coupled dictionary learning that utilizes high-level task descriptions to model the inter-task relationships. We show that using task descriptors improves the performance of the learned task policies, providing both theoretical justification for the benefit and empirical demonstration of the improvement across a variety of learning problems. Given only the descriptor for a new task, the lifelong learner is also able to accurately predict a model for the new task through zero-shot learning using the coupled dictionary, eliminating the need to gather training data before addressing the task. | The Efficient Lifelong Learning Algorithm (ELLA) framework @cite_9 used this same approach of a shared latent dictionary, trained online, to facilitate transfer as tasks arrive consecutively. The ELLA framework was first created for regression and classification @cite_9, and later developed for policy gradient reinforcement learning (PG-ELLA) @cite_41 @cite_25. Other approaches that extend MTL to online settings also exist @cite_40. Saha et al. (2011) use a task interaction matrix to model task relations online, and Dekel et al. (2006) propose a shared global loss function that can be minimized as tasks arrive. | {
"cite_N": [
"@cite_41",
"@cite_40",
"@cite_9",
"@cite_25"
],
"mid": [
"2106008664",
"2806364238",
"",
"2235081654"
],
"abstract": [
"Policy gradient algorithms have shown considerable recent success in solving high-dimensional sequential decision making tasks, particularly in robotics. However, these methods often require extensive experience in a domain to achieve high performance. To make agents more sample-efficient, we developed a multi-task policy gradient method to learn decision making tasks consecutively, transferring knowledge between tasks to accelerate learning. Our approach provides robust theoretical guarantees, and we show empirically that it dramatically accelerates learning on a variety of dynamical systems, including an application to quadrotor control.",
"We study the problem of online learning of multiple tasks in parallel. On each online round, the algorithm receives an instance and makes a prediction for each one of the parallel tasks. We consider the case where these tasks all contribute toward a common goal. We capture the relationship between the tasks by using a single global loss function to evaluate the quality of the multiple predictions made on each round. Specifically, each individual prediction is associated with its own individual loss, and then these loss values are combined using a global loss function. We present several families of online algorithms which can use any absolute norm as a global loss function. We prove worst-case relative loss bounds for all of our algorithms.",
"",
"Online multi-task learning is an important capability for lifelong learning agents, enabling them to acquire models for diverse tasks over time and rapidly learn new tasks by building upon prior experience. However, recent progress toward lifelong reinforcement learning (RL) has been limited to learning from within a single task domain. For truly versatile lifelong learning, the agent must be able to autonomously transfer knowledge between different task domains. A few methods for cross-domain transfer have been developed, but these methods are computationally inefficient for scenarios where the agent must learn tasks consecutively. In this paper, we develop the first cross-domain lifelong RL framework. Our approach efficiently optimizes a shared repository of transferable knowledge and learns projection matrices that specialize that knowledge to different task domains. We provide rigorous theoretical guarantees on the stability of this approach, and empirically evaluate its performance on diverse dynamical systems. Our results show that the proposed method can learn effectively from interleaved task domains and rapidly acquire high performance in new domains."
]
} |
1710.03755 | 2761578149 | Pedestrian safety continues to be a significant concern in urban communities and pedestrian distraction is emerging as one of the main causes of grave and fatal accidents involving pedestrians. The advent of sophisticated mobile and wearable devices, equipped with high-precision on-board sensors capable of measuring fine-grained user movements and context, provides a tremendous opportunity for designing effective pedestrian safety systems and applications. Accurate and efficient recognition of pedestrian distractions in real-time given the memory, computation and communication limitations of these devices, however, remains the key technical challenge in the design of such systems. Earlier research efforts in pedestrian distraction detection using data available from mobile and wearable devices have primarily focused only on achieving high detection accuracy, resulting in designs that are either resource intensive and unsuitable for implementation on mainstream mobile devices, or computationally slow and not useful for real-time pedestrian safety applications, or require specialized hardware and less likely to be adopted by most users. In the quest for a pedestrian safety system that achieves a favorable balance between computational efficiency, detection accuracy, and energy consumption, this paper makes the following main contributions: (i) design of a novel complex activity recognition framework which employs motion data available from users' mobile and wearable devices and a lightweight frequency matching approach to accurately and efficiently recognize complex distraction related activities, and (ii) a comprehensive comparative evaluation of the proposed framework with well-known complex activity recognition techniques in the literature with the help of data collected from human subject pedestrians and prototype implementations on commercially-available mobile and wearable devices. 
| Several research efforts in the literature have employed mobile and/or wearable devices, and data available from them, for improving pedestrian safety. @cite_42 utilized the rear camera of the smartphone to detect vehicles approaching a distracted user (or pedestrian) in order to promptly deliver a danger alert or notification. @cite_2 used image processing techniques and multi-sensor (barometer, accelerometer and gyroscope) information on smartphones to detect surrounding objects. Similarly, @cite_29 used real-time video processing of road traffic to help partially sighted pedestrians spot obstacles on their path. @cite_45 is another proposal which applied image processing techniques to a smartphone camera feed to find obstacles in a user's path; however, unlike @cite_29, SpareEye is able to track multiple obstacles simultaneously. One significant drawback of all these proposals is that they employ costly and resource-intensive image capture and processing techniques, which can adversely impact the performance and battery life of mobile devices and thus their chances of being adopted by users. Reliance on the smartphone's camera also restricts the ability of these techniques to operate when the camera is obstructed, for example, in a user's pocket. | {
"cite_N": [
"@cite_45",
"@cite_29",
"@cite_42",
"@cite_2"
],
"mid": [
"2027120765",
"1490526505",
"2090340221",
"1966120703"
],
"abstract": [
"Using mobile phones while walking for activities that require continuous focus on the screen, such as texting, has become more and more popular in the last years. To avoid colliding with obstacles, such as lampposts and pedestrians, focus has to be taken off the screen in regular intervals. In this paper we introduce SpareEye, an Android application that warns the smartphone user from obstacles in her way. We use only the camera of the phone and no special hardware, ensuring that it requires minimal effort from the user to use the application during everyday life. Experimental results show that we can detect obstacles with high accuracy, with only some false positives and few false negatives.",
"In this paper, we present a real-time obstacle detection system for the mobility improvement for the visually impaired using a handheld Smartphone. Though there are many existing assistants for the visually impaired, there is not a single one that is low cost, ultra-portable, non-intrusive and able to detect the low-height objects on the floor. This paper proposes a system to detect any objects attached to the floor regardless of their height. Unlike some existing systems where only histogram or edge information is used, the proposed systemcombines both cues and overcomes some limitations of existing systems. The obstacles on the floor in front of the user can be reliably detected in real time using the proposed system implemented on a Smartphone. The proposed system has been tested in different types of floor conditions and a field trial on five blind participants has been conducted. The experimental results demonstrate its reliability in comparison to existing systems.",
"Research in social science has shown that mobile phone conversations distract users, presenting a significant impact to pedestrian safety; for example, a mobile phone user deep in conversation while crossing a street is generally more at risk than other pedestrians not engaged in such behavior. We propose WalkSafe, an Android smartphone application that aids people that walk and talk, improving the safety of pedestrian mobile phone users. WalkSafe uses the back camera of the mobile phone to detect vehicles approaching the user, alerting the user of a potentially unsafe situation; more specifically WalkSafe i) uses machine learning algorithms implemented on the phone to detect the front views and back views of moving vehicles and ii) exploits phone APIs to save energy by running the vehicle detection algorithm only during active calls. We present our initial design, implementation and evaluation of the WalkSafe App that is capable of real-time detection of the front and back views of cars, indicating cars are approaching or moving away from the user, respectively. WalkSafe is implemented on Android phones and alerts the user of unsafe conditions using sound and vibration from the phone. WalkSafe is available on Android Market.",
"Accident detection and alarm system is very important to detect possible accidents or dangers for the peoples using their mobile devices while walking, i.e., distracted walking. In this paper, we introduce an automatic accident detection and alarm system, called AutoADAS, which is fully implemented and tested on the real mobile devices. The proposed system can be activated either manually or automatically when user walks. Under the manual mode, user activates the system before distracted walking while under the automatic mode, a \"user behaviour profiling\" module is used to recognize (distracted) walking behaviours and an \"object detection\" module is activated. Using image processing and camera field of view (FOV), the distance and angle between the user and detected objects are estimated and then applied to identify whether any potential accidents can happen. The \"accident analysis and prediction\" module includes: temporal alarm that inputs the user's walking speed and distance with respect to the detected objects and outputs temporal accident prediction; spatial alarm that inputs the user's walking direction and angle with respect to the detected objects and outputs spatial accident prediction. Once the proposed system positively predicts a potential accident, the \"alarm and suggestion\" module alerts the user with text, sound or vibration."
]
} |
1710.03726 | 2762569234 | One of the barriers to adoption of Electric Vehicles (EVs) is the anxiety around the limited driving range. Recent proposals have explored charging EVs on the move, using dynamic wireless charging which enables power exchange between the vehicle and the grid while the vehicle is moving. In this article, we focus on the intelligent routing of EVs in need of charging so that they can make most efficient use of the so-called Mobile Energy Disseminators (MEDs) which operates as mobile charging stations. We present a method for routing EVs around MEDs on the road network, which is based on constraint logic programming and optimisation using a graph-based shortest path algorithm. The proposed method exploits Inter-Vehicle (IVC) communications in order to eco-route electric vehicles. We argue that combining modern communications between vehicles and state of the art technologies on energy transfer, the driving range of EVs can be extended without the need for larger batteries or overtly costly infrastructure. We present extensive simulations in city conditions that show the driving range and consequently the overall travel time of electric vehicles is improved with intelligent routing in the presence of MEDs. | The EVs attach themselves to one or more MEDs during part of their journey, until they have enough energy to reach their destination (or get to the closest static charging station). In this way, electric cars are charged “on the fly” and their range is increased while moving along the road. Hence, our proposal does not require significant changes to the existing road network and civil infrastructure @cite_3 @cite_8 @cite_14 and, unlike other proposals @cite_21, does not pose any health hazards. | {
"cite_N": [
"@cite_14",
"@cite_21",
"@cite_3",
"@cite_8"
],
"mid": [
"1975082051",
"2036019734",
"",
"2159694944"
],
"abstract": [
"The growth of the Electric Vehicle (EV)'s market is strongly conditioned by the availability of cost-effective and reliable charging infrastructures. In the current state of the technology, the limited capacity of the batteries limits EV's market to urban moves. The rapid deployment of charging stations in urban areas is a real challenge. In this paper, we introduce an original approach consisting in reusing existing public lighting infrastructures for that purpose. Two main advantages characterize our solution. First, charging stations can be deployed rapidly without costly civil engineering. Second, a fraction of the power non consumed by the lamps at night can be used for the benefit of the charging stations. Meanwhile, such an approach must be achieved while guaranteeing the stability and quality of the lighting system.",
"Wireless charging system for On-Line Electric Bus(OLEB) consists of power inverter, road embedded rail, pickup module and regulator, which have been developed by KAIST several years ago. In this paper, wireless charging system for OLEB with series-connected road embedded segment structure is suggested to implement small-sized power inverter which is easier and cheaper to install near the bus stop. The experimental results are obtained from the experimental setup, from which the applicability of the proposed system is verified through experimental results.",
"",
"The market for battery powered and plug-in hybrid electric vehicles is currently limited, but this is expected to grow rapidly with the increased concern about the environment and advances in technology. Due to their high energy capacity, mass deployment of electrical vehicles will have significant impact on power networks. This impact will dictate the design of the electric vehicle interface devices and the way future power networks will be designed and controlled. This paper presents the results of an analysis of the impact of electric vehicles on existing power distribution networks. Evaluation of supply demand matching and potential violations of statutory voltage limits, power quality and imbalance are presented."
]
} |
1710.03430 | 2763613715 | In this paper, we propose a novel end-to-end neural architecture for ranking candidate answers, that adapts a hierarchical recurrent neural network and a latent topic clustering module. With our proposed model, a text is encoded to a vector representation from an word-level to a chunk-level to effectively capture the entire meaning. In particular, by adapting the hierarchical structure, our model shows very small performance degradations in longer text comprehension while other state-of-the-art recurrent neural network models suffer from it. Additionally, the latent topic clustering module extracts semantic information from target samples. This clustering module is useful for any text related tasks by allowing each data sample to find its nearest topic cluster, thus helping the neural network model analyze the entire data. We evaluate our models on the Ubuntu Dialogue Corpus and consumer electronic domain question answering dataset, which is related to Samsung products. The proposed model shows state-of-the-art results for ranking question-answer pairs. | Researchers have released question and answer datasets for research purposes and have proposed various models to solve them. @cite_0 @cite_9 @cite_19 introduced small datasets, such as WikiQA and InsuranceQA, for ranking sentences that have higher probabilities of answering a given question. To alleviate the difficulty of aggregating datasets that are large and free of license restrictions, some researchers introduced new datasets for sentence similarity ranking @cite_5 @cite_4. As of now, the Ubuntu Dialogue dataset is one of the largest corpora openly available for text ranking. | {
"cite_N": [
"@cite_4",
"@cite_9",
"@cite_0",
"@cite_19",
"@cite_5"
],
"mid": [
"836999996",
"",
"2120735855",
"2173361515",
"2306229986"
],
"abstract": [
"This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. This provides a unique resource for research into building dialogue managers based on neural language models that can make use of large amounts of unlabeled data. The dataset has both the multi-turn property of conversations in the Dialog State Tracking Challenge datasets, and the unstructured nature of interactions from microblog services such as Twitter. We also describe two neural learning architectures suitable for analyzing this dataset, and provide benchmark performance on the task of selecting the best next response.",
"",
"This paper presents a syntax-driven approach to question answering, specifically the answer-sentence selection problem for short-answer questions. Rather than using syntactic features to augment existing statistical classifiers (as in previous work), we build on the idea that questions and their (correct) answers relate to each other via loose but predictable syntactic transformations. We propose a probabilistic quasi-synchronous grammar, inspired by one proposed for machine translation (D. Smith and Eisner, 2006), and parameterized by mixtures of a robust nonlexical syntax alignment model with a(n optional) lexical-semantics-driven log-linear model. Our model learns soft alignments as a hidden variable in discriminative training. Experimentalresultsusing theTRECdataset are shown to significantly outperform strong state-of-the-art baselines.",
"In this paper, we apply a general deep learning (DL) framework for the answer selection task, which does not depend on manually defined features or linguistic tools. The basic framework is to build the embeddings of questions and answers based on bidirectional long short-term memory (biLSTM) models, and measure their closeness by cosine similarity. We further extend this basic model in two directions. One direction is to define a more composite representation for questions and answers by combining convolutional neural network with the basic framework. The other direction is to utilize a simple but efficient attention mechanism in order to generate the answer representation according to the question context. Several variations of models are provided. The models are examined by two datasets, including TREC-QA and InsuranceQA. Experimental results demonstrate that the proposed models substantially outperform several strong baselines.",
"We review the task of Sentence Pair Scoring, popular in the literature in various forms - viewed as Answer Sentence Selection, Semantic Text Scoring, Next Utterance Ranking, Recognizing Textual Entailment, Paraphrasing or e.g. a component of Memory Networks. We argue that all such tasks are similar from the model perspective and propose new baselines by comparing the performance of common IR metrics and popular convolutional, recurrent and attention-based neural models across many Sentence Pair Scoring tasks and datasets. We discuss the problem of evaluating randomized models, propose a statistically grounded methodology, and attempt to improve comparisons by releasing new datasets that are much harder than some of the currently used well explored benchmarks. We introduce a unified open source software framework with easily pluggable models and tasks, which enables us to experiment with multi-task reusability of trained sentence model. We set a new state-of-art in performance on the Ubuntu Dialogue dataset."
]
} |
1710.03430 | 2763613715 | In this paper, we propose a novel end-to-end neural architecture for ranking candidate answers, that adapts a hierarchical recurrent neural network and a latent topic clustering module. With our proposed model, a text is encoded to a vector representation from an word-level to a chunk-level to effectively capture the entire meaning. In particular, by adapting the hierarchical structure, our model shows very small performance degradations in longer text comprehension while other state-of-the-art recurrent neural network models suffer from it. Additionally, the latent topic clustering module extracts semantic information from target samples. This clustering module is useful for any text related tasks by allowing each data sample to find its nearest topic cluster, thus helping the neural network model analyze the entire data. We evaluate our models on the Ubuntu Dialogue Corpus and consumer electronic domain question answering dataset, which is related to Samsung products. The proposed model shows state-of-the-art results for ranking question-answer pairs. | To tackle the Ubuntu dataset, @cite_4 adopted the "term frequency-inverse document frequency" approach to capture important words among contexts and next utterances @cite_23. @cite_15 @cite_28 proposed deep neural network architectures for embedding sentences and measuring similarities to select the answer sentence for a given question. @cite_27 used a convolutional neural network (CNN) architecture to embed the sentence, whose final output vector was compared to the target text to calculate the matching score. They also tried long short-term memory (LSTM) @cite_7, bi-directional LSTM, and an ensemble of all of those neural network architectures, achieving the best results on the Ubuntu Dialogue Corpus dataset. Another type of neural architecture is the RNN-CNN model, which encodes each token with a recurrent neural network (RNN) and then feeds the encodings to a CNN @cite_5.
Researchers also introduced attention-based models to improve performance @cite_19 @cite_10 @cite_16. | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_28",
"@cite_19",
"@cite_27",
"@cite_23",
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_10"
],
"mid": [
"836999996",
"",
"1591825359",
"2173361515",
"2197546379",
"",
"2306229986",
"2952792693",
"2952113915",
"2951528484"
],
"abstract": [
"This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. This provides a unique resource for research into building dialogue managers based on neural language models that can make use of large amounts of unlabeled data. The dataset has both the multi-turn property of conversations in the Dialog State Tracking Challenge datasets, and the unstructured nature of interactions from microblog services such as Twitter. We also describe two neural learning architectures suitable for analyzing this dataset, and provide benchmark performance on the task of selecting the best next response.",
"",
"Answer sentence selection is the task of identifying sentences that contain the answer to a given question. This is an important problem in its own right as well as in the larger context of open domain question answering. We propose a novel approach to solving this task via means of distributed representations, and learn to match questions with answers by considering their semantic encoding. This contrasts prior work on this task, which typically relies on classifiers with large numbers of hand-crafted syntactic and semantic features and various external resources. Our approach does not require any feature engineering nor does it involve specialist linguistic data, making this model easily applicable to a wide range of domains and languages. Experimental results on a standard benchmark dataset from TREC demonstrate that---despite its simplicity---our model matches state of the art performance on the answer sentence selection task.",
"In this paper, we apply a general deep learning (DL) framework for the answer selection task, which does not depend on manually defined features or linguistic tools. The basic framework is to build the embeddings of questions and answers based on bidirectional long short-term memory (biLSTM) models, and measure their closeness by cosine similarity. We further extend this basic model in two directions. One direction is to define a more composite representation for questions and answers by combining convolutional neural network with the basic framework. The other direction is to utilize a simple but efficient attention mechanism in order to generate the answer representation according to the question context. Several variations of models are provided. The models are examined by two datasets, including TREC-QA and InsuranceQA. Experimental results demonstrate that the proposed models substantially outperform several strong baselines.",
"This paper presents results of our experiments for the next utterance ranking on the Ubuntu Dialog Corpus -- the largest publicly available multi-turn dialog corpus. First, we use an in-house implementation of previously reported models to do an independent evaluation using the same data. Second, we evaluate the performances of various LSTMs, Bi-LSTMs and CNNs on the dataset. Third, we create an ensemble by averaging predictions of multiple models. The ensemble further improves the performance and it achieves a state-of-the-art result for the next utterance ranking on this dataset. Finally, we discuss our future plans using this corpus.",
"",
"We review the task of Sentence Pair Scoring, popular in the literature in various forms - viewed as Answer Sentence Selection, Semantic Text Scoring, Next Utterance Ranking, Recognizing Textual Entailment, Paraphrasing or e.g. a component of Memory Networks. We argue that all such tasks are similar from the model perspective and propose new baselines by comparing the performance of common IR metrics and popular convolutional, recurrent and attention-based neural models across many Sentence Pair Scoring tasks and datasets. We discuss the problem of evaluating randomized models, propose a statistically grounded methodology, and attempt to improve comparisons by releasing new datasets that are much harder than some of the currently used well explored benchmarks. We introduce a unified open source software framework with easily pluggable models and tasks, which enables us to experiment with multi-task reusability of trained sentence model. We set a new state-of-art in performance on the Ubuntu Dialogue dataset.",
"Building computers able to answer questions on any subject is a long standing goal of artificial intelligence. Promising progress has recently been achieved by methods that learn to map questions to logical forms or database queries. Such approaches can be effective but at the cost of either large amounts of human-labeled data or by defining lexicons and grammars tailored by practitioners. In this paper, we instead take the radical approach of learning to map questions to vectorial feature representations. By mapping answers into the same space one can query any knowledge base independent of its schema, without requiring any grammar or lexicon. Our method is trained with a new optimization procedure combining stochastic gradient descent followed by a fine-tuning step using the weak supervision provided by blending automatically and collaboratively generated resources. We empirically demonstrate that our model can capture meaningful signals from its noisy supervision leading to major improvements over paralex, the only existing method able to be trained on similar weakly labeled data.",
"Natural language sentence matching is a fundamental technology for a variety of tasks. Previous approaches either match sentences from a single direction or only apply single granular (word-by-word or sentence-by-sentence) matching. In this work, we propose a bilateral multi-perspective matching (BiMPM) model under the \"matching-aggregation\" framework. Given two sentences @math and @math , our model first encodes them with a BiLSTM encoder. Next, we match the two encoded sentences in two directions @math and @math . In each matching direction, each time step of one sentence is matched against all time-steps of the other sentence from multiple perspectives. Then, another BiLSTM layer is utilized to aggregate the matching results into a fix-length matching vector. Finally, based on the matching vector, the decision is made through a fully connected layer. We evaluate our model on three tasks: paraphrase identification, natural language inference and answer sentence selection. Experimental results on standard benchmark datasets show that our model achieves the state-of-the-art performance on all tasks.",
"Many NLP tasks including machine comprehension, answer selection and text entailment require the comparison between sequences. Matching the important units between sequences is a key to solve these problems. In this paper, we present a general \"compare-aggregate\" framework that performs word-level matching followed by aggregation using Convolutional Neural Networks. We particularly focus on the different comparison functions we can use to match two vectors. We use four different datasets to evaluate the model. We find that some simple comparison functions based on element-wise operations can work better than standard neural network and neural tensor network."
]
} |
1710.03430 | 2763613715 | In this paper, we propose a novel end-to-end neural architecture for ranking candidate answers, that adapts a hierarchical recurrent neural network and a latent topic clustering module. With our proposed model, a text is encoded to a vector representation from an word-level to a chunk-level to effectively capture the entire meaning. In particular, by adapting the hierarchical structure, our model shows very small performance degradations in longer text comprehension while other state-of-the-art recurrent neural network models suffer from it. Additionally, the latent topic clustering module extracts semantic information from target samples. This clustering module is useful for any text related tasks by allowing each data sample to find its nearest topic cluster, thus helping the neural network model analyze the entire data. We evaluate our models on the Ubuntu Dialogue Corpus and consumer electronic domain question answering dataset, which is related to Samsung products. The proposed model shows state-of-the-art results for ranking question-answer pairs. | Recently, the hierarchical recurrent encoder-decoder model was proposed to embed contextual information in user query prediction and dialogue generation tasks @cite_12 @cite_22 . This shows improvement in the dialogue generation model where the context for the utterance is important. As another type of neural network architecture, memory network was proposed by @cite_18 . Several researchers adopted this architecture for the reading comprehension (RC) style QA tasks, because it can extract contextual information from each sentence and use it in finding the answer @cite_21 @cite_26 . However, none of this research is applied to the QA pair ranking task directly. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_22",
"@cite_21",
"@cite_12"
],
"mid": [
"2951008357",
"2131494463",
"889023230",
"2293453011",
""
],
"abstract": [
"We introduce a neural network with a recurrent attention model over a possibly large external memory. The architecture is a form of Memory Network (Weston et al., 2015) but unlike the model in that work, it is trained end-to-end, and hence requires significantly less supervision during training, making it more generally applicable in realistic settings. It can also be seen as an extension of RNNsearch to the case where multiple computational steps (hops) are performed per output symbol. The flexibility of the model allows us to apply it to tasks as diverse as (synthetic) question answering and to language modeling. For the former our approach is competitive with Memory Networks, but with less supervision. For the latter, on the Penn TreeBank and Text8 datasets our approach demonstrates comparable performance to RNNs and LSTMs. In both cases we show that the key concept of multiple computational hops yields improved results.",
"Most tasks in natural language processing can be cast into question answering (QA) problems over language input. We introduce the dynamic memory network (DMN), a neural network architecture which processes input sequences and questions, forms episodic memories, and generates relevant answers. Questions trigger an iterative attention process which allows the model to condition its attention on the inputs and the result of previous iterations. These results are then reasoned over in a hierarchical recurrent sequence model to generate answers. The DMN can be trained end-to-end and obtains state-of-the-art results on several types of tasks and datasets: question answering (Facebook's bAbI dataset), text classification for sentiment analysis (Stanford Sentiment Treebank) and sequence modeling for part-of-speech tagging (WSJ-PTB). The training for these different tasks relies exclusively on trained word vector representations and input-question-answer triplets.",
"We investigate the task of building open domain, conversational dialogue systems based on large dialogue corpora using generative models. Generative models produce system responses that are autonomously generated word-by-word, opening up the possibility for realistic, flexible interactions. In support of this goal, we extend the recently proposed hierarchical recurrent encoder-decoder neural network to the dialogue domain, and demonstrate that this model is competitive with state-of-the-art neural language models and back-off n-gram models. We investigate the limitations of this and similar approaches, and show how its performance can be improved by bootstrapping the learning from a larger question-answer pair corpus and from pretrained word embeddings.",
"Neural network architectures with memory and attention mechanisms exhibit certain reasoning capabilities required for question answering. One such architecture, the dynamic memory network (DMN), obtained high accuracy on a variety of language tasks. However, it was not shown whether the architecture achieves strong results for question answering when supporting facts are not marked during training or whether it could be applied to other modalities such as images. Based on an analysis of the DMN, we propose several improvements to its memory and input modules. Together with these changes we introduce a novel input module for images in order to be able to answer visual questions. Our new DMN+ model improves the state of the art on both the Visual Question Answering dataset and the bAbI-10k text question-answering dataset without supporting fact supervision.",
""
]
} |
1710.03370 | 2761886726 | We propose the inverse problem of Visual question answering (iVQA), and explore its suitability as a benchmark for visuo-linguistic understanding. The iVQA task is to generate a question that corresponds to a given image and answer pair. Since the answers are less informative than the questions, and the questions have less learnable bias, an iVQA model needs to better understand the image to be successful than a VQA model. We pose question generation as a multi-modal dynamic inference process and propose an iVQA model that can gradually adjust its focus of attention guided by both a partially generated question and the answer. For evaluation, apart from existing linguistic metrics, we propose a new ranking metric. This metric compares the ground truth question's rank among a list of distractors, which allows the drawbacks of different algorithms and sources of error to be studied. Experimental results show that our model can generate diverse, grammatically correct and content correlated questions that match the given answer. | Image captioning @cite_27 @cite_32 aims to describe, rather than merely recognise objects in images. It encompasses a number of classic vision capabilities as prerequisites including object @cite_22 and action @cite_17 recognition, attribute description @cite_16 and relationship inference @cite_34 . It further requires natural language generation capabilities to synthesise open-ended linguistic descriptions. Popular benchmarks and competitions have inspired intensive research in this area. Captioning models have explicitly addressed these sub-tasks to varying degrees @cite_32 , but the most common and successful approaches use neural encoders (of images), and decoders (of captions), with little explicit knowledge representation and reasoning @cite_27 @cite_15 . 
The iVQA task investigated here is related to captioning in that we aim to produce natural language outputs, but distinct in that the outputs are sharply conditioned on the required answer, as illustrated in Fig. . | {
"cite_N": [
"@cite_22",
"@cite_32",
"@cite_34",
"@cite_27",
"@cite_15",
"@cite_16",
"@cite_17"
],
"mid": [
"2949650786",
"2951805548",
"2479423890",
"2951912364",
"2549365021",
"2098411764",
"1976546217"
],
"abstract": [
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.",
"Visual relationships capture a wide variety of interactions between pairs of objects in images (e.g. “man riding bicycle” and “man pushing bicycle”). Consequently, the set of possible relationships is extremely large and it is difficult to obtain sufficient training examples for all possible relationships. Because of this limitation, previous work on visual relationship detection has concentrated on predicting only a handful of relationships. Though most relationships are infrequent, their objects (e.g. “man” and “bicycle”) and predicates (e.g. “riding” and “pushing”) independently occur more frequently. We propose a model that uses this insight to train visual models for objects and predicates individually and later combines them together to predict multiple relationships per image. We improve on prior work by leveraging language priors from semantic word embeddings to finetune the likelihood of a predicted relationship. Our model can scale to predict thousands of types of relationships from a few examples. Additionally, we localize the objects in the predicted relationships as bounding boxes in the image. We further demonstrate that understanding relationships can improve content based image retrieval.",
"Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.",
"The CNN-RNN design pattern is increasingly widely applied in a variety of image annotation tasks including multi-label classification and captioning. Existing models use the weakly semantic CNN hidden layer or its transform as the image embedding that provides the interface between the CNN and RNN. This leaves the RNN overstretched with two jobs: predicting the visual concepts and modelling their correlations for generating structured annotation output. Importantly this makes the end-to-end training of the CNN and RNN slow and ineffective due to the difficulty of back propagating gradients through the RNN to train the CNN. We propose a simple modification to the design pattern that makes learning more effective and efficient. Specifically, we propose to use a semantically regularised embedding layer as the interface between the CNN and RNN. Regularising the interface can partially or completely decouple the learning problems, allowing each to be more effectively trained and jointly training much more efficient. Extensive experiments show that state-of-the art performance is achieved on multi-label classification as well as image captioning.",
"We propose to shift the goal of recognition from naming to describing. Doing so allows us not only to name familiar objects, but also: to report unusual aspects of a familiar object (“spotty dog”, not just “dog”); to say something about unfamiliar objects (“hairy and four-legged”, not just “unknown”); and to learn how to recognize new objects with few or no visual examples. Rather than focusing on identity assignment, we make inferring attributes the core problem of recognition. These attributes can be semantic (“spotty”) or discriminative (“dogs have it but sheep do not”). Learning attributes presents a major new challenge: generalization across object categories, not just across instances within a category. In this paper, we also introduce a novel feature selection method for learning attributes that generalize well across categories. We support our claims by thorough evaluation that provides insights into the limitations of the standard recognition paradigm of naming and demonstrates the new abilities provided by our attribute-based framework.",
"Detecting objects in cluttered scenes and estimating articulated human body parts from 2D images are two challenging problems in computer vision. The difficulty is particularly pronounced in activities involving human-object interactions (e.g., playing tennis), where the relevant objects tend to be small or only partially visible and the human body parts are often self-occluded. We observe, however, that objects and human poses can serve as mutual context to each other-recognizing one facilitates the recognition of the other. In this paper, we propose a mutual context model to jointly model objects and human poses in human-object interaction activities. In our approach, object detection provides a strong prior for better human pose estimation, while human pose estimation improves the accuracy of detecting the objects that interact with the human. On a six-class sports data set and a 24-class people interacting with musical instruments data set, we show that our mutual context model outperforms state of the art in detecting very difficult objects and estimating human poses, as well as classifying human-object interaction activities."
]
} |
1710.03348 | 2764285152 | Attention in neural machine translation provides the possibility to encode relevant parts of the source sentence at each translation step. As a result, attention is considered to be an alignment model as well. However, there is no work that specifically studies attention and provides analysis of what is being learned by attention models. Thus, the question still remains that how attention is similar or different from the traditional alignment. In this paper, we provide detailed analysis of attention and compare it to traditional alignment. We answer the question of whether attention is only capable of modelling translational equivalent or it captures more information. We show that attention is different from alignment in some cases and is capturing useful information other than alignments. | investigate how training the attention model in a supervised manner can benefit machine translation quality. To this end they use traditional alignments obtained by running automatic alignment tools (GIZA++ @cite_3 and fast @cite_9 ) on the training data and feed it as ground truth to the attention network. They report some improvements in translation quality arguing that the attention model has learned to better align source and target words. The approach of training attention using traditional alignments has also been proposed by others @cite_7 @cite_1 . show that guided attention with traditional alignment helps in the domain of e-commerce data which includes lots of out of vocabulary (OOV) product names and placeholders, but not much in the other domains. have separated the alignment model and translation model, reasoning that this avoids propagation of errors from one model to the other as well as providing more flexibility in the model types and training of the models. They use a feed-forward neural network as their alignment model that learns to model jumps in the source side using HMM IBM alignments obtained by using GIZA++. | {
"cite_N": [
"@cite_9",
"@cite_1",
"@cite_7",
"@cite_3"
],
"mid": [
"2148708890",
"",
"2469296930",
"2156985047"
],
"abstract": [
"We present a simple log-linear reparameterization of IBM Model 2 that overcomes problems arising from Model 1’s strong assumptions and Model 2’s overparameterization. Efficient inference, likelihood evaluation, and parameter estimation algorithms are provided. Training the model is consistently ten times faster than Model 4. On three large-scale translation tasks, systems built using our alignment model outperform IBM Model 4. An open-source implementation of the alignment model described in this paper is available from http: github.com clab fast align .",
"",
"In this paper, we propose an effective way for biasing the attention mechanism of a sequence-to-sequence neural machine translation (NMT) model towards the well-studied statistical word alignment models. We show that our novel guided alignment training approach improves translation quality on real-life e-commerce texts consisting of product titles and descriptions, overcoming the problems posed by many unknown words and a large type token ratio. We also show that meta-data associated with input texts such as topic or category information can significantly improve translation quality when used as an additional signal to the decoder part of the network. With both novel features, the BLEU score of the NMT system on a product title set improves from 18.6% to 21.3%. Even larger MT quality gains are obtained through domain adaptation of a general domain NMT system to e-commerce data. The developed NMT system also performs well on the IWSLT speech translation task, where an ensemble of four variant systems outperforms the phrase-based baseline by 2.1 BLEU absolute.",
"We present and compare various methods for computing word alignments using statistical or heuristic models. We consider the five alignment models presented in Brown, Della Pietra, Della Pietra, and Mercer (1993), the hidden Markov alignment model, smoothing techniques, and refinements. These statistical models are compared with two heuristic models based on the Dice coefficient. We present different methods for combining word alignments to perform a symmetrization of directed statistical alignment models. As evaluation criterion, we use the quality of the resulting Viterbi alignment compared to a manually produced reference alignment. We evaluate the models on the German-English Verbmobil task and the French-English Hansards task. We perform a detailed analysis of various design decisions of our statistical alignment system and evaluate these on training corpora of various sizes. An important result is that refined alignment models with a first-order dependence and a fertility model yield significantly better results than simple heuristic models. In the Appendix, we present an efficient training algorithm for the alignment models presented."
]
} |
1710.03425 | 2762208110 | Recognizing text in the wild is a really challenging task because of complex backgrounds, various illuminations and diverse distortions, even with deep neural networks (convolutional neural networks and recurrent neural networks). In the end-to-end training procedure for scene text recognition, the outputs of deep neural networks at different iterations are always demonstrated with diversity and complementarity for the target object (text). Here, a simple but effective deep learning method, an adaptive ensemble of deep neural networks (AdaDNNs), is proposed to simply select and adaptively combine classifier components at different iterations from the whole learning system. Furthermore, the ensemble is formulated as a Bayesian framework for classifier weighting and combination. A variety of experiments on several typical acknowledged benchmarks, i.e., ICDAR Robust Reading Competition (Challenge 1, 2 and 4) datasets, verify the surprised improvement from the baseline DNNs, and the effectiveness of AdaDNNs compared with the recent state-of-the-art methods. | Recognizing text in scene videos attracts more and more interests in the fields of document analysis and recognition, computer vision, and machine learning. The existing methods for scene text (cropped word) recognition can be grouped into segmentation-based word recognition and holistic word recognition. In general, segmentation-based word recognition methods integrate character segmentation and character recognition with language priors using optimization techniques, such as Markov models @cite_38 and CRFs @cite_25 @cite_1 . 
In recent years, the mainstream segmentation-based word recognition techniques usually over-segment the word image into small segments, combine adjacent segments into candidate characters and classify them using CNNs or gradient feature-based classifiers, and find an approximately optimal word recognition result using beam search @cite_8 , Hidden Markov Models @cite_10 , or dynamic programming @cite_30 . | {
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_8",
"@cite_1",
"@cite_10",
"@cite_25"
],
"mid": [
"70975097",
"2006653496",
"2122221966",
"",
"1557952530",
"2049951199"
],
"abstract": [
"The goal of this work is text spotting in natural images. This is divided into two sequential tasks: detecting words regions in the image, and recognizing the words within these regions. We make the following contributions: first, we develop a Convolutional Neural Network (CNN) classifier that can be used for both tasks. The CNN has a novel architecture that enables efficient feature sharing (by using a number of layers in common) for text detection, character case-sensitive and insensitive classification, and bigram classification. It exceeds the state-of-the-art performance for all of these. Second, we make a number of technical changes over the traditional CNN architectures, including no downsampling for a per-pixel sliding window, and multi-mode learning with a mixture of linear models (maxout). Third, we have a method of automated data mining of Flickr, that generates word and character level annotations. Finally, these components are used together to form an end-to-end, state-of-the-art text spotting system. We evaluate the text-spotting system on two standard benchmarks, the ICDAR Robust Reading data set and the Street View Text data set, and demonstrate improvements over the state-of-the-art on multiple measures.",
"The growth in digital camera usage combined with a worldly abundance of text has translated to a rich new era for a classic problem of pattern recognition, reading. While traditional document processing often faces challenges such as unusual fonts, noise, and unconstrained lexicons, scene text reading amplifies these challenges and introduces new ones such as motion blur, curved layouts, perspective projection, and occlusion among others. Reading scene text is a complex problem involving many details that must be handled effectively for robust, accurate results. In this work, we describe and evaluate a reading system that combines several pieces, using probabilistic methods for coarsely binarizing a given text region, identifying baselines, and jointly performing word and character segmentation during the recognition process. By using scene context to recognize several words together in a line of text, our system gives state-of-the-art performance on three difficult benchmark data sets.",
"We describe Photo OCR, a system for text extraction from images. Our particular focus is reliable text extraction from smartphone imagery, with the goal of text recognition as a user input modality similar to speech recognition. Commercially available OCR performs poorly on this task. Recent progress in machine learning has substantially improved isolated character classification, we build on this progress by demonstrating a complete OCR system using these techniques. We also incorporate modern data center-scale distributed language modelling. Our approach is capable of recognizing text in a variety of challenging imaging conditions where traditional OCR systems fail, notably in the presence of substantial blur, low resolution, low contrast, high image noise and other distortions. It also operates with low latency, mean processing time is 600 ms per image. We evaluate our system on public benchmark datasets for text extraction and outperform all previously reported results, more than halving the error rate on multiple benchmarks. The system is currently in use in many applications at Google, and is available as a user input modality in Google Translate for Android.",
"",
"The problem of detecting and recognizing text in natural scenes has proved to be more challenging than its counterpart in documents, with most of the previous work focusing on a single part of the problem. In this work, we propose new solutions to the character and word recognition problems and then show how to combine these solutions in an end-to-end text-recognition system. We do so by leveraging the recently introduced Maxout networks along with hybrid HMM models that have proven useful for voice recognition. Using these elements, we build a tunable and highly accurate recognition system that beats state-of-the-art results on all the sub-problems for both the ICDAR 2003 and SVT benchmark datasets.",
"Scene text recognition has gained significant attention from the computer vision community in recent years. Recognizing such text is a challenging problem, even more so than the recognition of scanned documents. In this work, we focus on the problem of recognizing text extracted from street images. We present a framework that exploits both bottom-up and top-down cues. The bottom-up cues are derived from individual character detections from the image. We build a Conditional Random Field model on these detections to jointly model the strength of the detections and the interactions between them. We impose top-down cues obtained from a lexicon-based prior, i.e. language statistics, on the model. The optimal word represented by the text image is obtained by minimizing the energy function corresponding to the random field model. We show significant improvements in accuracies on two challenging public datasets, namely Street View Text (over 15%) and ICDAR 2003 (nearly 10%)."
]
} |
1710.03425 | 2762208110 | Recognizing text in the wild is a really challenging task because of complex backgrounds, various illuminations and diverse distortions, even with deep neural networks (convolutional neural networks and recurrent neural networks). In the end-to-end training procedure for scene text recognition, the outputs of deep neural networks at different iterations are always demonstrated with diversity and complementarity for the target object (text). Here, a simple but effective deep learning method, an adaptive ensemble of deep neural networks (AdaDNNs), is proposed to simply select and adaptively combine classifier components at different iterations from the whole learning system. Furthermore, the ensemble is formulated as a Bayesian framework for classifier weighting and combination. A variety of experiments on several typical acknowledged benchmarks, i.e., ICDAR Robust Reading Competition (Challenge 1, 2 and 4) datasets, verify the surprised improvement from the baseline DNNs, and the effectiveness of AdaDNNs compared with the recent state-of-the-art methods. | Word spotting @cite_4 , a direct holistic word recognition approach, is to identify specific words in scene images without character segmentation, given a lexicon of words @cite_27 . Word spotting methods usually calculate a similarity measure between the candidate word image and a query word. Impressively, some recent methods design a proper CNN architecture and train CNNs directly on the holistic word images @cite_13 @cite_14 , or use label embedding techniques to enrich relations between word images and text strings @cite_37 @cite_6 . Sequence matching, an indirect holistic word recognition approach, recognizes the whole word image by embedding hidden segmentation strategies. 
constructed an end-to-end trainable deep neural network for image-based sequence recognition (scene text recognition), where a convolutional recurrent neural networks framework (CRNN) is designed and utilized @cite_20 . In this paper, a similar CRNN architecture is used in AdaDNNs for recognizing scene text sequentially and holistically. | {
"cite_N": [
"@cite_37",
"@cite_14",
"@cite_4",
"@cite_6",
"@cite_27",
"@cite_13",
"@cite_20"
],
"mid": [
"2053317383",
"1922126009",
"1932188282",
"2952770877",
"1521064364",
"1491389626",
"2194187530"
],
"abstract": [
"This paper addresses the problems of word spotting and word recognition on images. In word spotting, the goal is to find all instances of a query word in a dataset of images. In recognition, the goal is to recognize the content of the word image, usually aided by a dictionary or lexicon. We describe an approach in which both word images and text strings are embedded in a common vectorial subspace. This is achieved by a combination of label embedding and attributes learning, and a common subspace regression. In this subspace, images and strings that represent the same word are close together, allowing one to cast recognition and retrieval tasks as a nearest neighbor problem. Contrary to most other existing methods, our representation has a fixed length, is low dimensional, and is very fast to compute and, especially, to compare. We test our approach on four public datasets of both handwritten documents and natural images showing results comparable or better than the state-of-the-art on spotting and recognition tasks.",
"In this work we present an end-to-end system for text spotting--localising and recognising text in natural scene images--and text based image retrieval. This system is based on a region proposal mechanism for detection and deep convolutional neural networks for recognition. Our pipeline uses a novel combination of complementary proposal generation techniques to ensure high recall, and a fast subsequent filtering stage for improving precision. For the recognition and ranking of proposals, we train very large convolutional neural networks to perform word recognition on the whole proposal region at the same time, departing from the character classifier based systems of the past. These networks are trained solely on data produced by a synthetic text generation engine, requiring no human labelled data. Analysing the stages of our pipeline, we show state-of-the-art performance throughout. We perform rigorous experiments across a number of standard end-to-end text spotting benchmarks and text-based image retrieval datasets, showing a large improvement over all previous methods. Finally, we demonstrate a real-world application of our text spotting system to allow thousands of hours of news footage to be instantly searchable via a text query.",
"There are many historical manuscripts written in a single hand which it would be useful to index. Examples include the W.B. DuBois collection at the University of Massachusetts and the early Presidential libraries at the Library of Congress. Since Optical Character Recognition (OCR) does not work well on handwriting, an alternative scheme based on matching the images of the words is proposed for indexing such texts. The current paper deals with the matching aspects of this process. Two different techniques for matching words are discussed. The first method matches words assuming that the transformation between the words may be modelled by a translation (shift). The second method matches words assuming that the transformation between the words may be modelled by an affine transform. Experiments are shown demonstrating the feasibility of the approach for indexing handwriting. The method should also be applicable to retrieving previously stored material from personal digital assistants (PDAs).",
"This paper addresses the problem of learning word image representations: given the cropped image of a word, we are interested in finding a descriptive, robust, and compact fixed-length representation. Machine learning techniques can then be supplied with these representations to produce models useful for word retrieval or recognition tasks. Although many works have focused on the machine learning aspect once a global representation has been produced, little work has been devoted to the construction of those base image representations: most works use standard coding and aggregation techniques directly on top of standard computer vision features such as SIFT or HOG. We propose to learn local mid-level features suitable for building word image representations. These features are learnt by leveraging character bounding box annotations on a small set of training images. However, contrary to other approaches that use character bounding box information, our approach does not rely on detecting the individual characters explicitly at testing time. Our local mid-level features can then be aggregated to produce a global word image signature. When pairing these features with the recent word attributes framework of Almazán, we obtain results comparable with or better than the state-of-the-art on matching and recognition tasks using global descriptors of only 96 dimensions.",
"We present a method for spotting words in the wild, i.e., in real images taken in unconstrained environments. Text found in the wild has a surprising range of difficulty. At one end of the spectrum, Optical Character Recognition (OCR) applied to scanned pages of well formatted printed text is one of the most successful applications of computer vision to date. At the other extreme lie visual CAPTCHAs - text that is constructed explicitly to fool computer vision algorithms. Both tasks involve recognizing text, yet one is nearly solved while the other remains extremely challenging. In this work, we argue that the appearance of words in the wild spans this range of difficulties and propose a new word recognition approach based on state-of-the-art methods from generic object recognition, in which we consider object categories to be the words themselves. We compare performance of leading OCR engines - one open source and one proprietary - with our new approach on the ICDAR Robust Reading data set and a new word spotting data set we introduce in this paper: the Street View Text data set. We show improvements of up to 16 on the data sets, demonstrating the feasibility of a new approach to a seemingly old problem.",
"In this work we present a framework for the recognition of natural scene text. Our framework does not require any human-labelled data, and performs word recognition on the whole image holistically, departing from the character based recognition systems of the past. The deep neural network models at the centre of this framework are trained solely on data produced by a synthetic text generation engine -- synthetic data that is highly realistic and sufficient to replace real data, giving us infinite amounts of training data. This excess of data exposes new possibilities for word recognition models, and here we consider three models, each one \"reading\" words in a different way: via 90k-way dictionary encoding, character sequence encoding, and bag-of-N-grams encoding. In the scenarios of language based and completely unconstrained text recognition we greatly improve upon state-of-the-art performance on standard datasets, using our fast, simple machinery and requiring zero data-acquisition costs.",
"Image-based sequence recognition has been a long-standing research topic in computer vision. In this paper, we investigate the problem of scene text recognition, which is among the most important and challenging tasks in image-based sequence recognition. A novel neural network architecture, which integrates feature extraction, sequence modeling and transcription into a unified framework, is proposed. Compared with previous systems for scene text recognition, the proposed architecture possesses four distinctive properties: (1) It is end-to-end trainable, in contrast to most of the existing algorithms whose components are separately trained and tuned. (2) It naturally handles sequences in arbitrary lengths, involving no character segmentation or horizontal scale normalization. (3) It is not confined to any predefined lexicon and achieves remarkable performances in both lexicon-free and lexicon-based scene text recognition tasks. (4) It generates an effective yet much smaller model, which is more practical for real-world application scenarios. The experiments on standard benchmarks, including the IIIT-5K, Street View Text and ICDAR datasets, demonstrate the superiority of the proposed algorithm over the prior arts. Moreover, the proposed algorithm performs well in the task of image-based music score recognition, which evidently verifies the generality of it."
]
} |
1710.03617 | 2761162478 | In applications that involve interactive curve and surface modeling, the intuitive manipulation of shapes is crucial. For instance, user interaction is facilitated if a geometrical object can be manipulated through control points that interpolate the shape itself. Additionally, models for shape representation often need to provide local shape control and they need to be able to reproduce common shape primitives such as ellipsoids, spheres, cylinders, or tori. We present a general framework to construct families of compactly-supported interpolators that are piecewise-exponential polynomial. They can be designed to satisfy regularity constraints of any order and they enable one to build parametric deformable shape models by suitable linear combinations of interpolators. They allow to change the resolution of shapes based on the refinability of B-splines. We illustrate their use on examples to construct shape models that involve curves and surfaces with applications to interactive modeling and character design. | Recently, a method to build piecewise-polynomial interpolators has been presented in @cite_23 @cite_19 and its bivariate generalization was proposed in @cite_12 . The present work is the continuation of our previous efforts to, first, generalize the popular Catmull-Rom @cite_15 and Keys @cite_24 @cite_33 interpolators for practical applications @cite_37 @cite_2 @cite_4 @cite_36 and, next, to go one step further and construct families of interpolators that allow varying the resolution of a shape @cite_22 @cite_13 . Here, the novelty w.r.t. @cite_36 is that the presented framework allows one to vary the resolution of shapes which facilitates shape design in practice, as illustrated in . | {
"cite_N": [
"@cite_37",
"@cite_4",
"@cite_33",
"@cite_22",
"@cite_36",
"@cite_24",
"@cite_19",
"@cite_23",
"@cite_2",
"@cite_15",
"@cite_13",
"@cite_12"
],
"mid": [
"1769279530",
"1814362492",
"2131745743",
"",
"1028196162",
"2157494358",
"2024068274",
"2055341771",
"2558578462",
"",
"",
"2222430368"
],
"abstract": [
"We propose a new parametric 3D snake with cylindrical topology. Its construction is based on interpolatory basis functions which facilitates user-interaction because the control points of the snake directly lie on the surface of the deformable cylinder. We prove that the basis functions exactly reproduce a cylinder and propose a new parametrization as a tensor-product spline surface. We provide explicit formulas for the energy function based on Green's theorem that speed up the computation of the optimization algorithm. We have implemented the proposed framework as a freely available open-source plugin for the bioimaging platform Icy. Its utility has been tested on phantom data as well as on real 3D data to segment the spinal cord and the descending aorta.",
"We present a new trigonometric basis function that is capable of perfectly reproducing circles, spheres and ellipsoids while at the same time being interpolatory. Such basis functions have the advantage that they allow to construct shapes through a sequence of control points that lie on their contour (2-D) or surface (3-D) which facilitates user-interaction, especially in 3-D. Our piecewise exponential basis function has finite support, which enables local control for shape modification. We derive and prove all the necessary properties of the kernel to represent shapes that can be smoothly deformed and show how idealized shapes such as ellipses and spheres can be constructed.",
"We establish a link between classical osculatory interpolation and modern convolution-based interpolation and use it to show that two well-known cubic convolution schemes are formally equivalent to two osculatory interpolation schemes proposed in the actuarial literature about a century ago. We also discuss computational differences and give examples of other cubic interpolation schemes not previously studied in signal and image processing.",
"",
"Interpolatory basis functions are helpful to specify parametric curves or surfaces that can be modified by simple user-interaction. Their main advantage is a characterization of the object by a set of control points that lie on the shape itself (i.e., curve or surface). In this paper, we characterize a new family of compactly supported piecewise-exponential basis functions that are smooth and satisfy the interpolation property. They can be seen as a generalization and extension of the Keys interpolation kernel using cardinal exponential B-splines. The proposed interpolators can be designed to reproduce trigonometric, hyperbolic, and polynomial functions or combinations of them. We illustrate the construction and give concrete examples on how to use such functions to construct parametric curves and surfaces.",
"Cubic convolution interpolation is a new technique for resampling discrete data. It has a number of desirable features which make it useful for image processing. The technique can be performed efficiently on a digital computer. The cubic convolution interpolation function converges uniformly to the function being interpolated as the sampling increment approaches zero. With the appropriate boundary conditions and constraints on the interpolation kernel, it can be shown that the order of accuracy of the cubic convolution method is between that of linear interpolation and that of cubic splines. A one-dimensional interpolation function is derived in this paper. A separable extension of this algorithm to two dimensions is applied to image data.",
"In this paper we consider the problem of designing piecewise polynomial local interpolants of non-uniformly spaced data. We provide a constructive approach that, for any assigned degree of polynomial reproduction, continuity order, and support width, allows for generating the fundamental spline functions of minimum degree having the desired properties. Finally, the proposed construction is extended to handle open sets of data and to the case of multiple knots.",
"This paper presents a general framework for the construction of piecewise-polynomial local interpolants with given smoothness and approximation order, defined on non-uniform knot partitions. We design such splines through a suitable combination of polynomial interpolants with either polynomial or rational, compactly supported blending functions. In particular, when the blending functions are rational, our approach provides spline interpolants having low, and sometimes minimum degree. Thanks to its generality, the proposed framework also allows us to recover uniform local interpolating splines previously proposed in the literature, to generalize them to the non-uniform case, and to complete families of arbitrary support width. Furthermore it provides new local interpolating polynomial splines with prescribed smoothness and polynomial reproduction properties.",
"Existing shape models with spherical topology are typically designed either in the discrete domain using interpolating polygon meshes or in the continuous domain using smooth but non-interpolating schemes such as NURBS. Polygon models and subdivision methods require a large number of parameters to model smooth surfaces. NURBS need fewer parameters but have a complicated rational expression and non-uniform shifts in their formulation. We present a new method to construct deformable closed surfaces, which includes the exact sphere, by combining the best of two worlds: a smooth and interpolating model with a continuously varying tangent plane and well-defined curvature at every point on the surface. Our formulation is simpler than NURBS while it requires fewer parameters than polygon meshes. We demonstrate the generality of our method with applications ranging from intuitive user-interactive shape modeling, continuous surface deformation, reconstruction of shapes from parameterized point clouds, to fast iterative shape optimization for image segmentation.",
"",
"",
"In CAGD the design of a surface that interpolates an arbitrary quadrilateral mesh is definitely a challenging task. The basic requirement is to satisfy both criteria concerning the regularity of the surface and aesthetic concepts. With regard to aesthetic quality, it is well known that interpolatory methods often produce shape artifacts when the data points are unevenly spaced. In the univariate setting, this problem can be overcome, or at least mitigated, by exploiting a proper non-uniform parametrization, that accounts for the geometry of the data. Recently the same principle has been generalized and proven to be effective in the context of bivariate interpolatory subdivision schemes. In this paper, we propose a construction for parametric surfaces of good aesthetic quality and high smoothness interpolating quadrilateral meshes of arbitrary topology. In the classical tensor product setting the same parameter interval must be shared by an entire row or column of mesh edges. Conversely, in this paper, we assign a different parameter interval to each mesh edge. This particular structure, which we call an augmented parametrization, allows us to interpolate each section polyline at parameter values that prevent wiggling of the resulting curve or other interpolation artifacts and yields high quality interpolatory surfaces. The proposed method is a generalization of the local univariate spline interpolants introduced in (2013a) and (2014), that have arbitrary continuity and arbitrary order of polynomial reproduction. The generated surfaces retain the same smoothness of the underlying class of univariate splines in the regular regions of the mesh (where, locally, all vertices have valence 4). Mesh regions containing vertices of valence other than 4 are covered with suitably defined surface patches joining the neighboring regular ones with G1- or G2-continuity.
Interpolatory methods often produce shape artifacts for unevenly-spaced data points. A proper non-uniform parametrization can help overcome interpolation artifacts. Parametric surfaces of good aesthetic quality and high smoothness are constructed. The method relies on non-uniform parametrization and locally supported splines. The surfaces interpolate quadrilateral meshes with arbitrary topology."
]
} |
1710.03006 | 2761904836 | In recent years, (retro-)digitizing paper-based files became a major undertaking for private and public archives as well as an important task in electronic mailroom applications. As a first step, the workflow involves scanning and Optical Character Recognition (OCR) of documents. Preservation of document contexts of single page scans is a major requirement in this context. To facilitate workflows involving very large amounts of paper scans, page stream segmentation (PSS) is the task to automatically separate a stream of scanned images into multi-page documents. In a digitization project together with a German federal archive, we developed a novel approach based on convolutional neural networks (CNN) combining image and text features to achieve optimal document separation results. Evaluation shows that our PSS architecture achieves an accuracy of up to 93%, which can be regarded as a new state-of-the-art for this task. | Page stream segmentation is related to a series of other tasks concerned with digital document management workflows. Table summarizes important characteristics of recent works in this field. A common task related to PSS is document image classification (DIC), in which visual features (pixels) are typically utilized to classify scanned document representations into categories such as "invoice", "letter", "certificate", etc. Category systems can become quite large and complex. @cite_14 summarize different approaches in a survey article on PSS and DIC. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2083122709"
],
"abstract": [
"In this paper we present a method for the segmentation of continuous page streams into multipage documents and the simultaneous classification of the resulting documents. We first present an approach to combine the multiple pages of a document into a single feature vector that represents the whole document. Despite its simplicity and low computational cost, the proposed representation yields results comparable to more complex methods in multipage document classification tasks. We then exploit this representation in the context of page stream segmentation. The most plausible segmentation of a page stream into a sequence of multipage documents is obtained by optimizing a statistical model that represents the probability of each segmented multipage document belonging to a particular class. Experimental results are reported on a large sample of real administrative multipage documents."
]
} |
1710.03006 | 2761904836 | In recent years, (retro-)digitizing paper-based files became a major undertaking for private and public archives as well as an important task in electronic mailroom applications. As a first step, the workflow involves scanning and Optical Character Recognition (OCR) of documents. Preservation of document contexts of single page scans is a major requirement in this context. To facilitate workflows involving very large amounts of paper scans, page stream segmentation (PSS) is the task to automatically separate a stream of scanned images into multi-page documents. In a digitization project together with a German federal archive, we developed a novel approach based on convolutional neural networks (CNN) combining image and text features to achieve optimal document separation results. Evaluation shows that our PSS architecture achieves an accuracy of up to 93%, which can be regarded as a new state-of-the-art for this task. | In @cite_4, PSS is performed on top of the results from a DIC process. Pages from the stream are segmented each time the DIC system detects a change of class labels between consecutive page images. This approach can only succeed if there are alternating types of documents in the sequential stream. Often, this cannot be guaranteed, especially for small document category systems. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2563209596"
],
"abstract": [
"In this manuscript we propose a novel method for jointly page stream segmentation and multi-page document classification.The end goal is to classify a stream of pages as belonging to different classes of documents. We take advantage of the recent state-of-the-art results achieved using deep architectures in related fields such as document image classification, and we adopt similar models to obtain satisfying classification accuracies and a low computational complexity. Our contribution is twofold: first, the extraction of visual features from the processed documents is automatically performed by the chosen Convolutional Neural Network; second, the predictions of the same network are further refined using an additional deep model which processes them in a classic sliding-window manner to help finding and solving classification errors committed by the first network. The proposed pipeline has been evaluated on a publicly available dataset composed of more than half a million multi-page documents collected by an on-line loan comparison company, showing excellent results and high efficiency."
]
} |
1710.03006 | 2761904836 | In recent years, (retro-)digitizing paper-based files became a major undertaking for private and public archives as well as an important task in electronic mailroom applications. As a first step, the workflow involves scanning and Optical Character Recognition (OCR) of documents. Preservation of document contexts of single page scans is a major requirement in this context. To facilitate workflows involving very large amounts of paper scans, page stream segmentation (PSS) is the task to automatically separate a stream of scanned images into multi-page documents. In a digitization project together with a German federal archive, we developed a novel approach based on convolutional neural networks (CNN) combining image and text features to achieve optimal document separation results. Evaluation shows that our PSS architecture achieves an accuracy of up to 93%, which can be regarded as a new state-of-the-art for this task. | Alternative approaches seek to identify document boundaries explicitly. Such approaches are proposed in @cite_13 and @cite_9, where each individual image of the sequence is classified as either continuity of the same document (SD) or beginning of a new document (ND). For this binary classification, @cite_13 rely on textual features extracted from OCR results and classify pages with SVM and multi-layer perceptrons (MLP). @cite_9 employ bag of visual words (BoVW) and font information obtained from OCR as features, and test performance with three binary classifiers (SVM, Random Forest, and MLP). | {
"cite_N": [
"@cite_9",
"@cite_13"
],
"mid": [
"2044899606",
"2074720849"
],
"abstract": [
"In this paper, we present a method for segmentation of document page flow applied to heterogeneous real bank documents. The approach is based on the content of images and it also incorporates font based features inside the documents. Our method involves a bag of visual words (BoVW) model on the designed image based feature descriptors and a novel approach to combine the consecutive pages of a document into a single feature vector that represents the transition between these pages. The transitions here could be represented by one of the two different classes: continuity of the same document or beginning of a new document. Using the transition feature vectors, we utilize three different binary classifiers to make predictions on the relationship between consecutive pages. Our initial results demonstrate that the proposed method can exhibit promising performance for document flow segmentation at this stage.",
"The aim of this paper is to propose a document flow supervised segmentation approach applied to real world heterogeneous documents. Our algorithm treats the flow of documents as couples of consecutive pages and studies the relationship that exists between them. At first, sets of features are extracted from the pages where we propose an approach to model the couple of pages into a single feature vector representation. This representation will be provided to a binary classifier which classifies the relationship as either segmentation or continuity. In case of segmentation, we consider that we have a complete document and the analysis of the flow continues by starting a new document. In case of continuity, the couple of pages are assimilated to the same document and the analysis continues on the flow. If there is an uncertainty on whether the relationship between the couple of pages should be classified as a continuity or segmentation, a rejection is decided and the pages analyzed until this point are considered as a \"fragment\". The first classification already provides good results approaching 90 on certain documents, which is high at this level of the system."
]
} |
1710.03006 | 2761904836 | In recent years, (retro-)digitizing paper-based files became a major undertaking for private and public archives as well as an important task in electronic mailroom applications. As a first step, the workflow involves scanning and Optical Character Recognition (OCR) of documents. Preservation of document contexts of single page scans is a major requirement in this context. To facilitate workflows involving very large amounts of paper scans, page stream segmentation (PSS) is the task to automatically separate a stream of scanned images into multi-page documents. In a digitization project together with a German federal archive, we developed a novel approach based on convolutional neural networks (CNN) combining image and text features to achieve optimal document separation results. Evaluation shows that our PSS architecture achieves an accuracy of up to 93%, which can be regarded as a new state-of-the-art for this task. | The recent state-of-the-art for DIC is achieved by @cite_4, @cite_0, and @cite_5, who employ Deep Learning with Convolutional Neural Networks to identify document classes. While the former two employ only visual features, the latter study uses both visual and text features for DIC. For this, class-specific key terms are extracted from the OCR-ed training documents and highlighted with correspondingly colored boxes in the document images. Then, a CNN is applied to learn document classes from these images augmented with textual information highlighting. | {
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_4"
],
"mid": [
"2130723172",
"2515936484",
"2563209596"
],
"abstract": [
"This paper presents a new state-of-the-art for document image classification and retrieval, using features learned by deep convolutional neural networks (CNNs). In object and scene analysis, deep neural nets are capable of learning a hierarchical chain of abstraction from pixel inputs to concise and descriptive representations. The current work explores this capacity in the realm of document analysis, and confirms that this representation strategy is superior to a variety of popular hand-crafted alternatives. Experiments also show that (i) features extracted from CNNs are robust to compression, (ii) CNNs trained on non-document images transfer well to document analysis tasks, and (iii) enforcing region-specific feature-learning is unnecessary given sufficient training data. This work also makes available a new labelled subset of the IIT-CDIP collection, containing 400,000 document images across 16 categories, useful for training new CNNs for document analysis.",
"In this paper we introduce a novel document image classification method based on combined visual and textual information. The proposed algorithm's pipeline is inspired to the ones of other recent state-of-the-art methods which perform document image classification using Convolutional Neural Networks. The main addition of our work is the introduction of a preprocessing step embedding additional textual information into the processed document images. To do so we combine Optical Character Recognition and Natural Language Processing algorithms to extract and manipulate relevant text concepts from document images. Such textual information is then visually embedded within each document image to improve the classification results of a Convolutional Neural Network. Our experiments prove that the overall document classification accuracy of a Convolutional Neural Network trained using these text-augmented document images is considerably higher than the one achieved by a similar model trained solely on classic document images, especially when different classes of documents share similar visual characteristics.",
"In this manuscript we propose a novel method for jointly page stream segmentation and multi-page document classification.The end goal is to classify a stream of pages as belonging to different classes of documents. We take advantage of the recent state-of-the-art results achieved using deep architectures in related fields such as document image classification, and we adopt similar models to obtain satisfying classification accuracies and a low computational complexity. Our contribution is twofold: first, the extraction of visual features from the processed documents is automatically performed by the chosen Convolutional Neural Network; second, the predictions of the same network are further refined using an additional deep model which processes them in a classic sliding-window manner to help finding and solving classification errors committed by the first network. The proposed pipeline has been evaluated on a publicly available dataset composed of more than half a million multi-page documents collected by an on-line loan comparison company, showing excellent results and high efficiency."
]
} |
1710.03006 | 2761904836 | In recent years, (retro-)digitizing paper-based files became a major undertaking for private and public archives as well as an important task in electronic mailroom applications. As a first step, the workflow involves scanning and Optical Character Recognition (OCR) of documents. Preservation of document contexts of single page scans is a major requirement in this context. To facilitate workflows involving very large amounts of paper scans, page stream segmentation (PSS) is the task to automatically separate a stream of scanned images into multi-page documents. In a digitization project together with a German federal archive, we developed a novel approach based on convolutional neural networks (CNN) combining image and text features to achieve optimal document separation results. Evaluation shows that our PSS architecture achieves an accuracy of up to 93%, which can be regarded as a new state-of-the-art for this task. | Although with @cite_4 there is already one study employing neural network technology not only for DIC but also for PSS, their approach was not applicable to our project for two reasons. First, as mentioned earlier, they perform PSS only indirectly based on changing class labels of consecutive pages. Since we only have 17 document categories and a majority of them belong to one category ("letter"), we need to perform direct separation of the page stream by classifying each page into either SD or ND. Second, the quality and layout of our data are extremely heterogeneous due to the long time period of document creation. We expect lowered performance when relying solely on visual features for separation. Therefore, taking the previous work of @cite_4 as a starting point, we propose our approach for direct PSS as a binary classification task combining textual features and visual features using deep neural networks. We compare this architecture against a baseline comprising an SVM classifier solely relying on textual features. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2563209596"
],
"abstract": [
"In this manuscript we propose a novel method for jointly page stream segmentation and multi-page document classification. The end goal is to classify a stream of pages as belonging to different classes of documents. We take advantage of the recent state-of-the-art results achieved using deep architectures in related fields such as document image classification, and we adopt similar models to obtain satisfying classification accuracies and a low computational complexity. Our contribution is twofold: first, the extraction of visual features from the processed documents is automatically performed by the chosen Convolutional Neural Network; second, the predictions of the same network are further refined using an additional deep model which processes them in a classic sliding-window manner to help finding and solving classification errors committed by the first network. The proposed pipeline has been evaluated on a publicly available dataset composed of more than half a million multi-page documents collected by an on-line loan comparison company, showing excellent results and high efficiency."
]
} |
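The direct-PSS formulation above (classify each page as SD or ND from fused textual and visual features) can be sketched at toy scale. This is a minimal illustration, not the paper's architecture: random vectors stand in for the CNN image features and the text features, and a plain logistic-regression classifier replaces the deep network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random stand-ins for per-page features: in the actual system these would
# come from a CNN over the page image and a text model over the OCR output.
n_pages, d_img, d_txt = 400, 8, 8
img_feats = rng.normal(size=(n_pages, d_img))
txt_feats = rng.normal(size=(n_pages, d_txt))

# Label per page: 1 = SD (continues the previous document), 0 = ND (starts
# a new document). A simple signal is planted so the toy model can learn it.
labels = (img_feats[:, 0] + txt_feats[:, 0] > 0).astype(float)

# Early fusion: concatenate visual and textual features for each page.
x = np.hstack([img_feats, txt_feats])

# Binary classifier: logistic regression trained by gradient descent.
w, b = np.zeros(x.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))      # predicted P(SD)
    w -= 1.0 * (x.T @ (p - labels) / n_pages)   # gradient step on weights
    b -= 1.0 * np.mean(p - labels)              # gradient step on bias

pred = ((x @ w + b) > 0).astype(float)
accuracy = float(np.mean(pred == labels))
print(f"training accuracy: {accuracy:.2f}")
```

Early fusion by concatenation is the simplest way to combine the two modalities; a deep variant would replace the linear model with learned nonlinear layers per modality before fusing.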
1710.03107 | 2949522170 | We study the problem of formal verification of Binarized Neural Networks (BNN), which have recently been proposed as an energy-efficient alternative to traditional learning networks. The verification of BNNs, using the reduction to hardware verification, can be even more scalable by factoring computations among neurons within the same layer. By proving the NP-hardness of finding optimal factoring as well as the hardness of PTAS approximability, we design polynomial-time search heuristics to generate factoring solutions. The overall framework allows applying verification techniques to moderately-sized BNNs for embedded devices with thousands of neurons and inputs. | Around the time (Oct 9th, 2017) of the first release of our work on formal verification of BNNs, the authors of @cite_13 also worked on the same problem. Their work focuses on efficient encoding within a single neuron, while we focus on computational savings among neurons within the same layer. One can view the two results as complementary. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2757956526"
],
"abstract": [
"Understanding properties of deep neural networks is an important challenge in deep learning. In this paper, we take a step in this direction by proposing a rigorous way of verifying properties of a popular class of neural networks, Binarized Neural Networks, using the well-developed means of Boolean satisfiability. Our main contribution is a construction that creates a representation of a binarized neural network as a Boolean formula. Our encoding is the first exact Boolean representation of a deep neural network. Using this encoding, we leverage the power of modern SAT solvers along with a proposed counterexample-guided search procedure to verify various properties of these networks. A particular focus will be on the critical property of robustness to adversarial perturbations. For this property, our experimental results demonstrate that our approach scales to medium-size deep neural networks used in image classification tasks. To the best of our knowledge, this is the first work on verifying properties of deep neural networks using an exact Boolean encoding of the network."
]
} |
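The single-neuron encoding discussed here can be made concrete. The sketch below (toy weights and bias, chosen purely for illustration) shows why a binarized neuron is equivalent to a Boolean cardinality constraint, the property that compact SAT/hardware encodings of BNN layers exploit; it verifies the equivalence by exhaustive enumeration.

```python
from itertools import product
from math import ceil

# Toy binarized neuron: weights and inputs in {-1, +1}, real-valued bias.
# The values are illustrative, not taken from any trained network.
w = [1, -1, 1, 1, -1]
b = 0.5
n = len(w)

def neuron(x):
    """Reference semantics: sign of the pre-activation."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Boolean view: with agree = #{i : x_i == w_i}, we have w.x = 2*agree - n,
# so the neuron outputs +1 iff agree >= ceil((n - b) / 2) -- a cardinality
# ("counting") constraint over Boolean agreement literals.
k = ceil((n - b) / 2)

def cardinality(x):
    agree = sum(1 for wi, xi in zip(w, x) if wi == xi)
    return 1 if agree >= k else -1

# Exhaustively check that both formulations agree on all 2^n inputs.
mismatches = sum(1 for x in product([-1, 1], repeat=n)
                 if neuron(x) != cardinality(x))
print(f"mismatches over {2 ** n} inputs: {mismatches}")
```

The cardinality view is what makes factoring across neurons attractive: neurons in the same layer count agreements over the same input literals, so their counting circuitry can be shared.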
1710.03107 | 2949522170 | We study the problem of formal verification of Binarized Neural Networks (BNN), which have recently been proposed as an energy-efficient alternative to traditional learning networks. The verification of BNNs, using the reduction to hardware verification, can be even more scalable by factoring computations among neurons within the same layer. By proving the NP-hardness of finding optimal factoring as well as the hardness of PTAS approximability, we design polynomial-time search heuristics to generate factoring solutions. The overall framework allows applying verification techniques to moderately-sized BNNs for embedded devices with thousands of neurons and inputs. | Researchers from the machine learning domain (e.g. @cite_9 @cite_17 @cite_19 ) target the generation of adversarial examples for debugging and retraining purposes. Adversarial examples are slightly perturbed inputs (such as images) which may fool a neural network into generating undesirable results (such as "wrong" classifications). Using satisfiability assignments from the SAT solving stage in our verification procedure, we are also able to generate counterexamples to the BNN verification problem. Our work, however, goes well beyond current approaches to generating adversarial examples in that it not only supports debugging and retraining purposes. Instead, our verification algorithm establishes formal correctness results for neural network-like structures. | {
"cite_N": [
"@cite_19",
"@cite_9",
"@cite_17"
],
"mid": [
"2953047670",
"2099471712",
"1945616565"
],
"abstract": [
"Given a state-of-the-art deep neural network classifier, we show the existence of a universal (image-agnostic) and very small perturbation vector that causes natural images to be misclassified with high probability. We propose a systematic algorithm for computing universal perturbations, and show that state-of-the-art deep neural networks are highly vulnerable to such perturbations, albeit being quasi-imperceptible to the human eye. We further empirically analyze these universal perturbations and show, in particular, that they generalize very well across neural networks. The surprising existence of universal perturbations reveals important geometric correlations among the high-dimensional decision boundary of classifiers. It further outlines potential security breaches with the existence of single directions in the input space that adversaries can possibly exploit to break a classifier on most natural images.",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset."
]
} |
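At toy scale, the counterexample generation mentioned above can be mimicked by brute force instead of SAT: enumerate all binary inputs of a small hypothetical binarized neuron and search for single-bit flips that change its output (a minimal robustness property). A real verifier extracts such witnesses from satisfying assignments rather than enumerating 2^n inputs.

```python
from itertools import product

# Hypothetical toy binarized neuron (weights in {-1, +1}, zero bias).
w = [1, -1, 1, 1, -1]

def classify(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

# Search for counterexamples to one-bit robustness: inputs whose class
# changes when a single component is flipped.
counterexamples = []
for x in product([-1, 1], repeat=len(w)):
    for i in range(len(w)):
        flipped = list(x)
        flipped[i] = -flipped[i]          # perturb one input bit
        if classify(x) != classify(flipped):
            counterexamples.append((x, i))

print(f"found {len(counterexamples)} (input, flipped-bit) counterexamples")
```

Each recorded pair is a concrete adversarial perturbation for this neuron; a SAT-based pipeline would instead assert the robustness property, let the solver search, and read the counterexample off the satisfying assignment.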
1710.03255 | 2764168753 | We address the problem of automatic American Sign Language fingerspelling recognition from video. Prior work has largely relied on frame-level labels, hand-crafted features, or other constraints, and has been hampered by the scarcity of data for this task. We introduce a model for fingerspelling recognition that addresses these issues. The model consists of an auto-encoder-based feature extractor and an attention-based neural encoder-decoder, which are trained jointly. The model receives a sequence of image frames and outputs the fingerspelled word, without relying on any frame-level training labels or hand-crafted features. In addition, the auto-encoder subcomponent makes it possible to leverage unlabeled data to improve the feature learning. The model achieves 11.6% and 4.4% absolute letter accuracy improvement respectively in signer-independent and signer-adapted fingerspelling recognition over previous approaches that required frame-level training labels. | Despite the importance of fingerspelling in spontaneous sign language, there has been relatively little work explicitly addressing fingerspelling recognition. Most prior work on fingerspelling recognition is focused on restricted settings. One typical restriction is the size of the lexicon. When the lexicon is fixed to a small size (20-100 words), excellent recognition accuracy has been achieved @cite_23 @cite_11 @cite_27 , but this restriction is impractical. For ASL fingerspelling, the largest available open-vocabulary dataset to our knowledge is the TTIC UChicago Fingerspelling Video Dataset (Chicago-FSVid), containing 2400 open-domain word instances produced by 4 signers @cite_8 , which we use here. Another important restriction is the signer identity. 
In the signer-dependent setting, letter error rates below 10% have been achieved. The best-performing prior approaches for open-vocabulary fingerspelling recognition have been based on HMMs or segmental conditional random fields (SCRFs) using deep neural network (DNN) frame classifiers to define features @cite_8 . This prior work has largely relied on frame-level labels for training data, but these are hard to obtain. In addition, because of the scarcity of data, prior work has largely relied on human-engineered image features, such as histograms of oriented gradients (HOG) @cite_0 , as the initial image representation. | {
"cite_N": [
"@cite_8",
"@cite_0",
"@cite_27",
"@cite_23",
"@cite_11"
],
"mid": [
"2963408148",
"",
"1580385328",
"1985408783",
"2108274355"
],
"abstract": [
"Abstract We study the problem of recognizing video sequences of fingerspelled letters in American Sign Language (ASL). Fingerspelling comprises a significant but relatively understudied part of ASL. Recognizing fingerspelling is challenging for a number of reasons: it involves quick, small motions that are often highly coarticulated; it exhibits significant variation between signers; and there has been a dearth of continuous fingerspelling data collected. In this work we collect and annotate a new data set of continuous fingerspelling videos, compare several types of recognizers, and explore the problem of signer variation. Our best-performing models are segmental (semi-Markov) conditional random fields using deep neural network-based features. In the signer-dependent setting, our recognizers achieve up to about 92% letter accuracy. The multi-signer setting is much more challenging, but with neural network adaptation we achieve up to 83% letter accuracies in this setting.",
"",
"We propose a new principle for recognizing fingerspelling sequences from American Sign Language (ASL) Instead of training a system to recognize the static posture for each letter from an isolated frame, we recognize the dynamic gestures corresponding to transitions between letters This eliminates the need for an explicit temporal segmentation step, which we show is error-prone at speeds used by native signers We present results from our system recognizing 82 different words signed by a single signer, using more than an hour of training and test video We demonstrate that recognizing letter-to-letter transitions without temporal segmentation is feasible and results in improved performance.",
"This paper presents the Australian sign language (Auslan) Fingerspelling Recognizer (AFR): a system capable of recognizing signs consisting of Auslan manual alphabet letters from video sequences. The AFR system uses a combination of geometric features and motion features based on optical flow which are extracted from video sequences. The sequence of features are then classified using Hidden Markov Models (HMMs). Tests using a vocabulary of twenty signed words showed the system could achieve 97% accuracy at the letter level and 88% at the word level by using a finite state grammar network and embedded training.",
"We investigate the problem of recognizing words from video, fingerspelled using the British Sign Language (BSL) fingerspelling alphabet. This is a challenging task since the BSL alphabet involves both hands occluding each other, and contains signs which are ambiguous from the observer's viewpoint. The main contributions of our work include: (i) recognition based on hand shape alone, not requiring motion cues; (ii) robust visual features for hand shape recognition; (iii) scalability to large lexicon recognition with no re-training. We report results on a dataset of 1,000 low quality webcam videos of 100 words. The proposed method achieves a word recognition accuracy of 98.9%."
]
} |
1710.03029 | 2760957422 | Trajectory optimization and posture generation are hard problems in robot locomotion, which can be non-convex and have multiple local optima. Progress on these problems is further hindered by a lack of open benchmarks, since comparisons of different solutions are difficult to make. In this paper we introduce a new benchmark for trajectory optimization and posture generation of legged robots, using a pre-defined scenario, robot and constraints, as well as evaluation criteria. We evaluate state-of-the-art trajectory optimization algorithms based on sequential quadratic programming (SQP) on the benchmark, as well as new stochastic and incremental optimization methods borrowed from the large-scale machine learning literature. Interestingly we show that some of these stochastic and incremental methods, which are based on stochastic gradient descent (SGD), achieve higher success rates than SQP on tough initializations. Inspired by this observation we also propose a new incremental variant of SQP which updates only a random subset of the costs and constraints at each iteration. The algorithm is the best performing in both success rate and convergence speed, improving over SQP by up to 30% in both criteria. The benchmark's resources and a solution evaluation script are made openly available. | Robot motion planning has been tackled with search, sampling and optimization methods. Recently, optimization algorithms have gained popularity, due to the existence of fast general-purpose software and the possibility to easily integrate many different constraints in the problem. One of the state-of-the-art algorithms is sequential quadratic programming, which is used by SNOPT @cite_23 for general-purpose optimization, but also by trajectory-optimization libraries @cite_1 , and trajectory optimization research on legged robots @cite_20 . Projected conjugate-gradient methods such as CHOMP @cite_6 have also been proposed for the problem. 
The gradient-descent methods we evaluate in this paper are related to CHOMP in the sense that they also perform (pre-conditioned) gradient descent. However, as in @cite_1 , we use penalties for constraints, which allows us to consider general non-linear constraints on robot postures or motion. | {
"cite_N": [
"@cite_20",
"@cite_1",
"@cite_6",
"@cite_23"
],
"mid": [
"2101340954",
"2142224528",
"2161819990",
"2022144657"
],
"abstract": [
"Direct methods for trajectory optimization are widely used for planning locally optimal trajectories of robotic systems. Many critical tasks, such as locomotion and manipulation, often involve impacting the ground or objects in the environment. Most state-of-the-art techniques treat the discontinuous dynamics that result from impacts as discrete modes and restrict the search for a complete path to a specified sequence through these modes. Here we present a novel method for trajectory planning of rigid-body systems that contact their environment through inelastic impacts and Coulomb friction. This method eliminates the requirement for a priori mode ordering. Motivated by the formulation of multi-contact dynamics as a Linear Complementarity Problem for forward simulation, the proposed algorithm poses the optimization problem as a Mathematical Program with Complementarity Constraints. We leverage Sequential Quadratic Programming to naturally resolve contact constraint forces while simultaneously optimizing a trajectory that satisfies the complementarity constraints. The method scales well to high-dimensional systems with large numbers of possible modes. We demonstrate the approach on four increasingly complex systems: rotating a pinned object with a finger, simple grasping and manipulation, planar walking with the Spring Flamingo robot, and high-speed bipedal running on the FastRunner platform.",
"We present a new optimization-based approach for robotic motion planning among obstacles. Like CHOMP (Covariant Hamiltonian Optimization for Motion Planning), our algorithm can be used to find collision-free trajectories from naïve, straight-line initializations that might be in collision. At the core of our approach are (a) a sequential convex optimization procedure, which penalizes collisions with a hinge loss and increases the penalty coefficients in an outer loop as necessary, and (b) an efficient formulation of the no-collisions constraint that directly considers continuous-time safety. Our algorithm is implemented in a software package called TrajOpt. We report results from a series of experiments comparing TrajOpt with CHOMP and randomized planners from OMPL, with regard to planning time and path quality. We consider motion planning for 7 DOF robot arms, 18 DOF full-body robots, statically stable walking motion for the 34 DOF Atlas humanoid robot, and physical experiments with the 18 DOF PR2. We also apply TrajOpt to plan curvature-constrained steerable needle trajectories in the SE(3) configuration space and multiple non-intersecting curved channels within 3D-printed implants for intracavitary brachytherapy. Details, videos, and source code are freely available at: http://rll.berkeley.edu/trajopt/ijrr.",
"In this paper, we present CHOMP (covariant Hamiltonian optimization for motion planning), a method for trajectory optimization invariant to reparametrization. CHOMP uses functional gradient techniques to iteratively improve the quality of an initial trajectory, optimizing a functional that trades off between a smoothness and an obstacle avoidance component. CHOMP can be used to locally optimize feasible trajectories, as well as to solve motion planning queries, converging to low-cost trajectories even when initialized with infeasible ones. It uses Hamiltonian Monte Carlo to alleviate the problem of convergence to high-cost local minima (and for probabilistic completeness), and is capable of respecting hard constraints along the trajectory. We present extensive experiments with CHOMP on manipulation and locomotion tasks, using seven-degree-of-freedom manipulators and a rough-terrain quadruped robot.",
"Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available and that the constraint gradients are sparse. We discuss an SQP algorithm that uses a smooth augmented Lagrangian merit function and makes explicit provision for infeasibility in the original problem and the QP subproblems. SNOPT is a particular implementation that makes use of a semidefinite QP solver. It is based on a limited-memory quasi-Newton approximation to the Hessian of the Lagrangian and uses a reduced-Hessian algorithm (SQOPT) for solving the QP subproblems. It is designed for problems with many thousands of constraints and variables but a moderate number of degrees of freedom (say, up to 2000). An important application is to trajectory optimization in the aerospace industry. Numerical results are given for most problems in the CUTE and COPS test collections (about 900 examples)."
]
} |
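The penalty treatment of constraints mentioned above (as in TrajOpt-style hinge penalties whose weight grows in an outer loop) can be sketched on a toy 2-D problem; the obstacle, start point and schedule below are illustrative assumptions, not the benchmark's setup.

```python
import numpy as np

# Toy problem: quadratic goal cost plus a "stay out of the circular
# obstacle" constraint, handled with a squared hinge penalty mu * v^2
# (v = constraint violation) whose weight mu grows in an outer loop.
target = np.array([1.0, 0.0])
center, radius = np.array([0.5, 0.0]), 0.3

def grad(x, mu):
    g = 2.0 * (x - target)                 # goal-cost gradient
    d = np.linalg.norm(x - center)
    violation = radius - d                 # > 0 means inside the obstacle
    if violation > 0:                      # hinge penalty gradient
        g += mu * 2.0 * violation * (-(x - center) / d)
    return g

x = np.array([0.45, 0.05])                 # infeasible start (in collision)
mu = 1.0
for _ in range(8):                         # outer loop: raise penalty weight
    for _ in range(200):                   # inner loop: gradient descent
        x = x - 0.01 * grad(x, mu)
    mu *= 10.0

dist = float(np.linalg.norm(x - center))
print(f"final point {x}, distance to obstacle center {dist:.3f}")
```

Because the penalty is exactly zero on feasible points, raising mu only matters while the iterate is in collision; a descent run started inside the obstacle is pushed out and then converges on the goal cost alone.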
1710.03029 | 2760957422 | Trajectory optimization and posture generation are hard problems in robot locomotion, which can be non-convex and have multiple local optima. Progress on these problems is further hindered by a lack of open benchmarks, since comparisons of different solutions are difficult to make. In this paper we introduce a new benchmark for trajectory optimization and posture generation of legged robots, using a pre-defined scenario, robot and constraints, as well as evaluation criteria. We evaluate state-of-the-art trajectory optimization algorithms based on sequential quadratic programming (SQP) on the benchmark, as well as new stochastic and incremental optimization methods borrowed from the large-scale machine learning literature. Interestingly we show that some of these stochastic and incremental methods, which are based on stochastic gradient descent (SGD), achieve higher success rates than SQP on tough initializations. Inspired by this observation we also propose a new incremental variant of SQP which updates only a random subset of the costs and constraints at each iteration. The algorithm is the best performing in both success rate and convergence speed, improving over SQP by up to 30% in both criteria. The benchmark's resources and a solution evaluation script are made openly available. | In this paper we explore the use of stochastic methods for posture generation and trajectory optimization. The motivation behind it is to improve success rates by escaping local minima through random perturbations. The idea has also been explored in stochastic variants of CHOMP @cite_3 , which increased success rates. Here we instead make use of progress in the stochastic optimization literature, which has recently gained attention in part because of the problem of local minima, saddle points and non-convexities which pervade deep neural network training landscapes. 
The large-scale optimization and deep learning communities have recently come up with different algorithms to deal with these optimization landscapes, such as variants of stochastic gradient descent with pre-conditioning @cite_22 , incremental gradient descent methods @cite_12 , noise injection @cite_2 and others. Some of these algorithms have provable convergence guarantees @cite_12 and saddle-escaping guarantees @cite_8 . Our assumption in this paper is that these methods and insights which work on the highly-nonconvex optimization landscapes of neural networks will transfer to the (also non-convex) landscapes of legged robot posture generation and trajectory optimization. | {
"cite_N": [
"@cite_22",
"@cite_8",
"@cite_3",
"@cite_2",
"@cite_12"
],
"mid": [
"1522301498",
"2963687412",
"2019965290",
"2263490141",
"1791038712"
],
"abstract": [
"We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.",
"Author(s): Anandkumar, A; Ge, R | Abstract: Local search heuristics for non-convex optimizations are popular in applied machine learning. However, in general it is hard to guarantee that such algorithms even converge to a local minimum, due to the existence of complicated saddle point structures in high dimensions. Many functions have degenerate saddle points such that the first and second order derivatives cannot distinguish them with local optima. In this paper we use higher order derivatives to escape these saddle points: we design the first efficient algorithm guaranteed to converge to a third order local optimum (while existing techniques are at most second order). We also show that it is NP-hard to extend this further to finding fourth order local optima.",
"We present a new approach to motion planning using a stochastic trajectory optimization framework. The approach relies on generating noisy trajectories to explore the space around an initial (possibly infeasible) trajectory, which are then combined to produced an updated trajectory with lower cost. A cost function based on a combination of obstacle and smoothness cost is optimized in each iteration. No gradient information is required for the particular optimization algorithm that we use and so general costs for which derivatives may not be available (e.g. costs corresponding to constraints and motor torques) can be included in the cost function. We demonstrate the approach both in simulation and on a mobile manipulation system for unconstrained and constrained tasks. We experimentally show that the stochastic nature of STOMP allows it to overcome local minima that gradient-based methods like CHOMP can get stuck in.",
"Deep feedforward and recurrent networks have achieved impressive results in many perception and language processing applications. Recently, more complex architectures such as Neural Turing Machines and Memory Networks have been proposed for tasks including question answering and general computation, creating a new set of optimization challenges. In this paper, we explore the low-overhead and easy-to-implement optimization technique of adding annealed Gaussian noise to the gradient, which we find surprisingly effective when training these very deep architectures. Unlike classical weight noise, gradient noise injection is complementary to advanced stochastic optimization algorithms such as Adam and AdaGrad. The technique not only helps to avoid overfitting, but also can result in lower training loss. We see consistent improvements in performance across an array of complex models, including state-of-the-art deep networks for question answering and algorithm learning. We observe that this optimization strategy allows a fully-connected 20-layer deep network to escape a bad initialization with standard stochastic gradient descent. We encourage further application of this technique to additional modern neural architectures.",
"We propose the stochastic average gradient (SAG) method for optimizing the sum of a finite number of smooth convex functions. Like stochastic gradient (SG) methods, the SAG method's iteration cost is independent of the number of terms in the sum. However, by incorporating a memory of previous gradient values the SAG method achieves a faster convergence rate than black-box SG methods. The convergence rate is improved from O(1/k^(1/2)) to O(1/k) in general, and when the sum is strongly-convex the convergence rate is improved from the sub-linear O(1/k) to a linear convergence rate of the form O(p^k) for p < 1. Further, in many cases the convergence rate of the new method is also faster than black-box deterministic gradient methods, in terms of the number of gradient evaluations. Numerical experiments indicate that the new algorithm often dramatically outperforms existing SG and deterministic gradient methods, and that the performance may be further improved through the use of non-uniform sampling strategies."
]
} |
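The incremental idea evaluated in the paper (update only a random subset of costs and constraints per iteration) can be sketched on a toy sum-of-penalties objective; the linear residual terms below are hypothetical stand-ins for linearized costs and constraints, not the benchmark's actual ones.

```python
import numpy as np

rng = np.random.default_rng(1)

# Objective: a sum of many per-term penalties f_i(x) = ||A_i x - b_i||^2
# that share a common minimizer x_true (a consistent toy system).
n_terms, dim = 50, 5
x_true = rng.normal(size=dim)
A = rng.normal(size=(n_terms, dim))
b = A @ x_true

def full_cost(x):
    return float(np.sum((A @ x - b) ** 2))

# Incremental descent: each iteration uses the gradient of a small random
# subset of terms only, never the full sum as a batch method would.
x = np.zeros(dim)
for _ in range(3000):
    idx = rng.choice(n_terms, size=5, replace=False)   # random subset
    x -= 0.01 * (2.0 * A[idx].T @ (A[idx] @ x - b[idx]))

print(f"full cost after incremental descent: {full_cost(x):.2e}")
```

Since every term shares the same minimizer here, the subset gradients vanish at the solution and the iterate converges despite never evaluating the full objective; in the non-convex robotics setting the subset noise is additionally what helps escape poor local minima.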
1710.03029 | 2760957422 | Trajectory optimization and posture generation are hard problems in robot locomotion, which can be non-convex and have multiple local optima. Progress on these problems is further hindered by a lack of open benchmarks, since comparisons of different solutions are difficult to make. In this paper we introduce a new benchmark for trajectory optimization and posture generation of legged robots, using a pre-defined scenario, robot and constraints, as well as evaluation criteria. We evaluate state-of-the-art trajectory optimization algorithms based on sequential quadratic programming (SQP) on the benchmark, as well as new stochastic and incremental optimization methods borrowed from the large-scale machine learning literature. Interestingly we show that some of these stochastic and incremental methods, which are based on stochastic gradient descent (SGD), achieve higher success rates than SQP on tough initializations. Inspired by this observation we also propose a new incremental variant of SQP which updates only a random subset of the costs and constraints at each iteration. The algorithm is the best performing in both success rate and convergence speed, improving over SQP by up to 30% in both criteria. The benchmark's resources and a solution evaluation script are made openly available. | Results of state-of-the-art robot motion planning algorithms are impressive @cite_20 @cite_0 @cite_11 , but it is arguably difficult to compare each planner's performance, advantages and disadvantages. This is partly because each algorithm is evaluated on a different environment or with different cost functions, constraints or robot models. For results to be comparable and verifiable, the evaluation criteria must be the same and largely sampled, while all inputs (i.e. scenario, robot, constraints) must be available. 
Recently, verifiability and comparability have been strongly pursued in fields such as computer vision through an investment in open benchmarks @cite_19 @cite_16 and open source, which has arguably been a strong factor in fostering research progress. This paper tries to follow this trend and make public a benchmark with pre-defined robot, environment, costs, constraints and evaluation criteria. | {
"cite_N": [
"@cite_0",
"@cite_19",
"@cite_16",
"@cite_20",
"@cite_11"
],
"mid": [
"1945123189",
"",
"2031489346",
"2101340954",
"2568901734"
],
"abstract": [
"This paper describes a collection of optimization algorithms for achieving dynamic planning, control, and state estimation for a bipedal robot designed to operate reliably in complex environments. To make challenging locomotion tasks tractable, we describe several novel applications of convex, mixed-integer, and sparse nonlinear optimization to problems ranging from footstep placement to whole-body planning and control. We also present a state estimator formulation that, when combined with our walking controller, permits highly precise execution of extended walking plans over non-flat terrain. We describe our complete system integration and experiments carried out on Atlas, a full-size hydraulic humanoid robot built by Boston Dynamics, Inc.",
"",
"The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension.",
"Direct methods for trajectory optimization are widely used for planning locally optimal trajectories of robotic systems. Many critical tasks, such as locomotion and manipulation, often involve impacting the ground or objects in the environment. Most state-of-the-art techniques treat the discontinuous dynamics that result from impacts as discrete modes and restrict the search for a complete path to a specified sequence through these modes. Here we present a novel method for trajectory planning of rigid-body systems that contact their environment through inelastic impacts and Coulomb friction. This method eliminates the requirement for a priori mode ordering. Motivated by the formulation of multi-contact dynamics as a Linear Complementarity Problem for forward simulation, the proposed algorithm poses the optimization problem as a Mathematical Program with Complementarity Constraints. We leverage Sequential Quadratic Programming to naturally resolve contact constraint forces while simultaneously optimizing a trajectory that satisfies the complementarity constraints. The method scales well to high-dimensional systems with large numbers of possible modes. We demonstrate the approach on four increasingly complex systems: rotating a pinned object with a finger, simple grasping and manipulation, planar walking with the Spring Flamingo robot, and high-speed bipedal running on the FastRunner platform.",
"In this paper we tackle the problem of visually predicting surface friction for environments with diverse surfaces, and integrating this knowledge into biped robot locomotion planning. The problem is essential for autonomous robot locomotion since diverse surfaces with varying friction abound in the real world, from wood to ceramic tiles, grass or ice, which may cause difficulties or huge energy costs for robot locomotion if not considered. We propose to estimate friction and its uncertainty from visual estimation of material classes using convolutional neural networks, together with probability distribution functions of friction associated with each material. We then robustly integrate the friction predictions into a hierarchical (footstep and full-body) planning method using chance constraints, and optimize the same trajectory costs at both levels of the planning method for consistency. Our solution achieves fully autonomous perception and locomotion on slippery terrain, which considers not only friction and its uncertainty, but also collision, stability and trajectory cost. We show promising friction prediction results in real pictures of outdoor scenarios, and planning experiments on a real robot facing surfaces with different friction."
]
} |
1710.03148 | 2963667306 | The constraint satisfaction problem (CSP) is concerned with homomorphisms between two structures. For CSPs with restricted left-hand side structures, the results of Dalmau, Kolaitis, and Vardi [CP'02], Grohe [FOCS'03 JACM'07], and Atserias, Bulatov, and Dalmau [ICALP'07] establish the precise borderline of polynomial-time solvability (subject to complexity-theoretic assumptions) and of solvability by bounded-consistency algorithms (unconditionally) as bounded treewidth modulo homomorphic equivalence. The general-valued constraint satisfaction problem (VCSP) is a generalisation of the CSP concerned with homomorphisms between two valued structures. For VCSPs with restricted left-hand side valued structures, we establish the precise borderline of polynomial-time solvability (subject to complexity-theoretic assumptions) and of solvability by the k-th level of the Sherali-Adams LP hierarchy (unconditionally). We also obtain results on related problems concerned with finding a solution and recognising the tractable cases; the latter has an application in database theory. | In his PhD thesis @cite_2 , Färnqvist studied the complexity of VCSP( @math , @math ) and also some fragments of VCSPs (see also @cite_24 @cite_45 ). He considered a very specific framework that only allows for particular types of classes @math 's to be classified. For these classes, he showed that only bounded treewidth gives rise to tractability (assuming bounded arity) and asked about more general classes. In particular, decision CSPs do fit in his framework and Grohe's classification @cite_21 is not implied by Färnqvist's work. In contrast, our characterisation (of classes @math 's of valued structures) gives rise to new tractable cases going beyond those identified by Färnqvist. Moreover, we can derive both Grohe's classification and Färnqvist's classification directly from our results, as explained in . | {
"cite_N": [
"@cite_24",
"@cite_45",
"@cite_21",
"@cite_2"
],
"mid": [
"",
"191485599",
"2111829945",
"218667201"
],
"abstract": [
"",
"The valued constraint satisfaction problem (VCSP) is an optimization framework originating from artificial intelligence which generalizes the classical constraint satisfaction problem (CSP). In this paper, we are interested in structural properties that can make problems from the VCSP framework, as well as other CSP variants, solvable to optimality in polynomial time. So far, the largest structural class that is known to be polynomial-time solvable to optimality is the class of bounded hypertree width instances introduced by Here, larger classes of tractable instances are singled out by using dynamic programming and structural decompositions based on a hypergraph invariant proposed by Grohe and Marx. In the second part of the paper, we take a different view on our optimization problems; instead of considering fixed arbitrary values for some structural invariant of the (hyper)graph structure of the constraints, we consider the problems parameterized by the tree-width of primal, dual, and incidence graphs, combined with several other basic parameters such as domain size and arity. Such parameterizations of plain CSPs have been studied by Samer and Szeider. Here, we extend their framework to encompass our optimization problems, by coupling it with further non-trivial machinery and new reductions. By doing so, we are able to determine numerous combinations of the considered parameters that make our optimization problems admit fixed-parameter algorithms.",
"We give a complexity theoretic classification of homomorphism problems for graphs and, more generally, relational structures obtained by restricting the left hand side structure in a homomorphism. For every class C of structures, let HOM(C,−) be the problem of deciding whether a given structure A ∈ C has a homomorphism to a given (arbitrary) structure B. We prove that, under some complexity theoretic assumption from parameterized complexity theory, HOM(C,−) is in polynomial time if and only if C has bounded tree width modulo homomorphic equivalence. Translated into the language of constraint satisfaction problems, our result yields a characterization of the tractable structural restrictions of constraint satisfaction problems. Translated into the language of database theory, it implies a characterization of the tractable instances of the evaluation problem for conjunctive queries over relational databases.",
"In this thesis we investigate the computational complexity and approximability of computational problems from the constraint satisfaction framework. An instance of a constraint satisfaction problem ..."
]
} |
1710.03135 | 2628359750 | Online programming discussion platforms such as Stack Overflow serve as a rich source of information for software developers. Available information include vibrant discussions and oftentimes ready-to-use code snippets. Anecdotes report that software developers copy and paste code snippets from those information sources for convenience reasons. Such behavior results in a constant flow of community-provided code snippets into production software. To date, the impact of this behaviour on code security is unknown. We answer this highly important question by quantifying the proliferation of security-related code snippets from Stack Overflow in Android applications available on Google Play. Access to the rich source of information available on Stack Overflow including ready-to-use code snippets provides huge benefits for software developers. However, when it comes to code security there are some caveats to bear in mind: Due to the complex nature of code security, it is very difficult to provide ready-to-use and secure solutions for every problem. Hence, integrating a security-related code snippet from Stack Overflow into production software requires caution and expertise. Unsurprisingly, we observed insecure code snippets being copied into Android applications millions of users install from Google Play every day. To quantitatively evaluate the extent of this observation, we scanned Stack Overflow for code snippets and evaluated their security score using a stochastic gradient descent classifier. In order to identify code reuse in Android applications, we applied state-of-the-art static analysis. Our results are alarming: 15.4 of the 1.3 million Android applications we analyzed, contained security-related code snippets from Stack Overflow. Out of these 97.9 contain at least one insecure code snippet. | @cite_40 report that developer discussion platforms like are very effective at code reviews and conceptual questions. 
@cite_48 investigate the interplay of activity and development process on GitHub. They conclude that knowledge of the GitHub community flows into . In turn, this knowledge increases the number of commits of users on GitHub. @cite_27 created an algorithm to link questions with Android classes detected in source code. They found that Android developer question counts peak on immediately after APIs receive updates that modify their behavior. | {
"cite_N": [
"@cite_48",
"@cite_40",
"@cite_27"
],
"mid": [
"2545778708",
"2123246351",
""
],
"abstract": [
"Stack Overflow is a popular on-line programming question and answer community providing its participants with rapid access to knowledge and expertise of their peers, especially benefitting coders. Despite the popularity of Stack Overflow, its role in the work cycle of open-source developers is yet to be understood: on the one hand, participation in it has the potential to increase the knowledge of individual developers thus improving and speeding up the development process. On the other hand, participation in Stack Overflow may interrupt the regular working rhythm of the developer, hence also possibly slow down the development process. In this paper we investigate the interplay between Stack Overflow activities and the development process, reflected by code changes committed to the largest social coding repository, GitHub. Our study shows that active GitHub committers ask fewer questions and provide more answers than others. Moreover, we observe that active Stack Overflow askers distribute their work in a less uniform way than developers that do not ask questions. Finally, we show that despite the interruptions incurred, the Stack Overflow activity rate correlates with the code changing activity in GitHub.",
"Question and Answer (Q&A) websites, such as Stack Overflow, use social media to facilitate knowledge exchange between programmers and fill archives with millions of entries that contribute to the body of knowledge in software development. Understanding the role of Q&A websites in the documentation landscape will enable us to make recommendations on how individuals and companies can leverage this knowledge effectively. In this paper, we analyze data from Stack Overflow to categorize the kinds of questions that are asked, and to explore which questions are answered well and which ones remain unanswered. Our preliminary findings indicate that Q&A websites are particularly effective at code reviews and conceptual questions. We pose research questions and suggest future work to explore the motivations of programmers that contribute to Q&A websites, and to understand the implications of turning Q&A exchanges into technical mini-blogs through the editing of questions and answers.",
""
]
} |
1710.03135 | 2628359750 | Online programming discussion platforms such as Stack Overflow serve as a rich source of information for software developers. Available information include vibrant discussions and oftentimes ready-to-use code snippets. Anecdotes report that software developers copy and paste code snippets from those information sources for convenience reasons. Such behavior results in a constant flow of community-provided code snippets into production software. To date, the impact of this behaviour on code security is unknown. We answer this highly important question by quantifying the proliferation of security-related code snippets from Stack Overflow in Android applications available on Google Play. Access to the rich source of information available on Stack Overflow including ready-to-use code snippets provides huge benefits for software developers. However, when it comes to code security there are some caveats to bear in mind: Due to the complex nature of code security, it is very difficult to provide ready-to-use and secure solutions for every problem. Hence, integrating a security-related code snippet from Stack Overflow into production software requires caution and expertise. Unsurprisingly, we observed insecure code snippets being copied into Android applications millions of users install from Google Play every day. To quantitatively evaluate the extent of this observation, we scanned Stack Overflow for code snippets and evaluated their security score using a stochastic gradient descent classifier. In order to identify code reuse in Android applications, we applied state-of-the-art static analysis. Our results are alarming: 15.4 of the 1.3 million Android applications we analyzed, contained security-related code snippets from Stack Overflow. Out of these 97.9 contain at least one insecure code snippet. | @cite_25 compared the similarity of abstract syntax trees to detect code duplicates in source code. 
@cite_37 created k-gram streams from bytecode basic blocks. Each k-gram defines a program feature. A code snippet and an application are each represented by a binary feature vector created using universal hashing over k-grams. They decide if a code snippet is contained in an app by dividing the number of common features by the number of features of the code snippet. While their approach works in benign scenarios, it is not robust against trivial code modifications (e.g., reordering of instructions or renaming of variables). @cite_0 @cite_26 detect code clones by searching for subgraph isomorphisms of program dependency graphs (PDG). Their approach is able to detect code fragments that perform similar computations through different syntactic variants @cite_30 and is robust against trivial modifications, constant renaming and method class restructuring. @cite_30 use Control Flow Graphs (CFG) in combination with opcodes to detect code clones in . They define a geometry characteristic called centroid to embed a CFG into vector space. | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_26",
"@cite_0",
"@cite_25"
],
"mid": [
"2088479623",
"1578479379",
"",
"183494281",
"2128782367"
],
"abstract": [
"Besides traditional problems such as potential bugs, (smartphone) application clones on Android markets bring new threats. That is, attackers clone the code from legitimate Android applications, assemble it with malicious code or advertisements, and publish these \"purpose-added\" app clones on the same or other markets for benefits. Three inherent and unique characteristics make app clones difficult to detect by existing techniques: a billion opcode problem caused by cross-market publishing, gap between code clones and app clones, and prevalent Type 2 and Type 3 clones. Existing techniques achieve either accuracy or scalability, but not both. To achieve both goals, we use a geometry characteristic, called centroid, of dependency graphs to measure the similarity between methods (code fragments) in two apps. Then we synthesize the method-level similarities and draw a Y/N conclusion on app (core functionality) cloning. The observed \"centroid effect\" and the inherent \"monotonicity\" property enable our approach to achieve both high accuracy and scalability. We implemented the app clone detection system and evaluated it on five whole Android markets (including 150,145 apps, 203 million methods and 26 billion opcodes). It takes less than one hour to perform cross-market app clone detection on the five markets after generating centroids only once.",
"Mobile application markets such as the Android Marketplace provide a centralized showcase of applications that end users can purchase or download for free onto their mobile phones. Despite the influx of applications to the markets, applications are cursorily reviewed by marketplace maintainers due to the vast number of submissions. User policing and reporting is the primary method to detect misbehaving applications. This reactive approach to application security, especially when programs can contain bugs, malware, or pirated (inauthentic) code, puts too much responsibility on the end users. In light of this, we propose Juxtapp, a scalable infrastructure for code similarity analysis among Android applications. Juxtapp provides a key solution to a number of problems in Android security, including determining if apps contain copies of buggy code, have significant code reuse that indicates piracy, or are instances of known malware. We evaluate our system using more than 58,000 Android applications and demonstrate that our system scales well and is effective. Our results show that Juxtapp is able to detect: 1) 463 applications with confirmed buggy code reuse that can lead to serious vulnerabilities in real-world apps, 2) 34 instances of known malware and variants (13 distinct variants of the GoldDream malware), and 3) pirated variants of a popular paid game.",
"",
"We present DNADroid, a tool that detects Android application copying, or “cloning”, by robustly computing the similarity between two applications. DNADroid achieves this by comparing program dependency graphs between methods in candidate applications. Using DNADroid, we found at least 141 applications that have been the victims of cloning, some as many as seven times. DNADroid has a very low false positive rate — we manually confirmed that all the applications detected are indeed clones by either visual or behavioral similarity. We present several case studies that give insight into why applications are cloned, including localization and redirecting ad revenue. We describe a case of malware being added to an application and show how DNADroid was able to detect two variants of the same malware. Lastly, we offer examples of an open source cracking tool being used in the wild.",
"Detecting code clones has many software engineering applications. Existing approaches either do not scale to large code bases or are not robust against minor code modifications. In this paper, we present an efficient algorithm for identifying similar subtrees and apply it to tree representations of source code. Our algorithm is based on a novel characterization of subtrees with numerical vectors in the Euclidean space Rnmiddot and an efficient algorithm to cluster these vectors w.r.t. the Euclidean distance metric. Subtrees with vectors in one cluster are considered similar. We have implemented our tree similarity algorithm as a clone detection tool called DECKARD and evaluated it on large code bases written in C and Java including the Linux kernel and JDK. Our experiments show that DECKARD is both scalable and accurate. It is also language independent, applicable to any language with a formally specified grammar."
]
} |
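The k-gram containment test described in the row above lends itself to a short sketch. This is a minimal illustration with made-up opcode sequences and plain tuple features, not the Juxtapp implementation (which hashes k-grams into fixed-size feature vectors via universal hashing):

```python
# Illustrative sketch of k-gram containment for snippet-in-app detection.
# Opcode names below are hypothetical stand-ins for real bytecode.

def kgrams(opcodes, k=5):
    """Feature set: all length-k sliding windows over an opcode sequence."""
    return {tuple(opcodes[i:i + k]) for i in range(len(opcodes) - k + 1)}

def containment(snippet_ops, app_ops, k=5):
    """|features(snippet) ∩ features(app)| / |features(snippet)|."""
    snip, app = kgrams(snippet_ops, k), kgrams(app_ops, k)
    return len(snip & app) / len(snip) if snip else 0.0

snippet = ["new", "dup", "ldc", "invokespecial", "astore", "aload"]
app = ["iload"] * 3 + snippet + ["return"]
print(containment(snippet, app, k=5))  # 1.0: every snippet k-gram occurs in the app
```

As the related_work text notes, this score is easily defeated by reordering instructions, since any permutation breaks the k-gram windows.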
1710.03222 | 2891702854 | With the advent of Big Data, nowadays in many applications databases containing large quantities of similar time series are available. Forecasting time series in these domains with traditional univariate forecasting procedures leaves great potentials for producing accurate forecasts untapped. Recurrent neural networks (RNNs), and in particular Long Short-Term Memory (LSTM) networks, have proven recently that they are able to outperform state-of-the-art univariate time series forecasting methods in this context when trained across all available time series. However, if the time series database is heterogeneous, accuracy may degenerate, so that on the way towards fully automatic forecasting methods in this space, a notion of similarity between the time series needs to be built into the methods. To this end, we present a prediction model that can be used with different types of RNN models on subgroups of similar time series, which are identified by time series clustering techniques. We assess our proposed methodology using LSTM networks, a widely popular RNN variant. Our method achieves competitive results on benchmarking datasets under competition evaluation procedures. In particular, in terms of mean sMAPE accuracy, it consistently outperforms the baseline LSTM model and outperforms all other methods on the CIF2016 forecasting competition dataset. | The powerful data-driven self-adaptability and model generalizability enable NNs to uncover complex relationships among samples and perform predictions on new observations of a population, without being constrained by assumptions regarding the underlying data generating process of a dataset. These promising characteristics are further strengthened by the universal function approximation properties that NNs possess . Therefore, NNs are popular for classification and regression, and also in time series forecasting when external regressors and additional knowledge is available. 
In pure univariate time series forecasting, over the past two decades, NN architectures have been advocated as a strong alternative to traditional statistical forecasting methods . Researchers have been increasingly drawing their interest towards developing and applying different NN models for time series forecasting. This includes multi-layer perceptrons (MLP), GRNNs, ensemble architectures, RNNs, ESNs and LSTMs, with MLPs being the most widely used NN variant for time series forecasting thus far. For a detailed description of the MLP architecture and its widespread applications in time series forecasting, see @cite_4 . | {
"cite_N": [
"@cite_4"
],
"mid": [
"2041404167"
],
"abstract": [
"Scientific knowledge grows at a phenomenal pace--but few books have had as lasting an impact or played as important a role in our modern world as The Mathematical Theory of Communication, published originally as a paper on communication theory more than fifty years ago. Republished in book form shortly thereafter, it has since gone through four hardcover and sixteen paperback printings. It is a revolutionary work, astounding in its foresight and contemporaneity. The University of Illinois Press is pleased and honored to issue this commemorative reprinting of a classic."
]
} |
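The sMAPE accuracy measure named in the abstract above can be computed as follows; note this is one common variant (forecasting competitions differ slightly in how they write the denominator):

```python
# Symmetric mean absolute percentage error (sMAPE), in percent.

def smape(actual, forecast):
    """Mean over t of 2|F_t - A_t| / (|A_t| + |F_t|), scaled to percent."""
    terms = [2.0 * abs(f - a) / (abs(a) + abs(f))
             for a, f in zip(actual, forecast)]
    return 100.0 * sum(terms) / len(terms)

print(round(smape([100.0, 200.0], [110.0, 180.0]), 2))  # 10.03
```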
1710.03222 | 2891702854 | With the advent of Big Data, nowadays in many applications databases containing large quantities of similar time series are available. Forecasting time series in these domains with traditional univariate forecasting procedures leaves great potentials for producing accurate forecasts untapped. Recurrent neural networks (RNNs), and in particular Long Short-Term Memory (LSTM) networks, have proven recently that they are able to outperform state-of-the-art univariate time series forecasting methods in this context when trained across all available time series. However, if the time series database is heterogeneous, accuracy may degenerate, so that on the way towards fully automatic forecasting methods in this space, a notion of similarity between the time series needs to be built into the methods. To this end, we present a prediction model that can be used with different types of RNN models on subgroups of similar time series, which are identified by time series clustering techniques. We assess our proposed methodology using LSTM networks, a widely popular RNN variant. Our method achieves competitive results on benchmarking datasets under competition evaluation procedures. In particular, in terms of mean sMAPE accuracy, it consistently outperforms the baseline LSTM model and outperforms all other methods on the CIF2016 forecasting competition dataset. | @cite_6 highlights several design implications of the MLP architecture for time series forecasting, such as a large number of design parameters, long training time, a potential of the fitting procedure to suffer from local minima, etc. To overcome these shortcomings, those authors introduce GRNN, a special type of neural network that contains a single design parameter and carries out a relatively fast training procedure compared to vanilla MLP. 
Also, they incorporate several design strategies (e.g., fusing multiple GRNNs) to automate the proposed modelling scheme and make it more desirable for large-scale business time series forecasting. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2030888282"
],
"abstract": [
"Over the past few decades, application of artificial neural networks (ANN) to time-series forecasting (TSF) has been growing rapidly due to several unique features of ANN models. However, to date, a consistent ANN performance over different studies has not been achieved. Many factors contribute to the inconsistency in the performance of neural network models. One such factor is that ANN modeling involves determining a large number of design parameters, and the current design practice is essentially heuristic and ad hoc, this does not exploit the full potential of neural networks. Systematic ANN modeling processes and strategies for TSF are, therefore, greatly needed. Motivated by this need, this paper attempts to develop an automatic ANN modeling scheme. It is based on the generalized regression neural network (GRNN), a special type of neural network. By taking advantage of several GRNN properties (i.e., a single design parameter and fast learning) and by incorporating several design strategies (e.g., fusing multiple GRNNs), we have been able to make the proposed modeling scheme to be effective for modeling large-scale business time series. The initial model was entered into the NN3 time-series competition. It was awarded the best prediction on the reduced dataset among approximately 60 different models submitted by scholars worldwide."
]
} |
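The GRNN discussed above is, at its core, Nadaraya-Watson kernel regression, where the kernel bandwidth is the single design parameter the text refers to. A minimal sketch on a toy lag-1 embedding (illustrative only; not the fused multi-GRNN scheme of the cited work):

```python
import math

def grnn_predict(x_train, y_train, x_query, sigma=0.5):
    """GRNN prediction: a Gaussian-weighted average of training targets.
    sigma (the smoothing parameter) is the lone design choice."""
    weights = [math.exp(-((x - x_query) ** 2) / (2.0 * sigma ** 2))
               for x in x_train]
    return sum(w * y for w, y in zip(weights, y_train)) / sum(weights)

# Toy lag-1 setup: predict y_{t+1} from y_t on the series 1..5.
series = [1.0, 2.0, 3.0, 4.0, 5.0]
x_train, y_train = series[:-1], series[1:]
print(round(grnn_predict(x_train, y_train, 3.0, sigma=0.3), 3))  # 4.0
```

"Training" is just storing the samples, which is why the text calls the procedure relatively fast compared to a vanilla MLP.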
1710.03289 | 2762667616 | This paper presents a novel enumerative biclustering algorithm to directly mine all maximal biclusters in mixed-attribute datasets (containing both numerical and categorical attributes), with or without missing values. The proposal is an extension of RIn-Close_CVC, which was originally conceived to mine perfect or perturbed biclusters with constant values on columns solely from numerical datasets, and without missing values. Even endowed with additional and more general features, the extended RIn-Close_CVC retains four key properties: (1) efficiency, (2) completeness, (3) correctness, and (4) non-redundancy. Our proposal is the first one to deal with mixed-attribute datasets without requiring any pre-processing step, such as discretization and itemization of real-valued attributes. This is a decisive aspect, because discretization and itemization implies a priori decisions, with information loss and no clear control over the consequences. On the other hand, even having to specify a priori an individual threshold for each numerical attribute, that will be used to indicate internal consistency per attribute, each threshold will be applied during the construction of the biclusters, shaping the peculiarities of the data distribution. We also explore the strong connection between biclustering and frequent pattern mining to (1) provide filters to select a compact bicluster set that exhibits high relevance and low redundancy, and (2) in the case of labeled datasets, automatically present the biclusters in a user-friendly and intuitive form, by means of quantitative class association rules. Our experimental results showed that the biclusters yield a parsimonious set of relevant rules, providing useful and interpretable models for five mixed-attribute labeled datasets. | In @cite_12 , the authors presented a biclustering method designed to handle mixed-attribute datasets. 
This method uses a pre-processing step to simplify the data by means of discretization, and a constructive greedy heuristic to build the biclusters by iteratively adding columns. Their goal was, as expected, to detect CVC biclusters. To the best of Vandromme et al.'s knowledge, this was the first method to handle mixed-attribute datasets in the biclustering literature @cite_12 . | {
"cite_N": [
"@cite_12"
],
"mid": [
"2563966252"
],
"abstract": [
"We define the problem of biclustering on heterogeneous data, that is, data of various types (binary, numeric, etc.). This problem has not yet been investigated in the biclustering literature. We propose a new method, HBC (Heterogeneous BiClustering), designed to extract biclusters from heterogeneous, large-scale, sparse data matrices. The goal of this method is to handle medical data gathered by hospitals (on patients, stays, acts, diagnoses, prescriptions, etc.) and to provide valuable insight on such data. HBC takes advantage of the data sparsity and uses a constructive greedy heuristic to build a large number of possibly overlapping biclusters. The proposed method is successfully compared with a standard biclustering algorithm on small-size numeric data. Experiments on real-life data sets further assert its scalability and efficiency."
]
} |
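The per-attribute consistency thresholds described in the abstract above can be sketched as a membership test for perturbed CVC ("constant values on columns") biclusters. This is an illustrative reading of the epsilon-per-column rule, not the RIn-Close_CVC implementation; the data and thresholds are made up:

```python
# Sketch: a submatrix is a perturbed CVC bicluster if every selected column's
# values are internally consistent, i.e., their spread is within that
# column's own threshold eps[j].

def is_cvc_bicluster(data, rows, cols, eps):
    """Check max-min spread per selected column against its threshold."""
    for j in cols:
        vals = [data[i][j] for i in rows]
        if max(vals) - min(vals) > eps[j]:
            return False
    return True

data = [[1.0, 10.2, 7.0],
        [1.1,  9.9, 3.0],
        [5.0, 10.1, 7.1]]
eps = [0.2, 0.5, 0.5]
print(is_cvc_bicluster(data, rows=[0, 1], cols=[0, 1], eps=eps))  # True
print(is_cvc_bicluster(data, rows=[0, 2], cols=[0, 1], eps=eps))  # False: col 0 spread is 4.0
```

Setting every eps[j] to zero recovers perfect CVC biclusters as a special case.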
1710.03289 | 2762667616 | This paper presents a novel enumerative biclustering algorithm to directly mine all maximal biclusters in mixed-attribute datasets (containing both numerical and categorical attributes), with or without missing values. The proposal is an extension of RIn-Close_CVC, which was originally conceived to mine perfect or perturbed biclusters with constant values on columns solely from numerical datasets, and without missing values. Even endowed with additional and more general features, the extended RIn-Close_CVC retains four key properties: (1) efficiency, (2) completeness, (3) correctness, and (4) non-redundancy. Our proposal is the first one to deal with mixed-attribute datasets without requiring any pre-processing step, such as discretization and itemization of real-valued attributes. This is a decisive aspect, because discretization and itemization implies a priori decisions, with information loss and no clear control over the consequences. On the other hand, even having to specify a priori an individual threshold for each numerical attribute, that will be used to indicate internal consistency per attribute, each threshold will be applied during the construction of the biclusters, shaping the peculiarities of the data distribution. We also explore the strong connection between biclustering and frequent pattern mining to (1) provide filters to select a compact bicluster set that exhibits high relevance and low redundancy, and (2) in the case of labeled datasets, automatically present the biclusters in a user-friendly and intuitive form, by means of quantitative class association rules. Our experimental results showed that the biclusters yield a parsimonious set of relevant rules, providing useful and interpretable models for five mixed-attribute labeled datasets. | In fact, after imposing discretization, we have better proposals than @cite_12 , especially when we consider the connection between biclustering, FPM and FCA. 
Notice that we can extract CVC biclusters from quantitative-itemsets (and vice versa), and (quantitative) association rules are mined from (quantitative-)frequent itemsets. Veroneze @cite_1 also showed that well-known heuristic-based biclustering algorithms can perform poorly when trying to identify the existing biclusters in a simple and controlled scenario, thus fully favouring the use of efficient enumerative algorithms, such as the ones provided in the FPM and FCA literature and the RIn-Close family @cite_1 . | {
"cite_N": [
"@cite_1",
"@cite_12"
],
"mid": [
"1894726941",
"2563966252"
],
"abstract": [
"Biclustering has proved to be a powerful data analysis technique due to its wide success in various application domains. However, the existing literature presents efficient solutions only for enumerating maximal biclusters with constant values, or heuristic-based approaches which can not find all biclusters or even support the maximality of the obtained biclusters. Here, we present a general family of biclustering algorithms for enumerating all maximal biclusters with (i) constant values on rows, (ii) constant values on columns, or (iii) coherent values. Versions for perfect and for perturbed biclusters are provided. Our algorithms have four key properties (just the algorithm for perturbed biclusters with coherent values fails to exhibit the first property): they are (1) efficient (take polynomial time per pattern), (2) complete (find all maximal biclusters), (3) correct (all biclusters attend the user-defined measure of similarity), and (4) non-redundant (all the obtained biclusters are maximal and the same bicluster is not enumerated twice). They are based on a generalization of an efficient formal concept analysis algorithm called In-Close2. Experimental results point to the necessity of having efficient enumerative biclustering algorithms and provide a valuable insight into the scalability of our family of algorithms and its sensitivity to user-defined parameters.",
"We define the problem of biclustering on heterogeneous data, that is, data of various types (binary, numeric, etc.). This problem has not yet been investigated in the biclustering literature. We propose a new method, HBC (Heterogeneous BiClustering), designed to extract biclusters from heterogeneous, large-scale, sparse data matrices. The goal of this method is to handle medical data gathered by hospitals (on patients, stays, acts, diagnoses, prescriptions, etc.) and to provide valuable insight on such data. HBC takes advantage of the data sparsity and uses a constructive greedy heuristic to build a large number of possibly overlapping biclusters. The proposed method is successfully compared with a standard biclustering algorithm on small-size numeric data. Experiments on real-life data sets further assert its scalability and efficiency."
]
} |
1710.03289 | 2762667616 | This paper presents a novel enumerative biclustering algorithm to directly mine all maximal biclusters in mixed-attribute datasets (containing both numerical and categorical attributes), with or without missing values. The proposal is an extension of RIn-Close_CVC, which was originally conceived to mine perfect or perturbed biclusters with constant values on columns solely from numerical datasets, and without missing values. Even endowed with additional and more general features, the extended RIn-Close_CVC retains four key properties: (1) efficiency, (2) completeness, (3) correctness, and (4) non-redundancy. Our proposal is the first one to deal with mixed-attribute datasets without requiring any pre-processing step, such as discretization and itemization of real-valued attributes. This is a decisive aspect, because discretization and itemization implies a priori decisions, with information loss and no clear control over the consequences. On the other hand, even having to specify a priori an individual threshold for each numerical attribute, that will be used to indicate internal consistency per attribute, each threshold will be applied during the construction of the biclusters, shaping the peculiarities of the data distribution. We also explore the strong connection between biclustering and frequent pattern mining to (1) provide filters to select a compact bicluster set that exhibits high relevance and low redundancy, and (2) in the case of labeled datasets, automatically present the biclusters in a user-friendly and intuitive form, by means of quantitative class association rules. Our experimental results showed that the biclusters yield a parsimonious set of relevant rules, providing useful and interpretable models for five mixed-attribute labeled datasets. 
| An approach to mine biclusters from non-binary datasets using traditional FPM and FCA algorithms devoted to binary datasets (such as Apriori @cite_13, Charm @cite_32, or In-Close2 @cite_25) is (1) to discretize the dataset, and (2) to itemize the discrete dataset. Notice that each dataset attribute is an item in the binary case. Basically, the itemization (the second step of the proposed approach) consists of creating a binary dataset from a discrete dataset, without information loss. The first step will necessarily involve some kind of information loss. An item here is a pair @math , where @math is an attribute (of the original dataset), and @math is a discretized value. So, we have as many items as the number of pairs @math . Thus, there is a trade-off between faster execution time with fewer discretized values and reduced information loss with more discretized values. Therefore, depending on the nature of the dataset, the user is not totally free to choose the granularity of the discretization. | {
"cite_N": [
"@cite_13",
"@cite_32",
"@cite_25"
],
"mid": [
"1484413656",
"",
"1947406465"
],
"abstract": [
"We consider the problem of discovering association rules between items in a large database of sales transactions. We present two new algorithms for solving this problem that are fundamentally different from the known algorithms. Empirical evaluation shows that these algorithms outperform the known algorithms by factors ranging from three for small problems to more than an order of magnitude for large problems. We also show how the best features of the two proposed algorithms can be combined into a hybrid algorithm, called AprioriHybrid. Scale-up experiments show that AprioriHybrid scales linearly with the number of transactions. AprioriHybrid also has excellent scale-up properties with respect to the transaction size and the number of items in the database.",
"",
"This paper presents a program, called In-Close2, that is a high performance realisation of the Close-by-One (CbO) algorithm. The design of In-Close2 is discussed and some new optimisation and data preprocessing techniques are presented. The performance of In-Close2 is favourably compared with another contemporary CbO variant called FCbO. An application of In-Close2 is given, using minimum support to reduce the size and complexity of a large formal context. Based on this application, an analysis of gene expression data is presented. In-Close2 can be downloaded from Sourceforge."
]
} |
1710.03289 | 2762667616 | This paper presents a novel enumerative biclustering algorithm to directly mine all maximal biclusters in mixed-attribute datasets (containing both numerical and categorical attributes), with or without missing values. The proposal is an extension of RIn-Close_CVC, which was originally conceived to mine perfect or perturbed biclusters with constant values on columns solely from numerical datasets, and without missing values. Even endowed with additional and more general features, the extended RIn-Close_CVC retains four key properties: (1) efficiency, (2) completeness, (3) correctness, and (4) non-redundancy. Our proposal is the first one to deal with mixed-attribute datasets without requiring any pre-processing step, such as discretization and itemization of real-valued attributes. This is a decisive aspect, because discretization and itemization implies a priori decisions, with information loss and no clear control over the consequences. On the other hand, even having to specify a priori an individual threshold for each numerical attribute, that will be used to indicate internal consistency per attribute, each threshold will be applied during the construction of the biclusters, shaping the peculiarities of the data distribution. We also explore the strong connection between biclustering and frequent pattern mining to (1) provide filters to select a compact bicluster set that exhibits high relevance and low redundancy, and (2) in the case of labeled datasets, automatically present the biclusters in a user-friendly and intuitive form, by means of quantitative class association rules. Our experimental results showed that the biclusters yield a parsimonious set of relevant rules, providing useful and interpretable models for five mixed-attribute labeled datasets. | BicPAM @cite_30 and BiC2PAM @cite_18 also rely on discretization, itemization, and the use of a traditional FPM algorithm to mine the biclusters.
BiC2PAM extends BicPAM to incorporate constraints derived from background knowledge in the mining process. BicPAM and BiC2PAM are available in a free bicluster software called BicPAMS @cite_11 . | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_11"
],
"mid": [
"2100851406",
"2519170897",
""
],
"abstract": [
"Background Biclustering, the discovery of sets of objects with a coherent pattern across a subset of conditions, is a critical task to study a wide-set of biomedical problems, where molecular units or patients are meaningfully related with a set of properties. The challenging combinatorial nature of this task led to the development of approaches with restrictions on the allowed type, number and quality of biclusters. Contrasting, recent biclustering approaches relying on pattern mining methods can exhaustively discover flexible structures of robust biclusters. However, these approaches are only prepared to discover constant biclusters and their underlying contributions remain dispersed.",
"Background Biclustering has been largely used in biological data analysis, enabling the discovery of putative functional modules from omic and network data. Despite the recognized importance of incorporating domain knowledge to guide biclustering and guarantee a focus on relevant and non-trivial biclusters, this possibility has not yet been comprehensively addressed. This results from the fact that the majority of existing algorithms are only able to deliver sub-optimal solutions with restrictive assumptions on the structure, coherency and quality of biclustering solutions, thus preventing the up-front satisfaction of knowledge-driven constraints. Interestingly, in recent years, a clearer understanding of the synergies between pattern mining and biclustering gave rise to a new class of algorithms, termed as pattern-based biclustering algorithms. These algorithms, able to efficiently discover flexible biclustering solutions with optimality guarantees, are thus positioned as good candidates for knowledge incorporation. In this context, this work aims to bridge the current lack of solid views on the use of background knowledge to guide (pattern-based) biclustering tasks.",
""
]
} |
1710.03289 | 2762667616 | This paper presents a novel enumerative biclustering algorithm to directly mine all maximal biclusters in mixed-attribute datasets (containing both numerical and categorical attributes), with or without missing values. The proposal is an extension of RIn-Close_CVC, which was originally conceived to mine perfect or perturbed biclusters with constant values on columns solely from numerical datasets, and without missing values. Even endowed with additional and more general features, the extended RIn-Close_CVC retains four key properties: (1) efficiency, (2) completeness, (3) correctness, and (4) non-redundancy. Our proposal is the first one to deal with mixed-attribute datasets without requiring any pre-processing step, such as discretization and itemization of real-valued attributes. This is a decisive aspect, because discretization and itemization implies a priori decisions, with information loss and no clear control over the consequences. On the other hand, even having to specify a priori an individual threshold for each numerical attribute, that will be used to indicate internal consistency per attribute, each threshold will be applied during the construction of the biclusters, shaping the peculiarities of the data distribution. We also explore the strong connection between biclustering and frequent pattern mining to (1) provide filters to select a compact bicluster set that exhibits high relevance and low redundancy, and (2) in the case of labeled datasets, automatically present the biclusters in a user-friendly and intuitive form, by means of quantitative class association rules. Our experimental results showed that the biclusters yield a parsimonious set of relevant rules, providing useful and interpretable models for five mixed-attribute labeled datasets. 
| BicPAM is a framework that relies on three steps: pre-processing (which includes normalization, discretization, itemization, handling of missing values, and tackling varying levels of noise), mining (where some FPM algorithm is used to mine the biclusters), and post-processing (in which the biclusters can be extended, merged and filtered out, among other possibilities). BicPAM makes available three discretization options (each one with key implications on the target solution), and the user can easily incorporate other options into the framework. BicPAM also makes available several FPM algorithms in the mining step, and the user can also incorporate others. To alleviate common drawbacks related to discretization procedures (such as information loss), the user can choose to assign multiple items to a single element, tackling the items-boundary problem. The drawback of this strategy is that it usually generates many redundant biclusters (even when using algorithms to mine closed frequent itemsets), leading to extra computational cost, and the information loss is still present, though attenuated. For more contributions regarding biclustering based on FPM algorithms, see the survey of Henriques @cite_3. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2195615921"
],
"abstract": [
"Mining matrices to find relevant biclusters, subsets of rows exhibiting a coherent pattern over a subset of columns, is a critical task for a wide-set of biomedical and social applications. Since biclustering is a challenging combinatorial optimization task, existing approaches place restrictions on the allowed structure, coherence and quality of biclusters. Biclustering approaches relying on pattern mining (PM) allow an exhaustive yet efficient space exploration together with the possibility to discover flexible structures of biclusters with parameterizable coherency and noise-tolerance. Still, state-of-the-art contributions are dispersed and the potential of their integration remains unclear. This work proposes a structured and integrated view of the contributions of state-of-the-art PM-based biclustering approaches, makes available a set of principles for a guided definition of new PM-based biclustering approaches, and discusses their relevance for applications in pattern recognition. Empirical evidence shows that these principles guarantee the robustness, efficiency and flexibility of PM-based biclustering. Highlights: Pattern mining (PM) searches enable flexible, exhaustive and efficient biclustering. Integration of existing dispersed PM-inspired contributions for biclustering. Principles for guided design and evaluation of new PM-based biclustering approaches. PM-based biclustering solutions have parameterizable coherency and quality."
]
} |
1710.03289 | 2762667616 | This paper presents a novel enumerative biclustering algorithm to directly mine all maximal biclusters in mixed-attribute datasets (containing both numerical and categorical attributes), with or without missing values. The proposal is an extension of RIn-Close_CVC, which was originally conceived to mine perfect or perturbed biclusters with constant values on columns solely from numerical datasets, and without missing values. Even endowed with additional and more general features, the extended RIn-Close_CVC retains four key properties: (1) efficiency, (2) completeness, (3) correctness, and (4) non-redundancy. Our proposal is the first one to deal with mixed-attribute datasets without requiring any pre-processing step, such as discretization and itemization of real-valued attributes. This is a decisive aspect, because discretization and itemization implies a priori decisions, with information loss and no clear control over the consequences. On the other hand, even having to specify a priori an individual threshold for each numerical attribute, that will be used to indicate internal consistency per attribute, each threshold will be applied during the construction of the biclusters, shaping the peculiarities of the data distribution. We also explore the strong connection between biclustering and frequent pattern mining to (1) provide filters to select a compact bicluster set that exhibits high relevance and low redundancy, and (2) in the case of labeled datasets, automatically present the biclusters in a user-friendly and intuitive form, by means of quantitative class association rules. Our experimental results showed that the biclusters yield a parsimonious set of relevant rules, providing useful and interpretable models for five mixed-attribute labeled datasets. | Missing values can simply be ignored in methods that rely on itemization to mine the biclusters.
Henriques & Madeira @cite_30 also proposed the use of additional items, specially handled according to a level of relaxation imposed by the user. | {
"cite_N": [
"@cite_30"
],
"mid": [
"2100851406"
],
"abstract": [
"Background Biclustering, the discovery of sets of objects with a coherent pattern across a subset of conditions, is a critical task to study a wide-set of biomedical problems, where molecular units or patients are meaningfully related with a set of properties. The challenging combinatorial nature of this task led to the development of approaches with restrictions on the allowed type, number and quality of biclusters. Contrasting, recent biclustering approaches relying on pattern mining methods can exhaustively discover flexible structures of robust biclusters. However, these approaches are only prepared to discover constant biclusters and their underlying contributions remain dispersed."
]
} |
1710.03289 | 2762667616 | This paper presents a novel enumerative biclustering algorithm to directly mine all maximal biclusters in mixed-attribute datasets (containing both numerical and categorical attributes), with or without missing values. The proposal is an extension of RIn-Close_CVC, which was originally conceived to mine perfect or perturbed biclusters with constant values on columns solely from numerical datasets, and without missing values. Even endowed with additional and more general features, the extended RIn-Close_CVC retains four key properties: (1) efficiency, (2) completeness, (3) correctness, and (4) non-redundancy. Our proposal is the first one to deal with mixed-attribute datasets without requiring any pre-processing step, such as discretization and itemization of real-valued attributes. This is a decisive aspect, because discretization and itemization implies a priori decisions, with information loss and no clear control over the consequences. On the other hand, even having to specify a priori an individual threshold for each numerical attribute, that will be used to indicate internal consistency per attribute, each threshold will be applied during the construction of the biclusters, shaping the peculiarities of the data distribution. We also explore the strong connection between biclustering and frequent pattern mining to (1) provide filters to select a compact bicluster set that exhibits high relevance and low redundancy, and (2) in the case of labeled datasets, automatically present the biclusters in a user-friendly and intuitive form, by means of quantitative class association rules. Our experimental results showed that the biclusters yield a parsimonious set of relevant rules, providing useful and interpretable models for five mixed-attribute labeled datasets. 
| Aiming at bypassing the itemization step, we may resort to enumerative biclustering algorithms that mine CVC biclusters directly from numerical matrices, such as RIn , RIn and their competitors @cite_1. They are able to mine the biclusters from a discretized matrix (that has only integer numbers), thus avoiding itemization. This implies that it is possible to use a more flexible discretization, without restrictions on the arity of an attribute. Additionally, RIn is a very efficient algorithm, exhibiting a computational cost similar to that of In-Close2. | {
"cite_N": [
"@cite_1"
],
"mid": [
"1894726941"
],
"abstract": [
"Biclustering has proved to be a powerful data analysis technique due to its wide success in various application domains. However, the existing literature presents efficient solutions only for enumerating maximal biclusters with constant values, or heuristic-based approaches which can not find all biclusters or even support the maximality of the obtained biclusters. Here, we present a general family of biclustering algorithms for enumerating all maximal biclusters with (i) constant values on rows, (ii) constant values on columns, or (iii) coherent values. Versions for perfect and for perturbed biclusters are provided. Our algorithms have four key properties (just the algorithm for perturbed biclusters with coherent values fails to exhibit the first property): they are (1) efficient (take polynomial time per pattern), (2) complete (find all maximal biclusters), (3) correct (all biclusters attend the user-defined measure of similarity), and (4) non-redundant (all the obtained biclusters are maximal and the same bicluster is not enumerated twice). They are based on a generalization of an efficient formal concept analysis algorithm called In-Close2. Experimental results point to the necessity of having efficient enumerative biclustering algorithms and provide a valuable insight into the scalability of our family of algorithms and its sensitivity to user-defined parameters."
]
} |
1710.02862 | 2762863716 | Heterogeneous data pose serious challenges to data analysis tasks, including exploration and visualization. Current techniques often utilize dimensionality reductions, aggregation, or conversion to numerical values to analyze heterogeneous data. However, the effectiveness of such techniques to find subtle structures such as the presence of multiple modes or detection of outliers is hindered by the challenge to find the proper subspaces or prior knowledge to reveal the structures. In this paper, we propose a generic similarity-based exploration technique that is applicable to a wide variety of datatypes and their combinations, including heterogeneous ensembles. The proposed concept of similarity has a close connection to statistical analysis and can be deployed for summarization, revealing fine structures such as the presence of multiple modes, and detection of anomalies or outliers. We then propose a visual encoding framework that enables the exploration of a heterogeneous dataset in different levels of detail and provides insightful information about both global and local structures. We demonstrate the utility of the proposed technique using various real datasets, including ensemble data. | Another class of techniques uses special visual mappings to visualize high-dimensional datasets so that the user can intuitively observe patterns or structure in the data. Scatterplots and parallel coordinates @cite_40 are among the prominent techniques in this group. Scatterplots or scatterplot matrices (SPLOMs) are useful for visually detecting correlations between two variables or finding clusters of datapoints in a dataset for which pairwise similarity or distance measures are available. One of the main concerns about SPLOM visualization is its scalability, both with respect to dataset size and in depicting relations among more than two variables (or dimensions).
In comparison to scatterplots, parallel coordinates can provide a good overview of various attributes of high-dimensional data @cite_10. However, standard 2D parallel coordinates allow the identification of relationships only between adjacent axes. Therefore, the ordering of the axes plays a major role when the goal is to find structures in high-dimensional datasets @cite_10. | {
"cite_N": [
"@cite_40",
"@cite_10"
],
"mid": [
"2153829923",
"1892979790"
],
"abstract": [
"This book is about visualization, systematically incorporating the fantastic human pattern recognition into the problem-solving process, and focusing on parallel coordinates. The barrier, imposed by our three-dimensional habitation and perceptual experience, has been breached by this innovative and versatile methodology. The accurate visualization of multidimensional problems and multivariate data unlocks insights into the role of dimensionality. Beginning with an introductory chapter on geometry, the mathematical foundations are intuitively developed, interlaced with applications to data mining, information visualization, computer vision, geometric modeling, collision avoidance for air traffic and process-control. Many results appear for the first time. Multidimensional lines, planes, proximities, surfaces and their properties are unambiguously recognized (i.e. convexity viewed in any dimension) enabling powerful construction algorithms (for intersections, interior-points, linear-programming). Key features of Parallel Coordinates: * An easy-to-read self-contained chapter on data mining and information visualization * Numerous exercises with solutions, from basic to advanced topics, course projects and research directions * \"Fast Track\" markers throughout provide a quick grasp of essential material. * Interactive Learning Module (ILM) CD: designed for classroom demonstration and fun experimentation for mastering key topics and examples cross-referenced in the text * Extensive bibliography, index, and a chapter containing a collection of recent results (i.e. visualizing large networks, complex-valued functions and more) Parallel Coordinates requires only an elementary knowledge of linear algebra. It is well-suited for self-study and as a textbook (or companion) for courses on information visualization, data mining, mathematics, statistics, computer science, engineering, finance, management, manufacturing, in scientific disciplines and even the arts",
"The parallel coordinates technique is widely used for the analysis of multivariate data. During recent decades significant research efforts have been devoted to exploring the applicability of the technique and to expand upon it, resulting in a variety of extensions. Of these many research activities, a surprisingly small number concerns user-centred evaluations investigating actual use and usability issues for different tasks, data and domains. The result is a clear lack of convincing evidence to support and guide uptake by users as well as future research directions. To address these issues this paper contributes a thorough literature survey of what has been done in the area of user-centred evaluation of parallel coordinates. These evaluations are divided into four categories based on characterization of use, derived from the survey. Based on the data from the survey and the categorization combined with the authors' experience of working with parallel coordinates, a set of guidelines for future research directions is proposed."
]
} |
1710.02745 | 2964171745 | As the number of documents on the web is growing exponentially, multi-document summarization is becoming more and more important since it can provide the main ideas in a document set in short time. In this paper, we present an unsupervised centroid-based document-level reconstruction framework using distributed bag of words model. Specifically, our approach selects summary sentences in order to minimize the reconstruction error between the summary and the documents. We apply sentence selection and beam search, to further improve the performance of our model. Experimental results on two different datasets show significant performance gains compared with the state-of-the-art baselines. | Our model is closely related to data-reconstruction-based summarization, which was first proposed by @cite_10. Since then, several other data-reconstruction-based approaches @cite_9 @cite_16 have been proposed. @cite_5 proposed a two-level sparse representation model to reconstruct the sentences in the document set subject to a diversity constraint. @cite_0 proposed a model based on Nonnegative matrix factorization (NMF) to group the sentences into clusters. Recently, several neural network based models have been proposed for both extractive @cite_6 @cite_8 and abstractive summarization @cite_11 @cite_13. | {
"cite_N": [
"@cite_13",
"@cite_8",
"@cite_9",
"@cite_6",
"@cite_0",
"@cite_5",
"@cite_16",
"@cite_10",
"@cite_11"
],
"mid": [
"2963929190",
"2574535369",
"2294749963",
"2329305226",
"2089391273",
"2177844813",
"",
"",
"1843891098"
],
"abstract": [
"In this work, we model abstractive text summarization using Attentional Encoder-Decoder Recurrent Neural Networks, and show that they achieve state-of-the-art performance on two different corpora. We propose several novel models that address critical problems in summarization that are not adequately modeled by the basic architecture, such as modeling keywords, capturing the hierarchy of sentence-to-word structure, and emitting words that are rare or unseen at training time. Our work shows that many of our proposed models contribute to further improvement in performance. We also propose a new dataset consisting of multi-sentence summaries, and establish performance benchmarks for further research.",
"",
"In this paper, we formulate a sparse optimization framework for extractive document summarization. The proposed framework has a decomposable convex objective function. We derive an efficient ADMM algorithm to solve it. To encourage diversity in the summaries, we explicitly introduce an additional sentence dissimilarity term in the optimization framework. We achieve significant improvement over previous related work under similar data reconstruction framework. We then generalize our formulation to the case of compressive summarization and derive a block coordinate descent algorithm to optimize the objective function. Performance on DUC 2006 and DUC 2007 datasets shows that our compressive summarization results are competitive against the state-of-the-art results while maintaining reasonable readability.",
"Query relevance ranking and sentence saliency ranking are the two main tasks in extractive query-focused summarization. Previous supervised summarization systems often perform the two tasks in isolation. However, since reference summaries are the trade-off between relevance and saliency, using them as supervision, neither of the two rankers could be trained well. This paper proposes a novel summarization system called AttSum, which tackles the two tasks jointly. It automatically learns distributed representations for sentences as well as the document cluster. Meanwhile, it applies the attention mechanism to simulate the attentive reading of human behavior when a query is given. Extensive experiments are conducted on DUC query-focused summarization benchmark datasets. Without using any hand-crafted features, AttSum achieves competitive performance. It is also observed that the sentences recognized to focus on the query indeed meet the query need.",
"Multi-document summarization aims to create a compressed summary while retaining the main characteristics of the original set of documents. Many approaches use statistics and machine learning techniques to extract sentences from documents. In this paper, we propose a new multi-document summarization framework based on sentence-level semantic analysis and symmetric non-negative matrix factorization. We first calculate sentence-sentence similarities using semantic analysis and construct the similarity matrix. Then symmetric matrix factorization, which has been shown to be equivalent to normalized spectral clustering, is used to group sentences into clusters. Finally, the most informative sentences are selected from each group to form the summary. Experimental results on DUC2005 and DUC2006 data sets demonstrate the improvement of our proposed framework over the implemented existing summarization systems. A further study on the factors that benefit the high performance is also conducted.",
"Multi-document summarization is of great value to many real world applications since it can help people get the main ideas within a short time. In this paper, we tackle the problem of extracting summary sentences from multi-document sets by applying sparse coding techniques and present a novel framework to this challenging problem. Based on the data reconstruction and sentence denoising assumption, we present a two-level sparse representation model to depict the process of multi-document summarization. Three requisite properties is proposed to form an ideal reconstructable summary: Coverage, Sparsity and Diversity. We then formalize the task of multi-document summarization as an optimization problem according to the above properties, and use simulated annealing algorithm to solve it. Extensive experiments on summarization benchmark data sets DUC2006 and DUC2007 show that our proposed model is effective and outperforms the state-of-the-art algorithms.",
"",
"",
"Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines."
]
} |
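The data-reconstruction framing shared by several of the cited abstracts above (sparse-coding and matrix-factorization summarizers) can be illustrated with a toy greedy selector: repeatedly pick the sentence whose span best reconstructs the document centroid in the least-squares sense. This is only a simplified sketch of the reconstruction idea, not any of the cited algorithms; the term-frequency vectors below are made up.

```python
import numpy as np

def greedy_reconstruction_summary(S, k):
    """Greedily select k sentence vectors (rows of S) that minimize the
    least-squares error of reconstructing the document centroid."""
    centroid = S.mean(axis=0)
    selected = []
    for _ in range(k):
        best_i, best_err = None, np.inf
        for i in range(len(S)):
            if i in selected:
                continue
            basis = S[selected + [i]].T            # columns = candidate summary
            coef, *_ = np.linalg.lstsq(basis, centroid, rcond=None)
            err = np.linalg.norm(basis @ coef - centroid)
            if err < best_err:
                best_i, best_err = i, err
        selected.append(best_i)
    return selected

# Toy term-frequency vectors for 4 "sentences" over a 5-word vocabulary.
S = np.array([[2., 0., 1., 0., 0.],
              [0., 3., 0., 1., 0.],
              [1., 1., 1., 1., 0.],
              [0., 0., 0., 0., 4.]])
summary = greedy_reconstruction_summary(S, k=2)
```

The cited methods add sparsity, diversity, and compression terms on top of this basic coverage objective.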
1710.02861 | 2761984197 | Clickbait is a pejorative term describing web content that is aimed at generating online advertising revenue, especially at the expense of quality or accuracy, relying on sensationalist headlines or eye-catching thumbnail pictures to attract click-throughs and to encourage forwarding of the material over online social networks. We use distributed word representations of the words in the title as features to identify clickbaits in online news media. We train a machine learning model using linear regression to predict the clickbait score of a given tweet. Our methods achieve an F1-score of 64.98 and an MSE of 0.0791. Compared to other methods, our method is simple, fast to train, does not require extensive feature engineering and yet is moderately effective. | @cite_6 highlighted many interesting differences between clickbait and non-clickbait categories, which include sentence structure, word patterns, etc. They rely on a rich set of 14 hand-crafted features to detect clickbait headlines. In addition, @cite_6 build a browser extension which warns the readers of different media sites about the possibility of being baited by such headlines. Their methods achieve 93% accuracy in detecting clickbaits. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2952861497"
],
"abstract": [
"Most of the online news media outlets rely heavily on the revenues generated from the clicks made by their readers, and due to the presence of numerous such outlets, they need to compete with each other for reader attention. To attract the readers to click on an article and subsequently visit the media site, the outlets often come up with catchy headlines accompanying the article links, which lure the readers to click on the link. Such headlines are known as Clickbaits. While these baits may trick the readers into clicking, in the long run, clickbaits usually don't live up to the expectation of the readers, and leave them disappointed. In this work, we attempt to automatically detect clickbaits and then build a browser extension which warns the readers of different media sites about the possibility of being baited by such headlines. The extension also offers each reader an option to block clickbaits she doesn't want to see. Then, using such reader choices, the extension automatically blocks similar clickbaits during her future visits. We run extensive offline and online experiments across multiple media sites and find that the proposed clickbait detection and the personalized blocking approaches perform very well achieving 93% accuracy in detecting and 89% accuracy in blocking clickbaits."
]
} |
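The paper's approach in this row (linear regression on distributed word representations of title words) can be sketched in a few lines: average the word vectors of a headline and fit a least-squares regressor on the resulting features. The embeddings below are random stand-ins, not real word2vec/GloVe vectors, and the titles and scores are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pretrained embeddings: one 8-dim vector per word (random here).
vocab = {w: rng.normal(size=8) for w in
         ["you", "won't", "believe", "this", "study", "finds", "new", "results"]}

def featurize(title):
    """Represent a headline by the average of its word vectors."""
    vecs = [vocab[w] for w in title.split() if w in vocab]
    return np.mean(vecs, axis=0)

titles = ["you won't believe this", "study finds new results",
          "you won't believe this study", "new study results"]
scores = np.array([0.9, 0.1, 0.8, 0.2])   # made-up clickbait scores in [0, 1]

X = np.stack([featurize(t) for t in titles])
X = np.hstack([X, np.ones((len(X), 1))])   # bias column

# Ordinary least squares via numpy (no feature engineering beyond averaging).
w, *_ = np.linalg.lstsq(X, scores, rcond=None)
pred = X @ w
mse = float(np.mean((pred - scores) ** 2))
```

With more features than training examples, the fit here is exact; in practice one would train on thousands of labeled tweets and evaluate on held-out data.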
1710.02861 | 2761984197 | Clickbait is a pejorative term describing web content that is aimed at generating online advertising revenue, especially at the expense of quality or accuracy, relying on sensationalist headlines or eye-catching thumbnail pictures to attract click-throughs and to encourage forwarding of the material over online social networks. We use distributed word representations of the words in the title as features to identify clickbaits in online news media. We train a machine learning model using linear regression to predict the clickbait score of a given tweet. Our methods achieve an F1-score of 64.98 and an MSE of 0.0791. Compared to other methods, our method is simple, fast to train, does not require extensive feature engineering and yet is moderately effective. | @cite_0 used deep learning techniques such as a bi-directional recurrent neural network model with character and word embeddings as features. They achieve state-of-the-art results with an F1-score of 0.98. While @cite_6 and @cite_0 explore identifying clickbaity titles in webpages, @cite_1 explore identifying clickbaits in tweets. | {
"cite_N": [
"@cite_0",
"@cite_1",
"@cite_6"
],
"mid": [
"2952500835",
"",
"2952861497"
],
"abstract": [
"Online content publishers often use catchy headlines for their articles in order to attract users to their websites. These headlines, popularly known as clickbaits, exploit a user's curiosity gap and lure them to click on links that often disappoint them. Existing methods for automatically detecting clickbaits rely on heavy feature engineering and domain knowledge. Here, we introduce a neural network architecture based on Recurrent Neural Networks for detecting clickbaits. Our model relies on distributed word representations learned from a large unannotated corpora, and character embeddings learned via Convolutional Neural Networks. Experimental results on a dataset of news headlines show that our model outperforms existing techniques for clickbait detection with an accuracy of 0.98 with F1-score of 0.98 and ROC-AUC of 0.99.",
"",
"Most of the online news media outlets rely heavily on the revenues generated from the clicks made by their readers, and due to the presence of numerous such outlets, they need to compete with each other for reader attention. To attract the readers to click on an article and subsequently visit the media site, the outlets often come up with catchy headlines accompanying the article links, which lure the readers to click on the link. Such headlines are known as Clickbaits. While these baits may trick the readers into clicking, in the long run, clickbaits usually don't live up to the expectation of the readers, and leave them disappointed. In this work, we attempt to automatically detect clickbaits and then build a browser extension which warns the readers of different media sites about the possibility of being baited by such headlines. The extension also offers each reader an option to block clickbaits she doesn't want to see. Then, using such reader choices, the extension automatically blocks similar clickbaits during her future visits. We run extensive offline and online experiments across multiple media sites and find that the proposed clickbait detection and the personalized blocking approaches perform very well achieving 93% accuracy in detecting and 89% accuracy in blocking clickbaits."
]
} |
1710.02861 | 2761984197 | Clickbait is a pejorative term describing web content that is aimed at generating online advertising revenue, especially at the expense of quality or accuracy, relying on sensationalist headlines or eye-catching thumbnail pictures to attract click-throughs and to encourage forwarding of the material over online social networks. We use distributed word representations of the words in the title as features to identify clickbaits in online news media. We train a machine learning model using linear regression to predict the clickbait score of a given tweet. Our methods achieve an F1-score of 64.98 and an MSE of 0.0791. Compared to other methods, our method is simple, fast to train, does not require extensive feature engineering and yet is moderately effective. | Unlike earlier work done on clickbaits, the clickbait challenge @cite_8 requires us to calculate a clickbait score of a tweet post. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2907324027"
],
"abstract": [
"Clickbait has grown to become a nuisance to social media users and social media operators alike. Malicious content publishers misuse social media to manipulate as many users as possible to visit their websites using clickbait messages. Machine learning technology may help to handle this problem, giving rise to automatic clickbait detection. To accelerate progress in this direction, we organized the Clickbait Challenge 2017, a shared task inviting the submission of clickbait detectors for a comparative evaluation. A total of 13 detectors have been submitted, achieving significant improvements over the previous state of the art in terms of detection performance. Also, many of the submitted approaches have been published open source, rendering them reproducible, and a good starting point for newcomers. While the 2017 challenge has passed, we maintain the evaluation system and answer to new registrations in support of the ongoing research on better clickbait detectors."
]
} |
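The two metrics reported in these rows, MSE on the regressed clickbait score and F1 on the thresholded binary decision, are easy to compute directly; a minimal numpy version follows (the 0.5 threshold and the toy labels are chosen here for illustration, not taken from the challenge).

```python
import numpy as np

def mse(y_true, y_score):
    """Mean squared error between predicted scores and reference scores."""
    return float(np.mean((np.asarray(y_score) - np.asarray(y_true)) ** 2))

def f1(y_true, y_score, threshold=0.5):
    """F1-score after thresholding the continuous scores into binary labels."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_score) >= threshold
    tp = np.sum(y_pred & y_true)
    fp = np.sum(y_pred & ~y_true)
    fn = np.sum(~y_pred & y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

y_true = [1, 1, 0, 0]          # toy reference labels
y_score = [0.8, 0.3, 0.6, 0.2]  # toy predicted clickbait scores
```

On this toy example, one true positive, one false positive, and one false negative give precision = recall = 0.5.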
1710.02588 | 2762105572 | We consider linear structural equation models that are associated with mixed graphs. The structural equations in these models only involve observed variables, but their idiosyncratic error terms are allowed to be correlated and non-Gaussian. We propose empirical likelihood (EL) procedures for inference, and suggest several modifications, including a profile likelihood, in order to improve tractability and performance of the resulting methods. Through simulations, we show that when the error distributions are non-Gaussian, the use of EL and the proposed modifications may increase statistical efficiency and improve assessment of significance. | Frequently, the errors in a SEM, and consequently also the observations @math , are assumed to be multivariate Gaussian, which yields maximum likelihood estimates (MLEs). The Gaussian likelihood is often maximized using generic optimization methods, as done in the popular R packages sem and lavaan. The coordinate-descent methods proposed by @cite_4 and @cite_2 can be a useful computational alternative that largely avoids convergence issues. | {
"cite_N": [
"@cite_4",
"@cite_2"
],
"mid": [
"2122634476",
"2531339191"
],
"abstract": [
"In recursive linear models, the multivariate normal joint distribution of all variables exhibits a dependence structure induced by a recursive (or acyclic) system of linear structural equations. These linear models have a long tradition and appear in seemingly unrelated regressions, structural equation modelling, and approaches to causal inference. They are also related to Gaussian graphical models via a classical representation known as a path diagram. Despite the models' long history, a number of problems remain open. In this paper, we address the problem of computing maximum likelihood estimates in the subclass of 'bow-free' recursive linear models. The term 'bow-free' refers to the condition that the errors for variables i and j be uncorrelated if variable i occurs in the structural equation for variable j. We introduce a new algorithm, termed Residual Iterative Conditional Fitting (RICF), that can be implemented using only least squares computations. In contrast to existing algorithms, RICF has clear convergence properties and yields exact maximum likelihood estimates after the first iteration whenever the MLE is available in closed form.",
"Software for computation of maximum likelihood estimates in linear structural equation models typically employs general techniques from non-linear optimization, such as quasi-Newton methods. In practice, careful tuning of initial values is often required to avoid convergence issues. As an alternative approach, we propose a block-coordinate descent method that cycles through the considered variables, updating only the parameters related to a given variable in each step. We show that the resulting block update problems can be solved in closed form even when the structural equation model comprises feedback cycles. Furthermore, we give a characterization of the models for which the block-coordinate descent algorithm is well-defined, meaning that for generic data and starting values all block optimization problems admit a unique solution. For the characterization, we represent each model by its mixed graph (also known as path diagram), which leads to criteria that can be checked in time that is polynomial in the number of considered variables."
]
} |
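For the simplest special case relevant to the cited abstracts, a recursive ("bow-free", acyclic) model with uncorrelated Gaussian errors, the Gaussian MLE decouples into equation-wise least squares; this is also the building block that RICF iterates. A toy one-equation simulation (the true coefficient 2.0 is made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# Structural equations:  x = e_x,   y = 2.0 * x + e_y,  independent errors.
n = 20000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)

# With uncorrelated errors and a recursive structure, the Gaussian MLE of the
# path coefficient is just the OLS regression of y on x.
b_hat = float(np.sum(x * y) / np.sum(x * x))
```

Correlated errors (bidirected edges in the mixed graph) break this decoupling, which is why the cited papers need iterative conditional fitting or block-coordinate updates.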
1710.02599 | 2764103701 | Users of Virtual Reality (VR) systems often experience vection, the perception of self-motion in the absence of any physical movement. While vection helps to improve presence in VR, it often leads to a form of motion sickness called cybersickness. Cybersickness is a major deterrent to large scale adoption of VR. Prior work has discovered that changing vection (changing the perceived speed or moving direction) causes more severe cybersickness than steady vection (walking at a constant speed or in a constant direction). Based on this idea, we try to reduce the cybersickness caused by character movements in a First Person Shooter (FPS) game in VR. We propose Rotation Blurring (RB), uniformly blurring the screen during rotational movements to reduce cybersickness. We performed a user study to evaluate the impact of RB in reducing cybersickness. We found that the blurring technique led to an overall reduction in sickness levels of the participants and delayed its onset. Participants who experienced acute levels of cybersickness benefited significantly from this technique. | In one of the first works to explore the impact of vection on simulator sickness (SS), @cite_15 exposed users to a fixed-base flight simulator and measured SS and vection levels. One theory attributes cybersickness induced by vection to the mismatch of motion information from the visual system and the vestibular system during vection @cite_10 . Another theory posits that changes in the stability of the human balance mechanism cause cybersickness @cite_16 . @cite_8 provide an in-depth review of works in this area. | {
"cite_N": [
"@cite_15",
"@cite_16",
"@cite_10",
"@cite_8"
],
"mid": [
"2088845896",
"2054749727",
"",
"1972321706"
],
"abstract": [
"Simulator sickness has been identified as a form of motion sickness in which users of simulators exhibit symptoms characteristic of true motion sickness. In a fixed-base simulator, visual and vestibular sources of information specifying dynamic orientation are in conflict to the extent that the optical flow pattern viewed by the pilot creates a compelling illusion of self-motion, which is not corroborated by the inertial forces transmitted through the vestibular sense organs. Visually induced illusory self-motion is known as vection, and a strict interpretation of sensory conflict theory of motion sickness suggests that vection in a fixed-base simulator would be a necessary precondition for simulator sickness. Direct confirmation of this relation is reported in this article.",
"In this article we present a new theory of motion sickness. In the sensory conflict theory, changes in stimulation of perceptual systems are believed to be responsible for motion sickness. We discuss the fact that these changes in stimulation are not independent of the animal-environment interaction, but are determined by corresponding changes in the constraints operating on the control of action. Thus, provocative situations may be characterized by novel demands on the control of action as well as by novel patterns of stimulation. Our hypothesis is that animals become sick in situations in which they do not possess (or have not yet learned) strategies that are effective for the maintenance of postural stability. We identify a broad range of situations over which the occurrence of motion sickness is related to factors that should influence postural stability. This allows us to establish a logical link between motion sickness and postural stability. Our analysis implies that an understanding of stability s...",
"",
"The occurrence of visually induced motion sickness has been frequently linked to the sensation of illusory self-motion (so-called vection), however, the precise nature of this relationship is still not fully understood. To date, it is still a matter of debate whether or not vection is a necessary prerequisite for visually induced motion sickness (VIMS). That is, can there be visually induced motion sickness without any sensation of self-motion? In this paper, we will describe the possible nature of this relationship, review the literature that may speak to this relationship (including theoretical accounts of vection and VIMS), and offer suggestions with respect to operationally defining and reporting these phenomena in future."
]
} |
1710.02587 | 2763062829 | The Paulsen problem is a basic open problem in operator theory: Given vectors @math that are @math -nearly satisfying the Parseval's condition and the equal norm condition, is it close to a set of vectors @math that exactly satisfy the Parseval's condition and the equal norm condition? Given @math , the squared distance (to the set of exact solutions) is defined as @math where the infimum is over the set of exact solutions. Previous results show that the squared distance of any @math -nearly solution is at most @math and there are @math -nearly solutions with squared distance at least @math . The fundamental open question is whether the squared distance can be independent of the number of vectors @math . We answer this question affirmatively by proving that the squared distance of any @math -nearly solution is @math . Our approach is based on a continuous version of the operator scaling algorithm and consists of two parts. First, we define a dynamical system based on operator scaling and use it to prove that the squared distance of any @math -nearly solution is @math . Then, we show that by randomly perturbing the input vectors, the dynamical system will converge faster and the squared distance of an @math -nearly solution is @math when @math is large enough and @math is small enough. To analyze the convergence of the dynamical system, we develop some new techniques in lower bounding the operator capacity, a concept introduced by Gurvits to analyze the operator scaling algorithm. | Scaling a frame into an equal norm Parseval frame, and more generally, scaling an operator into a doubly stochastic operator (see Subsection ) has various applications in theoretical computer science. Sometimes they go under different names such as radial isotropic positions in machine learning @cite_40 , and geometric conditions in Brascamp-Lieb inequalities @cite_14 @cite_33 @cite_12 . | {
"cite_N": [
"@cite_40",
"@cite_14",
"@cite_33",
"@cite_12"
],
"mid": [
"1670485642",
"2261415748",
"2037726123",
"2950633396"
],
"abstract": [
"We consider a fundamental problem in unsupervised learning called : given a collection of @math points in @math , if many but not necessarily all of these points are contained in a @math -dimensional subspace @math can we find it? The points contained in @math are called inliers and the remaining points are outliers . This problem has received considerable attention in computer science and in statistics. Yet efficient algorithms from computer science are not robust to adversarial outliers, and the estimators from robust statistics are hard to compute in high dimensions. Are there algorithms for subspace recovery that are both robust to outliers and efficient? We give an algorithm that finds @math when it contains more than a @math fraction of the points. Hence, for say @math this estimator is both easy to compute and well-behaved when there are a constant fraction of outliers. We prove that it is Small Set Expansion hard to find @math when the fraction of errors is any larger, thus giving evidence that our estimator is an optimal compromise between efficiency and robustness. As it turns out, this basic problem has a surprising number of connections to other areas including small set expansion, matroid theory and functional analysis that we make use of here.",
"",
"We prove a reverse form of the multidimensional Brascamp-Lieb inequality. Our method also gives a new way to derive the Brascamp-Lieb inequality and is rather convenient for the study of equality cases.",
"The celebrated Brascamp-Lieb (BL) inequalities (and their extensions) are an important mathematical tool, unifying and generalizing numerous inequalities in analysis, convex geometry and information theory. While their structural theory is very well understood, far less is known about computing their main parameters. We give polynomial time algorithms to compute feasibility of BL-datum, the optimal BL-constant and a weak separation oracle for the BL-polytope. The same result holds for the so-called Reverse BL inequalities of Barthe. The best known algorithms for any of these tasks required at least exponential time. The algorithms are obtained by a simple efficient reduction of a given BL-datum to an instance of the Operator Scaling problem defined by Gurvits, for which the present authors have provided a polynomial time algorithm. This reduction implies algorithmic versions of many of the known structural results, and in some cases provide proofs that are different or simpler than existing ones. Of particular interest is the fact that the operator scaling algorithm is continuous in its input. Thus as a simple corollary of our reduction we obtain explicit bounds on the magnitude and continuity of the BL-constant in terms of the BL-data. To the best of our knowledge no such bounds were known, as past arguments relied on compactness. The continuity of BL-constants is important for developing non-linear BL inequalities that have recently found so many applications."
]
} |
1710.02587 | 2763062829 | The Paulsen problem is a basic open problem in operator theory: Given vectors @math that are @math -nearly satisfying the Parseval's condition and the equal norm condition, is it close to a set of vectors @math that exactly satisfy the Parseval's condition and the equal norm condition? Given @math , the squared distance (to the set of exact solutions) is defined as @math where the infimum is over the set of exact solutions. Previous results show that the squared distance of any @math -nearly solution is at most @math and there are @math -nearly solutions with squared distance at least @math . The fundamental open question is whether the squared distance can be independent of the number of vectors @math . We answer this question affirmatively by proving that the squared distance of any @math -nearly solution is @math . Our approach is based on a continuous version of the operator scaling algorithm and consists of two parts. First, we define a dynamical system based on operator scaling and use it to prove that the squared distance of any @math -nearly solution is @math . Then, we show that by randomly perturbing the input vectors, the dynamical system will converge faster and the squared distance of an @math -nearly solution is @math when @math is large enough and @math is small enough. To analyze the convergence of the dynamical system, we develop some new techniques in lower bounding the operator capacity, a concept introduced by Gurvits to analyze the operator scaling algorithm. | An early application of frame scaling is discovered by Forster @cite_6 , who showed that a set of @math vectors @math can always be scaled (see Definition ) to an equal norm Parseval frame if every subset of @math vectors is linearly independent, and he used this result to derive a lower bound on the sign rank of the Hadamard matrix with applications in proving communication complexity lower bounds. 
We note that Forster's scaling result was proved earlier in a more general setting by Gurvits and Samorodnitsky @cite_7 in their work of approximating mixed discriminants, and is also implicit in the work of Barthe @cite_33 in proving Brascamp-Lieb inequalities. A recent application of frame scaling is found by Hardt and Moitra @cite_40 in robust subspace discovery. | {
"cite_N": [
"@cite_40",
"@cite_33",
"@cite_7",
"@cite_6"
],
"mid": [
"1670485642",
"2037726123",
"2052125266",
"2143355494"
],
"abstract": [
"We consider a fundamental problem in unsupervised learning called : given a collection of @math points in @math , if many but not necessarily all of these points are contained in a @math -dimensional subspace @math can we find it? The points contained in @math are called inliers and the remaining points are outliers . This problem has received considerable attention in computer science and in statistics. Yet efficient algorithms from computer science are not robust to adversarial outliers, and the estimators from robust statistics are hard to compute in high dimensions. Are there algorithms for subspace recovery that are both robust to outliers and efficient? We give an algorithm that finds @math when it contains more than a @math fraction of the points. Hence, for say @math this estimator is both easy to compute and well-behaved when there are a constant fraction of outliers. We prove that it is Small Set Expansion hard to find @math when the fraction of errors is any larger, thus giving evidence that our estimator is an optimal compromise between efficiency and robustness. As it turns out, this basic problem has a surprising number of connections to other areas including small set expansion, matroid theory and functional analysis that we make use of here.",
"We prove a reverse form of the multidimensional Brascamp-Lieb inequality. Our method also gives a new way to derive the Brascamp-Lieb inequality and is rather convenient for the study of equality cases.",
"",
"We prove a general lower bound on the complexity of unbounded error probabilistic communication protocols. This result improves on a lower bound for bounded error protocols from Krause (1996). As a simple consequence we get the, to our knowledge, first linear lower bound on the complexity of unbounded error probabilistic communication protocols for the functions defined by Hadamard matrices. We also give an upper bound on the margin of any embedding of a concept class in half spaces."
]
} |
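The frame-scaling operation discussed in these rows can be pictured as an alternation: whiten the vectors by the inverse square root of the frame operator (enforcing the Parseval condition) and then rescale each vector to a common norm (enforcing the equal-norm condition). The sketch below is only this naive alternation on a generic random input, not the paper's dynamical system, and its convergence here is purely empirical.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 3, 6
U = rng.normal(size=(n, d))          # rows are the vectors u_1, ..., u_n

def frame_operator(U):
    return U.T @ U                   # sum_i u_i u_i^T

for _ in range(1000):
    # Parseval step: U <- U S^{-1/2} makes the frame operator the identity.
    S = frame_operator(U)
    vals, vecs = np.linalg.eigh(S)
    U = U @ (vecs * vals ** -0.5) @ vecs.T
    # Equal-norm step: rescale each vector to squared norm d / n.
    norms = np.linalg.norm(U, axis=1)
    U *= np.sqrt(d / n) / norms[:, None]

# Finish with one more Parseval step so the frame condition holds exactly.
S = frame_operator(U)
vals, vecs = np.linalg.eigh(S)
U = U @ (vecs * vals ** -0.5) @ vecs.T
```

When the alternation converges, the result is an equal-norm Parseval frame: the frame operator is the identity and every squared norm is d/n.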
1710.02587 | 2763062829 | The Paulsen problem is a basic open problem in operator theory: Given vectors @math that are @math -nearly satisfying the Parseval's condition and the equal norm condition, is it close to a set of vectors @math that exactly satisfy the Parseval's condition and the equal norm condition? Given @math , the squared distance (to the set of exact solutions) is defined as @math where the infimum is over the set of exact solutions. Previous results show that the squared distance of any @math -nearly solution is at most @math and there are @math -nearly solutions with squared distance at least @math . The fundamental open question is whether the squared distance can be independent of the number of vectors @math . We answer this question affirmatively by proving that the squared distance of any @math -nearly solution is @math . Our approach is based on a continuous version of the operator scaling algorithm and consists of two parts. First, we define a dynamical system based on operator scaling and use it to prove that the squared distance of any @math -nearly solution is @math . Then, we show that by randomly perturbing the input vectors, the dynamical system will converge faster and the squared distance of an @math -nearly solution is @math when @math is large enough and @math is small enough. To analyze the convergence of the dynamical system, we develop some new techniques in lower bounding the operator capacity, a concept introduced by Gurvits to analyze the operator scaling algorithm. | Operator scaling was introduced by Gurvits @cite_11 in an attempt to design a deterministic polynomial time algorithm for polynomial identity testing, and he used it to solve the special case when the commutative rank of a symbolic matrix is equal to its non-commutative rank (e.g. this includes the linear matroid intersection problem over reals). 
Recently, Garg, Gurvits, Oliveira, and Wigderson @cite_38 improved Gurvits' analysis to prove that the alternating algorithm for operator scaling can be used to compute the non-commutative rank of a symbolic matrix in polynomial time. Subsequently, the alternating algorithm for operator scaling is used by the same group @cite_12 to obtain a polynomial time algorithm to compute the optimal constants in Brascamp-Lieb inequalities, which we will elaborate more below as it is related to our work. | {
"cite_N": [
"@cite_38",
"@cite_12",
"@cite_11"
],
"mid": [
"2606436810",
"2950633396",
"1966991151"
],
"abstract": [
"In this paper we present a deterministic polynomial time algorithm for testing if a symbolic matrix in non-commuting variables over @math is invertible or not. The analogous question for commuting variables is the celebrated polynomial identity testing (PIT) for symbolic determinants. In contrast to the commutative case, which has an efficient probabilistic algorithm, the best previous algorithm for the non-commutative setting required exponential time (whether or not randomization is allowed). The algorithm efficiently solves the \"word problem\" for the free skew field, and the identity testing problem for arithmetic formulae with division over non-commuting variables, two problems which had only exponential-time algorithms prior to this work. The main contribution of this paper is a complexity analysis of an existing algorithm due to Gurvits, who proved it was polynomial time for certain classes of inputs. We prove it always runs in polynomial time. The main component of our analysis is a simple (given the necessary known tools) lower bound on central notion of capacity of operators (introduced by Gurvits). We extend the algorithm to actually approximate capacity to any accuracy in polynomial time, and use this analysis to give quantitative bounds on the continuity of capacity (the latter is used in a subsequent paper on Brascamp-Lieb inequalities). Symbolic matrices in non-commuting variables, and the related structural and algorithmic questions, have a remarkable number of diverse origins and motivations. They arise independently in (commutative) invariant theory and representation theory, linear algebra, optimization, linear system theory, quantum information theory, approximation of the permanent and naturally in non-commutative algebra. We provide a detailed account of some of these sources and their interconnections.",
"The celebrated Brascamp-Lieb (BL) inequalities (and their extensions) are an important mathematical tool, unifying and generalizing numerous inequalities in analysis, convex geometry and information theory. While their structural theory is very well understood, far less is known about computing their main parameters. We give polynomial time algorithms to compute feasibility of BL-datum, the optimal BL-constant and a weak separation oracle for the BL-polytope. The same result holds for the so-called Reverse BL inequalities of Barthe. The best known algorithms for any of these tasks required at least exponential time. The algorithms are obtained by a simple efficient reduction of a given BL-datum to an instance of the Operator Scaling problem defined by Gurvits, for which the present authors have provided a polynomial time algorithm. This reduction implies algorithmic versions of many of the known structural results, and in some cases provide proofs that are different or simpler than existing ones. Of particular interest is the fact that the operator scaling algorithm is continuous in its input. Thus as a simple corollary of our reduction we obtain explicit bounds on the magnitude and continuity of the BL-constant in terms of the BL-data. To the best of our knowledge no such bounds were known, as past arguments relied on compactness. The continuity of BL-constants is important for developing non-linear BL inequalities that have recently found so many applications.",
"Generalizing a decision problem for bipartite perfect matching, Edmonds (J. Res. Natl. Bur. Standards 718(4) (1967) 242) introduced the problem (now known as the Edmonds Problem) of deciding if a given linear subspace of M(N) contains a non-singular matrix, where M(N) stands for the linear space of complex N × N matrices. This problem led to many fundamental developments in matroid theory, etc.Classical matching theory can be defined in terms of matrices with non-negative entries. The notion of Positive operator, central in Quantum Theory, is a natural generalization of matrices with non-negative entries. (Here operator refers to maps from matrices to matrices.) First, we reformulate the Edmonds Problem in terms of completely positive operators, or equivalently, in terms of bipartite density matrices. It turns out that one of the most important cases when Edmonds' problem can be solved in polynomial deterministic time, i.e. an intersection of two geometric matroids, corresponds to unentangled (aka separable) bipartite density matrices. We introduce a very general class (or promise) of linear subspaces of M(N) on which there exists a polynomial deterministic time algorithm to solve Edmonds' problem. The algorithm is a thoroughgoing generalization of algorithms in Linial, Samorodnitsky and Wigderson, Proceedings of the 30th ACM Symposium on Theory of Computing, ACM, New York, 1998; Gurvits and Yianilos, and its analysis benefits from an operator analog of permanents, so-called Quantum Permanents.Finally, we prove that the weak membership problem for the convex set of separable normalized bipartite density matrices is NP-HARD."
]
} |
1710.02587 | 2763062829 | The Paulsen problem is a basic open problem in operator theory: Given vectors @math that are @math -nearly satisfying the Parseval's condition and the equal norm condition, is it close to a set of vectors @math that exactly satisfy the Parseval's condition and the equal norm condition? Given @math , the squared distance (to the set of exact solutions) is defined as @math where the infimum is over the set of exact solutions. Previous results show that the squared distance of any @math -nearly solution is at most @math and there are @math -nearly solutions with squared distance at least @math . The fundamental open question is whether the squared distance can be independent of the number of vectors @math . We answer this question affirmatively by proving that the squared distance of any @math -nearly solution is @math . Our approach is based on a continuous version of the operator scaling algorithm and consists of two parts. First, we define a dynamical system based on operator scaling and use it to prove that the squared distance of any @math -nearly solution is @math . Then, we show that by randomly perturbing the input vectors, the dynamical system will converge faster and the squared distance of an @math -nearly solution is @math when @math is large enough and @math is small enough. To analyze the convergence of the dynamical system, we develop some new techniques in lower bounding the operator capacity, a concept introduced by Gurvits to analyze the operator scaling algorithm. | The Brascamp-Lieb inequalities @cite_24 and their reversed form established by Barthe @cite_33 are general classes of inequalities with important applications in functional analysis and convex geometry (e.g. including Nelson's hypercontractivity inequality and the Brunn-Minkowski inequality as special cases). 
The optimal constants for these inequalities are determined by Ball @cite_14 assuming the geometric condition (which is a condition similar to that in John's ellipsoid theorem). Garg, Gurvits, Oliveira and Wigderson @cite_12 show that the Brascamp-Lieb constants are equivalent to the capacity of an operator by a simple transformation, in which the geometric condition corresponds exactly to the doubly stochastic condition. Therefore, the algorithm in @cite_38 can be employed to scale the input to satisfy the geometric condition so as to compute the optimal constant. For our smoothed analysis in , we develop a new technique for proving a lower bound on the operator capacity and thus an upper bound on the Brascamp-Lieb constant. In particular, this implies improved bounds on the Brascamp-Lieb constants for perturbed instances in the rank-one case (which is the case that Brascamp and Lieb proved in @cite_24 ). See @cite_12 and the references therein for applications of these bounds to non-linear Brascamp-Lieb inequalities. | {
"cite_N": [
"@cite_38",
"@cite_14",
"@cite_33",
"@cite_24",
"@cite_12"
],
"mid": [
"2606436810",
"2261415748",
"2037726123",
"1993000235",
"2950633396"
],
"abstract": [
"In this paper we present a deterministic polynomial time algorithm for testing if a symbolic matrix in non-commuting variables over @math is invertible or not. The analogous question for commuting variables is the celebrated polynomial identity testing (PIT) for symbolic determinants. In contrast to the commutative case, which has an efficient probabilistic algorithm, the best previous algorithm for the non-commutative setting required exponential time (whether or not randomization is allowed). The algorithm efficiently solves the \"word problem\" for the free skew field, and the identity testing problem for arithmetic formulae with division over non-commuting variables, two problems which had only exponential-time algorithms prior to this work. The main contribution of this paper is a complexity analysis of an existing algorithm due to Gurvits, who proved it was polynomial time for certain classes of inputs. We prove it always runs in polynomial time. The main component of our analysis is a simple (given the necessary known tools) lower bound on central notion of capacity of operators (introduced by Gurvits). We extend the algorithm to actually approximate capacity to any accuracy in polynomial time, and use this analysis to give quantitative bounds on the continuity of capacity (the latter is used in a subsequent paper on Brascamp-Lieb inequalities). Symbolic matrices in non-commuting variables, and the related structural and algorithmic questions, have a remarkable number of diverse origins and motivations. They arise independently in (commutative) invariant theory and representation theory, linear algebra, optimization, linear system theory, quantum information theory, approximation of the permanent and naturally in non-commutative algebra. We provide a detailed account of some of these sources and their interconnections.",
"",
"We prove a reverse form of the multidimensional Brascamp-Lieb inequality. Our method also gives a new way to derive the Brascamp-Lieb inequality and is rather convenient for the study of equality cases.",
"The best possible constant D_{p,q,t} in the inequality |∬ dx dy f(x)g(x−y)h(y)| ≤ D_{p,q,t} ‖f‖_p ‖g‖_q ‖h‖_t, 1/p + 1/q + 1/t = 2, is determined; the equality is reached if f, g, and h are appropriate Gaussians. The same is shown to be true for the converse inequality (0 < p, q < 1, t < 0), in which case the inequality is reversed. Furthermore, an analogous property is proved for an integral of k functions over n variables, each function depending on a linear combination of the n variables; some of the functions may be taken to be fixed Gaussians. Two applications are given, one of which is a proof of Nelson's hypercontractive inequality.",
"The celebrated Brascamp-Lieb (BL) inequalities (and their extensions) are an important mathematical tool, unifying and generalizing numerous inequalities in analysis, convex geometry and information theory. While their structural theory is very well understood, far less is known about computing their main parameters. We give polynomial time algorithms to compute feasibility of BL-datum, the optimal BL-constant and a weak separation oracle for the BL-polytope. The same result holds for the so-called Reverse BL inequalities of Barthe. The best known algorithms for any of these tasks required at least exponential time. The algorithms are obtained by a simple efficient reduction of a given BL-datum to an instance of the Operator Scaling problem defined by Gurvits, for which the present authors have provided a polynomial time algorithm. This reduction implies algorithmic versions of many of the known structural results, and in some cases provide proofs that are different or simpler than existing ones. Of particular interest is the fact that the operator scaling algorithm is continuous in its input. Thus as a simple corollary of our reduction we obtain explicit bounds on the magnitude and continuity of the BL-constant in terms of the BL-data. To the best of our knowledge no such bounds were known, as past arguments relied on compactness. The continuity of BL-constants is important for developing non-linear BL inequalities that have recently found so many applications."
]
} |
1710.02587 | 2763062829 | The Paulsen problem is a basic open problem in operator theory: Given vectors @math that are @math -nearly satisfying the Parseval's condition and the equal norm condition, is it close to a set of vectors @math that exactly satisfy the Parseval's condition and the equal norm condition? Given @math , the squared distance (to the set of exact solutions) is defined as @math where the infimum is over the set of exact solutions. Previous results show that the squared distance of any @math -nearly solution is at most @math and there are @math -nearly solutions with squared distance at least @math . The fundamental open question is whether the squared distance can be independent of the number of vectors @math . We answer this question affirmatively by proving that the squared distance of any @math -nearly solution is @math . Our approach is based on a continuous version of the operator scaling algorithm and consists of two parts. First, we define a dynamical system based on operator scaling and use it to prove that the squared distance of any @math -nearly solution is @math . Then, we show that by randomly perturbing the input vectors, the dynamical system will converge faster and the squared distance of an @math -nearly solution is @math when @math is large enough and @math is small enough. To analyze the convergence of the dynamical system, we develop some new techniques in lower bounding the operator capacity, a concept introduced by Gurvits to analyze the operator scaling algorithm. | Matrix scaling @cite_28 is a well-studied special case of operator scaling. It has applications in numerical analysis, in approximating permanents @cite_9 and in combinatorial geometry @cite_19 . Very recently, much faster algorithms have been developed for matrix scaling by two independent research groups @cite_18 @cite_15 .
Cohen, Madry, Tsipras and Vladu @cite_18 obtain an algorithm for matrix scaling with running time @math , where @math is the number of nonzeros in the input matrix, @math is the ratio between the largest and the smallest entries in the optimal scaling solution, and @math is the error parameter of the output. Note that the algorithm is near linear time when @math is bounded by a polynomial in @math , but in general it could be exponentially large. Not much is known about upper bounding @math for specific instances, except when the input matrix is strictly positive @cite_31 . Our techniques for smoothed analysis in provide a new way to bound @math ; see Remark . In particular, this implies that the algorithm in @cite_18 is near linear time on a pseudorandom instance as defined in Definition (not necessarily strictly positive). | {
"cite_N": [
"@cite_18",
"@cite_31",
"@cite_28",
"@cite_9",
"@cite_19",
"@cite_15"
],
"mid": [
"2605569246",
"2034025033",
"1990283121",
"2162960148",
"2543706324",
"2606394847"
],
"abstract": [
"In this paper, we study matrix scaling and balancing, which are fundamental problems in scientific computing, with a long line of work on them that dates back to the 1960s. We provide algorithms for both these problems that, ignoring logarithmic factors involving the dimension of the input matrix and the size of its entries, both run in time @math where @math is the amount of error we are willing to tolerate. Here, @math represents the ratio between the largest and the smallest entries of the optimal scalings. This implies that our algorithms run in nearly-linear time whenever @math is quasi-polynomial, which includes, in particular, the case of strictly positive matrices. We complement our results by providing a separate algorithm that uses an interior-point method and runs in time @math . In order to establish these results, we develop a new second-order optimization framework that enables us to treat both problems in a unified and principled manner. This framework identifies a certain generalization of linear system solving that we can use to efficiently minimize a broad class of functions, which we call second-order robust. We then show that in the context of the specific functions capturing matrix scaling and balancing, we can leverage and generalize the work on Laplacian system solving to make the algorithms obtained via this framework very efficient.",
"An n × n nonnegative matrix A is said to be (doubly stochastic) scalable if there exist two positive diagonal matrices X and Y such that XAY is doubly stochastic. We derive an upper bound on the norms of the scaling factors X and Y and give a polynomial-time complexity bound on the problem of computing the scaling factors to a prescribed accuracy.",
"",
"We present a deterministic strongly polynomial algorithm that computes the permanent of a nonnegative n × n matrix to within a multiplicative factor of e^n. To this end we develop the first strongly polynomial-time algorithm for matrix scaling, an important nonlinear optimization problem with many applications. Our work suggests a simple new (slow) polynomial time decision algorithm for bipartite perfect matching, conceptually different from classical approaches.",
"Design matrices are sparse matrices in which the supports of different columns intersect in a few positions. Such matrices come up naturally when studying problems involving point sets with many collinear triples. In this work we consider design matrices with block (or matrix) entries. Our main result is a lower bound on the rank of such matrices, extending the bounds proved in BDWY12,DSW12 for the scalar case. As a result we obtain several applications in combinatorial geometry. The first application involves extending the notion of structural rigidity (or graph rigidity) to the setting where we wish to bound the number of degrees of freedom' in perturbing a set of points under collinearity constraints (keeping some family of triples collinear). Other applications are an asymptotically tight Sylvester-Gallai type result for arrangements of subspaces (improving DH16 ) and a new incidence bound for high dimensional line curve arrangements. The main technical tool in the proof of the rank bound is an extension of the technique of matrix scaling to the setting of block matrices. We generalize the definition of doubly stochastic matrices to matrices with block entries and derive sufficient conditions for a doubly stochastic scaling to exist.",
"We develop several efficient algorithms for the classical problem, which is used in many diverse areas, from preconditioning linear systems to approximation of the permanent. On an input @math matrix @math , this problem asks to find diagonal (scaling) matrices @math and @math (if they exist), so that @math @math -approximates a doubly stochastic, or more generally a matrix with prescribed row and column sums. We address the general scaling problem as well as some important special cases. In particular, if @math has @math nonzero entries, and if there exist @math and @math with polynomially large entries such that @math is doubly stochastic, then we can solve the problem in total complexity @math . This greatly improves on the best known previous results, which were either @math or @math . Our algorithms are based on tailor-made first and second order techniques, combined with other recent advances in continuous optimization, which may be of independent interest for solving similar problems."
]
} |
1710.02173 | 2763925030 | While clustering is one of the most popular methods for data mining, analysts lack adequate tools for quick, iterative clustering analysis, which is essential for hypothesis generation and data reasoning. We introduce Clustrophile, an interactive tool for iteratively computing discrete and continuous data clusters, rapidly exploring different choices of clustering parameters, and reasoning about clustering instances in relation to data dimensions. Clustrophile combines three basic visualizations -- a table of raw datasets, a scatter plot of planar projections, and a matrix diagram (heatmap) of discrete clusterings -- through interaction and intermediate visual encoding. Clustrophile also contributes two spatial interaction techniques, @math and @math , and a visualization method, @math , for reasoning about two-dimensional projections obtained through dimensionality reductions. | Prior research applies visualization for improving user understanding of clustering results across domains. Using coordinated visualizations with drill-down/up capabilities is a typical approach in earlier interactive tools. The Hierarchical Clustering Explorer @cite_44 is an early and comprehensive example of interactive visualization tools for exploring clusterings. It supports the exploration of hierarchical clusterings of gene expression datasets through dendrograms (hierarchical clustering trees) stacked up with heatmap visualizations. | {
"cite_N": [
"@cite_44"
],
"mid": [
"2062937620"
],
"abstract": [
"To date, work in microarrays, sequenced genomes and bioinformatics has focused largely on algorithmic methods for processing and manipulating vast biological data sets. Future improvements will likely provide users with guidance in selecting the most appropriate algorithms and metrics for identifying meaningful clusters: interesting patterns in large data sets, such as groups of genes with similar profiles. Hierarchical clustering has been shown to be effective in microarray data analysis for identifying genes with similar profiles and thus possibly with similar functions. Users also need an efficient visualization tool, however, to facilitate pattern extraction from microarray data sets. The Hierarchical Clustering Explorer integrates four interactive features to provide information visualization techniques that allow users to control the processes and interact with the results. Thus, hybrid approaches that combine powerful algorithms with interactive visualization tools will join the strengths of fast processors with the detailed understanding of domain experts."
]
} |