| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1703.02243 | 2952438563 | In this paper, we establish a baseline for object symmetry detection in complex backgrounds by presenting a new benchmark and an end-to-end deep learning approach, opening up a promising direction for symmetry detection in the wild. The new benchmark, named Sym-PASCAL, spans challenges including object diversity, multi-objects, part-invisibility, and various complex backgrounds that are far beyond those in existing datasets. The proposed symmetry detection approach, named Side-output Residual Network (SRN), leverages output Residual Units (RUs) to fit the errors between the object symmetry groundtruth and the outputs of RUs. By stacking RUs in a deep-to-shallow manner, SRN exploits the 'flow' of errors among multiple scales to ease the problems of fitting complex outputs with limited layers, suppressing the complex backgrounds, and effectively matching object symmetry of different scales. Experimental results validate both the benchmark and its challenging aspects related to realworld images, and the state-of-the-art performance of our symmetry detection approach. The benchmark and the code for SRN are publicly available at this https URL. | In early research, symmetry extraction algorithms were qualitatively evaluated on quite limited binary shapes @cite_16. Such shapes were selected from the MPEG-7 Shape-1 dataset for subjective observation @cite_20. Later, Liu et al. @cite_13 used only a few real-world images to run symmetry detection competitions. In fact, SYMMAX @cite_19 can be regarded as the first authentic benchmark, containing hundreds of training/testing images with local symmetry annotations. However, the local reflection symmetry it defines mainly focuses on low-level image edges and contours, missing the high-level concept of objects. WH-SYMMAX @cite_11 and SK506 @cite_2 are recently proposed benchmarks with annotations of object skeletons. 
Nevertheless, WH-SYMMAX is simply composed of side-view horses, while SK506 consists of objects with little background. Neither of them involves multiple objects in complex backgrounds, leaving plenty of room for developing new object symmetry benchmarks. | {
"cite_N": [
"@cite_19",
"@cite_2",
"@cite_16",
"@cite_13",
"@cite_20",
"@cite_11"
],
"mid": [
"174734558",
"",
"2158008371",
"1981078224",
"2104093257",
"2160306297"
],
"abstract": [
"In this work we propose a learning-based approach to symmetry detection in natural images. We focus on ribbon-like structures, i.e. contours marking local and approximate reflection symmetry and make three contributions to improve their detection. First, we create and make publicly available a ground-truth dataset for this task by building on the Berkeley Segmentation Dataset. Second, we extract features representing multiple complementary cues, such as grayscale structure, color, texture, and spectral clustering information. Third, we use supervised learning to learn how to combine these cues, and employ MIL to accommodate the unknown scale and orientation of the symmetric structures. We systematically evaluate the performance contribution of each individual component in our pipeline, and demonstrate that overall we consistently improve upon results obtained using existing alternatives.",
"",
"A comprehensive survey of thinning methodologies is presented. A wide range of thinning algorithms, including iterative deletion of pixels and nonpixel-based methods, is covered. Skeletonization algorithms based on medial axis and other distance transforms are not considered. An overview of the iterative thinning process and the pixel-deletion criteria needed to preserve the connectivity of the image pattern is given first. Thinning algorithms are then considered in terms of these criteria and their modes of operation. Nonpixel-based methods that usually produce a center line of the pattern directly in one pass without examining all the individual pixels are discussed. The algorithms are considered in great detail and scope, and the relationships among them are explored.",
"Symmetry is a pervasive phenomenon presenting itself in all forms and scales in natural and manmade environments. Its detection plays an essential role at all levels of human as well as machine perception. The recent resurging interest in computational symmetry for computer vision and computer graphics applications has motivated us to conduct a US NSF funded symmetry detection algorithm competition as a workshop affiliated with the Computer Vision and Pattern Recognition (CVPR) Conference, 2013. This competition sets a more complete benchmark for computer vision symmetry detection algorithms. In this report we explain the evaluation metric and the automatic execution of the evaluation workflow. We also present and analyze the algorithms submitted, and show their results on three test sets of real world images depicting reflection, rotation and translation symmetries respectively. This competition establishes a performance baseline for future work on symmetry detection.",
"In this paper, we introduce a new skeleton pruning method based on contour partitioning. Any contour partition can be used, but the partitions obtained by discrete curve evolution (DCE) yield excellent results. The theoretical properties and the experiments presented demonstrate that obtained skeletons are in accord with human visual perception and stable, even in the presence of significant noise and shape variations, and have the same topology as the original skeletons. In particular, we have proven that the proposed approach never produces spurious branches, which are common when using the known skeleton pruning methods. Moreover, the proposed pruning method does not displace the skeleton points. Consequently, all skeleton points are centers of maximal disks. Again, many existing methods displace skeleton points in order to produces pruned skeletons",
"Local reflection symmetry detection in nature images is a quite important but challenging task in computer vision. The main obstacle is both the scales and the orientations of symmetric structure are unknown. The multiple instance learning (MIL) framework sheds lights onto this task owing to its capability to well accommodate the unknown scales and orientations of the symmetric structures. However, to differentiate symmetry vs non-symmetry remains to face extreme confusions caused by clutters scenes and ambiguous object structures. In this paper, we propose a novel multiple instance learning framework for local reflection symmetry detection, named multiple instance subspace learning (MISL), which instead learns a group of models respectively on well partitioned subspaces. To obtain such subspaces, we propose an efficient dividing strategy under MIL setting, named partial random projection tree (PRPT), by taking advantage of the fact that each sample (bag) is represented by the proposed symmetry features computed at specific scale and orientation combinations (instances). Encouraging experimental results on two datasets demonstrate that the proposed local reflection symmetry detection method outperforms current state-of-the-arts. HighlightsWe perform clustering on samples represented by multiple instances.We learn a group of MIL classifiers on subspaces.We report state-of-the-arts results on the symmetry detection benchmark."
]
} |
1703.02243 | 2952438563 | In this paper, we establish a baseline for object symmetry detection in complex backgrounds by presenting a new benchmark and an end-to-end deep learning approach, opening up a promising direction for symmetry detection in the wild. The new benchmark, named Sym-PASCAL, spans challenges including object diversity, multi-objects, part-invisibility, and various complex backgrounds that are far beyond those in existing datasets. The proposed symmetry detection approach, named Side-output Residual Network (SRN), leverages output Residual Units (RUs) to fit the errors between the object symmetry groundtruth and the outputs of RUs. By stacking RUs in a deep-to-shallow manner, SRN exploits the 'flow' of errors among multiple scales to ease the problems of fitting complex outputs with limited layers, suppressing the complex backgrounds, and effectively matching object symmetry of different scales. Experimental results validate both the benchmark and its challenging aspects related to realworld images, and the state-of-the-art performance of our symmetry detection approach. The benchmark and the code for SRN are publicly available at this https URL. | Researchers have tried to extract symmetry in color images based on multi-scale super-pixels. One hypothesis is that object symmetry axes are subsets of the lines connecting the center points of super-pixels @cite_8. Such line subsets are explored among the super-pixels using a sequence of deformable disc models that extract symmetry paths @cite_12. Their consistency and smoothness are enforced with spatial filters, e.g., a particle filter, which link local skeleton segments into continuous curves @cite_0. However, due to the lack of object priors and a learning module, these methods remain limited to images with simple backgrounds. | {
"cite_N": [
"@cite_0",
"@cite_12",
"@cite_8"
],
"mid": [
"",
"2114379931",
"2537310184"
],
"abstract": [
"",
"Symmetry is a powerful shape regularity that's been exploited by perceptual grouping researchers in both human and computer vision to recover part structure from an image without a priori knowledge of scene content. Drawing on the concept of a medial axis, defined as the locus of centers of maximal inscribed discs that sweep out a symmetric part, we model part recovery as the search for a sequence of deformable maximal inscribed disc hypotheses generated from a multiscale super pixel segmentation, a framework proposed by LEV09. However, we learn affinities between adjacent super pixels in a space that's invariant to bending and tapering along the symmetry axis, enabling us to capture a wider class of symmetric parts. Moreover, we introduce a global cost that perceptually integrates the hypothesis space by combining a pair wise and a higher-level smoothing term, which we minimize globally using dynamic programming. The new framework is demonstrated on two datasets, and is shown to significantly outperform the baseline LEV09.",
"Skeletonization algorithms typically decompose an object's silhouette into a set of symmetric parts, offering a powerful representation for shape categorization. However, having access to an object's silhouette assumes correct figure-ground segmentation, leading to a disconnect with the mainstream categorization community, which attempts to recognize objects from cluttered images. In this paper, we present a novel approach to recovering and grouping the symmetric parts of an object from a cluttered scene. We begin by using a multiresolution superpixel segmentation to generate medial point hypotheses, and use a learned affinity function to perceptually group nearby medial points likely to belong to the same medial branch. In the next stage, we learn higher granularity affinity functions to group the resulting medial branches likely to belong to the same object. The resulting framework yields a skeletal approximation that's free of many of the instabilities plaguing traditional skeletons. More importantly, it doesn't require a closed contour, enabling the application of skeleton-based categorization systems to more realistic imagery"
]
} |
1703.02243 | 2952438563 | In this paper, we establish a baseline for object symmetry detection in complex backgrounds by presenting a new benchmark and an end-to-end deep learning approach, opening up a promising direction for symmetry detection in the wild. The new benchmark, named Sym-PASCAL, spans challenges including object diversity, multi-objects, part-invisibility, and various complex backgrounds that are far beyond those in existing datasets. The proposed symmetry detection approach, named Side-output Residual Network (SRN), leverages output Residual Units (RUs) to fit the errors between the object symmetry groundtruth and the outputs of RUs. By stacking RUs in a deep-to-shallow manner, SRN exploits the 'flow' of errors among multiple scales to ease the problems of fitting complex outputs with limited layers, suppressing the complex backgrounds, and effectively matching object symmetry of different scales. Experimental results validate both the benchmark and its challenging aspects related to realworld images, and the state-of-the-art performance of our symmetry detection approach. The benchmark and the code for SRN are publicly available at this https URL. | More effective symmetry detection approaches are rooted in powerful learning methods. On the SYMMAX benchmark, Multiple Instance Learning (MIL) @cite_19 is used to train a curve symmetry detector with multi-scale and multi-orientation features. To capture the diversity of symmetry patterns, Teo et al. @cite_17 employ the Structured Random Forest (SRF) and Shen et al. @cite_11 use subspace MIL with the same features. Nevertheless, as the pixel-wise hand-crafted features are computationally expensive and limited in representational power, these methods struggle to detect object symmetry in complex backgrounds. | {
"cite_N": [
"@cite_19",
"@cite_11",
"@cite_17"
],
"mid": [
"174734558",
"2160306297",
""
],
"abstract": [
"In this work we propose a learning-based approach to symmetry detection in natural images. We focus on ribbon-like structures, i.e. contours marking local and approximate reflection symmetry and make three contributions to improve their detection. First, we create and make publicly available a ground-truth dataset for this task by building on the Berkeley Segmentation Dataset. Second, we extract features representing multiple complementary cues, such as grayscale structure, color, texture, and spectral clustering information. Third, we use supervised learning to learn how to combine these cues, and employ MIL to accommodate the unknown scale and orientation of the symmetric structures. We systematically evaluate the performance contribution of each individual component in our pipeline, and demonstrate that overall we consistently improve upon results obtained using existing alternatives.",
"Local reflection symmetry detection in nature images is a quite important but challenging task in computer vision. The main obstacle is both the scales and the orientations of symmetric structure are unknown. The multiple instance learning (MIL) framework sheds lights onto this task owing to its capability to well accommodate the unknown scales and orientations of the symmetric structures. However, to differentiate symmetry vs non-symmetry remains to face extreme confusions caused by clutters scenes and ambiguous object structures. In this paper, we propose a novel multiple instance learning framework for local reflection symmetry detection, named multiple instance subspace learning (MISL), which instead learns a group of models respectively on well partitioned subspaces. To obtain such subspaces, we propose an efficient dividing strategy under MIL setting, named partial random projection tree (PRPT), by taking advantage of the fact that each sample (bag) is represented by the proposed symmetry features computed at specific scale and orientation combinations (instances). Encouraging experimental results on two datasets demonstrate that the proposed local reflection symmetry detection method outperforms current state-of-the-arts. HighlightsWe perform clustering on samples represented by multiple instances.We learn a group of MIL classifiers on subspaces.We report state-of-the-arts results on the symmetry detection benchmark.",
""
]
} |
1703.01897 | 2949827967 | Software engineers working in large projects must navigate complex information landscapes. Change Impact Analysis (CIA) is a task that relies on engineers' successful information seeking in databases storing, e.g., source code, requirements, design descriptions, and test case specifications. Several previous approaches to support information seeking are task-specific, thus understanding engineers' seeking behavior in specific tasks is fundamental. We present an industrial case study on how engineers seek information in CIA, with a particular focus on traceability and development artifacts that are not source code. We show that engineers have different information seeking behavior, and that some do not consider traceability particularly useful when conducting CIA. Furthermore, we observe a tendency for engineers to prefer less rigid types of support rather than formal approaches, i.e., engineers value support that allows flexibility in how to practically conduct CIA. Finally, due to diverse information seeking behavior, we argue that future CIA support should embrace individual preferences to identify change impact by empowering several seeking alternatives, including searching, browsing, and tracing. | A popular definition of CIA is provided by Bohner, stating that CIA is '' @cite_37. CIA can be described as a cognitive process of incrementally adding items to a set of candidate impact items, which is known to be both tedious and error-prone for large systems @cite_15 @cite_25. However, as CIA is mandated by most safety standards, companies aspiring to release certified software systems must comply. | {
"cite_N": [
"@cite_37",
"@cite_25",
"@cite_15"
],
"mid": [
"1548254758",
"1969559915",
"1483161216"
],
"abstract": [
"From the Publisher: As software systems become increasingly large and complex, the need increases to predict and control the effects of software changes. Software Change Impact Analysis captures the latest information on the science and art of determining what software parts affect each other. It provides a battery of ideas for doing impact analysis better, presents a framework for the field, and focuses attention on important results. You will gain a healthy respect for the strengths and limitations of impact analysis technology and a solid background that will prove valuable for years to come. The book identifies key impact analysis definitions and themes and illustrates the important themes to give you a solid understanding for tackling impact analysis problems. It includes reports on software source code dependency analysis and software traceability analysis and shows how results from both areas can more effectively support impact analysis in software engineering repositories. It also describes why impact representation and determination techniques are at the heart of both source dependency analysis and traceability analysis.",
"Most software is accompanied by frequent changes, whereas the implementation of a single change can affect many different parts of the system. Approaches for Impact Analysis have been developed to assist developers with changing software. However, there is no solid framework for classifying and comparing such approaches, and it is therefore hard to find a suitable technique with minimal effort. The contribution of this paper is a taxonomy for Impact Analysis, based on a literature review conducted on related studies, to overcome this limitation. The presented classification criteria are more detailed and precise than those proposed in previous work, and possible candidates for all criteria are derived from studied literature. We classify several approaches according to our taxonomy to illustrate its applicability and the usefulness of our criteria. The research presented in this paper prepares the ground for a comprehensive survey of Software Change Impact Analysis.",
"The ability to evolve software rapidly and reliably is a major challenge for software engineering. In this introductory chapter we start with a historic overview of the research domain of software evolution. Next, we briefly introduce the important research themes in software evolution, and identify research challenges for the years to come. Finally, we provide a roadmap of the topics treated in this book, and explain how the various chapters are related."
]
} |
1703.01897 | 2949827967 | Software engineers working in large projects must navigate complex information landscapes. Change Impact Analysis (CIA) is a task that relies on engineers' successful information seeking in databases storing, e.g., source code, requirements, design descriptions, and test case specifications. Several previous approaches to support information seeking are task-specific, thus understanding engineers' seeking behavior in specific tasks is fundamental. We present an industrial case study on how engineers seek information in CIA, with a particular focus on traceability and development artifacts that are not source code. We show that engineers have different information seeking behavior, and that some do not consider traceability particularly useful when conducting CIA. Furthermore, we observe a tendency for engineers to prefer less rigid types of support rather than formal approaches, i.e., engineers value support that allows flexibility in how to practically conduct CIA. Finally, due to diverse information seeking behavior, we argue that future CIA support should embrace individual preferences to identify change impact by empowering several seeking alternatives, including searching, browsing, and tracing. | Most CIA work in industry is manual @cite_6, although the importance of improved CIA tools has been highlighted in research for a long time @cite_11. Also, two recent reviews of the scientific literature show that most research on CIA is limited to impact on source code @cite_25 @cite_4. However, as stated in Lehnert's review: '' [pp. 26] lehnert_taxonomy_2011. Especially in safety-critical development, it is critical to also analyze how a change to a software system affects artifact types other than source code, e.g., whether any requirements are affected, or which test cases should be selected for regression testing. | {
"cite_N": [
"@cite_4",
"@cite_25",
"@cite_6",
"@cite_11"
],
"mid": [
"2111403266",
"1969559915",
"2324328435",
"1935624361"
],
"abstract": [
"SUMMARY Software change impact analysis (CIA) is a technique for identifying the effects of a change, or estimating what needs to be modified to accomplish a change. Since the 1980s, there have been many investigations on CIA, especially for code-based CIA techniques. However, there have been very few surveys on this topic. This article tries to fill this gap. And 30 papers that provide empirical evaluation on 23 code-based CIA techniques are identified. Then, data was synthesized against four research questions. The study presents a comparative framework including seven properties, which characterize the CIA techniques, and identifies key applications of CIA techniques in software maintenance. In addition, the need for further research is also presented in the following areas: evaluating existing CIA techniques and proposing new CIA techniques under the proposed framework, developing more mature tools to support CIA, comparing current CIA techniques empirically with unified metrics and common benchmarks, and applying the CIA more extensively and effectively in the software maintenance phase. Copyright © 2012 John Wiley & Sons, Ltd.",
"Most software is accompanied by frequent changes, whereas the implementation of a single change can affect many different parts of the system. Approaches for Impact Analysis have been developed to assist developers with changing software. However, there is no solid framework for classifying and comparing such approaches, and it is therefore hard to find a suitable technique with minimal effort. The contribution of this paper is a taxonomy for Impact Analysis, based on a literature review conducted on related studies, to overcome this limitation. The presented classification criteria are more detailed and precise than those proposed in previous work, and possible candidates for all criteria are derived from studied literature. We classify several approaches according to our taxonomy to illustrate its applicability and the usefulness of our criteria. The research presented in this paper prepares the ground for a comprehensive survey of Software Change Impact Analysis.",
"Context. In many application domains, critical systems must comply with safety standards. This involves gathering safety evidence in the form of artefacts such as safety analyses, system specifications, and testing results. These artefacts can evolve during a system's lifecycle, creating a need for change impact analysis to guarantee that system safety and compliance are not jeopardised. Objective. We aim to provide new insights into how safety evidence change impact analysis is addressed in practice. The knowledge about this activity is limited despite the extensive research that has been conducted on change impact analysis and on safety evidence management. Method. We conducted an industrial survey on the circumstances under which safety evidence change impact analysis is addressed, the tool support used, and the challenges faced. Results. We obtained 97 valid responses representing 16 application domains, 28 countries, and 47 safety standards. The respondents had most often performed safety evidence change impact analysis during system development, from system specifications, and fully manually. No commercial change impact analysis tool was reported as used for all artefact types and insufficient tool support was the most frequent challenge. Conclusion. The results suggest that the different artefact types used as safety evidence co-evolve. In addition, the evolution of safety cases should probably be better managed, the level of automation in safety evidence change impact analysis is low, and the state of the practice can benefit from over 20 improvement areas.",
"As software engineering practice evolves to respond to demands for distributed applications on heterogeneous platforms, software change is increasingly influenced by middleware and components. Interoperability dependency relationships now point to more relevant impacts of software change and necessarily drive the analysis. Software changes to software systems that incorporate middleware components like Web services expose these systems and the organizations they serve to unforeseen ripple effects that frequently result in failures. Current software change impact analysis models have not adequately addressed this trend. Moreover, as software systems grow in size and complexity, the dependency webs of information extend beyond most software engineers ability to comprehend them. This paper examines preliminary research for extending current software change impact analysis to incorporate interoperability dependency relationships for addressing distributed applications and explores three dimensional (3D) visualization techniques for more effective navigation of software changes."
]
} |
1703.01897 | 2949827967 | Software engineers working in large projects must navigate complex information landscapes. Change Impact Analysis (CIA) is a task that relies on engineers' successful information seeking in databases storing, e.g., source code, requirements, design descriptions, and test case specifications. Several previous approaches to support information seeking are task-specific, thus understanding engineers' seeking behavior in specific tasks is fundamental. We present an industrial case study on how engineers seek information in CIA, with a particular focus on traceability and development artifacts that are not source code. We show that engineers have different information seeking behavior, and that some do not consider traceability particularly useful when conducting CIA. Furthermore, we observe a tendency for engineers to prefer less rigid types of support rather than formal approaches, i.e., engineers value support that allows flexibility in how to practically conduct CIA. Finally, due to diverse information seeking behavior, we argue that future CIA support should embrace individual preferences to identify change impact by empowering several seeking alternatives, including searching, browsing, and tracing. | In the 2000s, the growing interest in agile development methods made many organizations downplay traceability. Agile developers often consider traceability management to be a burdensome activity that does not generate return on investment @cite_8. Still, traceability remains non-negotiable in the development of safety-critical systems. Safety standards such as ISO 26262 in the automotive industry (ISO 26262-1:2011, Road vehicles -- Functional safety) and IEC 61511 in the process industry sector (IEC 61511-1 ed. 1.0, Safety Instrumented Systems for the Process Industry Sector) explicitly require traceability through the development lifecycle. | {
"cite_N": [
"@cite_8"
],
"mid": [
"1839118838"
],
"abstract": [
"Agile methods are becoming an increasingly mainstream approach to software development. They are characterized by short iterations with frequent deliverables, test-driven development, lightweight documentation, and frequent interactions with the customer. Perhaps unsurprisingly, traceability is often seen as unnecessary and therefore unwanted in agile projects. This is due to the perceived overhead of creating and maintaining traceability links and the assumption that agile developers have sufficient understanding of a project to implement a change without the support of previously defined traceability links. This chapter explores the challenges, benefits, techniques, and processes of tracing across a broad spectrum of agile projects."
]
} |
1703.01897 | 2949827967 | Software engineers working in large projects must navigate complex information landscapes. Change Impact Analysis (CIA) is a task that relies on engineers' successful information seeking in databases storing, e.g., source code, requirements, design descriptions, and test case specifications. Several previous approaches to support information seeking are task-specific, thus understanding engineers' seeking behavior in specific tasks is fundamental. We present an industrial case study on how engineers seek information in CIA, with a particular focus on traceability and development artifacts that are not source code. We show that engineers have different information seeking behavior, and that some do not consider traceability particularly useful when conducting CIA. Furthermore, we observe a tendency for engineers to prefer less rigid types of support rather than formal approaches, i.e., engineers value support that allows flexibility in how to practically conduct CIA. Finally, due to diverse information seeking behavior, we argue that future CIA support should embrace individual preferences to identify change impact by empowering several seeking alternatives, including searching, browsing, and tracing. | A number of studies on SE information seeking have targeted issue reports, closely related to our study as the CIA process in the case company is tightly connected with the issue tracker (see ). The number of incoming issue reports in large software engineering projects can be overwhelming, and research on duplicate detection in particular has received much attention. Runeson pioneered issue duplicate detection using standard information retrieval techniques @cite_39. Several researchers have done similar work, including a replication by Borg that discusses improving findability in the issue tracker using the open source software search library Apache Lucene @cite_31. 
Also in issue tracking, research is available on task-specific search solutions, e.g., Baysal developed customized issue dashboards based on a grounded theory study of developers' comments @cite_40. | {
"cite_N": [
"@cite_40",
"@cite_31",
"@cite_39"
],
"mid": [
"2014690198",
"2114651011",
""
],
"abstract": [
"Modern software development tools such as issue trackers are often complex and multi-purpose tools that provide access to an immense amount of raw information. Unfortunately, developers sometimes feel frustrated when they cannot easily obtain the particular information they need for a given task; furthermore, the constant influx of new data — the vast majority of which is irrelevant to their task at hand — may result in issues being \"dropped on the floor\". In this paper, we present a developer-centric approach to issue tracking that aims to reduce information overload and improve developers' situational awareness. Our approach is motivated by a grounded theory study of developer comments, which suggests that customized views of a project's repositories that are tailored to developer-specific tasks can help developers better track their progress and understand the surrounding technical context. From the qualitative study, we uncovered a model of the kinds of information elements that are essential for developers in completing their daily tasks, and from this model we built a tool organized around customized issue-tracking dashboards. Further quantitative and qualitative evaluation demonstrated that this dashboard-like approach to issue tracking can reduce the volume of irrelevant emails by over 99 and also improve support for specific issue-tracking tasks.",
"Context: Duplicate detection is a fundamental part of issue management. Systems able to predict whether a new defect report will be closed as a duplicate, may decrease costs by limiting rework and collecting related pieces of information. Goal: Our work explores using Apache Lucene for large-scale duplicate detection based on textual content. Also, we evaluate the previous claim that results are improved if the title is weighted as more important than the description. Method: We conduct a conceptual replication of a well-cited study conducted at Sony Ericsson, using Lucene for searching in the public Android defect repository. In line with the original study, we explore how varying the weighting of the title and the description affects the accuracy. Results: We show that Lucene obtains the best results when the defect report title is weighted three times higher than the description, a bigger difference than has been previously acknowledged. Conclusions: Our work shows the potential of using Lucene as a scalable solution for duplicate detection.",
""
]
} |
1703.01946 | 2949101676 | Human-centered environments are rich with a wide variety of spatial relations between everyday objects. For autonomous robots to operate effectively in such environments, they should be able to reason about these relations and generalize them to objects with different shapes and sizes. For example, having learned to place a toy inside a basket, a robot should be able to generalize this concept using a spoon and a cup. This requires a robot to have the flexibility to learn arbitrary relations in a lifelong manner, making it challenging for an expert to pre-program it with sufficient knowledge to do so beforehand. In this paper, we address the problem of learning spatial relations by introducing a novel method from the perspective of distance metric learning. Our approach enables a robot to reason about the similarity between pairwise spatial relations, thereby enabling it to use its previous knowledge when presented with a new relation to imitate. We show how this makes it possible to learn arbitrary spatial relations from non-expert users using a small number of examples and in an interactive manner. Our extensive evaluation with real-world data demonstrates the effectiveness of our method in reasoning about a continuous spectrum of spatial relations and generalizing them to new objects. | Related to this is the work by rosman2011learning , which proposes constructing a contact point graph to classify spatial relations @cite_13 . Similarly, fichtl2014learning train random forest classifiers for relations based on histograms that encode the relative position of surface patches @cite_6 . guadarrama2013grounding learn models of pre-defined prepositions by training a multi-class logistic regression model using data gathered from crowdsourcing @cite_22 . As opposed to those works, we propose learning a distance metric that captures the similarities between different relations without specifying explicit classes. | {
"cite_N": [
"@cite_13",
"@cite_22",
"@cite_6"
],
"mid": [
"2098201806",
"2050755999",
"2115859267"
],
"abstract": [
"Although a manipulator must interact with objects in terms of their full complexity, it is the qualitative structure of the objects in an environment and the relationships between them which define the composition of that environment, and allow for the construction of efficient plans to enable the completion of various elaborate tasks. In this paper we present an algorithm which redescribes a scene in terms of a layered representation, from labeled point clouds of the objects in the scene. The representation includes a qualitative description of the structure of the objects, as well as the symbolic relationships between them. This is achieved by constructing contact point networks of the objects, which are topological representations of how each object is used in that particular scene, and are based on the regions of contact between objects. We demonstrate the performance of the algorithm, by presenting results from the algorithm tested on a database of stereo images. This shows a high percentage of correctly classified relationships, as well as the discovery of interesting topological features. This output provides a layered representation of a scene, giving symbolic meaning to the inter-object relationships useful for subsequent commonsense reasoning and decision making.",
"This paper describes CRAM (Cognitive Robot Abstract Machine) as a software toolbox for the design, the implementation, and the deployment of cognition-enabled autonomous robots performing everyday manipulation activities. CRAM equips autonomous robots with lightweight reasoning mechanisms that can infer control decisions rather than requiring the decisions to be preprogrammed. This way CRAM-programmed autonomous robots are much more flexible, reliable, and general than control programs that lack such cognitive capabilities. CRAM does not require the whole domain to be stated explicitly in an abstract knowledge base. Rather, it grounds symbolic expressions in the knowledge representation into the perception and actuation routines and into the essential data structures of the control programs. In the accompanying video, we show complex mobile manipulation tasks performed by our household robot that were realized using the CRAM infrastructure.",
"Effective robot manipulation requires a vision system which can extract features of the environment which determine what manipulation actions are possible. There is existing work in this direction under the broad banner of recognising “affordances”. We are particularly interested in possibilities for actions afforded by relationships among pairs of objects. For example if an object is “inside” another or “on top” of another. For this there is a need for a vision system which can recognise such relationships in a scene. We use an approach in which a vision system first segments an image, and then considers a pair of objects to determine their physical relationship. The system extracts surface patches for each object in the segmented image, and then compiles various histograms from looking at relationships between the surface patches of one object and those of the other object. From these histograms a classifier is trained to recognise the relationship between a pair of objects. Our results identify the most promising ways to construct histograms in order to permit classification of physical relationships with high accuracy. This work is important for manipulator robots who may be presented with novel scenes and must identify the salient physical relationships in order to plan manipulation activities."
]
} |
1703.01946 | 2949101676 | Human-centered environments are rich with a wide variety of spatial relations between everyday objects. For autonomous robots to operate effectively in such environments, they should be able to reason about these relations and generalize them to objects with different shapes and sizes. For example, having learned to place a toy inside a basket, a robot should be able to generalize this concept using a spoon and a cup. This requires a robot to have the flexibility to learn arbitrary relations in a lifelong manner, making it challenging for an expert to pre-program it with sufficient knowledge to do so beforehand. In this paper, we address the problem of learning spatial relations by introducing a novel method from the perspective of distance metric learning. Our approach enables a robot to reason about the similarity between pairwise spatial relations, thereby enabling it to use its previous knowledge when presented with a new relation to imitate. We show how this makes it possible to learn arbitrary spatial relations from non-expert users using a small number of examples and in an interactive manner. Our extensive evaluation with real-world data demonstrates the effectiveness of our method in reasoning about a continuous spectrum of spatial relations and generalizing them to new objects. | Moreover, related to our work is the interactive approach by kulick2013active for learning relational symbols from a teacher @cite_26 . They use Gaussian Process classifiers to model symbols and therefore enable a robot to query the teacher with examples to increase its confidence in the learned models. Similarly, our method enables a robot to generalize a relation by interacting with a teacher. However, we do this from the perspective of metric learning, allowing the robot to re-use previous demonstrations of other relations. | {
"cite_N": [
"@cite_26"
],
"mid": [
"1754243990"
],
"abstract": [
"We investigate an interactive teaching scenario, where a human teaches a robot symbols which abstract the geometric properties of objects. There are multiple motivations for this scenario: First, state-of-the-art methods for relational reinforcement learning demonstrate that we can learn and employ strongly generalizing abstract models with great success for goal-directed object manipulation. However, these methods rely on given grounded action and state symbols and raise the classical question: Where do the symbols come from? Second, existing research on learning from human-robot interaction has focused mostly on the motion level (e.g., imitation learning). However, if the goal of teaching is to enable the robot to autonomously solve sequential manipulation tasks in a goal-directed manner, the human should have the possibility to teach the relevant abstractions to describe the task and let the robot eventually leverage powerful relational RL methods. In this paper we formalize human-robot teaching of grounded symbols as an active learning problem, where the robot actively generates pick-and-place geometric situations that maximize its information gain about the symbol to be learned. We demonstrate that the learned symbols can be used by a robot in a relational RL framework to learn probabilistic relational rules and use them to solve object manipulation tasks in a goal-directed manner."
]
} |
1703.01946 | 2949101676 | Human-centered environments are rich with a wide variety of spatial relations between everyday objects. For autonomous robots to operate effectively in such environments, they should be able to reason about these relations and generalize them to objects with different shapes and sizes. For example, having learned to place a toy inside a basket, a robot should be able to generalize this concept using a spoon and a cup. This requires a robot to have the flexibility to learn arbitrary relations in a lifelong manner, making it challenging for an expert to pre-program it with sufficient knowledge to do so beforehand. In this paper, we address the problem of learning spatial relations by introducing a novel method from the perspective of distance metric learning. Our approach enables a robot to reason about the similarity between pairwise spatial relations, thereby enabling it to use its previous knowledge when presented with a new relation to imitate. We show how this makes it possible to learn arbitrary spatial relations from non-expert users using a small number of examples and in an interactive manner. Our extensive evaluation with real-world data demonstrates the effectiveness of our method in reasoning about a continuous spectrum of spatial relations and generalizing them to new objects. | Similar to our work, zampogiannis2015learning model spatial relations based on the geometries of objects given their point cloud models @cite_11 . However, they define a variety of common relations and focus on addressing the problem of extracting the semantics of manipulation actions through temporal analysis of spatial relations between objects. Other methods have also relied on the geometries of objects and scenes to reason about preferred object placements @cite_17 or likely places to find an object @cite_18 . 
Moreover, kroemer2014predicting used 3D object models to extract contact point distributions for predicting interactions between objects @cite_25 . | {
"cite_N": [
"@cite_18",
"@cite_25",
"@cite_17",
"@cite_11"
],
"mid": [
"1977820603",
"2082884224",
"2091862867",
"1547225229"
],
"abstract": [
"In this paper, we argue that there is a strong correlation between local 3D structure and object placement in everyday scenes. We call this the 3D context of the object. In previous work, this is typically hand-coded and limited to flat horizontal surfaces. In contrast, we propose to use a more general model for 3D context and learn the relationship between 3D context and different object classes. This way, we can capture more complex 3D contexts without implementing specialized routines. We present extensive experiments with both qualitative and quantitative evaluations of our method for different object classes. We show that our method can be used in conjunction with an object detection algorithm to reduce the rate of false positives. Our results support that the 3D structure surrounding objects in everyday scenes is a strong indicator of their placement and that it can give significant improvements in the performance of, for example, an object detection system. For evaluation, we have collected a large dataset of Microsoft Kinect frames from five different locations, which we also make publicly available.",
"Contacts between objects play an important role in manipulation tasks. Depending on the locations of contacts, different manipulations or interactions can be performed with the object. By observing the contacts between two objects, a robot can learn to detect potential interactions between them. Rather than defining a set of features for modeling the contact distributions, we propose a kernel-based approach. The contact points are first modeled using a Gaussian distribution. The similarity between these distributions is computed using a kernel function. The contact distributions are then classified using kernel logistic regression. The proposed approach was used to predict stable grasps of an elongated object, as well as to construct towers out of assorted toy blocks.",
"Placing is a necessary skill for a personal robot to have in order to perform tasks such as arranging objects in a disorganized room. The object placements should not only be stable but also be in their semantically preferred placing areas and orientations. This is challenging because an environment can have a large variety of objects and placing areas that may not have been seen by the robot before. In this paper, we propose a learning approach for placing multiple objects in different placing areas in a scene. Given point-clouds of the objects and the scene, we design appropriate features and use a graphical model to encode various properties, such as the stacking of objects, stability, object-area relationship and common placing constraints. The inference in our model is an integer linear program, which we solve efficiently via an linear programming relaxation. We extensively evaluate our approach on 98 objects from 16 categories being placed into 40 areas. Our robotic experiments show a success rate of 98 in placing known objects and 82 in placing new objects stably. We use our method on our robots for performing tasks such as loading several dish-racks, a bookshelf and a fridge with multiple items.",
"In this paper, we introduce an abstract representation for manipulation actions that is based on the evolution of the spatial relations between involved objects. Object tracking in RGBD streams enables straightforward and intuitive ways to model spatial relations in 3D space. Reasoning in 3D overcomes many of the limitations of similar previous approaches, while providing significant flexibility in the desired level of abstraction. At each frame of a manipulation video, we evaluate a number of spatial predicates for all object pairs and treat the resulting set of sequences (Predicate Vector Sequences, PVS) as an action descriptor. As part of our representation, we introduce a symmetric, time-normalized pairwise distance measure that relies on finding an optimal object correspondence between two actions. We experimentally evaluate the method on the classification of various manipulation actions in video, performed at different speeds and timings and involving different objects. The results demonstrate that the proposed representation is remarkably descriptive of the high-level manipulation semantics."
]
} |
1703.02212 | 2953176855 | Users are rarely familiar with the content of a data source they are querying, and therefore cannot avoid using keywords that do not exist in the data source. Traditional systems may respond with an empty result, causing dissatisfaction, while the data source in effect holds semantically related content. In this paper we study this no-but-semantic-match problem on XML keyword search and propose a solution which enables us to present the top-k semantically related results to the user. Our solution involves two steps: (a) extracting semantically related candidate queries from the original query and (b) processing candidate queries and retrieving the top-k semantically related results. Candidate queries are generated by replacement of non-mapped keywords with candidate keywords obtained from an ontological knowledge base. Candidate results are scored using their cohesiveness and their similarity to the original query. Since the number of queries to process can be large, with each result having to be analyzed, we propose pruning techniques to retrieve the top- @math results efficiently. We develop two query processing algorithms based on our pruning techniques. Further, we exploit a property of the candidate queries to propose a technique for processing multiple queries in batch, which improves the performance substantially. Extensive experiments on two real datasets verify the effectiveness and efficiency of the proposed approaches. | Sometimes the system returns erroneous mismatch results for a user query, a situation known as the mismatch problem. @cite_32 proposed a framework to detect keyword queries that lead to a list of irrelevant results on XML data. They detect a mismatch problem by analyzing the results of a user query and inferring the user's intended result node type based on the data structure. Based on this, they are able to suggest queries with relevant results to the user.
Unlike the current study, they investigate ways of producing relevant results instead of finding results for no-match queries. | {
"cite_N": [
"@cite_32"
],
"mid": [
"2076520477"
],
"abstract": [
"When users issue a query to a database, they have expectations about the results. If what they search for is unavailable in the database, the system will return an empty result or, worse, erroneous mismatch results. We call this problem the MisMatch problem. In this paper, we solve the MisMatch problem in the context of XML keyword search. Our solution is based on two novel concepts that we introduce: target node type and Distinguishability. Target Node Type represents the type of node a query result intends to match, and Distinguishability is used to measure the importance of the query keywords. Using these concepts, we develop a low-cost post-processing algorithm on the results of query evaluation to detect the MisMatch problem and generate helpful suggestions to users. Our approach has three noteworthy features: (1) for queries with the MisMatch problem, it generates the explanation, suggested queries and their sample results as the output to users, helping users judge whether the MisMatch problem is solved without reading all query results; (2) it is portable as it can work with any lowest common ancestor-based matching semantics (for XML data without ID references) or minimal Steiner tree-based matching semantics (for XML data with ID references) which return tree structures as results. It is orthogonal to the choice of result retrieval method adopted; (3) it is lightweight in the way that it occupies a very small proportion of the whole query evaluation time. Extensive experiments on three real datasets verify the effectiveness, efficiency and scalability of our approach. A search engine called XClear has been built and is available at http: xclear.comp.nus.edu.sg."
]
} |
1703.02212 | 2953176855 | Users are rarely familiar with the content of a data source they are querying, and therefore cannot avoid using keywords that do not exist in the data source. Traditional systems may respond with an empty result, causing dissatisfaction, while the data source in effect holds semantically related content. In this paper we study this no-but-semantic-match problem on XML keyword search and propose a solution which enables us to present the top-k semantically related results to the user. Our solution involves two steps: (a) extracting semantically related candidate queries from the original query and (b) processing candidate queries and retrieving the top-k semantically related results. Candidate queries are generated by replacement of non-mapped keywords with candidate keywords obtained from an ontological knowledge base. Candidate results are scored using their cohesiveness and their similarity to the original query. Since the number of queries to process can be large, with each result having to be analyzed, we propose pruning techniques to retrieve the top- @math results efficiently. We develop two query processing algorithms based on our pruning techniques. Further, we exploit a property of the candidate queries to propose a technique for processing multiple queries in batch, which improves the performance substantially. Extensive experiments on two real datasets verify the effectiveness and efficiency of the proposed approaches. | Sometimes the empty result is caused by typographical errors. Pu and Yu @cite_0 and @cite_3 investigated a way of suggesting queries that have been cleaned of typing errors. Unlike our study, these authors do not tackle the problem of non-mapped keywords. | {
"cite_N": [
"@cite_0",
"@cite_3"
],
"mid": [
"2103615139",
"2128475980"
],
"abstract": [
"Unlike traditional database queries, keyword queries do not adhere to predefined syntax and are often dirty with irrelevant words from natural languages. This makes accurate and efficient keyword query processing over databases a very challenging task. In this paper, we introduce the problem of query cleaning for keyword search queries in a database context and propose a set of effective and efficient solutions. Query cleaning involves semantic linkage and spelling corrections of database relevant query words, followed by segmentation of nearby query words such that each segment corresponds to a high quality data term. We define a quality metric of a keyword query, and propose a number of algorithms for cleaning keyword queries optimally. It is demonstrated that the basic optimal query cleaning problem can be solved using a dynamic programming algorithm. We further extend the basic algorithm to address incremental query cleaning and top-k optimal query cleaning. The incremental query cleaning is efficient and memory-bounded, hence is ideal for scenarios in which the keywords are streamed. The top-k query cleaning algorithm is guaranteed to return the best k cleaned keyword queries in ranked order. Extensive experiments are conducted on three real-life data sets, and the results confirm the effectiveness and efficiency of the proposed solutions.",
"An important facility to aid keyword search on XML data is suggesting alternative queries when user queries contain typographical errors. Query suggestion thus can improve users' search experience by avoiding returning empty result or results of poor qualities. In this paper, we study the problem of effectively and efficiently providing quality query suggestions for keyword queries on an XML document. We illustrate certain biases in previous work and propose a principled and general framework, XClean, based on the state-of-the-art language model. Compared with previous methods, XClean can accommodate different error models and XML keyword query semantics without losing rigor. Algorithms have been developed that compute the top-k suggestions efficiently. We performed an extensive experiment study using two large-scale real datasets. The experiment results demonstrate the effectiveness and efficiency of the proposed methods."
]
} |
1703.02212 | 2953176855 | Users are rarely familiar with the content of a data source they are querying, and therefore cannot avoid using keywords that do not exist in the data source. Traditional systems may respond with an empty result, causing dissatisfaction, while the data source in effect holds semantically related content. In this paper we study this no-but-semantic-match problem on XML keyword search and propose a solution which enables us to present the top-k semantically related results to the user. Our solution involves two steps: (a) extracting semantically related candidate queries from the original query and (b) processing candidate queries and retrieving the top-k semantically related results. Candidate queries are generated by replacement of non-mapped keywords with candidate keywords obtained from an ontological knowledge base. Candidate results are scored using their cohesiveness and their similarity to the original query. Since the number of queries to process can be large, with each result having to be analyzed, we propose pruning techniques to retrieve the top- @math results efficiently. We develop two query processing algorithms based on our pruning techniques. Further, we exploit a property of the candidate queries to propose a technique for processing multiple queries in batch, which improves the performance substantially. Extensive experiments on two real datasets verify the effectiveness and efficiency of the proposed approaches. | Many studies have used ontology information for searching the semantic web @cite_29 , @cite_14 , @cite_28 . Studies by Aleman-Meza @cite_36 , Cakmak and Özsoyoglu @cite_15 , as well as Wu, Yang and Yan @cite_6 , used ontology information to find frequent patterns in graphs. Wu, Yang and Yan @cite_6 proposed an improved subgraph querying technique using ontology information: they revised subgraph isomorphism by mapping a query to semantically related subgraphs in terms of a given ontology graph.
Our work generates substitute queries for the user-given keyword query by extracting semantically related keywords from the ontological knowledge base and thereafter produces semantically related results instead of returning an empty result set to the user. | {
"cite_N": [
"@cite_14",
"@cite_28",
"@cite_36",
"@cite_29",
"@cite_6",
"@cite_15"
],
"mid": [
"1484139676",
"1592753763",
"2150115779",
"1974775483",
"2060072498",
""
],
"abstract": [
"Ontologies are being used increasingly in fusion applications, particularly for higher-level fusion, where data must often be understood relationally. This research presents a methodology for utilizing ontologies to enhance the process of graph matching in fusion applications, particularly those associated with soft data (e.g., linguistic data existing in things such as intelligence messages). This paper presents some of the considerations and challenges associated with merging the technologies of ontologies and graph matching, as well as some preliminary research findings that show the effectiveness of using ontologies to enhance the matching capabilities of target graphs (as relational items of interest) against larger data graphs.",
"With the fast development of Semantic Web, more and more RDF and OWL ontologies are created and shared. The effective management, such as storage, inference and query, of these ontologies on databases gains increasing attention. This paper addresses ontology query answering on databases by means of Datalog programs. Via epistemic operators, integrity constraints are introduced, and used for conveying semantic aspects of OWL that are not covered by Datalog-style rule languages. We believe such a processing suitable to capture ontologies in the database flavor, while keeping reasoning tractable. Here, we present a logically equivalent knowledge base whose (sound and complete) inference system appears as a Datalog program. As such, SPARQL query answering on OWL ontologies could be solved in databases. Bi-directional strategies, taking advantage of both forward and backward chaining, are then studied to support this kind of customized Datalog programs, returning exactly answers to the query within our logical framework.",
"Today's search technology delivers impressive results in finding relevant documents for given keywords. However many applications in various fields including genetics, pharmacy, social networks, etc. as well as national security need more than what traditional search can provide. Users need to query a very large knowledge base (KB) using semantic similarity, to discover its relevant subsets. One approach is to use templates that support semantic similarity-based discovery of suspicious activities, that can be exploited to support applications such as money laundering, insider threat and terrorist activities. Such discovery that relies on a semantic similarity notion will tolerate syntactic differences between templates and KB using ontologies. We address the problem of identifying known scenarios using a notion of template-based similarity performed as part of the SemDIS project [1, 3]. This approach is prototyped in a system named TRAKS (Terrorism Related Assessment using Knowledge Similarity) and tested using scenarios involving potential money laundering.",
"The semantic Web aims to represent the contents of Web resources in formalisms that both programs and humans can understand. It relies on rich metadata, called semantic annotations, offering explicit semantic descriptions of Web resources. These annotations are built on ontologies, representing domains through their concepts and the semantic relations between them. Ontologies are the foundations of the semantic Web and the keystone of the Web's automated tasks - searching, merging, sharing, maintaining, customizing, and monitoring.",
"Subgraph querying has been applied in a variety of emerging applications. Traditional subgraph querying based on subgraph isomorphism requires identical label matching, which is often too restrictive to capture the matches that are semantically close to the query graphs. This paper extends subgraph querying to identify semantically related matches by leveraging ontology information. (1) We introduce the ontology-based subgraph querying, which revises subgraph isomorphism by mapping a query to semantically related subgraphs in terms of a given ontology graph. We introduce a metric to measure the similarity of the matches. Based on the metric, we introduce an optimization problem to find top K best matches. (2) We provide a filtering-and-verification framework to identify (top-K) matches for ontology-based subgraph queries. The framework efficiently extracts a small subgraph of the data graph from an ontology index, and further computes the matches by only accessing the extracted subgraph. (3) In addition, we show that the ontology index can be efficiently updated upon the changes to the data graphs, enabling the framework to cope with dynamic data graphs. (4) We experimentally verify the effectiveness and efficiency of our framework using both synthetic and real life graphs, comparing with traditional subgraph querying methods.",
""
]
} |
1703.02083 | 2951821408 | Brain extraction or whole brain segmentation is an important first step in many of the neuroimage analysis pipelines. The accuracy and robustness of brain extraction, therefore, is crucial for the accuracy of the entire brain analysis process. With the aim of designing a learning-based, geometry-independent and registration-free brain extraction tool in this study, we present a technique based on an auto-context convolutional neural network (CNN), in which intrinsic local and global image features are learned through 2D patches of different window sizes. In this architecture three parallel 2D convolutional pathways for three different directions (axial, coronal, and sagittal) implicitly learn 3D image information without the need for computationally expensive 3D convolutions. Posterior probability maps generated by the network are used iteratively as context information along with the original image patches to learn the local shape and connectedness of the brain, to extract it from non-brain tissue. The brain extraction results we have obtained from our algorithm are superior to the recently reported results in the literature on two publicly available benchmark datasets, namely LPBA40 and OASIS, in which we obtained Dice overlap coefficients of 97.42 and 95.40, respectively. Furthermore, we evaluated the performance of our algorithm in the challenging problem of extracting arbitrarily-oriented fetal brains in reconstructed fetal brain magnetic resonance imaging (MRI) datasets. In this application our algorithm performed much better than the other methods (Dice coefficient: 95.98), where the other methods performed poorly due to the non-standard orientation and geometry of the fetal brain in MRI. Our CNN-based method can provide accurate, geometry-independent brain extraction in challenging applications.
| Many algorithms have been developed and continuously improved over the past decade for whole brain segmentation, which has been a necessary component of large-scale neuroscience and neuroimage analysis studies. As the usage of these algorithms dramatically grew, the demand for higher accuracy and reliability also increased. Consequently, while fully-automated, accurate brain extraction has already been investigated extensively, it is still an active area of research. Of particular interest is a recent deep learning-based algorithm @cite_26 that has been shown to outperform most of the popular routinely-used brain extraction tools. | {
"cite_N": [
"@cite_26"
],
"mid": [
"2284198383"
],
"abstract": [
"Abstract Brain extraction from magnetic resonance imaging (MRI) is crucial for many neuroimaging workflows. Current methods demonstrate good results on non-enhanced T1-weighted images, but struggle when confronted with other modalities and pathologically altered tissue. In this paper we present a 3D convolutional deep learning architecture to address these shortcomings. In contrast to existing methods, we are not limited to non-enhanced T1w images. When trained appropriately, our approach handles an arbitrary number of modalities including contrast-enhanced scans. Its applicability to MRI data, comprising four channels: non-enhanced and contrast-enhanced T1w, T2w and FLAIR contrasts, is demonstrated on a challenging clinical data set containing brain tumors (N = 53), where our approach significantly outperforms six commonly used tools with a mean Dice score of 95.19. Further, the proposed method at least matches state-of-the-art performance as demonstrated on three publicly available data sets: IBSR, LPBA40 and OASIS, totaling N = 135 volumes. For the IBSR (96.32) and LPBA40 (96.96) data set the convolutional neuronal network (CNN) obtains the highest average Dice scores, albeit not being significantly different from the second best performing method. For the OASIS data the second best Dice (95.02) results are achieved, with no statistical difference in comparison to the best performing tool. For all data sets the highest average specificity measures are evaluated, whereas the sensitivity displays about average results. Adjusting the cut-off threshold for generating the binary masks from the CNN's probability output can be used to increase the sensitivity of the method. Of course, this comes at the cost of a decreased specificity and has to be decided application specific. Using an optimized GPU implementation predictions can be achieved in less than one minute. The proposed method may prove useful for large-scale studies and clinical trials."
]
} |
1703.02083 | 2951821408 | Brain extraction or whole brain segmentation is an important first step in many of the neuroimage analysis pipelines. The accuracy and robustness of brain extraction, therefore, is crucial for the accuracy of the entire brain analysis process. With the aim of designing a learning-based, geometry-independent and registration-free brain extraction tool in this study, we present a technique based on an auto-context convolutional neural network (CNN), in which intrinsic local and global image features are learned through 2D patches of different window sizes. In this architecture three parallel 2D convolutional pathways for three different directions (axial, coronal, and sagittal) implicitly learn 3D image information without the need for computationally expensive 3D convolutions. Posterior probability maps generated by the network are used iteratively as context information along with the original image patches to learn the local shape and connectedness of the brain, to extract it from non-brain tissue. The brain extraction results we have obtained from our algorithm are superior to the recently reported results in the literature on two publicly available benchmark datasets, namely LPBA40 and OASIS, in which we obtained Dice overlap coefficients of 97.42 and 95.40, respectively. Furthermore, we evaluated the performance of our algorithm in the challenging problem of extracting arbitrarily-oriented fetal brains in reconstructed fetal brain magnetic resonance imaging (MRI) datasets. In this application our algorithm performed much better than the other methods (Dice coefficient: 95.98), where the other methods performed poorly due to the non-standard orientation and geometry of the fetal brain in MRI. Our CNN-based method can provide accurate, geometry-independent brain extraction in challenging applications.
| Among brain extraction methods, four algorithms that are distributed with widely-used neuroimage analysis software packages have evolved and are routinely used. These are the Brain Extraction Tool (BET) from FSL @cite_39 @cite_29 , 3dSkullStrip from the AFNI toolkit @cite_42 , the Hybrid Watershed Algorithm (HWA) from FreeSurfer @cite_37 , and Robust Learning-Based Brain Extraction (ROBEX) @cite_35 . BET expands a deformable spherical surface mesh model initialized at the center-of-gravity of the image based on local intensity values and surface smoothness. 3dSkullStrip, which is a modified version of BET, uses points outside of the expanding mesh to guide the borders of the mesh. HWA uses edge detection for watershed segmentation along with an atlas-based deformable surface model. ROBEX fits a triangular mesh, constrained by a shape model, to the probabilistic output of a brain boundary classifier based on random forests. Because the shape model alone cannot perfectly accommodate unseen cases, ROBEX also uses a small free-form deformation which is optimized via graph cuts. | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_29",
"@cite_42",
"@cite_39"
],
"mid": [
"2145661921",
"2076927398",
"",
"2117140276",
"2071881327"
],
"abstract": [
"Automatic whole-brain extraction from magnetic resonance images (MRI), also known as skull stripping, is a key component in most neuroimage pipelines. As the first element in the chain, its robustness is critical for the overall performance of the system. Many skull stripping methods have been proposed, but the problem is not considered to be completely solved yet. Many systems in the literature have good performance on certain datasets (mostly the datasets they were trained/tuned on), but fail to produce satisfactory results when the acquisition conditions or study populations are different. In this paper we introduce a robust, learning-based brain extraction system (ROBEX). The method combines a discriminative and a generative model to achieve the final result. The discriminative model is a Random Forest classifier trained to detect the brain boundary; the generative model is a point distribution model that ensures that the result is plausible. When a new image is presented to the system, the generative model is explored to find the contour with highest likelihood according to the discriminative model. Because the target shape is in general not perfectly represented by the generative model, the contour is refined using graph cuts to obtain the final segmentation. Both models were trained using 92 scans from a proprietary dataset but they achieve a high degree of robustness on a variety of other datasets. ROBEX was compared with six other popular, publicly available methods (BET, BSE, FreeSurfer, AFNI, BridgeBurner, and GCUT) on three publicly available datasets (IBSR, LPBA40, and OASIS, 137 scans in total) that include a wide range of acquisition hardware and a highly variable population (different age groups, healthy/diseased). The results show that ROBEX provides significantly improved performance measures for almost every method/dataset combination.",
"Background Automated segmentation of fluorescently-labeled cell nuclei in 3D confocal microscope images is essential to many studies involving morphological and functional analysis. A common source of segmentation error is tight clustering of nuclei. There is a compelling need to minimize these errors for constructing highly automated scoring systems. Methods A combination of two approaches is presented. First, an improved distance transform combining intensity gradients and geometric distance is used for the watershed step. Second, an explicit mathematical model for the anatomic characteristics of cell nuclei such as size and shape measures is incorporated. This model is constructed automatically from the data. Deliberate initial over-segmentation of the image data is performed, followed by statistical model-based merging. A confidence score is computed for each detected nucleus, measuring how well the nucleus fits the model. This is used in combination with the intensity gradient to control the merge decisions. Results Experimental validation on a set of rodent brain cell images showed 97 concordance with the human observer and significant improvement over prior methods. Conclusions Combining a gradient-weighted distance transform with a richer morphometric model significantly improves the accuracy of automated segmentation and FISH analysis. Cytometry Part A 56A:23–36, 2003. © 2003 Wiley-Liss, Inc.",
"",
"Abstract A package of computer programs for analysis and visualization of three-dimensional human brain functional magnetic resonance imaging (FMRI) results is described. The software can color overlay neural activation maps onto higher resolution anatomical scans. Slices in each cardinal plane can be viewed simultaneously. Manual placement of markers on anatomical landmarks allows transformation of anatomical and functional scans into stereotaxic (Talairach–Tournoux) coordinates. The techniques for automatically generating transformed functional data sets from manually labeled anatomical data sets are described. Facilities are provided for several types of statistical analyses of multiple 3D functional data sets. The programs are written in ANSI C and Motif 1.2 to run on Unix workstations.",
"An automated method for segmenting magnetic resonance head images into brain and non-brain has been developed. It is very robust and accurate and has been tested on thousands of data sets from a wide variety of scanners and taken with a wide variety of MR sequences. The method, Brain Extraction Tool (BET), uses a deformable model that evolves to fit the brain's surface by the application of a set of locally adaptive model forces. The method is very fast and requires no preregistration or other pre-processing before being applied. We describe the new method and give examples of results and the results of extensive quantitative testing against “gold-standard” hand segmentations, and two other popular automated methods. Hum. Brain Mapping 17:143–155, 2002. © 2002 Wiley-Liss, Inc."
]
} |
1703.02083 | 2951821408 | Brain extraction or whole brain segmentation is an important first step in many of the neuroimage analysis pipelines. The accuracy and robustness of brain extraction, therefore, is crucial for the accuracy of the entire brain analysis process. With the aim of designing a learning-based, geometry-independent and registration-free brain extraction tool in this study, we present a technique based on an auto-context convolutional neural network (CNN), in which intrinsic local and global image features are learned through 2D patches of different window sizes. In this architecture three parallel 2D convolutional pathways for three different directions (axial, coronal, and sagittal) implicitly learn 3D image information without the need for computationally expensive 3D convolutions. Posterior probability maps generated by the network are used iteratively as context information along with the original image patches to learn the local shape and connectedness of the brain, to extract it from non-brain tissue. The brain extraction results we have obtained from our algorithm are superior to the recently reported results in the literature on two publicly available benchmark datasets, namely LPBA40 and OASIS, in which we obtained Dice overlap coefficients of 97.42 and 95.40, respectively. Furthermore, we evaluated the performance of our algorithm in the challenging problem of extracting arbitrarily-oriented fetal brains in reconstructed fetal brain magnetic resonance imaging (MRI) datasets. In this application our algorithm performed much better than the other methods (Dice coefficient: 95.98), where the other methods performed poorly due to the non-standard orientation and geometry of the fetal brain in MRI. Our CNN-based method can provide accurate, geometry-independent brain extraction in challenging applications. | Recently, Kleesiek et al.
@cite_26 proposed a deep learning based algorithm for brain extraction, which will be referred to as PCNN in this paper. PCNN uses seven 3D convolutional layers for voxelwise image segmentation. Cubes of size @math around the grayscale target voxel are used as inputs to the network. In the extensive evaluation and comparison reported in @cite_26 , PCNN outperformed state-of-the-art brain extraction algorithms in publicly available benchmark datasets. | {
"cite_N": [
"@cite_26"
],
"mid": [
"2284198383"
],
"abstract": [
"Abstract Brain extraction from magnetic resonance imaging (MRI) is crucial for many neuroimaging workflows. Current methods demonstrate good results on non-enhanced T1-weighted images, but struggle when confronted with other modalities and pathologically altered tissue. In this paper we present a 3D convolutional deep learning architecture to address these shortcomings. In contrast to existing methods, we are not limited to non-enhanced T1w images. When trained appropriately, our approach handles an arbitrary number of modalities including contrast-enhanced scans. Its applicability to MRI data, comprising four channels: non-enhanced and contrast-enhanced T1w, T2w and FLAIR contrasts, is demonstrated on a challenging clinical data set containing brain tumors (N = 53), where our approach significantly outperforms six commonly used tools with a mean Dice score of 95.19. Further, the proposed method at least matches state-of-the-art performance as demonstrated on three publicly available data sets: IBSR, LPBA40 and OASIS, totaling N = 135 volumes. For the IBSR (96.32) and LPBA40 (96.96) data set the convolutional neuronal network (CNN) obtains the highest average Dice scores, albeit not being significantly different from the second best performing method. For the OASIS data the second best Dice (95.02) results are achieved, with no statistical difference in comparison to the best performing tool. For all data sets the highest average specificity measures are evaluated, whereas the sensitivity displays about average results. Adjusting the cut-off threshold for generating the binary masks from the CNN's probability output can be used to increase the sensitivity of the method. Of course, this comes at the cost of a decreased specificity and has to be decided application specific. Using an optimized GPU implementation predictions can be achieved in less than one minute. The proposed method may prove useful for large-scale studies and clinical trials."
]
} |
1703.02083 | 2951821408 | Brain extraction or whole brain segmentation is an important first step in many of the neuroimage analysis pipelines. The accuracy and robustness of brain extraction, therefore, is crucial for the accuracy of the entire brain analysis process. With the aim of designing a learning-based, geometry-independent and registration-free brain extraction tool in this study, we present a technique based on an auto-context convolutional neural network (CNN), in which intrinsic local and global image features are learned through 2D patches of different window sizes. In this architecture three parallel 2D convolutional pathways for three different directions (axial, coronal, and sagittal) implicitly learn 3D image information without the need for computationally expensive 3D convolutions. Posterior probability maps generated by the network are used iteratively as context information along with the original image patches to learn the local shape and connectedness of the brain, to extract it from non-brain tissue. The brain extraction results we have obtained from our algorithm are superior to the recently reported results in the literature on two publicly available benchmark datasets, namely LPBA40 and OASIS, in which we obtained Dice overlap coefficients of 97.42 and 95.40, respectively. Furthermore, we evaluated the performance of our algorithm in the challenging problem of extracting arbitrarily-oriented fetal brains in reconstructed fetal brain magnetic resonance imaging (MRI) datasets. In this application our algorithm performed much better than the other methods (Dice coefficient: 95.98), where the other methods performed poorly due to the non-standard orientation and geometry of the fetal brain in MRI. Our CNN-based method can provide accurate, geometry-independent brain extraction in challenging applications. | Context information has been shown to be useful in computer vision and image segmentation tasks.
Widely-used models, such as conditional random fields @cite_12 , rely on fixed topologies and thus offer limited flexibility, but when integrated into deep CNNs they have shown significant gains in segmentation accuracy @cite_5 @cite_15 . To increase the flexibility and speed of computation, several cascaded CNN architectures have been proposed for medical image segmentation @cite_1 @cite_18 @cite_17 . In such networks, the output layer of a first network is concatenated with the input to a second network to incorporate spatial correspondence of labels. To learn and incorporate context information in our CNN architectures, we adopt the auto-context algorithm @cite_7 , which fuses low-level appearance features with high-level shape information. Compared to a cascaded network, an auto-context CNN involves a generic and flexible procedure that uses the posterior distribution of labels along with image features in an iterative supervised manner until convergence. As a result, the model is flexible and the balance between context information and image features is handled naturally. | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_1",
"@cite_5",
"@cite_15",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"2129259959",
"2589647984",
"1923697677",
"2301358467",
"2147880316",
"2589409328"
],
"abstract": [
"",
"The notion of using context information for solving high-level vision and medical image segmentation problems has been increasingly realized in the field. However, how to learn an effective and efficient context model, together with an image appearance model, remains mostly unknown. The current literature using Markov Random Fields (MRFs) and Conditional Random Fields (CRFs) often involves specific algorithm design in which the modeling and computing stages are studied in isolation. In this paper, we propose a learning algorithm, auto-context. Given a set of training images and their corresponding label maps, we first learn a classifier on local image patches. The discriminative probability (or classification confidence) maps created by the learned classifier are then used as context information, in addition to the original image patches, to train a new classifier. The algorithm then iterates until convergence. Auto-context integrates low-level and context information by fusing a large number of low-level appearance features with context and implicit shape information. The resulting discriminative algorithm is general and easy to implement. Under nearly the same parameter settings in training, we apply the algorithm to three challenging vision applications: foreground/background segregation, human body configuration estimation, and scene region labeling. Moreover, context also plays a very important role in medical brain images where the anatomical structures are mostly constrained to relatively fixed positions. With only some slight changes resulting from using 3D instead of 2D features, the auto-context algorithm applied to brain MRI image segmentation is shown to outperform state-of-the-art algorithms specifically designed for this domain. Furthermore, the scope of the proposed algorithm goes beyond image analysis and it has the potential to be used for a wide variety of structured prediction problems.",
"Abstract We introduce DeepNAT, a 3D Deep convolutional neural network for the automatic segmentation of NeuroAnaTomy in T1-weighted magnetic resonance images. DeepNAT is an end-to-end learning-based approach to brain segmentation that jointly learns an abstract feature representation and a multi-class classification. We propose a 3D patch-based approach, where we do not only predict the center voxel of the patch but also neighbors, which is formulated as multi-task learning. To address a class imbalance problem, we arrange two networks hierarchically, where the first one separates foreground from background, and the second one identifies 25 brain structures on the foreground. Since patches lack spatial context, we augment them with coordinates. To this end, we introduce a novel intrinsic parameterization of the brain volume, formed by eigenfunctions of the Laplace-Beltrami operator. As network architecture, we use three convolutional layers with pooling, batch normalization, and non-linearities, followed by fully connected layers with dropout. The final segmentation is inferred from the probabilistic output of the network with a 3D fully connected conditional random field, which ensures label agreement between close voxels. The roughly 2.7 million parameters in the network are learned with stochastic gradient descent. Our results show that DeepNAT compares favorably to state-of-the-art methods. Finally, the purely learning-based method may have a high potential for the adaptation to young, old, or diseased brains by fine-tuning the pre-trained network with a small training sample on the target application, where the availability of larger datasets with manual annotations may boost the overall segmentation accuracy in the future.",
"Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6 IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.",
"This work is supported by the EPSRC First Grant scheme (grant ref no. EP/N023668/1) and partially funded under the 7th Framework Programme by the European Commission (TBIcare: http://www.tbicare.eu ; CENTER-TBI: https://www.center-tbi.eu ). This work was further supported by a Medical Research Council (UK) Program Grant (Acute brain injury: heterogeneity of mechanisms, therapeutic targets and outcome effects [G9439390 ID 65883]), the UK National Institute of Health Research Biomedical Research Centre at Cambridge and Technology Platform funding provided by the UK Department of Health. KK is supported by the Imperial College London PhD Scholarship Programme. VFJN is supported by a Health Foundation Academy of Medical Sciences Clinician Scientist Fellowship. DKM is supported by an NIHR Senior Investigator Award. We gratefully acknowledge the support of NVIDIA Corporation with the donation of two Titan X GPUs for our research.",
"We present conditional random fields , a framework for building probabilistic models to segment and label sequence data. Conditional random fields offer several advantages over hidden Markov models and stochastic grammars for such tasks, including the ability to relax strong independence assumptions made in those models. Conditional random fields also avoid a fundamental limitation of maximum entropy Markov models (MEMMs) and other discriminative Markov models based on directed graphical models, which can be biased towards states with few successor states. We present iterative parameter estimation algorithms for conditional random fields and compare the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data.",
"Abstract In this paper, we present a novel automated method for White Matter (WM) lesion segmentation of Multiple Sclerosis (MS) patient images. Our approach is based on a cascade of two 3D patch-wise convolutional neural networks (CNN). The first network is trained to be more sensitive revealing possible candidate lesion voxels while the second network is trained to reduce the number of misclassified voxels coming from the first network. This cascaded CNN architecture tends to learn well from a small ( n ≤ 35 ) set of labeled data of the same MRI contrast, which can be very interesting in practice, given the difficulty to obtain manual label annotations and the large amount of available unlabeled Magnetic Resonance Imaging (MRI) data. We evaluate the accuracy of the proposed method on the public MS lesion segmentation challenge MICCAI2008 dataset, comparing it with respect to other state-of-the-art MS lesion segmentation tools. Furthermore, the proposed method is also evaluated on two private MS clinical datasets, where the performance of our method is also compared with different recent public available state-of-the-art MS lesion segmentation methods. At the time of writing this paper, our method is the best ranked approach on the MICCAI2008 challenge, outperforming the rest of 60 participant methods when using all the available input modalities (T1-w, T2-w and FLAIR), while still in the top-rank (3rd position) when using only T1-w and FLAIR modalities. On clinical MS data, our approach exhibits a significant increase in the accuracy segmenting of WM lesions when compared with the rest of evaluated methods, highly correlating ( r ≥ 0.97 ) also with the expected lesion volume."
]
} |
1703.01664 | 2952849295 | Recent progresses on deep discriminative and generative modeling have shown promising results on texture synthesis. However, existing feed-forward based methods trade off generality for efficiency, which suffer from many issues, such as shortage of generality (i.e., build one network per texture), lack of diversity (i.e., always produce visually identical output) and suboptimality (i.e., generate less satisfying visual effects). In this work, we focus on solving these issues for improved texture synthesis. We propose a deep generative feed-forward network which enables efficient synthesis of multiple textures within one single network and meaningful interpolation between them. Meanwhile, a suite of important techniques are introduced to achieve better convergence and diversity. With extensive experiments, we demonstrate the effectiveness of the proposed model and techniques for synthesizing a large number of textures and show its applications with the stylization. | Texture synthesis methods are broadly categorized as non-parametric or parametric. Parametric methods @cite_15 @cite_3 for texture synthesis aim to represent textures through proper statistical models, with the assumption that two images can be visually similar when certain image statistics match well @cite_34 . The synthesis procedure starts from a random noise image and gradually coerces it to have the same relevant statistics as the given example. The statistical measurement is either based on marginal filter response histograms @cite_11 @cite_15 at different scales or more complicated joint responses @cite_3 . However, exploiting proper image statistics is challenging for parametric models especially when synthesizing structured textures. | {
"cite_N": [
"@cite_15",
"@cite_34",
"@cite_3",
"@cite_11"
],
"mid": [
"2078790577",
"2125027853",
"2127006916",
""
],
"abstract": [
"This paper describes a method for synthesizing images that match the texture appearance of a given digitized sample. This synthesis is completely automatic and requires only the “target” texture as input. It allows generation of as much texture as desired so that any object can be covered. It can be used to produce solid textures for creating textured 3-d objects without the distortions inherent in texture mapping. It can also be used to synthesize texture mixtures, images that look a bit like each of several digitized samples. The approach is based on a model of human texture perception, and has potential to be a practically useful tool for graphics applications.",
"Visual discrimination experiments were conducted using unfamiliar displays generated by a digital computer. The displays contained two side-by-side fields with different statistical, topological or heuristic properties. Discrimination was defined as that spontaneous visual process which gives the immediate impression of two distinct fields. The condition for such discrimination was found to be based primarily on clusters or lines formed by proximate points of uniform brightness. A similar rule of connectivity with hue replacing brightness was obtained by using varicolored dots of equal subjective brightness. The limitations in discriminating complex line structures were also investigated.",
"We present a universal statistical model for texture images in the context of an overcomplete complex wavelet transform. The model is parameterized by a set of statistics computed on pairs of coefficients corresponding to basis functions at adjacent spatial locations, orientations, and scales. We develop an efficient algorithm for synthesizing random images subject to these constraints, by iteratively projecting onto the set of images satisfying each constraint, and we use this to test the perceptual validity of the model. In particular, we demonstrate the necessity of subgroups of the parameter set by showing examples of texture synthesis that fail when those parameters are removed from the set. We also demonstrate the power of our model by successfully synthesizing examples drawn from a diverse collection of artificial and natural textures.",
""
]
} |
1703.01664 | 2952849295 | Recent progresses on deep discriminative and generative modeling have shown promising results on texture synthesis. However, existing feed-forward based methods trade off generality for efficiency, which suffer from many issues, such as shortage of generality (i.e., build one network per texture), lack of diversity (i.e., always produce visually identical output) and suboptimality (i.e., generate less satisfying visual effects). In this work, we focus on solving these issues for improved texture synthesis. We propose a deep generative feed-forward network which enables efficient synthesis of multiple textures within one single network and meaningful interpolation between them. Meanwhile, a suite of important techniques are introduced to achieve better convergence and diversity. With extensive experiments, we demonstrate the effectiveness of the proposed model and techniques for synthesizing a large number of textures and show its applications with the stylization. | The success of deep CNNs in discriminative tasks @cite_31 @cite_17 has attracted much attention for image generation. Images can be reconstructed by inverting features @cite_27 @cite_4 @cite_0 , synthesized by matching features, or even generated from noise @cite_26 @cite_29 @cite_33 . Synthesis with neural nets is essentially a parametric approach, where intermediate network outputs provide rich and effective image statistics. The authors of @cite_20 propose that two textures are perceptually similar if their features extracted by a pre-trained CNN-based classifier share similar statistics. Based on this, a noise map is gradually optimized to a desired output that matches the texture example in the CNN feature space. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_33",
"@cite_29",
"@cite_0",
"@cite_27",
"@cite_31",
"@cite_20",
"@cite_17"
],
"mid": [
"2099471712",
"2273348943",
"2951523806",
"2173520492",
"2259643685",
"2949987032",
"",
"2161208721",
"2117539524"
],
"abstract": [
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"Feature representations, both hand-designed and learned ones, are often hard to analyze and interpret, even when they are extracted from visual data. We propose a new approach to study image representations by inverting them with an up-convolutional neural network. We apply the method to shallow representations (HOG, SIFT, LBP), as well as to deep networks. For shallow representations our approach provides significantly better reconstructions than existing methods, revealing that there is surprisingly rich information contained in these features. Inverting a deep network trained on ImageNet provides several insights into the properties of the feature representation learned by the network. Most strikingly, the colors and the rough contours of an image can be reconstructed from activations in higher network layers and even from the predicted class probabilities.",
"In this paper we introduce a generative parametric model capable of producing high quality samples of natural images. Our approach uses a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion. At each level of the pyramid, a separate generative convnet model is trained using the Generative Adversarial Nets (GAN) approach. Samples drawn from our model are of significantly higher quality than alternate approaches. In a quantitative assessment by human evaluators, our CIFAR10 samples were mistaken for real images around 40% of the time, compared to 10% for samples drawn from a GAN baseline model. We also show samples from models trained on the higher resolution images of the LSUN scene dataset.",
"In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.",
"Image-generating machine learning models are typically trained with loss functions based on distance in the image space. This often leads to over-smoothed results. We propose a class of loss functions, which we call deep perceptual similarity metrics (DeePSiM), that mitigate this problem. Instead of computing distances in the image space, we compute distances between image features extracted by deep neural networks. This metric better reflects perceptually similarity of images and thus leads to better results. We show three applications: autoencoder training, a modification of a variational autoencoder, and inversion of deep convolutional networks. In all cases, the generated images look sharp and resemble natural images.",
"Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of them remains limited. In this paper we conduct a direct analysis of the visual information contained in representations by asking the following question: given an encoding of an image, to which extent is it possible to reconstruct the image itself? To answer this question we contribute a general framework to invert representations. We show that this method can invert representations such as HOG and SIFT more accurately than recent alternatives while being applicable to CNNs too. We then use this technique to study the inverse of recent state-of-the-art CNN image representations for the first time. Among our findings, we show that several layers in CNNs retain photographically accurate information about the image, with different degrees of geometric and photometric invariance.",
"",
"Here we introduce a new model of natural textures based on the feature spaces of convolutional neural networks optimised for object recognition. Samples from the model are of high perceptual quality demonstrating the generative power of neural networks trained in a purely discriminative fashion. Within the model, textures are represented by the correlations between feature maps in several layers of the network. We show that across layers the texture representations increasingly capture the statistical properties of natural images while making object information more and more explicit. The model provides a new tool to generate stimuli for neuroscience and might offer insights into the deep representations learned by convolutional neural networks.",
"The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements."
]
} |
1703.01664 | 2952849295 | Recent progresses on deep discriminative and generative modeling have shown promising results on texture synthesis. However, existing feed-forward based methods trade off generality for efficiency, which suffer from many issues, such as shortage of generality (i.e., build one network per texture), lack of diversity (i.e., always produce visually identical output) and suboptimality (i.e., generate less satisfying visual effects). In this work, we focus on solving these issues for improved texture synthesis. We propose a deep generative feed-forward network which enables efficient synthesis of multiple textures within one single network and meaningful interpolation between them. Meanwhile, a suite of important techniques are introduced to achieve better convergence and diversity. With extensive experiments, we demonstrate the effectiveness of the proposed model and techniques for synthesizing a large number of textures and show its applications with the stylization. | Subsequent methods @cite_10 @cite_13 accelerate this optimization procedure by formulating the generation as learning a feed-forward network. These methods train a feed-forward network by minimizing the differences between statistics of the ground truth and the generated image. In particular, image statistics were measured by intermediate outputs of a pre-trained network. Further improvements are made by other methods that follow either an optimization-based @cite_9 @cite_23 @cite_1 or a feed-forward-based @cite_21 @cite_30 framework. However, these methods are limited by the unnecessary requirement of training one network per texture. Our framework also belongs to the feed-forward category but synthesizes diverse results for multiple textures in one single network. | {
"cite_N": [
"@cite_30",
"@cite_10",
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_23",
"@cite_13"
],
"mid": [
"2502312327",
"2950689937",
"",
"2951745349",
"2461230277",
"2471440592",
"2952226636"
],
"abstract": [
"In this paper we revisit the fast stylization method introduced in Ulyanov et al. (2016). We show how a small change in the stylization architecture results in a significant qualitative improvement in the generated images. The change is limited to swapping batch normalization with instance normalization, and to applying the latter both at training and testing times. The resulting method can be used to train high-performance architectures for real-time image generation. The code is made available on github at this https URL. Full paper can be found at arXiv:1701.02096.",
"We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.",
"",
"This paper proposes Markovian Generative Adversarial Networks (MGANs), a method for training generative neural networks for efficient texture synthesis. While deep neural network approaches have recently demonstrated remarkable results in terms of synthesis quality, they still come at considerable computational costs (minutes of run-time for low-res images). Our paper addresses this efficiency issue. Instead of a numerical deconvolution in previous work, we precompute a feed-forward, strided convolutional network that captures the feature statistics of Markovian patches and is able to directly generate outputs of arbitrary dimensions. Such network can directly decode brown noise to realistic texture, or photos to artistic paintings. With adversarial training, we obtain quality comparable to recent neural texture synthesis methods. As no optimization is required any longer at generation time, our run-time performance (0.25M pixel images at 25Hz) surpasses previous neural texture synthesizers by a significant margin (at least 500 times faster). We apply this idea to texture synthesis, style transfer, and video stylization.",
"This note presents an extension to the neural artistic style transfer algorithm (). The original algorithm transforms an image to have the style of another given image. For example, a photograph can be transformed to have the style of a famous painting. Here we address a potential shortcoming of the original method: the algorithm transfers the colors of the original painting, which can alter the appearance of the scene in undesirable ways. We describe simple linear methods for transferring style while preserving colors.",
"This paper presents a novel unsupervised method to transfer the style of an example image to a source image. The complex notion of image style is here considered as a local texture transfer, eventually coupled with a global color transfer. For the local texture transfer, we propose a new method based on an adaptive patch partition that captures the style of the example image and preserves the structure of the source image. More precisely, this example-based partition predicts how well a source patch matches an example patch. Results on various images show that our method outperforms the most recent techniques.",
"recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their methods requires a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer artistic style from a given image to any other image. The resulting networks are remarkably light-weight and can generate textures of quality comparable to Gatys et al., but hundreds of times faster. More generally, our approach highlights the power and flexibility of generative feed-forward models trained with complex and expressive loss functions."
]
} |
1703.01664 | 2952849295 | Recent progresses on deep discriminative and generative modeling have shown promising results on texture synthesis. However, existing feed-forward based methods trade off generality for efficiency, which suffer from many issues, such as shortage of generality (i.e., build one network per texture), lack of diversity (i.e., always produce visually identical output) and suboptimality (i.e., generate less satisfying visual effects). In this work, we focus on solving these issues for improved texture synthesis. We propose a deep generative feed-forward network which enables efficient synthesis of multiple textures within one single network and meaningful interpolation between them. Meanwhile, a suite of important techniques are introduced to achieve better convergence and diversity. With extensive experiments, we demonstrate the effectiveness of the proposed model and techniques for synthesizing a large number of textures and show its applications with the stylization. | A concurrent related method recently proposed by @cite_14 handles multi-style transfer in one network by specializing scaling and shifting parameters after normalization to each specific texture. Our work differs from @cite_14 mainly in two aspects. First, we employ a different approach in representing textures. We represent textures as bits in a one-hot selection unit and as a continuous embedding vector within the network. Second, we propose diversity loss and incremental training scheme in order to achieve better convergence and output diverse results. Moreover, we demonstrate the effectiveness of our method on a much larger set of textures (e.g., 300) whereas @cite_14 develops a network for 32 textures. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2953054324"
],
"abstract": [
"The diversity of painting styles represents a rich visual vocabulary for the construction of an image. The degree to which one may learn and parsimoniously capture this visual vocabulary measures our understanding of the higher level features of paintings, if not images in general. In this work we investigate the construction of a single, scalable deep network that can parsimoniously capture the artistic style of a diversity of paintings. We demonstrate that such a network generalizes across a diversity of artistic styles by reducing a painting to a point in an embedding space. Importantly, this model permits a user to explore new painting styles by arbitrarily combining the styles learned from individual paintings. We hope that this work provides a useful step towards building rich models of paintings and offers a window on to the structure of the learned representation of artistic style."
]
} |
1703.01576 | 2599749705 | We generalize the modular Koszul duality of Achar-Riche to the setting of Soergel bimodules associated to any finite Coxeter system. The key new tools are a functorial monodromy action and wall-crossing functors in the mixed modular derived category. In characteristic 0, this duality together with Soergel's conjecture (proved by Elias-Williamson) imply that our Soergel-theoretic graded category @math is Koszul self-dual, generalizing the result of Beilinson-Ginzburg-Soergel. | For finite dihedral groups @math , the Koszulity and Koszul self-duality of @math was proved earlier by explicit methods by Sauerwein @cite_5 . | {
"cite_N": [
"@cite_5"
],
"mid": [
"1524735191"
],
"abstract": [
"We show that the endomorphism ring of the projective generator in the category of Soergel modules (for dihedral groups) is Koszul self-dual."
]
} |
1703.01460 | 2950186184 | Simulation-based training (SBT) is gaining popularity as a low-cost and convenient training technique in a vast range of applications. However, for a SBT platform to be fully utilized as an effective training tool, it is essential that feedback on performance is provided automatically in real-time during training. It is the aim of this paper to develop an efficient and effective feedback generation method for the provision of real-time feedback in SBT. Existing methods either have low effectiveness in improving novice skills or suffer from low efficiency, resulting in their inability to be used in real-time. In this paper, we propose a neural network based method to generate feedback using the adversarial technique. The proposed method utilizes a bounded adversarial update to minimize a L1 regularized loss via back-propagation. We empirically show that the proposed method can be used to generate simple, yet effective feedback. Also, it was observed to have high effectiveness and efficiency when compared to existing methods, thus making it a promising option for real-time feedback generation in SBT. | The simplest way to provide feedback in SBT is the rule-based approach. The "follow-me" approach (ghost drill) @cite_24 and the "step-by-step" approach @cite_13 in surgical simulation are examples of this approach. However, it may be hard for a novice who has limited experience to follow a ghost drill at his own pace, and step-by-step feedback will not respond if the trainee does not follow the suggested paths. | {
"cite_N": [
"@cite_24",
"@cite_13"
],
"mid": [
"2049629312",
"2547872309"
],
"abstract": [
"Abstract Objective We present a dental training simulator that provides a virtual reality (VR) environment with haptic feedback for dental students to practice dental surgical skills in the context of a crown preparation procedure. The simulator addresses challenges in traditional training such as the subjective nature of surgical skill assessment and the limited availability of expert supervision. Methods and materials We identified important features for characterizing the quality of a procedure based on interviews with experienced dentists. The features are patterns combining tool position, tool orientation, and applied force. The simulator monitors these features during the procedure, objectively assesses the quality of the performed procedure using hidden Markov models (HMMs), and provides objective feedback on the user's performance in each stage of the procedure. We recruited five dental students and five experienced dentists to evaluate the accuracy of our skill assessment method and the quality of the system's generated feedback. Results The experimental results show that HMMs with selected features can correctly classify all test sequences into novice and expert categories. The evaluation also indicates a high acceptance rate from experts for the system's generated feedback. Conclusion In this work, we introduce our VR dental training simulator and describe a mechanism for providing objective skill assessment and feedback. The HMM is demonstrated as an effective tool for classifying a particular operator as novice-level or expert-level. The simulator can generate tutoring feedback with quality comparable to the feedback provided by human tutors.",
"One of the roadblocks to the wide-spread use of virtual reality simulation as a surgical training platform is the need for expert supervision during training to ensure proper skill acquisition. To fully utilize the capacity of virtual reality in surgical training, it is imperative that the guidance process is automated. In this paper, we discuss a method of providing one aspect of performance guidance: advice on the steps of a surgery or procedural guidance. We manually segment the surgical trajectory of an expert surgeon into steps and present them one at a time to guide trainees through a surgical procedure. We show, using a randomized controlled trial, that this form of guidance is effective in moving trainee behavior towards an expert ideal. To support practice variation and different surgical styles adopted by experts, separate guidance templates have to be generated. To enable this, we introduce a method of automatically segmenting a surgical trajectory into steps. We propose a pre-processing step that uses domain knowledge specific to our application to reduce the solution space. We show how this can be incorporated into existing trajectory segmentation methods, as well as a greedy approach that we propose. We compare this segmentation method to existing techniques and show that it is accurate and efficient."
]
} |
1703.01460 | 2950186184 | Simulation-based training (SBT) is gaining popularity as a low-cost and convenient training technique in a vast range of applications. However, for a SBT platform to be fully utilized as an effective training tool, it is essential that feedback on performance is provided automatically in real-time during training. It is the aim of this paper to develop an efficient and effective feedback generation method for the provision of real-time feedback in SBT. Existing methods either have low effectiveness in improving novice skills or suffer from low efficiency, resulting in their inability to be used in real-time. In this paper, we propose a neural network based method to generate feedback using the adversarial technique. The proposed method utilizes a bounded adversarial update to minimize a L1 regularized loss via back-propagation. We empirically show that the proposed method can be used to generate simple, yet effective feedback. Also, it was observed to have high effectiveness and efficiency when compared to existing methods, thus making it a promising option for real-time feedback generation in SBT. | A similar attempt used a prediction model to discriminate the expertise levels using random forests, and then generated feedback directly from the prediction model itself @cite_15 . Here, the generated feedback was the optimal change that would move a novice to the expert level, based on votes of the random forest (Split Voting (SV)). Decision trees and random forests were used in other research areas as well to provide feedback. For example, a decision tree based method was used in customer relationship management to change disloyal customers to loyal ones @cite_18 @cite_23 . Generating feedback from additive tree models such as random forest and gradient boosted trees is NP-hard, but the exact solution can be found by solving a transformed integer linear programming (ILP) problem @cite_17 . | {
"cite_N": [
"@cite_15",
"@cite_18",
"@cite_23",
"@cite_17"
],
"mid": [
"2165422299",
"2152489256",
"2163541574",
"2115424804"
],
"abstract": [
"As demands on surgical training efficiency increase, there is a stronger need for computer assisted surgical training systems. The ability to provide automated performance feedback and assessment is a critical aspect of such systems. The development of feedback and assessment models will allow the use of surgical simulators as self-guided training systems that act like expert trainers and guide trainees towards improved performance. This paper presents an approach based on Random Forest models to analyse data recorded during surgery using a virtual reality temporal bone simulator and generate meaningful automated real-time performance feedback. The training dataset consisted of 27 temporal bone simulation runs composed of 16 expert runs provided by 7 different experts and 11 trainee runs provided by 6 trainees. We demonstrate how Random Forest models can be used to predict surgical expertise and deliver feedback that improves trainees’ surgical technique. We illustrate the potential of the approach through a feasibility study.",
"Most data mining algorithms and tools stop at discovered customer models, producing distribution information on customer profiles. Such techniques, when applied to industrial problems such as customer relationship management (CRM), are useful in pointing out customers who are likely attritors and customers who are loyal, but they require human experts to postprocess the mined information manually. Most of the postprocessing techniques have been limited to producing visualization results and interestingness ranking, but they do not directly suggest actions that would lead to an increase in the objective function such as profit. Here, we present a novel algorithm that suggests actions to change customers from an undesired status (such as attritors) to a desired one (such as loyal) while maximizing an objective function: the expected net profit. We develop these algorithms under resource constraints that abound in reality. The contribution of the work is in taking the output from an existing mature technique (decision trees, for example), and producing novel, actionable knowledge through automatic postprocessing.",
"Most data mining algorithms and tools stop at discovered customer models, producing distribution information on customer profiles. Such techniques, when applied to industrial problems such as customer relationship management (CRM), are useful in pointing out customers who are likely attritors and customers who are loyal, but they require human experts to postprocess the discovered knowledge manually. Most of the postprocessing techniques have been limited to producing visualization results and interestingness ranking, but they do not directly suggest actions that would lead to an increase in the objective function such as profit. In this paper, we present novel algorithms that suggest actions to change customers from an undesired status (such as attritors) to a desired one (such as loyal) while maximizing an objective function: the expected net profit. These algorithms can discover cost-effective actions to transform customers from undesirable classes to desirable ones. The approach we take integrates data mining and decision making tightly by formulating the decision making problems directly on top of the data mining results in a postprocessing step. To improve the effectiveness of the approach, we also present an ensemble of decision trees which is shown to be more robust when the training data changes. Empirical tests are conducted on both a realistic insurance application domain and UCI benchmark data",
"Additive tree models (ATMs) are widely used for data mining and machine learning. Important examples of ATMs include random forest, adaboost (with decision trees as weak learners), and gradient boosted trees, and they are often referred to as the best off-the-shelf classifiers. Though capable of attaining high accuracy, ATMs are not well interpretable in the sense that they do not provide actionable knowledge for a given instance. This greatly limits the potential of ATMs on many applications such as medical prediction and business intelligence, where practitioners need suggestions on actions that can lead to desirable outcomes with minimum costs. To address this problem, we present a novel framework to post-process any ATM classifier to extract an optimal actionable plan that can change a given input to a desired class with a minimum cost. In particular, we prove the NP-hardness of the optimal action extraction problem for ATMs and formulate this problem in an integer linear programming formulation which can be efficiently solved by existing packages. We also empirically demonstrate the effectiveness of the proposed framework by conducting comprehensive experiments on challenging real-world datasets."
]
} |
1703.01460 | 2950186184 | Simulation-based training (SBT) is gaining popularity as a low-cost and convenient training technique in a vast range of applications. However, for a SBT platform to be fully utilized as an effective training tool, it is essential that feedback on performance is provided automatically in real-time during training. It is the aim of this paper to develop an efficient and effective feedback generation method for the provision of real-time feedback in SBT. Existing methods either have low effectiveness in improving novice skills or suffer from low efficiency, resulting in their inability to be used in real-time. In this paper, we propose a neural network based method to generate feedback using the adversarial technique. The proposed method utilizes a bounded adversarial update to minimize a L1 regularized loss via back-propagation. We empirically show that the proposed method can be used to generate simple, yet effective feedback. Also, it was observed to have high effectiveness and efficiency when compared to existing methods, thus making it a promising option for real-time feedback generation in SBT. | In this paper, we propose a neural network based method to generate feedback using the adversarial technique. One intriguing property of neural networks is that the input can be changed by maximizing the prediction error so that it moves into a different class with high confidence @cite_5 . This property has been used to generate adversarial examples from deep neural nets in image classification @cite_21 . An adversarial example is formed by applying small perturbations (imperceptible to the human eye) to the original image, such that the neural network misclassifies it with high confidence. Although the adversarial example has similarities to the feedback problem in that they both change the input to a different class, they are not synonymous.
First, the adversarial example is formed by adding intentionally-designed noise that may result in states that do not exist or have practical meaning in a real-world dataset such as that of the feedback problem. Second, only a few changes to inputs are recommended for feedback, to make it useful to follow. These considerations lead to the formal problem definition below. | {
"cite_N": [
"@cite_5",
"@cite_21"
],
"mid": [
"1673923490",
"1945616565"
],
"abstract": [
"Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.",
"Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset."
]
} |
1703.01421 | 2966705009 | We study recovery of piecewise-constant signals on graphs by the estimator minimizing an @math -edge-penalized objective. Although exact minimization of this objective may be computationally intractable, we show that the same statistical risk guarantees are achieved by the @math -expansion algorithm which computes an approximate minimizer in polynomial time. We establish that for graphs with small average vertex degree, these guarantees are minimax rate-optimal over classes of edge-sparse signals. For spatially inhomogeneous graphs, we propose minimization of an edge-weighted objective where each edge is weighted by its effective resistance or another measure of its contribution to the graph's connectivity. We establish minimax optimality of the resulting estimators over corresponding edge-weighted sparsity classes. We show theoretically that these risk guarantees are not always achieved by the estimator minimizing the @math total-variation relaxation, and empirically that the @math -based estimates are more accurate in high signal-to-noise settings. | For changepoint problems where @math is the linear chain, ) may be exactly minimized by dynamic programming in quadratic time @cite_26 @cite_50 @cite_55 . Pruning ideas may reduce runtime to be near-linear in practice @cite_56 . Correct changepoint recovery and distributional properties of @math minimizing ) were studied asymptotically in @cite_36 @cite_9 when the number of true changepoints is fixed. Non-asymptotic risk bounds similar to ) and ) were established for estimators minimizing similar objectives in @cite_51 @cite_21 ; we discuss this further below. Extension to the recovery of piecewise-constant functions over a continuous interval was considered in @cite_41 . | {
"cite_N": [
"@cite_26",
"@cite_36",
"@cite_41",
"@cite_55",
"@cite_9",
"@cite_21",
"@cite_56",
"@cite_50",
"@cite_51"
],
"mid": [
"",
"2060955827",
"2094089565",
"2097880821",
"",
"2167121755",
"",
"1973897753",
"1968512439"
],
"abstract": [
"",
"An estimator of the number of change-points in an independent normal sequence is proposed via Schwarz' criterion. Weak consistency of this estimator is established.",
"We study the asymptotics for jump-penalized least squares regression aiming at approximating a regression function by piecewise constant functions. Besides conventional consistency and convergence rates of the estimates in L^2([0,1)) our results cover other metrics like Skorokhod metric on the space of càdlàg functions and uniform metrics on C([0, 1]). We will show that these estimators are in an adaptive sense rate optimal over certain classes of \"approximation spaces.\" Special cases are the class of functions of bounded variation, (piecewise) Hölder continuous functions of order 0 < α ≤ 1 and the class of step functions with a finite but arbitrary number of jumps. In the latter setting, we will also deduce the rates known from change-point analysis for detecting the jumps. Finally, the issue of fully automatic selection of the smoothing parameter is addressed.",
"Many signal processing problems can be solved by maximizing the fitness of a segmented model over all possible partitions of the data interval. This letter describes a simple but powerful algorithm that searches the exponentially large space of partitions of N data points in time O(N^2). The algorithm is guaranteed to find the exact global optimum, automatically determines the model order (the number of segments), has a convenient real-time mode, can be extended to higher dimensional data spaces, and solves a surprising variety of problems in signal detection and characterization, density estimation, cluster analysis, and classification.",
"",
"This paper is mainly devoted to a precise analysis of what kind of penalties should be used in order to perform model selection via the minimization of a penalized least-squares type criterion within some general Gaussian framework including the classical ones. As compared to our previous paper on this topic (Birge and Massart in J. Eur. Math. Soc. 3, 203-268 (2001)), more elaborate forms of the penalties are given which are shown to be, in some sense, optimal. We indeed provide more precise upper bounds for the risk of the penalized estimators and lower bounds for the penalty terms, showing that the use of smaller penalties may lead to disastrous results. These lower bounds may also be used to design a practical strategy that allows to estimate the penalty from the data when the amount of noise is unknown. We provide an illustration of the method for the problem of estimating a piecewise constant signal in Gaussian noise when neither the number, nor the location of the change points are known.",
"",
"We discuss the interplay between local M -smoothers, Bayes smoothers and some nonlinear filters for edge-preserving signal reconstruction. We prove that all smoothers in question are nonlinear filters in a precise sense and characterize their fixed points. Then a Potts model is adopted for segmentation. For 1-d signals, an exact algorithm for the computation of maximum posterior modes is derived and applied to a phantom and to 1-d fMRI-data.",
"This paper deals with the problem of detecting change-points in the mean of a signal corrupted by an additive Gaussian noise. The number of changes and their position are unknown. From a nonasymptotic point of view, we propose to estimate them with a method based on a penalized least-squares criterion. We choose the penalty function such that the resulting estimator minimizes the quadratic risk according to the results of Birge and Massart. This penalty depends on unknown constants and we propose a calibration to obtain an automatic method. The performance of the method is assessed through simulation experiments. An application to real data is shown."
]
} |
1703.01421 | 2966705009 | We study recovery of piecewise-constant signals on graphs by the estimator minimizing an @math -edge-penalized objective. Although exact minimization of this objective may be computationally intractable, we show that the same statistical risk guarantees are achieved by the @math -expansion algorithm which computes an approximate minimizer in polynomial time. We establish that for graphs with small average vertex degree, these guarantees are minimax rate-optimal over classes of edge-sparse signals. For spatially inhomogeneous graphs, we propose minimization of an edge-weighted objective where each edge is weighted by its effective resistance or another measure of its contribution to the graph's connectivity. We establish minimax optimality of the resulting estimators over corresponding edge-weighted sparsity classes. We show theoretically that these risk guarantees are not always achieved by the estimator minimizing the @math total-variation relaxation, and empirically that the @math -based estimates are more accurate in high signal-to-noise settings. | In image applications where @math is the 2-D lattice, ) is closely related to the Mumford-Shah functional @cite_8 and Ising/Potts-model energies for discrete Markov random fields @cite_2 . In the latter discrete setting, where each @math is allowed to take value in a finite set of "labels", a variety of algorithms seek to minimize ) using minimum s-t cuts on augmented graphs; see @cite_72 and the contained references for a review. For an Ising model with only two distinct labels, @cite_4 showed that the exact minimizer may be computed via a single minimum s-t cut. For more than two distinct labels, exact minimization of ) is NP-hard @cite_10 . We analyze a graph-cut algorithm from @cite_10 that applies to more than two labels, where the exact minimization property is replaced by an approximation guarantee.
We show that the deterministic guarantee of this algorithm implies rate-optimal statistical risk bounds, for the 2-D lattice as well as for general graphs. | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_72",
"@cite_2",
"@cite_10"
],
"mid": [
"79315950",
"2114487471",
"2101309634",
"2020999234",
"2143516773"
],
"abstract": [
"",
"Abstract : This reprint will introduce and study the most basic properties of three new variational problems which are suggested by applications to computer vision. In computer vision, a fundamental problem is to appropriately decompose the domain R of a function g(x,y) of two variables. This problem starts by describing the physical situation which produces images: assume that a three-dimensional world is observed by an eye or camera from some point P and that g_1(rho) represents the intensity of the light in this world approaching the point P from a direction rho. If one has a lens at P focusing this light on a retina or a film - in both cases a plane domain R in which we may introduce coordinates x, y - then let g(x,y) be the strength of the light signal striking R at a point with coordinates (x,y); g(x,y) is essentially the same as g_1(rho), possibly after a simple transformation given by the geometry of the imaging system. The function g(x,y) defined on the plane domain R will be called an image. What sort of function is g? The light reflected off the surfaces S_i of various solid objects O_i visible from P will strike the domain R in various open subsets R_i. When one object O_1 is partially in front of another object O_2 as seen from P, but some of object O_2 appears as the background to the sides of O_1, then the open sets R_1 and R_2 will have a common boundary (the 'edge' of object O_1 in the image defined on R) and one usually expects the image g(x,y) to be discontinuous along this boundary. (JHD)",
"In the last few years, several new algorithms based on graph cuts have been developed to solve energy minimization problems in computer vision. Each of these techniques constructs a graph such that the minimum cut on the graph also minimizes the energy. Yet, because these graph constructions are complex and highly specific to a particular energy function, graph cuts have seen limited application to date. In this paper, we give a characterization of the energy functions that can be minimized by graph cuts. Our results are restricted to functions of binary variables. However, our work generalizes many previous constructions and is easily applicable to vision problems that involve large numbers of labels, such as stereo, motion, image restoration, and scene reconstruction. We give a precise characterization of what energy functions can be minimized using graph cuts, among the energy functions that can be written as a sum of terms containing three or fewer binary variables. We also provide a general-purpose construction to minimize such an energy function. Finally, we give a necessary condition for any energy function of binary variables to be minimized by graph cuts. Researchers who are considering the use of graph cuts to optimize a particular energy function can use our results to determine if this is possible and then follow our construction to create the appropriate graph. A software implementation is freely available.",
"We make an analogy between images and statistical mechanics systems. Pixel gray levels and the presence and orientation of edges are viewed as states of atoms or molecules in a lattice-like physical system. The assignment of an energy function in the physical system determines its Gibbs distribution. Because of the Gibbs distribution, Markov random field (MRF) equivalence, this assignment also determines an MRF image model. The energy function is a more convenient and natural mechanism for embodying picture attributes than are the local characteristics of the MRF. For a range of degradation mechanisms, including blurring, nonlinear deformations, and multiplicative or additive noise, the posterior distribution is an MRF with a structure akin to the image model. By the analogy, the posterior distribution defines another (imaginary) physical system. Gradual temperature reduction in the physical system isolates low energy states ("annealing"), or what is the same thing, the most probable states under the Gibbs distribution. The analogous operation under the posterior distribution yields the maximum a posteriori (MAP) estimate of the image given the degraded observations. The result is a highly parallel "relaxation" algorithm for MAP estimation. We establish convergence properties of the algorithm and we experiment with some simple pictures, for which good restorations are obtained at low signal-to-noise ratios.",
"Many tasks in computer vision involve assigning a label (such as disparity) to every pixel. A common constraint is that the labels should vary smoothly almost everywhere while preserving sharp discontinuities that may exist, e.g., at object boundaries. These tasks are naturally stated in terms of energy minimization. The authors consider a wide class of energies with various smoothness constraints. Global minimization of these energy functions is NP-hard even in the simplest discontinuity-preserving case. Therefore, our focus is on efficient approximation algorithms. We present two algorithms based on graph cuts that efficiently find a local minimum with respect to two types of large moves, namely expansion moves and swap moves. These moves can simultaneously change the labels of arbitrarily large sets of pixels. In contrast, many standard algorithms (including simulated annealing) use small moves where only one pixel changes its label at a time. Our expansion algorithm finds a labeling within a known factor of the global minimum, while our swap algorithm handles more general energy functions. Both of these algorithms allow important cases of discontinuity preserving energies. We experimentally demonstrate the effectiveness of our approach for image restoration, stereo and motion. On real data with ground truth, we achieve 98 percent accuracy."
]
} |
1703.01421 | 2966705009 | We study recovery of piecewise-constant signals on graphs by the estimator minimizing an @math -edge-penalized objective. Although exact minimization of this objective may be computationally intractable, we show that the same statistical risk guarantees are achieved by the @math -expansion algorithm which computes an approximate minimizer in polynomial time. We establish that for graphs with small average vertex degree, these guarantees are minimax rate-optimal over classes of edge-sparse signals. For spatially inhomogeneous graphs, we propose minimization of an edge-weighted objective where each edge is weighted by its effective resistance or another measure of its contribution to the graph's connectivity. We establish minimax optimality of the resulting estimators over corresponding edge-weighted sparsity classes. We show theoretically that these risk guarantees are not always achieved by the estimator minimizing the @math total-variation relaxation, and empirically that the @math -based estimates are more accurate in high signal-to-noise settings. | For an arbitrary graph @math , the estimators @math exactly minimizing ) and ) are examples of general model-complexity penalized estimators studied in @cite_44 @cite_21 . The penalties we impose may be smaller than those needed for the analyses of @cite_44 @cite_21 by logarithmic factors, and we instead control the supremum of a certain Gaussian process using an argument specialized to our graph-based problem. A theoretical focus of @cite_44 @cite_21 was on adaptive attainment of minimax rates over families of models---for example, for the linear chain graph, @cite_51 @cite_21 considered penalties increasing but concave in the number of changepoints, and the resulting estimates achieve the guarantee ) simultaneously for all @math . 
Instead of using such a penalty, which poses additional computational challenges, we will apply a data-driven procedure to choose @math , although we will not study the adaptivity properties of the procedure in this paper. | {
"cite_N": [
"@cite_44",
"@cite_21",
"@cite_51"
],
"mid": [
"1996268356",
"2167121755",
"1968512439"
],
"abstract": [
"Performance bounds for criteria for model selection are developed using recent theory for sieves. The model selection criteria are based on an empirical loss or contrast function with an added penalty term motivated by empirical process theory and roughly proportional to the number of parameters needed to describe the model divided by the number of observations. Most of our examples involve density or regression estimation settings and we focus on the problem of estimating the unknown density or regression function. We show that the quadratic risk of the minimum penalized empirical contrast estimator is bounded by an index of the accuracy of the sieve. This accuracy index quantifies the trade-off among the candidate models between the approximation error and parameter dimension relative to sample size.",
"This paper is mainly devoted to a precise analysis of what kind of penalties should be used in order to perform model selection via the minimization of a penalized least-squares type criterion within some general Gaussian framework including the classical ones. As compared to our previous paper on this topic (Birge and Massart in J. Eur. Math. Soc. 3, 203-268 (2001)), more elaborate forms of the penalties are given which are shown to be, in some sense, optimal. We indeed provide more precise upper bounds for the risk of the penalized estimators and lower bounds for the penalty terms, showing that the use of smaller penalties may lead to disastrous results. These lower bounds may also be used to design a practical strategy that allows to estimate the penalty from the data when the amount of noise is unknown. We provide an illustration of the method for the problem of estimating a piecewise constant signal in Gaussian noise when neither the number, nor the location of the change points are known.",
"This paper deals with the problem of detecting change-points in the mean of a signal corrupted by an additive Gaussian noise. The number of changes and their position are unknown. From a nonasymptotic point of view, we propose to estimate them with a method based on a penalized least-squares criterion. We choose the penalty function such that the resulting estimator minimizes the quadratic risk according to the results of Birge and Massart. This penalty depends on unknown constants and we propose a calibration to obtain an automatic method. The performance of the method is assessed through simulation experiments. An application to real data is shown."
]
} |
1703.01656 | 2604142613 | Simulators are powerful tools for reasoning about a robot's interactions with its environment. However, when simulations diverge from reality, that reasoning becomes less useful. In this paper, we show how to close the loop between liquid simulation and real-time perception. We use observations of liquids to correct errors when tracking the liquid's state in a simulator. Our results show that closed-loop simulation is an effective way to prevent large divergence between the simulated and real liquid states. As a direct consequence of this, our method can enable reasoning about liquids that would otherwise be infeasible due to large divergences, such as reasoning about occluded liquid. | Liquid simulation and fluid mechanics are well researched in the literature @cite_24 . They are commonly used to model fluid flow in areas such as mechanical and aerospace engineering @cite_10 , and to model liquid surfaces in computer graphics @cite_30 @cite_34 @cite_0 . Work by Ladický et al. @cite_14 combined these methods with regression forests to learn the update rules for particles in a particle-based liquid simulator. There has also been some work combining real world observations with deformable object simulation. Schulman et al. @cite_1 , by applying forces in the simulator in the direction of the gradient of the error between depth pixels and simulation, were able to track cloth based on real observations. Our warp field method, described in section , applies a similar concept to liquids. Finally, the only example in the literature of combining real observations with liquid simulation is work by Wang et al. @cite_17 , which used stereo cameras and colored water to reconstruct fluid surfaces, and then used fluid mechanics to make the resulting surface meshes more realistic, although they were limited to making realistic appearing liquid flows rather than using them to solve robotic tasks.
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_1",
"@cite_24",
"@cite_0",
"@cite_34",
"@cite_10",
"@cite_17"
],
"mid": [
"588441650",
"2089858332",
"2096404799",
"",
"",
"",
"1482662578",
"2025333109"
],
"abstract": [
"Animating fluids like water, smoke, and fire using physics-based simulation is increasingly important in visual effects, in particular in movies, like The Day After Tomorrow, and in computer games. This book provides a practical introduction to fluid simulation for graphics. The focus is on animating fully three-dimensional incompressible flow, from understanding the math and the algorithms to the actual implementation.",
"Traditional fluid simulations require large computational resources even for an average sized scene with the main bottleneck being a very small time step size, required to guarantee the stability of the solution. Despite a large progress in parallel computing and efficient algorithms for pressure computation in the recent years, realtime fluid simulations have been possible only under very restricted conditions. In this paper we propose a novel machine learning based approach, that formulates physics-based fluid simulation as a regression problem, estimating the acceleration of every particle for each frame. We designed a feature vector, directly modelling individual forces and constraints from the Navier-Stokes equations, giving the method strong generalization properties to reliably predict positions and velocities of particles in a large time step setting on yet unseen test videos. We used a regression forest to approximate the behaviour of particles observed in the large training set of simulations obtained using a traditional solver. Our GPU implementation led to a speed-up of one to three orders of magnitude compared to the state-of-the-art position-based fluid solver and runs in real-time for systems with up to 2 million particles.",
"We introduce an algorithm for tracking deformable objects from a sequence of point clouds. The proposed tracking algorithm is based on a probabilistic generative model that incorporates observations of the point cloud and the physical properties of the tracked object and its environment. We propose a modified expectation maximization algorithm to perform maximum a posteriori estimation to update the state estimate at each time step. Our modification makes it practical to perform the inference through calls to a physics simulation engine. This is significant because (i) it allows for the use of highly optimized physics simulation engines for the core computations of our tracking algorithm, and (ii) it makes it possible to naturally, and efficiently, account for physical constraints imposed by collisions, grasping actions, and material properties in the observation updates. Even in the presence of the relatively large occlusions that occur during manipulation tasks, our algorithm is able to robustly track a variety of types of deformable objects, including ones that are one-dimensional, such as ropes; two-dimensional, such as cloth; and three-dimensional, such as sponges. Our implementation can track these objects in real time.",
"",
"",
"",
"Keywords: gas turbine; propulsion; mechanics; thermodynamics; aeronautics. Reference Record created on 2005-11-18, modified on 2016-08-08",
"We present an image-based reconstruction framework to model real water scenes captured by stereoscopic video. In contrast to many image-based modeling techniques that rely on user interaction to obtain high-quality 3D models, we instead apply automatically calculated physically-based constraints to refine the initial model. The combination of image-based reconstruction with physically-based simulation allows us to model complex and dynamic objects such as fluid. Using a depth map sequence as initial conditions, we use a physically based approach that automatically fills in missing regions, removes outliers, and refines the geometric shape so that the final 3D model is consistent to both the input video data and the laws of physics. Physically-guided modeling also makes interpolation or extrapolation in the space-time domain possible, and even allows the fusion of depth maps that were taken at different times or viewpoints. We demonstrated the effectiveness of our framework with a number of real scenes, all captured using only a single pair of cameras."
]
} |
1703.01656 | 2604142613 | Simulators are powerful tools for reasoning about a robot's interactions with its environment. However, when simulations diverge from reality, that reasoning becomes less useful. In this paper, we show how to close the loop between liquid simulation and real-time perception. We use observations of liquids to correct errors when tracking the liquid's state in a simulator. Our results show that closed-loop simulation is an effective way to prevent large divergence between the simulated and real liquid states. As a direct consequence of this, our method can enable reasoning about liquids that would otherwise be infeasible due to large divergences, such as reasoning about occluded liquid. | In robotics, there has been work using simulators to reason about liquids, although only in constrained settings, e.g., pouring tasks. Kunze and Beetz @cite_7 @cite_8 employed a simulator to reason about a robot's actions as it attempted to make pancakes, which involved reasoning about the liquid batter. Yamaguchi and Atkeson @cite_13 @cite_2 used a simulator to reason about pouring different kinds of liquids. However, these works use rather crude liquid simulations for prediction tasks that do not require accurate feedback. Schenck and Fox @cite_4 used a finite element method liquid simulator to train a deep network on the tasks of detecting and tracking liquids. They did not use the simulator to reason about perceived liquid, though. | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_2",
"@cite_13"
],
"mid": [
"2483265793",
"2746240053",
"2093550895",
"2570797809",
"2215032184"
],
"abstract": [
"Recent advances in AI and robotics have claimed many incredible results with deep learning, yet no work to date has applied deep learning to the problem of liquid perception and reasoning. In this paper, we apply fully-convolutional deep neural networks to the tasks of detecting and tracking liquids. We evaluate three models: a single-frame network, multi-frame network, and a LSTM recurrent network. Our results show that the best liquid detection results are achieved when aggregating data over multiple frames and that the LSTM network outperforms the other two in both tasks. This suggests that LSTM-based neural networks have the potential to be a key component for enabling robots to handle liquids using robust, closed-loop controllers.",
"Today's autonomous robots accomplish only selected tasks in constrained environments. But if robots are to master everyday manipulation, they need general problem-solving capabilities as humans have. We have developed commonsense reasoning methods to make qualitative spatio-temporal inferences about manipulation tasks performed by humans and robots. Hence, robots are enabled to reason about a wider spectrum of tasks, which will considerably deepen their understanding of everyday manipulation.",
"Abstract Autonomous robots that are to perform complex everyday tasks such as making pancakes have to understand how the effects of an action depend on the way the action is executed. Within Artificial Intelligence, classical planning reasons about whether actions are executable, but makes the assumption that the actions will succeed (with some probability). In this work, we have designed, implemented, and analyzed a framework that allows us to envision the physical effects of robot manipulation actions. We consider envisioning to be a qualitative reasoning method that reasons about actions and their effects based on simulation-based projections. Thereby it allows a robot to infer what could happen when it performs a task in a certain way. This is achieved by translating a qualitative physics problem into a parameterized simulation problem; performing a detailed physics-based simulation of a robot plan; logging the state evolution into appropriate data structures; and then translating these sub-symbolic data structures into interval-based first-order symbolic, qualitative representations, called timelines. The result of the envisioning is a set of detailed narratives represented by timelines which are then used to infer answers to qualitative reasoning problems. By envisioning the outcome of actions before committing to them, a robot is able to reason about physical phenomena and can therefore prevent itself from ending up in unwanted situations. Using this approach, robots can perform manipulation tasks more efficiently, robustly, and flexibly, and they can even successfully accomplish previously unknown variations of tasks.",
"We explore differential dynamic programming for dynamical systems that form a directed graph structure. This planning method is applicable to complicated tasks where sub-tasks are sequentially connected and different skills are selected according to the situation. A pouring task is an example: it involves grasping and moving a container, and selection of skills, e.g. tipping and shaking. Our method can handle these situations; we plan the continuous parameters of each subtask and skill, as well as select skills. Our method is based on stochastic differential dynamic programming. We use stochastic neural networks to learn dynamical systems when they are unknown. Our method is a form of reinforcement learning. On the other hand, we use ideas from artificial intelligence, such as graph-structured dynamical systems, and frame-and-slots to represent a large state-action vector. This work is a partial unification of these different fields. We demonstrate our method in a simulated pouring task, where we show that our method generalizes over material property and container shape. Accompanying video: https://youtu.be/_ECmnG2BLE8.",
"We explore a temporal decomposition of dynamics in order to enhance policy learning with unknown dynamics. There are model-free methods and model-based methods for policy learning with unknown dynamics, but both approaches have problems: in general, model-free methods have less generalization ability, while model-based methods are often limited by the assumed model structure or need to gather many samples to make models. We consider a temporal decomposition of dynamics to make learning models easier. To obtain a policy, we apply differential dynamic programming (DDP). A feature of our method is that we consider decomposed dynamics even when there is no action to be taken, which allows us to decompose dynamics more flexibly. Consequently learned dynamics become more accurate. Our DDP is a first-order gradient descent algorithm with a stochastic evaluation function. In DDP with learned models, typically there are many local maxima. In order to avoid them, we consider multiple criteria evaluation functions. In addition to the stochastic evaluation function, we use a reference value function. This method was verified with pouring simulation experiments where we created complicated dynamics. The results show that we can optimize actions with DDP while learning dynamics models."
]
} |
1703.01416 | 2949691147 | In this work we present a real-time approach for local trajectory replanning for MAVs. Current trajectory generation methods for multicopters achieve high success rates in cluttered environments, but assume the environment is static and require prior knowledge of the map. In our work we utilize the results of such planners and extend them with local replanning algorithm that can handle unmodeled (possibly dynamic) obstacles while keeping MAV close to the global trajectory. To make our approach real-time capable we maintain information about the environment around MAV in an occupancy grid stored in 3D circular buffer that moves together with a drone, and represent the trajectories using uniform B-splines. This representation ensures that trajectory is sufficiently smooth and at the same time allows efficient optimization. | Optimization-based approaches rely on minimizing a cost function that consists of smoothness and collision terms. The trajectory itself can be represented as a set of discrete points @cite_20 or polynomial segments @cite_11 . The approach presented in this work falls into the same category, but represents the trajectory using uniform B-splines. | {
"cite_N": [
"@cite_20",
"@cite_11"
],
"mid": [
"2161819990",
"2109656638"
],
"abstract": [
"In this paper, we present CHOMP (covariant Hamiltonian optimization for motion planning), a method for trajectory optimization invariant to reparametrization. CHOMP uses functional gradient techniques to iteratively improve the quality of an initial trajectory, optimizing a functional that trades off between a smoothness and an obstacle avoidance component. CHOMP can be used to locally optimize feasible trajectories, as well as to solve motion planning queries, converging to low-cost trajectories even when initialized with infeasible ones. It uses Hamiltonian Monte Carlo to alleviate the problem of convergence to high-cost local minima (and for probabilistic completeness), and is capable of respecting hard constraints along the trajectory. We present extensive experiments with CHOMP on manipulation and locomotion tasks, using seven-degree-of-freedom manipulators and a rough-terrain quadruped robot.",
"This paper provides new results for the tracking control of a quadrotor unmanned aerial vehicle (UAV). The UAV has four input degrees of freedom, namely the magnitudes of the four rotor thrusts, that are used to control the six translational and rotational degrees of freedom, and to achieve asymptotic tracking of four outputs, namely, three position variables for the vehicle center of mass and the direction of one vehicle body-fixed axis. A globally defined model of the quadrotor UAV rigid body dynamics is introduced as a basis for the analysis. A nonlinear tracking controller is developed on the special Euclidean group SE(3) and it is shown to have desirable closed loop properties that are almost global. Several numerical examples, including an example in which the quadrotor recovers from being initially upside down, illustrate the versatility of the controller."
]
} |
1703.01416 | 2949691147 | In this work we present a real-time approach for local trajectory replanning for MAVs. Current trajectory generation methods for multicopters achieve high success rates in cluttered environments, but assume the environment is static and require prior knowledge of the map. In our work we utilize the results of such planners and extend them with local replanning algorithm that can handle unmodeled (possibly dynamic) obstacles while keeping MAV close to the global trajectory. To make our approach real-time capable we maintain information about the environment around MAV in an occupancy grid stored in 3D circular buffer that moves together with a drone, and represent the trajectories using uniform B-splines. This representation ensures that trajectory is sufficiently smooth and at the same time allows efficient optimization. | To deal with the memory limitation, octree-based representations of the environment are used in @cite_5 @cite_19 . They store information in an efficient way by pruning the leaves of the trees that contain the same information, but the access times for each element become logarithmic in the number of nodes, instead of the constant time as in the voxel-based approaches. | {
"cite_N": [
"@cite_19",
"@cite_5"
],
"mid": [
"2013345945",
"2133844819"
],
"abstract": [
"In this paper we propose a novel volumetric multi-resolution mapping system for RGB-D images that runs on a standard CPU in real-time. Our approach generates a textured triangle mesh from a signed distance function that it continuously updates as new RGB-D images arrive. We propose to use an octree as the primary data structure which allows us to represent the scene at multiple scales. Furthermore, it allows us to grow the reconstruction volume dynamically. As most space is either free or unknown, we allocate and update only those voxels that are located in a narrow band around the observed surface. In contrast to a regular grid, this approach saves enormous amounts of memory and computation time. The major challenge is to generate and maintain a consistent triangle mesh, as neighboring cells in the octree are more difficult to find and may have different resolutions. To remedy this, we present in this paper a novel algorithm that keeps track of these dependencies, and efficiently updates corresponding parts of the triangle mesh. In our experiments, we demonstrate the real-time capability on a large set of RGB-D sequences. As our approach does not require a GPU, it is well suited for applications on mobile or flying robots with limited computational resources.",
"Three-dimensional models provide a volumetric representation of space which is important for a variety of robotic applications including flying robots and robots that are equipped with manipulators. In this paper, we present an open-source framework to generate volumetric 3D environment models. Our mapping approach is based on octrees and uses probabilistic occupancy estimation. It explicitly represents not only occupied space, but also free and unknown areas. Furthermore, we propose an octree map compression method that keeps the 3D models compact. Our framework is available as an open-source C++ library and has already been successfully applied in several robotics projects. We present a series of experimental results carried out with real robots and on publicly available real-world datasets. The results demonstrate that our approach is able to update the representation efficiently and models the data consistently while keeping the memory requirement at a minimum."
]
} |
1703.01416 | 2949691147 | In this work we present a real-time approach for local trajectory replanning for MAVs. Current trajectory generation methods for multicopters achieve high success rates in cluttered environments, but assume the environment is static and require prior knowledge of the map. In our work we utilize the results of such planners and extend them with local replanning algorithm that can handle unmodeled (possibly dynamic) obstacles while keeping MAV close to the global trajectory. To make our approach real-time capable we maintain information about the environment around MAV in an occupancy grid stored in 3D circular buffer that moves together with a drone, and represent the trajectories using uniform B-splines. This representation ensures that trajectory is sufficiently smooth and at the same time allows efficient optimization. | Another popular approach to environment mapping is voxel hashing, which was proposed by and used in @cite_21 . It is mainly used for storing a truncated signed distance function representation of the environment. In this case, only a narrow band of measurements around the surface is inserted and only the memory required for that sub-volume is allocated. However, when full measurements must be inserted or the dense information must be stored the advantages of this approach compared to those of the other approaches are not significant. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2561394090"
],
"abstract": [
"Truncated Signed Distance Fields (TSDFs) have become a popular tool in 3D reconstruction, as they allow building very high-resolution models of the environment in real-time on GPU. However, they have rarely been used for planning on robotic platforms, mostly due to high computational and memory requirements. We propose to reduce these requirements by using large voxel sizes, and extend the standard TSDF representation to be faster and better model the environment at these scales. We also propose a method to build Euclidean Signed Distance Fields (ESDFs), which are a common representation for planning, incrementally out of our TSDF representation. ESDFs provide Euclidean distance to the nearest obstacle at any point in the map, and also provide collision gradient information for use with optimization-based planners. We validate the reconstruction accuracy and real-time performance of our combined system on both new and standard datasets from stereo and RGB-D imagery. The complete system will be made available as an open-source library called voxblox."
]
} |
1703.01500 | 2950877726 | In this work, we use player behavior during the closed beta test of the MMORPG ArcheAge as a proxy for an extreme situation: at the end of the closed beta test, all user data is deleted, and thus, the outcome (or penalty) of players' in-game behaviors in the last few days loses its meaning. We analyzed 270 million records of player behavior in the 4th closed beta test of ArcheAge. Our findings show that there are no apparent pandemic behavior changes, but some outliers were more likely to exhibit anti-social behavior (e.g., player killing). We also found that contrary to the reassuring adage that "Even if I knew the world would go to pieces tomorrow, I would still plant my apple tree," players abandoned character progression, showing a drastic decrease in quest completion, leveling, and ability changes at the end of the beta test. | One well-known study around a critical event in online games was performed by Boman and Johansson, modeling a synthetic plague in WoW @cite_11 . The synthetic plague grew from what was originally conceived as a "debuff" intended to spread only from monsters to players. A programming bug, however, resulted in the plague being able to spread from player to player. As players constitute a synthetic society in the game, the game can be seen as an interactive executable model for studying disease spread (with the caveat that it is a very special kind of disease) @cite_18 . One interesting emergent behavior was that players would deliberately attempt to infect others by passing the debuff to their pets, dismissing them, and then re-summoning them in a populated area, causing the plague to spread. Similarly, some industrious players set up sales of fraudulent cures. | {
"cite_N": [
"@cite_18",
"@cite_11"
],
"mid": [
"2080640836",
"2153933800"
],
"abstract": [
"Simulation models are of increasing importance within the field of applied epidemiology. However, very little can be done to validate such models or to tailor their use to incorporate important human behaviours. In a recent incident in the virtual world of online gaming, the accidental inclusion of a disease-like phenomenon provided an excellent example of the potential of such systems to alleviate these modelling constraints. We discuss this incident and how appropriate exploitation of these gaming systems could greatly advance the capabilities of applied simulation modelling in infectious disease research.",
"A virtual plague is a process in which a behavior-affecting property spreads among characters in a Massively Multiplayer Online Game (MMOG). The MMOG individuals constitute a synthetic population, and the game can be seen as a form of interactive executable model for studying disease spread, albeit of a very special kind. To a game developer maintaining an MMOG, recognizing, monitoring, and ultimately controlling a virtual plague is important, regardless of how it was initiated. The prospect of using tools, methods and theory from the field of epidemiology to do this seems natural and appealing. We will address the feasibility of such a prospect, first by considering some basic measures used in epidemiology, then by pointing out the differences between real world epidemics and virtual plagues. We also suggest directions for MMOG developer control through epidemiological modeling. Our aim is understanding the properties of virtual plagues, rather than trying to eliminate them or mitigate their effects, as would be in the case of real infectious disease."
]
} |
1703.01437 | 2952964002 | In this paper, we propose a novel method to register football broadcast video frames on the static top view model of the playing surface. The proposed method is fully automatic in contrast to the current state of the art which requires manual initialization of point correspondences between the image and the static model. Automatic registration using existing approaches has been difficult due to the lack of sufficient point correspondences. We investigate an alternate approach exploiting the edge information from the line markings on the field. We formulate the registration problem as a nearest neighbour search over a synthetically generated dictionary of edge map and homography pairs. The synthetic dictionary generation allows us to exhaustively cover a wide variety of camera angles and positions and reduce this problem to a minimal per-frame edge map matching procedure. We show that the per-frame results can be improved in videos using an optimization framework for temporal camera stabilization. We demonstrate the efficacy of our approach by presenting extensive results on a dataset collected from matches of football World Cup 2014. | Top view data for sports analytics has been extensively used in previous works. @cite_12 uses 8 fixed high-definition (HD) cameras to detect the players in field hockey matches. They demonstrated that event recognition (goal, penalty corner, etc.) can be performed robustly even with noisy player tracks. @cite_3 used the same setup to highlight that a role-based assignment of players can eliminate the need for actual player identities in several applications. In basketball, a fixed set of six small cameras is now used for player tracking as a standard in all NBA matches, and the data has been used for extensive analytics @cite_20 .
Football certainly has gained the most attention @cite_22 and the commercially available data has been utilized for a variety of applications, from estimating the likelihood of a shot to be a goal @cite_4 to learning a team's defensive weaknesses and strengths @cite_10 . | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_3",
"@cite_10",
"@cite_12",
"@cite_20"
],
"mid": [
"2187932643",
"2287065228",
"2170245723",
"",
"2009923444",
""
],
"abstract": [
"In this paper, we present a method which accurately estimates the likelihood of chances in soccer using strategic features from an entire season of player and ball tracking data taken from a professional league. From the data, we analyzed the spatiotemporal patterns of the ten-second window of play before a shot for nearly 10,000 shots. From our analysis, we found that not only is the game phase important (i.e., corner, free-kick, open-play, counter attack etc.), the strategic features such as defender proximity, interaction of surrounding players, speed of play, coupled with the shot location play an impact on determining the likelihood of a team scoring a goal. Using our spatiotemporal strategic features, we can accurately measure the likelihood of each shot. We use this analysis to quantify the efficiency of each team and their strategy.",
"Team-based invasion sports such as football, basketball, and hockey are similar in the sense that the players are able to move freely around the playing area and that player and team performance cannot be fully analysed without considering the movements and interactions of all players as a group. State-of-the-art object tracking systems now produce spatio-temporal traces of player trajectories with high definition and high frequency, and this, in turn, has facilitated a variety of research efforts, across many disciplines, to extract insight from the trajectories. We survey recent research efforts that use spatio-temporal data from team sports as input and involve non-trivial computation. This article categorises the research efforts in a coherent framework and identifies a number of open research questions.",
"In this paper, we describe a method to represent and discover adversarial group behavior in a continuous domain. In comparison to other types of behavior, adversarial behavior is heavily structured as the location of a player (or agent) is dependent both on their teammates and adversaries, in addition to the tactics or strategies of the team. We present a method which can exploit this relationship through the use of a spatiotemporal basis model. As players constantly change roles during a match, we show that employing a \"role-based\" representation instead of one based on player \"identity\" can best exploit the playing structure. As vision-based systems currently do not provide perfect detection tracking (e.g. missed or false detections), we show that our compact representation can effectively \"denoise\" erroneous detections as well as enabling temporal analysis, which was previously prohibitive due to the dimensionality of the signal. To evaluate our approach, we used a fully instrumented field-hockey pitch with 8 fixed high-definition (HD) cameras and evaluated our approach on approximately 200,000 frames of data from a state-of-the-art real-time player detector and compare it to manually labelled data.",
"",
"Recently, vision-based systems have been deployed in professional sports to track the ball and players to enhance analysis of matches. Due to their unobtrusive nature, vision-based approaches are preferred to wearable sensors (e.g. GPS or RFID sensors) as it does not require players or balls to be instrumented prior to matches. Unfortunately, in continuous team sports where players need to be tracked continuously over long-periods of time (e.g. 35 minutes in field-hockey or 45 minutes in soccer), current vision-based tracking approaches are not reliable enough to provide fully automatic solutions. As such, human intervention is required to fix-up missed or false detections. However, in instances where a human can not intervene due to the sheer amount of data being generated - this data can not be used due to the missing/noisy data. In this paper, we investigate two representations based on raw player detections (and not tracking) which are immune to missed and false detections. Specifically, we show that both team occupancy maps and centroids can be used to detect team activities, while the occupancy maps can be used to retrieve specific team activities. An evaluation on over 8 hours of field hockey data captured at a recent international tournament demonstrates the validity of the proposed approach.",
""
]
} |
1703.01437 | 2952964002 | In this paper, we propose a novel method to register football broadcast video frames on the static top view model of the playing surface. The proposed method is fully automatic in contrast to the current state of the art which requires manual initialization of point correspondences between the image and the static model. Automatic registration using existing approaches has been difficult due to the lack of sufficient point correspondences. We investigate an alternate approach exploiting the edge information from the line markings on the field. We formulate the registration problem as a nearest neighbour search over a synthetically generated dictionary of edge map and homography pairs. The synthetic dictionary generation allows us to exhaustively cover a wide variety of camera angles and positions and reduce this problem to a minimal per-frame edge map matching procedure. We show that the per-frame results can be improved in videos using an optimization framework for temporal camera stabilization. We demonstrate the efficacy of our approach by presenting extensive results on a dataset collected from matches of football World Cup 2014. | The idea of obtaining top view data from broadcast videos has also been explored in previous works. @cite_7 used KLT @cite_26 tracks on manually annotated interest points (with known correspondences) and used them in a RANSAC @cite_17 based approach to obtain the homographies in the presence of camera pan, tilt, and zoom in NHL hockey games. @cite_33 showed improvement over this work by using SIFT features @cite_5 augmented with line and ellipse information. A similar idea of manually annotating an initial frame and then propagating the matches has also been explored in @cite_14 . Li and Chellappa @cite_16 projected player tracking data from small broadcast clips of American football into top view form to segment group motion patterns. The homographies in their work were also obtained using manually annotated landmarks. | {
"cite_N": [
"@cite_26",
"@cite_14",
"@cite_33",
"@cite_7",
"@cite_5",
"@cite_16",
"@cite_17"
],
"mid": [
"2130103520",
"2089515781",
"",
"",
"2151103935",
"2066025447",
"2085261163"
],
"abstract": [
"No feature-based vision system can work unless good features can be identified and tracked from frame to frame. Although tracking itself is by and large a solved problem, selecting features that can be tracked well and correspond to physical points in the world is still hard. We propose a feature selection criterion that is optimal by construction because it is based on how the tracker works, and a feature monitoring method that can detect occlusions, disocclusions, and features that do not correspond to points in the world. These methods are based on a new tracking algorithm that extends previous Newton-Raphson style search methods to work under affine image transformations. We test performance with several simulations and experiments.",
"Tracking and identifying players in sports videos filmed with a single pan-tilt-zoom camera has many applications, but it is also a challenging problem. This paper introduces a system that tackles this difficult task. The system possesses the ability to detect and track multiple players, estimates the homography between video frames and the court, and identifies the players. The identification system combines three weak visual cues, and exploits both temporal and mutual exclusion constraints in a Conditional Random Field (CRF). In addition, we propose a novel Linear Programming (LP) Relaxation algorithm for predicting the best player identification in a video clip. In order to reduce the number of labeled training data required to learn the identification system, we make use of weakly supervised learning with the assistance of play-by-play texts. Experiments show promising results in tracking, homography estimation, and identification. Moreover, weakly supervised learning with play-by-play texts greatly reduces the number of labeled training examples required. The identification system can achieve similar accuracies by using merely 200 labels in weakly supervised learning, while a strongly supervised approach needs at least 20,000 labels.",
"",
"",
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.",
"We consider the ‘group motion segmentation’ problem and provide a solution for it. The group motion segmentation problem aims at analyzing motion trajectories of multiple objects in video and finding among them the ones involved in a ‘group motion pattern’. This problem is motivated by and serves as the basis for the ‘multi-object activity recognition’ problem, which is currently an active research topic in event analysis and activity recognition. Specifically, we learn a Spatio-Temporal Driving Force Model to characterize a group motion pattern and design an approach for segmenting the group motion. We illustrate the approach using videos of American football plays, where we identify the offensive players, who follow an offensive motion pattern, from motions of all players in the field. Experiments using GaTech Football Play Dataset validate the effectiveness of the segmentation algorithm.",
"A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing"
]
} |
1703.01437 | 2952964002 | In this paper, we propose a novel method to register football broadcast video frames on the static top view model of the playing surface. The proposed method is fully automatic in contrast to the current state of the art which requires manual initialization of point correspondences between the image and the static model. Automatic registration using existing approaches has been difficult due to the lack of sufficient point correspondences. We investigate an alternate approach exploiting the edge information from the line markings on the field. We formulate the registration problem as a nearest neighbour search over a synthetically generated dictionary of edge map and homography pairs. The synthetic dictionary generation allows us to exhaustively cover a wide variety of camera angles and positions and reduce this problem to a minimal per-frame edge map matching procedure. We show that the per-frame results can be improved in videos using an optimization framework for temporal camera stabilization. We demonstrate the efficacy of our approach by presenting extensive results on a dataset collected from matches of football World Cup 2014. | Our work is also related to the camera stabilization method of @cite_29 which demonstrates that the stabilized camera motion can be represented as a combination of distinct constant, linear, and parabolic segments. We extend their idea for smoothing the computed homographies over a video. We also benefit from the work of Muja and Lowe @cite_32 for computationally efficient nearest neighbour search. | {
"cite_N": [
"@cite_29",
"@cite_32"
],
"mid": [
"2113018061",
"1627400044"
],
"abstract": [
"We present a novel algorithm for automatically applying constrainable, L1-optimal camera paths to generate stabilized videos by removing undesired motions. Our goal is to compute camera paths that are composed of constant, linear and parabolic segments mimicking the camera motions employed by professional cinematographers. To this end, our algorithm is based on a linear programming framework to minimize the first, second, and third derivatives of the resulting camera path. Our method allows for video stabilization beyond the conventional filtering of camera paths that only suppresses high frequency jitter. We incorporate additional constraints on the path of the camera directly in our algorithm, allowing for stabilized and retargeted videos. Our approach accomplishes this without the need of user interaction or costly 3D reconstruction of the scene, and works as a post-process for videos from any camera or from an online source.",
"For many computer vision problems, the most time consuming component consists of nearest neighbor matching in high-dimensional spaces. There are no known exact algorithms for solving these high-dimensional problems that are faster than linear search. Approximate algorithms are known to provide large speedups with only minor loss in accuracy, but many such algorithms have been published with only minimal guidance on selecting an algorithm and its parameters for any given problem. In this paper, we describe a system that answers the question, “What is the fastest approximate nearest-neighbor algorithm for my data?” Our system will take any given dataset and desired degree of precision and use these to automatically determine the best algorithm and parameter values. We also describe a new algorithm that applies priority search on hierarchical k-means trees, which we have found to provide the best known performance on many datasets. After testing a range of alternatives, we have found that multiple randomized k-d trees provide the best performance for other datasets. We are releasing public domain code that implements these approaches. This library provides about one order of magnitude improvement in query time over the best previously available software and provides fully automated parameter selection."
]
} |
1703.01006 | 2604472983 | Tracking congestion throughout the road network is a critical component of intelligent transportation network management systems. Understanding how the traffic flows and short-term prediction of congestion occurrence due to rush hour or incidents can be beneficial to such systems to effectively manage and direct the traffic to the most appropriate detours. Many of the current traffic flow prediction systems are designed by utilizing a central processing component where the prediction is carried out through aggregation of the information gathered from all measuring stations. However, centralized systems are not scalable and fail to provide real-time feedback to the system, whereas in a decentralized scheme, each node is responsible for predicting its own short-term congestion based on the current local measurements in neighboring nodes. We propose a decentralized deep learning-based method where each node accurately predicts its own congestion state in real time based on the congestion state of the neighboring stations. Moreover, historical data from the deployment site is not required, which makes the proposed method more suitable for newly installed stations. In order to achieve higher performance, we introduce a regularized Euclidean loss function that favors high-congestion samples over low-congestion samples to avoid the impact of the unbalanced training dataset. A novel dataset for this purpose is designed based on the traffic data obtained from traffic control stations in northern California. Extensive experiments conducted on the designed benchmark reflect a successful congestion prediction. | Traffic congestion leads to extra gas emissions and low transportation efficiency, and it wastes a lot of individuals' time and a huge amount of fuel.
Diagnosing congestion and building a pattern for predicting traffic congestion have been regarded as one of the most important issues, as they can lead to informed decisions on the routes that motorists take and on expanding road networks and public transport. Research to predict congested traffic spots, especially in urban areas, is thus very important. Typically, congestion prediction can be used in Advanced Traffic Management Systems (ATMSs) and Advanced Traveller Information Systems in order to develop proactive traffic control strategies and real-time route guidance. @cite_16 | {
"cite_N": [
"@cite_16"
],
"mid": [
"2131767615"
],
"abstract": [
"Short-term traffic flow prediction has long been regarded as a critical concern for intelligent transportation systems. On the basis of many existing prediction models, each having good performance only in a particular period, an improved approach is to combine these single predictors together for prediction in a span of periods. In this paper, a neural network model is introduced that combines the prediction from single neural network predictors according to an adaptive and heuristic credit assignment algorithm based on the theory of conditional probability and Bayes' rule. Two single predictors, i.e., the back propagation and the radial basis function neural networks are designed and combined linearly into a Bayesian combined neural network model. The credit value for each predictor in the combined model is calculated according to the proposed credit assignment algorithm and largely depends on the accumulative prediction perfor- mance of these predictors during the previous prediction intervals. For experimental test, two data sets comprising traffic flow rates in 15-min time intervals have been collected from Singapore's Ayer Rajah Expressway. One data set is used to train the two single neural networks and the other to test and compare the performances between the combined and singular models. Three indices, i.e., the mean absolute percentage error, the variance of absolute percentage error, and the probability of percentage error, are employed to compare the forecasting performance. It is found that most of the time, the combined model outperforms the singular predictors. More importantly, for a given time period, it is the role of this newly proposed model to track the predictors' performance online, so as to always select and combine the best-performing predictors for prediction."
]
} |
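The regularized Euclidean loss mentioned in this row's abstract, which favors high-congestion samples over low-congestion ones, could be sketched as a weighted squared error. This is an illustration only: the linear weighting `1 + alpha * target` and the value of `alpha` are assumptions, not the paper's exact regularization term.

```python
import numpy as np

def weighted_euclidean_loss(pred, target, alpha=4.0):
    """Squared-error loss with per-sample weights that grow with the
    target congestion level, so errors on high-congestion samples are
    penalized more than on low-congestion ones. The weighting scheme
    1 + alpha*target is a hypothetical choice for illustration."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    w = 1.0 + alpha * target  # heavier weight for congested targets
    return float(np.mean(w * (pred - target) ** 2))

# The same absolute error of 0.1 costs more on a congested sample.
low = weighted_euclidean_loss([0.1], [0.0])   # target: free-flowing
high = weighted_euclidean_loss([0.9], [1.0])  # target: congested
```

Such a weighting is one simple way to counter an unbalanced training set in which low-congestion samples dominate.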
1703.01006 | 2604472983 | Tracking congestion throughout the road network is a critical component of intelligent transportation network management systems. Understanding how the traffic flows and short-term prediction of congestion occurrence due to rush hour or incidents can be beneficial to such systems to effectively manage and direct the traffic to the most appropriate detours. Many of the current traffic flow prediction systems are designed by utilizing a central processing component where the prediction is carried out through aggregation of the information gathered from all measuring stations. However, centralized systems are not scalable and fail to provide real-time feedback to the system, whereas in a decentralized scheme, each node is responsible for predicting its own short-term congestion based on the current local measurements in neighboring nodes. We propose a decentralized deep learning-based method where each node accurately predicts its own congestion state in real time based on the congestion state of the neighboring stations. Moreover, historical data from the deployment site is not required, which makes the proposed method more suitable for newly installed stations. In order to achieve higher performance, we introduce a regularized Euclidean loss function that favors high-congestion samples over low-congestion samples to avoid the impact of the unbalanced training dataset. A novel dataset for this purpose is designed based on the traffic data obtained from traffic control stations in northern California. Extensive experiments conducted on the designed benchmark reflect a successful congestion prediction. | The data regarding Traffic Flow and Traffic Congestion are two instances of Spatio-temporal data. They embody a location (Spatial Feature) and a time (Temporal feature). Besides, as we already mentioned, traffic flow and traffic congestion are based on human actions @cite_6 .
In @cite_15 , the authors propose a fully automatic deep model for human-action-based spatio-temporal data. This model first utilizes a Convolutional Neural Network (CNN) to learn spatio-temporal features. Then, in the second part of the model, the output of the first step is used to train a recurrent neural network (RNN) that classifies the entire sequence. @cite_15 does not mention traffic as a possible application of this work; however, it seems promising to build a model inspired by theirs to predict traffic flow and congestion. | {
"cite_N": [
"@cite_15",
"@cite_6"
],
"mid": [
"2062017159",
"28988658"
],
"abstract": [
"Understanding how congestion at one location can cause ripples throughout large-scale transportation network is vital for transportation researchers and practitioners to pinpoint traffic bottlenecks for congestion mitigation. Traditional studies rely on either mathematical equations or simulation techniques to model traffic congestion dynamics. However, most of the approaches have limitations, largely due to unrealistic assumptions and cumbersome parameter calibration process. With the development of Intelligent Transportation Systems (ITS) and Internet of Things (IoT), transportation data become more and more ubiquitous. This triggers a series of data-driven research to investigate transportation phenomena. Among them, deep learning theory is considered one of the most promising techniques to tackle tremendous high-dimensional data. This study attempts to extend deep learning theory into large-scale transportation network analysis. A deep Restricted Boltzmann Machine and Recurrent Neural Network architecture is utilized to model and predict traffic congestion evolution based on Global Positioning System (GPS) data from taxi. A numerical study in Ningbo, China is conducted to validate the effectiveness and efficiency of the proposed method. Results show that the prediction accuracy can achieve as high as 88 within less than 6 minutes when the model is implemented in a Graphic Processing Unit (GPU)-based parallel computing environment. The predicted congestion evolution patterns can be visualized temporally and spatially through a map-based platform to identify the vulnerable links for proactive congestion mitigation.",
"We propose in this paper a fully automated deep model, which learns to classify human actions without using any prior knowledge. The first step of our scheme, based on the extension of Convolutional Neural Networks to 3D, automatically learns spatio-temporal features. A Recurrent Neural Network is then trained to classify each sequence considering the temporal evolution of the learned features for each timestep. Experimental results on the KTH dataset show that the proposed approach outperforms existing deep models, and gives comparable results with the best related works."
]
} |
1703.01006 | 2604472983 | Tracking congestion throughout the road network is a critical component of intelligent transportation network management systems. Understanding how the traffic flows and short-term prediction of congestion occurrence due to rush hour or incidents can be beneficial to such systems to effectively manage and direct the traffic to the most appropriate detours. Many of the current traffic flow prediction systems are designed by utilizing a central processing component where the prediction is carried out through aggregation of the information gathered from all measuring stations. However, centralized systems are not scalable and fail to provide real-time feedback to the system, whereas in a decentralized scheme, each node is responsible for predicting its own short-term congestion based on the current local measurements in neighboring nodes. We propose a decentralized deep learning-based method where each node accurately predicts its own congestion state in real time based on the congestion state of the neighboring stations. Moreover, historical data from the deployment site is not required, which makes the proposed method more suitable for newly installed stations. In order to achieve higher performance, we introduce a regularized Euclidean loss function that favors high-congestion samples over low-congestion samples to avoid the impact of the unbalanced training dataset. A novel dataset for this purpose is designed based on the traffic data obtained from traffic control stations in northern California. Extensive experiments conducted on the designed benchmark reflect a successful congestion prediction. | In 2015, Deep Learning theory was put into practice for large-scale congestion prediction @cite_15 . To this end, the authors utilized a Restricted Boltzmann Machine @cite_7 and a Recurrent Neural Network @cite_0 to model and predict traffic congestion. To do this, they converted all the speed data of taxis in Ningbo, China into binary values (i.e.
the speed more than a threshold is 1, otherwise it is 0), and then call these values . Therefore, the network congestion condition data form a matrix in which each element indicates the congestion condition at a specific point and a specific time slot: @math represents the congestion condition at the th point of the traffic network at the th time slot (the network has points). Given this matrix as input, the model presented in @cite_15 outputs the predicted traffic condition for each point at . Although @cite_15 achieved good performance in predicting traffic conditions, it has some drawbacks: | {
"cite_N": [
"@cite_0",
"@cite_15",
"@cite_7"
],
"mid": [
"2104518905",
"2062017159",
"2100495367"
],
"abstract": [
"While neural networks are very successfully applied to the processing of fixed-length vectors and variable-length sequences, the current state of the art does not allow the efficient processing of structured objects of arbitrary shape (like logical terms, trees or graphs). We present a connectionist architecture together with a novel supervised learning scheme which is capable of solving inductive inference tasks on complex symbolic structures of arbitrary size. The most general structures that can be handled are labeled directed acyclic graphs. The major difference of our approach compared to others is that the structure-representations are exclusively tuned for the intended inference task. Our method is applied to tasks consisting in the classification of logical terms. These range from the detection of a certain subterm to the satisfaction of a specific unification pattern. Compared to previously known approaches we obtained superior results in that domain.",
"Understanding how congestion at one location can cause ripples throughout large-scale transportation network is vital for transportation researchers and practitioners to pinpoint traffic bottlenecks for congestion mitigation. Traditional studies rely on either mathematical equations or simulation techniques to model traffic congestion dynamics. However, most of the approaches have limitations, largely due to unrealistic assumptions and cumbersome parameter calibration process. With the development of Intelligent Transportation Systems (ITS) and Internet of Things (IoT), transportation data become more and more ubiquitous. This triggers a series of data-driven research to investigate transportation phenomena. Among them, deep learning theory is considered one of the most promising techniques to tackle tremendous high-dimensional data. This study attempts to extend deep learning theory into large-scale transportation network analysis. A deep Restricted Boltzmann Machine and Recurrent Neural Network architecture is utilized to model and predict traffic congestion evolution based on Global Positioning System (GPS) data from taxi. A numerical study in Ningbo, China is conducted to validate the effectiveness and efficiency of the proposed method. Results show that the prediction accuracy can achieve as high as 88 within less than 6 minutes when the model is implemented in a Graphic Processing Unit (GPU)-based parallel computing environment. The predicted congestion evolution patterns can be visualized temporally and spatially through a map-based platform to identify the vulnerable links for proactive congestion mitigation.",
"High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data."
]
} |
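The binary encoding described in this row, where a speed above a threshold maps to 1 and anything else to 0, can be sketched directly. The threshold value and the toy speed readings below are invented for illustration; only the encoding rule comes from the text.

```python
import numpy as np

def binary_congestion_matrix(speeds, threshold=20.0):
    """Map a (points x time_slots) matrix of average speeds (km/h) to
    the binary congestion-condition matrix described above:
    speed above the threshold -> 1, otherwise -> 0."""
    return (np.asarray(speeds, dtype=float) > threshold).astype(int)

# Toy example: 3 network points observed over 4 time slots.
speeds = [[55.0, 18.0, 12.0, 40.0],
          [30.0, 25.0, 22.0, 35.0],
          [10.0,  8.0, 15.0, 28.0]]
C = binary_congestion_matrix(speeds)
```

Each row of `C` is one network point and each column one time slot, matching the matrix layout the text describes.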
1703.01006 | 2604472983 | Tracking congestion throughout the road network is a critical component of intelligent transportation network management systems. Understanding how the traffic flows and short-term prediction of congestion occurrence due to rush hour or incidents can be beneficial to such systems to effectively manage and direct the traffic to the most appropriate detours. Many of the current traffic flow prediction systems are designed by utilizing a central processing component where the prediction is carried out through aggregation of the information gathered from all measuring stations. However, centralized systems are not scalable and fail to provide real-time feedback to the system, whereas in a decentralized scheme, each node is responsible for predicting its own short-term congestion based on the current local measurements in neighboring nodes. We propose a decentralized deep learning-based method where each node accurately predicts its own congestion state in real time based on the congestion state of the neighboring stations. Moreover, historical data from the deployment site is not required, which makes the proposed method more suitable for newly installed stations. In order to achieve higher performance, we introduce a regularized Euclidean loss function that favors high-congestion samples over low-congestion samples to avoid the impact of the unbalanced training dataset. A novel dataset for this purpose is designed based on the traffic data obtained from traffic control stations in northern California. Extensive experiments conducted on the designed benchmark reflect a successful congestion prediction. | The traffic condition is limited to one of two binary states (1 or 0). However, in real applications, we usually need a range of values (or colors, in the case of a map) to show the amount of traffic flow. The traffic condition is set based on a specific threshold (for example, 20 km/h).
If the average speed is less than the threshold, the traffic condition will be set as congested; otherwise it will be set as uncongested. Nevertheless, having a single threshold for the whole network is inappropriate. Rather, the traffic condition should be set based on the ratio of the average speed of vehicles to the maximum possible speed (the speed limit). In the model presented in @cite_15 , the authors did not consider any ordering of the network points in the input (the rows of the matrix). However, the spatial influence of adjacent network points should be taken into consideration. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2062017159"
],
"abstract": [
"Understanding how congestion at one location can cause ripples throughout large-scale transportation network is vital for transportation researchers and practitioners to pinpoint traffic bottlenecks for congestion mitigation. Traditional studies rely on either mathematical equations or simulation techniques to model traffic congestion dynamics. However, most of the approaches have limitations, largely due to unrealistic assumptions and cumbersome parameter calibration process. With the development of Intelligent Transportation Systems (ITS) and Internet of Things (IoT), transportation data become more and more ubiquitous. This triggers a series of data-driven research to investigate transportation phenomena. Among them, deep learning theory is considered one of the most promising techniques to tackle tremendous high-dimensional data. This study attempts to extend deep learning theory into large-scale transportation network analysis. A deep Restricted Boltzmann Machine and Recurrent Neural Network architecture is utilized to model and predict traffic congestion evolution based on Global Positioning System (GPS) data from taxi. A numerical study in Ningbo, China is conducted to validate the effectiveness and efficiency of the proposed method. Results show that the prediction accuracy can achieve as high as 88 within less than 6 minutes when the model is implemented in a Graphic Processing Unit (GPU)-based parallel computing environment. The predicted congestion evolution patterns can be visualized temporally and spatially through a map-based platform to identify the vulnerable links for proactive congestion mitigation."
]
} |
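The ratio-based alternative suggested in this row, measuring congestion against each segment's own speed limit rather than a single network-wide threshold, could be sketched as follows. The function shape and the example speed limits are assumptions for illustration.

```python
import numpy as np

def congestion_level(avg_speed, speed_limit):
    """Continuous congestion level in [0, 1], based on the ratio of
    average speed to the segment's own speed limit:
    0 = moving at (or above) the limit, 1 = fully stopped."""
    ratio = np.clip(np.asarray(avg_speed, dtype=float) /
                    np.asarray(speed_limit, dtype=float), 0.0, 1.0)
    return 1.0 - ratio

# The same 40 km/h average speed maps to different congestion levels
# on a 50 km/h street versus a 100 km/h highway.
levels = congestion_level([40.0, 40.0], [50.0, 100.0])
```

This yields the range of values (suitable for color-coding on a map) that the text argues a single global threshold cannot provide.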
1703.01106 | 2951122226 | Many applications of machine learning, for example in health care, would benefit from methods that can guarantee privacy of data subjects. Differential privacy (DP) has become established as a standard for protecting learning results, but the proposed algorithms require a single trusted party to have access to the entire data, which is a clear weakness. We consider DP Bayesian learning in a distributed setting, where each party only holds a single sample or a few samples of the data. We propose a novel method for DP learning in this distributed setting, based on a secure multi-party sum function for aggregating summaries from the data holders. Each data holder adds their share of Gaussian noise to make the total computation differentially private using the Gaussian mechanism. We prove that the system can be made secure against a desired number of colluding data owners and robust against faulting data owners. The method builds on an asymptotically optimal and practically efficient DP Bayesian inference with rapidly diminishing extra cost. | In machine learning, @cite_2 presented the first method for aggregating classifiers in a DP manner, but their approach is sensitive to the number of parties and the sizes of the data sets held by each party, and cannot be applied in a completely distributed setting. @cite_14 improved upon this with an algorithm for distributed DP stochastic gradient descent that works for any number of parties. The privacy of the algorithm is based on perturbation of gradients, which cannot be directly applied to the efficient SSP mechanism. The idea of aggregating classifiers was further refined in @cite_3 through a method that uses an auxiliary public data set to improve the performance. | {
"cite_N": [
"@cite_14",
"@cite_3",
"@cite_2"
],
"mid": [
"2119874464",
"2263253503",
"2953120443"
],
"abstract": [
"Privacy-preserving machine learning algorithms are crucial for the increasingly common setting in which personal data, such as medical or financial records, are analyzed. We provide general techniques to produce privacy-preserving approximations of classifiers learned via (regularized) empirical risk minimization (ERM). These algorithms are private under the e-differential privacy definition due to (2006). First we apply the output perturbation ideas of (2006), to ERM classification. Then we propose a new method, objective perturbation, for privacy-preserving machine learning algorithm design. This method entails perturbing the objective function before optimizing over classifiers. If the loss and regularizer satisfy certain convexity and differentiability criteria, we prove theoretical results showing that our algorithms preserve privacy, and provide generalization bounds for linear and nonlinear kernels. We further present a privacy-preserving technique for tuning the parameters in general machine learning algorithms, thereby providing end-to-end privacy guarantees for the training process. We apply these results to produce privacy-preserving analogues of regularized logistic regression and support vector machines. We obtain encouraging results from evaluating their performance on real demographic and benchmark data sets. Our results show that both theoretically and empirically, objective perturbation is superior to the previous state-of-the-art, output perturbation, in managing the inherent tradeoff between privacy and learning performance.",
"Learning a classifier from private data collected by multiple parties is an important problem that has many potential applications. How can we build an accurate and differentially private global classifier by combining locally-trained classifiers from different parties, without access to any party's private data? We propose to transfer the knowledge' of the local classifier ensemble by first creating labeled data from auxiliary unlabeled data, and then train a global @math -differentially private classifier. We show that majority voting is too sensitive and therefore propose a new risk weighted by class probabilities estimated from the ensemble. Relative to a non-private solution, our private solution has a generalization error bounded by @math where @math is the number of parties. This allows strong privacy without performance loss when @math is large, such as in crowdsensing applications. We demonstrate the performance of our method with realistic tasks of activity recognition, network intrusion detection, and malicious URL detection.",
"We study how to communicate findings of Bayesian inference to third parties, while preserving the strong guarantee of differential privacy. Our main contributions are four different algorithms for private Bayesian inference on proba-bilistic graphical models. These include two mechanisms for adding noise to the Bayesian updates, either directly to the posterior parameters, or to their Fourier transform so as to preserve update consistency. We also utilise a recently introduced posterior sampling mechanism, for which we prove bounds for the specific but general case of discrete Bayesian networks; and we introduce a maximum-a-posteriori private mechanism. Our analysis includes utility and privacy bounds, with a novel focus on the influence of graph structure on privacy. Worked examples and experiments with Bayesian na \"i ve Bayes and Bayesian linear regression illustrate the application of our mechanisms."
]
} |
1703.01106 | 2951122226 | Many applications of machine learning, for example in health care, would benefit from methods that can guarantee privacy of data subjects. Differential privacy (DP) has become established as a standard for protecting learning results, but the proposed algorithms require a single trusted party to have access to the entire data, which is a clear weakness. We consider DP Bayesian learning in a distributed setting, where each party only holds a single sample or a few samples of the data. We propose a novel method for DP learning in this distributed setting, based on a secure multi-party sum function for aggregating summaries from the data holders. Each data holder adds their share of Gaussian noise to make the total computation differentially private using the Gaussian mechanism. We prove that the system can be made secure against a desired number of colluding data owners and robust against faulting data owners. The method builds on an asymptotically optimal and practically efficient DP Bayesian inference with rapidly diminishing extra cost. | The first practical method for implementing DP queries in a distributed manner was the distributed Laplace mechanism presented in @cite_18 . The distributed Laplace mechanism could be used instead of the Gaussian mechanism if pure @math -DP is required, but the method, like those in @cite_2 @cite_14 , needs homomorphic encryption which can be computationally more demanding for high-dimensional data. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_2"
],
"mid": [
"2104803737",
"2119874464",
"2953120443"
],
"abstract": [
"We propose the first differentially private aggregation algorithm for distributed time-series data that offers good practical utility without any trusted server. This addresses two important challenges in participatory data-mining applications where (i) individual users collect temporally correlated time-series data (such as location traces, web history, personal health data), and (ii) an untrusted third-party aggregator wishes to run aggregate queries on the data. To ensure differential privacy for time-series data despite the presence of temporal correlation, we propose the Fourier Perturbation Algorithm (FPAk). Standard differential privacy techniques perform poorly for time-series data. To answer n queries, such techniques can result in a noise of Θ(n) to each query answer, making the answers practically useless if n is large. Our FPAk algorithm perturbs the Discrete Fourier Transform of the query answers. For answering n queries, FPAk improves the expected error from Θ(n) to roughly Θ(k) where k is the number of Fourier coefficients that can (approximately) reconstruct all the n query answers. Our experiments show that k To deal with the absence of a trusted central server, we propose the Distributed Laplace Perturbation Algorithm (DLPA) to add noise in a distributed way in order to guarantee differential privacy. To the best of our knowledge, DLPA is the first distributed differentially private algorithm that can scale with a large number of users: DLPA outperforms the only other distributed solution for differential privacy proposed so far, by reducing the computational load per user from O(U) to O(1) where U is the number of users.",
"Privacy-preserving machine learning algorithms are crucial for the increasingly common setting in which personal data, such as medical or financial records, are analyzed. We provide general techniques to produce privacy-preserving approximations of classifiers learned via (regularized) empirical risk minimization (ERM). These algorithms are private under the e-differential privacy definition due to (2006). First we apply the output perturbation ideas of (2006), to ERM classification. Then we propose a new method, objective perturbation, for privacy-preserving machine learning algorithm design. This method entails perturbing the objective function before optimizing over classifiers. If the loss and regularizer satisfy certain convexity and differentiability criteria, we prove theoretical results showing that our algorithms preserve privacy, and provide generalization bounds for linear and nonlinear kernels. We further present a privacy-preserving technique for tuning the parameters in general machine learning algorithms, thereby providing end-to-end privacy guarantees for the training process. We apply these results to produce privacy-preserving analogues of regularized logistic regression and support vector machines. We obtain encouraging results from evaluating their performance on real demographic and benchmark data sets. Our results show that both theoretically and empirically, objective perturbation is superior to the previous state-of-the-art, output perturbation, in managing the inherent tradeoff between privacy and learning performance.",
"We study how to communicate findings of Bayesian inference to third parties, while preserving the strong guarantee of differential privacy. Our main contributions are four different algorithms for private Bayesian inference on proba-bilistic graphical models. These include two mechanisms for adding noise to the Bayesian updates, either directly to the posterior parameters, or to their Fourier transform so as to preserve update consistency. We also utilise a recently introduced posterior sampling mechanism, for which we prove bounds for the specific but general case of discrete Bayesian networks; and we introduce a maximum-a-posteriori private mechanism. Our analysis includes utility and privacy bounds, with a novel focus on the influence of graph structure on privacy. Worked examples and experiments with Bayesian na \"i ve Bayes and Bayesian linear regression illustrate the application of our mechanisms."
]
} |
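The noise-sharing idea behind the abstract in these rows, where each data holder adds a share of Gaussian noise so that the aggregated sum is protected by the Gaussian mechanism, can be simulated as below. This sketch covers only the noise arithmetic: each of N holders draws N(0, sigma^2/N), so the summed noise is N(0, sigma^2). It deliberately omits the secure multi-party sum protocol itself, and the parameter values are illustrative.

```python
import numpy as np

def distributed_dp_sum(values, sigma, rng):
    """Simulate N data holders who each add an independent Gaussian
    noise share N(0, sigma^2 / N) to their own value before summation.
    The total noise on the returned sum is then N(0, sigma^2), matching
    what a trusted aggregator would add under the Gaussian mechanism."""
    values = np.asarray(values, dtype=float)
    n = len(values)
    noisy = values + rng.normal(0.0, sigma / np.sqrt(n), size=n)
    return float(noisy.sum())

rng = np.random.default_rng(0)
vals = np.ones(100)                             # 100 holders, value 1 each
s = distributed_dp_sum(vals, sigma=1.0, rng=rng)
```

Because no single party's contribution carries the full noise, resisting collusion or tolerating faulting holders requires scaling the per-party noise shares, which is what the cited protocols address.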
1703.01106 | 2951122226 | Many applications of machine learning, for example in health care, would benefit from methods that can guarantee privacy of data subjects. Differential privacy (DP) has become established as a standard for protecting learning results, but the proposed algorithms require a single trusted party to have access to the entire data, which is a clear weakness. We consider DP Bayesian learning in a distributed setting, where each party only holds a single sample or a few samples of the data. We propose a novel method for DP learning in this distributed setting, based on a secure multi-party sum function for aggregating summaries from the data holders. Each data holder adds their share of Gaussian noise to make the total computation differentially private using the Gaussian mechanism. We prove that the system can be made secure against a desired number of colluding data owners and robust against faulting data owners. The method builds on an asymptotically optimal and practically efficient DP Bayesian inference with rapidly diminishing extra cost. | There is a wealth of literature on secure distributed computation of DP sum queries, as reviewed in @cite_7 . The methods of @cite_1 @cite_15 @cite_8 @cite_7 also include different forms of noise scaling to provide collusion resistance and/or fault tolerance, where the latter requires a separate recovery round after data holder failures, which is not needed by DCA. @cite_16 discusses low-level details of an efficient implementation of the distributed Laplace mechanism. | {
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_1",
"@cite_15",
"@cite_16"
],
"mid": [
"2525846285",
"2232997092",
"2146673169",
"1952161176",
"1970141429"
],
"abstract": [
"This paper considers the problem of secure data aggregation (mainly summation) in a distributed setting, while ensuring differential privacy of the result. We study secure multiparty addition protocols using well known security schemes: Shamir’s secret sharing, perturbation-based, and various encryptions. We supplement our study with our new enhanced encryption scheme EFT, which is efficient and fault tolerant. Differential privacy of the final result is achieved by either distributed Laplace or Geometric mechanism (respectively DLPA or DGPA), while approximated differential privacy is achieved by diluted mechanisms. Distributed random noise is generated collectively by all participants, which draw random variables from one of several distributions: Gamma, Gauss, Geometric, or their diluted versions. We introduce a new distributed privacy mechanism with noise drawn from the Laplace distribution, which achieves smaller redundant noise with efficiency. We compare complexity and security characteristics of the protocols with different differential privacy mechanisms and security schemes. More importantly, we implemented all protocols and present an experimental comparison on their performance and scalability in a real distributed environment. Based on the evaluations, we identify our security scheme and Laplace DLPA as the most efficient for secure distributed data aggregation with differential privacy.",
"We consider applications where an untrusted aggregator would like to collect privacy sensitive data from users, and compute aggregate statistics periodically. For example, imagine a smart grid operator who wishes to aggregate the total power consumption of a neighborhood every ten minutes; or a market researcher who wishes to track the fraction of population watching ESPN on an hourly basis.",
"A private stream aggregation (PSA) system contributes a user's data to a data aggregator without compromising the user's privacy. The system can begin by determining a private key for a local user in a set of users, wherein the sum of the private keys associated with the set of users and the data aggregator is equal to zero. The system also selects a set of data values associated with the local user. Then, the system encrypts individual data values in the set based in part on the private key to produce a set of encrypted data values, thereby allowing the data aggregator to decrypt an aggregate value across the set of users without decrypting individual data values associated with the set of users, and without interacting with the set of users while decrypting the aggregate value. The system also sends the set of encrypted data values to the data aggregator.",
"This paper presents a new privacy-preserving smart metering system. Our scheme is private under the differential privacy model and therefore provides strong and provable guarantees.With our scheme, an (electricity) supplier can periodically collect data from smart meters and derive aggregated statistics without learning anything about the activities of individual households. For example, a supplier cannot tell from a user's trace whether or when he watched TV or turned on heating. Our scheme is simple, efficient and practical. Processing cost is very limited: smart meters only have to add noise to their data and encrypt the results with an efficient stream cipher.",
"Computing aggregate statistics about user data is of vital importance for a variety of services and systems, but this practice has been shown to seriously undermine the privacy of users. Differential privacy has proved to be an effective tool to sanitize queries over a database, and various cryptographic protocols have been recently proposed to enforce differential privacy in a distributed setting, e.g., statistical queries on sensitive data stored on the user's side. The widespread deployment of differential privacy techniques in real-life settings is, however, undermined by several limitations that existing constructions suffer from: they support only a limited class of queries, they pose a trade-off between privacy and utility of the query result, they are affected by the answer pollution problem, or they are inefficient. This paper presents PrivaDA, a novel design architecture for distributed differential privacy that leverages recent advances in secure multiparty computations on fixed and floating point arithmetics to overcome the previously mentioned limitations. In particular, PrivaDA supports a variety of perturbation mechanisms (e.g., the Laplace, discrete Laplace, and exponential mechanisms) and it constitutes the first generic technique to generate noise in a fully distributed manner while maintaining the optimal utility. Furthermore, PrivaDA does not suffer from the answer pollution problem. We demonstrate the efficiency of PrivaDA with a performance evaluation, and its expressiveness and flexibility by illustrating several application scenarios such as privacy-preserving web analytics."
]
} |
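The distributed Laplace mechanism (DLPA) in the first abstract above rests on the infinite divisibility of the Gamma distribution: if each of n parties adds the difference of two independent Gamma(1/n, b) draws, the shares sum to Gamma(1, b) − Gamma(1, b) = Laplace(0, b) noise on the aggregate, without any single party knowing the total noise. A rough self-contained sketch (party count, scale, and sample size are illustrative, not taken from the protocol):

```python
import random
import statistics

def party_share(value, n_parties, scale, rng):
    """One data holder's contribution: its value plus its share of the
    collective Laplace noise, built from two Gamma(1/n, scale) draws."""
    g1 = rng.gammavariate(1.0 / n_parties, scale)
    g2 = rng.gammavariate(1.0 / n_parties, scale)
    return value + g1 - g2

def private_sum(values, scale, rng):
    """Aggregate the shares; Gamma shapes add, so the total noise is
    exactly Laplace(0, scale)."""
    n = len(values)
    return sum(party_share(v, n, scale, rng) for v in values)

rng = random.Random(1)
values = [1.0] * 10          # ten data holders, each holding the value 1.0
b = 2.0                      # scale of the aggregate Laplace noise

# Empirical check: the aggregate noise has Var[Laplace(0, b)] = 2 b^2 = 8.
noises = [private_sum(values, b, rng) - sum(values) for _ in range(20000)]
print(statistics.pvariance(noises))
```

The real protocols additionally encrypt or secret-share the contributions so the aggregator sees only the sum; that layer is omitted here.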
1703.01106 | 2951122226 | Many applications of machine learning, for example in health care, would benefit from methods that can guarantee privacy of data subjects. Differential privacy (DP) has become established as a standard for protecting learning results, but the proposed algorithms require a single trusted party to have access to the entire data, which is a clear weakness. We consider DP Bayesian learning in a distributed setting, where each party only holds a single sample or a few samples of the data. We propose a novel method for DP learning in this distributed setting, based on a secure multi-party sum function for aggregating summaries from the data holders. Each data holder adds their share of Gaussian noise to make the total computation differentially private using the Gaussian mechanism. We prove that the system can be made secure against a desired number of colluding data owners and robust against faulting data owners. The method builds on an asymptotically optimal and practically efficient DP Bayesian inference with rapidly diminishing extra cost. | Finally, @cite_17 presents several proofs related to the SMC setting and introduces a protocol for generating approximately Gaussian noise in a distributed manner. Compared to their protocol, our method of noise addition is considerably simpler and faster, and produces exactly instead of approximately Gaussian noise with negligible increase in noise level. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2336734734"
],
"abstract": [
"How to achieve differential privacy in the distributed setting, where the dataset is distributed among the distrustful parties, is an important problem. We consider under what conditions a protocol can inherit the differential privacy property of a function it computes. The heart of the problem is the secure multiparty computation of randomized functions. A notion is introduced, which captures the key security problems when computing a randomized function from a deterministic one in the distributed setting. By this observation, a sufficient and necessary condition about computing a randomized function from a deterministic one is given. The above result can not only be used to determine whether a protocol computing a differentially private function is secure, but also be used to construct a secure one. Then we prove that the differential privacy property of a function can be inherited by the protocol computing it if the protocol privately computes it. A composition theorem of differentially private protocols is also presented. We also construct some protocols to generate random variates in the distributed setting, such as the uniform random variates and the inversion method. By using these fundamental protocols, we construct protocols of the Gaussian mechanism, the Laplace mechanism and the Exponential mechanism. Importantly, all these protocols satisfy obliviousness and so can be proved to be secure in a simulation based manner. We also provide a complexity bound of computing randomized functions in the distributed setting. Finally, to show that our results are fundamental and powerful to multiparty differential privacy, we construct a differentially private empirical risk minimization protocol."
]
} |
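The "exactly Gaussian" claim in the row above relies on a basic fact: a sum of independent Gaussians is Gaussian with summed variances, so if each of n data holders adds N(0, σ²/n) noise to its own summary, the aggregated sum carries exactly N(0, σ²) noise. A toy sketch of this (the numbers are illustrative, and the secure-sum layer of the real protocol is omitted):

```python
import math
import random
import statistics

def gaussian_share_sum(values, sigma, rng):
    """Each holder perturbs its value with N(0, sigma^2 / n) noise; the
    sum of the shares then carries exactly N(0, sigma^2) noise."""
    n = len(values)
    share_sd = sigma / math.sqrt(n)
    return sum(v + rng.gauss(0.0, share_sd) for v in values)

rng = random.Random(42)
values = [0.5] * 25          # 25 data holders
sigma = 3.0

# Empirical check: aggregate noise variance should be close to sigma^2 = 9.
noises = [gaussian_share_sum(values, sigma, rng) - sum(values)
          for _ in range(20000)]
print(statistics.pvariance(noises))
```

Collusion resistance in the paper's setting is obtained by scaling the per-party variance up, so that the noise of the honest parties alone still sums to at least σ²; the sketch above shows only the honest-but-curious baseline.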
1703.00956 | 2950040888 | Representation learning and option discovery are two of the biggest challenges in reinforcement learning (RL). Proto-RL is a well known approach for representation learning in MDPs. The representations learned with this framework are called proto-value functions (PVFs). In this paper we address the option discovery problem by showing how PVFs implicitly define options. We do it by introducing eigenpurposes, intrinsic reward functions derived from the learned representations. The options discovered from eigenpurposes traverse the principal directions of the state space. They are useful for multiple tasks because they are independent of the agents' intentions. Moreover, by capturing the diffusion process of a random walk, different options act at different time scales, making them helpful for exploration strategies. We demonstrate features of eigenpurposes in traditional tabular domains as well as in Atari 2600 games. | Most algorithms for option discovery use trajectories leading to informative rewards as a starting point, decomposing and refining them into options. (We define an informative reward to be the signal that informs the agent it has reached a goal. For example, when trying to escape from a maze, we consider @math to be an informative reward if the agent observes rewards of value @math in every time step it is inside the maze. A different example is a positive reward observed by an agent that typically observes rewards of value @math .) There are many approaches based on this principle, such as methods that use the observed rewards to generate intrinsic rewards leading to new value functions (e.g., McGovern01; Menache02; Konidaris09), methods that use the observed rewards to climb a gradient (e.g., Mankowitz16; Vezhnevets16; Bacon17), or methods that use them to do probabilistic inference @cite_8 . However, such approaches are not applicable in large state spaces with sparse rewards: if informative rewards are unlikely to be found by an agent using only primitive actions, requiring long or specific sequences of actions, options are equally unlikely to be discovered. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2498991332"
],
"abstract": [
"Tasks that require many sequential decisions or complex solutions are hard to solve using conventional reinforcement learning algorithms. Based on the semi Markov decision process setting (SMDP) and the option framework, we propose a model which aims to alleviate these concerns. Instead of learning a single monolithic policy, the agent learns a set of simpler sub-policies as well as the initiation and termination probabilities for each of those sub-policies. While existing option learning algorithms frequently require manual specification of components such as the sub-policies, we present an algorithm which infers all relevant components of the option framework from data. Furthermore, the proposed approach is based on parametric option representations and works well in combination with current policy search methods, which are particularly well suited for continuous real-world tasks. We present results on SMDPs with discrete as well as continuous state-action spaces. The results show that the presented algorithm can combine simple sub-policies to solve complex tasks and can improve learning performance on simpler tasks."
]
} |
1703.00956 | 2950040888 | Representation learning and option discovery are two of the biggest challenges in reinforcement learning (RL). Proto-RL is a well known approach for representation learning in MDPs. The representations learned with this framework are called proto-value functions (PVFs). In this paper we address the option discovery problem by showing how PVFs implicitly define options. We do it by introducing eigenpurposes, intrinsic reward functions derived from the learned representations. The options discovered from eigenpurposes traverse the principal directions of the state space. They are useful for multiple tasks because they are independent of the agents' intentions. Moreover, by capturing the diffusion process of a random walk, different options act at different time scales, making them helpful for exploration strategies. We demonstrate features of eigenpurposes in traditional tabular domains as well as in Atari 2600 games. | Our algorithm can be seen as an approach in which options are constructed before the agent observes any informative reward. These options are composed to generate the desired policy. Options discovered this way tend to be independent of an agent's intention, and are potentially useful in many different tasks @cite_12 . Such options can also be seen as being useful for exploration by allowing agents to commit to a behavior for an extended period of time @cite_0 . Among the approaches to discover options without using extrinsic rewards are the use of global or local graph centrality measures @cite_10 @cite_21 @cite_4 and clustering of states @cite_17 @cite_13 @cite_25 . Interestingly, some of these approaches also use the graph Laplacian in their algorithms, but to identify bottleneck states. | {
"cite_N": [
"@cite_13",
"@cite_4",
"@cite_21",
"@cite_0",
"@cite_10",
"@cite_25",
"@cite_12",
"@cite_17"
],
"mid": [
"2550612212",
"2108535023",
"2090170171",
"2400719195",
"",
"",
"",
"2160808139"
],
"abstract": [
"The bottleneck concept in reinforcement learning has played a prominent role in automatically finding temporal abstractions from experience. Lacking significant theory, it has however been regarded by some as being merely a trick. This thesis attempts to gain better intuition about this approach using spectral graph theory. A connection to the theory of Nearly Decomposable Markov Chains is also drawn and shows great promise. An options discovery algorithm is proposed and is the first of its kind to be applicable in continuous state spaces. As opposed to other similar approaches, this one has running time O(mn2) rather than O(n3) making it suitable to much larger domains than the typical grid worlds.",
"We introduce a skill discovery method for reinforcement learning in continuous domains that constructs chains of skills leading to an end-of-task reward. We demonstrate experimentally that it creates appropriate skills and achieves performance benefits in a challenging continuous domain.",
"We present a new subgoal-based method for automatically creating useful skills in reinforcement learning. Our method identifies subgoals by partitioning local state transition graphs---those that are constructed using only the most recent experiences of the agent. The local scope of our subgoal discovery method allows it to successfully identify the type of subgoals we seek---states that lie between two densely-connected regions of the state space while producing an algorithm with low computational cost.",
"Artificial intelligence is commonly defined as the ability to achieve goals in the world. In the reinforcement learning framework, goals are encoded as reward functions that guide agent behaviour, and the sum of observed rewards provide a notion of progress. However, some domains have no such reward signal, or have a reward signal so sparse as to appear absent. Without reward feedback, agent behaviour is typically random, often dithering aimlessly and lacking intentionality. In this paper we present an algorithm capable of learning purposeful behaviour in the absence of rewards. The algorithm proceeds by constructing temporally extended actions (options), through the identification of purposes that are \"just out of reach\" of the agent's current behaviour. These purposes establish intrinsic goals for the agent to learn, ultimately resulting in a suite of behaviours that encourage the agent to visit different parts of the state space. Moreover, the approach is particularly suited for settings where rewards are very sparse, and such behaviours can help in the exploration of the environment until reward is observed.",
"",
"",
"",
"We consider a graph theoretic approach for automatic construction of options in a dynamic environment. A map of the environment is generated on-line by the learning agent, representing the topological structure of the state transitions. A clustering algorithm is then used to partition the state space to different regions. Policies for reaching the different parts of the space are separately learned and added to the model in a form of options (macro-actions). The options are used for accelerating the Q-Learning algorithm. We extend the basic algorithm and consider building a map that includes preliminary indication of the location of \"interesting\" regions of the state space, where the value gradient is significant and additional exploration might be beneficial. Experiments indicate significant speedups, especially in the initial learning phase."
]
} |
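The graph-Laplacian connection in the row above can be made concrete on a toy MDP. In the eigenpurpose view, proto-value functions are eigenvectors of the state-graph Laplacian; on a 6-state chain, the second-smallest eigenvector (the Fiedler vector) varies smoothly from one end to the other, so the intrinsic reward it defines drives an option across the whole state space rather than toward a bottleneck. A pure-Python sketch (the chain MDP and the power-iteration routine are illustrative, not the paper's implementation):

```python
import math

def laplacian_path(n):
    """Combinatorial graph Laplacian L = D - A of an n-state chain MDP."""
    L = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        L[i][i] += 1.0
        L[i + 1][i + 1] += 1.0
        L[i][i + 1] -= 1.0
        L[i + 1][i] -= 1.0
    return L

def fiedler_vector(L, iters=2000):
    """Second-smallest eigenvector of L: power iteration on c*I - L,
    repeatedly projecting out the constant (smallest) eigenvector."""
    n = len(L)
    c = 2.0 * max(L[i][i] for i in range(n)) + 1.0
    v = [math.sin(i + 1.0) for i in range(n)]        # arbitrary start
    for _ in range(iters):
        w = [c * v[i] - sum(L[i][j] * v[j] for j in range(n)) for i in range(n)]
        m = sum(w) / n                               # remove constant component
        w = [x - m for x in w]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# The resulting proto-value function is monotone along the chain and
# changes sign once, in the middle: the associated eigenpurpose rewards
# traversing the chain from one end to the other.
pvf = fiedler_vector(laplacian_path(6))
print([round(x, 3) for x in pvf])
```

For a path graph this eigenvector is known in closed form (components proportional to cos(π(i + 1/2)/n)), which makes the sketch easy to check against theory.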
1703.00956 | 2950040888 | Representation learning and option discovery are two of the biggest challenges in reinforcement learning (RL). Proto-RL is a well known approach for representation learning in MDPs. The representations learned with this framework are called proto-value functions (PVFs). In this paper we address the option discovery problem by showing how PVFs implicitly define options. We do it by introducing eigenpurposes, intrinsic reward functions derived from the learned representations. The options discovered from eigenpurposes traverse the principal directions of the state space. They are useful for multiple tasks because they are independent of the agents' intentions. Moreover, by capturing the diffusion process of a random walk, different options act at different time scales, making them helpful for exploration strategies. We demonstrate features of eigenpurposes in traditional tabular domains as well as in Atari 2600 games. | The idea of discovering options by learning to control parts of the environment is also related to our work. Eigenpurposes encode different rates of change in the agent’s representation of the world, while the corresponding options aim at maximizing such change. Others have also proposed ways to discover options based on the idea of learning to control the environment. One approach, for instance, is an algorithm that explicitly models changes in the variables that form the agent's representation. Recently, an algorithm was proposed in which agents discover options by maximizing a notion of empowerment @cite_20 , where the agent aims at getting to states with a maximal set of available intrinsic options. | {
"cite_N": [
"@cite_20"
],
"mid": [
"1786044565"
],
"abstract": [
"Is it better for you to own a corkscrew or not? If asked, you as a human being would likely say “yes”, but more importantly, you are somehow able to make this decision. You are able to decide this, even if your current acute problems or task do not include opening a wine bottle. Similarly, it is also unlikely that you evaluated several possible trajectories your life could take and looked at them with and without a corkscrew, and then measured your survival or reproductive fitness in each. When you, as a human cognitive agent, made this decision, you were likely relying on a behavioural “proxy”, an internal motivation that abstracts the problem of evaluating a decision impact on your overall life, but evaluating it in regard to some simple fitness function. One example would be the idea of curiosity, urging you to act so that your experience new sensations and learn about the environment. On average, this should lead to better and richer models of the world, which give you a better chance of reaching your ultimate goals of survival and reproduction."
]
} |
1703.00956 | 2950040888 | Representation learning and option discovery are two of the biggest challenges in reinforcement learning (RL). Proto-RL is a well known approach for representation learning in MDPs. The representations learned with this framework are called proto-value functions (PVFs). In this paper we address the option discovery problem by showing how PVFs implicitly define options. We do it by introducing eigenpurposes, intrinsic reward functions derived from the learned representations. The options discovered from eigenpurposes traverse the principal directions of the state space. They are useful for multiple tasks because they are independent of the agents' intentions. Moreover, by capturing the diffusion process of a random walk, different options act at different time scales, making them helpful for exploration strategies. We demonstrate features of eigenpurposes in traditional tabular domains as well as in Atari 2600 games. | Continual Curiosity-driven Skill Acquisition (CCSA) @cite_18 is the closest approach to ours. CCSA also discovers skills that maximize an intrinsic reward obtained by some extracted representation. While we use PVFs, CCSA uses Incremental Slow Feature Analysis (SFA) @cite_3 to define the intrinsic reward function. It has been shown that, given a specific choice of adjacency function, PVFs are equivalent to SFA @cite_24 . SFA becomes an approximation of PVFs if the function space used in the SFA does not allow arbitrary mappings from the observed data to an embedding. Our method differs in how we define the initiation and termination sets, as well as in the objective being maximized. CCSA acquires skills that produce a large variation in the slow-feature outputs, leading to options that seek bottlenecks. Our approach does not seek bottlenecks, focusing instead on traversing different directions of the learned representation. | {
"cite_N": [
"@cite_24",
"@cite_18",
"@cite_3"
],
"mid": [
"2146444479",
"",
"2166489517"
],
"abstract": [
"Invariant features of temporally varying signals are useful for analysis and classification. Slow feature analysis (SFA) is a new method for learning invariant or slowly varying features from a vectorial input signal. It is based on a nonlinear expansion of the input signal and application of principal component analysis to this expanded signal and its time derivative. It is guaranteed to find the optimal solution within a family of functions directly and can learn to extract a large number of decorrelated features, which are ordered by their degree of invariance. SFA can be applied hierarchically to process high-dimensional input signals and extract complex features. SFA is applied first to complex cell tuning properties based on simple cell output, including disparity and motion. Then more complicated input-output functions are learned by repeated application of SFA. Finally, a hierarchical network of SFA modules is presented as a simple model of the visual system. The same unstructured network can learn translation, size, rotation, contrast, or, to a lesser degree, illumination invariance for one-dimensional objects, depending on only the training stimulus. Surprisingly, only a few training objects suffice to achieve good generalization to new objects. The generated representation is suitable for object recognition. Performance degrades if the network is trained to learn multiple invariances simultaneously.",
"",
"The Slow Feature Analysis (SFA) unsupervised learning framework extracts features representing the underlying causes of the changes within a temporally coherent high-dimensional raw sensory input signal. We develop the first online version of SFA, via a combination of incremental Principal Components Analysis and Minor Components Analysis. Unlike standard batch-based SFA, online SFA adapts along with non-stationary environments, which makes it a generally useful unsupervised preprocessor for autonomous learning agents. We compare online SFA to batch SFA in several experiments and show that it indeed learns without a teacher to encode the input stream by informative slow features representing meaningful abstract environmental properties. We extend online SFA to deep networks in hierarchical fashion, and use them to successfully extract abstract object position information from high-dimensional video."
]
} |
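The SFA objective discussed in the row above has a compact linear form: minimize the variance of a feature's time derivative subject to unit variance, i.e. the generalized eigenproblem A w = λ B w, where A is the covariance of the signal's time differences and B the covariance of the signal itself; the eigenvector with the smallest λ is the slowest feature. A small linear-SFA sketch on a synthetic two-channel mixture (the signals, mixing weights, and the 2x2 closed-form solver are all illustrative assumptions):

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / len(xs)

# Two latent sources, one slow and one fast, linearly mixed into x(t).
T = 1000
slow = [math.sin(2 * math.pi * t / T) for t in range(T)]
fast = [math.sin(2 * math.pi * 25 * t / T) for t in range(T)]
x1 = [s + 0.5 * f for s, f in zip(slow, fast)]
x2 = [0.5 * s - f for s, f in zip(slow, fast)]

# Linear SFA: minimise Var(w . dx) s.t. Var(w . x) = 1 -- the generalized
# eigenproblem A w = lambda B w.  The smallest lambda gives the slowest
# feature; here we solve the 2x2 case in closed form via C = B^-1 A.
d1 = [b - a for a, b in zip(x1, x1[1:])]
d2 = [b - a for a, b in zip(x2, x2[1:])]
A = [[cov(d1, d1), cov(d1, d2)], [cov(d1, d2), cov(d2, d2)]]
B = [[cov(x1, x1), cov(x1, x2)], [cov(x1, x2), cov(x2, x2)]]

det_b = B[0][0] * B[1][1] - B[0][1] * B[1][0]
Binv = [[B[1][1] / det_b, -B[0][1] / det_b],
        [-B[1][0] / det_b, B[0][0] / det_b]]
C = [[sum(Binv[i][k] * A[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]

tr = C[0][0] + C[1][1]
det_c = C[0][0] * C[1][1] - C[0][1] * C[1][0]
lam = (tr - math.sqrt(tr * tr - 4.0 * det_c)) / 2.0   # smaller eigenvalue
w = (C[0][1], lam - C[0][0])                          # (C - lam*I) w = 0
if abs(w[0]) + abs(w[1]) < 1e-12:
    w = (lam - C[1][1], C[1][0])                      # degenerate fallback

y = [w[0] * a + w[1] * b for a, b in zip(x1, x2)]
corr = cov(y, slow) / math.sqrt(cov(y, y) * cov(slow, slow))
print(abs(corr))   # the slowest linear-SFA feature recovers the slow source
```

The quadratic expansion used by full SFA (and the graph construction used by PVFs) is dropped here; with a purely linear mixture, the linear solver is enough to demix the slow source up to sign and scale.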
1703.01267 | 2594323998 | The square @math of a linear error correcting code @math is the linear code spanned by the component-wise products of every pair of (non-necessarily distinct) words in @math . Squares of codes have gained attention for several applications mainly in the area of cryptography, and typically in those applications, one is concerned about some of the parameters (dimension and minimum distance) of both @math and @math . In this paper, motivated mostly by the study of this problem in the case of linear codes defined over the binary field, squares of cyclic codes are considered. General results on the minimum distance of the squares of cyclic codes are obtained, and constructions of cyclic codes @math with a relatively large dimension of @math and minimum distance of the square @math are discussed. In some cases, the constructions lead to codes @math such that both @math and @math simultaneously have the largest possible minimum distances for their length and dimensions. | A Singleton-like bound relating @math and @math was established in @cite_32 and later the family of codes attaining this bound was characterized in @cite_24 (both works in fact treat the more general setting of products of codes). In particular, unless one of the two parameters ( @math or @math ) is very restricted, Reed-Solomon codes are the only ones which can match this bound (see for more information about these results). | {
"cite_N": [
"@cite_24",
"@cite_32"
],
"mid": [
"1505693772",
"2027526129"
],
"abstract": [
"We characterize product-maximum distance separable (PMDS) pairs of linear codes, i.e., pairs of codes @math and @math whose product under coordinatewise multiplication has maximum possible minimum distance as a function of the code length and the dimensions @math and @math . We prove in particular, for @math , that if the square of the code @math has minimum distance at least 2, and @math is a PMDS pair, then either @math is a generalized Reed–Solomon code, or @math is a direct sum of self-dual codes. In passing we establish coding-theory analogues of classical theorems of additive combinatorics.",
"We give an upper bound that relates the dimensions of some given number of linear codes, with the minimum distance of their componentwise product. A typical result is as follows: given t linear codes Ci of parameters [n,ki]q with full support, one can find codewords ci ∈ Ci such that 1 ≤ w(c1*⋯*ct) ≤ max(t-1, n+t-(k1+⋯+kt))."
]
} |
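The Schur square itself is easy to explore by brute force on a small example. Over F₂ the componentwise product is bitwise AND, and c∗c = c, so C ⊆ C² always holds; for the cyclic [7,4] Hamming code (generator polynomial x³ + x + 1, a punctured Reed-Muller code) the square fills the whole space, illustrating how quickly squares can grow. A sketch (this example is ours, not taken from the paper):

```python
def span(rows):
    """All F_2-linear combinations (XORs) of the given bitmask rows."""
    words = {0}
    for r in rows:
        words |= {w ^ r for w in words}
    return words

def rank(rows):
    """Rank over F_2 via Gaussian elimination on integer bitmasks."""
    pivots = {}
    for row in rows:
        cur = row
        while cur:
            p = cur.bit_length() - 1
            if p in pivots:
                cur ^= pivots[p]
            else:
                pivots[p] = cur
                break
    return len(pivots)

# Cyclic [7,4] Hamming code: generator-matrix rows are x^i * g(x) for
# g(x) = x^3 + x + 1, encoded as bitmasks (bit i = coefficient of x^i).
g = 0b1011
C = span([g << i for i in range(4)])

# Schur square: the span of all componentwise (bitwise AND) products.
products = {a & b for a in C for b in C}
C2 = span(sorted(products))

weight = lambda w: bin(w).count("1")
print(len(C), min(weight(w) for w in C if w))    # 16 codewords, d(C) = 3
print(rank(sorted(products)), min(weight(w) for w in C2 if w))
# dim(C^2) = 7 and d(C^2) = 1: the square is all of F_2^7
```

The exhaustive span/rank approach clearly only scales to toy lengths, but it is a convenient way to sanity-check the dimension and minimum-distance behaviour the paper studies analytically.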
1703.01267 | 2594323998 | The square @math of a linear error correcting code @math is the linear code spanned by the component-wise products of every pair of (non-necessarily distinct) words in @math . Squares of codes have gained attention for several applications mainly in the area of cryptography, and typically in those applications, one is concerned about some of the parameters (dimension and minimum distance) of both @math and @math . In this paper, motivated mostly by the study of this problem in the case of linear codes defined over the binary field, squares of cyclic codes are considered. General results on the minimum distance of the squares of cyclic codes are obtained, and constructions of cyclic codes @math with a relatively large dimension of @math and minimum distance of the square @math are discussed. In some cases, the constructions lead to codes @math such that both @math and @math simultaneously have the largest possible minimum distances for their length and dimensions. | However, as mentioned above, Reed-Solomon codes have the restriction that @math . Therefore the asymptotic behaviour of families of squares of codes has been considered, where the finite field @math is fixed and @math grows to infinity. The existence, over every finite field, of asymptotically good families of codes whose squares also form an asymptotically good family was established in @cite_0 . (We say that a family of codes @math with lengths @math is asymptotically good if @math when @math and the limits @math and @math exist and are strictly positive.) For small fields, this result requires a combination of an algebraic geometric construction over a sufficiently large (but constant) extension field and a special concatenation function to achieve a final construction over the small finite field.
However, @cite_26 showed that families of codes with such asymptotic properties are not very abundant, since choosing codes uniformly at random (among all codes of a prescribed dimension that grows linearly with the length) will, with high probability, not satisfy the desired properties. | {
"cite_N": [
"@cite_0",
"@cite_26"
],
"mid": [
"2150005435",
"2108277687"
],
"abstract": [
"If C is a binary linear code, let C〈2〉 be the linear code spanned by intersections of pairs of codewords of C. We construct an asymptotically good family of binary linear codes such that, for C ranging in this family, C〈2〉 also form an asymptotically good family. For this, we use algebraic-geometry codes, concatenation, and a fair amount of bilinear algebra. More precisely, the two main ingredients used in our construction are, first, a description of the symmetric square of an odd degree extension field in terms only of field operations of small degree, and second, a recent result of Garcia-Stichtenoth-Bassa-Beelen on the number of points of curves on such an odd degree extension field.",
"Given a linear code @math , one can define the @math -th power of @math as the span of all componentwise products of @math elements of @math . A power of @math may quickly fill the whole space. Our purpose is to answer the following question: does the square of a code \"typically\" fill the whole space? We give a positive answer, for codes of dimension @math and length roughly @math or smaller. Moreover, the convergence speed is exponential if the difference @math is at least linear in @math . The proof uses random coding and combinatorial arguments, together with algebraic tools involving the precise computation of the number of quadratic forms of a given rank, and the number of their zeros."
]
} |
1703.01267 | 2594323998 | The square @math of a linear error correcting code @math is the linear code spanned by the component-wise products of every pair of (non-necessarily distinct) words in @math . Squares of codes have gained attention for several applications mainly in the area of cryptography, and typically in those applications, one is concerned about some of the parameters (dimension and minimum distance) of both @math and @math . In this paper, motivated mostly by the study of this problem in the case of linear codes defined over the binary field, squares of cyclic codes are considered. General results on the minimum distance of the squares of cyclic codes are obtained, and constructions of cyclic codes @math with a relatively large dimension of @math and minimum distance of the square @math are discussed. In some cases, the constructions lead to codes @math such that both @math and @math simultaneously have the largest possible minimum distances for their length and dimensions. | Instead of considering the asymptotic setting, this paper focuses on specific values for the length of the code @math ; here the problem is that there are not many existing results that can be applied to, for example, the setting of linear binary codes with lengths, say, @math . One option is to use Reed-Solomon codes over large enough extension fields paired with the concatenation technique in @cite_0 . Reed-Muller codes are a family of binary codes for which it is relatively easy to determine the minimum distance of their squares. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2150005435"
],
"abstract": [
"If C is a binary linear code, let C〈2〉 be the linear code spanned by intersections of pairs of codewords of C. We construct an asymptotically good family of binary linear codes such that, for C ranging in this family, C〈2〉 also form an asymptotically good family. For this, we use algebraic-geometry codes, concatenation, and a fair amount of bilinear algebra. More precisely, the two main ingredients used in our construction are, first, a description of the symmetric square of an odd degree extension field in terms only of field operations of small degree, and second, a recent result of Garcia-Stichtenoth-Bassa-Beelen on the number of points of curves on such an odd degree extension field."
]
} |
1703.01086 | 2593539516 | This paper introduces a novel rotation-based framework for arbitrary-oriented text detection in natural scene images. We present the Rotation Region Proposal Networks , which are designed to generate inclined proposals with text orientation angle information. The angle information is then adapted for bounding box regression to make the proposals more accurately fit into the text region in terms of the orientation. The Rotation Region-of-Interest pooling layer is proposed to project arbitrary-oriented proposals to a feature map for a text region classifier. The whole framework is built upon a region-proposal-based architecture, which ensures the computational efficiency of the arbitrary-oriented text detection compared with previous text detection systems. We conduct experiments using the rotation-based framework on three real-world scene text detection datasets and demonstrate its superiority in terms of effectiveness and efficiency over previous approaches. | Scene text in the wild is usually aligned from any orientation in real-world applications, and approaches for arbitrary orientations are needed. For example, @cite_5 uses mutual magnitude symmetry and gradient vector symmetry to identify text pixel candidates regardless of the orientation, including curves from natural scene images, and @cite_6 designs a Canny text detector by taking the similarity between an image edge and text to detect text edge pixels and perform text localization. Recently, convolution-network-based approaches were proposed to perform text detection, e.g., Text-CNN @cite_3 , by first using an optimized MSER detector to find the approximate region of the text and then sending region features into a character-based horizontal text CNN classifier to further recognize the character region. In addition, the orientation factor is adopted in the segmentation models developed by Yao et al. @cite_51 .
Their model aims to predict more accurate orientations explicitly via text segmentation and yields outstanding results on the ICDAR2013 @cite_37 , ICDAR2015 @cite_31 and MSRA-TD500 @cite_21 benchmarks. | {
"cite_N": [
"@cite_37",
"@cite_21",
"@cite_3",
"@cite_6",
"@cite_5",
"@cite_31",
"@cite_51"
],
"mid": [
"2008806374",
"1972065312",
"2217433794",
"2468724597",
"1971822075",
"2144554289",
"2464918637"
],
"abstract": [
"This report presents the final results of the ICDAR 2013 Robust Reading Competition. The competition is structured in three Challenges addressing text extraction in different application domains, namely born-digital images, real scene images and real-scene videos. The Challenges are organised around specific tasks covering text localisation, text segmentation and word recognition. The competition took place in the first quarter of 2013, and received a total of 42 submissions over the different tasks offered. This report describes the datasets and ground truth specification, details the performance evaluation protocols used and presents the final results along with a brief summary of the participating methods.",
"With the increasing popularity of practical vision systems and smart phones, text detection in natural scenes becomes a critical yet challenging task. Most existing methods have focused on detecting horizontal or near-horizontal texts. In this paper, we propose a system which detects texts of arbitrary orientations in natural images. Our algorithm is equipped with a two-level classification scheme and two sets of features specially designed for capturing both the intrinsic characteristics of texts. To better evaluate our algorithm and compare it with other competing algorithms, we generate a new dataset, which includes various texts in diverse real-world scenarios; we also propose a protocol for performance evaluation. Experiments on benchmark datasets and the proposed dataset demonstrate that our algorithm compares favorably with the state-of-the-art algorithms when handling horizontal texts and achieves significantly enhanced performance on texts of arbitrary orientations in complex natural scenes.",
"Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature globally computed from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this paper, we present a new system for scene text detection by proposing a novel text-attentional convolutional neural network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called contrast-enhancement maximally stable extremal regions (MSERs) is developed, which extends the widely used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 data set, with an F-measure of 0.82, substantially improving the state-of-the-art results.",
"This paper presents a novel scene text detection algorithm, Canny Text Detector, which takes advantage of the similarity between image edge and text for effective text localization with improved recall rate. As closely related edge pixels construct the structural information of an object, we observe that cohesive characters compose a meaningful word/sentence sharing similar properties such as spatial location, size, color, and stroke width regardless of language. However, prevalent scene text detection approaches have not fully utilized such similarity, but mostly rely on the characters classified with high confidence, leading to low recall rate. By exploiting the similarity, our approach can quickly and robustly localize a variety of texts. Inspired by the original Canny edge detector, our algorithm makes use of double threshold and hysteresis tracking to detect texts of low confidence. Experimental results on public datasets demonstrate that our algorithm outperforms the state-of-the-art scene text detection methods in terms of detection rate.",
"Abstract Text detection in the real world images captured in unconstrained environment is an important yet challenging computer vision problem due to a great variety of appearances, cluttered background, and character orientations. In this paper, we present a robust system based on the concepts of Mutual Direction Symmetry (MDS), Mutual Magnitude Symmetry (MMS) and Gradient Vector Symmetry (GVS) properties to identify text pixel candidates regardless of any orientations including curves (e.g. circles, arc shaped) from natural scene images. The method works based on the fact that the text patterns in both Sobel and Canny edge maps of the input images exhibit a similar behavior. For each text pixel candidate, the method proposes to explore SIFT features to refine the text pixel candidates, which results in text representatives. Next an ellipse growing process is introduced based on a nearest neighbor criterion to extract the text components. The text is verified and restored based on text direction and spatial study of pixel distribution of components to filter out non-text components. The proposed method is evaluated on three benchmark datasets, namely, ICDAR2005 and ICDAR2011 for horizontal text evaluation, MSRA-TD500 for non-horizontal straight text evaluation and on our own dataset (CUTE80) that consists of 80 images for curved text evaluation to show its effectiveness and superiority over existing methods.",
"Results of the ICDAR 2015 Robust Reading Competition are presented. A new Challenge 4 on Incidental Scene Text has been added to the Challenges on Born-Digital Images, Focused Scene Images and Video Text. Challenge 4 is run on a newly acquired dataset of 1,670 images evaluating Text Localisation, Word Recognition and End-to-End pipelines. In addition, the dataset for Challenge 3 on Video Text has been substantially updated with more video sequences and more accurate ground truth data. Finally, tasks assessing End-to-End system performance have been introduced to all Challenges. The competition took place in the first quarter of 2015, and received a total of 44 submissions. Only the tasks newly introduced in 2015 are reported on. The datasets, the ground truth specification and the evaluation protocols are presented together with the results and a brief summary of the participating methods.",
"Recently, scene text detection has become an active research topic in computer vision and document analysis, because of its great importance and significant challenge. However, vast majority of the existing methods detect text within local regions, typically through extracting character, word or line level candidates followed by candidate aggregation and false positive elimination, which potentially exclude the effect of wide-scope and long-range contextual cues in the scene. To take full advantage of the rich information available in the whole natural image, we propose to localize text in a holistic manner, by casting scene text detection as a semantic segmentation problem. The proposed algorithm directly runs on full images and produces global, pixel-wise prediction maps, in which detections are subsequently formed. To better make use of the properties of text, three types of information regarding text region, individual characters and their relationship are estimated, with a single Fully Convolutional Network (FCN) model. With such predictions of text properties, the proposed algorithm can simultaneously handle horizontal, multi-oriented and curved text in real-world natural images. The experiments on standard benchmarks, including ICDAR 2013, ICDAR 2015 and MSRA-TD500, demonstrate that the proposed algorithm substantially outperforms previous state-of-the-art approaches. Moreover, we report the first baseline result on the recently-released, large-scale dataset COCO-Text."
]
} |
1703.01086 | 2593539516 | This paper introduces a novel rotation-based framework for arbitrary-oriented text detection in natural scene images. We present the Rotation Region Proposal Networks , which are designed to generate inclined proposals with text orientation angle information. The angle information is then adapted for bounding box regression to make the proposals more accurately fit into the text region in terms of the orientation. The Rotation Region-of-Interest pooling layer is proposed to project arbitrary-oriented proposals to a feature map for a text region classifier. The whole framework is built upon a region-proposal-based architecture, which ensures the computational efficiency of the arbitrary-oriented text detection compared with previous text detection systems. We conduct experiments using the rotation-based framework on three real-world scene text detection datasets and demonstrate its superiority in terms of effectiveness and efficiency over previous approaches. | A technique similar to text detection is generic object detection. The detection process can be made faster if the number of proposals is largely reduced. There is a wide variety of region proposal methods, such as Edge Boxes @cite_25 , Selective Search @cite_46 , and Region Proposal Networks (RPNs) @cite_47 . For example, Jaderberg al @cite_42 extends the region proposal method and applies the Edge Boxes method @cite_25 to perform text detection. Their text spotting system achieves outstanding results on several text detection benchmarks. The Connectionist Text Proposal Network (CTPN) @cite_9 is also a detection-based framework for scene text detection. It employs the image feature from the CNN network in LSTM to predict the text region and generate robust proposals. | {
"cite_N": [
"@cite_47",
"@cite_9",
"@cite_42",
"@cite_46",
"@cite_25"
],
"mid": [
"639708223",
"2519818067",
"1922126009",
"2088049833",
"7746136"
],
"abstract": [
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features—using the recently popular terminology of neural networks with ’attention’ mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3] , our detection system has a frame rate of 5 fps ( including all steps ) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.",
"We propose a novel Connectionist Text Proposal Network (CTPN) that accurately localizes text lines in natural image. The CTPN detects a text line in a sequence of fine-scale text proposals directly in convolutional feature maps. We develop a vertical anchor mechanism that jointly predicts location and text/non-text score of each fixed-width proposal, considerably improving localization accuracy. The sequential proposals are naturally connected by a recurrent neural network, which is seamlessly incorporated into the convolutional network, resulting in an end-to-end trainable model. This allows the CTPN to explore rich context information of image, making it powerful to detect extremely ambiguous text. The CTPN works reliably on multi-scale and multi-language text without further post-processing, departing from previous bottom-up methods requiring multi-step post filtering. It achieves 0.88 and 0.61 F-measure on the ICDAR 2013 and 2015 benchmarks, surpassing recent results [8, 35] by a large margin. The CTPN is computationally efficient with 0.14 s/image, by using the very deep VGG16 model [27]. Online demo is available: http://textdet.com .",
"In this work we present an end-to-end system for text spotting--localising and recognising text in natural scene images--and text based image retrieval. This system is based on a region proposal mechanism for detection and deep convolutional neural networks for recognition. Our pipeline uses a novel combination of complementary proposal generation techniques to ensure high recall, and a fast subsequent filtering stage for improving precision. For the recognition and ranking of proposals, we train very large convolutional neural networks to perform word recognition on the whole proposal region at the same time, departing from the character classifier based systems of the past. These networks are trained solely on data produced by a synthetic text generation engine, requiring no human labelled data. Analysing the stages of our pipeline, we show state-of-the-art performance throughout. We perform rigorous experiments across a number of standard end-to-end text spotting benchmarks and text-based image retrieval datasets, showing a large improvement over all previous methods. Finally, we demonstrate a real-world application of our text spotting system to allow thousands of hours of news footage to be instantly searchable via a text query.",
"This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99% recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http: disi.unitn.it uijlings SelectiveSearch.html ).",
"The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96% object recall at overlap threshold of 0.5 and over 75% recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy."
]
} |
1703.01086 | 2593539516 | This paper introduces a novel rotation-based framework for arbitrary-oriented text detection in natural scene images. We present the Rotation Region Proposal Networks , which are designed to generate inclined proposals with text orientation angle information. The angle information is then adapted for bounding box regression to make the proposals more accurately fit into the text region in terms of the orientation. The Rotation Region-of-Interest pooling layer is proposed to project arbitrary-oriented proposals to a feature map for a text region classifier. The whole framework is built upon a region-proposal-based architecture, which ensures the computational efficiency of the arbitrary-oriented text detection compared with previous text detection systems. We conduct experiments using the rotation-based framework on three real-world scene text detection datasets and demonstrate its superiority in terms of effectiveness and efficiency over previous approaches. | This work is inspired by the RPN detection pipeline with regard to the dense-proposal-based approach used for detection and the RoI pooling operation used to further accelerate the detection pipeline. Detection pipelines based on RPN are widely used in various computer vision applications @cite_14 @cite_23 @cite_12 . The idea is also similar to that of Spatial Transformer Networks (STN) @cite_7 , i.e., a neural network model can rectify an image by learning its affine transformation matrix. Here, we try to extend the model to multi-oriented text detection by injecting angle information. Perhaps the work most related to ours is @cite_14 , where the authors proposed an inception-RPN and made further text detection-specific optimizations to adapt it to text detection. We incorporate the rotation factor into the region proposal network so that it is able to generate arbitrary-oriented proposals. 
We also extend the RoI pooling layer into the Rotation RoI (RRoI) pooling layer and apply angle regression in our framework to perform the rectification process and finally achieve outstanding results. | {
"cite_N": [
"@cite_7",
"@cite_14",
"@cite_12",
"@cite_23"
],
"mid": [
"603908379",
"2395360388",
"2587008894",
"2438869444"
],
"abstract": [
"Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations.",
"In this paper, we develop a novel unified framework called DeepText for text region proposal generation and text detection in natural images via a fully convolutional neural network (CNN). First, we propose the inception region proposal network (Inception-RPN) and design a set of text characteristic prior bounding boxes to achieve high word recall with only hundred level candidate proposals. Next, we present a powerful textdetection network that embeds ambiguous text category (ATC) information and multilevel region-of-interest pooling (MLRP) for text and non-text classification and accurate localization. Finally, we apply an iterative bounding box voting scheme to pursue high recall in a complementary manner and introduce a filtering algorithm to retain the most suitable bounding box, while removing redundant inner and outer boxes for each text instance. Our approach achieves an F-measure of 0.83 and 0.85 on the ICDAR 2011 and 2013 robust text detection benchmarks, outperforming previous state-of-the-art results.",
"We perform fast vehicle detection from traffic surveillance cameras. A novel deep learning framework, namely Evolving Boxes, is developed that proposes and refines the object boxes under different feature representations. Specifically, our framework is embedded with a light-weight proposal network to generate initial anchor boxes as well as to early discard unlikely regions; a fine-tuning network produces detailed features for these candidate boxes. We show intriguingly that by applying different feature fusion techniques, the initial boxes can be refined for both localization and recognition. We evaluate our network on the recent DETRAC benchmark and obtain a significant improvement over the state-of-the-art Faster RCNN by 9.5% mAP. Further, our network achieves 9–13 FPS detection speed on a moderate commercial GPU.",
"The Faster R-CNN has recently demonstrated impressive results on various object detection benchmarks. By training a Faster R-CNN model on the large scale WIDER face dataset, we report state-of-the-art results on two widely used face detection benchmarks, FDDB and the recently released IJB-A."
]
} |
1703.01083 | 2950348756 | Plan recognition algorithms infer agents' plans from their observed actions. Due to imperfect knowledge about the agent's behavior and the environment, it is often the case that there are multiple hypotheses about an agent's plans that are consistent with the observations, though only one of these hypotheses is correct. This paper addresses the problem of how to disambiguate between hypotheses, by querying the acting agent about whether a candidate plan in one of the hypotheses matches its intentions. This process is performed sequentially and used to update the set of possible hypotheses during the recognition process. The paper defines the sequential plan recognition process (SPRP), which seeks to reduce the number of hypotheses using a minimal number of queries. We propose a number of policies for the SPRP which use maximum likelihood and information gain to choose which plan to query. We show this approach works well in practice on two domains from the literature, significantly reducing the number of hypotheses using fewer queries than a baseline approach. Our results can inform the design of future plan recognition systems that interleave the recognition process with intelligent interventions of their users. | Our work relates to different approaches in the PR literature on disambiguation of the hypothesis space during run-time. Most of the approaches admit all of the hypotheses that are consistent with the observed history and rank them @cite_19 @cite_13 . | {
"cite_N": [
"@cite_19",
"@cite_13"
],
"mid": [
"1981637451",
"2287004520"
],
"abstract": [
"We present the PHATT algorithm for plan recognition. Unlike previous approaches to plan recognition, PHATT is based on a model of plan execution. We show that this clarifies several difficult issues in plan recognition including the execution of multiple interleaved root goals, partially ordered plans, and failing to observe actions. We present the PHATT algorithm's theoretical basis, and an implementation based on tree structures. We also investigate the algorithm's complexity, both analytically and empirically. Finally, we present PHATT's integrated constraint reasoning for parametrized actions and temporal constraints.",
"We investigate the use of a simple, discriminative reranking approach to plan recognition in an abductive setting. In contrast to recent work, which attempts to model abductive plan recognition using various formalisms that integrate logic and graphical models (such as Markov Logic Networks or Bayesian Logic Programs), we instead advocate a simpler, more flexible approach in which plans found through an abductive beam-search are discriminatively scored based on arbitrary features. We show that this approach performs well even with relatively few positive training examples, and we obtain state-of-the-art results on two abductive plan recognition datasets, outperforming more complicated systems."
]
} |
1703.01083 | 2950348756 | Plan recognition algorithms infer agents' plans from their observed actions. Due to imperfect knowledge about the agent's behavior and the environment, it is often the case that there are multiple hypotheses about an agent's plans that are consistent with the observations, though only one of these hypotheses is correct. This paper addresses the problem of how to disambiguate between hypotheses, by querying the acting agent about whether a candidate plan in one of the hypotheses matches its intentions. This process is performed sequentially and used to update the set of possible hypotheses during the recognition process. The paper defines the sequential plan recognition process (SPRP), which seeks to reduce the number of hypotheses using a minimal number of queries. We propose a number of policies for the SPRP which use maximum likelihood and information gain to choose which plan to query. We show this approach works well in practice on two domains from the literature, significantly reducing the number of hypotheses using fewer queries than a baseline approach. Our results can inform the design of future plan recognition systems that interleave the recognition process with intelligent interventions of their users. | Lastly, the deployment of probes, tests, and sensors to identify the correct diagnoses or the occurrence of events was inspired by work in sequential diagnosis @cite_8 @cite_0 , active diagnosis @cite_3 @cite_12 , and sensor minimization @cite_9 @cite_11 . @cite_2 suggested a metric that will allow an agent to recognize plans earlier. Background. Before defining the SPRP, we present some background about plans and PR. There are multiple ways to define a plan and the PR problem [inter alia, Nau 2007; Ramírez and Geffner 2010]. We follow the definitions used by (simplified for brevity) in which the observing agent is given a plan library describing the expected behaviors of the observed agent. 
The refinement methods represent how complex actions can be decomposed into (basic or complex) actions. A plan for achieving a complex action @math is a tree whose root is labeled by @math , and each parent node is labeled with a complex action such that its children nodes are a decomposition of its complex action into constituent actions according to one of the refinement methods. The ordering constraints of each refinement method are used to enforce the order in which the method's constituents were executed @cite_19 . | {
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_2",
"@cite_12",
"@cite_11"
],
"mid": [
"2394540668",
"1609747062",
"2126410464",
"2137118045",
"1981637451",
"69219414",
"2466791622",
"1586169328"
],
"abstract": [
"Possessing a sufficient level of situation awareness is essential for effective decision making in dynamic environments. In video games, this includes being aware to some extent of the intentions of the opponents. Such high-level awareness hinges upon inferences over the lower-level situation awareness provided by the game state. Traditional plan recognizers are completely passive processes that leave all the initiative to the observed agent. In a situation where the opponent’s intentions are unclear, the observer is forced to wait until further observations of the opponent’s actions are made to disambiguate the pending goal hypotheses. With the plan recognizer we propose, in contrast, the observer would take the initiative and provoke the opponent, with the expectation that his reaction will give cues as to what his true intentions actually are. Plan recognition is of course only one component of a larger AI system which in addition involves components to make decisions on how to act against the opposing force, execute and monitor planned and reactive actions, and learn from past interactions to adapt accordingly. Our long term objective is to develop plan recognition and planning algorithms that will compete against humans in games and other applications. The unlimited creativity of the human mind coupled with its sometimes chaotic and unpredictable nature makes this challenge very exciting.",
"We study sensor minimization problems in the context of fault diagnosis. Fault diagnosis consists in synthesizing a diagnoser that observes a given plant and identifies faults in the plant as soon as possible after their occurrence. Existing literature on this problem has considered the case of fixed static observers, where the set of observable events is fixed and does not change during execution of the system. In this paper, we consider static observers where the set of observable events is not fixed, but needs to be optimized (e.g., minimized in size). We also consider dynamic observers, where the observer can \"switch\" sensors on or off, thus dynamically changing the set of events it wishes to observe. It is known that checking diagnosability (i.e., whether a given observer is capable of identifying faults) can be solved in polynomial time for static observers, and we show that the same is true for dynamic ones. On the other hand, minimizing the number of (static) observable events required to achieve diagnosability is NP-complete. We show that this is true also in the case of mask-based observation, where some events are observable but not distinguishable. For dynamic observers' synthesis, we prove that a most permissive finite-state observer can be computed in doubly exponential time, using a game-theoretic approach. We further investigate optimization problems for dynamic observers and define a notion of cost of an observer. We show how to compute an optimal observer using results on mean-payoff games by Zwick and Paterson.",
"The need for accurate and timely diagnosis of system failures and the advantages of automated diagnostic systems are well appreciated. However, diagnosability considerations are often not explicitly taken into account in the system design. In particular, design of the controller and that of the diagnostic subsystem are decoupled, and this may significantly affect the diagnosability properties of a system. The authors present an integrated approach to control and diagnosis. More specifically, they present an approach for the design of diagnosable systems by appropriate design of the system controller. This problem, which they refer to as the active diagnosis problem, is studied in the framework of discrete-event systems (DESs); it is based on prior and new results on the theory of diagnosis for DESs and on existing results in supervisory control under partial observations. They formulate the active diagnosis problem as a supervisory control problem where the legal language is an \"appropriate\" regular sublanguage of the regular language generated by the system. They present an iterative procedure for determining the supremal controllable, observable, and diagnosable sublanguage of the legal language and for obtaining the supervisor that synthesizes this language. This procedure provides both a controller that ensures diagnosability of the closed-loop system and a diagnoser for online failure diagnosis. The procedure can be implemented using finite-state machines and is guaranteed to converge in a finite number of iterations. The authors illustrate their approach using a simple pump-valve system.",
"When a system behaves abnormally, sequential diagnosis takes a sequence of measurements of the system until the faults causing the abnormality are identified, and the goal is to reduce the diagnostic cost, defined here as the number of measurements. To propose measurement points, previous work employs a heuristic based on reducing the entropy over a computed set of diagnoses. This approach generally has good performance in terms of diagnostic cost, but can fail to diagnose large systems when the set of diagnoses is too large. Focusing on a smaller set of probable diagnoses scales the approach but generally leads to increased average diagnostic costs. In this paper, we propose a new diagnostic framework employing four new techniques, which scales to much larger systems with good performance in terms of diagnostic cost. First, we propose a new heuristic for measurement point selection that can be computed efficiently, without requiring the set of diagnoses, once the system is modeled as a Bayesian network and compiled into a logical form known as d-DNNF. Second, we extend hierarchical diagnosis, a technique based on system abstraction from our previous work, to handle probabilities so that it can be applied to sequential diagnosis to allow larger systems to be diagnosed. Third, for the largest systems where even hierarchical diagnosis fails, we propose a novel method that converts the system into one that has a smaller abstraction and whose diagnoses form a superset of those of the original system; the new system can then be diagnosed and the result mapped back to the original system. Finally, we propose a novel cost estimation function which can be used to choose an abstraction of the system that is more likely to provide optimal average cost. Experiments with ISCAS-85 benchmark circuits indicate that our approach scales to all circuits in the suite except one that has a flat structure not susceptible to useful abstraction.",
"We present the PHATT algorithm for plan recognition. Unlike previous approaches to plan recognition, PHATT is based on a model of plan execution. We show that this clarifies several difficult issues in plan recognition including the execution of multiple interleaved root goals, partially ordered plans, and failing to observe actions. We present the PHATT algorithm's theoretical basis, and an implementation based on tree structures. We also investigate the algorithm's complexity, both analytically and empirically. Finally, we present PHATT's integrated constraint reasoning for parametrized actions and temporal constraints.",
"We propose a new problem we refer to as goal recognition design (grd), in which we take a domain theory and a set of goals and ask the following questions: to what extent do the actions performed by an agent within the model reveal its objective, and what is the best way to modify a model so that any agent acting in the model reveals its objective as early as possible. Our contribution is the introduction of a new measure we call worst case distinctiveness (wcd) with which we assess a grd model. The wcd represents the maximal length of a prefix of an optimal path an agent may take within a system before it becomes clear at which goal it is aiming. To model and solve the grd problem we choose to use the models and tools from the closely related field of automated planning. We present two methods for calculating the wcd of a grd model, one of which is based on a novel compilation to a classical planning problem. We then propose a way to reduce the wcd of a model by limiting the set of available actions an agent can perform and provide a method for calculating the optimal set of actions to be removed from the model. Our empirical evaluation shows the proposed solution to be effective in computing and minimizing wcd.",
"Diagnosis is the task of detecting fault occurrences in a partially observed system. Depending on the possible observations, a discrete-event system may be diagnosable or not. Active diagnosis aims at controlling the system to render it diagnosable. Past research has proposed solutions for this problem, but their complexity remains to be improved. Here, we solve the decision and synthesis problems for active diagnosability, proving that (1) our procedures are optimal with respect to computational complexity, and (2) the memory required for our diagnoser is minimal. We then study the delay between a fault occurrence and its detection by the diagnoser. We construct a memory-optimal diagnoser whose delay is at most twice the minimal delay, whereas the memory required to achieve optimal delay may be highly greater. We also provide a solution for parametrized active diagnosis, where we automatically construct the most permissive controller respecting a given delay.",
"We address the following sensor selection problem. We assume that a dynamic system possesses a certain property, call it Property D, when a set Γ of sensors is used. There is a cost cA associated with each set A of sensors that is a subset of Γ. Given any set of sensors that is a subset of Γ, it is possible to determine, via a test, whether the resulting system-sensor combination possesses Property D. Each test required to check whether or not Property D holds incurs a fixed cost. For each set of sensors A that is a subset of Γ there is an a priori probability pA that the test will be positive, i.e., the system-sensor combination possesses Property D. The objective is to determine a test strategy, i.e., a sequence of tests, to minimize the expected cost, associated with the tests, that is incurred until a least expensive combination of sensors that results in a system-sensor combination possessing Property D is identified. We determine conditions on the sensor costs cA and the a priori probabilities pA under which the strategy that tests combinations of sensors in increasing order of cost is optimal with respect to the aforementioned objective."
]
} |
1703.01083 | 2950348756 | Plan recognition algorithms infer agents' plans from their observed actions. Due to imperfect knowledge about the agent's behavior and the environment, it is often the case that there are multiple hypotheses about an agent's plans that are consistent with the observations, though only one of these hypotheses is correct. This paper addresses the problem of how to disambiguate between hypotheses, by querying the acting agent about whether a candidate plan in one of the hypotheses matches its intentions. This process is performed sequentially and used to update the set of possible hypotheses during the recognition process. The paper defines the sequential plan recognition process (SPRP), which seeks to reduce the number of hypotheses using a minimal number of queries. We propose a number of policies for the SPRP which use maximum likelihood and information gain to choose which plan to query. We show this approach works well in practice on two domains from the literature, significantly reducing the number of hypotheses using fewer queries than a baseline approach. Our results can inform the design of future plan recognition systems that interleave the recognition process with intelligent interventions of their users. | Incomplete plans include nodes labeled with complex level actions that have not been decomposed using a refinement method. These nodes represent activities that the agent will carry out in future and have yet to be refined. This is similar to the least commitment policies used by some planning approaches to delay variable bindings and commitments as much as possible @cite_20 @cite_21 @cite_14 . | {
"cite_N": [
"@cite_14",
"@cite_21",
"@cite_20"
],
"mid": [
"2153432872",
"98519143",
""
],
"abstract": [
"■ Automated planning technology has become mature enough to be useful in applications that range from game playing to control of space vehicles. In this article, Dana Nau discusses where automated-planning research has been, where it is likely to go, where he thinks it should go, and some major challenges in getting there. The article is an updated version of Nau’s invited talk at AAAI-05 in Pittsburgh, Pennsylvania.",
"Recent applications of plan recognition face several open challenges: (i) matching observations to the plan library is costly, especially with complex multi-featured observations; (ii) computing recognition hypotheses is expensive. We present techniques for addressing these challenges. First, we show a novel application of machine-learning decision-tree to efficiently map multi-featured observations to matching plan steps. Second, we provide efficient lazy-commitment recognition algorithms that avoid enumerating hypotheses with every observation, instead only carrying out bookkeeping incrementally. The algorithms answer queries as to the current state of the agent, as well as its history of selected states. We provide empirical results demonstrating their efficiency and capabilities.",
""
]
} |
1703.00518 | 2592433683 | Consumer protection agencies are charged with safeguarding the public from hazardous products, but the thousands of products under their jurisdiction make it challenging to identify and respond to consumer complaints quickly. From the consumer's perspective, online reviews can provide evidence of product defects, but manually sifting through hundreds of reviews is not always feasible. In this paper, we propose a system to mine Amazon.com reviews to identify products that may pose safety or health hazards. Since labeled data for this task are scarce, our approach combines positive unlabeled learning with domain adaptation to train a classifier from consumer complaints submitted to the U.S. Consumer Product Safety Commission. On a validation set of manually annotated Amazon product reviews, we find that our approach results in an absolute F1 score improvement of 8 over the best competing baseline. Furthermore, we apply the classifier to Amazon reviews of known recalled products; the classifier identifies reviews reporting safety hazards prior to the recall date for 45 of the products. This suggests that the system may be able to provide an early warning system to alert consumers to hazardous products before an official recall is announced. | Very recently, winkler2016toy used a keyword based approach to identify online reviews that report injuries from toy products. In addition to the manual effort required to curate the keyword list, the approach appears to produce low precision rates (9-44 sixteen mentioned an injury. The authors apply the same approach to detect defects in dishwashers, with similar precision values @cite_0 . 
In contrast, our proposed approach fits a statistical classifier with no human intervention required, resulting in $>85 Other recent work has identified vehicle defects in consumer reviews using standard text classification, with accuracies ranging from 62 not feasible to annotate sufficient messages to use standard supervised learning. Additionally, zhang2015predicting built an unsupervised approach to clustering vehicle defects by subcategory. Such a method may serve to complement our present work by providing more fine-grained clusters of reviews by hazard type. | {
"cite_N": [
"@cite_0"
],
"mid": [
"1983046573"
],
"abstract": [
"The recent surge in the usage of social media has created an enormous amount of user-generated content (UGC). While there are streams of research that seek to mine UGC, these research studies seldom tackle analysis of this textual content from a quality management perspective. In this study, we synthesize existing research studies on text mining and propose an integrated text analytic framework for product defect discovery. The framework effectively leverages rich social media content and quantifies the text using various automatically extracted signal cues. These extracted signal cues can then be used as modeling inputs for product defect discovery. We showcase the usefulness of the framework by performing product defect discovery using UGC in both the automotive and the consumer electronics domains. We use principal component analysis and logistic regression to produce a multivariate explanatory analysis relating defects to quantitative measures derived from text. For our samples, we find that a selection of distinctive terms, product features, and semantic factors are strong indicators of defects, whereas stylistic, social, and sentiment features are not. For high sales volume products, we demonstrate that significant corporate value is derivable from a reduction in defect discovery time and consequently defective product units in circulation."
]
} |
1703.00845 | 2592732803 | This paper presents a study on the use of Convolutional Neural Networks for camera relocalisation and its application to map compression. We follow state of the art visual relocalisation results and evaluate response to different data inputs -- namely, depth, grayscale, RGB, spatial position and combinations of these. We use a CNN map representation and introduce the notion of CNN map compression by using a smaller CNN architecture. We evaluate our proposal in a series of publicly available datasets. This formulation allows us to improve relocalisation accuracy by increasing the number of training trajectories while maintaining a constant-size CNN. | Related to map compression, dimensionality reduction through neural networks was first discussed in @cite_15 . In @cite_34 , an evaluation of up-to-date data encoding algorithms for object recognition was presented, and it was extended in @cite_21 to introduce the use of Convolutional Neural Networks for the same task. | {
"cite_N": [
"@cite_15",
"@cite_34",
"@cite_21"
],
"mid": [
"2100495367",
"1976921161",
"1994002998"
],
"abstract": [
"High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.",
"A large number of novel encodings for bag of visual words models have been proposed in the past two years to improve on the standard histogram of quantized local features. Examples include locality-constrained linear encoding [23], improved Fisher encoding [17], super vector encoding [27], and kernel codebook encoding [20]. While several authors have reported very good results on the challenging PASCAL VOC classification data by means of these new techniques, differences in the feature computation and learning algorithms, missing details in the description of the methods, and different tuning of the various components, make it impossible to compare directly these methods and hard to reproduce the results reported. This paper addresses these shortcomings by carrying out a rigorous evaluation of these new techniques by: (1) fixing the other elements of the pipeline (features, learning, tuning); (2) disclosing all the implementation details, and (3) identifying both those aspects of each method which are particularly important to achieve good performance, and those aspects which are less critical. This allows a consistent comparative analysis of these encoding methods. Several conclusions drawn from our analysis cannot be inferred from the original publications.",
"The latest generation of Convolutional Neural Networks (CNN) have achieved impressive results in challenging benchmarks on image recognition and object detection, significantly raising the interest of the community in these methods. Nevertheless, it is still unclear how different CNN methods compare with each other and with previous state-of-the-art shallow representations such as the Bag-of-Visual-Words and the Improved Fisher Vector. This paper conducts a rigorous evaluation of these new techniques, exploring different deep architectures and comparing them on a common ground, identifying and disclosing important implementation details. We identify several useful properties of CNN-based representations, including the fact that the dimensionality of the CNN output layer can be reduced significantly without having an adverse effect on performance. We also identify aspects of deep and shallow methods that can be successfully shared. In particular, we show that the data augmentation techniques commonly applied to CNN-based methods can also be applied to shallow methods, and result in an analogous performance boost. Source code and models to reproduce the experiments in the paper is made publicly available."
]
} |
1703.00845 | 2592732803 | This paper presents a study on the use of Convolutional Neural Networks for camera relocalisation and its application to map compression. We follow state of the art visual relocalisation results and evaluate response to different data inputs -- namely, depth, grayscale, RGB, spatial position and combinations of these. We use a CNN map representation and introduce the notion of CNN map compression by using a smaller CNN architecture. We evaluate our proposal in a series of publicly available datasets. This formulation allows us to improve relocalisation accuracy by increasing the number of training trajectories while maintaining a constant-size CNN. | @cite_24 introduced the idea of egomotion in CNN training by concatenating the output of two parallel neural networks with two different views of the same image; at the end, this architecture learns valuable features independent of the point of view. | {
"cite_N": [
"@cite_24"
],
"mid": [
"2951590555"
],
"abstract": [
"The dominant paradigm for feature learning in computer vision relies on training neural networks for the task of object recognition using millions of hand labelled images. Is it possible to learn useful features for a diverse set of visual tasks using any other form of supervision? In biology, living organisms developed the ability of visual perception for the purpose of moving and acting in the world. Drawing inspiration from this observation, in this work we investigate if the awareness of egomotion can be used as a supervisory signal for feature learning. As opposed to the knowledge of class labels, information about egomotion is freely available to mobile agents. We show that given the same number of training images, features learnt using egomotion as supervision compare favourably to features learnt using class-label as supervision on visual tasks of scene recognition, object recognition, visual odometry and keypoint matching."
]
} |
1703.00845 | 2592732803 | This paper presents a study on the use of Convolutional Neural Networks for camera relocalisation and its application to map compression. We follow state of the art visual relocalisation results and evaluate response to different data inputs -- namely, depth, grayscale, RGB, spatial position and combinations of these. We use a CNN map representation and introduce the notion of CNN map compression by using a smaller CNN architecture. We evaluate our proposal in a series of publicly available datasets. This formulation allows us to improve relocalisation accuracy by increasing the number of training trajectories while maintaining a constant-size CNN. | In @cite_27 , it was concluded that sophisticated architectures compensate for lack of training. @cite_30 explore this idea for single view depth estimation, where they present a stereopsis-based auto-encoder that uses few instances of the KITTI dataset. Then, @cite_11 , @cite_2 , and @cite_16 continued studying the use of elaborated CNN architectures for depth estimation. | {
"cite_N": [
"@cite_30",
"@cite_27",
"@cite_2",
"@cite_16",
"@cite_11"
],
"mid": [
"2949634581",
"2546302380",
"2124907686",
"",
"2951234442"
],
"abstract": [
"A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manu- ally labelled data. In this work we propose a unsupervised framework to learn a deep convolutional neural network for single view depth predic- tion, without requiring a pre-training stage or annotated ground truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photomet- ric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset (without any further augmentation) gives com- parable performance to that of the state of art supervised methods for single view depth estimation.",
"In many recent object recognition systems, feature extraction stages are generally composed of a filter bank, a non-linear transformation, and some sort of feature pooling layer. Most systems use only one stage of feature extraction in which the filters are hard-wired, or two stages where the filters in one or both stages are learned in supervised or unsupervised mode. This paper addresses three questions: 1. How does the non-linearities that follow the filter banks influence the recognition accuracy? 2. does learning the filter banks in an unsupervised or supervised manner improve the performance over random filters or hardwired filters? 3. Is there any advantage to using an architecture with two stages of feature extraction, rather than one? We show that using non-linearities that include rectification and local contrast normalization is the single most important ingredient for good accuracy on object recognition benchmarks. We show that two stages of feature extraction yield better accuracy than one. Most surprisingly, we show that a two-stage system with random filters can yield almost 63 recognition rate on Caltech-101, provided that the proper non-linearities and pooling layers are used. Finally, we show that with supervised refinement, the system achieves state-of-the-art performance on NORB dataset (5.6 ) and unsupervised pre-training followed by supervised refinement produces good accuracy on Caltech-101 (≫ 65 ), and the lowest known error rate on the undistorted, unprocessed MNIST dataset (0.53 ).",
"Predicting the depth (or surface normal) of a scene from single monocular color images is a challenging task. This paper tackles this challenging and essentially underdetermined problem by regression on deep convolutional neural network (DCNN) features, combined with a post-processing refining step using conditional random fields (CRF). Our framework works at two levels, super-pixel level and pixel level. First, we design a DCNN model to learn the mapping from multi-scale image patches to depth or surface normal values at the super-pixel level. Second, the estimated super-pixel depth or surface normal is refined to the pixel level by exploiting various potentials on the depth or surface normal map, which includes a data term, a smoothness term among super-pixels and an auto-regression term characterizing the local structure of the estimation map. The inference problem can be efficiently solved because it admits a closed-form solution. Experiments on the Make3D and NYU Depth V2 datasets show competitive results compared with recent state-of-the-art methods.",
"",
"Predicting depth is an essential component in understanding the 3D geometry of a scene. While for stereo images local correspondence suffices for estimation, finding depth relations from a single image is less straightforward, requiring integration of both global and local information from various cues. Moreover, the task is inherently ambiguous, with a large source of uncertainty coming from the overall scale. In this paper, we present a new method that addresses this task by employing two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally. We also apply a scale-invariant error to help measure depth relations rather than scale. By leveraging the raw datasets as large sources of training data, our method achieves state-of-the-art results on both NYU Depth and KITTI, and matches detailed depth boundaries without the need for superpixelation."
]
} |
1703.00845 | 2592732803 | This paper presents a study on the use of Convolutional Neural Networks for camera relocalisation and its application to map compression. We follow state of the art visual relocalisation results and evaluate response to different data inputs -- namely, depth, grayscale, RGB, spatial position and combinations of these. We use a CNN map representation and introduce the notion of CNN map compression by using a smaller CNN architecture. We evaluate our proposal in a series of publicly available datasets. This formulation allows us to improve relocalisation accuracy by increasing the number of training trajectories while maintaining a constant-size CNN. | Moving from depth to pose estimation was a logical step. One of the first 6D camera pose regressors was presented in @cite_5 via a general regression NN (GRNN) with synthetic poses. More recently, PoseNet was presented in @cite_10 , where they regress the camera pose using a CNN model. This idea is also explored in @cite_19 for image matching via training a CNN for frame interpolation through video sequences. | {
"cite_N": [
"@cite_19",
"@cite_5",
"@cite_10"
],
"mid": [
"2305401973",
"2076148183",
"2951336016"
],
"abstract": [
"This work presents an unsupervised learning based approach to the ubiquitous computer vision problem of image matching. We start from the insight that the problem of frame interpolation implicitly solves for inter-frame correspondences. This permits the application of analysis-by-synthesis: we first train and apply a Convolutional Neural Network for frame interpolation, then obtain correspondences by inverting the learned CNN. The key benefit behind this strategy is that the CNN for frame interpolation can be trained in an unsupervised manner by exploiting the temporal coherence that is naturally contained in real-world video sequences. The present model therefore learns image matching by simply “watching videos”. Besides a promise to be more generally applicable, the presented approach achieves surprising performance comparable to traditional empirically designed methods.",
"With the advent of real-time dense scene reconstruction from handheld cameras, one key aspect to enable robust operation is the ability to relocalise in a previously mapped environment or after loss of measurement. Tasks such as operating on a workspace, where moving objects and occlusions are likely, require a recovery competence in order to be useful. For RGBD cameras, this must also include the ability to relocalise in areas with reduced visual texture. This paper describes a method for relocalisation of a freely moving RGBD camera in small workspaces. The approach combines both 2D image and 3D depth information to estimate the full 6D camera pose. The method uses a general regression over a set of synthetic views distributed throughout an informed estimate of possible camera viewpoints. The resulting relocalisation is accurate and works faster than framerate and the system’s performance is demonstrated through a comparison against visual and geometric feature matching relocalisation techniques on sequences with moving objects and minimal texture.",
"We present a robust and real-time monocular six degree of freedom relocalization system. Our system trains a convolutional neural network to regress the 6-DOF camera pose from a single RGB image in an end-to-end manner with no need of additional engineering or graph optimisation. The algorithm can operate indoors and outdoors in real time, taking 5ms per frame to compute. It obtains approximately 2m and 6 degree accuracy for large scale outdoor scenes and 0.5m and 10 degree accuracy indoors. This is achieved using an efficient 23 layer deep convnet, demonstrating that convnets can be used to solve complicated out of image plane regression problems. This was made possible by leveraging transfer learning from large scale classification data. We show the convnet localizes from high level features and is robust to difficult lighting, motion blur and different camera intrinsics where point based SIFT registration fails. Furthermore we show how the pose feature that is produced generalizes to other scenes allowing us to regress pose with only a few dozen training examples. PoseNet code, dataset and an online demonstration is available on our project webpage, at this http URL"
]
} |
1703.00441 | 2593649546 | Learning to Optimize is a recently proposed framework for learning optimization algorithms using reinforcement learning. In this paper, we explore learning an optimization algorithm for training shallow neural nets. Such high-dimensional stochastic optimization problems present interesting challenges for existing reinforcement learning algorithms. We develop an extension that is suited to learning optimization algorithms in this setting and demonstrate that the learned optimization algorithm consistently outperforms other known optimization algorithms even on unseen tasks and is robust to changes in stochasticity of gradients and the neural net architecture. More specifically, we show that an optimization algorithm trained with the proposed method on the problem of training a neural net on MNIST generalizes to the problems of training neural nets on the Toronto Faces Dataset, CIFAR-10 and CIFAR-100. | Methods in this category @cite_1 aim to learn what parameter values of the base-level learner are useful across a family of related tasks. The meta-knowledge captures commonalities shared by tasks in the family, which enables learning on a new task from the family to be done more quickly. Most early methods fall into this category; this line of work has blossomed into an area that has later become known as transfer learning and multi-task learning. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2427497464"
],
"abstract": [
"The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art."
]
} |
1703.00441 | 2593649546 | Learning to Optimize is a recently proposed framework for learning optimization algorithms using reinforcement learning. In this paper, we explore learning an optimization algorithm for training shallow neural nets. Such high-dimensional stochastic optimization problems present interesting challenges for existing reinforcement learning algorithms. We develop an extension that is suited to learning optimization algorithms in this setting and demonstrate that the learned optimization algorithm consistently outperforms other known optimization algorithms even on unseen tasks and is robust to changes in stochasticity of gradients and the neural net architecture. More specifically, we show that an optimization algorithm trained with the proposed method on the problem of training a neural net on MNIST generalizes to the problems of training neural nets on the Toronto Faces Dataset, CIFAR-10 and CIFAR-100. | Methods in this category @cite_2 aim to learn which base-level learner achieves the best performance on a task. The meta-knowledge captures correlations between different tasks and the performance of different base-level learners on those tasks. One challenge under this setting is to decide on a parameterization of the space of base-level learners that is both rich enough to be capable of representing disparate base-level learners and compact enough to permit tractable search over this space. proposes a nonparametric representation and stores examples of different base-level learners in a database, whereas proposes representing base-level learners as general-purpose programs. The former has limited representation power, while the latter makes search and learning in the space of base-level learners intractable. views the (online) training procedure of any base-learner as a black box function that maps a sequence of training examples to a sequence of predictions and models it as a recurrent neural net. 
Under this formulation, meta-training reduces to training the recurrent net, and the base-level learner is encoded in the memory state of the recurrent net. | {
"cite_N": [
"@cite_2"
],
"mid": [
"116375701"
],
"abstract": [
"Metalearning is the study of principled methods that exploit metaknowledge to obtain efficient models and solutions by adapting machine learning and data mining processes. While the variety of machine learning and data mining techniques now available can, in principle, provide good model solutions, a methodology is still needed to guide the search for the most appropriate model in an efficient way. Metalearning provides one such methodology that allows systems to become more effective through experience. This book discusses several approaches to obtaining knowledge concerning the performance of machine learning and data mining algorithms. It shows how this knowledge can be reused to select, combine, compose and adapt both algorithms and models to yield faster, more effective solutions to data mining problems. It can thus help developers improve their algorithms and also develop learning systems that can improve themselves. The book will be of interest to researchers and graduate students in the areas of machine learning, data mining and artificial intelligence."
]
} |
1703.00807 | 2594557668 | With the emerging sensing technologies, such as mobile crowdsensing and Internet of Things, people-centric data can be efficiently collected and used for analytics and optimization purposes. These data are typically required to develop and render people-centric services. In this paper, we address the privacy implication, optimal pricing, and bundling of people-centric services. We first define the inverse correlation between the service quality and privacy level from data analytics perspectives. We then present the profit maximization models of selling standalone, complementary, and substitute services. Specifically, the closed-form solutions of the optimal privacy level and subscription fee are derived to maximize the gross profit of service providers. For interrelated people-centric services, we show that cooperation by service bundling of complementary services is profitable compared with the separate sales but detrimental for substitutes. We also show that the market value of a service bundle is correlated with the degree of contingency between the interrelated services. Finally, we incorporate the profit sharing models from game theory for dividing the bundling profit among the cooperative service providers. | People are the primary focus in people-centric sensing, which has applications in transportation systems @cite_23 , assistive healthcare @cite_18 , and urban monitoring @cite_26 , just to name a few. In this section, we first review related work on pricing models in sensing and communication systems. Then, we discuss the crucial issue of privacy awareness in people-centric sensing. Finally, we review related work on pricing and incentive mechanisms for mobile crowdsensing. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_23"
],
"mid": [
"2034161254",
"2052052388",
"2108196201"
],
"abstract": [
"As the domains of pervasive computing and sensor networking are expanding, there is an ongoing trend towards assistive living and healthcare support environments that can effectively assimilate these technologies according to human needs. Most of the existing research in assistive healthcare follows a more passive approach and has focused on collecting and processing data using a static-topology and an application-aware infrastructure. However, with the technological advances in sensing, computation, storage, and communications, a new era is about to emerge changing the traditional view of sensor-based assistive environments where people are passive data consumers, with one where people carry mobile sensing elements involving large volumes of data related to everyday human activities. This evolution will be driven by people-centric sensing and will turn mobile phones into global mobile sensing devices enabling thousands new personal, social, and public sensing applications. In this paper, we discuss our vision for people-centric sensing in assistive healthcare environments and study the security challenges it brings. This highly dynamic and mobile setting presents new challenges for information security, data privacy and ethics, caused by the ubiquitous nature of data traces originating from sensors carried by people. We aim to instigate discussion on these critical issues because people-centric sensing will never succeed without adequate provisions on security and privacy. To that end, we discuss the latest advances in security and privacy protection strategies that hold promise in this new exciting paradigm. We hope this work will better highlight the need for privacy in people-centric sensing applications and spawn further research in this area. Copyright © 2011 John Wiley & Sons, Ltd.",
"People-centric urban sensing systems (PC-USSs) refer to using human-carried mobile devices such as smartphones and tablets for urban-scale distributed data collection, analysis, and sharing to facilitate interaction between humans and their surrounding environments. A main obstacle to the widespread deployment and adoption of PC-USSs is the privacy concerns of participating individuals as well as the concerns about data integrity. To tackle this open challenge, this paper presents the design and evaluation of VPA, a novel peer-to-peer based solution to verifiable privacy-preserving data aggregation in PC-USSs. VPA achieves strong user privacy by letting each user exchange random shares of its datum with other peers, while at the same time ensures data integrity through a combination of Trusted Platform Module and homomorphic message authentication code. VPA can support a wide range of statistical additive and non-additive aggregation functions such as Sum, Average, Variance, Count, Max/Min, Median, Histogram, and Percentile with accurate aggregation results. The efficacy and efficiency of VPA are confirmed by thorough analytical and simulation results.",
"For the last two decades, intelligent transportation systems (ITS) have emerged as an efficient way of improving the performance of transportation systems, enhancing travel security, and providing more choices to travelers. A significant change in ITS in recent years is that much more data are collected from a variety of sources and can be processed into various forms for different stakeholders. The availability of a large amount of data can potentially lead to a revolution in ITS development, changing an ITS from a conventional technology-driven system into a more powerful multifunctional data-driven intelligent transportation system (D2ITS) : a system that is vision, multisource, and learning algorithm driven to optimize its performance. Furthermore, D2ITS is trending to become a privacy-aware people-centric more intelligent system. In this paper, we provide a survey on the development of D2ITS, discussing the functionality of its key components and some deployment issues associated with D2ITS Future research directions for the development of D2ITS is also presented."
]
} |
1703.00807 | 2594557668 | With the emerging sensing technologies, such as mobile crowdsensing and Internet of Things, people-centric data can be efficiently collected and used for analytics and optimization purposes. These data are typically required to develop and render people-centric services. In this paper, we address the privacy implication, optimal pricing, and bundling of people-centric services. We first define the inverse correlation between the service quality and privacy level from data analytics perspectives. We then present the profit maximization models of selling standalone, complementary, and substitute services. Specifically, the closed-form solutions of the optimal privacy level and subscription fee are derived to maximize the gross profit of service providers. For interrelated people-centric services, we show that cooperation by service bundling of complementary services is profitable compared with the separate sales but detrimental for substitutes. We also show that the market value of a service bundle is correlated with the degree of contingency between the interrelated services. Finally, we incorporate the profit sharing models from game theory for dividing the bundling profit among the cooperative service providers. | Pricing models ensure financial stability and resiliency in sensing and communication systems. The authors in @cite_27 presented a cooperative pricing model for Internet providers that jointly offer Internet service as one coalition. Cooperative pricing increases profit and encourages the Internet providers to upgrade their network connections. A pricing scheme based on customer data usage of Internet services was introduced in @cite_9 . Unlike flat-rate pricing, usage-based pricing enables a fair allocation of Internet resources among the customers.
The authors in @cite_2 presented a pricing model for femtocell and macrocell access by mobile devices that enables high service quality and maximizes the profit of network operators. The authors in @cite_29 proposed pricing and transmission scheduling models to maximize the profit of wireless network access by mobile customers. The customer demand is modeled as a Markov chain, where applying only two price options is found sufficient for each demand state. Pricing people-centric services is more challenging than pricing other communication systems. Specifically, the resources and utility of people-centric services are not as easily measured as those of other systems; e.g., the bandwidth and connection speed are easily defined for an Internet service. | {
"cite_N": [
"@cite_9",
"@cite_27",
"@cite_29",
"@cite_2"
],
"mid": [
"2345043033",
"2129573324",
"2133471947",
"2072671275"
],
"abstract": [
"As Internet traffic grows exponentially due to the pervasive Internet accesses via mobile devices and increasing adoptions of cloud-based applications, broadband providers start to shift from flat-rate to usage-based pricing, which has gained support from regulators such as the FCC. We consider generic congestion-prone network services and study usage-based pricing of service providers under market competition. Based on a novel model that captures users' preferences over price and congestion alternatives, we derive the induced congestion and market share of the service providers under a market equilibrium and design algorithms to calculate them. By analyzing different market structures, we reveal how users' value on usage and sensitivity to congestion influence the optimal price, revenue, and competition of service providers, as well as the social welfare. We also obtain the conditions under which monopolistic providers have strong incentives to implement service differentiation via Paris Metro Pricing and whether regulators should encourage such practices.",
"One of the challenges facing the networking industry today is to increase the profitability of Internet services. This calls for economic mechanisms that can enable providers to charge more for better services and collect a fair share of the increased revenues. In this paper, we present a generic model for pricing Internet services that are jointly offered by a group of providers. We show that non-cooperative pricing strategies between providers may lead to unfair distribution of profit and may even discourage future upgrades to the network. As an alternative, we propose a fair revenue-sharing policy based on the weighted proportional fairness criterion. We show that this fair allocation policy encourages collaboration among providers and hence can produce higher profits for all providers. Based on the analysis, we suggest a scalable algorithm for providers to implement this policy in a distributed way and study its convergence property.",
"This paper considers the problem of pricing and transmission scheduling for an access point (AP) in a wireless network, where the AP provides service to a set of mobile users. The goal of the AP is to maximize its own time-average profit. We first obtain the optimum time-average profit of the AP and prove the \"Optimality of Two Prices\" theorem. We then develop an online scheme that jointly solves the pricing and transmission scheduling problem in a dynamic environment. The scheme uses an admission price and a business decision as tools to regulate the incoming traffic and to maximize revenue. We show the scheme can achieve any average profit that is arbitrarily close to the optimum, with a tradeoff in average delay. This holds for general Markovian dynamics for channel and user state variation, and does not require a priori knowledge of the Markov model. The model and methodology developed in this paper are general and apply to other stochastic settings where a single party tries to maximize its time-average profit.",
"Femtocells can effectively resolve the poor connectivity issue of indoor cellular users. This paper investigates the economic incentive for a cellular operator to add femtocell service on top of its existing macrocell service. We model the interactions between a cellular operator and users as a Stackelberg game: The operator first determines spectrum allocations and pricings of femtocell and macrocell services, and then heterogeneous users choose between the two services and the amount of resource to request. In the ideal case where the femtocell service has the same full spatial coverage as the macrocell service, we show that the operator will choose to provide femtocell service only, as this leads to a better user quality of service and a higher operator profit. However, if we impose the constraint that no users' payoffs decrease after introducing the femtocell service, then the operator will always continue providing the macrocell service (with or without the femtocell service). Furthermore, we study the impact of operational cost, limited coverage, and spatial reuse on femtocell service provision. As the operational cost increases, fewer users are served by femtocell service and the operator's profit decreases. When the femtocell service has limited spatial coverage, the operator always provides the macrocell service beside the femtocell service. However, when the coverage is high or the total resource is low, the operator will set the prices such that all users who can access femtocell will choose to use the femtocell service only. Finally, spatial reuse of spectrum will increase the efficiency of femtocell services and gives the operator more incentives to allocate spectrum to femtocells."
]
} |
1703.00807 | 2594557668 | With the emerging sensing technologies, such as mobile crowdsensing and Internet of Things, people-centric data can be efficiently collected and used for analytics and optimization purposes. These data are typically required to develop and render people-centric services. In this paper, we address the privacy implication, optimal pricing, and bundling of people-centric services. We first define the inverse correlation between the service quality and privacy level from data analytics perspectives. We then present the profit maximization models of selling standalone, complementary, and substitute services. Specifically, the closed-form solutions of the optimal privacy level and subscription fee are derived to maximize the gross profit of service providers. For interrelated people-centric services, we show that cooperation by service bundling of complementary services is profitable compared with the separate sales but detrimental for substitutes. We also show that the market value of a service bundle is correlated with the degree of contingency between the interrelated services. Finally, we incorporate the profit sharing models from game theory for dividing the bundling profit among the cooperative service providers. | Pricing and incentive mechanisms are required to encourage participation in data collection. A reward-based incentive mechanism for mobile crowdsensing was presented in @cite_22 . The reservation wages of the participants are utilized to reduce the total data cost by selecting the sufficient set of participants with the lowest rates. In @cite_7 , the participants are paid according to their reliability. The reliability is defined as a probabilistic process and measured based on the historical records of the participants in completing crowdsensing tasks. The authors in @cite_6 considered the heterogeneity of crowdsensing participants and proposed an asymmetric payment model that encourages competition among the participants.
In @cite_20 , the authors introduced a profit maximization and pricing model to optimize the amount of data that should be bought from the sensing participants. None of the existing papers on people-centric sensing considers the problem of jointly optimizing the pricing and privacy level in people-centric services where data analytics is heavily applied. Moreover, existing works do not consider bundling interrelated people-centric services as complements or substitutes. Therefore, there is a practical demand for the privacy-aware pricing, bundling, and profit allocation models that are the major contributions of this paper. | {
"cite_N": [
"@cite_20",
"@cite_22",
"@cite_7",
"@cite_6"
],
"mid": [
"2258723440",
"2126045912",
"2032596903",
"2222808023"
],
"abstract": [
"Big data has been emerging as a new approach in utilizing large datasets to optimize complex system operations. Big data is fueled with Internet-of-Things (IoT) services that generate immense sensory data from numerous sensors and devices. While most current research focus of big data is on machine learning and resource management design, the economic modeling and analysis have been largely overlooked. This paper thus investigates the big data market model and optimal pricing scheme. We first study the utility of data from the data science perspective, i.e., using the machine learning methods. We then introduce the market model and develop an optimal pricing scheme afterward. The case study shows clearly the suitability of the proposed data utility functions. The numerical examples demonstrate that big data and IoT service provider can achieve the maximum profit through the proposed market model.",
"This paper analyzes and compares different incentive mechanisms for a client to motivate the collaboration of smartphone users on both data acquisition and distributed computing applications.",
"The recent paradigm of mobile crowd sensing (MCS) enables a broad range of mobile applications. A critical challenge for the paradigm is to incentivize phone users to be workers providing sensing services. While some theoretical incentive mechanisms for general-purpose crowdsourcing have been proposed, it is still an open issue as to how to incorporate the theoretical framework into the practical MCS system. In this paper, we propose an incentive mechanism based on a quality-driven auction (QDA). The mechanism is specifically for the MCS system, where the worker is paid off based on the quality of sensed data instead of working time, as adopted in the literature. We theoretically prove that the mechanism is truthful, individual rational, platform profitable, and social-welfare optimal. Moreover, we incorporate our incentive mechanism into a Wi-Fi fingerprint-based indoor localization system to incentivize the MCS-based fingerprint collection. We present a probabilistic model to evaluate the reliability of the submitted data, which resolves the issue that the ground truth for the data reliability is unavailable. We realize and deploy an indoor localization system to evaluate our proposed incentive mechanism and present extensive experimental results.",
"Many crowdsourcing scenarios are heterogeneous in the sense that, not only the workers’ types (e.g., abilities or costs) are different, but the beliefs (probabilistic knowledge) about their respective types are also different. In this paper, we design an incentive mechanism for such scenarios using an asymmetric all-pay contest (or auction) model. Our design objective is an optimal mechanism, i.e., one that maximizes the crowdsourcing revenue minus cost. To achieve this, we furnish the contest with a prize tuple which is an array of reward functions each for a potential winner. We prove and characterize the unique equilibrium of this contest, and solve the optimal prize tuple. In addition, this study discovers a counter-intuitive property, called strategy autonomy (SA), which means that heterogeneous workers behave independently of one another as if they were in a homogeneous setting. In game-theoretical terms, it says that an asymmetric auction admits a symmetric equilibrium. Not only theoretically interesting, but SA also has important practical implications on mechanism complexity, energy efficiency, crowdsourcing revenue, and system scalability. By scrutinizing seven mechanisms, our extensive performance evaluation demonstrates the superior performance of our mechanism as well as offers insights into the SA property."
]
} |
1703.00503 | 2594523482 | In this paper, we present a general framework for learning social affordance grammar as a spatiotemporal AND-OR graph (ST-AOG) from RGB-D videos of human interactions, and transfer the grammar to humanoids to enable a real-time motion inference for human-robot interaction (HRI). Based on Gibbs sampling, our weakly supervised grammar learning can automatically construct a hierarchical representation of an interaction with long-term joint sub-tasks of both agents and short-term atomic actions of individual agents. Based on a new RGB-D video dataset with rich instances of human interactions, our experiments of Baxter simulation, human evaluation, and real Baxter test demonstrate that the model learned from limited training data successfully generates human-like behaviors in unseen scenarios and outperforms both baselines. | In the existing affordance research, the domain is usually limited to object affordances @cite_30 @cite_10 @cite_3 @cite_16 @cite_24 @cite_5 @cite_8 @cite_11 , e.g., possible manipulations of objects, and indoor scene affordances @cite_25 @cite_26 , e.g., walkable or standable surfaces, where social interactions are not considered. @cite_21 is the first to propose a social affordance representation for HRI. However, it could only synthesize human skeletons rather than control a real robot, and did not have the ability to generalize the interactions to unseen scenarios. We are also interested in learning social affordance knowledge, but emphasize transferring such knowledge to a humanoid in a more flexible setting. | {
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_8",
"@cite_21",
"@cite_3",
"@cite_24",
"@cite_5",
"@cite_16",
"@cite_10",
"@cite_25",
"@cite_11"
],
"mid": [
"2155217025",
"2115815548",
"2304253768",
"2963738870",
"2028798328",
"276494664",
"",
"1891689858",
"2149173366",
"",
"1920293286"
],
"abstract": [
"Affordances encode relationships between actions, objects, and effects. They play an important role in basic cognitive capabilities such as prediction and planning. We address the problem of learning affordances through the interaction of a robot with the environment, a key step to understand the world properties and develop social skills. We present a general model for learning object affordances using Bayesian networks integrated within a general developmental architecture for social robots. Since learning is based on a probabilistic model, the approach is able to deal with uncertainty, redundancy, and irrelevant information. We demonstrate successful learning in the real world by having a humanoid robot interact with objects. We illustrate the benefits of the acquired knowledge in imitation games.",
"For scene understanding, one popular approach has been to model the object-object relationships. In this paper, we hypothesize that such relationships are only an artifact of certain hidden factors, such as humans. For example, the objects, monitor and keyboard, are strongly spatially correlated only because a human types on the keyboard while watching the monitor. Our goal is to learn this hidden human context (i.e., the human-object relationships), and also use it as a cue for labeling the scenes. We present Infinite Factored Topic Model (IFTM), where we consider a scene as being generated from two types of topics: human configurations and human-object relationships. This enables our algorithm to hallucinate the possible configurations of the humans in the scene parsimoniously. Given only a dataset of scenes containing objects but not humans, we show that our algorithm can recover the human object relationships. We then test our algorithm on the task of attribute and object labeling in 3D scenes and show consistent improvements over the state-of-the-art.",
"Semantic information can help robots understand unknown environments better. In order to obtain semantic information efficiently and link it to a metric map, we present a new robot semantic mapping approach through human activity recognition in a human-robot coexisting environment. An intelligent mobile robot platform called ASCCbot creates a metric map while wearable motion sensors attached to the human body are used to recognize human activities. Combining pre-learned models of activity-furniture correlation and location-furniture correlation, the robot determines the probability distribution of the furniture types through a Bayesian framework and labels them on the metric map. Computer simulations and real experiments demonstrate that the proposed approach is able to create a semantic map of an indoor environment effectively. A framework for robot semantic mapping through human activity recognition. Human activity recognition is realized through wearable motion sensors. Validated through both simulation and experiments.",
"In this paper, we present an approach for robot learning of social affordance from human activity videos. We consider the problem in the context of human-robot interaction: Our approach learns structural representations of human-human (and human-object-human) interactions, describing how body-parts of each agent move with respect to each other and what spatial relations they should maintain to complete each sub-event (i.e., sub-goal). This enables the robot to infer its own movement in reaction to the human body motion, allowing it to naturally replicate such interactions. We introduce the representation of social affordance and propose a generative model for its weakly supervised learning from human demonstration videos. Our approach discovers critical steps (i.e., latent sub-events) in an interaction and the typical motion associated with them, learning what body-parts should be involved and how. The experimental results demonstrate that our Markov Chain Monte Carlo (MCMC) based learning algorithm automatically discovers semantically meaningful social affordance from RGB-D videos, which allows us to generate appropriate full body motion for an agent.",
"Affordances define the action possibilities on an object in the environment and in robotics they play a role in basic cognitive capabilities. Previous works have focused on affordance models for just one object even though in many scenarios they are defined by configurations of multiple objects that interact with each other. We employ recent advances in statistical relational learning to learn affordance models in such cases. Our models generalize over objects and can deal effectively with uncertainty. Two-object interaction models are learned from robotic interaction with the objects in the world and employed in situations with arbitrary numbers of objects. We illustrate these ideas with experimental results of an action recognition task where a robot manipulates objects on a shelf.",
"Objects in human environments support various functionalities which govern how people interact with their environments in order to perform tasks. In this work, we discuss how to represent and learn a functional understanding of an environment in terms of object affordances. Such an understanding is useful for many applications such as activity detection and assistive robotics. Starting with a semantic notion of affordances, we present a generative model that takes a given environment and human intention into account, and grounds the affordances in the form of spatial locations on the object and temporal trajectories in the 3D environment. The probabilistic model also allows uncertainties and variations in the grounded affordances. We apply our approach on RGB-D videos from Cornell Activity Dataset, where we first show that we can successfully ground the affordances, and we then show that learning such affordances improves performance in the labeling tasks.",
"",
"Reasoning about objects and their affordances is a fundamental problem for visual intelligence. Most of the previous work casts this problem as a classification task where separate classifiers are trained to label objects, recognize attributes, or assign affordances. In this work, we consider the problem of object affordance reasoning using a knowledge base representation. Diverse information of objects are first harvested from images and other meta-data sources. We then learn a knowledge base (KB) using a Markov Logic Network (MLN). Given the learned KB, we show that a diverse set of visual inference tasks can be done in this unified framework without training separate classifiers, including zero-shot affordance prediction and object recognition given human poses.",
"This paper investigates object categorization according to function, i.e., learning the affordances of objects from human demonstration. Object affordances (functionality) are inferred from observations of humans using the objects in different types of actions. The intended application is learning from demonstration, in which a robot learns to employ objects in household tasks, from observing a human performing the same tasks with the objects. We present a method for categorizing manipulated objects and human manipulation actions in context of each other. The method is able to simultaneously segment and classify human hand actions, and detect and classify the objects involved in the action. This can serve as an initial step in a learning from demonstration method. Experiments show that the contextual information improves the classification of both objects and actions.",
"",
"In this paper, we present a new framework - task-oriented modeling, learning and recognition which aims at understanding the underlying functions, physics and causality in using objects as “tools”. Given a task, such as, cracking a nut or painting a wall, we represent each object, e.g. a hammer or brush, in a generative spatio-temporal representation consisting of four components: i) an affordance basis to be grasped by hand; ii) a functional basis to act on a target object (the nut), iii) the imagined actions with typical motion trajectories; and iv) the underlying physical concepts, e.g. force, pressure, etc. In a learning phase, our algorithm observes only one RGB-D video, in which a rational human picks up one object (i.e. tool) among a number of candidates to accomplish the task. From this example, our algorithm learns the essential physical concepts in the task (e.g. forces in cracking nuts). In an inference phase, our algorithm is given a new set of objects (daily objects or stones), and picks the best choice available together with the inferred affordance basis, functional basis, imagined human actions (sequence of poses), and the expected physical quantity that it will produce. From this new perspective, any objects can be viewed as a hammer or a shovel, and object recognition is not merely memorizing typical appearance examples for each category but reasoning the physical mechanisms in various tasks to achieve generalization."
]
} |
1703.00503 | 2594523482 | In this paper, we present a general framework for learning social affordance grammar as a spatiotemporal AND-OR graph (ST-AOG) from RGB-D videos of human interactions, and transfer the grammar to humanoids to enable a real-time motion inference for human-robot interaction (HRI). Based on Gibbs sampling, our weakly supervised grammar learning can automatically construct a hierarchical representation of an interaction with long-term joint sub-tasks of both agents and short term atomic actions of individual agents. Based on a new RGB-D video dataset with rich instances of human interactions, our experiments of Baxter simulation, human evaluation, and real Baxter test demonstrate that the model learned from limited training data successfully generates human-like behaviors in unseen scenarios and outperforms both baselines. | In recent years, several structural representations of human activities have been proposed for recognition purposes, both for human action recognition @cite_29 @cite_23 @cite_18 @cite_1 and for group activity recognition @cite_7 @cite_6 @cite_14 @cite_20 @cite_22 @cite_0 @cite_31 . There also have been studies of robot learning of grammar models @cite_2 @cite_4 @cite_9 , but they were not aimed at HRI. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_29",
"@cite_9",
"@cite_1",
"@cite_6",
"@cite_0",
"@cite_23",
"@cite_2",
"@cite_31",
"@cite_20"
],
"mid": [
"2416663518",
"",
"",
"",
"2003708924",
"2139117248",
"",
"1907587592",
"2047499569",
"2116137332",
"2137275576",
"2112913186",
"2952563226",
"1972696612"
],
"abstract": [
"We propose a stochastic graph-based framework for a robot to understand tasks from human demonstrations and perform them with feedback control. It unifies both knowledge representation and action planning in the same hierarchical data structure, allowing a robot to expand its spatial, temporal, and causal knowledge at varying levels of abstraction. The learning system can watch human demonstrations, generalize learned concepts, and perform tasks in new environments, across different robotic platforms. We show the success of our system by having a robot perform a cloth-folding task after watching few human demonstrations. The robot can accurately reproduce the learned skill, as well as generalize the task to other articles of clothing.",
"",
"",
"",
"This paper describes a stochastic methodology for the recognition of various types of high-level group activities. Our system maintains a probabilistic representation of a group activity, describing how individual activities of its group members must be organized temporally, spatially, and logically. In order to recognize each of the represented group activities, our system searches for a set of group members that has the maximum posterior probability of satisfying its representation. A hierarchical recognition algorithm utilizing a Markov chain Monte Carlo (MCMC)-based probability distribution sampling has been designed, detecting group activities and finding the acting groups simultaneously. The system has been tested to recognize complex activities such as a group of thieves stealing an object from another group' and a group assaulting a person'. Videos downloaded from YouTube as well as videos that we have taken are tested. Experimental results show that our system recognizes a wide range of group activities more reliably and accurately, as compared to previous approaches.",
"Analyzing videos of human activities involves not only recognizing actions (typically based on their appearances), but also determining the story plot of the video. The storyline of a video describes causal relationships between actions. Beyond recognition of individual actions, discovering causal relationships helps to better understand the semantic meaning of the activities. We present an approach to learn a visually grounded storyline model of videos directly from weakly labeled data. The storyline model is represented as an AND-OR graph, a structure that can compactly encode storyline variation across videos. The edges in the AND-OR graph correspond to causal relationships which are represented in terms of spatio-temporal constraints. We formulate an Integer Programming framework for action recognition and storyline extraction using the storyline model and visual groundings learned from training data.",
"",
"Realistic videos of human actions exhibit rich spatiotemporal structures at multiple levels of granularity: an action can always be decomposed into multiple finer-grained elements in both space and time. To capture this intuition, we propose to represent videos by a hierarchy of mid-level action elements (MAEs), where each MAE corresponds to an action-related spatiotemporal segment in the video. We introduce an unsupervised method to generate this representation from videos. Our method is capable of distinguishing action-related segments from background segments and representing actions at multiple spatiotemporal resolutions. Given a set of spatiotemporal segments generated from the training data, we introduce a discriminative clustering algorithm that automatically discovers MAEs at multiple levels of granularity. We develop structured models that capture a rich set of spatial, temporal and hierarchical relations among the segments, where the action label and multiple levels of MAE labels are jointly inferred. The proposed model achieves state-of-the-art performance in multiple action recognition benchmarks. Moreover, we demonstrate the effectiveness of our model in real-world applications such as action recognition in large-scale untrimmed videos and action parsing.",
"In this paper, we go beyond recognizing the actions of individuals and focus on group activities. This is motivated from the observation that human actions are rarely performed in isolation; the contextual information of what other people in the scene are doing provides a useful cue for understanding high-level activities. We propose a novel framework for recognizing group activities which jointly captures the group activity, the individual person actions, and the interactions among them. Two types of contextual information, group-person interaction and person-person interaction, are explored in a latent variable framework. In particular, we propose three different approaches to model the person-person interaction. One approach is to explore the structures of person-person interaction. Differently from most of the previous latent structured models, which assume a predefined structure for the hidden layer, e.g., a tree structure, we treat the structure of the hidden layer as a latent variable and implicitly infer it during learning and inference. The second approach explores person-person interaction in the feature level. We introduce a new feature representation called the action context (AC) descriptor. The AC descriptor encodes information about not only the action of an individual person in the video, but also the behavior of other people nearby. The third approach combines the above two. Our experimental results demonstrate the benefit of using contextual information for disambiguating group activities.",
"With the advent of drones, aerial video analysis becomes increasingly important; yet, it has received scant attention in the literature. This paper addresses a new problem of parsing low-resolution aerial videos of large spatial areas, in terms of 1) grouping, 2) recognizing events and 3) assigning roles to people engaged in events. We propose a novel framework aimed at conducting joint inference of the above tasks, as reasoning about each in isolation typically fails in our setting. Given noisy tracklets of people and detections of large objects and scene surfaces (e.g., building, grass), we use a spatiotemporal AND-OR graph to drive our joint inference, using Markov Chain Monte Carlo and dynamic programming. We also introduce a new formalism of spatiotemporal templates characterizing latent sub-events. For evaluation, we have collected and released a new aerial videos dataset using a hex-rotor flying over picnic areas rich with group events. Our results demonstrate that we successfully address above inference tasks under challenging conditions.",
"Complex human activities occurring in videos can be defined in terms of temporal configurations of primitive actions. Prior work typically hand-picks the primitives, their total number, and temporal relations (e.g., allow only followed-by), and then only estimates their relative significance for activity recognition. We advance prior work by learning what activity parts and their spatiotemporal relations should be captured to represent the activity, and how relevant they are for enabling efficient inference in realistic videos. We represent videos by spatiotemporal graphs, where nodes correspond to multiscale video segments, and edges capture their hierarchical, temporal, and spatial relationships. Access to video segments is provided by our new, multiscale segmenter. Given a set of training spatiotemporal graphs, we learn their archetype graph, and pdf's associated with model nodes and edges. The model adaptively learns from data relevant video segments and their relations, addressing the “what” and “how.” Inference and learning are formulated within the same framework - that of a robust, least-squares optimization - which is invariant to arbitrary permutations of nodes in spatiotemporal graphs. The model is used for parsing new videos in terms of detecting and localizing relevant activity parts. We out-perform the state of the art on benchmark Olympic and UT human-interaction datasets, under a favorable complexity-vs.-accuracy trade-off.",
"This paper describes a syntactic approach to imitation learning that captures important task structures in the form of probabilistic activity grammars from a reasonably small number of samples under noisy conditions. We show that these learned grammars can be recursively applied to help recognize unforeseen, more complicated tasks that share underlying structures. The grammars enforce an observation to be consistent with the previously observed behaviors which can correct unexpected, out-of-context actions due to errors of the observer and or demonstrator. To achieve this goal, our method (1) actively searches for frequently occurring action symbols that are subsets of input samples to uncover the hierarchical structure of the demonstration, and (2) considers the uncertainties of input symbols due to imperfect low-level detectors. We evaluate the proposed method using both synthetic data and two sets of real-world humanoid robot experiments. In our Towers of Hanoi experiment, the robot learns the important constraints of the puzzle after observing demonstrators solving it. In our Dance Imitation experiment, the robot learns 3 types of dances from human demonstrations. The results suggest that under reasonable amount of noise, our method is capable of capturing the reusable task structures and generalizing them to cope with recursions.",
"Rich semantic relations are important in a variety of visual recognition problems. As a concrete example, group activity recognition involves the interactions and relative spatial relations of a set of people in a scene. State of the art recognition methods center on deep learning approaches for training highly effective, complex classifiers for interpreting images. However, bridging the relatively low-level concepts output by these methods to interpret higher-level compositional scenes remains a challenge. Graphical models are a standard tool for this task. In this paper, we propose a method to integrate graphical models and deep neural networks into a joint framework. Instead of using a traditional inference method, we use a sequential inference modeled by a recurrent neural network. Beyond this, the appropriate structure for inference can be learned by imposing gates on edges between nodes. Empirical results on group activity recognition demonstrate the potential of this model to handle highly structured learning tasks.",
"This paper presents a principled framework for analyzing collective activities at different levels of semantic granularity from videos. Our framework is capable of jointly tracking multiple individuals, recognizing activities performed by individuals in isolation (i.e., atomic activities such as walking or standing), recognizing the interactions between pairs of individuals (i.e., interaction activities) as well as understanding the activities of group of individuals (i.e., collective activities). A key property of our work is that it can coherently combine bottom-up information stemming from detections or fragments of tracks (or tracklets) with top-down evidence. Top-down evidence is provided by a newly proposed descriptor that captures the coherent behavior of groups of individuals in a spatial-temporal neighborhood of the sequence. Top-down evidence provides contextual information for establishing accurate associations between detections or tracklets across frames and, thus, for obtaining more robust tracking results. Bottom-up evidence percolates upwards so as to automatically infer collective activity labels. Experimental results on two challenging data sets demonstrate our theoretical claims and indicate that our model achieves enhances tracking results and the best collective classification results to date."
]
} |
1703.00835 | 2950742125 | Software testing is an important tool to ensure software quality. However, testing in robotics is a hard task due to dynamic environments and the expensive development and time-consuming execution of test cases. Most testing approaches use model-based and or simulation-based testing to overcome these problems. We propose a model-free skill-centric testing approach in which a robot autonomously executes skills in the real world and compares it to previous experiences. The robot selects specific skills in order to identify errors in the software by maximising the expected information gain. We use deep learning to model the sensor data observed during previous successful executions of a skill and to detect irregularities. This information is connected to functional profiling data such that certain misbehaviour can be related to specific functions. We evaluate our approach in simulation and in experiments with a KUKA LWR 4+ robot by purposefully introducing bugs to the software. We demonstrate that these bugs can be detected with high accuracy and without the need for the implementation of specific tests or models. | Work that also relies on automatic data storage from previous successful experiences was proposed by @cite_13 . Data is queried automatically during the execution of skills and stored to a database. It is directly taken from listening to ROS topics and stored to the NoSQL database . They demonstrate the applicability of automatic data storage to fault analysis by hand-crafting a hierarchy @cite_3 , which represents different levels of abstraction. Developers can then manually work through the hierarchy and identify potential errors by comparing current sensor data on several levels to previous experiences. This demonstrates that such an approach makes sense in principle, however, in our method the identification of problems is done completely autonomously. | {
"cite_N": [
"@cite_13",
"@cite_3"
],
"mid": [
"2004246956",
"2095848001"
],
"abstract": [
"During operation of robots large amounts of data are produced and processed for instance in perception, actuation, or decision making. Nowadays this data is typically volatile and disposed right after use. But this data can be valuable and useful later. Therefore we propose a database system that taps into common robot middleware to record any and all data produced at run-time. We present two examples using this data in fault analysis and performance evaluation and describe real-world experiments run on the domestic service robot HERB.",
"This paper revisits the data-information-knowledge-wisdom (DIKW) hierarchy by examining the articulation of the hierarchy in a number of widely read textbooks, and analysing their statements about the nature of data, information, knowledge, and wisdom. The hierarchy referred to variously as the 'Knowledge Hierarchy', the 'Information Hierarchy' and the 'Knowledge Pyramid' is one of the fundamental, widely recognized and 'taken-for-granted' models in the information and knowledge literatures. It is often quoted, or used implicitly, in definitions of data, information and knowledge in the information management, information systems and knowledge management literatures, but there has been limited direct discussion of the hierarchy. After revisiting Ackoff's original articulation of the hierarchy, definitions of data, information, knowledge and wisdom as articulated in recent textbooks in information systems and knowledge management are reviewed and assessed, in pursuit of a consensus on definitions and transformation processes. This process brings to the surface the extent of agreement and dissent in relation to these definitions, and provides a basis for a discussion as to whether these articulations present an adequate distinction between data, information, and knowledge. Typically information is defined in terms of data, knowledge in terms of information, and wisdom in terms of knowledge, but there is less consensus in the description of the processes that transform elements lower in the hierarchy into those above them, leading to a lack of definitional clarity. In addition, there is limited reference to wisdom in these texts."
]
} |
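The testing record above compares a robot's current sensor data against stored traces of previous successful executions. As a purely hypothetical sketch of that idea (the paper itself models the sensor data with deep learning; the per-timestep z-score and the 3.0 threshold below are assumptions for illustration only):

```python
import statistics

# Hypothetical sketch: flag timesteps of a current sensor trace that deviate
# strongly from previously stored successful executions. The z-score test and
# the 3.0 threshold are illustrative assumptions, not the paper's method.
def anomaly_flags(current, history, threshold=3.0):
    flags = []
    for t, value in enumerate(current):
        past = [run[t] for run in history]
        mu = statistics.mean(past)
        sigma = statistics.pstdev(past) or 1e-9  # avoid division by zero
        flags.append(abs(value - mu) / sigma > threshold)
    return flags

history = [[1.0, 2.0, 3.0], [1.1, 2.1, 2.9], [0.9, 1.9, 3.1]]  # stored runs
print(anomaly_flags([1.0, 2.0, 9.0], history))  # [False, False, True]
```

In the cited systems the stored runs would come from a database populated automatically during skill execution; here they are hard-coded toy traces.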
1703.00900 | 2952591421 | We present improved deterministic distributed algorithms for a number of well-studied matching problems, which are simpler, faster, more accurate, and or more general than their known counterparts. The common denominator of these results is a deterministic distributed rounding method for certain linear programs, which is the first such rounding method, to our knowledge. A sampling of our end results is as follows. -- An @math -round deterministic distributed algorithm for computing a maximal matching, in @math -node graphs with maximum degree @math . This is the first improvement in about 20 years over the celebrated @math -round algorithm of Hanckowiak, Karonski, and Panconesi [SODA'98, PODC'99]. -- A deterministic distributed algorithm for computing a @math -approximation of maximum matching in @math rounds. This is exponentially faster than the classic @math -round @math -approximation of Panconesi and Rizzi [DIST'01]. With some modifications, the algorithm can also find an @math -maximal matching which leaves only an @math -fraction of the edges on unmatched nodes. -- An @math -round deterministic distributed algorithm for computing a @math -approximation of a maximum weighted matching, and also for the more general problem of maximum weighted @math -matching. These improve over the @math -round @math -approximation algorithm of Panconesi and Sozio [DIST'10], where @math denotes the maximum normalized weight. | We work with the standard model of distributed computing @cite_38 : the network is abstracted as a graph @math , with @math , @math , and maximum degree @math . Each node has a unique identifier. In each round, each node can send a message to each of its neighbors. We do not limit the message sizes, but for all the algorithms that we present, @math -bit messages suffice. We assume that all nodes have knowledge of @math up to a constant factor. 
If this is not the case, it is enough to try exponentially increasing estimates for @math . | {
"cite_N": [
"@cite_38"
],
"mid": [
"2112009244"
],
"abstract": [
"This paper deals with distributed graph algorithms. Processors reside in the vertices of a graph G and communicate only with their neighbors. The system is synchronous and reliable, there is no limit on message lengths and local computation is instantaneous. The results: A maximal independent set in an n-cycle cannot be found faster than Ω(log* n) and this is optimal by [CV]. The d-regular tree of radius r cannot be colored with fewer than √d colors in time 2r 3. If Δ is the largest degree in G which has order n, then in time O(log*n) it can be colored with O(Δ2) colors."
]
} |
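The record above targets maximal matching in the distributed setting. As a point of reference for the problem statement only, here is a minimal sequential greedy sketch; it is not the distributed algorithm from the paper:

```python
# Sequential greedy sketch of maximal matching: after the loop, every edge has
# at least one matched endpoint, so no further edge can be added. This only
# illustrates the problem; the record above computes it distributedly.
def greedy_maximal_matching(edges):
    matched = set()   # vertices already covered by the matching
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

print(greedy_maximal_matching([(0, 1), (1, 2), (2, 3), (3, 0)]))  # [(0, 1), (2, 3)]
```

Any maximal matching is also a 1/2-approximation of a maximum matching, which is one reason maximal matching is such a common building block.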
1703.00900 | 2952591421 | We present improved deterministic distributed algorithms for a number of well-studied matching problems, which are simpler, faster, more accurate, and or more general than their known counterparts. The common denominator of these results is a deterministic distributed rounding method for certain linear programs, which is the first such rounding method, to our knowledge. A sampling of our end results is as follows. -- An @math -round deterministic distributed algorithm for computing a maximal matching, in @math -node graphs with maximum degree @math . This is the first improvement in about 20 years over the celebrated @math -round algorithm of Hanckowiak, Karonski, and Panconesi [SODA'98, PODC'99]. -- A deterministic distributed algorithm for computing a @math -approximation of maximum matching in @math rounds. This is exponentially faster than the classic @math -round @math -approximation of Panconesi and Rizzi [DIST'01]. With some modifications, the algorithm can also find an @math -maximal matching which leaves only an @math -fraction of the edges on unmatched nodes. -- An @math -round deterministic distributed algorithm for computing a @math -approximation of a maximum weighted matching, and also for the more general problem of maximum weighted @math -matching. These improve over the @math -round @math -approximation algorithm of Panconesi and Sozio [DIST'10], where @math denotes the maximum normalized weight. | Ghaffari, Kuhn, and Maus @cite_16 recently proved a completeness-type result which shows that the only obstacle for efficient deterministic distributed graph algorithms is deterministically rounding fractional values to integral values while approximately preserving some linear constraints. Stating this result in full generality requires some definitions. See @cite_16 for the precise statement.
To put it more positively, if we find an efficient deterministic method for rounding, we would get efficient algorithms for essentially all the classic local graph problems, including the four mentioned above. Our results become more instructive when viewed in this context. The common denominator of our results is a deterministic distributed method which allows us to round fractional matchings to integral matchings. This can be more generally seen as rounding the fractional solutions of a special class of linear programs (LPs) to integral solutions. To the best of our knowledge, this is the first known such rounding method. We can now say that | {
"cite_N": [
"@cite_16"
],
"mid": [
"2552279664"
],
"abstract": [
"This paper is centered on the complexity of graph problems in the well-studied LOCAL model of distributed computing, introduced by Linial [FOCS '87]. It is widely known that for many of the classic distributed graph problems (including maximal independent set (MIS) and (Δ+1)-vertex coloring), the randomized complexity is at most polylogarithmic in the size n of the network, while the best deterministic complexity is typically 2O(√logn). Understanding and potentially narrowing down this exponential gap is considered to be one of the central long-standing open questions in the area of distributed graph algorithms. We investigate the problem by introducing a complexity-theoretic framework that allows us to shed some light on the role of randomness in the LOCAL model. We define the SLOCAL model as a sequential version of the LOCAL model. Our framework allows us to prove completeness results with respect to the class of problems which can be solved efficiently in the SLOCAL model, implying that if any of the complete problems can be solved deterministically in logn rounds in the LOCAL model, we can deterministically solve all efficient SLOCAL-problems (including MIS and (Δ+1)-coloring) in logn rounds in the LOCAL model. Perhaps most surprisingly, we show that a rather rudimentary looking graph coloring problem is complete in the above sense: Color the nodes of a graph with colors red and blue such that each node of sufficiently large polylogarithmic degree has at least one neighbor of each color. The problem admits a trivial zero-round randomized solution. The result can be viewed as showing that the only obstacle to getting efficient determinstic algorithms in the LOCAL model is an efficient algorithm to approximately round fractional values into integer values. In addition, our formal framework also allows us to develop polylogarithmic-time randomized distributed algorithms in a simpler way. 
As a result, we provide a polylog-time distributed approximation scheme for arbitrary distributed covering and packing integer linear programs."
]
} |
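The record above reduces its matching results to rounding fractional matchings into integral ones. The sketch below shows the rounding task itself in its simplest sequential form; it is an illustration only, since the paper's contribution is performing such rounding deterministically and distributedly:

```python
# Illustrative sequential rounding: pick edges in order of decreasing
# fractional value while respecting the matching constraint (each vertex
# used at most once). An assumption for intuition, not the paper's method.
def round_fractional_matching(frac):
    matched, integral = set(), []
    for (u, v), x in sorted(frac.items(), key=lambda kv: -kv[1]):
        if x > 0 and u not in matched and v not in matched:
            integral.append((u, v))
            matched.update((u, v))
    return integral

frac = {(0, 1): 0.5, (1, 2): 0.9, (2, 3): 0.5, (3, 0): 0.1}  # fractional matching
print(round_fractional_matching(frac))  # [(1, 2), (3, 0)]
```

In this toy instance the fractional values sum to 2.0 and the rounded matching also has 2 edges, so the value is preserved exactly; in general the difficulty is achieving such approximate preservation without sequential access to the whole graph.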
1703.00857 | 2950879004 | Due to the proliferation of online social networks (OSNs), users find themselves participating in multiple OSNs. These users leave their activity traces as they maintain friendships and interact with other users in these OSNs. In this work, we analyze how users maintain friendship in multiple OSNs by studying users who have accounts in both Twitter and Instagram. Specifically, we study the similarity of a user's friendship and the evenness of friendship distribution in multiple OSNs. Our study shows that most users in Twitter and Instagram prefer to maintain different friendships in the two OSNs, keeping only a small clique of common friends across the OSNs. Based upon our empirical study, we conduct link prediction experiments to predict missing friendship links in multiple OSNs using neighborhood features, neighborhood friendship maintenance features and cross-link features. Our link prediction experiments show that unsupervised methods can yield good accuracy in predicting links in one OSN using another OSN's data, and the link prediction accuracy can be further improved using a supervised method with friendship maintenance and other measures as features. | The study of structural properties and user behaviors in multiple OSNs is an emerging topic that has been gaining attention in recent years. Magnani and Rossi @cite_5 studied the structural properties of multiple OSNs and proposed to represent multiple OSNs as a multi-layer network. They also extended the degree and closeness centrality measures to multi-layer networks. Their work, however, did not consider other network structural properties or behaviors such as friendship similarity and evenness across networks. The linkage of user accounts belonging to the same person across multiple OSNs is also a widely studied topic @cite_17 @cite_10 .
With wider adoption of the user linkage methods proposed in previous research, researchers also studied user behaviors across multiple OSNs. Benevenuto et al. performed a macro-level analysis of user behaviors such as browsing and content posting at different OSNs @cite_14 . Zafarani and Liu conducted an empirical study of users in 20 social media sites and showed that most users join and stay active in fewer than 3 social media sites @cite_18 . Another study analyzed the user migration patterns across seven OSNs @cite_20 . | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_5",
"@cite_10",
"@cite_20",
"@cite_17"
],
"mid": [
"2278069886",
"",
"2036400208",
"",
"2293772388",
"1980680715"
],
"abstract": [
"The rise of social media has led to an explosion in the number of possible sites users can join. However, this same profusion of social media sites has made it nearly impossible for users to actively engage in all of them simultaneously. Accordingly, users must make choices about which sites to use or to neglect. In this paper, we study users that have joined multiple sites. We study how individuals are distributed across sites, the way they select sites to join, and behavioral patterns they exhibit while selecting sites. Our study demonstrates that while users have a tendency to join the most popular or trendiest sites, this does not fully explain users' selections. We demonstrate that peer pressure also influences the decisions users make about joining emerging sites.",
"",
"In this paper we introduce a new model to represent an interconnected network of networks. This model is fundamental to reason about the real organization of on-line social networks, where users belong to and interact on different networks at the same time. In addition we extend traditional SNA measures to deal with this multiplicity of networks and we apply the model to a real dataset extracted from two microblogging sites.",
"",
"The incredible growth of the social web over the last decade has ushered in a flurry of new social media sites. On one hand, users have an inordinate number of choices; on the other hand, users are constrained by limited time and resources and have to choose sites in order to remain social and active. Hence, dynamic social media entails user migration, a well studied phenomenon in fields such as sociology and psychology. Users are valuable assets for social media sites as they help contribute to the growth of a site and generate revenue by increased traffic. We are intrigued to know if social media user migration can be studied, and what migration patterns are. In particular, we investigate whether people migrate, and if they do, how they migrate. We formalize site and attention migration to help identify the migration between popular social media sites and determine clear patterns of migration between sites. This work suggests a feasible way to study migration patterns in social media. The discovered patterns can help understand social media sites and gauge their popularity to improve business intelligence and revenue generation through the retention of users.",
"People use various social media for different purposes. The information on an individual site is often incomplete. When sources of complementary information are integrated, a better profile of a user can be built to improve online services such as verifying online information. To integrate these sources of information, it is necessary to identify individuals across social media sites. This paper aims to address the cross-media user identification problem. We introduce a methodology (MOBIUS) for finding a mapping among identities of individuals across social media sites. It consists of three key components: the first component identifies users' unique behavioral patterns that lead to information redundancies across sites; the second component constructs features that exploit information redundancies due to these behavioral patterns; and the third component employs machine learning for effective user identification. We formally define the cross-media user identification problem and show that MOBIUS is effective in identifying users across social media sites. This study paves the way for analysis and mining across social media sites, and facilitates the creation of novel online services across sites."
]
} |
1703.00857 | 2950879004 | Due to the proliferation of online social networks (OSNs), users find themselves participating in multiple OSNs. These users leave their activity traces as they maintain friendships and interact with other users in these OSNs. In this work, we analyze how users maintain friendship in multiple OSNs by studying users who have accounts in both Twitter and Instagram. Specifically, we study the similarity of a user's friendship and the evenness of friendship distribution in multiple OSNs. Our study shows that most users in Twitter and Instagram prefer to maintain different friendships in the two OSNs, keeping only a small clique of common friends across the OSNs. Based upon our empirical study, we conduct link prediction experiments to predict missing friendship links in multiple OSNs using the neighborhood features, neighborhood friendship maintenance features and cross-link features. Our link prediction experiments show that unsupervised methods can yield good accuracy in predicting links in one OSN using another OSN's data, and the link prediction accuracy can be further improved using a supervised method with friendship maintenance and other measures as features. | Few link prediction studies have been done on multidimensional networks. Rossetti et al. performed supervised and unsupervised multidimensional link prediction on the DBLP and IMDb networks @cite_2 . In that study, the researchers used neighborhood features such as Common Neighbors and Adamic-Adar to predict user collaboration in the different dimensions of a network. For example, they predicted the collaboration of authors in DBLP with the publishing venues defined as the dimensions. Our link prediction experiment differs from the previous study as we predict friendship of users in different OSNs instead of different dimensions of the same network.
Multiple OSNs are quite different from multidimensional networks: there are unmatched user accounts across multiple OSNs, whereas user-account matching is not required in a multidimensional network. Furthermore, our friendship link prediction methods consider not only friendship neighborhood features but also friendship maintenance features. | {
"cite_N": [
"@cite_2"
],
"mid": [
"1990408974"
],
"abstract": [
"Complex networks have been receiving increasing attention by the scientific community, also due to the availability of massive network data from diverse domains. One problem largely studied so far is Link Prediction, i.e. the problem of predicting new upcoming connections in the network. However, one aspect of complex networks has been disregarded so far: real networks are often multidimensional, i.e. multiple connections may reside between any two nodes. In this context, we define the problem of Multidimensional Link Prediction, and we introduce several predictors based on structural analysis of the networks. We present the results obtained on real networks, showing the performances of both the introduced multidimensional versions of the Common Neighbors and Adamic-Adar, and the derived predictors aimed at capturing the multidimensional and temporal information extracted from the data. Our findings show that the evolution of multidimensional networks can be predicted, and that supervised models may improve the accuracy of underlying unsupervised predictors, if used in conjunction with them."
]
} |
1703.00440 | 2592367810 | Most exact methods for k-nearest neighbour search suffer from the curse of dimensionality; that is, their query times exhibit exponential dependence on either the ambient or the intrinsic dimensionality. Dynamic Continuous Indexing (DCI) offers a promising way of circumventing the curse and successfully reduces the dependence of query time on intrinsic dimensionality from exponential to sublinear. In this paper, we propose a variant of DCI, which we call Prioritized DCI, and show a remarkable improvement in the dependence of query time on intrinsic dimensionality. In particular, a linear increase in intrinsic dimensionality, or equivalently, an exponential increase in the number of points near a query, can be mostly counteracted with just a linear increase in space. We also demonstrate empirically that Prioritized DCI significantly outperforms prior methods. In particular, relative to Locality-Sensitive Hashing (LSH), Prioritized DCI reduces the number of distance evaluations by a factor of 14 to 116 and the memory consumption by a factor of 21. | There is a vast literature on algorithms for nearest neighbour search. They can be divided into two categories: exact algorithms and approximate algorithms. Early exact algorithms are deterministic and store points in tree-based data structures. Examples include @math -d trees @cite_7 , R-trees @cite_16 and X-trees @cite_6 @cite_9 , which divide the vector space into a hierarchy of half-spaces, hyper-rectangles or Voronoi polygons and keep track of the points that lie in each cell. While their query times are logarithmic in the size of the dataset, they exhibit exponential dependence on the ambient dimensionality. A different method @cite_18 partitions the space by intersecting multiple hyperplanes. It effectively trades off space for time and achieves polynomial query time in ambient dimensionality at the cost of exponential space complexity in ambient dimensionality. | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_9",
"@cite_6",
"@cite_16"
],
"mid": [
"1997744504",
"2165558283",
"2109424811",
"",
"2118269922"
],
"abstract": [
"We present a solution to the point location problem in arrangements of hyperplanes in E^d with running time O(d^5 log n) and space O(n^(d+ε)) for arbitrary ε > 0, where n is the number of hyperplanes. The main result is the d^5 factor in the expression for the running time. All previously known algorithms are exponential in d or log n. This leads to nonuniform polynomial algorithms for NP-complete problems.",
"This paper develops the multidimensional binary search tree (or k-d tree, where k is the dimensionality of the search space) as a data structure for storage of information to be retrieved by associative searches. The k-d tree is defined and examples are given. It is shown to be quite efficient in its storage requirements. A significant advantage of this structure is that a single data structure can handle many types of queries very efficiently. Various utility algorithms are developed; their proven average running times in an n record file are: insertion, O(log n); deletion of the root, O(n^((k-1)/k)); deletion of a random node, O(log n); and optimization (guarantees logarithmic performance of searches), O(n log n). Search algorithms are given for partial match queries with t keys specified [proven maximum running time of O(n^((k-t)/k))] and for nearest neighbor queries [empirically observed average running time of O(log n)]. These performances far surpass the best currently known algorithms for these tasks. An algorithm is presented to handle any general intersection query. The main focus of this paper is theoretical. It is felt, however, that k-d trees could be quite useful in many applications, and examples of potential uses are given.",
"Similarity search in multimedia databases requires an efficient support of nearest neighbor search on a large set of high dimensional points as a basic operation for query processing. As recent theoretical results show, state of the art approaches to nearest neighbor search are not efficient in higher dimensions. In our new approach, we therefore precompute the result of any nearest neighbor search which corresponds to a computation of the voronoi cell of each data point. In a second step, we store the voronoi cells in an index structure efficient for high dimensional data spaces. As a result, nearest neighbor search corresponds to a simple point query on the index structure. Although our technique is based on a precomputation of the solution space, it is dynamic, i.e. it supports insertions of new data points. An extensive experimental evaluation of our technique demonstrates the high efficiency for uniformly distributed as well as real data. We obtained a significant reduction of the search time compared to nearest neighbor search in the X tree (up to a factor of 4).",
"",
"In order to handle spatial data efficiently, as required in computer aided design and geo-data applications, a database system needs an index mechanism that will help it retrieve data items quickly according to their spatial locations. However, traditional indexing methods are not well suited to data objects of non-zero size located in multi-dimensional spaces. In this paper we describe a dynamic index structure called an R-tree which meets this need, and give algorithms for searching and updating it. We present the results of a series of tests which indicate that the structure performs well, and conclude that it is useful for current database systems in spatial applications."
]
} |
1703.00440 | 2592367810 | Most exact methods for k-nearest neighbour search suffer from the curse of dimensionality; that is, their query times exhibit exponential dependence on either the ambient or the intrinsic dimensionality. Dynamic Continuous Indexing (DCI) offers a promising way of circumventing the curse and successfully reduces the dependence of query time on intrinsic dimensionality from exponential to sublinear. In this paper, we propose a variant of DCI, which we call Prioritized DCI, and show a remarkable improvement in the dependence of query time on intrinsic dimensionality. In particular, a linear increase in intrinsic dimensionality, or equivalently, an exponential increase in the number of points near a query, can be mostly counteracted with just a linear increase in space. We also demonstrate empirically that Prioritized DCI significantly outperforms prior methods. In particular, relative to Locality-Sensitive Hashing (LSH), Prioritized DCI reduces the number of distance evaluations by a factor of 14 to 116 and the memory consumption by a factor of 21. | Our work is most closely related to Dynamic Continuous Indexing (DCI) @cite_14 , which is an exact randomized algorithm for Euclidean space whose query time is linear in ambient dimensionality, sublinear in dataset size and sublinear in intrinsic dimensionality and uses space linear in the dataset size. Rather than partitioning the vector space, it uses multiple global one-dimensional indices, each of which orders data points along a certain random direction and combines these indices to find points that are near the query along multiple random directions. The proposed algorithm builds on the ideas introduced by DCI and achieves a significant improvement in the dependence on intrinsic dimensionality. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2183040431"
],
"abstract": [
"Existing methods for retrieving k-nearest neighbours suffer from the curse of dimensionality. We argue this is caused in part by inherent deficiencies of space partitioning, which is the underlying strategy used by most existing methods. We devise a new strategy that avoids partitioning the vector space and present a novel randomized algorithm that runs in time linear in dimensionality of the space and sub-linear in the intrinsic dimensionality and the size of the dataset and takes space constant in dimensionality of the space and linear in the size of the dataset. The proposed algorithm allows fine-grained control over accuracy and speed on a per-query basis, automatically adapts to variations in data density, supports dynamic updates to the dataset and is easy-to-implement. We show appealing theoretical properties and demonstrate empirically that the proposed algorithm outperforms locality-sensitivity hashing (LSH) in terms of approximation quality, speed and space efficiency."
]
} |
1703.00144 | 2593095092 | Recently low displacement rank (LDR) matrices, or so-called structured matrices, have been proposed to compress large-scale neural networks. Empirical results have shown that neural networks with weight matrices of LDR matrices, referred to as LDR neural networks, can achieve significant reduction in space and computational complexity while retaining high accuracy. We formally study LDR matrices in deep learning. First, we prove the universal approximation property of LDR neural networks with a mild condition on the displacement operators. We then show that the error bounds of LDR neural networks are as efficient as general neural networks with both single-layer and multiple-layer structure. Finally, we propose a back-propagation based training algorithm for general LDR neural networks. | For feedforward neural networks with one hidden layer, @cite_6 and @cite_16 proved separately the universal approximation property, which guarantees that for any given continuous function or decision function and any error bound @math , there always exists a single-hidden layer neural network that approximates the function within @math integrated error. However, this property does not specify the number of neurons needed to construct such a neural network. In practice, there must be a limit on the maximum number of neurons due to the computational limit. Moreover, the magnitude of the coefficients can be neither too large nor too small. To address these issues for general neural networks, @cite_16 proved that it is sufficient to approximate functions with weights and biases whose absolute values are bounded by a constant (depending on the activation function). @cite_7 further extended this result to an arbitrarily small bound. @cite_14 showed that feedforward networks with one layer of sigmoidal nonlinearities achieve an integrated squared error with order of @math , where @math is the number of neurons. | {
"cite_N": [
"@cite_14",
"@cite_16",
"@cite_7",
"@cite_6"
],
"mid": [
"2166116275",
"2137983211",
"1988115241",
"2103496339"
],
"abstract": [
"Approximation properties of a class of artificial neural networks are established. It is shown that feedforward networks with one layer of sigmoidal nonlinearities achieve integrated squared error of order O(1/n), where n is the number of nodes. The approximated function is assumed to have a bound on the first moment of the magnitude distribution of the Fourier transform. The nonlinear parameters associated with the sigmoidal nodes, as well as the parameters of linear combination, are adjusted in the approximation. In contrast, it is shown that for series expansions with n terms, in which only the parameters of linear combination are adjusted, the integrated squared approximation error cannot be made smaller than order 1/n^(2/d) uniformly for functions satisfying the same smoothness assumption, where d is the dimension of the input to the function. For the class of functions examined, the approximation rate and the parsimony of the parameterization of the networks are shown to be advantageous in high-dimensional settings.",
"This paper rigorously establishes that standard multilayer feedforward networks with as few as one hidden layer using arbitrary squashing functions are capable of approximating any Borel measurable function from one finite dimensional space to another to any desired degree of accuracy, provided sufficiently many hidden units are available. In this sense, multilayer feedforward networks are a class of universal approximators.",
"We show that standard multilayer feedforward networks with as few as a single hidden layer and arbitrary bounded and nonconstant activation function are universal approximators with respect to L^p(μ) performance criteria, for arbitrary finite input environment measures μ, provided only that sufficiently many hidden units are available. If the activation function is continuous, bounded and nonconstant, then continuous mappings can be learned uniformly over compact input sets. We also give very general conditions ensuring that networks with sufficiently smooth activation functions are capable of arbitrarily accurate approximation to a function and its derivatives.",
"In this paper we demonstrate that finite linear combinations of compositions of a fixed, univariate function and a set of affine functionals can uniformly approximate any continuous function ofn real variables with support in the unit hypercube; only mild conditions are imposed on the univariate function. Our results settle an open question about representability in the class of single hidden layer neural networks. In particular, we show that arbitrary decision regions can be arbitrarily well approximated by continuous feedforward neural networks with only a single internal, hidden layer and any continuous sigmoidal nonlinearity. The paper discusses approximation properties of other possible types of nonlinearities that might be implemented by artificial neural networks."
]
} |
1703.00170 | 2950047481 | TCP is the most widely used transport protocol on the Internet and is heavily dependent on delay. Reunion Island has a specific Internet connection, based on main links to France, located 10,000 km away. As a result, the minimal delay between Reunion Island and France is around 180 ms. In this paper, we study TCP traces collected at Reunion Island University. The goal is to determine the metrics needed to study the impact of long delays on TCP performance. | Recent research on the presence of services in Africa is presented in @cite_2 . The author explains that despite the presence of servers on the continent, most of the traffic continues to go to America. This is very similar to the situation of Reunion Island with servers located in France. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2300286844"
],
"abstract": [
"It is well known that Africa's mobile and fixed Internet infrastructure is progressing at a rapid pace. A flurry of recent research has quantified this, highlighting the expansion of its underlying connectivity network. However, improving the infrastructure is not useful without appropriately provisioned services to utilise it. This paper measures the availability of web content infrastructure in Africa. Whereas others have explored web infrastructure in developed regions, we shed light on practices in developing regions. To achieve this, we apply a comprehensive measurement methodology to collect data from a variety of sources. We focus on a large content delivery network to reveal that Africa's content infrastructure is, indeed, expanding. However, we find much web content is still served from the US and Europe. We discover that many of the problems faced are actually caused by significant inter-AS delays in Africa, which contribute to local ISPs not sharing their cache capacity. We discover that a related problem is the poor DNS configuration used by some ISPs, which confounds the attempts of providers to optimise their delivery. We then explore a number of other websites to show that large web infrastructure deployments are a rarity in Africa and that even regional websites host their services abroad. We conclude by making suggestions for improvements."
]
} |
1703.00099 | 2594164766 | Task-oriented dialog systems have been applied in various tasks, such as automated personal assistants, customer service providers and tutors. These systems work well when users have clear and explicit intentions that are well-aligned to the systems' capabilities. However, they fail if users' intentions are not explicit. To address this shortcoming, we propose a framework to interleave non-task content (i.e. everyday social conversation) into task conversations. When the task content fails, the system can still keep the user engaged with the non-task content. We trained a policy using reinforcement learning algorithms to promote long-turn conversation coherence and consistency, so that the system can have smooth transitions between task and non-task content. To test the effectiveness of the proposed framework, we developed a movie promotion dialog system. Experiments with human users indicate that a system that interleaves social and task content achieves a better task success rate and is also rated as more engaging compared to a pure task-oriented system. | Current task-oriented dialog systems focus on completing a task together with the user. They can perform bus information search @cite_23 , flight booking @cite_1 , direction giving @cite_25 , etc. However, these systems can only focus on one task at a time. The famous personal assistants, such as Apple's Siri, are composed of many of these single-task systems. These single-task systems' underlying mechanisms are mainly frame-based or agenda-based @cite_26 . The architecture of traditional dialog systems is slot-filling, which pre-defines the structure of a dialog state as a set of slots to be filled during the conversation. For an airline booking system, an example slot is "destination city". An example corresponding system utterance generated from that slot is "Which city are you flying to?"
Recently, researchers have also started to look into end-to-end learning for task-oriented systems. Though the progress is still preliminary @cite_9 , the premise of having a learning method that generalizes across domains is appealing. | {
"cite_N": [
"@cite_26",
"@cite_9",
"@cite_1",
"@cite_23",
"@cite_25"
],
"mid": [
"119621972",
"2403702038",
"2011902711",
"178897730",
"2044818951"
],
"abstract": [
"Dialog management can be seen as a solution to two specific problems: (1) providing a coherent overall structure to interaction that extends beyond the single turn, (2) correctly manage mixed-initiative interaction, allowing users to guide interaction as per their (not necessarily explicitly shared) goals while allowing the system to guide interaction towards successful completion. We propose a dialog management architecture based on the following elements: handlers that manage interaction focussed on tightly coupled sets of information, a product that reflects mutually agreed-upon information and an agenda that orders the topics relevant to task completion.",
"Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (, 2014a). We show similar result patterns on data extracted from an online concierge service.",
"Abstract This paper describes PEGASUS, a spoken dialogue interface for on-line air travel planning that we have recently developed. PEGASUS leverages off our spoken language technology development in the ATIS domain, and enables users to book flights using the American Airlines EAASY SABRE system. The input query is transformed by the speech understanding system to a frame representation that captures its meaning. The tasks of the System Manager include transforming the semantic representation into an EAASY SABRE command, transmitting it to the application backend, formatting and interpreting the resulting information, and managing the dialogue. Preliminary evaluation results suggest that users can learn to make productive use of PEGASUS for travel planning, although much work remains to be done.",
"In this paper, we describe how a research spoken dialog system was made available to the general public. The Let’s Go Public spoken dialog system provides bus schedule information to the Pittsburgh population during off-peak times. This paper describes the changes necessary to make the system usable for the general public and presents analysis of the calls and strategies we have used to ensure high performance.",
"An attempt is made to ascertain rules for the sequencing of a limited part of natural conversation and to determine some properties and empirical consequences of the operation of those rules. Two formulations of conversational openings are suggested and the properties \"nonterminality\" and \"conditional relevance\" are developed to explicate the operation of one of them and to suggest some of its interactional consequences. Some discussion is offered of the fit between the sequencing structure and the tasks of conversational openings."
]
} |
1703.00099 | 2594164766 | Task-oriented dialog systems have been applied in various tasks, such as automated personal assistants, customer service providers and tutors. These systems work well when users have clear and explicit intentions that are well-aligned to the systems' capabilities. However, they fail if users' intentions are not explicit. To address this shortcoming, we propose a framework to interleave non-task content (i.e. everyday social conversation) into task conversations. When the task content fails, the system can still keep the user engaged with the non-task content. We trained a policy using reinforcement learning algorithms to promote long-turn conversation coherence and consistency, so that the system can have smooth transitions between task and non-task content. To test the effectiveness of the proposed framework, we developed a movie promotion dialog system. Experiments with human users indicate that a system that interleaves social and task content achieves a better task success rate and is also rated as more engaging compared to a pure task-oriented system. | Differing from task-oriented systems, non-task-oriented systems do not have a stated goal to work towards. Nevertheless, they are useful for social relationship bonding and have many other use cases, such as keeping elderly people company @cite_22 , facilitating language learning @cite_10 , and simply entertaining users @cite_15 . Because non-task systems do not have a goal, they do not have a set of restricted states or slots to follow. A variety of methods were therefore proposed to generate responses for them, such as machine translation @cite_4 , retrieval-based response selection @cite_14 , and sequence-to-sequence models with different structures, such as vanilla recurrent neural networks @cite_0 , hierarchical neural models @cite_7 , and memory neural networks @cite_2 . | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_0",
"@cite_2",
"@cite_15",
"@cite_10"
],
"mid": [
"2160458012",
"10957333",
"",
"889023230",
"",
"2175256910",
"2565288664",
"1488026787"
],
"abstract": [
"This system demonstration paper presents IRIS (Informal Response Interactive System), a chat-oriented dialogue system based on the vector space model framework. The system belongs to the class of example-based dialogue systems and builds its chat capabilities on a dual search strategy over a large collection of dialogue samples. Additional strategies allowing for system adaptation and learning implemented over the same vector model space framework are also described and discussed.",
"We present a data-driven approach to generating responses to Twitter status posts, based on phrase-based Statistical Machine Translation. We find that mapping conversational stimuli onto responses is more difficult than translating between languages, due to the wider range of possible responses, the larger fraction of unaligned words/phrases, and the presence of large phrase pairs whose alignment cannot be further decomposed. After addressing these challenges, we compare approaches based on SMT and Information Retrieval in a human evaluation. We show that SMT outperforms IR on this task, and its output is preferred over actual human responses in 15% of cases. As far as we are aware, this is the first work to investigate the use of phrase-based SMT to directly translate a linguistic stimulus into an appropriate response.",
"",
"We investigate the task of building open domain, conversational dialogue systems based on large dialogue corpora using generative models. Generative models produce system responses that are autonomously generated word-by-word, opening up the possibility for realistic, flexible interactions. In support of this goal, we extend the recently proposed hierarchical recurrent encoder-decoder neural network to the dialogue domain, and demonstrate that this model is competitive with state-of-the-art neural language models and back-off n-gram models. We investigate the limitations of this and similar approaches, and show how its performance can be improved by bootstrapping the learning from a larger question-answer pair corpus and from pretrained word embeddings.",
"",
"A long-term goal of machine learning is to build intelligent conversational agents. One recent popular approach is to train end-to-end models on a large amount of real dialog transcripts between humans (, 2015; Vinyals & Le, 2015; , 2015). However, this approach leaves many questions unanswered as an understanding of the precise successes and shortcomings of each model is hard to assess. A contrasting recent proposal are the bAbI tasks (, 2015b) which are synthetic data that measure the ability of learning machines at various reasoning tasks over toy language. Unfortunately, those tests are very small and hence may encourage methods that do not scale. In this work, we propose a suite of new tasks of a much larger scale that attempt to bridge the gap between the two regimes. Choosing the domain of movies, we provide tasks that test the ability of models to answer factual questions (utilizing OMDB), provide personalization (utilizing MovieLens), carry short conversations about the two, and finally to perform on natural dialogs from Reddit. We provide a dataset covering 75k movie entities and with 3.5M training examples. We present results of various models on these tasks, and evaluate their performance.",
"",
"CSIEC (Computer Simulation in Educational Communication), is not only an intelligent web-based human-computer dialogue system with natural language for English instruction, but also a learning assessment system for learners and teachers. Its multiple functions including grammar gap filling exercises, talk show and chatting on a given topic, can satisfy the various needs from the students with different backgrounds and learning abilities. In this paper we present a case study of the integration of CSIEC's multiple functions into English syllabus design in a middle school and its pedagogical effectiveness. The comparison of two examination results before and after the system integration shows great improvement of students' performance, and the survey data also indicates the students' favor to this system."
]
} |
1703.00099 | 2594164766 | Task-oriented dialog systems have been applied in various tasks, such as automated personal assistants, customer service providers and tutors. These systems work well when users have clear and explicit intentions that are well-aligned to the systems' capabilities. However, they fail if users' intentions are not explicit. To address this shortcoming, we propose a framework to interleave non-task content (i.e. everyday social conversation) into task conversations. When the task content fails, the system can still keep the user engaged with the non-task content. We trained a policy using reinforcement learning algorithms to promote long-turn conversation coherence and consistency, so that the system can have smooth transitions between task and non-task content. To test the effectiveness of the proposed framework, we developed a movie promotion dialog system. Experiments with human users indicate that a system that interleaves social and task content achieves a better task success rate and is also rated as more engaging compared to a pure task-oriented system. | To combine these two types of conversation systems smoothly, we trained a response selection policy with reinforcement learning algorithms. Reinforcement learning algorithms have been used in traditional task-oriented systems to track dialog states @cite_17 . They have also been used in non-task-oriented systems. The Q-learning method was used to choose among a set of statistical templates and several neural-model-generated responses in @cite_5 , while the policy gradient method was used in @cite_3 . Different from these pure task or pure non-task systems, we applied reinforcement learning algorithms to train policies that choose among task and non-task candidate responses to optimize towards a coherent, consistent and informative conversation with respect to different users. | {
"cite_N": [
"@cite_5",
"@cite_3",
"@cite_17"
],
"mid": [
"2565274151",
"2410983263",
"2438667436"
],
"abstract": [
"",
"Recent neural models of dialogue generation offer great promise for generating responses for conversational agents, but tend to be shortsighted, predicting utterances one at a time while ignoring their influence on future outcomes. Modeling the future direction of a dialogue is crucial to generating coherent, interesting dialogues, a need which led traditional NLP models of dialogue to draw on reinforcement learning. In this paper, we show how to integrate these goals, applying deep reinforcement learning to model future reward in chatbot dialogue. The model simulates dialogues between two virtual agents, using policy gradient methods to reward sequences that display three useful conversational properties: informativity (non-repetitive turns), coherence, and ease of answering (related to forward-looking function). We evaluate our model on diversity, length as well as with human judges, showing that the proposed algorithm generates more interactive responses and manages to foster a more sustained conversation in dialogue simulation. This work marks a first step towards learning a neural conversational model based on the long-term success of dialogues.",
"In a spoken dialog system, determining which action a machine should take in a given situation is a difficult problem because automatic speech recognition is unreliable and hence the state of the conversation can never be known with certainty. Much of the research in spoken dialog systems centres on mitigating this uncertainty and recent work has focussed on three largely disparate techniques: parallel dialog state hypotheses, local use of confidence scores, and automated planning. While in isolation each of these approaches can improve action selection, taken together they currently lack a unified statistical framework that admits global optimization. In this paper we cast a spoken dialog system as a partially observable Markov decision process (POMDP). We show how this formulation unifies and extends existing techniques to form a single principled framework. A number of illustrations are used to show qualitatively the potential benefits of POMDPs compared to existing techniques, and empirical results from dialog simulations are presented which demonstrate significant quantitative gains. Finally, some of the key challenges to advancing this method - in particular scalability - are briefly outlined."
]
} |
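As an illustrative aside, the reinforcement-learned response selection described in the row above (choosing between task and non-task candidate responses) can be sketched with tabular Q-learning. Everything here — the state abstraction, the two-action space, the reward signal, and all names — is a hypothetical simplification for illustration, not the cited system's actual design:

```python
import random

ACTIONS = ["task", "non_task"]  # candidate response types (assumed)

class ResponsePolicy:
    """Epsilon-greedy tabular Q-learning over response-type actions."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.q = {}  # (state, action) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def value(self, state, action):
        return self.q.get((state, action), 0.0)

    def select(self, state):
        # Explore with probability epsilon, otherwise pick the best action.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.value(state, a))

    def update(self, state, action, reward, next_state):
        # Standard Q-learning backup: move Q(s,a) toward r + gamma * max_a' Q(s',a').
        best_next = max(self.value(next_state, a) for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] = (
            (1 - self.alpha) * self.value(state, action) + self.alpha * td_target
        )
```

In a real dialog system the state would encode dialog history and user features, and the reward would reflect coherence, consistency and task success, as the row's related work describes at a high level.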
1703.00377 | 2953101770 | Boosting is a popular ensemble algorithm that generates more powerful learners by linearly combining base models from a simpler hypothesis class. In this work, we investigate the problem of adapting batch gradient boosting for minimizing convex loss functions to the online setting, where the loss at each iteration is i.i.d. sampled from an unknown distribution. To generalize from batch to online, we first introduce the definition of an online weak learning edge, with which, for strongly convex and smooth loss functions, we present an algorithm, Streaming Gradient Boosting (SGB), with exponential shrinkage guarantees in the number of weak learners. We further present an adaptation of SGB to optimize non-smooth loss functions, for which we derive an O(ln N / N) convergence rate. We also show that our analysis can extend to the adversarial online learning setting under a stronger assumption that the online weak learning edge will hold in the adversarial setting. We finally demonstrate experimental results showing that in practice our algorithms can achieve results competitive with classic gradient boosting while using less computation. | Online boosting algorithms have been evolving since their batch counterparts were introduced. @cite_2 developed some of the first online boosting algorithms, and their work was applied to online feature selection and online semi-supervised learning . @cite_6 introduced online gradient boosting for the classification setting, albeit without a theoretical analysis. @cite_3 developed the first convergence guarantees of online boosting for classification. Then @cite_0 presented two online classification boosting algorithms that are proved to be optimal and adaptive, respectively. | {
"cite_N": [
"@cite_0",
"@cite_3",
"@cite_6",
"@cite_2"
],
"mid": [
"1876956220",
"2162139029",
"2049963988",
"1529840045"
],
"abstract": [
"We study online boosting, the task of converting any weak online learner into a strong online learner. Based on a novel and natural definition of weak online learnability, we develop two online boosting algorithms. The first algorithm is an online version of boost-by-majority. By proving a matching lower bound, we show that this algorithm is essentially optimal in terms of the number of weak learners and the sample complexity needed to achieve a specified accuracy. The second algorithm is adaptive and parameter-free, albeit not optimal.",
"Boosting is one of the most significant advances in machine learning for classification and regression. In its original and computationally flexible version, boosting seeks to minimize empirically a loss function in a greedy fashion. The resulting estimator takes an additive function form and is built iteratively by applying a base estimator (or learner) to updated samples depending on the previous iterations. An unusual regularization technique, early stopping, is employed based on CV or a test set. This paper studies numerical convergence, consistency and statistical rates of convergence of boosting with early stopping, when it is carried out over the linear span of a family of basis functions. For general loss functions, we prove the convergence of boosting's greedy optimization to the infinimum of the loss function over the linear span. Using the numerical convergence result, we find early-stopping strategies under which boosting is shown to be consistent based on i.i.d. samples, and we obtain bounds on the rates of convergence for boosting estimators. Simulation studies are also presented to illustrate the relevance of our theoretical results for providing insights to practical aspects of boosting. As a side product, these results also reveal the importance of restricting the greedy search step-sizes. as known in practice through the work of Friedman and others. Moreover, our results lead to a rigorous proof that for a linearly separable problem, AdaBoost with E → 0 step-size becomes an L 1 -margin maximizer when left to run to convergence.",
"On-line boosting is one of the most successful on-line algorithms and thus applied in many computer vision applications. However, even though boosting, in general, is well known to be susceptible to class-label noise, on-line boosting is mostly applied to self-learning applications such as visual object tracking, where label-noise is an inherent problem. This paper studies the robustness of on-line boosting. Since mainly the applied loss function determines the behavior of boosting, we propose an on-line version of GradientBoost, which allows us to plug in arbitrary loss-functions into the on-line learner. Hence, we can easily study the importance and the behavior of different loss-functions. We evaluate various on-line boosting algorithms in form of a competitive study on standard machine learning problems as well as on common computer vision applications such as tracking and autonomous training of object detectors. Our results show that using on-line Gradient-Boost with robust loss functions leads to superior results in all our experiments.",
"Bagging and boosting are two of the most well-known ensemble learning methods due to their theoretical performance guarantees and strong experimental results. However, these algorithms have been used mainly in batch mode, i.e., they require the entire training set to be available at once and, in some cases, require random access to the data. In this paper, we present online versions of bagging and boosting that require only one pass through the training data. We build on previously presented work by describing some theoretical results. We also compare the online and batch algorithms experimentally in terms of accuracy and running time."
]
} |
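The online gradient boosting idea surveyed in the row above can be sketched for squared loss: a fixed set of weak learners is updated on each incoming example, with learner i taking a gradient step toward the residual left by learners 1..i-1 (the negative functional gradient of the squared loss). This is a hedged toy sketch under those assumptions, not the SGB algorithm or any cited method:

```python
import numpy as np

class OnlineLinearLearner:
    """A weak learner: a linear model updated by SGD on squared error."""

    def __init__(self, dim, lr=0.05):
        self.w = np.zeros(dim)
        self.lr = lr

    def predict(self, x):
        return float(self.w @ x)

    def step(self, x, target):
        # One SGD step toward the boosting residual handed to this learner.
        self.w += self.lr * (target - self.predict(x)) * x

class OnlineBooster:
    """Ensemble = shrunken sum of weak learners, updated one example at a time."""

    def __init__(self, dim, n_learners=3, shrinkage=0.5):
        self.learners = [OnlineLinearLearner(dim) for _ in range(n_learners)]
        self.shrinkage = shrinkage

    def predict(self, x):
        return sum(self.shrinkage * h.predict(x) for h in self.learners)

    def observe(self, x, y):
        partial = 0.0
        for h in self.learners:
            residual = y - partial  # negative gradient of squared loss at partial
            h.step(x, residual)
            partial += self.shrinkage * h.predict(x)
```

On a stationary stream the ensemble error shrinks as each successive learner absorbs part of the remaining residual, mirroring (in a much weaker, heuristic sense) the shrinkage guarantees the abstract states for SGB.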
1703.00069 | 2593978077 | Compositing is one of the most common operations in photo editing. To generate realistic composites, the appearances of foreground and background need to be adjusted to make them compatible. Previous approaches to harmonize composites have focused on learning statistical relationships between hand-crafted appearance features of the foreground and background, which is unreliable especially when the contents in the two layers are vastly different. In this work, we propose an end-to-end deep convolutional neural network for image harmonization, which can capture both the context and semantic information of the composite images during harmonization. We also introduce an efficient way to collect large-scale and high-quality training data that can facilitate the training process. Experiments on the synthesized dataset and real composite images show that the proposed network outperforms previous state-of-the-art methods. | Image Harmonization. Generating realistic composite images requires a good match for both the appearances and contents between foreground and background regions. Existing methods use color and tone matching techniques to ensure consistent appearances, such as transferring global statistics @cite_14 @cite_0 , applying gradient domain methods @cite_31 @cite_21 , matching multi-scale statistics @cite_30 , or utilizing semantic information @cite_29 . While these methods directly match appearances to generate composite images, the realism of the result is not explicitly considered. Lalonde and Efros @cite_6 predict the realism of photos by learning color statistics from natural images and use these statistics to adjust foreground appearances to improve the chromatic compatibility. On the other hand, a data-driven method @cite_32 is developed to improve the realism of computer-generated images by retrieving a set of real images with similar global layouts for transferring appearances. | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_29",
"@cite_21",
"@cite_32",
"@cite_6",
"@cite_0",
"@cite_31"
],
"mid": [
"",
"2129112648",
"2467156531",
"",
"2123576187",
"2164147879",
"",
"2070604790"
],
"abstract": [
"",
"We use a simple statistical analysis to impose one image's color characteristics on another. We can achieve color correction by choosing an appropriate source image and apply its characteristic to another image.",
"Skies are common backgrounds in photos but are often less interesting due to the time of photographing. Professional photographers correct this by using sophisticated tools with painstaking efforts that are beyond the command of ordinary users. In this work, we propose an automatic background replacement algorithm that can generate realistic, artifact-free images with a diverse styles of skies. The key idea of our algorithm is to utilize visual semantics to guide the entire process including sky segmentation, search and replacement. First we train a deep convolutional neural network for semantic scene parsing, which is used as visual prior to segment sky regions in a coarse-to-fine manner. Second, in order to find proper skies for replacement, we propose a data-driven sky search scheme based on semantic layout of the input image. Finally, to re-compose the stylized sky with the original foreground naturally, an appearance transfer method is developed to match statistics locally and semantically. We show that the proposed algorithm can automatically generate a set of visually pleasing results. In addition, we demonstrate the effectiveness of the proposed algorithm with extensive user studies.",
"",
"Computer-generated (CG) images have achieved high levels of realism. This realism, however, comes at the cost of long and expensive manual modeling, and often humans can still distinguish between CG and real images. We introduce a new data-driven approach for rendering realistic imagery that uses a large collection of photographs gathered from online repositories. Given a CG image, we retrieve a small number of real images with similar global structure. We identify corresponding regions between the CG and real images using a mean-shift cosegmentation algorithm. The user can then automatically transfer color, tone, and texture from matching regions to the CG image. Our system only uses image processing operations and does not require a 3D model of the scene, making it fast and easy to integrate into digital content creation workflows. Results of a user study show that our hybrid images appear more realistic than the originals.",
"Why does placing an object from one photograph into another often make the colors of that object suddenly look wrong? One possibility is that humans prefer distributions of colors that are often found in nature; that is, we find pleasing these color combinations that we see often. Another possibility is that humans simply prefer colors to be consistent within an image, regardless of what they are. In this paper, we explore some of these issues by studying the color statistics of a large dataset of natural images, and by looking at differences in color distribution in realistic and unrealistic images. We apply our findings to two problems: 1) classifying composite images into realistic vs. non- realistic, and 2) recoloring image regions for realistic compositing.",
"",
"Using generic interpolation machinery based on solving Poisson equations, a variety of novel tools are introduced for seamless editing of image regions. The first set of tools permits the seamless importation of both opaque and transparent source image regions into a destination region. The second set is based on similar mathematical ideas and allows the user to modify the appearance of the image seamlessly, within a selected region. These changes can be arranged to affect the texture, the illumination, and the color of objects lying in the region, or to make tileable a rectangular selection."
]
} |
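The global-statistics transfer cited in the row above (e.g. @cite_14) amounts to matching per-channel mean and standard deviation between two images. A simplified sketch, operating directly in RGB for brevity rather than the decorrelated colour space used by the original method:

```python
import numpy as np

def match_color_statistics(foreground, background, eps=1e-8):
    """Shift and scale each channel of `foreground` so its per-channel
    mean and standard deviation match those of `background`.

    A simplified sketch of global statistics transfer; the original method
    works in a decorrelated colour space (l-alpha-beta), while here we stay
    in RGB. Both arrays are HxWx3 floats."""
    fg = foreground.reshape(-1, 3)
    bg = background.reshape(-1, 3)
    fg_mu, fg_sigma = fg.mean(axis=0), fg.std(axis=0)
    bg_mu, bg_sigma = bg.mean(axis=0), bg.std(axis=0)
    # Standardise the foreground, then re-scale to the background statistics.
    out = (fg - fg_mu) * (bg_sigma / (fg_sigma + eps)) + bg_mu
    return out.reshape(foreground.shape)
```

In a harmonization setting one would typically match the statistics of the pasted foreground region against the surrounding background region rather than whole images, but the arithmetic is the same.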
1703.00069 | 2593978077 | Compositing is one of the most common operations in photo editing. To generate realistic composites, the appearances of foreground and background need to be adjusted to make them compatible. Previous approaches to harmonize composites have focused on learning statistical relationships between hand-crafted appearance features of the foreground and background, which is unreliable especially when the contents in the two layers are vastly different. In this work, we propose an end-to-end deep convolutional neural network for image harmonization, which can capture both the context and semantic information of the composite images during harmonization. We also introduce an efficient way to collect large-scale and high-quality training data that can facilitate the training process. Experiments on the synthesized dataset and real composite images show that the proposed network outperforms previous state-of-the-art methods. | In addition, the realism of the image has been studied and used to improve the harmonization results. @cite_23 perform human subject experiments to identify the most significant statistical measures that determine the realism of composite images and adjust foreground appearances accordingly. Recently, @cite_35 learn a CNN model to predict the realism of a composite image and incorporate the realism score into a color optimization function for appearance adjustment on the foreground region. Different from the above-mentioned methods, our end-to-end CNN model directly learns from pairs of a composite input image and its ground truth image, which ensures the realism of the output results. | {
"cite_N": [
"@cite_35",
"@cite_23"
],
"mid": [
"2952186111",
"2165633874"
],
"abstract": [
"What makes an image appear realistic? In this work, we are answering this question from a data-driven perspective by learning the perception of visual realism directly from large amounts of data. In particular, we train a Convolutional Neural Network (CNN) model that distinguishes natural photographs from automatically generated composite images. The model learns to predict visual realism of a scene in terms of color, lighting and texture compatibility, without any human annotations pertaining to it. Our model outperforms previous works that rely on hand-crafted heuristics, for the task of classifying realistic vs. unrealistic photos. Furthermore, we apply our learned model to compute optimal parameters of a compositing method, to maximize the visual realism score predicted by our CNN model. We demonstrate its advantage against existing methods via a human perception study.",
"Compositing is one of the most commonly performed operations in computer graphics. A realistic composite requires adjusting the appearance of the foreground and background so that they appear compatible; unfortunately, this task is challenging and poorly understood. We use statistical and visual perception experiments to study the realism of image composites. First, we evaluate a number of standard 2D image statistical measures, and identify those that are most significant in determining the realism of a composite. Then, we perform a human subjects experiment to determine how the changes in these key statistics influence human judgements of composite realism. Finally, we describe a data-driven algorithm that automatically adjusts these statistical measures in a foreground to make it more compatible with its background in a composite. We show a number of compositing results, and evaluate the performance of both our algorithm and previous work with a human subjects study."
]
} |
1703.00069 | 2593978077 | Compositing is one of the most common operations in photo editing. To generate realistic composites, the appearances of foreground and background need to be adjusted to make them compatible. Previous approaches to harmonize composites have focused on learning statistical relationships between hand-crafted appearance features of the foreground and background, which is unreliable especially when the contents in the two layers are vastly different. In this work, we propose an end-to-end deep convolutional neural network for image harmonization, which can capture both the context and semantic information of the composite images during harmonization. We also introduce an efficient way to collect large-scale and high-quality training data that can facilitate the training process. Experiments on the synthesized dataset and real composite images show that the proposed network outperforms previous state-of-the-art methods. | Learning-based Image Editing. Recently, neural-network-based methods for image editing tasks such as image colorization @cite_34 @cite_22 @cite_28 , inpainting @cite_11 and filtering @cite_13 , have drawn much attention due to their efficiency and impressive results. Similar to autoencoders @cite_3 , these methods adopt an unsupervised learning scheme that learns feature representations of the input image, where raw data is used for supervision. Although our method shares a similar concept, to the best of our knowledge, it is the first end-to-end trainable CNN architecture designed for image harmonization. | {
"cite_N": [
"@cite_22",
"@cite_28",
"@cite_3",
"@cite_34",
"@cite_13",
"@cite_11"
],
"mid": [
"",
"",
"2163922914",
"2461158874",
"1920280450",
"2963420272"
],
"abstract": [
"",
"",
"The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning.",
"We present a novel technique to automatically colorize grayscale images that combines both global priors and local image features. Based on Convolutional Neural Networks, our deep network features a fusion layer that allows us to elegantly merge local information dependent on small image patches with global priors computed using the entire image. The entire framework, including the global and local priors as well as the colorization model, is trained in an end-to-end fashion. Furthermore, our architecture can process images of any resolution, unlike most existing approaches based on CNN. We leverage an existing large-scale scene classification database to train our model, exploiting the class labels of the dataset to more efficiently and discriminatively learn the global priors. We validate our approach with a user study and compare against the state of the art, where we show significant improvements. Furthermore, we demonstrate our method extensively on many different types of images, including black-and-white photography from over a hundred years ago, and show realistic colorizations.",
"Photo retouching enables photographers to invoke dramatic visual impressions by artistically enhancing their photos through stylistic color and tone adjustments. However, it is also a time-consuming and challenging task that requires advanced skills beyond the abilities of casual photographers. Using an automated algorithm is an appealing alternative to manual work, but such an algorithm faces many hurdles. Many photographic styles rely on subtle adjustments that depend on the image content and even its semantics. Further, these adjustments are often spatially varying. Existing automatic algorithms are still limited and cover only a subset of these challenges. Recently, deep learning has shown unique abilities to address hard problems. This motivated us to explore the use of deep neural networks (DNNs) in the context of photo editing. In this article, we formulate automatic photo adjustment in a manner suitable for this approach. We also introduce an image descriptor accounting for the local semantics of an image. Our experiments demonstrate that training DNNs using these descriptors successfully capture sophisticated photographic styles. In particular and unlike previous techniques, it can model local adjustments that depend on image semantics. We show that this yields results that are qualitatively and quantitatively better than previous work.",
"We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders – a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods."
]
} |
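The training-data idea in the row's abstract — perturb the foreground of a real image so that the original image serves as ground truth for the synthesized composite — can be sketched as a random per-channel colour perturbation inside a mask. The perturbation model below is an illustrative assumption; the paper's actual synthesis procedure is more involved:

```python
import numpy as np

def synthesize_composite(image, mask, rng):
    """Create a (composite, ground_truth) training pair by perturbing the
    colour of the masked foreground region of a real image.

    A hedged sketch of the data-collection idea: the per-channel gain/offset
    ranges are arbitrary illustrative choices. `image` is HxWx3 in [0, 1],
    `mask` is HxW with nonzero entries marking the foreground."""
    gain = rng.uniform(0.7, 1.3, size=3)   # random per-channel gain
    bias = rng.uniform(-0.1, 0.1, size=3)  # random per-channel offset
    perturbed = np.clip(image * gain + bias, 0.0, 1.0)
    # Composite: perturbed colours inside the mask, original outside.
    composite = np.where(mask[..., None] > 0, perturbed, image)
    return composite, image
```

A harmonization network is then trained to map `composite` back to `ground_truth`, so supervision comes for free from unedited photographs.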
1703.00194 | 2950091171 | Safe path planning is a crucial component in autonomous robotics. The many approaches to find a collision free path can be categorically divided into trajectory optimisers and sampling-based methods. When planning using occupancy maps, the sampling-based approach is the prevalent method. The main drawback of such techniques is that the reasoning about the expected cost of a plan is limited to the search heuristic used by each method. We introduce a novel planning method based on trajectory optimisation to plan safe and efficient paths in continuous occupancy maps. We extend the expressiveness of the state-of-the-art functional gradient optimisation methods by devising a stochastic gradient update rule to optimise a path represented as a Gaussian process. This approach avoids the need to commit to a specific resolution of the path representation, whether spatial or parametric. We utilise a continuous occupancy map representation in order to define our optimisation objective, which enables fast computation of occupancy gradients. We show that this approach is essential in order to ensure convergence to the optimal path, and present results and comparisons to other planning methods in both simulation and with real laser data. The experiments demonstrate the benefits of using this technique when planning for safe and efficient paths in continuous occupancy maps. | Optimisation is a widely used approach for finding feasible paths, where the planned path is a local extremum of a pre-defined cost function. Loosely speaking, the cost function captures the costs and penalties associated with a configuration-space state, e.g. distance from obstacles. Khatib pioneered the use of artificial potential fields for collision avoidance @cite_28 . Covariant Hamiltonian Optimization for Motion Planning (CHOMP) utilises covariant gradients from a precomputed obstacle cost to minimise the trajectory's obstacle and smoothness functionals @cite_2 .
The Stochastic Trajectory Optimization for Motion Planning (STOMP) planner uses noisy perturbations to perform optimisation under constraints where the cost functional is non-differentiable @cite_1 . Both CHOMP and STOMP commit to a waypoint representation, which requires trading off expressiveness against computational cost. The Gaussian process motion planner @cite_10 represents the path as a Gaussian process generated by linear time-varying stochastic differential equations. Trajectory optimisation has also been performed in a reproducing kernel Hilbert space (RKHS) @cite_17 . However, all these methods fall short when planning using occupancy maps as discussed in section . | {
"cite_N": [
"@cite_28",
"@cite_1",
"@cite_2",
"@cite_10",
"@cite_17"
],
"mid": [
"2103120971",
"2019965290",
"2161819990",
"2419216244",
""
],
"abstract": [
"This paper presents a unique real-time obstacle avoidance approach for manipulators and mobile robots based on the artificial potential field concept. Collision avoidance, tradi tionally considered a high level planning problem, can be effectively distributed between different levels of control, al lowing real-time robot operations in a complex environment. This method has been extended to moving obstacles by using a time-varying artificial patential field. We have applied this obstacle avoidance scheme to robot arm mechanisms and have used a new approach to the general problem of real-time manipulator control. We reformulated the manipulator con trol problem as direct control of manipulator motion in oper ational space—the space in which the task is originally described—rather than as control of the task's corresponding joint space motion obtained only after geometric and kine matic transformation. Outside the obstacles' regions of influ ence, we caused the end effector to move in a straight line with an...",
"We present a new approach to motion planning using a stochastic trajectory optimization framework. The approach relies on generating noisy trajectories to explore the space around an initial (possibly infeasible) trajectory, which are then combined to produced an updated trajectory with lower cost. A cost function based on a combination of obstacle and smoothness cost is optimized in each iteration. No gradient information is required for the particular optimization algorithm that we use and so general costs for which derivatives may not be available (e.g. costs corresponding to constraints and motor torques) can be included in the cost function. We demonstrate the approach both in simulation and on a mobile manipulation system for unconstrained and constrained tasks. We experimentally show that the stochastic nature of STOMP allows it to overcome local minima that gradient-based methods like CHOMP can get stuck in.",
"In this paper, we present CHOMP (covariant Hamiltonian optimization for motion planning), a method for trajectory optimization invariant to reparametrization. CHOMP uses functional gradient techniques to iteratively improve the quality of an initial trajectory, optimizing a functional that trades off between a smoothness and an obstacle avoidance component. CHOMP can be used to locally optimize feasible trajectories, as well as to solve motion planning queries, converging to low-cost trajectories even when initialized with infeasible ones. It uses Hamiltonian Monte Carlo to alleviate the problem of convergence to high-cost local minima (and for probabilistic completeness), and is capable of respecting hard constraints along the trajectory. We present extensive experiments with CHOMP on manipulation and locomotion tasks, using seven-degree-of-freedom manipulators and a rough-terrain quadruped robot.",
"Motion planning is a fundamental tool in robotics, used to generate collision-free, smooth, trajectories, while satisfying task-dependent constraints. In this paper, we present a novel approach to motion planning using Gaussian processes. In contrast to most existing trajectory optimization algorithms, which rely on a discrete state parameterization in practice, we represent the continuous-time trajectory as a sample from a Gaussian process (GP) generated by a linear time-varying stochastic differential equation. We then provide a gradient-based optimization technique that optimizes continuous-time trajectories with respect to a cost functional. By exploiting GP interpolation, we develop the Gaussian Process Motion Planner (GPMP), that finds optimal trajectories parameterized by a small number of states. We benchmark our algorithm against recent trajectory optimization algorithms by solving 7-DOF robotic arm planning problems in simulation and validate our approach on a real 7-DOF WAM arm.",
""
]
} |
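The STOMP-style update summarized in the row above (noisy rollouts combined by cost-weighted averaging, with the cost only evaluated, never differentiated) can be sketched as follows; the 1-D trajectory representation, noise scale, and soft-min temperature are illustrative assumptions, not the paper's exact formulation:

```python
import math
import random

def stomp_step(traj, cost, n_samples=20, sigma=0.1, temperature=1.0):
    """One STOMP-like iteration on a discretized trajectory: sample noisy
    variants, weight each by exp(-cost) (a soft-min), and combine the
    noise into a lower-cost update. No gradients of `cost` are needed."""
    noises = [[random.gauss(0.0, sigma) for _ in traj] for _ in range(n_samples)]
    costs = [cost([t + e for t, e in zip(traj, eps)]) for eps in noises]
    lo = min(costs)  # subtract the minimum for numerical stability
    weights = [math.exp(-(c - lo) / temperature) for c in costs]
    z = sum(weights)
    update = [sum(w * eps[i] for w, eps in zip(weights, noises)) / z
              for i in range(len(traj))]
    return [t + u for t, u in zip(traj, update)]

# Toy cost: pull every waypoint toward 1.0.
random.seed(0)
cost = lambda tr: sum((t - 1.0) ** 2 for t in tr)
traj = [0.0] * 5
for _ in range(200):
    traj = stomp_step(traj, cost)
assert cost(traj) < cost([0.0] * 5)
```

Because only cost evaluations are used, non-differentiable terms (e.g. constraint penalties) can be added to `cost` without changing the update rule.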
1703.00194 | 2950091171 | Safe path planning is a crucial component in autonomous robotics. The many approaches to find a collision free path can be categorically divided into trajectory optimisers and sampling-based methods. When planning using occupancy maps, the sampling-based approach is the prevalent method. The main drawback of such techniques is that the reasoning about the expected cost of a plan is limited to the search heuristic used by each method. We introduce a novel planning method based on trajectory optimisation to plan safe and efficient paths in continuous occupancy maps. We extend the expressiveness of the state-of-the-art functional gradient optimisation methods by devising a stochastic gradient update rule to optimise a path represented as a Gaussian process. This approach avoids the need to commit to a specific resolution of the path representation, whether spatial or parametric. We utilise a continuous occupancy map representation in order to define our optimisation objective, which enables fast computation of occupancy gradients. We show that this approach is essential in order to ensure convergence to the optimal path, and present results and comparisons to other planning methods in both simulation and with real laser data. The experiments demonstrate the benefits of using this technique when planning for safe and efficient paths in continuous occupancy maps. | Traditional occupancy grid maps discretise the map into a fixed grid in order to estimate the occupancy posterior @cite_7 . In order to make computations tractable each cell is considered as an independent random variable. The computational gains are substantial since the posterior calculation can be done separately for each cell. The drawback is the loss of spatial relationship between neighbouring cells. To alleviate this problem, a non-parametric approach based on Gaussian Processes (GPs) was proposed in @cite_14 . 
The Gaussian Process Occupancy Map (GPOM) produces probabilistic occupancy posteriors based on sensor observations. Using a parameterised covariance function, GPOM captures spatial relationships, which enables continuous inference. The computational complexity is its main limitation, as it scales cubically with the number of observations. | {
"cite_N": [
"@cite_14",
"@cite_7"
],
"mid": [
"1977189000",
"1999050017"
],
"abstract": [
"We introduce a new statistical modelling technique for building occupancy maps. The problem of mapping is addressed as a classification task where the robot's environment is classified into regions of occupancy and free space. This is obtained by employing a modified Gaussian process as a non-parametric Bayesian learning technique to exploit the fact that real-world environments inherently possess structure. This structure introduces dependencies between points on the map which are not accounted for by many common mapping techniques such as occupancy grids. Our approach is an 'anytime' algorithm that is capable of generating accurate representations of large environments at arbitrary resolutions to suit many applications. It also provides inferences with associated variances into occluded regions and between sensor beams, even with relatively few observations. Crucially, the technique can handle noisy data, potentially from multiple sources, and fuse it into a robust common probabilistic representation of the robot's surroundings. We demonstrate the benefits of our approach on simulated datasets with known ground truth and in outdoor urban environments.",
"An approach to robot perception and world modeling that uses a probabilistic tesselated representation of spatial information called the occupancy grid is reviewed. The occupancy grid is a multidimensional random field that maintains stochastic estimates of the occupancy state of the cells in a spatial lattice. To construct a sensor-derived map of the robot's world, the cell state estimates are obtained by interpreting the incoming range readings using probabilistic sensor models. Bayesian estimation procedures allow the incremental updating of the occupancy grid, using readings taken from several sensors over multiple points of view. The use of occupancy grids for mapping and for navigation is examined. Operations on occupancy grids and extensions of the occupancy grid framework are briefly considered."
]
} |
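The per-cell Bayesian update described in the occupancy-grid abstract above is usually implemented in log-odds form, where each observation becomes a simple addition; a minimal dependency-free sketch (the inverse-sensor-model probabilities are made up for illustration):

```python
import math

def log_odds(p):
    return math.log(p / (1.0 - p))

def probability(l):
    return 1.0 - 1.0 / (1.0 + math.exp(l))

class OccupancyGrid:
    """Each cell is treated as an independent random variable; its
    posterior is accumulated additively in log-odds space."""

    def __init__(self, prior=0.5):
        self.l0 = log_odds(prior)
        self.cells = {}  # (i, j) -> accumulated log-odds

    def update(self, cell, p_occupied):
        # Bayesian update given an inverse sensor model reading p_occupied.
        l = self.cells.get(cell, self.l0)
        self.cells[cell] = l + log_odds(p_occupied) - self.l0

    def posterior(self, cell):
        return probability(self.cells.get(cell, self.l0))

grid = OccupancyGrid()
grid.update((2, 2), 0.3)   # beam passed through -> probably free
grid.update((2, 5), 0.9)   # beam endpoint -> probably occupied
assert grid.posterior((2, 5)) > 0.5 > grid.posterior((2, 2))
```

The independence assumption is exactly what makes the update a per-cell addition, and exactly what the GP-based approach in the row above gives up in exchange for spatial correlation.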
1703.00154 | 2592531347 | Building a complete inertial navigation system using the limited quality data provided by current smartphones has been regarded challenging, if not impossible. This paper shows that by careful crafting and accounting for the weak information in the sensor samples, smartphones are capable of pure inertial navigation. We present a probabilistic approach for orientation and use-case free inertial odometry, which is based on double-integrating rotated accelerations. The strength of the model is in learning additive and multiplicative IMU biases online. We are able to track the phone position, velocity, and pose in real-time and in a computationally lightweight fashion by solving the inference with an extended Kalman filter. The information fusion is completed with zero-velocity updates (if the phone remains stationary), altitude correction from barometric pressure readings (if available), and pseudo-updates constraining the momentary speed. We demonstrate our approach using an iPad and iPhone in several indoor dead-reckoning applications and in a measurement tool setup. | Inertial navigation systems have been studied for decades. The classical literature covers primarily navigation applications for aircraft and large vehicles @cite_4 @cite_23 @cite_28 @cite_12 . The development of handheld consumer-grade devices has awakened an interest in pedestrian navigation applications, where the challenges are slightly different from those in the classical approaches. That is, the limited quality of smartphone MEMS sensors and abrupt motions of hand-held devices pose additional challenges which have so far prevented generic inertial navigation solutions for smartphone applications. | {
"cite_N": [
"@cite_28",
"@cite_4",
"@cite_12",
"@cite_23"
],
"mid": [
"1564768010",
"1569116522",
"1493051473",
"1531532259"
],
"abstract": [
"Inertial navigation is widely used for the guidance of aircraft, missiles, ships and land vehicles, as well as in a number of novel applications such as surveying underground pipelines in drilling operations. This book discusses the physical principles of inertial navigation, the associated growth of errors and their compensation. It draws on current technological developments, provides an indication of potential future trends and covers a broad range of applications. New chapters on MEMS (microelectromechanical systems) technology and inertial system applications are included.",
"Coordinate frames and transformations ordinary differential equations inertial measurement unit inertial navigation system system error dynamics stochastic processes and error models linear estimation INS initialization and alignment the global positioning system (GPS) geodetic application.",
"This book offers a guide for avionics system engineers who want to compare the performance of the various types of inertial navigation systems. The author emphasizes systems used on or near the surface of the planet, but says the principles can be applied to craft in space or underwater with a little tinkering. Part of the material is adapted from the author's doctoral dissertation, but much is from his lecture notes for a one-semester graduate course in inertial navigation systems for students who were already adept in classical mechanics, kinematics, inertial instrument theory, and inertial platform mechanization. This book was first published in 1971 but no revision has been necessary so far because the earth's spin is so much more stable than its magnetic field.",
"From the Publisher: \"Estimation with Applications to Tracking and Navigation treats the estimation of various quantities from inherently inaccurate remote observations. It explains state estimator design using a balanced combination of linear systems, probability, and statistics.\" \"The authors provide a review of the necessary background mathematical techniques and offer an overview of the basic concepts in estimation. They then provide detailed treatments of all the major issues in estimation with a focus on applying these techniques to real systems.\" \"Suitable for graduate engineering students and engineers working in remote sensors and tracking, Estimation with Applications to Tracking and Navigation provides expert coverage of this important area.\"--BOOK JACKET."
]
} |
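The core of the inertial odometry described above is double integration of rotated accelerations, stabilized by zero-velocity updates; a deliberately simplified 2-D sketch (real systems track full 3-D orientation, gravity, and IMU biases in an EKF, all omitted here):

```python
import math

def dead_reckon(samples, dt):
    """Naive 2-D strapdown integration: rotate each body-frame
    acceleration into the world frame by the current heading,
    integrate to velocity, and integrate again to position.
    A zero-velocity update (ZUPT) clamps drift while stationary."""
    x = y = vx = vy = 0.0
    for ax_b, ay_b, heading, stationary in samples:
        c, s = math.cos(heading), math.sin(heading)
        ax = c * ax_b - s * ay_b   # world-frame acceleration
        ay = s * ax_b + c * ay_b
        if stationary:
            vx = vy = 0.0          # ZUPT: reset accumulated velocity error
        else:
            vx += ax * dt
            vy += ay * dt
        x += vx * dt
        y += vy * dt
    return x, y

# 1 s of constant 1 m/s^2 forward acceleration at heading 0:
x, y = dead_reckon([(1.0, 0.0, 0.0, False)] * 100, dt=0.01)
assert abs(x - 0.505) < 1e-9 and abs(y) < 1e-9
```

Without the ZUPT branch, any constant accelerometer bias grows quadratically in position, which is why the paper's online bias learning and pseudo-updates matter.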
1703.00154 | 2592531347 | Building a complete inertial navigation system using the limited quality data provided by current smartphones has been regarded challenging, if not impossible. This paper shows that by careful crafting and accounting for the weak information in the sensor samples, smartphones are capable of pure inertial navigation. We present a probabilistic approach for orientation and use-case free inertial odometry, which is based on double-integrating rotated accelerations. The strength of the model is in learning additive and multiplicative IMU biases online. We are able to track the phone position, velocity, and pose in real-time and in a computationally lightweight fashion by solving the inference with an extended Kalman filter. The information fusion is completed with zero-velocity updates (if the phone remains stationary), altitude correction from barometric pressure readings (if available), and pseudo-updates constraining the momentary speed. We demonstrate our approach using an iPad and iPhone in several indoor dead-reckoning applications and in a measurement tool setup. | The extensive survey by Harle @cite_27 covers many approaches with different constraints for the use of inertial sensors for pedestrian dead-reckoning (PDR). Typically INS systems either constrain the motion model or rely on external sensors. In fact, we are not aware of any previous system which would have all the capabilities that we demonstrate in this paper. | {
"cite_N": [
"@cite_27"
],
"mid": [
"2044831755"
],
"abstract": [
"With the continual miniaturisation of sensors and processing nodes, Pedestrian Dead Reckoning (PDR) systems are becoming feasible options for indoor tracking. These use inertial and other sensors, often combined with domain-specific knowledge about walking, to track user movements. There is currently a wealth of relevant literature spread across different research communities. In this survey, a taxonomy of modern PDRs is developed and used to contextualise the contributions from different areas. Techniques for step detection, characterisation, inertial navigation and step-and-heading-based dead-reckoning are reviewed and compared. Techniques that incorporate building maps through particle filters are analysed, along with hybrid systems that use absolute position fixes to correct dead-reckoning output. In addition, consideration is given to the possibility of using smartphones as PDR sensing devices. The survey concludes that PDR techniques alone can offer good short- to medium- term tracking under certain circumstances, but that regular absolute position fixes from partner systems will be needed to ensure long-term operation and to cope with unexpected behaviours. It concludes by identifying a detailed list of challenges for PDR researchers."
]
} |
1703.00395 | 2950237263 | We propose a new approach to the problem of optimizing autoencoders for lossy image compression. New media formats, changing hardware technology, as well as diverse requirements and content types create a need for compression algorithms which are more flexible than existing codecs. Autoencoders have the potential to address this need, but are difficult to optimize directly due to the inherent non-differentiability of the compression loss. We here show that minimal changes to the loss are sufficient to train deep autoencoders competitive with JPEG 2000 and outperforming recently proposed approaches based on RNNs. Our network is furthermore computationally efficient thanks to a sub-pixel architecture, which makes it suitable for high-resolution images. This is in contrast to previous work on autoencoders for compression using coarser approximations, shallower architectures, computationally expensive methods, or focusing on small images. | explored using variational autoencoders with recurrent encoders and decoders for compression of small images. This type of autoencoder is trained to maximize the lower bound of a log-likelihood, or equivalently to minimize where @math plays the role of the encoder, and @math plays the role of the decoder. While used a Gaussian distribution for the encoder, we can link their approach to the work of @cite_0 by assuming it to be uniform, @math . If we also assume a Gaussian likelihood with fixed variance, @math , the objective function can be written Here, @math is a constant which encompasses the negative entropy of the encoder and the normalization constant of the Gaussian likelihood. Note that this equation is identical to a rate-distortion trade-off with @math and quantization replaced by additive uniform noise. However, not all distortions have an equivalent formulation as a variational autoencoder .
This only works if @math is normalizable in @math and the normalization constant does not depend on @math , or otherwise @math will not be constant. A direct empirical comparison of our approach with variational autoencoders is provided in Appendix . | {
"cite_N": [
"@cite_0"
],
"mid": [
"2953001887"
],
"abstract": [
"We introduce a general framework for end-to-end optimization of the rate--distortion performance of nonlinear transform codes assuming scalar quantization. The framework can be used to optimize any differentiable pair of analysis and synthesis transforms in combination with any differentiable perceptual metric. As an example, we consider a code built from a linear transform followed by a form of multi-dimensional local gain control. Distortion is measured with a state-of-the-art perceptual metric. When optimized over a large database of images, this representation offers substantial improvements in bitrate and perceptual appearance over fixed (DCT) codes, and over linear transform codes optimized for mean squared error."
]
} |
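The link drawn in the row above between quantization and additive uniform noise is commonly used as a differentiable training-time surrogate for rounding; a minimal sketch (the function name is my own, not from either paper):

```python
import random

def soft_quantize(z, train=True):
    """Training: replace hard rounding with additive noise from
    U(-0.5, 0.5), which matches the statistics of quantization error
    while keeping the loss differentiable in z.
    Test time: actual rounding to the nearest integer."""
    if train:
        return z + random.uniform(-0.5, 0.5)
    return float(round(z))

assert soft_quantize(3.2, train=False) == 3.0
random.seed(0)
assert all(abs(soft_quantize(1.7) - 1.7) <= 0.5 for _ in range(1000))
```

Gradients then flow through the identity `z + noise` during training, while the test-time code path applies the non-differentiable quantizer the entropy coder actually needs.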
1703.00095 | 2954088770 | This paper considers the problem of active object recognition using touch only. The focus is on adaptively selecting a sequence of wrist poses that achieves accurate recognition by enclosure grasps. It seeks to minimize the number of touches and maximize recognition confidence. The actions are formulated as wrist poses relative to each other, making the algorithm independent of absolute workspace coordinates. The optimal sequence is approximated by Monte Carlo tree search. We demonstrate results in a physics engine and on a real robot. In the physics engine, most object instances were recognized in at most 16 grasps. On a real robot, our method recognized objects in 2--9 grasps and outperformed a greedy baseline. | Similar to our active end-effector pose selection, active viewpoints have been used to select camera poses to gather information, such as in @cite_9 . Sensing with vision only is considered active perception. Our work is closer to interactive perception, which physically contacts the environment @cite_37 . | {
"cite_N": [
"@cite_9",
"@cite_37"
],
"mid": [
"2012224615",
"2337977475"
],
"abstract": [
"This paper introduces a tactile or contact method whereby an autonomous robot equipped with suitable sensors can choose the next sensing action involving touch in order to accurately localize an object in its environment. The method uses an information gain metric based on the uncertainty of the object's pose to determine the next best touching action. Intuitively, the optimal action is the one that is the most informative. The action is then carried out and the state of the object's pose is updated using an estimator. The method is further extended to choose the most informative action to simultaneously localize and estimate the object's model parameter or model class. Results are presented both in simulation and in experiment on the DARPA Autonomous Robotic Manipulation Software (ARM-S) robot.",
"Recent approaches in robot perception follow the insight that perception is facilitated by interaction with the environment. These approaches are subsumed under the term Interactive Perception (IP). This view of perception provides the following benefits. First, interaction with the environment creates a rich sensory signal that would otherwise not be present. Second, knowledge of the regularity in the combined space of sensory data and action parameters facilitates the prediction and interpretation of the sensory signal. In this survey, we postulate this as a principle for robot perception and collect evidence in its support by analyzing and categorizing existing work in this area. We also provide an overview of the most important applications of IP. We close this survey by discussing remaining open questions. With this survey, we hope to help define the field of Interactive Perception and to provide a valuable resource for future research."
]
} |
1703.00095 | 2954088770 | This paper considers the problem of active object recognition using touch only. The focus is on adaptively selecting a sequence of wrist poses that achieves accurate recognition by enclosure grasps. It seeks to minimize the number of touches and maximize recognition confidence. The actions are formulated as wrist poses relative to each other, making the algorithm independent of absolute workspace coordinates. The optimal sequence is approximated by Monte Carlo tree search. We demonstrate results in a physics engine and on a real robot. In the physics engine, most object instances were recognized in at most 16 grasps. On a real robot, our method recognized objects in 2--9 grasps and outperformed a greedy baseline. | Early work have explored touch-only object recognition not involving active planning. Bajcsy @cite_20 compared human haptic exploratory procedures (EPs) observed by Lederman and Klatzky @cite_7 to robots, and Allen @cite_22 extended them to a tactile robotic hand. Gaston @cite_5 , Grimson @cite_10 , and Siegel @cite_8 used Interpretation Trees for recognition and pose estimation. | {
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_5",
"@cite_10",
"@cite_20"
],
"mid": [
"2180140657",
"1969299770",
"",
"2061734820",
"1532977286",
""
],
"abstract": [
"The use of touch sensing as part of a large system being built for 3D shape recovery and object recognition using touch and vision methods is described. The authors focus on three exploratory procedures they have devised to acquire and interpret sparse 3D touch data: grasping by containment, planar surface exploration, and surface contour exploration. Experimental results for each of these procedures are presented.",
"Abstract Two experiments establish links between desired knowledge about objects and hand movements during haptic object exploration. Experiment 1 used a match-to-sample task, in which blindfolded subjects were directed to match objects on a particular dimension (e.g., texture). Hand movements during object exploration were reliably classified as “exploratory procedures,” each procedure defined by its invariant and typical properties. The movement profile, i.e., the distribution of exploratory procedures, was directly related to the desired object knowledge that was required for the match. Experiment 2 addressed the reasons for the specific links between exploratory procedures and knowledge goals. Hand movements were constrained, and performance on various matching tasks was assessed. The procedures were considered in terms of their necessity, sufficiency, and optimality of performance for each task. The results establish that in free exploration, a procedure is generally used to acquire information about an object property, not because it is merely sufficient, but because it is optimal or even necessary. Hand movements can serve as “windows,” through which it is possible to learn about the underlying representation of objects in memory and the processes by which such representations are derived and utilized.",
"",
"This paper discusses how data from multiple tactile sensors may be used to identify and locate one object, from among a set of known objects. We use only local information from sensors: 1) the position of contact points and 2) ranges of surface normals at the contact points. The recognition and localization process is structured as the development and pruning of a tree of consistent hypotheses about pairings between contact points and object surfaces. In this paper, we deal with polyhedral objects constrained to lie on a known plane, i.e., having three degrees of positioning freedom relative to the sensors. We illustrate the performance of the algorithm by simulation.",
"This paper discusses how local measurements of three-dimensional positions and surface normals (recorded by a set of tactile sensors, or by three-dimensional range sensors), may be used to identify and locate objects from among a set of known objects. The objects are modeled as polyhedra having up to six degrees of freedom relative to the sensors. We show that inconsistent hypotheses about pairings between sensed points and object surfaces can be discarded efficiently by using local constraints on distances between faces, angles between face normals, and angles (relative to the surface normals) of vectors between sensed points. We show by simulation and by mathematical bounds that the number of hypotheses consistent with these constraints is small. We also show how to recover the position and orientation of the object from the sensory data. The algorithm's performance on data obtained from a triangulation range sensor is illustrated.",
""
]
} |
1703.00095 | 2954088770 | This paper considers the problem of active object recognition using touch only. The focus is on adaptively selecting a sequence of wrist poses that achieves accurate recognition by enclosure grasps. It seeks to minimize the number of touches and maximize recognition confidence. The actions are formulated as wrist poses relative to each other, making the algorithm independent of absolute workspace coordinates. The optimal sequence is approximated by Monte Carlo tree search. We demonstrate results in a physics engine and on a real robot. In the physics engine, most object instances were recognized in at most 16 grasps. On a real robot, our method recognized objects in 2--9 grasps and outperformed a greedy baseline. | Solving for lookahead policy directly is impractically costly, as every possible state in each step ahead needs to be considered. We tackle this in two ways. First, we use a Monte Carlo optimization method from reinforcement learning literature @cite_15 . Second, instead of modeling the state space, we formulate a probability dependent only on the observations and actions. It is considerably lower dimensional and generalizes to any object descriptor and robot platform. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2107726111"
],
"abstract": [
"This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word \"reinforcement.\" The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning."
]
} |
1703.00095 | 2954088770 | This paper considers the problem of active object recognition using touch only. The focus is on adaptively selecting a sequence of wrist poses that achieves accurate recognition by enclosure grasps. It seeks to minimize the number of touches and maximize recognition confidence. The actions are formulated as wrist poses relative to each other, making the algorithm independent of absolute workspace coordinates. The optimal sequence is approximated by Monte Carlo tree search. We demonstrate results in a physics engine and on a real robot. In the physics engine, most object instances were recognized in at most 16 grasps. On a real robot, our method recognized objects in 2--9 grasps and outperformed a greedy baseline. | Monte Carlo tree search (MCTS) @cite_38 has become popular for real-time decisions in AI. It is an online alternative to dynamic programming and uses repeated simulations to construct a tree in a best-first order. Kocsis and Szepesvári @cite_21 showed that a tree policy using UCT (Upper Confidence bounds applied to Trees) guarantees asymptotic optimality. Feldman and Domshlak @cite_30 introduced BRUE, a purely exploring MCTS that guarantees exponential-rate reduction of simple regret. Silver and Veness @cite_27 extended MCTS to partially-observable models. MCTS has been used for game solving @cite_36 and belief-space planning in robotics @cite_34 @cite_24 @cite_0 , but has not been applied to manipulation. | {
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_36",
"@cite_21",
"@cite_24",
"@cite_0",
"@cite_27",
"@cite_34"
],
"mid": [
"2157136665",
"2126316555",
"2151210636",
"1625390266",
"2016122564",
"2962795549",
"",
""
],
"abstract": [
"We consider online planning in Markov decision processes (MDPs). In online planning, the agent focuses on its current state only, deliberates about the set of possible policies from that state onwards and, when interrupted, uses the outcome of that exploratory deliberation to choose what action to perform next. Formally, the performance of algorithms for online planning is assessed in terms of simple regret, the agent's expected performance loss when the chosen action, rather than an optimal one, is followed. To date, state-of-the-art algorithms for online planning in general MDPs are either best effort, or guarantee only polynomial-rate reduction of simple regret over time. Here we introduce a new Monte-Carlo tree search algorithm, BRUE, that guarantees exponential- rate and smooth reduction of simple regret. At a high level, BRUE is based on a simple yet non-standard state-space sampling scheme, MCTS2e, in which different parts of each sample are dedicated to different exploratory objectives. We further extend BRUE with a variant of \"learning by forgetting.\" The resulting parametrized algorithm, BRUE(α), exhibits even more attractive formal guarantees than BRUE. Our empirical evaluation shows that both BRUE and its generalization, BRUE(α), are also very effective in practice and compare favorably to the state-of-the-art.",
"Monte Carlo tree search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research. We outline the core algorithm's derivation, impart some structure on the many variations and enhancements that have been proposed, and summarize the results from the key game and nongame domains to which MCTS methods have been applied. A number of open research questions indicate that the field is ripe for future work.",
"The combination of modern Reinforcement Learning and Deep Learning approaches holds the promise of making significant progress on challenging applications requiring both rich perception and policy-selection. The Arcade Learning Environment (ALE) provides a set of Atari games that represent a useful benchmark set of such applications. A recent breakthrough in combining model-free reinforcement learning with deep learning, called DQN, achieves the best real-time agents thus far. Planning-based approaches achieve far higher scores than the best model-free approaches, but they exploit information that is not available to human players, and they are orders of magnitude slower than needed for real-time play. Our main goal in this work is to build a better real-time Atari game playing agent than DQN. The central idea is to use the slow planning-based agents to provide training data for a deep-learning architecture capable of real-time play. We proposed new agents based on this idea and show that they outperform DQN.",
"For large state-space Markovian Decision Problems Monte-Carlo planning is one of the few viable approaches to find near-optimal solutions. In this paper we introduce a new algorithm, UCT, that applies bandit ideas to guide Monte-Carlo planning. In finite-horizon or discounted MDPs the algorithm is shown to be consistent and finite sample bounds are derived on the estimation error due to sampling. Experimental results show that in several domains, UCT is significantly more efficient than its alternatives.",
"",
"Many problems in artificial intelligence require adaptively making a sequence of decisions with uncertain outcomes under partial observability. Solving such stochastic optimization problems is a fundamental but notoriously difficult challenge. In this paper, we introduce the concept of adaptive submodularity, generalizing submodular set functions to adaptive policies. We prove that if a problem satisfies this property, a simple adaptive greedy algorithm is guaranteed to be competitive with the optimal policy. In addition to providing performance guarantees for both stochastic maximization and coverage, adaptive submodularity can be exploited to drastically speed up the greedy algorithm by using lazy evaluations. We illustrate the usefulness of the concept by giving several examples of adaptive submodular objectives arising in diverse AI applications including management of sensing resources, viral marketing and active learning. Proving adaptive submodularity for these problems allows us to recover existing results in these applications as special cases, improve approximation guarantees and handle natural generalizations.",
"",
""
]
} |
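The UCT tree policy attributed to Kocsis and Szepesvári in the row above balances each child's empirical mean value against an exploration bonus; a minimal sketch of the selection rule (the dict-based node representation is an illustrative assumption):

```python
import math

def uct_select(children, c=1.4):
    """Pick the child maximizing average value plus a bonus that grows
    with total parent visits and shrinks with the child's own visits."""
    total = sum(ch["visits"] for ch in children)

    def score(ch):
        if ch["visits"] == 0:
            return float("inf")   # always try unvisited children first
        exploit = ch["value"] / ch["visits"]
        explore = c * math.sqrt(math.log(total) / ch["visits"])
        return exploit + explore

    return max(children, key=score)

children = [{"visits": 100, "value": 60.0},
            {"visits": 100, "value": 10.0},
            {"visits": 0, "value": 0.0}]
assert uct_select(children)["visits"] == 0       # unvisited child wins
assert uct_select(children[:2])["value"] == 60.0 # then the best mean wins
```

In a full MCTS loop this rule is applied at every level of the tree during selection, before the expansion, rollout, and backpropagation phases.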
1703.00177 | 2949130069 | We present a generative method to estimate 3D human motion and body shape from monocular video. Under the assumption that starting from an initial pose optical flow constrains subsequent human motion, we exploit flow to find temporally coherent human poses of a motion sequence. We estimate human motion by minimizing the difference between computed flow fields and the output of an artificial flow renderer. A single initialization step is required to estimate motion over multiple frames. Several regularization functions enhance robustness over time. Our test scenarios demonstrate that optical flow effectively regularizes the under-constrained problem of human shape and motion estimation from monocular video. | 3D human pose estimation is often based on the use of a body model. Human body representations exist in 2D and 3D. Many of the following methods utilize the 3D human body model SCAPE @cite_25 . SCAPE is a deformable mesh model learned from body scans. Pose and shape of the model are parametrized by a set of body part rotations and low dimensional shape deformations. In recent work the SMPL model, a more accurate blend shape model compatible with existing rendering engines, has been presented by Loper et al. @cite_32 . | {
"cite_N": [
"@cite_32",
"@cite_25"
],
"mid": [
"1967554269",
"1989191365"
],
"abstract": [
"We present a learned model of human body shape and pose-dependent shape variation that is more accurate than previous models and is compatible with existing graphics pipelines. Our Skinned Multi-Person Linear model (SMPL) is a skinned vertex-based model that accurately represents a wide variety of body shapes in natural human poses. The parameters of the model are learned from data including the rest pose template, blend weights, pose-dependent blend shapes, identity-dependent blend shapes, and a regressor from vertices to joint locations. Unlike previous models, the pose-dependent blend shapes are a linear function of the elements of the pose rotation matrices. This simple formulation enables training the entire model from a relatively large number of aligned 3D meshes of different people in different poses. We quantitatively evaluate variants of SMPL using linear or dual-quaternion blend skinning and show that both are more accurate than a Blend-SCAPE model trained on the same data. We also extend SMPL to realistically model dynamic soft-tissue deformations. Because it is based on blend skinning, SMPL is compatible with existing rendering engines and we make it available for research purposes.",
"We introduce the SCAPE method (Shape Completion and Animation for PEople)---a data-driven method for building a human shape model that spans variation in both subject shape and pose. The method is based on a representation that incorporates both articulated and non-rigid deformations. We learn a pose deformation model that derives the non-rigid surface deformation as a function of the pose of the articulated skeleton. We also learn a separate model of variation based on body shape. Our two models can be combined to produce 3D surface models with realistic muscle deformation for different people in different poses, when neither appear in the training set. We show how the model can be used for shape completion --- generating a complete surface mesh given a limited set of markers specifying the target shape. We present applications of shape completion to partial view completion and motion capture animation. In particular, our method is capable of constructing a high-quality animated surface model of a moving person, with realistic muscle deformation, using just a single static scan and a marker motion capture sequence of the person."
]
} |
1703.00177 | 2949130069 | We present a generative method to estimate 3D human motion and body shape from monocular video. Under the assumption that starting from an initial pose optical flow constrains subsequent human motion, we exploit flow to find temporally coherent human poses of a motion sequence. We estimate human motion by minimizing the difference between computed flow fields and the output of an artificial flow renderer. A single initialization step is required to estimate motion over multiple frames. Several regularization functions enhance robustness over time. Our test scenarios demonstrate that optical flow effectively regularizes the under-constrained problem of human shape and motion estimation from monocular video. | A variety of approaches to 3D pose estimation have been presented using various cues including shape from shading, silhouettes and edges. Due to the highly ill-posed and under-constrained nature of the problem these methods often require user interaction e.g. through manual annotation of body joints on the image @cite_29 @cite_12 . | {
"cite_N": [
"@cite_29",
"@cite_12"
],
"mid": [
"2018854916",
"2108665084"
],
"abstract": [
"This paper investigates the problem of recovering information about the configuration of an articulated object, such as a human figure, from point correspondences in a single image. Unlike previous approaches, the proposed reconstruction method does not assume that the imagery was acquired with a calibrated camera. An analysis is presented which demonstrates that there is a family of solutions to this reconstruction problem parameterized by a single variable. A simple and effective algorithm is proposed for recovering the entire set of solutions by considering the foreshortening of the segments of the model in the image. Results obtained by applying this algorithm to real images are presented.",
"Recovering the 3D coordinates of various joints of the human body from an image is a critical first step for several model-based human tracking and optical motion capture systems. Unlike previous approaches that have used a restrictive camera model or assumed a calibrated camera, our work deals with the general case of a perspective uncalibrated camera and is thus well suited for archived video. The input to the system is an image of the human body and correspondences of several body landmarks, while the output is the set of 3D coordinates of the landmarks in a body-centric coordinate system. Using ideas from 3D model based invariants, we set up a polynomial system of equations in the unknown head pitch, yaw and roll angles. If we are able to make the often-valid assumption that the torso twist is small, there are finite numbers of solutions to the head-orientation that can be computed readily. Once the head orientation is computed, the epipolar geometry of the camera is recovered, leading to solutions to the 3D joint positions. Results are presented on synthetic and real images."
]
} |
1703.00177 | 2949130069 | We present a generative method to estimate 3D human motion and body shape from monocular video. Under the assumption that starting from an initial pose optical flow constrains subsequent human motion, we exploit flow to find temporally coherent human poses of a motion sequence. We estimate human motion by minimizing the difference between computed flow fields and the output of an artificial flow renderer. A single initialization step is required to estimate motion over multiple frames. Several regularization functions enhance robustness over time. Our test scenarios demonstrate that optical flow effectively regularizes the under-constrained problem of human shape and motion estimation from monocular video. | Guan et al. @cite_6 have been the first to present a detailed method to recover human pose together with an accurate shape estimate from single images. Based on manual initialization, parameters of the SCAPE model are optimized exploiting edge overlap and shading. The work is based on @cite_9 , a method that recovers the 3D pose from silhouettes from 3-4 calibrated cameras. Similar methods have been presented by Bălan et al. @cite_33 and Sigal et al. @cite_36 , also requiring multi-view input. Hasler et al. @cite_11 fit a statistical body model @cite_22 into monocular image silhouettes. A similar approach is followed by Chen et al. @cite_16 . In recent work, Bogo et al. @cite_5 present the first method to extract both pose and shape from a single image fully automatically. 2D joint locations are found using the CNN-based approach DeepCut @cite_27 , then projected joints of the SMPL model are fitted against the 2D locations. The presented method is similar to ours as it also relies on 2D features. In contrast to our work no consistency with the image silhouette or temporal coherency is guaranteed. | {
"cite_N": [
"@cite_33",
"@cite_22",
"@cite_36",
"@cite_9",
"@cite_6",
"@cite_27",
"@cite_5",
"@cite_16",
"@cite_11"
],
"mid": [
"2150297019",
"1993846356",
"2103025041",
"2134484928",
"2545173102",
"2951256101",
"2483862638",
"1522277130",
"1992475172"
],
"abstract": [
"Strong lighting is common in natural scenes yet is often viewed as a nuisance for object pose estimation and tracking. In human shape and pose estimation, cast shadows can be confused with foreground structure while self shadowing and shading variation on the body cause the appearance of the person to change with pose. Rather than attempt to minimize the effects of lighting and shadows, we show that strong lighting in a scene actually makes pose and shape estimation more robust. Additionally, by recovering multiple body poses we are able to automatically estimate the lighting in the scene and the albedo of the body. Our approach makes use of a detailed 3D body model, the parameters of which are directly recovered from image data. We provide a thorough exploration of human pose estimation under strong lighting conditions and show: 1. the estimation of the light source from cast shadows; 2. the estimation of the light source and the albedo of the body from multiple body poses; 3. that a point light and cast shadows on the ground plane can be treated as an additional \"shadow camera\" that improves pose and shape recovery, particularly in monocular scenes. Additionally we introduce the notion of albedo constancy which employs lighting normalized image data for matching. Our experiments with multiple subjects show that rather than causing problems, strong lighting improves human pose and shape estimation.",
"A circuit for controlling a display panel identifying malfunctions in an engine generator receives a plurality of electrical signals from the engine generator, each of which identifies a particular trouble. The electrical signal may be produced by closing a switch. It is caused to operate a latch that lights a light associated with the particular malfunction. Indications of other malfunctions are suppressed until the circuit is reset. A manual reset tests all lights and then leaves them off ready to respond. A power-up reset does not test lights but leaves all lights off ready to respond. The circuit is rendered especially appropriate for military use by hardening against radiation and against pulses of electromagnetic interference.",
"Estimation of three-dimensional articulated human pose and motion from images is a central problem in computer vision. Much of the previous work has been limited by the use of crude generative models of humans represented as articulated collections of simple parts such as cylinders. Automatic initialization of such models has proved difficult and most approaches assume that the size and shape of the body parts are known a priori. In this paper we propose a method for automatically recovering a detailed parametric model of non-rigid body shape and pose from monocular imagery. Specifically, we represent the body using a parameterized triangulated mesh model that is learned from a database of human range scans. We demonstrate a discriminative method to directly recover the model parameters from monocular images using a conditional mixture of kernel regressors. This predicted pose and shape are used to initialize a generative model for more detailed pose and shape estimation. The resulting approach allows fully automatic pose and shape recovery from monocular and multi-camera imagery. Experimental results show that our method is capable of robustly recovering articulated pose, shape and biometric measurements (e.g. height, weight, etc.) in both calibrated and uncalibrated camera environments.",
"Much of the research on video-based human motion capture assumes the body shape is known a priori and is represented coarsely (e.g. using cylinders or superquadrics to model limbs). These body models stand in sharp contrast to the richly detailed 3D body models used by the graphics community. Here we propose a method for recovering such models directly from images. Specifically, we represent the body using a recently proposed triangulated mesh model called SCAPE which employs a low-dimensional, but detailed, parametric model of shape and pose-dependent deformations that is learned from a database of range scans of human bodies. Previous work showed that the parameters of the SCAPE model could be estimated from marker-based motion capture data. Here we go further to estimate the parameters directly from image data. We define a cost function between image observations and a hypothesized mesh and formulate the problem as optimization over the body shape and pose parameters using stochastic search. Our results show that such rich generative models enable the automatic recovery of detailed human shape and pose from images.",
"We describe a solution to the challenging problem of estimating human body shape from a single photograph or painting. Our approach computes shape and pose parameters of a 3D human body model directly from monocular image cues and advances the state of the art in several directions. First, given a user-supplied estimate of the subject's height and a few clicked points on the body we estimate an initial 3D articulated body pose and shape. Second, using this initial guess we generate a tri-map of regions inside, outside and on the boundary of the human, which is used to segment the image using graph cuts. Third, we learn a low-dimensional linear model of human shape in which variations due to height are concentrated along a single dimension, enabling height-constrained estimation of body shape. Fourth, we formulate the problem of parametric human shape from shading. We estimate the body pose, shape and reflectance as well as the scene lighting that produces a synthesized body that robustly matches the image evidence. Quantitative experiments demonstrate how smooth shading provides powerful constraints on human shape. We further demonstrate a novel application in which we extract 3D human models from archival photographs and paintings.",
"This paper considers the task of articulated human pose estimation of multiple people in real world images. We propose an approach that jointly solves the tasks of detection and pose estimation: it infers the number of persons in a scene, identifies occluded body parts, and disambiguates body parts between people in close proximity of each other. This joint formulation is in contrast to previous strategies, that address the problem by first detecting people and subsequently estimating their body pose. We propose a partitioning and labeling formulation of a set of body-part hypotheses generated with CNN-based part detectors. Our formulation, an instance of an integer linear program, implicitly performs non-maximum suppression on the set of part candidates and groups them to form configurations of body parts respecting geometric and appearance constraints. Experiments on four different datasets demonstrate state-of-the-art results for both single person and multi person pose estimation. Models and code available at this http URL.",
"We describe the first method to automatically estimate the 3D pose of the human body as well as its 3D shape from a single unconstrained image. We estimate a full 3D mesh and show that 2D joints alone carry a surprising amount of information about body shape. The problem is challenging because of the complexity of the human body, articulation, occlusion, clothing, lighting, and the inherent ambiguity in inferring 3D from 2D. To solve this, we first use a recently published CNN-based method, DeepCut, to predict (bottom-up) the 2D body joint locations. We then fit (top-down) a recently published statistical body shape model, called SMPL, to the 2D joints. We do so by minimizing an objective function that penalizes the error between the projected 3D model joints and detected 2D joints. Because SMPL captures correlations in human shape across the population, we are able to robustly fit it to very little data. We further leverage the 3D model to prevent solutions that cause interpenetration. We evaluate our method, SMPLify, on the Leeds Sports, HumanEva, and Human3.6M datasets, showing superior pose accuracy with respect to the state of the art.",
"In this paper we propose a probabilistic framework that models shape variations and infers dense and detailed 3D shapes from a single silhouette. We model two types of shape variations, the object phenotype variation and its pose variation using two independent Gaussian Process Latent Variable Models (GPLVMs) respectively. The proposed shape variation models are learnt from 3D samples without prior knowledge about object class, e.g. object parts and skeletons, and are combined to fully span the 3D shape space. A novel probabilistic inference algorithm for 3D shape estimation is proposed by maximum likelihood estimates of the GPLVM latent variables and the camera parameters that best fit generated 3D shapes to given silhouettes. The proposed inference involves a small number of latent variables and it is computationally efficient. Experiments on both human body and shark data demonstrate the efficacy of our new approach.",
"In this paper we propose a multilinear model of human pose and body shape which is estimated from a database of registered 3D body scans in different poses. The model is generated by factorizing the measurements into pose and shape dependent components. By combining it with an ICP based registration method, we are able to estimate pose and body shape of dressed subjects from single images. If several images of the subject are available, shape and poses can be optimized simultaneously for all input images. Additionally, while estimating pose and shape, we use the model as a virtual calibration pattern and also recover the parameters of the perspective camera model the images were created with."
]
} |
1703.00177 | 2949130069 | We present a generative method to estimate 3D human motion and body shape from monocular video. Under the assumption that starting from an initial pose optical flow constrains subsequent human motion, we exploit flow to find temporally coherent human poses of a motion sequence. We estimate human motion by minimizing the difference between computed flow fields and the output of an artificial flow renderer. A single initialization step is required to estimate motion over multiple frames. Several regularization functions enhance robustness over time. Our test scenarios demonstrate that optical flow effectively regularizes the under-constrained problem of human shape and motion estimation from monocular video. | 3D human pose estimation can serve as a preliminary step for image based rendering techniques. In early work Carranza et al. @cite_18 have been the first to present free-viewpoint video using model-based reconstruction of human motion using the subject's silhouette in multiple camera views. Zhou et al. @cite_10 and Jain et al. @cite_37 present updates to model-based pose estimation for subsequent reshaping of humans in images and videos respectively. Rogge et al. @cite_0 fit a 3D model for automatic cloth exchange in videos. All methods utilize various cues, none of them uses optical flow for motion estimation. | {
"cite_N": [
"@cite_0",
"@cite_18",
"@cite_10",
"@cite_37"
],
"mid": [
"1997337978",
"2153903029",
"2075834168",
"2088230067"
],
"abstract": [
"We present a semi-automatic approach to exchange the clothes of an actor for arbitrary virtual garments in conventional monocular video footage as a postprocess. We reconstruct the actor's body shape and motion from the input video using a parameterized body model. The reconstructed dynamic 3D geometry of the actor serves as an animated mannequin for simulating the virtual garment. It also aids in scene illumination estimation, necessary to realistically light the virtual garment. An image-based warping technique ensures realistic compositing of the rendered virtual garment and the original video. We present results for eight real-world video sequences featuring complex test cases to evaluate performance for different types of motion, camera settings, and illumination conditions.",
"",
"We present an easy-to-use image retouching technique for realistic reshaping of human bodies in a single image. A model-based approach is taken by integrating a 3D whole-body morphable model into the reshaping process to achieve globally consistent editing effects. A novel body-aware image warping approach is introduced to reliably transfer the reshaping effects from the model to the image, even under moderate fitting errors. Thanks to the parametric nature of the model, our technique parameterizes the degree of reshaping by a small set of semantic attributes, such as weight and height. It allows easy creation of desired reshaping effects by changing the full-body attributes, while producing visually pleasing results even for loosely-dressed humans in casual photographs with a variety of poses and shapes.",
"We present a system for quick and easy manipulation of the body shape and proportions of a human actor in arbitrary video footage. The approach is based on a morphable model of 3D human shape and pose that was learned from laser scans of real people. The algorithm commences by spatio-temporally fitting the pose and shape of this model to the actor in either single-view or multi-view video footage. Once the model has been fitted, semantically meaningful attributes of body shape, such as height, weight or waist girth, can be interactively modified by the user. The changed proportions of the virtual human model are then applied to the actor in all video frames by performing an image-based warping. By this means, we can now conveniently perform spatio-temporal reshaping of human actors in video footage which we show on a variety of video sequences."
]
} |
1703.00177 | 2949130069 | We present a generative method to estimate 3D human motion and body shape from monocular video. Under the assumption that starting from an initial pose optical flow constrains subsequent human motion, we exploit flow to find temporally coherent human poses of a motion sequence. We estimate human motion by minimizing the difference between computed flow fields and the output of an artificial flow renderer. A single initialization step is required to estimate motion over multiple frames. Several regularization functions enhance robustness over time. Our test scenarios demonstrate that optical flow effectively regularizes the under-constrained problem of human shape and motion estimation from monocular video. | Different works have been presented exploiting optical flow for different purposes. Sapp et al. @cite_17 and Fragkiadaki et al. @cite_19 use optical flow for segmentation as a preliminary step for pose estimation. Both exploit the rigid structure revealing property of optical flow, rather than information about motion. Fablet and Black @cite_30 use optical flow to learn motion models for automatic detection of human motion. Efros et al. @cite_14 categorize human motion viewed from a distance by building an optical flow-based motion descriptor. Both methods label motion without revealing the underlying movement pattern. | {
"cite_N": [
"@cite_30",
"@cite_19",
"@cite_14",
"@cite_17"
],
"mid": [
"2117834610",
"",
"2138105460",
"2093949207"
],
"abstract": [
"This paper proposes a solution for the automatic detection and tracking of human motion in image sequences. Due to the complexity of the human body and its motion, automatic detection of 3D human motion remains an open, and important, problem. Existing approaches for automatic detection and tracking focus on 2D cues and typically exploit object appearance (color distribution, shape) or knowledge of a static background. In contrast, we exploit 2D optical flow information which provides rich descriptive cues, while being independent of object and background appearance. To represent the optical flow patterns of people from arbitrary viewpoints, we develop a novel representation of human motion using low-dimensional spatio-temporal models that are learned using motion capture data of human subjects. In addition to human motion (the foreground) we probabilistically model the motion of generic scenes (the background); these statistical models are defined as Gibbsian fields specified from the first-order derivatives of motion observations. Detection and tracking are posed in a principled Bayesian framework which involves the computation of a posterior probability distribution over the model parameters (i.e., the location and the type of the human motion) given a sequence of optical flow observations. Particle filtering is used to represent and predict this non-Gaussian posterior distribution over time. The model parameters of samples from this distribution are related to the pose parameters of a 3D articulated model (e.g. the approximate joint angles and movement direction). Thus the approach proves suitable for initializing more complex probabilistic models of human motion. As shown by experiments on real image sequences, our method is able to detect and track people under different viewpoints with complex backgrounds.",
"",
"Our goal is to recognize human action at a distance, at resolutions where a whole person may be, say, 30 pixels tall. We introduce a novel motion descriptor based on optical flow measurements in a spatiotemporal volume for each stabilized human figure, and an associated similarity measure to be used in a nearest-neighbor framework. Making use of noisy optical flow measurements is the key challenge, which is addressed by treating optical flow not as precise pixel displacements, but rather as a spatial pattern of noisy measurements which are carefully smoothed and aggregated to form our spatiotemporal motion descriptor. To classify the action being performed by a human figure in a query sequence, we retrieve nearest neighbor(s) from a database of stored, annotated video sequences. We can also use these retrieved exemplars to transfer 2D/3D skeletons onto the figures in the query sequence, as well as two forms of data-based action synthesis: \"do as I do\" and \"do as I say\". Results are demonstrated on ballet, tennis as well as football datasets.",
"We address the problem of articulated human pose estimation in videos using an ensemble of tractable models with rich appearance, shape, contour and motion cues. In previous articulated pose estimation work on unconstrained videos, using temporal coupling of limb positions has made little to no difference in performance over parsing frames individually [8, 28]. One crucial reason for this is that joint parsing of multiple articulated parts over time involves intractable inference and learning problems, and previous work has resorted to approximate inference and simplified models. We overcome these computational and modeling limitations using an ensemble of tractable submodels which couple locations of body joints within and across frames using expressive cues. Each submodel is responsible for tracking a single joint through time (e.g., left elbow) and also models the spatial arrangement of all joints in a single frame. Because of the tree structure of each submodel, we can perform efficient exact inference and use rich temporal features that depend on image appearance, e.g., color tracking and optical flow contours. We propose and experimentally investigate a hierarchy of submodel combination methods, and we find that a highly efficient max-marginal combination method outperforms much slower (by orders of magnitude) approximate inference using dual decomposition. We apply our pose model on a new video dataset of highly varied and articulated poses from TV shows. We show significant quantitative and qualitative improvements over state-of-the-art single-frame pose estimation approaches."
]
} |