aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1904.02969 | 2930360627 | We present semantic attribute matching networks (SAM-Net) for jointly establishing correspondences and transferring attributes across semantically similar images, which intelligently weaves the advantages of the two tasks while overcoming their limitations. SAM-Net accomplishes this through an iterative process of establishing reliable correspondences by reducing the attribute discrepancy between the images and synthesizing attribute-transferred images using the learned correspondences. To learn the networks using weak supervision in the form of image pairs, we present a semantic attribute matching loss based on the matching similarity between an attribute-transferred source feature and a warped target feature. With SAM-Net, state-of-the-art performance is attained on several benchmarks for semantic matching and attribute transfer. | In parametric methods, inspired by the seminal work of @cite_9 , numerous methods have been presented, such as the work of @cite_26 , AdaIN @cite_28 , and WCT @cite_12 . Since these methods are globally formulated, they have shown limited performance on photorealistic stylization tasks @cite_5 @cite_54 . To alleviate these limitations, deep photo style transfer @cite_54 was proposed, which computes and uses semantic labels, and Photo-WCT @cite_5 was proposed to eliminate the artifacts using an additional smoothing step. However, these methods are still formulated without considering semantically meaningful correspondence fields. | {
"cite_N": [
"@cite_26",
"@cite_28",
"@cite_9",
"@cite_54",
"@cite_5",
"@cite_12"
],
"mid": [
"2950689937",
"",
"1924619199",
"2604721644",
"2788095258",
"2962772087"
],
"abstract": [
"We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.",
"",
"In fine art, especially painting, humans have mastered the skill to create unique visual experiences through composing a complex interplay between the content and style of an image. Thus far the algorithmic basis of this process is unknown and there exists no artificial system with similar capabilities. However, in other key areas of visual perception such as object and face recognition near-human performance was recently demonstrated by a class of biologically inspired vision models called Deep Neural Networks. Here we introduce an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality. The system uses neural representations to separate and recombine content and style of arbitrary images, providing a neural algorithm for the creation of artistic images. Moreover, in light of the striking similarities between performance-optimised artificial neural networks and biological vision, our work offers a path forward to an algorithmic understanding of how humans create and perceive artistic imagery.",
"This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style. Our approach builds upon the recent work on painterly transfer that separates style from the content of an image by considering different layers of a neural network. However, as is, this approach is not suitable for photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. Our contribution is to constrain the transformation from the input to the output to be locally affine in colorspace, and to express this constraint as a custom fully differentiable energy term. We show that this approach successfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, including transfer of the time of day, weather, season, and artistic edits.",
"Photorealistic image style transfer algorithms aim at stylizing a content photo using the style of a reference photo with the constraint that the stylized photo should remains photorealistic. While several methods exist for this task, they tend to generate spatially inconsistent stylizations with noticeable artifacts. In addition, these methods are computationally expensive, requiring several minutes to stylize a VGA photo. In this paper, we present a novel algorithm to address the limitations. The proposed algorithm consists of a stylization step and a smoothing step. While the stylization step transfers the style of the reference photo to the content photo, the smoothing step encourages spatially consistent stylizations. Unlike existing algorithms that require iterative optimization, both steps in our algorithm have closed-form solutions. Experimental results show that the stylized photos generated by our algorithm are twice more preferred by human subjects in average. Moreover, our method runs 60 times faster than the state-of-the-art approach. Code and additional results are available at this https URL",
"Universal style transfer aims to transfer arbitrary visual styles to content images. Existing feed-forward based methods, while enjoying the inference efficiency, are mainly limited by inability of generalizing to unseen styles or compromised visual quality. In this paper, we present a simple yet effective method that tackles these limitations without training on any pre-defined styles. The key ingredient of our method is a pair of feature transforms, whitening and coloring, that are embedded to an image reconstruction network. The whitening and coloring transforms reflect direct matching of feature covariance of the content image to a given style image, which shares similar spirits with the optimization of Gram matrix based cost in neural style transfer. We demonstrate the effectiveness of our algorithm by generating high-quality stylized images with comparisons to a number of recent methods. We also analyze our method by visualizing the whitened features and synthesizing textures by simple feature coloring."
]
} |
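As a point of reference for the parametric methods discussed in the related work above, the AdaIN transform (@cite_28) amounts to aligning channel-wise feature statistics of the content features to those of the style features. Below is a minimal NumPy sketch, assuming (C, H, W) feature maps; the published method applies this inside a VGG-based encoder-decoder, which is not shown here.

```python
import numpy as np

def adain(content_feat: np.ndarray, style_feat: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Adaptive Instance Normalization over (C, H, W) feature maps.

    Normalizes each content channel to zero mean / unit variance, then
    rescales it with the corresponding style channel's statistics.
    """
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True) + eps
    return s_std * (content_feat - c_mean) / c_std + s_mean

# Toy usage on random "feature maps"
content = np.random.randn(64, 32, 32)
style = np.random.randn(64, 32, 32)
stylized = adain(content, style)
```

Because the transform is a global, per-channel statistic match, it illustrates why such parametric methods struggle with photorealistic stylization: no spatial correspondence between images is used.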
1904.02969 | 2930360627 | We present semantic attribute matching networks (SAM-Net) for jointly establishing correspondences and transferring attributes across semantically similar images, which intelligently weaves the advantages of the two tasks while overcoming their limitations. SAM-Net accomplishes this through an iterative process of establishing reliable correspondences by reducing the attribute discrepancy between the images and synthesizing attribute-transferred images using the learned correspondences. To learn the networks using weak supervision in the form of image pairs, we present a semantic attribute matching loss based on the matching similarity between an attribute-transferred source feature and a warped target feature. With SAM-Net, state-of-the-art performance is attained on several benchmarks for semantic matching and attribute transfer. | Among non-parametric methods, the seminal work of @cite_48 first searches the target style image for local neural patches similar to those of the content image, in order to preserve the local structure prior of the content image, and then uses them to synthesize the stylized image. @cite_35 sped up this process by using feed-forward networks to decode the synthesized features. Inspired by this, various approaches have been proposed to synthesize locally blended features efficiently @cite_40 @cite_46 @cite_42 @cite_21 @cite_22 . However, the aforementioned methods are tailored to artistic style transfer, and thus they focus on finding patches that reconstruct more plausible images rather than on finding semantically meaningful dense correspondences. They generally estimate the nearest-neighbor patches using weak implicit regularization methods such as winner-takes-all (WTA). Recently, @cite_37 introduced a deep feature reshuffle technique to connect the parametric and non-parametric methods, but they search for the nearest neighbor using an expectation-maximization (EM) scheme that also yields limited localization accuracy. | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_22",
"@cite_48",
"@cite_42",
"@cite_21",
"@cite_40",
"@cite_46"
],
"mid": [
"2564755245",
"",
"2572730214",
"",
"2781372203",
"",
"2951745349",
"2952226636"
],
"abstract": [
"Artistic style transfer is an image synthesis problem where the content of an image is reproduced with the style of another. Recent works show that a visually appealing style transfer can be achieved by using the hidden activations of a pretrained convolutional neural network. However, existing methods either apply (i) an optimization procedure that works for any style image but is very expensive, or (ii) an efficient feedforward network that only allows a limited number of trained styles. In this work we propose a simpler optimization objective based on local matching that combines the content structure and style textures in a single layer of the pretrained network. We show that our objective has desirable properties such as a simpler optimization landscape, intuitive parameter tuning, and consistent frame-by-frame performance on video. Furthermore, we use 80,000 natural images and 80,000 paintings to train an inverse network that approximates the result of the optimization. This results in a procedure for artistic style transfer that is efficient but also allows arbitrary content and style images.",
"",
"The recent work of , who characterized the style of an image by the statistics of convolutional neural network filters, ignited a renewed interest in the texture generation and image stylization problems. While their image generation technique uses a slow optimization process, recently several authors have proposed to learn generator neural networks that can produce similar outputs in one quick forward pass. While generator networks are promising, they are still inferior in visual quality and diversity compared to generation-by-optimization. In this work, we advance them in two significant ways. First, we introduce an instance normalization module to replace batch normalization with significant improvements to the quality of image stylization. Second, we improve diversity by introducing a new learning formulation that encourages generators to sample unbiasedly from the Julesz texture ensemble, which is the equivalence class of all images characterized by certain filter responses. Together, these two improvements take feed forward texture synthesis and image stylization much closer to the quality of generation-via-optimization, while retaining the speed advantage.",
"",
"Recently, the community of style transfer is trying to incorporate semantic information into traditional system. This practice achieves better perceptual results by transferring the style between semantically-corresponding regions. Yet, few efforts are invested to address the computation bottleneck of back-propagation. In this paper, we propose a new framework for fast semantic style transfer. Our method decomposes the semantic style transfer problem into feature reconstruction part and feature decoder part. The reconstruction part tactfully solves the optimization problem of content loss and style loss in feature space by particularly reconstructed feature. This significantly reduces the computation of propagating the loss through the whole network. The decoder part transforms the reconstructed feature into the stylized image. Through a careful bridging of the two modules, the proposed approach not only achieves competitive results as backward optimization methods but also is about two orders of magnitude faster.",
"",
"This paper proposes Markovian Generative Adversarial Networks (MGANs), a method for training generative neural networks for efficient texture synthesis. While deep neural network approaches have recently demonstrated remarkable results in terms of synthesis quality, they still come at considerable computational costs (minutes of run-time for low-res images). Our paper addresses this efficiency issue. Instead of a numerical deconvolution in previous work, we precompute a feed-forward, strided convolutional network that captures the feature statistics of Markovian patches and is able to directly generate outputs of arbitrary dimensions. Such network can directly decode brown noise to realistic texture, or photos to artistic paintings. With adversarial training, we obtain quality comparable to recent neural texture synthesis methods. As no optimization is required any longer at generation time, our run-time performance (0.25M pixel images at 25Hz) surpasses previous neural texture synthesizers by a significant margin (at least 500 times faster). We apply this idea to texture synthesis, style transfer, and video stylization.",
"recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their methods requires a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer artistic style from a given image to any other image. The resulting networks are remarkably light-weight and can generate textures of quality comparable to Gatys et al., but hundreds of times faster. More generally, our approach highlights the power and flexibility of generative feed-forward models trained with complex and expressive loss functions."
]
} |
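The patch-matching step underlying the non-parametric methods above can be sketched as a brute-force nearest-neighbor search under normalized cross-correlation, which is exactly the winner-takes-all (WTA) assignment the related work criticizes as weak implicit regularization. The sketch below is illustrative only; practical implementations operate on deep feature maps and batch the correlations as convolutions.

```python
import numpy as np

def extract_patches(feat, size=3):
    """Collect all size x size patches from a (C, H, W) feature map."""
    c, h, w = feat.shape
    patches = []
    for i in range(h - size + 1):
        for j in range(w - size + 1):
            patches.append(feat[:, i:i + size, j:j + size].ravel())
    return np.stack(patches)

def nearest_style_patches(content_feat, style_feat, size=3):
    """For every content patch, return the index of the most similar
    style patch under normalized cross-correlation (a WTA assignment,
    with no spatial regularization across neighboring patches)."""
    cp = extract_patches(content_feat, size)
    sp = extract_patches(style_feat, size)
    cp_norm = cp / (np.linalg.norm(cp, axis=1, keepdims=True) + 1e-8)
    sp_norm = sp / (np.linalg.norm(sp, axis=1, keepdims=True) + 1e-8)
    return np.argmax(cp_norm @ sp_norm.T, axis=1)

# Toy usage on random "feature maps"
content = np.random.randn(8, 10, 10)
style = np.random.randn(8, 10, 10)
idx = nearest_style_patches(content, style)
```

Since each content patch picks its winner independently, the resulting assignment need not form a coherent correspondence field, which is the limitation SAM-Net targets.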
1904.03045 | 2928762708 | Machine Learning systems rely on data for training, input and ongoing feedback and validation. Data in the field can come from varied sources, often anonymous or unknown to the ultimate users of the data. Whenever data is sourced and used, its consumers need assurance that the data accuracy is as described, that the data has been obtained legitimately, and they need to understand the terms under which the data is made available so that they can honour them. Similarly, suppliers of data require assurances that their data is being used legitimately by authorised parties, in accordance with their terms, and that usage is appropriately recompensed. Furthermore, both parties may want to agree on a specific set of quality of service (QoS) metrics, which can be used to negotiate service quality based on cost, and then receive affirmation that data is being supplied within those agreed QoS levels. Here we present a conceptual architecture which enables data sharing agreements to be encoded and computationally enforced, remuneration to be made when required, and a trusted audit trail to be produced for later analysis or reproduction of the environment. Our architecture uses blockchain-based distributed ledger technology, which can facilitate transactions in situations where parties do not have an established trust relationship or centralised command and control structures. We explore techniques to promote faith in the accuracy of the supplied data, and to let data users determine trade-offs between data quality and cost. Our system is exemplified through consideration of a case study using multiple data sources from different parties to monitor traffic levels in urban locations. | In considering supply chains in the agri-food industry, Opara @cite_11 defines traceability as 'the collection, documentation, maintenance, and application of information related to all processes in the supply chain in a manner that provides guarantee to the consumer and other stakeholders on the origin, location and life history of a product as well as assisting in crises management in the event of a safety and quality breach.' This definition applies well to the requirements for traceability in data systems. Opara @cite_11 further identifies six important elements of traceability which combine to constitute an integrated food supply chain traceability system: product traceability, process traceability, genetic traceability, inputs traceability, disease and pest traceability and measurement traceability. Whilst these are not all directly applicable to a data system, parallels can readily be drawn such that a system to provide traceability in a data ecosystem would need to provide traceability on: products, processes, inputs, errors or corrupt data, and measurements. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2183812742"
],
"abstract": [
"In recent times, the accurate and timely traceability of products and activities in the supply chain has become a new factor in food and agribusiness. Increasingly, consumers in many parts of the world demand for verifiable evidence of traceability as an important criterion of food product quality safety. This trend has been underpinned by several market-pull factors including increasing global demand for food products originating from diverse sources, high incidence of food-related health hazards and increasing concern over the impacts of genetically modified organisms (GMOs) on the human food chain and the environment. In order to meet consumer demands for consistent supply of top quality, safe and nutritious foods, as well as rebuild public confidence in the food chain, the design and implementation of full backward and forward traceable supply chains from farm to end-user has become an important part of the overall food quality assurance system. Farmers, postharvest handling operators, marketers, research practitioners and policy makers need good understanding of the concepts and implications of supply chain traceability to assist in developing and implementing appropriate technological interventions to meet consumer demands for traceable agricultural supply chains. The objectives of this article are to: (a) review the concepts of supply chain management and traceability in agriculture, and (b) highlight the technological challenges in implementing traceable agricultural supply chains. Development of appropriate measurement tools for food product labeling and identification, activity process characterization, information systems for data capture, analysis, storage and communication, and the integration of the overall traceable supply chain are essential for success."
]
} |
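The traceability elements listed above map naturally onto an append-only, hash-chained audit log of the kind a distributed ledger provides. The following is a minimal Python sketch, assuming hypothetical record fields; it omits the consensus and smart-contract layers that the paper's architecture would supply.

```python
import hashlib
import json
import time

def append_record(chain, record):
    """Append a traceability record (product, process, inputs, errors,
    measurements) linked to the previous entry by its hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"record": record, "prev_hash": prev_hash, "ts": time.time()}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(entry)
    return entry

def verify(chain):
    """Recompute every hash; any tampered record breaks the chain."""
    for i, entry in enumerate(chain):
        body = {k: v for k, v in entry.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False
        if i and entry["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

log = []
append_record(log, {"type": "measurement", "source": "sensor-42", "value": 17.3})
assert verify(log)
```

Because each entry commits to its predecessor, retroactively altering any measurement or input record invalidates the rest of the chain, which is what gives consumers confidence in the recorded provenance.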
1904.03082 | 2941432869 | The losses arising from a system being hit by cyber attacks can be staggeringly high, but defending against such attacks can also be costly. This work proposes an attack countermeasure selection approach based on cost impact analysis that takes into account the impacts of actions by both the attacker and the defender. We consider a networked system providing services whose provision depends on other components in the network. We model the costs and losses to service availability from compromises and defensive actions to the components, and show that while containment of the attack can be an effective defensive strategy, it can be more cost-efficient to allow parts of the attack to continue further whilst focusing on recovering services to a functional state. Based on this insight, we build a countermeasure selection method that chooses the most cost-effective action based on its impact on expected losses and costs over a given time horizon. Our method is evaluated using simulations in synthetic graphs representing network dependencies and vulnerabilities, and found to perform well in comparison to alternatives. | Our work differs from most of the existing literature on resilience by focusing on the actions and investment choices during an ongoing event (attack) and the recovery phase, instead of preparatory planning and capability investment. This focus is intended to address the evolving nature of systems, and adaptation to conditions such as loss of confidentiality or network unavailability. Most cyber resilience works have focused on the planning and design stage, such as @cite_14 @cite_3 @cite_15 @cite_10 . Additionally, papers considering reactive response and recovery address narrow settings that do not carry over to our work. For example, the approaches by @cite_18 @cite_2 only apply to settings where a control action to correct for a deviation from desired performance is easy to determine in advance, and to apply automatically. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_3",
"@cite_2",
"@cite_15",
"@cite_10"
],
"mid": [
"1993881789",
"1980156780",
"2125025316",
"2757657826",
"2254046393",
"2766738887"
],
"abstract": [
"This paper presents an unifying graph-based model for representing the infrastructure, behavior and missions of an enterprise. We describe how the model can be used to achieve resiliency against a wide class of failures and attacks. We introduce an algorithm for recommending resilience establishing actions based on dynamic updates to the models. Without loss of generality, we show the effectiveness of the algorithm for preserving latency based quality of service (QoS). Our models and the recommendation algorithms are implemented in a software framework that we seek to release as an open source framework for simulating resilient cyber systems.",
"The Internet has become essential to all aspects of modern life, and thus the consequences of network disruption have become increasingly severe. It is widely recognised that the Internet is not sufficiently resilient, survivable, and dependable, and that significant research, development, and engineering is necessary to improve the situation. This paper provides an architectural framework for resilience and survivability in communication networks and provides a survey of the disciplines that resilience encompasses, along with significant past failures of the network infrastructure. A resilience strategy is presented to defend against, detect, and remediate challenges, a set of principles for designing resilient networks is presented, and techniques are described to analyse network resilience.",
"Supply-demand processes take place on a large variety of real-world networked systems ranging from power grids and the internet to social networking and urban systems. In a modern infrastructure, supply-demand systems are constantly expanding, leading to constant increase in load requirement for resources and consequently, to problems such as low efficiency, resource scarcity, and partial system failures. Under certain conditions global catastrophe on the scale of the whole system can occur through the dynamical process of cascading failures. We investigate optimization and resilience of time-varying supply-demand systems by constructing network models of such systems, where resources are transported from the supplier sites to users through various links. Here by optimization we mean minimization of the maximum load on links, and system resilience can be characterized using the cascading failure size of users who fail to connect with suppliers. We consider two representative classes of supply schemes: load driven supply and fix fraction supply. Our findings are: (1) optimized systems are more robust since relatively smaller cascading failures occur when triggered by external perturbation to the links; (2) a large fraction of links can be free of load if resources are directed to transport through the shortest paths; (3) redundant links in the performance of the system can help to reroute the traffic but may undesirably transmit and enlarge the failure size of the system; (4) the patterns of cascading failures depend strongly upon the capacity of links; (5) the specific location of the trigger determines the specific route of cascading failure, but has little effect on the final cascading size; (6) system expansion typically reduces the efficiency; and (7) when the locations of the suppliers are optimized over a long expanding period, fewer suppliers are required. These results hold for heterogeneous networks in general, providing insights into designing optimal and resilient complex supply-demand systems that expand constantly in time.",
"Successful recovery from a disrupted state to maintain optimal performance is a key feature that a resilient complex engineered system should have. In the engineering design community, the current focus of engineering resilience research is primarily directed toward improving overall system performance in the presence of likelihood failures. Little attention has been given to the study of how the system responds during and or after the occurrence of a failure event. This paper proposes the use of control theory as a strategy to enable resilient behavior in complex engineered systems. Control theory has various benefits in its application to a resilient engineered system, with the main advantage being its ability to regulate and govern system states, even while the failure is taking place. In the context of implementation within a complex engineered system, such a controller should be designed such that, when a disturbance occurs, the controller should simultaneously be able to take timely action to correct the shift in system performance. To date, the fusion of control theory with engineering resilience has not been explored in-depth by the engineering design community. This paper, thus, presents a resilience modeling and analysis approach using fundamental control theory. The resilience of a power distribution system is employed as a case study to demonstrate the effectiveness of the proposed approach. The presented study also expects to aid in the concurrent development of resilience functions in complex engineered systems under uncertainty.",
"Building resilience into today's complex infrastructures is critical to the daily functioning of society and its ability to withstand and recover from natural disasters, epidemics, and cyber-threats. This study proposes quantitative measures that capture and implement the definition of engineering resilience advanced by the National Academy of Sciences. The approach is applicable across physical, information, and social domains. It evaluates the critical functionality, defined as a performance function of time set by the stakeholders. Critical functionality is a source of valuable information, such as the integrated system resilience over a time interval, and its robustness. The paper demonstrates the formulation on two classes of models: 1) multi-level directed acyclic graphs, and 2) interdependent coupled networks. For both models synthetic case studies are used to explore trends. For the first class, the approach is also applied to the Linux operating system. RESULTS indicate that desired resilience and robustness levels are achievable by trading off different design parameters, such as redundancy, node recovery time, and backup supply available. The nonlinear relationship between network parameters and resilience levels confirms the utility of the proposed approach, which is of benefit to analysts and designers of complex systems and networks. Language: en",
"There are many models and metrics developed to study the resilience of networks. Eigenvalues are the roots of the characteristic polynomial for a given graph and are mathematically rigorous compared to a statistical measure such as degree distribution. The graph energy is the sum of absolute values of eigenvalues; there is a subtle difference between the adjacency, Laplacian, and normalized Laplacian graph energy calculations. Our primary objective in this paper is to understand what different graph energy mean from a network resilience point of view. We calculate the adjacency, Laplacian, and normalized Laplacian graph energies on four backbone networks under targeted node and link attack scenarios. While adjacency and Laplacian graph energy decrease with node and link attacks, the normalized Laplacian energy increases with link attacks converging to a maximum value equal to the network order. The structural similarities of physical-level topologies is revealed by the close values of adjacency and Laplacian energies."
]
} |
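A skeletal version of the countermeasure-selection rule described in the abstract above: choose the action minimizing its cost plus expected losses over a given time horizon. The toy loss model below is a hypothetical stand-in for the paper's dependency-graph computation, chosen only to reproduce the stated insight that focusing on recovery can beat containment.

```python
def select_countermeasure(actions, expected_loss, horizon):
    """Choose the action minimizing action cost plus the sum of
    expected per-step service-availability losses over the horizon.

    `actions` maps an action name to its one-off cost;
    `expected_loss(action, t)` is a user-supplied loss model.
    """
    def total_cost(action):
        return actions[action] + sum(expected_loss(action, t) for t in range(horizon))
    return min(actions, key=total_cost)

# Toy loss model: "contain" halts spread but keeps a service degraded;
# "recover" lets the attack spread a bit while restoring service.
losses = {"contain": lambda t: 5.0, "recover": lambda t: 8.0 * 0.5 ** t}
best = select_countermeasure(
    {"contain": 10.0, "recover": 4.0},
    lambda a, t: losses[a](t),
    horizon=10,
)
print(best)  # "recover": 4 + ~16 beats 10 + 50 over this horizon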
1904.03008 | 2941709082 | Planning in stochastic and partially observable environments is a central issue in artificial intelligence. One commonly used technique for solving such a problem is by first constructing an accurate model. Although some recent approaches have been proposed for learning optimal behaviour under model uncertainty, prior knowledge about the environment is still needed to guarantee the performance of the proposed algorithms. With the benefits of the Predictive State Representations (PSRs) approach for state representation and model prediction, in this paper, we introduce an approach for planning from scratch, where an offline PSR model is first learned and then combined with online Monte-Carlo tree search for planning with model uncertainty. By comparing with the state-of-the-art approach of planning with model uncertainty, we demonstrate the effectiveness of the proposed approaches along with the proof of their convergence. The effectiveness and scalability of our proposed approach are also tested on the RockSample problem, which is infeasible for the state-of-the-art BA-POMDP based approaches. | With the benefits of online and sample-based planning for solving larger problems @cite_33 @cite_0 @cite_22 @cite_13 , some approaches have been proposed to solve the BA-POMDP model in an online manner. In the work of @cite_6 , an online POMDP solver is proposed that focuses on finding the optimal action to perform for the agent's current belief. @cite_32 @cite_7 extend the Monte-Carlo Tree Search method POMCP to BA-POMDPs, resulting in the state-of-the-art framework for learning and planning in BA-POMDPs. In the work of @cite_7 , a Factored Bayes-Adaptive POMDP model is introduced that exploits the underlying structure of some specific domains. While these approaches show promising performance on some problems, like other Bayesian-based approaches in the literature, their performance is highly dependent on the prior knowledge. | {
"cite_N": [
"@cite_22",
"@cite_33",
"@cite_7",
"@cite_32",
"@cite_6",
"@cite_0",
"@cite_13"
],
"mid": [
"2171084228",
"1512919909",
"2901283691",
"2963067607",
"2168839459",
"2144913588",
"2952258289"
],
"abstract": [
"This paper introduces a Monte-Carlo algorithm for online planning in large POMDPs. The algorithm combines a Monte-Carlo update of the agent's belief state with a Monte-Carlo tree search from the current belief state. The new algorithm, POMCP, has two important properties. First, Monte-Carlo sampling is used to break the curse of dimensionality both during belief state updates and during planning. Second, only a black box simulator of the POMDP is required, rather than explicit probability distributions. These properties enable POMCP to plan effectively in significantly larger POMDPs than has previously been possible. We demonstrate its effectiveness in three large POMDPs. We scale up a well-known benchmark problem, rocksample, by several orders of magnitude. We also introduce two challenging new POMDPs: 10 x 10 battleship and partially observable PacMan, with approximately 1018 and 1056 states respectively. Our Monte-Carlo planning algorithm achieved a high level of performance with no prior knowledge, and was also able to exploit simple domain knowledge to achieve better results with less search. POMCP is the first general purpose planner to achieve high performance in such large and unfactored POMDPs.",
"A critical issue for the application of Markov decision processes (MDPs) to realistic problems is how the complexity of planning scales with the size of the MDP. In stochastic environments with very large or infinite state spaces, traditional planning and reinforcement learning algorithms may be inapplicable, since their running time typically grows linearly with the state space size in the worst case. In this paper we present a new algorithm that, given only a generative model (a natural and common type of simulator) for an arbitrary MDP, performs on-line, near-optimal planning with a per-state running time that has no dependence on the number of states. The running time is exponential in the horizon time (which depends only on the discount factor γ and the desired degree of approximation to the optimal policy). Our algorithm thus provides a different complexity trade-off than classical algorithms such as value iteration—rather than scaling linearly in both horizon time and state space size, our running time trades an exponential dependence on the former in exchange for no dependence on the latter. Our algorithm is based on the idea of sparse sampling. We prove that a randomly sampled look-ahead tree that covers only a vanishing fraction of the full look-ahead tree nevertheless suffices to compute near-optimal actions from any state of an MDP. Practical implementations of the algorithm are discussed, and we draw ties to our related recent results on finding a near-best strategy from a given class of strategies in very large partially observable MDPs (Kearns, Mansour, & Ng. Neural information processing systems 13, to appear).",
"Model-based Bayesian Reinforcement Learning (BRL) provides a principled solution to dealing with the exploration-exploitation trade-off, but such methods typically assume a fully observable environments. The few Bayesian RL methods that are applicable in partially observable domains, such as the Bayes-Adaptive POMDP (BA-POMDP), scale poorly. To address this issue, we introduce the Factored BA-POMDP model (FBA-POMDP), a framework that is able to learn a compact model of the dynamics by exploiting the underlying structure of a POMDP. The FBA-POMDP framework casts the problem as a planning task, for which we adapt the Monte-Carlo Tree Search planning algorithm and develop a belief tracking method to approximate the joint posterior over the state and model variables. Our empirical results show that this method outperforms a number of BRL baselines and is able to learn efficiently when the factorization is known, as well as learn both the factorization and the model parameters simultaneously.",
"The POMDP is a powerful framework for reasoning under outcome and information uncertainty, but constructing an accurate POMDP model is difficult. Bayes-Adaptive Partially Observable Markov Decision Processes (BA-POMDPs) extend POMDPs to allow the model to be learned during execution. BA-POMDPs are a Bayesian RL approach that, in principle, allows for an optimal trade-off between exploitation and exploration. Unfortunately, BA-POMDPs are currently impractical to solve for any non-trivial domain. In this paper, we extend the Monte-Carlo Tree Search method POMCP to BA-POMDPs and show that the resulting method, which we call BA-POMCP, is able to tackle problems that previous solution methods have been unable to solve. Additionally, we introduce several techniques that exploit the BA-POMDP structure to improve the efficiency of BA-POMCP along with proof of their convergence.",
"Bayesian learning methods have recently been shown to provide an elegant solution to the exploration-exploitation trade-off in reinforcement learning. However most investigations of Bayesian reinforcement learning to date focus on the standard Markov Decision Processes (MDPs). The primary focus of this paper is to extend these ideas to the case of partially observable domains, by introducing the Bayes-Adaptive Partially Observable Markov Decision Processes. This new framework can be used to simultaneously (1) learn a model of the POMDP domain through interaction with the environment, (2) track the state of the system under partial observability, and (3) plan (near-)optimal sequences of actions. An important contribution of this paper is to provide theoretical results showing how the model can be finitely approximated while preserving good learning performance. We present approximate algorithms for belief tracking and planning in this model, as well as empirical results that illustrate how the model estimate and agent's return improve as a function of experience.",
"Partially Observable Markov Decision Processes (POMDPs) provide a rich framework for sequential decision-making under uncertainty in stochastic domains. However, solving a POMDP is often intractable except for small problems due to their complexity. Here, we focus on online approaches that alleviate the computational complexity by computing good local policies at each decision step during the execution. Online algorithms generally consist of a lookahead search to find the best action to execute at each time step in an environment. Our objectives here are to survey the various existing online POMDP methods, analyze their properties and discuss their advantages and disadvantages; and to thoroughly evaluate these online approaches in different environments under various metrics (return, error bound reduction, lower bound improvement). Our experimental results indicate that state-of-the-art online heuristic search methods can handle large POMDP domains efficiently.",
"The partially observable Markov decision process (POMDP) provides a principled general framework for planning under uncertainty, but solving POMDPs optimally is computationally intractable, due to the \"curse of dimensionality\" and the \"curse of history\". To overcome these challenges, we introduce the Determinized Sparse Partially Observable Tree (DESPOT), a sparse approximation of the standard belief tree, for online planning under uncertainty. A DESPOT focuses online planning on a set of randomly sampled scenarios and compactly captures the \"execution\" of all policies under these scenarios. We show that the best policy obtained from a DESPOT is near-optimal, with a regret bound that depends on the representation size of the optimal policy. Leveraging this result, we give an anytime online planning algorithm, which searches a DESPOT for a policy that optimizes a regularized objective function. Regularization balances the estimated value of a policy under the sampled scenarios and the policy size, thus avoiding overfitting. The algorithm demonstrates strong experimental results, compared with some of the best online POMDP algorithms available. It has also been incorporated into an autonomous driving system for real-time vehicle control. The source code for the algorithm is available online."
]
} |
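The sample-based planners cited above (POMCP and its BA-POMDP extensions) share the same in-tree action-selection rule, UCB1. A minimal sketch, assuming a node stores per-action visit counts and mean returns:

```python
import math
from types import SimpleNamespace

def ucb1_select(node, c=1.0):
    """Pick the child action maximizing Q + c * sqrt(ln N / n),
    the rule applied at every node of the Monte-Carlo search tree;
    unvisited actions are expanded first, as in POMCP."""
    total = sum(n for n, _ in node.children.values())
    best, best_score = None, -math.inf
    for action, (n, q) in node.children.items():
        if n == 0:
            return action  # try unvisited actions before exploiting
        score = q + c * math.sqrt(math.log(total) / n)
        if score > best_score:
            best, best_score = action, score
    return best

# Toy node: action -> (visit count, mean return)
node = SimpleNamespace(children={"north": (3, 1.2), "south": (1, 2.0)})
print(ucb1_select(node))  # "south": high value and little explored
```

The exploration constant c trades off trying rarely visited actions against exploiting high-value ones, which is where the Bayesian variants fold in their model uncertainty.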
1904.03008 | 2941709082 | Planning in stochastic and partially observable environments is a central issue in artificial intelligence. One commonly used technique for solving such a problem is by first constructing an accurate model. Although some recent approaches have been proposed for learning optimal behaviour under model uncertainty, prior knowledge about the environment is still needed to guarantee the performance of the proposed algorithms. With the benefits of the Predictive State Representations (PSRs) approach for state representation and model prediction, in this paper, we introduce an approach for planning from scratch, where an offline PSR model is first learned and then combined with online Monte-Carlo tree search for planning with model uncertainty. By comparing with the state-of-the-art approach of planning with model uncertainty, we demonstrate the effectiveness of the proposed approaches along with the proof of their convergence. The effectiveness and scalability of our proposed approach are also tested on the RockSample problem, which is infeasible for the state-of-the-art BA-POMDP based approaches. | Given that an accurate model of the environment is known a priori, combining approximate offline and online solving approaches is an efficient way to tackle large POMDPs, using offline algorithms to compute lower and upper bounds on the optimal value function @cite_2 . For fully observable domains, in the work of @cite_26 , offline and online value functions are combined in the UCT algorithm, where the offline value function is learned by using the @math algorithm @cite_17 and used as prior knowledge in the UCT search tree; experimental results in a @math Go program (MoGo) demonstrate the effectiveness of such a combination. | {
"cite_N": [
"@cite_26",
"@cite_17",
"@cite_2"
],
"mid": [
"",
"2100677568",
"2096976789"
],
"abstract": [
"",
"This article introduces a class of incremental learning procedures specialized for prediction – that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage.",
"Planning in partially observable environments remains a challenging problem, despite significant recent advances in offline approximation techniques. A few online methods have also been proposed recently, and proven to be remarkably scalable, but without the theoretical guarantees of their offline counterparts. Thus it seems natural to try to unify offline and online techniques, preserving the theoretical properties of the former, and exploiting the scalability of the latter. In this paper, we provide theoretical guarantees on an anytime algorithm for POMDPs which aims to reduce the error made by approximate offline value iteration algorithms through the use of an efficient online searching procedure. The algorithm uses search heuristics based on an error analysis of lookahead search, to guide the online search towards reachable beliefs with the most potential to reduce error. We provide a general theorem showing that these search heuristics are admissible, and lead to complete and ∊-optimal algorithms. This is, to the best of our knowledge, the strongest theoretical result available for online POMDP solution methods. We also provide empirical evidence showing that our approach is also practical, and can find (provably) near-optimal solutions in reasonable time."
]
} |
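The @math placeholder above elides the exact temporal-difference variant used to learn the offline value function; as a generic illustration of the family introduced in @cite_17, here is a tabular TD(0) update:

```python
def td0_update(V, state, reward, next_state, alpha=0.1, gamma=0.99):
    """One temporal-difference step: move V[state] toward the
    bootstrapped target reward + gamma * V[next_state], scaled by
    the learning rate alpha. Credit is assigned by the difference
    between temporally successive predictions, not final outcomes."""
    target = reward + gamma * V.get(next_state, 0.0)
    V[state] = V.get(state, 0.0) + alpha * (target - V.get(state, 0.0))
    return V

V = {}
td0_update(V, "s0", 1.0, "s1")
print(V)  # {'s0': 0.1}
```

A value table learned this way offline can then seed the node value estimates of the online UCT search, which is the combination the related work describes.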
1904.03107 | 2952180055 | Self-attention network (SAN) has recently attracted increasing interest due to its fully parallelized computation and flexibility in modeling dependencies. It can be further enhanced with a multi-headed attention mechanism by allowing the model to jointly attend to information from different representation subspaces at different positions (Vaswani et al., 2017). In this work, we propose a novel convolutional self-attention network (CSAN), which offers SAN the abilities to 1) capture neighboring dependencies, and 2) model the interaction between multiple attention heads. Experimental results on the WMT14 English-to-German translation task demonstrate that the proposed approach outperforms both the strong Transformer baseline and other existing works on enhancing the locality of SAN. Compared with previous work, our model does not introduce any new parameters. | Concerning modeling locality for SANs, @cite_13 injected several CNN layers to fuse local information, the output of which is fed to the subsequent SAN layer. Several studies proposed to revise the attention distribution with a parametric localness bias, and succeeded on machine translation @cite_1 and natural language inference @cite_17 . While both models introduce additional parameters, our approach is a more lightweight solution that does not introduce any new parameters. Closely related to this work, a positional mask has been applied to encode temporal order, which only allows SANs to attend to the previous or following tokens in the sequence. In contrast, we employ a positional mask (i.e., the tokens outside the local window are masked as @math ) to encode distance-aware local information. | {
"cite_N": [
"@cite_1",
"@cite_13",
"@cite_17"
],
"mid": [
"2889657534",
"",
"2905016804"
],
"abstract": [
"Self-attention networks have proven to be of profound value for its strength of capturing global dependencies. In this work, we propose to model localness for self-attention networks, which enhances the ability of capturing useful local context. We cast localness modeling as a learnable Gaussian bias, which indicates the central and scope of the local region to be paid more attention. The bias is then incorporated into the original attention distribution to form a revised distribution. To maintain the strength of capturing long distance dependencies and enhance the ability of capturing short-range dependencies, we only apply localness modeling to lower layers of self-attention networks. Quantitative and qualitative analyses on Chinese-English and English-German translation tasks demonstrate the effectiveness and universality of the proposed approach.",
"",
"Natural Language Inference (NLI) is an active research area, where numerous approaches based on recurrent neural networks (RNNs), convolutional neural networks (CNNs), and self-attention networks (SANs) has been proposed. Although obtaining impressive performance, previous recurrent approaches are hard to train in parallel; convolutional models tend to cost more parameters, while self-attention networks are not good at capturing local dependency of texts. To address this problem, we introduce a Gaussian prior to selfattention mechanism, for better modeling the local structure of sentences. Then we propose an efficient RNN CNN-free architecture named Gaussian Transformer for NLI, which consists of encoding blocks modeling both local and global dependency, high-order interaction blocks collecting the evidence of multi-step inference, and a lightweight comparison block saving lots of parameters. Experiments show that our model achieves new state-of-the-art performance on both SNLI and MultiNLI benchmarks with significantly fewer parameters and considerably less training time. Besides, evaluation using the Hard NLI datasets demonstrates that our approach is less affected by the undesirable annotation artifacts."
]
} |
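The positional-mask scheme described in the related work above (tokens outside the local window masked as @math, i.e., set to negative infinity before the softmax) can be sketched as follows; shapes and window size are illustrative, and multi-head interaction is omitted.

```python
import numpy as np

def local_attention(Q, K, V, window=2):
    """Scaled dot-product attention restricted to a local window.

    Positions farther than `window` tokens away are set to -inf before
    the softmax, so they receive exactly zero attention weight. This
    encodes distance-aware locality without any new parameters."""
    L, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)
    idx = np.arange(L)
    mask = np.abs(idx[:, None] - idx[None, :]) > window
    scores[mask] = -np.inf
    # Numerically stable softmax; masked entries become exp(-inf) = 0
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy usage: self-attention over a length-6 sequence of 8-dim tokens
x = np.random.randn(6, 8)
out = local_attention(x, x, x)
```

Because the mask is a fixed function of token distance, it adds no parameters, which matches the paper's claim of a lightweight solution relative to the learnable localness biases.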
1904.03158 | 2931240292 | This paper presents a hierarchical control strategy based on hybrid systems theory, nonlinear control, and safety-critical systems to enable cooperative locomotion of robotic guide dogs and visually impaired people. We address high-dimensional and complex hybrid dynamical models that represent collaborative locomotion. At the high level of the control scheme, local and nonlinear baseline controllers, based on the virtual constraints approach, are designed to induce exponentially stable dynamic gaits. The baseline controller for the leash is assumed to be a nonlinear controller that keeps the human at a safe distance from the dog while following it. At the lower level, a real-time quadratic programming (QP) is solved for modifying the baseline controllers of the robot as well as the leash to avoid obstacles. In particular, the QP framework is set up based on control barrier functions (CBFs) to compute optimal control inputs that guarantee safety while being close to the baseline controllers. The stability of the complex periodic gaits is investigated through the Poincaré return map. To demonstrate the power of the analytical foundation, the control algorithms are transferred into an extensive numerical simulation of a complex model that represents cooperative locomotion of a quadrupedal robot, referred to as Vision 60, and a human model. The complex model has 16 continuous-time domains with 60 state variables and 20 control inputs. | Although important theoretical and technological advances have occurred for the construction and control of guide robots, state-of-the-art approaches are mainly tailored to the deployment of wheeled vehicles and legged guide robots (e.g., @cite_23 @cite_33 @cite_17 ). Unlike wheeled guide robots, legged robots are complex dynamical systems with a hybrid nature and high degrees of freedom (DOF). This complicates the design of feedback control algorithms that ensure stable and safe cooperative locomotion of guide dogs and humans. Hybrid systems theory has become a powerful approach for modeling and control of legged robots both in theory and practice @cite_28 @cite_18 @cite_26 @cite_13 @cite_12 @cite_15 @cite_7 @cite_24 @cite_8 @cite_25 @cite_2 @cite_11 @cite_5 @cite_3 . Existing nonlinear control approaches that address the hybrid nature of legged locomotion models are developed based on hybrid reduction @cite_9 , controlled symmetries @cite_11 , transverse linearization @cite_5 , and hybrid zero dynamics (HZD) @cite_18 @cite_13 . State-of-the-art nonlinear control approaches for dynamic legged locomotion have been tailored to stable locomotion of legged robots, but not to stable and safe cooperative locomotion of legged guide robots and visually impaired people. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_11",
"@cite_33",
"@cite_7",
"@cite_8",
"@cite_28",
"@cite_9",
"@cite_3",
"@cite_24",
"@cite_23",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_13",
"@cite_25",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"2610901891",
"",
"",
"",
"2042921298",
"",
"1779583466",
"2512603523",
"",
"2889628315",
"1543608798",
"2022781970",
"",
"",
"1515982654",
"2792099719",
"2120436369"
],
"abstract": [
"",
"This paper presents three feedback controllers that achieve an asymptotically stable, periodic, and fast walking gait for a 3-D bipedal robot consisting of a torso, revolute knees, and passive (unactuated) point feet. The walking surface is assumed to be rigid and flat; the contact between the robot and the walking surface is assumed to inhibit yaw rotation. The studied robot has 8 DOF in the single support phase and six actuators. In addition to the reduced number of actuators, the interest of studying robots with point feet is that the feedback control solution must explicitly account for the robot's natural dynamics in order to achieve balance while walking. We use an extension of the method of virtual constraints and hybrid zero dynamics (HZD), a very successful method for planar bipeds, in order to simultaneously compute a periodic orbit and an autonomous feedback controller that realizes the orbit, for a 3-D (spatial) bipedal walking robot. This method allows the computations for the controller design and the periodic orbit to be carried out on a 2-DOF subsystem of the 8-DOF robot model. The stability of the walking gait under closed-loop control is evaluated with the linearization of the restricted Poincare map of the HZD. Most periodic walking gaits for this robot are unstable when the controlled outputs are selected to be the actuated coordinates. Three strategies are explored to produce stable walking. The first strategy consists of imposing a stability condition during the search of a periodic gait by optimization. The second strategy uses an event-based controller to modify the eigenvalues of the (linearized) Poincare map. In the third approach, the effect of output selection on the zero dynamics is discussed and a pertinent choice of outputs is proposed, leading to stabilization without the use of a supplemental event-based controller.",
"",
"",
"",
"While legged animals are adept at traversing rough landscapes, it remains a very challenging task for a legged robot to negotiate unknown terrain. Control systems for legged robots are plagued by dynamic constraints from underactuation, actuator power limits, and frictional ground contact; rather than relying purely on disturbance rejection, considerable advantage can be obtained by planning nominal trajectories which are more easily stabilized. In this paper, we present an approach for designing nominal periodic trajectories for legged robots that maximize a measure of robustness against uncertainty in the geometry of the terrain. We propose a direct collocation method which solves simultaneously for a nominal periodic control input, for many possible one-step solution trajectories (using ground profiles drawn from a distribution over terrain), and for the periodic solution to a jump Riccati equation which provides an expected infinite-horizon cost-to-go for each of these samples. We demonstrate that this trajectory optimization scheme can recover the known deadbeat open-loop control solution for the Spring Loaded Inverted Pendulum (SLIP) on unknown terrain. Moreover, we demonstrate that it generalizes to other models like the bipedal compass gait walker, resulting in a dramatic increase in the number of steps taken over moderate terrain when compared against a limit cycle optimized for efficiency only.",
"",
"The purpose of this paper is to apply methods from geometric mechanics to the analysis and control of bipedal robotic walkers. We begin by introducing a generalization of Routhian reduction, functional Routhian Reduction, which allows for the conserved quantities to be functions of the cyclic variables rather than constants. Since bipedal robotic walkers are naturally modeled as hybrid systems, which are inherently nonsmooth, in order to apply this framework to these systems it is necessary to first extend functional Routhian reduction to a hybrid setting. We apply this extension, along with potential shaping and controlled symmetries, to derive a feedback control law that provably results in walking gaits on flat ground for a three-dimensional bipedal walker given walking gaits in two dimensions.",
"While the goal of robotic bipedal walking to date has been the development of anthropomorphic gait, the community as a whole has been unable to agree upon an appropriate model to generate such gait. In this paper, we describe a method to segment human walking data in order to generate a robotic model capable of human-like walking. Generating the model requires the determination of the sequence of contact point enforcements which requires solving a combinatorial scheduling problem. We resolve this problem by transforming the detection of contact point enforcements into a constrained switched system optimal control problem for which we develop a provably convergent algorithm. We conclude the paper by illustrating the performance of the algorithm on identifying a model for robotic bipedal walking.",
"",
"Navigation in pedestrian environments is critical to enabling independent mobility for the blind and visually impaired (BVI) in their daily lives. White canes have been commonly used to obtain contact feedback for following walls, curbs, or man-made trails, whereas guide dogs can assist in avoiding physical contact with obstacles or other pedestrians. However, the infrastructures of tactile trails or guide dogs are expensive to maintain. Inspired by the autonomous lane following of self-driving cars, we wished to combine the capabilities of existing navigation solutions for BVI users. We proposed an autonomous, trail-following robotic guide dog that would be robust to variances of background textures, illuminations, and interclass trail variations. A deep convolutional neural network (CNN) is trained from both the virtual and realworld environments. Our work included major contributions: 1) conducting experiments to verify that the performance of our models trained in virtual worlds was comparable to that of models trained in the real world; 2) conducting user studies with 10 blind users to verify that the proposed robotic guide dog could effectively assist them in reliably following man-made trails.",
"Rigid bodies, plastic impact, persistent contact, Coulomb friction, and massless limbs are ubiquitous simplifications introduced to reduce the complexity of mechanics models despite the obvious physical inaccuracies that each incurs individually. In concert, it is well known that the interaction of such idealized approximations can lead to conflicting and even paradoxical results. As robotics modeling moves from the consideration of isolated behaviors to the analysis of tasks requiring their composition, a mathematically tractable framework for building models that combine these simple approximations yet achieve reliable results is overdue. In this paper we present a formal hybrid dynamical system model that introduces suitably restricted compositions of these familiar abstractions with the guarantee of consistency analogous to global existence and uniqueness in classical dynamical systems. The hybrid system developed here provides a discontinuous but self-consistent approximation to the continuous though possibly very stiff and fast dynamics of a physical robot undergoing intermittent impacts. The modeling choices sacrifice some quantitative numerical efficiencies while maintaining qualitatively correct and analytically tractable results with consistency guarantees promoting their use in formal reasoning about mechanism, feedback control, and behavior design in robots that make and break contact with their environment.",
"We propose a constructive control design for stabilization of non-periodic trajectories of underactuated robots. An important example of such a system is an underactuated âdynamic walkingâ biped robot traversing rough or uneven terrain. The stabilization problem is inherently challenging due to the nonlinearity, open-loop instability, hybrid (impact) dynamics, and target motions which are not known in advance. The proposed technique is to compute a transverse linearization about the desired motion: a linear impulsive system which locally represents âtransversalâ dynamics about a target trajectory. This system is then exponentially stabilized using a modified receding-horizon control design, providing exponential orbital stability of the target trajectory of the original nonlinear system. The proposed method is experimentally verified using a compass-gait walker: a two-degree-of-freedom biped with hip actuation but pointed stilt-like feet. The technique is, however, very general and can be applied to a wide variety of hybrid nonlinear systems.",
"",
"",
"For an underactuated biped on a constant-slope terrain, the hybrid zero dynamics (HZD) controller framework provides exponentially stable walking motions. In this paper, we quantify the stability of such a control system on rough terrain by estimating the expected number of steps before failure. In addition, we show how to switch between multiple HZD controllers (optionally using terrain look-ahead) to increase the stability dramatically, e.g., 10 thousand steps compared to 10. To do this robustly, we make use of the new meshing method proposed in this paper.",
"Hybrid zero dynamics (HZD) has emerged as a popular framework for dynamic walking but has significant implementation difficulties when applied to the high degrees of freedom humanoids. The primary impediment is the process of gait design—it is difficult for optimizers to converge on a viable set of virtual constraints defining a gait. This paper presents a methodology that allows for fast and reliable generation of dynamic robotic walking gaits through the HZD framework, even in the presence of underactuation. Specifically, we describe an optimization formulation that builds upon the novel combination of HZD and direct collocation methods. Furthermore, achieving a scalable implementation required developing a defect-variable substitution formulation to simplify expressions, which ultimately allows us to generate compact analytic Jacobians of the constraints. We experimentally validate our methodology on an underactuated humanoid, DURUS, a spring-legged machine designed to facilitate energy-economical walking. We show that the optimization approach, in concert with the HZD framework, yields dynamic and stable walking gaits in hardware with a total electrical cost of transport of 1.33.",
"The GuideCane is a device designed to help blind or visually impaired users navigate safely and quickly among obstacles and other hazards. During operation, the user pushes the lightweight GuideCane forward. When the GuideCane's ultrasonic sensors detect an obstacle, the embedded computer determines a suitable direction of motion that steers the GuideCane and the user around it. The steering action results in a very noticeable force felt in the handle, which easily guides the user without any conscious effort on his her part."
]
} |
1904.03100 | 2933442455 | Multi-head attention is appealing for its ability to jointly extract different types of information from multiple representation subspaces. Concerning the information aggregation, a common practice is to use a concatenation followed by a linear transformation, which may not fully exploit the expressiveness of multi-head attention. In this work, we propose to improve the information aggregation for multi-head attention with a more powerful routing-by-agreement algorithm. Specifically, the routing algorithm iteratively updates the proportion of how much a part (i.e. the distinct information learned from a specific subspace) should be assigned to a whole (i.e. the final output representation), based on the agreement between parts and wholes. Experimental results on linguistic probing tasks and machine translation tasks prove the superiority of the advanced information aggregation over the standard linear transformation. | Multi-head attention has shown promising empirical results in many NLP tasks, such as machine translation @cite_7 @cite_13 , semantic role labeling @cite_2 , and subject-verb agreement task @cite_3 . The strength of multi-head attention lies in the rich expressiveness by using multiple attention functions in different representation subspaces. | {
"cite_N": [
"@cite_3",
"@cite_13",
"@cite_7",
"@cite_2"
],
"mid": [
"2950399211",
"2798761464",
"2626778328",
"2798638375"
],
"abstract": [
"Recently, non-recurrent architectures (convolutional, self-attentional) have outperformed RNNs in neural machine translation. CNNs and self-attentional networks can connect distant words via shorter network paths than RNNs, and it has been speculated that this improves their ability to model long-range dependencies. However, this theoretical argument has not been tested empirically, nor have alternative explanations for their strong performance been explored in-depth. We hypothesize that the strong performance of CNNs and self-attentional networks could also be due to their ability to extract semantic features from the source text, and we evaluate RNNs, CNNs and self-attention networks on two tasks: subject-verb agreement (where capturing long-range dependencies is required) and word sense disambiguation (where semantic feature extraction is required). Our experimental results show that: 1) self-attentional networks and CNNs do not outperform RNNs in modeling subject-verb agreement over long distances; 2) self-attentional networks perform distinctly better than RNNs and CNNs on word sense disambiguation.",
"",
"The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.",
"The current state-of-the-art end-to-end semantic role labeling (SRL) model is a deep neural network architecture with no explicit linguistic features. However, prior work has shown that gold syntax trees can dramatically improve SRL, suggesting that neural network models could see great improvements from explicit modeling of syntax. In this work, we present linguistically-informed self-attention (LISA): a new neural network model that combines multi-head self-attention with multi-task learning across dependency parsing, part-of-speech, predicate detection and SRL. For example, syntax is incorporated by training one of the attention heads to attend to syntactic parents for each token. Our model can predict all of the above tasks, but it is also trained such that if a high-quality syntactic parse is already available, it can be beneficially injected at test time without re-training our SRL model. In experiments on the CoNLL-2005 SRL dataset LISA achieves an increase of 2.5 F1 absolute over the previous state-of-the-art on newswire with predicted predicates and more than 2.0 F1 on out-of-domain data. On ConLL-2012 English SRL we also show an improvement of more than 3.0 F1, a 13 reduction in error."
]
} |
1904.03100 | 2933442455 | Multi-head attention is appealing for its ability to jointly extract different types of information from multiple representation subspaces. Concerning the information aggregation, a common practice is to use a concatenation followed by a linear transformation, which may not fully exploit the expressiveness of multi-head attention. In this work, we propose to improve the information aggregation for multi-head attention with a more powerful routing-by-agreement algorithm. Specifically, the routing algorithm iteratively updates the proportion of how much a part (i.e. the distinct information learned from a specific subspace) should be assigned to a whole (i.e. the final output representation), based on the agreement between parts and wholes. Experimental results on linguistic probing tasks and machine translation tasks prove the superiority of the advanced information aggregation over the standard linear transformation. | Recently, the routing-by-agreement algorithm, which originates from capsule networks @cite_6 , has become an appealing alternative to representation composition. The majority of existing work on capsule networks has focused on computer vision tasks, such as MNIST tasks @cite_25 @cite_28 , CIFAR tasks @cite_10 , and object segmentation task @cite_21 . The applications of capsule networks in NLP tasks, however, have not been widely investigated to date. testify capsule networks on text classification tasks and propose to aggregate a sequence of vectors via dynamic routing for sequence encoding. use routing-by-agreement strategies to aggregate layer representations dynamically. Inspired by these successes, we apply the routing algorithms to multi-head attention on both linguistic probing and machine translation tasks, which demonstrates the necessity and effectiveness of advanced information aggregation for multi-head attention. | {
"cite_N": [
"@cite_28",
"@cite_21",
"@cite_6",
"@cite_10",
"@cite_25"
],
"mid": [
"2785994986",
"2797472209",
"",
"2775143585",
"2751777443"
],
"abstract": [
"A capsule is a group of neurons whose outputs represent different properties of the same entity. Each layer in a capsule network contains many capsules [a group of capsules forms a capsule layer and can be used in place of a traditional layer in a neural net]. We describe a version of capsules in which each capsule has a logistic unit to represent the presence of an entity and a 4x4 matrix which could learn to represent the relationship between that entity and the viewer (the pose). A capsule in one layer votes for the pose matrix of many different capsules in the layer above by multiplying its own pose matrix by trainable viewpoint-invariant transformation matrices that could learn to represent part-whole relationships. Each of these votes is weighted by an assignment coefficient. These coefficients are iteratively updated for each image using the Expectation-Maximization algorithm such that the output of each capsule is routed to a capsule in the layer above that receives a cluster of similar votes. The transformation matrices are trained discriminatively by backpropagating through the unrolled iterations of EM between each pair of adjacent capsule layers. On the smallNORB benchmark, capsules reduce the number of test errors by 45 compared to the state-of-the-art. Capsules also show far more resistance to white box adversarial attack than our baseline convolutional neural network.",
"Convolutional neural networks (CNNs) have shown remarkable results over the last several years for a wide range of computer vision tasks. A new architecture recently introduced by , referred to as a capsule networks with dynamic routing, has shown great initial results for digit recognition and small image classification. The success of capsule networks lies in their ability to preserve more information about the input by replacing max-pooling layers with convolutional strides and dynamic routing, allowing for preservation of part-whole relationships in the data. This preservation of the input is demonstrated by reconstructing the input from the output capsule vectors. Our work expands the use of capsule networks to the task of object segmentation for the first time in the literature. We extend the idea of convolutional capsules with locally-connected routing and propose the concept of deconvolutional capsules. Further, we extend the masked reconstruction to reconstruct the positive input class. The proposed convolutional-deconvolutional capsule network, called SegCaps, shows strong results for the task of object segmentation with substantial decrease in parameter space. As an example application, we applied the proposed SegCaps to segment pathological lungs from low dose CT scans and compared its accuracy and efficiency with other U-Net-based architectures. SegCaps is able to handle large image sizes (512 x 512) as opposed to baseline capsules (typically less than 32 x 32). The proposed SegCaps reduced the number of parameters of U-Net architecture by 95.4 while still providing a better segmentation accuracy.",
"",
"In recent years, convolutional neural networks (CNN) have played an important role in the field of deep learning. Variants of CNN's have proven to be very successful in classification tasks across different domains. However, there are two big drawbacks to CNN's: their failure to take into account of important spatial hierarchies between features, and their lack of rotational invariance. As long as certain key features of an object are present in the test data, CNN's classify the test data as the object, disregarding features' relative spatial orientation to each other. This causes false positives. The lack of rotational invariance in CNN's would cause the network to incorrectly assign the object another label, causing false negatives. To address this concern, propose a novel type of neural network using the concept of capsules in a recent paper. With the use of dynamic routing and reconstruction regularization, the capsule network model would be both rotation invariant and spatially aware. The capsule network has shown its potential by achieving a state-of-the-art result of 0.25 test error on MNIST without data augmentation such as rotation and scaling, better than the previous baseline of 0.39 . To further test out the application of capsule networks on data with higher dimensionality, we attempt to find the best set of configurations that yield the optimal test error on CIFAR10 dataset.",
"A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or an object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule becomes active. We show that a discrimininatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule."
]
} |
1904.02655 | 2931836675 | Industrial process control systems try to keep an output variable within a given tolerance around a target value. PID control systems have been widely used in industry to control input variables in order to reach this goal. However, this kind of Transfer Function based approach cannot be extended to complex processes where input data might be non-numeric, high dimensional, sparse, etc. In such cases, there is still a need for determining the subspace of input data that produces an output within a given range. This paper presents a non-stochastic heuristic to determine input values for a mathematical function or trained regression model given an output range. The proposed method creates a synthetic training data set of input combinations with a class label that indicates whether the output is within the given target range or not. Then, a decision tree classifier is used to determine the subspace of input data of interest. This method is more general than a traditional controller as the target range for the output does not have to be centered around a reference value and it can be applied given a regression model of the output variable, which may have categorical variables as inputs and may be high dimensional, sparse... The proposed heuristic is validated with a proof of concept on a real use case where the quality of a lamination factory is established to identify the suitable subspace of production variable values. | The patent by @cite_5 reports on a method for creating dynamic models of etch processes in semiconductor manufacturing. As stated in the patent . The method in this patent uses . This search to find input values is trial-and-error. This paper presents a method that automatises that process. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2255540041"
],
"abstract": [
"A method and system are disclosed for creating dynamic models of etch processes in semiconductor manufacturing. In one embodiment, a method comprises modeling an etch process used in semiconductor manufacturing to generate a dynamic process model. The dynamic process model is used to determine input values that result in a desired output value. A process recipe is optimized for the etch process with the input values."
]
} |
1904.02683 | 2927558373 | In this paper, we introduce a method to automatically reconstruct the 3D motion of a person interacting with an object from a single RGB video. Our method estimates the 3D poses of the person and the object, contact positions, and forces and torques actuated by the human limbs. The main contributions of this work are three-fold. First, we introduce an approach to jointly estimate the motion and the actuation forces of the person on the manipulated object by modeling contacts and the dynamics of their interactions. This is cast as a large-scale trajectory optimization problem. Second, we develop a method to automatically recognize from the input video the position and timing of contacts between the person and the object or the ground, thereby significantly simplifying the complexity of the optimization. Third, we validate our approach on a recent MoCap dataset with ground truth contact forces and demonstrate its performance on a new dataset of Internet videos showing people manipulating a variety of tools in unconstrained environments. | 3D human pose estimation aims to recover the 3D joint configuration of the person from the input image. Recent human 3D pose estimators either attempt to build a direct mapping from image pixels to the 3D joints of the human body or break down the task into two stages: estimating pixel coordinates of the joints in the input image and then lifting the 2D skeleton to 3D. Existing direct approaches either rely on generative models to search the state space for a plausible 3D skeleton that aligns with the image evidence @cite_39 @cite_49 @cite_32 or, more recently, extract deep features from images and learn a discriminative regressor from the 2D image to the 3D pose @cite_38 @cite_63 @cite_26 @cite_11 . Building on the recent progress in 2D human pose estimation @cite_54 @cite_2 @cite_52 @cite_33 , two-stage methods have been shown to be very effective @cite_31 @cite_53 @cite_42 @cite_15 and achieve state-of-the-art results @cite_69 on 3D human pose benchmarks @cite_36 . To deal with depth ambiguities, these estimators rely on good pose priors, which are either hand-crafted or learnt from large-scale MoCap data @cite_53 @cite_42 @cite_38 . However, unlike our work, these methods do not consider explicit models for 3D person-object interactions with contacts. | {
"cite_N": [
"@cite_38",
"@cite_69",
"@cite_26",
"@cite_33",
"@cite_15",
"@cite_36",
"@cite_54",
"@cite_53",
"@cite_2",
"@cite_32",
"@cite_52",
"@cite_39",
"@cite_42",
"@cite_63",
"@cite_49",
"@cite_31",
"@cite_11"
],
"mid": [
"2778680124",
"2612706635",
"2554247908",
"2559085405",
"2583372902",
"2101032778",
"2307770531",
"2963688992",
"2555751471",
"1508437923",
"",
"1643263348",
"2483862638",
"2557698284",
"1954310228",
"1943191679",
"2963592930"
],
"abstract": [
"We describe Human Mesh Recovery (HMR), an end-to-end framework for reconstructing a full 3D mesh of a human body from a single RGB image. In contrast to most current methods that compute 2D or 3D joint locations, we produce a richer and more useful mesh representation that is parameterized by shape and 3D joint angles. The main objective is to minimize the reprojection loss of keypoints, which allow our model to be trained using in-the-wild images that only have ground truth 2D annotations. However, reprojection loss alone is highly under constrained. In this work we address this problem by introducing an adversary trained to tell whether a human body parameter is real or not using a large database of 3D human meshes. We show that HMR can be trained with and without using any coupled 2D-to-3D supervision. We do not rely on intermediate 2D keypoint detection and infer 3D pose and shape parameters directly from image pixels. Our model runs in real-time given a bounding box containing the person. We demonstrate our approach on various images in-the-wild and out-perform previous optimizationbased methods that output 3D meshes and show competitive results on tasks such as 3D joint location estimation and part segmentation.",
"Following the success of deep convolutional networks, state-of-the-art methods for 3d human pose estimation have focused on deep end-to-end systems that predict 3d joint locations given raw image pixels. Despite their excellent performance, it is often not easy to understand whether their remaining error stems from a limited 2d pose (visual) understanding, or from a failure to map 2d poses into 3- dimensional positions.,,With the goal of understanding these sources of error, we set out to build a system that given 2d joint locations predicts 3d positions. Much to our surprise, we have found that, with current technology, \"lifting\" ground truth 2d joint locations to 3d space is a task that can be solved with a remarkably low error rate: a relatively simple deep feedforward network outperforms the best reported result by about 30 on Human3.6M, the largest publicly available 3d pose estimation benchmark. Furthermore, training our system on the output of an off-the-shelf state-of-the-art 2d detector (i.e., using images as input) yields state of the art results – this includes an array of systems that have been trained end-to-end specifically for this task. Our results indicate that a large portion of the error of modern deep 3d pose estimation systems stems from their visual analysis, and suggests directions to further advance the state of the art in 3d human pose estimation.",
"This paper addresses the challenge of 3D human pose estimation from a single color image. Despite the general success of the end-to-end learning paradigm, top performing approaches employ a two-step solution consisting of a Convolutional Network (ConvNet) for 2D joint localization and a subsequent optimization step to recover 3D pose. In this paper, we identify the representation of 3D pose as a critical issue with current ConvNet approaches and make two important contributions towards validating the value of end-to-end learning for this task. First, we propose a fine discretization of the 3D space around the subject and train a ConvNet to predict per voxel likelihoods for each joint. This creates a natural representation for 3D pose and greatly improves performance over the direct regression of joint coordinates. Second, to further improve upon initial estimates, we employ a coarse-to-fine prediction scheme. This step addresses the large dimensionality increase and enables iterative refinement and repeated processing of the image features. The proposed approach outperforms all state-of-the-art methods on standard benchmarks achieving a relative error reduction greater than 30 on average. Additionally, we investigate using our volumetric representation in a related architecture which is suboptimal compared to our end-to-end approach, but is of practical interest, since it enables training when no image with corresponding 3D groundtruth is available, and allows us to present compelling results for in-the-wild images.",
"We present an approach to efficiently detect the 2D pose of multiple people in an image. The approach uses a nonparametric representation, which we refer to as Part Affinity Fields (PAFs), to learn to associate body parts with individuals in the image. The architecture encodes global context, allowing a greedy bottom-up parsing step that maintains high accuracy while achieving realtime performance, irrespective of the number of people in the image. The architecture is designed to jointly learn part locations and their association via two branches of the same sequential prediction process. Our method placed first in the inaugural COCO 2016 keypoints challenge, and significantly exceeds the previous state-of-the-art result on the MPII Multi-Person benchmark, both in performance and efficiency.",
"We explore 3D human pose estimation from a single RGB image. While many approaches try to directly predict 3D pose from image measurements, we explore a simple architecture that reasons through intermediate 2D pose predictions. Our approach is based on two key observations (1) Deep neural nets have revolutionized 2D pose estimation, producing accurate 2D predictions even for poses with self-occlusions (2) Big-datasets of 3D mocap data are now readily available, making it tempting to lift predicted 2D poses to 3D through simple memorization (e.g., nearest neighbors). The resulting architecture is straightforward to implement with off-the-shelf 2D pose estimation systems and 3D mocap libraries. Importantly, we demonstratethatsuchmethodsoutperformalmostallstate-of-theart 3D pose estimation systems, most of which directly try to regress 3D pose from 2D measurements.",
"We introduce a new dataset, Human3.6M, of 3.6 Million accurate 3D Human poses, acquired by recording the performance of 5 female and 6 male subjects, under 4 different viewpoints, for training realistic human sensing systems and for evaluating the next generation of human pose estimation models and algorithms. Besides increasing the size of the datasets in the current state-of-the-art by several orders of magnitude, we also aim to complement such datasets with a diverse set of motions and poses encountered as part of typical human activities (taking photos, talking on the phone, posing, greeting, eating, etc.), with additional synchronized image, human motion capture, and time of flight (depth) data, and with accurate 3D body scans of all the subject actors involved. We also provide controlled mixed reality evaluation scenarios where 3D human models are animated using motion capture and inserted using correct 3D geometry, in complex real environments, viewed with moving cameras, and under occlusion. Finally, we provide a set of large-scale statistical models and detailed evaluation baselines for the dataset illustrating its diversity and the scope for improvement by future work in the research community. Our experiments show that our best large-scale model can leverage our full training set to obtain a 20 improvement in performance compared to a training set of the scale of the largest existing public dataset for this problem. Yet the potential for improvement by leveraging higher capacity, more complex models with our large dataset, is substantially vaster and should stimulate future research. The dataset together with code for the associated large-scale learning models, features, visualization tools, as well as the evaluation server, is available online at http: vision.imar.ro human3.6m .",
"This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a “stacked hourglass” network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.",
"This paper addresses the challenge of 3D full-body human pose estimation from a monocular image sequence. Here, two cases are considered: (i) the image locations of the human joints are provided and (ii) the image locations of joints are unknown. In the former case, a novel approach is introduced that integrates a sparsity-driven 3D geometric prior and temporal smoothness. In the latter case, the former case is extended by treating the image locations of the joints as latent variables to take into account considerable uncertainties in 2D joint locations. A deep fully convolutional network is trained to predict the uncertainty maps of the 2D joint locations. The 3D pose estimates are realized via an Expectation-Maximization algorithm over the entire sequence, where it is shown that the 2D joint location uncertainties can be conveniently marginalized out during inference. Empirical evaluation on the Human3.6M dataset shows that the proposed approaches achieve greater 3D pose estimation accuracy over state-of-the-art baselines. Further, the proposed approach outperforms a publicly available 2D pose estimation baseline on the challenging PennAction dataset.",
"We introduce associative embedding, a novel method for supervising convolutional neural networks for the task of detection and grouping. A number of computer vision problems can be framed in this manner including multi-person pose estimation, instance segmentation, and multi-object tracking. Usually the grouping of detections is achieved with multi-stage pipelines, instead we propose an approach that teaches a network to simultaneously output detections and group assignments. This technique can be easily integrated into any state-of-the-art network architecture that produces pixel-wise predictions. We show how to apply this method to multi-person pose estimation and report state-of-the-art performance on the MPII and MS-COCO datasets.",
"Local optimization and filtering have been widely applied to model-based 3D human motion capture. Global stochastic optimization has recently been proposed as promising alternative solution for tracking and initialization. In order to benefit from optimization and filtering, we introduce a multi-layer framework that combines stochastic optimization, filtering, and local optimization. While the first layer relies on interacting simulated annealing and some weak prior information on physical constraints, the second layer refines the estimates by filtering and local optimization such that the accuracy is increased and ambiguities are resolved over time without imposing restrictions on the dynamics. In our experimental evaluation, we demonstrate the significant improvements of the multi-layer framework and provide quantitative 3D pose tracking results for the complete HumanEva-II dataset. The paper further comprises a comparison of global stochastic optimization with particle filtering, annealed particle filtering, and local optimization.",
"",
"A probabilistic method for tracking 3D articulated human figures in monocular image sequences is presented. Within a Bayesian framework, we define a generative model of image appearance, a robust likelihood function based on image graylevel differences, and a prior probability distribution over pose and joint angles that models how humans move. The posterior probability distribution over model parameters is represented using a discrete set of samples and is propagated over time using particle filtering. The approach extends previous work on parameterized optical flow estimation to exploit a complex 3D articulated motion model. It also extends previous work on human motion tracking by including a perspective camera model, by modeling limb self occlusion, and by recovering 3D motion from a monocular sequence. The explicit posterior probability distribution represents ambiguities due to image matching, model singularities, and perspective projection. The method relies only on a frame-to-frame assumption of brightness constancy and hence is able to track people under changing viewpoints, in grayscale image sequences, and with complex unknown backgrounds.",
"We describe the first method to automatically estimate the 3D pose of the human body as well as its 3D shape from a single unconstrained image. We estimate a full 3D mesh and show that 2D joints alone carry a surprising amount of information about body shape. The problem is challenging because of the complexity of the human body, articulation, occlusion, clothing, lighting, and the inherent ambiguity in inferring 3D from 2D. To solve this, we first use a recently published CNN-based method, DeepCut, to predict (bottom-up) the 2D body joint locations. We then fit (top-down) a recently published statistical body shape model, called SMPL, to the 2D joints. We do so by minimizing an objective function that penalizes the error between the projected 3D model joints and detected 2D joints. Because SMPL captures correlations in human shape across the population, we are able to robustly fit it to very little data. We further leverage the 3D model to prevent solutions that cause interpenetration. We evaluate our method, SMPLify, on the Leeds Sports, HumanEva, and Human3.6M datasets, showing superior pose accuracy with respect to the state of the art.",
"This paper addresses the problem of 3D human pose estimation from a single image. We follow a standard two-step pipeline by first detecting the 2D position of the N body joints, and then using these observations to infer 3D pose. For the first step, we use a recent CNN-based detector. For the second step, most existing approaches perform 2N-to-3N regression of the Cartesian joint coordinates. We show that more precise pose estimates can be obtained by representing both the 2D and 3D human poses using NxN distance matrices, and formulating the problem as a 2D-to-3D distance matrix regression. For learning such a regressor we leverage on simple Neural Network architectures, which by construction, enforce positivity and symmetry of the predicted matrices. The approach has also the advantage to naturally handle missing observations and allowing to hypothesize the position of non-observed joints. Quantitative results on Humaneva and Human3.6M datasets demonstrate consistent performance gains over state-of-the-art. Qualitative evaluation on the images in-the-wild of the LSP dataset, using the regressor learned on Human3.6M, reveals very promising generalization results.",
"In this paper, we address the problem of 3D articulated multi-person tracking in busy street scenes from a moving, human-level observer. In order to handle the complexity of multi-person interactions, we propose to pursue a two-stage strategy. A multi-body detection-based tracker first analyzes the scene and recovers individual pedestrian trajectories, bridging sensor gaps and resolving temporary occlusions. A specialized articulated tracker is then applied to each recovered pedestrian trajectory in parallel to estimate the tracked person's precise body pose over time. This articulated tracker is implemented in a Gaussian Process framework and operates on global pedestrian silhouettes using a learned statistical representation of human body dynamics. We interface the two tracking levels through a guided segmentation stage, which combines traditional bottom-up cues with top-down information from a human detector and the articulated tracker's shape prediction. We show the proposed approach's viability and demonstrate its performance for articulated multi-person tracking on several challenging video sequences of a busy inner-city scenario.",
"Estimating 3D human pose from 2D joint locations is central to the analysis of people in images and video. To address the fact that the problem is inherently ill posed, many methods impose a prior over human poses. Unfortunately these priors admit invalid poses because they do not model how joint-limits vary with pose. Here we make two key contributions. First, we collect a motion capture dataset that explores a wide range of human poses. From this we learn a pose-dependent model of joint limits that forms our prior. Both dataset and prior are available for research purposes. Second, we define a general parametrization of body pose and a new, multi-stage, method to estimate 3D pose from 2D joint locations using an over-complete dictionary of poses. Our method shows good generalization while avoiding impossible poses. We quantitatively compare our method with recent work and show state-of-the-art results on 2D to 3D pose estimation using the CMU mocap dataset. We also show superior results using manual annotations on real images and automatic detections on the Leeds sports pose dataset.",
"We propose an efficient approach to exploiting motion information from consecutive frames of a video sequence to recover the 3D pose of people. Previous approaches typically compute candidate poses in individual frames and then link them in a post-processing step to resolve ambiguities. By contrast, we directly regress from a spatio-temporal volume of bounding boxes to a 3D pose in the central frame. We further show that, for this approach to achieve its full potential, it is essential to compensate for the motion in consecutive frames so that the subject remains centered. This then allows us to effectively overcome ambiguities and improve upon the state-of-the-art by a large margin on the Human3.6m, HumanEva, and KTH Multiview Football 3D human pose estimation benchmarks."
]
} |
1904.02683 | 2927558373 | In this paper, we introduce a method to automatically reconstruct the 3D motion of a person interacting with an object from a single RGB video. Our method estimates the 3D poses of the person and the object, contact positions, and forces and torques actuated by the human limbs. The main contributions of this work are three-fold. First, we introduce an approach to jointly estimate the motion and the actuation forces of the person on the manipulated object by modeling contacts and the dynamics of their interactions. This is cast as a large-scale trajectory optimization problem. Second, we develop a method to automatically recognize from the input video the position and timing of contacts between the person and the object or the ground, thereby significantly simplifying the complexity of the optimization. Third, we validate our approach on a recent MoCap dataset with ground truth contact forces and demonstrate its performance on a new dataset of Internet videos showing people manipulating a variety of tools in unconstrained environments. | involves both recognition of actions and modeling of interactions. In action recognition, most existing approaches that model human-object interactions do not consider 3D, instead model interactions and contacts in the 2D image space @cite_40 @cite_55 @cite_0 @cite_64 . Recent work in scene understanding @cite_23 @cite_24 consider interactions in 3D but have focused on static scene elements rather than manipulated objects as we do in this work. Tracking 3D poses of people interacting with the environment has been demonstrated for bipedal walking @cite_50 @cite_51 or in sports scenarios @cite_60 . However, these works do not consider interactions with objects. Furthermore, @cite_60 requires manual annotation of the input video. | {
"cite_N": [
"@cite_64",
"@cite_60",
"@cite_55",
"@cite_0",
"@cite_24",
"@cite_40",
"@cite_23",
"@cite_50",
"@cite_51"
],
"mid": [
"1989560997",
"2004987096",
"2158234032",
"1976546217",
"2030358157",
"2169393274",
"2115815548",
"",
"2535579907"
],
"abstract": [
"We introduce an approach for learning human actions as interactions between persons and objects in realistic videos. Previous work typically represents actions with low-level features such as image gradients or optical flow. In contrast, we explicitly localize in space and track over time both the object and the person, and represent an action as the trajectory of the object w.r.t. to the person position. Our approach relies on state-of-the-art techniques for human detection [32], object detection [10], and tracking [39]. We show that this results in human and object tracks of sufficient quality to model and localize human-object interactions in realistic videos. Our human-object interaction features capture the relative trajectory of the object w.r.t. the human. Experimental results on the Coffee and Cigarettes dataset [25], the video dataset of [19], and the Rochester Daily Activities dataset [29] show that 1) our explicit human-object model is an informative cue for action recognition; 2) it is complementary to traditional low-level descriptors such as 3D--HOG [23] extracted over human tracks. We show that combining our human-object interaction features with 3D-HOG improves compared to their individual performance as well as over the state of the art [23], [29].",
"This paper presents a video-based motion modeling technique for capturing physically realistic human motion from monocular video sequences. We formulate the video-based motion modeling process in an image-based keyframe animation framework. The system first computes camera parameters, human skelet al size, and a small number of 3D key poses from video and then uses 2D image measurements at intermediate frames to automatically calculate the \"in between\" poses. During reconstruction, we leverage Newtonian physics, contact constraints, and 2D image measurements to simultaneously reconstruct full-body poses, joint torques, and contact forces. We have demonstrated the power and effectiveness of our system by generating a wide variety of physically realistic human actions from uncalibrated monocular video sequences such as sports video footage.",
"We investigate a discriminatively trained model of person-object interactions for recognizing common human actions in still images. We build on the locally order-less spatial pyramid bag-of-features model, which was shown to perform extremely well on a range of object, scene and human action recognition tasks. We introduce three principal contributions. First, we replace the standard quantized local HOG SIFT features with stronger discriminatively trained body part and object detectors. Second, we introduce new person-object interaction features based on spatial co-occurrences of individual body parts and objects. Third, we address the combinatorial problem of a large number of possible interaction pairs and propose a discriminative selection procedure using a linear support vector machine (SVM) with a sparsity inducing regularizer. Learning of action-specific body part and object interactions bypasses the difficult problem of estimating the complete human body pose configuration. Benefits of the proposed model are shown on human action recognition in consumer photographs, outperforming the strong bag-of-features baseline.",
"Detecting objects in cluttered scenes and estimating articulated human body parts from 2D images are two challenging problems in computer vision. The difficulty is particularly pronounced in activities involving human-object interactions (e.g., playing tennis), where the relevant objects tend to be small or only partially visible and the human body parts are often self-occluded. We observe, however, that objects and human poses can serve as mutual context to each other-recognizing one facilitates the recognition of the other. In this paper, we propose a mutual context model to jointly model objects and human poses in human-object interaction activities. In our approach, object detection provides a strong prior for better human pose estimation, while human pose estimation improves the accuracy of detecting the objects that interact with the human. On a six-class sports data set and a 24-class people interacting with musical instruments data set, we show that our mutual context model outperforms state of the art in detecting very difficult objects and estimating human poses, as well as classifying human-object interaction activities.",
"We present an approach which exploits the coupling between human actions and scene geometry to use human pose as a cue for single-view 3D scene understanding. Our method builds upon recent advances in still-image pose estimation to extract functional and geometric constraints on the scene. These constraints are then used to improve single-view 3D scene understanding approaches. The proposed method is validated on monocular time-lapse sequences from YouTube and still images of indoor scenes gathered from the Internet. We demonstrate that observing people performing different actions can significantly improve estimates of 3D scene geometry.",
"Interpretation of images and videos containing humans interacting with different objects is a daunting task. It involves understanding scene or event, analyzing human movements, recognizing manipulable objects, and observing the effect of the human movement on those objects. While each of these perceptual tasks can be conducted independently, recognition rate improves when interactions between them are considered. Motivated by psychological studies of human perception, we present a Bayesian approach which integrates various perceptual tasks involved in understanding human-object interactions. Previous approaches to object and action recognition rely on static shape or appearance feature matching and motion analysis, respectively. Our approach goes beyond these traditional approaches and applies spatial and functional constraints on each of the perceptual elements for coherent semantic interpretation. Such constraints allow us to recognize objects and actions when the appearances are not discriminative enough. We also demonstrate the use of such constraints in recognition of actions from static images without using any motion information.",
"For scene understanding, one popular approach has been to model the object-object relationships. In this paper, we hypothesize that such relationships are only an artifact of certain hidden factors, such as humans. For example, the objects, monitor and keyboard, are strongly spatially correlated only because a human types on the keyboard while watching the monitor. Our goal is to learn this hidden human context (i.e., the human-object relationships), and also use it as a cue for labeling the scenes. We present Infinite Factored Topic Model (IFTM), where we consider a scene as being generated from two types of topics: human configurations and human-object relationships. This enables our algorithm to hallucinate the possible configurations of the humans in the scene parsimoniously. Given only a dataset of scenes containing objects but not humans, we show that our algorithm can recover the human object relationships. We then test our algorithm on the task of attribute and object labeling in 3D scenes and show consistent improvements over the state-of-the-art.",
"",
"Motion and interaction with the environment are fundamentally intertwined. Few people-tracking algorithms exploit such interactions, and those that do assume that surface geometry and dynamics are given. This paper concerns the converse problem, i.e., the inference of contact and environment properties from motion. For 3D human motion, with a 12-segment articulated body model, we show how one can estimate the forces acting on the body in terms of internal forces (joint torques), gravity, and the parameters of a contact model (e.g., the geometry and dynamics of a spring-based model). This is tested on motion capture data and video-based tracking data, with walking, jogging, cartwheels, and jumping."
]
} |
1904.02683 | 2927558373 | In this paper, we introduce a method to automatically reconstruct the 3D motion of a person interacting with an object from a single RGB video. Our method estimates the 3D poses of the person and the object, contact positions, and forces and torques actuated by the human limbs. The main contributions of this work are three-fold. First, we introduce an approach to jointly estimate the motion and the actuation forces of the person on the manipulated object by modeling contacts and the dynamics of their interactions. This is cast as a large-scale trajectory optimization problem. Second, we develop a method to automatically recognize from the input video the position and timing of contacts between the person and the object or the ground, thereby significantly simplifying the complexity of the optimization. Third, we validate our approach on a recent MoCap dataset with ground truth contact forces and demonstrate its performance on a new dataset of Internet videos showing people manipulating a variety of tools in unconstrained environments. | There is also related work on modeling person-object interactions in robotics @cite_13 and computer animation @cite_66 . Similarly to people, humanoid robots interact with the environment by creating and breaking contacts @cite_7 , for example, during walking. Typically, generating artificial motion is formulated as an optimal control problem, transcribed into a high-dimensional numerical optimization problem, seeking to minimize an objective function under contact and feasibility constraints @cite_4 @cite_44 . A known difficulty is handling the non-smoothness of the resulting optimization problem introduced by the creation and breaking of contacts @cite_22 . Due to this difficulty, the sequence of contacts is often computed separately and not treated as a decision variable in the optimizer @cite_65 @cite_27 . Recent work has shown that it may be possible to decide both the continuous movement and the contact sequence together, either by implicitly formulating the contact constraints @cite_5 or by using invariances to smooth the resulting optimization problem @cite_58 @cite_37 . | {
"cite_N": [
"@cite_37",
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_65",
"@cite_44",
"@cite_27",
"@cite_5",
"@cite_58",
"@cite_13",
"@cite_66"
],
"mid": [
"2788030459",
"1582919213",
"2128131727",
"2086853455",
"2114193135",
"2110194613",
"2345626358",
"2101340954",
"2042408133",
"2087617385",
"2032548678"
],
"abstract": [
"We present a single trajectory optimization formulation for legged locomotion that automatically determines the gait sequence, step timings, footholds, swing-leg motions, and six-dimensional body motion over nonflat terrain, without any additional modules. Our phase-based parameterization of feet motion and forces allows to optimize over the discrete gait sequence using only continuous decision variables. The system is represented using a simplified centroidal dynamics model that is influenced by the feet's location and forces. We explicitly enforce friction cone constraints, depending on the shape of the terrain. The nonlinear programming problem solver generates highly dynamic motion plans with full flight phases for a variety of legged systems with arbitrary morphologies in an efficient manner. We validate the feasibility of the generated plans in simulation and on the real quadruped robot ANYmal. Additionally, the entire solver software TOWR , which used to generate these motions is made freely available.",
"In this overview paper, we first survey numerical approaches to solve nonlinear optimal control problems, and second, we present our most recent algorithmic developments for real-time optimization in nonlinear model predictive control. In the survey part, we discuss three direct optimal control approaches in detail: (i) single shooting, (ii) collocation, and (iii) multiple shooting, and we specify why we believe the direct multiple shooting method to be the method of choice for nonlinear optimal control problems in robotics. We couple it with an efficient robot model generator and show the performance of the algorithm at the example of a five link robot arm. In the real-time optimization part, we outline the idea of nonlinear model predictive control and the real-time challenge it poses to numerical optimization. As one solution approach, we discuss the real-time iteration scheme.",
"Planar, underactuated, biped walkers form an important domain of applications for hybrid dynamical systems. This paper presents the design of exponentially stable walking controllers for general planar bipedal systems that have one degree-of-freedom greater than the number of available actuators. The within-step control action creates an attracting invariant set - a two-dimensional zero dynamics submanifold of the full hybrid model $whose restriction dynamics admits a scalar linear time-invariant return map. Exponentially stable periodic orbits of the zero dynamics correspond to exponentially stabilizable orbits of the full model. A convenient parameterization of the hybrid zero dynamics is imposed through the choice of a class of output functions. Parameter optimization is used to tune the hybrid zero dynamics in order to achieve closed-loop, exponentially stable walking with low energy consumption, while meeting natural kinematic and dynamic constraints. The general theory developed in the paper is illustrated on a five link walker, consisting of a torso and two legs with knees.",
"We demonstrate in this paper our motion generation sheme for the generation of stable bipedal walking motions and we expand it to enhance its flexibility and independency. An algorithm for the control of appropriate orientations of the feet and the trunk permits the robot to turn in a natural and safe way. Polygonal constraints on the positions of the computed feet positions serve to improve its reliability. A logic for the succession of the support phases and an algorithm for the automatic control of their orientations bridge the gap to more autonomy and to more practicability.",
"Humanoid robotics hardware and control techniques have advanced rapidly during the last five years. Presently, several companies have announced the commercial availability of various humanoid robot prototypes. In order to improve the autonomy and overall functionality of these robots, reliable sensors, safety mechanisms, and general integrated software tools and techniques are needed. We believe that the development of practical motion planning algorithms and obstacle avoidance software for humanoid robots represents an important enabling technology. This paper gives an overview of some of our recent efforts to develop motion planning methods for humanoid robots for application tasks involving navigation, object grasping and manipulation, footstep placement, and dynamically-stable full-body motions. We show experimental results obtained by implementations running within a simulation environment as well as on actual humanoid robot hardware.",
"Designing and controlling an anthropomorphic mechatronic system that is able to perform a dynamic running motion is a challenging task. One difficulty is that the fundamental principles of natural human running motions are not yet fully understood. The purpose of this paper is to show that mathematical optimization is a helpful tool to gain this insight into fast and complex motions. We present physics-based running motions for complex models of human-like running in three dimensions that have been generated by optimization. Running is modeled as a multiphase periodic motion with discontinuities, based on multibody system models of the locomotor system with actuators and spring-damper elements at each joint. The problem of generating gaits is formulated as offline optimal control problem and solved by an efficient direct multiple shooting method. We present optimization results using energy-related criteria and show that they have a close resemblance to running motions of humans. The results provide information about the internal forces and torques required to produce natural human running, as well as on the resulting kinematics.",
"We present a contact planner for complex legged locomotion tasks: standing up, climbing stairs using a handrail, crossing rubble, and getting out of a car. The need for such a planner was shown at the DARPA Robotics Challenge, where such behaviors could not be demonstrated (except for egress). Current planners suffer from their prohibitive algorithmic complexity because they deploy a tree of robot configurations projected in contact with the environment. We tackle this issue by introducing a reduction property: the reachability condition. This condition defines a geometric approximation of the contact manifold, which is of low dimension, presents a Cartesian topology, and can be efficiently sampled and explored. The hard contact planning problem can then be decomposed into two subproblems: first, we plan a path for the root without considering the whole-body configuration, using a sampling-based algorithm; then, we generate a discrete sequence of whole-body configurations in static equilibrium along this path, using a deterministic contact-selection algorithm. The reduction breaks the algorithm complexity encountered in previous works, resulting in the first interactive implementation of a contact planner (open source). While no contact planner has yet been proposed with theoretical completeness, we empirically show the interest of our framework: in a few seconds, with high success rates, we generate complex contact plans for various scenarios and two robots: HRP-2 and HyQ. These plans are validated in dynamic simulations or on the real HRP-2 robot.",
"Direct methods for trajectory optimization are widely used for planning locally optimal trajectories of robotic systems. Many critical tasks, such as locomotion and manipulation, often involve impacting the ground or objects in the environment. Most state-of-the-art techniques treat the discontinuous dynamics that result from impacts as discrete modes and restrict the search for a complete path to a specified sequence through these modes. Here we present a novel method for trajectory planning of rigid-body systems that contact their environment through inelastic impacts and Coulomb friction. This method eliminates the requirement for a priori mode ordering. Motivated by the formulation of multi-contact dynamics as a Linear Complementarity Problem for forward simulation, the proposed algorithm poses the optimization problem as a Mathematical Program with Complementarity Constraints. We leverage Sequential Quadratic Programming to naturally resolve contact constraint forces while simultaneously optimizing a trajectory that satisfies the complementarity constraints. The method scales well to high-dimensional systems with large numbers of possible modes. We demonstrate the approach on four increasingly complex systems: rotating a pinned object with a finger, simple grasping and manipulation, planar walking with the Spring Flamingo robot, and high-speed bipedal running on the FastRunner platform.",
"We present a motion synthesis framework capable of producing a wide variety of important human behaviors that have rarely been studied, including getting up from the ground, crawling, climbing, moving heavy objects, acrobatics (hand-stands in particular), and various cooperative actions involving two characters and their manipulation of the environment. Our framework is not specific to humans, but applies to characters of arbitrary morphology and limb configuration. The approach is fully automatic and does not require domain knowledge specific to each behavior. It also does not require pre-existing examples or motion capture data. At the core of our framework is the contact-invariant optimization (CIO) method we introduce here. It enables simultaneous optimization of contact and behavior. This is done by augmenting the search space with scalar variables that indicate whether a potential contact should be active in a given phase of the movement. These auxiliary variables affect not only the cost function but also the dynamics (by enabling and disabling contact forces), and are optimized together with the movement trajectory. Additional innovations include a continuation scheme allowing helper forces at the potential contacts rather than the torso, as well as a feature-based model of physics which is particularly well-suited to the CIO framework. We expect that CIO can also be used with a full physics model, but leave that extension for future work.",
"We present an online trajectory optimization method and software platform applicable to complex humanoid robots performing challenging tasks such as getting up from an arbitrary pose on the ground and recovering from large disturbances using dexterous acrobatic maneuvers. The resulting behaviors, illustrated in the attached video, are computed only 7 × slower than real time, on a standard PC. The video also shows results on the acrobot problem, planar swimming and one-legged hopping. These simpler problems can already be solved in real time, without pre-computing anything.",
"This paper presents a human walking model built from experimental data based on a wide range of normalized velocities. The model is structured on two levels. On the first level, global spatial and temporal characteristics (normalized length and step duration) are generated. On the second level, a set of parameterized trajectories produce both the position of the body in space and the internal body configuration. This is performed for a standard structure and an average configuration of the human body."
]
} |
1904.02683 | 2927558373 | In this paper, we introduce a method to automatically reconstruct the 3D motion of a person interacting with an object from a single RGB video. Our method estimates the 3D poses of the person and the object, contact positions, and forces and torques actuated by the human limbs. The main contributions of this work are three-fold. First, we introduce an approach to jointly estimate the motion and the actuation forces of the person on the manipulated object by modeling contacts and the dynamics of their interactions. This is cast as a large-scale trajectory optimization problem. Second, we develop a method to automatically recognize from the input video the position and timing of contacts between the person and the object or the ground, thereby significantly simplifying the complexity of the optimization. Third, we validate our approach on a recent MoCap dataset with ground truth contact forces and demonstrate its performance on a new dataset of Internet videos showing people manipulating a variety of tools in unconstrained environments. | Methods often require depth or RGB-D data as input @cite_29 @cite_21 @cite_25 , which is restrictive since depth information is not always available (e.g. for outdoor scenes or specular objects), as is the case for our instructional videos. Recent work has also attempted to recover object pose from RGB input only @cite_16 @cite_28 @cite_12 @cite_30 @cite_8 @cite_68 @cite_17 . However, we found that the performance of these methods is limited for the stick-like objects we consider in this work. Instead, we recover the 3D pose of the object via localizing and segmenting the object in 2D, and then jointly recovering the 3D trajectory of both the human limbs and the object. As a result, both the object and the human pose help each other to improve their joint 3D trajectory by leveraging the contact constraints. | {
"cite_N": [
"@cite_30",
"@cite_8",
"@cite_28",
"@cite_29",
"@cite_21",
"@cite_68",
"@cite_16",
"@cite_25",
"@cite_12",
"@cite_17"
],
"mid": [
"2962783853",
"2797394534",
"2604236302",
"1022526533",
"2217081155",
"2962748819",
"2472269674",
"2520352517",
"2767032778",
"2962956488"
],
"abstract": [
"Estimating the 6D pose of objects from images is an important problem in various applications such as robot manipulation and virtual reality. While direct regression of images to object poses has limited accuracy, matching rendered images of an object against the input image can produce accurate results. In this work, we propose a novel deep neural network for 6D pose matching named DeepIM. Given an initial pose estimation, our network is able to iteratively refine the pose by matching the rendered image against the observed image. The network is trained to predict a relative pose transformation using an untangled representation of 3D location and 3D orientation and an iterative training process. Experiments on two commonly used benchmarks for 6D pose estimation demonstrate that DeepIM achieves large improvements over state-of-the-art methods. We furthermore show that DeepIM is able to match previously unseen objects.",
"We introduce a novel method for robust and accurate 3D object pose estimation from a single color image under large occlusions. Following recent approaches, we first predict the 2D projections of 3D points related to the target object and then compute the 3D pose from these correspondences using a geometric method. Unfortunately, as the results of our experiments show, predicting these 2D projections using a regular CNN or a Convolutional Pose Machine is highly sensitive to partial occlusions, even when these methods are trained with partially occluded examples. Our solution is to predict heatmaps from multiple small patches independently and to accumulate the results to obtain accurate and robust predictions. Training subsequently becomes challenging because patches with similar appearances but different positions on the object correspond to different heatmaps. However, we provide a simple yet effective solution to deal with such ambiguities. We show that our approach outperforms existing methods on two challenging datasets: The Occluded LineMOD dataset and the YCB-Video dataset, both exhibiting cluttered scenes with highly occluded objects.",
"We introduce a novel method for 3D object detection and pose estimation from color images only. We first use segmentation to detect the objects of interest in 2D even in presence of partial occlusions and cluttered background. By contrast with recent patch-based methods, we rely on a “holistic” approach: We apply to the detected objects a Convolutional Neural Network (CNN) trained to predict their 3D poses in the form of 2D projections of the corners of their 3D bounding boxes. This, however, is not sufficient for handling objects from the recent T-LESS dataset: These objects exhibit an axis of rotational symmetry, and the similarity of two images of such an object under two different poses makes training the CNN challenging. We solve this problem by restricting the range of poses used for training, and by introducing a classifier to identify the range of a pose at run-time before estimating it. We also use an optional additional step that refines the predicted poses. We improve the state-of-the-art on the LINEMOD dataset from 73.7 [2] to 89.3 of correctly registered RGB frames. We are also the first to report results on the Occlusion dataset [1 ] using color images only. We obtain 54 of frames passing the Pose 6D criterion on average on several sequences of the T-LESS dataset, compared to the 67 of the state-of-the-art [10] on the same sequences which uses both color and depth. The full approach is also scalable, as a single network can be trained for multiple objects simultaneously.",
"In this paper we propose a novel framework, Latent-Class Hough Forests, for 3D object detection and pose estimation in heavily cluttered and occluded scenes. Firstly, we adapt the state-of-the-art template matching feature, LINEMOD [14], into a scale-invariant patch descriptor and integrate it into a regression forest using a novel template-based split function. In training, rather than explicitly collecting representative negative samples, our method is trained on positive samples only and we treat the class distributions at the leaf nodes as latent variables. During the inference process we iteratively update these distributions, providing accurate estimation of background clutter and foreground occlusions and thus a better detection rate. Furthermore, as a by-product, the latent class distributions can provide accurate occlusion aware segmentation masks, even in the multi-instance scenario. In addition to an existing public dataset, which contains only single-instance sequences with large amounts of clutter, we have collected a new, more challenging, dataset for multiple-instance detection containing heavy 2D and 3D clutter as well as foreground occlusions. We evaluate the Latent-Class Hough Forest on both of these datasets where we outperform state-of-the art methods.",
"6D object detection and pose estimation in the crowd (scenes with multiple object instances, severe foreground occlusions and background distractors), has become an important problem in many rapidly evolving technological areas such as robotics and augmented reality. Single shot-based 6D pose estimators with manually designed features are still unable to tackle the above challenges, motivating the research towards unsupervised feature learning and next-best-view estimation. In this work, we present a complete framework for both single shot-based 6D object pose estimation and next-best-view prediction based on Hough Forests, the state of the art object pose estimator that performs classification and regression jointly. Rather than using manually designed features we a) propose an unsupervised feature learnt from depth-invariant patches using a Sparse Autoencoder and b) offer an extensive evaluation of various state of the art features. Furthermore, taking advantage of the clustering performed in the leaf nodes of Hough Forests, we learn to estimate the reduction of uncertainty in other views, formulating the problem of selecting the next-best-view. To further improve 6D object pose estimation, we propose an improved joint registration and hypotheses verification module as a final refinement step to reject false detections. We provide two additional challenging datasets inspired from realistic scenarios to extensively evaluate the state of the art and our framework. One is related to domestic environments and the other depicts a bin-picking scenario mostly found in industrial settings. We show that our framework significantly outperforms state of the art both on public and on our datasets.",
"We propose a scalable, efficient and accurate approach to retrieve 3D models for objects in the wild. Our contribution is twofold. We first present a 3D pose estimation approach for object categories which significantly outperforms the state-of-the-art on Pascal3D+. Second, we use the estimated pose as a prior to retrieve 3D models which accurately represent the geometry of objects in RGB images. For this purpose, we render depth images from 3D models under our predicted pose and match learned image descriptors of RGB images against those of rendered depth images using a CNN-based multi-view metric learning approach. In this way, we are the first to report quantitative results for 3D model retrieval on Pascal3D+, where our method chooses the same models as human annotators for 50 of the validation images on average. In addition, we show that our method, which was trained purely on Pascal3D+, retrieves rich and accurate 3D models from ShapeNet given RGB images of objects in the wild.",
"In recent years, the task of estimating the 6D pose of object instances and complete scenes, i.e. camera localization, from a single input image has received considerable attention. Consumer RGB-D cameras have made this feasible, even for difficult, texture-less objects and scenes. In this work, we show that a single RGB image is sufficient to achieve visually convincing results. Our key concept is to model and exploit the uncertainty of the system at all stages of the processing pipeline. The uncertainty comes in the form of continuous distributions over 3D object coordinates and discrete distributions over object labels. We give three technical contributions. Firstly, we develop a regularized, auto-context regression framework which iteratively reduces uncertainty in object coordinate and object label predictions. Secondly, we introduce an efficient way to marginalize object coordinate distributions over depth. This is necessary to deal with missing depth information. Thirdly, we utilize the distributions over object labels to detect multiple objects simultaneously with a fixed budget of RANSAC hypotheses. We tested our system for object pose estimation and camera localization on commonly used data sets. We see a major improvement over competing systems.",
"Point Pair Features is a widely used method to detect 3D objects in point clouds, however they are prone to fail in presence of sensor noise and background clutter. We introduce novel sampling and voting schemes that significantly reduces the influence of clutter and sensor noise. Our experiments show that with our improvements, PPFs become competitive against state-of-the-art methods as it outperforms them on several objects from challenging benchmarks, at a low computational cost.",
"Estimating the 6D pose of known objects is important for robots to interact with the real world. The problem is challenging due to the variety of objects as well as the complexity of a scene caused by clutter and occlusions between objects. In this work, we introduce PoseCNN, a new Convolutional Neural Network for 6D object pose estimation. PoseCNN estimates the 3D translation of an object by localizing its center in the image and predicting its distance from the camera. The 3D rotation of the object is estimated by regressing to a quaternion representation. We also introduce a novel loss function that enables PoseCNN to handle symmetric objects. In addition, we contribute a large scale video dataset for 6D object pose estimation named the YCB-Video dataset. Our dataset provides accurate 6D poses of 21 objects from the YCB dataset observed in 92 videos with 133,827 frames. We conduct extensive experiments on our YCB-Video dataset and the OccludedLINEMOD dataset to show that PoseCNN is highly robust to occlusions, can handle symmetric objects, and provide accurate pose estimation using only color images as input. When using depth data to further refine the poses, our approach achieves state-of-the-art results on the challenging OccludedLINEMOD dataset. Our code and dataset are available at this https URL.",
"We propose a simple and efficient method for exploiting synthetic images when training a Deep Network to predict a 3D pose from an image. The ability of using synthetic images for training a Deep Network is extremely valuable as it is easy to create a virtually infinite training set made of such images, while capturing and annotating real images can be very cumbersome. However, synthetic images do not resemble real images exactly, and using them for training can result in suboptimal performance. It was recently shown that for exemplar-based approaches, it is possible to learn a mapping from the exemplar representations of real images to the exemplar representations of synthetic images. In this paper, we show that this approach is more general, and that a network can also be applied after the mapping to infer a 3D pose: At run-time, given a real image of the target object, we first compute the features for the image, map them to the feature space of synthetic images, and finally use the resulting features as input to another network which predicts the 3D pose. Since this network can be trained very effectively by using synthetic images, it performs very well in practice, and inference is faster and more accurate than with an exemplar-based approach. We demonstrate our approach on the LINEMOD dataset for 3D object pose estimation from color images, and the NYU dataset for 3D hand pose estimation from depth maps. We show that it allows us to outperform the state-of-the-art on both datasets."
]
} |
1904.02683 | 2927558373 | In this paper, we introduce a method to automatically reconstruct the 3D motion of a person interacting with an object from a single RGB video. Our method estimates the 3D poses of the person and the object, contact positions, and forces and torques actuated by the human limbs. The main contributions of this work are three-fold. First, we introduce an approach to jointly estimate the motion and the actuation forces of the person on the manipulated object by modeling contacts and the dynamics of their interactions. This is cast as a large-scale trajectory optimization problem. Second, we develop a method to automatically recognize from the input video the position and timing of contacts between the person and the object or the ground, thereby significantly simplifying the complexity of the optimization. Third, we validate our approach on a recent MoCap dataset with ground truth contact forces and demonstrate its performance on a new dataset of Internet videos showing people manipulating a variety of tools in unconstrained environments. | Our work is also related to recent efforts in learning from Internet instructional videos @cite_41 @cite_18 @cite_56 that aim to segment input videos into clips containing consistent actions. In contrast, we focus on extracting a detailed representation of the object manipulation in the form of a 3D person-object trajectory with contacts and underlying manipulation forces. | {
"cite_N": [
"@cite_41",
"@cite_18",
"@cite_56"
],
"mid": [
"1963321737",
"2962795934",
""
],
"abstract": [
"We present a novel method for aligning a sequence of instructions to a video of someone carrying out a task. In particular, we focus on the cooking domain, where the instructions correspond to the recipe. Our technique relies on an HMM to align the recipe steps to the (automatically generated) speech transcript. We then refine this alignment using a state-of-the-art visual food detector, based on a deep convolutional neural network. We show that our technique outperforms simpler techniques based on keyword spotting. It also enables interesting applications, such as automatically illustrating recipes with keyframes, and searching within a video for events of interest.",
"We address the problem of automatically learning the main steps to complete a certain task, such as changing a car tire, from a set of narrated instruction videos. The contributions of this paper are three-fold. First, we develop a new unsupervised learning approach that takes advantage of the complementary nature of the input video and the associated narration. The method solves two clustering problems, one in text and one in video, applied one after each other and linked by joint constraints to obtain a single coherent sequence of steps in both modalities. Second, we collect and annotate a new challenging dataset of real-world instruction videos from the Internet. The dataset contains about 800,000 frames for five different tasks1 that include complex interactions between people and objects, and are captured in a variety of indoor and outdoor settings. Third, we experimentally demonstrate that the proposed method can automatically discover, in an unsupervised manner, the main steps to achieve the task and locate the steps in the input videos.",
""
]
} |
1904.02688 | 2930301633 | Weighted model counting has emerged as a prevalent approach for probabilistic inference. In this paper, we are interested in weighted DNF counting, or briefly, weighted #DNF, which admits a fully polynomial randomized approximation scheme, as shown by Karp and Luby. To this date, the best algorithm for approximating #DNF is due to Karp, Luby and Madras. The drawback of this algorithm is that it runs in quadratic time and hence is not suitable for fast online reasoning. To overcome this, we propose a novel approach that combines approximate model counting with deep learning. We conduct detailed experiments to validate our approach, and show that our model learns and generalizes from #DNF instances with a very high accuracy. | WMC is widely studied in the literature due to its connections to probabilistic inference. WMC is @math -hard @cite_2 , even for very restricted classes of formulas @cite_22 . In fact, Toda proved that the class @math contains the entire polynomial hierarchy @cite_10 . | {
"cite_N": [
"@cite_10",
"@cite_22",
"@cite_2"
],
"mid": [
"2151536080",
"1998076865",
""
],
"abstract": [
"Two complexity classes, PP and (+)P, are compared with PH (the polynomial-time hierarchy). The main results are as follows: (1) every set in PH is reducible in a certain sense to a set in PP, an (2) every set in PH is reducible to a set in (+)P under randomized polynomial-time reducibility with two-sided bounded error probability. It follows from these results that neither PP nor (+)P is a subset of or equivalent to PH unless PH collapses to a finite level. This is strong evidence that both classes are strictly harder than PH. >",
"Several enumeration and reliability problems are shown to be # P-complete, and hence, at least as hard as NP-complete problems. Included are important problems in network reliability analysis, namely, computing the probability that a graph is connected and counting the number of minimum cardinality @math -cuts or directed network cuts. Also shown to be # P-complete are counting vertex covers in a bipartite graph, counting antichains in a partial order, and approximating the probability that a graph is connected and the probability that a pair of vertices is connected.",
""
]
} |
1904.02688 | 2930301633 | Weighted model counting has emerged as a prevalent approach for probabilistic inference. In this paper, we are interested in weighted DNF counting, or briefly, weighted #DNF, which admits a fully polynomial randomized approximation scheme, as shown by Karp and Luby. To this date, the best algorithm for approximating #DNF is due to Karp, Luby and Madras. The drawback of this algorithm is that it runs in quadratic time and hence is not suitable for fast online reasoning. To overcome this, we propose a novel approach that combines approximate model counting with deep learning. We conduct detailed experiments to validate our approach, and show that our model learns and generalizes from #DNF instances with a very high accuracy. | Knowledge compilation @cite_32 @cite_29 approaches, which push computational overhead to an offline compilation phase, enable WMC in linear time @cite_25 , but these approaches have limited scalability due to the inherent complexity of model counting. Approximate model counting methods such as hashing-based techniques @cite_9 @cite_16 @cite_14 provide probabilistic accuracy guarantees, but are not efficient in an online setting. We combine the advantages of both approaches to achieve fast online inference while providing more scalability. This falls in line with observations made recently in @cite_33 . | {
"cite_N": [
"@cite_14",
"@cite_33",
"@cite_9",
"@cite_29",
"@cite_32",
"@cite_16",
"@cite_25"
],
"mid": [
"2767126716",
"2806205134",
"2952143373",
"2038897673",
"1589170163",
"2572061353",
"2105197197"
],
"abstract": [
"Propositional model counting is a fundamental problem in artificial intelligence with a wide variety of applications, such as probabilistic inference, decision making under uncertainty, and probabilistic databases. Consequently, the problem is of theoretical as well as practical interest. When the constraints are expressed as DNF formulas, Monte Carlo-based techniques have been shown to provide a fully polynomial randomized approximation scheme (FPRAS). For CNF constraints, hashing-based approximation techniques have been demonstrated to be highly successful. Furthermore, it was shown that hashing-based techniques also yield an FPRAS for DNF counting without usage of Monte Carlo sampling. Our analysis, however, shows that the proposed hashing-based approach to DNF counting provides poor time complexity compared to the Monte Carlo-based DNF counting techniques. Given the success of hashing-based techniques for CNF constraints, it is natural to ask: Can hashing-based techniques provide an efficient FPRAS for DNF counting? In this paper, we provide a positive answer to this question. To this end, we introduce two novel algorithmic techniques: and , along with a new hash family of . These innovations allow us to design a hashing-based FPRAS for DNF counting of similar complexity (up to polylog factors) as that of prior works. Furthermore, we expect these techniques to have potential applications beyond DNF counting.",
"We introduce collapsed compilation, a novel approximate inference algorithm for discrete probabilistic graphical models. It is a collapsed sampling algorithm that incrementally selects which variable to sample next based on the partial sample obtained so far. This online collapsing, together with knowledge compilation inference on the remaining variables, naturally exploits local structure and context- specific independence in the distribution. These properties are naturally exploited in exact inference, but are difficult to harness for approximate inference. More- over, by having a partially compiled circuit available during sampling, collapsed compilation has access to a highly effective proposal distribution for importance sampling. Our experimental evaluation shows that collapsed compilation performs well on standard benchmarks. In particular, when the amount of exact inference is equally limited, collapsed compilation is competitive with the state of the art, and outperforms it on several benchmarks.",
"Integration is affected by the curse of dimensionality and quickly becomes intractable as the dimensionality of the problem grows. We propose a randomized algorithm that, with high probability, gives a constant-factor approximation of a general discrete integral defined over an exponentially large set. This algorithm relies on solving only a small number of instances of a discrete combinatorial optimization problem subject to randomly generated parity constraints used as a hash function. As an application, we demonstrate that with a small number of MAP queries we can efficiently approximate the partition function of discrete graphical models, which can in turn be used, for instance, for marginal computation or model selection.",
"Computational efficiency is a central concern in the design of knowledge representation systems. In order to obtain efficient systems, it has been suggested that one should limit the form of the statements in the knowledge base or use an incomplete inference mechanism. The former approach is often too restrictive for practical applications, whereas the latter leads to uncertainty about exactly what can and cannot be inferred from the knowledge base. We present a third alternative, in which knowledge given in a general representation language is translated (compiled) into a tractable form—allowing for efficient subsequent query answering. We show how propositional logical theories can be compiled into Horn theories that approximate the original information. The approximations bound the original theory from below and above in terms of logical strength. The procedures are extended to other tractable languages (for example, binary clauses) and to the first-order case. Finally, we demonstrate the generality of our approach by compiling concept descriptions in a general frame-based language into a tractable form.",
"Knowledge compilation is an AI technique for addressing computationally demanding reasoning problems. In this paper we survey recent results in knowledge compilation of propositional knowledge bases. We first define and limit the scope of such a technique, then we survey exact and approximate knowledge compilation methods. We include a discussion of compilation for nondmonotonic knowledge bases.",
"Probabilistic inference via model counting has emerged as a scalable technique with strong formal guarantees, thanks to recent advances in hashing-based approximate counting. State-of-the-art hashing-based counting algorithms use an NP oracle (SAT solver in practice), such that the number of oracle invocations grows linearly in the number of variables n in the input constraint. We present a new approach to hashing-based approximate model counting in which the number of oracle invocations grows logarithmically in n, while still providing strong theoretical guarantees. We use this technique to design an algorithm for #CNF with strongly probably approximately correct (SPAC) guarantees, i.e. PAC guarantee plus expected return value matching the exact model count. Our experiments show that this algorithm outperforms state-of-the-art techniques for approximate counting by 1-2 orders of magnitude in running time. We also show that our algorithm can be easily adapted to give a new fully polynomial randomized approximation scheme (FPRAS) for #DNF.",
"We propose a perspective on knowledge compilation which calls for analyzing different compilation approaches according to two key dimensions: the succinctness of the target compilation language, and the class of queries and transformations that the language supports in polytime. We then provide a knowledge compilation map, which analyzes a large number of existing target compilation languages according to their succinctness and their polytime transformations and queries. We argue that such analysis is necessary for placing new compilation approaches within the context of existing ones. We also go beyond classical, flat target compilation languages based on CNF and DNF, and consider a richer, nested class based on directed acyclic graphs (such as OBDDs), which we show to include a relatively large number of target compilation languages."
]
} |
1904.02688 | 2930301633 | Weighted model counting has emerged as a prevalent approach for probabilistic inference. In this paper, we are interested in weighted DNF counting, or briefly, weighted #DNF, which admits a fully polynomial randomized approximation scheme, as shown by Karp and Luby. To this date, the best algorithm for approximating #DNF is due to Karp, Luby and Madras. The drawback of this algorithm is that it runs in quadratic time and hence is not suitable for fast online reasoning. To overcome this, we propose a novel approach that combines approximate model counting with deep learning. We conduct detailed experiments to validate our approach, and show that our model learns and generalizes from #DNF instances with a very high accuracy. | For DNF formulas, WMC admits the KLM FPRAS @cite_3 . A hashing-based approach is also given in @cite_14 , but it applies only to unweighted formulas, i.e., it is not possible to solve weighted #DNF. Although it is possible to transform WMC to MC, as stated in @cite_8 , this approximation-preserving reduction does not apply to #DNF. Hence, the KLM algorithm @cite_34 remains the state of the art for weighted #DNF despite the recent progress in WMC. KLM, however, is not well-suited for online inference, since it runs in quadratic time in the size of its input formula. | {
"cite_N": [
"@cite_34",
"@cite_14",
"@cite_3",
"@cite_8"
],
"mid": [
"2094500233",
"2767126716",
"2066720893",
"2202286026"
],
"abstract": [
"We develop polynomial time Monte-Carlo algorithms which produce good approximate solutions to enumeration problems for which it is known that the computation of the exact solution is very hard. We start by developing a Monte-Carlo approximation algorithm for the DNF counting problem, which is the problem of counting the number of satisfying truth assignments to a formula in disjunctive normal form. The input to the algorithm is the formula and two parameters e and δ. The algorithm produces an estimate which is between 1 − ϵ and 1 + ϵ times the number of satisfying truth assignments with probability at least 1 − δ. The running time of the algorithm is linear in the length of the formula times 1ϵ2 times ln(1δ). On the other hand, the problem of computing the exact answer for the DNF counting problem is known to be #P-complete, which implies that there is no polynomial time algorithm for the exact solution if P ≠ NP. This paper improves and gives new applications of some of the work previously reported. Variants of an ϵ, δ approximation algorithm for the DNF counting problem have been highly tailored to be especially efficient for the network reliability problems to which they are applied. In this paper the emphasis is on the development and analysis of a much more efficient ϵ, δ approximation algorithm for the DNF counting problem. The running time of the algorithm presented here substantially improves the running time of versions of this algorithm given previously. We give a new application of the algorithm to a problem which is relevant to physical chemistry and statistical physics. The resulting ϵ, δ approximation algorithm is substantially faster than the fastest known deterministic solution for the problem.",
"Propositional model counting is a fundamental problem in artificial intelligence with a wide variety of applications, such as probabilistic inference, decision making under uncertainty, and probabilistic databases. Consequently, the problem is of theoretical as well as practical interest. When the constraints are expressed as DNF formulas, Monte Carlo-based techniques have been shown to provide a fully polynomial randomized approximation scheme (FPRAS). For CNF constraints, hashing-based approximation techniques have been demonstrated to be highly successful. Furthermore, it was shown that hashing-based techniques also yield an FPRAS for DNF counting without usage of Monte Carlo sampling. Our analysis, however, shows that the proposed hashing-based approach to DNF counting provides poor time complexity compared to the Monte Carlo-based DNF counting techniques. Given the success of hashing-based techniques for CNF constraints, it is natural to ask: Can hashing-based techniques provide an efficient FPRAS for DNF counting? In this paper, we provide a positive answer to this question. To this end, we introduce two novel algorithmic techniques: and , along with a new hash family of . These innovations allow us to design a hashing-based FPRAS for DNF counting of similar complexity (up to polylog factors) as that of prior works. Furthermore, we expect these techniques to have potential applications beyond DNF counting.",
"",
"The recent surge of interest in reasoning about probabilistic graphical models has led to the development of various techniques for probabilistic reasoning. Of these, techniques based on weighted model counting are particularly interesting since they can potentially leverage recent advances in unweighted model counting and in propositional satisfiability solving. In this paper, we present a new approach to weighted model counting via reduction to unweighted model counting. Our reduction, which is polynomial-time and preserves the normal form (CNF DNF) of the input formula, allows us to exploit advances in unweighted model counting to solve weighted model counting instances. Experiments with weighted model counters built using our reduction indicate that these counters performs much better than a state-of-the-art weighted model counter."
]
} |
1904.02826 | 2933667566 | Here we consider, in the context of causal inference, the basic question: 'what can be estimated from data?'. We call this the question of estimability. We consider the usual definition adopted in the causal inference literature -- identifiability -- in a general mathematical setting and show why it is an inadequate formal translation of the concept of estimability. Despite showing that identifiability implies the existence of a Fisher-consistent estimator, we show that this estimator may be discontinuous, and hence unstable, in general. The difficulty arises because the causal inference problem is in general an ill-posed inverse problem. Inverse problems have three conditions which must be satisfied in order to be considered well-posed: existence, uniqueness, and stability of solutions. We illustrate how identifiability corresponds to the question of uniqueness; in contrast, we take estimability to mean satisfaction of all three conditions, i.e. well-posedness. It follows that mere identifiability does not guarantee well-posedness of a causal inference procedure, i.e. estimability, and apparent solutions to causal inference problems can be essentially useless with even the smallest amount of imperfection. These concerns apply, in particular, to causal inference approaches that focus on identifiability while ignoring the additional stability requirements needed for estimability. | As mentioned, we largely take an inverse problems approach in the present work, but relate these ideas back to the statistical and causal inference literature. Importantly, while the inverse problems, statistical, and econometrics communities have long focused on stability and well-posedness results, it appears that such results are much scarcer in the (graphical) causal inference literature. One exception that we are aware of is the recent article by @cite_1 , which in fact provides a realistic example of the stability issues raised in the present work. | {
"cite_N": [
"@cite_1"
],
"mid": [
"1486066286"
],
"abstract": [
"This paper is concerned with estimating the effects of actions from causal assumptions, represented concisely as a directed graph, and statistical knowledge, given as a probability distribution. We provide a necessary and sufficient graphical condition for the cases when the causal effect of an arbitrary set of variables on another arbitrary set can be determined uniquely from the available information, as well as an algorithm which computes the effect whenever this condition holds. Furthermore, we use our results to prove completeness of do-calculus [Pearl, 1995], and a version of an identification algorithm in [Tian, 2002] for the same identification problem. Finally, we derive a complete characterization of semi-Markovian models in which all causal effects are identifiable."
]
} |
1904.02794 | 2951280664 | We propose Visual Query Detection (VQD), a new visual grounding task. In VQD, a system is guided by natural language to localize a number of objects in an image. VQD is related to visual referring expression recognition, where the task is to localize only one object. We describe the first dataset for VQD and we propose baseline algorithms that demonstrate the difficulty of the task compared to referring expression recognition. | VQA systems take in an image and an open-ended natural language question and then generate a text-based answer @cite_1 @cite_15 @cite_22 @cite_16 . Many VQA datasets have been created. However, initial datasets, e.g., VQAv1 @cite_1 and COCO-QA @cite_6 , exhibited significant language bias in which many questions could be answered correctly without looking at the image, e.g., for VQAv1 it was possible to achieve 50% accuracy. | {
"cite_N": [
"@cite_22",
"@cite_1",
"@cite_6",
"@cite_15",
"@cite_16"
],
"mid": [
"2899235052",
"2950761309",
"2949218037",
"2952228917",
"2785017694"
],
"abstract": [
"Most counting questions in visual question answering (VQA) datasets are simple and require no more than object detection. Here, we study algorithms for complex counting questions that involve relationships between objects, attribute identification, reasoning, and more. To do this, we created TallyQA, the world's largest dataset for open-ended counting. We propose a new algorithm for counting that uses relation networks with region proposals. Our method lets relation networks be efficiently used with high-resolution imagery. It yields state-of-the-art results compared to baseline and recent systems on both TallyQA and the HowMany-QA benchmark.",
"We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing 0.25M images, 0.76M questions, and 10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (this http URL).",
"This work aims to address the problem of image-based question-answering (QA) with new models and datasets. In our work, we propose to use neural networks and visual semantic embeddings, without intermediate stages such as object detection and image segmentation, to predict answers to simple questions about images. Our model performs 1.8 times better than the only published results on an existing image QA dataset. We also present a question generation algorithm that converts image descriptions, which are widely available, into QA form. We used this algorithm to produce an order-of-magnitude larger dataset, with more evenly distributed answers. A suite of baseline results on this new dataset are also presented.",
"Problems at the intersection of vision and language are of significant importance both as challenging research questions and for the rich set of applications they enable. However, inherent structure in our world and bias in our language tend to be a simpler signal for learning than visual modalities, resulting in models that ignore visual information, leading to an inflated sense of their capability. We propose to counter these language priors for the task of Visual Question Answering (VQA) and make vision (the V in VQA) matter! Specifically, we balance the popular VQA dataset by collecting complementary images such that every question in our balanced dataset is associated with not just a single image, but rather a pair of similar images that result in two different answers to the question. Our dataset is by construction more balanced than the original VQA dataset and has approximately twice the number of image-question pairs. Our complete balanced dataset is publicly available at this http URL as part of the 2nd iteration of the Visual Question Answering Dataset and Challenge (VQA v2.0). We further benchmark a number of state-of-art VQA models on our balanced dataset. All models perform significantly worse on our balanced dataset, suggesting that these models have indeed learned to exploit language priors. This finding provides the first concrete empirical evidence for what seems to be a qualitative sense among practitioners. Finally, our data collection protocol for identifying complementary images enables us to develop a novel interpretable model, which in addition to providing an answer to the given (image, question) pair, also provides a counter-example based explanation. Specifically, it identifies an image that is similar to the original image, but it believes has a different answer to the same question. This can help in building trust for machines among their users.",
"Bar charts are an effective way for humans to convey information to each other, but today's algorithms cannot parse them. Existing methods fail when faced with minor variations in appearance. Here, we present DVQA, a dataset that tests many aspects of bar chart understanding in a question answering framework. Unlike visual question answering (VQA), DVQA requires processing words and answers that are unique to a particular bar chart. State-of-the-art VQA algorithms perform poorly on DVQA, and we propose two strong baselines that perform considerably better. Our work will enable algorithms to automatically extract semantic information from vast quantities of literature in science, business, and other areas."
]
} |
1904.02794 | 2951280664 | We propose Visual Query Detection (VQD), a new visual grounding task. In VQD, a system is guided by natural language to localize a number of objects in an image. VQD is related to visual referring expression recognition, where the task is to localize only one object. We describe the first dataset for VQD and we propose baseline algorithms that demonstrate the difficulty of the task compared to referring expression recognition. | Unlike VQA, RER algorithms must produce evidence to justify their outputs. An RER algorithm outputs a box around the image location matching the input string, making it easier to tell if an algorithm is behaving correctly. The RefCOCO and RefCOCO+ datasets for RER were collected from the two-player 'ReferIt' Game @cite_9 . The first player is asked to describe an outlined object and the second player has to correctly localize it from player one's description. The test datasets are further split into the 'testA' and 'testB' splits. The split 'testA' contains object categories sampled randomly to be close to the original data distribution, while 'testB' contains objects sampled from the most frequent object categories, excluding categories such as 'sky', 'sand', 'floor', etc. Since there is a time limit on the game, the descriptions are short, e.g., 'guy in a yellow t-shirt', 'pink', etc. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2251512949"
],
"abstract": [
"In this paper we introduce a new game to crowd-source natural language referring expressions. By designing a two player game, we can both collect and verify referring expressions directly within the game. To date, the game has produced a dataset containing 130,525 expressions, referring to 96,654 distinct objects, in 19,894 photographs of natural scenes. This dataset is larger and more varied than previous REG datasets and allows us to study referring expressions in real-world scenes. We provide an in depth analysis of the resulting dataset. Based on our findings, we design a new optimization based model for generating referring expressions and perform experimental evaluations on 3 test sets."
]
} |
1904.02794 | 2951280664 | We propose Visual Query Detection (VQD), a new visual grounding task. In VQD, a system is guided by natural language to localize a number of objects in an image. VQD is related to visual referring expression recognition, where the task is to localize only one object. We describe the first dataset for VQD and we propose baseline algorithms that demonstrate the difficulty of the task compared to referring expression recognition. | The Visual7W dataset for VQA includes a 'pointing' task that is closely related to RER @cite_24 . Pointing questions require choosing which of four given boxes correctly answers a query. Systems do not generate their own boxes, and there is always exactly one correct box. | {
"cite_N": [
"@cite_24"
],
"mid": [
"2136462581"
],
"abstract": [
"We have seen great progress in basic perceptual tasks such as object recognition and detection. However, AI models still fail to match humans in high-level vision tasks due to the lack of capacities for deeper reasoning. Recently the new task of visual question answering (QA) has been proposed to evaluate a model’s capacity for deep image understanding. Previous works have established a loose, global association between QA sentences and images. However, many questions and answers, in practice, relate to local regions in the images. We establish a semantic link between textual descriptions and image regions by object-level grounding. It enables a new type of QA with visual answers, in addition to textual answers used in previous work. We study the visual QA tasks in a grounded setting with a large collection of 7W multiple-choice QA pairs. Furthermore, we evaluate human performance and several baseline models on the QA tasks. Finally, we propose a novel LSTM model with spatial attention to tackle the 7W QA tasks."
]
} |
1904.02884 | 2926400157 | Deep neural networks are vulnerable to adversarial examples, which can mislead classifiers by adding imperceptible perturbations. An intriguing property of adversarial examples is their good transferability, making black-box attacks feasible in real-world applications. Due to the threat of adversarial attacks, many methods have been proposed to improve the robustness. Several state-of-the-art defenses are shown to be robust against transferable adversarial examples. In this paper, we propose a translation-invariant attack method to generate more transferable adversarial examples against the defense models. By optimizing a perturbation over an ensemble of translated images, the generated adversarial example is less sensitive to the white-box model being attacked and has better transferability. To improve the efficiency of attacks, we further show that our method can be implemented by convolving the gradient at the untranslated image with a pre-defined kernel. Our method is generally applicable to any gradient-based attack method. Extensive experiments on the ImageNet dataset validate the effectiveness of the proposed method. Our best attack fools eight state-of-the-art defenses at an 82% success rate on average based only on the transferability, demonstrating the insecurity of the current defense techniques. | Deep neural networks have been shown to be vulnerable to adversarial examples first in the visual domain @cite_7 . Then several methods were proposed to generate adversarial examples with high success rates and minimal perturbation sizes @cite_28 @cite_35 @cite_37 . They also exist in the physical world @cite_35 @cite_4 @cite_25 . Although adversarial examples have recently been crafted for many other domains, we focus on image classification tasks in this paper. | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_4",
"@cite_7",
"@cite_28",
"@cite_25"
],
"mid": [
"2460937040",
"2516574342",
"",
"1673923490",
"1945616565",
"2736899637"
],
"abstract": [
"Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work have assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as an input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.",
"Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input @math and any target classification @math , it is possible to find a new input @math that is similar to @math but classified as @math . This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from @math to @math . In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with @math probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.",
"",
"Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains of the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extend. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.",
"Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.",
"Standard methods for generating adversarial examples for neural networks do not consistently fool neural network classifiers in the physical world due to a combination of viewpoint shifts, camera noise, and other natural transformations, limiting their relevance to real-world systems. We demonstrate the existence of robust 3D adversarial objects, and we present the first algorithm for synthesizing examples that are adversarial over a chosen distribution of transformations. We synthesize two-dimensional adversarial images that are robust to noise, distortion, and affine transformation. We apply our algorithm to complex three-dimensional objects, using 3D-printing to manufacture the first physical adversarial objects. Our results demonstrate the existence of 3D adversarial objects in the physical world."
]
} |
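The efficiency claim in the abstract above — that optimizing over an ensemble of translated images can be approximated by convolving the gradient at the untranslated image with a pre-defined kernel — is easy to sketch. Below is a minimal NumPy illustration of one such translation-invariant FGSM step; the Gaussian kernel choice and the `grad_fn` interface are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(size=15, sigma=3.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def ti_fgsm_step(x, grad_fn, eps=8 / 255, kernel=None):
    """One translation-invariant FGSM step on an H x W x C image in [0, 1].

    grad_fn is an assumed callable returning dLoss/dx for the white-box model.
    """
    if kernel is None:
        kernel = gaussian_kernel()
    g = grad_fn(x)  # gradient at the untranslated image
    # Smooth each channel with the pre-defined kernel; this approximates
    # averaging gradients over an ensemble of translated inputs.
    g = np.stack([convolve(g[..., c], kernel) for c in range(g.shape[-1])], axis=-1)
    return np.clip(x + eps * np.sign(g), 0.0, 1.0)  # signed ascent step
```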
1904.02884 | 2926400157 | Deep neural networks are vulnerable to adversarial examples, which can mislead classifiers by adding imperceptible perturbations. An intriguing property of adversarial examples is their good transferability, making black-box attacks feasible in real-world applications. Due to the threat of adversarial attacks, many methods have been proposed to improve the robustness. Several state-of-the-art defenses are shown to be robust against transferable adversarial examples. In this paper, we propose a translation-invariant attack method to generate more transferable adversarial examples against the defense models. By optimizing a perturbation over an ensemble of translated images, the generated adversarial example is less sensitive to the white-box model being attacked and has better transferability. To improve the efficiency of attacks, we further show that our method can be implemented by convolving the gradient at the untranslated image with a pre-defined kernel. Our method is generally applicable to any gradient-based attack method. Extensive experiments on the ImageNet dataset validate the effectiveness of the proposed method. Our best attack fools eight state-of-the-art defenses at an 82% success rate on average based only on the transferability, demonstrating the insecurity of the current defense techniques. | Black-box adversaries have no access to the model parameters or gradients. The transferability @cite_34 of adversarial examples can be used to attack a black-box model. Several methods @cite_31 @cite_30 have been proposed to improve transferability, enabling powerful black-box attacks. Besides transfer-based black-box attacks, another line of work performs attacks based on adaptive queries. For example, Papernot et al. @cite_27 use queries to distill the knowledge of the target model and train a surrogate model, thereby turning black-box attacks into white-box attacks. Recent methods use queries to estimate the gradient or the decision boundary of the black-box model @cite_19 @cite_10 to generate adversarial examples. However, these methods usually require a large number of queries, which is impractical in real-world applications. In this paper, we resort to transfer-based black-box attacks. | {
"cite_N": [
"@cite_30",
"@cite_19",
"@cite_27",
"@cite_31",
"@cite_34",
"@cite_10"
],
"mid": [
"2791683151",
"2746600820",
"",
"2774644650",
"",
"2773022113"
],
"abstract": [
"Though CNNs have achieved the state-of-the-art performance on various vision tasks, they are vulnerable to adversarial examples --- crafted by adding human-imperceptible perturbations to clean images. However, most of the existing adversarial attacks only achieve relatively low success rates under the challenging black-box setting, where the attackers have no knowledge of the model structure and parameters. To this end, we propose to improve the transferability of adversarial examples by creating diverse input patterns. Instead of only using the original images to generate adversarial examples, our method applies random transformations to the input images at each iteration. Extensive experiments on ImageNet show that the proposed attack method can generate adversarial examples that transfer much better to different networks than existing baselines. By evaluating our method against top defense solutions and official baselines from NIPS 2017 adversarial competition, the enhanced attack reaches an average success rate of 73.0 , which outperforms the top-1 attack submission in the NIPS competition by a large margin of 6.6 . We hope that our proposed attack strategy can serve as a strong benchmark baseline for evaluating the robustness of networks to adversaries and the effectiveness of different defense methods in the future. Code is available at this https URL.",
"Deep neural networks (DNNs) are one of the most prominent technologies of our time, as they achieve state-of-the-art performance in many machine learning tasks, including but not limited to image classification, text mining, and speech processing. However, recent research on DNNs has indicated ever-increasing concern on the robustness to adversarial examples, especially for security-critical tasks such as traffic sign identification for autonomous driving. Studies have unveiled the vulnerability of a well-trained DNN by demonstrating the ability of generating barely noticeable (to both human and machines) adversarial images that lead to misclassification. Furthermore, researchers have shown that these adversarial images are highly transferable by simply training and attacking a substitute model built upon the target model, known as a black-box attack to DNNs. Similar to the setting of training substitute models, in this paper we propose an effective black-box attack that also only has access to the input (images) and the output (confidence scores) of a targeted DNN. However, different from leveraging attack transferability from substitute models, we propose zeroth order optimization (ZOO) based attacks to directly estimate the gradients of the targeted DNN for generating adversarial examples. We use zeroth order stochastic coordinate descent along with dimension reduction, hierarchical attack and importance sampling techniques to efficiently attack black-box models. By exploiting zeroth order optimization, improved attacks to the targeted DNN can be accomplished, sparing the need for training substitute models and avoiding the loss in attack transferability. Experimental results on MNIST, CIFAR10 and ImageNet show that the proposed ZOO attack is as effective as the state-of-the-art white-box attack (e.g., Carlini and Wagner's attack) and significantly outperforms existing black-box attacks via substitute models.",
"",
"Deep neural networks are vulnerable to adversarial examples, which poses security concerns on these algorithms due to the potentially severe consequences. Adversarial attacks serve as an important surrogate to evaluate the robustness of deep learning models before they are deployed. However, most of existing adversarial attacks can only fool a black-box model with a low success rate. To address this issue, we propose a broad class of momentum-based iterative algorithms to boost adversarial attacks. By integrating the momentum term into the iterative process for attacks, our methods can stabilize update directions and escape from poor local maxima during the iterations, resulting in more transferable adversarial examples. To further improve the success rates for black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that the adversarially trained models with a strong defense ability are also vulnerable to our black-box attacks. We hope that the proposed methods will serve as a benchmark for evaluating the robustness of various deep models and defense methods. With this method, we won the first places in NIPS 2017 Non-targeted Adversarial Attack and Targeted Adversarial Attack competitions.",
"",
"Many machine learning algorithms are vulnerable to almost imperceptible perturbations of their inputs. So far it was unclear how much risk adversarial perturbations carry for the safety of real-world machine learning applications because most methods used to generate such perturbations rely either on detailed model information (gradient-based attacks) or on confidence scores such as class probabilities (score-based attacks), neither of which are available in most real-world scenarios. In many such cases one currently needs to retreat to transfer-based attacks which rely on cumbersome substitute models, need access to the training data and can be defended against. Here we emphasise the importance of attacks which solely rely on the final model decision. Such decision-based attacks are (1) applicable to real-world black-box models such as autonomous cars, (2) need less knowledge and are easier to apply than transfer-based attacks and (3) are more robust to simple defences than gradient- or score-based attacks. Previous attacks in this category were limited to simple models or simple datasets. Here we introduce the Boundary Attack, a decision-based attack that starts from a large adversarial perturbation and then seeks to reduce the perturbation while staying adversarial. The attack is conceptually simple, requires close to no hyperparameter tuning, does not rely on substitute models and is competitive with the best gradient-based attacks in standard computer vision tasks like ImageNet. We apply the attack on two black-box algorithms from Clarifai.com. The Boundary Attack in particular and the class of decision-based attacks in general open new avenues to study the robustness of machine learning models and raise new questions regarding the safety of deployed machine learning systems. An implementation of the attack is available as part of Foolbox at this https URL ."
]
} |
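To make the query-based line of work concrete, here is a hedged sketch of zeroth-order gradient estimation in the spirit of ZOO (@cite_19 above): symmetric finite differences over randomly chosen coordinates, using only input-output access to the model. The `loss_fn` interface, the coordinate count, and the step size are illustrative assumptions; it also shows why such attacks are query-hungry (two queries per sampled coordinate).

```python
import numpy as np

def estimate_gradient(x, loss_fn, n_coords=128, h=1e-4):
    """Estimate dLoss/dx for a black-box loss_fn via finite differences.

    Only input-output access is used: two queries per sampled coordinate.
    """
    g = np.zeros_like(x, dtype=float)
    flat = g.reshape(-1)  # view into g
    idx = np.random.choice(x.size, size=min(n_coords, x.size), replace=False)
    for i in idx:
        e = np.zeros(x.size)
        e[i] = h
        e = e.reshape(x.shape)
        # Symmetric difference: (f(x + h e_i) - f(x - h e_i)) / 2h
        flat[i] = (loss_fn(x + e) - loss_fn(x - e)) / (2 * h)
    return g
```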
1904.02884 | 2926400157 | Deep neural networks are vulnerable to adversarial examples, which can mislead classifiers by adding imperceptible perturbations. An intriguing property of adversarial examples is their good transferability, making black-box attacks feasible in real-world applications. Due to the threat of adversarial attacks, many methods have been proposed to improve the robustness. Several state-of-the-art defenses are shown to be robust against transferable adversarial examples. In this paper, we propose a translation-invariant attack method to generate more transferable adversarial examples against the defense models. By optimizing a perturbation over an ensemble of translated images, the generated adversarial example is less sensitive to the white-box model being attacked and has better transferability. To improve the efficiency of attacks, we further show that our method can be implemented by convolving the gradient at the untranslated image with a pre-defined kernel. Our method is generally applicable to any gradient-based attack method. Extensive experiments on the ImageNet dataset validate the effectiveness of the proposed method. Our best attack fools eight state-of-the-art defenses at an 82% success rate on average based only on the transferability, demonstrating the insecurity of the current defense techniques. | A large variety of methods have been proposed to increase the robustness of deep learning models. Besides directly making models produce correct predictions on adversarial examples, some methods attempt to detect them instead @cite_18 @cite_3 . However, most non-certified defenses demonstrate robustness by causing obfuscated gradients, which can be successfully circumvented by new attacks @cite_0 . Although these defenses are not robust in the white-box setting, some of them @cite_22 @cite_9 @cite_16 @cite_6 empirically show resistance against transferable adversarial examples in the black-box setting. In this paper, we focus on generating more transferable adversarial examples against these defenses. | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_9",
"@cite_3",
"@cite_6",
"@cite_0",
"@cite_16"
],
"mid": [
"2593892853",
"2620038827",
"",
"2781957615",
"",
"2787708942",
"2767962654"
],
"abstract": [
"Machine learning and deep learning in particular has advanced tremendously on perceptual tasks in recent years. However, it remains vulnerable against adversarial perturbations of the input that have been crafted specifically to fool the system while being quasi-imperceptible to a human. In this work, we propose to augment deep neural networks with a small detector'' subnetwork which is trained on the binary classification task of distinguishing genuine data from data containing adversarial perturbations. Our method is orthogonal to prior work on addressing adversarial perturbations, which has mostly focused on making the classification network itself more robust. We show empirically that adversarial perturbations can be detected surprisingly well even though they are quasi-imperceptible to humans. Moreover, while the detectors have been trained to detect only a specific adversary, they generalize to similar and weaker adversaries. In addition, we propose an adversarial attack that fools both the classifier and the detector and a novel training procedure for the detector that counteracts this attack.",
"Adversarial examples are perturbed inputs designed to fool machine learning models. Adversarial training injects such examples into training data to increase robustness. To scale this technique to large datasets, perturbations are crafted using fast single-step methods that maximize a linear approximation of the model's loss. We show that this form of adversarial training converges to a degenerate global minimum, wherein small curvature artifacts near the data points obfuscate a linear approximation of the loss. The model thus learns to generate weak perturbations, rather than defend against strong ones. As a result, we find that adversarial training remains vulnerable to black-box attacks, where we transfer perturbations computed on undefended models, as well as to a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step. We further introduce Ensemble Adversarial Training, a technique that augments training data with perturbations transferred from other models. On ImageNet, Ensemble Adversarial Training yields models with strong robustness to black-box attacks. In particular, our most robust model won the first round of the NIPS 2017 competition on Defenses against Adversarial Attacks.",
"",
"Although the recent progress is substantial, deep learning methods can be vulnerable to the maliciously generated adversarial examples. In this paper, we present a novel training procedure and a thresholding test strategy, towards robust detection of adversarial examples. In training, we propose to minimize the reverse cross-entropy (RCE), which encourages a deep network to learn latent representations that better distinguish adversarial examples from normal ones. In testing, we propose to use a thresholding strategy as the detector to filter out adversarial examples for reliable predictions. Our method is simple to implement using standard algorithms, with little extra training cost compared to the common cross-entropy minimization. We apply our method to defend various attacking methods on the widely used MNIST and CIFAR-10 datasets, and achieve significant improvements on robust predictions under all the threat models in the adversarial setting.",
"",
"We identify obfuscated gradients as a phenomenon that leads to a false sense of security in defenses against adversarial examples. While defenses that cause obfuscated gradients appear to defeat optimization-based attacks, we find defenses relying on this effect can be circumvented. For each of the three types of obfuscated gradients we discover, we describe indicators of defenses exhibiting this effect and develop attack techniques to overcome it. In a case study, examining all defenses accepted to ICLR 2018, we find obfuscated gradients are a common occurrence, with 7 of 8 defenses relying on obfuscated gradients. Using our new attack techniques, we successfully circumvent all 7 of them.",
"Convolutional neural networks have demonstrated high accuracy on various tasks in recent years. However, they are extremely vulnerable to adversarial examples. For example, imperceptible perturbations added to clean images can cause convolutional neural networks to fail. In this paper, we propose to utilize randomization at inference time to mitigate adversarial effects. Specifically, we use two randomization operations: random resizing, which resizes the input images to a random size, and random padding, which pads zeros around the input images in a random manner. Extensive experiments demonstrate that the proposed randomization method is very effective at defending against both single-step and iterative attacks. Our method provides the following advantages: 1) no additional training or fine-tuning, 2) very few additional computations, 3) compatible with other adversarial defense methods. By combining the proposed randomization method with an adversarially trained model, it achieves a normalized score of 0.924 (ranked No.2 among 107 defense teams) in the NIPS 2017 adversarial examples defense challenge, which is far better than using adversarial training alone with a normalized score of 0.773 (ranked No.56). The code is public available at this https URL."
]
} |
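The randomization defense summarized in the last abstract above (@cite_16) is simple enough to sketch directly: random resizing followed by random zero-padding at inference time. A minimal NumPy/PIL version is below; the 299-to-331 sizes mirror a common Inception-scale setup but are assumptions here, not a reproduction of the original code.

```python
import numpy as np
from PIL import Image

def randomize(x, out_size=331, low=299, high=331):
    """x: H x W x 3 uint8 image; returns a randomly resized-and-padded copy."""
    s = np.random.randint(low, high + 1)  # random target size
    resized = np.asarray(Image.fromarray(x).resize((s, s)))
    pad = out_size - s
    top, left = np.random.randint(0, pad + 1, size=2)  # random pad offsets
    out = np.zeros((out_size, out_size, 3), dtype=x.dtype)
    out[top:top + s, left:left + s] = resized  # zeros act as random padding
    return out
```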
1904.02670 | 2932977863 | Previous work has found strong links between the choice of social media images and users' emotions, demographics and personality traits. In this study, we examine which attributes of profile and posted images are associated with depression and anxiety of Twitter users. We used a sample of 28,749 Facebook users to build a language prediction model of survey-reported depression and anxiety, and validated it on Twitter on a sample of 887 users who had taken anxiety and depression surveys. We then applied it to a different set of 4,132 Twitter users to impute language-based depression and anxiety labels, and extracted interpretable features of posted and profile pictures to uncover the associations with users' depression and anxiety, controlling for demographics. For depression, we find that profile pictures suppress positive emotions rather than display more negative emotions, likely because of social media self-presentation biases. They also tend to show the single face of the user (rather than show her in groups of friends), marking increased focus on the self, emblematic of depression. Posted images are dominated by grayscale and low aesthetic cohesion across a variety of image features. Profile images of anxious users are similarly marked by grayscale and low aesthetic cohesion, but less so than those of depressed users. Finally, we show that image features can be used to predict depression and anxiety, and that multitask learning that includes a joint modeling of demographics improves prediction performance. Overall, we find that the image attributes that mark depression and anxiety offer a rich lens into these conditions largely congruent with the psychological literature, and that images on Twitter allow inferences about the mental health status of users. | Researchers have used images to study personality, measured using the Big Five model @cite_38 , based on profile pictures (with facial features); others have also used posted images. One of the earliest works predicted the self-assessed personalities of 100 users from their Facebook profile images @cite_31 with @math 65 . However, in the health domain, the manifestation of mental health conditions in individual users based on social media images remains under-explored, despite recent work on studying the public health of communities @cite_51 @cite_57 @cite_34 . Table presents a summary of the relevant works: number of users, traits, image types studied, and features used. | {
"cite_N": [
"@cite_38",
"@cite_57",
"@cite_31",
"@cite_34",
"@cite_51"
],
"mid": [
"2071559616",
"2611552941",
"2003950529",
"",
"2611167769"
],
"abstract": [
"ABSTRACT The five-factor model of personality is a hierarchical organization of personality traits in terms of five basic dimensions: Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness to Experience. Research using both natural language adjectives and theoretically based personality questionnaires supports the comprehensiveness of the model and its applicability across observers and cultures. This article summarizes the history of the model and its supporting evidence; discusses conceptions of the nature of the factors; and outlines an agenda for theorizing about the origins and operation of the factors. We argue that the model should prove useful both for individual assessment and for the elucidation of a number of topics of interest to personality psychologists.",
"Social media sites are challenged by both the scale and variety of deviant behavior online. While algorithms can detect spam and obscenity, behaviors that break community guidelines on some sites are difficult because they have multimodal subtleties (images and or text). Identifying these posts is often regulated to a few moderators. In this paper, we develop a deep learning classifier that jointly models textual and visual characteristics of pro-eating disorder content that violates community guidelines. Using a million Tumblr photo posts, our classifier discovers deviant content efficiently while also maintaining high recall (85 ). Our approach uses human sensitivity throughout to guide the creation, curation, and understanding of this approach to challenging, deviant content. We discuss how automation might impact community moderation, and the ethical and social obligations of this area.",
"In this paper, we address the issue of personality and interaction style recognition from profile pictures in Facebook. We recruited volunteers among Facebook users and collected a dataset of profile pictures, labeled with gold standard self-assessed personality and interaction style labels. Then, we exploited a bag-of-visual-words technique to extract features from pictures. Finally, different machine learning approaches were used to test the effectiveness of these features in predicting personality and interaction style traits. Our good results show that this task is very promising, because profile pictures convey a lot of information about a user and are directly connected to impression formation and identity management.",
"",
"Content shared on social media platforms has been identified to be valuable in gaining insights into people's mental health experiences. Although there has been widespread adoption of photo-sharing platforms such as Instagram in recent years, the role of visual imagery as a mechanism of self-disclosure is less understood. We study the nature of visual attributes manifested in images relating to mental health disclosures on Instagram. Employing computer vision techniques on a corpus of thousands of posts, we extract and examine three visual attributes: visual features (e.g., color), themes, and emotions in images. Our findings indicate the use of imagery for unique self-disclosure needs, quantitatively and qualitatively distinct from those shared via the textual modality: expressions of emotional distress, calls for help, and explicit display of vulnerability. We discuss the relationship of our findings to literature in visual sociology, in mental health self disclosure, and implications for the design of health interventions."
]
} |
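The abstract above reports that multitask learning with joint modeling of demographics improves prediction. A hedged PyTorch sketch of such a setup is below: a shared trunk over extracted image features with separate heads for depression, anxiety, and demographics, so the auxiliary demographic task regularizes the shared representation. All dimensions, head choices, and loss weights are assumptions; the paper's exact architecture is not specified here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskModel(nn.Module):
    """Shared trunk over image features, with one head per task."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.depression = nn.Linear(hidden, 1)    # continuous severity score
        self.anxiety = nn.Linear(hidden, 1)
        self.demographics = nn.Linear(hidden, 2)  # e.g., age and gender (assumed)

    def forward(self, x):
        h = self.shared(x)
        return self.depression(h), self.anxiety(h), self.demographics(h)

def joint_loss(preds, targets, weights=(1.0, 1.0, 0.5)):
    # Weighted sum of per-task regression losses; gradients from every task
    # flow into the shared trunk, which is the point of the joint modeling.
    return sum(w * F.mse_loss(p.squeeze(-1), t)
               for w, p, t in zip(weights, preds, targets))
```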
1708.02439 | 2727792416 | Nowadays, it is still difficult to adapt Convolutional Neural Network (CNN) based models for deployment on embedded devices. The heavy computation and large memory footprint of CNN models become the main burden in real applications. In this paper, we propose a "Sparse Shrink" algorithm to prune an existing CNN model. By analyzing the importance of each channel via sparse reconstruction, the algorithm is able to prune redundant feature maps accordingly. The resulting pruned model thus directly saves computational resources. We have evaluated our algorithm on CIFAR-100. As shown in our experiments, we can reduce 56.77% of parameters and 73.84% of multiplications in total with only a minor decrease in accuracy. These results demonstrate the effectiveness of our "Sparse Shrink" algorithm. | Extensive work has been done to accelerate the test-time computation of CNN models or to lower their memory cost. Some of these methods @cite_17 @cite_13 speed up testing by exploiting the sparsity in CNN models through low-rank decomposition. Vasilache et al. @cite_21 speed up the convolution operation with a Fast Fourier Transform implementation. However, these algorithms focus on either accelerating test speed or lowering the memory footprint of CNN models without changing their model structures. | {
"cite_N": [
"@cite_21",
"@cite_13",
"@cite_17"
],
"mid": [
"1789336918",
"1902041153",
"2950967261"
],
"abstract": [
"We examine the performance profile of Convolutional Neural Network training on the current generation of NVIDIA Graphics Processing Units. We introduce two new Fast Fourier Transform convolution implementations: one based on NVIDIA's cuFFT library, and another based on a Facebook authored FFT implementation, fbfft, that provides significant speedups over cuFFT (over 1.5x) for whole CNNs. Both of these convolution implementations are available in open source, and are faster than NVIDIA's cuDNN implementation for many common convolutional layers (up to 23.5x for some synthetic kernel configurations). We discuss different performance regimes of convolutions, comparing areas where straightforward time domain convolutions outperform Fourier frequency domain convolutions. Details on algorithmic applications of NVIDIA GPU hardware specifics in the implementation of fbfft are also provided.",
"This paper aims to accelerate the test-time computation of deep convolutional neural networks (CNNs). Unlike existing methods that are designed for approximating linear filters or linear responses, our method takes the nonlinear units into account. We minimize the reconstruction error of the nonlinear responses, subject to a low-rank constraint which helps to reduce the complexity of filters. We develop an effective solution to this constrained nonlinear optimization problem. An algorithm is also presented for reducing the accumulated error when multiple layers are approximated. A whole-model speedup ratio of 4× is demonstrated on a large network trained for ImageNet, while the top-5 error rate is only increased by 0.9 . Our accelerated model has a comparably fast speed as the “AlexNet” [11], but is 4.7 more accurate.",
"The focus of this paper is speeding up the evaluation of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability. Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain. Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real world network designed for scene text character recognition, showing a possible 2.5x speedup with no loss in accuracy, and 4.5x speedup with less than 1 drop in accuracy, still achieving state-of-the-art on standard benchmarks."
]
} |
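The FFT-based acceleration cited above (@cite_21) rests on the convolution theorem: spatial convolution becomes pointwise multiplication in the frequency domain, which pays off for large kernels and feature maps. A minimal NumPy check of the identity for 2-D circular convolution is below; real implementations add padding, striding, batching, and kernel tiling on top of this idea.

```python
import numpy as np

x = np.random.randn(32, 32)            # input feature map
k = np.zeros((32, 32))
k[:3, :3] = np.random.randn(3, 3)      # 3x3 kernel, zero-padded to map size

# Frequency-domain product equals circular convolution in the spatial domain.
fft_conv = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(k)))

# Direct circular convolution at one output location for comparison.
ref = sum(x[(5 - i) % 32, (7 - j) % 32] * k[i, j]
          for i in range(3) for j in range(3))
assert np.allclose(fft_conv[5, 7], ref)
```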
1708.02439 | 2727792416 | Nowadays, it is still difficult to adapt Convolutional Neural Network (CNN) based models for deployment on embedded devices. The heavy computation and large memory footprint of CNN models become the main burden in real applications. In this paper, we propose a "Sparse Shrink" algorithm to prune an existing CNN model. By analyzing the importance of each channel via sparse reconstruction, the algorithm is able to prune redundant feature maps accordingly. The resulting pruned model thus directly saves computational resources. We have evaluated our algorithm on CIFAR-100. As shown in our experiments, we can reduce 56.77% of parameters and 73.84% of multiplications in total with only a minor decrease in accuracy. These results demonstrate the effectiveness of our "Sparse Shrink" algorithm. | Network pruning has been studied by several researchers @cite_19 @cite_18 @cite_11 @cite_14 . LeCun et al. @cite_19 and Hassibi et al. @cite_3 show that a portion of the weights can be set to zero by analyzing their values and the Hessian matrix. Han et al. @cite_5 @cite_22 gradually prune the small weights in a network, and further reduce the storage requirement by compressing the weights in fully connected layers with matrix factorization and vector quantization. Rastegari et al. @cite_14 binarize both the weights and the layer inputs, such that the resulting network mainly uses XNOR operations. Stepniewski et al. @cite_11 prune networks with a genetic algorithm and simulated annealing. However, these algorithms only make use of intra-kernel sparsity, without performing channel-wise pruning, which prevents GPUs from exploiting the computational savings. Different from existing algorithms, our "Sparse Shrink" algorithm directly prunes the network structure in convolutional layers through channel-wise pruning. The most related work on channel-wise pruning is "Structured pruning" @cite_0 , which naively removes the incoming and outgoing weights of a pruned channel. In contrast, we modify the convolutional kernels in the upper layer by reconstructing the original feature maps, in order to reduce the decrease in accuracy. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_22",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_5",
"@cite_11"
],
"mid": [
"",
"2951978180",
"2119144962",
"2125389748",
"2952826672",
"2114766824",
"2963674932",
"2072458630"
],
"abstract": [
"",
"We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values resulting in 32x memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58x faster convolutional operations and 32x memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is only 2.9 less than the full-precision AlexNet (in top-1 measure). We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than 16 in top-1 accuracy.",
"Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.",
"We investigate the use of information from all second order derivatives of the error function to perform network pruning (i.e., removing unimportant weights from a trained network) in order to improve generalization, simplify networks, reduce hardware or storage requirements, increase the speed of further training, and in some cases enable rule extraction. Our method, Optimal Brain Surgeon (OBS), is Significantly better than magnitude-based methods and Optimal Brain Damage [Le Cun, Denker and Solla, 1990], which often remove the wrong weights. OBS permits the pruning of more weights than other methods (for the same error on the training set), and thus yields better generalization on test data. Crucial to OBS is a recursion relation for calculating the inverse Hessian matrix H-1 from training data and structural information of the net. OBS permits a 90 , a 76 , and a 62 reduction in weights over backpropagation with weight decay on three benchmark MONK's problems [, 1991]. Of OBS, Optimal Brain Damage, and magnitude-based methods, only OBS deletes the correct weights from a trained XOR network in every case. Finally, whereas Sejnowski and Rosenberg [1987] used 18,000 weights in their NETtalk network, we used OBS to prune a network to just 1560 weights, yielding better generalization.",
"Real time application of deep learning algorithms is often hindered by high computational complexity and frequent memory accesses. Network pruning is a promising technique to solve this problem. However, pruning usually results in irregular network connections that not only demand extra representation efforts but also do not fit well on parallel computation. We introduce structured sparsity at various scales for convolutional neural networks, which are channel wise, kernel wise and intra kernel strided sparsity. This structured sparsity is very advantageous for direct computational resource savings on embedded computers, parallel computing environments and hardware based systems. To decide the importance of network connections and paths, the proposed method uses a particle filtering approach. The importance weight of each particle is assigned by computing the misclassification rate with corresponding connectivity pattern. The pruned network is re-trained to compensate for the losses due to pruning. While implementing convolutions as matrix products, we particularly show that intra kernel strided sparsity with a simple constraint can significantly reduce the size of kernel and feature map matrices. The pruned network is finally fixed point optimized with reduced word length precision. This results in significant reduction in the total storage size providing advantages for on-chip memory based implementations of deep neural networks.",
"We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application.",
"Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.",
"Approaches combining genetic algorithms and neural networks have received a great deal of attention in recent years. As a result, much work has been reported in two major areas of neural network design: training and topology optimisation. This paper focuses on the key issues associated with the problem of pruning a multilayer perceptron using genetic algorithms and simulated annealing. The study presented considers a number of aspects associated with network training that may alter the behaviour of a stochastic topology optimiser. Enhancements are discussed that can improve topology searches. Simulation results for the two mentioned stochastic optimisation methods applied to non-linear system identification are presented and compared with a simple random search."
]
} |
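To make the reconstruction idea concrete — refitting the next layer so that the pruned features approximate the original ones, instead of naively deleting weights as in structured pruning — here is a hedged least-squares sketch. The channel-selection step itself (the sparse reconstruction in "Sparse Shrink") is omitted; the kept-channel set and the `F`/`keep` interface are assumptions for illustration.

```python
import numpy as np

def refit_after_pruning(F, keep):
    """F: N x C matrix of channel responses sampled over locations/images;
    keep: indices of the retained channels.

    Returns W (len(keep) x C) such that F ≈ F[:, keep] @ W in the
    least-squares sense.
    """
    W, *_ = np.linalg.lstsq(F[:, keep], F, rcond=None)
    return W
```

A kernel in the next layer that originally consumed all C channels can then absorb W (per spatial offset) and be re-expressed over the kept channels only, which is what reduces the accuracy drop relative to naive weight deletion.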
1708.02660 | 2742434688 | Knowing where people look and click on visual designs can provide clues about how the designs are perceived, and where the most important or relevant content lies. The most important content of a visual design can be used for effective summarization or to facilitate retrieval from a database. We present automated models that predict the relative importance of different elements in data visualizations and graphic designs. Our models are neural networks trained on human clicks and importance annotations on hundreds of designs. We collected a new dataset of crowdsourced importance, and analyzed the predictions of our models with respect to ground truth importance and human eye movements. We demonstrate how such predictions of importance can be used for automatic design retargeting and thumbnailing. User studies with hundreds of MTurk participants validate that, with limited post-processing, our importance-driven applications are on par with, or outperform, current state-of-the-art methods, including natural image saliency. We also provide a demonstration of how our importance predictions can be built into interactive design tools to offer immediate feedback during the design process. | Designers and researchers have long studied eye movements as a clue to understanding the perception of interfaces @cite_34 @cite_24 . There have also been several recent studies of eye movements and the perception of designs @cite_42 @cite_46 . However, measuring eye movements is an expensive and time-consuming process, and is rarely feasible for practical applications. | {
"cite_N": [
"@cite_24",
"@cite_46",
"@cite_34",
"@cite_42"
],
"mid": [
"1517003397",
"2011673487",
"1538622809",
"1952173784"
],
"abstract": [
"Publisher Summary This chapter discusses the application of eye movements to user interfaces, both for analyzing interfaces (measuring usability) and as an actual control medium within a human–computer dialogue. For usability analysis, the user's eye movements are recorded during system use and later analyzed retrospectively; however, the eye movements do not affect the interface in real time. As a direct control medium, the eye movements are obtained and used in real time as an input to the user–computer dialogue. The eye movements might be the sole input, typically for disabled users or hands-busy applications, or might be used as one of several inputs, combining with mouse, keyboard, sensors, or other devices. From the perspective of mainstream eye-movement research, human–computer interaction, together with related work in the broader field of communications and media research, appears as a new and very promising area of applied work. Both basic and applied work can profit from integration within a unified field of eye-movement research. Application of eye tracking in human–computer interaction remains a very promising approach; its technological and market barriers are finally being reduced.",
"Information graphics, or infographics, combine elements of data visualization with design and have become an increasingly popular means for disseminating data. While several studies have suggested that aesthetics in visualization and infographics relate to desirable outcomes like engagement and memorability, it remains unknown how quickly aesthetic impressions are formed, and what it is that makes an infographic appealing. We address these questions by analyzing 1,278 participants' ratings on appeal after seeing infographics for 500ms. Our results establish that: 1) people form a reliable first impression of the appeal of an infographic based on a mere exposure effect, 2) this first impression is largely based on colorfulness and visual complexity, and 3) age, gender, and education level influence the preferred level of colorfulness and complexity. More generally, these findings suggest that outcomes such as engagement and memorability might be determined much earlier than previously thought.",
"to the Human Visual System (HVS).- Visual Attention.- Neurological Substrate of the HVS.- Visual Psychophysics.- Taxonomy and Models of Eye Movements.- Eye Tracking Systems.- Eye Tracking Techniques.- Head-Mounted System Hardware Installation.- Head-Mounted System Software Development.- Head-Mounted System Calibration.- Table-Mounted System Hardware Installation.- Table-Mounted System Software Development.- Table-Mounted System Calibration.- Eye Movement Analysis.- Eye Tracking Methodology.- Experimental Design.- Suggested Empirical Guidelines.- Case Studies.- Eye Tracking Applications.- Diversity and Types of Eye Tracking Applications.- Neuroscience and Psychology.- Industrial Engineering and Human Factors.- Marketing Advertising.- Computer Science.- Conclusion.",
"In this paper we move beyond memorability and investigate how visualizations are recognized and recalled. For this study we labeled a dataset of 393 visualizations and analyzed the eye movements of 33 participants as well as thousands of participant-generated text descriptions of the visualizations. This allowed us to determine what components of a visualization attract people's attention, and what information is encoded into memory. Our findings quantitatively support many conventional qualitative design guidelines, including that (1) titles and supporting text should convey the message of a visualization, (2) if used appropriately, pictograms do not interfere with understanding and can improve recognition, and (3) redundancy helps effectively communicate the message. Importantly, we show that visualizations memorable “at-a-glance” are also capable of effectively conveying the message of the visualization. Thus, a memorable visualization is often also an effective one."
]
} |
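As a concrete example of how a predicted importance map can drive thumbnailing (one of the applications named in the abstract above), the sketch below picks the fixed-size crop with the largest total importance, using an integral image for constant-time window sums. This is a generic baseline consistent with the abstract, not the paper's actual retargeting pipeline.

```python
import numpy as np

def best_crop(importance, ch, cw):
    """importance: H x W predicted importance map; returns (top, left) of the
    ch x cw window with the largest total importance."""
    # Integral image with a zero row/column prepended for O(1) window sums.
    ii = np.pad(importance, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    H, W = importance.shape
    best, argmax = -np.inf, (0, 0)
    for t in range(H - ch + 1):
        for l in range(W - cw + 1):
            s = ii[t + ch, l + cw] - ii[t, l + cw] - ii[t + ch, l] + ii[t, l]
            if s > best:
                best, argmax = s, (t, l)
    return argmax
```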
1708.02531 | 2744530641 | Cross-modal hashing is usually regarded as an effective technique for large-scale textual-visual cross retrieval, where data from different modalities are mapped into a shared Hamming space for matching. Most traditional textual-visual binary encoding methods only consider holistic image representations and fail to model descriptive sentences. This renders existing methods inappropriate for handling the rich semantics of informative cross-modal data in quality textual-visual search tasks. To address the problem of hashing cross-modal data with semantic-rich cues, in this paper, a novel integrated deep architecture is developed to effectively encode the detailed semantics of informative images and long descriptive sentences, named Textual-Visual Deep Binaries (TVDB). In particular, region-based convolutional networks with long short-term memory units are introduced to fully explore image regional details, while semantic cues of sentences are modeled by a text convolutional network. Additionally, we propose a stochastic batch-wise training routine, where high-quality binary codes and deep encoding functions are efficiently optimized in an alternating manner. Experiments are conducted on three multimedia datasets, i.e., Microsoft COCO, IAPR TC-12, and INRIA Web Queries, where the proposed TVDB model significantly outperforms state-of-the-art binary coding methods in the task of cross-modal retrieval. | Cutting-edge studies in vision and language achieve promising results on visual question answering @cite_3 @cite_64 @cite_68 , caption generation @cite_18 @cite_27 @cite_38 , and real-valued cross-modal retrieval @cite_26 @cite_12 @cite_44 @cite_55 @cite_19 @cite_58 @cite_60 @cite_65 @cite_63 @cite_28 @cite_11 . The best-performing real-valued cross-modal retrieval models typically rely on densely annotated image-region and text pairs for embedding. However, these methods are far from satisfactory for large-scale data retrieval due to the inefficient similarity computation of real-valued embeddings. | {
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_64",
"@cite_26",
"@cite_60",
"@cite_28",
"@cite_55",
"@cite_58",
"@cite_65",
"@cite_3",
"@cite_44",
"@cite_19",
"@cite_27",
"@cite_63",
"@cite_68",
"@cite_12",
"@cite_11"
],
"mid": [
"2963758027",
"2950178297",
"2950761309",
"154472438",
"1931795219",
"2950012948",
"",
"2077069816",
"2951805548",
"2952246170",
"2953276893",
"2963735856",
"2951183276",
"1811254738",
"2189070436",
"2082721209",
"1949478088"
],
"abstract": [
"We introduce the dense captioning task, which requires a computer vision system to both localize and describe salient regions in images in natural language. The dense captioning task generalizes object detection when the descriptions consist of a single word, and Image Captioning when one predicted region covers the full image. To address the localization and description task jointly we propose a Fully Convolutional Localization Network (FCLN) architecture that processes an image with a single, efficient forward pass, requires no external regions proposals, and can be trained end-to-end with a single round of optimization. The architecture is composed of a Convolutional Network, a novel dense localization layer, and Recurrent Neural Network language model that generates the label sequences. We evaluate our network on the Visual Genome dataset, which comprises 94,000 images and 4,100,000 region-grounded captions. We observe both speed and accuracy improvements over baselines based on current state of the art approaches in both generation and retrieval settings.",
"Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.",
"We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing 0.25M images, 0.76M questions, and 10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (this http URL).",
"Data often consists of multiple diverse modalities. For example, images are tagged with textual information and videos are accompanied by audio. Each modality is characterized by having distinct statistical properties. We propose a Deep Boltzmann Machine for learning a generative model of such multimodal data. We show that the model can be used to create fused representations by combining features across modalities. These learned representations are useful for classification and information retrieval. By sampling from the conditional distributions over each data modality, it is possible to create these representations even when some data modalities are missing. We conduct experiments on bimodal image-text and audio-video data. The fused representation achieves good classification results on the MIR-Flickr data set matching or outperforming other deep models as well as SVM based models that use Multiple Kernel Learning. We further demonstrate that this multimodal model helps classification and retrieval even when only unimodal data is available at test time.",
"In the traditional object recognition pipeline, descriptors are densely sampled over an image, pooled into a high dimensional non-linear representation and then passed to a classifier. In recent years, Fisher Vectors have proven empirically to be the leading representation for a large variety of applications. The Fisher Vector is typically taken as the gradients of the log-likelihood of descriptors, with respect to the parameters of a Gaussian Mixture Model (GMM). Motivated by the assumption that different distributions should be applied for different datasets, we present two other Mixture Models and derive their Expectation-Maximization and Fisher Vector expressions. The first is a Laplacian Mixture Model (LMM), which is based on the Laplacian distribution. The second Mixture Model presented is a Hybrid Gaussian-Laplacian Mixture Model (HGLMM) which is based on a weighted geometric mean of the Gaussian and Laplacian distribution. An interesting property of the Expectation-Maximization algorithm for the latter is that in the maximization step, each dimension in each component is chosen to be either a Gaussian or a Laplacian. Finally, by using the new Fisher Vectors derived from HGLMMs, we achieve state-of-the-art results for both the image annotation and the image search by a sentence tasks.",
"In this paper, we propose multimodal convolutional neural networks (m-CNNs) for matching image and sentence. Our m-CNN provides an end-to-end framework with convolutional architectures to exploit image representation, word composition, and the matching relations between the two modalities. More specifically, it consists of one image CNN encoding the image content, and one matching CNN learning the joint representation of image and sentence. The matching CNN composes words to different semantic fragments and learns the inter-modal relations between image and the composed fragments at different levels, thus fully exploit the matching relations between image and sentence. Experimental results on benchmark databases of bidirectional image and sentence retrieval demonstrate that the proposed m-CNNs can effectively capture the information necessary for image and sentence matching. Specifically, our proposed m-CNNs for bidirectional image and sentence retrieval on Flickr30K and Microsoft COCO databases achieve the state-of-the-art performances.",
"",
"This paper develops a novel framework for semantic image retrieval based on the notion of a scene graph. Our scene graphs represent objects (“man”, “boat”), attributes of objects (“boat is white”) and relationships between objects (“man standing on boat”). We use these scene graphs as queries to retrieve semantically related images. To this end, we design a conditional random field model that reasons about possible groundings of scene graphs to test images. The likelihoods of these groundings are used as ranking scores for retrieval. We introduce a novel dataset of 5,000 human-generated scene graphs grounded to images and use this dataset to evaluate our method for image retrieval. In particular, we evaluate retrieval using full scene graphs and small scene subgraphs, and show that our method outperforms retrieval methods that use only objects or low-level image features. In addition, we show that our full model can be used to improve object localization compared to baseline methods.",
"We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.",
"We address a question answering task on real-world images that is set up as a Visual Turing Test. By combining latest advances in image representation and natural language processing, we propose Neural-Image-QA, an end-to-end formulation to this problem for which all parts are trained jointly. In contrast to previous efforts, we are facing a multi-modal problem where the language output (answer) is conditioned on visual and natural language input (image and question). Our approach Neural-Image-QA doubles the performance of the previous best approach on this problem. We provide additional insights into the problem by analyzing how much information is contained only in the language part for which we provide a new human baseline. To study human consensus, which is related to the ambiguities inherent in this challenging task, we propose two novel metrics and collect additional answers which extends the original DAQUAR dataset to DAQUAR-Consensus.",
"We introduce a model for bidirectional retrieval of images and sentences through a multi-modal embedding of visual and natural language data. Unlike previous models that directly map images or sentences into a common embedding space, our model works on a finer level and embeds fragments of images (objects) and fragments of sentences (typed dependency tree relations) into a common space. In addition to a ranking objective seen in previous work, this allows us to add a new fragment alignment objective that learns to directly associate these fragments across modalities. Extensive experimental evaluation shows that reasoning on both the global level of images and sentences and the finer level of their respective fragments significantly improves performance on image-sentence retrieval tasks. Additionally, our model provides interpretable predictions since the inferred inter-modal fragment alignment is explicit.",
"In this paper, we address the task of natural language object retrieval, to localize a target object within a given image based on a natural language query of the object. Natural language object retrieval differs from text-based image retrieval task as it involves spatial information about objects within the scene and global scene context. To address this issue, we propose a novel Spatial Context Recurrent ConvNet (SCRC) model as scoring function on candidate boxes for object retrieval, integrating spatial configurations and global scene-level contextual information into the network. Our model processes query text, local image descriptors, spatial configurations and global context features through a recurrent network, outputs the probability of the query text conditioned on each candidate box as a score for the box, and can transfer visual-linguistic knowledge from image captioning domain to our task. Experimental results demonstrate that our method effectively utilizes both local and global information, outperforming previous baseline methods significantly on different datasets and scenarios, and can exploit large scale vision and language datasets for knowledge transfer.",
"Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized.",
"In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions. It directly models the probability distribution of generating a word given previous words and an image. Image captions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on four benchmark datasets (IAPR TC-12, Flickr 8K, Flickr 30K and MS COCO). Our model outperforms the state-of-the-art methods. In addition, we apply the m-RNN model to retrieval tasks for retrieving images or sentences, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval. The project page of this work is: www.stat.ucla.edu junhua.mao m-RNN.html .",
"In this paper, we present the mQA model, which is able to answer questions about the content of an image. The answer can be a sentence, a phrase or a single word. Our model contains four components: a Long Short-Term Memory (LSTM) to extract the question representation, a Convolutional Neural Network (CNN) to extract the visual representation, an LSTM for storing the linguistic context in an answer, and a fusing component to combine the information from the first three components and generate the answer. We construct a Freestyle Multilingual Image Question Answering (FM-IQA) dataset to train and evaluate our mQA model. It contains over 150,000 images and 310,000 freestyle Chinese question-answer pairs and their English translations. The quality of the generated answers of our mQA model on this dataset is evaluated by human judges through a Turing Test. Specifically, we mix the answers provided by humans and our model. The human judges need to distinguish our model from the human. They will also provide a score (i.e. 0, 1, 2, the larger the better) indicating the quality of the answer. We propose strategies to monitor the quality of this evaluation process. The experiments show that in 64.7 of cases, the human judges cannot distinguish our model from humans. The average score is 1.454 (1.918 for human). The details of this work, including the FM-IQA dataset, can be found on the project page: http: idl.baidu.com FM-IQA.html .",
"To support cross-modal information retrieval, cross-modal learning to rank approaches utilize ranking examples (e.g., an example may be a text query and its corresponding ranked images) to learn appropriate ranking (similarity) function. However, the fact that each modality is represented with intrinsically different low-level features hinders these approaches from better reducing the heterogeneity-gap between the modalities and thus giving satisfactory retrieval results. In this paper, we consider learning with neural networks, from the perspective of optimizing the listwise ranking loss of the cross-modal ranking examples. The proposed model, named Cross-Modal Ranking Neural Network (CMRNN), benefits from the advance of both neural networks on learning high-level semantics and learning to rank techniques on learning ranking function, such that the learned cross-modal ranking function is implicitly embedded in the learned high-level representation for data objects with different modalities (e.g., text and imagery) to perform cross-modal retrieval directly. We compare CMRNN to existing state-of-the-art cross-modal ranking methods on two datasets and show that it achieves a better performance.",
"This paper addresses the problem of matching images and captions in a joint latent space learnt with deep canonical correlation analysis (DCCA). The image and caption data are represented by the outputs of the vision and text based deep neural networks. The high dimensionality of the features presents a great challenge in terms of memory and speed complexity when used in DCCA framework. We address these problems by a GPU implementation and propose methods to deal with overfitting. This makes it possible to evaluate DCCA approach on popular caption-image matching benchmarks. We compare our approach to other recently proposed techniques and present state of the art results on three datasets."
]
} |
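Several entries in the record above (the fragment-embedding, CMRNN, and DCCA abstracts) train retrieval models with a ranking objective over image-sentence pairs. As a hedged illustration — a generic bidirectional max-margin loss over a similarity matrix, not the exact objective of any paper cited above — the core computation looks like this:

```python
import numpy as np

def bidirectional_ranking_loss(sim, margin=0.2):
    """Max-margin ranking loss over an NxN image-sentence similarity
    matrix whose diagonal holds the matched pairs; both retrieval
    directions (image->sentence and sentence->image) are penalized."""
    n = sim.shape[0]
    pos = np.diag(sim)                                       # scores of true pairs
    cost_s = np.maximum(0.0, margin + sim - pos[:, None])    # image -> sentences
    cost_im = np.maximum(0.0, margin + sim - pos[None, :])   # sentence -> images
    np.fill_diagonal(cost_s, 0.0)                            # don't penalize positives
    np.fill_diagonal(cost_im, 0.0)
    return (cost_s.sum() + cost_im.sum()) / n

sim = np.eye(4) + 0.1 * np.random.randn(4, 4)                # toy similarity scores
print(bidirectional_ranking_loss(sim))
```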
1708.02668 | 2742395519 | While Markov Random Fields (MRFs) are widely used in computer vision, they present a quite challenging inference problem. MRF inference can be accelerated by pre-processing techniques like Dead End Elimination (DEE) or QPBO-based approaches which compute the optimal labeling of a subset of variables. These techniques are guaranteed to never wrongly label a variable but they often leave a large number of variables unlabeled. We address this shortcoming by interpreting pre-processing as a classification problem, which allows us to trade off false positives (i.e., giving a variable an incorrect label) versus false negatives (i.e., failing to label a variable). We describe an efficient discriminative rule that finds optimal solutions for a subset of variables. Our technique provides both per-instance and worst-case guarantees concerning the quality of the solution. Empirical studies were conducted over several benchmark datasets. We obtain a speedup factor of 2 to 12 over expansion moves without preprocessing, and on difficult non-submodular energy functions produce slightly lower energy. | A popular approach to the inference problem is to find the optimal labeling for a subset of the variables @cite_17 @cite_3 @cite_24 @cite_41 @cite_23 @cite_40 @cite_2 @cite_13 @cite_4. A partial labeling that holds in every global minimizer is said to be persistent @cite_32. Techniques like QPBO @cite_32 @cite_18 find an optimal partial labeling by enforcing an even stronger condition: a partial labeling that will decrease the energy if it is substituted into any complete labeling. QPBO is naturally viewed as a pre-processing method since it finds persistent partial labelings, and leaves the task of labeling the remaining variables to some other algorithm. This stronger property is sometimes called an autarky @cite_32, which was generalized by @cite_40. QPBO in particular is widely used in computer vision since it often finds the correct label for the majority of the variables. (A toy sketch of an MRF energy and a brute-force persistency check follows this record.) | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_41",
"@cite_32",
"@cite_3",
"@cite_24",
"@cite_40",
"@cite_23",
"@cite_2",
"@cite_13",
"@cite_17"
],
"mid": [
"2153396823",
"73592085",
"2952068903",
"",
"",
"1981238278",
"2951647103",
"2144164641",
"2952759329",
"2951056087",
"1972728295"
],
"abstract": [
"Optimization techniques based on graph cuts have become a standard tool for many vision applications. These techniques allow to minimize efficiently certain energy functions corresponding to pairwise Markov random fields (MRFs). Currently, there is an accepted view within the computer vision community that graph cuts can only be used for optimizing a limited class of MRF energies (e.g., submodular functions). In this survey, we review some results that show that graph cuts can be applied to a much larger class of energy functions (in particular, nonsubmodular functions). While these results are well-known in the optimization community, to our knowledge they were not used in the context of computer vision and MRF optimization. We demonstrate the relevance of these results to vision on the problem of binary texture restoration.",
"We extend the concept of generalized roof duality from pseudo-boolean functions to real-valued functions over multi-label variables. In particular, we prove that an analogue of the persistency property holds for energies of any order with any number of linearly ordered labels. Moreover, we show how the optimal submodular relaxation can be constructed in the first-order case.",
"Consider a convex relaxation @math of a pseudo-boolean function @math . We say that the relaxation is totally half-integral if @math is a polyhedral function with half-integral extreme points @math , and this property is preserved after adding an arbitrary combination of constraints of the form @math , @math , and @math where @math is a constant. A well-known example is the roof duality relaxation for quadratic pseudo-boolean functions @math . We argue that total half-integrality is a natural requirement for generalizations of roof duality to arbitrary pseudo-boolean functions. Our contributions are as follows. First, we provide a complete characterization of totally half-integral relaxations @math by establishing a one-to-one correspondence with bisubmodular functions . Second, we give a new characterization of bisubmodular functions. Finally, we show some relationships between general totally half-integral relaxations and relaxations based on the roof duality.",
"",
"",
"We consider the problem of optimizing multilabel MRFs, which is in general NP-hard and ubiquitous in low-level computer vision. One approach for its solution is to formulate it as an integer linear programming and relax the integrality constraints. The approach we consider in this paper is to first convert the multi-label MRF into an equivalent binary-label MRF and then to relax it. The resulting relaxation can be efficiently solved using a maximum flow algorithm. Its solution provides us with a partially optimal labelling of the binary variables. This partial labelling is then easily transferred to the multi-label problem. We study the theoretical properties of the new relaxation and compare it with the standard one. Specifically, we compare tightness, and characterize a subclass of problems where the two relaxations coincide. We propose several combined algorithms based on the technique and demonstrate their performance on challenging computer vision problems.",
"We consider discrete pairwise energy minimization problem (weighted constraint satisfaction, max-sum labeling) and methods that identify a globally optimal partial assignment of variables. When finding a complete optimal assignment is intractable, determining optimal values for a part of variables is an interesting possibility. Existing methods are based on different sufficient conditions. We propose a new sufficient condition for partial optimality which is: (1) verifiable in polynomial time (2) invariant to reparametrization of the problem and permutation of labels and (3) includes many existing sufficient conditions as special cases. We pose the problem of finding the maximum optimal partial assignment identifiable by the new sufficient condition. A polynomial method is proposed which is guaranteed to assign same or larger part of variables than several existing approaches. The core of the method is a specially constructed linear program that identifies persistent assignments in an arbitrary multi-label setting.",
"We consider energy minimization for undirected graphical models, also known as the MAP-inference problem for Markov random fields. Although combinatorial methods, which return a provably optimal integral solution of the problem, made a significant progress in the past decade, they are still typically unable to cope with large-scale datasets. On the other hand, large scale datasets are often defined on sparse graphs and convex relaxation methods, such as linear programming relaxations then provide good approximations to integral solutions. We propose a novel method of combining combinatorial and convex programming techniques to obtain a global solution of the initial combinatorial problem. Based on the information obtained from the solution of the convex relaxation, our method confines application of the combinatorial solver to a small fraction of the initial graphical model, which allows to optimally solve much larger problems. We demonstrate the efficacy of our approach on a computer vision energy minimization benchmark.",
"We consider the NP-hard problem of MAP-inference for undirected discrete graphical models. We propose a polynomial time and practically efficient algorithm for finding a part of its optimal solution. Specifically, our algorithm marks some labels of the considered graphical model either as (i) optimal, meaning that they belong to all optimal solutions of the inference problem; (ii) non-optimal if they provably do not belong to any solution. With access to an exact solver of a linear programming relaxation to the MAP-inference problem, our algorithm marks the maximal possible (in a specified sense) number of labels. We also present a version of the algorithm, which has access to a suboptimal dual solver only and still can ensure the (non-)optimality for the marked labels, although the overall number of the marked labels may decrease. We propose an efficient implementation, which runs in time comparable to a single run of a suboptimal dual solver. Our method is well-scalable and shows state-of-the-art results on computational benchmarks from machine learning and computer vision.",
"We consider the energy minimization problem for undirected graphical models, also known as MAP-inference problem for Markov random fields which is NP-hard in general. We propose a novel polynomial time algorithm to obtain a part of its optimal non-relaxed integral solution. Our algorithm is initialized with variables taking integral values in the solution of a convex relaxation of the MAP-inference problem and iteratively prunes those, which do not satisfy our criterion for partial optimality. We show that our pruning strategy is in a certain sense theoretically optimal. Also empirically our method outperforms previous approaches in terms of the number of persistently labelled variables. The method is very general, as it is applicable to models with arbitrary factors of an arbitrary order and can employ any solver for the considered relaxed problem. Our method's runtime is determined by the runtime of the convex relaxation solver for the MAP-inference problem.",
"THE prediction of a protein's tertiary structure is still a considerable problem because the huge amount of possible conformational space1 makes it computationally difficult. With regard to side-chain modelling, a solution has been attempted by the grouping of side-chain conformations into representative sets of rotamers2–5. Nonetheless, an exhaustive combinatorial search is still limited to carefully identified packing units5,6containing a limited number of residues. For larger systems other strategies had to be develop-ped, such as the Monte Carlo Procedure6,7 and the genetic algorithm and clustering approach8. Here we present a theorem, referred to as the 'dead-end elimination' theorem, which imposes a suitable condition to identify rotamers that cannot be members of the global minimum energy conformation. Application of this theorem effectively controls the computational explosion of the rotamer combinatorial problem, thereby allowing the determination of the global minimum energy conformation of a large collection of side chains."
]
} |
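To make the persistency terminology in the record above concrete, here is a toy, brute-force sketch (illustration only; the cost tables are made-up numbers and the enumeration is exponential) of a pairwise MRF energy and a check that a partial labeling holds in every global minimizer:

```python
import itertools

def energy(unary, pairwise, edges, x):
    """E(x) = sum_i theta_i(x_i) + sum_(i,j) theta_ij(x_i, x_j)."""
    e = sum(unary[i][x[i]] for i in range(len(x)))
    e += sum(pairwise[k][x[i]][x[j]] for k, (i, j) in enumerate(edges))
    return e

def is_persistent(unary, pairwise, edges, partial):
    """Does the partial labeling {var: label} hold in *every* global
    minimizer? Assumes all variables share one label set; toy sizes only."""
    n, labels = len(unary), range(len(unary[0]))
    best = min(energy(unary, pairwise, edges, x)
               for x in itertools.product(labels, repeat=n))
    minimizers = [x for x in itertools.product(labels, repeat=n)
                  if energy(unary, pairwise, edges, x) == best]
    return all(all(x[i] == l for i, l in partial.items()) for x in minimizers)

unary = [[0, 2], [1, 0], [0, 3]]                 # 3 binary variables
edges = [(0, 1), (1, 2)]
pairwise = [[[0, 1], [1, 0]], [[0, 1], [1, 0]]]  # cost 1 for disagreement
print(is_persistent(unary, pairwise, edges, {0: 0}))  # True on this toy chain
```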
1708.02668 | 2742395519 | While Markov Random Fields (MRFs) are widely used in computer vision, they present a quite challenging inference problem. MRF inference can be accelerated by pre-processing techniques like Dead End Elimination (DEE) or QPBO-based approaches which compute the optimal labeling of a subset of variables. These techniques are guaranteed to never wrongly label a variable but they often leave a large number of variables unlabeled. We address this shortcoming by interpreting pre-processing as a classification problem, which allows us to trade off false positives (i.e., giving a variable an incorrect label) versus false negatives (i.e., failing to label a variable). We describe an efficient discriminative rule that finds optimal solutions for a subset of variables. Our technique provides both per-instance and worst-case guarantees concerning the quality of the solution. Empirical studies were conducted over several benchmark datasets. We obtain a speedup factor of 2 to 12 over expansion moves without preprocessing, and on difficult non-submodular energy functions produce slightly lower energy. | QPBO generalizes the binary graph cut reduction that uses max-flow to find an optimal partial labeling @cite_32 @cite_18 @cite_35. If the energy function is submodular (for every pairwise cost, we have @math), the partial labeling is complete (i.e., it labels every variable and finds a global minimizer). However, the computational expense of running max-flow is non-trivial. (A numeric submodularity check is sketched after this record.) | {
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_32"
],
"mid": [
"2137117160",
"2153396823",
""
],
"abstract": [
"Many computer vision applications rely on the efficient optimization of challenging, so-called non-submodular, binary pairwise MRFs. A promising graph cut based approach for optimizing such MRFs known as \"roof duality\" was recently introduced into computer vision. We study two methods which extend this approach. First, we discuss an efficient implementation of the \"probing\" technique introduced recently by (2006). It simplifies the MRF while preserving the global optimum. Our code is 400-700 faster on some graphs than the implementation of the work of (2006). Second, we present a new technique which takes an arbitrary input labeling and tries to improve its energy. We give theoretical characterizations of local minima of this procedure. We applied both techniques to many applications, including image segmentation, new view synthesis, super-resolution, diagram recognition, parameter learning, texture restoration, and image deconvolution. For several applications we see that we are able to find the global minimum very efficiently, and considerably outperform the original roof duality approach. In comparison to existing techniques, such as graph cut, TRW, BP, ICM, and simulated annealing, we nearly always find a lower energy.",
"Optimization techniques based on graph cuts have become a standard tool for many vision applications. These techniques allow to minimize efficiently certain energy functions corresponding to pairwise Markov random fields (MRFs). Currently, there is an accepted view within the computer vision community that graph cuts can only be used for optimizing a limited class of MRF energies (e.g., submodular functions). In this survey, we review some results that show that graph cuts can be applied to a much larger class of energy functions (in particular, nonsubmodular functions). While these results are well-known in the optimization community, to our knowledge they were not used in the context of computer vision and MRF optimization. We demonstrate the relevance of these results to vision on the problem of binary texture restoration.",
""
]
} |
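The @math placeholder in the record above stands for the pairwise submodularity inequality. Assuming the standard convention for binary pairwise costs, theta(0,0) + theta(1,1) <= theta(0,1) + theta(1,0), the check is one line:

```python
def is_submodular(theta):
    """theta[a][b] is the pairwise cost of labels (a, b); the standard
    binary submodularity condition is theta(0,0)+theta(1,1) <= theta(0,1)+theta(1,0)."""
    return theta[0][0] + theta[1][1] <= theta[0][1] + theta[1][0]

# A cut-type cost that penalizes disagreement is submodular ...
print(is_submodular([[0, 1], [1, 0]]))   # True
# ... while a cost that rewards disagreement is not.
print(is_submodular([[1, 0], [0, 1]]))   # False
```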
1708.02668 | 2742395519 | While Markov Random Fields (MRFs) are widely used in computer vision, they present a quite challenging inference problem. MRF inference can be accelerated by pre-processing techniques like Dead End Elimination (DEE) or QPBO-based approaches which compute the optimal labeling of a subset of variables. These techniques are guaranteed to never wrongly label a variable but they often leave a large number of variables unlabeled. We address this shortcoming by interpreting pre-processing as a classification problem, which allows us to trade off false positives (i.e., giving a variable an incorrect label) versus false negatives (i.e., failing to label a variable). We describe an efficient discriminative rule that finds optimal solutions for a subset of variables. Our technique provides both per-instance and worst-case guarantees concerning the quality of the solution. Empirical studies were conducted over several benchmark datasets. We obtain a speedup factor of 2 to 12 over expansion moves without preprocessing, and on difficult non-submodular energy functions produce slightly lower energy. | There are also techniques that directly find an optimal partial labeling for the multi-label case, but the computational costs for these methods are significant. Kovtun @cite_30 @cite_25 described an approach that constructs a series of binary one-versus-the-rest auxiliary problems and solves each of them via graph cuts. MQPBO @cite_24 and generalized roof duality @cite_4 proposed generalizations of QPBO to multi-label MRFs. (The one-versus-the-rest construction is sketched after this record.) | {
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_25",
"@cite_24"
],
"mid": [
"1496918921",
"73592085",
"",
"1981238278"
],
"abstract": [
"Optimal labeling problems are NP-hard in many practically important cases. Sufficient conditions for optimal label detection in every pixel are formulated. Knowing the values of the optimal labeling in some pixels, as a result of applying the proposed algorithm, allows to decrease the complexity of the original problem essentially.",
"We extend the concept of generalized roof duality from pseudo-boolean functions to real-valued functions over multi-label variables. In particular, we prove that an analogue of the persistency property holds for energies of any order with any number of linearly ordered labels. Moreover, we show how the optimal submodular relaxation can be constructed in the first-order case.",
"",
"We consider the problem of optimizing multilabel MRFs, which is in general NP-hard and ubiquitous in low-level computer vision. One approach for its solution is to formulate it as an integer linear programming and relax the integrality constraints. The approach we consider in this paper is to first convert the multi-label MRF into an equivalent binary-label MRF and then to relax it. The resulting relaxation can be efficiently solved using a maximum flow algorithm. Its solution provides us with a partially optimal labelling of the binary variables. This partial labelling is then easily transferred to the multi-label problem. We study the theoretical properties of the new relaxation and compare it with the standard one. Specifically, we compare tightness, and characterize a subclass of problems where the two relaxations coincide. We propose several combined algorithms based on the technique and demonstrate their performance on challenging computer vision problems."
]
} |
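A hedged sketch of the one-versus-the-rest idea attributed to Kovtun above — only the unary half, with made-up numbers; the actual auxiliary problems also transform the pairwise terms, which this deliberately omits:

```python
def one_vs_rest_unaries(unary, a):
    """Collapse a multi-label unary table into a binary one:
    'take label a' vs. the cheapest of the remaining labels.
    Pairwise terms need an analogous, more careful construction;
    this only sketches the idea behind the auxiliary problems."""
    binary = []
    for costs in unary:
        rest = min(c for l, c in enumerate(costs) if l != a)
        binary.append((costs[a], rest))
    return binary

print(one_vs_rest_unaries([[3, 1, 2], [0, 5, 4]], a=0))  # [(3, 1), (0, 4)]
```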
1708.02668 | 2742395519 | While Markov Random Fields (MRFs) are widely used in computer vision, they present a quite challenging inference problem. MRF inference can be accelerated by pre-processing techniques like Dead End Elimination (DEE) or QPBO-based approaches which compute the optimal labeling of a subset of variables. These techniques are guaranteed to never wrongly label a variable but they often leave a large number of variables unlabeled. We address this shortcoming by interpreting pre-processing as a classification problem, which allows us to trade off false positives (i.e., giving a variable an incorrect label) versus false negatives (i.e., failing to label a variable). We describe an efficient discriminative rule that finds optimal solutions for a subset of variables. Our technique provides both per-instance and worst-case guarantees concerning the quality of the solution. Empirical studies were conducted over several benchmark datasets. We obtain a speedup factor of 2 to 12 over expansion moves without preprocessing, and on difficult non-submodular energy functions produce slightly lower energy. | Recently, Swoboda et al. @cite_13 use standard MRF inference algorithms to iteratively update the set of persistent variables. Shekhovtsov @cite_40 formalized the problem of maximizing the number of optimally labeled variables as an LP. They also proposed combining these two approaches, which takes advantage of both @cite_2. The number of variables labeled by these approaches is significantly larger than with Kovtun's approach and MQPBO. However, the running time of these approaches is significantly longer, since they involve iteratively solving complex programs (either via a standard MRF inference solver or an LP solver). (A skeleton of such an iterative pruning loop follows this record.) | {
"cite_N": [
"@cite_40",
"@cite_13",
"@cite_2"
],
"mid": [
"2951647103",
"2951056087",
"2952759329"
],
"abstract": [
"We consider discrete pairwise energy minimization problem (weighted constraint satisfaction, max-sum labeling) and methods that identify a globally optimal partial assignment of variables. When finding a complete optimal assignment is intractable, determining optimal values for a part of variables is an interesting possibility. Existing methods are based on different sufficient conditions. We propose a new sufficient condition for partial optimality which is: (1) verifiable in polynomial time (2) invariant to reparametrization of the problem and permutation of labels and (3) includes many existing sufficient conditions as special cases. We pose the problem of finding the maximum optimal partial assignment identifiable by the new sufficient condition. A polynomial method is proposed which is guaranteed to assign same or larger part of variables than several existing approaches. The core of the method is a specially constructed linear program that identifies persistent assignments in an arbitrary multi-label setting.",
"We consider the energy minimization problem for undirected graphical models, also known as MAP-inference problem for Markov random fields which is NP-hard in general. We propose a novel polynomial time algorithm to obtain a part of its optimal non-relaxed integral solution. Our algorithm is initialized with variables taking integral values in the solution of a convex relaxation of the MAP-inference problem and iteratively prunes those, which do not satisfy our criterion for partial optimality. We show that our pruning strategy is in a certain sense theoretically optimal. Also empirically our method outperforms previous approaches in terms of the number of persistently labelled variables. The method is very general, as it is applicable to models with arbitrary factors of an arbitrary order and can employ any solver for the considered relaxed problem. Our method's runtime is determined by the runtime of the convex relaxation solver for the MAP-inference problem.",
"We consider the NP-hard problem of MAP-inference for undirected discrete graphical models. We propose a polynomial time and practically efficient algorithm for finding a part of its optimal solution. Specifically, our algorithm marks some labels of the considered graphical model either as (i) optimal, meaning that they belong to all optimal solutions of the inference problem; (ii) non-optimal if they provably do not belong to any solution. With access to an exact solver of a linear programming relaxation to the MAP-inference problem, our algorithm marks the maximal possible (in a specified sense) number of labels. We also present a version of the algorithm, which has access to a suboptimal dual solver only and still can ensure the (non-)optimality for the marked labels, although the overall number of the marked labels may decrease. We propose an efficient implementation, which runs in time comparable to a single run of a suboptimal dual solver. Our method is well-scalable and shows state-of-the-art results on computational benchmarks from machine learning and computer vision."
]
} |
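The iterative schemes of Swoboda et al. and Shekhovtsov described above share a common shape. The skeleton below is a loose sketch of that shape only; `solve_relaxation` and `check_criterion` are hypothetical callables standing in for the problem-specific relaxation solver and persistency test:

```python
def prune_with_relaxation(solve_relaxation, check_criterion, problem):
    """Start from the integral part of a relaxed solution and repeatedly
    discard candidate labels that fail a persistency criterion, until a
    fixed point is reached. Both callables are assumptions for illustration:
    solve_relaxation(problem) -> iterable of (var, label) pairs,
    check_criterion(var, label, current) -> bool."""
    persistent = dict(solve_relaxation(problem))   # var -> candidate label
    changed = True
    while changed:
        changed = False
        for var, lab in list(persistent.items()):
            if not check_criterion(var, lab, persistent):
                del persistent[var]                # not provably optimal
                changed = True
    return persistent                              # provably optimal partial labeling
```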
1708.02668 | 2742395519 | While Markov Random Fields (MRFs) are widely used in computer vision, they present a quite challenging inference problem. MRF inference can be accelerated by pre-processing techniques like Dead End Elimination (DEE) or QPBO-based approaches which compute the optimal labeling of a subset of variables. These techniques are guaranteed to never wrongly label a variable but they often leave a large number of variables unlabeled. We address this shortcoming by interpreting pre-processing as a classification problem, which allows us to trade off false positives (i.e., giving a variable an incorrect label) versus false negatives (i.e., failing to label a variable). We describe an efficient discriminative rule that finds optimal solutions for a subset of variables. Our technique provides both per-instance and worst-case guarantees concerning the quality of the solution. Empirical studies were conducted over several benchmark datasets. We obtain a speedup factor of 2 to 12 over expansion moves without preprocessing, and on difficult non-submodular energy functions produce slightly lower energy. | Dead End Elimination (DEE) @cite_17 and the recent Persistency Relaxation (PR) algorithm @cite_27 are the only existing methods with cheaper computational costs than max-flow. DEE checks a local sufficient condition which only involves a single vertex and its adjacent edges. PR generalizes DEE to check a larger partial labeling, which gives improved results on standard benchmarks. (DEE's local test is sketched after this record.) | {
"cite_N": [
"@cite_27",
"@cite_17"
],
"mid": [
"2435789594",
"1972728295"
],
"abstract": [
"Markov Random Fields (MRFs) are a widely used graphical model, but the inference problem is NP-hard. For first-order MRFs with binary labels, Dead End Elimination (DEE) [7] and QPBO [2, 14] can find the optimal labeling for some variables, the much harder case of larger label sets has been addressed by Kovtun [16, 17] and related methods [12, 23, 24, 25], which impose substantial computational overhead. We describe an efficient algorithm to correctly label a subset of the variables for arbitrary MRFs, with particularly good performance on binary MRFs. We propose a sufficient condition to check if a partial labeling is optimal, which is a generalization of DEE's purely local test. We give a hierarchy of relaxations that provide larger optimal partial labelings at the cost of additional computation. Empirical studies were conducted on several benchmarks, using expansion moves [4] for inference. Our algorithm runs in a few seconds, and improves the speed of MRF inference with expansion moves by a factor of 1.5 to 12.",
"THE prediction of a protein's tertiary structure is still a considerable problem because the huge amount of possible conformational space1 makes it computationally difficult. With regard to side-chain modelling, a solution has been attempted by the grouping of side-chain conformations into representative sets of rotamers2–5. Nonetheless, an exhaustive combinatorial search is still limited to carefully identified packing units5,6containing a limited number of residues. For larger systems other strategies had to be develop-ped, such as the Monte Carlo Procedure6,7 and the genetic algorithm and clustering approach8. Here we present a theorem, referred to as the 'dead-end elimination' theorem, which imposes a suitable condition to identify rotamers that cannot be members of the global minimum energy conformation. Application of this theorem effectively controls the computational explosion of the rotamer combinatorial problem, thereby allowing the determination of the global minimum energy conformation of a large collection of side chains."
]
} |
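DEE's "local sufficient condition" mentioned above can be written down compactly. This is a hedged sketch of the Goldstein-style elimination test — one common variant, not necessarily the exact rule used in the papers above — where `pairwise` is assumed to store a cost table for each ordered neighbor pair:

```python
def dee_eliminates(i, a, b, unary, pairwise, nbrs):
    """Label a at variable i can never beat label b (so a cannot appear
    in a global minimizer) if
        theta_i(a) - theta_i(b)
          + sum_j min_c [theta_ij(a, c) - theta_ij(b, c)] > 0.
    Only variable i's unary and its adjacent edges are inspected."""
    gap = unary[i][a] - unary[i][b]
    for j in nbrs[i]:
        gap += min(pairwise[(i, j)][a][c] - pairwise[(i, j)][b][c]
                   for c in range(len(unary[j])))
    return gap > 0

unary = [[0.0, 2.0], [1.0, 0.0]]
pairwise = {(0, 1): [[0.0, 1.0], [1.0, 0.0]],
            (1, 0): [[0.0, 1.0], [1.0, 0.0]]}
nbrs = {0: [1], 1: [0]}
print(dee_eliminates(0, 1, 0, unary, pairwise, nbrs))  # True: label 1 at node 0 is dead
```

Sweeping this test over all variables and label pairs gives the DEE pre-processing pass; anything it eliminates can be discarded before running a full inference algorithm.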
1708.02668 | 2742395519 | While Markov Random Fields (MRFs) are widely used in computer vision, they present a quite challenging inference problem. MRF inference can be accelerated by pre-processing techniques like Dead End Elimination (DEE) or QPBO-based approaches which compute the optimal labeling of a subset of variables. These techniques are guaranteed to never wrongly label a variable but they often leave a large number of variables unlabeled. We address this shortcoming by interpreting pre-processing as a classification problem, which allows us to trade off false positives (i.e., giving a variable an incorrect label) versus false negatives (i.e., failing to label a variable). We describe an efficient discriminative rule that finds optimal solutions for a subset of variables. Our technique provides both per-instance and worst-case guarantees concerning the quality of the solution. Empirical studies were conducted over several benchmark datasets. We obtain a speedup factor of 2 to 12 over expansion moves without preprocessing, and on difficult non-submodular energy functions produce slightly lower energy. | Methods that optimally label a subset of the variables can obviously be used to pre-process and accelerate MRF inference algorithms such as expansion moves. For example, Radhakrishnan and Su @cite_0 used DEE, while Alahari et al. @cite_36 applied Kovtun's approach. (A generic pre-process-then-infer pipeline is sketched after this record.) | {
"cite_N": [
"@cite_0",
"@cite_36"
],
"mid": [
"2118290885",
"2153215387"
],
"abstract": [
"We apply the dead-end elimination (DEE) strategy from protein design as a heuristic for the max-flow min-cut formulation of the image segmentation problem. DEE combines aspects of constraint propagation and branch-and-bound to eliminate solutions incompatible with global optimization of the objective function. Though DEE can be used for segmentation into an arbitrary number of regions, in this paper we evaluate only the case of binary segmentation. We provide a runtime analysis and evaluation of DEE applied to two min-cut algorithms. Preliminary results show that DEE consistently reduces the search space for the Edmonds?Karp algorithm; tuning DEE as a heuristic for Boykov?Kolmogorov and other algorithms is future work.",
"In this paper, we present novel techniques that improve the computational and memory efficiency of algorithms for solving multi-label energy functions arising from discrete MRFs or CRFs. These methods are motivated by the observations that the performance of minimization algorithms depends on: 1) the initialization used for the primal and dual variables and 2) the number of primal variables involved in the energy function. Our first method (dynamic α-expansion) works by \"recycling\" results from previous problem instances. The second method simplifies the energy function by \"reducing\" the number of unknown variables present in the problem. Further, we show that it can also be used to generate a good initialization for the dynamic α-expansion algorithm by \"reusing\" dual variables. We test the performance of our methods on energy functions encountered in the problems of stereo matching and color and object-based segmentation. Experimental results show that our methods achieve a substantial improvement in the performance of α-expansion, as well as other popular algorithms such as sequential tree-re-weighted message passing and max-product belief propagation. We also demonstrate the applicability of our schemes for certain higher order energy functions, such as the one described, for interactive texture-based image and video segmentation. In most cases, we achieve a 10-15 times speed-up in the computation time. Our modified α-expansion algorithm provides similar performance to Fast-PD, but is conceptually much simpler. Both α-expansion and Fast-PD can be made orders of magnitude faster when used in conjunction with the \"reduce\" scheme proposed in this paper."
]
} |
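The pre-process-then-infer pattern described in the record above, as a generic skeleton; `condition_on` is a hypothetical API for fixing the pre-labeled variables, and `infer` is assumed to return a dict from variables to labels:

```python
def solve_with_preprocessing(preprocess, infer, problem):
    """Fix the variables a persistency method labels optimally, then run
    a full inference algorithm (e.g., expansion moves) on the remainder.
    All three arguments are illustrative assumptions, not a real API."""
    fixed = preprocess(problem)              # {var: provably optimal label}
    reduced = problem.condition_on(fixed)    # hypothetical: shrink the problem
    solution = infer(reduced)                # label the remaining variables
    solution.update(fixed)                   # merge back the fixed labels
    return solution
```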
1708.02579 | 2744809494 | Deep convolutional neural networks (CNNs) are the deep learning model of choice for performing object detection, classification, semantic segmentation and natural language processing tasks. CNNs require billions of operations to process a frame. This computational complexity, combined with the inherent parallelism of the convolution operation, makes CNNs an excellent target for custom accelerators. However, when optimizing for different CNN hierarchies and data access patterns, it is difficult for custom accelerators to achieve close to 100% computational efficiency. In this work, we present Snowflake, a scalable and efficient accelerator that is agnostic to CNN workloads, and was designed to always perform at near-peak hardware utilization. Snowflake is able to achieve a computational efficiency of over 91% on modern CNN models. Snowflake, implemented on a Xilinx Zynq XC7Z045 SoC, is capable of achieving a peak throughput of 128 G-ops/s and a measured throughput of 100 frames per second and 120 G-ops/s on the AlexNet CNN model, 36 frames per second and 116 G-ops/s on the GoogLeNet CNN model and 17 frames per second and 122 G-ops/s on the ResNet-50 CNN model. To the best of our knowledge, Snowflake is the only implemented system capable of achieving over 91% efficiency on modern CNNs and the only implemented system with GoogLeNet and ResNet as part of the benchmark suite. | A variety of CNN accelerator designs have been proposed in recent years. Eyeriss is an ASIC implementation and uses a @math grid of processing elements to accelerate CNN processing. The grid is fed by a 108 KB scratchpad buffer. Eyeriss provides two performance figures for its design, one that includes DRAM load latency and one that does not. A case is made for the latter that DRAM latency is easy to optimize and hence can be ignored. Based on these two performance figures, Eyeriss achieves a computational efficiency on AlexNet of 68.8%. The designs described in @cite_14 and @cite_12 are FPGA-based accelerators capable of achieving 73.3% efficiency. (The efficiency arithmetic implied by these figures is sketched after this record.) | {
"cite_N": [
"@cite_14",
"@cite_12"
],
"mid": [
"2520083297",
"2276486856"
],
"abstract": [
"With the recent advancement of multilayer convolutional neural networks (CNN), deep learning has achieved amazing success in many areas, especially in visual content understanding and classification. To improve the performance and energy-efficiency of the computation-demanding CNN, the FPGA-based acceleration emerges as one of the most attractive alternatives. In this paper we design and implement Caffeine, a hardware software co-designed library to efficiently accelerate the entire CNN on FPGAs. First, we propose a uniformed convolutional matrix-multiplication representation for both computation-intensive convolutional layers and communication-intensive fully connected (FCN) layers. Second, we design Caffeine with the goal to maximize the underlying FPGA computing and bandwidth resource utilization, with a key focus on the bandwidth optimization by the memory access reorganization not studied in prior work. Moreover, we implement Caffeine in the portable high-level synthesis and provide various hardware software definable parameters for user configurations. Finally, we also integrate Caffeine into the industry-standard software deep learning framework Caffe. We evaluate Caffeine and its integration with Caffe by implementing VGG16 and AlexNet network on multiple FPGA platforms. Caffeine achieves a peak performance of 365 GOPS on Xilinx KU060 FPGA and 636 GOPS on Virtex7 690t FPGA. This is the best published result to our best knowledge. We achieve more than 100x speedup on FCN layers over previous FPGA accelerators. An end-to-end evaluation with Caffe integration shows up to 7.3x and 43.5x performance and energy gains over Caffe on a 12-core Xeon server, and 1.5x better energy-efficiency over the GPU implementation on a medium-sized FPGA (KU060). Performance projections to a system with a high-end FPGA (Virtex7 690t) shows even higher gains.",
"In recent years, convolutional neural network (CNN) based methods have achieved great success in a large number of applications and have been among the most powerful and widely used techniques in computer vision. However, CNN-based methods are com-putational-intensive and resource-consuming, and thus are hard to be integrated into embedded systems such as smart phones, smart glasses, and robots. FPGA is one of the most promising platforms for accelerating CNN, but the limited bandwidth and on-chip memory size limit the performance of FPGA accelerator for CNN. In this paper, we go deeper with the embedded FPGA platform on accelerating CNNs and propose a CNN accelerator design on embedded FPGA for Image-Net large-scale image classification. We first present an in-depth analysis of state-of-the-art CNN models and show that Convolutional layers are computational-centric and Fully-Connected layers are memory-centric. Then the dynamic-precision data quantization method and a convolver design that is efficient for all layer types in CNN are proposed to improve the bandwidth and resource utilization. Results show that only 0.4 accuracy loss is introduced by our data quantization flow for the very deep VGG16 model when 8 4-bit quantization is used. A data arrangement method is proposed to further ensure a high utilization of the external memory bandwidth. Finally, a state-of-the-art CNN, VGG16-SVD, is implemented on an embedded FPGA platform as a case study. VGG16-SVD is the largest and most accurate network that has been implemented on FPGA end-to-end so far. The system on Xilinx Zynq ZC706 board achieves a frame rate at 4.45 fps with the top-5 accuracy of 86.66 using 16-bit quantization. The average performance of convolutional layers and the full CNN is 187.8 GOP s and 137.0 GOP s under 150MHz working frequency, which outperform previous approaches significantly."
]
} |
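Using only the numbers quoted in the Snowflake abstract above (128 G-ops/s peak and the three measured throughputs), the computational-efficiency arithmetic works out as follows. Note GoogLeNet lands just under the 91% figure, so the paper presumably rounds or computes efficiency slightly differently:

```python
peak = 128.0  # G-ops/s peak throughput, from the abstract above
for net, measured in [("AlexNet", 120.0), ("GoogLeNet", 116.0), ("ResNet-50", 122.0)]:
    print(f"{net}: {measured / peak:.1%} of peak")
# AlexNet:   93.8% of peak
# GoogLeNet: 90.6% of peak
# ResNet-50: 95.3% of peak
```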
1708.02349 | 2743691986 | We present a Temporal Context Network (TCN) for precise temporal localization of human activities. Similar to the Faster-RCNN architecture, proposals are placed at equal intervals in a video which span multiple temporal scales. We propose a novel representation for ranking these proposals. Since pooling features only inside a segment is not sufficient to predict activity boundaries, we construct a representation which explicitly captures context around a proposal for ranking it. For each temporal segment inside a proposal, features are uniformly sampled at a pair of scales and are input to a temporal convolutional neural network for classification. After ranking proposals, non-maximum suppression is applied and classification is performed to obtain final detections. TCN outperforms state-of-the-art methods on the ActivityNet dataset and the THUMOS14 dataset. | For object detection in images, proposals are a critical element for obtaining efficient and accurate detections @cite_30 @cite_23. Motivated by this approach, @cite_43 introduced action proposals, which extend object proposals to videos. For spatio-temporal localization of actions, multiple methods use spatio-temporal region proposals @cite_37 @cite_47 @cite_16 @cite_14. However, these methods are typically applied to datasets containing short videos, and hence the major focus has been on spatial localization rather than temporal localization. Moreover, spatio-temporal localization requires training data containing frame-level bounding box annotations. For many applications, simply labeling the action boundaries in the video is sufficient, which is a significantly less cumbersome annotation task. (Multi-scale temporal anchor placement is sketched after this record.) | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_14",
"@cite_43",
"@cite_23",
"@cite_47",
"@cite_16"
],
"mid": [
"2117539524",
"1923332106",
"1945129080",
"2018068650",
"",
"",
"2175354415"
],
"abstract": [
"The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.",
"We address the problem of action detection in videos. Driven by the latest progress in object detection from 2D images, we build action models using rich feature hierarchies derived from shape and kinematic cues. We incorporate appearance and motion in two ways. First, starting from image region proposals we select those that are motion salient and thus are more likely to contain the action. This leads to a significant reduction in the number of regions being processed and allows for faster computations. Second, we extract spatio-temporal feature representations to build strong classifiers using Convolutional Neural Networks. We link our predictions to produce detections consistent in time, which we call action tubes. We show that our approach outperforms other techniques in the task of action detection.",
"In this paper we target at generating generic action proposals in unconstrained videos. Each action proposal corresponds to a temporal series of spatial bounding boxes, i.e., a spatio-temporal video tube, which has a good potential to locate one human action. Assuming each action is performed by a human with meaningful motion, both appearance and motion cues are utilized to measure the actionness of the video tubes. After picking those spatiotemporal paths of high actionness scores, our action proposal generation is formulated as a maximum set coverage problem, where greedy search is performed to select a set of action proposals that can maximize the overall actionness score. Compared with existing action proposal approaches, our action proposals do not rely on video segmentation and can be generated in nearly real-time. Experimental results on two challenging datasets, MSRII and UCF 101, validate the superior performance of our action proposals as well as competitive results on action detection and search.",
"This paper considers the problem of action localization, where the objective is to determine when and where certain actions appear. We introduce a sampling strategy to produce 2D+t sequences of bounding boxes, called tubelets. Compared to state-of-the-art alternatives, this drastically reduces the number of hypotheses that are likely to include the action of interest. Our method is inspired by a recent technique introduced in the context of image localization. Beyond considering this technique for the first time for videos, we revisit this strategy for 2D+t sequences obtained from super-voxels. Our sampling strategy advantageously exploits a criterion that reflects how action related motion deviates from background motion. We demonstrate the interest of our approach by extensive experiments on two public datasets: UCF Sports and MSR-II. Our approach significantly outperforms the state-of-the-art on both datasets, while restricting the search of actions to a fraction of possible bounding box sequences.",
"",
"",
"This paper is on action localization in video with the aid of spatio-temporal proposals. To alleviate the computational expensive segmentation step of existing proposals, we propose bypassing the segmentations completely by generating proposals directly from the dense trajectories used to represent videos during classification. Our Action localization Proposals from dense Trajectories (APT) use an efficient proposal generation algorithm to handle the high number of trajectories in a video. Our spatio-temporal proposals are faster than current methods and outperform the localization and classification accuracy of current proposals on the UCF Sports, UCF 101, and MSR-II video datasets. Corrected version: we fixed a mistake in our UCF-101 ground truth. Numbers are different; conclusions are unchanged"
]
} |
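A hedged sketch of the multi-scale anchor placement the TCN abstract describes ("proposals placed at equal intervals spanning multiple temporal scales"); the stride and scale values here are illustrative assumptions, not the paper's settings:

```python
def place_proposals(num_frames, stride=16, scales=(64, 128, 256, 512)):
    """Place candidate (start, end) frame segments at equal intervals,
    each anchor position spanning several temporal scales, clipped to
    the video extent."""
    proposals = []
    for center in range(0, num_frames, stride):
        for s in scales:
            start, end = center - s // 2, center + s // 2
            proposals.append((max(0, start), min(num_frames, end)))
    return proposals

print(len(place_proposals(1000)))  # 252 anchors for a 1000-frame video
```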
1708.02349 | 2743691986 | We present a Temporal Context Network (TCN) for precise temporal localization of human activities. Similar to the Faster-RCNN architecture, proposals are placed at equal intervals in a video which span multiple temporal scales. We propose a novel representation for ranking these proposals. Since pooling features only inside a segment is not sufficient to predict activity boundaries, we construct a representation which explicitly captures context around a proposal for ranking it. For each temporal segment inside a proposal, features are uniformly sampled at a pair of scales and are input to a temporal convolutional neural network for classification. After ranking proposals, non-maximum suppression is applied and classification is performed to obtain final detections. TCN outperforms state-of-the-art methods on the ActivityNet dataset and the THUMOS14 dataset. | Methods using category-independent classifiers to obtain many segments in a long video are more closely related to our approach. For example, @cite_4 exploit three segment-based 3D ConvNets: a proposal network for identifying candidate clips that may contain actions, a classification network for learning a classification model, and a localization network for fine-tuning the learned classification network to localize each action instance. @cite_22 introduce Deep Action Proposals (DAPs) and use an LSTM to encode information in a fixed clip (512 frames) of a video. After encoding information in the video clip, the LSTM scores K (64) predefined start and end positions in that clip. The start and end positions are selected based on statistics of the video dataset. We show that our method performs better than global representations like LSTMs, which create a single feature representation for all scales in a video for localization of activities. In contemporary work, @cite_8 proposed a convolutional-de-convolutional (CDC) network by combining temporal upsampling and spatial downsampling for activity detection. Such an architecture helps in precise localization of activity boundaries. We show that the activity proposals generated by our method can further improve CDC's performance. (Temporal IoU and segment NMS, used when ranking such proposals, are sketched after this record.) | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_8"
],
"mid": [
"2394849137",
"2519328139",
"2953153458"
],
"abstract": [
"We address temporal action localization in untrimmed long videos. This is important because videos in real applications are usually unconstrained and contain multiple action instances plus video content of background scenes or other activities. To address this challenging issue, we exploit the effectiveness of deep networks in temporal action localization via three segment-based 3D ConvNets: (1) a proposal network identifies candidate segments in a long video that may contain actions; (2) a classification network learns one-vs-all action classification model to serve as initialization for the localization network; and (3) a localization network fine-tunes on the learned classification network to localize each action instance. We propose a novel loss function for the localization network to explicitly consider temporal overlap and therefore achieve high temporal localization accuracy. Only the proposal network and the localization network are used during prediction. On two large-scale benchmarks, our approach achieves significantly superior performances compared with other state-of-the-art systems: mAP increases from 1.7 to 7.4 on MEXaction2 and increases from 15.0 to 19.0 on THUMOS 2014, when the overlap threshold for evaluation is set to 0.5.",
"Object proposals have contributed significantly to recent advances in object understanding in images. Inspired by the success of this approach, we introduce Deep Action Proposals (DAPs), an effective and efficient algorithm for generating temporal action proposals from long videos. We show how to take advantage of the vast capacity of deep learning models and memory cells to retrieve from untrimmed videos temporal segments, which are likely to contain actions. A comprehensive evaluation indicates that our approach outperforms previous work on a large scale action benchmark, runs at 134 FPS making it practical for large-scale scenarios, and exhibits an appealing ability to generalize, i.e. to retrieve good quality temporal proposals of actions unseen in training.",
"Temporal action localization is an important yet challenging problem. Given a long, untrimmed video consisting of multiple action instances and complex background contents, we need not only to recognize their action categories, but also to localize the start time and end time of each instance. Many state-of-the-art systems use segment-level classifiers to select and rank proposal segments of pre-determined boundaries. However, a desirable model should move beyond segment-level and make dense predictions at a fine granularity in time to determine precise temporal boundaries. To this end, we design a novel Convolutional-De-Convolutional (CDC) network that places CDC filters on top of 3D ConvNets, which have been shown to be effective for abstracting action semantics but reduce the temporal length of the input data. The proposed CDC filter performs the required temporal upsampling and spatial downsampling operations simultaneously to predict actions at the frame-level granularity. It is unique in jointly modeling action semantics in space-time and fine-grained temporal dynamics. We train the CDC network in an end-to-end manner efficiently. Our model not only achieves superior performance in detecting actions in every frame, but also significantly boosts the precision of localizing temporal boundaries. Finally, the CDC network demonstrates a very high efficiency with the ability to process 500 frames per second on a single GPU server. We will update the camera-ready version and publish the source codes online soon."
]
} |
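Both DAPs-style ranking and TCN's pipeline end with non-maximum suppression over scored temporal segments. A minimal, self-contained version (standard greedy NMS over 1-D intervals, not any one paper's exact variant):

```python
def tiou(a, b):
    """Temporal IoU between two (start, end) segments."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def temporal_nms(segments, scores, thresh=0.5):
    """Greedily keep the highest-scoring segments, suppressing any
    candidate whose tIoU with an already-kept segment exceeds thresh."""
    order = sorted(range(len(segments)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(tiou(segments[i], segments[k]) < thresh for k in keep):
            keep.append(i)
    return keep

segs = [(10, 50), (12, 48), (60, 90)]
print(temporal_nms(segs, scores=[0.9, 0.8, 0.7]))  # [0, 2]: the near-duplicate is dropped
```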
1708.02349 | 2743691986 | We present a Temporal Context Network (TCN) for precise temporal localization of human activities. Similar to the Faster-RCNN architecture, proposals are placed at equal intervals in a video, spanning multiple temporal scales. We propose a novel representation for ranking these proposals. Since pooling features only inside a segment is not sufficient to predict activity boundaries, we construct a representation that explicitly captures context around a proposal for ranking it. For each temporal segment inside a proposal, features are uniformly sampled at a pair of scales and are input to a temporal convolutional neural network for classification. After ranking proposals, non-maximum suppression is applied and classification is performed to obtain final detections. TCN outperforms state-of-the-art methods on the ActivityNet dataset and the THUMOS14 dataset. | Context has been widely used in various computer vision algorithms. For example, it helps in tasks such as object detection @cite_2 , semantic segmentation @cite_36 , and referring expressions @cite_15 . In videos, it has been used for action and activity recognition @cite_31 @cite_27 . However, for temporal localization of activities, existing methods do not employ temporal context, which we show is critical for solving this problem. | {
"cite_N": [
"@cite_31",
"@cite_36",
"@cite_27",
"@cite_2",
"@cite_15"
],
"mid": [
"2233838193",
"2125215748",
"2152556536",
"1932624639",
"2949107813"
],
"abstract": [
"Activity recognition in video has recently benefited from the use of the context e.g., inter-relationships among the activities and objects. However, these approaches require data to be labeled and entirely available at the outset. In contrast, we formulate a continuous learning framework for context aware activity recognition from unlabeled video data which has two distinct advantages over most existing methods. First, we propose a novel active learning technique which not only exploits the informativeness of the individual activity instances but also utilizes their contextual information during the query selection process, this leads to significant reduction in expensive manual annotation effort. Second, the learned models can be adapted online as more data is available. We formulate a conditional random field (CRF) model that encodes the context and devise an information theoretic approach that utilizes entropy and mutual information of the nodes to compute the set of most informative query instances, which need to be labeled by a human. These labels are combined with graphical inference techniques for incrementally updating the model as new videos come in. Experiments on four challenging datasets demonstrate that our framework achieves superior performance with significantly less amount of manual labeling.",
"In this paper we study the role of context in existing state-of-the-art detection and segmentation approaches. Towards this goal, we label every pixel of PASCAL VOC 2010 detection challenge with a semantic category. We believe this data will provide plenty of challenges to the community, as it contains 520 additional classes for semantic segmentation and object detection. Our analysis shows that nearest neighbor based approaches perform poorly on semantic segmentation of contextual classes, showing the variability of PASCAL imagery. Furthermore, improvements of exist ing contextual models for detection is rather modest. In order to push forward the performance in this difficult scenario, we propose a novel deformable part-based model, which exploits both local context around each candidate detection as well as global context at the level of the scene. We show that this contextual reasoning significantly helps in detecting objects at all scales.",
"We first propose a new spatio-temporal context distribution feature of interest points for human action recognition. Each action video is expressed as a set of relative XYT coordinates between pairwise interest points in a local region. We learn a global GMM (referred to as Universal Background Model, UBM) using the relative coordinate features from all the training videos, and then represent each video as the normalized parameters of a video-specific GMM adapted from the global GMM. In order to capture the spatio-temporal relationships at different levels, multiple GMMs are utilized to describe the context distributions of interest points over multi-scale local regions. To describe the appearance information of an action video, we also propose to use GMM to characterize the distribution of local appearance features from the cuboids centered around the interest points. Accordingly, an action video can be represented by two types of distribution features: 1) multiple GMM distributions of spatio-temporal context; 2) GMM distribution of local video appearance. To effectively fuse these two types of heterogeneous and complementary distribution features, we additionally propose a new learning algorithm, called Multiple Kernel Learning with Augmented Features (AFMKL), to learn an adapted classifier based on multiple kernels and the pre-learned classifiers of other action classes. Extensive experiments on KTH, multi-view IXMAS and complex UCF sports datasets demonstrate that our method generally achieves higher recognition accuracy than other state-of-the-art methods.",
"We propose an object detection system that relies on a multi-region deep convolutional neural network (CNN) that also encodes semantic segmentation-aware features. The resulting CNN-based representation aims at capturing a diverse set of discriminative appearance factors and exhibits localization sensitivity that is essential for accurate object localization. We exploit the above properties of our recognition module by integrating it on an iterative localization mechanism that alternates between scoring a box proposal and refining its location with a deep CNN regression model. Thanks to the efficient use of our modules, we detect objects with very high localization accuracy. On the detection challenges of PASCAL VOC2007 and PASCAL VOC2012 we achieve mAP of 78.2 and 73.9 correspondingly, surpassing any other published work by a significant margin.",
"Humans refer to objects in their environments all the time, especially in dialogue with other people. We explore generating and comprehending natural language referring expressions for objects in images. In particular, we focus on incorporating better measures of visual context into referring expression models and find that visual comparison to other objects within an image helps improve performance significantly. We also develop methods to tie the language generation process together, so that we generate expressions for all objects of a particular category jointly. Evaluation on three recent datasets - RefCOCO, RefCOCO+, and RefCOCOg, shows the advantages of our methods for both referring expression generation and comprehension."
]
} |
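The TCN pipeline described above ranks multi-scale proposals and then applies non-maximum suppression before the final classification. A minimal greedy sketch of that temporal NMS step, reusing the `temporal_iou` helper sketched earlier (the 0.6 threshold and all identifiers are assumptions, not TCN's actual settings):

```python
def temporal_nms(segments, scores, iou_thresh=0.6):
    """Greedy 1-D non-maximum suppression over scored temporal proposals.

    segments : list of (start, end) tuples; scores : matching list of floats.
    Returns indices of kept proposals, highest-scoring first. Sketch only;
    TCN's exact threshold and tie-breaking are not specified here.
    """
    order = sorted(range(len(segments)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(temporal_iou(segments[i], segments[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep
```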
1904.02605 | 2934448960 | We present a new color photometric stereo (CPS) method that can recover high-quality, detailed 3D face geometry in a single shot. Our system uses three uncalibrated near point lights of different colors and a single camera. We first utilize a 3D morphable model (3DMM) and semantic segmentation of facial parts to achieve robust self-calibration of light sources. We then address the spectral ambiguity problem by incorporating albedo consensus, albedo similarity, and proxy prior into a unified framework. We avoid the need for spatial constancy of albedo and use a new measure for albedo similarity that is based on the albedo norm profile. Experiments show that our new approach produces state-of-the-art results from a single image with high-fidelity geometry that includes details such as wrinkles. | Structured light @cite_9 @cite_54 and multi-view stereo @cite_2 have been used to reconstruct faces. While they can accurately reconstruct coarse shapes, they are less successful in recovering high-frequency details such as wrinkles. On the other hand, photometric stereo @cite_10 is capable of extracting such high-frequency details. Techniques that combine stereo and photometric stereo exist @cite_23 @cite_47 @cite_11 , but they come at the expense of a complicated hardware setup. | {
"cite_N": [
"@cite_9",
"@cite_54",
"@cite_23",
"@cite_2",
"@cite_47",
"@cite_10",
"@cite_11"
],
"mid": [
"2106715340",
"2002896099",
"2100810849",
"2120534609",
"2016462422",
"1975089519",
"2203784528"
],
"abstract": [
"We present an end-to-end system that goes from video sequences to high resolution, editable, dynamically controllable face models. The capture system employs synchronized video cameras and structured light projectors to record videos of a moving face from multiple viewpoints. A novel spacetime stereo algorithm is introduced to compute depth maps accurately and overcome over-fitting deficiencies in prior work. A new template fitting and tracking procedure fills in missing data and yields point correspondence across the entire sequence without using markers. We demonstrate a data-driven, interactive method for inverse kinematics that draws on the large set of fitted templates and allows for posing new expressions by dragging surface points directly. Finally, we describe new tools that model the dynamics in the input sequence to enable new animations, created via key-framing or texture-synthesis techniques.",
"We describe a high-resolution, real-time 3-D shape measurement system based on a digital fringe projection and phase-shifting technique. It utilizes a single-chip digital light processing projector to project computer-generated fringe patterns onto the object, and a high-speed CCD camera synchronized with the projector to acquire the fringe images at a frame rate of 120 frames s. A color CCD camera is also used to capture images for texture mapping. Based on a three-step phase-shifting technique, each frame of the 3-D shape is reconstructed using three consecutive fringe images. Therefore the 3-D data acquisition speed of the system is 40 frames s. With this system, together with the fast three-step phase-shifting algorithm and parallel processing software we developed, high-resolution, real-time 3-D shape measurement is realized at a frame rate of up to 40 frames s and a resolution of 532×500 points per frame.",
"We estimate surface normal maps of an object from either its diffuse or specular reflectance using four spherical gradient illumination patterns. In contrast to traditional photometric stereo, the spherical patterns allow normals to be estimated simultaneously from any number of viewpoints. We present two polarized lighting techniques that allow the diffuse and specular normal maps of an object to be measured independently. For scattering materials, we show that the specular normal maps yield the best record of detailed surface shape while the diffuse normals deviate from the true surface normal due to subsurface scattering, and that this effect is dependent on wavelength. We show several applications of this acquisition technique. First, we capture normal maps of a facial performance simultaneously from several viewing positions using time-multiplexed illumination. Second, we show that highresolution normal maps based on the specular component can be used with structured light 3D scanning to quickly acquire high-resolution facial surface geometry using off-the-shelf digital still cameras. Finally, we present a realtime shading model that uses independently estimated normal maps for the specular and diffuse color channels to reproduce some of the perceptually important effects of subsurface scattering.",
"This paper proposes a novel approach to motion capture from multiple, synchronized video streams, specifically aimed at recording dense and accurate models of the structure and motion of highly deformable surfaces such as skin, that stretches, shrinks, and shears in the midst of normal facial expressions. Solving this problem is a key step toward effective performance capture for the entertainment industry, but progress so far has been hampered by the lack of appropriate local motion and smoothness models. The main technical contribution of this paper is a novel approach to regularization adapted to nonrigid tangential deformations. Concretely, we estimate the nonrigid deformation parameters at each vertex of a surface mesh, smooth them over a local neighborhood for robustness, and use them to regularize the tangential motion estimation. To demonstrate the power of the proposed approach, we have integrated it into our previous work for markerless motion capture [9], and compared the performances of the original and new algorithms on three extremely challenging face datasets that include highly nonrigid skin deformations, wrinkles, and quickly changing expressions. Additional experiments with a dataset featuring fast-moving cloth with complex and evolving fold structures demonstrate that the adaptability of the proposed regularization scheme to nonrigid tangential motion does not hamper its robustness, since it successfully recovers the shape and motion of the cloth without overfitting it despite the absence of stretch or shear in this case.",
"We present a novel process for acquiring detailed facial geometry with high resolution diffuse and specular photometric information from multiple viewpoints using polarized spherical gradient illumination. Key to our method is a new pair of linearly polarized lighting patterns which enables multiview diffuse-specular separation under a given spherical illumination condition from just two photographs. The patterns -- one following lines of latitude and one following lines of longitude -- allow the use of fixed linear polarizers in front of the cameras, enabling more efficient acquisition of diffuse and specular albedo and normal maps from multiple viewpoints. In a second step, we employ these albedo and normal maps as input to a novel multi-resolution adaptive domain message passing stereo reconstruction algorithm to create high resolution facial geometry. To do this, we formulate the stereo reconstruction from multiple cameras in a commonly parameterized domain for multiview reconstruction. We show competitive results consisting of high-resolution facial geometry with relightable reflectance maps using five DSLR cameras. Our technique scales well for multiview acquisition without requiring specialized camera systems for sensing multiple polarization states.",
"A novel technique called photometric stereo is introduced. The idea of photometric stereo is to vary the direction of incident illumination between successive images, while holding the viewing direction constant. It is shown that this provides sufficient information to determine surface orientation at each image point. Since the imaging geometry is not changed, the correspondence between image points is known a priori. The technique is photometric because it uses the radiance values recorded at a single image location, in successive views, rather than the relative positions of displaced features. Photometric stereo is used in computer-based image understanding. It can be applied in two ways. First, it is a general technique for deter-mining surface orientation at each image point. Second, it is a technique for determining object points that have a particular surface orientation. These applications are illustrated using synthesized examples.",
"Photometric stereo (PS) is an established technique for high-detail reconstruction of 3D geometry and appearance. To correct for surface integration errors, PS is often combined with multiview stereo (MVS). With dynamic objects, PS reconstruction also faces the problem of computing optical flow (OF) for image alignment under rapid changes in illumination. Current PS methods typically compute optical flow and MVS as independent stages, each one with its own limitations and errors introduced by early regularization. In contrast, scene flow methods estimate geometry and motion, but lack the fine detail from PS. This paper proposes photogeometric scene flow (PGSF) for high-quality dynamic 3D reconstruction. PGSF performs PS, OF, and MVS simultaneously. It is based on two key observations: (i) while image alignment improves PS, PS allows for surfaces to be relit to improve alignment, (ii) PS provides surface gradients that render the smoothness term in MVS unnecessary, leading to truly data-driven, continuous depth estimates. This synergy is demonstrated in the quality of the resulting RGB appearance, 3D geometry, and 3D motion."
]
} |
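Since everything in this record builds on photometric stereo (@cite_10 above), it may help to spell out the classic Lambertian three-light case: pixel intensities are linear in the albedo-scaled normal, so a single 3x3 solve per pixel recovers both. A minimal NumPy sketch under the usual assumptions (calibrated directional lights, no shadows or specularities); all names are ours, not the cited papers':

```python
import numpy as np

def photometric_stereo_pixel(I, L):
    """Classic Lambertian photometric stereo for one pixel.

    I : (3,) intensities observed under three directional lights.
    L : (3, 3) matrix whose rows are unit light directions.
    Model: I = L @ (albedo * n); returns (albedo, unit normal).
    """
    b = np.linalg.solve(L, np.asarray(I, dtype=float))  # b = albedo * n
    albedo = np.linalg.norm(b)
    return albedo, b / albedo
```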
1904.02605 | 2934448960 | We present a new color photometric stereo (CPS) method that can recover high-quality, detailed 3D face geometry in a single shot. Our system uses three uncalibrated near point lights of different colors and a single camera. We first utilize a 3D morphable model (3DMM) and semantic segmentation of facial parts to achieve robust self-calibration of light sources. We then address the spectral ambiguity problem by incorporating albedo consensus, albedo similarity, and proxy prior into a unified framework. We avoid the need for spatial constancy of albedo and use a new measure for albedo similarity that is based on the albedo norm profile. Experiments show that our new approach produces state-of-the-art results from a single image with high-fidelity geometry that includes details such as wrinkles. | CPS has the key benefit of acquiring only one image and hence can be directly used to reconstruct dynamic objects. Most existing approaches use red, green, and blue lights along with a color camera @cite_32 @cite_42 @cite_31 . Hernández @cite_8 apply the technique to dynamic cloth reconstruction, where they use a planar board with a cloth sample fixed in the center to calibrate the coupled matrix containing reflectance, camera response, lighting spectrum, and lighting directions. Vogiatzis and Hernández @cite_55 first construct a coarse 3D face using structure from motion and then impose a constant chromaticity constraint for shape refinement. Klaudiny @cite_22 use a specular sphere to estimate lighting directions. To ensure constant chromaticity, they apply uniform make-up to faces. Bringier @cite_43 explicitly calibrate the spectral response of the camera and assume a gray or known uniform color. | {
"cite_N": [
"@cite_22",
"@cite_8",
"@cite_55",
"@cite_42",
"@cite_32",
"@cite_43",
"@cite_31"
],
"mid": [
"",
"2148151066",
"2004734153",
"1973635906",
"2157441059",
"2077066844",
"2045286529"
],
"abstract": [
"",
"We present an algorithm and the associated capture methodology to acquire and track the detailed 3D shape, bends, and wrinkles of deforming surfaces. Moving 3D data has been difficult to obtain by methods that rely on known surface features, structured light, or silhouettes. Multispec- tral photometric stereo is an attractive alternative because it can recover a dense normal field from an un-textured surface. We show how to capture such data and register it over time to generate a single deforming surface. Experiments were performed on video sequences of un- textured cloth, filmed under spatially separated red, green, and blue light sources. Our first finding is that using zero- depth-silhouettes as the initial boundary condition already produces rather smoothly varying per-frame reconstructions with high detail. Second, when these 3D reconstructions are augmented with 2D optical flow, one can register the first frame's reconstruction to every subsequent frame.",
"This paper addresses the problem of obtaining 3d detailed reconstructions of human faces in real-time and with inexpensive hardware. We present an algorithm based on a monocular multi-spectral photometric-stereo setup. This system is known to capture high-detailed deforming 3d surfaces at high frame rates and without having to use any expensive hardware or synchronized light stage. However, the main challenge of such a setup is the calibration stage, which depends on the lights setup and how they interact with the specific material being captured, in this case, human faces. For this purpose we develop a self-calibration technique where the person being captured is asked to perform a rigid motion in front of the camera, maintaining a neutral expression. Rigidity constrains are then used to compute the head's motion with a structure-from-motion algorithm. Once the motion is obtained, a multi-view stereo algorithm reconstructs a coarse 3d model of the face. This coarse model is then used to estimate the lighting parameters with a stratified approach: In the first step we use a RANSAC search to identify purely diffuse points on the face and to simultaneously estimate this diffuse reflectance model. In the second step we apply non-linear optimization to fit a non-Lambertian reflectance model to the outliers of the previous step. The calibration procedure is validated with synthetic and real data.",
"We propose a method for shape reconstruction from color shades produced by multiple chromatic light sources. The linear relation between surface-normal vectors and three-dimensional response vectors for a uniformly colored and illuminated region of a surface can be reconstructed in two steps. In the first step a quadratic form of metric in response space induced from a natural metric in normal space is reconstructed. At this stage proper image segmentation can be obtained. In the second step an exact mapping from response space into the space of surface normals is reconstructed. The matrix for this mapping is one of the square roots of the quadratic-form matrix that satisfies the integrability constraint. The method is in all respects much simpler than existing methods for solving the depth-from-shading task for monochromatic images.",
"When a Lambertian surface is illuminated by several chromatic lights the surface normals may be recovered from a single color image. A robust regression is used to find the ellipsoid in color space on which at least half the pixels lie. Then the matrix giving the linear relationship between the color and the surface normal, for non-outlier points is found as a root of the ellipsoid quadratic form. But this root is recovered only up to an arbitrary rotation. An integrability condition can be used to determine the correct rotation. The rotation of recovered surface normals is needed to align partial derivatives p and q with the camera plane and thus establish the object's attitude. Here a new smoothness condition approximating the integrability condition is introduced that allows one to solve for the rotation matrix in closed form. >",
"Textured surface analysis is essential for many applications. We present a three-dimensional recovery approach for real textured surfaces based on photometric stereo. The aim is to be able to measure the textured surfaces with a high degree of accuracy. For this, we use a color digital sensor and principles of color photometric stereo. This method uses a single color image, instead of a sequence of gray-scale images, to recover the surface of the three dimensions. It can thus be integrated into dynamic systems where there is significant relative motion between the object and the camera. To evaluate the performance of our method, we compare it on real textured surfaces to traditional photometric stereo using three images. We thus show that it is possible to have similar results with just one color image.",
"The photometric-stereo method is one technique for three-dimensional shape determination that has been implemented in a variety of experimental settings and that has produced consistently good results. The idea is to use intensity values recorded from multiple images obtained from the same viewpoint but under different conditions of illumination. The resulting radiometric constraint makes it possible to obtain local estimates of both surface orientation and surface curvature without requiring either global smoothness assumptions or prior image segmentation. Photometric stereo is moved one step closer to practical possibility by a description of an experimental setting in which surface gradient estimation is achieved on full-frame video data at near-video-frame rates (i.e., 15 Hz). The implementation uses commercially available hardware. Reflectance is modeled empirically with measurements obtained from a calibration sphere. Estimation of the gradient (p, q) requires only simple table lookup. Curvature estimation additionally uses the reflectance map R(p, q). The required lookup table and reflectance maps are derived during calibration. Because reflectance is modeled empirically, no prior physical model of the reflectance characteristics of the objects to be analyzed is assumed. At the same time, if a good physical model is available, it can be retrofitted to the method for implementation purposes. Photometric stereo is subject to error in the presence of cast shadows and interreflection. No purely local technique can succeed because these phenomena are inherently nonlocal. Nevertheless, it is demonstrated that one can exploit the redundancy in three-light-source photometric stereo to detect locally, in most cases, the presence of cast shadows and interreflection. Detection is facilitated by the explicit inclusion of a local confidence estimate in the lookup table used for gradient estimation."
]
} |
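The "coupled matrix" that Hernández et al. calibrate with a cloth sample has a compact schematic form. With three spectrally distinct lights and a Lambertian surface of fixed chromaticity, the RGB measurement at a pixel is linear in the surface normal; in generic notation (ours, not any single cited paper's):

```latex
% Schematic single-shot color photometric stereo model: c stacks the RGB
% channels at one pixel, n is the unit surface normal, l_k is the
% direction of light k, and v_k folds together that light's spectrum,
% the camera's spectral response, and the (assumed constant) albedo.
\mathbf{c} = M\,\mathbf{n}, \qquad M = \sum_{k=1}^{3} \mathbf{v}_k\,\mathbf{l}_k^{\top}
```

Once M is calibrated, each pixel yields a normal from one 3x3 solve, which is what makes these methods single-shot; the spectral ambiguity discussed in the main abstract arises precisely because M depends on the unknown albedo.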
1904.02605 | 2934448960 | We present a new color photometric stereo (CPS) method that can recover high-quality, detailed 3D face geometry in a single shot. Our system uses three uncalibrated near point lights of different colors and a single camera. We first utilize a 3D morphable model (3DMM) and semantic segmentation of facial parts to achieve robust self-calibration of light sources. We then address the spectral ambiguity problem by incorporating albedo consensus, albedo similarity, and proxy prior into a unified framework. We avoid the need for spatial constancy of albedo and use a new measure for albedo similarity that is based on the albedo norm profile. Experiments show that our new approach produces state-of-the-art results from a single image with high-fidelity geometry that includes details such as wrinkles. | To eliminate the need for constant chromaticity, there are methods @cite_19 @cite_33 that combine spectral and time-multiplexing; optical flow is then used to align adjacent frames. Jankó @cite_18 make use of the temporal constancy of surface reflectance to eliminate the need for time-multiplexing but still require the use of an image sequence. Gotardo @cite_11 simultaneously solve for color photometric stereo, optical flow, and stereo matching within each 3-frame time window but require using nine color lights. Rahman @cite_36 arrange complementary color lights on a ring, but their approach requires using two images under complementary illuminations as input. Anderson @cite_51 assume piecewise constant chromaticity by segmenting a scene into different chromaticities. To calibrate chromaticities, they also require a stereo camera pair to obtain coarse geometry. | {
"cite_N": [
"@cite_18",
"@cite_33",
"@cite_36",
"@cite_19",
"@cite_51",
"@cite_11"
],
"mid": [
"1579055455",
"1524170530",
"2403721818",
"2094974006",
"2147191585",
"2203784528"
],
"abstract": [
"In this paper we present a novel method to apply photometric stereo on textured dynamic surfaces. We aim at exploiting the high accuracy of photometric stereo and reconstruct local surface orientation from illumination changes. The main difficulty derives from the fact that photometric stereo requires varying illumination while the object remains still, which makes it quite impractical to use for dynamic surfaces. Using coloured lights gives a clear solution to this problem; however, the system of equations is still ill-posed and it is ambiguous whether the change of an observed surface colour is due to the change of the surface gradient or of the surface reflectance. In order to separate surface orientation from reflectance, our method tracks texture changes over time and exploits surface reflectance's temporal constancy. This additional constraint allows us to reformulate the problem as an energy functional minimisation, solved by a standard quasi-Newton method. Our method is tested both on real and synthetic data, quantitatively evaluated and compared to a state-of-the-art method.",
"We present a photometric stereo method for non-rigid objects of unknown and spatially varying materials. The prior art uses time-multiplexed illumination but assumes constant surface normals across several frames, fundamentally limiting the accuracy of the estimated normals. We explicitly account for time-varying surface orientations, and show that for unknown Lambertian materials, five images are sufficient to recover surface orientation in one frame. Our optimized system implementation exploits the physical properties of typical cameras and LEDs to reduce the required number of images to just three, and also facilitates frame-to-frame image alignment using standard optical flow methods, despite varying illumination. We demonstrate the system's performance by computing surface orientations for several different moving, deforming objects.",
"This paper presents a novel approach for recovering the shape of non-Lambertian, multicolored objects using two input images. We show that a ring light source with complementary colored lights has the potential to be effectively utilized for this purpose. Under this lighting, the brightness of an object surface varies with respect to different reflections. Therefore, analyzing how brightness is modulated by illumination color gives us distinct cues to recover shape. Moreover, the use of complementary colored illumination enables the color photometric stereo to be applicable to multicolored surfaces. Here, we propose a color correction method based on the addition principle of complementary colors to remove the effect of illumination from the observed color. This allows the inclusion of surfaces with any number of chromaticities. Therefore, our method offers significant advantages over previous methods, which often assume constant object albedo and Lambertian reflectance. To the best of our knowledge, this is the first attempt to employ complementary colors on a ring light source to compute shape while considering both non-Lambertian reflection and spatially varying albedo. To show the efficacy of our method, we present results on synthetic and real world images and compare against photometric stereo methods elsewhere in the literature.",
"Many vision and graphics problems such as relighting, structured light scanning and photometric stereo, need images of a scene under a number of different illumination conditions. It is typically assumed that the scene is static. To extend such methods to dynamic scenes, dense optical flow can be used to register adjacent frames. This registration becomes inaccurate if the frame rate is too low with respect to the degree of movement in the scenes. We present a general method that extends time multiplexing with color multiplexing in order to better handle dynamic scenes. Our method allows for packing more illumination information into a single frame, thereby reducing the number of required frames over which optical flow must be computed. Moreover, color-multiplexed frames lend themselves better to reliably computing optical flow. We show that our method produces better results compared to time-multiplexing alone. We demonstrate its application to relighting, structured light scanning and photometric stereo in dynamic scenes.",
"We present a multispectral photometric stereo method for capturing geometry of deforming surfaces. A novel photometric calibration technique allows calibration of scenes containing multiple piecewise constant chromaticities. This method estimates per-pixel photometric properties, then uses a RANSAC-based approach to estimate the dominant chromaticities in the scene. A likelihood term is developed linking surface normal, image intensity and photometric properties, which allows estimating the number of chromaticities present in a scene to be framed as a model estimation problem. The Bayesian Information Criterion is applied to automatically estimate the number of chromaticities present during calibration. A two-camera stereo system provides low resolution geometry, allowing the likelihood term to be used in segmenting new images into regions of constant chromaticity. This segmentation is carried out in a Markov Random Field framework and allows the correct photometric properties to be used at each pixel to estimate a dense normal map. Results are shown on several challenging real-world sequences, demonstrating state-of-the-art results using only two cameras and three light sources. Quantitative evaluation is provided against synthetic ground truth data.",
"Photometric stereo (PS) is an established technique for high-detail reconstruction of 3D geometry and appearance. To correct for surface integration errors, PS is often combined with multiview stereo (MVS). With dynamic objects, PS reconstruction also faces the problem of computing optical flow (OF) for image alignment under rapid changes in illumination. Current PS methods typically compute optical flow and MVS as independent stages, each one with its own limitations and errors introduced by early regularization. In contrast, scene flow methods estimate geometry and motion, but lack the fine detail from PS. This paper proposes photogeometric scene flow (PGSF) for high-quality dynamic 3D reconstruction. PGSF performs PS, OF, and MVS simultaneously. It is based on two key observations: (i) while image alignment improves PS, PS allows for surfaces to be relit to improve alignment, (ii) PS provides surface gradients that render the smoothness term in MVS unnecessary, leading to truly data-driven, continuous depth estimates. This synergy is demonstrated in the quality of the resulting RGB appearance, 3D geometry, and 3D motion."
]
} |
1904.02605 | 2934448960 | We present a new color photometric stereo (CPS) method that can recover high-quality, detailed 3D face geometry in a single shot. Our system uses three uncalibrated near point lights of different colors and a single camera. We first utilize a 3D morphable model (3DMM) and semantic segmentation of facial parts to achieve robust self-calibration of light sources. We then address the spectral ambiguity problem by incorporating albedo consensus, albedo similarity, and proxy prior into a unified framework. We avoid the need for spatial constancy of albedo and use a new measure for albedo similarity that is based on the albedo norm profile. Experiments show that our new approach produces state-of-the-art results from a single image with high-fidelity geometry that includes details such as wrinkles. | Fyffe @cite_63 extend the usual three color channels to six by using two RGB cameras and a pair of Dolby dichroic filters. An extension of their work @cite_14 employs polarized color gradient illumination but requires a highly complex setup with 2040 LED light sources. Chakrabarti and Sunkavalli @cite_4 observe that the reflectance and normal within a uniform color region can be uniquely recovered from a spectrally demultiplexed image by assuming piecewise constant albedo. Ozawa @cite_0 densely discretize albedo chromaticity and utilize consensus on albedo norms to reconstruct objects with spatially varying albedo. Most of these approaches assume directional lighting and require pre-calibrating the lights. It is possible to use near light sources @cite_48 , but such methods still require pre-calibration. In contrast, our technique assumes unknown light positions and spatially varying albedo. The former enables more practical capture and the latter reflects the physical properties of real faces. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_48",
"@cite_0",
"@cite_63"
],
"mid": [
"1890563702",
"2520788639",
"1644979020",
"2800508542",
"2175612008"
],
"abstract": [
"We present a method for acquiring the per-pixel diffuse albedo, specular albedo, and surface normal maps of a subject at a single instant in time. The method is single-shot, requiring no optical flow, and per-pixel, making no assumptions regarding albedo statistics or surface connectivity. We photograph the subject inside a spherical illumination device emitting a static lighting pattern of vertically polarized RGB color gradients aligned with the XYZ axes, and horizontally polarized RGB color gradients inversely aligned with the XYZ axes. We capture simultaneous photographs using one of two possible setups: a single-view setup using a coaxially aligned camera pair with a polarizing beam splitter, and a multi-view stereo setup with different orientations of linear polarizing filters placed on the cameras, enabling high- quality geometry reconstruction. From this lighting we derive full-color diffuse albedo, single-channel specular albedo suitable for dielectric materials, and polarization-preserving surface normals which are free of corruption from subsurface scattering. We provide simple formulae to estimate the diffuse albedo, specular albedo, and surface normal maps in the single-view and multi-view cases and show error bounds which are small for many common subjects including faces.",
"We present a single-shot system to recover surface geometry of objects with spatially-varying albedos, from images captured under a calibrated RGB photometric stereo setup—with three light directions multiplexed across different color channels in the observed RGB image. Since the problem is ill-posed point-wise, we assume that the albedo map can be modeled as piece-wise constant with a restricted number of distinct albedo values. We show that under ideal conditions, the shape of a non-degenerate local constant albedo surface patch can theoretically be recovered exactly. Moreover, we present a practical and efficient algorithm that uses this model to robustly recover shape from real images. Our method first reasons about shape locally in a dense set of patches in the observed image, producing shape distributions for every patch. These local distributions are then combined to produce a single consistent surface normal map. We demonstrate the efficacy of the approach through experiments on both synthetic renderings as well as real captured images.",
"In this paper we present the first solution to 3D reconstruction in monocular laparoscopy using methods based on Photometric Stereo (PS). Our main contributions are to provide the new theory and practical solutions to successfully apply PS in close-range imaging conditions. We are specifically motivated by a solution with minimal hardware modification to existing laparoscopes. In fact the only physical modification we make is to adjust the colour of the laparoscope's illumination via three colour filters placed at its tip. Once calibrated, our approach can compute 3D from a single image, does not require correspondence estimation, and computes absolute depth densely. We demonstrate the potential of our approach with ground truth ex-vivo and in-vivo experimentation.",
"Abstract We present a photometric stereo method that requires only a single color image. Conventional color photometric stereo methods for single color images cannot deal with multi-colored surfaces, since a color observation at a surface point is insufficient for determining the reflectances and the surface normal at that point. We exploit the global information of surface color and geometry by introducing a surface-color feature that enables classification of a surface into regions of the same color and simultaneously estimate surface normals. The surface-color feature, being invariant in geometry, qualifies the spatial distribution of the square norm of RGB reflectances and attributes surface points of a reflectance norm to the correct color. We discuss the theoretical validity of our surface classification and present a practical algorithm for multi-colored surface recovery. Although some classification ambiguities remain in principle, we show that they can be resolved under a smoothness constraint on the surface geometry. We evaluated the accuracy of our method through simulations and we demonstrated its effectiveness on real scenes.",
"Spectral multiplexing allows multiple channels of information to be captured simultaneously, using readily available color cameras. Information may be multiplexed across the color channels of a camera by use of colored lights (e.g. [Woodham 1980; Hernandez and Vogiatzis 2010]) or colored filters (e.g. [ 2008]). We propose a novel method for single-shot photometric stereo by spectral multiplexing. The output of our method is a simultaneous per-pixel estimate of the surface normal and full-color reflectance. Our method is well suited to materials with varying color and texture, requires no time-varying illumination, and no high-speed cameras. Being a single-shot method, it may be applied to dynamic scenes without any need for optical flow. Our key contributions are a generalization of three-color photometric stereo to multiple (more than three) color channels, and the design of a practical six-color-channel system using off-the-shelf parts only."
]
} |
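To make the "consensus on albedo norms" idea from this paragraph concrete, here is a heavily hedged Python sketch in the spirit of @cite_0 (the `M_of` callback, the coefficient-of-variation criterion, and all names are assumptions, not the cited paper's algorithm): for each discretized candidate chromaticity, invert the per-pixel color equation and keep the candidate whose implied albedo norms agree best across a region.

```python
import numpy as np

def pick_chromaticity(C, M_of, candidates):
    """Schematic albedo-chromaticity consensus (names/criterion assumed).

    C          : (P, 3) RGB observations for P pixels of one region.
    M_of(rho)  : hypothetical callback giving the 3x3 mixing matrix
                 implied by a unit albedo chromaticity rho.
    candidates : iterable of candidate chromaticities (3-vectors).
    """
    best_rho, best_spread = None, np.inf
    for rho in candidates:
        B = C @ np.linalg.inv(M_of(rho)).T        # rows ~ albedo_norm * normal
        norms = np.linalg.norm(B, axis=1)
        spread = norms.std() / (norms.mean() + 1e-8)  # consensus measure
        if spread < best_spread:
            best_rho, best_spread = rho, spread
    return best_rho
```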
1904.02605 | 2934448960 | We present a new color photometric stereo (CPS) method that can recover high-quality, detailed 3D face geometry in a single shot. Our system uses three uncalibrated near point lights of different colors and a single camera. We first utilize a 3D morphable model (3DMM) and semantic segmentation of facial parts to achieve robust self-calibration of light sources. We then address the spectral ambiguity problem by incorporating albedo consensus, albedo similarity, and proxy prior into a unified framework. We avoid the need for spatial constancy of albedo and use a new measure for albedo similarity that is based on the albedo norm profile. Experiments show that our new approach produces state-of-the-art results from a single image with high-fidelity geometry that includes details such as wrinkles. | There are methods for inferring face geometry from a single unconstrained image. We refer readers to @cite_58 for an overview of state-of-the-art methods. However, their accuracy does not match that of multi-view stereo and photometric stereo. Piotraschke and Blanz @cite_62 demonstrate the usefulness of semantic segmentation for improving reconstruction quality. In our work, we use the 3D morphable model @cite_60 to obtain an initial proxy face for light source calibration. | {
"cite_N": [
"@cite_60",
"@cite_58",
"@cite_62"
],
"mid": [
"2107037917",
"2806379360",
"2464650832"
],
"abstract": [
"Generative 3D face models are a powerful tool in computer vision. They provide pose and illumination invariance by modeling the space of 3D faces and the imaging process. The power of these models comes at the cost of an expensive and tedious construction process, which has led the community to focus on more easily constructed but less powerful models. With this paper we publish a generative 3D shape and texture model, the Basel Face Model (BFM), and demonstrate its application to several face recognition task. We improve on previous models by offering higher shape and texture accuracy due to a better scanning device and less correspondence artifacts due to an improved registration algorithm. The same 3D face model can be fit to 2D or 3D images acquired under different situations and with different sensors using an analysis by synthesis method. The resulting model parameters separate pose, lighting, imaging and identity parameters, which facilitates invariant face recognition across sensors and data sets by comparing only the identity parameters. We hope that the availability of this registered face model will spur research in generative models. Together with the model we publish a set of detailed recognition and reconstruction results on standard databases to allow complete algorithm comparisons.",
"",
"Automated 3D reconstruction of faces from images is challenging if the image material is difficult in terms of pose, lighting, occlusions and facial expressions, and if the initial 2D feature positions are inaccurate or unreliable. We propose a method that reconstructs individual 3D shapes from multiple single images of one person, judges their quality and then combines the best of all results. This is done separately for different regions of the face. The core element of this algorithm and the focus of our paper is a quality measure that judges a reconstruction without information about the true shape. We evaluate different quality measures, develop a method for combining results, and present a complete processing pipeline for automated reconstruction."
]
} |
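The 3D morphable model @cite_60 used here for the initial proxy face is, at its core, a linear statistical model over registered face meshes: a shape (and analogously a texture) is the dataset mean plus a weighted sum of principal components. In generic notation (symbols ours):

```latex
% Linear 3DMM: s stacks the 3D vertex coordinates of a face, \bar{s} is
% the mean shape, u_i are shape principal components, and the \alpha_i
% are the low-dimensional coefficients fitted to the input image
% (texture t, components w_i, and coefficients \beta_i are analogous).
s = \bar{s} + \sum_{i=1}^{m} \alpha_i\,\mathbf{u}_i, \qquad
t = \bar{t} + \sum_{i=1}^{m} \beta_i\,\mathbf{w}_i
```

Fitting the coefficients to a single image yields the coarse proxy face from which the light sources can then be self-calibrated.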
1904.02605 | 2934448960 | We present a new color photometric stereo (CPS) method that can recover high-quality, detailed 3D face geometry in a single shot. Our system uses three uncalibrated near point lights of different colors and a single camera. We first utilize a 3D morphable model (3DMM) and semantic segmentation of facial parts to achieve robust self-calibration of light sources. We then address the spectral ambiguity problem by incorporating albedo consensus, albedo similarity, and proxy prior into a unified framework. We avoid the need for spatial constancy of albedo and use a new measure for albedo similarity that is based on the albedo norm profile. Experiments show that our new approach produces state-of-the-art results from a single image with high-fidelity geometry that includes details such as wrinkles. | Shape-from-shading and deep-learning-based approaches have also been adopted to recover details @cite_5 @cite_1 @cite_41 @cite_13 @cite_6 @cite_16 @cite_38 . Jiang @cite_61 combine local corrective deformation fields with photometric consistency constraints. Yamaguchi @cite_44 use a large corpus of high-fidelity face captures from the USC Light Stage @cite_47 to learn the mapping from texture to a highly detailed displacement map. These solutions can provide visually pleasing results, but their accuracy depends heavily on the illumination. | {
"cite_N": [
"@cite_61",
"@cite_38",
"@cite_41",
"@cite_1",
"@cite_6",
"@cite_44",
"@cite_5",
"@cite_47",
"@cite_16",
"@cite_13"
],
"mid": [
"2593956217",
"2903041701",
"2962780596",
"2519131448",
"2795709097",
"2810993953",
"",
"2016462422",
"",
"2555510177"
],
"abstract": [
"3D face reconstruction from a single image is a classical and challenging problem with wide applications in many areas. Inspired by recent works in face animation from RGB-D or monocular video inputs, we develop a novel method for reconstructing 3D faces from unconstrained 2D images using a coarse-to-fine optimization strategy. First, a smooth coarse 3D face is generated from an example-based bilinear face model by aligning the projection of 3D face landmarks with 2D landmarks detected from the input image. Afterward, using local corrective deformation fields, the coarse 3D face is refined using photometric consistency constraints, resulting in a medium face shape. Finally, a shape-from-shading method is applied on the medium face to recover fine geometric details. Our method outperforms the state-of-the-art approaches in terms of accuracy and detail recovery, which is demonstrated in extensive experiments using real-world models and publicly available data sets.",
"Dense 3D face reconstruction plays a fundamental role in visual media production involving digital actors. We improve upon high fidelity reconstruction from a single 2D photo with a reconstruction framework that is robust to large variations in expressions, poses and illumination. We provide a global optimization step improving the alignment of 3D facial geometry to tracked 2D landmarks with 3D Laplacian deformation. Face detail is improved through, extending Shape from Shading reconstruction with fitted albedo prior masks, together with a fast proportionality constraint between depth and image gradients consistent with local self-occlusion behavior. Together these measures better preserve the crucial facial features that define an actor's identity, and we illustrate this through a variety of comparisons with related works.",
"It has been recently shown that neural networks can recover the geometric structure of a face from a single given image. A common denominator of most existing face geometry reconstruction methods is the restriction of the solution space to some low-dimensional subspace. While such a model significantly simplifies the reconstruction problem, it is inherently limited in its expressiveness. As an alternative, we propose an Image-to-Image translation network that jointly maps the input image to a depth image and a facial correspondence map. This explicit pixel-based mapping can then be utilized to provide high quality reconstructions of diverse faces under extreme expressions, using a purely geometric refinement process. In the spirit of recent approaches, the network is trained only with synthetic data, and is then evaluated on “in-the-wild” facial images. Both qualitative and quantitative analyses demonstrate the accuracy and the robustness of our approach.",
"Fast and robust three-dimensional reconstruction of facial geometric structure from a single image is a challenging task with numerous applications. Here, we introduce a learning-based approach for reconstructing a three-dimensional face from a single image. Recent face recovery methods rely on accurate localization of key characteristic points. In contrast, the proposed approach is based on a Convolutional-Neural-Network (CNN) which extracts the face geometry directly from its image. Although such deep architectures outperform other models in complex computer vision problems, training them properly requires a large dataset of annotated examples. In the case of three-dimensional faces, currently, there are no large volume data sets, while acquiring such big-data is a tedious task. As an alternative, we propose to generate random, yet nearly photo-realistic, facial images for which the geometric form is known. The suggested model successfully recovers facial shapes from real images, even for faces with extreme expressions and under various lighting conditions.",
"Existing single view, 3D face reconstruction methods can produce beautifully detailed 3D results, but typically only for near frontal, unobstructed viewpoints. We describe a system designed to provide detailed 3D reconstructions of faces viewed under extreme conditions, out of plane rotations, and occlusions. Motivated by the concept of bump mapping, we propose a layered approach which decouples estimation of a global shape from its mid-level details (e.g., wrinkles). We estimate a coarse 3D face shape which acts as a foundation and then separately layer this foundation with details represented by a bump map. We show how a deep convolutional encoder-decoder can be used to estimate such bump maps. We further show how this approach naturally extends to generate plausible details for occluded facial regions. We test our approach and its components extensively, quantitatively demonstrating the invariance of our estimated facial details. We further provide numerous qualitative examples showing that our method produces detailed 3D face shapes in viewing conditions where existing state of the art often break down.",
"We present a deep learning-based technique to infer high-quality facial reflectance and geometry given a single unconstrained image of the subject, which may contain partial occlusions and arbitrary illumination conditions.",
"",
"We present a novel process for acquiring detailed facial geometry with high resolution diffuse and specular photometric information from multiple viewpoints using polarized spherical gradient illumination. Key to our method is a new pair of linearly polarized lighting patterns which enables multiview diffuse-specular separation under a given spherical illumination condition from just two photographs. The patterns -- one following lines of latitude and one following lines of longitude -- allow the use of fixed linear polarizers in front of the cameras, enabling more efficient acquisition of diffuse and specular albedo and normal maps from multiple viewpoints. In a second step, we employ these albedo and normal maps as input to a novel multi-resolution adaptive domain message passing stereo reconstruction algorithm to create high resolution facial geometry. To do this, we formulate the stereo reconstruction from multiple cameras in a commonly parameterized domain for multiview reconstruction. We show competitive results consisting of high-resolution facial geometry with relightable reflectance maps using five DSLR cameras. Our technique scales well for multiview acquisition without requiring specialized camera systems for sensing multiple polarization states.",
"",
"Reconstructing the detailed geometric structure of a face from a given image is a key to many computer vision and graphics applications, such as motion capture and reenactment. The reconstruction task is challenging as human faces vary extensively when considering expressions, poses, textures, and intrinsic geometries. While many approaches tackle this complexity by using additional data to reconstruct the face of a single subject, extracting facial surface from a single image remains a difficult problem. As a result, single-image based methods can usually provide only a rough estimate of the facial geometry. In contrast, we propose to leverage the power of convolutional neural networks to produce a highly detailed face reconstruction from a single image. For this purpose, we introduce an end-to-end CNN framework which derives the shape in a coarse-to-fine fashion. The proposed architecture is composed of two main blocks, a network that recovers the coarse facial geometry (CoarseNet), followed by a CNN that refines the facial features of that geometry (FineNet). The proposed networks are connected by a novel layer which renders a depth image given a mesh in 3D. Unlike object recognition and detection problems, there are no suitable datasets for training CNNs to perform face geometry reconstruction. Therefore, our training regime begins with a supervised phase, based on synthetic images, followed by an unsupervised phase that uses only unconstrained facial images. The accuracy and robustness of the proposed model is demonstrated by both qualitative and quantitative evaluation tests."
]
} |
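Most of the shading-based refinements surveyed in this record share one schematic objective: starting from a coarse depth map, minimize a photometric rendering error plus a term that keeps the result near the proxy. A generic Lambertian form (ours, not any single cited paper's formulation):

```latex
% Generic shape-from-shading refinement over per-pixel depth z_p: the
% data term compares image intensity I_p to a Lambertian rendering of
% the normal n_p(z) with albedo rho_p and light l; the second term
% anchors the solution to the coarse proxy depth z^proxy.
E(z) = \sum_{p} \big( I_p - \rho_p\,\mathbf{l}^{\top}\mathbf{n}_p(z) \big)^2
     + \lambda \sum_{p} \big( z_p - z_p^{\text{proxy}} \big)^2
```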
1904.02382 | 2930406384 | Facial actions are spatio-temporal signals by nature, and therefore their modeling is crucially dependent on the availability of temporal information. In this paper, we focus on inferring such temporal dynamics of facial actions when no explicit temporal information is available, i.e. from still images. We present a novel approach to capture multiple scales of such temporal dynamics, with an application to facial Action Unit (AU) intensity estimation and dimensional affect estimation. In particular, 1) we propose a framework that infers a dynamic representation (DR) from a still image, which captures the bi-directional flow of time within a short time-window centered at the input image; 2) we show that we can train our method without the need for explicitly generating target representations, allowing the network to represent dynamics more broadly; and 3) we propose to apply a multiple temporal scale approach that infers DRs for different window lengths (MDR) from a still image. We empirically validate the value of our approach on the task of frame ranking, and show how our proposed MDR attains state-of-the-art results on BP4D for AU intensity estimation and on SEMAINE for dimensional affect estimation, using only still images at test time. | Modeling the temporal dynamics of facial expressions in video sequences is a longstanding problem in Computer Vision. Some works have proposed to summarize short-term motion at the feature level, extending hand-crafted features to what is known as Three Orthogonal Planes (TOP) @cite_5 @cite_22 . Other works have exploited the use of a Fourier Transform @cite_29 , or spatio-temporal convolution @cite_10 @cite_3 . The majority of related work focuses on using recurrent or latent-based models, in particular Recurrent Neural Networks (RNNs @cite_42 @cite_10 @cite_6 @cite_37 @cite_20 @cite_47 ). | {
"cite_N": [
"@cite_37",
"@cite_22",
"@cite_29",
"@cite_42",
"@cite_3",
"@cite_6",
"@cite_5",
"@cite_47",
"@cite_10",
"@cite_20"
],
"mid": [
"2617750261",
"2077958330",
"2803017656",
"2963875208",
"",
"2600389231",
"2479639417",
"2798536775",
"2280620570",
"2546875627"
],
"abstract": [
"Deep Neural Networks (DNNs) have shown to outperform traditional methods in various visual recognition tasks including Facial Expression Recognition (FER). In spite of efforts made to improve the accuracy of FER systems using DNN, existing methods still are not generalizable enough in practical applications. This paper proposes a 3D Convolutional Neural Network method for FER in videos. This new network architecture consists of 3D Inception-ResNet layers followed by an LSTM unit that together extracts the spatial relations within facial images as well as the temporal relations between different frames in the video. Facial landmark points are also used as inputs to our network which emphasize on the importance of facial components rather than the facial regions that may not contribute significantly to generating facial expressions. Our proposed method is evaluated using four publicly available databases in subject-independent and cross-database tasks and outperforms state-of-the-art methods.",
"Both the configuration and the dynamics of facial expressions are crucial for the interpretation of human facial behavior. Yet to date, the vast majority of reported efforts in the field either do not take the dynamics of facial expressions into account, or focus only on prototypic facial expressions of six basic emotions. Facial dynamics can be explicitly analyzed by detecting the constituent temporal segments in Facial Action Coding System (FACS) Action Units (AUs)-onset, apex, and offset. In this paper, we present a novel approach to explicit analysis of temporal dynamics of facial actions using the dynamic appearance descriptor Local Phase Quantization from Three Orthogonal Planes (LPQ-TOP). Temporal segments are detected by combining a discriminative classifier for detecting the temporal segments on a frame-by-frame basis with Markov Models that enforce temporal consistency over the whole episode. The system is evaluated in detail over the MMI facial expression database, the UNBC-McMaster pain database, the SAL database, the GEMEP-FERA dataset in database-dependent experiments, in cross-database experiments using the Cohn-Kanade, and the SEMAINE databases. The comparison with other state-of-the-art methods shows that the proposed LPQ-TOP method outperforms the other approaches for the problem of AU temporal segment detection, and that overall AU activation detection benefits from dynamic appearance information.",
"Depression is a serious mental disorder that affects millions of people all over the world. Traditional clinical diagnosis methods are subjective, complicated and need extensive participation of experts. Audio-visual automatic depression analysis systems predominantly base their predictions on very brief sequential segments, sometimes as little as one frame. Such data contains much redundant information, causes a high computational load, and negatively affects the detection accuracy. Final decision making at the sequence level is then based on the fusion of frame or segment level predictions. However, this approach loses longer term behavioural correlations, as the behaviours themselves are abstracted away by the frame-level predictions. We propose to on the one hand use automatically detected human behaviour primitives such as Gaze directions, Facial action units (AU), etc. as low-dimensional multi-channel time series data, which can then be used to create two sequence descriptors. The first calculates the sequence-level statistics of the behaviour primitives and the second casts the problem as a Convolutional Neural Network problem operating on a spectral representation of the multichannel behaviour signals. The results of depression detection (binary classification) and severity estimation (regression) experiments conducted on the AVEC 2016 DAIC-WOZ database show that both methods achieved significant improvement compared to the previous state of the art in terms of the depression severity estimation.",
"Facial action units (AU) are the fundamental units to decode human facial expressions. At least three aspects affect performance of automated AU detection: spatial representation, temporal modeling, and AU correlation. Unlike most studies that tackle these aspects separately, we propose a hybrid network architecture to jointly model them. Specifically, spatial representations are extracted by a Convolutional Neural Network (CNN), which, as analyzed in this paper, is able to reduce person-specific biases caused by hand-crafted descriptors (e.g., HOG and Gabor). To model temporal dependencies, Long Short-Term Memory (LSTMs) are stacked on top of these representations, regardless of the lengths of input videos. The outputs of CNNs and LSTMs are further aggregated into a fusion network to produce per-frame prediction of 12 AUs. Our network naturally addresses the three issues together, and yields superior performance compared to existing methods that consider these issues independently. Extensive experiments were conducted on two large spontaneous datasets, GFT and BP4D, with more than 400,000 frames coded with 12 AUs. On both datasets, we report improvements over a standard multi-label CNN and feature-based state-of-the-art. Finally, we provide visualization of the learned AU models, which, to our best knowledge, reveal how machines see AUs for the first time.",
"",
"One key challenging issue of facial expression recognition is to capture the dynamic variation of facial physical structure from videos. In this paper, we propose a part-based hierarchical bidirectional recurrent neural network (PHRNN) to analyze the facial expression information of temporal sequences. Our PHRNN models facial morphological variations and dynamical evolution of expressions, which is effective to extract “temporal features” based on facial landmarks (geometry information) from consecutive frames. Meanwhile, in order to complement the still appearance information, a multi-signal convolutional neural network (MSCNN) is proposed to extract “spatial features” from still frames. We use both recognition and verification signals as supervision to calculate different loss functions, which are helpful to increase the variations of different expressions and reduce the differences among identical expressions. This deep evolutional spatial-temporal network (composed of PHRNN and MSCNN) extracts the partial-whole, geometry-appearance, and dynamic-still information, effectively boosting the performance of facial expression recognition. Experimental results show that this method largely outperforms the state-of-the-art ones. On three widely used facial expression databases (CK+, Oulu-CASIA, and MMI), our method reduces the error rates of the previous best ones by 45.5 , 25.8 , and 24.4 , respectively.",
"Video based facial expression recognition has been a long standing problem and attracted growing attention recently. The key to a successful facial expression recognition system is to exploit the potentials of audiovisual modalities and design robust features to effectively characterize the facial appearance and configuration changes caused by facial motions. We propose an effective framework to address this issue in this paper. In our study, both visual modalities (face images) and audio modalities (speech) are utilized. A new feature descriptor called Histogram of Oriented Gradients from Three Orthogonal Planes (HOG-TOP) is proposed to extract dynamic textures from video sequences to characterize facial appearance changes. And a new effective geometric feature derived from the warp transformation of facial landmarks is proposed to capture facial configuration changes. Moreover, the role of audio modalities on recognition is also explored in our study. We applied the multiple feature fusion to tackle the video-based facial expression recognition problems under lab-controlled environment and in the wild, respectively. Experiments conducted on the extended Cohn-Kanade (CK+) database and the Acted Facial Expression in Wild (AFEW) 4.0 database show that our approach is robust in dealing with video-based facial expression recognition problems under lab-controlled environment and in the wild compared with the other state-of-the-art methods.",
"Automatic understanding of human affect using visual signals is of great importance in everyday human–machine interactions. Appraising human emotional states, behaviors and reactions displayed in real-world settings, can be accomplished using latent continuous dimensions (e.g., the circumplex model of affect). Valence (i.e., how positive or negative is an emotion) and arousal (i.e., power of the activation of the emotion) constitute popular and effective representations for affect. Nevertheless, the majority of collected datasets this far, although containing naturalistic emotional states, have been captured in highly controlled recording conditions. In this paper, we introduce the Aff-Wild benchmark for training and evaluating affect recognition algorithms. We also report on the results of the First Affect-in-the-wild Challenge (Aff-Wild Challenge) that was recently organized in conjunction with CVPR 2017 on the Aff-Wild database, and was the first ever challenge on the estimation of valence and arousal in-the-wild. Furthermore, we design and extensively train an end-to-end deep neural architecture which performs prediction of continuous emotion dimensions based on visual cues. The proposed deep learning architecture, AffWildNet, includes convolutional and recurrent neural network layers, exploiting the invariant properties of convolutional features, while also modeling temporal dynamics that arise in human behavior via the recurrent layers. The AffWildNet produced state-of-the-art results on the Aff-Wild Challenge. We then exploit the AffWild database for learning features, which can be used as priors for achieving best performances both for dimensional, as well as categorical emotion recognition, using the RECOLA, AFEW-VA and EmotiW 2017 datasets, compared to all other methods designed for the same goal. The database and emotion recognition models are available at http: ibug.doc.ic.ac.uk resources first-affect-wild-challenge.",
"Spontaneous facial expression recognition under uncontrolled conditions is a hard task. It depends on multiple factors including shape, appearance and dynamics of the facial features, all of which are adversely affected by environmental noise and low intensity signals typical of such conditions. In this work, we present a novel approach to Facial Action Unit detection using a combination of Convolutional and Bi-directional Long Short-Term Memory Neural Networks (CNN-BLSTM), which jointly learns shape, appearance and dynamics in a deep learning manner. In addition, we introduce a novel way to encode shape features using binary image masks computed from the locations of facial landmarks. We show that the combination of dynamic CNN features and Bi-directional Long Short-Term Memory excels at modelling the temporal information. We thoroughly evaluate the contributions of each component in our system and show that it achieves state-of-the-art performance on the FERA-2015 Challenge dataset.",
"In this paper, we present a video-based emotion recognition system submitted to the EmotiW 2016 Challenge. The core module of this system is a hybrid network that combines recurrent neural network (RNN) and 3D convolutional networks (C3D) in a late-fusion fashion. RNN and C3D encode appearance and motion information in different ways. Specifically, RNN takes appearance features extracted by convolutional neural network (CNN) over individual video frames as input and encodes motion later, while C3D models appearance and motion of video simultaneously. Combined with an audio module, our system achieved a recognition accuracy of 59.02 without using any additional emotion-labeled video clips in training set, compared to 53.8 of the winner of EmotiW 2015. Extensive experiments show that combining RNN and C3D together can improve video-based emotion recognition noticeably."
]
} |
1904.02382 | 2930406384 | Facial actions are spatio-temporal signals by nature, and therefore their modeling is crucially dependent on the availability of temporal information. In this paper, we focus on inferring such temporal dynamics of facial actions when no explicit temporal information is available, i.e. from still images. We present a novel approach to capture multiple scales of such temporal dynamics, with an application to facial Action Unit (AU) intensity estimation and dimensional affect estimation. In particular, 1) we propose a framework that infers a dynamic representation (DR) from a still image, which captures the bi-directional flow of time within a short time-window centered at the input image; 2) we show that we can train our method without the need of explicitly generating target representations, allowing the network to represent dynamics more broadly; and 3) we propose to apply a multiple temporal scale approach that infers DRs for different window lengths (MDR) from a still image. We empirically validate the value of our approach on the task of frame ranking, and show how our proposed MDR attains state of the art results on BP4D for AU intensity estimation and on SEMAINE for dimensional affect estimation, using only still images at test time. | Our work is related to motion prediction, where the goal is to infer motion from either still images or sequences. In this sense, the goal is to predict the future. Some works have tackled this problem by predicting optical flow from still images @cite_48 @cite_38 . Others have proposed to infer the next frame to follow a preceding video sequence @cite_50 @cite_45 . In particular, @cite_46 proposed to infer the next dynamic image, as it better correlates with the preceding frames. These methods do not attempt to summarize motion, but rather predict the most likely frame to follow a given image or image sequence. | {
"cite_N": [
"@cite_38",
"@cite_48",
"@cite_45",
"@cite_50",
"@cite_46"
],
"mid": [
"1930563420",
"2964132058",
"",
"2951383673",
"2952741422"
],
"abstract": [
"Given a scene, what is going to move, and in what direction will it move? Such a question could be considered a non-semantic form of action prediction. In this work, we present a convolutional neural network (CNN) based approach for motion prediction. Given a static image, this CNN predicts the future motion of each and every pixel in the image in terms of optical flow. Our CNN model leverages the data in tens of thousands of realistic videos to train our model. Our method relies on absolutely no human labeling and is able to predict motion based on the context of the scene. Because our CNN model makes no assumptions about the underlying scene, it can predict future optical flow on a diverse set of scenarios. We outperform all previous approaches by large margins.",
"This paper proposes motion prediction in single still images by learning it from a set of videos. The building assumption is that similar motion is characterized by similar appearance. The proposed method learns local motion patterns given a specific appearance and adds the predicted motion in a number of applications. This work (i) introduces a novel method to predict motion from appearance in a single static image, (ii) to that end, extends of the Structured Random Forest with regression derived from first principles, and (iii) shows the value of adding motion predictions in different tasks such as: weak frame-proposals containing unexpected events, action recognition, motion saliency. Illustrative results indicate that motion prediction is not only feasible, but also provides valuable information for a number of applications.",
"",
"In this work, we focus on a challenging task: synthesizing multiple imaginary videos given a single image. Major problems come from high dimensionality of pixel space and the ambiguity of potential motions. To overcome those problems, we propose a new framework that produce imaginary videos by transformation generation. The generated transformations are applied to the original image in a novel volumetric merge network to reconstruct frames in imaginary video. Through sampling different latent variables, our method can output different imaginary video samples. The framework is trained in an adversarial way with unsupervised learning. For evaluation, we propose a new assessment metric @math . In experiments, we test on 3 datasets varying from synthetic data to natural scene. Our framework achieves promising performance in image quality assessment. The visual inspection indicates that it can successfully generate diverse five-frame videos in acceptable perceptual quality.",
"Human action-anticipation methods predict what is the future action by observing only a few portion of an action in progress. This is critical for applications where computers have to react to human actions as early as possible such as autonomous driving, human-robotic interaction, assistive robotics among others. In this paper, we present a method for human action anticipation by predicting the most plausible future human motion. We represent human motion using Dynamic Images and make use of tailored loss functions to encourage a generative model to produce accurate future motion prediction. Our method outperforms the currently best performing action-anticipation methods by 4 on JHMDB-21, 5.2 on UT-Interaction and 5.1 on UCF 101-24 benchmarks."
]
} |
1904.02382 | 2930406384 | Facial actions are spatio-temporal signals by nature, and therefore their modeling is crucially dependent on the availability of temporal information. In this paper, we focus on inferring such temporal dynamics of facial actions when no explicit temporal information is available, i.e. from still images. We present a novel approach to capture multiple scales of such temporal dynamics, with an application to facial Action Unit (AU) intensity estimation and dimensional affect estimation. In particular, 1) we propose a framework that infers a dynamic representation (DR) from a still image, which captures the bi-directional flow of time within a short time-window centered at the input image; 2) we show that we can train our method without the need of explicitly generating target representations, allowing the network to represent dynamics more broadly; and 3) we propose to apply a multiple temporal scale approach that infers DRs for different window lengths (MDR) from a still image. We empirically validate the value of our approach on the task of frame ranking, and show how our proposed MDR attains state of the art results on BP4D for AU intensity estimation and on SEMAINE for dimensional affect estimation, using only still images at test time. | Our work can be viewed as image-to-image translation, where a dynamic representation (a 3-channel image in our case) is generated from an input image. Works in image-to-image translation generally attempt to modify an input image to generate an output according to a target attribute or style, and thus do not have as a goal to add any information to the input image @cite_41 @cite_31 @cite_39 @cite_7 @cite_16 @cite_18 @cite_2 . These approaches generally rely on the use of Generative Adversarial Networks (GANs) @cite_23 , or any of their extensions @cite_41 @cite_31 @cite_16 . GANs are a powerful tool to capture the target distribution, forcing the networks to produce plausible outputs. However, as we shall see, we will not be using explicit target representations to learn our network, and therefore GANs are not a suitable tool for learning the dynamic representations. | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_41",
"@cite_39",
"@cite_23",
"@cite_2",
"@cite_31",
"@cite_16"
],
"mid": [
"2326925005",
"2963800363",
"2768626898",
"1903029394",
"2099471712",
"",
"2963073614",
"2963444790"
],
"abstract": [
"Given a grayscale photograph as input, this paper attacks the problem of hallucinating a plausible color version of the photograph. This problem is clearly underconstrained, so previous approaches have either relied on significant user interaction or resulted in desaturated colorizations. We propose a fully automatic approach that produces vibrant and realistic colorizations. We embrace the underlying uncertainty of the problem by posing it as a classification task and use class-rebalancing at training time to increase the diversity of colors in the result. The system is implemented as a feed-forward pass in a CNN at test time and is trained on over a million color images. We evaluate our algorithm using a “colorization Turing test,” asking human participants to choose between a generated and ground truth color image. Our method successfully fools humans on 32 of the trials, significantly higher than previous methods. Moreover, we show that colorization can be a powerful pretext task for self-supervised feature learning, acting as a cross-channel encoder. This approach results in state-of-the-art performance on several feature learning benchmarks.",
"We present a new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs). Conditional GANs have enabled a variety of applications, but the results are often limited to low-resolution and still far from realistic. In this work, we generate 2048 A— 1024 visually appealing results with a novel adversarial loss, as well as new multi-scale generator and discriminator architectures. Furthermore, we extend our framework to interactive visual manipulation with two additional features. First, we incorporate object instance segmentation information, which enables object manipulations such as removing adding objects and changing the object category. Second, we propose a method to generate diverse results given the same input, allowing users to edit the object appearance interactively. Human opinion studies demonstrate that our method significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.",
"Recent studies have shown remarkable success in image-to-image translation for two domains. However, existing approaches have limited scalability and robustness in handling more than two domains, since different models should be built independently for every pair of image domains. To address this limitation, we propose StarGAN, a novel and scalable approach that can perform image-to-image translations for multiple domains using only a single model. Such a unified model architecture of StarGAN allows simultaneous training of multiple datasets with different domains within a single network. This leads to StarGAN's superior quality of translated images compared to existing models as well as the novel capability of flexibly translating an input image to any desired target domain. We empirically demonstrate the effectiveness of our approach on a facial attribute transfer and a facial expression synthesis tasks.",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"",
"We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Moreover, since the release of the pix2pix software associated with this paper, hundreds of twitter users have posted their own artistic experiments using our system. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without handengineering our loss functions either.",
"Conditional Generative Adversarial Networks (GANs) for cross-domain image-to-image translation have made much progress recently [7, 8, 21, 12, 4, 18]. Depending on the task complexity, thousands to millions of labeled image pairs are needed to train a conditional GAN. However, human labeling is expensive, even impractical, and large quantities of data may not always be available. Inspired by dual learning from natural language translation [23], we develop a novel dual-GAN mechanism, which enables image translators to be trained from two sets of unlabeled images from two domains. In our architecture, the primal GAN learns to translate images from domain U to those in domain V, while the dual GAN learns to invert the task. The closed loop made by the primal and dual tasks allows images from either domain to be translated and then reconstructed. Hence a loss function that accounts for the reconstruction error of images can be used to train the translators. Experiments on multiple image translation tasks with unlabeled data show considerable performance gain of DualGAN over a single GAN. For some tasks, DualGAN can even achieve comparable or slightly better results than conditional GAN trained on fully labeled data."
]
} |
1904.02382 | 2930406384 | Facial actions are spatio-temporal signals by nature, and therefore their modeling is crucially dependent on the availability of temporal information. In this paper, we focus on inferring such temporal dynamics of facial actions when no explicit temporal information is available, i.e. from still images. We present a novel approach to capture multiple scales of such temporal dynamics, with an application to facial Action Unit (AU) intensity estimation and dimensional affect estimation. In particular, 1) we propose a framework that infers a dynamic representation (DR) from a still image, which captures the bi-directional flow of time within a short time-window centered at the input image; 2) we show that we can train our method without the need of explicitly generating target representations, allowing the network to represent dynamics more broadly; and 3) we propose to apply a multiple temporal scale approach that infers DRs for different window lengths (MDR) from a still image. We empirically validate the value of our approach on the task of frame ranking, and show how our proposed MDR attains state of the art results on BP4D for AU intensity estimation and on SEMAINE for dimensional affect estimation, using only still images at test time. | In this paper we propose to learn such dynamic representations without explicitly generating target representations. Instead, we will make use of a proxy loss function, called a Rank Loss, to train our network in a self-supervised manner. Self-supervised learning avoids the need for explicit target data, and instead explores the structure of the training data to supervise the training process, using e.g. temporal relations or semantic structures @cite_28 . Some works on self-supervised learning have already used the temporal order of video frames to train networks, aiming to learn video representations of asymmetric human actions @cite_21 @cite_8 or analyze temporal coherence @cite_27 @cite_36 . To the best of our knowledge, we are the first to propose the use of a Rank Loss function to learn a dynamic representation of facial expressions in a self-supervised manner. | {
"cite_N": [
"@cite_8",
"@cite_28",
"@cite_36",
"@cite_21",
"@cite_27"
],
"mid": [
"2487442924",
"343636949",
"1836533770",
"2950809610",
"2285336231"
],
"abstract": [
"In this paper, we present an approach for learning a visual representation from the raw spatiotemporal signals in videos. Our representation is learned without supervision from semantic labels. We formulate our method as an unsupervised sequential verification task, i.e., we determine whether a sequence of frames from a video is in the correct temporal order. With this simple task and no semantic labels, we learn a powerful visual representation using a Convolutional Neural Network (CNN). The representation contains complementary information to that learned from supervised image datasets like ImageNet. Qualitative results show that our method captures information that is temporally varying, such as human pose. When used as pre-training for action recognition, our method gives significant gains over learning without external data on benchmark datasets like UCF101 and HMDB51. To demonstrate its sensitivity to human pose, we show results for pose estimation on the FLIC and MPII datasets that are competitive, or better than approaches using significantly more supervision. Our method can be combined with supervised representations to provide an additional boost in accuracy.",
"This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. We demonstrate that the feature representation learned using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework [19] and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations.",
"Current state-of-the-art classification and detection algorithms train deep convolutional networks using labeled data. In this work we study unsupervised feature learning with convolutional networks in the context of temporally coherent unlabeled data. We focus on feature learning from unlabeled video data, using the assumption that adjacent video frames contain semantically similar information. This assumption is exploited to train a convolutional pooling auto-encoder regularized by slowness and sparsity priors. We establish a connection between slow feature learning and metric learning. Using this connection we define \"temporal coherence\" -- a criterion which can be used to set hyper-parameters in a principled and automated manner. In a transfer learning experiment, we show that the resulting encoder can be used to define a more semantically coherent metric without the use of labels.",
"We propose a new self-supervised CNN pre-training technique based on a novel auxiliary task called \"odd-one-out learning\". In this task, the machine is asked to identify the unrelated or odd element from a set of otherwise related elements. We apply this technique to self-supervised video representation learning where we sample subsequences from videos and ask the network to learn to predict the odd video subsequence. The odd video subsequence is sampled such that it has wrong temporal order of frames while the even ones have the correct temporal order. Therefore, to generate a odd-one-out question no manual annotation is required. Our learning machine is implemented as multi-stream convolutional neural network, which is learned end-to-end. Using odd-one-out networks, we learn temporal representations for videos that generalizes to other related tasks such as action recognition. On action classification, our method obtains 60.3 on the UCF101 dataset using only UCF101 data for training which is approximately 10 better than current state-of-the-art self-supervised learning methods. Similarly, on HMDB51 dataset we outperform self-supervised state-of-the art methods by 12.7 on action classification task.",
"How can unlabeled video augment visual learning? Existing methods perform \"slow\" feature analysis, encouraging the representations of temporally close frames to exhibit only small differences. While this standard approach captures the fact that high-level visual signals change slowly over time, it fails to capture *how* the visual content changes. We propose to generalize slow feature analysis to \"steady\" feature analysis. The key idea is to impose a prior that higher order derivatives in the learned feature space must be small. To this end, we train a convolutional neural network with a regularizer on tuples of sequential frames from unlabeled video. It encourages feature changes over time to be smooth, i.e., similar to the most recent changes. Using five diverse datasets, including unlabeled YouTube and KITTI videos, we demonstrate our method's impact on object, scene, and action recognition tasks. We further show that our features learned from unlabeled video can even surpass a standard heavily supervised pretraining approach."
]
} |
1904.02331 | 2951559221 | The overreliance on large parallel corpora significantly limits the applicability of machine translation systems to the majority of language pairs. Back-translation has been dominantly used in previous approaches for unsupervised neural machine translation, where pseudo sentence pairs are generated to train the models with a reconstruction loss. However, the pseudo sentences are usually of low quality as translation errors accumulate during training. To avoid this fundamental issue, we propose an alternative but more effective approach, extract-edit, to extract and then edit real sentences from the target monolingual corpora. Furthermore, we introduce a comparative translation loss to evaluate the translated target sentences and thus train the unsupervised translation systems. Experiments show that the proposed approach consistently outperforms the previous state-of-the-art unsupervised machine translation systems across two benchmarks (English-French and English-German) and two low-resource language pairs (English-Romanian and English-Russian) by more than 2 (up to 3.63) BLEU points. | Our work is also related to the recent work on applying retrieval mechanisms to augment text generation, such as image captioning @cite_36 @cite_12 , dialogue generation @cite_34 @cite_3 @cite_40 and style transfer @cite_23 @cite_0 . Some editing-based models @cite_1 @cite_25 have been proposed to further enhance the retrieved text. Recent work in machine translation @cite_33 augments an NMT model with sentence pairs retrieved by an off-the-shelf search engine. However, these methods are two-staged, with supervised retrieval as the first step. In our work, the extracted-edited sentences are not directly used as the ground truth to train the translation model. Instead, we view these sentences as pivotal points in the target language space and further propose a comparative translation loss to train the system in a fully unsupervised way. | {
"cite_N": [
"@cite_33",
"@cite_36",
"@cite_1",
"@cite_3",
"@cite_0",
"@cite_40",
"@cite_23",
"@cite_34",
"@cite_25",
"@cite_12"
],
"mid": [
"",
"2144715671",
"2963018920",
"2964308564",
"2963667126",
"2809213523",
"2616969219",
"2539809671",
"2890397703",
""
],
"abstract": [
"",
"The ever growing amount of web images and their associated texts offers new opportunities for integrative models bridging natural language processing and computer vision. However, the potential benefits of such data are yet to be fully realized due to the complexity and noise in the alignment between image content and text. We address this challenge with contributions in two folds: first, we introduce the new task of image caption generalization, formulated as visually-guided sentence compression, and present an efficient algorithm based on dynamic beam search with dependency-based constraints. Second, we release a new large-scale corpus with 1 million image-caption pairs achieving tighter content alignment between images and text. Evaluation results show the intrinsic quality of the generalized captions and the extrinsic utility of the new imagetext parallel corpus with respect to a concrete application of image caption transfer.",
"We propose a new generative language model for sentences that first samples a prototype sentence from the training corpus and then edits it into a new sentence. Compared to traditional language models that generate from scratch either left-to-right or by first sampling a latent sentence vector, our prototype-then-edit model improves perplexity on language modeling and generates higher quality outputs according to human evaluation. Furthermore, the model gives rise to a latent edit vector that captures interpretable semantics such as sentence similarity and sentence-level analogies.",
"Abstract: Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.",
"",
"Open domain response generation has achieved remarkable progress in recent years, but sometimes yields short and uninformative responses. We propose a new paradigm for response generation, that is response generation by editing, which significantly increases the diversity and informativeness of the generation results. Our assumption is that a plausible response can be generated by slightly revising an existing response prototype. The prototype is retrieved from a pre-defined index and provides a good start-point for generation because it is grammatical and informative. We design a response editing model, where an edit vector is formed by considering differences between a prototype context and a current context, and then the edit vector is fed to a decoder to revise the prototype response for the current context. Experiment results on a large scale dataset demonstrate that the response editing model outperforms generative and retrieval-based models on various aspects.",
"Generative adversarial networks (GANs) have great successes on synthesizing data. However, the existing GANs restrict the discriminator to be a binary classifier, and thus limit their learning capacity for tasks that need to synthesize output with rich structures such as natural language descriptions. In this paper, we propose a novel generative adversarial network, RankGAN, for generating high-quality language descriptions. Rather than training the discriminator to learn and assign absolute binary predicate for individual data sample, the proposed RankGAN is able to analyze and rank a collection of human-written and machine-written sentences by giving a reference group. By viewing a set of data samples collectively and evaluating their quality through relative ranking scores, the discriminator is able to make better assessment which in turn helps to learn a better generator. The proposed RankGAN is optimized through the policy gradient technique. Experimental results on multiple public datasets clearly demonstrate the effectiveness of the proposed approach.",
"Open-domain human-computer conversation has attracted much attention in the field of NLP. Contrary to rule- or template-based domain-specific dialog systems, open-domain conversation usually requires data-driven approaches, which can be roughly divided into two categories: retrieval-based and generation-based systems. Retrieval systems search a user-issued utterance (called a query) in a large database, and return a reply that best matches the query. Generative approaches, typically based on recurrent neural networks (RNNs), can synthesize new replies, but they suffer from the problem of generating short, meaningless utterances. In this paper, we propose a novel ensemble of retrieval-based and generation-based dialog systems in the open domain. In our approach, the retrieved candidate, in addition to the original query, is fed to an RNN-based reply generator, so that the neural model is aware of more information. The generated reply is then fed back as a new candidate for post-reranking. Experimental results show that such ensemble outperforms each single part of it by a large margin.",
"Generic sequence-to-sequence models have trouble generating outputs with highly-structured dependencies such as source code. Motivated by the observation that editing is easier than writing from scratch, we propose a general retrieve-and-edit paradigm that can leverage any base sequence-to-sequence model: given a test input, we first retrieve a training example and then edit the retrieved output into the final predicted output using the base model. The key challenge is to efficiently learn a retriever that is sensitive to the prediction task. We propose first learning a joint variational autoencoder over input-output pairs and then regressing a conditional retriever on the joint embeddings. On the Hearthstone cards benchmark, we show that applying the retrieve-and-edit paradigm to a vanilla sequence-to-sequence model results in BLEU scores approaching those of specialized AST-based code generation models. Additionally, we introduce a new autocomplete task on Python code from GitHub, on which we demonstrate the benefits of retrieve-and-edit.",
""
]
} |
1904.02317 | 2929862222 | The current advances in object detection depend on large-scale datasets to get good performance. However, there may not always be sufficient samples in many scenarios, which leads to the research on few-shot detection as well as its extreme variant, one-shot detection. In this paper, one-shot detection has been formulated as a conditional probability problem. With this insight, a novel one-shot conditional object detection (OSCD) framework, referred to as Comparison Network (ComparisonNet), has been proposed. Specifically, query and target image features are extracted through a Siamese network as mapped metrics of marginal probabilities. A two-stage detector for OSCD is introduced to compare the extracted query and target features with the learnable metric to approach the optimized non-linear conditional probability. Once trained, ComparisonNet can detect objects of both seen and unseen classes without further training, which also brings the advantages of being class-agnostic, training-free for unseen classes, and free of catastrophic forgetting. Experiments show that the proposed approach achieves state-of-the-art performance on the proposed datasets of Fashion-MNIST and PASCAL VOC. | Convolutional Neural Network (CNN) based object detection approaches can mainly be divided into two categories: one-stage detectors @cite_7 @cite_48 @cite_5 @cite_17 and two-stage detectors @cite_39 @cite_1 @cite_35 @cite_50 . One-stage detectors predict object locations and classes in the whole image directly, without a region proposal procedure. Two-stage detectors, in contrast, first use region proposal methods to generate a set of candidate object locations and then classify each candidate location as one of the foreground classes or as background. | {
"cite_N": [
"@cite_35",
"@cite_7",
"@cite_48",
"@cite_1",
"@cite_39",
"@cite_50",
"@cite_5",
"@cite_17"
],
"mid": [
"",
"2963542991",
"",
"2773656608",
"2102605133",
"2613718673",
"2963037989",
"2884561390"
],
"abstract": [
"",
"Abstract: We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat.",
"",
"Image based localization is one of the important problems in computer vision due to its wide applicability in robotics, augmented reality, and autonomous systems. There is a rich set of methods described in the literature how to geometrically register a 2D image w.r.t. a 3D model. Recently, methods based on deep (and convolutional) feedforward networks (CNNs) became popular for pose regression. However, these CNN-based methods are still less accurate than geometry based methods despite being fast and memory efficient. In this work we design a deep neural network architecture based on sparse feature descriptors to estimate the absolute pose of an image. Our choice of using sparse feature descriptors has two major advantages: first, our network is significantly smaller than the CNNs proposed in the literature for this task---thereby making our approach more efficient and scalable. Second---and more importantly---, usage of sparse features allows to augment the training data with synthetic viewpoints, which leads to substantial improvements in the generalization performance to unseen poses. Thus, our proposed method aims to combine the best of the two worlds---feature-based localization and CNN-based pose regression--to achieve state-of-the-art performance in the absolute pose estimation. A detailed analysis of the proposed architecture and a rigorous evaluation on the existing datasets are provided to support our method.",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.",
"We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.",
"The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: https: github.com facebookresearch Detectron."
]
} |
1904.02317 | 2929862222 | The current advances in object detection depend on large-scale datasets to get good performance. However, there may not always be sufficient samples in many scenarios, which leads to the research on few-shot detection as well as its extreme variant, one-shot detection. In this paper, one-shot detection has been formulated as a conditional probability problem. With this insight, a novel one-shot conditional object detection (OSCD) framework, referred to as Comparison Network (ComparisonNet), has been proposed. Specifically, query and target image features are extracted through a Siamese network as mapped metrics of marginal probabilities. A two-stage detector for OSCD is introduced to compare the extracted query and target features with the learnable metric to approach the optimized non-linear conditional probability. Once trained, ComparisonNet can detect objects of both seen and unseen classes without further training, which also brings the advantages of being class-agnostic, training-free for unseen classes, and free of catastrophic forgetting. Experiments show that the proposed approach achieves state-of-the-art performance on the proposed datasets of Fashion-MNIST and PASCAL VOC. | One- or few-shot learning methods aim to learn new knowledge rapidly from only a few samples. Considering that deep models @cite_17 @cite_50 @cite_11 @cite_40 @cite_37 @cite_22 trained on data-rich datasets have led to significant advances and universal applications in image classification and object detection, fine-tuning pre-trained models can be a simple and efficient method to transfer knowledge from source domains to target domains. Beyond the basic fine-tuning operation, metric learning based works @cite_23 @cite_26 @cite_34 and meta-learning based methods @cite_4 @cite_46 @cite_14 @cite_27 @cite_3 @cite_41 @cite_36 @cite_49 @cite_18 have made great progress in few-shot learning. Our method is inspired by Relation Network @cite_3 . Unlike most previous methods, which use pre-defined distance metrics, Relation Network shows that a learnable distance metric outperforms pre-defined ones in few-shot learning. | {
"cite_N": [
"@cite_37",
"@cite_26",
"@cite_4",
"@cite_22",
"@cite_14",
"@cite_18",
"@cite_41",
"@cite_36",
"@cite_17",
"@cite_3",
"@cite_40",
"@cite_27",
"@cite_50",
"@cite_23",
"@cite_49",
"@cite_46",
"@cite_34",
"@cite_11"
],
"mid": [
"2194775991",
"2601450892",
"",
"2097117768",
"2519882289",
"2883582441",
"",
"2884901161",
"2884561390",
"2964105864",
"2964046515",
"",
"2613718673",
"2963341924",
"",
"2753160622",
"2879454547",
"2163605009"
],
"abstract": [
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"A recent approach to few-shot classification called matching networks has demonstrated the benefits of coupling metric learning with a training procedure that mimics test. This approach relies on a complicated fine-tuning procedure and an attention scheme that forms a distribution over all points in the support set, scaling poorly with its size. We propose a more streamlined approach, prototypical networks, that learns a metric space in which few-shot classification can be performed by computing Euclidean distances to prototype representations of each class, rather than individual points. Our method is competitive with state-of-the-art one-shot classification approaches while being much simpler and more scalable with the size of the support set. We empirically demonstrate the performance of our approach on the Omniglot and mini-ImageNet datasets. We further demonstrate that a similar idea can be used for zero-shot learning, where each class is described by a set of attributes, and achieve state-of-the-art results on the Caltech UCSD bird dataset.",
"",
"We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.",
"We develop a conceptually simple but powerful approach that can learn novel categories from few annotated examples. In this approach, the experience with already learned categories is used to facilitate the learning of novel classes. Our insight is two-fold: (1) there exists a generic, category agnostic transformation from models learned from few samples to models learned from large enough sample sets, and (2) such a transformation could be effectively learned by high-capacity regressors. In particular, we automatically learn the transformation with a deep model regression network on a large collection of model pairs. Experiments demonstrate that encoding this transformation as prior knowledge greatly facilitates the recognition in the small sample size regime on a broad range of tasks, including domain adaptation, fine-grained recognition, action recognition, and scene classification.",
"We unify recent neural approaches to one-shot learning with older ideas of associative memory in a model for met alearning. Our model learns jointly to represent data and to bind class labels to representations in a single shot. It builds representations via slow weights, learned across tasks through SGD, while fast weights constructed by a Hebbian learning rule implement one-shot binding for each new task. On the Omniglot, Mini-ImageNet, and Penn Treebank one-shot learning benchmarks, our model achieves state-of-the-art results.",
"",
"Gradient-based meta-learning techniques are both widely applicable and proficient at solving challenging few-shot learning and fast adaptation problems. However, they have practical difficulties when operating on high-dimensional parameter spaces in extreme low-data regimes. We show that it is possible to bypass these limitations by learning a data-dependent latent generative representation of model parameters, and performing gradient-based meta-learning in this low-dimensional latent space. The resulting approach, latent embedding optimization (LEO), decouples the gradient-based adaptation procedure from the underlying high-dimensional space of model parameters. Our evaluation shows that LEO can achieve state-of-the-art performance on the competitive miniImageNet and tieredImageNet few-shot classification tasks. Further analysis indicates LEO is able to capture uncertainty in the data, and can perform adaptation more effectively by optimizing in latent space.",
"The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: https: github.com facebookresearch Detectron.",
"We present a conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only few examples from each. Our method, called the Relation Network (RN), is trained end-to-end from scratch. During meta-learning, it learns to learn a deep distance metric to compare a small number of images within episodes, each of which is designed to simulate the few-shot setting. Once trained, a RN is able to classify images of new classes by computing relation scores between query images and the few examples of each new class without further updating the network. Besides providing improved performance on few-shot learning, our framework is easily extended to zero-shot learning. Extensive experiments on five benchmarks demonstrate that our simple approach provides a unified and effective approach for both of these two tasks.",
"",
"",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.",
"Learning from a few examples remains a key challenge in machine learning. Despite recent advances in important domains such as vision and language, the standard supervised deep learning paradigm does not offer a satisfactory solution for learning new concepts rapidly from little data. In this work, we employ ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories. Our framework learns a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. We then define one-shot learning problems on vision (using Omniglot, ImageNet) and language tasks. Our algorithm improves one-shot accuracy on ImageNet from 87.6 to 93.2 and from 88.0 to 93.8 on Omniglot compared to competing approaches. We also demonstrate the usefulness of the same model on language modeling by introducing a one-shot task on the Penn Treebank.",
"",
"Though deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a model has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity models requires many iterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network in the few-shot regime. The parametrization of our model allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner network that allows for quick convergence of training. We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning.",
"The key issue of few-shot learning is learning to generalize. This paper proposes a large margin principle to improve the generalization capacity of metric based methods for few-shot learning. To realize it, we develop a unified framework to learn a more discriminative metric space by augmenting the classification loss function with a large margin distance loss function for training. Extensive experiments on two state-of-the-art few-shot learning methods, graph neural networks and prototypical networks, show that our method can improve the performance of existing models substantially with very little computational overhead, demonstrating the effectiveness of the large margin principle and the potential of our method.",
"We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overriding in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry."
]
} |
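A minimal PyTorch sketch of the fixed-versus-learnable metric contrast discussed in the entry above. This is not the ComparisonNet or Relation Network code: Relation Network uses convolutional relation blocks over feature maps, whereas this toy version uses an MLP over flat embeddings, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class RelationModule(nn.Module):
    """Learnable metric: scores a (query, support) embedding pair."""
    def __init__(self, feat_dim=64, hidden_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim * 2, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
            nn.Sigmoid(),  # relation score in [0, 1]
        )

    def forward(self, query_feat, support_feat):
        # Concatenate the pair and let the network decide how similar it is.
        return self.net(torch.cat([query_feat, support_feat], dim=-1))

query = torch.randn(5, 64)    # 5 query embeddings (illustrative sizes)
support = torch.randn(5, 64)  # matching class prototypes
fixed = nn.functional.cosine_similarity(query, support, dim=-1)  # pre-defined metric
learned = RelationModule()(query, support).squeeze(-1)           # trainable metric
print(fixed.shape, learned.shape)  # torch.Size([5]) torch.Size([5])
```

The fixed metric needs no training but cannot adapt to the task; the relation module is trained end-to-end with the rest of the detector.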
1904.02317 | 2929862222 | The current advances in object detection depend on large-scale datasets to get good performance. However, there may not always be sufficient samples in many scenarios, which leads to research on few-shot detection as well as its extreme variant, one-shot detection. In this paper, one-shot detection is formulated as a conditional probability problem. With this insight, a novel one-shot conditional object detection (OSCD) framework, referred to as Comparison Network (ComparisonNet), has been proposed. Specifically, query and target image features are extracted through a Siamese network as mapped metrics of marginal probabilities. A two-stage detector for OSCD is introduced to compare the extracted query and target features with the learnable metric to approach the optimized non-linear conditional probability. Once trained, ComparisonNet can detect objects of both seen and unseen classes without further training, with the additional advantages of being class-agnostic, training-free for unseen classes, and free of catastrophic forgetting. Experiments show that the proposed approach achieves state-of-the-art performance on the proposed datasets of Fashion-MNIST and PASCAL VOC. | One-shot learning is a basic task and it can be easily extended to few-shot learning @cite_3 . Performance on few-shot detection will improve once there are advances in OSCD. Therefore, we focus on OSCD in this paper. In principle, OSCD is consistent with visual search tasks such as template matching @cite_13 @cite_30 @cite_19 , image retrieval @cite_29 @cite_47 @cite_42 @cite_15 and person re-identification @cite_20 @cite_12 @cite_2 . However, it is implemented with an object detection pipeline in the same manner as common object detection tasks. | {
"cite_N": [
"@cite_30",
"@cite_15",
"@cite_29",
"@cite_42",
"@cite_3",
"@cite_19",
"@cite_2",
"@cite_47",
"@cite_13",
"@cite_12",
"@cite_20"
],
"mid": [
"2560741668",
"2963090248",
"2963166708",
"1913628733",
"2964105864",
"1936896434",
"2751519608",
"2544587078",
"2137285991",
"2098807270",
"1979260620"
],
"abstract": [
"We propose a novel measure for template matching named Deformable Diversity Similarity – based on the diversity of feature matches between a target image window and the template. We rely on both local appearance and geometric information that jointly lead to a powerful approach for matching. Our key contribution is a similarity measure, that is robust to complex deformations, significant background clutter, and occlusions. Empirical evaluation on the most up-to-date benchmark shows that our method outperforms the current state-of-the-art in its detection accuracy while improving computational complexity.",
"Deep convolutional neural network models pre-trained for the ImageNet classification task have been successfully adopted to tasks in other domains, such as texture description and object proposal generation, but these tasks require annotations for images in the new domain. In this paper, we focus on a novel and challenging task in the pure unsupervised setting: fine-grained image retrieval. Even with image labels, fine-grained images are difficult to classify, letting alone the unsupervised retrieval task. We propose the selective convolutional descriptor aggregation (SCDA) method. The SCDA first localizes the main object in fine-grained images, a step that discards the noisy background and keeps useful deep descriptors. The selected descriptors are then aggregated and the dimensionality is reduced into a short feature vector using the best practices we found. The SCDA is unsupervised, using no image label or bounding box annotation. Experiments on six fine-grained data sets confirm the effectiveness of the SCDA for fine-grained image retrieval. Besides, visualization of the SCDA features shows that they correspond to visual attributes (even subtle ones), which might explain SCDA’s high-mean average precision in fine-grained retrieval. Moreover, on general image retrieval data sets, the SCDA achieves comparable retrieval results with the state-of-the-art general image retrieval approaches.",
"Abstract: Recently, image representation built upon Convolutional Neural Network (CNN) has been shown to provide effective descriptors for image search, outperforming pre-CNN features as short-vector representations. Yet such models are not compatible with geometry-aware re-ranking methods and still outperformed, on some particular object retrieval benchmarks, by traditional image search systems relying on precise descriptor matching, geometric re-ranking, or query expansion. This work revisits both retrieval stages, namely initial search and re-ranking, by employing the same primitive information derived from the CNN. We build compact feature vectors that encode several image regions without the need to feed multiple inputs to the network. Furthermore, we extend integral images to handle max-pooling on convolutional layer activations, allowing us to efficiently localize matching objects. The resulting bounding box is finally used for image re-ranking. As a result, this paper significantly improves existing CNN-based recognition pipeline: We report for the first time results competing with traditional methods on the challenging Oxford5k and Paris6k datasets.",
"Approximate nearest neighbor search is an efficient strategy for large-scale image retrieval. Encouraged by the recent advances in convolutional neural networks (CNNs), we propose an effective deep learning framework to generate binary hash codes for fast image retrieval. Our idea is that when the data labels are available, binary codes can be learned by employing a hidden layer for representing the latent concepts that dominate the class labels. The utilization of the CNN also allows for learning image representations. Unlike other supervised methods that require pair-wised inputs for binary code learning, our method learns hash codes and image representations in a point-wised manner, making it suitable for large-scale datasets. Experimental results show that our method outperforms several state-of-the-art hashing algorithms on the CIFAR-10 and MNIST datasets. We further demonstrate its scalability and efficacy on a large-scale dataset of 1 million clothing images.",
"We present a conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only few examples from each. Our method, called the Relation Network (RN), is trained end-to-end from scratch. During meta-learning, it learns to learn a deep distance metric to compare a small number of images within episodes, each of which is designed to simulate the few-shot setting. Once trained, a RN is able to classify images of new classes by computing relation scores between query images and the few examples of each new class without further updating the network. Besides providing improved performance on few-shot learning, our framework is easily extended to zero-shot learning. Extensive experiments on five benchmarks demonstrate that our simple approach provides a unified and effective approach for both of these two tasks.",
"We present a new approach to wide baseline matching. We propose to use a hierarchical decomposition of the image domain and coarse-to-fine selection of regions to match. In contrast to interest point matching methods, which sample salient regions to reduce the cost of comparing all regions in two images, our method eliminates regions systematically to achieve efficiency. One advantage of our approach is that it is not restricted to covariant salient regions, which is too restrictive under large viewpoint and leads to few corresponding regions. Affine invariant matching of regions in the hierarchy is achieved efficiently by a coarse-to-fine search of the affine space. Experiments on two benchmark datasets shows that our method finds more correct correspondence of the image (with fewer false alarms) than other wide baseline methods on large viewpoint change.",
"This paper targets to bring together the research efforts on two fields that are growing actively in the past few years: multicamera person Re-Identification (ReID) and large-scale image retrieval. We demonstrate that the essentials of image retrieval and person ReID are the same, i.e., measuring the similarity between images. However, person ReID requires more discriminative and robust features to identify the subtle differences of different persons and overcome the large variance among images of the same person. Specifically, we propose a coarse-to-fine (C2F) framework and a Convolutional Neural Network structure named as Conv-Net to tackle the large-scale person ReID as an image retrieval task. Given a query person image, the C2F firstly employ Conv-Net to extract a compact descriptor and perform the coarse-level search. A robust descriptor conveying more spatial cues is hence extracted to perform the fine-level search. Extensive experimental results show that the proposed method outperforms existing methods on two public datasets. Further, the evaluation on a large-scale Person-520K dataset demonstrates that our work is significantly more efficient than existing works, e.g., only needs 180ms to identify a query person from 520K images.",
"While deep learning has become a key ingredient in the top performing methods for many computer vision tasks, it has failed so far to bring similar improvements to instance-level image retrieval. In this article, we argue that reasons for the underwhelming results of deep methods on image retrieval are threefold: (1) noisy training data, (2) inappropriate deep architecture, and (3) suboptimal training procedure. We address all three issues. First, we leverage a large-scale but noisy landmark dataset and develop an automatic cleaning method that produces a suitable training set for deep retrieval. Second, we build on the recent R-MAC descriptor, show that it can be interpreted as a deep and differentiable architecture, and present improvements to enhance it. Last, we train this network with a siamese architecture that combines three streams with a triplet loss. At the end of the training process, the proposed architecture produces a global image representation in a single forward pass that is well suited for image retrieval. Extensive experiments show that our approach significantly outperforms previous retrieval approaches, including state-of-the-art methods based on costly local descriptor indexing and spatial verification. On Oxford 5k, Paris 6k and Holidays, we respectively report 94.7, 96.6, and 94.8 mean average precision. Our representations can also be heavily compressed using product quantization with little loss in accuracy.",
"Fast-Match is a fast algorithm for approximate template matching under 2D affine transformations that minimizes the Sum-of-Absolute-Differences (SAD) error measure. There is a huge number of transformations to consider but we prove that they can be sampled using a density that depends on the smoothness of the image. For each potential transformation, we approximate the SAD error using a sub linear algorithm that randomly examines only a small number of pixels. We further accelerate the algorithm using a branch-and-bound scheme. As images are known to be piecewise smooth, the result is a practical affine template matching algorithm with approximation guarantees, that takes a few seconds to run on a standard machine. We perform several experiments on three different datasets, and report very good results. To the best of our knowledge, this is the first template matching algorithm which is guaranteed to handle arbitrary 2D affine transformations.",
"Solving the person re-identification problem involves matching observation s of individuals across disjoint camera views. The problem becomes particularly hard in a busy public scene as the number of possible matches is very high. This is further compounded by significant appearance changes due to varying lighting conditions, vie wing angles and body poses across camera views. To address this problem, existing approaches focus on extracting or learning discriminative features followed by template matching using a distance measure. The novelty of this work is that we reformulate the person reidentification problem as a ranking problem and learn a subspace where th e potential true match is given highest ranking rather than any direct distance measure. By doing so, we convert the person re-identification problem from an absolute scoring p roblem to a relative ranking problem. We further develop an novel Ensemble RankSVMto overcome the scalability limitation problem suffered by existing SVM-based ranking methods. This new model reduces significantly memory usage therefore is much more scalable, whilst maintaining high-level performance. We present extensive experiments to demonstrate the performance gain of the proposed ranking approach over existing template matching and classification models.",
"In this paper, we present an appearance-based method for person re-identification. It consists in the extraction of features that model three complementary aspects of the human appearance: the overall chromatic content, the spatial arrangement of colors into stable regions, and the presence of recurrent local motifs with high entropy. All this information is derived from different body parts, and weighted opportunely by exploiting symmetry and asymmetry perceptual principles. In this way, robustness against very low resolution, occlusions and pose, viewpoint and illumination changes is achieved. The approach applies to situations where the number of candidates varies continuously, considering single images or bunch of frames for each individual. It has been tested on several public benchmark datasets (ViPER, iLIDS, ETHZ), gaining new state-of-the-art performances."
]
} |
1904.02317 | 2929862222 | The current advances in object detection depend on large-scale datasets to get good performance. However, there may not always be sufficient samples in many scenarios, which leads to research on few-shot detection as well as its extreme variant, one-shot detection. In this paper, one-shot detection is formulated as a conditional probability problem. With this insight, a novel one-shot conditional object detection (OSCD) framework, referred to as Comparison Network (ComparisonNet), has been proposed. Specifically, query and target image features are extracted through a Siamese network as mapped metrics of marginal probabilities. A two-stage detector for OSCD is introduced to compare the extracted query and target features with the learnable metric to approach the optimized non-linear conditional probability. Once trained, ComparisonNet can detect objects of both seen and unseen classes without further training, with the additional advantages of being class-agnostic, training-free for unseen classes, and free of catastrophic forgetting. Experiments show that the proposed approach achieves state-of-the-art performance on the proposed datasets of Fashion-MNIST and PASCAL VOC. | Previous works @cite_32 @cite_45 for OSCD follow the sliding-window paradigm, in which a classifier is applied on a dense image grid. They use handcrafted features to represent image patches. If a grid cell and the query image have a high similarity in feature space, that cell may contain an object instance. However, the sliding-window strategy is not flexible for objects of different scales and aspect ratios, and hand-engineered features tend to perform poorly under viewpoint and intra-class variation (a toy sliding-window sketch follows this entry's references). | {
"cite_N": [
"@cite_45",
"@cite_32"
],
"mid": [
"2271805342",
"2155080527"
],
"abstract": [
"One shot, generic object detection involves searching for a single query object in a larger target image. Relevant approaches have benefited from features that typically model the local similarity patterns. In this paper, we combine local similarity (encoded by local descriptors) with a global context (i.e., a graph structure) of pairwise affinities among the local descriptors, embedding the query descriptors into a low dimensional but discriminatory subspace. Unlike principal components that preserve global structure of feature space, we actually seek a linear approximation to the Laplacian eigenmap that permits us a locality preserving embedding of high dimensional region descriptors. Our second contribution is an accelerated but exact computation of matrix cosine similarity as the decision rule for detection, obviating the computationally expensive sliding window search. We leverage the power of Fourier transform combined with integral image to achieve superior runtime efficiency that allows us to test multiple hypotheses (for pose estimation) within a reasonably short time. Our approach to one shot detection is training-free, and experiments on the standard data sets confirm the efficacy of our model. Besides, low computation cost of the proposed (codebook-free) object detector facilitates rather straightforward query detection in large data sets including movie videos.",
"We present a generic detection localization algorithm capable of searching for a visual object of interest without training. The proposed method operates using a single example of an object of interest to find similar matches, does not require prior knowledge (learning) about objects being sought, and does not require any preprocessing step or segmentation of a target image. Our method is based on the computation of local regression kernels as descriptors from a query, which measure the likeness of a pixel to its surroundings. Salient features are extracted from said descriptors and compared against analogous features from the target image. This comparison is done using a matrix generalization of the cosine similarity measure. We illustrate optimality properties of the algorithm using a naive-Bayes framework. The algorithm yields a scalar resemblance map, indicating the likelihood of similarity between the query and all patches in the target image. By employing nonparametric significance tests and nonmaxima suppression, we detect the presence and location of objects similar to the given query. The approach is extended to account for large variations in scale and rotation. High performance is demonstrated on several challenging data sets, indicating successful detection of objects in diverse contexts and under different imaging conditions."
]
} |
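A toy sketch of the sliding-window paradigm described in the entry above, using the simplest possible "handcrafted feature" (the raw flattened patch) and cosine similarity; the cited methods use far richer local descriptors. The fixed window size also illustrates the scale and aspect-ratio inflexibility noted in the text.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def sliding_window_detect(target, query, stride=4, thresh=0.98):
    """Score every window of the target against the query feature."""
    h, w = query.shape
    q = query.ravel()
    hits = []
    for y in range(0, target.shape[0] - h + 1, stride):
        for x in range(0, target.shape[1] - w + 1, stride):
            patch = target[y:y + h, x:x + w].ravel()
            score = cosine(patch, q)
            if score >= thresh:
                hits.append((y, x, score))
    return hits

target = np.random.rand(64, 64)
query = target[20:36, 20:36].copy()  # plant the query inside the target
# Non-negative pixel vectors make all cosines fairly high, hence the tight
# threshold; in practice non-maxima suppression would prune overlapping hits.
print(sliding_window_detect(target, query))
```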
1904.02317 | 2929862222 | The current advances in object detection depend on large-scale datasets to get good performance. However, there may not always be sufficient samples in many scenarios, which leads to research on few-shot detection as well as its extreme variant, one-shot detection. In this paper, one-shot detection is formulated as a conditional probability problem. With this insight, a novel one-shot conditional object detection (OSCD) framework, referred to as Comparison Network (ComparisonNet), has been proposed. Specifically, query and target image features are extracted through a Siamese network as mapped metrics of marginal probabilities. A two-stage detector for OSCD is introduced to compare the extracted query and target features with the learnable metric to approach the optimized non-linear conditional probability. Once trained, ComparisonNet can detect objects of both seen and unseen classes without further training, with the additional advantages of being class-agnostic, training-free for unseen classes, and free of catastrophic forgetting. Experiments show that the proposed approach achieves state-of-the-art performance on the proposed datasets of Fashion-MNIST and PASCAL VOC. | It should be noted that both the classical OSCD works @cite_32 @cite_6 and the aforementioned visual tracking works @cite_43 @cite_31 implement the similarity calculation by comparing the target features @math and query features @math with a pre-defined metric @math . @cite_32 @cite_6 detect objects by computing the cosine similarity between the query and target features. The correlation operation in @cite_43 @cite_31 can also be viewed as a fixed, manually defined metric (a short sketch of this correlation view follows this entry's references). Although these metric-based detection methods have made significant progress, a fixed pre-defined metric may not be the best choice in the few-shot regime @cite_3 . Besides, hand-crafting an appropriate metric @math for a specific task is laborious. | {
"cite_N": [
"@cite_32",
"@cite_3",
"@cite_6",
"@cite_43",
"@cite_31"
],
"mid": [
"2155080527",
"2964105864",
"",
"2470394683",
"2799058067"
],
"abstract": [
"We present a generic detection localization algorithm capable of searching for a visual object of interest without training. The proposed method operates using a single example of an object of interest to find similar matches, does not require prior knowledge (learning) about objects being sought, and does not require any preprocessing step or segmentation of a target image. Our method is based on the computation of local regression kernels as descriptors from a query, which measure the likeness of a pixel to its surroundings. Salient features are extracted from said descriptors and compared against analogous features from the target image. This comparison is done using a matrix generalization of the cosine similarity measure. We illustrate optimality properties of the algorithm using a naive-Bayes framework. The algorithm yields a scalar resemblance map, indicating the likelihood of similarity between the query and all patches in the target image. By employing nonparametric significance tests and nonmaxima suppression, we detect the presence and location of objects similar to the given query. The approach is extended to account for large variations in scale and rotation. High performance is demonstrated on several challenging data sets, indicating successful detection of objects in diverse contexts and under different imaging conditions.",
"We present a conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only few examples from each. Our method, called the Relation Network (RN), is trained end-to-end from scratch. During meta-learning, it learns to learn a deep distance metric to compare a small number of images within episodes, each of which is designed to simulate the few-shot setting. Once trained, a RN is able to classify images of new classes by computing relation scores between query images and the few examples of each new class without further updating the network. Besides providing improved performance on few-shot learning, our framework is easily extended to zero-shot learning. Extensive experiments on five benchmarks demonstrate that our simple approach provides a unified and effective approach for both of these two tasks.",
"",
"The problem of arbitrary object tracking has traditionally been tackled by learning a model of the object’s appearance exclusively online, using as sole training data the video itself. Despite the success of these methods, their online-only approach inherently limits the richness of the model they can learn. Recently, several attempts have been made to exploit the expressive power of deep convolutional networks. However, when the object to track is not known beforehand, it is necessary to perform Stochastic Gradient Descent online to adapt the weights of the network, severely compromising the speed of the system. In this paper we equip a basic tracking algorithm with a novel fully-convolutional Siamese network trained end-to-end on the ILSVRC15 dataset for object detection in video. Our tracker operates at frame-rates beyond real-time and, despite its extreme simplicity, achieves state-of-the-art performance in multiple benchmarks.",
"Visual object tracking has been a fundamental topic in recent years and many deep learning based trackers have achieved state-of-the-art performance on multiple benchmarks. However, most of these trackers can hardly get top performance with real-time speed. In this paper, we propose the Siamese region proposal network (Siamese-RPN) which is end-to-end trained off-line with large-scale image pairs. Specifically, it consists of Siamese subnetwork for feature extraction and region proposal subnetwork including the classification branch and regression branch. In the inference phase, the proposed framework is formulated as a local one-shot detection task. We can pre-compute the template branch of the Siamese subnetwork and formulate the correlation layers as trivial convolution layers to perform online tracking. Benefit from the proposal refinement, traditional multi-scale test and online fine-tuning can be discarded. The Siamese-RPN runs at 160 FPS while achieving leading performance in VOT2015, VOT2016 and VOT2017 real-time challenges."
]
} |
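A rough sketch of the correlation-as-fixed-metric view from the entry above: in Siamese trackers the exemplar (query) feature map is cross-correlated with the search-region feature map, which is exactly a 2-D convolution with the exemplar as the kernel, i.e. a fixed dot-product metric applied at every position. The shapes below are assumed for illustration, not the trackers' actual configurations.

```python
import torch
import torch.nn.functional as F

target_feat = torch.randn(1, 256, 22, 22)  # search-region feature map
query_feat = torch.randn(1, 256, 6, 6)     # exemplar (query) feature map

# Using the query as a convolution kernel computes a dot product at every
# spatial position: a dense cross-correlation with no learnable metric.
response = F.conv2d(target_feat, query_feat)
print(response.shape)  # torch.Size([1, 1, 17, 17]): a similarity map
```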
1904.02566 | 2928916328 | We propose a simple method for estimating noise level from a single color image. In most image-denoising algorithms, an accurate noise-level estimate results in good denoising performance; however, it is difficult to estimate noise level from a single image because it is an ill-posed problem. We tackle this problem by using prior knowledge that textures are highly correlated between RGB channels and noise is uncorrelated with other signals. We also extended our method to RAW images because they are available in almost all digital cameras and are often used in practical situations. Experiments show the high noise-estimation performance of our method on synthetic noisy images. We also applied our method to natural images, including RAW images, and achieved better noise-estimation performance than conventional methods. | A fast patch-based noise-estimation method has recently been proposed @cite_25 using the Canny edge detector @cite_4 to exclude highly textured areas. This method is fast because of its simplicity, but the parameters of the edge detector have to be set properly to correctly detect areas with rich textures (a toy sketch of this edge-masked estimation follows this entry's references). Learning-based noise-estimation methods @cite_24 and denoising methods @cite_12 @cite_8 using convolutional neural networks @cite_14 have also been proposed recently. These methods achieve high noise-estimation and denoising performance, but the high computational cost of their convolutions hinders real-time use when sufficient computational resources are unavailable, such as on smartphones. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_8",
"@cite_24",
"@cite_25",
"@cite_12"
],
"mid": [
"2163605009",
"2145023731",
"",
"",
"2754272008",
"2798278116"
],
"abstract": [
"We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overriding in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry.",
"This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge.",
"",
"",
"An accurate quantitative noise estimate is required in many image video processing applications like denoising, computer vision, pattern recognition and tracking. But blind and accurate estimation of noise in an unknown image is a challenging task and hence is an open area of research. We propose the first elegant and novel blind noise estimation method based on random image tile selection and statistical sampling theory for estimating standard deviation of zero mean Gaussian and speckle noise in digital images. Randomly selected samples, i.e., pixels with (3 3 ) neighborhood, are checked for availability of edges in the tile. If there is an edge in the tile at more than one neighboring pixel, the tile is excluded. Only non-edge tiles are used for estimation of noise in the tile and subsequently in the image using the concepts of statistical sampling theory. Finally, we propose a supervised curve fitting approach using the proposed noise estimation model for more accurate estimation of standard deviation of the two types of noise. The proposed technique is computationally efficient as it is a selective random sample-based spatial domain technique. Benchmarking with other contemporary techniques published so far shows that the proposed technique clearly outperforms the others by at least 5 improved noise estimates, over a very wide range of noise.",
"In this paper, we consider a typical image blind denoising problem, which is to remove unknown noise from noisy images. As we all know, discriminative learning based methods, such as DnCNN, can achieve state-of-the-art denoising results, but they are not applicable to this problem due to the lack of paired training data. To tackle the barrier, we propose a novel two-step framework. First, a Generative Adversarial Network (GAN) is trained to estimate the noise distribution over the input noisy images and to generate noise samples. Second, the noise patches sampled from the first step are utilized to construct a paired training dataset, which is used, in turn, to train a deep Convolutional Neural Network (CNN) for denoising. Extensive experiments have been done to demonstrate the superiority of our approach in image blind denoising."
]
} |
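A minimal sketch of the patch-based, edge-masked noise estimation idea described in the entry above; it is not the cited algorithm. The Canny thresholds, tile size, and the low-quantile aggregation are all assumed values that would need tuning in practice.

```python
import numpy as np
import cv2  # OpenCV

def estimate_noise_std(img_u8, tile=8, canny_lo=50, canny_hi=150):
    """Estimate sigma from tiles that the edge detector marks as flat."""
    edges = cv2.Canny(img_u8, canny_lo, canny_hi)
    stds = []
    for y in range(0, img_u8.shape[0] - tile + 1, tile):
        for x in range(0, img_u8.shape[1] - tile + 1, tile):
            if edges[y:y + tile, x:x + tile].any():
                continue  # skip edge-rich / textured tiles
            stds.append(img_u8[y:y + tile, x:x + tile].astype(np.float64).std())
    # Flat tiles may still carry weak texture, so use a low quantile
    # rather than the mean as the noise-level estimate.
    return float(np.quantile(stds, 0.1)) if stds else None

flat = np.full((128, 128), 128.0)
noisy = np.clip(flat + np.random.normal(0, 5, flat.shape), 0, 255)
print(estimate_noise_std(noisy.astype(np.uint8)))  # roughly 5
```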
1904.02496 | 2933645907 | In this paper, we present a novel algorithm that combines multi-context term embeddings using a neural classifier and we test this approach on the use case of corpus-based term set expansion. In addition, we present a novel and unique dataset for intrinsic evaluation of corpus-based term set expansion algorithms. We show that, over this dataset, our algorithm provides up to 5 mean average precision points of improvement over the best baseline. | Several works have addressed the term set expansion problem. We focus on corpus-based approaches based on the distributional similarity hypothesis @cite_5 . State-of-the-art techniques return the @math nearest neighbors around the seed terms as the expanded set, where terms are represented by their co-occurrence or embedding vectors in a training corpus according to different context types, such as linear window context @cite_12 @cite_18 @cite_17 @cite_14 @cite_3 @cite_19 , explicit lists @cite_23 @cite_0 @cite_4 , coordinational patterns @cite_0 and unary patterns @cite_17 @cite_7 . In this work, we generalize coordinational patterns, look at additional context types, and combine multiple context-type embeddings (a toy sketch of the nearest-neighbor expansion baseline follows this entry's references). | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_23",
"@cite_5",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"",
"1982242209",
"2777203405",
"2787643138",
"2035432878",
"2891318356",
"2161669948",
"",
"",
"2295058825"
],
"abstract": [
"",
"",
"In this paper, we study the problem of expanding a set of given seed entities into a more complete set by discovering other entities that also belong to the same concept set. A typical example is to use \"Canon\" and \"Nikon\" as seed entities, and derive other entities (e.g., \"Olympus\") in the same concept set of camera brands. In order to discover such relevant entities, we exploit several web data sources, including lists extracted from web pages and user queries from a web search engine. While these web data are highly diverse with rich information that usually cover a wide range of the domains of interest, they tend to be very noisy. We observe that previously proposed random walk based approaches do not perform very well on these noisy data sources. Accordingly, we propose a new general framework based on iterative similarity aggregation, and present detailed experimental results to show that, when using general-purpose web data for set expansion, our approach outperforms previous techniques in terms of both precision and recall.",
"Corpus-based set expansion (i.e., finding the “complete” set of entities belonging to the same semantic class, based on a given corpus and a tiny set of seeds) is a critical task in knowledge discovery. It may facilitate numerous downstream applications, such as information extraction, taxonomy induction, question answering, and web search.",
"This paper is a short empirical study of the performance of centrality and classification based iterative term set expansion methods for distributional semantic models. Iterative term set expansion is an interactive process using distributional semantics models where a user labels terms as belonging to some sought after term set, and a system uses this labeling to supply the user with new, candidate, terms to label, trying to maximize the number of positive examples found. While centrality based methods have a long history in term set expansion, we compare them to classification methods based on the the Simple Margin method, an Active Learning approach to classification using Support Vector Machines. Examining the performance of various centrality and classification based methods for a variety of distributional models over five different term sets, we can show that active learning based methods consistently outperform centrality based methods.",
"We present a corpus-based approach to the class expansion task. For a given set of seed entities we use co-occurrence statistics taken from a text collection to define a membership function that is used to rank candidate entities for inclusion in the set. We describe an evaluation framework that uses data from Wikipedia. The performance of our class extension method improves as the size of the text collection increases.",
"Online social media yields a large-scale corpora which is fairly informative and sometimes includes many up-to-date entities. The challenging task of expanding entity sets on social media text is to extract more uncommon entities only using several seeds already in hand. In this paper, we present an approach which is able to find novel entities by expanding a small initial seed set on Twitter text. Our method first generates candidate sets on the basis of the semantic similarity feature. Then it jointly utilizes 2 text-based features and other 12 ones which carry social media specific information. With the scores on those features, a ranking model is learned by a supervised algorithm to synthetically score each candidate terms and then the final ranked list is taken as the target expanded set. We do experiments with 24 entity classes on the Twitter corpus and in the expanded sets there come many novel entities which have not been completely detected in previous researches. And the experimental results on the datasets of different years can perfectly consist with the objective law that fresh entities change as time goes on.",
"Generating semantic lexicons semi-automatically could be a great time saver, relative to creating them by hand. In this paper, we present an algorithm for extracting potential entries for a category from an on-line corpus, based upon a small set of exemplars. Our algorithm finds more correct terms and fewer incorrect ones than previous work in this area. Additionally, the entries that are generated potentially provide broader coverage of the category than would occur to an individual coding them by hand. Our algorithm finds many terms not included within Wordnet (many more than previous algorithms), and could be viewed as an \"enhancer\" of existing broad-coverage resources.",
"",
"",
"A key challenge of entity set expansion is that multifaceted input seeds can lead to significant incoherence in the result set. In this paper, we present a novel solution to handling multifaceted seeds by combining existing user-generated ontologies with a novel word-similarity metric based on skip-grams. By blending the two resources we are able to produce sparse word ego-networks that are centered on the seed terms and are able to capture semantic equivalence among words. We demonstrate that the resulting networks possess internally-coherent clusters, which can be exploited to provide non-overlapping expansions, in order to reflect different semantic classes of the seeds. Empirical evaluation against state-of-the-art baselines shows that our solution, EgoSet, is able to not only capture multiple facets in the input query, but also generate expansions for each facet with higher precision."
]
} |
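A toy sketch of the k-nearest-neighbour expansion baseline described in the entry above (not the paper's combined-context model): seed terms are averaged into a centroid and the vocabulary is ranked by cosine similarity to it. The embeddings here are random stand-ins for corpus-trained ones.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["canon", "nikon", "olympus", "sony", "paris", "london"]
emb = {w: rng.normal(size=50) for w in vocab}  # random stand-in embeddings

def expand(seeds, k=3):
    centroid = np.mean([emb[s] for s in seeds], axis=0)
    def cos(w):
        v = emb[w]
        return v @ centroid / (np.linalg.norm(v) * np.linalg.norm(centroid))
    return sorted((w for w in vocab if w not in seeds), key=cos, reverse=True)[:k]

# With embeddings trained on a real corpus, the camera brands would rank
# ahead of the city names; with random vectors the order is arbitrary.
print(expand(["canon", "nikon"]))
```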
1904.02348 | 2930719122 | In this paper, we propose a novel space partitioning strategy for implicit hierarchy visualization such that the new plot not only has a tidy layout similar to the treemap, but also is flexible to data changes similar to the Voronoi treemap. To achieve this, we define a new distance function and neighborhood relationship between sites so that space will be divided by axis-aligned segments. Then a sweepline+skyline based heuristic algorithm is proposed to allocate the partitioned spaces to form an orthogonal Voronoi diagram with orthogonal rectangles. To the best of our knowledge, this is the first time a sweepline-based strategy has been used for the Voronoi treemap. Moreover, we design a novel strategy to initialize the diagram status and modify the status update procedure so that the generation of our plot is more effective and efficient. We show that the proposed algorithm has an O(n log n) complexity, which is the same as the state-of-the-art Voronoi treemap. We then show, via experiments on artificial and real-world datasets, the performance of our algorithm in terms of computation time, convergence rate, and aspect ratio. Finally, we discuss the pros and cons of our method and conclude. | In this section, we give an overview of implicit hierarchy visualization methods that are related to our work. We mainly focus on the canvas subdivision strategies used to generate layouts, rather than covering all implicit visualization techniques. Based on whether the sites are referred to during the subdivision, we divide the methods into two clusters: non-site-based methods and site-based methods. Both of them belong to methods with inclusion edge representation according to the design space definition @cite_31 . | {
"cite_N": [
"@cite_31"
],
"mid": [
"2121815329"
],
"abstract": [
"Apart from explicit node-link representations, implicit visualizations and especially the Treemap as their frontrunner have acquired a solid position among the available techniques to visualize hierarchies. Their advantage is a highly space-efficient graphical representation that does not require explicit drawing of edges. In this paper, we survey the design space for this class of visualization techniques. We establish the design space along the four axes of dimensionality, edge representation, node representation, and layout by examining existing implicit hierarchy visualization techniques. The survey is completed by casting some light into regions of the design space that have not yet been explored. Our design space is not a mere theoretical construct, but a practically usable tool for rapid visualization development. To that end, we discuss a software implementation of the introduced design space."
]
} |
1904.02348 | 2930719122 | In this paper, we propose a novel space partitioning strategy for implicit hierarchy visualization such that the new plot not only has a tidy layout similar to the treemap, but also is flexible to data changes similar to the Voronoi treemap. To achieve this, we define a new distance function and neighborhood relationship between sites so that space will be divided by axis-aligned segments. Then a sweepline+skyline based heuristic algorithm is proposed to allocate the partitioned spaces to form an orthogonal Voronoi diagram with orthogonal rectangles. To the best of our knowledge, this is the first time a sweepline-based strategy has been used for the Voronoi treemap. Moreover, we design a novel strategy to initialize the diagram status and modify the status update procedure so that the generation of our plot is more effective and efficient. We show that the proposed algorithm has an O(n log n) complexity, which is the same as the state-of-the-art Voronoi treemap. We then show, via experiments on artificial and real-world datasets, the performance of our algorithm in terms of computation time, convergence rate, and aspect ratio. Finally, we discuss the pros and cons of our method and conclude. | Implicit hierarchy visualization methods that partition the whole space without considering the sites are treated as non-site-based methods, such as the treemap. These methods position the data by following rules or heuristics in order to obtain the expected configurations, and are therefore sometimes also called heuristic-based algorithms. Starting from the proposal of the original treemap in 1992 @cite_12 , a large number of variants have been proposed in the literature @cite_11 @cite_23 (a compact sketch of the original slice-and-dice layout follows this entry's references). | {
"cite_N": [
"@cite_23",
"@cite_12",
"@cite_11"
],
"mid": [
"",
"2106588364",
"2064657914"
],
"abstract": [
"",
"The traditional approach to representing tree structures is as a rooted, directed graph with the root node at the top of the page and children nodes below the parent node with lines connecting them (Figure 1). Knuth (1968, p. 305-313) has a long discussion about this standard representation, especially why the root is at the top and he offers several alternatives including brief mention of a space-filling approach. However, the remainder of his presentation and most other discussions of trees focus on various node and edge representations. By contrast, this paper deals with a two-dimensional (2-d) space-filling approach in which each node is a rectangle whose area is proportional to some attribute such as node size.",
"This article summarises the current state of research into multiple tree visualisations. It discusses the spectrum of current representation techniques used on single trees, pairs of trees and finally multiple trees, in order to identify which representations are best suited to particular tasks and to find gaps in the representation space, in which opportunities for future multiple tree visualisation research may exist. The application areas from where multiple tree data are derived are enumerated, and the distinct structures that multiple trees make in combination with each other and the effect on subsequent approaches to their visualisation are discussed, along with the basic high-level goals of existing multiple tree visualisations."
]
} |
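A compact sketch of the classic slice-and-dice treemap layout from the original 1992 paper cited above: at each level the parent rectangle is cut into strips proportional to the children's sizes, alternating the cut direction per depth. The tiny tree at the bottom is made-up demo data.

```python
def slice_and_dice(node, x, y, w, h, depth=0, out=None):
    """Recursively assign each node a rectangle (x, y, w, h)."""
    out = [] if out is None else out
    out.append((node["name"], x, y, w, h))
    children = node.get("children", [])
    total = sum(c["size"] for c in children)
    offset = 0.0
    for c in children:
        frac = c["size"] / total
        if depth % 2 == 0:  # cut vertically: children side by side
            slice_and_dice(c, x + offset * w, y, frac * w, h, depth + 1, out)
        else:               # cut horizontally: children stacked
            slice_and_dice(c, x, y + offset * h, w, frac * h, depth + 1, out)
        offset += frac
    return out

tree = {"name": "root", "size": 10, "children": [
    {"name": "a", "size": 6},
    {"name": "b", "size": 4, "children": [
        {"name": "b1", "size": 1}, {"name": "b2", "size": 3}]},
]}
for name, x, y, w, h in slice_and_dice(tree, 0, 0, 100, 100):
    print(f"{name}: ({x:.0f},{y:.0f}) {w:.0f}x{h:.0f}")
```

Each leaf's area ends up proportional to its size, which is exactly the property the long-and-thin rectangles of this layout trade off against aspect ratio.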
1904.02348 | 2930719122 | In this paper, we propose a novel space partitioning strategy for implicit hierarchy visualization such that the new plot not only has a tidy layout similar to the treemap, but also is flexible to data changes similar to the Voronoi treemap. To achieve this, we define a new distance function and neighborhood relationship between sites so that space will be divided by axis-aligned segments. Then a sweepline+skyline based heuristic algorithm is proposed to allocate the partitioned spaces to form an orthogonal Voronoi diagram with orthogonal rectangles. To the best of our knowledge, this is the first time a sweepline-based strategy has been used for the Voronoi treemap. Moreover, we design a novel strategy to initialize the diagram status and modify the status update procedure so that the generation of our plot is more effective and efficient. We show that the proposed algorithm has an O(n log n) complexity, which is the same as the state-of-the-art Voronoi treemap. We then show, via experiments on artificial and real-world datasets, the performance of our algorithm in terms of computation time, convergence rate, and aspect ratio. Finally, we discuss the pros and cons of our method and conclude. | Non-rectangular treemaps have also been designed in the literature. The jigsaw map achieves nicely shaped regions and a stable layout by following Hilbert or H curves @cite_7 (a short Hilbert-curve indexing sketch follows this entry's references); however, it generates irregular shapes that are hard to compare visually. A modification was later proposed that splits the space into rectangles @cite_32 . To relax the rectangular constraint, angular treemaps describe a divide-and-conquer method to partition the space into various shapes @cite_13 . Besides that, a treemap layout that produces irregular nested shapes by subdividing along the Gosper curve @cite_28 was also proposed. | {
"cite_N": [
"@cite_28",
"@cite_13",
"@cite_32",
"@cite_7"
],
"mid": [
"1989683591",
"2003653728",
"2072607341",
"2138434937"
],
"abstract": [
"The emergence of very large hierarchies that result from the increase in available data raises many problems of visualization and navigation. On data sets of such scale, classical graph drawing methods do not take advantage of certain human cognitive skills such as shape recognition. These cognitive skills could make it easier to remember the global structure of the data. In this paper, we propose a method that is based on the use of nested irregular shapes. We name it GosperMap as we rely on the use of a Gosper Curve to generate these shapes. By employing human perception mechanisms that were developed by handling, for example, cartographic maps, this technique facilitates the visualization and navigation of a hierarchy. An algorithm has been designed to preserve region containment according to the hierarchy and to set the leaves' sizes proportionally to a property, in such a way that the size of nonleaf regions corresponds to the sum of their children's sizes. Moreover, the input ordering of the hierarchy's nodes is preserved, i.e., the areas that represent two consecutive children of a node in the hierarchy are adjacent to one another. This property is especially useful because it guarantees some stability in our algorithm. We illustrate our technique by providing visualization examples of the repartition of tax money in the US over time. Furthermore, we validate the use of the GosperMap in a professional documentation context and show the stability and ease of memorization for this type of map.",
"Space-filling visualization techniques have proved their capability in visualizing large hierarchical structured data. However, most existing techniques restrict their partitioning process in vertical and horizontal direction only, which cause problem with identifying hierarchical structures. This paper presents a new space-filling method named Angular Treemaps that relax the constraint of the rectangular subdivision. The approach of Angular Treemaps utilizes divide and conquer paradigm to visualize and emphasize large hierarchical structures within a compact and limited display area with better interpretability. Angular Treemaps generate various layouts to highlight hierarchical sub-structure based on user's preferences or system recommendations. It offers flexibility to be adopted into a wider range of applications, regarding different enclosing shapes. Preliminary usability results suggest user's performance by using this technique is improved in locating and identifying categorized analysis tasks.",
"Treemaps are a well known and powerful space-filling visualisation method for displaying hierarchical data. Many alternative treemap algorithms have been proposed, often with the aim being to optimise performance across several criteria, including spatial stability to assist users in locating and monitoring items of interest. In this paper, we demonstrate that spatial stability is not fully captured by the commonly used \"distance change” (DC) metric, and we introduce a new \"location drift” (LD) metric to more fully capture spatial stability. An empirical study examines the validity and usefulness of the location drift metric, showing that it explains some effects on user performance that distance change does not. Next, we introduce \"Hilbert” and \"Moore” treemap algorithms, which are designed to achieve high spatial stability. We assess their performance in comparison to other treemaps, showing that Hilbert and Moore treemaps perform well across all stability metrics.",
"A recent line of treemap research has focused on layout algorithms that optimize properties such as stability, preservation of ordering information, and aspect ratio of rectangles. No ideal treemap layout algorithm has been found, and so it is natural to explore layouts that produce nonrectangular regions. This note describes a connection between space-filling visualizations and the mathematics of space-filling curves, and uses that connection to characterize a family of layout algorithms which produce nonrectangular regions but enjoy geometric continuity under changes to the data and legibility even for highly unbalanced trees."
]
} |
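As a hedged illustration of the space-filling-curve layouts discussed above (jigsaw maps over Hilbert or H curves, and the Gosper-curve layout), the following Python sketch, which is not code from any cited paper, lays leaf weights out as contiguous runs of cells along a Hilbert curve; the grid order and the rounding of run lengths are illustrative choices.

```python
# Hypothetical sketch: contiguous runs of grid cells along a Hilbert curve
# yield connected, roughly compact, non-rectangular regions per leaf.

def hilbert_d2xy(order, d):
    """Map distance d along a Hilbert curve over a 2^order x 2^order grid to (x, y)."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def jigsaw_layout(weights, order=3):
    """Assign each leaf a run of cells along the curve, proportional to its weight."""
    n_cells = (1 << order) ** 2
    total = sum(weights)
    grid, d = {}, 0
    for leaf, w in enumerate(weights):
        run = max(1, round(w / total * n_cells))
        for _ in range(run):
            if d >= n_cells:
                break
            grid[hilbert_d2xy(order, d)] = leaf
            d += 1
    return grid

if __name__ == "__main__":
    cells = jigsaw_layout([5, 3, 2], order=3)
    for y in range(8):
        print("".join(str(cells.get((x, y), ".")) for x in range(8)))
```

Because consecutive curve positions are always adjacent cells, each leaf's region stays connected, which is the property these layouts trade rectangularity for.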
1904.02348 | 2930719122 | In this paper, we propose a novel space partitioning strategy for implicit hierarchy visualization such that the new plot not only has a tidy layout similar to the treemap, but is also flexible to data changes, similar to the Voronoi treemap. To achieve this, we define a new distance function and neighborhood relationship between sites so that space is divided by axis-aligned segments. A sweepline+skyline based heuristic algorithm is then proposed to allocate the partitioned spaces to form an orthogonal Voronoi diagram composed of orthogonal rectangles. To the best of our knowledge, this is the first use of a sweepline-based strategy for the Voronoi treemap. Moreover, we design a novel strategy to initialize the diagram status and modify the status update procedure so that the generation of our plot is more effective and efficient. We show that the proposed algorithm has O(n log(n)) complexity, the same as the state-of-the-art Voronoi treemap. We then demonstrate, via experiments on artificial and real-world datasets, the performance of our algorithm in terms of computation time, convergence rate, and aspect ratio. Finally, we discuss the pros and cons of our method and conclude. | Neighborhood treemap (Nmap) @cite_17 , which successively bisects a set of pre-defined sites in the horizontal or vertical direction and then scales the bisections to match the value of each site, is also a site-based method. Although no distance function is used during the segmentation, Nmap still needs sites representing the similarity relationships of the data elements to be positioned in the canvas. Thus, Nmap preserves similarity relationships among data elements very well. However, there is no evidence that Nmap can produce stable layouts with dynamic data. Circle packing @cite_26 can also be treated as a site-based method, since the generation of its layouts is based on the center of each circle, as can the recently proposed bubble treemaps @cite_2 . | {
"cite_N": [
"@cite_26",
"@cite_2",
"@cite_17"
],
"mid": [
"2056273964",
"2753405889",
"2023283825"
],
"abstract": [
"In this paper a novel approach is described for tree visualization using nested circles. The brother nodes at the same level are represented by externally tangent circles; the tree nodes at different levels are displayed by using 2D nested circles or 3D nested cylinders. A new layout algorithm for tree structure is described. It provides a good overview for large data sets. It is easy to see all the branches and leaves of the tree. The new method has been applied to the visualization of file systems.",
"We present a novel type of circular treemap, where we intentionally allocate extra space for additional visual variables. With this extended visual design space, we encode hierarchically structured data along with their uncertainties in a combined diagram. We introduce a hierarchical and force-based circle-packing algorithm to compute Bubble Treemaps, where each node is visualized using nested contour arcs. Bubble Treemaps do not require any color or shading, which offers additional design choices. We explore uncertainty visualization as an application of our treemaps using standard error and Monte Carlo-based statistical models. To this end, we discuss how uncertainty propagates within hierarchies. Furthermore, we show the effectiveness of our visualization using three different examples: the package structure of Flare, the S&P 500 index, and the US consumer expenditure survey.",
"Space-filling techniques seek to use as much as possible the visual space to represent a dataset, splitting it into regions that represent the data elements. Amongst those techniques, Treemaps have received wide attention due to its simplicity, reduced visual complexity, and compact use of the available space. Several different Treemap algorithms have been proposed, however the core idea is the same, to divide the visual space into rectangles with areas proportional to some data attribute or weight. Although pleasant layouts can be effectively produced by the existing techniques, most of them do not take into account relationships that might exist between different data elements when partitioning the visual space. This violates the distance-similarity metaphor, that is, close rectangles do not necessarily represent similar data elements. In this paper, we propose a novel approach, called Neighborhood Treemap (Nmap), that seeks to solve this limitation by employing a slice and scale strategy where the visual space is successively bisected on the horizontal or vertical directions and the bisections are scaled until one rectangle is defined per data element. Compared to the current techniques with the same similarity preservation goal, our approach presents the best results while being two to three orders of magnitude faster. The usefulness of Nmap is shown by two applications involving the organization of document collections and the construction of cartograms illustrating its effectiveness on different scenarios."
]
} |
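The Nmap "slice and scale" strategy lends itself to a compact sketch. Below is a minimal, hypothetical Python rendering of the idea (bisect the sites at the median of the wider axis, then scale the cut to the two halves' total weights); it is not the authors' implementation and omits the refinements described in the Nmap paper.

```python
# Hypothetical sketch of Nmap-style "slice and scale" bisection.

def nmap(sites, rect):
    """sites: list of (x, y, weight); rect: (x0, y0, x1, y1).
    Returns a list of (site, rect) assignments."""
    if len(sites) == 1:
        return [(sites[0], rect)]
    x0, y0, x1, y1 = rect
    horizontal = (x1 - x0) >= (y1 - y0)        # cut the wider side
    axis = 0 if horizontal else 1
    sites = sorted(sites, key=lambda s: s[axis])
    mid = len(sites) // 2
    left, right = sites[:mid], sites[mid:]
    wl = sum(s[2] for s in left)
    frac = wl / (wl + sum(s[2] for s in right))
    if horizontal:
        cut = x0 + frac * (x1 - x0)            # scale the bisection to the weights
        return nmap(left, (x0, y0, cut, y1)) + nmap(right, (cut, y0, x1, y1))
    cut = y0 + frac * (y1 - y0)
    return nmap(left, (x0, y0, x1, cut)) + nmap(right, (x0, cut, x1, y1))

if __name__ == "__main__":
    pts = [(0.1, 0.2, 4), (0.8, 0.3, 2), (0.5, 0.9, 1), (0.2, 0.7, 3)]
    for site, r in nmap(pts, (0.0, 0.0, 1.0, 1.0)):
        print(site, "->", tuple(round(v, 2) for v in r))
```

Because each split separates sites by position before scaling areas by weight, nearby sites tend to end up in adjacent rectangles, which is how the method preserves similarity relationships.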
1904.02549 | 2930077538 | Face alignment is an active computer vision domain that consists in localizing a set of facial landmarks whose number varies across datasets. State-of-the-art face alignment methods either perform end-to-end regression or refine the shape in a cascaded manner, starting from an initial guess. In this paper, we introduce DeCaFA, an end-to-end deep convolutional cascade architecture for face alignment. DeCaFA uses fully-convolutional stages to keep full spatial resolution throughout the cascade. Between consecutive cascade stages, DeCaFA uses multiple chained transfer layers with spatial softmax to produce landmark-wise attention maps for each of several landmark alignment tasks. Weighted intermediate supervision, as well as efficient feature fusion between the stages, allows the network to learn to progressively refine the attention maps in an end-to-end manner. We show experimentally that DeCaFA significantly outperforms existing approaches on the 300W, CelebA and WFLW databases. In addition, we show that DeCaFA can learn fine alignment with reasonable accuracy from very few images using coarsely annotated data. | Popular examples of cascaded regression methods include SDM @cite_0 : in their pioneering work, Xiong et al. show that applying simple linear regressors to SIFT features in a cascaded manner already provides satisfying alignment results. LBF @cite_22 is a refinement that employs randomized decision trees to dramatically speed up feature extraction. DAN @cite_23 uses deep networks to learn each cascade stage. However, one downside of these approaches is that the update regressors are not learned jointly in an end-to-end fashion, so there is no guarantee that the learned landmark alignment sequences are optimal. MDM @cite_9 improves the feature extraction process by sharing the convolutional layer among all cascade steps, which are performed through a recurrent neural network. This reduces the memory footprint and yields better representation learning as well as a more optimal landmark trajectory throughout the cascade. | {
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_22",
"@cite_23"
],
"mid": [
"2157285372",
"2474575620",
"1998294030",
"2962819150"
],
"abstract": [
"Many computer vision problems (e.g., camera calibration, image alignment, structure from motion) are solved through a nonlinear optimization method. It is generally accepted that 2nd order descent methods are the most robust, fast and reliable approaches for nonlinear optimization of a general smooth function. However, in the context of computer vision, 2nd order descent methods have two main drawbacks: (1) The function might not be analytically differentiable and numerical approximations are impractical. (2) The Hessian might be large and not positive definite. To address these issues, this paper proposes a Supervised Descent Method (SDM) for minimizing a Non-linear Least Squares (NLS) function. During training, the SDM learns a sequence of descent directions that minimizes the mean of NLS functions sampled at different points. In testing, SDM minimizes the NLS objective using the learned descent directions without computing the Jacobian nor the Hessian. We illustrate the benefits of our approach in synthetic and real examples, and show how SDM achieves state-of-the-art performance in the problem of facial feature detection. The code is available at www.humansensing.cs. cmu.edu intraface.",
"Cascaded regression has recently become the method of choice for solving non-linear least squares problems such as deformable image alignment. Given a sizeable training set, cascaded regression learns a set of generic rules that are sequentially applied to minimise the least squares problem. Despite the success of cascaded regression for problems such as face alignment and head pose estimation, there are several shortcomings arising in the strategies proposed thus far. Specifically, (a) the regressors are learnt independently, (b) the descent directions may cancel one another out and (c) handcrafted features (e.g., HoGs, SIFT etc.) are mainly used to drive the cascade, which may be sub-optimal for the task at hand. In this paper, we propose a combined and jointly trained convolutional recurrent neural network architecture that allows the training of an end-to-end to system that attempts to alleviate the aforementioned drawbacks. The recurrent module facilitates the joint optimisation of the regressors by assuming the cascades form a nonlinear dynamical system, in effect fully utilising the information between all cascade levels by introducing a memory unit that shares information across all levels. The convolutional module allows the network to extract features that are specialised for the task at hand and are experimentally shown to outperform hand-crafted features. We show that the application of the proposed architecture for the problem of face alignment results in a strong improvement over the current state-of-the-art.",
"This paper presents a highly efficient, very accurate regression approach for face alignment. Our approach has two novel components: a set of local binary features, and a locality principle for learning those features. The locality principle guides us to learn a set of highly discriminative local binary features for each facial landmark independently. The obtained local binary features are used to jointly learn a linear regression for the final output. Our approach achieves the state-of-the-art results when tested on the current most challenging benchmarks. Furthermore, because extracting and regressing local binary features is computationally very cheap, our system is much faster than previous methods. It achieves over 3, 000 fps on a desktop or 300 fps on a mobile phone for locating a few dozens of landmarks.",
"In this paper, we propose Deep Alignment Network (DAN), a robust face alignment method based on a deep neural network architecture. DAN consists of multiple stages, where each stage improves the locations of the facial landmarks estimated by the previous stage. Our method uses entire face images at all stages, contrary to the recently proposed face alignment methods that rely on local patches. This is possible thanks to the use of landmark heatmaps which provide visual information about landmark locations estimated at the previous stages of the algorithm. The use of entire face images rather than patches allows DAN to handle face images with large variation in head pose and difficult initializations. An extensive evaluation on two publicly available datasets shows that DAN reduces the state-of-the-art failure rate by up to 70 . Our method has also been submitted for evaluation as part of the Menpo challenge."
]
} |
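To make the cascaded-regression recipe concrete, here is a toy numpy sketch of the generic SDM-style update, where each stage fits a linear descent map from features of the current shape to the remaining residual. The features here are simple polynomials of the current estimate rather than SIFT descriptors sampled from an image, so the example only illustrates the mechanics of training and running a cascade, not a working face aligner.

```python
# Toy sketch of SDM-style cascaded regression: s_{k+1} = s_k + feat(s_k) @ R_k.
import numpy as np

rng = np.random.default_rng(0)

def feat(S):
    # stand-in for image features (e.g., SIFT at landmarks): simple
    # nonlinear functions of the current shape estimate
    return np.hstack([S, S ** 2, np.ones((len(S), 1))])

def train_cascade(S_true, S_init, n_stages=4):
    S, stages = S_init.copy(), []
    for _ in range(n_stages):
        Phi = feat(S)
        # stage-k "descent map": least-squares fit onto the residual
        R, *_ = np.linalg.lstsq(Phi, S_true - S, rcond=None)
        stages.append(R)
        S = S + Phi @ R                     # cascade update
    return stages

def run_cascade(stages, S_init):
    S = S_init.copy()
    for R in stages:
        S = S + feat(S) @ R
    return S

S_true = rng.uniform(-1, 1, size=(200, 2))          # 200 samples, one (x, y) landmark
S_init = S_true + rng.normal(0, 0.5, S_true.shape)  # perturbed initialization
stages = train_cascade(S_true, S_init)
print("initial error:", np.abs(S_init - S_true).mean())
print("final error  :", np.abs(run_cascade(stages, S_init) - S_true).mean())
```

Note how each stage is fit independently given the previous stage's output; this is exactly the lack of joint end-to-end training that MDM addresses.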
1904.02549 | 2930077538 | Face alignment is an active computer vision domain that consists in localizing a set of facial landmarks whose number varies across datasets. State-of-the-art face alignment methods either perform end-to-end regression or refine the shape in a cascaded manner, starting from an initial guess. In this paper, we introduce DeCaFA, an end-to-end deep convolutional cascade architecture for face alignment. DeCaFA uses fully-convolutional stages to keep full spatial resolution throughout the cascade. Between consecutive cascade stages, DeCaFA uses multiple chained transfer layers with spatial softmax to produce landmark-wise attention maps for each of several landmark alignment tasks. Weighted intermediate supervision, as well as efficient feature fusion between the stages, allows the network to learn to progressively refine the attention maps in an end-to-end manner. We show experimentally that DeCaFA significantly outperforms existing approaches on the 300W, CelebA and WFLW databases. In addition, we show that DeCaFA can learn fine alignment with reasonable accuracy from very few images using coarsely annotated data. | Furthermore, annotating images in terms of several face landmarks is a time-consuming task. As a result, data is rather scarce and annotated with varying numbers of landmarks. For instance, the 300W database @cite_16 contains approximately 3000 training images labelled with 68 landmarks, whereas the WFLW database @cite_2 contains 7500 images with 98 landmarks. Thus, one may wonder whether all those images can be used within the same framework to learn more robust landmark predictions. In @cite_19 the authors address this problem with a classical multi-task formulation. However, this essentially ignores the intrinsic relationship between the structures of the different landmark alignment tasks. Indeed, if we can predict the positions of 68 landmarks, we can also easily deduce the landmark positions for a coarser markup, such as eye and mouth corners and the nose tip @cite_4 . | {
"cite_N": [
"@cite_19",
"@cite_16",
"@cite_4",
"@cite_2"
],
"mid": [
"2736671157",
"2284800790",
"1834627138",
"2963789946"
],
"abstract": [
"Face alignment is a critical topic in the computer vision community. Numerous efforts have been made and various benchmark datasets have been released in recent decades. However, two significant issues remain in recent datasets, e.g., Intra-Dataset Variation and Inter-Dataset Variation. Inter-Dataset Variation refers to bias on expression, head pose, etc. inside one certain dataset, while Intra-Dataset Variation refers to different bias across different datasets. To address the mentioned problems, we proposed a novel Deep Variation Leveraging Network (DVLN), which consists of two strong coupling sub-networks, e.g., Dataset-Across Network (DA-Net) and Candidate-Decision Network (CD-Net). Extensive evaluations show that our approach demonstrates real-time performance and dramatically outperforms state-of-the-art methods on the challenging 300-W dataset.,,,,,, To address the mentioned problems, we proposed a novel Deep Variation Leveraging Network (DVLN), which consists of two strong coupling sub-networks, e.g., Dataset-Across Network (DA-Net) and Candidate-Decision Network (CD-Net). In particular, DA-Net takes advantage of different characteristics and distributions across different datasets, while CD-Net makes a final decision on candidate hypotheses given by DA-Net to leverage variations within one certain dataset. Extensive evaluations show that our approach demonstrates real-time performance and dramatically outperforms state-of-the-art methods on the challenging 300-W dataset.",
"Computer Vision has recently witnessed great research advance towards automatic facial points detection. Numerous methodologies have been proposed during the last few years that achieve accurate and efficient performance. However, fair comparison between these methodologies is infeasible mainly due to two issues. (a) Most existing databases, captured under both constrained and unconstrained (in-the-wild) conditions have been annotated using different mark-ups and, in most cases, the accuracy of the annotations is low. (b) Most published works report experimental results using different training testing sets, different error metrics and, of course, landmark points with semantically different locations. In this paper, we aim to overcome the aforementioned problems by (a) proposing a semi-automatic annotation technique that was employed to re-annotate most existing facial databases under a unified protocol, and (b) presenting the 300 Faces In-The-Wild Challenge (300-W), the first facial landmark localization challenge that was organized twice, in 2013 and 2015. To the best of our knowledge, this is the first effort towards a unified annotation scheme of massive databases and a fair experimental comparison of existing facial landmark localization systems. The images and annotations of the new testing database that was used in the 300-W challenge are available from http: ibug.doc.ic.ac.uk resources 300-W_IMAVIS .",
"Predicting face attributes in the wild is challenging due to complex face variations. We propose a novel deep learning framework for attribute prediction in the wild. It cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags, but pre-trained differently. LNet is pre-trained by massive general object categories for face localization, while ANet is pre-trained by massive face identities for attribute prediction. This framework not only outperforms the state-of-the-art with a large margin, but also reveals valuable facts on learning face representation. (1) It shows how the performances of face localization (LNet) and attribute prediction (ANet) can be improved by different pre-training strategies. (2) It reveals that although the filters of LNet are fine-tuned only with image-level attribute tags, their response maps over entire images have strong indication of face locations. This fact enables training LNet for face localization with only image-level annotations, but without face bounding boxes or landmarks, which are required by all attribute recognition works. (3) It also demonstrates that the high-level hidden neurons of ANet automatically discover semantic concepts after pre-training with massive face identities, and such concepts are significantly enriched after fine-tuning with attribute tags. Each attribute can be well explained with a sparse linear combination of these concepts.",
"We present a novel boundary-aware face alignment algorithm by utilising boundary lines as the geometric structure of a human face to help facial landmark localisation. Unlike the conventional heatmap based method and regression based method, our approach derives face landmarks from boundary lines which remove the ambiguities in the landmark definition. Three questions are explored and answered by this work: 1. Why using boundary? 2. How to use boundary? 3. What is the relationship between boundary estimation and landmarks localisation? Our boundary-aware face alignment algorithm achieves 3.49 mean error on 300-W Fullset, which outperforms state-of-the-art methods by a large margin. Our method can also easily integrate information from other datasets. By utilising boundary information of 300-W dataset, our method achieves 3.92 mean error with 0.39 failure rate on COFW dataset, and 1.25 mean error on AFLW-Full dataset. Moreover, we propose a new dataset WFLW to unify training and testing across different factors, including poses, expressions, illuminations, makeups, occlusions, and blurriness. Dataset and model are publicly available at https: wywu.github.io projects LAB LAB.html"
]
} |
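As a small illustration of deducing a coarser markup from a denser one, the sketch below maps 68-point predictions to a 5-point markup (eye centers, nose tip, mouth corners). The index groups assume the common iBUG/300W 68-point annotation scheme with 0-based indexing, and should be verified against the actual markup in use.

```python
# Sketch: derive a coarse 5-point markup "for free" from dense 68-point output.
import numpy as np

IBUG68 = {                               # assumed iBUG/300W indices, 0-based
    "left_eye":    list(range(36, 42)),  # 6 points around the left eye
    "right_eye":   list(range(42, 48)),
    "nose_tip":    [30],
    "mouth_left":  [48],
    "mouth_right": [54],
}

def to_coarse(landmarks68):
    """landmarks68: (68, 2) array -> (5, 2) coarse markup (group means)."""
    lm = np.asarray(landmarks68)
    return np.stack([lm[idx].mean(axis=0) for idx in IBUG68.values()])

if __name__ == "__main__":
    fake = np.random.rand(68, 2)         # stand-in for a dense prediction
    print(to_coarse(fake))
```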
1904.02478 | 2930954426 | One of the main problems encountered so far with recurrent neural networks is that they struggle to retain long-term information dependencies in their recurrent connections. Neural Turing Machines (NTMs) attempt to mitigate this issue by providing the neural network with an external portion of memory, in which information can be stored and manipulated later on. The whole mechanism is differentiable end-to-end, allowing the network to learn how to utilise this long-term memory via SGD. This allows NTMs to infer simple algorithms directly from data sequences. Nonetheless, the model can be hard to train due to its large number of parameters and interacting components, and little related work exists. In this work we use an NTM to learn and generalise two arithmetical tasks: binary addition and multiplication. These tasks are two fundamental algorithmic examples in computer science and are considerably more challenging than those previously explored; with them, we aim to shed some light on the capabilities of this neural model. | Previous works investigated the task of learning binary arithmetic with neural networks. For the binary addition task, @cite_21 presented three depth-optimal feedforward neural networks able to perform @math -bit sums. More recently, @cite_11 developed a new model that performs addition in a more parallel way. Similarly, the multiplication task was studied in @cite_5 , where a depth-optimal model was developed. @cite_17 also studied this task, developing a model that is more efficient and faster than the previous ones. @cite_7 studied both problems using feedforward neural networks, finding solutions that are optimal in network depth and polynomially bounded in the number of neurons and synapses. | {
"cite_N": [
"@cite_11",
"@cite_7",
"@cite_21",
"@cite_5",
"@cite_17"
],
"mid": [
"190377865",
"2032690046",
"",
"2141469882",
"2579422619"
],
"abstract": [
"Addition is the most commonly used arithmetic operation and is the speed-limiting element in the core of arithmetic logic unit (ALU) in a microprocessor. Perceptron of feedforward neural networks, inspired by the threshold logic unit neuron model of McCulloch and Pitts, is one of the most important aspects of artificial neural networks (ANN). This paper proposes a design of neural network parallel adder (NNPA) under the framework of multi-layer perceptron (MLP) of binary feedforward neural networks (BFNN). The DNA-like learning algorithm proposed by the present authors is successfully used for training the weight-threshold values of NNPA. Moreover, the efficiency of NNPA is compared with that of the conventional adder such as carry-ripple adder and carry-look-ahead adder. It is shown that some advantages of ANN such as synchronous, parallel and fast speed in information processing are sufficiently taken by the current NNPA.",
"Abstract We design new feed-forward multi-layered neural networks which perform different elementary arithmetic operations, such as bit shifting, addition of N p -bit numbers, and multiplication of two n -bit numbers. All the structures are optimal in depth and are polynomially bounded in the number of neurons and in the number of synapses. The whole set of synaptic couplings and thresholds are obtained exactly.",
"",
"An artificial neural network (ANN) is commonly modeled by a threshold circuit, a network of interconnected processing units called linear threshold gates. The depth of a network represents the number of unit delays or the time for parallel computation. The Size of a circuit is the number of gates and measures the amount of hardware. It was known that traditional logic circuits consisting of only unbounded fan-in AND, OR, NOT gates would require at least Ω(log n log log n) depth to compute common arithmetic functions such as the product or the quotient of two n-bit numbers, unless we allow the size (and fan-in) to increase exponentially (in n). We show in this paper that ANNs can be much more powerful than traditional logic circuits. In particular, we prove that that iterated addition can be computed by depth-2 ANN, and multiplication and division can be computed by depth-3 ANNs with polynomial size and polynomially bounded integer weights, respectively. Moreover, it follows from known lower bound results that these ANNs are optimal in depth. We also indicate that these techniques can be applied to construct polynomial-size depth-3 ANN for powering, and depth-4 ANN for multiple product.",
"Almost all signal processing applications demand a considerable number of multiplications. As addition, multiplication is also the speed-limiting element in the core of arithmetic logic unit (ALU) in a microprocessor. In this paper, the perceptron of artificial neural network (ANN) is used to build a binary multiplier, named neural network binary multiplier(NNBM). It is shown that many advantages of ANN such as synchronous, parallel and fast speed on processing information are taken by the multiplier."
]
} |
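For concreteness, the binary addition task studied in these works can be posed as a supervised sequence problem for an NTM-style model. The encoding below is an assumption for illustration, not taken from any cited paper: the two operands are presented bit-by-bit, least significant bit first, and the targets are the bits of their sum.

```python
# Sketch: generating (input, target) pairs for the binary addition task.
import numpy as np

def make_addition_example(n_bits, rng):
    a = rng.integers(0, 2 ** n_bits)
    b = rng.integers(0, 2 ** n_bits)
    to_bits = lambda v, w: np.array([(v >> i) & 1 for i in range(w)])
    x = np.stack([to_bits(a, n_bits), to_bits(b, n_bits)], axis=1)  # (n_bits, 2)
    y = to_bits(a + b, n_bits + 1)                                  # (n_bits + 1,)
    return x, y

rng = np.random.default_rng(0)
x, y = make_addition_example(4, rng)
print("operand bits (LSB first):\n", x)
print("sum bits:", y)
```

Generalization is then tested simply by sampling n_bits larger than any value seen during training.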
1904.02478 | 2930954426 | One of the main problems encountered so far with recurrent neural networks is that they struggle to retain long-term information dependencies in their recurrent connections. Neural Turing Machines (NTMs) attempt to mitigate this issue by providing the neural network with an external portion of memory, in which information can be stored and manipulated later on. The whole mechanism is differentiable end-to-end, allowing the network to learn how to utilise this long-term memory via SGD. This allows NTMs to infer simple algorithms directly from data sequences. Nonetheless, the model can be hard to train due to its large number of parameters and interacting components, and little related work exists. In this work we use an NTM to learn and generalise two arithmetical tasks: binary addition and multiplication. These tasks are two fundamental algorithmic examples in computer science and are considerably more challenging than those previously explored; with them, we aim to shed some light on the capabilities of this neural model. | The problem common to all these models is that they lack any generalization capability: the networks are trained on binary numbers that are @math bits long and only learn how to operate with @math bits, unable to generalize what they have learned to larger numbers. Differently from these approaches, @cite_14 presents a model based on several GRU layers and kernel operations that generalizes almost perfectly, on both tasks, to longer bit sequences than those it was trained on; however, the model is quite complex and requires many layers and careful parameter setting in order to work properly. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2963187627"
],
"abstract": [
"Abstract: Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal NTMs have a weakness that is caused by their sequential nature: they are not parallel and are are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with upto 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization."
]
} |
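A rough numpy sketch of the Neural GPU's core building block, the convolutional gated recurrent unit (CGRU), is given below: the state is a (length, channels) array updated by gated width-3 convolutions, and the same cell is applied repeatedly, which is what allows training on short inputs and running longer ones. Shapes, initialization, and the number of steps are illustrative; the real model also relies on training tricks (parameter-sharing relaxation, dropout, gradient noise) mentioned in the cited abstract.

```python
# Sketch of a convolutional GRU (CGRU) step, in the spirit of the Neural GPU.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv1d(s, W, b):
    """Width-3 'same' convolution: s is (L, C), W is (3, C, C), b is (C,)."""
    pad = np.pad(s, ((1, 1), (0, 0)))
    return np.stack([pad[i:i + 3].reshape(-1) @ W.reshape(3 * W.shape[1], -1) + b
                     for i in range(s.shape[0])])

def cgru_step(s, params):
    Wu, bu, Wr, br, W, b = params
    u = sigmoid(conv1d(s, Wu, bu))                    # update gate
    r = sigmoid(conv1d(s, Wr, br))                    # reset gate
    return u * s + (1 - u) * np.tanh(conv1d(r * s, W, b))

L, C = 8, 4
rng = np.random.default_rng(0)
params = tuple(rng.normal(0, 0.1, shape) for shape in
               [(3, C, C), (C,), (3, C, C), (C,), (3, C, C), (C,)])
s = rng.normal(size=(L, C))
for _ in range(L):                                    # apply the same cell L times
    s = cgru_step(s, params)
print(s.shape)
```

Because the cell is convolutional and shared across steps, the parameter count is independent of input length, unlike an unrolled NTM controller.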
1904.02541 | 2934024281 | Due to their practical importance, dominating set problems in graphs have been extensively studied in the past, and different formulations of these problems are presented in the literature. This paper focuses on two problems: the weakly convex dominating set problem (WCVXDSP) and the convex dominating set problem (CVXDSP). It introduces two integer linear programming (ILP) formulations for CVXDSP and one ILP model for WCVXDSP, as well as a proof of equivalence between the ILP models for CVXDSP. The correctness of all introduced ILP formulations is proved by showing that the optimal solution of each ILP formulation is equal to the optimal solution of the original problem. | The influence of edge subdivision on the convex domination number is discussed in @cite_8 . That paper shows that, in general, the convex domination number can be arbitrarily increased or decreased by an edge subdivision. A study of the weakly convex domination subdivision number and its upper bounds is presented in @cite_1 . | {
"cite_N": [
"@cite_1",
"@cite_8"
],
"mid": [
"1954707945",
"2740018852"
],
"abstract": [
"A set @math is weakly convex in @math if for any two vertices @math there exists an @math --geodesic such that all of its vertices belong to @math . A set @math is a weakly convex dominating set if @math is weakly convex and dominating. The weakly convex domination number @math of a graph @math equals the minimum cardinality of a weakly convex dominating set in @math . The weakly convex domination subdivision number sd @math is the minimum number of edges that must be subdivided (each edge in @math can be subdivided at most once) in order to increase the weakly convex domination number. In this paper we initiate the study of weakly convex domination subdivision number and we establish upper bounds for it.",
"We study the influence of edge subdivision on the convex domination number. We show that in general an edge subdivision can arbitrarily increase and arbitrarily decrease the convex domination number. We also find some bounds for unicyclic graphs and we investigate graphs G for which the convex domination number changes after subdivision of any edge in G."
]
} |
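The two definitions in play here can be checked directly on small graphs. Below is a sketch using networkx (unrelated to the paper's ILP formulations) that tests whether a vertex set S is dominating and whether it is weakly convex, relying on the fact that some u-v geodesic of G lies inside S exactly when the induced subgraph G[S] preserves the u-v distance of G.

```python
# Sketch: brute-force checks of the dominating and weakly convex properties.
import itertools
import networkx as nx

def is_dominating(G, S):
    S = set(S)
    return all(v in S or S & set(G[v]) for v in G)

def is_weakly_convex(G, S):
    H = G.subgraph(S)
    for u, v in itertools.combinations(S, 2):
        try:
            # some u-v geodesic stays in S iff G[S] preserves the G-distance
            if nx.shortest_path_length(H, u, v) != nx.shortest_path_length(G, u, v):
                return False
        except nx.NetworkXNoPath:
            return False
    return True

G = nx.cycle_graph(6)
S = [0, 1, 2, 3]
print(is_dominating(G, S), is_weakly_convex(G, S))   # True True
```

For convex domination the condition tightens to every u-v geodesic lying in S, which this distance test alone does not capture.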
1904.02142 | 2953130735 | We introduce deep inside-outside recursive autoencoders (DIORA), a fully-unsupervised method for discovering syntax that simultaneously learns representations for the constituents within the induced tree. Our approach predicts each word in an input sentence conditioned on the rest of the sentence and uses inside-outside dynamic programming to consider all possible binary trees over the sentence. At test time, the CKY algorithm extracts the highest-scoring parse. DIORA achieves a new state-of-the-art F1 in unsupervised binary constituency parsing (unlabeled) on two benchmark datasets, WSJ and MultiNLI. | Unsupervised learning of syntactic structure has been an active research area, including for unsupervised segmentation and unsupervised dependency parsing @cite_8 . Some models exploit the availability of parallel corpora in multiple languages. Others have shown that dependency parsing can be used for unsupervised constituency parsing, or that pruning a random subset of possible trees is effective. These approaches are not necessarily orthogonal to DIORA; for instance, our model may benefit when combined with an unsupervised dependency parser. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2250520759"
],
"abstract": [
"Many statistical learning problems in NLP call for local model search methods. But accuracy tends to suffer with current techniques, which often explore either too narrowly or too broadly: hill-climbers can get stuck in local optima, whereas samplers may be inefficient. We propose to arrange individual local optimizers into organized networks. Our building blocks are operators of two types: (i) transform, which suggests new places to search, via non-random restarts from already-found local optima; and (ii) join, which merges candidate solutions to find better optima. Experiments on grammar induction show that pursuing different transforms (e.g., discarding parts of a learned model or ignoring portions of training data) results in improvements. Groups of locally-optimal solutions can be further perturbed jointly, by constructing mixtures. Using these tools, we designed several modular dependency grammar induction networks of increasing complexity. Our complete system achieves 48.6 accuracy (directed dependency macro-average over all 19 languages in the 2006 7 CoNLL data) — more than 5 higher than the previous state-of-the-art."
]
} |
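To make the chart computation concrete, the toy sketch below runs a CKY-style inside pass that aggregates scores over all binary trees of a sentence, the dynamic program DIORA builds on. Real DIORA composes constituent vectors with learned networks and adds an outside pass; here each span receives only a scalar log-score, and split points are summed with logsumexp.

```python
# Toy sketch of the inside pass over all binary trees (CKY-style chart).
import numpy as np

def inside_chart(leaf_scores, compose_score):
    n = len(leaf_scores)
    chart = {}                                   # (i, j) -> log inside score of span
    for i, s in enumerate(leaf_scores):
        chart[(i, i + 1)] = s
    for width in range(2, n + 1):
        for i in range(0, n - width + 1):
            j = i + width
            scores = [chart[(i, k)] + chart[(k, j)] + compose_score(i, k, j)
                      for k in range(i + 1, j)]
            chart[(i, j)] = np.logaddexp.reduce(scores)  # sum over split points
    return chart

rng = np.random.default_rng(0)
n = 5
chart = inside_chart(rng.normal(size=n),
                     lambda i, k, j: 0.0)        # uniform composition scores
print("log-sum over all binary trees:", chart[(0, n)])
```

Replacing the logsumexp with a max (and recording argmax split points) turns the same chart into the CKY decoder used at test time.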
1904.02072 | 2931993367 | Receiving timely and relevant security information is crucial for maintaining a high security level on an IT infrastructure. This information can be extracted from Open Source Intelligence published daily by users, security organisations, and researchers. In particular, Twitter has become an information hub for obtaining cutting-edge information about many subjects, including cybersecurity. This work proposes SYNAPSE, a Twitter-based streaming threat monitor that generates a continuously updated summary of the threat landscape related to a monitored infrastructure. Its tweet-processing pipeline is composed of filtering, feature extraction, binary classification, an innovative clustering strategy, and generation of Indicators of Compromise (IoCs). A quantitative evaluation considering all tweets from 80 accounts over more than 8 months (over 195,000 tweets) shows that our approach finds the majority of security-related tweets concerning an example IT infrastructure in a timely manner (true positive rate above 90%), incorrectly selects only a small number of tweets as relevant (false positive rate under 10%), and summarises the results into very few IoCs per day. A qualitative evaluation of the IoCs generated by SYNAPSE demonstrates their relevance (based on the CVSS score and the availability of patches or exploits) and timeliness (based on threat disclosure dates from NVD). | Several works aim to find cybersecurity OSINT about a given IT infrastructure. These rely on a keyword set to govern the selection of tweets, thereby picking only the potentially relevant content. Mittal et al. @cite_36 use a knowledge base built from security concepts to evaluate whether a tweet is relevant for cybersecurity. Similarly, Le Sceller et al. @cite_28 designed a framework that collects tweets on a keyword basis and is capable of extending the keyword set automatically. Ritter et al. @cite_29 search Twitter for occurrences of three specific security event types: DoS attacks, data breaches, and account hijacking. Trabelsi et al. @cite_15 cluster tweets by subject; threats not yet referenced by NVD are considered novel and handled like zero-day vulnerabilities. Dionísio et al. @cite_13 used deep learning techniques to detect and extract security-related information from tweets. Sabottke et al. @cite_25 show that information about exploits is published on Twitter, on average, two days before it is included in NVD. None of these works provides an end-to-end solution for online threat monitoring, mainly because they focus on detection, overlooking summarisation and SOC integration. | {
"cite_N": [
"@cite_36",
"@cite_28",
"@cite_29",
"@cite_15",
"@cite_13",
"@cite_25"
],
"mid": [
"2472414028",
"2743411104",
"2270414365",
"1655081782",
"2934951104",
"1707806712"
],
"abstract": [
"In order to secure vital personal and organizational system we require timely intelligence on cybersecurity threats and vulnerabilities. Intelligence about these threats is generally available in both overt and covert sources like the National Vulnerability Database, CERT alerts, blog posts, social media, and dark web resources. Intelligence updates about cybersecurity can be viewed as temporal events that a security analyst must keep up with so as to secure a computer system. We describe CyberTwitter, a system to discover and analyze cybersecurity intelligence on Twitter and serve as a OSINT (Open-source intelligence) source. We analyze real time information updates, in form of tweets, to extract intelligence about various possible threats. We use the Semantic Web RDF to represent the intelligence gathered and SWRL rules to reason over extracted intelligence to issue alerts for security analysts.",
"Everyday, security experts face a growing number of security events that affecting people well-being, their information systems and sometimes the critical infrastructure. The sooner they can detect and understand these threats, the more they can mitigate and forensically investigate them. Therefore, they need to have a situation awareness of the existing security events and their possible effects. However, given the large number of events, it can be difficult for security analysts and researchers to handle this flow of information in an adequate manner and answer the following questions in near-real time: what are the current security events? How long do they last? In this paper, we will try to answer these issues by leveraging social networks that contain a massive amount of valuable information on many topics. However, because of the very high volume, extracting meaningful information can be challenging. For this reason, we propose SONAR: an automatic, self-learned framework that can detect, geolocate and categorize cyber security events in near-real time over the Twitter stream. SONAR is based on a taxonomy of cyber security events and a set of seed keywords describing type of events that we want to follow in order to start detecting events. Using these seed keywords, it automatically discovers new relevant keywords such as malware names to enhance the range of detection while staying in the same domain. Using a custom taxonomy describing all type of cyber threats, we demonstrate the capabilities of SONAR on a dataset of approximately 47.8 million tweets related to cyber security in the last 9 months. SONAR could efficiently and effectively detect, categorize and monitor cyber security related events before getting on the security news, and it could automatically discover new security terminologies with their event. Additionally, SONAR is highly scalable and customizable by design; therefore we could adapt SONAR framework for virtually any type of events that experts are interested in.",
"Twitter contains a wealth of timely information, however staying on top of breaking events requires that an information analyst constantly scan many sources, leading to information overload. For example, a user might wish to be made aware whenever an infectious disease outbreak takes place, when a new smartphone is announced or when a distributed Denial of Service (DoS) attack might affect an organization's network connectivity. There are many possible event categories an analyst may wish to track, making it impossible to anticipate all those of interest in advance. We therefore propose a weakly supervised approach, in which extractors for new categories of events are easy to define and train, by specifying a small number of seed examples. We cast seed-based event extraction as a learning problem where only positive and unlabeled data is available. Rather than assuming unlabeled instances are negative, as is common in previous work, we propose a learning objective which regularizes the label distribution towards a user-provided expectation. Our approach greatly outperforms heuristic negatives, used in most previous work, in experiments on real-world data. Significant performance gains are also demonstrated over two novel and competitive baselines: semi-supervised EM and one-class support-vector machines. We investigate three security-related events breaking on Twitter: DoS attacks, data breaches and account hijacking. A demonstration of security events extracted by our system is available at: http: kb1.cse.ohio-state.edu:8123 events hacked",
"Staying informed about security vulnerabilities, work-arounds and the availability of patches regarding the components of a given system is crucial to ensure system security. Several channels can be used to the monitor the new vulnerabilities publications, but these channels are scattered. We propose in this paper a vulnerability monitoring system based on twitter analysis that aggregates and analyses different sources of data and extracts zero-day vulnerabilities.",
"To be prepared against cyberattacks, most organizations resort to security information and event management systems to monitor their infrastructures. These systems depend on the timeliness and relevance of the latest updates, patches and threats provided by cyberthreat intelligence feeds. Open source intelligence platforms, namely social media networks such as Twitter, are capable of aggregating a vast amount of cybersecurity-related sources. To process such information streams, we require scalable and efficient tools capable of identifying and summarizing relevant information for specified assets. This paper presents the processing pipeline of a novel tool that uses deep neural networks to process cybersecurity information received from Twitter. A convolutional neural network identifies tweets containing security-related information relevant to assets in an IT infrastructure. Then, a bidirectional long short-term memory network extracts named entities from these tweets to form a security alert or to fill an indicator of compromise. The proposed pipeline achieves an average 94 true positive rate and 91 true negative rate for the classification task and an average F1-score of 92 for the named entity recognition task, across three case study infrastructures.",
"In recent years, the number of software vulnerabilities discovered has grown significantly. This creates a need for prioritizing the response to new disclosures by assessing which vulnerabilities are likely to be exploited and by quickly ruling out the vulnerabilities that are not actually exploited in the real world. We conduct a quantitative and qualitative exploration of the vulnerability-related information disseminated on Twitter. We then describe the design of a Twitter-based exploit detector, and we introduce a threat model specific to our problem. In addition to response prioritization, our detection techniques have applications in risk modeling for cyber-insurance and they highlight the value of information provided by the victims of attacks."
]
} |
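All of the systems above share a keyword-driven selection stage. A minimal, hypothetical sketch of such a stage follows; the asset list is a placeholder, and a real pipeline would pass the selected tweets on to a classifier.

```python
# Sketch: keyword-based tweet selection for a monitored infrastructure.
import re

ASSETS = ["openssl", "apache", "windows server", "cisco", "mysql"]  # example assets
pattern = re.compile(r"\b(" + "|".join(map(re.escape, ASSETS)) + r")\b", re.I)

def select(tweets):
    """Keep only tweets mentioning a monitored asset."""
    return [t for t in tweets if pattern.search(t)]

tweets = [
    "New RCE found in Apache Struts, patch now!",
    "Great coffee this morning :)",
    "Heap overflow in OpenSSL 1.1.1 reported, CVE pending.",
]
print(select(tweets))
```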
1904.02072 | 2931993367 | Receiving timely and relevant security information is crucial for maintaining a high security level on an IT infrastructure. This information can be extracted from Open Source Intelligence published daily by users, security organisations, and researchers. In particular, Twitter has become an information hub for obtaining cutting-edge information about many subjects, including cybersecurity. This work proposes SYNAPSE, a Twitter-based streaming threat monitor that generates a continuously updated summary of the threat landscape related to a monitored infrastructure. Its tweet-processing pipeline is composed of filtering, feature extraction, binary classification, an innovative clustering strategy, and generation of Indicators of Compromise (IoCs). A quantitative evaluation considering all tweets from 80 accounts over more than 8 months (over 195,000 tweets) shows that our approach finds the majority of security-related tweets concerning an example IT infrastructure in a timely manner (true positive rate above 90%), incorrectly selects only a small number of tweets as relevant (false positive rate under 10%), and summarises the results into very few IoCs per day. A qualitative evaluation of the IoCs generated by SYNAPSE demonstrates their relevance (based on the CVSS score and the availability of patches or exploits) and timeliness (based on threat disclosure dates from NVD). | Research-oriented works focus on gathering OSINT and transforming it into machine-readable IoCs to feed Intrusion Detection Systems (IDS), anti-viruses, or other tools. Mathews et al. @cite_20 employ traditional (e.g., logs) and non-traditional (e.g., forums, blog posts) data sources to create an ontology that infers the legitimacy of traffic flows, feeding an IDS with the results. Liao et al. @cite_48 developed a framework for extracting IoCs from technical literature, achieving high recall. In a different work, Zhu et al. @cite_44 present a system that processes the scientific literature on Android malware and extracts features describing the attacks to create a malware detector. The objective of these works is to extract machine-readable information from OSINT, which differs from our goal. | {
"cite_N": [
"@cite_44",
"@cite_48",
"@cite_20"
],
"mid": [
"2538057479",
"2538865281",
"2063948318"
],
"abstract": [
"Malware detection increasingly relies on machine learning techniques, which utilize multiple features to separate the malware from the benign apps. The effectiveness of these techniques primarily depends on the manual feature engineering process, based on human knowledge and intuition. However, given the adversaries' efforts to evade detection and the growing volume of publications on malware behaviors, the feature engineering process likely draws from a fraction of the relevant knowledge. We propose an end-to-end approach for automatic feature engineering. We describe techniques for mining documents written in natural language (e.g. scientific papers) and for representing and querying the knowledge about malware in a way that mirrors the human feature engineering process. Specifically, we first identify abstract behaviors that are associated with malware, and then we map these behaviors to concrete features that can be tested experimentally. We implement these ideas in a system called FeatureSmith, which generates a feature set for detecting Android malware. We train a classifier using these features on a large data set of benign and malicious apps. This classifier achieves a 92.5 true positive rate with only 1 false positives, which is comparable to the performance of a state-of-the-art Android malware detector that relies on manually engineered features. In addition, FeatureSmith is able to suggest informative features that are absent from the manually engineered set and to link the features generated to abstract concepts that describe malware behaviors.",
"To adapt to the rapidly evolving landscape of cyber threats, security professionals are actively exchanging Indicators of Compromise (IOC) (e.g., malware signatures, botnet IPs) through public sources (e.g. blogs, forums, tweets, etc.). Such information, often presented in articles, posts, white papers etc., can be converted into a machine-readable OpenIOC format for automatic analysis and quick deployment to various security mechanisms like an intrusion detection system. With hundreds of thousands of sources in the wild, the IOC data are produced at a high volume and velocity today, which becomes increasingly hard to manage by humans. Efforts to automatically gather such information from unstructured text, however, is impeded by the limitations of today's Natural Language Processing (NLP) techniques, which cannot meet the high standard (in terms of accuracy and coverage) expected from the IOCs that could serve as direct input to a defense system. In this paper, we present iACE, an innovation solution for fully automated IOC extraction. Our approach is based upon the observation that the IOCs in technical articles are often described in a predictable way: being connected to a set of context terms (e.g., \"download\") through stable grammatical relations. Leveraging this observation, iACE is designed to automatically locate a putative IOC token (e.g., a zip file) and its context (e.g., \"malware\", \"download\") within the sentences in a technical article, and further analyze their relations through a novel application of graph mining techniques. Once the grammatical connection between the tokens is found to be in line with the way that the IOC is commonly presented, these tokens are extracted to generate an OpenIOC item that describes not only the indicator (e.g., a malicious zip file) but also its context (e.g., download from an external source). Running on 71,000 articles collected from 45 leading technical blogs, this new approach demonstrates a remarkable performance: it generated 900K OpenIOC items with a precision of 95 and a coverage over 90 , which is way beyond what the state-of-the-art NLP technique and industry IOC tool can achieve, at a speed of thousands of articles per hour. Further, by correlating the IOCs mined from the articles published over a 13-year span, our study sheds new light on the links across hundreds of seemingly unrelated attack instances, particularly their shared infrastructure resources, as well as the impacts of such open-source threat intelligence on security protection and evolution of attack strategies.",
"Traditional intrusion detection and prevention systems have well known limitations that decrease their utility against many kinds of attacks. Creating a new system that collaboratively combines information from traditional and nontraditional sensors to produce new, relevant signatures is one way to deal with these limitations. In this paper, we present a framework that uses this collaborative approach, as well as the details for a network traffic based classifier that shows promise for detecting malicious traffic."
]
} |
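As a much-simplified illustration of converting unstructured OSINT into machine-readable indicators (the goal of iACE and similar systems), the sketch below uses a few regular expressions for common IoC types; the patterns are illustrative and far looser than what a production extractor would require.

```python
# Sketch: naive regex-based IoC extraction from free text.
import re

IOC_PATTERNS = {
    "cve":    re.compile(r"\bCVE-\d{4}-\d{4,7}\b", re.I),
    "ipv4":   re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "md5":    re.compile(r"\b[a-f0-9]{32}\b", re.I),
    "domain": re.compile(r"\b[a-z0-9-]+\.(?:com|net|org|io)\b", re.I),
}

def extract_iocs(text):
    return {kind: sorted(set(p.findall(text))) for kind, p in IOC_PATTERNS.items()}

report = ("The dropper (md5 9e107d9d372bb6826bd81d3542a419d6) beacons to "
          "evil-c2.net at 203.0.113.7 and exploits CVE-2019-0708.")
print(extract_iocs(report))
```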
1904.02072 | 2931993367 | Receiving timely and relevant security information is crucial for maintaining a high security level on an IT infrastructure. This information can be extracted from Open Source Intelligence published daily by users, security organisations, and researchers. In particular, Twitter has become an information hub for obtaining cutting-edge information about many subjects, including cybersecurity. This work proposes SYNAPSE, a Twitter-based streaming threat monitor that generates a continuously updated summary of the threat landscape related to a monitored infrastructure. Its tweet-processing pipeline is composed of filtering, feature extraction, binary classification, an innovative clustering strategy, and generation of Indicators of Compromise (IoCs). A quantitative evaluation considering all tweets from 80 accounts over more than 8 months (over 195,000 tweets) shows that our approach finds the majority of security-related tweets concerning an example IT infrastructure in a timely manner (true positive rate above 90%), incorrectly selects only a small number of tweets as relevant (false positive rate under 10%), and summarises the results into very few IoCs per day. A qualitative evaluation of the IoCs generated by SYNAPSE demonstrates their relevance (based on the CVSS score and the availability of patches or exploits) and timeliness (based on threat disclosure dates from NVD). | With the few exceptions discussed below, most stream clustering algorithms require the target number of clusters ( @math ) to be defined as a parameter and discard elements that do not fit the clusters (outliers) @cite_1 . Feng et al. @cite_42 cluster only the tweets' hashtags, using text similarity to adapt the number of clusters to the collected data. However, this algorithm could miss important information in the security field, as the clustering considers only hashtags, not the full tweet text. Saki et al. @cite_34 use a density-based clustering approach, thereby avoiding the definition of @math ; however, their technique discards outliers, which could lead to missing important emerging threats. The algorithm of Shou et al. @cite_33 allows the value of @math to vary up to an upper limit, but its outlier detection mechanism discards topics that do not gain traction, ignoring possibly important threats that remain unknown for long periods of time. | {
"cite_N": [
"@cite_42",
"@cite_1",
"@cite_33",
"@cite_34"
],
"mid": [
"1514461580",
"2088340225",
"2028544407",
"2298934765"
],
"abstract": [
"What is happening around the world? When and where? Mining the geo-tagged Twitter stream makes it possible to answer the above questions in real-time. Although a single tweet can be short and noisy, proper aggregations of tweets can provide meaningful results. In this paper, we focus on hierarchical spatio-temporal hashtag clustering techniques. Our system has the following features: (1) Exploring events (hashtag clusters) with different space granularity. Users can zoom in and out on maps to find out what is happening in a particular area. (2) Exploring events with different time granularity. Users can choose to see what is happening today or in the past week. (3) Efficient single-pass algorithm for event identification, which provides human-readable hashtag clusters. (4) Efficient event ranking which aims to find burst events and localized events given a particular region and time frame. To support aggregation with different space and time granularity, we propose a data structure called STREAMCUBE, which is an extension of the data cube structure from the database community with spatial and temporal hierarchy. To achieve high scalability, we propose a divide-and-conquer method to construct the STREAMCUBE. To support flexible event ranking with different weights, we proposed a top-k based index. Different efficient methods are used to speed up event similarity computations. Finally, we have conducted extensive experiments on a real twitter data. Experimental results show that our framework can provide meaningful results with high scalability.",
"Data stream mining is an active research area that has recently emerged to discover knowledge from large amounts of continuously generated data. In this context, several data stream clustering algorithms have been proposed to perform unsupervised learning. Nevertheless, data stream clustering imposes several challenges to be addressed, such as dealing with nonstationary, unbounded data that arrive in an online fashion. The intrinsic nature of stream data requires the development of algorithms capable of performing fast and incremental processing of data objects, suitably addressing time and memory limitations. In this article, we present a survey of data stream clustering algorithms, providing a thorough discussion of the main design components of state-of-the-art algorithms. In addition, this work addresses the temporal aspects involved in data stream clustering, and presents an overview of the usually employed experimental methodologies. A number of references are provided that describe applications of data stream clustering in different domains, such as network intrusion detection, sensor networks, and stock market analysis. Information regarding software packages and data repositories are also available for helping researchers and practitioners. Finally, some important issues and open questions that can be subject of future research are discussed.",
"With the explosive growth of microblogging services, short-text messages (also known as tweets) are being created and shared at an unprecedented rate. Tweets in its raw form can be incredibly informative, but also overwhelming. For both end-users and data analysts it is a nightmare to plow through millions of tweets which contain enormous noises and redundancies. In this paper, we study continuous tweet summarization as a solution to address this problem. While traditional document summarization methods focus on static and small-scale data, we aim to deal with dynamic, quickly arriving, and large-scale tweet streams. We propose a novel prototype called Sumblr (SUMmarization By stream cLusteRing) for tweet streams. We first propose an online tweet stream clustering algorithm to cluster tweets and maintain distilled statistics called Tweet Cluster Vectors. Then we develop a TCV-Rank summarization technique for generating online summaries and historical summaries of arbitrary time durations. Finally, we describe a topic evolvement detection method, which consumes online and historical summaries to produce timelines automatically from tweet streams. Our experiments on large-scale real tweets demonstrate the efficiency and effectiveness of our approach.",
"This paper presents an online frame-based clustering algorithm (OFC) for unsupervised classification applications in which data are received in a streaming manner as time passes by with the number of clusters being unknown. This algorithm consists of a number of steps including density-based outlier removal, new cluster generation, and cluster update. It is designed for applications when data samples are received in an online manner in frames. Such frames are first passed through an outlier removal step to generate denoised frames with consistent data samples during transitions times between clusters. A classification step is then applied to find whether frames belong to any of existing clusters. When frames do not get matched to any of existing clusters and certain criteria are met, a new cluster is created in real time and in an on-the-fly manner by using support vector domain descriptors. Experiments involving four synthetic and two real datasets are conducted to show the performance of the introduced clustering algorithm in terms of cluster purity and normalized mutual information. Comparison results with similar clustering algorithms designed for streaming data are also reported exhibiting the effectiveness of the introduced online frame-based clustering algorithm. Online frame-based clustering algorithm without having any knowledge of number of clusters.For applications when samples of a class appear in streaming frames.Superior to existing algorithms applicable to online frame-based clustering."
]
} |
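The related-work passage above contrasts stream-clustering algorithms that fix the number of clusters @math in advance with ones that adapt it to the data. As a minimal illustration of the adaptive idea, the pure-Python sketch below assigns each arriving tweet to its most similar existing cluster or opens a new one, so the cluster count grows with the stream and no element is discarded as an outlier. The Jaccard similarity and the 0.3 threshold are illustrative assumptions, not SYNAPSE's actual pipeline.

```python
# Threshold-based incremental clustering sketch: the number of clusters is
# not fixed a priori, and no tweet is discarded as an outlier.
def tokens(text):
    return set(text.lower().split())

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

def stream_cluster(tweets, threshold=0.3):  # threshold is an assumed value
    clusters = []  # each cluster: {"vocab": set of tokens, "members": [tweet]}
    for tweet in tweets:
        t = tokens(tweet)
        best, best_sim = None, 0.0
        for c in clusters:
            sim = jaccard(t, c["vocab"])
            if sim > best_sim:
                best, best_sim = c, sim
        if best is not None and best_sim >= threshold:
            best["members"].append(tweet)
            best["vocab"] |= t  # grow the cluster vocabulary incrementally
        else:
            clusters.append({"vocab": set(t), "members": [tweet]})
    return clusters
```

A production system would replace the token sets with TF-IDF vectors and add time decay so stale clusters expire, but the control flow above captures the essence of clustering without a preset @math .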
1904.02215 | 2927638312 | A plethora of state estimation techniques have appeared in the last decade using visual data, and more recently with added inertial data. Datasets typically used for evaluation include indoor and urban environments, where supporting videos have shown impressive performance. However, such techniques have not been fully evaluated in challenging conditions, such as the marine domain. In this paper, we compare ten recent open-source packages to provide insights on their performance and guidelines on addressing current challenges. Specifically, we selected direct methods and tightly-coupled optimization techniques that fuse camera and Inertial Measurement Unit (IMU) data together. Experiments are conducted by testing all packages on datasets collected over the years with underwater robots in our laboratory. All the datasets are made available online. | Dense Piecewise Planar Tracking and Mapping (DPPTAM) @cite_18 is a direct visual odometry algorithm for dense reconstruction using a monocular camera. Dense reconstruction is based on detecting planar regions, which are assumed to be homogeneous-color regions. The TUM RGB-D Dataset @cite_12 was used to assess the performance of the proposed approach in the original paper. (An illustrative photometric-error sketch follows this row.) | {
"cite_N": [
"@cite_18",
"@cite_12"
],
"mid": [
"2210972093",
"2021851106"
],
"abstract": [
"This paper proposes a direct monocular SLAM algorithm that estimates a dense reconstruction of a scene in real-time on a CPU. Highly textured image areas are mapped using standard direct mapping techniques [1], that minimize the photometric error across different views. We make the assumption that homogeneous-color regions belong to approximately planar areas. Our contribution is a new algorithm for the estimation of such planar areas, based on the information of a superpixel segmentation and the semidense map from highly textured areas. We compare our approach against several alternatives using the public TUM dataset [2] and additional live experiments with a hand-held camera. We demonstrate that our proposal for piecewise planar monocular SLAM is faster, more accurate and more robust than the piecewise planar baseline [3]. In addition, our experimental results show how the depth regularization of monocular maps can damage its accuracy, being the piecewise planar assumption a reasonable option in indoor scenarios.",
"In this paper, we present a novel benchmark for the evaluation of RGB-D SLAM systems. We recorded a large set of image sequences from a Microsoft Kinect with highly accurate and time-synchronized ground truth camera poses from a motion capture system. The sequences contain both the color and depth images in full sensor resolution (640 × 480) at video frame rate (30 Hz). The ground-truth trajectory was obtained from a motion-capture system with eight high-speed tracking cameras (100 Hz). The dataset consists of 39 sequences that were recorded in an office environment and an industrial hall. The dataset covers a large variety of scenes and camera motions. We provide sequences for debugging with slow motions as well as longer trajectories with and without loop closures. Most sequences were recorded from a handheld Kinect with unconstrained 6-DOF motions but we also provide sequences from a Kinect mounted on a Pioneer 3 robot that was manually navigated through a cluttered indoor environment. To stimulate the comparison of different approaches, we provide automatic evaluation tools both for the evaluation of drift of visual odometry systems and the global pose error of SLAM systems. The benchmark website [1] contains all data, detailed descriptions of the scenes, specifications of the data formats, sample code, and evaluation tools."
]
} |
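DPPTAM, like other direct methods, estimates camera motion by minimizing a photometric error between views rather than matching features. The NumPy sketch below scores a candidate pure 2-D pixel shift; this is a deliberate simplification of the full projective warp that real systems optimize over, and the function names are illustrative.

```python
import numpy as np

def photometric_error(ref, cur, dx, dy):
    """Sum of squared intensity differences after shifting cur by (dx, dy).

    np.roll wraps around at the image borders, which a real system would
    mask out; kept here for brevity.
    """
    shifted = np.roll(np.roll(cur, dy, axis=0), dx, axis=1)
    diff = ref.astype(np.float64) - shifted.astype(np.float64)
    return float((diff ** 2).sum())
```

Minimizing this error over candidate shifts (or, in practice, over SE(3) poses with a Gauss-Newton solver) yields the direct image alignment.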
1904.02215 | 2927638312 | A plethora of state estimation techniques have appeared in the last decade using visual data, and more recently with added inertial data. Datasets typically used for evaluation include indoor and urban environments, where supporting videos have shown impressive performance. However, such techniques have not been fully evaluated in challenging conditions, such as the marine domain. In this paper, we compare ten recent open-source packages to provide insights on their performance and guidelines on addressing current challenges. Specifically, we selected direct methods and tightly-coupled optimization techniques that fuse camera and Inertial Measurement Unit (IMU) data together. Experiments are conducted by testing all packages on datasets collected over the years with underwater robots in our laboratory. All the datasets are made available online. | Direct Monocular SLAM is a direct method that operates on image intensities from a monocular camera @cite_10 both for tracking and mapping, allowing dense 3D reconstruction. It was validated on custom datasets from TUM covering indoor and outdoor environments. While it worked on data from above-water deployments, this package consistently diverged on underwater data, as was also reported earlier @cite_25 . (An illustrative pixel-selection sketch follows this row.) | {
"cite_N": [
"@cite_10",
"@cite_25"
],
"mid": [
"612478963",
"2599972759"
],
"abstract": [
"We propose a direct (feature-less) monocular SLAM algorithm which, in contrast to current state-of-the-art regarding direct methods, allows to build large-scale, consistent maps of the environment. Along with highly accurate pose estimation based on direct image alignment, the 3D environment is reconstructed in real-time as pose-graph of keyframes with associated semi-dense depth maps. These are obtained by filtering over a large number of pixelwise small-baseline stereo comparisons. The explicitly scale-drift aware formulation allows the approach to operate on challenging sequences including large variations in scene scale. Major enablers are two key novelties: (1) a novel direct tracking method which operates on ( sim (3) ), thereby explicitly detecting scale-drift, and (2) an elegant probabilistic solution to include the effect of noisy depth values into tracking. The resulting direct monocular SLAM system runs in real-time on a CPU.",
"The problem of state estimation using primarily visual data has received a lot of attention in the last decade. Several open source packages have appeared addressing the problem, each supported by impressive demonstrations. Applying any of these packages on a new dataset however, has been proven extremely challenging. Suboptimal performance, loss of localization, and challenges in customization have not produced a clear winner. Several other research groups have presented superb performance without releasing the code, sometimes materializing as commercial products. In this paper, ten of the most promising open source packages are evaluated, by cross validating them on the datasets provided for each package and by testing them on eight different datasets collected over the years in our laboratory. Indoor and outdoor, terrestrial and flying vehicles, in addition to underwater robots, cameras, and buoys were used to collect data. An analysis on the motions required for the different approaches and an evaluation of their performance is presented."
]
} |
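The cited system builds semi-dense maps by tracking only pixels with strong image gradient. A crude version of that pixel selection, with an assumed threshold, might look like:

```python
import numpy as np

def high_gradient_mask(img, thresh=20.0):  # thresh is an assumed value
    # Forward-difference gradient magnitude; semi-dense methods track and
    # estimate depth only where this mask is True.
    g = img.astype(np.float64)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, :-1] = np.diff(g, axis=1)
    gy[:-1, :] = np.diff(g, axis=0)
    return np.hypot(gx, gy) > thresh
```

Low-texture underwater imagery leaves such a mask nearly empty, which is consistent with the divergence reported above.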
1904.02215 | 2927638312 | A plethora of state estimation techniques have appeared in the last decade using visual data, and more recently with added inertial data. Datasets typically used for evaluation include indoor and urban environments, where supporting videos have shown impressive performance. However, such techniques have not been fully evaluated in challenging conditions, such as the marine domain. In this paper, we compare ten recent open-source packages to provide insights on their performance and guidelines on addressing current challenges. Specifically, we selected direct methods and tightly-coupled optimization techniques that fuse camera and Inertial Measurement Unit (IMU) data together. Experiments are conducted by testing all packages on datasets collected over the years with underwater robots in our laboratory. All the datasets are made available online. | Semi-Direct Visual Odometry @cite_31 relies on both a direct method for tracking and triangulating pixels with high image gradients and a feature-based method for jointly optimizing structure and motion. It uses the IMU prior for image alignment and can be generalized to multi-camera systems. The proposed system has been tested in a lab setting with different sensors and robots, as well as on the EuRoC @cite_33 and ICL-NUIM @cite_35 datasets. (An illustrative reprojection-error sketch follows this row.) | {
"cite_N": [
"@cite_35",
"@cite_31",
"@cite_33"
],
"mid": [
"2058535340",
"2564632156",
"2396274919"
],
"abstract": [
"We introduce the Imperial College London and National University of Ireland Maynooth (ICL-NUIM) dataset for the evaluation of visual odometry, 3D reconstruction and SLAM algorithms that typically use RGB-D data. We present a collection of handheld RGB-D camera sequences within synthetically generated environments. RGB-D sequences with perfect ground truth poses are provided as well as a ground truth surface model that enables a method of quantitatively evaluating the final map or surface reconstruction accuracy. Care has been taken to simulate typically observed real-world artefacts in the synthetic imagery by modelling sensor noise in both RGB and depth data. While this dataset is useful for the evaluation of visual odometry and SLAM trajectory estimation, our main focus is on providing a method to benchmark the surface reconstruction accuracy which to date has been missing in the RGB-D community despite the plethora of ground truth RGB-D datasets available.",
"Direct methods for visual odometry (VO) have gained popularity for their capability to exploit information from all intensity gradients in the image. However, low computational speed as well as missing guarantees for optimality and consistency are limiting factors of direct methods, in which established feature-based methods succeed instead. Based on these considerations, we propose a semidirect VO (SVO) that uses direct methods to track and triangulate pixels that are characterized by high image gradients, but relies on proven feature-based methods for joint optimization of structure and motion. Together with a robust probabilistic depth estimation algorithm, this enables us to efficiently track pixels lying on weak corners and edges in environments with little or high-frequency texture. We further demonstrate that the algorithm can easily be extended to multiple cameras, to track edges, to include motion priors, and to enable the use of very large field of view cameras, such as fisheye and catadioptric ones. Experimental evaluation on benchmark datasets shows that the algorithm is significantly faster than the state of the art while achieving highly competitive accuracy.",
"This paper presents visual-inertial datasets collected on-board a micro aerial vehicle. The datasets contain synchronized stereo images, IMU measurements and accurate ground truth. The first batch of datasets facilitates the design and evaluation of visual-inertial localization algorithms on real flight data. It was collected in an industrial environment and contains millimeter accurate position ground truth from a laser tracking system. The second batch of datasets is aimed at precise 3D environment reconstruction and was recorded in a room equipped with a motion capture system. The datasets contain 6D pose ground truth and a detailed 3D scan of the environment. Eleven datasets are provided in total, ranging from slow flights under good visual conditions to dynamic flights with motion blur and poor illumination, enabling researchers to thoroughly test and evaluate their algorithms. All datasets contain raw sensor measurements, spatio-temporally aligned sensor data and ground truth, extrinsic and intrinsic calibrations and datasets for custom calibrations."
]
} |
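SVO's feature-based back end jointly optimizes structure and motion by minimizing reprojection errors. The sketch below computes a single such residual under a pinhole model; all names are illustrative and the bundle-adjustment loop is omitted.

```python
import numpy as np

def project(K, p_cam):
    """Pinhole projection of a 3-D point given in camera coordinates."""
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

def reprojection_error(K, R, t, p_world, observed_px):
    # Residual between the detected keypoint and the projected map point;
    # joint optimization minimizes the sum of squares of these residuals
    # over all keyframes and points.
    return np.linalg.norm(observed_px - project(K, R @ p_world + t))
```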
1904.02215 | 2927638312 | A plethora of state estimation techniques have appeared in the last decade using visual data, and more recently with added inertial data. Datasets typically used for evaluation include indoor and urban environments, where supporting videos have shown impressive performance. However, such techniques have not been fully evaluated in challenging conditions, such as the marine domain. In this paper, we compare ten recent open-source packages to provide insights on their performance and guidelines on addressing current challenges. Specifically, we selected direct methods and tightly-coupled optimization techniques that fuse camera and Inertial Measurement Unit (IMU) data together. Experiments are conducted by testing all packages on datasets collected over the years with underwater robots in our laboratory. All the datasets are made available online. | ORB-SLAM2 @cite_9 is a monocular/stereo SLAM system that uses ORB features for tracking, mapping, relocalizing, and loop closing. It was tested on different datasets, including KITTI @cite_41 and EuRoC @cite_33 . The authors extended it to utilize the IMU @cite_43 , although, currently, the extended system is not available as open source. (An illustrative ORB-matching sketch follows this row.) | {
"cite_N": [
"@cite_41",
"@cite_43",
"@cite_9",
"@cite_33"
],
"mid": [
"2115579991",
"2538522345",
"1612997784",
"2396274919"
],
"abstract": [
"We present a novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research. In total, we recorded 6 hours of traffic scenarios at 10-100 Hz using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner and a high-precision GPS IMU inertial navigation system. The scenarios are diverse, capturing real-world traffic situations, and range from freeways over rural areas to inner-city scenes with many static and dynamic objects. Our data is calibrated, synchronized and timestamped, and we provide the rectified and raw image sequences. Our dataset also contains object labels in the form of 3D tracklets, and we provide online benchmarks for stereo, optical flow, object detection and other tasks. This paper describes our recording platform, the data format and the utilities that we provide.",
"In recent years there have been excellent results in visual-inertial odometry techniques, which aim to compute the incremental motion of the sensor with high accuracy and robustness. However, these approaches lack the capability to close loops and trajectory estimation accumulates drift even if the sensor is continually revisiting the same place. In this letter, we present a novel tightly coupled visual-inertial simultaneous localization and mapping system that is able to close loops and reuse its map to achieve zero-drift localization in already mapped areas. While our approach can be applied to any camera configuration, we address here the most general problem of a monocular camera, with its well-known scale ambiguity. We also propose a novel IMU initialization method, which computes the scale, the gravity direction, the velocity, and gyroscope and accelerometer biases, in a few seconds with high accuracy. We test our system in the 11 sequences of a recent micro-aerial vehicle public dataset achieving a typical scale factor error of 1 and centimeter precision. We compare to the state-of-the-art in visual-inertial odometry in sequences with revisiting, proving the better accuracy of our method due to map reuse and no drift accumulation.",
"This paper presents ORB-SLAM, a feature-based monocular simultaneous localization and mapping (SLAM) system that operates in real time, in small and large indoor and outdoor environments. The system is robust to severe motion clutter, allows wide baseline loop closing and relocalization, and includes full automatic initialization. Building on excellent algorithms of recent years, we designed from scratch a novel system that uses the same features for all SLAM tasks: tracking, mapping, relocalization, and loop closing. A survival of the fittest strategy that selects the points and keyframes of the reconstruction leads to excellent robustness and generates a compact and trackable map that only grows if the scene content changes, allowing lifelong operation. We present an exhaustive evaluation in 27 sequences from the most popular datasets. ORB-SLAM achieves unprecedented performance with respect to other state-of-the-art monocular SLAM approaches. For the benefit of the community, we make the source code public.",
"This paper presents visual-inertial datasets collected on-board a micro aerial vehicle. The datasets contain synchronized stereo images, IMU measurements and accurate ground truth. The first batch of datasets facilitates the design and evaluation of visual-inertial localization algorithms on real flight data. It was collected in an industrial environment and contains millimeter accurate position ground truth from a laser tracking system. The second batch of datasets is aimed at precise 3D environment reconstruction and was recorded in a room equipped with a motion capture system. The datasets contain 6D pose ground truth and a detailed 3D scan of the environment. Eleven datasets are provided in total, ranging from slow flights under good visual conditions to dynamic flights with motion blur and poor illumination, enabling researchers to thoroughly test and evaluate their algorithms. All datasets contain raw sensor measurements, spatio-temporally aligned sensor data and ground truth, extrinsic and intrinsic calibrations and datasets for custom calibrations."
]
} |
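ORB-SLAM2's front end rests on detecting and matching ORB features. Assuming OpenCV (opencv-python) is available, a minimal matching step might look like the following; it shows only the feature front end, not the SLAM back end.

```python
import cv2

def orb_matches(img1, img2, n_features=1000):
    # Detect ORB keypoints/descriptors and brute-force match them with the
    # Hamming norm, as is standard for binary descriptors.
    orb = cv2.ORB_create(nfeatures=n_features)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    if d1 is None or d2 is None:  # e.g. texture-poor underwater frames
        return k1, k2, []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return k1, k2, sorted(matcher.match(d1, d2), key=lambda m: m.distance)
```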
1904.02215 | 2927638312 | A plethora of state estimation techniques have appeared in the last decade using visual data, and more recently with added inertial data. Datasets typically used for evaluation include indoor and urban environments, where supporting videos have shown impressive performance. However, such techniques have not been fully evaluated in challenging conditions, such as the marine domain. In this paper, we compare ten recent open-source packages to provide insights on their performance and guidelines on addressing current challenges. Specifically, we selected direct methods and tightly-coupled optimization techniques that fuse camera and Inertial Measurement Unit (IMU) data together. Experiments are conducted by testing all packages on datasets collected over the years with underwater robots in our laboratory. All the datasets are made available online. | Realtime Edge Based Inertial Visual Odometry @cite_29 is specifically designed for Micro Aerial Vehicles (MAVs). In particular, it tracks the pose of a robot by fusing data from a monocular camera and an IMU. The approach first processes the images to detect edges to track and map. An EKF is used for estimating the depth. The system was evaluated using the EuRoC dataset @cite_33 . Its reliance on edge detection resulted in consistent failure across all underwater datasets. (An illustrative Kalman-update sketch follows this row.) | {
"cite_N": [
"@cite_29",
"@cite_33"
],
"mid": [
"2762152057",
"2396274919"
],
"abstract": [
"A working solution for control and teleoperation of Micro Aerial Vehicles using a frontal camera and an inertial measurement unit as sole sensors is presented. The system is an extension of an edge based visual odometry algorithm to integrate inertial sensors. A mixed tightly-loosely coupled approach is used, taking advantage of each sensor in this minimalistic setup, while keeping the complexity low. The system runs completely on board a MAV providing a semidense output that is more useful for navigation than the sparse maps generated by most feature based systems. To the best of the author’s knowledge, the system is the first semidense VO method running fully on board a MAV for vision in the loop control. An extensive evaluation of the method is presented using the EuRoC MAV dataset, that is specially targeted for MAV navigation in realistic situations. Some of the practical issues of teleoperation are also addressed, in particular how data is transmitted and presented to the user. Finally, real life experiments are included to illustrate the performance of the complete system and the teleoperation interface.",
"This paper presents visual-inertial datasets collected on-board a micro aerial vehicle. The datasets contain synchronized stereo images, IMU measurements and accurate ground truth. The first batch of datasets facilitates the design and evaluation of visual-inertial localization algorithms on real flight data. It was collected in an industrial environment and contains millimeter accurate position ground truth from a laser tracking system. The second batch of datasets is aimed at precise 3D environment reconstruction and was recorded in a room equipped with a motion capture system. The datasets contain 6D pose ground truth and a detailed 3D scan of the environment. Eleven datasets are provided in total, ranging from slow flights under good visual conditions to dynamic flights with motion blur and poor illumination, enabling researchers to thoroughly test and evaluate their algorithms. All datasets contain raw sensor measurements, spatio-temporally aligned sensor data and ground truth, extrinsic and intrinsic calibrations and datasets for custom calibrations."
]
} |
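The row above notes that REBiVO estimates depth with an EKF. In one dimension the EKF update reduces to the scalar Kalman equations below, which convey the fusion step; this is a didactic sketch, not the package's code.

```python
def kalman_update(mean, var, z, r):
    """Fuse a depth prior N(mean, var) with a measurement z of variance r."""
    k = var / (var + r)                      # Kalman gain
    return mean + k * (z - mean), (1.0 - k) * var
```

Repeated over frames, the variance shrinks and the depth estimate converges, provided the tracked edge keeps being re-observed.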
1904.02215 | 2927638312 | A plethora of state estimation techniques have appeared in the last decade using visual data, and more recently with added inertial data. Datasets typically used for evaluation include indoor and urban environments, where supporting videos have shown impressive performance. However, such techniques have not been fully evaluated in challenging conditions, such as the marine domain. In this paper, we compare ten recent open-source packages to provide insights on their performance and guidelines on addressing current challenges. Specifically, we selected direct methods and tightly-coupled optimization techniques that fuse camera and Inertial Measurement Unit (IMU) data together. Experiments are conducted by testing all packages on datasets collected over the years with underwater robots in our laboratory. All the datasets are made available online. | An implementation of the original Multi-State Constraint Kalman Filter from Mourikis and Roumeliotis @cite_37 was made available as open source by the GRASP lab @cite_24 . It uses a monocular camera and was tested on the EuRoC dataset @cite_33 . (An illustrative null-space-projection sketch follows this row.) | {
"cite_N": [
"@cite_24",
"@cite_37",
"@cite_33"
],
"mid": [
"",
"2118223742",
"2396274919"
],
"abstract": [
"",
"In this paper, we present an extended Kalman filter (EKF)-based algorithm for real-time vision-aided inertial navigation. The primary contribution of this work is the derivation of a measurement model that is able to express the geometric constraints that arise when a static feature is observed from multiple camera poses. This measurement model does not require including the 3D feature position in the state vector of the EKF and is optimal, up to linearization errors. The vision-aided inertial navigation algorithm we propose has computational complexity only linear in the number of features, and is capable of high-precision pose estimation in large-scale real-world environments. The performance of the algorithm is demonstrated in extensive experimental results, involving a camera IMU system localizing within an urban area.",
"This paper presents visual-inertial datasets collected on-board a micro aerial vehicle. The datasets contain synchronized stereo images, IMU measurements and accurate ground truth. The first batch of datasets facilitates the design and evaluation of visual-inertial localization algorithms on real flight data. It was collected in an industrial environment and contains millimeter accurate position ground truth from a laser tracking system. The second batch of datasets is aimed at precise 3D environment reconstruction and was recorded in a room equipped with a motion capture system. The datasets contain 6D pose ground truth and a detailed 3D scan of the environment. Eleven datasets are provided in total, ranging from slow flights under good visual conditions to dynamic flights with motion blur and poor illumination, enabling researchers to thoroughly test and evaluate their algorithms. All datasets contain raw sensor measurements, spatio-temporally aligned sensor data and ground truth, extrinsic and intrinsic calibrations and datasets for custom calibrations."
]
} |
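Per the cited abstract, the defining MSCKF device is a measurement model that avoids putting 3-D feature positions in the filter state. A standard way to realize this is to project the stacked residual onto the left null space of the feature Jacobian; the sketch below shows that projection (names and the rank tolerance are assumptions).

```python
import numpy as np

def marginalize_feature(r, H_x, H_f):
    # Linearized model: r = H_x @ dx + H_f @ df + noise. Multiplying by a
    # basis A of the left null space of H_f removes the feature error df,
    # leaving an update that involves only the pose states dx.
    U, s, _ = np.linalg.svd(H_f, full_matrices=True)
    rank = int(np.sum(s > 1e-10))   # assumed numerical tolerance
    A = U[:, rank:]                 # columns spanning the left null space
    return A.T @ r, A.T @ H_x       # reduced residual and Jacobian
```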
1904.02215 | 2927638312 | A plethora of state estimation techniques have appeared in the last decade using visual data, and more recently with added inertial data. Datasets typically used for evaluation include indoor and urban environments, where supporting videos have shown impressive performance. However, such techniques have not been fully evaluated in challenging conditions, such as the marine domain. In this paper, we compare ten recent open-source packages to provide insights on their performance and guidelines on addressing current challenges. Specifically, we selected direct methods and tightly-coupled optimization techniques that fuse camera and Inertial Measurement Unit (IMU) data together. Experiments are conducted by testing all packages on datasets collected over the years with underwater robots in our laboratory. All the datasets are made available online. | Stereo Multi-State Constraint Kalman Filter @cite_14 is also based on MSCKF, but uses a stereo camera. The proposed method achieves computational cost comparable to monocular solutions while providing increased robustness. Experiments on the EuRoC dataset and on a custom dataset collected with a UAV demonstrate good performance. (An illustrative stereo-depth sketch follows this row.) | {
"cite_N": [
"@cite_14"
],
"mid": [
"2962987986"
],
"abstract": [
"In recent years, vision-aided inertial odometry for state estimation has matured significantly. However, we still encounter challenges in terms of improving the computational efficiency and robustness of the underlying algorithms for applications in autonomous flight with microaerial vehicles, in which it is difficult to use high-quality sensors and powerful processors because of constraints on size and weight. In this letter, we present a filter-based stereo visual inertial odometry that uses the multistate constraint Kalman filter. Previous work on the stereo visual inertial odometry has resulted in solutions that are computationally expensive. We demonstrate that our stereo multistate constraint Kalman filter (S-MSCKF) is comparable to state-of-the-art monocular solutions in terms of computational cost, while providing significantly greater robustness. We evaluate our S-MSCKF algorithm and compare it with state-of-the-art methods including OKVIS, ROVIO, and VINS-MONO on both the EuRoC dataset and our own experimental datasets demonstrating fast autonomous flight with a maximum speed of 17.5 m s in indoor and outdoor environments. Our implementation of the S-MSCKF is available at https: github.com KumarRobotics msckf_vio."
]
} |
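A stereo rig such as S-MSCKF's resolves metric scale directly from disparity. The textbook rectified-stereo relation, included here only as a reminder of why stereo adds robustness, is:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # Rectified pinhole stereo: Z = f * B / d. Illustrative helper, not
    # S-MSCKF code; disparity must be positive.
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```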
1904.02215 | 2927638312 | A plethora of state estimation techniques have appeared in the last decade using visual data, and more recently with added inertial data. Datasets typically used for evaluation include indoor and urban environments, where supporting videos have shown impressive performance. However, such techniques have not been fully evaluated in challenging conditions, such as the marine domain. In this paper, we compare ten recent open-source packages to provide insights on their performance and guidelines on addressing current challenges. Specifically, we selected direct methods and tightly-coupled optimization techniques that fuse camera and Inertial Measurement Unit (IMU) data together. Experiments are conducted by testing all packages on datasets collected over the years with underwater robots in our laboratory. All the datasets are made available online. | Robust Visual Inertial Odometry @cite_13 employs an Iterated Extended Kalman Filter to tightly fuse IMU data with images from one or multiple cameras. The photometric error is derived from image patches that are used as landmark descriptors and is included as a residual in the update step. The EuRoC dataset @cite_33 was used for assessing the performance of the system. (An illustrative patch-innovation sketch follows this row.) | {
"cite_N": [
"@cite_13",
"@cite_33"
],
"mid": [
"2754177129",
"2396274919"
],
"abstract": [
"This paper presents a visual-inertial odometry framework that tightly fuses inertial measurements with visual data from one or more cameras, by means of an iterated extended Kalman filter. By employing image patches as landmark descriptors, a photometric error is derived, which is directly integrated as an innovation term in the filter update step. Consequently, the data association is an inherent part of the estimation process and no additional feature extraction or matching processes are required. Furthermore, it enables the tracking of noncorner-shaped features, such as lines, and thereby increases the set of possible landmarks. The filter state is formulated in a fully robocentric fashion, which reduces errors related to nonlinearities. This also includes partitioning of a landmark’s location estimate into a bearing vector and distance and thereby allows an undelayed initialization of landmarks. Overall, this results in a compact approach, which exhibits a high level of robustness with respect to low ...",
"This paper presents visual-inertial datasets collected on-board a micro aerial vehicle. The datasets contain synchronized stereo images, IMU measurements and accurate ground truth. The first batch of datasets facilitates the design and evaluation of visual-inertial localization algorithms on real flight data. It was collected in an industrial environment and contains millimeter accurate position ground truth from a laser tracking system. The second batch of datasets is aimed at precise 3D environment reconstruction and was recorded in a room equipped with a motion capture system. The datasets contain 6D pose ground truth and a detailed 3D scan of the environment. Eleven datasets are provided in total, ranging from slow flights under good visual conditions to dynamic flights with motion blur and poor illumination, enabling researchers to thoroughly test and evaluate their algorithms. All datasets contain raw sensor measurements, spatio-temporally aligned sensor data and ground truth, extrinsic and intrinsic calibrations and datasets for custom calibrations."
]
} |
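ROVIO's update step compares stored landmark patches against the image at their predicted locations, using the intensity difference directly as the filter innovation. A bare-bones version of that comparison (illustrative names, no boundary handling or warping) is:

```python
import numpy as np

def patch_innovation(img, landmark_patch, u, v, half=3):
    # Extract the (2*half+1)^2 patch at the predicted pixel (u, v) and return
    # the intensity residual against the stored landmark patch. Assumes the
    # patch lies fully inside the image and that no warping is needed.
    pred = img[v - half:v + half + 1, u - half:u + half + 1].astype(np.float64)
    return (pred - landmark_patch).ravel()
```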
1904.02215 | 2927638312 | A plethora of state estimation techniques have appeared in the last decade using visual data, and more recently with added inertial data. Datasets typically used for evaluation include indoor and urban environments, where supporting videos have shown impressive performance. However, such techniques have not been fully evaluated in challenging conditions, such as the marine domain. In this paper, we compare ten recent open-source packages to provide insights on their performance and guidelines on addressing current challenges. Specifically, we selected direct methods and tightly-coupled optimization techniques that fuse camera and Inertial Measurement Unit (IMU) data together. Experiments are conducted by testing all packages on datasets collected over the years with underwater robots in our laboratory. All the datasets are made available online. | Open Keyframe-based Visual-Inertial SLAM @cite_19 is a tightly-coupled nonlinear optimization method that fuses IMU data and images from one or more cameras. Keyframes are selected according to spatial spacing rather than temporal succession. The optimization is performed over a sliding window, and states outside that window are marginalized. Experiments with a custom-made sensor suite validated the proposed approach. (An illustrative windowed-cost sketch follows this row.) | {
"cite_N": [
"@cite_19"
],
"mid": [
"2091790851"
],
"abstract": [
"Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual-inertial odometry or simultaneous localization and mapping SLAM. While historically the problem has been addressed with filtering, advancements in visual estimation suggest that nonlinear optimization offers superior accuracy, while still tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable and thus ensuring real-time operation by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals, while still related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual-inertial hardware that accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both a stereo and monocular version of our algorithm with and without online extrinsics estimation is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic cloning sliding-window filter. This competitive reference implementation performs tightly coupled filtering-based visual-inertial odometry. While our approach declaredly demands more computation, we show its superior performance in terms of accuracy."
]
} |
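OKVIS minimizes a single objective that mixes visual and inertial terms over a window of keyframes. A schematic scalar version is shown below; real systems weight each residual by its inverse measurement covariance rather than the assumed scalar weights used here.

```python
def windowed_cost(reproj_residuals, imu_residuals, w_vis=1.0, w_imu=1.0):
    # Tightly-coupled sliding-window objective (schematic): weighted sum of
    # squared reprojection errors and IMU error terms. States leaving the
    # window are marginalized rather than simply dropped.
    return (w_vis * sum(r * r for r in reproj_residuals)
            + w_imu * sum(e * e for e in imu_residuals))
```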
1904.02075 | 2934793040 | Subspace clustering has been extensively studied from the hypothesis-and-test, algebraic, and spectral clustering based perspectives. Most assume that only a single type/class of subspace is present. Generalizations to multiple types are non-trivial, plagued by challenges such as choice of types and numbers of models, sampling imbalance and parameter tuning. In this work, we formulate the multi-type subspace clustering problem as one of learning non-linear subspace filters via deep multi-layer perceptrons (mlps). The response to the learnt subspace filters serves as the feature embedding that is clustering-friendly, i.e., points of the same clusters will be embedded closer together through the network. For inference, we apply K-means to the network output to cluster the data. Experiments are carried out on both synthetic and real world multi-type fitting problems, producing state-of-the-art results. | Early approaches address this in a sequential RANSAC fashion @cite_51 @cite_58 @cite_37 by iteratively fitting and removing inliers. The J-Linkage @cite_30 and T-Linkage @cite_68 simultaneously consider the interactions between all points and hypotheses. The final partition is achieved by clustering. The above greedy algorithms often do not perform well under high noise levels. Global algorithms have also been proposed to minimize an energy with various regularization terms, including spatial regularization (PEaRL) @cite_38 and label count penalty @cite_71 . To eschew the problem of having to set thresholds, the ORK approach @cite_20 @cite_21 ranks hypotheses according to data preference rather than absolute residuals. Analytic approaches are characterized by elegant mathematical formulations, including those based on the sparsity @cite_52 and low-rank @cite_8 assumptions and their variants. Many of the preceding works adopt spectral clustering for the final grouping and assume a known number of models, but only a few considered the model selection problem, e.g., @cite_39 @cite_24 @cite_8 @cite_59 . Even fewer works @cite_47 @cite_66 @cite_4 @cite_51 considered the problem of fitting multiple models of various types, and in these few works, the types are assumed to be known a priori and well-defined, which is often not realistic. (A spectral-clustering sketch follows this row's references.) | {
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_37",
"@cite_4",
"@cite_8",
"@cite_21",
"@cite_58",
"@cite_52",
"@cite_39",
"@cite_24",
"@cite_71",
"@cite_59",
"@cite_47",
"@cite_68",
"@cite_51",
"@cite_66",
"@cite_20"
],
"mid": [
"1519594923",
"",
"1974014252",
"1599555836",
"1997201895",
"1561329191",
"1927667002",
"1993962865",
"2105681109",
"2082712442",
"2108780778",
"2139054653",
"2962713801",
"2114674762",
"2002875082",
"2150903798",
"2164444636"
],
"abstract": [
"This paper tackles the problem of fitting multiple instances of a model to data corrupted by noise and outliers. The proposed solution is based on random sampling and conceptual data representation. Each point is represented with the characteristic function of the set of random models that fit the point. A tailored agglomerative clustering, called J-linkage, is used to group points belonging to the same model. The method does not require prior specification of the number of models, nor it necessitate parameters tuning. Experimental results demonstrate the superior performances of the algorithm.",
"",
"We propose a robust method for detecting local planar regions in a scene with an uncalibrated stereo. Our method is based on random sampling using distributions of feature point locations. For doing RANSAC, we use the distributions for each feature point defined by the distances between the point and the other points. We first choose a correspondence by using an uniform distribution and next choose candidate correspondences by using the distribution of the chosen point. Then, we compute a homography from the chosen correspondences and find the largest consensus set of the homography. We repeat this procedure until all regions are detected. We demonstrate that our method is robust to the outliers in a scene by simulations and real image examples.",
"Many techniques have been proposed for segmenting feature point trajectories tracked through a video sequence into independent motions. It has been found, however, that methods that perform very well in simulations perform very poorly for real video sequences. This paper resolves this mystery by analyzing the geometric structure of the degeneracy of the motion model. This leads to a new segmentation algorithm: a multi-stage unsupervised learning scheme first using the degenerate motion model and then using the general 3-D motion model. We demonstrate by simulated and real video experiments that our method is superior to all existing methods in practical situations.",
"In this paper, we address the subspace clustering problem. Given a set of data samples (vectors) approximately drawn from a union of multiple subspaces, our goal is to cluster the samples into their respective subspaces and remove possible outliers as well. To this end, we propose a novel objective function named Low-Rank Representation (LRR), which seeks the lowest rank representation among all the candidates that can represent the data samples as linear combinations of the bases in a given dictionary. It is shown that the convex program associated with LRR solves the subspace clustering problem in the following sense: When the data is clean, we prove that LRR exactly recovers the true subspace structures; when the data are contaminated by outliers, we prove that under certain conditions LRR can exactly recover the row space of the original data and detect the outlier as well; for data corrupted by arbitrary sparse errors, LRR can also approximately recover the row space with theoretical guarantees. Since the subspace membership is provably determined by the row space, these further imply that LRR can perform robust subspace clustering and error correction in an efficient and effective way.",
"Random hypothesis generation underpins many geometric model fitting techniques. Unfortunately it is also computationally expensive. We propose a fundamentally new approach to accelerate hypothesis sampling by guiding it with information derived from residual sorting. We show that residual sorting innately encodes the probability of two points to have arisen from the same model and is obtained without recourse to domain knowledge (e.g. keypoint matching scores) typically used in previous sampling enhancement methods. More crucially our approach is naturally capable of handling data with multiple model instances and excels in applications (e.g. multi-homography fitting) which easily frustrate other techniques. Experiments show that our method provides superior efficiency on various geometric model estimation tasks. Implementation of our algorithm is available on the authors, homepage.",
"Because of their abundance and simplicity, planes are used in several computer vision tasks. Their simplicity results in that, under perspective projection, the transformation between a world plane and its corresponding image plane is projective linear, or a homography. These relations also hold between perspective views of a plane in different images. This paper proposes an algorithm that detects planar homographies in uncalibrated image pairs. It then demonstrates how this plane identification method can be used as a first step in an image analysis process, when point matching between images is unreliable. The detection is performed using a RANSAC scheme based on the linear computation of the homography matrix elements using four points. Results are shown on real image pairs.",
"Many real-world problems deal with collections of high-dimensional data, such as images, videos, text, and web documents, DNA microarray data, and more. Often, such high-dimensional data lie close to low-dimensional structures corresponding to several classes or categories to which the data belong. In this paper, we propose and study an algorithm, called sparse subspace clustering, to cluster data points that lie in a union of low-dimensional subspaces. The key idea is that, among the infinitely many possible representations of a data point in terms of other points, a sparse representation corresponds to selecting a few points from the same subspace. This motivates solving a sparse optimization program whose solution is used in a spectral clustering framework to infer the clustering of the data into subspaces. Since solving the sparse optimization program is in general NP-hard, we consider a convex relaxation and show that, under appropriate conditions on the arrangement of the subspaces and the distribution of the data, the proposed minimization program succeeds in recovering the desired sparse representations. The proposed algorithm is efficient and can handle data points near the intersections of subspaces. Another key advantage of the proposed algorithm with respect to the state of the art is that it can deal directly with data nuisances, such as noise, sparse outlying entries, and missing entries, by incorporating the model of the data into the sparse optimization program. We demonstrate the effectiveness of the proposed algorithm through experiments on synthetic data as well as the two real-world problems of motion segmentation and face clustering.",
"A new formulation for multiway spectral clustering is proposed. This method corresponds to a weighted kernel principal component analysis (PCA) approach based on primal-dual least-squares support vector machine (LS-SVM) formulations. The formulation allows the extension to out-of-sample points. In this way, the proposed clustering model can be trained, validated, and tested. The clustering information is contained on the eigendecomposition of a modified similarity matrix derived from the data. This eigenvalue problem corresponds to the dual solution of a primal optimization problem formulated in a high-dimensional feature space. A model selection criterion called the balanced line fit (BLF) is also proposed. This criterion is based on the out-of-sample extension and exploits the structure of the eigenvectors and the corresponding projections when the clusters are well formed. The BLF criterion can be used to obtain clustering parameters in a learning framework. Experimental results with difficult toy problems and image segmentation show improved performance in terms of generalization to new samples and computation times.",
"This paper addresses real-world challenges in the motion segmentation problem, including perspective effects, missing data, and unknown number of motions. It first formulates the 3-D motion segmentation from two perspective views as a subspace clustering problem, utilizing the epipolar constraint of an image pair. It then combines the point correspondence information across multiple image frames via a collaborative clustering step, in which tight integration is achieved via a mixed norm optimization scheme. For model selection, we propose an over-segment and merge approach, where the merging step is based on the property of the ell_1-norm of the mutual sparse representation of two over-segmented groups. The resulting algorithm can deal with incomplete trajectories and perspective effects substantially better than state-of-the-art two-frame and multi-frame methods. Experiments on a 62-clip dataset show the significant superiority of the proposed idea in both segmentation accuracy and model selection.",
"This paper studies the problem of multibody motion segmentation, which is an important, but challenging problem due to its well-known chicken-and-egg-type recursive character. We propose a new mixture-of-fundamental-matrices model to describe the multibody motions from two views. Based on the maximum likelihood estimation, in conjunction with a random sampling scheme, we show that the problem can be naturally formulated as a linear programming (LP) problem. Consequently, the motion segmentation problem can be solved efficiently by linear program relaxation. Experiments demonstrate that: without assuming the actual number of motions our method produces accurate segmentation result. This LP formulation has also other advantages, such as easy to handle outliers and easy to enforce prior knowledge etc.",
"This paper considers the problem of clustering a collection of unlabeled data points assumed to lie near a union of lower dimensional planes. As is common in computer vision or unsupervised learning applications, we do not know in advance how many subspaces there are nor do we have any information about their dimensions. We develop a novel geometric analysis of an algorithm named sparse subspace clustering (SSC) [11], which signicantly broadens the range of problems where it is provably eective. For instance, we show that SSC can recover multiple subspaces, each of dimension comparable to the ambient dimension. We also prove that SSC can correctly cluster data points even when the subspaces of interest intersect. Further, we develop an extension of SSC that succeeds when the data set is corrupted with possibly overwhelmingly many outliers. Underlying our analysis are clear geometric insights, which may bear on other sparse recovery problems. A numerical study complements our theoretical analysis and demonstrates the eectiveness of these methods.",
"We propose a general formulation, called Multi-X, for multi-class multi-instance model fitting – the problem of interpreting the input data as a mixture of noisy observations originating from multiple instances of multiple classes. We extend the commonly used ( )-expansion-based technique with a new move in the label space. The move replaces a set of labels with the corresponding density mode in the model parameter domain, thus achieving fast and robust optimization. Key optimization parameters like the bandwidth of the mode seeking are set automatically within the algorithm. Considering that a group of outliers may form spatially coherent structures in the data, we propose a cross-validation-based technique removing statistically insignificant instances. Multi-X outperforms significantly the state-of-the-art on publicly available datasets for diverse problems: multiple plane and rigid motion detection; motion segmentation; simultaneous plane and cylinder fitting; circle and line fitting.",
"This paper presents an improvement of the J-linkage algorithm for fitting multiple instances of a model to noisy data corrupted by outliers. The binary preference analysis implemented by J-linkage is replaced by a continuous (soft, or fuzzy) generalization that proves to perform better than J-linkage on simulated data, and compares favorably with state of the art methods on public domain real datasets.",
"Motion segmentation involves clustering features together that belong to independently moving objects. The image features on each of these objects conform to one of several putative motion models, but the number and type of motion is unknown a priori. In order to cluster these features, the problems of model selection, robust estimation and clustering must all be addressed simultaneously. Within this paper we place the three problems into a common statistical framework; investigating the use of information criteria and robust mixture models as a principled way for motion segmentation of images. The final result is a general fully automatic algorithm for clustering that works in the presence of noise and outliers.",
"We propose a novel algorithm for segmenting multiple motions of different types from point correspondences in multiple affine or perspective views. Since point trajectories associated with different motions live in different manifolds, traditional approaches deal with only one manifold type: linear subspaces for affine views, and homographic, bilinear and trilinear varieties for two and three perspective views. As real motion sequences contain motions of different types, we cast motion segmentation as a problem of clustering manifolds of different types. Rather than explicitly modeling each manifold as a linear, bilinear or multilinear variety, we use nonlinear dimensionality reduction to learn a low-dimensional representation of the union of all manifolds. We show that for a union of separated manifolds, the LLE algorithm computes a matrix whose null space contains vectors giving the segmentation of the data. An analysis of the variance of these vectors allows us to distinguish them from other vectors in the null space. This leads to a new algorithm for clustering both linear and nonlinear manifolds. Although this algorithm is theoretically designed for separated manifolds, our experiments demonstrate its performance on real data where this assumption does not hold. We test our algorithm on the Hopkins 155 motion segmentation database and achieve an average classification error of 4.8 , which compares favorably against state-of-the art multiframe motion segmentation methods.",
"We present a novel and highly effective approach for multi-body motion segmentation. Drawing inspiration from robust statistical model fitting, we estimate putative subspace hypotheses from the data. However, instead of ranking them we encapsulate the hypotheses in a novel Mercer kernel which elicits the potential of two point trajectories to have emerged from the same subspace. The kernel permits the application of well-established statistical learning methods for effective outlier rejection, automatic recovery of the number of motions and accurate segmentation of the point trajectories. The method operates well under severe outliers arising from spurious trajectories or mistracks. Detailed experiments on a recent benchmark dataset (Hopkins 155) show that our method is superior to other state-of-the-art approaches in terms of recovering the number of motions, segmentation accuracy, robustness against gross outliers and computational efficiency."
]
} |
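As the passage notes, many of these methods finish with spectral clustering on a point-to-point affinity matrix, assuming the number of models is known. A minimal version of that final grouping step, assuming scikit-learn is available, is:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_points(affinity, n_clusters):
    # Spectral clustering on a precomputed, symmetric, non-negative affinity
    # matrix: the common last stage of the multi-model fitting pipelines
    # surveyed above. The affinity construction itself is method-specific.
    affinity = np.asarray(affinity, dtype=float)
    affinity = 0.5 * (affinity + affinity.T)  # enforce symmetry
    sc = SpectralClustering(n_clusters=n_clusters, affinity="precomputed")
    return sc.fit_predict(affinity)
```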
1904.02075 | 2934793040 | Subspace clustering has been extensively studied from the hypothesis-and-test, algebraic, and spectral clustering based perspectives. Most assume that only a single type/class of subspace is present. Generalizations to multiple types are non-trivial, plagued by challenges such as choice of types and numbers of models, sampling imbalance and parameter tuning. In this work, we formulate the multi-type subspace clustering problem as one of learning non-linear subspace filters via deep multi-layer perceptrons (mlps). The response to the learnt subspace filters serves as the feature embedding that is clustering-friendly, i.e., points of the same clusters will be embedded closer together through the network. For inference, we apply K-means to the network output to cluster the data. Experiments are carried out on both synthetic and real world multi-type fitting problems, producing state-of-the-art results. | Using deep learning to solve geometric model fitting has received growing attention. Dense approaches use raw images to model the transformation between image pairs as a homography @cite_33 or a non-rigid transformation @cite_62 . @cite_41 proposed to estimate the camera pose directly from image sequences. DSAC @cite_42 learns to extract a geometric model from sparse feature correspondences in a manner akin to RANSAC. The ability to learn representations from sparse points was also developed recently @cite_0 @cite_55 . This ability was exploited by @cite_6 to fit the essential matrix from noisy correspondences. Despite the promising results, none of the existing works have considered generic model fitting and, more importantly, fitting data drawn from multiple models and even multiple types. In our work, we formulate the generic multi-type fitting problem as one of learning good representations for clustering. | {
"cite_N": [
"@cite_62",
"@cite_33",
"@cite_41",
"@cite_55",
"@cite_42",
"@cite_6",
"@cite_0"
],
"mid": [
"2604233003",
"2439114332",
"2592936284",
"2963121255",
"2556455135",
"2963674285",
"2560609797"
],
"abstract": [
"We address the problem of determining correspondences between two images in agreement with a geometric model such as an affine or thin-plate spline transformation, and estimating its parameters. The contributions of this work are three-fold. First, we propose a convolutional neural network architecture for geometric matching. The architecture is based on three main components that mimic the standard steps of feature extraction, matching and simultaneous inlier detection and model parameter estimation, while being trainable end-to-end. Second, we demonstrate that the network parameters can be trained from synthetically generated imagery without the need for manual annotation and that our matching layer significantly increases generalization capabilities to never seen before images. Finally, we show that the same model can perform both instance-level and category-level matching giving state-of-the-art results on the challenging Proposal Flow dataset.",
"We present a deep convolutional neural network for estimating the relative homography between a pair of images. Our feed-forward network has 10 layers, takes two stacked grayscale images as input, and produces an 8 degree of freedom homography which can be used to map the pixels from the first image to the second. We present two convolutional neural network architectures for HomographyNet: a regression network which directly estimates the real-valued homography parameters, and a classification network which produces a distribution over quantized homographies. We use a 4-point homography parameterization which maps the four corners from one image into the second image. Our networks are trained in an end-to-end fashion using warped MS-COCO images. Our approach works without the need for separate local feature detection and transformation estimation stages. Our deep models are compared to a traditional homography estimator based on ORB features and we highlight the scenarios where HomographyNet outperforms the traditional technique. We also describe a variety of applications powered by deep homography estimation, thus showcasing the flexibility of a deep learning approach.",
"This paper presents a convolutional neural network based approach for estimating the relative pose between two cameras. The proposed network takes RGB images from both cameras as input and directly produces the relative rotation and translation as output. The system is trained in an end-to-end manner utilising transfer learning from a large scale classification dataset. The introduced approach is compared with widely used local feature based methods (SURF, ORB) and the results indicate a clear improvement over the baseline. In addition, a variant of the proposed architecture containing a spatial pyramid pooling (SPP) layer is evaluated and shown to further improve the performance.",
"Few prior works study deep learning on point sets. PointNet is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds.",
"RANSAC is an important algorithm in robust optimization and a central building block for many computer vision applications. In recent years, traditionally hand-crafted pipelines have been replaced by deep learning pipelines, which can be trained in an end-to-end fashion. However, RANSAC has so far not been used as part of such deep learning pipelines, because its hypothesis selection procedure is non-differentiable. In this work, we present two different ways to overcome this limitation. The most promising approach is inspired by reinforcement learning, namely to replace the deterministic hypothesis selection by a probabilistic selection for which we can derive the expected loss w.r.t. to all learnable parameters. We call this approach DSAC, the differentiable counterpart of RANSAC. We apply DSAC to the problem of camera localization, where deep learning has so far failed to improve on traditional approaches. We demonstrate that by directly minimizing the expected loss of the output camera poses, robustly estimated by RANSAC, we achieve an increase in accuracy. In the future, any deep learning pipeline can use DSAC as a robust optimization component.",
"We develop a deep architecture to learn to find good correspondences for wide-baseline stereo. Given a set of putative sparse matches and the camera intrinsics, we train our network in an end-to-end fashion to label the correspondences as inliers or outliers, while simultaneously using them to recover the relative pose, as encoded by the essential matrix. Our architecture is based on a multi-layer perceptron operating on pixel coordinates rather than directly on the image, and is thus simple and small. We introduce a novel normalization technique, called Context Normalization, which allows us to process each data point separately while embedding global information in it, and also makes the network invariant to the order of the correspondences. Our experiments on multiple challenging datasets demonstrate that our method is able to drastically improve the state of the art with little training data.",
"Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption."
]
} |
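The correspondence-labeling network described in the row above (see the @cite_6 abstract) rests on two ingredients: a shared per-point MLP over pixel coordinates and Context Normalization, which injects global information while keeping the network permutation-invariant. A minimal PyTorch sketch of that block structure; the layer sizes and class names are illustrative assumptions.

```python
import torch.nn as nn

class ContextNorm(nn.Module):
    """Normalize each correspondence's features by statistics taken across the
    whole point set, embedding global context while staying permutation-invariant."""
    def forward(self, x):                      # x: (batch, n_points, channels)
        mu = x.mean(dim=1, keepdim=True)
        sigma = x.std(dim=1, keepdim=True) + 1e-6
        return (x - mu) / sigma

class PointMLPBlock(nn.Module):
    """Shared per-point linear layer + context norm + ReLU; stacking several of
    these yields a classifier that labels putative matches as inliers/outliers."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.fc = nn.Linear(c_in, c_out)       # applied independently to every point
        self.cn = ContextNorm()
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.cn(self.fc(x)))

# Usage: a batch of N putative correspondences, each a 4-vector (x1, y1, x2, y2).
block = PointMLPBlock(4, 128)                  # input (B, N, 4) -> output (B, N, 128)
```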
1904.02147 | 2931527157 | In this work, we learn a shared encoding representation for a multi-task neural network model optimized with connectionist temporal classification (CTC) and conventional framewise cross-entropy training criteria. Our experiments show that the multi-task training not only tackles the complexity of optimizing CTC models such as acoustic-to-word but also results in significant improvement compared to the plain-task training with an optimal setup. Furthermore, we propose to use the encoding representation learned by the multi-task network to initialize the encoder of attention-based models. Thereby, we train a deep attention-based end-to-end model with 10 long short-term memory (LSTM) layers of encoder, which produces 12.2% and 22.6% word-error-rate on the Switchboard and CallHome subsets of the Hub5 2000 evaluation. | @cite_24 @cite_26 used framewise CE training to initialize the LSTM layers of phone-based CTC models. They found that using such pretrained parameters, CTC training is more stable than when using random initialization. @cite_10 trained deep feed-forward sequential memory networks (Deep-FSMN) with CTC and proposed to incorporate CE loss as a regularization term. They argued that CE loss is helpful in stabilizing CTC training and improving the alignments of CTC models, which then leads to significant improvements in WER. | {
"cite_N": [
"@cite_24",
"@cite_26",
"@cite_10"
],
"mid": [
"1533416326",
"1489125746",
"2963308316"
],
"abstract": [
"We explore alternative acoustic modeling techniques for large vocabulary speech recognition using Long Short-Term Memory recurrent neural networks. For an acoustic frame labeling task, we compare the conventional approach of cross-entropy (CE) training using fixed forced-alignments of frames and labels, with the Connectionist Temporal Classification (CTC) method proposed for labeling unsegmented sequence data. We demonstrate that the latter can be implemented with finite state transducers. We experiment with phones and context dependent HMM states as acoustic modeling units. We also investigate the effect of context in acoustic input by training unidirectional and bidirectional LSTM RNN models. We show that a bidirectional LSTM RNN CTC model using phone units can perform as well as an LSTM RNN model trained with CE using HMM state alignments. Finally, we also show the effect of sequence discriminative training on these models and show the first results for sMBR training of CTC models.",
"We have recently shown that deep Long Short-Term Memory (LSTM) recurrent neural networks (RNNs) outperform feed forward deep neural networks (DNNs) as acoustic models for speech recognition. More recently, we have shown that the performance of sequence trained context dependent (CD) hidden Markov model (HMM) acoustic models using such LSTM RNNs can be equaled by sequence trained phone models initialized with connectionist temporal classification (CTC). In this paper, we present techniques that further improve performance of LSTM RNN acoustic models for large vocabulary speech recognition. We show that frame stacking and reduced frame rate lead to more accurate models and faster decoding. CD phone modeling leads to further improvements. We also present initial results for LSTM RNN models outputting words directly.",
"In this paper, we present an improved feedforward sequential memory networks (FSMN) architecture, namely Deep-FSMN (DFSMN), by introducing skip connections between memory blocks in adjacent layers. These skip connections enable the information flow across different layers and thus alleviate the gradient vanishing problem when building very deep structure. As a result, DFSMN significantly benefits from these skip connections and deep structure. We have compared the performance of DFSMN to BLSTM both with and without lower frame rate (LFR) on several large speech recognition tasks, including English and Mandarin. Experimental results shown that DFSMN can consistently outperform BLSTM with dramatic gain, especially trained with LFR using CD-Phone as modeling units. In the 20000 hours Fisher (FSH) task, the proposed DFSMN can achieve a word error rate of 9.4 by purely using the cross-entropy criterion and decoding with a 3-gram language model, which achieves a 1.5 absolute improvement compared to the BLSTM. In a 20000 hours Mandarin recognition task, the LFR trained DFSMN can achieve more than 20 relative improvement compared to the LFR trained BLSTM. Moreover, we can easily design the lookahead filter order of the memory blocks in DFSMN to control the latency for real-time applications."
]
} |
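Concretely, the multi-task setup discussed in this row amounts to one shared encoder feeding a CTC head and a framewise CE head whose losses are summed. A minimal PyTorch sketch, where the weight `lam` and all tensor shapes are illustrative assumptions rather than values from the papers.

```python
import torch.nn.functional as F

def multitask_loss(ctc_logits, ce_logits, targets, input_lens, target_lens,
                   frame_labels, lam=0.5):
    """Shared-encoder objective: CTC over label sequences plus framewise
    cross-entropy over forced alignments.

    ctc_logits   : (T, B, V)  per-frame label scores for the CTC head
    ce_logits    : (B, T, S)  per-frame scores for the framewise CE head
    frame_labels : (B, T)     forced-alignment state labels
    """
    ctc = F.ctc_loss(F.log_softmax(ctc_logits, dim=-1),
                     targets, input_lens, target_lens)
    ce = F.cross_entropy(ce_logits.reshape(-1, ce_logits.size(-1)),
                         frame_labels.reshape(-1))
    # The CE term acts as a stabilizing regularizer on the CTC objective.
    return ctc + lam * ce
```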
1904.02147 | 2931527157 | In this work, we learn a shared encoding representation for a multi-task neural network model optimized with connectionist temporal classification (CTC) and conventional framewise cross-entropy training criteria. Our experiments show that the multi-task training not only tackles the complexity of optimizing CTC models such as acoustic-to-word but also results in significant improvement compared to the plain-task training with an optimal setup. Furthermore, we propose to use the encoding representation learned by the multi-task network to initialize the encoder of attention-based models. Thereby, we train a deep attention-based end-to-end model with 10 long short-term memory (LSTM) layers of encoder, which produces 12.2% and 22.6% word-error-rate on the Switchboard and CallHome subsets of the Hub5 2000 evaluation. | @cite_23 showed that CTC and attention-based models can share the encoder's representation. We additionally show that pretraining the encoder also helps the attention-based model converge faster and to a better optimum. | {
"cite_N": [
"@cite_23"
],
"mid": [
"2962988733"
],
"abstract": [
"We present a state-of-the-art end-to-end Automatic Speech Recognition (ASR) model. We learn to listen and write characters with a joint Connectionist Temporal Classification (CTC) and attention-based encoder-decoder network. The encoder is a deep Convolutional Neural Network (CNN) based on the VGG network. The CTC network sits on top of the encoder and is jointly trained with the attention-based decoder. During the beam search process, we combine the CTC predictions, the attention-based decoder predictions and a separately trained LSTM language model. We achieve a 5-10 error reduction compared to prior systems on spontaneous Japanese and Chinese speech, and our end-to-end model beats out traditional hybrid ASR systems."
]
} |
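The pretraining step described above reduces, in code, to copying the multi-task encoder's weights into the attention-based model before end-to-end training. A sketch assuming a 10-layer LSTM encoder as in the abstract; the feature dimension and output vocabulary sizes are made-up placeholders.

```python
import torch.nn as nn

# One encoder trained under the joint CTC + framewise-CE objective ...
shared_encoder = nn.LSTM(input_size=80, hidden_size=512, num_layers=10)
ctc_head = nn.Linear(512, 10000)   # e.g. word outputs for acoustic-to-word CTC
ce_head = nn.Linear(512, 9000)     # e.g. tied-state outputs for framewise CE

# ... is then reused verbatim to warm-start the attention model's encoder;
# only the decoder and attention parameters start from scratch.
attention_encoder = nn.LSTM(input_size=80, hidden_size=512, num_layers=10)
attention_encoder.load_state_dict(shared_encoder.state_dict())
```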
1904.02104 | 2930983013 | Scene graph construction / visual relationship detection from an image aims to give a precise structural description of the objects (nodes) and their relationships (edges). The mutual promotion of object detection and relationship detection is important for enhancing their individual performance. In this work, we propose a new framework, called semantics guided graph relation neural network (SGRN), for effective visual relationship detection. First, to boost the object detection accuracy, we introduce a source-target class cognoscitive transformation that transforms the features of the co-occurrent objects to the target object domain to refine the visual features. Similarly, source-target cognoscitive transformations are used to refine features of objects from features of relations, and vice versa. Second, to boost the relation detection accuracy, besides the visual features of the paired objects, we embed the class probability of the object and subject separately to provide high level semantic information. In addition, to reduce the search space of relationships, we design a semantics-aware relationship filter to exclude those object pairs that have no relation. We evaluate our approach on the Visual Genome dataset and it achieves the state-of-the-art performance for visual relationship detection. Additionally, our approach also significantly improves the object detection performance (i.e., 4.2% in mAP accuracy). | As an important topic of scene understanding, visual relationship detection has attracted increasing attention in recent decades. In recent years, deep learning technologies have facilitated more accurate detection of objects as well as visual relationships @cite_22 @cite_16 @cite_7 . | {
"cite_N": [
"@cite_16",
"@cite_22",
"@cite_7"
],
"mid": [
"2607855566",
"2479423890",
"2963650529"
],
"abstract": [
"Relationships among objects play a crucial role in image understanding. Despite the great success of deep learning techniques in recognizing individual objects, reasoning about the relationships among objects remains a challenging task. Previous methods often treat this as a classification problem, considering each type of relationship (e.g. ride) or each distinct visual phrase (e.g. person-ride-horse) as a category. Such approaches are faced with significant difficulties caused by the high diversity of visual appearance for each kind of relationships or the large number of distinct visual phrases. We propose an integrated framework to tackle this problem. At the heart of this framework is the Deep Relational Network, a novel formulation designed specifically for exploiting the statistical dependencies between objects and their relationships. On two large data sets, the proposed method achieves substantial improvement over state-of-the-art.",
"Visual relationships capture a wide variety of interactions between pairs of objects in images (e.g. “man riding bicycle” and “man pushing bicycle”). Consequently, the set of possible relationships is extremely large and it is difficult to obtain sufficient training examples for all possible relationships. Because of this limitation, previous work on visual relationship detection has concentrated on predicting only a handful of relationships. Though most relationships are infrequent, their objects (e.g. “man” and “bicycle”) and predicates (e.g. “riding” and “pushing”) independently occur more frequently. We propose a model that uses this insight to train visual models for objects and predicates individually and later combines them together to predict multiple relationships per image. We improve on prior work by leveraging language priors from semantic word embeddings to finetune the likelihood of a predicted relationship. Our model can scale to predict thousands of types of relationships from a few examples. Additionally, we localize the objects in the predicted relationships as bounding boxes in the image. We further demonstrate that understanding relationships can improve content based image retrieval.",
"Computers still struggle to understand the interdependency of objects in the scene as a whole, e.g., relations between objects or their attributes. Existing methods often ignore global context cues capturing the interactions among different object instances, and can only recognize a handful of types by exhaustively training individual detectors for all possible relationships. To capture such global interdependency, we propose a deep Variation-structured Re-inforcement Learning (VRL) framework to sequentially discover object relationships and attributes in the whole image. First, a directed semantic action graph is built using language priors to provide a rich and compact representation of semantic correlations between object categories, predicates, and attributes. Next, we use a variation-structured traversal over the action graph to construct a small, adaptive action set for each step based on the current state and historical actions. In particular, an ambiguity-aware object mining scheme is used to resolve semantic ambiguity among object categories that the object detector fails to distinguish. We then make sequential predictions using a deep RL framework, incorporating global context cues and semantic embeddings of previously extracted phrases in the state vector. Our experiments on the Visual Relationship Detection (VRD) dataset and the large-scale Visual Genome dataset validate the superiority of VRL, which can achieve significantly better detection results on datasets involving thousands of relationship and attribute types. We also demonstrate that VRL is able to predict unseen types embedded in our action graph by learning correlations on shared graph nodes."
]
} |
1904.02104 | 2930983013 | Scene graph construction / visual relationship detection from an image aims to give a precise structural description of the objects (nodes) and their relationships (edges). The mutual promotion of object detection and relationship detection is important for enhancing their individual performance. In this work, we propose a new framework, called semantics guided graph relation neural network (SGRN), for effective visual relationship detection. First, to boost the object detection accuracy, we introduce a source-target class cognoscitive transformation that transforms the features of the co-occurrent objects to the target object domain to refine the visual features. Similarly, source-target cognoscitive transformations are used to refine features of objects from features of relations, and vice versa. Second, to boost the relation detection accuracy, besides the visual features of the paired objects, we embed the class probability of the object and subject separately to provide high level semantic information. In addition, to reduce the search space of relationships, we design a semantics-aware relationship filter to exclude those object pairs that have no relation. We evaluate our approach on the Visual Genome dataset and it achieves the state-of-the-art performance for visual relationship detection. Additionally, our approach also significantly improves the object detection performance (i.e., 4.2% in mAP accuracy). | Embedding technologies are popular in NLP communities and have been adopted for visual tasks in recent decades, such as image captioning @cite_17 @cite_18 and image retrieval @cite_0 . Motivated by the success of embedding technologies, some works attempt to learn embeddings for visual relationship detection. Semantic word embedding is used to explore the language priors between objects in order to improve the relation prediction accuracy @cite_22 @cite_15 @cite_32 @cite_11 . Zero-shot visual relationship detection can also benefit from the language priors @cite_22 @cite_34 . Zhang et al. @cite_10 learn relation translation vectors from visual triples by embedding object and subject respectively. To deal with the appearance variation of visual relations, some works learn the visual phrase embedding @cite_34 @cite_1 . All these works either directly use language priors in semantic word embedding or learn visual embedding. In our work, we leverage the embedding of object class information for effective relation proposal, source-target-aware message passing, and predicate prediction to explicitly use the available predicted class information. | {
"cite_N": [
"@cite_18",
"@cite_11",
"@cite_22",
"@cite_1",
"@cite_32",
"@cite_0",
"@cite_15",
"@cite_34",
"@cite_10",
"@cite_17"
],
"mid": [
"2951805548",
"2777602943",
"2479423890",
"2905588046",
"2962737704",
"2744926832",
"2770106191",
"2799262568",
"2591644541",
"2953276893"
],
"abstract": [
"We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.",
"Recognizing how objects interact with each other is a crucial task in visual recognition. If we define the context of the interaction to be the objects involved, then most current methods can be categorized as either: (i) training a single classifier on the combination of the interaction and its context; or (ii) aiming to recognize the interaction independently of its explicit context. Both methods suffer limitations: the former scales poorly with the number of combinations and fails to generalize to unseen combinations, while the latter often leads to poor interaction recognition performance due to the difficulty of designing a contextindependent interaction classifier.,,To mitigate those drawbacks, this paper proposes an alternative, context-aware interaction recognition framework. The key to our method is to explicitly construct an interaction classifier which combines the context, and the interaction. The context is encoded via word2vec into a semantic space, and is used to derive a classification result for the interaction. The proposed method still builds one classifier for one interaction (as per type (ii) above), but the classifier built is adaptive to context via weights which are context dependent. The benefit of using the semantic space is that it naturally leads to zero-shot generalizations in which semantically similar contexts (subject-object pairs) can be recognized as suitable contexts for an interaction, even if they were not observed in the training set. Our method also scales with the number of interaction-context pairs since our model parameters do not increase with the number of interactions. Thus our method avoids the limitation of both approaches. We demonstrate experimentally that the proposed framework leads to improved performance for all investigated interaction representations and datasets.",
"Visual relationships capture a wide variety of interactions between pairs of objects in images (e.g. “man riding bicycle” and “man pushing bicycle”). Consequently, the set of possible relationships is extremely large and it is difficult to obtain sufficient training examples for all possible relationships. Because of this limitation, previous work on visual relationship detection has concentrated on predicting only a handful of relationships. Though most relationships are infrequent, their objects (e.g. “man” and “bicycle”) and predicates (e.g. “riding” and “pushing”) independently occur more frequently. We propose a model that uses this insight to train visual models for objects and predicates individually and later combines them together to predict multiple relationships per image. We improve on prior work by leveraging language priors from semantic word embeddings to finetune the likelihood of a predicted relationship. Our model can scale to predict thousands of types of relationships from a few examples. Additionally, we localize the objects in the predicted relationships as bounding boxes in the image. We further demonstrate that understanding relationships can improve content based image retrieval.",
"We seek to detect visual relations in images of the form of triplets t = (subject, predicate, object), such as \"person riding dog\", where training examples of the individual entities are available but their combinations are rare or unseen at training. This is an important set-up due to the combinatorial nature of visual relations : collecting sufficient training data for all possible triplets would be very hard. The contributions of this work are three-fold. First, we learn a representation of visual relations that combines (i) individual embeddings for subject, object and predicate together with (ii) a visual phrase embedding that represents the relation triplet. Second, we learn how to transfer visual phrase embeddings from existing training triplets to unseen test triplets using analogies between relations that involve similar objects. Third, we demonstrate the benefits of our approach on two challenging datasets involving rare and unseen relations : on HICO-DET, our model achieves significant improvement over a strong baseline, and we confirm this improvement on retrieval of unseen triplets on the UnRel rare relation dataset.",
"Understanding the visual relationship between two objects involves identifying the subject, the object, and a predicate relating them. We leverage the strong correlations between the predicate and the hsubj; obji pair (both semantically and spatially) to predict predicates conditioned on the subjects and the objects. Modeling the three entities jointly more accurately reflects their relationships compared to modeling them independently, but it complicates learning since the semantic space of visual relationships is huge and training data is limited, especially for longtail relationships that have few instances. To overcome this, we use knowledge of linguistic statistics to regularize visual model learning. We obtain linguistic knowledge by mining from both training annotations (internal knowledge) and publicly available text, e.g., Wikipedia (external knowledge), computing the conditional probability distribution of a predicate given a (subj, obj) pair. As we train the visual model, we distill this knowledge into the deep model to achieve better generalization. Our experimental results on the Visual Relationship Detection (VRD) and Visual Genome datasets suggest that with this linguistic knowledge distillation, our model outperforms the stateof- the-art methods significantly, especially when predicting unseen relationships (e.g., recall improved from 8.45 to 19.17 on VRD zero-shot testing set).",
"Querying with an example image is a simple and intuitive interface to retrieve information from a visual database. Most of the research in image retrieval has focused on the task of instance-level image retrieval, where the goal is to retrieve images that contain the same object instance as the query image. In this work we move beyond instance-level retrieval and consider the task of semantic image retrieval in complex scenes, where the goal is to retrieve images that share the same semantics as the query image. We show that, despite its subjective nature, the task of semantically ranking visual scenes is consistently implemented across a pool of human annotators. We also show that a similarity based on human-annotated region-level captions is highly correlated with the human ranking and constitutes a good computable surrogate. Following this observation, we learn a visual embedding of the images where the similarity in the visual space is correlated with their semantic similarity surrogate. We further extend our model to learn a joint embedding of visual and textual cues that allows one to query the database using a text modifier in addition to the query image, adapting the results to the modifier. Finally, our model can ground the ranking decisions by showing regions that contributed the most to the similarity between pairs of images, providing a visual explanation of the similarity.",
"Reasoning about the relationships between object pairs in images is a crucial task for holistic scene understanding. Most of the existing works treat this task as a pure visual classification task: each type of relationship or phrase is classified as a relation category based on the extracted visual features. However, each kind of relationships has a wide variety of object combination and each pair of objects has diverse interactions. Obtaining sufficient training samples for all possible relationship categories is difficult and expensive. In this work, we propose a natural language guided framework to tackle this problem. We propose to use a generic bi-directional recurrent neural network to predict the semantic connection between the participating objects in the relationship from the aspect of natural language. The proposed simple method achieves the state-of-the-art on the Visual Relationship Detection (VRD) and Visual Genome datasets, especially when predicting unseen relationships (e.g. recall improved from 76.42 to 89.79 on VRD zero-shot testing set).",
"Large scale visual understanding is challenging, as it requires a model to handle the widely-spread and imbalanced distribution of triples. In real-world scenarios with large numbers of objects and relations, some are seen very commonly while others are barely seen. We develop a new relationship detection model that embeds objects and relations into two vector spaces where both discriminative capability and semantic affinity are preserved. We learn both a visual and a semantic module that map features from the two modalities into a shared space, where matched pairs of features have to discriminate against those unmatched, but also maintain close distances to semantically similar ones. Benefiting from that, our model can achieve superior performance even when the visual entity categories scale up to more than 80,000, with extremely skewed class distribution. We demonstrate the efficacy of our model on a large and imbalanced benchmark based of Visual Genome that comprises 53,000+ objects and 29,000+ relations, a scale at which no previous work has ever been evaluated at. We show superiority of our model over carefully designed baselines on the original Visual Genome dataset with 80,000+ categories. We also show state-of-the-art performance on the VRD dataset and the scene graph dataset which is a subset of Visual Genome with 200 categories.",
"Visual relations, such as person ride bike and bike next to car, offer a comprehensive scene understanding of an image, and have already shown their great utility in connecting computer vision and natural language. However, due to the challenging combinatorial complexity of modeling subject-predicate-object relation triplets, very little work has been done to localize and predict visual relations. Inspired by the recent advances in relational representation learning of knowledge bases and convolutional object detection networks, we propose a Visual Translation Embedding network (VTransE) for visual relation detection. VTransE places objects in a low-dimensional relation space where a relation can be modeled as a simple vector translation, i.e., subject + predicate ≈ object. We propose a novel feature extraction layer that enables object-relation knowledge transfer in a fully-convolutional fashion that supports training and inference in a single forward backward pass. To the best of our knowledge, VTransE is the first end-toend relation detection network. We demonstrate the effectiveness of VTransE over other state-of-the-art methods on two large-scale datasets: Visual Relationship and Visual Genome. Note that even though VTransE is a purely visual model, it is still competitive to the Lu’s multi-modal model with language priors [27].",
"We introduce a model for bidirectional retrieval of images and sentences through a multi-modal embedding of visual and natural language data. Unlike previous models that directly map images or sentences into a common embedding space, our model works on a finer level and embeds fragments of images (objects) and fragments of sentences (typed dependency tree relations) into a common space. In addition to a ranking objective seen in previous work, this allows us to add a new fragment alignment objective that learns to directly associate these fragments across modalities. Extensive experimental evaluation shows that reasoning on both the global level of images and sentences and the finer level of their respective fragments significantly improves performance on image-sentence retrieval tasks. Additionally, our model provides interpretable predictions since the inferred inter-modal fragment alignment is explicit."
]
} |
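Among the cited approaches, the translation-embedding idea of VTransE (@cite_10) is compact enough to sketch: subject and object features are projected into a relation space where subject + predicate ≈ object, so the predicate is read off from the difference vector. The PyTorch sketch below is a simplified rendering; all dimensions and the scoring rule against learned predicate vectors are illustrative assumptions.

```python
import torch.nn as nn

class TranslationEmbedding(nn.Module):
    """VTransE-style relation scoring in a low-dimensional translation space."""
    def __init__(self, feat_dim=1024, rel_dim=128, n_predicates=70):
        super().__init__()
        self.proj_s = nn.Linear(feat_dim, rel_dim)      # subject projection
        self.proj_o = nn.Linear(feat_dim, rel_dim)      # object projection
        self.predicates = nn.Embedding(n_predicates, rel_dim)

    def forward(self, subj_feat, obj_feat):
        # Translation vector that best explains the pair; classify it by
        # similarity to every learned predicate vector.
        t = self.proj_o(obj_feat) - self.proj_s(subj_feat)   # (B, rel_dim)
        return t @ self.predicates.weight.T                  # (B, n_predicates)
```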
1904.02104 | 2930983013 | Scene graph construction / visual relationship detection from an image aims to give a precise structural description of the objects (nodes) and their relationships (edges). The mutual promotion of object detection and relationship detection is important for enhancing their individual performance. In this work, we propose a new framework, called semantics guided graph relation neural network (SGRN), for effective visual relationship detection. First, to boost the object detection accuracy, we introduce a source-target class cognoscitive transformation that transforms the features of the co-occurrent objects to the target object domain to refine the visual features. Similarly, source-target cognoscitive transformations are used to refine features of objects from features of relations, and vice versa. Second, to boost the relation detection accuracy, besides the visual features of the paired objects, we embed the class probability of the object and subject separately to provide high level semantic information. In addition, to reduce the search space of relationships, we design a semantics-aware relationship filter to exclude those object pairs that have no relation. We evaluate our approach on the Visual Genome dataset and it achieves the state-of-the-art performance for visual relationship detection. Additionally, our approach also significantly improves the object detection performance (i.e., 4.2% in mAP accuracy). | To handle the intractable number of possible pairwise object combinations, Dai et al. @cite_16 use a simple filter to remove many of the unnecessary object pairs. Li et al. @cite_3 cluster the phrase regions into some important ones and pass messages between them. The most related work to ours is @cite_35 , which also proposes a relation proposal network to estimate the relatedness of each object pair based on the predicted class probabilities but without semantic embedding. Different from their work, our SRePN uses semantic embedding to choose the most semantically inter-dependent object pairs. | {
"cite_N": [
"@cite_35",
"@cite_16",
"@cite_3"
],
"mid": [
"2886970679",
"2607855566",
"2810482788"
],
"abstract": [
"We propose a novel scene graph generation model called Graph R-CNN, that is both effective and efficient at detecting objects and their relations in images. Our model contains a Relation Proposal Network (RePN) that efficiently deals with the quadratic number of potential relations between objects in an image. We also propose an attentional Graph Convolutional Network (aGCN) that effectively captures contextual information between objects and relations. Finally, we introduce a new evaluation metric that is more holistic and realistic than existing metrics. We report state-of-the-art performance on scene graph generation as evaluated using both existing and our proposed metrics.",
"Relationships among objects play a crucial role in image understanding. Despite the great success of deep learning techniques in recognizing individual objects, reasoning about the relationships among objects remains a challenging task. Previous methods often treat this as a classification problem, considering each type of relationship (e.g. ride) or each distinct visual phrase (e.g. person-ride-horse) as a category. Such approaches are faced with significant difficulties caused by the high diversity of visual appearance for each kind of relationships or the large number of distinct visual phrases. We propose an integrated framework to tackle this problem. At the heart of this framework is the Deep Relational Network, a novel formulation designed specifically for exploiting the statistical dependencies between objects and their relationships. On two large data sets, the proposed method achieves substantial improvement over state-of-the-art.",
"Generating scene graph to describe the object interactions inside an image gains increasing interests these years. However, most of the previous methods use complicated structures with slow inference speed or rely on the external data, which limits the usage of the model in real-life scenarios. To improve the efficiency of scene graph generation, we propose a subgraph-based connection graph to concisely represent the scene graph during the inference. A bottom-up clustering method is first used to factorize the entire graph into subgraphs, where each subgraph contains several objects and a subset of their relationships. By replacing the numerous relationship representations of the scene graph with fewer subgraph and object features, the computation in the intermediate stage is significantly reduced. In addition, spatial information is maintained by the subgraph features, which is leveraged by our proposed Spatial-weighted Message Passing (SMP) structure and Spatial-sensitive Relation Inference (SRI) module to facilitate the relationship recognition. On the recent Visual Relationship Detection and Visual Genome datasets, our method outperforms the state-of-the-art method in both accuracy and speed. Code has been made publicly available (https: github.com yikang-li FactorizableNet)."
]
} |
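A relation-proposal filter of the kind discussed here (RePN in @cite_35, and the semantic variant SRePN of this paper) can be sketched as scoring every ordered object pair from an embedding of its predicted class distributions and keeping only the top-K pairs. The PyTorch sketch below is a guess at the general shape of such a filter, not either paper's exact network; all layer choices and sizes are assumptions.

```python
import torch
import torch.nn as nn

class RelatednessFilter(nn.Module):
    """Score how related each (subject, object) pair is from class semantics,
    then keep the top-K pairs as relation proposals."""
    def __init__(self, n_classes=150, emb_dim=64):
        super().__init__()
        self.embed = nn.Linear(n_classes, emb_dim)   # soft class probs -> semantic space
        self.score = nn.Bilinear(emb_dim, emb_dim, 1)

    def forward(self, cls_probs, top_k=128):
        e = self.embed(cls_probs)                    # (N, emb_dim), one row per detection
        n = e.size(0)
        # Score all N*N ordered pairs with a bilinear relatedness function.
        s = self.score(e.repeat_interleave(n, 0), e.repeat(n, 1)).view(n, n)
        flat = s.flatten().topk(min(top_k, n * n)).indices
        # Return the kept (subject index, object index) pairs.
        return torch.stack((flat // n, flat % n), dim=1)
```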
1904.02144 | 2947657997 | The goal of a decision-based adversarial attack on a trained model is to generate adversarial examples based solely on observing output labels returned by the targeted model. We develop HopSkipJumpAttack, a family of algorithms based on a novel estimate of the gradient direction using binary information at the decision boundary. The proposed family includes both untargeted and targeted attacks optimized for @math and @math similarity metrics respectively. Theoretical analysis is provided for the proposed algorithms and the gradient direction estimate. Experiments show HopSkipJumpAttack requires significantly fewer model queries than Boundary Attack, a powerful existing decision-based attack. It also achieves competitive performance in attacking adversarially trained models on MNIST. (HopSkipJumpAttack was named Boundary Attack++ in a previous version of the preprint.) | Another way of improving the efficiency of Boundary Attack is to combine it with a transfer-based attack. In particular, proposed Biased Boundary Attack, which biases the sampling procedure by combining low-frequency random noise with the gradient from a substitute model. Biased Boundary Attack is able to significantly reduce the number of model queries. However, Biased Boundary Attack relies on the transferability between the substitute model and the target model, as with other transfer-based attacks. Our algorithm does not rely on the additional assumption of transferability, and is a direct algorithmic improvement over Boundary Attack, exploiting otherwise discarded information in the gradient-direction estimate. Zeroth-order optimization refers to problems where only functional information is available, instead of first-order gradient information. Such problems have been extensively studied in convex optimization and bandit problems. studied a one-point randomized gradient estimate for bandit convex optimization. and observed a faster convergence rate by using two function evaluations for estimating the gradient. established optimal rates of convex zeroth-order optimization via mirror descent with two-point gradient estimates. Recently, proposed ZOO by applying zeroth-order algorithms to score-based adversarial attacks, which perform as effectively as the state-of-the-art white-box attack. @cite_20 further improved ZOO by a zeroth-order stochastic variance reduced gradient estimate. | {
"cite_N": [
"@cite_20"
],
"mid": [
"2963243330"
],
"abstract": [
"As application demands for zeroth-order (gradient-free) optimization accelerate, the need for variance reduced and faster converging approaches is also intensifying. This paper addresses these challenges by presenting: a) a comprehensive theoretical analysis of variance reduced zeroth-order (ZO) optimization, b) a novel variance reduced ZO algorithm, called ZO-SVRG, and c) an experimental evaluation of our approach in the context of two compelling applications, black-box chemical material classification and generation of adversarial examples from black-box deep neural network models. Our theoretical analysis uncovers an essential difficulty in the analysis of ZO-SVRG: the unbiased assumption on gradient estimates no longer holds. We prove that compared to its first-order counterpart, ZO-SVRG with a two-point random gradient estimator suffers an additional error of order O(1 b), where b the mini-batch size. To mitigate this error, we propose two accelerated versions of ZO-SVRG utilizing variance reduced gradient estimators, which achieve the best rate known for ZO stochastic optimization (in terms of iterations). Our extensive experimental results show that our approaches outperform other state-of-the-art ZO algorithms, and strike a balance between the convergence rate and the function query complexity."
]
} |
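Two estimators anchor the passage above: the classical two-point zeroth-order gradient estimate, and its decision-based analogue in which only a binary success indicator at the boundary is available. A NumPy sketch of both follows; the sample counts and step sizes are arbitrary, and the second function is a simplified rendering of the Monte Carlo direction estimate (the paper additionally subtracts the mean indicator as a baseline).

```python
import numpy as np

def two_point_grad(f, x, delta=1e-3, n_samples=20, seed=0):
    """Two-point zeroth-order gradient estimate:
    g ≈ E[ d * (f(x+δu) - f(x-δu)) / (2δ) * u ] over random unit directions u."""
    rng = np.random.default_rng(seed)
    d = x.size
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        g += d * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u
    return g / n_samples

def boundary_direction(phi, x, delta=1e-2, n_samples=100, seed=0):
    """Decision-based analogue at the boundary: phi returns only a binary
    indicator (+1 adversarial / -1 not); averaging sign-weighted perturbations
    estimates the gradient direction."""
    rng = np.random.default_rng(seed)
    d = x.size
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        g += phi(x + delta * u) * u
    g /= n_samples
    return g / np.linalg.norm(g)
```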
1904.02296 | 2890581551 | Style transfer describes the rendering of an image’s semantic content as different artistic styles. Recently, generative adversarial networks (GANs) have emerged as an effective approach in style transfer by adversarially training the generator to synthesize convincing counterfeits. However, traditional GAN suffers from the mode collapse issue, resulting in unstable training and making style transfer quality difficult to guarantee. In addition, the GAN generator is only compatible with one style, so a series of GANs must be trained to provide users with choices to transfer more than one kind of style. In this paper, we focus on tackling these challenges and limitations to improve style transfer. We propose adversarial gated networks (Gated-GAN) to transfer multiple styles in a single model. The generative networks have three modules: an encoder, a gated transformer, and a decoder. Different styles can be achieved by passing input images through different branches of the gated transformer. To stabilize training, the encoder and decoder are combined as an auto-encoder to reconstruct the input images. The discriminative networks are used to distinguish whether the input image is a stylized or genuine image. An auxiliary classifier is used to recognize the style categories of transferred images, thereby helping the generative networks generate images in multiple styles. In addition, Gated-GAN makes it possible to explore a new style by investigating styles learned from artists or genres. Our extensive experiments demonstrate the stability and effectiveness of the proposed model for multi-style transfer. | Style transfer is an extension of texture transfer, the goal of the latter being to render an object with a texture taken from a different object @cite_26 @cite_4 @cite_0 @cite_23 . Most previous texture transfer algorithms rely on texture synthesis methods and low-level image features to preserve target image structure. Texture synthesis is the process of algorithmically constructing an unlimited number of images from a texture sample. The generated images are perceived by humans to be of the same texture but not exactly like the original images. A large range of powerful parametric and non-parametric algorithms exist to synthesize photo-realistic natural texture @cite_32 @cite_2 @cite_45 . Based on texture synthesis, @cite_30 and @cite_27 used segmentation and patch matching to preserve information content. However, the texture transfer methods use only low-level target image features to inform texture transfer and take a long time to migrate a style from one image to another. | {
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_4",
"@cite_32",
"@cite_0",
"@cite_27",
"@cite_45",
"@cite_23",
"@cite_2"
],
"mid": [
"2520715860",
"2116013899",
"1973399149",
"1999360130",
"2119798818",
"2471440592",
"1601669748",
"",
"2550586938"
],
"abstract": [
"Style transfer is a process of migrating a style from a given image to the content of another, synthesizing a new image, which is an artistic mixture of the two. Recent work on this problem adopting convolutional neural-networks (CNN) ignited a renewed interest in this field, due to the very impressive results obtained. There exists an alternative path toward handling the style transfer task, via the generalization of texture synthesis algorithms. This approach has been proposed over the years, but its results are typically less impressive compared with the CNN ones. In this paper, we propose a novel style transfer algorithm that extends the texture synthesis work of (2005), while aiming to get stylized images that are closer in quality to the CNN ones. We modify Kwatra’s algorithm in several key ways in order to achieve the desired transfer, with emphasis on a consistent way for keeping the content intact in selected regions, while producing hallucinated and rich style in others. The results obtained are visually pleasing and diverse, shown to be competitive with the recent CNN style transfer algorithms. The proposed algorithm is fast and flexible, being able to process any pair of content + style images.",
"A non-parametric method for texture synthesis is proposed. The texture synthesis process grows a new image outward from an initial seed, one pixel at a time. A Markov random field model is assumed, and the conditional distribution of a pixel given all its neighbors synthesized so far is estimated by querying the sample image and finding all similar neighborhoods. The degree of randomness is controlled by a single perceptually intuitive parameter. The method aims at preserving as much local structure as possible and produces good results for a wide variety of synthetic and real-world textures.",
"A texture transfer algorithm modifies the target image replacing the high frequency information with the example source image. Previous texture transfer techniques normally use such factors as color distance and standard deviation for selecting the best texture from the candidate sets. These factors are useful for expressing a texture effect of the example source in the target image, but are less than optimal for considering the object shape of the target image. In this paper, we propose a novel texture transfer algorithm to express the directional effect based on the flow of the target image. For this, we use a directional factor that considers the gradient direction of the target image. We add an additional energy term that respects the image gradient to the previous fast texture transfer algorithm. Additionally, we propose a method for estimating the directional factor weight value from the target image. We have tested our algorithm with various target images. Our algorithm can express a result image with the feature of the example source texture and the flow of the target image.",
"We present a simple image-based method of generating novel visual appearance in which a new image is synthesized by stitching together small patches of existing images. We call this process image quilting. First, we use quilting as a fast and very simple texture synthesis algorithm which produces surprisingly good results for a wide range of textures. Second, we extend the algorithm to perform texture transfer — rendering an object with a texture taken from a different object. More generally, we demonstrate how an image can be re-rendered in the style of a different image. The method works directly on the images and does not require 3D information.",
"The article presents an algorithm for texture transfer between images that is up to several orders of magnitude faster than current state-of-the-art techniques. I demonstrate how the technique can leverage self-similarity of complex images to increase resolution of some types of images and to create novel, artistic looking images from photographs without any prior artistic source. Compared to other alternatives, methods based on texture transfer are global in the sense that the user need not deal with details such as defining and painting individual brush strokes. Texture transfer methods are also more general since they don't need to emulate any particular artistic style (line drawing, hatching, realistic oil painting, and so on). Not surprisingly, there is a price to pay for this generality - an algorithm designed for a specific artistic style will most likely produce results superior to those presented in the paper for that particular case.",
"This paper presents a novel unsupervised method to transfer the style of an example image to a source image. The complex notion of image style is here considered as a local texture transfer, eventually coupled with a global color transfer. For the local texture transfer, we propose a new method based on an adaptive patch partition that captures the style of the example image and preserves the structure of the source image. More precisely, this example-based partition predicts how well a source patch matches an example patch. Results on various images show that our method outperforms the most recent techniques.",
"This paper proposes a two-stage texture synthesis algorithm. At the first stage, a structure tensor map carrying information about the local orientation is synthesized from the exemplar’s data and used at the second stage to constrain the synthesis of the texture. Keeping in mind that the algorithm should be able to reproduce as faithfully as possible the visual aspect, statistics, and morphology of the input sample, the method is tested on various textures and compared objectively with existing methods, highlighting its strength in successfully synthesizing the output texture in many situations where traditional algorithms fail to reproduce the exemplar’s patterns. The promising results pave the way towards the synthesis of accurately large and multi-scale patterns as it is the case for carbon material samples showing laminar structures, for example.",
"",
"In this paper, we aim at super-resolving a low-resolution texture under the assumption that a high-resolution patch of the texture is available. To do so, we propose a variational method that combines two approaches that are texture synthesis and image reconstruction. The resulting objective function holds a nonconvex energy that involves a quadratic distance to the low-resolution image, a histogram-based distance to the high-resolution patch, and a nonlocal regularization that links the missing pixels with the patch pixels. As for the histogram-based measure, we use a sum of Wasserstein distances between the histograms of some linear transformations of the textures. The resulting optimization problem is efficiently solved with a primal-dual proximal method. Experiments show that our method leads to a significant improvement, both visually and numerically, with respect to the state-of-the-art algorithms for solving similar problems."
]
} |
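Of the texture-synthesis algorithms cited in the row above, the non-parametric neighborhood-matching scheme (@cite_26) is the easiest to sketch: each missing pixel is filled by finding the sample patch whose known pixels best match the pixel's partially filled neighborhood. A brute-force grayscale NumPy sketch, for illustration only; the function name and interface are assumptions.

```python
import numpy as np

def match_patch(sample, window, mask):
    """Efros–Leung-style lookup: return the center value of the sample patch
    whose known pixels (mask == 1) best match `window` in squared error.

    sample : (H, W) grayscale texture exemplar.
    window : (k, k) neighborhood around the pixel being synthesized.
    mask   : (k, k) binary map of which neighborhood pixels are already filled.
    """
    k = window.shape[0]
    r = k // 2
    H, W = sample.shape
    best, best_err = 0.0, np.inf
    for i in range(r, H - r):
        for j in range(r, W - r):
            patch = sample[i - r:i + r + 1, j - r:j + r + 1]
            err = np.sum(mask * (patch - window) ** 2)   # compare known pixels only
            if err < best_err:
                best_err, best = err, sample[i, j]
    return best
```

A full synthesizer calls this repeatedly at the frontier of unfilled pixels, and texture transfer variants add a correspondence term that also penalizes deviation from the target image, as in image quilting (@cite_32).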
1904.02296 | 2890581551 | Style transfer describes the rendering of an image’s semantic content as different artistic styles. Recently, generative adversarial networks (GANs) have emerged as an effective approach in style transfer by adversarially training the generator to synthesize convincing counterfeits. However, traditional GAN suffers from the mode collapse issue, resulting in unstable training and making style transfer quality difficult to guarantee. In addition, the GAN generator is only compatible with one style, so a series of GANs must be trained to provide users with choices to transfer more than one kind of style. In this paper, we focus on tackling these challenges and limitations to improve style transfer. We propose adversarial gated networks (Gated-GAN) to transfer multiple styles in a single model. The generative networks have three modules: an encoder, a gated transformer, and a decoder. Different styles can be achieved by passing input images through different branches of the gated transformer. To stabilize training, the encoder and decoder are combined as an auto-encoder to reconstruct the input images. The discriminative networks are used to distinguish whether the input image is a stylized or genuine image. An auxiliary classifier is used to recognize the style categories of transferred images, thereby helping the generative networks generate images in multiple styles. In addition, Gated-GAN makes it possible to explore a new style by investigating styles learned from artists or genres. Our extensive experiments demonstrate the stability and effectiveness of the proposed model for multi-style transfer. | The success of deep CNNs for image classification @cite_5 @cite_7 prompted many scientists and engineers to visualize features from a CNN @cite_19 . DeepDream @cite_7 was initially invented to help visualize what a deep neural network sees when given an image. Later, the algorithm became a technique to generate artworks in new psychedelic and abstract forms. Based on image representations derived from pre-trained CNNs, Gatys et al. @cite_34 introduced a neural style transfer algorithm to separate and recombine image content and style. This approach has since been improved in various follow-up papers. Li et al. @cite_13 studied patch-based style transfer by combining generative Markov random field (MRF) models and pre-trained CNNs. Selim et al. @cite_22 extended this idea to head portrait painting transfer by imposing novel spatial constraints to avoid facial deformations. Luan et al. @cite_36 studied photorealistic style transfer by assuming the input-to-output transformation was locally affine in color space. Optimization-based methods can produce high-quality results, but they are computationally expensive, since each optimization step requires a forward and backward pass through the pre-trained network. | {
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_36",
"@cite_19",
"@cite_5",
"@cite_34",
"@cite_13"
],
"mid": [
"2461455396",
"2097117768",
"2604721644",
"1915485278",
"2163605009",
"2475287302",
"2275363859"
],
"abstract": [
"Head portraits are popular in traditional painting. Automating portrait painting is challenging as the human visual system is sensitive to the slightest irregularities in human faces. Applying generic painting techniques often deforms facial structures. On the other hand portrait painting techniques are mainly designed for the graphite style and or are based on image analogies; an example painting as well as its original unpainted version are required. This limits their domain of applicability. We present a new technique for transferring the painting from a head portrait onto another. Unlike previous work our technique only requires the example painting and is not restricted to a specific style. We impose novel spatial constraints by locally transferring the color distributions of the example painting. This better captures the painting texture and maintains the integrity of facial structures. We generate a solution through Convolutional Neural Networks and we present an extension to video. Here motion is exploited in a way to reduce temporal inconsistencies and the shower-door effect. Our approach transfers the painting style while maintaining the input photograph identity. In addition it significantly reduces facial deformations over state of the art.",
"We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.",
"This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style. Our approach builds upon the recent work on painterly transfer that separates style from the content of an image by considering different layers of a neural network. However, as is, this approach is not suitable for photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. Our contribution is to constrain the transformation from the input to the output to be locally affine in colorspace, and to express this constraint as a custom fully differentiable energy term. We show that this approach successfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, including transfer of the time of day, weather, season, and artistic edits.",
"Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of them remains limited. In this paper we conduct a direct analysis of the visual information contained in representations by asking the following question: given an encoding of an image, to which extent is it possible to reconstruct the image itself? To answer this question we contribute a general framework to invert representations. We show that this method can invert representations such as HOG more accurately than recent alternatives while being applicable to CNNs too. We then use this technique to study the inverse of recent state-of-the-art CNN image representations for the first time. Among our findings, we show that several layers in CNNs retain photographically accurate information about the image, with different degrees of geometric and photometric invariance.",
"We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overriding in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry.",
"Rendering the semantic content of an image in different styles is a difficult image processing task. Arguably, a major limiting factor for previous approaches has been the lack of image representations that explicitly represent semantic information and, thus, allow to separate image content from style. Here we use image representations derived from Convolutional Neural Networks optimised for object recognition, which make high level image information explicit. We introduce A Neural Algorithm of Artistic Style that can separate and recombine the image content and style of natural images. The algorithm allows us to produce new images of high perceptual quality that combine the content of an arbitrary photograph with the appearance of numerous wellknown artworks. Our results provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation.",
"This paper studies a combination of generative Markov random field (MRF) models and discriminatively trained deep convolutional neural networks (dCNNs) for synthesizing 2D images. The generative MRF acts on higher-levels of a dCNN feature pyramid, controling the image layout at an abstract level. We apply the method to both photographic and non-photo-realistic (artwork) synthesis tasks. The MRF regularizer prevents over-excitation artifacts and reduces implausible feature mixtures common to previous dCNN inversion approaches, permitting synthezing photographic content with increased visual plausibility. Unlike standard MRF-based texture synthesis, the combined system can both match and adapt local features with considerable variability, yielding results far out of reach of classic generative MRF methods."
]
} |
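The Gatys-style transfer cited in the row above separates content from style by matching second-order feature statistics. A minimal sketch of the Gram-matrix style loss, assuming `generated_feats` and `style_feats` are lists of activations taken from a pre-trained CNN (the layer choice and the normalization are illustrative, not the paper's exact settings):

```python
import torch

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    # features: (batch, channels, height, width) activations from one CNN layer
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    # Channel-to-channel correlations; dividing by c*h*w is one common normalization
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def style_loss(generated_feats, style_feats):
    # Squared Gram-matrix differences summed over the chosen layers
    return sum(
        torch.mean((gram_matrix(g) - gram_matrix(s)) ** 2)
        for g, s in zip(generated_feats, style_feats)
    )
```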
1904.02232 | 2952357537 | Question-answering plays an important role in e-commerce as it allows potential customers to actively seek crucial information about products or services to help their purchase decision making. Inspired by the recent success of machine reading comprehension (MRC) on formal documents, this paper explores the potential of turning customer reviews into a large source of knowledge that can be exploited to answer user questions. We call this problem Review Reading Comprehension (RRC). To the best of our knowledge, no existing work has been done on RRC. In this work, we first build an RRC dataset called ReviewRC based on a popular benchmark for aspect-based sentiment analysis. Since ReviewRC has limited training examples for RRC (and also for aspect-based sentiment analysis), we then explore a novel post-training approach on the popular language model BERT to enhance the performance of fine-tuning of BERT for RRC. To show the generality of the approach, the proposed post-training is also applied to some other review-based tasks such as aspect extraction and aspect sentiment classification in aspect-based sentiment analysis. Experimental results demonstrate that the proposed post-training is highly effective. The datasets and code are available at https: www.cs.uic.edu hxu . | Knowledge bases (KBs) (such as Freebase @cite_41 @cite_17 @cite_45 or DBpedia @cite_15 @cite_24 ) have been used for question answering @cite_21 . However, the ever-changing nature of online businesses, where new products and services appear constantly, makes it prohibitive to build a high-quality KB to cover all new products and services. | {
"cite_N": [
"@cite_41",
"@cite_21",
"@cite_24",
"@cite_45",
"@cite_15",
"@cite_17"
],
"mid": [
"",
"2783691043",
"2151149636",
"2148721079",
"2159817687",
"2444318157"
],
"abstract": [
"",
"In E-commerce sites, there are platforms for users to pose product-related questions and experienced customers may provide answers voluntarily. Among the questions asked by users, a large proportion of them are yes-no questions reflecting that users wish to know whether or not the product can satisfy a certain criterion or meet a certain expectation. Both Question Answering (QA) approaches and Community Question Answering methods are not suitable for answer prediction for new questions in this setting. The reasons are that questions are product-associated and many of them are concerned about user experiences and subjective opinions. In addition to existing question-answer pairs, user written reviews can provide useful clues for answer prediction. In this paper, we propose a new framework that can tackle the task of review-aware answer prediction for product-related questions. The aspect analytics model in this framework learns latent aspects as well as aspect-specific embeddings of reviews via a 3-order Autoencoder. One advantage of this learned model is that it can generate aspect-specific representations for new questions. The predictive answer model in our framework, learned jointly from existing questions, answers, and reviews, is able to predict the answers for new yes-no questions taking into consideration of aspects. Besides, our framework can provide supportive reviews grouped by relevant aspects serving as information for explainable answers. Experiment results on 15 different product categories from a large-scale benchmark E-commence QA dataset demonstrate the effectiveness of our framework.",
"As an increasing amount of RDF data is published as Linked Data, intuitive ways of accessing this data become more and more important. Question answering approaches have been proposed as a good compromise between intuitiveness and expressivity. Most question answering systems translate questions into triples which are matched against the RDF data to retrieve an answer, typically relying on some similarity metric. However, in many cases, triples do not represent a faithful representation of the semantic structure of the natural language question, with the result that more expressive queries can not be answered. To circumvent this problem, we present a novel approach that relies on a parse of the question to produce a SPARQL template that directly mirrors the internal structure of the question. This template is then instantiated using statistical entity identification and predicate detection. We show that this approach is competitive and discuss cases of questions that can be answered with our approach but not with competing approaches.",
"Answering natural language questions using the Freebase knowledge base has recently been explored as a platform for advancing the state of the art in open domain semantic parsing. Those efforts map questions to sophisticated meaning representations that are then attempted to be matched against viable answer candidates in the knowledge base. Here we show that relatively modest information extraction techniques, when paired with a webscale corpus, can outperform these sophisticated approaches by roughly 34 relative gain.",
"Linked Data semantic sources, in particular DBpedia, can be used to answer many user queries. PowerAqua is an open multi-ontology Question Answering (QA) system for the Semantic Web (SW). However, the emergence of Linked Data, characterized by its openness, heterogeneity and scale, introduces a new dimension to the Semantic Web scenario, in which exploiting the relevant information to extract answers for Natural Language (NL) user queries is a major challenge. In this paper we discuss the issues and lessons learned from our experience of integrating PowerAqua as a front-end for DBpedia and a subset of Linked Data sources. As such, we go one step beyond the state of the art on end-users interfaces for Linked Data by introducing mapping and fusion techniques needed to translate a user query by means of multiple sources. Our first informal experiments probe whether, in fact, it is feasible to obtain answers to user queries by composing information across semantic sources and Linked Data, even in its current form, where the strength of Linked Data is more a by-product of its size than its quality. We believe our experiences can be extrapolated to a variety of end-user applications that wish to scale, open up, exploit and re-use what possibly is the greatest wealth of data about everything in the history of Artificial Intelligence.",
"Existing knowledge-based question answering systems often rely on small annotated training data. While shallow methods like relation extraction are robust to data scarcity, they are less expressive than the deep meaning representation methods like semantic parsing, thereby failing at answering questions involving multiple constraints. Here we alleviate this problem by empowering a relation extraction method with additional evidence from Wikipedia. We first present a neural network based relation extractor to retrieve the candidate answers from Freebase, and then infer over Wikipedia to validate these answers. Experiments on the WebQuestions question answering dataset show that our method achieves an F_1 of 53.3 , a substantial improvement over the state-of-the-art."
]
} |
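The review reading comprehension task in the row above follows the SQuAD-style span-extraction setup: the model scores every token as a potential answer start and end. A minimal sketch of such a head on top of a BERT-like encoder (the hidden size of 768 and the name `SpanHead` are assumptions for illustration):

```python
import torch
import torch.nn as nn

class SpanHead(nn.Module):
    """Predicts answer-span start/end positions from encoder outputs."""
    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.qa_outputs = nn.Linear(hidden_size, 2)  # two logits per token

    def forward(self, hidden: torch.Tensor):
        # hidden: (batch, seq_len, hidden_size) token representations
        start_logits, end_logits = self.qa_outputs(hidden).split(1, dim=-1)
        return start_logits.squeeze(-1), end_logits.squeeze(-1)
```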
1904.01990 | 2931428928 | This paper considers the domain adaptive person re-identification (re-ID) problem: learning a re-ID model from a labeled source domain and an unlabeled target domain. Conventional methods mainly aim to reduce the feature distribution gap between the source and target domains. However, these studies largely neglect the intra-domain variations in the target domain, which contain critical factors influencing the testing performance on the target domain. In this work, we comprehensively investigate the intra-domain variations of the target domain and propose to generalize the re-ID model w.r.t. three types of the underlying invariance, i.e., exemplar-invariance, camera-invariance and neighborhood-invariance. To achieve this goal, an exemplar memory is introduced to store features of the target domain and accommodate the three invariance properties. The memory allows us to enforce the invariance constraints over the global training batch without significantly increasing computation cost. Experiments demonstrate that the three invariance properties and the proposed memory are indispensable towards an effective domain adaptation system. Results on three re-ID domains show that our domain adaptation accuracy outperforms the state of the art by a large margin. Code is available at: this https URL | Indeed, the three invariance properties and the memory module have been separately presented in existing works. However, our work differs from them. The exemplar-invariance and memory module have been presented in self-supervised learning @cite_41 , few-shot learning @cite_37 @cite_38 @cite_24 and supervised learning @cite_13 . Yet, we explore the feasibility of this idea in unsupervised domain adaptation and in overcoming the variations in the target domain. The neighborhood-invariance is similar to deep association learning (DAL) @cite_39 . A difference from DAL is that we design a soft classification loss to align the top- @math neighbors instead of calculating the triplet loss between the mutual top-1 neighbors. Importantly, compared with HHL @cite_14 and DAL @cite_39 , we comprehensively consider three invariance constraints. It is worthwhile to discover the mutual benefit among the three invariance properties. |
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_14",
"@cite_41",
"@cite_39",
"@cite_24",
"@cite_13"
],
"mid": [
"2963341924",
"2472819217",
"2896016251",
"2798991696",
"",
"2885201931",
"2963574614"
],
"abstract": [
"Learning from a few examples remains a key challenge in machine learning. Despite recent advances in important domains such as vision and language, the standard supervised deep learning paradigm does not offer a satisfactory solution for learning new concepts rapidly from little data. In this work, we employ ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories. Our framework learns a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. We then define one-shot learning problems on vision (using Omniglot, ImageNet) and language tasks. Our algorithm improves one-shot accuracy on ImageNet from 87.6 to 93.2 and from 88.0 to 93.8 on Omniglot compared to competing approaches. We also demonstrate the usefulness of the same model on language modeling by introducing a one-shot task on the Penn Treebank.",
"Despite recent breakthroughs in the applications of deep neural networks, one setting that presents a persistent challenge is that of \"one-shot learning.\" Traditional gradient-based networks require a lot of data to learn, often through extensive iterative training. When new data is encountered, the models must inefficiently relearn their parameters to adequately incorporate the new information without catastrophic interference. Architectures with augmented memory capacities, such as Neural Turing Machines (NTMs), offer the ability to quickly encode and retrieve new information, and hence can potentially obviate the downsides of conventional models. Here, we demonstrate the ability of a memory-augmented neural network to rapidly assimilate new data, and leverage this data to make accurate predictions after only a few samples. We also introduce a new method for accessing an external memory that focuses on memory content, unlike previous methods that additionally use memory location-based focusing mechanisms.",
"Person re-identification (re-ID) poses unique challenges for unsupervised domain adaptation (UDA) in that classes in the source and target sets (domains) are entirely different and that image variations are largely caused by cameras. Given a labeled source training set and an unlabeled target training set, we aim to improve the generalization ability of re-ID models on the target testing set. To this end, we introduce a Hetero-Homogeneous Learning (HHL) method. Our method enforces two properties simultaneously: (1) camera invariance, learned via positive pairs formed by unlabeled target images and their camera style transferred counterparts; (2) domain connectedness, by regarding source target images as negative matching pairs to the target source images. The first property is implemented by homogeneous learning because training pairs are collected from the same domain. The second property is achieved by heterogeneous learning because we sample training pairs from both the source and target domains. On Market-1501, DukeMTMC-reID and CUHK03, we show that the two properties contribute indispensably and that very competitive re-ID UDA accuracy is achieved. Code is available at: https: github.com zhunzhong07 HHL.",
"Neural net classifiers trained on data with annotated class labels can also capture apparent visual similarity among categories without being directed to do so. We study whether this observation can be extended beyond the conventional domain of supervised learning: Can we learn a good feature representation that captures apparent similarity among instances, instead of classes, by merely asking the feature to be discriminative of individual instances? We formulate this intuition as a non-parametric classification problem at the instance-level, and use noise-contrastive estimation to tackle the computational challenges imposed by the large number of instance classes. Our experimental results demonstrate that, under unsupervised learning settings, our method surpasses the state-of-the-art on ImageNet classification by a large margin. Our method is also remarkable for consistently improving test performance with more training data and better network architectures. By fine-tuning the learned feature, we further obtain competitive results for semi-supervised learning and object detection tasks. Our non-parametric model is highly compact: With 128 features per image, our method requires only 600MB storage for a million images, enabling fast nearest neighbour retrieval at the run time.",
"",
"Current major approaches to visual recognition follow an end-to-end formulation that classifies an input image into one of the pre-determined set of semantic categories. Parametric softmax classifiers are a common choice for such a closed world with fixed categories, especially when big labeled data is available during training. However, this becomes problematic for open-set scenarios where new categories are encountered with very few examples for learning a generalizable parametric classifier. We adopt a non-parametric approach for visual recognition by optimizing feature embeddings instead of parametric classifiers. We use a deep neural network to learn the visual feature that preserves the neighborhood structure in the semantic space, based on the Neighborhood Component Analysis (NCA) criterion. Limited by its computational bottlenecks, we devise a mechanism to use augmented memory to scale NCA for large datasets and very deep networks. Our experiments deliver not only remarkable performance on ImageNet classification for such a simple non-parametric method, but most importantly a more generalizable feature representation for sub-category discovery and few-shot recognition.",
"Existing person re-identification benchmarks and methods mainly focus on matching cropped pedestrian images between queries and candidates. However, it is different from real-world scenarios where the annotations of pedestrian bounding boxes are unavailable and the target person needs to be searched from a gallery of whole scene images. To close the gap, we propose a new deep learning framework for person search. Instead of breaking it down into two separate tasks—pedestrian detection and person re-identification, we jointly handle both aspects in a single convolutional neural network. An Online Instance Matching (OIM) loss function is proposed to train the network effectively, which is scalable to datasets with numerous identities. To validate our approach, we collect and annotate a large-scale benchmark dataset for person search. It contains 18,184 images, 8,432 identities, and 96,143 pedestrian bounding boxes. Experiments show that our framework outperforms other separate approaches, and the proposed OIM loss function converges much faster and better than the conventional Softmax loss."
]
} |
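The exemplar memory described in the row above keeps one feature slot per target-domain image, scores queries against all slots with a temperature-scaled softmax, and refreshes slots with a running average instead of backpropagation. A minimal sketch (the temperature 0.05 and momentum 0.5 are illustrative values, not the paper's tuned hyperparameters):

```python
import torch
import torch.nn.functional as F

class ExemplarMemory:
    def __init__(self, num_samples: int, feat_dim: int,
                 temperature: float = 0.05, momentum: float = 0.5):
        self.memory = F.normalize(torch.randn(num_samples, feat_dim), dim=1)
        self.temp = temperature
        self.m = momentum

    def scores(self, feats: torch.Tensor) -> torch.Tensor:
        # Cosine similarity of each query against every stored exemplar,
        # turned into a non-parametric softmax over all target instances
        return F.softmax(feats @ self.memory.t() / self.temp, dim=1)

    def update(self, feats: torch.Tensor, indices: torch.Tensor):
        # Exponential moving average keeps each sample's own slot fresh
        slot = self.m * self.memory[indices] + (1 - self.m) * feats.detach()
        self.memory[indices] = F.normalize(slot, dim=1)
```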
1904.02242 | 2912758744 | Thermal Infrared (TIR) cameras are gaining popularity in many computer vision applications due to their ability to operate under low-light conditions. Images produced by TIR cameras are usually difficult for humans to perceive visually, which limits their usability. Several methods in the literature were proposed to address this problem by transforming TIR images into realistic visible spectrum (VIS) images. However, existing TIR-VIS datasets suffer from imperfect alignment between TIR-VIS image pairs which degrades the performance of supervised methods. We tackle this problem by learning this transformation using an unsupervised Generative Adversarial Network (GAN) which trains on unpaired TIR and VIS images. When trained and evaluated on the KAIST-MS dataset, our proposed method was shown to produce significantly more realistic and sharp VIS images than the existing state-of-the-art supervised methods. In addition, our proposed method was shown to generalize very well when evaluated on a new dataset of new environments. | In the infrared spectrum, less research has been done on transforming thermal images to VIS images. In @cite_13 , a CNN-based method was proposed to transform near-infrared (NIR) images to VIS images. Their method was shown to perform well as the NIR and VIS images are highly correlated in the electromagnetic spectrum. Kniaz @cite_10 proposed VIS to TIR transformation using a CNN model as a way to generate synthetic TIR images. KAIST-MS @cite_9 introduced the first realistic large-scale dataset of TIR-VIS image pairs, which opened the door to developing TIR-VIS transformation models. Berg @cite_6 proposed a CNN-based model to transform TIR images to VIS images trained on the KAIST-MS dataset. However, the imperfect registration of the dataset caused the output from their method to be blurry and corrupted in some cases. |
"cite_N": [
"@cite_10",
"@cite_9",
"@cite_13",
"@cite_6"
],
"mid": [
"2612034263",
"2549063375",
"2963805028",
"2887656403"
],
"abstract": [
"Abstract. Deep convolutional neural networks have dramatically changed the landscape of the modern computer vision. Nowadays methods based on deep neural networks show the best performance among image recognition and object detection algorithms. While polishing of network architectures received a lot of scholar attention, from the practical point of view the preparation of a large image dataset for a successful training of a neural network became one of major challenges. This challenge is particularly profound for image recognition in wavelengths lying outside the visible spectrum. For example no infrared or radar image datasets large enough for successful training of a deep neural network are available to date in public domain. Recent advances of deep neural networks prove that they are also capable to do arbitrary image transformations such as super-resolution image generation, grayscale image colorisation and imitation of style of a given artist. Thus a natural question arise: how could be deep neural networks used for augmentation of existing large image datasets? This paper is focused on the development of the Thermalnet deep convolutional neural network for augmentation of existing large visible image datasets with synthetic thermal images. The Thermalnet network architecture is inspired by colorisation deep neural networks.",
"Multispectral pedestrian detection is essential for around-the-clock applications, e.g., surveillance and autonomous driving. We deeply analyze Faster R-CNN for multispectral pedestrian detection task and then model it into a convolutional network (ConvNet) fusion problem. Further, we discover that ConvNet-based pedestrian detectors trained by color or thermal images separately provide complementary information in discriminating human instances. Thus there is a large potential to improve pedestrian detection by using color and thermal images in DNNs simultaneously. We carefully design four ConvNet fusion architectures that integrate two-branch ConvNets on different DNNs stages, all of which yield better performance compared with the baseline detector. Our experimental results on KAIST pedestrian benchmark show that the Halfway Fusion model that performs fusion on the middle-level convolutional features outperforms the baseline method by 11 and yields a missing rate 3.5 lower than the other proposed architectures.",
"This paper proposes a method for transferring the RGB color spectrum to near-infrared (NIR) images using deep multi-scale convolutional neural networks. A direct and integrated transfer between NIR and RGB pixels is trained. The trained model does not require any user guidance or a reference image database in the recall phase to produce images with a natural appearance. To preserve the rich details of the NIR image, its high frequency features are transferred to the estimated RGB image. The presented approach is trained and evaluated on a real-world dataset containing a large amount of road scene images in summer. The dataset was captured by a multi-CCD NIR RGB camera, which ensures a perfect pixel to pixel registration.",
"Transformation of thermal infrared (TIR) images into visual, i.e. perceptually realistic color (RGB) images, is a challenging problem. TIR cameras have the ability to see in scenarios where vision is severely impaired, for example in total darkness or fog, and they are commonly used, e.g., for surveillance and automotive applications. However, interpretation of TIR images is difficult, especially for untrained operators. Enhancing the TIR image display by transforming it into a plausible, visual, perceptually realistic RGB image presumably facilitates interpretation. Existing grayscale to RGB, so called, colorization methods cannot be applied to TIR images directly since those methods only estimate the chrominance and not the luminance. In the absence of conventional colorization methods, we propose two fully automatic TIR to visual color image transformation methods, a two-step and an integrated approach, based on Convolutional Neural Networks. The methods require neither pre- nor postprocessing, do not require any user input, and are robust to image pair misalignments. We show that the methods do indeed produce perceptually realistic results on publicly available data, which is assessed both qualitatively and quantitatively."
]
} |
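Because the TIR/VIS pairs are not aligned, the row above trains on unpaired images; the usual way to keep such a GAN content-preserving is a cycle-consistency term alongside the adversarial losses. A minimal sketch, assuming `g_t2v` and `g_v2t` are the two generator networks (the names and the weight of 10.0 are placeholders):

```python
import torch

def cycle_consistency_loss(tir, vis, g_t2v, g_v2t, weight: float = 10.0):
    # Round-trip translations should reconstruct the inputs
    tir_rec = g_v2t(g_t2v(tir))
    vis_rec = g_t2v(g_v2t(vis))
    # L1 penalties discourage the generators from altering scene content
    return weight * (torch.mean(torch.abs(tir_rec - tir)) +
                     torch.mean(torch.abs(vis_rec - vis)))
```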
1904.02095 | 2927639528 | The enormous financial success of online advertising platforms is partially due to the precise targeting features they offer. Although researchers and journalists have found many ways that advertisers can target---or exclude---particular groups of users seeing their ads, comparatively little attention has been paid to the implications of the platform's ad delivery process, comprised of the platform's choices about who should see an ad. It has been hypothesized that this process can "skew" ad delivery in ways that the advertisers do not intend, making some users less likely than others to see particular ads based on their demographic characteristics. In this paper, we demonstrate that such skewed delivery occurs on Facebook, due to market and financial optimization effects as well as the platform's own predictions about the "relevance" of ads to different groups of users. We find that both the advertiser's budget and the content of the ad each significantly contribute to the skew of Facebook's ad delivery. Critically, we observe significant skew in delivery along gender and racial lines for "real" ads for employment and housing opportunities despite neutral targeting parameters. Our results demonstrate previously unknown mechanisms that can lead to potentially discriminatory ad delivery, even when advertisers set their targeting parameters to be highly inclusive. This underscores the need for policymakers and platforms to carefully consider the role of the ad delivery optimization run by ad platforms themselves---and not just the targeting choices of advertisers---in preventing discrimination in digital advertising. | Discrimination in advertising. As described above, Facebook has some policies and tools in place to prevent discriminatory ad targeting. However, advertisers can still exclude users based on a variety of interests that are highly correlated with race by using custom audiences @cite_0 , or by using location @cite_39 @cite_3 . Separately, Sweeney @cite_52 and Datta et al. @cite_38 have studied discrimination in Google's advertising system, and have examined the potential parties responsible and how their actions may be interpreted under the law @cite_17 . |
"cite_N": [
"@cite_38",
"@cite_52",
"@cite_3",
"@cite_39",
"@cite_0",
"@cite_17"
],
"mid": [
"2951240445",
"",
"",
"2794558831",
"2794342582",
"2789814261"
],
"abstract": [
"To partly address people's concerns over web tracking, Google has created the Ad Settings webpage to provide information about and some choice over the profiles Google creates on users. We present AdFisher, an automated tool that explores how user behaviors, Google's ads, and Ad Settings interact. AdFisher can run browser-based experiments and analyze data using machine learning and significance tests. Our tool uses a rigorous experimental design and statistical analysis to ensure the statistical soundness of our results. We use AdFisher to find that the Ad Settings was opaque about some features of a user's profile, that it does provide some choice on ads, and that these choices can lead to seemingly discriminatory ads. In particular, we found that visiting webpages associated with substance abuse changed the ads shown but not the settings page. We also found that setting the gender to female resulted in getting fewer instances of an ad related to high paying jobs than setting it to male. We cannot determine who caused these findings due to our limited visibility into the ad ecosystem, which includes Google, advertisers, websites, and users. Nevertheless, these results can form the starting point for deeper investigations by either the companies themselves or by regulatory bodies.",
"",
"",
"Ad targeting is getting more powerful with introduction of new tools, such as Custom Audiences, behavioral targeting, and Audience Insights. Although this is beneficial for businesses as it enables people to receive more relevant advertising, the power of the tools has downsides. In this paper, we focus on three downsides: privacy violations, microtargeting (i.e., the ability to reach a specific individual or individuals without their explicit knowledge that they are the only ones an ad reaches) and ease of reaching marginalized groups. Using Facebook's ad system as a case study, we demonstrate the feasibility of such downsides. We then discuss Facebook's response to our responsible disclosures of the findings and call for additional policy, science, and engineering work to protect consumers in the rapidly evolving ecosystem of ad targeting.",
"Recently, online targeted advertising platforms like Facebook have been criticized for allowing advertisers to discriminate against users belonging to sensitive groups, i.e., to exclude users belonging to a certain race or gender from receiving their ads. Such criticisms have led, for instance, Facebook to disallow the use of attributes such as ethnic affinity from being used by advertisers when targeting ads related to housing or employment or financial services. In this paper, we show that such measures are far from sufficient and that the problem of discrimination in targeted advertising is much more pernicious. We argue that discrimination measures should be based on the targeted population and not on the attributes used for targeting. We systematically investigate the different targeting methods offered by Facebook for their ability to enable discriminatory advertising. We show that a malicious advertiser can create highly discriminatory ads without using sensitive attributes. Our findings call for exploring fundamentally new methods for mitigating discrimination in online targeted advertising.",
"Author(s): Datta, A; Datta, A; Makagon, J; Mulligan, DK; Tschantz, MC | Editor(s): Friedler, SA; Wilson, C"
]
} |
1904.01987 | 2934088140 | Convolutional neural networks (CNNs) have demonstrated their capability to solve different kinds of problems in a huge number of applications. However, CNNs are limited by their computational and storage requirements. These limitations make it difficult to implement these kinds of neural networks on embedded devices such as mobile phones, smart cameras or advanced driving assistance systems. In this paper, we present a novel layer named Hybrid Cosine Based Convolution that replaces standard convolutional layers using cosine basis to generate filter weights. The proposed layers provide several advantages: faster convergence in training, a receptive field that can be increased at no cost, and a substantially reduced number of parameters. We evaluate our proposed layers on three competitive classification tasks where our proposed layers achieve performance similar to (and in some cases better than) VGG and ResNet architectures. | In recent years, frequency analysis has been applied to improve several deep learning architectures. For example, @cite_13 showed a methodology that reduces memory usage and speeds up the inference stage. It consists of approximating the trained rank-N convolutional neural network filters by separable rank-1 filters. In an extension of this, the standard convolution in the spatial domain was replaced by multiplications of the transformed filters in the frequency domain @cite_17 . |
"cite_N": [
"@cite_13",
"@cite_17"
],
"mid": [
"1996901117",
"2963340555"
],
"abstract": [
"The focus of this paper is speeding up the application of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability. Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain. Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real world network designed for scene text character recognition [15], showing a possible 2.5× speedup with no loss in accuracy, and 4.5× speedup with less than 1 drop in accuracy, still achieving state-of-the-art on standard benchmarks.",
"Abstract: Convolutional networks are one of the most widely employed architectures in computer vision and machine learning. In order to leverage their ability to learn complex functions, large amounts of data are required for training. Training a large convolutional network to produce state-of-the-art results can take weeks, even when using modern GPUs. Producing labels using a trained network can also be costly when dealing with web-scale datasets. In this work, we present a simple algorithm which accelerates training and inference by a significant factor, and can yield improvements of over an order of magnitude compared to existing state-of-the-art implementations. This is done by computing convolutions as pointwise products in the Fourier domain while reusing the same transformed feature map many times. The algorithm is implemented on a GPU architecture and addresses a number of related challenges."
]
} |
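The rank-1 factorization cited in the row above replaces a full k×k spatial filter with an outer product of two vectors, so one 2D convolution becomes a k×1 pass followed by a 1×k pass. A minimal numpy sketch of the decomposition step only (the cited method additionally exploits cross-channel redundancy, which is omitted here):

```python
import numpy as np

def rank1_approximation(kernel: np.ndarray):
    # kernel: (k, k) spatial filter; keep the top singular component only
    u, s, vt = np.linalg.svd(kernel)
    col = u[:, 0] * np.sqrt(s[0])   # vertical k x 1 filter
    row = vt[0, :] * np.sqrt(s[0])  # horizontal 1 x k filter
    # np.outer(col, row) is the best rank-1 approximation of the kernel
    return col, row
```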
1904.01987 | 2934088140 | Convolutional neural networks (CNNs) have demonstrated their capability to solve different kinds of problems in a huge number of applications. However, CNNs are limited by their computational and storage requirements. These limitations make it difficult to implement these kinds of neural networks on embedded devices such as mobile phones, smart cameras or advanced driving assistance systems. In this paper, we present a novel layer named Hybrid Cosine Based Convolution that replaces standard convolutional layers using cosine basis to generate filter weights. The proposed layers provide several advantages: faster convergence in training, a receptive field that can be increased at no cost, and a substantially reduced number of parameters. We evaluate our proposed layers on three competitive classification tasks where our proposed layers achieve performance similar to (and in some cases better than) VGG and ResNet architectures. | Hashed Networks exploit inherent redundancy in neural networks using a low-cost hash function to randomly group connection weights into hash buckets, in which all the weights share the same parameter value @cite_19 . An extension by @cite_5 groups the weights based on their DCT representation. Another methodology by @cite_8 prunes the network by learning only the important connections. Then the weights are quantized to enforce weight sharing. Finally, Huffman encoding is applied. |
"cite_N": [
"@cite_19",
"@cite_5",
"@cite_8"
],
"mid": [
"2172166488",
"2382313035",
"2964299589"
],
"abstract": [
"As deep nets are increasingly used in applications suited for mobile devices, a fundamental dilemma becomes apparent: the trend in deep learning is to grow models to absorb ever-increasing data set sizes; however mobile devices are designed with very little memory and cannot store such large models. We present a novel network architecture, HashedNets, that exploits inherent redundancy in neural networks to achieve drastic reductions in model sizes. HashedNets uses a low-cost hash function to randomly group connection weights into hash buckets, and all connections within the same hash bucket share a single parameter value. These parameters are tuned to adjust to the HashedNets weight sharing architecture with standard backprop during training. Our hashing procedure introduces no additional memory overhead, and we demonstrate on several benchmark data sets that HashedNets shrink the storage requirements of neural networks substantially while mostly preserving generalization performance.",
"Convolutional neural networks (CNN) are increasingly used in many areas of computer vision. They are particularly attractive because of their ability to \"absorb\" great quantities of labeled data through millions of parameters. However, as model sizes increase, so do the storage and memory requirements of the classifiers, hindering many applications such as image and speech recognition on mobile phones and other devices. In this paper, we present a novel net- work architecture, Frequency-Sensitive Hashed Nets (FreshNets), which exploits inherent redundancy in both convolutional layers and fully-connected layers of a deep learning model, leading to dramatic savings in memory and storage consumption. Based on the key observation that the weights of learned convolutional filters are typically smooth and low-frequency, we first convert filter weights to the frequency domain with a discrete cosine transform (DCT) and use a low-cost hash function to randomly group frequency parameters into hash buckets. All parameters assigned the same hash bucket share a single value learned with standard back-propagation. To further reduce model size, we allocate fewer hash buckets to high-frequency components, which are generally less important. We evaluate FreshNets on eight data sets, and show that it leads to better compressed performance than several relevant baselines.",
"Abstract: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency."
]
} |
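The hashing trick in the row above never stores a dense weight matrix: each virtual connection (i, j) is mapped by a cheap hash into a small bucket array, and every connection landing in the same bucket shares one trainable value. A minimal sketch using Python's built-in tuple hash (the original work uses a dedicated hash function; the class and all settings here are illustrative):

```python
import numpy as np

class HashedLayer:
    def __init__(self, in_dim: int, out_dim: int, num_buckets: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.buckets = 0.01 * rng.standard_normal(num_buckets)  # shared parameters
        # Map every virtual weight (i, j) to a shared bucket index
        self.idx = np.array([[hash((seed, i, j)) % num_buckets
                              for j in range(in_dim)] for i in range(out_dim)])

    def forward(self, x: np.ndarray) -> np.ndarray:
        # Materialize the virtual (out_dim, in_dim) matrix from the buckets
        w = self.buckets[self.idx]
        return w @ x
```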
1904.01987 | 2934088140 | Convolutional neural networks (CNNs) have demonstrated their capability to solve different kinds of problems in a huge number of applications. However, CNNs are limited by their computational and storage requirements. These limitations make it difficult to implement these kinds of neural networks on embedded devices such as mobile phones, smart cameras or advanced driving assistance systems. In this paper, we present a novel layer named Hybrid Cosine Based Convolution that replaces standard convolutional layers using cosine basis to generate filter weights. The proposed layers provide several advantages: faster convergence in training, a receptive field that can be increased at no cost, and a substantially reduced number of parameters. We evaluate our proposed layers on three competitive classification tasks where our proposed layers achieve performance similar to (and in some cases better than) VGG and ResNet architectures. | In closer relation to the work proposed in this paper, @cite_1 present a method to compress neural networks in the frequency domain. Using a DCT, filter weights are clustered. These weights are quantized and encoded using Huffman coding. However, this compression is only applied for storage purposes. |
"cite_N": [
"@cite_1"
],
"mid": [
"2554242204"
],
"abstract": [
"Deep convolutional neural networks (CNNs) are successfully used in a number of applications. However, their storage and computational requirements have largely prevented their widespread use on mobile devices. Here we present an effective CNN compression approach in the frequency domain, which focuses not only on smaller weights but on all the weights and their underlying connections. By treating convolutional filters as images, we decompose their representations in the frequency domain as common parts (i.e., cluster centers) shared by other similar filters and their individual private parts (i.e., individual residuals). A large number of low-energy frequency coefficients in both parts can be discarded to produce high compression without significantly compromising accuracy. We relax the computational burden of convolution operations in CNNs by linearly combining the convolution responses of discrete cosine transform (DCT) bases. The compression and speed-up ratios of the proposed algorithm are thoroughly analyzed and evaluated on benchmark image datasets to demonstrate its superiority over state-of-the-art methods."
]
} |
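The frequency-domain compression in the row above treats each filter like a small image: transform with a DCT, quantize the coefficients (most of the energy sits in a few low frequencies), and entropy-code the result. A minimal sketch of the transform-and-quantize step with scipy (the clustering and Huffman stages are omitted; the step size is a placeholder):

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_filter(kernel: np.ndarray, step: float = 0.1) -> np.ndarray:
    # 2D DCT-II of the spatial filter concentrates energy in few coefficients
    coeffs = dctn(kernel, norm="ortho")
    return np.round(coeffs / step).astype(np.int32)

def decompress_filter(quantized: np.ndarray, step: float = 0.1) -> np.ndarray:
    # Inverse transform of the dequantized coefficients
    return idctn(quantized * step, norm="ortho")
```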
1904.01987 | 2934088140 | Convolutional neural networks (CNNs) have demonstrated their capability to solve different kinds of problems in a huge number of applications. However, CNNs are limited by their computational and storage requirements. These limitations make it difficult to implement these kinds of neural networks on embedded devices such as mobile phones, smart cameras or advanced driving assistance systems. In this paper, we present a novel layer named Hybrid Cosine Based Convolution that replaces standard convolutional layers using cosine basis to generate filter weights. The proposed layers provide several advantages: faster convergence in training, a receptive field that can be increased at no cost, and a substantially reduced number of parameters. We evaluate our proposed layers on three competitive classification tasks where our proposed layers achieve performance similar to (and in some cases better than) VGG and ResNet architectures. | In a similar manner, @cite_9 present harmonic blocks that are used to replace standard convolutional layers. Harmonic blocks consist of a convolution with a filter bank that isolates the coefficients of the DCT basis functions to their exclusive feature maps, creating a new feature map for each channel and each frequency defined by hand. However, the spectral decomposition of this proposal upsamples the number of intermediate features between layers, thus notably increasing the corresponding memory requirements. In our case, DCT frequencies are learned, so only the most relevant decompositions are used in the network. |
"cite_N": [
"@cite_9"
],
"mid": [
"2903953509"
],
"abstract": [
"Convolutional neural networks (CNNs) learn filters in order to capture local correlation patterns in feature space. In contrast, in this paper we propose harmonic blocks that produce features by learning optimal combinations of spectral filters defined by the Discrete Cosine Transform. The harmonic blocks are used to replace conventional convolutional layers to construct partial or fully harmonic CNNs. We extensively validate our approach and show that the introduction of harmonic blocks into state-of-the-art CNN baseline architectures results in comparable or better performance in classification tasks on small NORB, CIFAR10 and CIFAR100 datasets."
]
} |
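Harmonic blocks, like the cosine-based layers proposed in this paper, convolve with fixed cosine basis functions rather than freely learned spatial filters; only the mixing over basis responses is learned. A minimal sketch of generating an (unnormalized) k×k DCT-II basis bank of the kind such a block would use:

```python
import numpy as np

def dct_basis_bank(k: int) -> np.ndarray:
    # Returns (k*k, k, k): one 2D cosine filter per frequency pair (u, v)
    n = np.arange(k)
    # basis_1d[u, x] = cos(pi * (2x + 1) * u / (2k)), the 1D DCT-II basis
    basis_1d = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * k))
    return np.stack([np.outer(basis_1d[u], basis_1d[v])
                     for u in range(k) for v in range(k)])
```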
1904.01846 | 2926719997 | Humans naturally "program" a fellow collaborator to perform a task by demonstrating the task a few times. It is intuitive, therefore, for a human to program a collaborative robot by demonstration and many paradigms use a single demonstration of the task. This is a form of one-shot learning in which a single training example, plus some context of the task, is used to infer a model of the task for subsequent execution and later refinement. This paper presents a one-shot learning from demonstration framework to learn contact-intensive tasks using only visual perception of the demonstrated task. The robot learns a policy for performing the tasks in terms of a priori skills and further uses self-evaluation based on visual and tactile perception of the skill performance to learn the force correspondences for the skills. The self-evaluation is performed based on goal states detected in the demonstration with the help of task context and the skill parameters are tuned using reinforcement learning. This approach enables the robot to learn force correspondences which cannot be inferred from a visual demonstration of the task. The effectiveness of this approach is evaluated using a vegetable peeling task. | Early works on LfD involved record and playback methods @cite_14 . The demonstrated task was decomposed into a sequence of state transitions, each identified as achievable through a series of actions from a given set. These state-action correspondences were then programmed as if-then rules. Later works used machine learning @cite_17 and neural networks @cite_5 @cite_19 in LfD for inference. Other approaches to learning tasks involve using demonstrations to learn the rewards and the states and use reinforcement learning to learn a policy to achieve the desired states. These methods are called inverse reinforcement learning or inverse optimal control @cite_18 @cite_3 , where rewards to achieve the task are learnt from the demonstrations. Traditionally, learning from demonstration has been mostly used for kinematic tasks like pick and place, where reinforcement learning has been used to achieve completion as well as optimal performance of the motion primitives @cite_6 . |
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_3",
"@cite_6",
"@cite_19",
"@cite_5",
"@cite_17"
],
"mid": [
"1999874108",
"1994049796",
"2500624988",
"2060914855",
"2153264310",
"2135504505",
"1582682500"
],
"abstract": [
"We consider learning in a Markov decision process where we are not explicitly given a reward function, but where instead we can observe an expert demonstrating the task that we want to learn to perform. This setting is useful in applications (such as the task of driving) where it may be difficult to write down an explicit reward function specifying exactly how different desiderata should be traded off. We think of the expert as trying to maximize a reward function that is expressible as a linear combination of known features, and give an algorithm for learning the task demonstrated by the expert. Our algorithm is based on using \"inverse reinforcement learning\" to try to recover the unknown reward function. We show that our algorithm terminates in a small number of iterations, and that even though we may never recover the expert's reward function, the policy output by the algorithm will attain performance close to that of the expert, where here performance is measured with respect to the expert's unknown reward function.",
"Automatic programming of manipulator robots attracts in creasing interest within the context of mechanical assembly. In this paper we focus on the problem of mating two parts, which requires sensor-based strategies to deal with geometric uncertainty. We present a system that embodies a two-phase approach to build robot programs implementing such strate gies. First a training phase interacts with the robot actuatorsand sensors and produces traces of execution of a given part- mating operation. Next a purely computational induction phase transforms these traces into an executable manipula tor-level program for the operation. The system embedding this approach has been completely implemented, and it has been used experimentally on several assembly tasks.",
"This chapter surveys the main approaches developed to date to endow robots with the ability to learn from human guidance. The field is best known as robot programming by demonstration, robot learning from by demonstration, apprenticeship learning and imitation learning. We start with a brief historical overview of the field. We then summarize the various approaches taken to solve four main questions: when, what, who and when to imitate. We emphasize the importance of choosing well the interface and the channels used to convey the demonstrations, with an eye on interfaces providing force control and force feedback. We then review algorithmic approaches to model skills individually and as a compound and algorithms that combine learning from human guidance with reinforcement learning. We close with a look on the use of language to guide teaching and a list of open issues.",
"Physical contact events often allow a natural decomposition of manipulation tasks into action phases and subgoals. Within the motion primitive paradigm, each action phase corresponds to a motion primitive, and the subgoals correspond to the goal parameters of these primitives. Current state-of-the-art reinforcement learning algorithms are able to efficiently and robustly optimize the parameters of motion primitives in very high-dimensional problems. These algorithms often consider only shape parameters, which determine the trajectory between the start- and end-point of the movement. In manipulation, however, it is also crucial to optimize the goal parameters, which represent the subgoals between the motion primitives. We therefore extend the policy improvement with path integrals (PI2) algorithm to simultaneously optimize shape and goal parameters. Applying simultaneous shape and goal learning to sequences of motion primitives leads to the novel algorithm PI2 Seq. We use our methods to address a fundamental challenge in manipulation: improving the robustness of everyday pick-and-place tasks.",
"",
"Task demonstration is an effective technique for developing robot motion control policies. As tasks become more complex, however, demonstration can become more difficult. In this work, we introduce an algorithm that uses corrective human feedback to build a policy able to perform a novel task, by combining simpler policies learned from demonstration. While some demonstration-based learning approaches do adapt policies with execution experience, few provide corrections within low-level motion control domains or to enable the linking of multiple of demonstrated policies. Here we introduce Feedback for Policy Scaffolding (FPS) as an algorithm that first evaluates and corrects the execution of motion primitive policies learned from demonstration. The algorithm next corrects and enables the execution of a more complex task constructed from these primitives. Key advantages of building a policy from demonstrated primitives is the potential for primitive policy reuse within multiple complex policies and the faster development of these policies, in addition to the development of complex policies for which full demonstration is difficult. Policy reuse under our algorithm is assisted by human teacher feedback, which also contributes to the improvement of policy performance. Within a simulated robot motion control domain we validate that, using FPS, a policy for a novel task is successfully built from motion primitives learned from demonstration. We show feedback to both aid and enable policy development, improving policy performance in success, speed and efficiency.",
"Robot Programming by Demonstration is an intuitive method to program a robot. The programmer shows how a particular task is performed, using an interface device that allows the measurement and recording of the human’s motions and other parameters that are relevant to perform the demonstrated task. This paper presents an analysis of the learning and interaction requirements that are characteristic for an RPD system. Based on these requirements, a new system architecture is proposed that supports all phases of the interactive programming process. For an example task, experimental results are given."
]
} |
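In the apprenticeship-learning formulation cited in the row above, the unknown reward is assumed linear in known state features, so learning reduces to matching discounted feature expectations between learner and expert. A minimal sketch of those two ingredients (the full algorithm iterates a projection or max-margin step, omitted here; all names are illustrative):

```python
import numpy as np

def feature_expectations(trajectories, feature_fn, gamma: float = 0.9):
    # Average discounted feature counts over demonstrated state trajectories
    mu = None
    for traj in trajectories:
        phi = sum((gamma ** t) * feature_fn(s) for t, s in enumerate(traj))
        mu = phi if mu is None else mu + phi
    return mu / len(trajectories)

def linear_reward(state, w: np.ndarray, feature_fn) -> float:
    # Reward expressible as a linear combination of known features
    return float(np.dot(w, feature_fn(state)))
```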