Dataset schema (one record per related-work paragraph):
  aid           string, 9–15 chars
  mid           string, 7–10 chars
  abstract      string, 78–2.56k chars
  related_work  string, 92–1.77k chars
  ref_abstract  dict with parallel lists: cite_N, mid, abstract
1906.01376
2948952637
Data-driven models are subject to model errors due to limited and noisy training data. Key to the application of such models in safety-critical domains is the quantification of their model error. Gaussian processes provide such a measure and uniform error bounds have been derived, which allow safe control based on these models. However, existing error bounds require restrictive assumptions. In this paper, we employ the Gaussian process distribution and continuity arguments to derive a novel uniform error bound under weaker assumptions. Furthermore, we demonstrate how this distribution can be used to derive probabilistic Lipschitz constants and analyze the asymptotic behavior of our bound. Finally, we derive safety conditions for the control of unknown dynamical systems based on Gaussian process models and evaluate them in simulations of a robotic manipulator.
The latter issue has been addressed by considering the support of the prior distribution of the Gaussian process as the belief space. Based on bounds for the suprema of GPs @cite_23 and existing error bounds for interpolation with radial basis functions, a uniform error bound for Kriging (an alternative term for GP regression with noise-free training data) is derived in @cite_6 . However, to the best of our knowledge, the uniform error of Gaussian process regression with noisy observations has not been analyzed with the help of the prior GP distribution.
{ "cite_N": [ "@cite_6", "@cite_23" ], "mid": [ "2818855837", "1545211435" ], "abstract": [ "AbstractKriging based on Gaussian random fields is widely used in reconstructing unknown functions. The kriging method has pointwise predictive distributions which are computationally simple. Howev...", "* Recasts topics in random fields by following a completely new way of handling both geometry and probability * Significant exposition of the work of others in the field * Presentation is clear and pedagogical * Excellent reference work as well as excellent work for self study @PARASPLIT This monograph is devoted to a completely new approach to geometric problems arising in the study of random fields. The groundbreaking material in Part III, for which the background is carefully prepared in Parts I and II, is of both theoretical and practical importance, and striking in the way in which problems arising in geometry and probability are beautifully intertwined. @PARASPLIT The three parts to the monograph are quite distinct. Part I presents a user-friendly yet comprehensive background to the general theory of Gaussian random fields, treating classical topics such as continuity and boundedness, entropy and majorizing measures, Borell and Slepian inequalities. Part II gives a quick review of geometry, both integral and Riemannian, to provide the reader with the material needed for Part III, and to give some new results and new proofs of known results along the way. Topics such as Crofton formulae, curvature measures for stratified manifolds, critical point theory, and tube formulae are covered. In fact, this is the only concise, self-contained treatment of all of the above topics, which are necessary for the study of random fields. The new approach in Part III is devoted to the geometry of excursion sets of random fields and the related Euler characteristic approach to extremal probabilities. @PARASPLIT \"Random Fields and Geometry\" will be useful for probabilists and statisticians, and for theoretical and applied mathematicians who wish to learn about new relationships between geometry and probability. It will be helpful for graduate students in a classroom setting, or for self-study. Finally, this text will serve as a basic reference for all those interested in the companion volume of the applications of the theory. These applications, to appear in a forthcoming volume, will cover areas as widespread as brain imaging, physical oceanography, and astrophysics." ] }
1906.01532
2948889181
We present a closed-loop control strategy for a delta-wing unmanned aerial-aquatic vehicle (UAAV) that enables autonomous swim, fly, and water-to-air transition. Our control system consists of a hybrid state estimator and a closed-loop feedback policy that is capable of trajectory following through the water, air, and transition domains. To test our estimator and control approach in hardware, we instrument the vehicle with a minimalistic set of commercial off-the-shelf sensors. Finally, we demonstrate a successful autonomous water-to-air transition with our prototype UAAV system and discuss the implications of these results with regard to robustness.
Due to advances in technology, especially in the remote-controlled aircraft domain, several unmanned aerial-aquatic vehicles have emerged in the past few years @cite_4 . These vehicles, which span the air and water domains, have been the subject of both design studies and hardware demonstrations. Many design strategies have focused on novel propulsion mechanisms. In @cite_10 , the authors develop a quadcopter that can both swim and fly and is able to transition between the two domains via a novel propeller design. In @cite_17 @cite_15 @cite_5 , the authors develop a fixed-wing UAAV that uses a water-bottle-rocket-like propulsion mechanism to exit the water. Some of the same authors present a novel gearbox design to enable multi-domain locomotion with a single propeller in @cite_6 . In @cite_9 , the authors propose a flapping multi-domain wing design and discuss its implications for multi-domain locomotion. Researchers have also engaged in structural analysis of these aerial-aquatic systems. For instance, in @cite_0 , the authors present a computational analysis of a fixed-wing UAAV impacting the water during water entry.
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_6", "@cite_0", "@cite_5", "@cite_15", "@cite_10", "@cite_17" ], "mid": [ "2025766927", "1815611065", "2587335294", "2013352872", "2541583107", "", "1700946486", "1482718296" ], "abstract": [ "Abstract The aquatic unmanned aerial vehicle (AquaUAV), a kind of vehicle that can operate both in the air and the water, has been regarded as a new breakthrough to broaden the application scenario of UAV. Wide application prospects in military and civil field are more than bright, therefore many institutions have focused on the development of such a vehicle. However, due to the significant difference of the physical properties between the air and the water, it is rather difficult to design a fully-featured AquaUAV. Until now, majority of partially-featured AquaUAVs have been developed and used to verify the feasibility of an aquatic–aerial vehicle. In the present work, we classify the current partially-featured AquaUAV into three categories from the scope of the whole UAV field, i.e., the seaplane UAV, the submarine-launched UAV, and the submersible UAV. Then the recent advancements and common characteristics of the three kinds of AquaUAVs are reviewed in detail respectively. Then the applications of bionics in the design of AquaUAV, the transition mode between the air and the water, the morphing wing structure for air–water adaptation, and the power source and the propulsion type are summarized and discussed. The tradeoff analyses for different transition methods between the air and the water are presented. Furthermore, it indicates that applying the bionics into the design and development of the AquaUAV will be essential and significant. Finally, the significant technical challenges for the AquaUAV to change from a conception to a practical prototype are indicated.", "Ocean sampling for highly temporal phenomena, such as harmful algal blooms, necessitates a vehicle capable of fast aerial travel interspersed with an aquatic means of acquiring in-situ measurements. Vehicle platforms with this capability have yet to be widely adopted by the oceanographic community. Several animal examples successfully make this aerial aquatic transition using a flapping foil actuator, offering an existence proof for a viable vehicle design (Fig. 1).We discuss a preliminary realization of a flapping wing actuation system for use in both air and water. The wing employs an active in-line motion degree of freedom to generate the large force envelope necessary for propulsion in both fluid media.", "Aerial–aquatic locomotion would allow a broad array of tasks in robot-enabled environmental monitoring or disaster management. One of the most significant challenges of aerial–aquatic locomotion in mobile robots is finding a propulsion system that is capable of working effectively in both fluids and transitioning between them. The large differences in the density and viscosity of air compared to water means that a single direct propulsion system without adaptability will be inefficient in at least one medium. This paper examines multimodal propeller propulsion using computational tools validated against experimental data. Based on this analysis, we present a novel gearbox enabling an aerial propulsion system to operate efficiently underwater. This is achieved with minimal complexity using a single fixed pitch propeller system, which can change gear underwater by reversing the drive motor, but with the gearing arranged to leave the propeller direction unchanged. 
This system is then integrated into a small robot, and flights in air and locomotion underwater are demonstrated.", "A submersible unmanned aerial vehicle (UAV) is first proposed, which is capable of operating in both air and water. One of the outstanding characteristics of the UAV is that the air-water transition imitates that of a gannet, i.e., plunge-diving. In this paper, the plunge-diving process of this UAV is simplified as a water-entry problem with a certain initial velocity, and the impact force is calculated by the method of computational fluid dynamics (CFD). The Volume of Fluid is coupled with the 3-D Navier-Stokes equations to establish the model of the flow field, and the equations are solved in Fluent 6.3. The phase distribution and the pressure distribution during water entry are presented and analyzed. Furthermore, the effects of the dropping height and the wing's sweptback angle on the impact force are investigated and discussed.", "The ability to collect water samples rapidly with aerial–aquatic robots would increase the safety and efficiency of water health monitoring and allow water sample collection from dangerous or inaccessible areas. An aquatic micro air vehicle (AquaMAV) able to dive into the water offers a low cost and robust means of collecting samples. However, small-scale flying vehicles generally do not have sufficient power for transition to flight from water. In this paper, we present a novel jet propelled AquaMAV able to perform jumpgliding leaps from water and a planar trajectory model that is able to accurately predict aquatic escape trajectories. Using this model, we are able to offer insights into the stability of aquatic takeoff to perturbations from surface waves and demonstrate that an impulsive leap is a robust method of flight transition. The AquaMAV uses a CO @math powered water jet to escape the water, actuated by a custom shape memory alloy gas release. The 100 g robot leaps from beneath the surface, where it can deploy wings and glide over the water, achieving speeds above 11 m/s.", "", "Bio-inspired vehicles are currently leading the way in the quest to produce a vehicle capable of flight and underwater navigation. However, a fully functional vehicle has not yet been realized. We present the first fully functional vehicle platform operating in air and underwater with seamless transition between both mediums. These unique capabilities, combined with the hovering, high maneuverability and reliability of multirotor vehicles, result in a disruptive technology for both civil and military applications including air/water search and rescue, inspection, repairs and survey missions among others. The invention was built on a bio-inspired locomotion force analysis that combines flight and swimming. Three main advances in the present work have allowed this invention. The first is the discovery of a seamless transition method between air and underwater. The second is the design of a multi-medium propulsion system capable of efficient operation in air and underwater. The third combines the requirements for lift and thrust for flight (for a given weight) and the requirements for thrust and neutral buoyancy (in water) for swimming. The result is a careful balance between lift, thrust, weight, and neutral buoyancy implemented in the vehicle design.
A fully operational prototype demonstrated the flight and underwater navigation capabilities as well as the rapid air/water and water/air transitions.", "Water sampling with autonomous aerial vehicles has major applications in water monitoring and chemical accident response. Currently, no robot exists that is capable of both underwater locomotion and flight. This is principally because of the major design tradeoffs for operation in both water and air. A major challenge for such an aerial-aquatic mission is the transition to flight from the water. The use of high power density jet propulsion would allow short, impulsive take-offs by Micro Air Vehicles (MAVs). In this paper, we present a high power water jet propulsion system capable of launching a 70 gram vehicle to speeds of 11 m/s in 0.3 s, designed to allow waterborne take-off for an Aquatic Micro Air Vehicle (AquaMAV). Jumps propelled by the jet are predicted to have a range of over 20 m without gliding. Propulsion is driven by a miniaturised 57 bar gas release system, with many other applications in pneumatically actuated robots. We will show the development of a theoretical model to allow designs to be tailored to specific missions, and free flying operation of the jet." ] }
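The jet-propelled AquaMAV abstracts above predict take-off trajectories with a planar model. Below is a minimal sketch of such a planar ballistic model under an assumed quadratic drag law; the mass and exit speed echo the 70 g / 11 m/s figures quoted above, while the launch angle, drag coefficient, and reference area are invented for illustration and this is not the cited authors' model.

```python
import numpy as np

# Hypothetical parameters loosely inspired by the abstracts above.
m = 0.07                     # vehicle mass [kg]
v0 = 11.0                    # speed at water exit [m/s]
angle = np.deg2rad(45.0)     # assumed launch angle
g = 9.81
rho_air, Cd, A = 1.225, 0.9, 0.003   # assumed drag model: F = 0.5*rho*Cd*A*v^2

dt, t = 0.001, 0.0
pos = np.array([0.0, 0.0])
vel = v0 * np.array([np.cos(angle), np.sin(angle)])

while pos[1] >= 0.0:                     # integrate until splashdown
    speed = np.linalg.norm(vel)
    drag = -0.5 * rho_air * Cd * A * speed * vel   # opposes velocity
    acc = drag / m + np.array([0.0, -g])
    vel = vel + acc * dt
    pos = pos + vel * dt
    t += dt

print(f"ballistic range ≈ {pos[0]:.1f} m after {t:.2f} s")
```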
1906.01532
2948889181
We present a closed-loop control strategy for a delta-wing unmanned aerial-aquatic vehicle (UAAV) that enables autonomous swim, fly, and water-to-air transition. Our control system consists of a hybrid state estimator and a closed-loop feedback policy that is capable of trajectory following through the water, air, and transition domains. To test our estimator and control approach in hardware, we instrument the vehicle with a minimalistic set of commercial off-the-shelf sensors. Finally, we demonstrate a successful autonomous water-to-air transition with our prototype UAAV system and discuss the implications of these results with regard to robustness.
Few of the approaches mentioned above consider the closed-loop control or estimation strategies necessary for enabling multi-domain locomotion. In @cite_16 and @cite_1 , the authors present modeling, simulation, and control strategies for a multi-domain quadcopter. Their approach focuses on applying robust control techniques to develop a globally stable switching attitude controller that relies on two different linear models. In @cite_7 , the authors experimentally explore hybrid control for a quadrotor UAAV.
{ "cite_N": [ "@cite_16", "@cite_1", "@cite_7" ], "mid": [ "1964973847", "1843575698", "2801453727" ], "abstract": [ "The complete modeling and simulation of an unmanned vehicle with combined aerial and underwater ca- pabilities, called Hybrid Unmanned Aerial Underwater Vehicle (HUAUV), is presented in this paper. The best architecture for this kind of vehicle was evaluated based on the adaptation of typical platforms for aerial and underwater vehicles, to allow the navigation in both environments. The model selected was based on a quadrotor-like aerial platform, adapted to dive and move underwater. Kinematic and dynamic models are presented here, and the parameters for a small dimension prototype was estimated and simulated. Finally, controllers were used and validated in realistic simulation, including air and water navigation, and the environment transition problem. To the best of our knowledge, it is the first vehicle that is able to navigate in both environment without mechanical adaptation during the medium transitions. I. INTRODUCTION Nowadays, unmanned autonomous vehicles have been the focus of many development efforts, with a large range of applications. The amount of resources applied has improved their capabilities, especially in the military field. Remotely operated or autonomous Unmanned Aerial Vehicles (UAVs), for example, were used in recent military operations around the world (1). But they were also used in non-military activities, like agriculture (2) and surveillance (3). Another important robotic platform are the Unmanned Underwater Vehicles (UUVs), whose the most known are the Remotely Operated Vehicles (ROVs). This kind of vehicles can also be applied in several commercial field operations (4), e.g. oil and gas extraction in ultra deep waters (5). Both kind of vehicles are well adapted to work in their own environment (air and water, respectively), but some situations may require a single vehicle capable of working in both environment. Such requirement commonly appears when is necessary to perform maintenance on partially or fully submersing structures, as ship hull or risers. A typical approach includes using auxiliary vessels to transport ROVs that will make the inspection of offshore target regions. This problem is harder in partially submersed structures. In such situations, where the usage of auxiliary ships is difficult and expensive, underwater robots equipped with wheels or tracks are recommended.", "This paper presents a method for stabilizing the attitude of a Hybrid Unmanned Aerial Underwater Vehicle. Firstly, we present aerodynamic and hydrodynamic models for the angular motion of our robot, discussing effects like buoyancy force and added inertia. Next, we apply robust control techniques for both environment, aerial and underwater, based on linear uncertain models with only four vertices and well-defined stability criteria, such as D-stability and ℋ 2 performance. Gain matrices K air and K wat are computed and the attitude of the vehicle at the hovering operation point for each environment is controlled, respectively. Finally, a procedure is proposed to check the global stability for the switching control case, when the robot changes from air to water (or vice-versa). Numerical simulations with disturbances and switching control are presented to show the stability at different initial conditions.", "Abstract Modeling and control of a multi-medium unmanned vehicle capable of seamless operation in air or underwater is introduced in this paper. 
The multi-medium system is treated as a hybrid system with continuous dynamics while performing in both air and underwater, and discrete jumps in the medium density during the transitions. The continuous dynamics are modeled by the Newton–Euler formalism, taking into account the effects of the buoyancy and drag phenomena, normally neglected in aerial vehicles. A hybrid controller is designed for trajectory tracking considering the full system, including a transition strategy to assure the switching between mediums. Stability analysis for the full system is provided using hybrid Lyapunov and invariance principles. The performance of the control strategy is validated through simulations. Finally, an experimental platform consisting of a multirotor in an octo-quadcopter configuration was developed and some preliminary experimental results are introduced, showing the vehicle performing in air, underwater and through the transition." ] }
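The switching attitude control described in the record above (gain matrices K_air and K_wat, with a stability check for the air-to-water switch) can be sketched as switched state feedback. Everything below, from the toy double-integrator-with-damping dynamics to the gain values, is an assumption for illustration, not the cited controllers.

```python
import numpy as np

# Hypothetical gains in the spirit of the K_air / K_wat matrices above.
K_air = np.array([[4.0, 1.5]])
K_wat = np.array([[9.0, 3.0]])

def attitude_control(x, submerged):
    """u = -K x, with K switched on the sensed medium."""
    K = K_wat if submerged else K_air
    return float(-(K @ x)[0])

x = np.array([0.3, 0.0])   # [attitude error (rad), angular rate (rad/s)]
for step in range(500):
    submerged = step < 250             # pretend the vehicle exits water halfway
    u = attitude_control(x, submerged)
    # Toy dynamics: water adds damping, a stand-in for "added inertia" effects.
    damping = 2.0 if submerged else 0.2
    xdot = np.array([x[1], u - damping * x[1]])
    x = x + 0.01 * xdot

print(f"final attitude error: {x[0]:.4f} rad")
```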
1906.01532
2948889181
We present a closed-loop control strategy for a delta-wing unmanned aerial-aquatic vehicle (UAAV) that enables autonomous swim, fly, and water-to-air transition. Our control system consists of a hybrid state estimator and a closed-loop feedback policy that is capable of trajectory following through the water, air, and transition domains. To test our estimator and control approach in hardware, we instrument the vehicle with a minimalistic set of commercial off-the-shelf sensors. Finally, we demonstrate a successful autonomous water-to-air transition with our prototype UAAV system and discuss the implications of these results with regard to robustness.
The recent works most similar to our approach are @cite_13 and @cite_2 . Both explore high thrust-to-weight ratio tail-sitter fixed-wing UAV designs. The vehicle presented in @cite_13 is distinct in that it is not submersible, but uses a novel passive mechanism to facilitate rapid take-off from the water's surface. The vehicle in @cite_2 also possesses some novel design attributes, notably passive draining of water from inside the wing's cavity. While this fixed-wing vehicle is submersible, it differs from our system in some important ways.
{ "cite_N": [ "@cite_13", "@cite_2" ], "mid": [ "2558693562", "2754138574" ], "abstract": [ "With the goal of extending unmanned aerial vehicles mission duration, a solar recharge strategy is envisioned with lakes as preferred charging and standby areas. The Sherbrooke University Water-Air VEhicle (SUWAVE) concept developed is able to takeoff and land vertically on water. The physical prototype consists of a wing coupled to a rotating center body that minimizes the added components with a passive takeoff maneuver. A dynamic model of takeoff, validated with experimental results, serves as a design tool. The landing is executed by diving, without requiring complex control or wing folding. Structural integrity of the wing is confirmed by investigating the accelerations at impact. A predictive model is developed for various impact velocities. The final prototype has executed multiple repeatable takeoffs and has succeeded in completing full operation cycles of flying, diving, floating, and taking off.", "This paper presents test results and performance characterization of the first fixed-wing unmanned vehicle capable of full cross-domain operation in both the aerial and underwater environments with repeated transition and low-energy loitering capabilities. This vehicle concept combines the speed and range of an aircraft with the persistence, diving capabilities, and stealth of a submersible. The paper describes the proof-of-concept vehicle including its concept of operations, the approaches employed to achieve the required functions, and the main components and subsystems. Key subsystems include a passively flooding and draining wing, a single motor and propeller combination for propulsion in both domains, and aerodynamic–hydrodynamic control surfaces. Experiments to quantify the vehicle performance, control responses, and energy consumption in underwater, surface, and flight operation are presented and analyzed. Results of several full-cycle tests are presented to characterize and illustrate each stage of operation including surface locomotion, underwater locomotion, water egress, flight, and water ingress. In total, the proof-of-concept vehicle demonstrated 12 full-cycle cross-domain missions including both manually controlled and autonomous operation." ] }
1906.01357
2948895425
Multi-target Multi-camera Tracking (MTMCT) aims to extract the trajectories from videos captured by a set of cameras. Recently, the tracking performance of MTMCT has been significantly enhanced by the employment of re-identification (Re-ID) models. However, the appearance feature usually becomes unreliable due to occlusion and orientation variance of the targets. Directly applying a Re-ID model in MTMCT will encounter the problems of identity switches (IDS) and tracklet fragmentation caused by occlusion. To solve these problems, we propose a novel tracking framework in this paper. In this framework, the occlusion status and orientation information are utilized in the Re-ID model, with human pose information taken into account. In addition, tracklet association using the proposed fused tracking feature is adopted to handle the fragmentation problem. The proposed tracker achieves 81.3% IDF1 on the multiple-camera hard sequence, which outperforms all other reference methods by a large margin.
Multi-target Multi-camera Tracking is a challenging task due to illumination variance, changes of viewpoint, and blind areas among cameras. Methods like @cite_59 @cite_48 @cite_32 @cite_31 @cite_66 @cite_39 aim to model the relationships among cameras, including illumination changes, travel time, and entry/exit rates across pairs of cameras. Illumination often varies greatly across viewpoints, so the brightness transfer function (BTF) from a given camera to another is estimated to model the illumination changes. @cite_6 finds that all BTFs lie in a low-dimensional subspace, and demonstrates that this subspace can be used to compute appearance similarity. @cite_37 employs a Cumulative Brightness Transfer Function (CBTF) for mapping color among cameras located at different physical sites. However, the above methods only address appearance information and ignore the spatial relationship among cameras. To solve this problem, @cite_46 uses kernel density estimation to infer the inter-camera relationships in the form of a multivariate probability density of space-time variables, then integrates the spatial cue and the appearance cue within a maximum likelihood estimation framework.
{ "cite_N": [ "@cite_37", "@cite_46", "@cite_48", "@cite_32", "@cite_6", "@cite_39", "@cite_59", "@cite_31", "@cite_66" ], "mid": [ "2025247481", "1968381457", "2736908959", "2740393196", "2115569382", "2126141220", "", "2584568718", "2152525669" ], "abstract": [ "The appearance of individuals captured by multiple non-overlapping cameras varies greatly due to pose and illumination changes between camera views. In this paper we address the problem of dealing with illumination changes in order to recover matching of individuals appearing at different camera sites. This task is challenging as accurately mapping colour changes between views requires an exhaustive set of corresponding chromatic brightness values to be collected, which is very difficult in real world scenarios. We propose a Cumulative Brightness Transfer Function (CBTF) for mapping colour between cameras located at different physical sites, which makes better use of the available colour information from a very sparse training set. In addition we develop a bi-directional mapping approach to obtain a more accurate similarity measure between a pair of candidate objects. We evaluate the proposed method using challenging datasets obtained from real world distributed CCTV camera networks. The results demonstrate that our bi-directional CBTF method significantly outperforms existing techniques.", "Tracking across cameras with non-overlapping views is a challenging problem. Firstly, the observations of an object are often widely separated in time and space when viewed from non-overlapping cameras. Secondly, the appearance of an object in one camera view might be very different from its appearance in another camera view due to the differences in illumination, pose and camera properties. To deal with the first problem, we observe that people or vehicles tend to follow the same paths in most cases, i.e., roads, walkways, corridors etc. The proposed algorithm uses this conformity in the traversed paths to establish correspondence. The algorithm learns this conformity and hence the inter-camera relationships in the form of multivariate probability density of space-time variables (entry and exit locations, velocities, and transition times) using kernel density estimation. To handle the appearance change of an object as it moves from one camera to another, we show that all brightness transfer functions from a given camera to another camera lie in a low dimensional subspace. This subspace is learned by using probabilistic principal component analysis and used for appearance matching. The proposed approach does not require explicit inter-camera calibration, rather the system learns the camera topology and subspace of inter-camera brightness transfer functions during a training phase. Once the training is complete, correspondences are assigned using the maximum likelihood (ML) estimation framework using both location and appearance cues. Experiments with real world videos are reported which validate the proposed approach.", "We present a novel approach to person tracking within the context of entity association. In large-scale distributed multi-camera systems, person re-identification is a challenging computer vision task as the problem is two-fold: detecting entities through identification and recognition techniques; and connecting entities temporally by associating them in often crowded environments. Since tracking essentially involves linking detections, we can reformulate it purely as a re-identification task. 
The inherent advantage of such a reformulation lies in the ability of the tracking algorithm to effectively handle temporal discontinuities in multi-camera environments. To accomplish this, we model human appearance, face biometric and location constraints across cameras. We do not make restrictive assumptions such as number of people in a scene. Our approach is validated by using a simple and efficient inference algorithm. Results on two publicly available datasets, CamNeT and DukeMTMC, are significantly better compared to other existing methods.", "In this study, we present a set of new evaluation measures for the track-based multi-camera tracking (T-MCT) task leveraging the clustering measurements. We demonstrate that the proposed evaluation measures provide notable advantages over previous ones. Moreover, a distributed and online T-MCT framework is proposed, where re-identification (Re-id) is embedded in T-MCT, to confirm the validity of the proposed evaluation measures. Experimental results reveal that with the proposed evaluation measures, the performance of T-MCT can be accurately measured, which is highly correlated to the performance of Re-id. Furthermore, it is also noted that our T-MCT framework achieves competitive score on the DukeMTMC dataset when compared to the previous work that used global optimization algorithms. Both the evaluation measures and the inter-camera tracking framework are proven to be the stepping stone for multi-camera tracking.", "When viewed from a system of multiple cameras with non-overlapping fields of view, the appearance of an object in one camera view is usually very different from its appearance in another camera view due to the differences in illumination, pose and camera parameters. In order to handle the change in observed colors of an object as it moves from one camera to another, we show that all brightness transfer functions from a given camera to another camera lie in a low dimensional subspace and demonstrate that this subspace can be used to compute appearance similarity. In the proposed approach, the system learns the subspace of inter-camera brightness transfer functions in a training phase during which object correspondences are assumed to be known. Once the training is complete, correspondences are assigned using the maximum a posteriori (MAP) estimation framework using both location and appearance cues. We evaluate the proposed method under several real world scenarios obtaining encouraging results.", "The paper investigates the unsupervised learning of a model of activity for a multi-camera surveillance network that can be created from a large set of observations. This enables the learning algorithm to establish links between camera views associated with an activity. The learning algorithm operates in a correspondence-free manner, exploiting the statistical consistency of the observation data. The derived model is used to automatically determine the topography of a network of cameras and to provide a means for tracking targets across the \"blind\" areas of the network. A theoretical justification and experimental validation of the methods are provided.", "", "This paper presents a scalable solution to the problem of tracking objects across spatially separated, uncalibrated, non-overlapping cameras. Unlike other approaches this technique uses an incremental learning method, to model both the colour variations and posterior probability distributions of spatio-temporal links between cameras. 
These operate in parallel and are then used with an appearance model of the object to track across spatially separated cameras. The approach requires no pre-calibration or batch preprocessing, is completely unsupervised, and becomes more accurate over time as evidence is accumulated.", "This paper presents a novel and robust approach to consistent labeling for people surveillance in multicamera systems. A general framework scalable to any number of cameras with overlapped views is devised. An offline training process automatically computes ground-plane homography and recovers epipolar geometry. When a new object is detected in any one camera, hypotheses for potential matching objects in the other cameras are established. Each of the hypotheses is evaluated using a prior and likelihood value. The prior accounts for the positions of the potential matching objects, while the likelihood is computed by warping the vertical axis of the new object on the field of view of the other cameras and measuring the amount of match. In the likelihood, two contributions (forward and backward) are considered so as to correctly handle the case of groups of people merged into single objects. Eventually, a maximum-a-posteriori approach estimates the best label assignment for the new object. Comparisons with other methods based on homography and extensive outdoor experiments demonstrate that the proposed approach is accurate and robust in coping with segmentation errors and in disambiguating groups." ] }
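As a concrete illustration of the brightness-transfer idea running through this record, here is a minimal sketch of a cumulative brightness transfer function estimated by matching cumulative histograms, in the spirit of the CBTF of @cite_37. The synthetic single-channel data, the affine "second camera", and the bin count are assumptions; the cited work operates on color channels gathered from sparse real training sets.

```python
import numpy as np

def cumulative_btf(pixels_a, pixels_b, n_bins=256):
    """Estimate f so that f(v_a) ≈ v_b by matching cumulative histograms."""
    hist_a, _ = np.histogram(pixels_a, bins=n_bins, range=(0, 256))
    hist_b, _ = np.histogram(pixels_b, bins=n_bins, range=(0, 256))
    cdf_a = np.cumsum(hist_a) / hist_a.sum()
    cdf_b = np.cumsum(hist_b) / hist_b.sum()
    # For each brightness level in camera A, find the level in camera B
    # with the closest cumulative frequency (approximate inverse CDF).
    return np.searchsorted(cdf_b, cdf_a).clip(0, n_bins - 1)

rng = np.random.default_rng(1)
cam_a = rng.normal(100, 25, 10_000).clip(0, 255)   # synthetic brightness samples
cam_b = (cam_a * 1.3 + 10).clip(0, 255)            # same scene, brighter camera
btf = cumulative_btf(cam_a, cam_b)
print("level 100 in camera A maps to", btf[100], "in camera B")
```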
1906.01357
2948895425
Multi-target Multi-camera Tracking (MTMCT) aims to extract the trajectories from videos captured by a set of cameras. Recently, the tracking performance of MTMCT has been significantly enhanced by the employment of re-identification (Re-ID) models. However, the appearance feature usually becomes unreliable due to occlusion and orientation variance of the targets. Directly applying a Re-ID model in MTMCT will encounter the problems of identity switches (IDS) and tracklet fragmentation caused by occlusion. To solve these problems, we propose a novel tracking framework in this paper. In this framework, the occlusion status and orientation information are utilized in the Re-ID model, with human pose information taken into account. In addition, tracklet association using the proposed fused tracking feature is adopted to handle the fragmentation problem. The proposed tracker achieves 81.3% IDF1 on the multiple-camera hard sequence, which outperforms all other reference methods by a large margin.
In addition, numerous graph-based models @cite_15 @cite_68 @cite_54 @cite_51 @cite_61 @cite_45 @cite_38 have been proposed to deal with MTMCT. @cite_15 constructs a min-cost flow graph to complete data association among cameras in 3D world space. In @cite_68 , data association is formulated as a constrained flow optimization of a convex problem, which is solved by the k-shortest paths algorithm. In @cite_45 , Yoon et al. exploit the multiple hypothesis tracking (MHT) algorithm and apply it to MTMCT with some modifications: branches in track-hypothesis trees represent trajectories across multiple cameras, and the Maximum Weight Independent Set (MWIS) formulation of @cite_55 is adopted for computing the best hypothesis set. With the development of Re-ID, a number of methods @cite_45 @cite_2 @cite_42 @cite_30 adopt Re-ID technology to represent the appearance of the target. In @cite_30 , Ristani et al. learn a good feature for both MTMCT and Re-ID with a convolutional neural network. In @cite_42 , Zhang et al. obtain good results with simple hierarchical clustering and a well-trained Re-ID feature.
{ "cite_N": [ "@cite_61", "@cite_38", "@cite_30", "@cite_54", "@cite_55", "@cite_42", "@cite_45", "@cite_2", "@cite_15", "@cite_68", "@cite_51" ], "mid": [ "2519136515", "", "2962923976", "2640181096", "177037875", "2779160359", "2791203079", "", "2084652104", "2171243491", "2758895866" ], "abstract": [ "Incorporating multiple cameras is an effective solution to improve the performance and robustness of multi-target tracking to occlusion and appearance ambiguities. In this paper, we propose a new multi-camera multi-target tracking method based on a space-time-view hyper-graph that encodes higher-order constraints (i.e., beyond pairwise relations) on 3D geometry, appearance, motion continuity, and trajectory smoothness among 2D tracklets within and across different camera views. We solve tracking in each single view and reconstruction of tracked trajectories in 3D environment simultaneously by formulating the problem as an efficient search of dense sub-hypergraphs on the space-time-view hyper-graph using a sampling based approach. Experimental results on the PETS 2009 dataset and MOTChallenge 2015 3D benchmark demonstrate that our method performs favorably against the state-of-the-art methods in both single-camera and multi-camera multi-target tracking, while achieving close to real-time running efficiency. We also provide experimental analysis of the influence of various aspects of our method to the final tracking performance.", "", "Multi-Target Multi-Camera Tracking (MTMCT) tracks many people through video taken from several cameras. Person Re-Identification (Re-ID) retrieves from a gallery images of people similar to a person query image. We learn good features for both MTMCT and Re-ID with a convolutional neural network. Our contributions include an adaptive weighted triplet loss for training and a new technique for hard-identity mining. Our method outperforms the state of the art both on the DukeMTMC benchmarks for tracking, and on the Market-1501 and DukeMTMC-ReID benchmarks for Re-ID. We examine the correlation between good Re-ID and good MTMCT scores, and perform ablation studies to elucidate the contributions of the main components of our system. Code is available1.", "In this paper, a unified three-layer hierarchical approach for solving tracking problems in multiple non-overlapping cameras is proposed. Given a video and a set of detections (obtained by any person detector), we first solve within-camera tracking employing the first two layers of our framework and, then, in the third layer, we solve across-camera tracking by merging tracks of the same person in all cameras in a simultaneous fashion. To best serve our purpose, a constrained dominant sets clustering (CDSC) technique, a parametrized version of standard quadratic optimization, is employed to solve both tracking tasks. The tracking problem is caste as finding constrained dominant sets from a graph. In addition to having a unified framework that simultaneously solves within- and across-camera tracking, the third layer helps link broken tracks of the same person occurring during within-camera tracking. In this work, we propose a fast algorithm, based on dynamics from evolutionary game theory, which is efficient and salable to large-scale real-world applications.", "Multitarget tracking (MTT) hinges upon the solution of a data association problem in which observations across scans are partitioned into tracks and false alarms so that accurate estimates of true targets can be recovered. 
In this chapter, we describe a methodology for solving this data association problem as a maximum weight independent set problem (MWISP). This MWISP approach has been used successfully for almost a decade in fielded sensor systems using a multiple hypothesis tracking (MHT) framework, but has received virtually no attention in the tracking literature, nor has it been recognized as an application in the clique independent set literature. The primary aim of this chapter is to simultaneously fill these two voids. Second, we show that the MWISP formulation is equivalent to the multidimensional assignment (MAP) formulation, one of the most widely documented approaches for solving the data association problem in MTT. Finally, we offer a qualitative comparison between the MWISP and MAP formulations, while highlighting other important practical issues in data association algorithms that are commonly overlooked by the optimization community.", "Although many methods perform well in single camera tracking, multi-camera tracking remains a challenging problem with less attention. DukeMTMC is a large-scale, well-annotated multi-camera tracking benchmark which makes great progress in this field. This report is dedicated to briefly introduce our method on DukeMTMC and show that simple hierarchical clustering with well-trained person re-identification features can get good results on this dataset.", "In this study, a multiple hypothesis tracking (MHT) algorithm for multi-target multi-camera tracking (MCT) with disjoint views is proposed. The authors' method forms track-hypothesis trees, and each branch of them represents a multi-camera track of a target that may move within a camera as well as move across cameras. Furthermore, multi-target tracking within a camera is performed simultaneously with the tree formation by manipulating a status of each track hypothesis. Each status represents three different stages of a multi-camera track: tracking, searching, and end-of-track. The tracking status means targets are tracked by a single camera tracker. In the searching status, the disappeared targets are examined if they reappear in other cameras. The end-of-track status does the target exited the camera network due to its lengthy invisibility. These three status assists MHT to form the track-hypothesis trees for multi-camera tracking. Furthermore, a gating technique which eliminates the unlikely observation-to-track association using space-time information has been introduced. In the experiments, the proposed method has been tested using two datasets, DukeMTMC and NLPR , which demonstrates that the method outperforms the state-of-the-art method in terms of improvement of the accuracy. In addition, real-time and online performance of proposed method is also showed in this study.", "", "We generalize the network flow formulation for multiobject tracking to multi-camera setups. In the past, reconstruction of multi-camera data was done as a separate extension. In this work, we present a combined maximum a posteriori (MAP) formulation, which jointly models multicamera reconstruction as well as global temporal data association. A flow graph is constructed, which tracks objects in 3D world space. The multi-camera reconstruction can be efficiently incorporated as additional constraints on the flow graph without making the graph unnecessarily large. The final graph is efficiently solved using binary linear programming. 
On the PETS 2009 dataset we achieve results that significantly exceed the current state of the art.", "Multi-object tracking can be achieved by detecting objects in individual frames and then linking detections across frames. Such an approach can be made very robust to the occasional detection failure: If an object is not detected in a frame but is in previous and following ones, a correct trajectory will nevertheless be produced. By contrast, a false-positive detection in a few frames will be ignored. However, when dealing with a multiple target problem, the linking step results in a difficult optimization problem in the space of all possible families of trajectories. This is usually dealt with by sampling or greedy search based on variants of Dynamic Programming which can easily miss the global optimum. In this paper, we show that reformulating that step as a constrained flow optimization results in a convex problem. We take advantage of its particular structure to solve it using the k-shortest paths algorithm, which is very fast. This new approach is far simpler formally and algorithmically than existing techniques and lets us demonstrate excellent performance in two very different contexts.", "In this paper, we propose a pipeline for multi-target visual tracking under multi-camera system. For multi-camera system tracking problem, efficient data association across cameras, and at the same time, across frames becomes more important than single-camera system tracking. However, most of the multi-camera tracking algorithms emphasis on single camera across frame data association. Thus in our work, we model our tracking problem as a global graph, and adopt Generalized Maximum Multi Clique optimization problem as our core algorithm to take both across frame and across camera data correlation into account all together. Furthermore, in order to compute good similarity scores as the input of our graph model, we extract both appearance and dynamic motion similarities. For appearance feature, Local Maximal Occurrence Representation(LOMO) feature extraction algorithm for ReID is conducted. When it comes to capturing the dynamic information, we build Hankel matrix for each tracklet of target and apply rank estimation with Iterative Hankel Total Least Squares(IHTLS) algorithm to it. We evaluate our tracker on the challenging Terrace Sequences from EPFL CVLAB as well as recently published Duke MTMC dataset." ] }
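In their simplest form, the flow-graph and hypothesis-tree formulations above reduce to bipartite assignment between tracklets and detections. The sketch below solves that reduced problem with the Hungarian algorithm over cosine distances between Re-ID features; the feature dimension, random features, and gating threshold are assumptions, and the cited methods solve much richer global, multi-frame, multi-camera versions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(2)
track_feats = rng.standard_normal((4, 128))   # one Re-ID feature per tracklet
det_feats = rng.standard_normal((5, 128))     # one Re-ID feature per detection

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Cosine distance as the association cost (appearance cue only).
cost = 1.0 - normalize(track_feats) @ normalize(det_feats).T
rows, cols = linear_sum_assignment(cost)

gate = 0.7   # hypothetical gating threshold on cosine distance
for r, c in zip(rows, cols):
    if cost[r, c] < gate:
        print(f"tracklet {r} <- detection {c} (cost {cost[r, c]:.3f})")
    else:
        print(f"tracklet {r} unmatched (best cost {cost[r, c]:.3f})")
```

A space-time gate, as in the kernel-density methods above, would simply add a second term to this cost matrix.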
1906.01357
2948895425
Multi-target Multi-camera Tracking (MTMCT) aims to extract the trajectories from videos captured by a set of cameras. Recently, the tracking performance of MTMCT has been significantly enhanced by the employment of re-identification (Re-ID) models. However, the appearance feature usually becomes unreliable due to occlusion and orientation variance of the targets. Directly applying a Re-ID model in MTMCT will encounter the problems of identity switches (IDS) and tracklet fragmentation caused by occlusion. To solve these problems, we propose a novel tracking framework in this paper. In this framework, the occlusion status and orientation information are utilized in the Re-ID model, with human pose information taken into account. In addition, tracklet association using the proposed fused tracking feature is adopted to handle the fragmentation problem. The proposed tracker achieves 81.3% IDF1 on the multiple-camera hard sequence, which outperforms all other reference methods by a large margin.
In the context of appearance features, many recent works @cite_30 @cite_53 @cite_8 @cite_50 @cite_52 @cite_58 adopt deep learning to represent the appearance of the target. In @cite_8 , Feng et al. design a quality-aware mechanism to select the @math images from the historical samples of the target, and ResNet-18 @cite_7 is adopted to measure the quality of each detection. The Re-ID features of the selected detections are then input into a classifier to obtain the similarity score between tracklets and detections. In @cite_50 , spatial and temporal attention mechanisms are adopted in feature extraction, which makes the network focus on the matching patterns of the input image pair. In @cite_60 , Chu et al. use spatial and temporal attention mechanisms in feature extraction to handle the drift problem caused by single object trackers. In @cite_58 , Yoon et al. apply historical appearance matching to overcome temporal errors. The above methods attempt to solve the problems caused by occlusion and background clutter, and they maintain a stable appearance feature in complex environments. In this paper, we employ human pose information to estimate the target state, including the occlusion status and orientation. In this way, we can make better use of the Re-ID feature.
{ "cite_N": [ "@cite_30", "@cite_7", "@cite_8", "@cite_60", "@cite_53", "@cite_52", "@cite_50", "@cite_58" ], "mid": [ "2962923976", "", "2910057067", "2963481014", "2921601546", "2604679602", "2895150009", "2963382789" ], "abstract": [ "Multi-Target Multi-Camera Tracking (MTMCT) tracks many people through video taken from several cameras. Person Re-Identification (Re-ID) retrieves from a gallery images of people similar to a person query image. We learn good features for both MTMCT and Re-ID with a convolutional neural network. Our contributions include an adaptive weighted triplet loss for training and a new technique for hard-identity mining. Our method outperforms the state of the art both on the DukeMTMC benchmarks for tracking, and on the Market-1501 and DukeMTMC-ReID benchmarks for Re-ID. We examine the correlation between good Re-ID and good MTMCT scores, and perform ablation studies to elucidate the contributions of the main components of our system. Code is available1.", "", "In this paper, we propose a unified Multi-Object Tracking (MOT) framework learning to make full use of long term and short term cues for handling complex cases in MOT scenes. Besides, for better association, we propose switcher-aware classification (SAC), which takes the potential identity-switch causer (switcher) into consideration. Specifically, the proposed framework includes a Single Object Tracking (SOT) sub-net to capture short term cues, a re-identification (ReID) sub-net to extract long term cues and a switcher-aware classifier to make matching decisions using extracted features from the main target and the switcher. Short term cues help to find false negatives, while long term cues avoid critical mistakes when occlusion happens, and the SAC learns to combine multiple cues in an effective way and improves robustness. The method is evaluated on the challenging MOT benchmarks and achieves the state-of-the-art results.", "In this paper, we propose a CNN-based framework for online MOT. This framework utilizes the merits of single object trackers in adapting appearance models and searching for target in the next frame. Simply applying single object tracker for MOT will encounter the problem in computational efficiency and drifted results caused by occlusion. Our framework achieves computational efficiency by sharing features and using ROI-Pooling to obtain individual features for each target. Some online learned target-specific CNN layers are used for adapting the appearance model for each target. In the framework, we introduce spatial-temporal attention mechanism (STAM) to handle the drift caused by occlusion and interaction among targets. The visibility map of the target is learned and used for inferring the spatial attention map. The spatial attention map is then applied to weight the features. Besides, the occlusion status can be estimated from the visibility map, which controls the online updating process via weighted loss on training samples with different occlusion statuses in different frames. It can be considered as temporal attention mechanism. The proposed algorithm achieves 34.3 and 46.0 in MOTA on challenging MOT15 and MOT16 benchmark dataset respectively.", "Recent progresses in model-free single object tracking (SOT) algorithms have largely inspired applying SOT to multi-object tracking (MOT) to improve the robustness as well as relieving dependency on external detector. 
However, SOT algorithms are generally designed for distinguishing a target from its environment, and hence meet problems when a target is spatially mixed with similar objects as observed frequently in MOT. To address this issue, in this paper we propose an instance-aware tracker to integrate SOT techniques for MOT by encoding awareness both within and between target models. In particular, we construct each target model by fusing information for distinguishing target both from background and other instances (tracking targets). To conserve uniqueness of all target models, our instance-aware tracker considers response maps from all target models and assigns spatial locations exclusively to optimize the overall accuracy. Another contribution we make is a dynamic model refreshing strategy learned by a convolutional neural network. This strategy helps to eliminate initialization noise as well as to adapt to variation of target size and appearance. To show the effectiveness of the proposed approach, it is evaluated on the popular MOT15 and MOT16 challenge benchmarks. On both benchmarks, our approach achieves the best overall performances in comparison with published results.", "Online multi-object tracking aims at estimating the tracks of multiple objects instantly with each incoming frame and the information provided up to the moment. It still remains a difficult problem in complex scenes, because of the large ambiguity in associating multiple objects in consecutive frames and the low discriminability between objects appearances. In this paper, we propose a robust online multi-object tracking method that can handle these difficulties effectively. We first define the tracklet confidence using the detectability and continuity of a tracklet, and decompose a multi-object tracking problem into small subproblems based on the tracklet confidence. We then solve the online multi-object tracking problem by associating tracklets and detections in different ways according to their confidence values. Based on this strategy, tracklets sequentially grow with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive association steps. For more reliable association between tracklets and detections, we also propose a deep appearance learning method to learn a discriminative appearance model from large training datasets, since the conventional appearance learning methods do not provide rich representation that can distinguish multiple objects with large appearance variations. In addition, we combine online transfer learning for improving appearance discriminability by adapting the pre-trained deep model during online tracking. Experiments with challenging public datasets show distinct performance improvement over other state-of-the-arts batch and online tracking methods, and prove the effect and usefulness of the proposed methods for online multi-object tracking.", "In this paper, we propose an online Multi-Object Tracking (MOT) approach which integrates the merits of single object tracking and data association methods in a unified framework to handle noisy detections and frequent interactions between targets. Specifically, for applying single object tracking in MOT, we introduce a cost-sensitive tracking loss based on the state-of-the-art visual tracker, which encourages the model to focus on hard negative distractors during online learning. 
For data association, we propose Dual Matching Attention Networks (DMAN) with both spatial and temporal attention mechanisms. The spatial attention module generates dual attention maps which enable the network to focus on the matching patterns of the input image pair, while the temporal attention module adaptively allocates different levels of attention to different samples in the tracklet to suppress noisy observations. Experimental results on the MOT benchmark datasets show that the proposed algorithm performs favorably against both online and offline trackers in terms of identity-preserving metrics.", "In this paper, we propose the methods to handle temporal errors during multi-object tracking. Temporal error occurs when objects are occluded or noisy detections appear near the object. In those situations, tracking may fail and various errors like drift or ID-switching occur. It is hard to overcome temporal errors only by using motion and shape information. So, we propose the historical appearance matching method and joint-input siamese network which was trained by 2-step process. It can prevent tracking failures although objects are temporally occluded or last matching information is unreliable. We also provide useful technique to remove noisy detections effectively according to scene condition. Tracking performance, especially identity consistency, is highly improved by attaching our methods." ] }
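A minimal sketch of the quality-aware selection idea attributed to @cite_8 above: keep the k highest-quality historical Re-ID features of a track and pool them into one descriptor. The quality scores below are given scalars standing in for the ResNet-18 scorer of the cited work; k, the feature size, and mean pooling are assumptions.

```python
import numpy as np

def select_quality_features(history, scores, k=3):
    """Average the k historical Re-ID features with the highest quality
    scores into a single, L2-normalized tracklet descriptor."""
    top = np.argsort(scores)[-k:]          # indices of the k best frames
    feat = history[top].mean(axis=0)
    return feat / np.linalg.norm(feat)

rng = np.random.default_rng(3)
history = rng.standard_normal((10, 128))   # per-frame Re-ID features of one track
scores = rng.uniform(0, 1, 10)             # hypothetical per-frame quality scores
descriptor = select_quality_features(history, scores)
print("tracklet descriptor norm:", np.linalg.norm(descriptor))  # -> 1.0
```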
1906.01620
2948210138
While Deep Neural Networks (DNNs) have become the go-to approach in computer vision, the vast majority of these models fail to properly capture the uncertainty inherent in their predictions. Estimating this predictive uncertainty can be crucial, for instance in automotive applications. In Bayesian deep learning, predictive uncertainty is often decomposed into the distinct types of aleatoric and epistemic uncertainty. The former can be estimated by letting a DNN output the parameters of a probability distribution. Epistemic uncertainty estimation is a more challenging problem, and while different scalable methods recently have emerged, no comprehensive comparison has been performed in a real-world setting. We therefore accept this task and propose an evaluation framework for predictive uncertainty estimation that is specifically designed to test the robustness required in real-world computer vision applications. Using the proposed framework, we perform an extensive comparison of the popular ensembling and MC-dropout methods on the tasks of depth completion and street-scene semantic segmentation. Our comparison suggests that ensembling consistently provides more reliable uncertainty estimates. Code is available at this https URL.
Lakshminarayanan et al. @cite_10 created a parametric model @math of the conditional distribution using a DNN @math , and learned multiple point estimates @math by repeatedly minimizing the MLE objective @math with random initialization. They then averaged over the corresponding parametric models to obtain the predictive distribution. The authors considered this a non-Bayesian alternative to predictive uncertainty estimation. However, since @math can always be seen as samples from some distribution @math , we note that this averaged predictive distribution is virtually identical to the one obtained by approximate Bayesian inference. Ensembling can thus also be viewed as approximate Bayesian inference, where the level of approximation is determined by how well the implicit sampling distribution @math approximates the posterior @math . Ideally, we want @math to be distributed exactly according to @math . Since @math is highly multi-modal in the parameter space for DNNs @cite_11 @cite_42 , so is @math . By minimizing @math multiple times, starting from different initial points, we are likely to end up in different local optima. Ensembling can thus generate a compact set of samples @math that, even for small values of @math , captures this important aspect of multi-modality in @math .
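To make the ensembling recipe concrete, the following is a minimal PyTorch sketch on a toy 1-D regression task; `GaussianNet`, the data, and all hyperparameters are our own illustrative choices, not code from the cited works. Each member outputs a Gaussian (the aleatoric model), and averaging the M member Gaussians yields a mixture whose extra spread reflects epistemic uncertainty.

```python
import torch
import torch.nn as nn

class GaussianNet(nn.Module):
    """Small DNN that outputs the parameters (mean, log-variance) of a
    Gaussian p(y|x), i.e. the aleatoric-uncertainty model."""
    def __init__(self, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 2))

    def forward(self, x):
        out = self.body(x)
        return out[:, :1], out[:, 1:]  # mean, log-variance

def nll(mu, log_var, y):
    # Gaussian negative log-likelihood: the MLE objective minimized per member.
    return 0.5 * (log_var + (y - mu) ** 2 / log_var.exp()).mean()

def train_member(x, y, steps=2000):
    net = GaussianNet()  # fresh random initialization -> one implicit "sample"
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        loss = nll(*net(x), y)
        loss.backward()
        opt.step()
    return net

# Toy 1-D regression data and an ensemble of M = 5 members.
x = torch.linspace(-3, 3, 256).unsqueeze(1)
y = torch.sin(x) + 0.1 * torch.randn_like(x)
ensemble = [train_member(x, y) for _ in range(5)]

# Predictive distribution: a uniform mixture of the member Gaussians.
with torch.no_grad():
    mus = torch.stack([m(x)[0] for m in ensemble])
    vars_ = torch.stack([m(x)[1].exp() for m in ensemble])
    mu_hat = mus.mean(0)                                # predictive mean
    var_hat = (vars_ + mus ** 2).mean(0) - mu_hat ** 2  # total variance
```

The last two lines moment-match the mixture: the total predictive variance decomposes into the average member variance (aleatoric) plus the variance of the member means (epistemic), which is exactly the multi-modality that the random restarts are meant to capture.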
{ "cite_N": [ "@cite_42", "@cite_10", "@cite_11" ], "mid": [ "", "2963238274", "2101762657" ], "abstract": [ "", "Deep neural networks (NNs) are powerful black box predictors that have recently achieved impressive performance on a wide spectrum of tasks. Quantifying predictive uncertainty in NNs is a challenging and yet unsolved problem. Bayesian NNs, which learn a distribution over weights, are currently the state-of-the-art for estimating predictive uncertainty; however these require significant modifications to the training procedure and are computationally expensive compared to standard (non-Bayesian) NNs. We propose an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates. Through a series of experiments on classification and regression benchmarks, we demonstrate that our method produces well-calibrated uncertainty estimates which are as good or better than approximate Bayesian NNs. To assess robustness to dataset shift, we evaluate the predictive uncertainty on test examples from known and unknown distributions, and show that our method is able to express higher uncertainty on out-of-distribution examples. We demonstrate the scalability of our method by evaluating predictive uncertainty estimates on ImageNet.", "We show that for a single neuron with the logistic function as the transfer function the number of local minima of the error function based on the square loss can grow exponentially in the dimension." ] }
1709.06309
2566767245
Sentiment analysis can be regarded as a relation extraction problem in which the sentiment of some opinion holder towards a certain aspect of a product, theme or event needs to be extracted. We present a novel neural architecture for sentiment analysis as a relation extraction problem that addresses this problem by dividing it into three subtasks: i) identification of aspect and opinion terms, ii) labeling of opinion terms with a sentiment, and iii) extraction of relations between opinion terms and aspect terms. For each subtask, we propose a neural network based component and combine all of them into a complete system for relational sentiment analysis. The component for aspect and opinion term extraction is a hybrid architecture consisting of a recurrent neural network stacked on top of a convolutional neural network. This approach outperforms a standard convolutional deep neural architecture as well as a recurrent network architecture and performs competitively compared to other methods on two datasets of annotated customer reviews. To extract sentiments for individual opinion terms, we propose a recurrent architecture in combination with word distance features and achieve promising results, outperforming a majority baseline by 18% accuracy and providing the first results for the USAGE dataset. Our relation extraction component outperforms the current state-of-the-art in aspect-opinion relation extraction by 15% F-Measure.
Most relevant in terms of aspect and opinion term extraction are the works of @cite_34 and Irsoy and Cardie @cite_1 . The former addresses the extraction of opinion targets, while Irsoy and Cardie focus on the extraction of opinion expressions. Both approaches frame the respective tasks as sequence labeling tasks using RNNs.
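As an illustration of framing term extraction as token-level sequence labeling with an RNN, here is a minimal sketch; it is a generic bidirectional GRU tagger with an assumed BIO-style tag set, not a re-implementation of either cited architecture, and the vocabulary and layer sizes are made up.

```python
import torch
import torch.nn as nn

class TermTagger(nn.Module):
    """Generic token-level tagger: embeddings -> BiGRU -> per-token logits.
    Illustrative tag set: O, B-ASPECT, I-ASPECT, B-OPINION, I-OPINION."""
    def __init__(self, vocab_size=10000, emb_dim=100, hidden=128, n_tags=5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, tokens):                 # tokens: (batch, seq_len), int64
        h, _ = self.rnn(self.emb(tokens))
        return self.out(h)                     # (batch, seq_len, n_tags)

tagger = TermTagger()
tokens = torch.randint(0, 10000, (2, 12))      # dummy token ids
gold = torch.randint(0, 5, (2, 12))            # dummy BIO labels
logits = tagger(tokens)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 5), gold.reshape(-1))
```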
{ "cite_N": [ "@cite_34", "@cite_1" ], "mid": [ "2252024663", "2144012961" ], "abstract": [ "The tasks in fine-grained opinion mining can be regarded as either a token-level sequence labeling problem or as a semantic compositional task. We propose a general class of discriminative models based on recurrent neural networks (RNNs) and word embeddings that can be successfully applied to such tasks without any taskspecific feature engineering effort. Our experimental results on the task of opinion target identification show that RNNs, without using any hand-crafted features, outperform feature-rich CRF-based models. Our framework is flexible, allows us to incorporate other linguistic features, and achieves results that rival the top performing systems in SemEval-2014.", "Recurrent neural networks (RNNs) are connectionist models of sequential data that are naturally applicable to the analysis of natural language. Recently, “depth in space” — as an orthogonal notion to “depth in time” — in RNNs has been investigated by stacking multiple layers of RNNs and shown empirically to bring a temporal hierarchy to the architecture. In this work we apply these deep RNNs to the task of opinion expression extraction formulated as a token-level sequence-labeling task. Experimental results show that deep, narrow RNNs outperform traditional shallow, wide RNNs with the same number of parameters. Furthermore, our approach outperforms previous CRF-based baselines, including the state-of-the-art semi-Markov CRF model, and does so without access to the powerful opinion lexicons and syntactic features relied upon by the semi-CRF, as well as without the standard layer-by-layer pre-training typically required of RNN architectures." ] }
1709.06548
2753491454
A Triangle Generative Adversarial Network ( @math -GAN) is developed for semi-supervised cross-domain joint distribution matching, where the training data consists of samples from each domain, and supervision of domain correspondence is provided by only a few paired samples. @math -GAN consists of four neural networks, two generators and two discriminators. The generators are designed to learn the two-way conditional distributions between the two domains, while the discriminators implicitly define a ternary discriminative function, which is trained to distinguish real data pairs and two kinds of fake data pairs. The generators and discriminators are trained together using adversarial learning. Under mild assumptions, in theory the joint distributions characterized by the two generators concentrate to the data distribution. In experiments, three different kinds of domain pairs are considered, image-label, image-image and image-attribute pairs. Experiments on semi-supervised image classification, image-to-image translation and attribute-based image generation demonstrate the superiority of the proposed approach.
The proposed framework focuses on designing GANs for joint-distribution matching. Conditional GANs can be used for this task if supervised data are available. Various conditional GANs have been proposed to condition the image generation on class labels @cite_1 , attributes @cite_3 , texts @cite_23 @cite_19 and images @cite_32 @cite_34 . Unsupervised learning methods have also been developed for this task. BiGAN @cite_21 and ALI @cite_7 proposed to jointly learn a generation network and an inference network via adversarial learning. Though originally designed for learning the two-way transition between stochastic latent variables and real data samples, BiGAN and ALI can be directly adapted to learn the joint distribution of two real domains. Another method is DiscoGAN @cite_17 , in which two generators are used to model the bidirectional mapping between domains, and another two discriminators are used to decide whether a generated sample is fake in each individual domain. Further, additional reconstruction losses are introduced to keep the two generators strongly coupled and to alleviate the problem of mode collapse. Similar work includes CycleGAN @cite_31 , DualGAN @cite_10 and DTN @cite_22 . Additional weight-sharing constraints are introduced in CoGAN @cite_14 and UNIT @cite_35 .
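The coupling of two generators through reconstruction losses can be sketched as follows. This is a minimal illustration of the DiscoGAN/CycleGAN-style objective, assuming single-conv placeholder networks in place of real generators and discriminators; the cycle weight of 10 is a common choice in such setups, not a value taken from the cited papers.

```python
import torch
import torch.nn as nn

def toy_net(out_ch):
    # Single-conv placeholder standing in for a real generator/discriminator.
    return nn.Sequential(nn.Conv2d(3, out_ch, 3, padding=1))

G = nn.Sequential(toy_net(3), nn.Tanh())   # G: X -> Y
F = nn.Sequential(toy_net(3), nn.Tanh())   # F: Y -> X
D_Y = toy_net(1)                           # patch discriminator on domain Y

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

x = torch.rand(4, 3, 64, 64)               # a batch from domain X
fake_y = G(x)
d_out = D_Y(fake_y)
adv_loss = bce(d_out, torch.ones_like(d_out))   # generator tries to fool D_Y
cycle_loss = l1(F(fake_y), x)                   # F(G(x)) should reconstruct x
gen_loss = adv_loss + 10.0 * cycle_loss         # weighted sum (weight assumed)
```

The reconstruction term is what couples the two generators: neither G nor F can collapse to a single output mode without making F(G(x)) a poor reconstruction of x.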
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_22", "@cite_7", "@cite_21", "@cite_1", "@cite_32", "@cite_3", "@cite_19", "@cite_23", "@cite_31", "@cite_34", "@cite_10", "@cite_17" ], "mid": [ "2592480533", "", "2553897675", "2411541852", "2412320034", "2125389028", "", "2552611751", "", "", "2962793481", "2523714292", "2608015370", "" ], "abstract": [ "Unsupervised image-to-image translation aims at learning a joint distribution of images in different domains by using images from the marginal distributions in individual domains. Since there exists an infinite set of joint distributions that can arrive the given marginal distributions, one could infer nothing about the joint distribution from the marginal distributions without additional assumptions. To address the problem, we make a shared-latent space assumption and propose an unsupervised image-to-image translation framework based on Coupled GANs. We compare the proposed framework with competing approaches and present high quality image translation results on various challenging unsupervised image translation tasks, including street scene image translation, animal image translation, and face image translation. We also apply the proposed framework to domain adaptation and achieve state-of-the-art performance on benchmark datasets. Code and additional results are available in this https URL .", "", "We study the problem of transferring a sample in one domain to an analog sample in another domain. Given two related domains, S and T, we would like to learn a generative function G that maps an input sample from S to the domain T, such that the output of a given function f, which accepts inputs in either domains, would remain unchanged. Other than the function f, the training data is unsupervised and consist of a set of samples from each domain. The Domain Transfer Network (DTN) we present employs a compound loss function that includes a multiclass GAN loss, an f-constancy component, and a regularizing component that encourages G to map samples from T to themselves. We apply our method to visual domains including digits and face images and demonstrate its ability to generate convincing novel images of previously unseen entities, while preserving their identity.", "We introduce the adversarially learned inference (ALI) model, which jointly learns a generation network and an inference network using an adversarial process. The generation network maps samples from stochastic latent variables to the data space while the inference network maps training examples in data space to the space of latent variables. An adversarial game is cast between these two networks and a discriminative network is trained to distinguish between joint latent data-space samples from the generative network and joint samples from the inference network. We illustrate the ability of the model to learn mutually coherent inference and generation networks through the inspections of model samples and reconstructions and confirm the usefulness of the learned representations by obtaining a performance competitive with state-of-the-art on the semi-supervised SVHN and CIFAR10 tasks.", "The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing that the latent space of such generators captures semantic variation in the data distribution. 
Intuitively, models trained to predict these semantic latent representations given data may serve as useful feature representations for auxiliary problems where semantics are relevant. However, in their existing form, GANs have no means of learning the inverse mapping -- projecting data back into the latent space. We propose Bidirectional Generative Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and demonstrate that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.", "Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.", "", "Generative Adversarial Networks (GANs) have recently demonstrated to successfully approximate complex data distributions. A relevant extension of this model is conditional GANs (cGANs), where the introduction of external information allows to determine specific representations of the generated images. In this work, we evaluate encoders to inverse the mapping of a cGAN, i.e., mapping a real image into a latent space and a conditional representation. This allows, for example, to reconstruct and modify real images of faces conditioning on arbitrary attributes. Additionally, we evaluate the design of cGANs. The combination of an encoder with a cGAN, which we call Invertible cGAN (IcGAN), enables to re-generate real images with deterministic complex modifications.", "", "", "Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.", "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. 
Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.", "Conditional Generative Adversarial Networks (GANs) for cross-domain image-to-image translation have made much progress recently. Depending on the task complexity, thousands to millions of labeled image pairs are needed to train a conditional GAN. However, human labeling is expensive, even impractical, and large quantities of data may not always be available. Inspired by dual learning from natural language translation, we develop a novel dual-GAN mechanism, which enables image translators to be trained from two sets of unlabeled images from two domains. In our architecture, the primal GAN learns to translate images from domain U to those in domain V, while the dual GAN learns to invert the task. The closed loop made by the primal and dual tasks allows images from either domain to be translated and then reconstructed. Hence a loss function that accounts for the reconstruction error of images can be used to train the translators. Experiments on multiple image translation tasks with unlabeled data show considerable performance gain of DualGAN over a single GAN. For some tasks, DualGAN can even achieve comparable or slightly better results than conditional GAN trained on fully labeled data.", "" ] }
1709.06548
2753491454
A Triangle Generative Adversarial Network ( @math -GAN) is developed for semi-supervised cross-domain joint distribution matching, where the training data consists of samples from each domain, and supervision of domain correspondence is provided by only a few paired samples. @math -GAN consists of four neural networks, two generators and two discriminators. The generators are designed to learn the two-way conditional distributions between the two domains, while the discriminators implicitly define a ternary discriminative function, which is trained to distinguish real data pairs and two kinds of fake data pairs. The generators and discriminators are trained together using adversarial learning. Under mild assumptions, in theory the joint distributions characterized by the two generators concentrate to the data distribution. In experiments, three different kinds of domain pairs are considered, image-label, image-image and image-attribute pairs. Experiments on semi-supervised image classification, image-to-image translation and attribute-based image generation demonstrate the superiority of the proposed approach.
Various methods and model architectures have been proposed to improve and stabilize the training of GANs, such as feature matching @cite_36 @cite_37 @cite_15 , Wasserstein GAN @cite_30 , energy-based GAN @cite_4 , and unrolled GAN @cite_11 , among many other related works. Our work is orthogonal to these methods, which could also be used to improve the training of @math -GAN. Instead of using an adversarial loss, there is also work that uses supervised learning @cite_0 for joint-distribution matching, and variational autoencoders for semi-supervised learning @cite_26 @cite_6 . Lastly, our work is closely related to the recent work of @cite_25 @cite_5 @cite_12 , which treats one of the domains as latent variables.
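As one concrete example of the stabilization techniques listed above, a minimal sketch of feature matching: the generator is trained to match the mean of an intermediate discriminator feature on real versus generated batches, rather than to fool the discriminator directly. All networks and sizes here are toy placeholders of our own choosing.

```python
import torch
import torch.nn as nn

# Toy discriminator feature extractor and generator (flattened 28x28 "images").
feat = nn.Sequential(nn.Linear(784, 256), nn.ReLU())   # D's intermediate layer
gen = nn.Sequential(nn.Linear(64, 784), nn.Tanh())     # toy generator

real = torch.rand(32, 784)                 # a real batch
fake = gen(torch.randn(32, 64))            # a generated batch

# Feature matching: match first moments of intermediate discriminator features.
fm_loss = ((feat(real).mean(0) - feat(fake).mean(0)) ** 2).mean()
```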
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_26", "@cite_4", "@cite_36", "@cite_6", "@cite_0", "@cite_5", "@cite_15", "@cite_25", "@cite_12", "@cite_11" ], "mid": [ "", "", "2951326654", "2521028896", "2432004435", "2951044009", "", "2753391154", "2625357353", "2752660961", "2753672091", "2554314924" ], "abstract": [ "", "", "A novel variational autoencoder is developed to model images, as well as associated labels or captions. The Deep Generative Deconvolutional Network (DGDN) is used as a decoder of the latent image features, and a deep Convolutional Neural Network (CNN) is used as an image encoder; the CNN is used to approximate a distribution for the latent DGDN features code. The latent code is also linked to generative models for labels (Bayesian support vector machine) or captions (recurrent neural network). When predicting a label caption for a new image at test, averaging is performed across the distribution of latent codes; this is computationally efficient as a consequence of the learned CNN-based encoder. Since the framework is capable of modeling the image in the presence absence of associated labels captions, a new semi-supervised setting is manifested for CNN learning with images; the framework even allows unsupervised CNN learning, based on images alone.", "We introduce the \"Energy-based Generative Adversarial Network\" model (EBGAN) which views the discriminator as an energy function that attributes low energies to the regions near the data manifold and higher energies to other regions. Similar to the probabilistic GANs, a generator is seen as being trained to produce contrastive samples with minimal energies, while the discriminator is trained to assign high energies to these generated samples. Viewing the discriminator as an energy function allows to use a wide variety of architectures and loss functionals in addition to the usual binary classifier with logistic output. Among them, we show one instantiation of EBGAN framework as using an auto-encoder architecture, with the energy being the reconstruction error, in place of the discriminator. We show that this form of EBGAN exhibits more stable behavior than regular GANs during training. We also show that a single-scale architecture can be trained to generate high-resolution images.", "We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. Unlike most work on generative models, our primary goal is not to train a model that assigns high likelihood to test data, nor do we require the model to be able to learn well without using any labels. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3 . We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes.", "A new method for learning variational autoencoders (VAEs) is developed, based on Stein variational gradient descent. A key advantage of this approach is that one need not make parametric assumptions about the form of the encoder distribution. 
Performance is further enhanced by integrating the proposed encoder with importance sampling. Excellent performance is demonstrated across multiple unsupervised and semi-supervised problems, including semi-supervised analysis of the ImageNet data, demonstrating the scalability of the model to large datasets.", "", "A new form of variational autoencoder (VAE) is developed, in which the joint distribution of data and codes is considered in two (symmetric) forms: ( @math ) from observed data fed through the encoder to yield codes, and ( @math ) from latent codes drawn from a simple prior and propagated through the decoder to manifest data. Lower bounds are learned for marginal log-likelihood fits observed data and latent codes. When learning with the variational bound, one seeks to minimize the symmetric Kullback-Leibler divergence of joint density functions from ( @math ) and ( @math ), while simultaneously seeking to maximize the two marginal log-likelihoods. To facilitate learning, a new form of adversarial training is developed. An extensive set of experiments is performed, in which we demonstrate state-of-the-art data reconstruction and generation on several image benchmark datasets.", "The Generative Adversarial Network (GAN) has achieved great success in generating realistic (real-valued) synthetic data. However, convergence issues and difficulties dealing with discrete data hinder the applicability of GAN to text. We propose a framework for generating realistic text via adversarial training. We employ a long short-term memory network as generator, and a convolutional network as discriminator. Instead of using the standard objective of GAN, we propose matching the high-dimensional latent feature distributions of real and synthetic sentences, via a kernelized discrepancy metric. This eases adversarial training by alleviating the mode-collapsing problem. Our experiments show superior performance in quantitative evaluation, and demonstrate that our model can generate realistic-looking sentences.", "We investigate the non-identifiability issues associated with bidirectional adversarial training for joint distribution matching. Within a framework of conditional entropy, we propose both adversarial and non-adversarial approaches to learn desirable matched joint distributions for unsupervised and supervised tasks. We unify a broad family of adversarial models as joint distribution matching problems. Our approach stabilizes learning of unsupervised bidirectional adversarial learning methods. Further, we introduce an extension for semi-supervised learning tasks. Theoretical results are validated in synthetic data and real-world applications.", "A new form of the variational autoencoder (VAE) is proposed, based on the symmetric Kullback-Leibler divergence. It is demonstrated that learning of the resulting symmetric VAE (sVAE) has close connections to previously developed adversarial-learning methods. This relationship helps unify the previously distinct techniques of VAE and adversarially learning, and provides insights that allow us to ameliorate shortcomings with some previously developed adversarial methods. In addition to an analysis that motivates and explains the sVAE, an extensive set of experiments validate the utility of the approach.", "We introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator. 
This allows training to be adjusted between using the optimal discriminator in the generator's objective, which is ideal but infeasible in practice, and using the current value of the discriminator, which is often unstable and leads to poor solutions. We show how this technique solves the common problem of mode collapse, stabilizes training of GANs with complex recurrent generators, and increases diversity and coverage of the data distribution by the generator." ] }
1709.06316
2755981968
In this paper, we propose a novel deep learning based video saliency prediction method, named DeepVS. Specifically, we establish a large-scale eye-tracking database of videos (LEDOV), which includes 32 subjects’ fixations on 538 videos. We find from LEDOV that human attention is more likely to be attracted by objects, particularly the moving objects or the moving parts of objects. Hence, an object-to-motion convolutional neural network (OM-CNN) is developed to predict the intra-frame saliency for DeepVS, which is composed of the objectness and motion subnets. In OM-CNN, cross-net mask and hierarchical feature normalization are proposed to combine the spatial features of the objectness subnet and the temporal features of the motion subnet. We further find from our database that there exists a temporal correlation of human attention with a smooth saliency transition across video frames. We thus propose saliency-structured convolutional long short-term memory (SS-ConvLSTM) network, using the extracted features from OM-CNN as the input. Consequently, the inter-frame saliency maps of a video can be generated, which consider both structured output with center-bias and cross-frame transitions of human attention maps. Finally, the experimental results show that DeepVS advances the state-of-the-art in video saliency prediction.
Most recently, DNNs have succeeded in many computer vision tasks, such as image classification @cite_17 , action recognition @cite_33 and object detection @cite_36 . In the field of saliency prediction, DNNs have also been successfully applied to automatically learn spatial features for predicting the saliency of images @cite_58 @cite_30 @cite_15 @cite_34 @cite_44 @cite_53 @cite_3 . Specifically, as one of the pioneering works, Deepfix @cite_58 proposed a DNN structure based on VGG-16 @cite_17 and inception modules @cite_8 to learn multi-scale semantic representations for saliency prediction. In Deepfix, a dilated convolutional structure was developed to extend the receptive field, and a location-biased convolutional layer was proposed to learn the centre-bias pattern for saliency prediction. Similarly, SALICON @cite_30 fine-tuned existing object recognition DNNs and developed an effective loss function for training the DNN model for saliency prediction. Later, several advanced DNN methods @cite_15 @cite_34 @cite_44 @cite_3 were proposed to further improve the performance of image saliency prediction.
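The two ideas attributed to Deepfix above can be sketched in a few lines: dilated convolutions enlarge the receptive field without downsampling, and a learned location-dependent bias models the centre-bias pattern. The channel sizes, depth, and additive-bias formulation below are our own illustrative assumptions, not the published Deepfix configuration.

```python
import torch
import torch.nn as nn

class SaliencyHead(nn.Module):
    """Toy saliency readout: dilated convolutions enlarge the receptive
    field, and a learned additive bias map provides a location-dependent
    (centre-bias) prior on the saliency map."""
    def __init__(self, in_ch=512, h=28, w=28):
        super().__init__()
        self.dilated = nn.Sequential(
            nn.Conv2d(in_ch, 256, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(256, 128, 3, padding=4, dilation=4), nn.ReLU())
        self.to_map = nn.Conv2d(128, 1, 1)
        self.center_bias = nn.Parameter(torch.zeros(1, 1, h, w))  # learned prior

    def forward(self, feats):                  # feats: backbone activations
        s = self.to_map(self.dilated(feats))
        return torch.sigmoid(s + self.center_bias)

head = SaliencyHead()
saliency = head(torch.rand(1, 512, 28, 28))    # -> (1, 1, 28, 28)
```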
{ "cite_N": [ "@cite_30", "@cite_33", "@cite_8", "@cite_36", "@cite_53", "@cite_34", "@cite_3", "@cite_44", "@cite_15", "@cite_58", "@cite_17" ], "mid": [ "2212216676", "1950788856", "2097117768", "2963037989", "2583180462", "2147347517", "2509306173", "2519528544", "2952932416", "2964114039", "1686810756" ], "abstract": [ "Saliency in Context (SALICON) is an ongoing effort that aims at understanding and predicting visual attention. Conventional saliency models typically rely on low-level image statistics to predict human fixations. While these models perform significantly better than chance, there is still a large gap between model prediction and human behavior. This gap is largely due to the limited capability of models in predicting eye fixations with strong semantic content, the so-called semantic gap. This paper presents a focused study to narrow the semantic gap with an architecture based on Deep Neural Network (DNN). It leverages the representational power of high-level semantics encoded in DNNs pretrained for object recognition. Two key components are fine-tuning the DNNs fully convolutionally with an objective function based on the saliency evaluation metrics, and integrating information at different image scales. We compare our method with 14 saliency models on 6 public eye tracking benchmark datasets. Results demonstrate that our DNNs can automatically learn features particularly for saliency prediction that surpass by a big margin the state-of-the-art. In addition, our model ranks top to date under all seven metrics on the MIT300 challenge set.", "Human actions can be represented by the trajectories of skeleton joints. Traditional methods generally model the spatial structure and temporal dynamics of human skeleton with hand-crafted features and recognize human actions by well-designed classifiers. In this paper, considering that recurrent neural network (RNN) can model the long-term contextual information of temporal sequences well, we propose an end-to-end hierarchical RNN for skeleton based action recognition. Instead of taking the whole skeleton as the input, we divide the human skeleton into five parts according to human physical structure, and then separately feed them to five subnets. As the number of layers increases, the representations extracted by the subnets are hierarchically fused to be the inputs of higher layers. The final representations of the skeleton sequences are fed into a single-layer perceptron, and the temporally accumulated output of the perceptron is the final decision. We compare with five other deep RNN architectures derived from our model to verify the effectiveness of the proposed network, and also compare with several other methods on three publicly available datasets. Experimental results demonstrate that our model achieves the state-of-the-art performance with high computational efficiency.", "We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. 
One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.", "We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.", "We introduce SalGAN, a deep convolutional neural network for visual saliency prediction trained with adversarial examples. The first stage of the network consists of a generator model whose weights are learned by back-propagation computed from a binary cross entropy (BCE) loss over downsampled versions of the saliency maps. The resulting prediction is processed by a discriminator network trained to solve a binary classification task between the saliency maps generated by the generative stage and the ground truth ones. Our experiments show how adversarial training allows reaching state-of-the-art performance across different metrics when combined with a widely-used loss function like BCE. Our results can be reproduced with the source code and trained models available at https: imatge-upc.github. io saliency-salgan-2017 .", "A key problem in salient object detection is how to effectively model the semantic properties of salient objects in a data-driven manner. In this paper, we propose a multi-task deep saliency model based on a fully convolutional neural network with global input (whole raw images) and global output (whole saliency maps). In principle, the proposed saliency model takes a data-driven strategy for encoding the underlying saliency prior information, and then sets up a multi-task learning scheme for exploring the intrinsic correlations between saliency detection and semantic image segmentation. Through collaborative feature learning from such two correlated tasks, the shared fully convolutional layers produce effective features for object perception. Moreover, it is capable of capturing the semantic information on salient objects across different levels using the fully convolutional layers, which investigate the feature-sharing properties of salient object detection with a great reduction of feature redundancy. Finally, we present a graph Laplacian regularized nonlinear regression model for saliency refinement. Experimental results demonstrate the effectiveness of our approach in comparison with the state-of-the-art approaches.", "This paper presents a novel deep architecture for saliency prediction. 
Current state of the art models for saliency prediction employ Fully Convolutional networks that perform a non-linear combination of features extracted from the last convolutional layer to predict saliency maps. We propose an architecture which, instead, combines features extracted at different levels of a Convolutional Neural Network (CNN). Our model is composed of three main blocks: a feature extraction CNN, a feature encoding network, that weights low and high level feature maps, and a prior learning network. We compare our solution with state of the art saliency models on two public benchmarks datasets. Results show that our model outperforms under all evaluation metrics on the SALICON dataset, which is currently the largest public dataset for saliency prediction, and achieves competitive results on the MIT300 benchmark.", "Deep networks have been proved to encode high level semantic features and delivered superior performance in saliency detection. In this paper, we go one step further by developing a new saliency model using recurrent fully convolutional networks (RFCNs). Compared with existing deep network based methods, the proposed network is able to incorporate saliency prior knowledge for more accurate inference. In addition, the recurrent architecture enables our method to automatically learn to refine the saliency map by correcting its previous errors. To train such a network with numerous parameters, we propose a pre-training strategy using semantic segmentation data, which simultaneously leverages the strong supervision of segmentation tasks for better training and enables the network to capture generic representations of objects for saliency detection. Through extensive experimental evaluations, we demonstrate that the proposed method compares favorably against state-of-the-art approaches, and that the proposed recurrent deep model as well as the pre-training method can significantly improve performance.", "The prediction of salient areas in images has been traditionally addressed with hand-crafted features based on neuroscience principles. This paper, however, addresses the problem with a completely data-driven approach by training a convolutional neural network (convnet). The learning process is formulated as a minimization of a loss function that measures the Euclidean distance of the predicted saliency map with the provided ground truth. The recent publication of large datasets of saliency prediction has provided enough data to train end-to-end architectures that are both fast and accurate. Two designs are proposed: a shallow convnet trained from scratch, and a another deeper solution whose first three layers are adapted from another network trained for classification. To the authors knowledge, these are the first end-to-end CNNs trained and tested for the purpose of saliency prediction.", "Understanding and predicting the human visual attention mechanism is an active area of research in the fields of neuroscience and computer vision. In this paper, we propose DeepFix, a fully convolutional neural network, which models the bottom–up mechanism of visual attention via saliency prediction. Unlike classical works, which characterize the saliency map using various hand-crafted features, our model automatically learns features in a hierarchical fashion and predicts the saliency map in an end-to-end manner. DeepFix is designed to capture semantics at multiple scales while taking global context into account, by using network layers with very large receptive fields. 
Generally, fully convolutional nets are spatially invariant—this prevents them from modeling location-dependent patterns (e.g., centre-bias). Our network handles this by incorporating a novel location-biased convolutional layer. We evaluate our model on multiple challenging saliency data sets and show that it achieves the state-of-the-art results.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision." ] }
1709.06316
2755981968
In this paper, we propose a novel deep learning based video saliency prediction method, named DeepVS. Specifically, we establish a large-scale eye-tracking database of videos (LEDOV), which includes 32 subjects’ fixations on 538 videos. We find from LEDOV that human attention is more likely to be attracted by objects, particularly the moving objects or the moving parts of objects. Hence, an object-to-motion convolutional neural network (OM-CNN) is developed to predict the intra-frame saliency for DeepVS, which is composed of the objectness and motion subnets. In OM-CNN, cross-net mask and hierarchical feature normalization are proposed to combine the spatial features of the objectness subnet and the temporal features of the motion subnet. We further find from our database that there exists a temporal correlation of human attention with a smooth saliency transition across video frames. We thus propose saliency-structured convolutional long short-term memory (SS-ConvLSTM) network, using the extracted features from OM-CNN as the input. Consequently, the inter-frame saliency maps of a video can be generated, which consider both structured output with center-bias and cross-frame transitions of human attention maps. Finally, the experimental results show that DeepVS advances the state-of-the-art in video saliency prediction.
However, only a few works have applied DNNs to video saliency prediction @cite_20 @cite_43 @cite_39 @cite_2 @cite_12 . In these DNNs, the dynamic characteristics are explored in two ways: adding temporal information to CNN structures @cite_20 @cite_43 @cite_2 or developing dynamic structures with LSTMs @cite_39 @cite_12 . For adding temporal information, a four-layer CNN in @cite_20 and a two-stream CNN in @cite_43 were trained, respectively, with both RGB frames and motion maps as the inputs. Similarly, in @cite_2 , a pair of video frames concatenated with a static saliency map (generated by the static CNN) is input to the dynamic CNN for video saliency prediction, allowing the CNN to capture temporal features through representation learning. Instead, we find that human attention is more likely to be attracted by the moving objects or the moving parts of objects. As such, to explore semantic temporal features for video saliency prediction, the motion subnet in our OM-CNN is trained under the guidance of the objectness subnet.
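The "adding temporal information" strategy can be illustrated with a minimal two-stream sketch: one branch sees the RGB frame, the other a motion map (e.g., optical flow), and their features are fused before the saliency readout. All layer sizes here are illustrative; neither cited two-stream model is reproduced.

```python
import torch
import torch.nn as nn

def stream(in_ch):
    # Tiny convolutional branch; real models use much deeper backbones.
    return nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())

rgb_stream = stream(3)       # appearance branch: RGB frame
motion_stream = stream(2)    # temporal branch: optical flow (dx, dy)
fuse = nn.Conv2d(128, 1, 1)  # 1x1 conv over concatenated features

frame = torch.rand(1, 3, 112, 112)
flow = torch.rand(1, 2, 112, 112)
features = torch.cat([rgb_stream(frame), motion_stream(flow)], dim=1)
saliency = torch.sigmoid(fuse(features))       # -> (1, 1, 112, 112)
```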
{ "cite_N": [ "@cite_39", "@cite_43", "@cite_2", "@cite_20", "@cite_12" ], "mid": [ "2313180542", "2506895016", "2757028014", "2344129934", "2740797387" ], "abstract": [ "In many computer vision tasks, the relevant information to solve the problem at hand is mixed to irrelevant, distracting information. This has motivated researchers to design attentional models that can dynamically focus on parts of images or videos that are salient, e.g., by down-weighting irrelevant pixels. In this work, we propose a spatiotemporal attentional model that learns where to look in a video directly from human fixation data. We model visual attention with a mixture of Gaussians at each frame. This distribution is used to express the probability of saliency for each pixel. Time consistency in videos is modeled hierarchically by: 1) deep 3D convolutional features to represent spatial and short-term time relations and 2) a long short-term memory network on top that aggregates the clip-level representation of sequential clips and therefore expands the temporal domain from few frames to seconds. The parameters of the proposed model are optimized via maximum likelihood estimation using human fixations as training data, without knowledge of the action in each video. Our experiments on Hollywood2 show state-of-the-art performance on saliency prediction for video. We also show that our attentional model trained on Hollywood2 generalizes well to UCF101 and it can be leveraged to improve action classification accuracy on both datasets.", "In recent years, visual saliency estimation in images has attracted much attention in the computer vision community. However, predicting saliency in videos has received rela- tively little attention. Inspired by the recent success of deep convolutional neural networks based static saliency mod- els, in this work, we study two different two-stream convo- lutional networks for dynamic saliency prediction. To im- prove the generalization capability of our models, we also introduce a novel, empirically grounded data augmenta- tion technique for this task. We test our models on DIEM dataset and report superior results against the existing mod- els. Moreover, we perform transfer learning experiments on SALICON, a recently proposed static saliency dataset, by finetuning our models on the optical flows estimated from static images. Our experiments show that taking motion into account in this way can be helpful for static saliency estimation.", "This paper proposes a deep learning model to efficiently detect salient regions in videos. It addresses two important issues: 1) deep video saliency model training with the absence of sufficiently large and pixel-wise annotated video data and 2) fast video saliency training and detection. The proposed deep video saliency network consists of two modules, for capturing the spatial and temporal saliency information, respectively. The dynamic saliency model, explicitly incorporating saliency estimates from the static saliency model, directly produces spatiotemporal saliency inference without time-consuming optical flow computation. We further propose a novel data augmentation technique that simulates video training data from existing annotated image data sets, which enables our network to learn diverse saliency information and prevents overfitting with the limited number of training videos. 
Leveraging our synthetic video data (150K video sequences) and real videos, our deep video saliency model successfully learns both spatial and temporal saliency cues, thus producing accurate spatiotemporal saliency estimate. We advance the state-of-the-art on the densely annotated video segmentation data set (MAE of .06) and the Freiburg-Berkeley Motion Segmentation data set (MAE of .07), and do so with much improved speed (2 fps with all steps).", "The purpose of this paper is the detection of salient areas in natural video by using the new deep learning techniques. Salient patches in video frames are predicted first. Then the predicted visual fixation maps are built upon them. We design the deep architecture on the basis of CaffeNet implemented with Caffe toolkit. We show that changing the way of data selection for optimisation of network parameters, we can save computation cost up to @math times. We extend deep learning approaches for saliency prediction in still images with RGB values to specificity of video using the sensitivity of the human visual system to residual motion. Furthermore, we complete primary colour pixel values by contrast features proposed in classical visual attention prediction models. The experiments are conducted on two publicly available datasets. The first is IRCCYN video database containing @math videos with an overall amount of @math frames and eye fixations of @math subjects. The second one is HOLLYWOOD2 provided @math movie clips with the eye fixations of @math subjects. On IRCCYN dataset, the accuracy obtained is of @math . On HOLLYWOOD2 dataset, results in prediction of saliency of patches show the improvement up to @math with regard to RGB use only. The resulting accuracy of @math is obtained. The AUC metric in comparison of predicted saliency maps with visual fixation maps shows the increase up to @math on a sample of video clips from this dataset.", "Although the recent success of convolutional neural network (CNN) advances state-of-the-art saliency prediction in static images, few works have addressed the problem of predicting attention in videos. On the other hand, we find that the attention of different subjects consistently focuses on a single face in each frame of videos involving multiple faces. Therefore, we propose in this paper a novel deep learning (DL) based method to predict salient face in multiple-face videos, which is capable of learning features and transition of salient faces across video frames. In particular, we first learn a CNN for each frame to locate salient face. Taking CNN features as input, we develop a multiple-stream long short-term memory (M-LSTM) network to predict the temporal transition of salient faces in video sequences. To evaluate our DL-based method, we build a new eye-tracking database of multiple-face videos. The experimental results show that our method outperforms the prior state-of-the-art methods in predicting visual attention on faces in multiple-face videos." ] }
1709.06316
2755981968
In this paper, we propose a novel deep learning based video saliency prediction method, named DeepVS. Specifically, we establish a large-scale eye-tracking database of videos (LEDOV), which includes 32 subjects’ fixations on 538 videos. We find from LEDOV that human attention is more likely to be attracted by objects, particularly the moving objects or the moving parts of objects. Hence, an object-to-motion convolutional neural network (OM-CNN) is developed to predict the intra-frame saliency for DeepVS, which is composed of the objectness and motion subnets. In OM-CNN, cross-net mask and hierarchical feature normalization are proposed to combine the spatial features of the objectness subnet and the temporal features of the motion subnet. We further find from our database that there exists a temporal correlation of human attention with a smooth saliency transition across video frames. We thus propose saliency-structured convolutional long short-term memory (SS-ConvLSTM) network, using the extracted features from OM-CNN as the input. Consequently, the inter-frame saliency maps of a video can be generated, which consider both structured output with center-bias and cross-frame transitions of human attention maps. Finally, the experimental results show that DeepVS advances the state-of-the-art in video saliency prediction.
For developing dynamic structures, Bazzani et al. @cite_39 and Liu et al. @cite_12 applied LSTM networks to predict human attention, relying on both short- and long-term memory. However, the fully connected layers in LSTMs limit the dimensions of both input and output, making it impossible to obtain end-to-end saliency maps. As such, strong prior knowledge about the distribution of saliency needs to be assumed. To be more specific, in @cite_39 , human attention is assumed to be distributed as a Gaussian mixture model (GMM), and an LSTM is constructed to learn the GMM parameters. Similarly, @cite_12 focuses on predicting the saliency of conference videos and assumes that the saliency of each face follows a Gaussian distribution. In @cite_12 , the transition of face saliency across video frames is learned by an LSTM, and the final saliency map is generated by combining the saliency of all faces in the video. In our work, we explore 2C-LSTM with Bayesian dropout to directly predict saliency maps in an end-to-end manner. This allows learning a more complex distribution of human attention, rather than a pre-assumed saliency distribution.
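The difference between a fully connected LSTM and a convolutional one is easiest to see in code. The sketch below is a minimal ConvLSTM cell of our own simplified construction, not SS-ConvLSTM or 2C-LSTM: replacing the LSTM's matrix multiplications with convolutions preserves the spatial layout, so the recurrence can emit a full saliency map per frame instead of a handful of distribution parameters such as GMM means.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: the four gates are computed by a single
    convolution over [input, hidden], so hidden state and output keep
    their 2-D spatial layout."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

cell = ConvLSTMCell(in_ch=64, hid_ch=32)
readout = nn.Conv2d(32, 1, 1)
h = torch.zeros(1, 32, 28, 28)
c = torch.zeros(1, 32, 28, 28)
for t in range(5):                         # per-frame features, e.g. from a CNN
    x_t = torch.rand(1, 64, 28, 28)
    h, (h, c) = cell(x_t, (h, c))
    saliency_t = torch.sigmoid(readout(h)) # a full map per frame, end to end
```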
{ "cite_N": [ "@cite_12", "@cite_39" ], "mid": [ "2740797387", "2313180542" ], "abstract": [ "Although the recent success of convolutional neural network (CNN) advances state-of-the-art saliency prediction in static images, few work has addressed the problem of predicting attention in videos. On the other hand, we nd that the attention of different subjects consistently focuses on a single face in each frame of videos involving multiple faces. Therefore, we propose in this paper a novel deep learning (DL) based method to predict salient face in multiple-face videos, which is capable of learning features and transition of salient faces across video frames. In particular, we rst learn a CNN for each frame to locate salient face. Taking CNN features as input, we develop a multiple-stream long short-term memory (M-LSTM) network to predict the temporal transition of salient faces in video sequences. To evaluate our DL-based method, we build a new eye-tracking database of multiple-face videos. The experimental results show that our method outperforms the prior state-of-the-art methods in predicting visual attention on faces in multipleface videos.", "In many computer vision tasks, the relevant information to solve the problem at hand is mixed to irrelevant, distracting information. This has motivated researchers to design attentional models that can dynamically focus on parts of images or videos that are salient, e.g., by down-weighting irrelevant pixels. In this work, we propose a spatiotemporal attentional model that learns where to look in a video directly from human fixation data. We model visual attention with a mixture of Gaussians at each frame. This distribution is used to express the probability of saliency for each pixel. Time consistency in videos is modeled hierarchically by: 1) deep 3D convolutional features to represent spatial and short-term time relations and 2) a long short-term memory network on top that aggregates the clip-level representation of sequential clips and therefore expands the temporal domain from few frames to seconds. The parameters of the proposed model are optimized via maximum likelihood estimation using human fixations as training data, without knowledge of the action in each video. Our experiments on Hollywood2 show state-of-the-art performance on saliency prediction for video. We also show that our attentional model trained on Hollywood2 generalizes well to UCF101 and it can be leveraged to improve action classification accuracy on both datasets." ] }
1709.06316
2755981968
In this paper, we propose a novel deep learning based video saliency prediction method, named DeepVS. Specifically, we establish a large-scale eye-tracking database of videos (LEDOV), which includes 32 subjects’ fixations on 538 videos. We find from LEDOV that human attention is more likely to be attracted by objects, particularly the moving objects or the moving parts of objects. Hence, an object-to-motion convolutional neural network (OM-CNN) is developed to predict the intra-frame saliency for DeepVS, which is composed of the objectness and motion subnets. In OM-CNN, cross-net mask and hierarchical feature normalization are proposed to combine the spatial features of the objectness subnet and the temporal features of the motion subnet. We further find from our database that there exists a temporal correlation of human attention with a smooth saliency transition across video frames. We thus propose saliency-structured convolutional long short-term memory (SS-ConvLSTM) network, using the extracted features from OM-CNN as the input. Consequently, the inter-frame saliency maps of a video can be generated, which consider both structured output with center-bias and cross-frame transitions of human attention maps. Finally, the experimental results show that DeepVS advances the state-of-the-art in video saliency prediction.
The eye-tracking databases of videos collect the fixations of subjects on each video frame, which can serve as the ground truth for video saliency prediction. Existing eye-tracking databases benefit from mature eye-tracking technology. In particular, an eye tracker obtains the fixations of subjects on videos by tracking the pupil and corneal reflections @cite_18 . The pupil locations are then mapped to the real-world stimuli, i.e., video frames, through a pre-defined calibration matrix (a minimal mapping sketch is given after the reference entry below). As such, fixations can be located in each video frame, indicating where people pay attention.
{ "cite_N": [ "@cite_18" ], "mid": [ "2804567257" ], "abstract": [ "The past decade has witnessed the use of high-level features in saliency prediction for both videos and images. Unfortunately, the existing saliency prediction methods only handle high-level static features, such as face. In fact, high-level dynamic features (also called actions), such as speaking or head turning, are also extremely attractive to visual attention in videos. Thus, in this paper, we propose a data-driven method for learning to predict the saliency of multiple-face videos, by leveraging both static and dynamic features at high-level. Specifically, we introduce an eye-tracking database, collecting the fixations of 39 subjects viewing 65 multiple-face videos. Through analysis on our database, we find a set of high-level features that cause a face to receive extensive visual attention. These high-level features include the static features of face size, center-bias and head pose, as well as the dynamic features of speaking and head turning. Then, we present the techniques for extracting these high-level features. Afterwards, a novel model, namely multiple hidden Markov model (M-HMM), is developed in our method to enable the transition of saliency among faces. In our M-HMM, the saliency transition takes into account both the state of saliency at previous frames and the observed high-level features at the current frame. The experimental results show that the proposed method is superior to other state-of-the-art methods in predicting visual attention on multiple-face videos. Finally, we shed light on a promising implementation of our saliency prediction method in locating the region-of-interest, for video conference compression with high efficiency video coding." ] }
1709.06316
2755981968
In this paper, we propose a novel deep learning based video saliency prediction method, named DeepVS. Specifically, we establish a large-scale eye-tracking database of videos (LEDOV), which includes 32 subjects’ fixations on 538 videos. We find from LEDOV that human attention is more likely to be attracted by objects, particularly the moving objects or the moving parts of objects. Hence, an object-to-motion convolutional neural network (OM-CNN) is developed to predict the intra-frame saliency for DeepVS, which is composed of the objectness and motion subnets. In OM-CNN, cross-net mask and hierarchical feature normalization are proposed to combine the spatial features of the objectness subnet and the temporal features of the motion subnet. We further find from our database that there exists a temporal correlation of human attention with a smooth saliency transition across video frames. We thus propose saliency-structured convolutional long short-term memory (SS-ConvLSTM) network, using the extracted features from OM-CNN as the input. Consequently, the inter-frame saliency maps of a video can be generated, which consider both structured output with center-bias and cross-frame transitions of human attention maps. Finally, the experimental results show that DeepVS advances the state-of-the-art in video saliency prediction.
Now, we review the existing video eye-tracking databases. Table summarizes the basic properties of these databases. To the best of our knowledge, CRCNS @cite_29 , SFU @cite_42 , DIEM @cite_48 and Hollywood @cite_35 are the most popular databases, widely used in most recent video saliency prediction works @cite_5 @cite_56 @cite_66 @cite_11 @cite_32 @cite_65 @cite_22 @cite_20 @cite_13 . In the following, they are reviewed in more detail.
{ "cite_N": [ "@cite_35", "@cite_22", "@cite_48", "@cite_29", "@cite_42", "@cite_65", "@cite_32", "@cite_56", "@cite_5", "@cite_13", "@cite_66", "@cite_20", "@cite_11" ], "mid": [ "2071555787", "1679630208", "2119577735", "2130184867", "1967508848", "2295598507", "2091008902", "2081239020", "2118985252", "1945036000", "1959094031", "2344129934", "2033859430" ], "abstract": [ "Systems based on bag-of-words models from image features collected at maxima of sparse interest point operators have been used successfully for both computer visual object and action recognition tasks. While the sparse, interest-point based approach to recognition is not inconsistent with visual processing in biological systems that operate in ‘saccade and fixate’ regimes, the methodology and emphasis in the human and the computer vision communities remains sharply distinct. Here, we make three contributions aiming to bridge this gap. First, we complement existing state-of-the art large scale dynamic computer vision annotated datasets like Hollywood-2 [1] and UCF Sports [2] with human eye movements collected under the ecological constraints of visual action and scene context recognition tasks. To our knowledge these are the first large human eye tracking datasets to be collected and made publicly available for video, vision.imar.ro eyetracking (497,107 frames, each viewed by 19 subjects), unique in terms of their (a) large scale and computer vision relevance, (b) dynamic, video stimuli, (c) task control, as well as free-viewing . Second, we introduce novel dynamic consistency and alignment measures , which underline the remarkable stability of patterns of visual search among subjects. Third, we leverage the significant amount of collected data in order to pursue studies and build automatic, end-to-end trainable computer vision systems based on human eye movements. Our studies not only shed light on the differences between computer vision spatio-temporal interest point image sampling strategies and the human fixations, as well as their impact for visual recognition performance, but also demonstrate that human fixations can be accurately predicted, and when used in an end-to-end automatic system, leveraging some of the advanced computer vision practice, can lead to state of the art results.", "Visual saliency, which predicts regions in the field of view that draw the most visual attention, has attracted a lot of interest from researchers. It has already been used in several vision tasks, e.g., image classification, object detection, foreground segmentation. Recently, the spectrum analysis based visual saliency approach has attracted a lot of interest due to its simplicity and good performance, where the phase information of the image is used to construct the saliency map. In this paper, we propose a new approach for detecting spatiotemporal visual saliency based on the phase spectrum of the videos, which is easy to implement and computationally efficient. With the proposed algorithm, we also study how the spatiotemporal saliency can be used in two important vision task, abnormality detection and spatiotemporal interest point detection. The proposed algorithm is evaluated on several commonly used datasets with comparison to the state-of-art methods from the literature. The experiments demonstrate the effectiveness of the proposed approach to spatiotemporal visual saliency detection and its application to the above vision tasks", "Where does one attend when viewing dynamic scenes? 
Research into the factors influencing gaze location during static scene viewing have reported that low-level visual features contribute very little to gaze location especially when opposed by high-level factors such as viewing task. However, the inclusion of transient features such as motion in dynamic scenes may result in a greater influence of visual features on gaze allocation and coordination of gaze across viewers. In the present study, we investigated the contribution of low- to mid-level visual features to gaze location during free-viewing of a large dataset of videos ranging in content and length. Signal detection analysis on visual features and Gaussian Mixture Models for clustering gaze was used to identify the contribution of visual features to gaze location. The results show that mid-level visual features including corners and orientations can distinguish between actual gaze locations and a randomly sampled baseline. However, temporal features such as flicker, motion, and their respective contrasts were the most predictive of gaze location. Additionally, moments in which all viewers’ gaze tightly clustered in the same location could be predicted by motion. Motion and mid-level visual features may influence gaze allocation in dynamic scenes, but it is currently unclear whether this influence is involuntary or due to correlations with higher order factors such as scene semantics.", "We evaluate the applicability of a biologically-motivated algorithm to select visually-salient regions of interest in video streams for multiply-foveated video compression. Regions are selected based on a nonlinear integration of low-level visual cues, mimicking processing in primate occipital, and posterior parietal cortex. A dynamic foveation filter then blurs every frame, increasingly with distance from salient locations. Sixty-three variants of the algorithm (varying number and shape of virtual foveas, maximum blur, and saliency competition) are evaluated against an outdoor video scene, using MPEG-1 and constant-quality MPEG-4 (DivX) encoding. Additional compression ratios of 1.1 to 8.5 are achieved by foveation. Two variants of the algorithm are validated against eye fixations recorded from four to six human observers on a heterogeneous collection of 50 video clips (over 45 000 frames in total). Significantly higher overlap than expected by chance is found between human and algorithmic foveations. With both variants, foveated clips are, on average, approximately half the size of unfoveated clips, for both MPEG-1 and MPEG-4. These results suggest a general-purpose usefulness of the algorithm in improving compression ratios of unconstrained video.", "This correspondence describes a publicly available database of eye-tracking data, collected on a set of standard video sequences that are frequently used in video compression, processing, and transmission simulations. A unique feature of this database is that it contains eye-tracking data for both the first and second viewings of the sequence. We have made available the uncompressed video sequences and the raw eye-tracking data for each sequence, along with different visualizations of the data and a preliminary analysis based on two well-known visual attention models.", "Human vision system actively seeks salient regions and movements in video sequences to reduce the search effort. Modeling computational visual saliency map provides important information for semantic understanding in many real world applications.
In this paper, we propose a novel video saliency detection model for detecting the attended regions that correspond to both interesting objects and dominant motions in video sequences. In spatial saliency map, we inherit the classical bottom-up spatial saliency map. In temporal saliency map, a novel optical flow model is proposed based on the dynamic consistency of motion. The spatial and the temporal saliency maps are constructed and further fused together to create a novel attention model. The proposed attention model is evaluated on three video datasets. Empirical validations demonstrate the salient regions detected by our dynamic consistent saliency map highlight the interesting objects effectively and efficiency. More importantly, the automatically video attended regions detected by proposed attention model are consistent with the ground truth saliency maps of eye movement data.", "Recently visual saliency has attracted wide attention of researchers in the computer vision and multimedia field. However, most of the visual saliency-related research was conducted on still images for studying static saliency. In this paper, we give a comprehensive comparative study for the first time of dynamic saliency (video shots) and static saliency (key frames of the corresponding video shots), and two key observations are obtained: 1) video saliency is often different from, yet quite related with, image saliency, and 2) camera motions, such as tilting, panning or zooming, affect dynamic saliency significantly. Motivated by these observations, we propose a novel camera motion and image saliency aware model for dynamic saliency prediction. The extensive experiments on two static-vs-dynamic saliency datasets collected by us show that our proposed method outperforms the state-of-the-art methods for dynamic saliency prediction. Finally, we also introduce the application of dynamic saliency prediction for dynamic video captioning, assisting people with hearing impairments to better entertain videos with only off-screen voices, e.g., documentary films, news videos and sports videos.", "Saliency detection is widely used to extract regions of interest in images for various image processing applications. Recently, many saliency detection models have been proposed for video in uncompressed (pixel) domain. However, video over Internet is always stored in compressed domains, such as MPEG2, H.264, and MPEG4 Visual. In this paper, we propose a novel video saliency detection model based on feature contrast in compressed domain. Four types of features including luminance, color, texture, and motion are extracted from the discrete cosine transform coefficients and motion vectors in video bitstream. The static saliency map of unpredicted frames (I frames) is calculated on the basis of luminance, color, and texture features, while the motion saliency map of predicted frames (P and B frames) is computed by motion feature. A new fusion method is designed to combine the static saliency and motion saliency maps to get the final saliency map for each video frame. Due to the directly derived features in compressed domain, the proposed model can predict the salient regions efficiently for video frames. Experimental results on a public database show superior performance of the proposed video saliency detection model in compressed domain.", "We propose a novel algorithm to detect visual saliency from video signals by combining both spatial and temporal information and statistical uncertainty measures. 
The main novelty of the proposed method is twofold. First, separate spatial and temporal saliency maps are generated, where the computation of temporal saliency incorporates a recent psychological study of human visual speed perception. Second, the spatial and temporal saliency maps are merged into one using a spatiotemporally adaptive entropy-based uncertainty weighting approach. The spatial uncertainty weighing incorporates the characteristics of proximity and continuity of spatial saliency, while the temporal uncertainty weighting takes into account the variations of background motion and local contrast. Experimental results show that the proposed spatiotemporal uncertainty weighting algorithm significantly outperforms state-of-the-art video saliency detection models.", "We present a novel video saliency detection method to support human activity recognition and weakly supervised training of activity detection algorithms. Recent research has emphasized the need for analyzing salient information in videos to minimize dataset bias or to supervise weakly labeled training of activity detectors. In contrast to previous methods we do not rely on training information given by either eye-gaze or annotation data, but propose a fully unsupervised algorithm to find salient regions within videos. In general, we enforce the Gestalt principle of figure-ground segregation for both appearance and motion cues. We introduce an encoding approach that allows for efficient computation of saliency by approximating joint feature distributions. We evaluate our approach on several datasets, including challenging scenarios with cluttered background and camera motion, as well as salient object detection in images. Overall, we demonstrate favorable performance compared to state-of-the-art methods in estimating both ground-truth eye-gaze and activity annotations.", "Visual saliency has been shown to depend on the unpredictability of the visual stimulus given its surround. Various previous works have advocated the equivalence between stimulus saliency and uncompressibility. We propose a direct measure of this quantity, namely the number of bits required by an optimal video compressor to encode a given video patch, and show that features derived from this measure are highly predictive of eye fixations. To account for global saliency effects, these are embedded in a Markov random field model. The resulting saliency measure is shown to achieve state-of-the-art accuracy for the prediction of fixations, at a very low computational cost. Since most modern cameras incorporate video encoders, this paves the way for in-camera saliency estimation, which could be useful in a variety of computer vision applications.", "The purpose of this paper is the detection of salient areas in natural video by using the new deep learning techniques. Salient patches in video frames are predicted first. Then the predicted visual fixation maps are built upon them. We design the deep architecture on the basis of CaffeNet implemented with Caffe toolkit. We show that changing the way of data selection for optimisation of network parameters, we can save computation cost up to @math times. We extend deep learning approaches for saliency prediction in still images with RGB values to specificity of video using the sensitivity of the human visual system to residual motion. Furthermore, we complete primary colour pixel values by contrast features proposed in classical visual attention prediction models. 
The experiments are conducted on two publicly available datasets. The first is IRCCYN video database containing @math videos with an overall amount of @math frames and eye fixations of @math subjects. The second one is HOLLYWOOD2 provided @math movie clips with the eye fixations of @math subjects. On IRCYYN dataset, the accuracy obtained is of @math . On HOLLYWOOD2 dataset, results in prediction of saliency of patches show the improvement up to @math with regard to RGB use only. The resulting accuracy of @math is obtained. The AUC metric in comparison of predicted saliency maps with visual fixation maps shows the increase up to @math on a sample of video clips from this dataset.", "During recent years remarkable progress has been made in visual saliency modeling. Our interest is in video saliency. Since videos are fundamentally different from still images, they are viewed differently by human observers. For example, the time each video frame is observed is a fraction of a second, while a still image can be viewed leisurely. Therefore, video saliency estimation methods should differ substantially from image saliency methods. In this paper we propose a novel method for video saliency estimation, which is inspired by the way people watch videos. We explicitly model the continuity of the video by predicting the saliency map of a given frame, conditioned on the map from the previous frame. Furthermore, accuracy and computation speed are improved by restricting the salient locations to a carefully selected candidate set. We validate our method using two gaze-tracked video datasets and show we outperform the state-of-the-art." ] }
1709.06316
2755981968
In this paper, we propose a novel deep learning based video saliency prediction method, named DeepVS. Specifically, we establish a large-scale eye-tracking database of videos (LEDOV), which includes 32 subjects’ fixations on 538 videos. We find from LEDOV that human attention is more likely to be attracted by objects, particularly the moving objects or the moving parts of objects. Hence, an object-to-motion convolutional neural network (OM-CNN) is developed to predict the intra-frame saliency for DeepVS, which is composed of the objectness and motion subnets. In OM-CNN, cross-net mask and hierarchical feature normalization are proposed to combine the spatial features of the objectness subnet and the temporal features of the motion subnet. We further find from our database that there exists a temporal correlation of human attention with a smooth saliency transition across video frames. We thus propose saliency-structured convolutional long short-term memory (SS-ConvLSTM) network, using the extracted features from OM-CNN as the input. Consequently, the inter-frame saliency maps of a video can be generated, which consider both structured output with center-bias and cross-frame transitions of human attention maps. Finally, the experimental results show that DeepVS advances the state-of-the-art in video saliency prediction.
CRCNS @cite_29 is one of the earliest video eye-tracking databases, established by Itti in 2004. It is still used as a benchmark in recent video saliency prediction works, such as @cite_5 . CRCNS contains 50 videos mainly including outdoor scenes, TV shows and video games. The length of each video ranges from 5.5 to 93.9 seconds, and the frame rate of all videos is 30 frames per second (fps). For each video, 4 to 6 subjects were asked to look at the main actors or actions. Afterward, they were required to describe the main content of the video. Thus, CRCNS is a task-driven eye-tracking database for videos. Later, a new database @cite_59 was established by manually cutting all 50 videos of CRCNS into 523 "clippets" with durations of 1-3 seconds, according to the abrupt cinematic cuts. Another 8 subjects were recruited to view these video clippets, with their eye-tracking data recorded in @cite_59 .
{ "cite_N": [ "@cite_5", "@cite_29", "@cite_59" ], "mid": [ "2118985252", "2130184867", "" ], "abstract": [ "We propose a novel algorithm to detect visual saliency from video signals by combining both spatial and temporal information and statistical uncertainty measures. The main novelty of the proposed method is twofold. First, separate spatial and temporal saliency maps are generated, where the computation of temporal saliency incorporates a recent psychological study of human visual speed perception. Second, the spatial and temporal saliency maps are merged into one using a spatiotemporally adaptive entropy-based uncertainty weighting approach. The spatial uncertainty weighing incorporates the characteristics of proximity and continuity of spatial saliency, while the temporal uncertainty weighting takes into account the variations of background motion and local contrast. Experimental results show that the proposed spatiotemporal uncertainty weighting algorithm significantly outperforms state-of-the-art video saliency detection models.", "We evaluate the applicability of a biologically-motivated algorithm to select visually-salient regions of interest in video streams for multiply-foveated video compression. Regions are selected based on a nonlinear integration of low-level visual cues, mimicking processing in primate occipital, and posterior pariet al cortex. A dynamic foveation filter then blurs every frame, increasingly with distance from salient locations. Sixty-three variants of the algorithm (varying number and shape of virtual foveas, maximum blur, and saliency competition) are evaluated against an outdoor video scene, using MPEG-1 and constant-quality MPEG-4 (DivX) encoding. Additional compression radios of 1.1 to 8.5 are achieved by foveation. Two variants of the algorithm are validated against eye fixations recorded from four to six human observers on a heterogeneous collection of 50 video clips (over 45 000 frames in total). Significantly higher overlap than expected by chance is found between human and algorithmic foveations. With both variants, foveated clips are, on average, approximately half the size of unfoveated clips, for both MPEG-1 and MPEG-4. These results suggest a general-purpose usefulness of the algorithm in improving compression ratios of unconstrained video.", "" ] }
1709.06316
2755981968
In this paper, we propose a novel deep learning based video saliency prediction method, named DeepVS. Specifically, we establish a large-scale eye-tracking database of videos (LEDOV), which includes 32 subjects’ fixations on 538 videos. We find from LEDOV that human attention is more likely to be attracted by objects, particularly the moving objects or the moving parts of objects. Hence, an object-to-motion convolutional neural network (OM-CNN) is developed to predict the intra-frame saliency for DeepVS, which is composed of the objectness and motion subnets. In OM-CNN, cross-net mask and hierarchical feature normalization are proposed to combine the spatial features of the objectness subnet and the temporal features of the motion subnet. We further find from our database that there exists a temporal correlation of human attention with a smooth saliency transition across video frames. We thus propose saliency-structured convolutional long short-term memory (SS-ConvLSTM) network, using the extracted features from OM-CNN as the input. Consequently, the inter-frame saliency maps of a video can be generated, which consider both structured output with center-bias and cross-frame transitions of human attention maps. Finally, the experimental results show that DeepVS advances the state-of-the-art in video saliency prediction.
SFU @cite_42 is a public video database containing eye-tracking data of 12 uncompressed YUV videos, which are frequently used as the standard test set for video compression and processing algorithms. Each video is in the CIF resolution ( @math ) and lasts 3-10 seconds at a frame rate of 30 fps. The eye-tracking data were collected while 15 non-expert subjects freely viewed all 12 videos twice.
{ "cite_N": [ "@cite_42" ], "mid": [ "1967508848" ], "abstract": [ "This correspondence describes a publicly available database of eye-tracking data, collected on a set of standard video sequences that are frequently used in video compression, processing, and transmission simulations. A unique feature of this database is that it contains eye-tracking data for both the first and second viewings of the sequence. We have made available the uncompressed video sequences and the raw eye-tracking data for each sequence, along with different visualizations of the data and a preliminary analysis based on two well-known visual attention models." ] }
1709.06316
2755981968
In this paper, we propose a novel deep learning based video saliency prediction method, named DeepVS. Specifically, we establish a large-scale eye-tracking database of videos (LEDOV), which includes 32 subjects’ fixations on 538 videos. We find from LEDOV that human attention is more likely to be attracted by objects, particularly the moving objects or the moving parts of objects. Hence, an object-to-motion convolutional neural network (OM-CNN) is developed to predict the intra-frame saliency for DeepVS, which is composed of the objectness and motion subnets. In OM-CNN, cross-net mask and hierarchical feature normalization are proposed to combine the spatial features of the objectness subnet and the temporal features of the motion subnet. We further find from our database that there exists a temporal correlation of human attention with a smooth saliency transition across video frames. We thus propose saliency-structured convolutional long short-term memory (SS-ConvLSTM) network, using the extracted features from OM-CNN as the input. Consequently, the inter-frame saliency maps of a video can be generated, which consider both structured output with center-bias and cross-frame transitions of human attention maps. Finally, the experimental results show that DeepVS advances the state-of-the-art in video saliency prediction.
DIEM @cite_48 is another widely used database, designed to evaluate the contributions of different visual features to gaze clustering. It comprises 84 videos sourced from publicly accessible footage, including advertisements, game trailers, movie trailers and news clips. Most of these videos have frequent cinematic cuts. Each video lasts 27-217 seconds at 30 fps. The free-viewing fixations of around 50 subjects were tracked for each video.
{ "cite_N": [ "@cite_48" ], "mid": [ "2119577735" ], "abstract": [ "Where does one attend when viewing dynamic scenes? Research into the factors influencing gaze location during static scene viewing have reported that low-level visual features contribute very little to gaze location especially when opposed by high-level factors such as viewing task. However, the inclusion of transient features such as motion in dynamic scenes may result in a greater influence of visual features on gaze allocation and coordination of gaze across viewers. In the present study, we investigated the contribution of low- to mid-level visual features to gaze location during free-viewing of a large dataset of videos ranging in content and length. Signal detection analysis on visual features and Gaussian Mixture Models for clustering gaze was used to identify the contribution of visual features to gaze location. The results show that mid-level visual features including corners and orientations can distinguish between actual gaze locations and a randomly sampled baseline. However, temporal features such as flicker, motion, and their respective contrasts were the most predictive of gaze location. Additionally, moments in which all viewers’ gaze tightly clustered in the same location could be predicted by motion. Motion and mid-level visual features may influence gaze allocation in dynamic scenes, but it is currently unclear whether this influence is involuntary or due to correlations with higher order factors such as scene semantics." ] }
1709.06316
2755981968
In this paper, we propose a novel deep learning based video saliency prediction method, named DeepVS. Specifically, we establish a large-scale eye-tracking database of videos (LEDOV), which includes 32 subjects’ fixations on 538 videos. We find from LEDOV that human attention is more likely to be attracted by objects, particularly the moving objects or the moving parts of objects. Hence, an object-to-motion convolutional neural network (OM-CNN) is developed to predict the intra-frame saliency for DeepVS, which is composed of the objectness and motion subnets. In OM-CNN, cross-net mask and hierarchical feature normalization are proposed to combine the spatial features of the objectness subnet and the temporal features of the motion subnet. We further find from our database that there exists a temporal correlation of human attention with a smooth saliency transition across video frames. We thus propose saliency-structured convolutional long short-term memory (SS-ConvLSTM) network, using the extracted features from OM-CNN as the input. Consequently, the inter-frame saliency maps of a video can be generated, which consider both structured output with center-bias and cross-frame transitions of human attention maps. Finally, the experimental results show that DeepVS advances the state-of-the-art in video saliency prediction.
As discussed in Section , video saliency prediction may benefit from the recent development of deep learning. Unfortunately, as seen in Table , the existing databases for video saliency prediction lack sufficient eye-tracking data for training DNNs. Although Hollywood @cite_35 has 1857 videos, it mainly focuses on task-driven visual saliency. Besides, the video content of Hollywood is limited, only involving human actions in movies. In fact, a large-scale eye-tracking database of videos should satisfy three criteria: 1) a large number of videos, 2) sufficient subjects, and 3) diverse video content. In this paper, we establish a large-scale eye-tracking database of videos that satisfies all three criteria. The details of our large-scale database are discussed in Section .
{ "cite_N": [ "@cite_35" ], "mid": [ "2071555787" ], "abstract": [ "Systems based on bag-of-words models from image features collected at maxima of sparse interest point operators have been used successfully for both computer visual object and action recognition tasks. While the sparse, interest-point based approach to recognition is not inconsistent with visual processing in biological systems that operate in ‘saccade and fixate’ regimes, the methodology and emphasis in the human and the computer vision communities remains sharply distinct. Here, we make three contributions aiming to bridge this gap. First, we complement existing state-of-the art large scale dynamic computer vision annotated datasets like Hollywood-2 [1] and UCF Sports [2] with human eye movements collected under the ecological constraints of visual action and scene context recognition tasks. To our knowledge these are the first large human eye tracking datasets to be collected and made publicly available for video, vision.imar.ro eyetracking (497,107 frames, each viewed by 19 subjects), unique in terms of their (a) large scale and computer vision relevance, (b) dynamic, video stimuli, (c) task control, as well as free-viewing . Second, we introduce novel dynamic consistency and alignment measures , which underline the remarkable stability of patterns of visual search among subjects. Third, we leverage the significant amount of collected data in order to pursue studies and build automatic, end-to-end trainable computer vision systems based on human eye movements. Our studies not only shed light on the differences between computer vision spatio-temporal interest point image sampling strategies and the human fixations, as well as their impact for visual recognition performance, but also demonstrate that human fixations can be accurately predicted, and when used in an end-to-end automatic system, leveraging some of the advanced computer vision practice, can lead to state of the art results." ] }
1709.06389
2759134725
We propose a new framework for the recognition of online handwritten graphics. Three main features of the framework are its ability to treat symbol and structural level information in an integrated way, its flexibility with respect to different families of graphics, and means to control the tradeoff between recognition effectiveness and computational cost. We model a graphic as a labeled graph generated from a graph grammar. Non-terminal vertices represent subcomponents, terminal vertices represent symbols, and edges represent relations between subcomponents or symbols. We then model the recognition problem as a graph parsing problem: given an input stroke set, we search for a parse tree that represents the best interpretation of the input. Our graph parsing algorithm generates multiple interpretations (consistent with the grammar) and then we extract an optimal interpretation according to a cost function that takes into consideration the likelihood scores of symbols and structures. The parsing algorithm consists in recursively partitioning the stroke set according to structures defined in the grammar and it does not impose constraints present in some previous works (e.g. stroke ordering). By avoiding such constraints and thanks to the powerful representativeness of graphs, our approach can be adapted to the recognition of different graphic notations. We show applications to the recognition of mathematical expressions and flowcharts. Experimentation shows that our method obtains state-of-the-art accuracy in both applications.
In this section, we review some characteristics of the recognition process in previous works, with emphasis on methods for mathematical expression recognition @cite_0 @cite_17 @cite_30 and flowchart recognition @cite_38 @cite_12 @cite_6 .
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_6", "@cite_0", "@cite_12", "@cite_17" ], "mid": [ "2031071334", "2013257774", "2031948510", "", "", "2037121285" ], "abstract": [ "Document recognition and retrieval technologies complement one another, providing improved access to increasingly large document collections. While recognition and retrieval of textual information is fairly mature, with wide-spread availability of optical character recognition and text-based search engines, recognition and retrieval of graphics such as images, figures, tables, diagrams, and mathematical expressions are in comparatively early stages of research. This paper surveys the state of the art in recognition and retrieval of mathematical expressions, organized around four key problems in math retrieval (query construction, normalization, indexing, and relevance feedback), and four key problems in math recognition (detecting expressions, detecting and classifying symbols, analyzing symbol layout, and constructing a representation of meaning). Of special interest is the machine learning problem of jointly optimizing the component algorithms in a math recognition system, and developing effective indexing, retrieval and relevance feedback algorithms for math retrieval. Another important open problem is developing user interfaces that seamlessly integrate recognition and retrieval. Activity in these important research areas is increasing, in part because math notation provides an excellent domain for studying problems common to many document and graphics recognition and retrieval applications, and also because mature applications will likely provide substantial benefits for education, research, and mathematical literacy.", "In order to segment and recognize on-line handwritten flowchart symbols precisely, we propose a method that segments the graphic symbols based on the loop structure and recognize the segmented symbols by using SVMs. In our experiments, low error rate of 3.37 for symbol segmentation and high recognition rate of 97.6 were obtained. We also propose a beautification and editing method for recognized symbols, and implement them to construct a prototype system. We compare an input time for drawing flowcharts between our system and a traditional application using icon-based interface. As a result, the input time on our system was faster than that on traditional one for flowcharts without texts.", "We present our recent model of a diagram recognition engine. It extends our previous work which approaches the structural recognition as an optimization problem of choosing the best subset of symbol candidates. The main improvement is the integration of our own text separator into the pipeline to deal with text blocks occurring in diagrams. Second improvement is splitting the symbol candidates detection into two stages: uniform symbols detection and arrows detection. Text recognition is left for post processing when the diagram structure is already known. Training and testing of the engine was done on a freely available benchmark database of flowcharts. We correctly segmented and recognized 93.0 of the symbols having 55.1 of the diagrams recognized without any error. Considering correct stroke labeling, we achieved the precision of 95.7 . This result is superior to the state-of-the-art method with the precision of 92.4 . 
Additionally, we demonstrate the generality of the proposed method by adapting the system to finite automata domain and evaluating it on own database of such diagrams.", "", "", "Automatic recognition of mathematical ex- pressions is one of the key vehicles in the drive towards transcribing documents in scientific and engineering dis- ciplines into electronic form. This problem typically con- sists of two major stages, namely, symbol recognition and structural analysis. In this survey paper, we will re- view most of the existing work with respect to each of the two major stages of the recognition process. In par- ticular, we try to put emphasis on the similarities and differences between systems. Moreover, some important issues in mathematical expression recognition will be ad- dressed in depth. All these together serve to provide a clear overall picture of how this research area has been developed to date." ] }
1709.06389
2759134725
We propose a new framework for the recognition of online handwritten graphics. Three main features of the framework are its ability to treat symbol and structural level information in an integrated way, its flexibility with respect to different families of graphics, and means to control the tradeoff between recognition effectiveness and computational cost. We model a graphic as a labeled graph generated from a graph grammar. Non-terminal vertices represent subcomponents, terminal vertices represent symbols, and edges represent relations between subcomponents or symbols. We then model the recognition problem as a graph parsing problem: given an input stroke set, we search for a parse tree that represents the best interpretation of the input. Our graph parsing algorithm generates multiple interpretations (consistent with the grammar) and then we extract an optimal interpretation according to a cost function that takes into consideration the likelihood scores of symbols and structures. The parsing algorithm consists in recursively partitioning the stroke set according to structures defined in the grammar and it does not impose constraints present in some previous works (e.g. stroke ordering). By avoiding such constraints and thanks to the powerful representativeness of graphs, our approach can be adapted to the recognition of different graphic notations. We show applications to the recognition of mathematical expressions and flowcharts. Experimentation shows that our method obtains state-of-the-art accuracy in both applications.
Early works related to the recognition of mathematical expressions were predominantly based on a sequential recognition process consisting of the symbol segmentation, symbol identification and structural analysis steps @cite_26 @cite_39 @cite_34 . However, a weakness of sequential methods is that errors in early steps propagate to subsequent steps. For instance, it might be difficult to determine whether two handwritten strokes with shapes ")" and "(", close to each other, form a single symbol "x" or are a closing and an opening parenthesis, respectively. To resolve this type of ambiguity, it may be necessary to examine the relations of the strokes with other nearby symbols, or even with the global structure of the whole expression (a toy numeric illustration of this issue follows the reference entry below). This kind of observation has motivated more recent works to integrate symbol-level and structural-level interpretations into a single process. Most of them are based on parsing methods, as described below.
{ "cite_N": [ "@cite_26", "@cite_34", "@cite_39" ], "mid": [ "1518661169", "2170570264", "" ], "abstract": [ "In recent years, the recognition of handwritten mathematical expressions has recieved an increasing amount of attention in pattern recognition research. The diversity of approaches to the problem and the lack of a commercially viable system, however, indicate that there is still much research to be done in this area. In this thesis, I will describe an on-line approach for converting a handwritten mathematical expression into an equivalent expression in a typesetting command language such as TEX or MathML, as well as a feedback-oriented user interface which can make errors more tolerable to the end user since they can be quickly corrected. The three primary components of this system are a method for classifying isolated handwritten symbols, an algorithm for partitioning an expression into symbols, and an algorithm for converting a two-dimensional arrangements of symbols into a typeset expression. For symbol classification, a Gaussian classifier is used to rank order the interpretations of a set of strokes as a single symbol. To partition an expression, the values generated by the symbol classifier are used to perform a constrained search of possible partitions for the one with the minimum summed cost. Finally, the expression is parsed using a simple geometric grammar. Thesis Supervisor: Paul A. Viola Title: Associate Professor", "We describe a robust and efficient system for recognizing typeset and handwritten mathematical notation. From a list of symbols with bounding boxes the system analyzes an expression in three successive passes. The Layout Pass constructs a Baseline Structure Tree (BST) describing the two-dimensional arrangement of input symbols. Reading order and operator dominance are used to allow efficient recognition of symbol layout even when symbols deviate greatly from their ideal positions. Next, the Lexical Pass produces a Lexed BST from the initial BST by grouping tokens comprised of multiple input symbols; these include decimal numbers, function names, and symbols comprised of nonoverlapping primitives such as \"=\". The Lexical Pass also labels vertical structures such as fractions and accents. The Lexed BST is translated into L sup A T sub E X. Additional processing, necessary for producing output for symbolic algebra systems, is carried out in the Expression Analysis Pass. The Lexed BST is translated into an Operator Tree, which describes the order and scope of operations in the input expression. The tree manipulations used in each pass are represented compactly using tree transformations. The compiler-like architecture of the system allows robust handling of unexpected input, increases the scalability of the system, and provides the groundwork for handling dialects of mathematical notation.", "" ] }
1709.06389
2759134725
We propose a new framework for the recognition of online handwritten graphics. Three main features of the framework are its ability to treat symbol and structural level information in an integrated way, its flexibility with respect to different families of graphics, and means to control the tradeoff between recognition effectiveness and computational cost. We model a graphic as a labeled graph generated from a graph grammar. Non-terminal vertices represent subcomponents, terminal vertices represent symbols, and edges represent relations between subcomponents or symbols. We then model the recognition problem as a graph parsing problem: given an input stroke set, we search for a parse tree that represents the best interpretation of the input. Our graph parsing algorithm generates multiple interpretations (consistent with the grammar) and then we extract an optimal interpretation according to a cost function that takes into consideration the likelihood scores of symbols and structures. The parsing algorithm consists in recursively partitioning the stroke set according to structures defined in the grammar and it does not impose constraints present in some previous works (e.g. stroke ordering). By avoiding such constraints and thanks to the powerful representativeness of graphs, our approach can be adapted to the recognition of different graphic notations. We show applications to the recognition of mathematical expressions and flowcharts. Experimentation shows that our method obtains state-of-the-art accuracy in both applications.
To cope with the structural variance of diagrams, some approaches place strong constraints on the input, such as requiring all symbols to be drawn with a single stroke @cite_11 , or loop-like symbols to be written with consecutive strokes @cite_38 . With respect to symbol recognition, the detection of text (or text boxes) and arrow symbols is regarded as more difficult, since these symbols do not have a fixed shape. For instance, Carton et al. @cite_12 first detect box symbols and then select the best interpretations using a deformation metric; text symbols are recognized only after box symbols. Bresler et al. @cite_6 also first recognize possible box and arrow symbols, leaving text recognition as a last step. After symbol candidates are identified, the best symbol combination is selected through a max-sum optimization process (a brute-force sketch of this selection step follows the reference entry below).
{ "cite_N": [ "@cite_38", "@cite_6", "@cite_12", "@cite_11" ], "mid": [ "2013257774", "2031948510", "", "1603389798" ], "abstract": [ "In order to segment and recognize on-line handwritten flowchart symbols precisely, we propose a method that segments the graphic symbols based on the loop structure and recognize the segmented symbols by using SVMs. In our experiments, low error rate of 3.37 for symbol segmentation and high recognition rate of 97.6 were obtained. We also propose a beautification and editing method for recognized symbols, and implement them to construct a prototype system. We compare an input time for drawing flowcharts between our system and a traditional application using icon-based interface. As a result, the input time on our system was faster than that on traditional one for flowcharts without texts.", "We present our recent model of a diagram recognition engine. It extends our previous work which approaches the structural recognition as an optimization problem of choosing the best subset of symbol candidates. The main improvement is the integration of our own text separator into the pipeline to deal with text blocks occurring in diagrams. Second improvement is splitting the symbol candidates detection into two stages: uniform symbols detection and arrows detection. Text recognition is left for post processing when the diagram structure is already known. Training and testing of the engine was done on a freely available benchmark database of flowcharts. We correctly segmented and recognized 93.0 of the symbols having 55.1 of the diagrams recognized without any error. Considering correct stroke labeling, we achieved the precision of 95.7 . This result is superior to the state-of-the-art method with the precision of 92.4 . Additionally, we demonstrate the generality of the proposed method by adapting the system to finite automata domain and evaluating it on own database of such diagrams.", "", "The electronic white board and the tablet PC are bringing new technologies to modern education. This paper presents a pen-based flowchart recognition system for programming teaching, which uses hybrid SVM-HMM algorithm for sketch recognition. In this algorithm, ICA is used to reduce the dimension of features, a set of fuzzy SVMs are used as preliminary feature classifiers to produce fix length feature vector, which acts as a probability evaluator in the hidden states of Hidden Markov Models, and HMMs are employed as finally classifiers to recognize the unknown pattern. Experiment results show the hybrid algorithm has good learning and recognition ability. And based on this algorithm, an intelligent whiteboard system for programming teaching is designed to identify the sketches into the programming flowchart, and finally converts it into C language programs. User's evaluation shows it is natural for the teachers and the students with a flexible and effective interactive teaching pattern. Therefore, such system brings a new programming teaching patterns and help students to stride the obstacle between the flowchart and the programming language. Students can learn the abstract programming idea and the concrete coding skills effectively and efficiently by the visual comparative learning assisted by the intelligent whiteboard system." ] }
1709.06389
2759134725
We propose a new framework for the recognition of online handwritten graphics. Three main features of the framework are its ability to treat symbol and structural level information in an integrated way, its flexibility with respect to different families of graphics, and means to control the tradeoff between recognition effectiveness and computational cost. We model a graphic as a labeled graph generated from a graph grammar. Non-terminal vertices represent subcomponents, terminal vertices represent symbols, and edges represent relations between subcomponents or symbols. We then model the recognition problem as a graph parsing problem: given an input stroke set, we search for a parse tree that represents the best interpretation of the input. Our graph parsing algorithm generates multiple interpretations (consistent with the grammar) and then we extract an optimal interpretation according to a cost function that takes into consideration the likelihood scores of symbols and structures. The parsing algorithm consists in recursively partitioning the stroke set according to structures defined in the grammar and it does not impose constraints present in some previous works (e.g. stroke ordering). By avoiding such constraints and thanks to the powerful representativeness of graphs, our approach can be adapted to the recognition of different graphic notations. We show applications to the recognition of mathematical expressions and flowcharts. Experimentation shows that our method obtains state-of-the-art accuracy in both applications.
In the method proposed in this work, instead of a CYK-based algorithm (which assumes a grammar in CNF), we define a graph grammar and use a top-down parsing algorithm similar to that of @cite_9 , but without assuming any ordering of the input strokes. To avoid context-aware algorithms during parsing, we consider stroke partitions drawn from a previously built hypotheses graph (see ) to match the right-hand sides of the rules (a minimal recursive-partitioning sketch follows the reference entry below). By doing this, we decouple the parsing algorithm from the particularities of the family of graphics and achieve independence from the target notation. In addition, target domain knowledge can still be fully exploited in the graph grammar definition and in the construction of the hypotheses graph. This characteristic makes the proposed method general enough to be applied to the recognition of a variety of graphic notations.
{ "cite_N": [ "@cite_9" ], "mid": [ "1978799108" ], "abstract": [ "We present a new approach for parsing two-dimensional input using relational grammars and fuzzy sets. A fast, incremental parsing algorithm is developed, motivated by the two-dimensional structure of written mathematics. The approach reports all identifiable parses of the input. The parses are represented as a fuzzy set, in which the membership grade of a parse measures the similarity between it and the handwritten input. To identify and report parses efficiently, we adapt and apply existing techniques such as rectangular partitions and shared parse forests, and introduce new ideas such as relational classes and interchangeability. We also present a correction mechanism that allows users to navigate parse results and choose the correct interpretation in case of recognition errors or ambiguity. Such corrections are incorporated into subsequent incremental recognition results. Finally, we include two empirical evaluations of our recognizer. One uses a novel user-oriented correction count metric, while the other replicates the CROHME 2011 math recognition contest. Both evaluations demonstrate the effectiveness of our proposed approach." ] }
1709.06201
2754745072
In recent years, a number of artificial intelligent services have been developed such as defect detection system or diagnosis system for customer services. Unfortunately, the core in these services is a black-box in which human cannot understand the underlying decision making logic, even though the inspection of the logic is crucial before launching a commercial service. Our goal in this paper is to propose an analytic method of a model explanation that is applicable to general classification models. To this end, we introduce the concept of a contribution matrix and an explanation embedding in a constraint space by using a matrix factorization. We extract a rule-like model explanation from the contribution matrix with the help of the nonnegative matrix factorization. To validate our method, the experiment results provide with open datasets as well as an industry dataset of a LTE network diagnosis and the results show our method extracts reasonable explanations.
Understanding a model is one of the key issues in the machine learning field. However, understanding a black-box model is generally difficult because of the variety of algorithms and their dependency on data characteristics. Decision trees and support vector machines have therefore remained popular, and their variants have been published over decades, because these algorithms have intuitive structures that can be interpreted @cite_8 . Unfortunately, they do not guarantee sufficiently good performance in practice, and we often need to apply advanced techniques such as ensembles or boosting to improve performance, which turns the model into a black box, much like the deep neural networks that are the most prominent technique nowadays.
{ "cite_N": [ "@cite_8" ], "mid": [ "1570448133" ], "abstract": [ "Data Mining: Practical Machine Learning Tools and Techniques offers a thorough grounding in machine learning concepts as well as practical advice on applying machine learning tools and techniques in real-world data mining situations. This highly anticipated third edition of the most acclaimed work on data mining and machine learning will teach you everything you need to know about preparing inputs, interpreting outputs, evaluating results, and the algorithmic methods at the heart of successful data mining. Thorough updates reflect the technical changes and modernizations that have taken place in the field since the last edition, including new material on Data Transformations, Ensemble Learning, Massive Data Sets, Multi-instance Learning, plus a new version of the popular Weka machine learning software developed by the authors. Witten, Frank, and Hall include both tried-and-true techniques of today as well as methods at the leading edge of contemporary research. *Provides a thorough grounding in machine learning concepts as well as practical advice on applying the tools and techniques to your data mining projects *Offers concrete tips and techniques for performance improvement that work by transforming the input or output in machine learning methods *Includes downloadable Weka software toolkit, a collection of machine learning algorithms for data mining tasks-in an updated, interactive interface. Algorithms in toolkit cover: data pre-processing, classification, regression, clustering, association rules, visualization" ] }
1709.06201
2754745072
In recent years, a number of artificial intelligent services have been developed such as defect detection system or diagnosis system for customer services. Unfortunately, the core in these services is a black-box in which human cannot understand the underlying decision making logic, even though the inspection of the logic is crucial before launching a commercial service. Our goal in this paper is to propose an analytic method of a model explanation that is applicable to general classification models. To this end, we introduce the concept of a contribution matrix and an explanation embedding in a constraint space by using a matrix factorization. We extract a rule-like model explanation from the contribution matrix with the help of the nonnegative matrix factorization. To validate our method, the experiment results provide with open datasets as well as an industry dataset of a LTE network diagnosis and the results show our method extracts reasonable explanations.
Visualization techniques such as the saliency map have often been used to understand classification results in the image classification problem. To visualize the evidence behind an image classification model, @cite_0 measures the sensitivity of the classification and @cite_12 analyzes the prediction difference when a small area of the input is marginalized out. These approaches are specific to image classifiers and are not easily extended to other applications.
{ "cite_N": [ "@cite_0", "@cite_12" ], "mid": [ "2962851944", "2590082389" ], "abstract": [ "This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [5], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [13].", "This article presents the prediction difference analysis method for visualizing the response of a deep neural network to a specific input. When classifying images, the method highlights areas in a given input image that provide evidence for or against a certain class. It overcomes several shortcoming of previous methods and provides great additional insight into the decision making process of classifiers. Making neural network decisions interpretable through visualization is important both to improve models and to accelerate the adoption of black-box classifiers in application areas such as medicine. We illustrate the method in experiments on natural images (ImageNet data), as well as medical images (MRI brain scans)." ] }
1709.06201
2754745072
In recent years, a number of artificial intelligent services have been developed such as defect detection system or diagnosis system for customer services. Unfortunately, the core in these services is a black-box in which human cannot understand the underlying decision making logic, even though the inspection of the logic is crucial before launching a commercial service. Our goal in this paper is to propose an analytic method of a model explanation that is applicable to general classification models. To this end, we introduce the concept of a contribution matrix and an explanation embedding in a constraint space by using a matrix factorization. We extract a rule-like model explanation from the contribution matrix with the help of the nonnegative matrix factorization. To validate our method, the experiment results provide with open datasets as well as an industry dataset of a LTE network diagnosis and the results show our method extracts reasonable explanations.
A model-level explanation necessarily requires understanding the underlying decision-making constraints of a given classification model. A simplification method for Bayesian network models was proposed in @cite_10 . Rule extraction methods have been proposed in @cite_7 , but they are restricted to support vector machines. The reasoning process of a black-box model has also been approached by extracting a decision tree that approximates the classification model @cite_13 .
{ "cite_N": [ "@cite_13", "@cite_10", "@cite_7" ], "mid": [ "2617799811", "2130485404", "152751697" ], "abstract": [ "Interpretability has become incredibly important as machine learning is increasingly used to inform consequential decisions. We propose to construct global explanations of complex, blackbox models in the form of a decision tree approximating the original model---as long as the decision tree is a good approximation, then it mirrors the computation performed by the blackbox model. We devise a novel algorithm for extracting decision tree explanations that actively samples new training points to avoid overfitting. We evaluate our algorithm on a random forest to predict diabetes risk and a learned controller for cart-pole. Compared to several baselines, our decision trees are both substantially more accurate and equally or more interpretable based on a user study. Finally, we describe several insights provided by our interpretations, including a causal issue validated by a physician.", "We present the Bayesian Case Model (BCM), a general framework for Bayesian case-based reasoning (CBR) and prototype classification and clustering. BCM brings the intuitive power of CBR to a Bayesian generative framework. The BCM learns prototypes, the \"quintessential\" observations that best represent clusters in a dataset, by performing joint inference on cluster labels, prototypes and important features. Simultaneously, BCM pursues sparsity by learning subspaces, the sets of features that play important roles in the characterization of the prototypes. The prototype and subspace representation provides quantitative benefits in interpretability while preserving classification accuracy. Human subject experiments verify statistically significant improvements to participants' understanding when using explanations produced by BCM, compared to those given by prior art.", "Support vector machines (SVMs) are learning systems based on the statistical learning theory, which are exhibiting good generalization ability on real data sets. Nevertheless, a possible limitation of SVM is that they generate black box models. In this work, a procedure for rule extraction from support vector machines is proposed: the SVM+Prototypes method. This method allows to give explanation ability to SVM. Once determined the decision function by means of a SVM, a clustering algorithm is used to determine prototype vectors for each class. These points are combined with the support vectors using geometric methods to define ellipsoids in the input space, which are later transfers to if-then rules. By using the support vectors we can establish the limits of these regions." ] }
1709.06136
2755402024
In this paper, we present a deep reinforcement learning (RL) framework for iterative dialog policy optimization in end-to-end task-oriented dialog systems. Popular approaches in learning dialog policy with RL include letting a dialog agent to learn against a user simulator. Building a reliable user simulator, however, is not trivial, often as difficult as building a good dialog agent. We address this challenge by jointly optimizing the dialog agent and the user simulator with deep RL by simulating dialogs between the two agents. We first bootstrap a basic dialog agent and a basic user simulator by learning directly from dialog corpora with supervised training. We then improve them further by letting the two agents to conduct task-oriented dialogs and iteratively optimizing their policies with deep RL. Both the dialog agent and the user simulator are designed with neural network models that can be trained end-to-end. Our experiment results show that the proposed method leads to promising improvements on task success rate and total task reward comparing to supervised training and single-agent RL training baseline models.
In much of the recent work on using RL for dialog policy learning @cite_13 @cite_27 @cite_9 , hand-designed user simulators are used to interact with the dialog agent. Designing a well-performing user simulator is not easy. A too-basic user simulator, as in @cite_13 , may only be able to produce short and simple utterances with limited variety, making the final system lack robustness against noise in real-world user inputs. Advanced user simulators @cite_22 @cite_40 may demonstrate coherent user behavior, but they typically require designing complex rules with domain expertise. We address this challenge with a hybrid learning method, where we first bootstrap a basic functioning user simulator with SL on human-annotated corpora and then continuously improve it together with the dialog agent through dialog simulations with deep RL.
{ "cite_N": [ "@cite_22", "@cite_9", "@cite_40", "@cite_27", "@cite_13" ], "mid": [ "160067033", "2949252816", "2571927164", "2412715517", "2412899141" ], "abstract": [ "", "One of the major drawbacks of modularized task-completion dialogue systems is that each module is trained individually, which presents several challenges. For example, downstream modules are affected by earlier modules, and the performance of the entire system is not robust to the accumulated errors. This paper presents a novel end-to-end learning framework for task-completion dialogue systems to tackle such issues. Our neural dialogue system can directly interact with a structured database to assist users in accessing information and accomplishing certain tasks. The reinforcement learning based dialogue manager offers robust capabilities to handle noises caused by other components of the dialogue system. Our experiments in a movie-ticket booking domain show that our end-to-end system not only outperforms modularized dialogue system baselines for both objective and subjective evaluation, but also is robust to noises as demonstrated by several systematic experiments with different error granularity and rates specific to the language understanding module.", "Despite widespread interests in reinforcement-learning for task-oriented dialogue systems, several obstacles can frustrate research and development progress. First, reinforcement learners typically require interaction with the environment, so conventional dialogue corpora cannot be used directly. Second, each task presents specific challenges, requiring separate corpus of task-specific annotated data. Third, collecting and annotating human-machine or human-human conversations for task-oriented dialogues requires extensive domain knowledge. Because building an appropriate dataset can be both financially costly and time-consuming, one popular approach is to build a user simulator based upon a corpus of example dialogues. Then, one can train reinforcement learning agents in an online fashion as they interact with the simulator. Dialogue agents trained on these simulators can serve as an effective starting point. Once agents master the simulator, they may be deployed in a real environment to interact with humans, and continue to be trained online. To ease empirical algorithmic comparisons in dialogues, this paper introduces a new, publicly available simulation framework, where our simulator, designed for the movie-booking domain, leverages both rules and collected data. The simulator supports two tasks: movie ticket booking and movie seeking. Finally, we demonstrate several agents and detail the procedure to add and test your own agent in the proposed framework.", "This paper presents a model for end-to-end learning of task-oriented dialog systems. The main component of the model is a recurrent neural network (an LSTM), which maps from raw dialog history directly to a distribution over system actions. The LSTM automatically infers a representation of dialog history, which relieves the system developer of much of the manual feature engineering of dialog state. In addition, the developer can provide software that expresses business rules and provides access to programmatic APIs, enabling the LSTM to take actions in the real world on behalf of the user. 
The LSTM can be optimized using supervised learning (SL), where a domain expert provides example dialogs which the LSTM should imitate; or using reinforcement learning (RL), where the system improves by interacting directly with end users. Experiments show that SL and RL are complementary: SL alone can derive a reasonable initial policy from a small number of training dialogs; and starting RL optimization with a policy trained with SL substantially accelerates the learning rate of RL.", "This paper presents an end-to-end framework for task-oriented dialog systems using a variant of Deep Recurrent Q-Networks (DRQN). The model is able to interface with a relational database and jointly learn policies for both language understanding and dialog strategy. Moreover, we propose a hybrid algorithm that combines the strength of reinforcement learning and supervised learning to achieve faster learning speed. We evaluated the proposed model on a 20 Question Game conversational game simulator. Results show that the proposed method outperforms the modular-based baseline and learns a distributed representation of the latent dialog state." ] }
1709.06136
2755402024
In this paper, we present a deep reinforcement learning (RL) framework for iterative dialog policy optimization in end-to-end task-oriented dialog systems. Popular approaches in learning dialog policy with RL include letting a dialog agent to learn against a user simulator. Building a reliable user simulator, however, is not trivial, often as difficult as building a good dialog agent. We address this challenge by jointly optimizing the dialog agent and the user simulator with deep RL by simulating dialogs between the two agents. We first bootstrap a basic dialog agent and a basic user simulator by learning directly from dialog corpora with supervised training. We then improve them further by letting the two agents to conduct task-oriented dialogs and iteratively optimizing their policies with deep RL. Both the dialog agent and the user simulator are designed with neural network models that can be trained end-to-end. Our experiment results show that the proposed method leads to promising improvements on task success rate and total task reward comparing to supervised training and single-agent RL training baseline models.
Jointly optimizing policies for the dialog agent and the user simulator with RL has also been studied in the literature. @cite_39 proposed a co-adaptation framework for dialog systems by jointly optimizing the policies of multiple agents. @cite_25 discussed applying multi-agent RL for policy learning in a resource allocation negotiation scenario. @cite_11 modeled non-cooperative task dialog as a stochastic game and jointly learned the strategies of both agents. Compared to this previous work, our proposed framework focuses on task-oriented dialogs where the user and the agent collaborate to achieve the user's goal. More importantly, we work towards building end-to-end models for task-oriented dialogs that can handle noise and ambiguity in natural language understanding and belief tracking, which is not taken into account in previous work.
{ "cite_N": [ "@cite_11", "@cite_25", "@cite_39" ], "mid": [ "2252140739", "2152342063", "311892248" ], "abstract": [ "In this paper, an original framework to model human-machine spoken dialogues is proposed to deal with co-adaptation between users and Spoken Dialogue Systems in non-cooperative tasks. The conversation is modeled as a Stochastic Game: both the user and the system have their own preferences but have to come up with an agreement to solve a non-cooperative task. They are jointly trained so the Dialogue Manager learns the optimal strategy against the best possible user. Results obtained by simulation show that non-trivial strategies are learned and that this framework is suitable for dialogue modeling.", "We use single-agent and multi-agent Reinforcement Learning (RL) for learning dialogue policies in a resource allocation negotiation scenario. Two agents learn concurrently by interacting with each other without any need for simulated users (SUs) to train against or corpora to learn from. In particular, we compare the Qlearning, Policy Hill-Climbing (PHC) and Win or Learn Fast Policy Hill-Climbing (PHC-WoLF) algorithms, varying the scenario complexity (state space size), the number of training episodes, the learning rate, and the exploration rate. Our results show that generally Q-learning fails to converge whereas PHC and PHC-WoLF always converge and perform similarly. We also show that very high gradually decreasing exploration rates are required for convergence. We conclude that multiagent RL of dialogue policies is a promising alternative to using single-agent RL and SUs or learning directly from corpora.", "Spoken dialogue systems are man-machine interfaces which use speech as the medium of interaction. In recent years, dialogue optimization using reinforcement learning has evolved to be a state-of-the-art technique. The primary focus of research in the dialogue domain is to learn some optimal policy with regard to the task description (reward function) and the user simulation being employed. However, in case of human-human interaction, the parties involved in the dialogue conversation mutually evolve over the period of interaction. This very ability of humans to coadapt attributes largely towards increasing the naturalness of the dialogue. This paper outlines a novel framework for coadaptation in spoken dialogue systems, where the dialogue manager and user simulation evolve over a period of time; they incrementally and mutually optimize their respective behaviors." ] }
1709.06126
2759269552
Motivated by the Gestalt pattern theory, and the Winograd Challenge for language understanding, we design synthetic experiments to investigate a deep learning algorithm's ability to infer simple (at least for human) visual concepts, such as symmetry, from examples. A visual concept is represented by randomly generated, positive as well as negative, example images. We then test the ability and speed of algorithms (and humans) to learn the concept from these images. The training and testing are performed progressively in multiple rounds, with each subsequent round deliberately designed to be more complex and confusing than the previous round(s), especially if the concept was not grasped by the learner. However, if the concept was understood, all the deliberate tests would become trivially easy. Our experiments show that humans can often infer a semantic concept quickly after looking at only a very small number of examples (this is often referred to as an "aha moment": a moment of sudden realization), and performs perfectly during all testing rounds (except for careless mistakes). On the contrary, deep convolutional neural networks (DCNN) could approximate some concepts statistically, but only after seeing many (x10^4) more examples. And it will still make obvious mistakes, especially during deliberate testing rounds or on samples outside the training distributions. This signals a lack of true "understanding", or a failure to reach the right "formula" for the semantics. We did find that some concepts are easier for DCNN than others. For example, simple "counting" is more learnable than "symmetry", while "uniformity" or "conformance" are much more difficult for DCNN to learn. To conclude, we propose an "Aha Challenge" for visual perception, calling for focused and quantitative research on Gestalt-style machine intelligence using limited training examples.
There have been some attempts to analyze the complexity and learning capacity of artificial neural networks over the last decades @cite_44 @cite_27 . Recently, Basu et al. @cite_8 derived upper bounds on the VC dimension of CNNs for texture classification tasks. Szegedy et al. @cite_12 reported counter-intuitive properties of neural networks and found adversarial examples with hardly perceptible perturbations that could mislead the networks. Since then, many more successful attacks on deep learning have been reported @cite_30 @cite_35 @cite_36 .
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_8", "@cite_36", "@cite_44", "@cite_27", "@cite_12" ], "mid": [ "2274565976", "2572659264", "", "2949103145", "2068777106", "", "1673923490" ], "abstract": [ "Advances in deep learning have led to the broad adoption of Deep Neural Networks (DNNs) to a range of important machine learning problems, e.g., guiding autonomous vehicles, speech recognition, malware detection. Yet, machine learning models, including DNNs, were shown to be vulnerable to adversarial samples-subtly (and often humanly indistinguishably) modified malicious inputs crafted to compromise the integrity of their outputs. Adversarial examples thus enable adversaries to manipulate system behaviors. Potential attacks include attempts to control the behavior of vehicles, have spam content identified as legitimate content, or have malware identified as legitimate software. Adversarial examples are known to transfer from one model to another, even if the second model has a different architecture or was trained on a different set. We introduce the first practical demonstration that this cross-model transfer phenomenon enables attackers to control a remotely hosted DNN with no access to the model, its parameters, or its training data. In our demonstration, we only assume that the adversary can observe outputs from the target DNN given inputs chosen by the adversary. We introduce the attack strategy of fitting a substitute model to the input-output pairs in this manner, then crafting adversarial examples based on this auxiliary model. We evaluate the approach on existing DNN datasets and real-world settings. In one experiment, we force a DNN supported by MetaMind (one of the online APIs for DNN classifiers) to mis-classify inputs at a rate of 84.24 . We conclude with experiments exploring why adversarial samples transfer between DNNs, and a discussion on the applicability of our attack when targeting machine learning algorithms distinct from DNNs.", "Deep learning classifiers are known to be inherently vulnerable to manipulation by intentionally perturbed inputs, named adversarial examples. In this work, we establish that reinforcement learning techniques based on Deep Q-Networks (DQNs) are also vulnerable to adversarial input perturbations, and verify the transferability of adversarial examples across different DQN models. Furthermore, we present a novel class of attacks based on this vulnerability that enable policy manipulation and induction in the learning process of DQNs. We propose an attack mechanism that exploits the transferability of adversarial examples to implement policy induction attacks on DQNs, and demonstrate its efficacy and impact through experimental study of a game-learning scenario.", "", "Machine learning classifiers are known to be vulnerable to inputs maliciously constructed by adversaries to force misclassification. Such adversarial examples have been extensively studied in the context of computer vision applications. In this work, we show adversarial attacks are also effective when targeting neural network policies in reinforcement learning. Specifically, we show existing adversarial example crafting techniques can be used to significantly degrade test-time performance of trained policies. Our threat model considers adversaries capable of introducing small perturbations to the raw input of the policy. We characterize the degree of vulnerability across tasks and training algorithms, for a subclass of adversarial-example attacks in white-box and black-box settings. 
Regardless of the learned task or training algorithm, we observe a significant drop in performance, even with small adversarial perturbations that do not interfere with human perception. Videos are available at this http URL.", "High-order neural networks have been shown to have impressive computational, storage, and learning capabilities. This performance is because the order or structure of a high-order neural network can be tailored to the order or structure of a problem. Thus, a neural network designed for a particular class of problems becomes specialized but also very efficient in solving those problems. Furthermore, a priori knowledge, such as geometric invariances, can be encoded in high-order networks. Because this knowledge does not have to be learned, these networks are very efficient in solving problems that utilize this knowledge.", "", "Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains of the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extend. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input." ] }
1709.06126
2759269552
Motivated by the Gestalt pattern theory, and the Winograd Challenge for language understanding, we design synthetic experiments to investigate a deep learning algorithm's ability to infer simple (at least for human) visual concepts, such as symmetry, from examples. A visual concept is represented by randomly generated, positive as well as negative, example images. We then test the ability and speed of algorithms (and humans) to learn the concept from these images. The training and testing are performed progressively in multiple rounds, with each subsequent round deliberately designed to be more complex and confusing than the previous round(s), especially if the concept was not grasped by the learner. However, if the concept was understood, all the deliberate tests would become trivially easy. Our experiments show that humans can often infer a semantic concept quickly after looking at only a very small number of examples (this is often referred to as an "aha moment": a moment of sudden realization), and performs perfectly during all testing rounds (except for careless mistakes). On the contrary, deep convolutional neural networks (DCNN) could approximate some concepts statistically, but only after seeing many (x10^4) more examples. And it will still make obvious mistakes, especially during deliberate testing rounds or on samples outside the training distributions. This signals a lack of true "understanding", or a failure to reach the right "formula" for the semantics. We did find that some concepts are easier for DCNN than others. For example, simple "counting" is more learnable than "symmetry", while "uniformity" or "conformance" are much more difficult for DCNN to learn. To conclude, we propose an "Aha Challenge" for visual perception, calling for focused and quantitative research on Gestalt-style machine intelligence using limited training examples.
Deep networks' vulnerability to adversarial examples has led to active research on defense mechanisms. Goodfellow et al. @cite_31 proposed adversarial training, and Hinton et al. @cite_6 proposed distilling the knowledge of an ensemble via model compression. Goodfellow et al. @cite_9 also proposed a zero-sum game framework for estimating generative models via an adversarial process, namely Generative Adversarial Nets (GAN). A GAN balances an adversarial data generator and a discriminator during training, and as a result the trained discriminator is more tolerant to adversarial examples.
{ "cite_N": [ "@cite_9", "@cite_31", "@cite_6" ], "mid": [ "2099471712", "1945616565", "1821462560" ], "abstract": [ "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.", "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel." ] }
1709.06126
2759269552
Motivated by the Gestalt pattern theory, and the Winograd Challenge for language understanding, we design synthetic experiments to investigate a deep learning algorithm's ability to infer simple (at least for human) visual concepts, such as symmetry, from examples. A visual concept is represented by randomly generated, positive as well as negative, example images. We then test the ability and speed of algorithms (and humans) to learn the concept from these images. The training and testing are performed progressively in multiple rounds, with each subsequent round deliberately designed to be more complex and confusing than the previous round(s), especially if the concept was not grasped by the learner. However, if the concept was understood, all the deliberate tests would become trivially easy. Our experiments show that humans can often infer a semantic concept quickly after looking at only a very small number of examples (this is often referred to as an "aha moment": a moment of sudden realization), and performs perfectly during all testing rounds (except for careless mistakes). On the contrary, deep convolutional neural networks (DCNN) could approximate some concepts statistically, but only after seeing many (x10^4) more examples. And it will still make obvious mistakes, especially during deliberate testing rounds or on samples outside the training distributions. This signals a lack of true "understanding", or a failure to reach the right "formula" for the semantics. We did find that some concepts are easier for DCNN than others. For example, simple "counting" is more learnable than "symmetry", while "uniformity" or "conformance" are much more difficult for DCNN to learn. To conclude, we propose an "Aha Challenge" for visual perception, calling for focused and quantitative research on Gestalt-style machine intelligence using limited training examples.
Few-shot learning @cite_42 @cite_38 @cite_17 , one-shot learning @cite_26 , and even zero-shot learning @cite_43 @cite_21 try to adapt a classifier to accommodate new classes not seen in training, given only a few examples, one example, or no example at all, respectively. The goal is to transfer learned knowledge and make the model generalizable to new classes or tasks. However, the new patterns tested in these papers are mostly analogous or homologous to the learned patterns. They have not tested the kind of semantic Gestalt visual concepts studied here, which are more diverse and more challenging for machines, yet mostly trivially easy for humans. Nevertheless, a zero-shot learning capability of DCNNs was observed occasionally in some rounds within our experiments; see, for example, the near-100% results in those rounds. Also related were earlier works on heuristic programs for solving visual-analogy IQ tests @cite_24 and on explicit modeling of higher-level visual concepts based on low-level textons @cite_14 @cite_37 . We approach these topics from a different angle, using a classification problem to implicitly embed the concepts, and focus on testing the limit of end-to-end learning capacity of machines.
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_26", "@cite_14", "@cite_42", "@cite_21", "@cite_24", "@cite_43", "@cite_17" ], "mid": [ "2753160622", "2137454801", "", "2128057924", "2950537964", "", "1994565286", "", "2194321275" ], "abstract": [ "Though deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a model has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity models requires many iterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network in the few-shot regime. The parametrization of our model allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner network that allows for quick convergence of training. We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning.", "In this paper, we present a compositional boosting algorithm for detecting and recognizing 17 common image structures in low-middle level vision tasks. These structures, called \"graphlets\", are the most frequently occurring primitives, junctions and composite junctions in natural images, and are arranged in a 3-layer And-Or graph representation. In this hierarchic model, larger graphlets are decomposed (in And-nodes) into smaller graphlets in multiple alternative ways (at Or-nodes), and parts are shared and re-used between graphlets. Then we present a compositional boosting algorithm for computing the 17 graphlets categories collectively in the Bayesian framework. The algorithm runs recursively for each node A in the And-Or graph and iterates between two steps -bottom-up proposal and top-down validation. The bottom-up step includes two types of boosting methods, (i) Detecting instances of A (often in low resolutions) using Adaboosting method through a sequence of tests (weak classifiers) image feature, (ii) Proposing instances of A (often in high resolution) by binding existing children nodes of A through a sequence of compatibility tests on their attributes (e.g angles, relative size etc). The Adaboosting and binding methods generate a number of candidates for node A which are verified by a top-down process in a way similar to Data-Driven Markov Chain Monte Carlo [18]. Both the Adaboosting and binding methods are trained off-line for each graphlet category, and the compositional nature of the model means the algorithm is recursive and can be learned from a small training set. We apply this algorithm to a wide range of indoor and outdoor images with satisfactory results.", "", "Textons refer to fundamental micro-structures in natural images (and videos) and are considered as the atoms of pre-attentive human visual perception (Julesz, 1981). Unfortunately, the word \"texton\" remains a vague concept in the literature for lack of a good mathematical model. In this article, we first present a three-level generative image model for learning textons from texture images. In this model, an image is a superposition of a number of image bases selected from an over-complete dictionary including various Gabor and Laplacian of Gaussian functions at various locations, scales, and orientations. 
These image bases are, in turn, generated by a smaller number of texton elements, selected from a dictionary of textons. By analogy to the waveform-phoneme-word hierarchy in speech, the pixel-base-texton hierarchy presents an increasingly abstract visual description and leads to dimension reduction and variable decoupling. By fitting the generative model to observed images, we can learn the texton dictionary as parameters of the generative model. Then the paper proceeds to study the geometric, dynamic, and photometric structures of the texton representation by further extending the generative model to account for motion and illumination variations. (1) For the geometric structures, a texton consists of a number of image bases with deformable spatial configurations. The geometric structures are learned from static texture images. (2) For the dynamic structures, the motion of a texton is characterized by a Markov chain model in time which sometimes can switch geometric configurations during the movement. We call the moving textons as \"motons\". The dynamic models are learned using the trajectories of the textons inferred from video sequence. (3) For photometric structures, a texton represents the set of images of a 3D surface element under varying illuminations and is called a \"lighton\" in this paper. We adopt an illumination-cone representation where a lighton is a texton triplet. For a given light source, a lighton image is generated as a linear sum of the three texton bases. We present a sequence of experiments for learning the geometric, dynamic, and photometric structures from images and videos, and we also present some comparison studies with K-mean clustering, sparse coding, independent component analysis, and transformed component analysis. We shall discuss how general textons can be learned from generic natural images.", "We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.", "", "The purpose of this paper is to describe a program now in existence which is capable of solving a wide class of the so-called 'geometric-analogy' problems frequently encountered on intelligence tests. Each member of this class of problems consists of a set of labeled line drawings. The task to be performed can be concisely described by the question: 'figure A is to figure B as figure C is to which of the given answer figures?' For example, given the problem illustrated as Fig. 1, the geometric-analogy program (which we shall subsequently call ANALOGY, for brevity) selected the problem figure labeled 4 as its answer. 
It seems safe to say that most people would agree with ANALOGY's answer to this problem (which, incidentally, is taken from the 1942 edition of the Psychological Test for College Freshmen of the American Council on Education). Furthermore, if one were required to make explicit the reasoning by which he arrived at his answer, prospects are good that the results would correspond closely to the description of its 'reasoning' produced by ANALOGY.", "", "People learning new concepts can often generalize successfully from just a single example, yet machine learning algorithms typically require tens or hundreds of examples to perform with similar accuracy. People can also use learned concepts in richer ways than conventional algorithms—for action, imagination, and explanation. We present a computational model that captures these human learning abilities for a large class of simple visual concepts: handwritten characters from the world’s alphabets. The model represents concepts as simple programs that best explain observed examples under a Bayesian criterion. On a challenging one-shot classification task, the model achieves human-level performance while outperforming recent deep learning approaches. We also present several “visual Turing tests” probing the model’s creative generalization abilities, which in many cases are indistinguishable from human behavior." ] }
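To make the few-shot setting concrete, here is a tiny sketch of the prototypical-network classification rule described in the @cite_42 abstract: the class prototype is the mean embedding of the support examples, and a query is assigned to the nearest prototype. The identity embedding and the 2-D toy data are stand-ins for a learned embedding network and real images.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x):
    # Stand-in for a trained embedding network; here the inputs are already vectors.
    return x

# Support set: 3 classes with 5 examples each, drawn around distinct centres.
centres = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
support = [centres[c] + rng.normal(scale=0.5, size=(5, 2)) for c in range(3)]

# Prototype = mean embedding of each class's support examples.
prototypes = np.stack([embed(s).mean(axis=0) for s in support])

def classify(query):
    dists = ((prototypes - embed(query)) ** 2).sum(axis=1)  # squared Euclidean
    return int(dists.argmin())                              # nearest prototype wins

print(classify(np.array([3.8, 0.3])))  # expected: 1 (closest to the second centre)
```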
1709.06440
2755326359
Peer-to-peer (P2P) botnets have become one of the major threats in network security for serving as the infrastructure that responsible for various of cyber-crimes. Though a few existing work claimed to detect traditional botnets effectively, the problem of detecting P2P botnets involves more challenges. In this paper, we present PeerHunter, a community behavior analysis based method, which is capable of detecting botnets that communicate via a P2P structure. PeerHunter starts from a P2P hosts detection component. Then, it uses mutual contacts as the main feature to cluster bots into communities. Finally, it uses community behavior analysis to detect potential botnet communities and further identify bot candidates. Through extensive experiments with real and simulated network traces, PeerHunter can achieve very high detection rate and low false positives.
Group- or community-behavior-based methods @cite_6 @cite_16 consider the behavior patterns of a group of bots within the same P2P botnet community. For instance, @cite_26 developed a P2P botnet detection approach that starts by building a mutual contact graph of the whole network and then attempts to use "seeds" (known bots) to identify the remaining bots within the same botnet. However, most of the time it is hard to have a "seed" in advance. @cite_6 proposed a group-level behavior analysis based P2P botnet detection method. However, it only uses statistical traffic features to cluster P2P hosts, which makes it vulnerable to P2P botnets with dynamic or randomized traffic patterns. Besides, their method cannot cope with unknown P2P botnets, which is the common case in botnet detection @cite_2 , because it relies on supervised classification methods (e.g., SVM).
{ "cite_N": [ "@cite_16", "@cite_2", "@cite_6", "@cite_26" ], "mid": [ "", "2059001009", "1599476119", "2044285442" ], "abstract": [ "", "The decentralized nature of Peer-to-Peer (P2P) botnets makes them difficult to detect. Their distributed nature also exhibits resilience against take-down attempts. Moreover, smarter bots are stealthy in their communication patterns, and elude the standard discovery techniques which look for anomalous network or communication behavior. In this paper, we propose Peer Shark, a novel methodology to detect P2P botnet traffic and differentiate it from benign P2P traffic in a network. Instead of the traditional 5-tuple 'flow-based' detection approach, we use a 2-tuple 'conversation-based' approach which is port-oblivious, protocol-oblivious and does not require Deep Packet Inspection. Peer Shark could also classify different P2P applications with an accuracy of more than 95 .", "Advanced botnets adopt a peer-to-peer (P2P) infrastructure for more resilient command and control (C&C). Traditional detection techniques become less effective in identifying bots that communicate via a P2P structure. In this paper, we present PeerClean, a novel system that detects P2P botnets in real time using only high-level features extracted from C&C network flow traffic. PeerClean reliably distinguishes P2P bot-infected hosts from legitimate P2P hosts by jointly considering flow-level traffic statistics and network connection patterns. Instead of working on individual connections or hosts, PeerClean clusters hosts with similar flow traffic statistics into groups. It then extracts the collective and dynamic connection patterns of each group by leveraging a novel dynamic group behavior analysis. Comparing with the individual host-level connection patterns, the collective group patterns are more robust and differentiable. Multi-class classification models are then used to identify different types of bots based on the established patterns. To increase the detection probability, we further propose to train the model with average group behavior, but to explore the extreme group behavior for the detection. We evaluate PeerClean on real-world flow records from a campus network. Our evaluation shows that PeerClean is able to achieve high detection rates with few false positives.", "In this work we show that once a single peer-to-peer (P2P) bot is detected in a network, it may be possible to efficiently identify other members of the same botnet in the same network even before they exhibit any overtly malicious behavior. Detection is based on an analysis of connections made by the hosts in the network. It turns out that if bots select their peers randomly and independently (i.e. unstructured topology), any given pair of P2P bots in a network communicate with at least one mutual peer outside the network with a surprisingly high probability. This, along with the low probability of any other host communicating with this mutual peer, allows us to link local nodes within a P2P botnet together. We propose a simple method to identify potential members of an unstructured P2P botnet in a network starting from a known peer. We formulate the problem as a graph problem and mathematically analyze a solution using an iterative algorithm. The proposed scheme is simple and requires only flow records captured at network borders. We analyze the efficacy of the proposed scheme using real botnet data, including data obtained from both observing and crawling the Nugache botnet." ] }
1709.06265
2754686312
The past several years have witnessed the rapid progress of end-to-end Neural Machine Translation (NMT). However, there exists discrepancy between training and inference in NMT when decoding, which may lead to serious problems since the model might be in a part of the state space it has never seen during training. To address the issue, Scheduled Sampling has been proposed. However, there are certain limitations in Scheduled Sampling and we propose two dynamic oracle-based methods to improve it. We manage to mitigate the discrepancy by changing the training process towards a less guided scheme and meanwhile aggregating the oracle's demonstrations. Experimental results show that the proposed approaches improve translation quality over standard NMT system.
To mitigate the discrepancy between training and inference, Daume introduced SEARN, which aims to tackle the problem that training examples might differ from actual test examples. They show that structured prediction can be mapped into a search setting using language from reinforcement learning, and that known techniques for reinforcement learning can give formal performance bounds on the structured prediction task. In addition, Dataset Aggregation (DAgger) @cite_7 is another method, which adds on-policy samples to its dataset and then re-optimizes the policy by asking a human expert to label these new data.
{ "cite_N": [ "@cite_7" ], "mid": [ "1931877416" ], "abstract": [ "Sequential prediction problems such as imitation learning, where future observations depend on previous predictions (actions), violate the common i.i.d. assumptions made in statistical learning. This leads to poor performance in theory and often in practice. Some recent approaches provide stronger guarantees in this setting, but remain somewhat unsatisfactory as they train either non-stationary or stochastic policies and require a large number of iterations. In this paper, we propose a new iterative algorithm, which trains a stationary deterministic policy, that can be seen as a no regret algorithm in an online learning setting. We show that any such no regret algorithm, combined with additional reduction assumptions, must find a policy with good performance under the distribution of observations it induces in such sequential settings. We demonstrate that this new approach outperforms previous approaches on two challenging imitation learning problems and a benchmark sequence labeling problem." ] }
1709.06495
2755948605
In this paper, we present a novel deep learning based approach for addressing the problem of interaction recognition from a first person perspective. The proposed approach uses a pair of convolutional neural networks, whose parameters are shared, for extracting frame level features from successive frames of the video. The frame level features are then aggregated using a convolutional long short-term memory. The hidden state of the convolutional long short-term memory, after all the input video frames are processed, is used for classification in to the respective categories. The two branches of the convolutional neural network perform feature encoding on a short time interval whereas the convolutional long short term memory encodes the changes on a longer temporal duration. In our network the spatio-temporal structure of the input is preserved till the very final processing stage. Experimental results show that our method outperforms the state of the art on most recent first person interactions datasets that involve complex ego-motion. In particular, on UTKinect-FirstPerson it competes with methods that use depth image and skelet al joints information along with RGB images, while it surpasses all previous methods that use only RGB images by more than 20 in recognition accuracy.
An extensive amount of exploratory study has been carried out in the area of first-person activity or action recognition, concentrating on the activity that the camera wearer is carrying out. These methods can be divided into two major classes: activities involving object manipulation, such as meal preparation @cite_4 @cite_27 @cite_13 @cite_30 @cite_3 @cite_43 , and activities such as running, walking, etc. @cite_11 @cite_20 @cite_9 @cite_10 @cite_6 . The former relies on information about the objects present in the scene to classify the activity, while the latter concentrates on the ego-motion and the salient motion in the scene.
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_10", "@cite_9", "@cite_3", "@cite_6", "@cite_43", "@cite_27", "@cite_13", "@cite_20", "@cite_11" ], "mid": [ "", "2149276562", "", "", "", "854053868", "2050709211", "", "1967686239", "2432964524", "" ], "abstract": [ "", "We present a method to analyze daily activities, such as meal preparation, using video from an egocentric camera. Our method performs inference about activities, actions, hands, and objects. Daily activities are a challenging domain for activity recognition which are well-suited to an egocentric approach. In contrast to previous activity recognition methods, our approach does not require pre-trained detectors for objects and hands. Instead we demonstrate the ability to learn a hierarchical model of an activity by exploiting the consistent appearance of objects, hands, and actions that results from the egocentric context. We show that joint modeling of activities, actions, and objects leads to superior performance in comparison to the case where they are considered independently. We introduce a novel representation of actions based on object-hand interactions and experimentally demonstrate the superior performance of our representation in comparison to standard activity representations such as bag of words.", "", "", "", "While egocentric video is becoming increasingly popular, browsing it is very difficult. In this paper we present a compact 3D Convolutional Neural Network (CNN) architecture for long-term activity recognition in egocentric videos. Recognizing long-term activities enables us to temporally segment (index) long and unstructured egocentric videos. Existing methods for this task are based on hand tuned features derived from visible objects, location of hands, as well as optical flow. Given a sparse optical flow volume as input, our CNN classifies the camera wearer's activity. We obtain classification accuracy of 89 , which outperforms the current state-of-the-art by 19 . Additional evaluation is performed on an extended egocentric video dataset, classifying twice the amount of categories than current state-of-the-art. Furthermore, our CNN is able to recognize whether a video is egocentric or not with 99.2 accuracy, up by 24 from current state-of-the-art. To better understand what the network actually learns, we propose a novel visualization of CNN kernels as flow fields.", "We present a novel method for monocular hand gesture recognition in ego-vision scenarios that deals with static and dynamic gestures and can achieve high accuracy results using a few positive samples. Specifically, we use and extend the dense trajectories approach that has been successfully introduced for action recognition. Dense features are extracted around regions selected by a new hand segmentation technique that integrates superpixel classification, temporal and spatial coherence. We extensively testour gesture recognition and segmentation algorithms on public datasets and propose a new dataset shot with a wearable camera. In addition, we demonstrate that our solution can work in near real-time on a wearable device.", "", "In this paper we present a model of action based on the change in the state of the environment. Many actions involve similar dynamics and hand-object relationships, but differ in their purpose and meaning. The key to differentiating these actions is the ability to identify how they change the state of objects and materials in the environment. 
We propose a weakly supervised method for learning the object and material states that are necessary for recognizing daily actions. Once these state detectors are learned, we can apply them to input videos and pool their outputs to detect actions. We further demonstrate that our method can be used to segment discrete actions from a continuous video of an activity. Our results outperform state-of-the-art action recognition and activity segmentation results.", "We focus on the problem of wearer's action recognition in first person a.k.a. egocentric videos. This problem is more challenging than third person activity recognition due to unavailability of wearer's pose and sharp movements in the videos caused by the natural head motion of the wearer. Carefully crafted features based on hands and objects cues for the problem have been shown to be successful for limited targeted datasets. We propose convolutional neural networks (CNNs) for end to end learning and classification of wearer's actions. The proposed network makes use of egocentric cues by capturing hand pose, head motion and saliency map. It is compact. It can also be trained from relatively small number of labeled egocentric videos that are available. We show that the proposed network can generalize and give state of the art performance on various disparate egocentric action datasets.", "" ] }
1709.06495
2755948605
In this paper, we present a novel deep learning based approach for addressing the problem of interaction recognition from a first person perspective. The proposed approach uses a pair of convolutional neural networks, whose parameters are shared, for extracting frame level features from successive frames of the video. The frame level features are then aggregated using a convolutional long short-term memory. The hidden state of the convolutional long short-term memory, after all the input video frames are processed, is used for classification into the respective categories. The two branches of the convolutional neural network perform feature encoding on a short time interval whereas the convolutional long short-term memory encodes the changes on a longer temporal duration. In our network the spatio-temporal structure of the input is preserved till the very final processing stage. Experimental results show that our method outperforms the state of the art on the most recent first person interaction datasets that involve complex ego-motion. In particular, on UTKinect-FirstPerson it competes with methods that use depth image and skeletal joints information along with RGB images, while it surpasses all previous methods that use only RGB images by more than 20% in recognition accuracy.
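To make the described architecture concrete, below is a minimal PyTorch sketch of the overall idea: a shared-parameter twin CNN encoding successive frame pairs, followed by a convolutional LSTM whose final hidden state drives classification. This is an illustrative reconstruction under stated assumptions (the tiny encoder, the layer sizes, and the class names `ConvLSTMCell` and `PairNet` are invented here), not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    # LSTM cell whose gate transforms are convolutions, so the
    # spatial structure of the feature maps is preserved over time.
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        # One convolution produces all four gates at once.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, h, c):
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class PairNet(nn.Module):
    # Twin CNN branches with *shared* parameters encode successive
    # frame pairs; a ConvLSTM aggregates the pairwise encodings.
    def __init__(self, num_classes, feat_ch=64):
        super().__init__()
        self.encoder = nn.Sequential(              # shared frame encoder
            nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(32, feat_ch, 3, 2, 1), nn.ReLU())
        self.rnn = ConvLSTMCell(2 * feat_ch, feat_ch)
        self.fc = nn.Linear(feat_ch, num_classes)

    def forward(self, frames):                     # frames: (B, T, 3, H, W)
        B, T = frames.shape[:2]
        h = c = None
        for t in range(T - 1):
            # The same encoder (shared weights) is applied to both frames.
            z = torch.cat([self.encoder(frames[:, t]),
                           self.encoder(frames[:, t + 1])], dim=1)
            if h is None:
                h = z.new_zeros(B, self.rnn.hid_ch, *z.shape[2:])
                c = torch.zeros_like(h)
            h, c = self.rnn(z, h, c)
        # Final hidden state, spatially pooled, drives classification.
        return self.fc(h.mean(dim=(2, 3)))

logits = PairNet(num_classes=8)(torch.randn(2, 5, 3, 64, 64))  # shape (2, 8)
```

Note that the two CNN branches operate on a short time interval (one frame pair), while the recurrence over `t` lets the ConvLSTM encode changes over the longer temporal duration, matching the division of labor described above.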
More recent studies have focused on interaction recognition in the egocentric vision context. One of the pioneering works in this area was carried out by Ryoo & Matthies @cite_24 , who extracted optical flow based features and sparse spatio-temporal features from the videos and used a bag-of-words model as the video feature representation. Narayan @cite_16 use TrajShape @cite_37 , MBH @cite_12 and HOF @cite_37 as motion features, followed by a bag-of-words approach or Fisher vector encoding @cite_14 to generate the feature descriptor. Wray @cite_34 propose a graph-based semantic embedding for recognizing egocentric object interactions. Instead of focusing on the objects @cite_15 , Bambach @cite_17 investigate the use of strong region proposals and a CNN classifier to locate and distinguish hands involved in an interaction.
{ "cite_N": [ "@cite_37", "@cite_14", "@cite_34", "@cite_24", "@cite_15", "@cite_16", "@cite_12", "@cite_17" ], "mid": [ "", "1606858007", "2496009737", "2167626157", "2198667788", "2092611032", "", "2204609240" ], "abstract": [ "", "The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics. However, in practice, this enriched representation has not yet shown its superiority over the BOV. In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9 to 58.3 . Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups. In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets.", "We present SEMBED, an approach for embedding an egocentric object interaction video in a semantic-visual graph to estimate the probability distribution over its potential semantic labels. When object interactions are annotated using unbounded choice of verbs, we embrace the wealth and ambiguity of these labels by capturing the semantic relationships as well as the visual similarities over motion and appearance features. We show how SEMBED can interpret a challenging dataset of 1225 freely annotated egocentric videos, outperforming SVM classification by more than 5 .", "This paper discusses the problem of recognizing interaction-level human activities from a first-person viewpoint. The goal is to enable an observer (e.g., a robot or a wearable camera) to understand 'what activity others are performing to it' from continuous video inputs. These include friendly interactions such as 'a person hugging the observer' as well as hostile interactions like 'punching the observer' or 'throwing objects to the observer', whose videos involve a large amount of camera ego-motion caused by physical interactions. The paper investigates multi-channel kernels to integrate global and local motion information, and presents a new activity learning recognition methodology that explicitly considers temporal structures displayed in first-person activity videos. In our experiments, we not only show classification results with segmented videos, but also confirm that our new approach is able to detect activities from continuous videos reliably.", "We present a fully unsupervised approach for the discovery of i) task relevant objects and ii) how these objects have been used. A Task Relevant Object (TRO) is an object, or part of an object, with which a person interacts during task performance. Given egocentric video from multiple operators, the approach can discover objects with which the users interact, both static objects such as a coffee machine as well as movable ones such as a cup. 
Importantly, we also introduce the term Mode of Interaction (MOI) to refer to the different ways in which TROs are used. Say, a cup can be lifted, washed, or poured into. When harvesting interactions with the same object from multiple operators, common MOIs can be found. Setup and Dataset: Using a wearable camera and gaze tracker (Mobile Eye-XG from ASL), egocentric video is collected of users performing tasks, along with their gaze in pixel coordinates. Six locations were chosen: kitchen, workspace, laser printer, corridor with a locked door, cardiac gym and weight-lifting machine. The Bristol Egocentric Object Interactions Dataset is publically available .", "In this work, we evaluate the performance of the popular dense trajectories approach on first-person action recognition datasets. A person moving around with a wearable camera will actively interact with humans and objects and also passively observe others interacting. Hence, in order to represent real-world scenarios, the dataset must contain actions from first-person perspective as well as third-person perspective. For this purpose, we introduce a new dataset which contains actions from both the perspectives captured using a head-mounted camera. We employ a motion pyramidal structure for grouping the dense trajectory features. The relative strengths of motion along the trajectories are used to compute different bag-of-words descriptors and concatenated to form a single descriptor for the action. The motion pyramidal approach performs better than the baseline improved trajectory descriptors. The method achieves 96.7 on the JPL interaction dataset and 61.8 on our NUS interaction dataset. The same is used to detect actions in long video sequences and achieves average precision of 0.79 on JPL interaction dataset.", "", "Hands appear very often in egocentric video, and their appearance and pose give important cues about what people are doing and what they are paying attention to. But existing work in hand detection has made strong assumptions that work well in only simple scenarios, such as with limited interaction with other people or in lab settings. We develop methods to locate and distinguish between hands in egocentric video using strong appearance models with Convolutional Neural Networks, and introduce a simple candidate region generation approach that outperforms existing techniques at a fraction of the computational cost. We show how these high-quality bounding boxes can be used to create accurate pixelwise hand regions, and as an application, we investigate the extent to which hand segmentation alone can distinguish between different activities. We evaluate these techniques on a new dataset of 48 first-person videos of people interacting in realistic environments, with pixel-level ground truth for over 15,000 hand instances." ] }
1709.06495
2755948605
In this paper, we present a novel deep learning based approach for addressing the problem of interaction recognition from a first person perspective. The proposed approach uses a pair of convolutional neural networks, whose parameters are shared, for extracting frame level features from successive frames of the video. The frame level features are then aggregated using a convolutional long short-term memory. The hidden state of the convolutional long short-term memory, after all the input video frames are processed, is used for classification into the respective categories. The two branches of the convolutional neural network perform feature encoding on a short time interval whereas the convolutional long short-term memory encodes the changes on a longer temporal duration. In our network the spatio-temporal structure of the input is preserved till the very final processing stage. Experimental results show that our method outperforms the state of the art on the most recent first person interaction datasets that involve complex ego-motion. In particular, on UTKinect-FirstPerson it competes with methods that use depth image and skeletal joints information along with RGB images, while it surpasses all previous methods that use only RGB images by more than 20% in recognition accuracy.
Another line of research in this area was influenced by the development of the Kinect device, which is capable of capturing depth information from the scene together with the skeletal joints of the humans present in it. Methods exploiting this additional data modality have been proposed by Gori @cite_36 , Xia @cite_19 and Gori @cite_2 . Gori @cite_36 use the histogram of 3D joints, histograms of direction vectors and depth images along with the visual features. Xia @cite_19 propose to use spatio-temporal features computed from the RGB and depth images @cite_5 together with the skeletal joints information as the feature descriptor. Gori @cite_2 propose a feature descriptor called the relation history image, which extracts information from skeletal joints and depth images.
{ "cite_N": [ "@cite_36", "@cite_19", "@cite_5", "@cite_2" ], "mid": [ "2295007657", "2035036811", "2162415752", "2962851488" ], "abstract": [ "This paper considers the problem of recognizing spontaneous human activities from a robot’s perspective. We present a novel dataset, where data is collected by an autonomous mobile robot moving around in a building and recording the activities of people in the surroundings. Activities are not specified beforehand and humans are not prompted to perform them in any way. Instead, labels are determined on the basis of the recorded spontaneous activities. The classification of such activities presents a number of challenges, as the robot’s movement affects its perceptions. To address it, we propose a combined descriptor that, along with visual features, integrates information related to the robot’s actions. We show experimentally that such information is important for classifying natural activities with high accuracy. Along with initial results for future benchmarking, we also provide an analysis of the usefulness and importance of the various features for the activity recognition task.", "We present a framework and algorithm to analyze first person RGBD videos captured from the robot while physically interacting with humans. Specifically, we explore reactions and interactions of persons facing a mobile robot from a robot centric view. This new perspective offers social awareness to the robots, enabling interesting applications. As far as we know, there is no public 3D dataset for this problem. Therefore, we record two multi-modal first-person RGBD datasets that reflect the setting we are analyzing. We use a humanoid and a non-humanoid robot equipped with a Kinect. Notably, the videos contain a high percentage of ego-motion due to the robot self-exploration as well as its reactions to the persons' interactions. We show that separating the descriptors extracted from ego-motion and independent motion areas, and using them both, allows us to achieve superior recognition results. Experiments show that our algorithm recognizes the activities effectively and outperforms other state-of-the-art methods on related tasks.", "Local spatio-temporal interest points (STIPs) and the resulting features from RGB videos have been proven successful at activity recognition that can handle cluttered backgrounds and partial occlusions. In this paper, we propose its counterpart in depth video and show its efficacy on activity recognition. We present a filtering method to extract STIPs from depth videos (called DSTIP) that effectively suppress the noisy measurements. Further, we build a novel depth cuboid similarity feature (DCSF) to describe the local 3D depth cuboid around the DSTIPs with an adaptable supporting size. We test this feature on activity recognition application using the public MSRAction3D, MSRDailyActivity3D datasets and our own dataset. Experimental evaluation shows that the proposed approach outperforms state-of-the-art activity recognition algorithms on depth videos, and the framework is more widely applicable than existing approaches. We also give detailed comparisons with other features and analysis of choice of parameters as a guidance for applications.", "Activity recognition is very useful in scenarios where robots interact with, monitor, or assist humans. In the past years many types of activities—single actions, two persons interactions or ego-centric activities, to name a few—have been analyzed. 
Whereas traditional methods treat such types of activities separately, an autonomous robot should be able to detect and recognize multiple types of activities to effectively fulfill its tasks. We propose a method that is intrinsically able to detect and recognize activities of different types that happen in sequence or concurrently. We present a new unified descriptor, called relation history image (RHI), which can be extracted from all the activity types we are interested in. We then formulate an optimization procedure to detect and recognize activities of different types. We apply our approach to a new dataset recorded from a robot-centric perspective and systematically evaluate its quality compared to multiple baselines. Finally, we show the efficacy of the RHI descriptor on publicly available datasets performing extensive comparisons." ] }
1709.06495
2755948605
In this paper, we present a novel deep learning based approach for addressing the problem of interaction recognition from a first person perspective. The proposed approach uses a pair of convolutional neural networks, whose parameters are shared, for extracting frame level features from successive frames of the video. The frame level features are then aggregated using a convolutional long short-term memory. The hidden state of the convolutional long short-term memory, after all the input video frames are processed, is used for classification into the respective categories. The two branches of the convolutional neural network perform feature encoding on a short time interval whereas the convolutional long short-term memory encodes the changes on a longer temporal duration. In our network the spatio-temporal structure of the input is preserved till the very final processing stage. Experimental results show that our method outperforms the state of the art on the most recent first person interaction datasets that involve complex ego-motion. In particular, on UTKinect-FirstPerson it competes with methods that use depth image and skeletal joints information along with RGB images, while it surpasses all previous methods that use only RGB images by more than 20% in recognition accuracy.
Several deep learning based approaches have been developed for action recognition from the third person view. Simonyan & Zisserman @cite_41 use raw frames and optical flow images as inputs to two CNNs for extracting the feature descriptor. Donahue @cite_22 and Srivastava @cite_28 use an architecture consisting of a CNN followed by a long short-term memory (LSTM) RNN for action recognition. A variant of the LSTM architecture, in which the fully-connected gates are replaced with convolutional gates (convLSTM), was proposed by Shi @cite_26 for precipitation nowcasting from radar images; the convLSTM was found to outperform the fully-connected LSTM. The convLSTM model has later been used for predicting optical flow images @cite_18 and for anomaly detection @cite_1 . These results show that the convLSTM model suits applications involving spatio-temporal data such as videos. For this reason, we use the convLSTM as one of the key building blocks of the proposed model for first person interaction recognition.
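For reference, the convLSTM mentioned above replaces the matrix products in the LSTM gate computations with convolutions. A standard statement of the gate equations (omitting the peephole terms that Shi @cite_26 additionally include) is:

$$
\begin{aligned}
i_t &= \sigma\!\left(W_{xi} * X_t + W_{hi} * H_{t-1} + b_i\right),\\
f_t &= \sigma\!\left(W_{xf} * X_t + W_{hf} * H_{t-1} + b_f\right),\\
o_t &= \sigma\!\left(W_{xo} * X_t + W_{ho} * H_{t-1} + b_o\right),\\
C_t &= f_t \circ C_{t-1} + i_t \circ \tanh\!\left(W_{xc} * X_t + W_{hc} * H_{t-1} + b_c\right),\\
H_t &= o_t \circ \tanh(C_t),
\end{aligned}
$$

where $*$ denotes convolution, $\circ$ the Hadamard product, and the inputs $X_t$, hidden states $H_t$ and cell states $C_t$ are feature maps rather than vectors, so spatial structure is retained through time.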
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_22", "@cite_41", "@cite_28", "@cite_1" ], "mid": [ "2175030374", "1485009520", "2951183276", "2952186347", "2952453038", "2559927751" ], "abstract": [ "We describe a new spatio-temporal video autoencoder, based on a classic spatial image autoencoder and a novel nested temporal autoencoder. The temporal encoder is represented by a differentiable visual memory composed of convolutional long short-term memory (LSTM) cells that integrate changes over time. Here we target motion changes and use as temporal decoder a robust optical flow prediction module together with an image sampler serving as built-in feedback loop. The architecture is end-to-end differentiable. At each time step, the system receives as input a video frame, predicts the optical flow based on the current observation and the LSTM memory state as a dense transformation map, and applies it to the current frame to generate the next frame. By minimising the reconstruction error between the predicted next frame and the corresponding ground truth next frame, we train the whole system to extract features useful for motion estimation without any supervision effort. We present one direct application of the proposed framework in weakly-supervised semantic segmentation of videos through label propagation using optical flow.", "The goal of precipitation nowcasting is to predict the future rainfall intensity in a local region over a relatively short period of time. Very few previous studies have examined this crucial and challenging weather forecasting problem from the machine learning perspective. In this paper, we formulate precipitation nowcasting as a spatiotemporal sequence forecasting problem in which both the input and the prediction target are spatiotemporal sequences. By extending the fully connected LSTM (FC-LSTM) to have convolutional structures in both the input-to-state and state-to-state transitions, we propose the convolutional LSTM (ConvLSTM) and use it to build an end-to-end trainable model for the precipitation nowcasting problem. Experiments show that our ConvLSTM network captures spatiotemporal correlations better and consistently outperforms FC-LSTM and the state-of-the-art operational ROVER algorithm for precipitation nowcasting.", "Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. 
Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized.", "We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multi-task learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.", "We use multilayer Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations (\"percepts\") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We try to visualize and interpret the learned features. We stress test the model by running it on longer time scales and on out-of-domain data. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only a few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance.", "Automating the detection of anomalous events within long video sequences is challenging due to the ambiguity of how such events are defined. We approach the problem by learning generative models that can identify anomalies in videos using limited supervision. We propose end-to-end trainable composite Convolutional Long Short-Term Memory (Conv-LSTM) networks that are able to predict the evolution of a video sequence from a small number of input frames. Regularity scores are derived from the reconstruction errors of a set of predictions with abnormal video sequences yielding lower regularity scores as they diverge further from the actual sequence over time. 
The models utilize a composite structure and examine the effects of conditioning in learning more meaningful representations. The best model is chosen based on the reconstruction and prediction accuracy. The Conv-LSTM models are evaluated both qualitatively and quantitatively, demonstrating competitive results on anomaly detection datasets. Conv-LSTM units are shown to be an effective tool for modeling and predicting video sequences." ] }
1709.05789
2755931755
The columnwise Khatri–Rao product of two matrices is an important matrix type, reprising its role as a structured sensing matrix in many fundamental linear inverse problems. Robust signal recovery in such inverse problems is often contingent on proving the restricted isometry property (RIP) of a certain system matrix expressible as a Khatri–Rao product of two matrices. In this paper, we analyze the RIP of a generic columnwise Khatri–Rao product matrix by deriving two upper bounds for its @math th order restricted isometry constant ( @math -RIC) for different values of @math . The first RIC bound is computed in terms of the individual RICs of the real-valued input matrices participating in the Khatri–Rao product. The second RIC bound is probabilistic and is specified in terms of the input matrix dimensions. We show that the Khatri–Rao product of a pair of @math sized random matrices comprising independent and identically distributed sub-Gaussian entries satisfies @math -RIP with arbitrarily high probability, provided @math exceeds @math . This is a substantially milder condition compared to @math rows needed to guarantee @math -RIP of the input sub-Gaussian random matrices participating in the Khatri–Rao product. Our RIC bounds confirm that the Khatri–Rao product exhibits stronger restricted isometry compared to its constituent matrices for the same RIP order. The proposed RIC bounds are potentially useful in obtaining improved performance guarantees in several sparse signal recovery and tensor decomposition problems.
Perhaps the most direct approach for analyzing the RICs of the KR product matrix is to use the eigenvalue interlacing theorem @cite_1 , which relates the singular values of any @math -column submatrix of the KR product of two matrices to the singular values of their Kronecker product. This is possible because any @math columns of the KR product can together be interpreted as a submatrix of the Kronecker product. However, barring the maximum and minimum singular values of the Kronecker product, there is no explicit characterization of its non-extremal singular values that could be used to obtain tight bounds for the @math -RIC of the KR product, and bounding the RIC using only the extreme singular values of the Kronecker product turns out to be too loose to be useful. In this context, it is worth noting that an upper bound for the @math -RIC of the Kronecker product is derived in terms of the @math -RICs of the input matrices in @cite_6 @cite_28 . However, the @math -RIC of the Khatri-Rao product is yet to be analyzed.
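As a complementary empirical check on these analytical difficulties, the following hypothetical Python sketch forms the columnwise Khatri-Rao product of two Gaussian matrices and estimates a lower bound on the s-th order RIC by randomly sampling s-column submatrices. Exhaustive search over all supports is intractable, so this only lower-bounds the true RIC; the function names and matrix sizes are illustrative assumptions.

```python
import numpy as np

def khatri_rao(A, B):
    # Columnwise Khatri-Rao product: j-th column is kron(A[:, j], B[:, j]).
    return np.einsum('ij,kj->ikj', A, B).reshape(A.shape[0] * B.shape[0], -1)

def empirical_ric(M, s, n_trials=2000, seed=0):
    # Monte-Carlo lower bound on the s-th order RIC: sample random
    # s-column submatrices and track the worst deviation of their
    # squared singular values from 1 (columns normalized first).
    rng = np.random.default_rng(seed)
    M = M / np.linalg.norm(M, axis=0)
    delta = 0.0
    for _ in range(n_trials):
        S = rng.choice(M.shape[1], size=s, replace=False)
        sv = np.linalg.svd(M[:, S], compute_uv=False)
        delta = max(delta, abs(sv[0] ** 2 - 1), abs(sv[-1] ** 2 - 1))
    return delta

m, n, s = 32, 256, 4
A = np.random.default_rng(1).standard_normal((m, n)) / np.sqrt(m)
B = np.random.default_rng(2).standard_normal((m, n)) / np.sqrt(m)
print('RIC estimate, A     :', empirical_ric(A, s))
print('RIC estimate, A (.) B:', empirical_ric(khatri_rao(A, B), s))
```

Running such a sketch typically shows the Khatri-Rao product deviating far less from isometry than either factor, consistent with the stronger restricted isometry discussed above.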
{ "cite_N": [ "@cite_28", "@cite_1", "@cite_6" ], "mid": [ "", "210359992", "2001380973" ], "abstract": [ "", "1. Basic Concepts. 2. Nonparametric Methods. 3. Parametric Methods for Rational Spectra. 4. Parametric Methods for Line Spectra. 5. Filter Bank Methods. 6. Spatial Methods. Appendix A: Linear Algebra and Matrix Analysis Tools. Appendix B: Cramer-Rao Bound Tools. Appendix C: Model Order Selection Tools. Appendix D: Answers to Selected Exercises. Bibliography. References Grouped by Subject. Subject Index.", "Compressive sensing (CS) is an emerging approach for the acquisition of signals having a sparse or compressible representation in some basis. While the CS literature has mostly focused on problems involving 1-D signals and 2-D images, many important applications involve multidimensional signals; the construction of sparsifying bases and measurement systems for such signals is complicated by their higher dimensionality. In this paper, we propose the use of Kronecker product matrices in CS for two purposes. First, such matrices can act as sparsifying bases that jointly model the structure present in all of the signal dimensions. Second, such matrices can represent the measurement protocols used in distributed settings. Our formulation enables the derivation of analytical bounds for the sparse approximation of multidimensional signals and CS recovery performance, as well as a means of evaluating novel distributed measurement schemes." ] }
1709.05789
2755931755
The columnwise Khatri–Rao product of two matrices is an important matrix type, reprising its role as a structured sensing matrix in many fundamental linear inverse problems. Robust signal recovery in such inverse problems is often contingent on proving the restricted isometry property (RIP) of a certain system matrix expressible as a Khatri–Rao product of two matrices. In this paper, we analyze the RIP of a generic columnwise Khatri–Rao product matrix by deriving two upper bounds for its @math th order restricted isometry constant ( @math -RIC) for different values of @math . The first RIC bound is computed in terms of the individual RICs of the real-valued input matrices participating in the Khatri–Rao product. The second RIC bound is probabilistic and is specified in terms of the input matrix dimensions. We show that the Khatri–Rao product of a pair of @math sized random matrices comprising independent and identically distributed sub-Gaussian entries satisfies @math -RIP with arbitrarily high probability, provided @math exceeds @math . This is a substantially milder condition compared to @math rows needed to guarantee @math -RIP of the input sub-Gaussian random matrices participating in the Khatri–Rao product. Our RIC bounds confirm that the Khatri–Rao product exhibits stronger restricted isometry compared to its constituent matrices for the same RIP order. The proposed RIC bounds are potentially useful in obtaining improved performance guarantees in several sparse signal recovery and tensor decomposition problems.
In @cite_22 , the isometry property of the Kronecker product @math is analyzed in the @math -norm sense for the restricted class of input vectors expressible as vectorized @math -distributed sparse matrices, where @math and @math are the adjacency matrices of two independent uniformly random @math -left-regular bipartite graphs. In this paper, we instead analyze the restricted isometry of the columnwise Khatri-Rao product @math , which is equivalent to the RIP of the Kronecker product @math with respect to vectorized sparse diagonal matrices. In our work, we assume that the input matrices @math and @math are random with independent sub-Gaussian elements.
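A short derivation clarifies the equivalence asserted above; it uses only the standard identity $\operatorname{vec}(ADB^{\mathsf T}) = (B \otimes A)\operatorname{vec}(D)$ and the convention that the $j$-th column of the columnwise Khatri-Rao product $B \odot A$ is $b_j \otimes a_j$ (ordering conventions may differ across papers):

$$
(B \otimes A)\,\operatorname{vec}\!\big(\operatorname{diag}(x)\big)
= \sum_{j} x_j\,(B e_j) \otimes (A e_j)
= \sum_{j} x_j\, b_j \otimes a_j
= (B \odot A)\,x,
$$

so applying the Kronecker product to a vectorized sparse diagonal matrix $\operatorname{diag}(x)$ is the same as applying the Khatri-Rao product to the sparse vector $x$, which is why the two restricted isometry statements coincide.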
{ "cite_N": [ "@cite_22" ], "mid": [ "2003571173" ], "abstract": [ "This paper considers the problem of recovering an unknown sparse p×p matrix X from an m×m matrix Y=AXB T , where A and B are known m×p matrices with m≪p. The main result shows that there exist constructions of the sketching matrices A and B so that even if X has O(p) nonzeros, it can be recovered exactly and efficiently using a convex program as long as these nonzeros are not concentrated in any single row column of X. Furthermore, it suffices for the size of Y (the sketch dimension) to scale as m = O(√(# nonzeros in X) × log p). The results also show that the recovery is robust and stable in the sense that if X is equal to a sparse matrix plus a perturbation, then the convex program we propose produces an approximation with accuracy proportional to the size of the perturbation. Unlike traditional results on sparse recovery, where the sensing matrix produces independent measurements, our sensing operator is highly constrained (it assumes a tensor product structure). Therefore, proving recovery guarantees require nonstandard techniques. Indeed, our approach relies on a novel result concerning tensor products of bipartite graphs, which may be of independent interest. This problem is motivated by the following application, among others. Consider a p×n data matrix D, consisting of n observations of p variables. Assume that the correlation matrix X:=DD T is (approximately) sparse in the sense that each of the p variables is significantly correlated with only a few others. Our results show that these significant correlations can be detected even if we have access to only a sketch of the data S=AD with A ∈ R m×p ." ] }
1709.05903
2755935915
Traditional Bag-of-visual Words (BoWs) model is commonly generated with many steps including local feature extraction, codebook generation, and feature quantization, etc. Those steps are relatively independent with each other and are hard to be jointly optimized. Moreover, the dependency on hand-crafted local feature makes BoWs model not effective in conveying high-level semantics. These issues largely hinder the performance of BoWs model in large-scale image applications. To conquer these issues, we propose an End-to-End BoWs (E @math BoWs) model based on Deep Convolutional Neural Network (DCNN). Our model takes an image as input, then identifies and separates the semantic objects in it, and finally outputs the visual words with high semantic discriminative power. Specifically, our model firstly generates Semantic Feature Maps (SFMs) corresponding to different object categories through convolutional layers, then introduces Bag-of-Words Layers (BoWL) to generate visual words for each individual feature map. We also introduce a novel learning algorithm to reinforce the sparsity of the generated E @math BoWs model, which further ensures the time and memory efficiency. We evaluate the proposed E @math BoWs model on several image search datasets including CIFAR-10, CIFAR-100, MIRFLICKR-25K and NUS-WIDE. Experimental results show that our method achieves promising accuracy and efficiency compared with recent deep learning based retrieval works.
As a fundamental task in multimedia content analysis and computer vision @cite_15 @cite_13 @cite_18 , CBIR aims to search for images similar to the query in an image gallery. Since directly computing the similarity between two images from raw pixels is infeasible, the BoWs model is widely used as an image representation for large-scale image retrieval. Over the past decade, various BoWs models @cite_38 @cite_27 @cite_35 @cite_6 have been proposed based on local descriptors such as SIFT @cite_24 and SURF @cite_32 , and have shown promising performance in large-scale image retrieval. Conventional BoWs models treat an image as a collection of visual words and are generated in several stages, i.e., feature extraction, codebook generation, and feature quantization @cite_38 @cite_27 @cite_35 @cite_6 . For instance, Nister @cite_38 extract SIFT @cite_24 descriptors from MSER regions @cite_21 and then hierarchically quantize them with a vocabulary tree. Since an individual visual word cannot capture the spatial cues in images, some works combine visual words with spatial information @cite_23 @cite_36 to make the resulting BoWs model more discriminative with respect to spatial cues. Other works aim to generate more effective and discriminative vocabularies @cite_41 @cite_5 .
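To make the multi-stage pipeline just described concrete, here is a minimal, hypothetical sketch of codebook generation and feature quantization using k-means. The random arrays stand in for real local descriptors (e.g. SIFT extracted from MSER regions), and all names and sizes are illustrative rather than taken from any of the cited systems.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-ins for local descriptors (e.g. 128-d SIFT); a real system would
# obtain these from a region detector plus a descriptor extractor.
train_desc = rng.standard_normal((5000, 128))

# Stage 1: codebook generation -- cluster descriptors into visual words.
k = 256
codebook = KMeans(n_clusters=k, n_init=4, random_state=0).fit(train_desc)

def bow_histogram(descriptors):
    # Stage 2: feature quantization -- assign each descriptor to its
    # nearest visual word and accumulate an L1-normalized histogram.
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / max(hist.sum(), 1.0)

image_desc = rng.standard_normal((300, 128))   # one image's descriptors
h = bow_histogram(image_desc)                  # fixed-length representation
print(h.shape, h.sum())                        # (256,) 1.0
```

The histogram `h` is the fixed-length bag-of-visual-words representation that retrieval systems then index, e.g. with inverted files.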
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_35", "@cite_36", "@cite_41", "@cite_21", "@cite_32", "@cite_6", "@cite_24", "@cite_27", "@cite_23", "@cite_5", "@cite_15", "@cite_13" ], "mid": [ "2128017662", "2118509786", "2131846894", "2168133252", "2164996087", "2124404372", "", "2103658758", "", "2154952031", "", "2121627225", "2130660124", "2123229215" ], "abstract": [ "A recognition scheme that scales efficiently to a large number of objects is presented. The efficiency and quality is exhibited in a live demonstration that recognizes CD-covers from a database of 40000 images of popular music CD’s. The scheme builds upon popular techniques of indexing descriptors extracted from local regions, and is robust to background clutter and occlusion. The local region descriptors are hierarchically quantized in a vocabulary tree. The vocabulary tree allows a larger and more discriminatory vocabulary to be used efficiently, which we show experimentally leads to a dramatic improvement in retrieval quality. The most significant property of the scheme is that the tree directly defines the quantization. The quantization and the indexing are therefore fully integrated, essentially being one and the same. The recognition quality is evaluated through retrieval on a database with ground truth, showing the power of the vocabulary tree approach, going as high as 1 million images.", "Because of the relative ease in understanding and processing text, commercial image-search systems often rely on techniques that are largely indistinguishable from text search. Recently, academic studies have demonstrated the effectiveness of employing image-based features to provide either alternative or additional signals to use in this process. However, it remains uncertain whether such techniques will generalize to a large number of popular Web queries and whether the potential improvement to search quality warrants the additional computational cost. In this work, we cast the image-ranking problem into the task of identifying \"authority\" nodes on an inferred visual similarity graph and propose VisualRank to analyze the visual link structures among images. The images found to be \"authorities\" are chosen as those that answer the image-queries well. To understand the performance of such an approach in a real system, we conducted a series of large-scale experiments based on the task of retrieving images for 2,000 of the most popular products queries. Our experimental results show significant improvement, in terms of user satisfaction and relevancy, in comparison to the most recent Google image search results. Maintaining modest computational cost is vital to ensuring that this procedure can be used in practice; we describe the techniques required to make this system practical for large-scale deployment in commercial search engines.", "We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. 
The result is that retrieval is immediate, returning a ranked list of key frames/shots in the manner of Google. The method is illustrated for matching in two full length feature films.", "In computer vision, the bag-of-visual words image representation has been shown to yield good results. Recent work has shown that modeling the spatial relationship between visual words further improves performance. Previous work extracts higher-order spatial features exhaustively. However, these spatial features are expensive to compute. We propose a novel method that simultaneously performs feature selection and feature extraction. Higher-order spatial features are progressively extracted based on selected lower order ones, thereby avoiding exhaustive computation. The method can be based on any additive feature selection algorithm such as boosting. Experimental results show that the method is computationally much more efficient than previous approaches, without sacrificing accuracy.", "This paper proposes a technique for jointly quantizing continuous features and the posterior distributions of their class labels based on minimizing empirical information loss such that the quantizer index of a given feature vector approximates a sufficient statistic for its class label. Informally, the quantized representation retains as much information as possible for classifying the feature vector correctly. We derive an alternating minimization procedure for simultaneously learning codebooks in the Euclidean feature space and in the simplex of posterior class distributions. The resulting quantizer can be used to encode unlabeled points outside the training set and to predict their posterior class distributions, and has an elegant interpretation in terms of lossless source coding. The proposed method is validated on synthetic and real data sets and is applied to two diverse problems: learning discriminative visual vocabularies for bag-of-features image classification and image segmentation.", "The wide-baseline stereo problem, i.e. the problem of establishing correspondences between a pair of images taken from different viewpoints is studied. A new set of image elements that are put into correspondence, the so-called extremal regions, is introduced. Extremal regions possess highly desirable properties: the set is closed under (1) continuous (and thus projective) transformation of image coordinates and (2) monotonic transformation of image intensities. An efficient (near linear complexity) and practically fast detection algorithm (near frame rate) is presented for an affinely invariant stable subset of extremal regions, the maximally stable extremal regions (MSER). A new robust similarity measure for establishing tentative correspondences is proposed. The robustness ensures that invariants from multiple measurement regions (regions obtained by invariant constructions from extremal regions), some that are significantly larger (and hence discriminative) than the MSERs, may be used to establish tentative correspondences. The high utility of MSERs, multiple measurement regions and the robust metric is demonstrated in wide-baseline experiments on image pairs from both indoor and outdoor scenes. Significant change of scale (3.5×), illumination conditions, out-of-plane rotation, occlusion, locally anisotropic scale change and 3D translation of the viewpoint are all present in the test problems. 
Good estimates of epipolar geometry (average distance from corresponding points to the epipolar line below 0.09 of the inter-pixel distance) are obtained.", "", "We seek to discover the object categories depicted in a set of unlabelled images. We achieve this using a model developed in the statistical text literature: probabilistic latent semantic analysis (pLSA). In text analysis, this is used to discover topics in a corpus using the bag-of-words document representation. Here we treat object categories as topics, so that an image containing instances of several categories is modeled as a mixture of topics. The model is applied to images by using a visual analogue of a word, formed by vector quantizing SIFT-like region descriptors. The topic discovery approach successfully translates to the visual domain: for a small set of objects, we show that both the object categories and their approximate spatial layout are found without supervision. Performance of this unsupervised method is compared to the supervised approach of (2003) on a set of unseen images containing only one object per image. We also extend the bag-of-words vocabulary to include 'doublets' which encode spatially local co-occurring regions. It is demonstrated that this extended vocabulary gives a cleaner image segmentation. Finally, the classification and segmentation methods are applied to a set of images containing multiple objects per image. These results demonstrate that we can successfully build object class models from an unsupervised analysis of images.", "", "In state-of-the-art image retrieval systems, an image is represented by a bag of visual words obtained by quantizing high-dimensional local image descriptors, and scalable schemes inspired by text retrieval are then applied for large scale image indexing and retrieval. Bag-of-words representations, however: 1) reduce the discriminative power of image features due to feature quantization; and 2) ignore geometric relationships among visual words. Exploiting such geometric constraints, by estimating a 2D affine transformation between a query image and each candidate image, has been shown to greatly improve retrieval precision but at high computational cost. In this paper we present a novel scheme where image features are bundled into local groups. Each group of bundled features becomes much more discriminative than a single feature, and within each group simple and robust geometric constraints can be efficiently enforced. Experiments in Web image search, with a database of more than one million images, show that our scheme achieves a 49 improvement in average precision over the baseline bag-of-words approach. Retrieval performance is comparable to existing full geometric verification approaches while being much less computationally expensive. When combined with full geometric verification we achieve a 77 precision improvement over the baseline bag-of-words approach, and a 24 improvement over full geometric verification alone.", "", "Generic visual categorization (GVC) is the pattern classification problem that consists in assigning labels to an image based on its semantic content. This is a challenging task as one has to deal with inherent object scene variations, as well as changes in viewpoint, lighting, and occlusion. Several state-of-the-art GVC systems use a vocabulary of visual terms to characterize images with a histogram of visual word counts. 
We propose a novel practical approach to GVC based on a universal vocabulary, which describes the content of all the considered classes of images, and class vocabularies obtained through the adaptation of the universal vocabulary using class-specific data. The main novelty is that an image is characterized by a set of histograms - one per class - where each histogram describes whether the image content is best modeled by the universal vocabulary or the corresponding class vocabulary. This framework is applied to two types of local image features: low-level descriptors such as the popular SIFT and high-level histograms of word co-occurrences in a spatial neighborhood. It is shown experimentally on two challenging data sets (an in-house database of 19 categories and the PASCAL VOC 2006 data set) that the proposed approach exhibits state-of-the-art performance at a modest computational cost.", "Presents a review of 200 references in content-based image retrieval. The paper starts with discussing the working conditions of content-based retrieval: patterns of use, types of pictures, the role of semantics, and the sensory gap. Subsequent sections discuss computational steps for image retrieval systems. Step one of the review is image processing for retrieval sorted by color, texture, and local geometry. Features for retrieval are discussed next, sorted by: accumulative and global features, salient points, object and shape features, signs, and structural combinations thereof. Similarity of pictures and objects in pictures is reviewed for each of the feature types, in close connection to the types and means of feedback the user of the systems is capable of giving by interaction. We briefly discuss aspects of system engineering: databases, system architecture, and evaluation. In the concluding section, we present our view on: the driving force of the field, the heritage from computer vision, the influence on computer vision, the role of similarity and of interaction, the need for databases, the problem of evaluation, and the role of the semantic gap.", "Learning effective feature representations and similarity measures are crucial to the retrieval performance of a content-based image retrieval (CBIR) system. Despite extensive research efforts for decades, it remains one of the most challenging open problems that considerably hinders the successes of real-world CBIR systems. The key challenge has been attributed to the well-known semantic gap'' issue that exists between low-level image pixels captured by machines and high-level semantic concepts perceived by human. Among various techniques, machine learning has been actively investigated as a possible direction to bridge the semantic gap in the long term. Inspired by recent successes of deep learning techniques for computer vision and other applications, in this paper, we attempt to address an open problem: if deep learning is a hope for bridging the semantic gap in CBIR and how much improvements in CBIR tasks can be achieved by exploring the state-of-the-art deep learning techniques for learning feature representations and similarity measures. Specifically, we investigate a framework of deep learning with application to CBIR tasks with an extensive set of empirical studies by examining a state-of-the-art deep learning method (Convolutional Neural Networks) for CBIR tasks under varied settings. From our empirical studies, we find some encouraging results and summarize some important insights for future research." ] }
1709.05903
2755935915
Traditional Bag-of-visual Words (BoWs) model is commonly generated with many steps including local feature extraction, codebook generation, and feature quantization, etc. Those steps are relatively independent with each other and are hard to be jointly optimized. Moreover, the dependency on hand-crafted local feature makes BoWs model not effective in conveying high-level semantics. These issues largely hinder the performance of BoWs model in large-scale image applications. To conquer these issues, we propose an End-to-End BoWs (E @math BoWs) model based on Deep Convolutional Neural Network (DCNN). Our model takes an image as input, then identifies and separates the semantic objects in it, and finally outputs the visual words with high semantic discriminative power. Specifically, our model firstly generates Semantic Feature Maps (SFMs) corresponding to different object categories through convolutional layers, then introduces Bag-of-Words Layers (BoWL) to generate visual words for each individual feature map. We also introduce a novel learning algorithm to reinforce the sparsity of the generated E @math BoWs model, which further ensures the time and memory efficiency. We evaluate the proposed E @math BoWs model on several image search datasets including CIFAR-10, CIFAR-100, MIRFLICKR-25K and NUS-WIDE. Experimental results show that our method achieves promising accuracy and efficiency compared with recent deep learning based retrieval works.
Several works have been proposed to enhance the discriminative power of the BoWs model with respect to semantic cues @cite_37 @cite_11 @cite_44 . Wu @cite_37 propose an off-line distance metric learning scheme that maps related features to the same visual words to generate an optimized codebook. Wu @cite_11 present an on-line metric learning algorithm that improves the BoWs model by optimizing the proposed semantic loss. Zhang @cite_44 propose to co-index semantic attributes into the inverted index generated from local features so that it conveys more semantic cues. However, most of these works require extra computation either in the off-line indexing or the on-line retrieval stage. Moreover, since these models are generated by many independent steps, they are difficult to optimize jointly for better efficiency and accuracy.
{ "cite_N": [ "@cite_44", "@cite_37", "@cite_11" ], "mid": [ "1997814073", "2147277317", "2091753323" ], "abstract": [ "In content-based image retrieval, inverted indexes allow fast access to database images and summarize all knowledge about the database. Indexing multiple clues of image contents allows retrieval algorithms search for relevant images from different perspectives, which is appealing to deliver satisfactory user experiences. However, when incorporating diverse image features during online retrieval, it is challenging to ensure retrieval efficiency and scalability. In this paper, for large-scale image retrieval, we propose a semantic-aware co-indexing algorithm to jointly embed two strong cues into the inverted indexes: 1) local invariant features that are robust to delineate low-level image contents, and 2) semantic attributes from large-scale object recognition that may reveal image semantic meanings. Specifically, for an initial set of inverted indexes of local features, we utilize semantic attributes to filter out isolated images and insert semantically similar images to this initial set. Encoding these two distinct and complementary cues together effectively enhances the discriminative capability of inverted indexes. Such co-indexing operations are totally off-line and introduce small computation overhead to online retrieval, because only local features but no semantic attributes are employed for the query. Hence, this co-indexing is different from existing image retrieval methods fusing multiple features or retrieval results. Extensive experiments and comparisons with recent retrieval methods manifest the competitive performance of our method.", "The Bag-of-Words (BoW) model is a promising image representation technique for image categorization and annotation tasks. One critical limitation of existing BoW models is that much semantic information is lost during the codebook generation process, an important step of BoW. This is because the codebook generated by BoW is often obtained via building the codebook simply by clustering visual features in Euclidian space. However, visual features related to the same semantics may not distribute in clusters in the Euclidian space, which is primarily due to the semantic gap between low-level features and high-level semantics. In this paper, we propose a novel scheme to learn optimized BoW models, which aims to map semantically related features to the same visual words. In particular, we consider the distance between semantically identical features as a measurement of the semantic gap, and attempt to learn an optimized codebook by minimizing this gap, aiming to achieve the minimal loss of the semantics. We refer to such kind of novel codebook as semantics-preserving codebook (SPC) and the corresponding model as the Semantics-Preserving Bag-of-Words (SPBoW) model. Extensive experiments on image annotation and object detection tasks with public testbeds from MIT's Labelme and PASCAL VOC challenge databases show that the proposed SPC learning scheme is effective for optimizing the codebook generation process, and the SPBoW model is able to greatly enhance the performance of the existing BoW model.", "The authors present an online semantics preserving, metric learning technique for improving the bag-of-words model and addressing the semantic-gap issue. 
This article investigates the challenge of reducing the semantic gap for building BoW models for image representation; propose a novel OSPML algorithm for enhancing BoW by minimizing the semantic loss, which is efficient and scalable for enhancing BoW models for large-scale applications; apply the proposed technique for large-scale image annotation and object recognition; and compare it to the state of the art." ] }
1709.05788
2755164689
One-stage object detectors such as SSD or YOLO have already shown promising accuracy with small memory footprint and fast speed. However, it is widely recognized that one-stage detectors have difficulty in detecting small objects while they are competitive with two-stage methods on large objects. In this paper, we investigate how to alleviate this problem starting from the SSD framework. Due to their pyramidal design, the lower layer that is responsible for small objects lacks strong semantics (e.g., contextual information). We address this problem by introducing a feature combining module that spreads out the strong semantics in a top-down manner. Our final model, the StairNet detector, unifies the multi-scale representations and semantic distribution effectively. Experiments on PASCAL VOC 2007 and PASCAL VOC 2012 datasets demonstrate that StairNet significantly improves the weakness of SSD and outperforms the other state-of-the-art one-stage detectors.
Detection frameworks were dominated by the sliding-window paradigm for many years. These methods relied heavily on hand-crafted features such as HOG @cite_45 . However, after the dramatic performance boost brought by R-CNN @cite_34 , which combines an object proposal mechanism @cite_24 with a powerful CNN classifier, traditional methods were surpassed in a short period of time. The R-CNN detector has been improved over the years in terms of both speed and accuracy. Recently, Faster R-CNN @cite_4 integrated the proposal generation module and the Fast R-CNN @cite_46 classifier into a single CNN. Many researchers adopted the @cite_4 framework and proposed numerous extensions. These two-stage approaches have consistently occupied the top entries of challenging benchmarks so far. However, the two-stage design hurts detection efficiency: such detectors suffer from high memory usage and slow inference. This motivates building one-stage detectors that predict outputs in a proposal-free manner.
{ "cite_N": [ "@cite_4", "@cite_24", "@cite_45", "@cite_46", "@cite_34" ], "mid": [ "2953106684", "2088049833", "2161969291", "", "2102605133" ], "abstract": [ "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.", "This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99 recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http: disi.unitn.it uijlings SelectiveSearch.html ).", "We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. 
The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.", "", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn." ] }
1709.05788
2755164689
One-stage object detectors such as SSD or YOLO have already shown promising accuracy with small memory footprint and fast speed. However, it is widely recognized that one-stage detectors have difficulty in detecting small objects while they are competitive with two-stage methods on large objects. In this paper, we investigate how to alleviate this problem starting from the SSD framework. Due to their pyramidal design, the lower layer that is responsible for small objects lacks strong semantics (e.g., contextual information). We address this problem by introducing a feature combining module that spreads out the strong semantics in a top-down manner. Our final model, the StairNet detector, unifies the multi-scale representations and semantic distribution effectively. Experiments on PASCAL VOC 2007 and PASCAL VOC 2012 datasets demonstrate that StairNet significantly improves the weakness of SSD and outperforms the other state-of-the-art one-stage detectors.
OverFeat @cite_32 was the first CNN-based one-stage object detector, using the sliding-window paradigm. YOLO @cite_3 and SSD @cite_44 have recently been proposed for real-time detection. They are fast single-stage methods that divide an image into multiple grids and simultaneously predict bounding boxes and class confidences. Unlike YOLO, SSD uses multiple in-network feature maps to detect objects within specified size ranges. This makes SSD more robust than YOLO to varying object shapes and sizes. We adopt the SSD framework as our starting point; a schematic of such multi-scale prediction heads is sketched below.
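To make the mechanism concrete, the following is a minimal, illustrative PyTorch sketch of SSD-style multi-scale prediction heads. It is not the published SSD implementation; the channel counts, number of classes, and boxes per cell are assumed values for illustration.

```python
# Illustrative sketch of SSD-style multi-scale prediction heads
# (assumed channel/box counts; not the official SSD code).
import torch
import torch.nn as nn

class MultiScaleHeads(nn.Module):
    def __init__(self, in_channels=(512, 1024, 512), num_classes=21, boxes_per_cell=4):
        super().__init__()
        # One classification head and one localization head per feature map.
        self.cls_heads = nn.ModuleList(
            [nn.Conv2d(c, boxes_per_cell * num_classes, 3, padding=1) for c in in_channels])
        self.loc_heads = nn.ModuleList(
            [nn.Conv2d(c, boxes_per_cell * 4, 3, padding=1) for c in in_channels])

    def forward(self, feature_maps):
        # Fine (high-resolution) maps are paired with small default boxes and
        # coarse maps with large ones, so each layer covers one size range.
        cls_out = [h(f) for h, f in zip(self.cls_heads, feature_maps)]
        loc_out = [h(f) for h, f in zip(self.loc_heads, feature_maps)]
        return cls_out, loc_out

# Example: three feature maps of decreasing resolution from a backbone.
maps = [torch.randn(1, 512, 38, 38), torch.randn(1, 1024, 19, 19), torch.randn(1, 512, 10, 10)]
cls_out, loc_out = MultiScaleHeads()(maps)
print([o.shape for o in cls_out])
```

Note how the per-layer heads are purely convolutional, which is what lets SSD skip the proposal and resampling stages of two-stage detectors.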
{ "cite_N": [ "@cite_44", "@cite_32", "@cite_3" ], "mid": [ "2193145675", "1487583988", "" ], "abstract": [ "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.", "We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat.", "" ] }
1709.05788
2755164689
One-stage object detectors such as SSD or YOLO have already shown promising accuracy with small memory footprint and fast speed. However, it is widely recognized that one-stage detectors have difficulty in detecting small objects while they are competitive with two-stage methods on large objects. In this paper, we investigate how to alleviate this problem starting from the SSD framework. Due to their pyramidal design, the lower layer that is responsible for small objects lacks strong semantics (e.g., contextual information). We address this problem by introducing a feature combining module that spreads out the strong semantics in a top-down manner. Our final model, the StairNet detector, unifies the multi-scale representations and semantic distribution effectively. Experiments on PASCAL VOC 2007 and PASCAL VOC 2012 datasets demonstrate that StairNet significantly improves the weakness of SSD and outperforms the other state-of-the-art one-stage detectors.
A number of studies have shown that exploiting multiple layers within a CNN can improve detection and segmentation. HyperNet @cite_9 and ION @cite_35 concatenate the features from different layers and pool object proposals from the coupled layer. FCN @cite_48 and Hypercolumns @cite_2 upsample multiple layers and combine the partial scores of each layer for the final decision. SSD @cite_44 makes each layer predict objects of a certain scale by distributing default boxes of various scales across multiple layers. Similar to SSD, MS-CNN @cite_43 also uses multiple feature maps for prediction, and it introduces a deconvolution layer to increase feature-map resolution. FPN @cite_36 leverages the pyramidal shape of a CNN, augmenting it with nearest-neighbor upsampling and lateral connections so that strong semantics are built at all feature-map scales; a minimal sketch of such a top-down merge follows.
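The following is a hedged, minimal PyTorch illustration of an FPN-style top-down merge; the channel widths are assumptions for illustration, not the published implementation.

```python
# Minimal sketch of an FPN-style top-down merge: upsample a coarse,
# semantically strong map and add a laterally projected finer map.
# Channel widths below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopDownMerge(nn.Module):
    def __init__(self, fine_channels, coarse_channels, out_channels=256):
        super().__init__()
        self.lateral = nn.Conv2d(fine_channels, out_channels, 1)   # 1x1 lateral connection
        self.reduce = nn.Conv2d(coarse_channels, out_channels, 1)  # match channel widths
        self.smooth = nn.Conv2d(out_channels, out_channels, 3, padding=1)

    def forward(self, fine, coarse):
        # Nearest-neighbor upsampling spreads coarse semantics to the finer scale.
        top_down = F.interpolate(self.reduce(coarse), size=fine.shape[-2:], mode="nearest")
        return self.smooth(self.lateral(fine) + top_down)

# Example: merge a 10x10 coarse map into a 19x19 fine map.
merged = TopDownMerge(512, 1024)(torch.randn(1, 512, 19, 19), torch.randn(1, 1024, 10, 10))
print(merged.shape)  # torch.Size([1, 256, 19, 19])
```

Applying such a merge level by level is what gives every scale of the pyramid access to high-level context.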
{ "cite_N": [ "@cite_35", "@cite_36", "@cite_48", "@cite_9", "@cite_44", "@cite_43", "@cite_2" ], "mid": [ "2951829713", "2949533892", "2952632681", "2337897552", "2193145675", "2490270993", "1948751323" ], "abstract": [ "It is well known that contextual and multi-scale representations are important for accurate visual recognition. In this paper we present the Inside-Outside Net (ION), an object detector that exploits information both inside and outside the region of interest. Contextual information outside the region of interest is integrated using spatial recurrent neural networks. Inside, we use skip pooling to extract information at multiple scales and levels of abstraction. Through extensive experiments we evaluate the design space and provide readers with an overview of what tricks of the trade are important. ION improves state-of-the-art on PASCAL VOC 2012 object detection from 73.9 to 76.4 mAP. On the new and more challenging MS COCO dataset, we improve state-of-art-the from 19.7 to 33.1 mAP. In the 2015 MS COCO Detection Challenge, our ION model won the Best Student Entry and finished 3rd place overall. As intuition suggests, our detection results provide strong evidence that context and multi-scale representations improve small object detection.", "Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But recent deep learning object detectors have avoided pyramid representations, in part because they are compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using FPN in a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. 
Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.", "Almost all of the current top-performing object detection networks employ region proposals to guide the search for object instances. State-of-the-art region proposal methods usually need several thousand proposals to get high recall, thus hurting the detection efficiency. Although the latest Region Proposal Network method gets promising detection accuracy with several hundred proposals, it still struggles in small-size object detection and precise localization (e.g., large IoU thresholds), mainly due to the coarseness of its feature maps. In this paper, we present a deep hierarchical network, namely HyperNet, for handling region proposal generation and object detection jointly. Our HyperNet is primarily based on an elaborately designed Hyper Feature which aggregates hierarchical feature maps first and then compresses them into a uniform space. The Hyper Features well incorporate deep but highly semantic, intermediate but really complementary, and shallow but naturally high-resolution features of the image, thus enabling us to construct HyperNet by sharing them both in generating proposals and detecting objects via an end-to-end joint training strategy. For the deep VGG16 model, our method achieves completely leading recall and state-of-the-art object detection accuracy on PASCAL VOC 2007 and 2012 using only 100 proposals per image. It runs with a speed of 5 fps (including all steps) on a GPU, thus having the potential for real-time processing.", "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.", "A unified deep neural network, denoted the multi-scale CNN (MS-CNN), is proposed for fast multi-scale object detection. The MS-CNN consists of a proposal sub-network and a detection sub-network. 
In the proposal sub-network, detection is performed at multiple output layers, so that receptive fields match objects of different scales. These complementary scale-specific detectors are combined to produce a strong multi-scale object detector. The unified network is learned end-to-end, by optimizing a multi-task loss. Feature upsampling by deconvolution is also explored, as an alternative to input upsampling, to reduce the memory and computation costs. State-of-the-art object detection performance, at up to 15 fps, is reported on datasets, such as KITTI and Caltech, containing a substantial number of small objects.", "Recognition algorithms based on convolutional networks (CNNs) typically use the output of the last layer as a feature representation. However, the information in this layer may be too coarse spatially to allow precise localization. On the contrary, earlier layers may be precise in localization but will not capture semantics. To get the best of both worlds, we define the hypercolumn at a pixel as the vector of activations of all CNN units above that pixel. Using hypercolumns as pixel descriptors, we show results on three fine-grained localization tasks: simultaneous detection and segmentation [22], where we improve state-of-the-art from 49.7 mean APr [22] to 60.0, keypoint localization, where we get a 3.3 point boost over [20], and part labeling, where we show a 6.6 point gain over a strong baseline." ] }
1709.05788
2755164689
One-stage object detectors such as SSD or YOLO have already shown promising accuracy with small memory footprint and fast speed. However, it is widely recognized that one-stage detectors have difficulty in detecting small objects while they are competitive with two-stage methods on large objects. In this paper, we investigate how to alleviate this problem starting from the SSD framework. Due to their pyramidal design, the lower layer that is responsible for small objects lacks strong semantics (e.g., contextual information). We address this problem by introducing a feature combining module that spreads out the strong semantics in a top-down manner. Our final model, the StairNet detector, unifies the multi-scale representations and semantic distribution effectively. Experiments on PASCAL VOC 2007 and PASCAL VOC 2012 datasets demonstrate that StairNet significantly improves the weakness of SSD and outperforms the other state-of-the-art one-stage detectors.
Global context is well known to play a critical role in visual recognition problems, and recent architectures exploit such strong semantics for their specific tasks. DPM @cite_15 integrated a global root model and finer local part models to represent deformable objects efficiently. Viewpoints and Keypoints @cite_31 leverages global viewpoint estimation to improve local keypoint predictions. RRC @cite_11 transfers each layer's semantic information to other layers by stacking pooling and deconvolution layers on top of SSD. @cite_8 @cite_55 @cite_25 have shown that an encoder-decoder (hourglass) shape is effective for propagating context information. FPN @cite_36 builds rich semantics at all levels by combining the layers. CoupleNet @cite_42 introduces a global FCN branch to extract global semantics. All of these show that an effective combination of strong semantics (e.g., global context information) and fine local details improves discrimination performance. Inspired by these works, we propose a top-down feature combining module to spread the semantics effectively. Our proposed StairNet follows the SSD-style pyramid and thus inherits the advantages of SSD, while producing more accurate models. We show that our model is simple and effective, outperforming current state-of-the-art one-stage detectors.
{ "cite_N": [ "@cite_31", "@cite_8", "@cite_36", "@cite_55", "@cite_42", "@cite_15", "@cite_25", "@cite_11" ], "mid": [ "2951900634", "2952637581", "2949533892", "", "2743620784", "2168356304", "2579985080", "2953226057" ], "abstract": [ "We characterize the problem of pose estimation for rigid objects in terms of determining viewpoint to explain coarse pose and keypoint prediction to capture the finer details. We address both these tasks in two different settings - the constrained setting with known bounding boxes and the more challenging detection setting where the aim is to simultaneously detect and correctly estimate pose of objects. We present Convolutional Neural Network based architectures for these and demonstrate that leveraging viewpoint estimates can substantially improve local appearance based keypoint predictions. In addition to achieving significant improvements over state-of-the-art in the above tasks, we analyze the error modes and effect of object characteristics on performance to guide future efforts towards this goal.", "We propose a novel semantic segmentation algorithm by learning a deconvolution network. We learn the network on top of the convolutional layers adopted from VGG 16-layer net. The deconvolution network is composed of deconvolution and unpooling layers, which identify pixel-wise class labels and predict segmentation masks. We apply the trained network to each proposal in an input image, and construct the final semantic segmentation map by combining the results from all proposals in a simple manner. The proposed algorithm mitigates the limitations of the existing methods based on fully convolutional networks by integrating deep deconvolution network and proposal-wise prediction; our segmentation method typically identifies detailed structures and handles objects in multiple scales naturally. Our network demonstrates outstanding performance in PASCAL VOC 2012 dataset, and we achieve the best accuracy (72.5 ) among the methods trained with no external data through ensemble with the fully convolutional network.", "Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But recent deep learning object detectors have avoided pyramid representations, in part because they are compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using FPN in a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.", "", "The region-based Convolutional Neural Network (CNN) detectors such as Faster R-CNN or R-FCN have already shown promising results for object detection by combining the region proposal subnetwork and the classification subnetwork together. 
Although R-FCN has achieved higher detection speed while keeping the detection performance, the global structure information is ignored by the position-sensitive score maps. To fully explore the local and global properties, in this paper, we propose a novel fully convolutional network, named as CoupleNet, to couple the global structure with local parts for object detection. Specifically, the object proposals obtained by the Region Proposal Network (RPN) are fed into the the coupling module which consists of two branches. One branch adopts the position-sensitive RoI (PSRoI) pooling to capture the local part information of the object, while the other employs the RoI pooling to encode the global and context information. Next, we design different coupling strategies and normalization ways to make full use of the complementary advantages between the global and local branches. Extensive experiments demonstrate the effectiveness of our approach. We achieve state-of-the-art results on all three challenging datasets, i.e. a mAP of 82.7 on VOC07, 80.4 on VOC12, and 34.4 on COCO. Codes will be made publicly available.", "We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.", "The main contribution of this paper is an approach for introducing additional context into state-of-the-art general object detection. To achieve this we first combine a state-of-the-art classifier (Residual-101[14]) with a fast detection framework (SSD[18]). We then augment SSD+Residual-101 with deconvolution layers to introduce additional large-scale context in object detection and improve accuracy, especially for small objects, calling our resulting system DSSD for deconvolutional single shot detector. While these two contributions are easily described at a high-level, a naive implementation does not succeed. Instead we show that carefully adding additional stages of learned transformations, specifically a module for feed-forward connections in deconvolution and a new output module, enables this new approach and forms a potential way forward for further detection research. Results are shown on both PASCAL VOC and COCO detection. Our DSSD with @math input achieves 81.5 mAP on VOC2007 test, 80.0 mAP on VOC2012 test, and 33.2 mAP on COCO, outperforming a state-of-the-art method R-FCN[3] on each dataset.", "Most of the recent successful methods in accurate object detection and localization used some variants of R-CNN style two stage Convolutional Neural Networks (CNN) where plausible regions were proposed in the first stage then followed by a second stage for decision refinement. 
Despite the simplicity of training and the efficiency in deployment, the single stage detection methods have not been as competitive when evaluated in benchmarks consider mAP for high IoU thresholds. In this paper, we proposed a novel single stage end-to-end trainable object detection network to overcome this limitation. We achieved this by introducing Recurrent Rolling Convolution (RRC) architecture over multi-scale feature maps to construct object classifiers and bounding box regressors which are \"deep in context\". We evaluated our method in the challenging KITTI dataset which measures methods under IoU threshold of 0.7. We showed that with RRC, a single reduced VGG-16 based model already significantly outperformed all the previously published results. At the time this paper was written our models ranked the first in KITTI car detection (the hard level), the first in cyclist detection and the second in pedestrian detection. These results were not reached by the previous single stage methods. The code is publicly available." ] }
1709.06039
2754378822
We consider the problem of developing robots that navigate like pedestrians on sidewalks through city centers for performing various tasks including delivery and surveillance. One particular challenge for such robots is crossing streets without pedestrian traffic lights. To solve this task the robot has to decide based on its sensory input if the road is clear. In this work, we propose a novel multi-modal learning approach for the problem of autonomous street crossing. Our approach solely relies on laser and radar data and learns a classifier based on Random Forests to predict when it is safe to cross the road. We present extensive experimental evaluations using real-world data collected from multiple street crossing situations which demonstrate that our approach yields a safe and accurate street crossing behavior and generalizes well over different types of situations. A comparison to alternative methods demonstrates the advantages of our approach.
Research on the problem of safe autonomous navigation across intersections has been an active topic in the context of self-driving vehicles. However, few approaches have addressed this problem for pedestrian robots. bauer2009autonomous present an autonomous pedestrian robot that navigates in outdoor urban environments @cite_10 . The robot is capable of navigating through signalized crossings by detecting and classifying traffic lights and signals at the intersection. baker2005automated use a vision-based system for autonomous street crossing targeted at assistive robots @cite_3 . Using cameras mounted to the left and right of the platform, they track oncoming vehicles to determine whether it is safe to cross. Once the decision to cross the street has been made, they continue to track oncoming vehicles to maintain an updated measure of the intersection's safety. Due to the short range of the camera, the approach can only detect nearby vehicles in a two-lane street.
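As a rough illustration of the kind of decision stage discussed here, the following sketch trains a Random-Forest "safe to cross" classifier on hand-built track features. The feature names, values, and labels are invented placeholders, not the authors' actual pipeline.

```python
# Hypothetical sketch: a Random-Forest crossing classifier over features
# extracted from laser/radar vehicle tracks. All data below is invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Per-situation features, e.g. [closest track distance (m),
# closing speed (m/s), time to contact (s), number of tracked vehicles].
X = np.array([[45.0, 1.0, 40.0, 1],
              [12.0, 9.0, 1.3, 3],
              [80.0, 0.0, 999.0, 0],
              [6.0, 12.0, 0.5, 2]])
y = np.array([1, 0, 1, 0])  # 1 = safe to cross, 0 = unsafe

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[30.0, 4.0, 7.5, 1]]))  # predicted crossing decision
```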
{ "cite_N": [ "@cite_10", "@cite_3" ], "mid": [ "2087259295", "2104941426" ], "abstract": [ "The Autonomous City Explorer (ACE) project combines research from autonomous outdoor navigation and human-robot interaction. The ACE robot is capable of navigating unknown urban environments without the use of GPS data or prior map knowledge. It finds its way by interacting with pedestrians in a natural and intuitive way and building a topological representation of its surroundings. In a recent experiment the robot managed to successfully travel a 1.5 km distance from the campus of the Technische Universitat Munchen to Marienplatz, the central square of Munich. This article describes the principles and system components for navigation in urban environments, information retrieval through natural human-robot interaction, the construction of a suitable semantic representation as well as results from the field experiment.", "Robotic systems that assist users by crossing a street safely and automatically would benefit people with vision and or mobility impairments. This paper describes progress toward a street-crossing system for an assistive robotic system. The system detects and tracks vehicles in real time. It reasons about extracted motion regions in order to decide when it is safe to cross." ] }
1709.05859
2788910306
This paper considers a class of reinforcement-based learning (namely, perturbed learning automata) and provides a stochastic-stability analysis in repeatedly-played, positive-utility, finite strategic-form games. Prior work in this class of learning dynamics primarily analyzes asymptotic convergence through stochastic approximations, where convergence can be associated with the limit points of an ordinary-differential equation (ODE). However, analyzing global convergence through an ODE-approximation requires the existence of a Lyapunov or a potential function, which naturally restricts the analysis to a fine class of games. To overcome these limitations, this paper introduces an alternative framework for analyzing asymptotic convergence that is based upon an explicit characterization of the invariant probability measure of the induced Markov chain. We further provide a methodology for computing the invariant probability measure in positive-utility games, together with an illustration in the context of coordination games.
A type of learning dynamics that is quite closely related to the dynamics of Table is the discrete-time version of (cf., @cite_12 ). There have been several variations with respect to the selection of the step-size sequence. For example, Arthur @cite_15 considered a similar rule, with @math and step-size @math , for some positive constant @math and for @math (in the place of the constant step-size @math of )). A comparable model is used by Hopkins and Posch in @cite_0 , with @math , where @math is the accumulated benefit of agent @math up to time @math , which gives rise to the urn process of Erev-Roth @cite_7 . Some similarities are also shared with Cross's learning model of @cite_25 , where @math and @math , and with its modification presented by Leslie in @cite_1 , where @math , instead, decreases with time. A schematic implementation of this family of updates is sketched below.
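To make the common structure of these rules concrete, the following Python sketch implements the basic reinforcement update of a mixed strategy toward the played action. The three step-size schedules in the comments are stylized stand-ins for the cited variants, not exact reproductions of their formulas.

```python
# Schematic reinforcement-learning-automata update: the mixed strategy x
# moves toward the unit vector of the played action, scaled by the received
# (positive) utility and a step size. Schedules below are stylized variants.
import numpy as np

def update_strategy(x, action, utility, step):
    e = np.zeros_like(x)
    e[action] = 1.0
    # Since (e - x) sums to zero, x stays on the probability simplex
    # as long as step * utility <= 1.
    return x + step * utility * (e - x)

rng = np.random.default_rng(0)
x = np.ones(3) / 3.0
cumulative = 1.0
for n in range(1, 2000):
    a = rng.choice(3, p=x)
    u = rng.uniform(0.1, 1.0)        # positive utilities, as assumed in the paper
    cumulative += u
    step = 0.05                      # constant step size (perturbed-automata style)
    # step = 1.0 / (1.0 + n)         # Arthur-style decreasing step size
    # step = 1.0 / cumulative        # Erev-Roth-style urn normalization
    x = update_strategy(x, a, u, step)
    x = x / x.sum()                  # guard against floating-point drift
print(x)
```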
{ "cite_N": [ "@cite_7", "@cite_1", "@cite_0", "@cite_15", "@cite_25", "@cite_12" ], "mid": [ "1564229172", "", "2096702014", "2030329396", "1995622844", "261814290" ], "abstract": [ "The authors examine learning in all experiments they could locate involving one hundred periods or more of games with a unique equilibrium in mixed strategies, and in a new experiment. They study both the ex post ('best fit') descriptive power of learning models, and their ex ante predictive power, by simulating each experiment using parameters estimated from the other experiments. Even a one-parameter reinforcement learning model robustly outperforms the equilibrium predictions. Predictive power is improved by adding 'forgetting' and 'experimentation,' or by allowing greater rationality as in probabilistic fictitious play. Implications for developing a low-rationality, cognitive game theory are discussed. Copyright 1998 by American Economic Association.", "", "This paper investigates the properties of the most common form of reinforcement learning (the \"basic model\" of Erev and Roth, American Economic Review, 88, 848-881, 1998). Stochastic approximation theory has been used to analyse the local stability of fixed points under this learning process. However, as we show, when such points are on the boundary of the state space, for example, pure strategy equilibria, standard results from the theory of stochastic approximation do not apply. We offer what we believe to be the correct treatment of boundary points, and provide a new and more general result: this model of learning converges with zero probability to fixed points which are unstable under the Maynard Smith or adjusted version of the evolutionary replicator dynamics. For two player games these are the fixed points that are linearly unstable under the standard replicator dynamics.", "This paper explores the idea of constructing theoretical economic agents that behave like actual human agents and using them in neoclassical economic models. It does this in a repeated-choice setting by postulating \"artificial agents\" who use a learning algorithm calibrated against human learning data from psychological experiments. The resulting calibrated algorithm appears to replicate human learning behavior to a high degree and reproduces several \"stylized facts\" of learning. It can, therefore, be used to replace the idealized, perfectly rational agents in appropriate neoclassical models with \"calibrated agents\" that represent actual human behavior. The paper discusses the possibilities of using the algorithm to represent human learning in normal-form stage games and in more general neoclassical models in economics. It explores the likelihood of convergence to long-run optimality and to Nash behavior, and the \"characteristic learning time\" implicit in human adaptation in the economy.", "This paper considers a version of Bush and Mosteller's stochastic learning theory in the context of games. We compare this model of learning to a model of biological evolution. The purpose is to investigate analogies between learning and evolution. We and that in the continuous time limit the biological model coincides with the deterministic, continuous time replicator process. We give conditions under which the same is true for the learning model. 
For the case that these conditions do not hold, we show that the replicator process continues to play an important role in characterising the continuous time limit of the learning model, but that a di®erent e®ect ( Matching\") enters as well.(This abstract was borrowed from another version of this item.)", "1 Preliminaries.- 1.1 Introduction.- 1.2 Measures and Functions.- 1.3 Weak Topologies.- 1.4 Convergence of Measures.- 1.5 Complements.- 1.6 Notes.- I Markov Chains and Ergodicity.- 2 Markov Chains and Ergodic Theorems.- 2.1 Introduction.- 2.2 Basic Notation and Definitions.- 2.3 Ergodic Theorems.- 2.4 The Ergodicity Property.- 2.5 Pathwise Results.- 2.6 Notes.- 3 Countable Markov Chains.- 3.1 Introduction.- 3.2 Classification of States and Class Properties.- 3.3 Limit Theorems.- 3.4 Notes.- 4 Harris Markov Chains.- 4.1 Introduction.- 4.2 Basic Definitions and Properties.- 4.3 Characterization of Harris recurrence.- 4.4 Sufficient Conditions for P.H.R.- 4.5 Harris and Doeblin Decompositions.- 4.6 Notes.- 5 Markov Chains in Metric Spaces.- 5.1 Introduction.- 5.2 The Limit in Ergodic Theorems.- 5.3 Yosida's Ergodic Decomposition.- 5.4 Pathwise Results.- 5.5 Proofs.- 5.6 Notes.- 6 Classification of Markov Chains via Occupation Measures.- 6.1 Introduction.- 6.2 A Classification.- 6.3 On the Birkhoff Individual Ergodic Theorem.- 6.4 Notes.- II Further Ergodicity Properties.- 7 Feller Markov Chains.- 7.1 Introduction.- 7.2 Weak-and Strong-Feller Markov Chains.- 7.3 Quasi Feller Chains.- 7.4 Notes.- 8 The Poisson Equation.- 8.1 Introduction.- 8.2 The Poisson Equation.- 8.3 Canonical Pairs.- 8.4 The Cesaro-Averages Approach.- 8.5 The Abelian Approach.- 8.6 Notes.- 9 Strong and Uniform Ergodicity.- 9.1 Introduction.- 9.2 Strong and Uniform Ergodicity.- 9.3 Weak and Weak Uniform Ergodicity.- 9.4 Notes.- III Existence and Approximation of Invariant Probability Measures.- 10 Existence of Invariant Probability Measures.- 10.1 Introduction and Statement of the Problems.- 10.2 Notation and Definitions.- 10.3 Existence Results.- 10.4 Markov Chains in Locally Compact Separable Metric Spaces.- 10.5 Other Existence Results in Locally Compact Separable Metric Spaces.- 10.6 Technical Preliminaries.- 10.7 Proofs.- 10.8 Notes.- 11 Existence and Uniqueness of Fixed Points for Markov Operators.- 11.1 Introduction and Statement of the Problems.- 11.2 Notation and Definitions.- 11.3 Existence Results.- 11.4 Proofs.- 11.5 Notes.- 12 Approximation Procedures for Invariant Probability Measures.- 12.1 Introduction.- 12.2 Statement of the Problem and Preliminaries.- 12.3 An Approximation Scheme.- 12.4 A Moment Approach for a Special Class of Markov Chains.- 12.5 Notes." ] }
1709.05859
2788910306
This paper considers a class of reinforcement-based learning (namely, perturbed learning automata) and provides a stochastic-stability analysis in repeatedly-played, positive-utility, finite strategic-form games. Prior work in this class of learning dynamics primarily analyzes asymptotic convergence through stochastic approximations, where convergence can be associated with the limit points of an ordinary-differential equation (ODE). However, analyzing global convergence through an ODE-approximation requires the existence of a Lyapunov or a potential function, which naturally restricts the analysis to a fine class of games. To overcome these limitations, this paper introduces an alternative framework for analyzing asymptotic convergence that is based upon an explicit characterization of the invariant probability measure of the induced Markov chain. We further provide a methodology for computing the invariant probability measure in positive-utility games, together with an illustration in the context of coordination games.
Although convergence to non-Nash action profiles can be excluded for sufficiently small @math , establishing convergence to action profiles that are Nash equilibria () may still be an issue. This is desirable in the context of coordination games @cite_17 , where Pareto-efficient outcomes are usually pure Nash equilibria (see, e.g., the definition of a coordination game in @cite_26 ). As shown in @cite_2 , convergence to pure Nash equilibria can be guaranteed only under strong conditions on the utility function. For example, as shown in [Proposition 8] ChasparisShammaRantzer15 , and under the ODE method for stochastic approximations, it requires a) the existence of a potential function, and b) conditions on the Jacobian matrix of the potential function. Even if a potential function does exist, verifying condition (b) is practically infeasible for games of more than 2 players @cite_2 .
{ "cite_N": [ "@cite_26", "@cite_2", "@cite_17" ], "mid": [ "2570514057", "2015836240", "" ], "abstract": [ "We consider the problem of distributed convergence to efficient outcomes in coordination games through dynamics based on aspiration learning. Our first contribution is the characterization of the asymptotic behavior of the induced Markov chain of the iterated process in terms of an equivalent finite-state Markov chain. We then characterize explicitly the behavior of the proposed aspiration learning in a generalized version of coordination games, examples of which include network formation and common-pool games. In particular, we show that in generic coordination games the frequency at which an efficient action profile is played can be made arbitrarily large. Although convergence to efficient outcomes is desirable, in several coordination games, such as common-pool games, attainability of fair outcomes, i.e., sequences of plays at which players experience highly rewarding returns with the same frequency, might also be of special interest. To this end, we demonstrate through analysis and simulations that aspiration learning also establishes fair outcomes in all symmetric coordination games, including common-pool games. @PARASPLIT Read More: http: epubs.siam.org doi abs 10.1137 110852462", "A particle in Rd moves in discrete time. The size of the nth step is of order 1 n and when the particle is at a position v the expectation of the next step is in the direction F(v) for some fixed vector function F of class C2. It is well known that the only possible points p where v(n) may converge are those satisfying F(p) = 0. This paper proves that convergence to some of these points is in fact impossible as long as the \"noise\" -the difference between each step and its expectation-is sufficiently omnidirectional. The points where convergence is impossible are the unstable critical points for the autonomous flow (d dt)v(t) = F(v(t)). This generalizes several known results that say convergence is impossible at a repelling node of the flow.", "" ] }
1709.05859
2788910306
This paper considers a class of reinforcement-based learning (namely, perturbed learning automata) and provides a stochastic-stability analysis in repeatedly-played, positive-utility, finite strategic-form games. Prior work in this class of learning dynamics primarily analyzes asymptotic convergence through stochastic approximations, where convergence can be associated with the limit points of an ordinary-differential equation (ODE). However, analyzing global convergence through an ODE-approximation requires the existence of a Lyapunov or a potential function, which naturally restricts the analysis to a fine class of games. To overcome these limitations, this paper introduces an alternative framework for analyzing asymptotic convergence that is based upon an explicit characterization of the invariant probability measure of the induced Markov chain. We further provide a methodology for computing the invariant probability measure in positive-utility games, together with an illustration in the context of coordination games.
On the other hand, an important side benefit of using this class of dynamics is the indirect "filtering" of the utility-function measurements (through the formulation of the strategy vectors in )). This is demonstrated, for example, in @cite_0 for the Erev-Roth model @cite_7 , where the robustness of the convergence/non-convergence asymptotic results is presented under the presence of noise in the utility measurements.
{ "cite_N": [ "@cite_0", "@cite_7" ], "mid": [ "2096702014", "1564229172" ], "abstract": [ "This paper investigates the properties of the most common form of reinforcement learning (the \"basic model\" of Erev and Roth, American Economic Review, 88, 848-881, 1998). Stochastic approximation theory has been used to analyse the local stability of fixed points under this learning process. However, as we show, when such points are on the boundary of the state space, for example, pure strategy equilibria, standard results from the theory of stochastic approximation do not apply. We offer what we believe to be the correct treatment of boundary points, and provide a new and more general result: this model of learning converges with zero probability to fixed points which are unstable under the Maynard Smith or adjusted version of the evolutionary replicator dynamics. For two player games these are the fixed points that are linearly unstable under the standard replicator dynamics.", "The authors examine learning in all experiments they could locate involving one hundred periods or more of games with a unique equilibrium in mixed strategies, and in a new experiment. They study both the ex post ('best fit') descriptive power of learning models, and their ex ante predictive power, by simulating each experiment using parameters estimated from the other experiments. Even a one-parameter reinforcement learning model robustly outperforms the equilibrium predictions. Predictive power is improved by adding 'forgetting' and 'experimentation,' or by allowing greater rationality as in probabilistic fictitious play. Implications for developing a low-rationality, cognitive game theory are discussed. Copyright 1998 by American Economic Association." ] }
1709.05859
2788910306
This paper considers a class of reinforcement-based learning (namely, perturbed learning automata) and provides a stochastic-stability analysis in repeatedly-played, positive-utility, finite strategic-form games. Prior work in this class of learning dynamics primarily analyzes asymptotic convergence through stochastic approximations, where convergence can be associated with the limit points of an ordinary-differential equation (ODE). However, analyzing global convergence through an ODE-approximation requires the existence of a Lyapunov or a potential function, which naturally restricts the analysis to a fine class of games. To overcome these limitations, this paper introduces an alternative framework for analyzing asymptotic convergence that is based upon an explicit characterization of the invariant probability measure of the induced Markov chain. We further provide a methodology for computing the invariant probability measure in positive-utility games, together with an illustration in the context of coordination games.
This property is closely related to the existence of a potential function, as in the case of potential games @cite_31 . Similarly to the discrete-time replicator dynamics, convergence to non-Nash action profiles cannot be excluded when the step-size sequence is constant, even if the utility function satisfies @math . (The behavior under a decreasing step size is different, as [Proposition 2] ChasparisShammaRantzer15 has shown.) Furthermore, deriving conditions for excluding convergence to mixed strategy profiles in coordination games continues to be an issue, as in the discrete-time replicator dynamics.
{ "cite_N": [ "@cite_31" ], "mid": [ "2103151730" ], "abstract": [ "We present a view of cooperative control using the language of learning in games. We review the game-theoretic concepts of potential and weakly acyclic games, and demonstrate how several cooperative control problems, such as consensus and dynamic sensor coverage, can be formulated in these settings. Motivated by this connection, we build upon game-theoretic concepts to better accommodate a broader class of cooperative control problems. In particular, we extend existing learning algorithms to accommodate restricted action sets caused by the limitations of agent capabilities and group based decision making. Furthermore, we also introduce a new class of games called sometimes weakly acyclic games for time-varying objective functions and action sets, and provide distributed algorithms for convergence to an equilibrium." ] }
1709.05859
2788910306
This paper considers a class of reinforcement-based learning (namely, perturbed learning automata) and provides a stochastic-stability analysis in repeatedly-played, positive-utility, finite strategic-form games. Prior work in this class of learning dynamics primarily analyzes asymptotic convergence through stochastic approximations, where convergence can be associated with the limit points of an ordinary-differential equation (ODE). However, analyzing global convergence through an ODE-approximation requires the existence of a Lyapunov or a potential function, which naturally restricts the analysis to a fine class of games. To overcome these limitations, this paper introduces an alternative framework for analyzing asymptotic convergence that is based upon an explicit characterization of the invariant probability measure of the induced Markov chain. We further provide a methodology for computing the invariant probability measure in positive-utility games, together with an illustration in the context of coordination games.
Similar questions of convergence to Nash equilibria also appear in alternative reinforcement-based learning formulations, such as approximate dynamic programming and @math -learning. Usually, under @math -learning, players keep track of the discounted running-average reward received by each action, based on which optimal decisions are made (see, e.g., @cite_27 ); a minimal sketch of such an update is given below. Convergence to Nash equilibria can be accomplished under a stronger set of assumptions, which increases the computational complexity of the dynamics. For example, in the Nash-Q learning algorithm of @cite_3 , it is indirectly assumed that agents have full access to the joint action space and to the rewards received by the other agents.
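The following is a minimal, hedged Python sketch of the individual Q-learning scheme just described, seen from a single agent: a running-average value per own action combined with a smoothed (Boltzmann) best response. The payoff distribution and parameter values are illustrative stand-ins, not taken from the cited works.

```python
# Minimal sketch of individual Q-learning: each agent tracks a running
# average reward per own action and plays a smoothed best response.
import numpy as np

rng = np.random.default_rng(1)
num_actions, alpha, tau = 3, 0.1, 0.1
Q = np.zeros(num_actions)

def smooth_best_response(Q, tau):
    # Logit (Boltzmann) choice: concentrates on argmax(Q) as tau -> 0.
    z = np.exp((Q - Q.max()) / tau)
    return z / z.sum()

for t in range(2000):
    p = smooth_best_response(Q, tau)
    a = rng.choice(num_actions, p=p)
    r = rng.normal([0.2, 0.5, 0.3][a], 0.1)   # stand-in for the game payoff
    Q[a] += alpha * (r - Q[a])                # update only the played action
print(Q.round(3), smooth_best_response(Q, tau).round(3))
```

In a game, the stationarity assumed by this single-agent view breaks down, which is exactly why the convergence guarantees discussed above are so limited.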
{ "cite_N": [ "@cite_27", "@cite_3" ], "mid": [ "1967250398", "2120846115" ], "abstract": [ "The single-agent multi-armed bandit problem can be solved by an agent that learns the values of each action using reinforcement learning. However, the multi-agent version of the problem, the iterated normal form game, presents a more complex challenge, since the rewards available to each agent depend on the strategies of the others. We consider the behavior of value-based learning agents in this situation, and show that such agents cannot generally play at a Nash equilibrium, although if smooth best responses are used, a Nash distribution can be reached. We introduce a particular value-based learning algorithm, which we call individual Q-learning, and use stochastic approximation to study the asymptotic behavior, showing that strategies will converge to Nash distribution almost surely in 2-player zero-sum games and 2-player partnership games. Player-dependent learning rates are then considered, and it is shown that this extension converges in some games for which many algorithms, including the basic algorithm initially considered, fail to converge.", "We extend Q-learning to a noncooperative multiagent context, using the framework of general-sum stochastic games. A learning agent maintains Q-functions over joint actions, and performs updates based on assuming Nash equilibrium behavior over the current Q-values. This learning protocol provably converges given certain restrictions on the stage games (defined by Q-values) that arise during learning. Experiments with a pair of two-player grid games suggest that such restrictions on the game structure are not necessarily required. Stage games encountered during learning in both grid environments violate the conditions. However, learning consistently converges in the first grid game, which has a unique equilibrium Q-function, but sometimes fails to converge in the second, which has three different equilibrium Q-functions. In a comparison of offline learning performance in both games, we find agents are more likely to reach a joint optimal path with Nash Q-learning than with a single-agent Q-learning method. When at least one agent adopts Nash Q-learning, the performance of both agents is better than using single-agent Q-learning. We have also implemented an online version of Nash Q-learning that balances exploration with exploitation, yielding improved performance." ] }
1709.05859
2788910306
This paper considers a class of reinforcement-based learning (namely, perturbed learning automata) and provides a stochastic-stability analysis in repeatedly-played, positive-utility, finite strategic-form games. Prior work in this class of learning dynamics primarily analyzes asymptotic convergence through stochastic approximations, where convergence can be associated with the limit points of an ordinary-differential equation (ODE). However, analyzing global convergence through an ODE-approximation requires the existence of a Lyapunov or a potential function, which naturally restricts the analysis to a fine class of games. To overcome these limitations, this paper introduces an alternative framework for analyzing asymptotic convergence that is based upon an explicit characterization of the invariant probability measure of the induced Markov chain. We further provide a methodology for computing the invariant probability measure in positive-utility games, together with an illustration in the context of coordination games.
When the evaluation of the @math -values is totally independent, as in the individual @math -learning in @cite_27 , convergence to Nash equilibria has been shown only for 2-player zero-sum games and 2-player partnership games with countably many Nash equilibria. Currently, there exist no convergence results in multi-player games. To overcome this deficiency of @math -learning, in the context of stochastic dynamic games, reference @cite_8 employs an additional feature (motivated by @cite_19 ), namely exploration phases. In any such phase, agents use constant policies, which allows for an accurate computation of the optimal @math -factors; a rough sketch is given below. We may argue that the introduction of common exploration phases for all agents partially destroys the distributed nature of the dynamics, since it requires synchronization between agents.
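The sketch below illustrates one synchronized exploration phase. The environment object `env` is a hypothetical stand-in for the stochastic game dynamics (not an API from any cited work); the point is only that, with every agent holding a constant policy, each agent faces a stationary environment and its empirical average reward is a consistent estimate of the corresponding @math -factor.

```python
def exploration_phase(env, fixed_actions, phase_length):
    """One synchronized exploration phase: every agent repeats a fixed
    action, so the per-agent averages below estimate stationary
    Q-factors.  `env` is an illustrative stand-in for the game."""
    totals = [0.0] * len(fixed_actions)
    env.reset()
    for _ in range(phase_length):
        rewards = env.step(fixed_actions)       # joint play of constant policies
        for i, r in enumerate(rewards):
            totals[i] += r
    return [t / phase_length for t in totals]   # per-agent average reward
```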
{ "cite_N": [ "@cite_19", "@cite_27", "@cite_8" ], "mid": [ "2176451521", "1967250398", "2962990479" ], "abstract": [ "We consider repeated multiplayer games in which players repeatedly and simultaneously choose strategies from a finite set of available strategies according to some strategy adjustment process. We focus on the specific class of weakly acyclic games, which is particularly relevant for multiagent cooperative control problems. A strategy adjustment process determines how players select their strategies at any stage as a function of the information gathered over previous stages. Of particular interest are “payoff-based” processes in which, at any stage, players know only their own actions and (noise corrupted) payoffs from previous stages. In particular, players do not know the actions taken by other players and do not know the structural form of payoff functions. We introduce three different payoff-based processes for increasingly general scenarios and prove that, after a sufficiently large number of stages, player actions constitute a Nash equilibrium at any stage with arbitrarily high probability. We also show how to modify player utility functions through tolls and incentives in so-called congestion games, a special class of weakly acyclic games, to guarantee that a centralized objective can be realized as a Nash equilibrium. We illustrate the methods with a simulation of distributed routing over a network.", "The single-agent multi-armed bandit problem can be solved by an agent that learns the values of each action using reinforcement learning. However, the multi-agent version of the problem, the iterated normal form game, presents a more complex challenge, since the rewards available to each agent depend on the strategies of the others. We consider the behavior of value-based learning agents in this situation, and show that such agents cannot generally play at a Nash equilibrium, although if smooth best responses are used, a Nash distribution can be reached. We introduce a particular value-based learning algorithm, which we call individual Q-learning, and use stochastic approximation to study the asymptotic behavior, showing that strategies will converge to Nash distribution almost surely in 2-player zero-sum games and 2-player partnership games. Player-dependent learning rates are then considered, and it is shown that this extension converges in some games for which many algorithms, including the basic algorithm initially considered, fail to converge.", "There are only a few learning algorithms applicable to stochastic dynamic teams and games which generalize Markov decision processes to decentralized stochastic control problems involving possibly self-interested decision makers. Learning in games is generally difficult because of the non-stationary environment in which each decision maker aims to learn its optimal decisions with minimal information in the presence of the other decision makers who are also learning. In stochastic dynamic games, learning is more challenging because, while learning, the decision makers alter the state of the system and hence the future cost. In this paper, we present decentralized Q-learning algorithms for stochastic games, and study their convergence for the weakly acyclic case which includes team problems as an important special case. 
The algorithms are decentralized in that each decision maker has access only to its own decisions and cost realizations as well as the state transitions; in particular, each decision maker is completely oblivious to the presence of the other decision makers. We show that these algorithms converge to equilibrium policies almost surely in large classes of stochastic games." ] }
1709.05859
2788910306
This paper considers a class of reinforcement-based learning (namely, perturbed learning automata) and provides a stochastic-stability analysis in repeatedly-played, positive-utility, finite strategic-form games. Prior work in this class of learning dynamics primarily analyzes asymptotic convergence through stochastic approximations, where convergence can be associated with the limit points of an ordinary-differential equation (ODE). However, analyzing global convergence through an ODE-approximation requires the existence of a Lyapunov or a potential function, which naturally restricts the analysis to a fine class of games. To overcome these limitations, this paper introduces an alternative framework for analyzing asymptotic convergence that is based upon an explicit characterization of the invariant probability measure of the induced Markov chain. We further provide a methodology for computing the invariant probability measure in positive-utility games, together with an illustration in the context of coordination games.
Recently, there have been several attempts to establish convergence to Nash equilibria through alternative payoff-based learning dynamics, e.g., the benchmark-based dynamics of @cite_19 for convergence to Nash equilibria in weakly-acyclic games, the learning dynamics of @cite_11 for convergence to Nash equilibria in generic games, the payoff-based learning rule of @cite_13 for maximizing welfare in generic games, and the aspiration learning in @cite_26 for convergence to efficient outcomes in coordination games. We will refer to such approaches as aspiration-based learning. For these types of dynamics, convergence to Nash equilibria or efficient outcomes can be established without requiring any strong monotonicity properties (as in the multi-player weakly-acyclic games in @cite_19 ).
{ "cite_N": [ "@cite_19", "@cite_26", "@cite_13", "@cite_11" ], "mid": [ "2176451521", "2570514057", "1996013982", "" ], "abstract": [ "We consider repeated multiplayer games in which players repeatedly and simultaneously choose strategies from a finite set of available strategies according to some strategy adjustment process. We focus on the specific class of weakly acyclic games, which is particularly relevant for multiagent cooperative control problems. A strategy adjustment process determines how players select their strategies at any stage as a function of the information gathered over previous stages. Of particular interest are “payoff-based” processes in which, at any stage, players know only their own actions and (noise corrupted) payoffs from previous stages. In particular, players do not know the actions taken by other players and do not know the structural form of payoff functions. We introduce three different payoff-based processes for increasingly general scenarios and prove that, after a sufficiently large number of stages, player actions constitute a Nash equilibrium at any stage with arbitrarily high probability. We also show how to modify player utility functions through tolls and incentives in so-called congestion games, a special class of weakly acyclic games, to guarantee that a centralized objective can be realized as a Nash equilibrium. We illustrate the methods with a simulation of distributed routing over a network.", "We consider the problem of distributed convergence to efficient outcomes in coordination games through dynamics based on aspiration learning. Our first contribution is the characterization of the asymptotic behavior of the induced Markov chain of the iterated process in terms of an equivalent finite-state Markov chain. We then characterize explicitly the behavior of the proposed aspiration learning in a generalized version of coordination games, examples of which include network formation and common-pool games. In particular, we show that in generic coordination games the frequency at which an efficient action profile is played can be made arbitrarily large. Although convergence to efficient outcomes is desirable, in several coordination games, such as common-pool games, attainability of fair outcomes, i.e., sequences of plays at which players experience highly rewarding returns with the same frequency, might also be of special interest. To this end, we demonstrate through analysis and simulations that aspiration learning also establishes fair outcomes in all symmetric coordination games, including common-pool games. @PARASPLIT Read More: http: epubs.siam.org doi abs 10.1137 110852462", "We propose a simple payoff-based learning rule that is completely decentralized and that leads to an efficient configuration of actions in any @math -person finite strategic-form game with generic payoffs. The algorithm follows the theme of exploration versus exploitation and is hence stochastic in nature. We prove that if all agents adhere to this algorithm, then the agents will select the action profile that maximizes the sum of the agents' payoffs a high percentage of time. The algorithm requires no communication. Agents respond solely to changes in their own realized payoffs, which are affected by the actions of other agents in the system in ways that they do not necessarily understand. 
The method can be applied to the optimization of complex systems with many distributed components, such as the routing of information in networks and the design and control of wind farms. The proof of the proposed learning algorithm relies on the theory of large deviations for perturbed Markov chains.", "" ] }
1709.05859
2788910306
This paper considers a class of reinforcement-based learning (namely, perturbed learning automata) and provides a stochastic-stability analysis in repeatedly-played, positive-utility, finite strategic-form games. Prior work in this class of learning dynamics primarily analyzes asymptotic convergence through stochastic approximations, where convergence can be associated with the limit points of an ordinary-differential equation (ODE). However, analyzing global convergence through an ODE-approximation requires the existence of a Lyapunov or a potential function, which naturally restricts the analysis to a fine class of games. To overcome these limitations, this paper introduces an alternative framework for analyzing asymptotic convergence that is based upon an explicit characterization of the invariant probability measure of the induced Markov chain. We further provide a methodology for computing the invariant probability measure in positive-utility games, together with an illustration in the context of coordination games.
The case of noisy utility measurements, which are present in many engineering applications, has not currently been addressed through aspiration-based learning. The only exception is reference @cite_19 , under benchmark-based dynamics, where (synchronized) exploration phases are introduced, through which each agent plays a fixed action for the duration of the exploration phase. If such exploration phases are large in duration (as required by the results in @cite_19 ), this may reduce the robustness of the dynamics to changes in the environment (e.g., changes in the utility function). One reason that such a robustness analysis is currently not possible in this class of dynamics is the fact that decisions are taken directly based on the measured performances (e.g., by comparing the currently measured performance with the benchmark performance in @cite_19 ), as the sketch below illustrates.
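The noise sensitivity stems from comparing a raw measurement against a stored benchmark. The following is a hedged sketch of one benchmark-based adjustment step; all names and the exploration rate are illustrative assumptions, not the exact rule of @cite_19 .

```python
import random

def benchmark_step(action, measured_payoff, bench_action, bench_payoff,
                   actions, explore_rate=0.05):
    """One benchmark-based adjustment: the *measured* payoff is compared
    directly with the stored benchmark, which is exactly the comparison
    that a noisy measurement can corrupt."""
    if measured_payoff >= bench_payoff:          # current play beats benchmark
        bench_action, bench_payoff = action, measured_payoff
    next_action = bench_action                   # default: replay the benchmark
    if random.random() < explore_rate:           # occasional experimentation
        next_action = random.choice(actions)
    return next_action, bench_action, bench_payoff
```

A single spurious high measurement can thus lock in a poor benchmark, which is why long exploration phases (to average the noise out) are needed in @cite_19 .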
{ "cite_N": [ "@cite_19" ], "mid": [ "2176451521" ], "abstract": [ "We consider repeated multiplayer games in which players repeatedly and simultaneously choose strategies from a finite set of available strategies according to some strategy adjustment process. We focus on the specific class of weakly acyclic games, which is particularly relevant for multiagent cooperative control problems. A strategy adjustment process determines how players select their strategies at any stage as a function of the information gathered over previous stages. Of particular interest are “payoff-based” processes in which, at any stage, players know only their own actions and (noise corrupted) payoffs from previous stages. In particular, players do not know the actions taken by other players and do not know the structural form of payoff functions. We introduce three different payoff-based processes for increasingly general scenarios and prove that, after a sufficiently large number of stages, player actions constitute a Nash equilibrium at any stage with arbitrarily high probability. We also show how to modify player utility functions through tolls and incentives in so-called congestion games, a special class of weakly acyclic games, to guarantee that a centralized objective can be realized as a Nash equilibrium. We illustrate the methods with a simulation of distributed routing over a network." ] }
1709.05840
2753947103
Emergency response applications for nuclear or radiological events can be significantly improved via deep feature learning due to its ability to capture the inherent complexity of the data involved. In this paper we present a novel methodology for rapid source estimation during radiological releases based on deep feature extraction and weather clustering. Atmospheric dispersions are then calculated based on identified predominant weather patterns and are matched against simulated incidents indicated by radiation readings on the ground. We evaluate the accuracy of our methods over multiple years of weather reanalysis data in the European region. We juxtapose these results with deep classification convolution networks and discuss advantages and disadvantages. We find that deep autoencoder configurations can lead to accurate-enough origin estimation to complement existing systems, while allowing for rapid initial response. A cluster-based method for inverse nuclear release source estimation is proposed. Weather clustering is improved via deep-learning latent representation extraction. Evaluation is performed using multiple years of weather data for Europe. The proposed methods are up to 75% accurate in challenging evaluation conditions. The proposed methodology is suitable for rapid emergency response scenarios.
In its simplest form, when there is a single hidden layer and the number of hidden units equals the number of inputs, the auto-encoder can trivially replicate the input, leading to overfitting. Various methods have been suggested to avoid this: using fewer hidden units than inputs, enforcing activation sparsity, or introducing noise which the auto-encoder learns to compensate for @cite_10 @cite_0 . An alternative approach is to use deeper configurations of stacked autoencoders, where inner layers encode and decode previously encoded vectors. Encodings generated by stacked autoencoders can capture deeper statistical representations of the input data; a minimal denoising variant is sketched below.
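As a concrete illustration of the noise-injection remedy, the following Keras-based sketch builds a single-hidden-layer denoising autoencoder: the corruption happens inside the model while the loss compares the reconstruction with the clean input, so the identity map is no longer optimal. Layer sizes and the Gaussian corruption are illustrative choices, not those of the cited works.

```python
from tensorflow import keras

def denoising_autoencoder(n_inputs, n_hidden, noise_std=0.1):
    """Single-hidden-layer denoising autoencoder: Gaussian noise corrupts
    the input during training only (GaussianNoise is inactive at
    inference), and the reconstruction target is the clean input."""
    inputs = keras.Input(shape=(n_inputs,))
    corrupted = keras.layers.GaussianNoise(noise_std)(inputs)  # corruption step
    code = keras.layers.Dense(n_hidden, activation="relu")(corrupted)
    recon = keras.layers.Dense(n_inputs)(code)
    model = keras.Model(inputs, recon)
    model.compile(optimizer="adam", loss="mse")
    return model

# Usage: model.fit(x_clean, x_clean, epochs=10) -- noisy in, clean target.
```

Stacking amounts to training a second such autoencoder on the codes produced by the first.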
{ "cite_N": [ "@cite_0", "@cite_10" ], "mid": [ "2145094598", "2025768430" ], "abstract": [ "We explore an original strategy for building deep networks, based on stacking layers of denoising autoencoders which are trained locally to denoise corrupted versions of their inputs. The resulting algorithm is a straightforward variation on the stacking of ordinary autoencoders. It is however shown on a benchmark of classification problems to yield significantly lower classification error, thus bridging the performance gap with deep belief networks (DBN), and in several cases surpassing it. Higher level representations learnt in this purely unsupervised fashion also help boost the performance of subsequent SVM classifiers. Qualitative experiments show that, contrary to ordinary autoencoders, denoising autoencoders are able to learn Gabor-like edge detectors from natural image patches and larger stroke detectors from digit images. This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations.", "Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite." ] }
1709.05840
2753947103
Emergency response applications for nuclear or radiological events can be significantly improved via deep feature learning due its ability to capture the inherent complexity of the data involved. In this paper we present a novel methodology for rapid source estimation during radiological releases based on deep feature extraction and weather clustering. Atmospheric dispersions are then calculated based on identified predominant weather patterns and are matched against simulated incidents indicated by radiation readings on the ground. We evaluate the accuracy of our methods over multiple years of weather reanalysis data in the European region. We juxtapose these results with deep classification convolution networks and discuss advantages and disadvantages. We find that deep autoencoder configurations can lead to accurate-enough origin estimation to complement existing systems, while allowing for rapid initial response. A cluster-based method for inverse nuclear release source estimation is proposed.Weather clustering is improved via deep-learning latent representation extraction.Evaluation is performed using multiple years of weather data for Europe.The proposed methods are up to 75 accurate in challenging evaluation conditions.The proposed methodology is suitable for rapid emergency response scenarios.
In this study we evaluate simple, stacked, and convolutional denoising autoencoders @cite_9 to reach smaller and statistically robust representations of weather variables. We then cluster the latent representations to discover weather patterns in Europe, as sketched below.
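A possible realization of this pipeline, assuming a previously trained `encoder` model, is sketched below. The use of k-means is an illustrative assumption on my part, since the clustering algorithm is not pinned down in the text above.

```python
from sklearn.cluster import KMeans

def cluster_weather_patterns(encoder, weather_fields, n_patterns=8):
    """Encode weather fields and cluster their latent codes into
    candidate weather patterns (k-means chosen for illustration)."""
    latents = encoder.predict(weather_fields)            # (n_samples, code_dim)
    kmeans = KMeans(n_clusters=n_patterns, n_init=10, random_state=0)
    labels = kmeans.fit_predict(latents)
    return labels, kmeans.cluster_centers_
```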
{ "cite_N": [ "@cite_9" ], "mid": [ "2136655611" ], "abstract": [ "We present a novel convolutional auto-encoder (CAE) for unsupervised feature learning. A stack of CAEs forms a convolutional neural network (CNN). Each CAE is trained using conventional on-line gradient descent without additional regularization terms. A max-pooling layer is essential to learn biologically plausible features consistent with those found by previous approaches. Initializing a CNN with filters of a trained CAE stack yields superior performance on a digit (MNIST) and an object recognition (CIFAR10) benchmark." ] }
1709.05424
2754213847
Automatically learned quality assessment for images has recently become a hot topic due to its usefulness in a wide variety of applications, such as evaluating image capture pipelines, storage techniques, and sharing media. Despite the subjective nature of this problem, most existing methods only predict the mean opinion score provided by data sets, such as AVA and TID2013. Our approach differs from others in that we predict the distribution of human opinion scores using a convolutional neural network. Our architecture also has the advantage of being significantly simpler than other methods with comparable performance. Our proposed approach relies on the success (and retraining) of proven, state-of-the-art deep object recognition networks. Our resulting network can be used to not only score images reliably and with high correlation to human perception, but also to assist with adaptation and optimization of photo editing enhancement algorithms in a photographic pipeline. All this is done without need for a “golden” reference image, consequently allowing for single-image, semantic- and perceptually-aware, no-reference quality assessment.
Machine learning has shown promising results in predicting the technical quality of images @cite_18 @cite_21 @cite_33 @cite_43 . Kang et al. @cite_37 show that extracting high-level features using CNNs can result in state-of-the-art blind quality assessment performance. It appears that replacing hand-crafted features with an end-to-end feature learning system is the main advantage of using CNNs for pixel-level quality assessment tasks @cite_37 @cite_33 . The proposed method in @cite_37 is a shallow network with one convolutional layer and two fully-connected layers, and input patches are of size @math . The authors of @cite_25 use a deep CNN with 12 layers to improve on the image quality predictions of @cite_37 . Given the small input size ( @math patches), both methods require score aggregation across the whole image. The authors of @cite_11 propose a deep quality predictor based on AlexNet @cite_0 . Multiple CNN features are extracted from image crops of size @math , and then regressed to the human scores.
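A minimal sketch of the patch-based scoring-and-aggregation pattern shared by these methods is given below. Here `score_patch` stands in for any trained patch-level quality model, and the simple average is one of several possible pooling choices (the cited works differ in this step).

```python
import numpy as np

def patchwise_quality(score_patch, image, patch=32, stride=32):
    """Score fixed-size crops with a patch-level model and pool the
    scores over the whole image via average pooling."""
    h, w = image.shape[:2]
    scores = [score_patch(image[y:y + patch, x:x + patch])
              for y in range(0, h - patch + 1, stride)
              for x in range(0, w - patch + 1, stride)]
    return float(np.mean(scores))
```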
{ "cite_N": [ "@cite_37", "@cite_18", "@cite_33", "@cite_21", "@cite_0", "@cite_43", "@cite_25", "@cite_11" ], "mid": [ "2051596736", "2162915697", "", "", "2163605009", "", "2509123681", "2286686646" ], "abstract": [ "In this work we describe a Convolutional Neural Network (CNN) to accurately predict image quality without a reference image. Taking image patches as input, the CNN works in the spatial domain without using hand-crafted features that are employed by most previous methods. The network consists of one convolutional layer with max and min pooling, two fully connected layers and an output node. Within the network structure, feature learning and regression are integrated into one optimization process, which leads to a more effective model for estimating image quality. This approach achieves state of the art performance on the LIVE dataset and shows excellent generalization ability in cross dataset experiments. Further experiments on images with local distortions demonstrate the local quality estimation ability of our CNN, which is rarely reported in previous literature.", "General purpose blind image quality assessment (BIQA) has been recently attracting significant attention in the fields of image processing, vision and machine learning. State-of-the-art BIQA methods usually learn to evaluate the image quality by regression from human subjective scores of the training samples. However, these methods need a large number of human scored images for training, and lack an explicit explanation of how the image quality is affected by image local features. An interesting question is then: can we learn for effective BIQA without using human scored images? This paper makes a good effort to answer this question. We partition the distorted images into overlapped patches, and use a percentile pooling strategy to estimate the local quality of each patch. Then a quality-aware clustering (QAC) method is proposed to learn a set of centroids on each quality level. These centroids are then used as a codebook to infer the quality of each patch in a given image, and subsequently a perceptual quality score of the whole image can be obtained. The proposed QAC based BIQA method is simple yet effective. It not only has comparable accuracy to those methods using human scored images in learning, but also has merits such as high linearity to human perception of image quality, real-time implementation and availability of image local quality map.", "", "", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overriding in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. 
We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry.", "", "This paper presents a no reference image (NR) quality assessment (IQA) method based on a deep convolutional neural network (CNN). The CNN takes unpreprocessed image patches as an input and estimates the quality without employing any domain knowledge. By that, features and natural scene statistics are learnt purely data driven and combined with pooling and regression in one framework. We evaluate the network on the LIVE database and achieve a linear Pearson correlation superior to state-of-the-art NR IQA methods. We also apply the network to the image forensics task of decoder-sided quantization parameter estimation and also here achieve correlations of r = 0.989.", "In this work, we investigate the use of deep learning for distortion-generic blind image quality assessment. We report on different design choices, ranging from the use of features extracted from pre-trained convolutional neural networks (CNNs) as a generic image description, to the use of features extracted from a CNN fine-tuned for the image quality task. Our best proposal, named DeepBIQ, estimates the image quality by average-pooling the scores predicted on multiple subregions of the original image. Experimental results on the LIVE In the Wild Image Quality Challenge Database show that DeepBIQ outperforms the state-of-the-art methods compared, having a linear correlation coefficient with human subjective scores of almost 0.91. These results are further confirmed also on four benchmark databases of synthetically distorted images: LIVE, CSIQ, TID2008, and TID2013." ] }
1709.05436
2769151336
Cross-view video understanding is an important yet under-explored area in computer vision. In this paper, we introduce a joint parsing framework that integrates view-centric proposals into scene-centric parse graphs that represent a coherent scene-centric understanding of cross-view scenes. Our key observations are that overlapping fields of views embed rich appearance and geometry correlations and that knowledge fragments corresponding to individual vision tasks are governed by consistency constraints available in commonsense knowledge. The proposed joint parsing framework represents such correlations and constraints explicitly and generates semantic scene-centric parse graphs. Quantitative experiments show that scene-centric predictions in the parse graph outperform view-centric predictions.
Typical multi-view visual analytics tasks include object detection @cite_3 @cite_23 , cross-view tracking @cite_30 @cite_37 @cite_34 @cite_19 , action recognition @cite_15 , person re-identification @cite_5 @cite_1 and 3D reconstruction @cite_14 . While heuristics such as appearance and motion consistency constraints have been used to regularize the solution space, these methods focus on a specific multi-view vision task, whereas we aim to propose a general framework to jointly resolve a wide variety of tasks.
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_14", "@cite_1", "@cite_3", "@cite_19", "@cite_23", "@cite_5", "@cite_15", "@cite_34" ], "mid": [ "2171243491", "", "2084652104", "2085161844", "", "2473532709", "2004972618", "", "2949462896", "" ], "abstract": [ "Multi-object tracking can be achieved by detecting objects in individual frames and then linking detections across frames. Such an approach can be made very robust to the occasional detection failure: If an object is not detected in a frame but is in previous and following ones, a correct trajectory will nevertheless be produced. By contrast, a false-positive detection in a few frames will be ignored. However, when dealing with a multiple target problem, the linking step results in a difficult optimization problem in the space of all possible families of trajectories. This is usually dealt with by sampling or greedy search based on variants of Dynamic Programming which can easily miss the global optimum. In this paper, we show that reformulating that step as a constrained flow optimization results in a convex problem. We take advantage of its particular structure to solve it using the k-shortest paths algorithm, which is very fast. This new approach is far simpler formally and algorithmically than existing techniques and lets us demonstrate excellent performance in two very different contexts.", "", "We generalize the network flow formulation for multiobject tracking to multi-camera setups. In the past, reconstruction of multi-camera data was done as a separate extension. In this work, we present a combined maximum a posteriori (MAP) formulation, which jointly models multicamera reconstruction as well as global temporal data association. A flow graph is constructed, which tracks objects in 3D world space. The multi-camera reconstruction can be efficiently incorporated as additional constraints on the flow graph without making the graph unnecessarily large. The final graph is efficiently solved using binary linear programming. On the PETS 2009 dataset we achieve results that significantly exceed the current state of the art.", "This paper presents a novel framework for a multimedia search task: searching a person in a scene using human body appearance. Existing works mostly focus on two independent problems related to this task, i.e., people detection and person re-identification. However, a sequential combination of these two components does not solve the person search problem seamlessly for two reasons: 1) the errors in people detection are carried into person re-identification unavoidably; 2) the setting of person re-identification is different from that of person search which is essentially a verification problem. To bridge this gap, we propose a unified framework which jointly models the commonness of people (for detection) and the uniqueness of a person (for identification). We demonstrate superior performance of our approach on public benchmarks compared with the sequential combination of the state-of-the-art detection and identification algorithms.", "", "This paper presents a hierarchical composition approach for multi-view object tracking. The key idea is to adaptively exploit multiple cues in both 2D and 3D, e.g., ground occupancy consistency, appearance similarity, motion coherence etc., which are mutually complementary while tracking the humans of interests over time. 
While feature online selection has been extensively studied in the past literature, it remains unclear how to effectively schedule these cues for the tracking purpose especially when encountering various challenges, e.g. occlusions, conjunctions, and appearance variations. To do so, we propose a hierarchical composition model and re-formulate multi-view multi-object tracking as a problem of compositional structure optimization. We setup a set of composition criteria, each of which corresponds to one particular cue. The hierarchical composition process is pursued by exploiting different criteria, which impose constraints between a graph node and its offsprings in the hierarchy. We learn the composition criteria using MLE on annotated data and efficiently construct the hierarchical graph by an iterative greedy pursuit algorithm. In the experiments, we demonstrate superior performance of our approach on three public datasets, one of which is newly created by us to test various challenges in multi-view multi-object tracking.", "In this paper we introduce a probabilistic approach on multiple person localization using multiple calibrated camera views. People present in the scene are approximated by a population of cylinder objects in the 3-D world coordinate system, which is a realization of a Marked Point Process. The observation model is based on the projection of the pixels of the obtained motion masks in the different camera images to the ground plane and to other parallel planes with different height. The proposed pixel-level feature is based on physical properties of the 2-D image formation process and can accurately localize the leg position on the ground plane and estimate the height of the people, even if the area of interest is only a part of the scene, meanwhile silhouettes from irrelevant outside motions may significantly overlap with the monitored region in some of the camera views. We introduce an energy function, which contains a data term calculated from the extracted features and a geometrical constraint term modeling the distance between two people. The final configuration results (location and height) are obtained by an iterative stochastic energy optimization process, called the Multiple Birth and Death dynamics. The proposed approached is compared to a recent state-of-the-art technique in a publicly available dataset and its advantages are quantitatively demonstrated.", "", "Existing methods on video-based action recognition are generally view-dependent, i.e., performing recognition from the same views seen in the training data. We present a novel multiview spatio-temporal AND-OR graph (MST-AOG) representation for cross-view action recognition, i.e., the recognition is performed on the video from an unknown and unseen view. As a compositional model, MST-AOG compactly represents the hierarchical combinatorial structures of cross-view actions by explicitly modeling the geometry, appearance and motion variations. This paper proposes effective methods to learn the structure and parameters of MST-AOG. The inference based on MST-AOG enables action recognition from novel views. The training of MST-AOG takes advantage of the 3D human skeleton data obtained from Kinect cameras to avoid annotating enormous multi-view video frames, which is error-prone and time-consuming, but the recognition does not need 3D information and is based on 2D video input. A new Multiview Action3D dataset has been created and will be released. 
Extensive experiments have demonstrated that this new action representation significantly improves the accuracy and robustness for cross-view action recognition on 2D videos.", "" ] }
1709.05436
2769151336
Cross-view video understanding is an important yet under-explored area in computer vision. In this paper, we introduce a joint parsing framework that integrates view-centric proposals into scene-centric parse graphs that represent a coherent scene-centric understanding of cross-view scenes. Our key observations are that overlapping fields of views embed rich appearance and geometry correlations and that knowledge fragments corresponding to individual vision tasks are governed by consistency constraints available in commonsense knowledge. The proposed joint parsing framework represents such correlations and constraints explicitly and generates semantic scene-centric parse graphs. Quantitative experiments show that scene-centric predictions in the parse graph outperform view-centric predictions.
Semantic and expressive representations have been developed for various vision tasks, e.g., image parsing @cite_11 , 3D scene reconstruction @cite_35 @cite_20 , human-object interaction @cite_31 , and pose and attribute estimation @cite_33 . In this paper, our representation also falls into this category. The difference is that our model is defined over the cross-view spatio-temporal domain and is able to incorporate a variety of tasks.
{ "cite_N": [ "@cite_35", "@cite_33", "@cite_31", "@cite_20", "@cite_11" ], "mid": [ "2020762599", "2416798379", "1483019628", "2156802865", "" ], "abstract": [ "In this paper, we present an attributed grammar for parsing man-made outdoor scenes into semantic surfaces, and recovering its 3D model simultaneously. The grammar takes superpixels as its terminal nodes and use five production rules to generate the scene into a hierarchical parse graph. Each graph node actually correlates with a surface or a composite of surfaces in the 3D world or the 2D image. They are described by attributes for the global scene model, e.g. focal length, vanishing points, or the surface properties, e.g. surface normal, contact line with other surfaces, and relative spatial location etc. Each production rule is associated with some equations that constraint the attributes of the parent nodes and those of their children nodes. Given an input image, our goal is to construct a hierarchical parse graph by recursively applying the five grammar rules while preserving the attributes constraints. We develop an effective top-down bottom-up cluster sampling procedure which can explore this constrained space efficiently. We evaluate our method on both public benchmarks and newly built datasets, and achieve state-of-the-art performances in terms of layout estimation and region segmentation. We also demonstrate that our method is able to recover detailed 3D model with relaxed Manhattan structures which clearly advances the state-of-the-arts of singleview 3D reconstruction.", "In this paper, we present a 4D human-object interaction (4DHOI) model for solving three vision tasks jointly: i) event segmentation from a video sequence, ii) event recognition and parsing, and iii) contextual object localization. The 4DHOI model represents the geometric, temporal, and semantic relations in daily events involving human-object interactions. In 3D space, the interactions of human poses and contextual objects are modeled by semantic co-occurrence and geometric compatibility. On the time axis, the interactions are represented as a sequence of atomic event transitions with coherent objects. The 4DHOI model is a hierarchical spatial-temporal graph representation which can be used for inferring scene functionality and object affordance. The graph structures and parameters are learned using an ordered expectation maximization algorithm which mines the spatial-temporal structures of events from RGB-D video samples. Given an input RGB-D video, the inference is performed by a dynamic programming beam search algorithm which simultaneously carries out event segmentation, recognition, and object localization. We collected a large multiview RGB-D event dataset which contains 3,815 video sequences and 383,036 RGB-D frames captured by three RGB-D cameras. The experimental results on three challenging datasets demonstrate the strength of the proposed method.", "An important aspect of human perception is anticipation, which we use extensively in our day-to-day activities when interacting with other humans as well as with our surroundings. Anticipating which activities will a human do next (and how) can enable an assistive robot to plan ahead for reactive responses. Furthermore, anticipation can even improve the detection accuracy of past activities. 
The challenge, however, is two-fold: We need to capture the rich context for modeling the activities and object affordances, and we need to anticipate the distribution over a large space of future human activities. In this work, we represent each possible future using an anticipatory temporal conditional random field (ATCRF) that models the rich spatial-temporal relations through object affordances. We then consider each ATCRF as a particle and represent the distribution over the potential futures using a set of particles. In extensive evaluation on CAD-120 human activity RGB-D dataset, we first show that anticipation improves the state-of-the-art detection results. We then show that for new subjects (not seen in the training set), we obtain an activity anticipation accuracy (defined as whether one of top three predictions actually happened) of 84.1, 74.4 and 62.2 percent for an anticipation time of 1, 3 and 10 seconds respectively. Finally, we also show a robot using our algorithm for performing a few reactive responses.", "We develop a comprehensive Bayesian generative model for understanding indoor scenes. While it is common in this domain to approximate objects with 3D bounding boxes, we propose using strong representations with finer granularity. For example, we model a chair as a set of four legs, a seat and a backrest. We find that modeling detailed geometry improves recognition and reconstruction, and enables more refined use of appearance for scene understanding. We demonstrate this with a new likelihood function that rewards 3D object hypotheses whose 2D projection is more uniform in color distribution. Such a measure would be confused by background pixels if we used a bounding box to represent a concave object like a chair. Complex objects are modeled using a set or re-usable 3D parts, and we show that this representation captures much of the variation among object instances with relatively few parameters. We also designed specific data-driven inference mechanisms for each part that are shared by all objects containing that part, which helps make inference transparent to the modeler. Further, we show how to exploit contextual relationships to detect more objects, by, for example, proposing chairs around and underneath tables. We present results showing the benefits of each of these innovations. The performance of our approach often exceeds that of state-of-the-art methods on the two tasks of room layout estimation and object recognition, as evaluated on two bench mark data sets used in this domain.", "" ] }
1709.05733
2963547676
To understand the spatial deployment of base stations (BSs) is the first step to analyze the performance of cellular networks and further design efficient networking protocols. Poisson point process (PPP), which has been widely adopted to characterize the deployment of BSs and established the reputation to give tractable results in the stochastic geometry analyses, usually assumes a static BS deployment density in homogeneous PPP (HPPP) models or delicately designed location-dependent density functions in in-homogeneous PPP models. However, the simultaneous existence of attractiveness and repulsiveness among BSs practically deployed in a large-scale area defies such an assumption, and the @math -stable distribution, one kind of heavy-tailed distributions, has recently demonstrated superior accuracy to statistically model the varying BS density in different areas. In this paper, we start with these new findings and investigate the intrinsic feature (i.e., the spatial self-similarity) embedded in the BSs. Afterwards, we refer to a generalized PPP setup with @math -stable distributed density and theoretically derive the related coverage probability. In particular, we give an upper bound of the derived coverage probability for high signal-to-interference-plus-noise ratio thresholds and show the monotonically decreasing property of this bound with respect to the variance of BS density. Besides, we prove that our model could reduce to the single-tier HPPP for some special cases and demonstrate the superior accuracy of the @math -stable model to approach the real environment.
As discussed later in Section , the PPP model has provided many useful performance trends. However, concerns about the assumed total independence between nodes (e.g., BSs) have never stopped. Hence, in order to reduce the modeling gap between the single-tier HPPP model and practical BS deployments, some researchers have adopted two-tier or multi-tier HPPP models @cite_70 @cite_91 @cite_54 @cite_6 @cite_63 @cite_67 @cite_81 @cite_69 . Although these models may lead to some tractable results, they lack a reasonable explanation for dividing the gradually deployed and increasingly dense heterogeneous cells @cite_34 into different tiers. On the contrary, the @math -stable model contributes to understanding the spatial self-similarity and to bridging the gap between cellular networks and other social behavior-based complex networks. A simulation sketch of the single-tier HPPP baseline is given below.
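For intuition about the single-tier HPPP baseline that these multi-tier models generalize, the following Monte-Carlo sketch estimates the downlink coverage probability of the typical user under nearest-BS association with Rayleigh fading and power-law path loss. All parameter values (window size, path-loss exponent, trial count) are illustrative assumptions.

```python
import numpy as np

def hppp_coverage(density, sinr_db, alpha=4.0, noise=0.0,
                  window=10.0, n_trials=5000, seed=0):
    """Monte-Carlo coverage probability for the typical user at the
    origin: BSs form a homogeneous PPP in a square window, the user is
    served by the nearest BS, and all other BSs interfere."""
    rng = np.random.default_rng(seed)
    thr = 10.0 ** (sinr_db / 10.0)
    hits = 0
    for _ in range(n_trials):
        n = rng.poisson(density * window * window)          # Poisson BS count
        if n == 0:
            continue
        xy = rng.uniform(-window / 2, window / 2, (n, 2))   # uniform locations
        dist = np.hypot(xy[:, 0], xy[:, 1])
        power = rng.exponential(1.0, n) * dist ** (-alpha)  # fading x path loss
        k = np.argmin(dist)                                 # nearest-BS association
        sinr = power[k] / (power.sum() - power[k] + noise)
        hits += sinr >= thr
    return hits / n_trials
```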
{ "cite_N": [ "@cite_67", "@cite_69", "@cite_91", "@cite_70", "@cite_54", "@cite_6", "@cite_81", "@cite_63", "@cite_34" ], "mid": [ "2063846408", "1567643386", "2109830484", "2040714707", "2964159250", "2142350309", "2149170915", "1971672874", "2191948661" ], "abstract": [ "In this paper, the optimal BS (Base Station) density for both homogeneous and heterogeneous cellular networks to minimize network energy cost is analyzed with stochastic geometry theory. For homogeneous cellular networks, both upper and lower bounds of the optimal BS density are derived. For heterogeneous cellular networks, our analysis reveals the best type of BSs to be deployed for capacity extension, or to be switched off for energy saving. Specifically, if the ratio between the micro BS cost and the macro BS cost is lower than a threshold, which is a function of path loss and their transmit power, then the optimal strategy is to deploy micro BSs for capacity extension or to switch off macro BSs (if possible) for energy saving with higher priority. Otherwise, the optimal strategy is the opposite. The optimal combination of macro and micro BS densities can be calculated numerically through our analysis, or alternatively be conservatively approximated with a closed-form solution. Based on the parameters from EARTH, numerical results show that in the dense urban scenario, compared to the traditional macro-only homogeneous cellular network with no BS sleeping, deploying micro BSs can reduce about 40 of the total energy cost, and further reduce up to 35 with BS sleeping capability.", "In this paper, a new mathematical framework to the analysis of millimeter wave cellular networks is introduced. Its peculiarity lies in considering realistic path-loss and blockage models, which are derived from recently reported experimental data. The path-loss model accounts for different distributions of line-of-sight and non-line-of-sight propagation conditions and the blockage model includes an outage state that provides a better representation of the outage possibilities of millimeter wave communications. By modeling the locations of the base stations as points of a Poisson point process and by relying on a noise-limited approximation for typical millimeter wave network deployments, simple and exact integral as well as approximated and closed-form formulas for computing the coverage probability and the average rate are obtained. With the aid of Monte Carlo simulations, the noise-limited approximation is shown to be sufficiently accurate for typical network densities. The noise-limited approximation, however, may not be sufficiently accurate for ultra-dense network deployments and for sub-gigahertz transmission bandwidths. For these case studies, the analytical approach is generalized to take the other-cell interference into account at the cost of increasing its computational complexity. The proposed mathematical framework is applicable to cell association criteria based on the smallest path-loss and on the highest received power. It accounts for beamforming alignment errors and for multi-tier cellular network deployments. Numerical results confirm that sufficiently dense millimeter wave cellular networks are capable of outperforming micro wave cellular networks, in terms of coverage probability and average rate.", "The deployment of femtocells in a macrocell network is an economical and effective way to increase network capacity and coverage. 
Nevertheless, such deployment is challenging due to the presence of inter-tier and intra-tier interference, and the ad hoc operation of femtocells. Motivated by the flexible subchannel allocation capability of OFDMA, we investigate the effect of spectrum allocation in two-tier networks, where the macrocells employ closed access policy and the femtocells can operate in either open or closed access. By introducing a tractable model, we derive the success probability for each tier under different spectrum allocation and femtocell access policies. In particular, we consider joint subchannel allocation, in which the whole spectrum is shared by both tiers, as well as disjoint subchannel allocation, whereby disjoint sets of subchannels are assigned to both tiers. We formulate the throughput maximization problem subject to quality of service constraints in terms of success probabilities and per-tier minimum rates, and provide insights into the optimal spectrum allocation. Our results indicate that with closed access femtocells, the optimized joint and disjoint subchannel allocations provide the highest throughput among all schemes in sparse and dense femtocell networks, respectively. With open access femtocells, the optimized joint subchannel allocation provides the highest possible throughput for all femtocell densities.", "In a two-tier heterogeneous network (HetNet) where femto access points (FAPs) with lower transmission power coexist with macro base stations (BSs) with higher transmission power, the FAPs may suffer significant performance degradation due to inter-tier interference. Introducing cognition into the FAPs through the spectrum sensing (or carrier sensing) capability helps them avoiding severe interference from the macro BSs and enhance their performance. In this paper, we use stochastic geometry to model and analyze performance of HetNets composed of macro BSs and cognitive FAPs in a multichannel environment. The proposed model explicitly accounts for the spatial distribution of the macro BSs, FAPs, and users in a Rayleigh fading environment. We quantify the performance gain in outage probability obtained by introducing cognition into the femto-tier, provide design guidelines, and show the existence of an optimal spectrum sensing threshold for the cognitive FAPs, which depends on the HetNet parameters. We also show that looking into the overall performance of the HetNets is quite misleading in the scenarios where the majority of users are served by the macro BSs. Therefore, the performance of femto-tier needs to be explicitly accounted for and optimized.", "", "With the exponential increase in high rate traffic given by a new generation of wireless devices, data is expected to overwhelm cellular network capacity in the near future. Femtocell networks have been recently proposed as an efficient and cost-effective approach to provide unprecedented levels of network capacity and coverage. However, the dense and random deployment of femtocells and their uncoordinated operation raise important questions concerning interference pollution and spectrum allocation. Motivated by the flexible subchannel allocation capabilities of cognitive radio, we propose a cognitive hybrid division duplex (CHDD) that is suitable for heterogeneous networks in future mobile communication systems. 
Specifically, our CHDD scheme has a pair of frequency bands to perform frequency division duplex (FDD) on the macrocell, while time division duplex (TDD) is simultaneously operated in these bands by underlaid cognitive femtocells. By doing so, the proposed CHDD scheme exploits the advantages of both FDD and TDD schemes: operating in FDD at the macrocell tier controls inter-tier interference, whereas operating in TDD at the femtocell tier provides to femtocells the flexibility of adjusting uplink and downlink rates together with opportunistic access benefits. Using tools from stochastic geometry, we provide a methodology on how to design efficient switching mechanisms for cognitive TDD operation of femtocells. In particular, we derive closed-form expressions for the success probability and the area spectral efficiency of the proposed CHDD scheme when the macro tier is in downlink and uplink mode. Furthermore, we propose an open access policy as a means to improve the performance of macrocell transmissions. Our analysis and numerical results show the effectiveness of introducing cognition in femtocells so as to improve the system performance of two-tier femtocell networks.", "Cellular networks are in a major transition from a carefully planned set of large tower-mounted base-stations (BSs) to an irregular deployment of heterogeneous infrastructure elements that often additionally includes micro, pico, and femtocells, as well as distributed antennas. In this paper, we develop a tractable, flexible, and accurate model for a downlink heterogeneous cellular network (HCN) consisting of K tiers of randomly located BSs, where each tier may differ in terms of average transmit power, supported data rate and BS density. Assuming a mobile user connects to the strongest candidate BS, the resulting Signal-to-Interference-plus-Noise-Ratio (SINR) is greater than 1 when in coverage, Rayleigh fading, we derive an expression for the probability of coverage (equivalently outage) over the entire network under both open and closed access, which assumes a strikingly simple closed-form in the high SINR regime and is accurate down to -4 dB even under weaker assumptions. For external validation, we compare against an actual LTE network (for tier 1) with the other K-1 tiers being modeled as independent Poisson Point Processes. In this case as well, our model is accurate to within 1-2 dB. We also derive the average rate achieved by a randomly located mobile and the average load on each tier of BSs. One interesting observation for interference-limited open access networks is that at a given , adding more tiers and or BSs neither increases nor decreases the probability of coverage or outage when all the tiers have the same target-SINR.", "Partial Spectrum Reuse (PSR) in the second tier of two-tier heterogeneous cellular networks has a potential to improve spectrum efficiency by reducing inter-cell interference, and thus energy efficiency as well by deploying less or switching off more Base Stations (BSs). In this paper, we analyze the optimal PSR factor, defined as the portion of spectrum reused by micro cells in two-tier heterogeneous networks, which is not in an explicit form generally. Then, a closed-form limit of the optimal PSR factor is derived as the ratio of the user rate requirement over the whole system spectrum bandwidth is approaching zero, based on which a threshold of the micro-BS energy cost is also derived to determine which type of BSs is preferable. 
Specifically, one should deploy more micro BSs or switch off more macro BSs if the micro-BS energy cost is lower than the threshold. Otherwise, the optimal choice is the opposite. This threshold with the PSR scheme is higher than that without PSR scheme, i.e., PSR can improve both spectrum efficiency and energy efficiency. Numerical results show that adopting PSR can reduce the network energy consumption by up to 50 when the transmit power of macro BSs is 10dB higher than that of micro BSs.", "Traditional ultra-dense wireless networks are recommended as a complement for cellular networks and are deployed in partial areas, such as hotspot and indoor scenarios. Based on the massive multiple-input multi-output antennas and the millimeter wave communication technologies, the 5G ultra-dense cellular network is proposed to deploy in overall cellular scenarios. Moreover, a distribution network architecture is presented for 5G ultra-dense cellular networks. Furthermore, the backhaul network capacity and the backhaul energy efficiency of ultra-dense cellular networks are investigated to answer an important question, that is, how much densification can be deployed for 5G ultra-dense cellular networks. Simulation results reveal that there exist densification limits for 5G ultra-dense cellular networks with backhaul network capacity and backhaul energy efficiency constraints." ] }
1709.05638
2755358588
In this work, we develop an end-to-end Reinforcement Learning based architecture for a conversational search agent to assist users in searching on an e-commerce marketplace for digital assets. Our approach caters to a search task fundamentally different from the ones which have limited search modalities where the user can express his preferences objectively. The system interacts with the users to display search results to the queries, and gauges user's intent and context of the conversation to choose the next action and reply. To train the agent in the absence of true conversation data, a virtual user is constructed to model a human user using the query and session logs from a major stock photography and digital assets marketplace. The system provides an alternative that is more engaging than the traditional search while maintaining similar effectiveness. This work provides a mechanism to build and deploy bootstrapped version of an effective conversational agent from readily available query log data. The system can then be used to acquire true conversational data and be fine-tuned further. The methodology discussed in this paper can be extended to e-commerce domains in general.
Task-oriented dialogue systems are often difficult to train due to the absence of real conversations and the subjectivity involved in measuring the shortcomings and success of a dialogue @cite_1 . Evaluation becomes even more complex for subjective search systems because no label indicates whether the intended task has been completed. We evaluate our system through the rewards obtained while interacting with the user model and also on various real-world metrics (discussed in the experiments section) through human evaluation.
{ "cite_N": [ "@cite_1" ], "mid": [ "2175256910" ], "abstract": [ "A long-term goal of machine learning is to build intelligent conversational agents. One recent popular approach is to train end-to-end models on a large amount of real dialog transcripts between humans (, 2015; Vinyals & Le, 2015; , 2015). However, this approach leaves many questions unanswered as an understanding of the precise successes and shortcomings of each model is hard to assess. A contrasting recent proposal are the bAbI tasks (, 2015b) which are synthetic data that measure the ability of learning machines at various reasoning tasks over toy language. Unfortunately, those tests are very small and hence may encourage methods that do not scale. In this work, we propose a suite of new tasks of a much larger scale that attempt to bridge the gap between the two regimes. Choosing the domain of movies, we provide tasks that test the ability of models to answer factual questions (utilizing OMDB), provide personalization (utilizing MovieLens), carry short conversations about the two, and finally to perform on natural dialogs from Reddit. We provide a dataset covering 75k movie entities and with 3.5M training examples. We present results of various models on these tasks, and evaluate their performance." ] }
1709.05666
2755560206
Latent factor models are increasingly popular for modeling multi-relational knowledge graphs. By their vectorial nature, it is not only hard to interpret why this class of models works so well, but also to understand where they fail and how they might be improved. We conduct an experimental survey of state-of-the-art models, not towards a purely comparative end, but as a means to get insight about their inductive abilities. To assess the strengths and weaknesses of each model, we create simple tasks that exhibit first, atomic properties of binary relations, and then, common inter-relational inference through synthetic genealogies. Based on these experimental results, we propose new research directions to improve on existing models.
Learning from knowledge graphs, and more generally from relational data, is an old problem of artificial intelligence @cite_5 . Many contributions have been made using inductive logic programming for relational data over the last decades @cite_62 @cite_20 @cite_58 . Handling inference probabilistically gave birth to the field of statistical relational learning @cite_54 , and link prediction has always been one of the main problems in that field. Different probabilistic logic-based inference models have been proposed @cite_13 @cite_29 @cite_19 . The main contribution along this line of research is probably Markov Logic Networks (MLNs) @cite_8 . MLNs take as input a set of first-order rules and facts, build a Markov random field between facts co-occurring in possible groundings of the formulae, and learn a weight for each rule that represents its likelihood of being applied at inference time. Some other proposals followed a purely probabilistic approach @cite_42 @cite_34 .
{ "cite_N": [ "@cite_62", "@cite_8", "@cite_54", "@cite_29", "@cite_42", "@cite_34", "@cite_19", "@cite_5", "@cite_58", "@cite_13", "@cite_20" ], "mid": [ "", "1977970897", "1585529040", "1975130368", "2126185296", "2463713334", "1791364091", "138005845", "2107306718", "2000805332", "2942000300" ], "abstract": [ "", "We propose a simple approach to combining first-order logic and probabilistic graphical models in a single representation. A Markov logic network (MLN) is a first-order knowledge base with a weight attached to each formula (or clause). Together with a set of constants representing objects in the domain, it specifies a ground Markov network containing one feature for each possible grounding of a first-order formula in the KB, with the corresponding weight. Inference in MLNs is performed by MCMC over the minimal subset of the ground network required for answering the query. Weights are efficiently learned from relational databases by iteratively optimizing a pseudo-likelihood measure. Optionally, additional clauses are learned using inductive logic programming techniques. Experiments with a real-world database and knowledge base in a university domain illustrate the promise of this approach.", "Handling inherent uncertainty and exploiting compositional structure are fundamental to understanding and designing large-scale systems. Statistical relational learning builds on ideas from probability theory and statistics to address uncertainty while incorporating tools from logic, databases and programming languages to represent structure. In Introduction to Statistical Relational Learning, leading researchers in this emerging area of machine learning describe current formalisms, models, and algorithms that enable effective and robust reasoning about richly structured systems and data. The early chapters provide tutorials for material used in later chapters, offering introductions to representation, inference and learning in graphical models, and logic. The book then describes object-oriented approaches, including probabilistic relational models, relational Markov networks, and probabilistic entity-relationship models as well as logic-based formalisms including Bayesian logic programs, Markov logic, and stochastic logic programs. Later chapters discuss such topics as probabilistic models with unknown objects, relational dependency networks, reinforcement learning in relational domains, and information extraction. By presenting a variety of approaches, the book highlights commonalities and clarifies important differences among proposed approaches and, along the way, identifies important representational and algorithmic issues. Numerous applications are provided throughout.Lise Getoor is Assistant Professor in the Department of Computer Science at the University of Maryland. Ben Taskar is Assistant Professor in the Computer and Information Science Department at the University of Pennsylvania.", "In recent years there has been a growing interest among AI researchers in probabilistic and decision modelling, spurred by significant advances in representation and computation with network modelling formalisms. In applying these techniques to decision support tasks, fixed network models have proven to be inadequately expressive when a broad range of situations must be handled. 
Hence many researchers have sought to combine the strengths of flexible knowledge representation languages with the normative status and well-understood computational properties of decision-modelling formalisms and algorithms. One approach is to encode general knowledge in an expressive language, then dynamically construct a decision model for each particular situation or problem instance. We have developed several systems adopting this approach, which illustrate a variety of interesting techniques and design issues.", "A large portion of real-world data is stored in commercial relational database systems. In contrast, most statistical learning methods work only with “flat” data representations. Thus, to apply these methods, we are forced to convert our data into a flat form, thereby losing much of the relational structure present in our database. This paper builds on the recent work on probabilistic relational models (PRMs), and describes how to learn them from databases. PRMs allow the properties of an object to depend probabilistically both on other properties of that object and on properties of related objects. Although PRMs are significantly more expressive than standard models, such as Bayesian networks, we show how to extend well-known statistical methods for learning Bayesian networks to learn these models. We describe both parameter estimation and structure learning — the automatic induction of the dependency structure in a model. Moreover, we show how the learning procedure can exploit standard database retrieval techniques for efficient learning from large datasets. We present experimental results on both real and synthetic relational databases.", "Universal schema predicts the types of entities and relations in a knowledge base (KB) by jointly embedding the union of all available schema types---not only types from multiple structured databases (such as Freebase or Wikipedia infoboxes), but also types expressed as textual patterns from raw text. This prediction is typically modeled as a matrix completion problem, with one type per column, and either one or two entities per row (in the case of entity types or binary relation types, respectively). Factorizing this sparsely observed matrix yields a learned vector embedding for each row and each column. In this paper we explore the problem of making predictions for entities or entity-pairs unseen at training time (and hence without a pre-learned row embedding). We propose an approach having no per-row parameters at all; rather we produce a row vector on the fly using a learned aggregation function of the vectors of the observed columns for that row. We experiment with various aggregation functions, including neural network attention models. Our approach can be understood as a natural language database, in that questions about KB entities are answered by attending to textual or database evidence. In experiments predicting both relations and entity types, we demonstrate that despite having an order of magnitude fewer parameters than traditional universal schema, we can match the accuracy of the traditional model, and more importantly, we can now make predictions about unseen rows with nearly the same accuracy as rows available at training time.", "Recently, new representation languages that integrate first order logic with Bayesian networks have been developed. Bayesian logic programs are one of these languages. 
In this paper, we present results on combining Inductive Logic Programming (ILP) with Bayesian networks to learn both the qualitative and the quantitative components of Bayesian logic programs. More precisely, we show how to combine the ILP setting learning from interpretations with score-based techniques for learning Bayesian networks. Thus, the paper positively answers Koller and Pfeffer's question, whether techniques from ILP could help to learn the logical component of first order probabilistic models.", "", "Recent advances in information extraction have led to huge knowledge bases (KBs), which capture knowledge in a machine-readable format. Inductive logic programming (ILP) can be used to mine logical rules from these KBs, such as \"If two persons are married, then they (usually) live in the same city.\" While ILP is a mature field, mining logical rules from KBs is difficult, because KBs make an open-world assumption. This means that absent information cannot be taken as counterexamples. Our approach AMIE ( in WWW, 2013) has shown how rules can be mined effectively from KBs even in the absence of counterexamples. In this paper, we show how this approach can be optimized to mine even larger KBs with more than 12M statements. Extensive experiments show how our new approach, AMIE @math +, extends to areas of mining that were previously beyond reach.", "We define a language for representing context-sensitive probabilistic knowledge. A knowledge base consists of a set of universally quantified probability sentences that include context constraints, which allow inference to be focused on only the relevant portions of the probabilistic knowledge. We provide a declarative semantics for our language. We present a query answering procedure that takes a query Q and a set of evidence E and constructs a Bayesian network to compute P(Q|E). The posterior probability is then computed using any of a number of Bayesian network inference algorithms. We use the declarative semantics to prove the query procedure sound and complete.", "" ] }
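For readers unfamiliar with MLNs, the weighting scheme mentioned in the related-work passage above can be made concrete with the standard MLN distribution (a textbook formulation, not quoted from the cited abstracts):

P(X = x) = \frac{1}{Z} \exp\Big( \sum_i w_i \, n_i(x) \Big),

where n_i(x) counts the true groundings of the i-th first-order formula in world x, w_i is its learned weight, and Z is the partition function; inference then amounts to computing marginals or MAP states of this distribution.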
1709.05666
2755560206
Latent factor models are increasingly popular for modeling multi-relational knowledge graphs. By their vectorial nature, it is not only hard to interpret why this class of models works so well, but also to understand where they fail and how they might be improved. We conduct an experimental survey of state-of-the-art models, not towards a purely comparative end, but as a means to get insight about their inductive abilities. To assess the strengths and weaknesses of each model, we create simple tasks that exhibit first, atomic properties of binary relations, and then, common inter-relational inference through synthetic genealogies. Based on these experimental results, we propose new research directions to improve on existing models.
The link-prediction problem has recently drawn attention from a wider community. Driven by the W3C standard data representation for the semantic web, the Resource Description Framework @cite_37 , various knowledge graphs---also called knowledge bases---such as DBpedia @cite_11 , Freebase @cite_15 or the Google Knowledge Vault @cite_4 have been collaboratively or automatically created in recent years. Since the Netflix challenge @cite_0 , latent factor models have gained the advantage over probabilistic and symbolic approaches in the link-prediction task, first in terms of predictive performance, but also in scalability. This rise in predictive performance and speed enabled many applications, including automated personal assistants and recommender systems @cite_38 @cite_43 .
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_4", "@cite_0", "@cite_43", "@cite_15", "@cite_11" ], "mid": [ "1507806757", "", "2016753842", "2054141820", "1994389483", "2094728533", "102708294" ], "abstract": [ "We propose Inference Knowledge Graph, a novel approach of remapping existing, large scale, semantic knowledge graphs into Markov Random Fields in order to create user goal tracking models that could form part of a spoken dialog system. Since semantic knowledge graphs include both entities and their attributes, the proposed method merges the semantic dialog-state-tracking of attributes and the database lookup of entities that fulfill users' requests into one single unified step. Using a large semantic graph that contains all businesses in Bellevue, WA, extracted from Microsoft Satori, we demonstrate that the proposed approach can return significantly more relevant entities to the user than a baseline system using database lookup.", "", "Recent years have witnessed a proliferation of large-scale knowledge bases, including Wikipedia, Freebase, YAGO, Microsoft's Satori, and Google's Knowledge Graph. To increase the scale even further, we need to explore automatic methods for constructing knowledge bases. Previous approaches have primarily focused on text-based extraction, which can be very noisy. Here we introduce Knowledge Vault, a Web-scale probabilistic knowledge base that combines extractions from Web content (obtained via analysis of text, tabular data, page structure, and human annotations) with prior knowledge derived from existing knowledge repositories. We employ supervised machine learning methods for fusing these distinct information sources. The Knowledge Vault is substantially bigger than any previously published structured knowledge repository, and features a probabilistic inference system that computes calibrated probabilities of fact correctness. We report the results of multiple studies that explore the relative utility of the different information sources and extraction methods.", "As the Netflix Prize competition has demonstrated, matrix factorization models are superior to classic nearest neighbor techniques for producing product recommendations, allowing the incorporation of additional information such as implicit feedback, temporal effects, and confidence levels.", "Recommender systems provide users with personalized suggestions for products or services. These systems often rely on Collaborating Filtering (CF), where past transactions are analyzed in order to establish connections between users and products. The two more successful approaches to CF are latent factor models, which directly profile both users and products, and neighborhood models, which analyze similarities between products or users. In this work we introduce some innovations to both approaches. The factor and neighborhood models can now be smoothly merged, thereby building a more accurate combined model. Further accuracy improvements are achieved by extending the models to exploit both explicit and implicit feedback by the users. The methods are tested on the Netflix data. Results are better than those previously published on that dataset. In addition, we suggest a new evaluation metric, which highlights the differences among methods, based on their performance at a top-K recommendation task.", "Freebase is a practical, scalable tuple database used to structure general human knowledge. The data in Freebase is collaboratively created, structured, and maintained. 
Freebase currently contains more than 125,000,000 tuples, more than 4000 types, and more than 7000 properties. Public read/write access to Freebase is allowed through an HTTP-based graph-query API using the Metaweb Query Language (MQL) as a data query and manipulation language. MQL provides an easy-to-use object-oriented interface to the tuple data in Freebase and is designed to facilitate the creation of collaborative, Web-based data-oriented applications.", "DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human- and machine-consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data." ] }
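As a point of reference for the latent factor models mentioned above, the matrix-factorization view popularized by the Netflix challenge scores a pair with a dot product of learned embeddings; a minimal, generic form (not taken from any of the cited papers) is

\hat{r}_{ui} = \langle p_u, q_i \rangle, \qquad \min_{p,q} \sum_{(u,i) \in \Omega} \big( r_{ui} - \langle p_u, q_i \rangle \big)^2 + \lambda \big( \lVert p_u \rVert^2 + \lVert q_i \rVert^2 \big),

where \Omega is the set of observed entries and \lambda controls regularization; link-prediction models extend this bilinear scoring from (user, item) pairs to (subject, relation, object) triples.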
1709.05666
2755560206
Latent factor models are increasingly popular for modeling multi-relational knowledge graphs. By their vectorial nature, it is not only hard to interpret why this class of models works so well, but also to understand where they fail and how they might be improved. We conduct an experimental survey of state-of-the-art models, not towards a purely comparative end, but as a means to get insight about their inductive abilities. To assess the strengths and weaknesses of each model, we create simple tasks that exhibit first, atomic properties of binary relations, and then, common inter-relational inference through synthetic genealogies. Based on these experimental results, we propose new research directions to improve on existing models.
Statistical models for learning in knowledge graphs are summarized in a recent review @cite_28 , and among them latent factor models. We discuss these models in detail in the following section. One notable latent factor model that is not tested in this paper is the holographic embeddings model @cite_61 , as it has been shown to be equivalent to the model @cite_49 @cite_50 . The model @cite_39 is detailed in the next section. Also, the latent factor model proposed by is not included as it is a combination of uni-, bi- and trigram terms that will be evaluated in separate models to understand the contribution of each modeling choice in different situations.
{ "cite_N": [ "@cite_61", "@cite_28", "@cite_39", "@cite_50", "@cite_49" ], "mid": [ "2949972983", "1529533208", "2432356473", "2593682006", "2733421109" ], "abstract": [ "Learning embeddings of entities and relations is an efficient and versatile method to perform machine learning on relational data such as knowledge graphs. In this work, we propose holographic embeddings (HolE) to learn compositional vector space representations of entire knowledge graphs. The proposed method is related to holographic models of associative memory in that it employs circular correlation to create compositional representations. By using correlation as the compositional operator HolE can capture rich interactions but simultaneously remains efficient to compute, easy to train, and scalable to very large datasets. In extensive experiments we show that holographic embeddings are able to outperform state-of-the-art methods for link prediction in knowledge graphs and relational learning benchmark datasets.", "Relational machine learning studies methods for the statistical analysis of relational, or graph-structured, data. In this paper, we provide a review of how such statistical models can be “trained” on large knowledge graphs, and then used to predict new facts about the world (which is equivalent to predicting new edges in the graph). In particular, we discuss two fundamentally different kinds of statistical relational models, both of which can scale to massive data sets. The first is based on latent feature models such as tensor factorization and multiway neural networks. The second is based on mining observable patterns in the graph. We also show how to combine these latent and observable models to get improved modeling power at decreased computational cost. Finally, we discuss how such statistical models of graphs can be combined with text-based information extraction methods for automatically constructing knowledge graphs from the Web. To this end, we also discuss Google's knowledge vault project as an example of such combination.", "In statistical relational learning, the link prediction problem is key to automatically understand the structure of large knowledge bases. As in previous studies, we propose to solve this problem through latent factorization. However, here we make use of complex valued embeddings. The composition of complex embeddings can handle a large variety of binary relations, among them symmetric and antisymmetric relations. Compared to state-of-the-art models such as Neural Tensor Network and Holographic Embeddings, our approach based on complex embeddings is arguably simpler, as it only uses the Hermitian dot product, the complex counterpart of the standard dot product between real vectors. Our approach is scalable to large datasets as it remains linear in both space and time, while consistently outperforming alternative approaches on standard link prediction benchmarks.", "We show the equivalence of two state-of-the-art link prediction knowledge graph completion methods: 's holographic embedding and 's complex embedding. We first consider a spectral version of the holographic embedding, exploiting the frequency domain in the Fourier transform for efficient computation. The analysis of the resulting method reveals that it can be viewed as an instance of the complex embedding with certain constraints cast on the initial vectors upon training. 
Conversely, any complex embedding can be converted to an equivalent holographic embedding.", "Embeddings of knowledge graphs have received significant attention due to their excellent performance for tasks like link prediction and entity resolution. In this short paper, we are providing a comparison of two state-of-the-art knowledge graph embeddings for which their equivalence has recently been established, i.e., ComplEx and HolE [Nickel, Rosasco, and Poggio, 2016; , 2016; Hayashi and Shimbo, 2017]. First, we briefly review both models and discuss how their scoring functions are equivalent. We then analyze the discrepancy of results reported in the original articles, and show experimentally that they are likely due to the use of different loss functions. In further experiments, we evaluate the ability of both models to embed symmetric and antisymmetric patterns. Finally, we discuss advantages and disadvantages of both models and under which conditions one would be preferable to the other." ] }
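To make the equivalence discussed above easier to follow, the scoring functions of the two models can be written side by side (standard formulations, not quoted from the abstracts): ComplEx scores a triple (s, r, o) as \phi(s,r,o) = \mathrm{Re}\big( \sum_k w_{rk} \, e_{sk} \, \bar{e}_{ok} \big) with complex-valued embeddings, while HolE uses \sigma\big( w_r^\top (e_s \star e_o) \big), where \star denotes circular correlation, [a \star b]_k = \sum_i a_i \, b_{(i+k) \bmod d}. The cited equivalence results essentially show that, up to constraints on the embeddings, the circular correlation can be rewritten in the Fourier domain as the Hermitian product used by ComplEx.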
1709.05666
2755560206
Latent factor models are increasingly popular for modeling multi-relational knowledge graphs. By their vectorial nature, it is not only hard to interpret why this class of models works so well, but also to understand where they fail and how they might be improved. We conduct an experimental survey of state-of-the-art models, not towards a purely comparative end, but as a means to get insight about their inductive abilities. To assess the strengths and weaknesses of each model, we create simple tasks that exhibit first, atomic properties of binary relations, and then, common inter-relational inference through synthetic genealogies. Based on these experimental results, we propose new research directions to improve on existing models.
Not all latent models are actually factorization models. Among these are a variety of neural-network models, including the neural tensor network @cite_23 , or the multi-layer perceptron used in . We did not survey these models in this work and focus on latent factorization models, that is, models that can be expressed as a factorization of the knowledge graph represented as a tensor.
{ "cite_N": [ "@cite_23" ], "mid": [ "2127426251" ], "abstract": [ "Knowledge bases are an important resource for question answering and other tasks but often suffer from incompleteness and lack of ability to reason over their discrete entities and relationships. In this paper we introduce an expressive neural tensor network suitable for reasoning over relationships between two entities. Previous work represented entities as either discrete atomic units or with a single entity vector representation. We show that performance can be improved when entities are represented as an average of their constituting word vectors. This allows sharing of statistical strength between, for instance, facts involving the \"Sumatran tiger\" and \"Bengal tiger.\" Lastly, we demonstrate that all models improve when these word vectors are initialized with vectors learned from unsupervised large corpora. We assess the model by considering the problem of predicting additional true relations between entities given a subset of the knowledge base. Our model outperforms previous models and can classify unseen relationships in WordNet and FreeBase with an accuracy of 86.2 and 90.0 , respectively." ] }
1709.05666
2755560206
Latent factor models are increasingly popular for modeling multi-relational knowledge graphs. By their vectorial nature, it is not only hard to interpret why this class of models works so well, but also to understand where they fail and how they might be improved. We conduct an experimental survey of state-of-the-art models, not towards a purely comparative end, but as a means to get insight about their inductive abilities. To assess the strengths and weaknesses of each model, we create simple tasks that exhibit first, atomic properties of binary relations, and then, common inter-relational inference through synthetic genealogies. Based on these experimental results, we propose new research directions to improve on existing models.
Advances in bringing both worlds together include the work of and , where a latent factor model is used, as well as a set of logical rules. An error-term over the rules is added to the classical latent factor objective function. In , a fully differentiable neural theorem prover is used to handle both facts and rules, whereas use adversarial training to do so. learned first-order logic embeddings from formulae learned by ILP. Similar proposals for integrating logical knowledge in distributional representations of words include the work of . Conversely, learn a latent factor model over the facts only, and then try to extract rules from the learned embeddings. @cite_12 proposed to use projections of the subject and object entity embeddings that conserve transitivity and symmetry.
{ "cite_N": [ "@cite_12" ], "mid": [ "2460423734" ], "abstract": [ "This paper proposes a novel translation-based knowledge graph embedding that preserves the logical properties of relations such as transitivity and symmetricity. The embedding space generated by existing translation-based embeddings do not represent transitive and symmetric relations precisely, because they ignore the role of entities in triples. Thus, we introduce a role-specific projection which maps an entity to distinct vectors according to its role in a triple. That is, a head entity is projected onto an embedding space by a head projection operator, and a tail entity is projected by a tail projection operator. This idea is applied to TransE, TransR, and TransD to produce lppTransE, lppTransR, and lppTransD, respectively. According to the experimental results on link prediction and triple classification, the proposed logical property preserving embeddings show the state-of-the-art performance at both tasks. These results prove that it is critical to preserve logical properties of relations while embedding knowledge graphs, and the proposed method does it effectively." ] }
1709.05545
2754686230
Tree ensembles are flexible predictive models that can capture relevant variables and to some extent their interactions in a compact and interpretable manner. Most algorithms for obtaining tree ensembles are based on versions of boosting or Random Forest. Previous work showed that boosting algorithms exhibit a cyclic behavior of selecting the same tree again and again due to the way the loss is optimized. At the same time, Random Forest is not based on loss optimization and obtains a more complex and less interpretable model. In this paper we present a novel method for obtaining compact tree ensembles by growing a large pool of trees in parallel with many independent boosting threads and then selecting a small subset and updating their leaf weights by loss optimization. We allow for the trees in the initial pool to have different depths which further helps with generalization. Experiments on real datasets show that the obtained model has usually a smaller loss than boosting, which is also reflected in a lower misclassification error on the test set.
There has been a large amount of work on different versions of boosting, which can be used for generating tree ensembles. Different versions of boosting minimize different loss functions, starting from AdaBoost @cite_23 for the exponential loss, LogitBoost @cite_2 for the logistic loss, and GradientBoost @cite_10 for any differentiable loss. Other examples include FloatBoost @cite_9 and Robust LogitBoost @cite_4 .
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_23", "@cite_2", "@cite_10" ], "mid": [ "1647002864", "2140274257", "1988790447", "2024046085", "1678356000" ], "abstract": [ "Logitboost is an influential boosting algorithm for classification. In this paper, we develop robust logitboost to provide an explicit formulation of tree-split criterion for building weak learners (regression trees) for logitboost. This formulation leads to a numerically stable implementation of logitboost. We then propose abc-logitboost for multi-class classification, by combining robust logitboost with the prior work of abc-boost. Previously, abc-boost was implemented as abc-mart using the mart algorithm. Our extensive experiments on multi-class classification compare four algorithms: mart, abcmart, (robust) logitboost, and abc-logitboost, and demonstrate the superiority of abc-logitboost. Comparisons with other learning methods including SVM and deep learning are also available through prior publications.", "A novel learning procedure, called FloatBoost, is proposed for learning a boosted classifier for achieving the minimum error rate. FloatBoost learning uses a backtrack mechanism after each iteration of AdaBoost learning to minimize the error rate directly, rather than minimizing an exponential function of the margin as in the traditional AdaBoost algorithms. A second contribution of the paper is a novel statistical model for learning best weak classifiers using a stagewise approximation of the posterior probability. These novel techniques lead to a classifier which requires fewer weak classifiers than AdaBoost yet achieves lower error rates in both training and testing, as demonstrated by extensive experiments. Applied to face detection, the FloatBoost learning method, together with a proposed detector pyramid architecture, leads to the first real-time multiview face detection system reported.", "In the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update Littlestone?Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games, and prediction of points in Rn. In the second part of the paper we apply the multiplicative weight-update technique to derive a new boosting algorithm. This boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm. We also study generalizations of the new boosting algorithm to the problem of learning functions whose range, rather than being binary, is an arbitrary finite set or a bounded segment of the real line.", "Boosting is one of the most important recent developments in classification methodology. Boosting works by sequentially applying a classification algorithm to reweighted versions of the training data and then taking a weighted majority vote of the sequence of classifiers thus produced. For many classification algorithms, this simple strategy results in dramatic improvements in performance. 
We show that this seemingly mysterious phenomenon can be understood in terms of well-known statistical principles, namely additive modeling and maximum likelihood. For the two-class problem, boosting can be viewed as an approximation to additive modeling on the logistic scale using maximum Bernoulli likelihood as a criterion. We develop more direct approximations and show that they exhibit nearly identical results to boosting. Direct multiclass generalizations based on multinomial likelihood are derived that exhibit performance comparable to other recently proposed multiclass generalizations of boosting in most situations, and far superior in some. We suggest a minor modification to boosting that can reduce computation, often by factors of 10 to 50. Finally, we apply these insights to produce an alternative formulation of boosting decision trees. This approach, based on best-first truncated tree induction, often leads to better performance, and can provide interpretable descriptions of the aggregate decision rule. It is also much faster computationally, making it more suitable to large-scale data mining applications.", "Function estimation approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions and steepest-descent minimization. A general gradient descent boosting paradigm is developed for additive expansions based on any fitting criterion. Specific algorithms are presented for least-squares, least absolute deviation, and Huber-M loss functions for regression, and multiclass logistic likelihood for classification. Special enhancements are derived for the particular case where the individual additive components are regression trees, and tools for interpreting such TreeBoost models are presented. Gradient boosting of regression trees produces competitive, highly robust, interpretable procedures for both regression and classification, especially appropriate for mining less than clean data. Connections between this approach and the boosting methods of Freund and Shapire and Friedman, Hastie and Tibshirani are discussed." ] }
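For concreteness, the loss functions named in the related-work paragraph above can be written out for labels y \in \{-1, +1\} and ensemble score F(x) (standard background formulas, not quotes from the cited abstracts): AdaBoost minimizes the exponential loss L(y, F) = \exp(-y F(x)), LogitBoost minimizes the binomial logistic loss L(y, F) = \log\big(1 + e^{-2 y F(x)}\big), and gradient boosting greedily adds a tree h_m fitted to the negative gradient of any differentiable loss, updating F_m(x) = F_{m-1}(x) + \nu \, h_m(x) with a shrinkage factor \nu.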
1709.05545
2754686230
Tree ensembles are flexible predictive models that can capture relevant variables and to some extent their interactions in a compact and interpretable manner. Most algorithms for obtaining tree ensembles are based on versions of boosting or Random Forest. Previous work showed that boosting algorithms exhibit a cyclic behavior of selecting the same tree again and again due to the way the loss is optimized. At the same time, Random Forest is not based on loss optimization and obtains a more complex and less interpretable model. In this paper we present a novel method for obtaining compact tree ensembles by growing a large pool of trees in parallel with many independent boosting threads and then selecting a small subset and updating their leaf weights by loss optimization. We allow for the trees in the initial pool to have different depths which further helps with generalization. Experiments on real datasets show that the obtained model has usually a smaller loss than boosting, which is also reflected in a lower misclassification error on the test set.
To facilitate interpretability and overcome memory-based limitations, the problem of obtaining compact tree ensembles has received much attention in recent years. Improved interpretability was the aim in @cite_18 , which selects optimal rule subsets from tree ensembles. The classical cost-complexity pruning of individual trees was extended in @cite_21 to the combined pruning of ensembles. The tree-ensemble model was reformulated in @cite_3 as a linear model in terms of node indicator functions, and an L1-norm regularization based approach (LASSO) was used to select a minimal subset of these indicator functions. All of these works focused on the simultaneous pruning of individual trees and large ensembles.
{ "cite_N": [ "@cite_18", "@cite_21", "@cite_3" ], "mid": [ "2048231652", "1597346716", "2132506847" ], "abstract": [ "General regression and classification models are constructed as linear combinations of simple rules derived from the data. Each rule consists of a conjunction of a small number of simple statements concerning the values of individual input variables. These rule ensembles are shown to produce predictive accuracy comparable to the best methods. However, their principal advantage lies in interpretation. Because of its simple form, each rule is easy to understand, as is its influence on individual predictions, selected subsets of predictions, or globally over the entire space of joint input variable values. Similarly, the degree of relevance of the respective input variables can be assessed globally, locally in different regions of the input space, or at individual prediction points. Techniques are presented for automatically identifying those variables that are involved in interactions with other variables, the strength and degree of those interactions, as well as the identities of the other variables with which they interact. Graphical representations are used to visualize both main and interaction effects.", "This paper investigates enhancements of decision tree bagging which mainly aim at improving computation times, but also accuracy. The three questions which are reconsidered are: discretization of continuous attributes, tree pruning, and sampling schemes. A very simple discretization procedure is proposed, resulting in a dramatic speedup without significant decrease in accuracy. Then a new method is proposed to prune an ensemble of trees in a combined fashion, which is significantly more effective than individual pruning. Finally, different resampling schemes are considered leading to different CPU time accuracy tradeoffs. Combining all these enhancements makes it possible to apply tree bagging to very large datasets, with computational performances similar to single tree induction. Simulations are carried out on two synthetic databases and four real-life datasets.", "Random forests are effective supervised learning methods applicable to large-scale datasets. However, the space complexity of tree ensembles, in terms of their total number of nodes, is often prohibitive, spe- cially in the context of problems with very high-dimensional input spaces. We propose to study their compressibility by applying a L1-based regu- larization to the set of indicator functions defined by all their nodes. We show experimentally that preserving or even improving the model accuracy while significantly reducing its space complexity is indeed possible." ] }
1709.05545
2754686230
Tree ensembles are flexible predictive models that can capture relevant variables and to some extent their interactions in a compact and interpretable manner. Most algorithms for obtaining tree ensembles are based on versions of boosting or Random Forest. Previous work showed that boosting algorithms exhibit a cyclic behavior of selecting the same tree again and again due to the way the loss is optimized. At the same time, Random Forest is not based on loss optimization and obtains a more complex and less interpretable model. In this paper we present a novel method for obtaining compact tree ensembles by growing a large pool of trees in parallel with many independent boosting threads and then selecting a small subset and updating their leaf weights by loss optimization. We allow for the trees in the initial pool to have different depths which further helps with generalization. Experiments on real datasets show that the obtained model has usually a smaller loss than boosting, which is also reflected in a lower misclassification error on the test set.
Another line of work focuses on lossless compression of tree ensembles. In @cite_12 , a probabilistic model was used for the tree ensemble and was combined with a clustering algorithm to find a minimal set of models that provides a perfect reconstruction of the original ensemble. The methods in @cite_21 , @cite_3 , @cite_12 were developed only for ensembles based on bagging or Random Forests and exploit the fact that each individual tree is an independent and identically distributed random entity for a given training data set. However, our method uses several threads of randomly initialized boosted ensembles, and it is well known that the trees generated with boosting are more diverse and much less complex than trees from bagging or Random Forest based models.
{ "cite_N": [ "@cite_21", "@cite_3", "@cite_12" ], "mid": [ "1597346716", "2132506847", "" ], "abstract": [ "This paper investigates enhancements of decision tree bagging which mainly aim at improving computation times, but also accuracy. The three questions which are reconsidered are: discretization of continuous attributes, tree pruning, and sampling schemes. A very simple discretization procedure is proposed, resulting in a dramatic speedup without significant decrease in accuracy. Then a new method is proposed to prune an ensemble of trees in a combined fashion, which is significantly more effective than individual pruning. Finally, different resampling schemes are considered leading to different CPU time accuracy tradeoffs. Combining all these enhancements makes it possible to apply tree bagging to very large datasets, with computational performances similar to single tree induction. Simulations are carried out on two synthetic databases and four real-life datasets.", "Random forests are effective supervised learning methods applicable to large-scale datasets. However, the space complexity of tree ensembles, in terms of their total number of nodes, is often prohibitive, spe- cially in the context of problems with very high-dimensional input spaces. We propose to study their compressibility by applying a L1-based regu- larization to the set of indicator functions defined by all their nodes. We show experimentally that preserving or even improving the model accuracy while significantly reducing its space complexity is indeed possible.", "" ] }
1709.05745
2754419751
The conventional methods for estimating camera poses and scene structures from severely blurry or low resolution images often result in failure. The off-the-shelf deblurring or super-resolution methods may show visually pleasing results. However, applying each technique independently before matching is generally unprofitable because this naive series of procedures ignores the consistency between images. In this paper, we propose a pioneering unified framework that solves four problems simultaneously, namely, dense depth reconstruction, camera pose estimation, super-resolution, and deblurring. By reflecting a physical imaging process, we formulate a cost minimization problem and solve it using an alternating optimization technique. The experimental results on both synthetic and real videos show high-quality depth maps derived from severely degraded images that contrast the failures of naive multi-view stereo methods. Our proposed method also produces outstanding deblurred and super-resolved images unlike the independent application or combination of conventional video deblurring, super-resolution methods.
Few researchers have attempted to perform image matching on blurry images. Portz et al. @cite_31 proposed an optical flow method that uses a blur-aware matching procedure originally introduced in tracking methods @cite_3 @cite_10 . Based on the assumed commutativity of blur operations, this method blurs the input images with the kernels of one another instead of deblurring them using their own kernels.
{ "cite_N": [ "@cite_31", "@cite_10", "@cite_3" ], "mid": [ "", "2071098456", "2112107661" ], "abstract": [ "", "This article addresses the problem of real-time visual tracking in presence of complex motion blur. Previous authors have observed that efficient tracking can be obtained by matching blurred images instead of applying the computationally expensive task of deblurring (H. , 2005). The study was however limited to translational blur. In this work, we analyse the problem of tracking in presence of spatially variant motion blur generated by a planar template. We detail how to model the blur formation and parallelise the blur generation, enabling a real-time GPU implementation. Through the estimation of the camera exposure time, we discuss how tracking initialisation can be improved. Our algorithm is tested on challenging real data with complex motion blur where simple models fail. The benefit of blur estimation is shown for structure and motion.", "We consider the problem of visual tracking of regions of interest in a sequence of motion blurred images. Traditional methods couple tracking with deblurring in order to correctly account for the effects of motion blur. Such coupling is usually appropriate, but computationally wasteful when visual tracking is the lone objective. Instead of deblurring images, we propose to match regions by blurring them. The matching score for two image regions is governed by a cost function that only involves the region deformation parameters and two motion blur vectors. We present an efficient algorithm to minimize the proposed cost function and demonstrate it on sequences of real blurred images." ] }
1709.05745
2754419751
The conventional methods for estimating camera poses and scene structures from severely blurry or low resolution images often result in failure. The off-the-shelf deblurring or super-resolution methods may show visually pleasing results. However, applying each technique independently before matching is generally unprofitable because this naive series of procedures ignores the consistency between images. In this paper, we propose a pioneering unified framework that solves four problems simultaneously, namely, dense depth reconstruction, camera pose estimation, super-resolution, and deblurring. By reflecting a physical imaging process, we formulate a cost minimization problem and solve it using an alternating optimization technique. The experimental results on both synthetic and real videos show high-quality depth maps derived from severely degraded images that contrast the failures of naive multi-view stereo methods. Our proposed method also produces outstanding deblurred and super-resolved images unlike the independent application or combination of conventional video deblurring, super-resolution methods.
Lee et al. @cite_28 @cite_4 extended this idea and proposed several methods for handling blurred input images in camera pose estimation @cite_28 and dense stereo matching @cite_4 . However, given that the exact blur kernels can be generated only when both the scene depth and the camera motion are correct, estimating these parameters separately would be inappropriate. Moreover, the aforementioned works @cite_4 @cite_13 are limited by the simple assumption that the blur kernel can be modeled using linear optical flow vectors between consecutive frames.
{ "cite_N": [ "@cite_28", "@cite_4", "@cite_13" ], "mid": [ "2127621202", "2060613391", "" ], "abstract": [ "Handling motion blur is one of important issues in visual SLAM. For a fast-moving camera, motion blur is an unavoidable effect and it can degrade the results of localization and reconstruction severely. In this paper, we present a unified algorithm to handle motion blur for visual SLAM, including the blur-robust data association method and the fast deblurring method. In our framework, camera motion and 3-D point structures are reconstructed by SLAM, and the information from SLAM makes the estimation of motion blur quite easy and effective. Reversely, estimating motion blur enables robust data association and drift-free localization of SLAM with blurred images. The blurred images are recovered by fast deconvolution using SLAM data, and more features are extracted and registered to the map so that the SLAM procedure can be continued even with the blurred images. In this way, visual SLAM and deblurring are solved simultaneously, and improve each other's results significantly.", "Motion blur frequently occurs in dense 3D reconstruction using a single moving camera, and it degrades the quality of the 3D reconstruction. To handle motion blur caused by rapid camera shakes, we propose a blur-aware depth reconstruction method, which utilizes a pixel correspondence that is obtained by considering the effect of motion blur. Motion blur is dependent on 3D geometry, thus parameter zing blurred appearance of images with scene depth given camera motion is possible and a depth map can be accurately estimated from the blur-considered pixel correspondence. The estimated depth is then converted into pixel-wise blur kernels, and non-uniform motion blur is easily removed with low computational cost. The obtained blur kernel is depth-dependent, thus it effectively addresses scene-depth variation, which is a challenging problem in conventional non-uniform deblurring methods.", "" ] }
1709.05745
2754419751
The conventional methods for estimating camera poses and scene structures from severely blurry or low resolution images often result in failure. The off-the-shelf deblurring or super-resolution methods may show visually pleasing results. However, applying each technique independently before matching is generally unprofitable because this naive series of procedures ignores the consistency between images. In this paper, we propose a pioneering unified framework that solves four problems simultaneously, namely, dense depth reconstruction, camera pose estimation, super-resolution, and deblurring. By reflecting a physical imaging process, we formulate a cost minimization problem and solve it using an alternating optimization technique. The experimental results on both synthetic and real videos show high-quality depth maps derived from severely degraded images that contrast the failures of naive multi-view stereo methods. Our proposed method also produces outstanding deblurred and super-resolved images unlike the independent application or combination of conventional video deblurring, super-resolution methods.
By contrast, our proposed blur model () covers more general camera motions by adopting a linear model in the Lie algebra space @cite_20 . The blur kernel is explicitly approximated by interpolating the camera path and depth maps between adjacent frames. Figure shows the difference between the conventional and proposed blur models.
{ "cite_N": [ "@cite_20" ], "mid": [ "88434223" ], "abstract": [ "An arbitrary rigid transformation in SE(3) can be separated into two parts, namely, a translation and a rigid rotation. This technical report reviews, under a unifying viewpoint, three common alternatives to representing the rotation part: sets of three (yaw-pitch-roll) Euler angles, orthogonal rotation matrices from SO(3) and quaternions. It will be described: (i) the equivalence between these representations and the formulas for transforming one to each other (in all cases considering the translational and rotational parts as a whole), (ii) how to compose poses with poses and poses with points in each representation and (iii) how the uncertainty of the poses (when modeled as Gaussian distributions) is affected by these transformations and compositions. Some brief notes are also given about the Jacobians required to implement least-squares optimization on manifolds, an very promising approach in recent engineering literature. The text reflects which MRPT C++ library1 functions implement each of the described algorithms. All the implementations have been thoroughly validated by means of unit testing and numerical estimation of the Jacobians. http: www.mrpt.org A tutorial on SE(3) transformation parameterizations and on-manifold optimization MAPIR Group Technical report #012010 Dpto. de Ingenieŕia de Sistemas y Automatica http: mapir.isa.uma.es History of document versions: • Version 5: Fixed a typo in Eq. 9.19 (21 Oct 2014) (Thanks to Tanner Schmidt for reporting) • Version 4: Added formulas for the Jacobian of the SO(3) logarithm map, in §10.3.2 (9 May 2013) • Version 3: Added the explicit formulas for the logarithm map of SO(3) and SE(3), fixed error in Eq. (10.25), explained the equivalence between the yaw-pitch-roll and roll-pitch-yaw forms and introduction of the [lnR] ▽ notation when discussing the logarithm maps (14 Aug 2012) • Version 2: Added more Jacobians (§10.3.5, §10.3.6, §10.3.4), the Appendix A and approximation in §10.3.8. (12 Sep 2010) • Version 1: First version (Released 1 Sep 2010). Notice: Part of this report was also published within chapter 10 and appendix IV of the book [6]." ] }
1709.05745
2754419751
The conventional methods for estimating camera poses and scene structures from severely blurry or low resolution images often result in failure. The off-the-shelf deblurring or super-resolution methods may show visually pleasing results. However, applying each technique independently before matching is generally unprofitable because this naive series of procedures ignores the consistency between images. In this paper, we propose a pioneering unified framework that solves four problems simultaneously, namely, dense depth reconstruction, camera pose estimation, super-resolution, and deblurring. By reflecting a physical imaging process, we formulate a cost minimization problem and solve it using an alternating optimization technique. The experimental results on both synthetic and real videos show high-quality depth maps derived from severely degraded images that contrast the failures of naive multi-view stereo methods. Our proposed method also produces outstanding deblurred and super-resolved images unlike the independent application or combination of conventional video deblurring, super-resolution methods.
Recent works @cite_22 @cite_7 have attempted to solve stereo matching and image deblurring jointly using the same blur model as the proposed one. However, both of these methods depend on additional or external data. The method proposed by Sellent et al. @cite_22 can only handle stereo video sequences, in which per-frame depth cues are available. Zhen and Stevenson @cite_7 proposed a method for single-view image sequences, but this method requires additional data, including inertial measurements and sharp noisy frames.
{ "cite_N": [ "@cite_22", "@cite_7" ], "mid": [ "2949088881", "2511026390" ], "abstract": [ "Videos acquired in low-light conditions often exhibit motion blur, which depends on the motion of the objects relative to the camera. This is not only visually unpleasing, but can hamper further processing. With this paper we are the first to show how the availability of stereo video can aid the challenging video deblurring task. We leverage 3D scene flow, which can be estimated robustly even under adverse conditions. We go beyond simply determining the object motion in two ways: First, we show how a piecewise rigid 3D scene flow representation allows to induce accurate blur kernels via local homographies. Second, we exploit the estimated motion boundaries of the 3D scene flow to mitigate ringing artifacts using an iterative weighting scheme. Being aware of 3D object motion, our approach can deal robustly with an arbitrary number of independently moving objects. We demonstrate its benefit over state-of-the-art video deblurring using quantitative and qualitative experiments on rendered scenes and real videos.", "Scene depth variation is an important factor that leads to spatially-varying camera motion blur. Most of the previous methods require auxiliary cameras or user interaction to make depth-aware deblurring tractable. In this work, we propose to use a noisy blurred noisy image sequence and simultaneously recorded inertial measurements to jointly estimate scene depth and remove spatially-varying blur caused by depth variation and camera in-plane motion. The inertial data could provide initialization of camera motion parameters, while the noisy image pair preserve large-scale sharp edges from which a coarse disparity map can be generated. However, this initial estimate is not accurate enough to produce a high-quality clean image. Therefore, we develop an optimization scheme to refine depth, motion parameters and latent image alternately. A Markov Random Field (MRF) framework is formulated to solve for the depth map by exploiting both stereo cues and motion blur cues and the residual Richardson-Lucy algorithm is used to effectively suppress deconvolution ringing artifacts. Experimental results demonstrate that our approach can address both depth estimation as well as image deblurring." ] }
1709.05745
2754419751
The conventional methods for estimating camera poses and scene structures from severely blurry or low resolution images often result in failure. The off-the-shelf deblurring or super-resolution methods may show visually pleasing results. However, applying each technique independently before matching is generally unprofitable because this naive series of procedures ignores the consistency between images. In this paper, we propose a pioneering unified framework that solves four problems simultaneously, namely, dense depth reconstruction, camera pose estimation, super-resolution, and deblurring. By reflecting a physical imaging process, we formulate a cost minimization problem and solve it using an alternating optimization technique. The experimental results on both synthetic and real videos show high-quality depth maps derived from severely degraded images that contrast the failures of naive multi-view stereo methods. Our proposed method also produces outstanding deblurred and super-resolved images unlike the independent application or combination of conventional video deblurring, super-resolution methods.
Some researchers have proposed to solve super-resolution and deblurring jointly @cite_14 @cite_17 . The method proposed by Bascle et al. @cite_17 relies on external tracking information to estimate the blur kernel from the trajectory of the object and to establish the sub-pixel correspondences for multi-frame super-resolution. However, this method is applicable only to objects that are easy to track, not to the entire image. Takeda and Milanfar @cite_14 proposed an intriguing method that handles the spatio-temporal super-resolution and deblurring problem in a spatially invariant 3D deconvolution framework. However, this method cannot handle large blur kernels because the size of the motion vectors between consecutive frames is limited to a few pixels.
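To make the spatially invariant 3D deconvolution idea concrete, below is a minimal NumPy sketch of frequency-domain (Wiener) deconvolution of a video volume with a single space-time blur kernel. This is an illustrative sketch under the spatial-invariance assumption, not the exact method of @cite_14 ; the function name and the noise-to-signal regularizer nsr are made up for illustration.

```python
# Minimal 3D Wiener deconvolution for a spatially invariant space-time
# blur kernel (illustrative sketch; not the exact cited method).
import numpy as np

def wiener_deconv_3d(blurred, kernel, nsr=1e-2):
    """blurred: video volume (t, y, x); kernel: small 3D blur kernel."""
    K = np.zeros(blurred.shape)
    st, sy, sx = kernel.shape
    K[:st, :sy, :sx] = kernel
    # Center the kernel at the origin so the filter introduces no shift.
    K = np.roll(K, (-(st // 2), -(sy // 2), -(sx // 2)), axis=(0, 1, 2))
    H = np.fft.fftn(K)
    B = np.fft.fftn(blurred)
    X = np.conj(H) * B / (np.abs(H) ** 2 + nsr)  # Wiener regularization
    return np.real(np.fft.ifftn(X))
```

Because a single kernel is applied everywhere in the volume, motions that vary strongly across the frame violate the model, which is the limitation noted above.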
{ "cite_N": [ "@cite_14", "@cite_17" ], "mid": [ "2154409533", "1531803620" ], "abstract": [ "Although spatial deblurring is relatively well understood by assuming that the blur kernel is shift invariant, motion blur is not so when we attempt to deconvolve on a frame-by-frame basis: this is because, in general, videos include complex, multilayer transitions. Indeed, we face an exceedingly difficult problem in motion deblurring of a single frame when the scene contains motion occlusions. Instead of deblurring video frames individually, a fully 3-D deblurring method is proposed in this paper to reduce motion blur from a single motion-blurred video to produce a high-resolution video in both space and time. Unlike other existing approaches, the proposed deblurring kernel is free from knowledge of the local motions. Most importantly, due to its inherent locally adaptive nature, the 3-D deblurring is capable of automatically deblurring the portions of the sequence, which are motion blurred, without segmentation and without adversely affecting the rest of the spatiotemporal domain, where such blur is not present. Our method is a two-step approach; first we upscale the input video in space and time without explicit estimates of local motions, and then perform 3-D deblurring to obtain the restored sequence.", "In many applications, like surveillance, image sequences are of poor quality. Motion blur in particular introduces significant image degradation. An interesting challenge is to merge these many images into one high-quality, estimated still. We propose a method to achieve this. Firstly, an object of interest is tracked through the sequence using region based matching. Secondly, degradation of images is modelled in terms of pixel sampling, defocus blur and motion blur. Motion blur direction and magnitude are estimated from tracked displacements. Finally, a high-resolution deblurred image is reconstructed. The approach is illustrated with video sequences of moving people and blurred script." ] }
1709.05435
2951669995
The theoretical ability of modular robots to reconfigure in response to complex tasks in a priori unknown environments has frequently been cited as an advantage, but has never been experimentally demonstrated. For the first time, we present a system that integrates perception, high-level mission planning, and modular robot hardware, allowing a modular robot to autonomously reconfigure in response to an a priori unknown environment in order to complete high-level tasks. Three hardware experiments validate the system, and demonstrate a modular robot autonomously exploring, reconfiguring, and manipulating objects to complete high-level tasks in unknown environments. We present system architecture, software and hardware in a general framework that enables modular robots to solve tasks in unknown environments using autonomous, reactive reconfiguration. The physical robot is composed of modules that support multiple robot configurations. An onboard 3D sensor provides information about the environment and informs exploration, reconfiguration decision making and feedback control. A centralized high-level mission planner uses information from the environment and the user-specified task description to autonomously compose low-level controllers to perform locomotion, reconfiguration, and other behaviors. A novel, centralized self-reconfiguration method is used to change robot configurations as needed.
Here we provide a more detailed overview of prior work in MSRR systems. These systems provide partial sets of the capabilities of our system. The Millibot system demonstrated mapping when operating as a swarm. Certain members of the swarm are designated as "beacons" and have known locations. The autonomy of the Millibot swarm is limited: a human operator makes all high-level decisions and is responsible for navigation using a GUI @cite_13 .
{ "cite_N": [ "@cite_13" ], "mid": [ "2136247341" ], "abstract": [ "In this article, we present the design of a team of heterogeneous, centimeter-scale robots that collaborate to map and explore unknown environments. The robots, called Millibots, are configured from modular components that include sonar and IR sensors, camera, communication, computation, and mobility modules. Robots with different configurations use their special capabilities collaboratively to accomplish a given task. For mapping and exploration with multiple robots, it is critical to know the relative positions of each robot with respect to the others. We have developed a novel localization system that uses sonar-based distance measurements to determine the positions of all the robots in the group. With their positions known, we use an occupancy grid Bayesian mapping algorithm to combine the sensor data from multiple robots with different sensing modalities. Finally, we present the results of several mapping experiments conducted by a user-guided team of five robots operating in a room containing multiple obstacles." ] }
1709.05435
2951669995
The theoretical ability of modular robots to reconfigure in response to complex tasks in a priori unknown environments has frequently been cited as an advantage, but has never been experimentally demonstrated. For the first time, we present a system that integrates perception, high-level mission planning, and modular robot hardware, allowing a modular robot to autonomously reconfigure in response to an a priori unknown environment in order to complete high-level tasks. Three hardware experiments validate the system, and demonstrate a modular robot autonomously exploring, reconfiguring, and manipulating objects to complete high-level tasks in unknown environments. We present system architecture, software and hardware in a general framework that enables modular robots to solve tasks in unknown environments using autonomous, reactive reconfiguration. The physical robot is composed of modules that support multiple robot configurations. An onboard 3D sensor provides information about the environment and informs exploration, reconfiguration decision making and feedback control. A centralized high-level mission planner uses information from the environment and the user-specified task description to autonomously compose low-level controllers to perform locomotion, reconfiguration, and other behaviors. A novel, centralized self-reconfiguration method is used to change robot configurations as needed.
Self-reconfiguration has been demonstrated with several other modular robot systems. CKbot, Conro, and MTRAN have all demonstrated the ability to join disconnected clusters of modules together @cite_17 @cite_16 @cite_25 . In order to align, Conro uses infrared sensors on the docking faces of the modules, while CKBot and MTRAN use a separate sensor module on each cluster. In all cases, individual clusters locate and servo towards each other until they are close enough to dock. These experiments do not include any planning or sequencing of multiple reconfiguration actions to create a goal structure appropriate for a task. Additionally, modules are not individually mobile, and mobile clusters of modules are limited to slow crawling gaits. Consequently, reconfiguration is very time-consuming, with a single connection requiring 5-15 minutes.
{ "cite_N": [ "@cite_16", "@cite_25", "@cite_17" ], "mid": [ "", "2033814129", "2152735424" ], "abstract": [ "", "We have been developing a self-reconfigurable robot M-TRAN. In this paper, we focus on the docking of M-TRAN modules. In order to dock modules in separate configurations, we use a camera module, which is compatible to the M-TRAN system. Proposed docking is a combination of a simple visual feedback procedure and a special docking configuration to absorb positional errors. We verified reliability of the docking method by experiments.", "This paper introduces a new challenge problem: designing robotic systems to recover after disassembly from high-energy events and a first implemented solution of a simplified problem. It uses vision-based localization for self- reassembly. The control architecture for the various states of the robot, from fully-assembled to the modes for sequential docking, are explained and inter-module communication details for the robotic system are described." ] }
1709.05435
2951669995
The theoretical ability of modular robots to reconfigure in response to complex tasks in a priori unknown environments has frequently been cited as an advantage, but has never been experimentally demonstrated. For the first time, we present a system that integrates perception, high-level mission planning, and modular robot hardware, allowing a modular robot to autonomously reconfigure in response to an a priori unknown environment in order to complete high-level tasks. Three hardware experiments validate the system, and demonstrate a modular robot autonomously exploring, reconfiguring, and manipulating objects to complete high-level tasks in unknown environments. We present system architecture, software and hardware in a general framework that enables modular robots to solve tasks in unknown environments using autonomous, reactive reconfiguration. The physical robot is composed of modules that support multiple robot configurations. An onboard 3D sensor provides information about the environment and informs exploration, reconfiguration decision making and feedback control. A centralized high-level mission planner uses information from the environment and the user-specified task description to autonomously compose low-level controllers to perform locomotion, reconfiguration, and other behaviors. A novel, centralized self-reconfiguration method is used to change robot configurations as needed.
Other work has focused on reconfiguration planning. One notable system demonstrates self-reconfigurable modular boats that self-assemble into prescribed floating structures, such as a bridge @cite_6 . Individual boat modules are able to move about the pool, allowing for rapid reconfiguration. In these experiments, the environment is known and external localization is provided by an overhead AprilTag system.
{ "cite_N": [ "@cite_6" ], "mid": [ "1984426410" ], "abstract": [ "We present the methodology, algorithms, system design, and experiments addressing the self-assembly of large teams of autonomous robotic boats into floating platforms. Identical self-propelled robotic boats autonomously dock together and form connected structures with controllable variable stiffness. These structures can self-reconfigure into arbitrary shapes limited only by the number of rectangular elements assembled in brick-like patterns. An @math complexity algorithm automatically generates assembly plans which maximize opportunities for parallelism while constructing operator-specified target configurations with @math components. The system further features an @math complexity algorithm for the concurrent assignment and planning of trajectories from @math free robots to the growing structure. Such peer-to-peer assembly among modular robots compares favorably to a single active element assembling passive components in terms of both construction rate and potential robustness through redundancy. We describe hardware and software techniques to facilitate reliable docking of elements in the presence of estimation and actuation errors, and we consider how these local variable stiffness connections may be used to control the structural properties of the larger assembly. Assembly experiments validate these ideas in a fleet of 0.5 m long modular robotic boats with onboard thrusters, active connectors, and embedded computers." ] }
1709.05047
2964020599
Abstract Semi-supervised learning is attracting increasing attention due to the fact that datasets of many domains lack enough labeled data. Variational Auto-Encoder (VAE), in particular, has demonstrated the benefits of semi-supervised learning. The majority of existing semi-supervised VAEs utilize a classifier to exploit label information, where the parameters of the classifier are introduced to the VAE. Given the limited labeled data, learning the parameters for the classifiers may not be an optimal solution for exploiting label information. Therefore, in this paper, we develop a novel approach for semi-supervised VAE without classifier. Specifically, we propose a new model called Semi-supervised Disentangled VAE (SDVAE), which encodes the input data into disentangled representation and non-interpretable representation, then the category information is directly utilized to regularize the disentangled representation via the equality constraint. To further enhance the feature learning ability of the proposed VAE, we incorporate reinforcement learning to relieve the lack of data. The dynamic framework is capable of dealing with both image and text data with its corresponding encoder and decoder networks. Extensive experiments on image and text datasets demonstrate the effectiveness of the proposed framework.
Because of the effectiveness of deep generative models in capturing data distributions, semi-supervised models based on deep generative models such as the generative adversarial network @cite_39 and the VAE @cite_13 are becoming very popular. Various semi-supervised models based on VAE have been proposed @cite_13 @cite_26 . A typical VAE is composed of an encoder network @math , which encodes the input @math to a latent representation @math , and a decoder network @math , which reconstructs @math from @math . The essential idea of the semi-supervised VAE is to add a classifier on top of the latent representation. Thus, semi-supervised VAEs are typically composed of three main components: an encoder network @math , a decoder @math , and a classifier @math . For example, Semi-VAE @cite_13 incorporates the learned latent variables into a classifier and improves the performance greatly. SSVAE @cite_26 extends Semi-VAE to sequence data and also demonstrates its effectiveness in semi-supervised learning on text data. The aforementioned semi-supervised VAEs all use a parametric classifier, which adds parameters that must be learned from the limited labeled data. Therefore, in this work, the proposed framework incorporates the label information directly into the disentangled representation and thus avoids the parametric classifier.
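As a concrete illustration of the generic encoder/decoder/classifier composition described above, the following PyTorch sketch assembles a Semi-VAE-style model; the plain-MLP layers, the dimensions, and the loss form are assumptions made for illustration, not the architecture of any cited paper.

```python
# Sketch of a generic semi-supervised VAE: encoder q(z|x), parametric
# classifier q(y|x), decoder p(x|z, y). Layer sizes and the MLP design
# are illustrative assumptions, not a cited architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemiSupervisedVAE(nn.Module):
    def __init__(self, x_dim=784, y_dim=10, z_dim=32, h_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)           # mean of q(z|x)
        self.logvar = nn.Linear(h_dim, z_dim)       # log-variance of q(z|x)
        self.classifier = nn.Linear(h_dim, y_dim)   # the parametric classifier
        self.decoder = nn.Sequential(               # p(x|z, y)
            nn.Linear(z_dim + y_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, x_dim))

    def forward(self, x, y_onehot):
        # x is assumed to be flattened and scaled to [0, 1].
        h = self.backbone(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparam. trick
        x_logits = self.decoder(torch.cat([z, y_onehot], dim=-1))
        recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl, self.classifier(h)  # negative ELBO terms, class logits
```

The `classifier` layer is exactly the component whose parameters must be fit from the scarce labels; the classifier-free formulation described above removes it and instead constrains the disentangled part of the latent code with the labels directly.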
{ "cite_N": [ "@cite_26", "@cite_13", "@cite_39" ], "mid": [ "2558737541", "2949416428", "" ], "abstract": [ "Although semi-supervised variational autoencoder (SemiVAE) works in image classification task, it fails in text classification task if using vanilla LSTM as its decoder. From a perspective of reinforcement learning, it is verified that the decoder's capability to distinguish between different categorical labels is essential. Therefore, Semi-supervised Sequential Variational Autoencoder (SSVAE) is proposed, which increases the capability by feeding label into its decoder RNN at each time-step. Two specific decoder structures are investigated and both of them are verified to be effective. Besides, in order to reduce the computational complexity in training, a novel optimization method is proposed, which estimates the gradient of the unlabeled objective function by sampling, along with two variance reduction techniques. Experimental results on Large Movie Review Dataset (IMDB) and AG's News corpus show that the proposed approach significantly improves the classification accuracy compared with pure-supervised classifiers, and achieves competitive performance against previous advanced methods. State-of-the-art results can be obtained by integrating other pretraining-based methods.", "The ever-increasing size of modern data sets combined with the difficulty of obtaining label information has made semi-supervised learning one of the problems of significant practical importance in modern data analysis. We revisit the approach to semi-supervised learning with generative models and develop new models that allow for effective generalisation from small labelled data sets to large unlabelled ones. Generative approaches have thus far been either inflexible, inefficient or non-scalable. We show that deep generative models and approximate Bayesian inference exploiting recent advances in variational methods can be used to provide significant improvements, making generative approaches highly competitive for semi-supervised learning.", "" ] }
1709.05368
2755158994
Mobile robots operating on unstructured terrain must predict which areas of the environment they are able to pass in order to plan feasible paths and to react to unforeseen terrain patterns. We address traversability estimation as a heightmap classification problem: we build a convolutional neural network that, given an image representing the heightmap of a terrain patch, predicts whether the robot will be able to traverse such patch from left to right. The classifier is trained for a specific wheeled robot model (but may implement any other locomotion type such as tracked, legged, snake-like), using simulation data on a variety of procedurally generated training terrains; once trained, the classifier can quickly be applied to patches extracted from unseen large heightmaps, in multiple orientations, thus building oriented traversability maps. We quantitatively validate the approach on real-elevation datasets and implement a path planning approach that employs our traversability estimation.
Estimating terrain traversability is a fundamental capability for animals and mobile ground robots @cite_22 ; most organisms are known to use visual perception for this task, and most related works in robotics use on-board cameras or depth sensors. Traversability can then be estimated from appearance cues, 3D geometry, or both @cite_28 . In many cases, the link between such input and traversability is learned in a supervised fashion. We first categorize related works by their input data, then focus on different options to gather training information.
{ "cite_N": [ "@cite_28", "@cite_22" ], "mid": [ "1974868116", "2108359755" ], "abstract": [ "Motion planning for unmanned ground vehicles (UGV) constitutes a domain of research where several disciplines meet, ranging from artificial intelligence and machine learning to robot perception and computer vision. In view of the plurality of related applications such as planetary exploration, search and rescue, agriculture, mining and off-road exploration, the aim of the present survey is to review the field of 3D terrain traversability analysis that is employed at a preceding stage as a means to effectively and efficiently guide the task of motion planning. We identify that in the epicenter of all related methodologies, 3D terrain information is used which is acquired from LIDAR, stereo range data, color or other sensory data and occasionally combined with static or dynamic vehicle models expressing the interaction of the vehicle with the terrain. By taxonomizing the various directions that have been explored in terrain perception and analysis, this review takes a step toward agglomerating the dispersed contributions from individual domains by elaborating on a number of key similarities as well as differences, in order to stimulate research in addressing the open challenges and inspire future developments.", "The concept of affordances, introduced in psychology by J. J. Gibson, has recently attracted interest in the development of cognitive systems in autonomous robotics. In earlier work (Sahin, Cakmak, Dogar, Ugur, & Aœcoluk), we reviewed the uses of this concept in different fields and proposed a formalism to use affordances at different levels of robot control. In this article, we first review studies in ecological psychology on the learning and perception of traversability in organisms and describe how the existence of traversability was judged to exist. We then describe the implementation of one part of the affordance formalism for the learning and perception of traversability affordances on a mobile robot equipped with range sensing ability. Through experiments inspired by ecological psychology, we show that the robot, by interacting with its environment, can learn to perceive the traversability affordances. Moreover, we claim that three of the main attributes that are commonly associated with affordances, that is, affordances being relative to the environment, providing perceptual economy, and providing general information, are simply consequences of learning from the interactions of the robot with the environment." ] }
1709.05368
2755158994
Mobile robots operating on unstructured terrain must predict which areas of the environment they are able to pass in order to plan feasible paths and to react to unforeseen terrain patterns. We address traversability estimation as a heightmap classification problem: we build a convolutional neural network that, given an image representing the heightmap of a terrain patch, predicts whether the robot will be able to traverse such patch from left to right. The classifier is trained for a specific wheeled robot model (but may implement any other locomotion type such as tracked, legged, snake-like), using simulation data on a variety of procedurally generated training terrains; once trained, the classifier can quickly be applied to patches extracted from unseen large heightmaps, in multiple orientations, thus building oriented traversability maps. We quantitatively validate the approach on real-elevation datasets and implement a path planning approach that employs our traversability estimation.
Geometry-based approaches use local @cite_15 or global sensory data to derive an elevation map of the terrain, which is a convenient spatial representation for ground robots @cite_34 . Then, one option is to simulate a model of the robot on different areas of such an elevation map: this allows one to explicitly test traversability or, in a simpler setup, to evaluate the pose that the robot would assume when lying on each point of the elevation map @cite_31 . A more common approach evaluates each point of the elevation map by extracting simple local features (such as slope, roughness, and step height) and then estimating traversability either through handcrafted rules or with a learned classifier @cite_12 @cite_7 @cite_24 .
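As a sketch of the feature-based variant, the NumPy snippet below computes three such local statistics on a heightmap patch; the exact feature definitions and the 0.1 m cell size are assumptions, since each cited method uses its own variants.

```python
# Simple local terrain statistics on a heightmap patch (a 2D array of
# elevations in meters). These are common feature choices; the precise
# definitions differ across the cited methods.
import numpy as np

def patch_features(patch, cell_size=0.1):
    gy, gx = np.gradient(patch, cell_size)                  # elevation gradients
    slope = np.degrees(np.arctan(np.hypot(gx, gy))).mean()  # mean slope [deg]
    roughness = patch.std()                                 # height deviation
    step_height = patch.max() - patch.min()                 # largest elevation step
    return np.array([slope, roughness, step_height])
```

A handcrafted rule (for example, thresholding the slope) or any learned classifier can then map this feature vector to a traversable/non-traversable label.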
{ "cite_N": [ "@cite_31", "@cite_7", "@cite_24", "@cite_15", "@cite_34", "@cite_12" ], "mid": [ "2116764847", "2113296585", "2536431081", "2571268510", "2538812460", "2133918634" ], "abstract": [ "Autonomous long-range navigation in partially known planetary-like terrains is still an open challenge for robotics. Navigating hundreds of meters without any human intervention requires the robot to be able to build various representations of its environment, to plan and execute trajectories according to the kind of terrain traversed, to control its motions and to localize itself as it moves. All these activities have to be scheduled, triggered, controlled and interrupted according to the rover context. In this paper, we briefly review some functionalities that have been developed in our laboratory, and implemented on board the Marsokhod model robot, Lama. We then present how the various concurrent instances of the perception, localization and motion generation functionalities are integrated. Experimental results illustrate the functionalities throughout the paper.", "In this paper we present a traversability assessment method for motion planning in autonomous walking robots. The aim is to plan the motion of the robot in a real scenario on a rough terrain, where the level of details in the obtained terrain maps is not sufficient for motion planning. A guided RRT (Rapidly-exploring Random Trees) algorithm is used to plan the motion of the robot on rough terrain. We are looking for a method that can learn the terrain traversability cost function to the benefit of the guiding function of the planning algorithm. A probabilistic regression technique is used to solve the traversability assessment problem. Computing the predictions of the traversability values we use the RRT planner to explore the space of possible solutions. We demonstrate efficiency of the prediction method and we show results of experiments on the real walking robot.", "This paper presents a framework for planning safe and efficient paths for a legged robot in rough and unstructured terrain. The proposed approach allows to exploit the distinctive obstacle negotiation capabilities of legged robots, while keeping the complexity low enough to enable planning over considerable distances in short time. We compute typical terrain characteristics such as slope, roughness, and steps to build a traversability map. This map is used to assess the costs of individual robot footprints as a function of the robot-specific obstacle negotiating capabilities for steps, gaps and stairs. Our sampling-based planner employs the RRT* algorithm to optimize path length and safety. The planning framework has a hierarchical architecture to frequently replan the path during execution as new terrain is perceived with onboard sensors. Furthermore, a cascaded planning structure makes use of different levels of simplification to allow for fast search in simple environments, while retaining the ability to find complex solutions, such as paths through narrow passages. The proposed navigation planning framework is integrated on the quadrupedal robot StarlETH and extensively tested in simulation as well as on the real platform.", "We address the problem of planning a path for a ground robot through unknown terrain, using observations from a flying robot. In search and rescue missions, which are our target scenarios, the time from arrival at the disaster site to the delivery of aid is critically important. 
Previous works required exhaustive exploration before path planning, which is time-consuming but eventually leads to an optimal path for the ground robot. Instead, we propose active exploration of the environment, where the flying robot chooses regions to map in a way that optimizes the overall response time of the system, which is the combined time for the air and ground robots to execute their missions. In our approach, we estimate terrain classes throughout our terrain map, and we also add elevation information in areas where the active exploration algorithm has chosen to perform 3-D reconstruction. This terrain information is used to estimate feasible and efficient paths for the ground robot. By exploring the environment actively, we achieve superior response times compared to both exhaustive and greedy exploration strategies. We demonstrate the performance and capabilities of the proposed system in simulated and real-world outdoor experiments. To the best of our knowledge, this is the first work to address ground robot path planning using active aerial exploration.", "", "In this paper we address the problem of closing the loop from perception to action selection for unmanned ground vehicles, with a focus on navigating slopes. A new non-parametric learning technique is presented to generate a mobility representation where the maximum feasible speed is used as a criterion to classify the world. The inputs to the algorithm are terrain gradients derived from an elevation map and past observations of wheel slip. It is argued that such a representation can aid in path planning with improved selection of vehicle heading and velocity in off-road slopes. In addition, an information theoretic test is proposed to validate a chosen proprioceptive representation (such as slip) for mobility map generation. Results of mobility map generation and its benefits to path planning are shown." ] }
1709.05368
2755158994
Mobile robots operating on unstructured terrain must predict which areas of the environment they are able to pass in order to plan feasible paths and to react to unforeseen terrain patterns. We address traversability estimation as a heightmap classification problem: we build a convolutional neural network that, given an image representing the heightmap of a terrain patch, predicts whether the robot will be able to traverse such patch from left to right. The classifier is trained for a specific wheeled robot model (but may implement any other locomotion type such as tracked, legged, snake-like), using simulation data on a variety of procedurally generated training terrains; once trained, the classifier can quickly be applied to patches extracted from unseen large heightmaps, in multiple orientations, thus building oriented traversability maps. We quantitatively validate the approach on real-elevation datasets and implement a path planning approach that employs our traversability estimation.
Using a supervised learning approach exempts one from the need to manually define the link between the input data and traversability; on the other hand, it requires a set of labeled examples, whose size, quality, and representativeness are key to the final performance: the strategy adopted to acquire such training data is an important (and sometimes prevailing) component of all related literature @cite_28 . Our approach exclusively relies on training data acquired from simulations on procedurally generated terrains: this allows us to cheaply generate large datasets for data-hungry deep learning models. This approach has not yet been attempted in the traversability literature, but training from simulations is a common strategy for learning manipulation or legged locomotion skills, especially when reinforcement learning techniques are adopted @cite_33 @cite_5 .
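One cheap way to procedurally generate such training terrains is spectral synthesis of fractal noise, sketched below; the power-law exponent and map size are made-up parameters, and the paper's actual generator may differ.

```python
# Procedural heightmap generation by spectral synthesis of fractal noise
# (one plausible generator; not necessarily the one used in the paper).
import numpy as np

def fractal_heightmap(n=128, beta=2.0, amplitude=1.0, seed=0):
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n)[:, None]
    fy = np.fft.fftfreq(n)[None, :]
    f = np.hypot(fx, fy)
    f[0, 0] = 1.0  # avoid division by zero at the DC component
    # Random phases with a power-law amplitude spectrum ~ f^(-beta/2);
    # a larger beta yields smoother terrain.
    spectrum = f ** (-beta / 2) * np.exp(2j * np.pi * rng.random((n, n)))
    hm = np.real(np.fft.ifft2(spectrum))
    hm -= hm.min()
    return amplitude * hm / hm.max()  # heights rescaled to [0, amplitude]
```

Simulating the robot on many such maps yields arbitrarily large labeled datasets at no manual labeling cost.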
{ "cite_N": [ "@cite_28", "@cite_5", "@cite_33" ], "mid": [ "1974868116", "2211115409", "" ], "abstract": [ "Motion planning for unmanned ground vehicles (UGV) constitutes a domain of research where several disciplines meet, ranging from artificial intelligence and machine learning to robot perception and computer vision. In view of the plurality of related applications such as planetary exploration, search and rescue, agriculture, mining and off-road exploration, the aim of the present survey is to review the field of 3D terrain traversability analysis that is employed at a preceding stage as a means to effectively and efficiently guide the task of motion planning. We identify that in the epicenter of all related methodologies, 3D terrain information is used which is acquired from LIDAR, stereo range data, color or other sensory data and occasionally combined with static or dynamic vehicle models expressing the interaction of the vehicle with the terrain. By taxonomizing the various directions that have been explored in terrain perception and analysis, this review takes a step toward agglomerating the dispersed contributions from individual domains by elaborating on a number of key similarities as well as differences, in order to stimulate research in addressing the open challenges and inspire future developments.", "Crowdsourced 3D CAD models are becoming easily accessible online, and can potentially generate an infinite number of training images for almost any object category.We show that augmenting the training data of contemporary Deep Convolutional Neural Net (DCNN) models with such synthetic data can be effective, especially when real training data is limited or not well matched to the target domain. Most freely available CAD models capture 3D shape but are often missing other low level cues, such as realistic object texture, pose, or background. In a detailed analysis, we use synthetic CAD-rendered images to probe the ability of DCNN to learn without these cues, with surprising findings. In particular, we show that when the DCNN is fine-tuned on the target detection task, it exhibits a large degree of invariance to missing low-level cues, but, when pretrained on generic ImageNet classification, it learns better when the low-level cues are simulated. We show that our synthetic DCNN training approach significantly outperforms previous methods on the PASCAL VOC2007 dataset when learning in the few-shot scenario and improves performance in a domain shift scenario on the Office benchmark.", "" ] }
1709.05368
2755158994
Mobile robots operating on unstructured terrain must predict which areas of the environment they are able to pass in order to plan feasible paths and to react to unforeseen terrain patterns. We address traversability estimation as a heightmap classification problem: we build a convolutional neural network that, given an image representing the heightmap of a terrain patch, predicts whether the robot will be able to traverse such patch from left to right. The classifier is trained for a specific wheeled robot model (but may implement any other locomotion type such as tracked, legged, snake-like), using simulation data on a variety of procedurally generated training terrains; once trained, the classifier can quickly be applied to patches extracted from unseen large heightmaps, in multiple orientations, thus building oriented traversability maps. We quantitatively validate the approach on real-elevation datasets and implement a path planning approach that employs our traversability estimation.
Transferring models learned in simulation to the real world is a very active research topic in the manipulation literature @cite_21 . Instead of dealing with this issue, the traversability literature acquires training data from the real world, then associates the observed input data with a ground-truth traversability label, which can be determined in one of two ways.
{ "cite_N": [ "@cite_21" ], "mid": [ "2952629144" ], "abstract": [ "Applying end-to-end learning to solve complex, interactive, pixel-driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high-level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our real-world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. Moreover, rather than relying on model-based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards." ] }
1709.05368
2755158994
Mobile robots operating on unstructured terrain must predict which areas of the environment they are able to pass in order to plan feasible paths and to react to unforeseen terrain patterns. We address traversability estimation as a heightmap classification problem: we build a convolutional neural network that, given an image representing the heightmap of a terrain patch, predicts whether the robot will be able to traverse such patch from left to right. The classifier is trained for a specific wheeled robot model (but may implement any other locomotion type such as tracked, legged, snake-like), using simulation data on a variety of procedurally generated training terrains; once trained, the classifier can quickly be applied to patches extracted from unseen large heightmaps, in multiple orientations, thus building oriented traversability maps. We quantitatively validate the approach on real-elevation datasets and implement a path planning approach that employs our traversability estimation.
The first option is to use the robot’s own experience: the robot needs a way to detect whether the area it is traversing is in fact traversable, using wheel slippage @cite_16 , vibration sensors @cite_0 , or visual odometry to check progress (a possible extension of our approach, as we note later).
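As a toy illustration of such experience-based self-labeling, the snippet below derives a traversability label from how far the robot actually progressed relative to the commanded distance; the 0.8 progress ratio and the use of odometry poses are assumptions made for illustration only.

```python
# Toy self-supervised labeling rule: a terrain patch is labeled traversable
# only if the robot actually made progress across it. The 0.8 progress
# ratio and the odometry-based poses are illustrative assumptions.
import numpy as np

def label_from_progress(start_xy, end_xy, commanded_dist, ratio=0.8):
    travelled = np.linalg.norm(np.asarray(end_xy) - np.asarray(start_xy))
    return travelled >= ratio * commanded_dist  # True -> traversable
```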
{ "cite_N": [ "@cite_0", "@cite_16" ], "mid": [ "2040121219", "1498688739" ], "abstract": [ "A key feature for an autonomous mobile robot navigating in off-road unknown areas is environment sensing. Extraction of meaningful information from sensor data allows a good characterization of the near to far terrains, and thus, the ability for the vehicle to achieve its tasks with easiness. We present an image feature extraction scheme to predict mobile platform motion information. For a sequence of run, several images of terrains and vibrations endured by the mobile robot are acquired using a camera and an acceleration sensor. Texture information extracted by the Segmentation-based Fractal Texture Analysis descriptor (SFTA) was used to find correlations with acceleration features quantified using different time analysis parameters. Experimental results showed that texture information is a good candidate to predict running information.", "Real time estimation of soil parameters is essential in achieving precise, robust autonomous guidance and control of a tracked vehicle. The paper shows that the slip of the tracks over the terrain can be identified from trajectory data using an extended Kalman filter. The use of a suitable soil model can then allow key soil parameters to be estimated as the vehicle passes over the soil. Knowledge of the soil parameters may in turn be used to allow reference trajectories and control algorithms to be adjusted to suit the soil conditions." ] }
1709.05368
2755158994
Mobile robots operating on unstructured terrain must predict which areas of the environment they are able to pass in order to plan feasible paths and to react to unforeseen terrain patterns. We address traversability estimation as a heightmap classification problem: we build a convolutional neural network that, given an image representing the heightmap of a terrain patch, predicts whether the robot will be able to traverse such patch from left to right. The classifier is trained for a specific wheeled robot model (but may implement any other locomotion type such as tracked, legged, snake-like), using simulation data on a variety of procedurally generated training terrains; once trained, the classifier can quickly be applied to patches extracted from unseen large heightmaps, in multiple orientations, thus building oriented traversability maps. We quantitatively validate the approach on real-elevation datasets and implement a path planning approach that employs our traversability estimation.
A second option is to entrust labeling to a human; this could be done in a direct, straightforward way (the robot acquires data, a human marks each input with a traversability label, yielding a training set) @cite_10 , or using strategies to make the process more efficient. For example, in @cite_27 a human draws a path from a source to a target, and the system infers which areas the human purposefully avoided and automatically uses such patches as non-traversable examples.
{ "cite_N": [ "@cite_27", "@cite_10" ], "mid": [ "2082764616", "2296673577" ], "abstract": [ "Rough terrain autonomous navigation continues to pose a challenge to the robotics community. Robust navigation by a mobile robot depends not only on the individual performance of perception and planning systems, but on how well these systems are coupled. When traversing complex unstructured terrain, this coupling (in the form of a cost function) has a large impact on robot behavior and performance, necessitating a robust design. This paper explores the application of Learning from Demonstration to this task for the Crusher autonomous navigation platform. Using expert examples of desired navigation behavior, mappings from both online and offline perceptual data to planning costs are learned. Challenges in adapting existing techniques to complex online planning systems and imperfect demonstration are addressed, along with additional practical considerations. The benefits to autonomous performance of this approach are examined, as well as the decrease in necessary designer effort. Experimental results are presented from autonomous traverses through complex natural environments.", "We study the problem of perceiving forest or mountain trails from a single monocular image acquired from the viewpoint of a robot traveling on the trail itself. Previous literature focused on trail segmentation, and used low-level features such as image saliency or appearance contrast; we propose a different approach based on a deep neural network used as a supervised image classifier. By operating on the whole image at once, our system outputs the main direction of the trail compared to the viewing direction. Qualitative and quantitative results computed on a large real-world dataset (which we provide for download) show that our approach outperforms alternatives, and yields an accuracy comparable to the accuracy of humans that are tested on the same image classification task. Preliminary results on using this information for quadrotor control in unseen trails are reported. To the best of our knowledge, this is the first letter that describes an approach to perceive forest trials, which is demonstrated on a quadrotor micro aerial vehicle." ] }
1709.05087
2754444390
In video-based action recognition, viewpoint variations often pose major challenges because the same actions can appear different from different views. We use the complementary RGB and Depth information from the RGB-D cameras to address this problem. The proposed technique capitalizes on the spatio-temporal information available in the two data streams to extract action features that are largely insensitive to the viewpoint variations. We use the RGB data to compute dense trajectories that are translated to viewpoint insensitive deep features under a non-linear knowledge transfer model. Similarly, the Depth stream is used to extract CNN-based view invariant features on which Fourier Temporal Pyramid is computed to incorporate the temporal information. The heterogeneous features from the two streams are combined and used as a dictionary to predict the label of the test samples. To that end, we propose a sparse-dense collaborative representation classification scheme that strikes a balance between the discriminative abilities of the dense and the sparse representations of the samples over the extracted heterogeneous dictionary.
In RGB video based action recognition, a few existing approaches @cite_18 @cite_29 @cite_20 directly use geometric transformations to incorporate the much-needed viewpoint invariance. However, to achieve the desired performance level, it is critical for these methods to accurately estimate the skeleton joints. In practical conditions, it is often challenging to achieve a high level of accuracy in skeleton joint estimation, which makes these methods less appealing for practical use. Another stream of techniques @cite_3 @cite_23 @cite_40 @cite_30 exploits spatio-temporal features in the RGB videos to achieve viewpoint invariance. Nevertheless, the action recognition performance of these approaches is generally limited by the structure of the extracted features @cite_39 .
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_29", "@cite_3", "@cite_39", "@cite_40", "@cite_23", "@cite_20" ], "mid": [ "2013076218", "2156135524", "2166070055", "1984219317", "2465488276", "2125854396", "2021733262", "" ], "abstract": [ "Action recognition is an important and challenging topic in computer vision, with many important applications including video surveillance, automated cinematography and understanding of social interaction. Yet, most current work in gesture or action interpretation remains rooted in view-dependent representations. This paper introduces Motion History Volumes (MHV) as a free-viewpoint representation for human actions in the case of multiple calibrated, and background-subtracted, video cameras. We present algorithms for computing, aligning and comparing MHVs of different actions performed by different people in a variety of viewpoints. Alignment and comparisons are performed efficiently using Fourier transforms in cylindrical coordinates around the vertical axis. Results indicate that this representation can be used to learn and recognize basic human action classes, independently of gender, body size and viewpoint.", "3D human pose recovery is considered as a fundamental step in view-invariant human action recognition. However, inferring 3D poses from a single view usually is slow due to the large number of parameters that need to be estimated and recovered poses are often ambiguous due to the perspective projection. We present an approach that does not explicitly infer 3D pose at each frame. Instead, from existing action models we search for a series of actions that best match the input sequence. In our approach, each action is modeled as a series of synthetic 2D human poses rendered from a wide range of viewpoints. The constraints on transition of the synthetic poses is represented by a graph model called Action Net. Given the input, silhouette matching between the input frames and the key poses is performed first using an enhanced Pyramid Match Kernel algorithm. The best matched sequence of actions is then tracked using the Viterbi algorithm. We demonstrate this approach on a challenging video sets consisting of 15 complex action classes.", "In this paper, we address the problem of learning compact, view-independent, realistic 3D models of human actions recorded with multiple cameras, for the purpose of recognizing those same actions from a single or few cameras, without prior knowledge about the relative orientations between the cameras and the subjects. To this aim, we propose a new framework where we model actions using three dimensional occupancy grids, built from multiple viewpoints, in an exemplar-based HMM. The novelty is, that a 3D reconstruction is not required during the recognition phase, instead learned 3D exemplars are used to produce 2D image information that is compared to the observations. Parameters that describe image projections are added as latent variables in the recognition process. In addition, the temporal Markov dependency applied to view parameters allows them to evolve during recognition as with a smoothly moving camera. The effectiveness of the framework is demonstrated with experiments on real datasets and with challenging recognition scenarios.", "Human activity recognition is central to many practical applications, ranging from visual surveillance to gaming interfacing. Most approaches addressing this problem are based on localized spatio-temporal features that can vary significantly when the viewpoint changes. 
As a result, their performances rapidly deteriorate as the difference between the viewpoints of the training and testing data increases. In this paper, we introduce a new type of feature, the “Hankelet” that captures dynamic properties of short tracklets. While Hankelets do not carry any spatial information, they bring invariant properties to changes in viewpoint that allow for robust cross-view activity recognition, i.e. when actions are recognized using a classifier trained on data from a different viewpoint. Our experiments on the IXMAS dataset show that using Hankelets improves the state of the art performance by over 20%.", "We propose a human pose representation model that transfers human poses acquired from different unknown views to a view-invariant high-level space. The model is a deep convolutional neural network and requires a large corpus of multiview training data which is very expensive to acquire. Therefore, we propose a method to generate this data by fitting synthetic 3D human models to real motion capture data and rendering the human poses from numerous viewpoints. While learning the CNN model, we do not use action labels but only the pose labels after clustering all training poses into k clusters. The proposed model is able to generalize to real depth images of unseen poses without the need for re-training or fine-tuning. Real depth videos are passed through the model frame-wise to extract view-invariant features. For spatio-temporal representation, we propose group sparse Fourier Temporal Pyramid which robustly encodes the action specific most discriminative output features of the proposed human pose model. Experiments on two multiview and three single-view benchmark datasets show that the proposed method dramatically outperforms existing state-of-the-art in action recognition.", "Analysis of human perception of motion shows that information for representing the motion is obtained from the dramatic changes in the speed and direction of the trajectory. In this paper, we present a computational representation of human action to capture these dramatic changes using spatio-temporal curvature of 2-D trajectory. This representation is compact, view-invariant, and is capable of explaining an action in terms of meaningful action units called dynamic instants and intervals. A dynamic instant is an instantaneous entity that occurs for only one frame, and represents an important change in the motion characteristics. An interval represents the time period between two dynamic instants during which the motion characteristics do not change. Starting without a model, we use this representation for recognition and incremental learning of human actions. The proposed method can discover instances of the same action performed by different people from different view points. Experiments on 47 actions performed by 7 individuals in an environment with no constraints shows the robustness of the proposed method.", "This paper presents an approach for viewpoint invariant human action recognition, an area that has received scant attention so far, relative to the overall body of work in human action recognition. It has been established previously that there exist no invariants for 3D to 2D projection. However, there exist a wealth of techniques in 2D invariance that can be used to advantage in 3D to 2D projection. 
We exploit these techniques and model actions in terms of view-invariant canonical body poses and trajectories in 2D invariance space, leading to a simple and effective way to represent and recognize human actions from a general viewpoint. We first evaluate the approach theoretically and show why a straightforward application of the 2D invariance idea will not work. We describe strategies designed to overcome inherent problems in the straightforward approach and outline the recognition algorithm. We then present results on 2D projections of publicly available human motion capture data as well as on manually segmented real image sequences. In addition to robustness to viewpoint change, the approach is robust enough to handle different people, minor variabilities in a given action, and the speed of action (and hence, frame-rate) while encoding sufficient distinction among actions.", "" ] }
1709.05087
2754444390
In video-based action recognition, viewpoint variations often pose major challenges because the same actions can appear different from different views. We use the complementary RGB and Depth information from the RGB-D cameras to address this problem. The proposed technique capitalizes on the spatio-temporal information available in the two data streams to extract action features that are largely insensitive to the viewpoint variations. We use the RGB data to compute dense trajectories that are translated to viewpoint insensitive deep features under a non-linear knowledge transfer model. Similarly, the Depth stream is used to extract CNN-based view invariant features on which Fourier Temporal Pyramid is computed to incorporate the temporal information. The heterogeneous features from the two streams are combined and used as a dictionary to predict the label of the test samples. To that end, we propose a sparse-dense collaborative representation classification scheme that strikes a balance between the discriminative abilities of the dense and the sparse representations of the samples over the extracted heterogeneous dictionary.
Another popular framework in RGB video based view-invariant action recognition is to find a latent space where the features are insensitive to viewpoint variations and to classify the actions in that latent space @cite_14 @cite_15 @cite_21 @cite_25 @cite_16 @cite_1 @cite_31 . A combination of hand-crafted and deep-learned features was also proposed by @cite_48 for RGB action recognition. In their approach, trajectory pooling was used for one stream of the data and a deep learning framework for the other. The two feature streams were combined to form trajectory-pooled deep-convolutional descriptors. Nevertheless, only RGB data is used in their approach, and the problem of viewpoint variation in action recognition is not directly addressed.
{ "cite_N": [ "@cite_31", "@cite_14", "@cite_48", "@cite_21", "@cite_1", "@cite_15", "@cite_16", "@cite_25" ], "mid": [ "2169560406", "1487322600", "", "2128053425", "2159001013", "2536349189", "2010243644", "" ], "abstract": [ "We present an approach to jointly learn a set of view-specific dictionaries and a common dictionary for cross-view action recognition. The set of view-specific dictionaries is learned for specific views while the common dictionary is shared across different views. Our approach represents videos in each view using both the corresponding view-specific dictionary and the common dictionary. More importantly, it encourages the set of videos taken from different views of the same action to have similar sparse representations. In this way, we can align view-specific features in the sparse feature spaces spanned by the view-specific dictionary set and transfer the view-shared features in the sparse feature space spanned by the common dictionary. Meanwhile, the incoherence between the common dictionary and the view-specific dictionary set enables us to exploit the discrimination information encoded in view-specific features and view-shared features separately. In addition, the learned common dictionary not only has the capability to represent actions from unseen views, but also makes our approach effective in a semi-supervised setting where no correspondence videos exist and only a few labels exist in the target view. Extensive experiments using the multi-view IXMAS dataset demonstrate that our approach outperforms many recent approaches for cross-view action recognition.", "Appearance features are good at discriminating activities in a fixed view, but behave poorly when aspect is changed. We describe a method to build features that are highly stable under change of aspect. It is not necessary to have multiple views to extract our features. Our features make it possible to learn a discriminative model of activity in one view, and spot that activity in another view, for which one might poses no labeled examples at all. Our construction uses labeled examples to build activity models, and unlabeled, but corresponding, examples to build an implicit model of how appearance changes with aspect. We demonstrate our method with challenging sequences of real human motion, where discriminative methods built on appearance alone fail badly.", "", "Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. 
We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.", "In this paper, we propose a novel method for cross-view action recognition via a continuous virtual path which connects the source view and the target view. Each point on this virtual path is a virtual view which is obtained by a linear transformation of the action descriptor. All the virtual views are concatenated into an infinite-dimensional feature to characterize continuous changes from the source to the target view. However, these infinite-dimensional features cannot be used directly. Thus, we propose a virtual view kernel to compute the value of similarity between two infinite-dimensional features, which can be readily used to construct any kernelized classifiers. In addition, there are a lot of unlabeled samples from the target view, which can be utilized to improve the performance of classifiers. Thus, we present a constraint strategy to explore the information contained in the unlabeled samples. The rationality behind the constraint is that any action video belongs to only one class. Our method is verified on the IXMAS dataset, and the experimental results demonstrate that our method achieves better performance than the state-of-the-art methods.", "Recognition using appearance features is confounded by phenomena that cause images of the same object to look different, or images of different objects to look the same. This may occur because the same object looks different from different viewing directions, or because two generally different objects have views from which they look similar. In this paper, we introduce the idea of discriminative aspect, a set of latent variables that encode these phenomena. Changes in view direction are one cause of changes in discriminative aspect, but others include changes in texture or lighting. However, images are not labelled with relevant discriminative aspect parameters. We describe a method to improve discrimination by inferring and then using latent discriminative aspect parameters. We apply our method to two parallel problems: object category recognition and human activity recognition. In each case, appearance features are powerful given appropriate training data, but traditionally fail badly under large changes in view. Our method can recognize an object quite reliably in a view for which it possesses no training example. Our method also reweights features to discount accidental similarities in appearance. We demonstrate that our method produces a significant improvement on the state of the art for both object and activity recognition.", "In this paper, we present a novel approach to recognizing human actions from different views by view knowledge transfer. An action is originally modelled as a bag of visual-words (BoVW), which is sensitive to view changes. We argue that, as opposed to visual words, there exist some higher level features which can be shared across views and enable the connection of action models for different views. To discover these features, we use a bipartite graph to model two view-dependent vocabularies, then apply bipartite graph partitioning to co-cluster two vocabularies into visual-word clusters called bilingual-words (i.e., high-level features), which can bridge the semantic gap across view-dependent vocabularies. 
Consequently, we can transfer a BoVW action model into a bag-of-bilingual-words (BoBW) model, which is more discriminative in the presence of view changes. We tested our approach on the IXMAS data set and obtained very promising results. Moreover, to further fuse view knowledge from multiple views, we apply a Locally Weighted Ensemble scheme to dynamically weight transferred models based on the local distribution structure around each test example. This process can further improve the average recognition rate by about 7%.", "" ] }
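None of the works cited above uses this exact recipe, but a minimal, generic instance of the shared-latent-space idea from the preceding related-work paragraph is canonical correlation analysis over paired two-view descriptors. The feature dimensions and the random data below are placeholders, not values from any of the cited papers:

import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
view1 = rng.normal(size=(200, 50))    # stand-in descriptors, camera 1
view2 = rng.normal(size=(200, 50))    # the same 200 clips seen from camera 2

cca = CCA(n_components=10)
cca.fit(view1, view2)                 # learn a correlated latent space
z1, z2 = cca.transform(view1, view2)  # view-insensitive codes for a classifier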
1709.05087
2754444390
In video-based action recognition, viewpoint variations often pose major challenges because the same actions can appear different from different views. We use the complementary RGB and Depth information from the RGB-D cameras to address this problem. The proposed technique capitalizes on the spatio-temporal information available in the two data streams to extract action features that are largely insensitive to the viewpoint variations. We use the RGB data to compute dense trajectories that are translated to viewpoint insensitive deep features under a non-linear knowledge transfer model. Similarly, the Depth stream is used to extract CNN-based view invariant features on which Fourier Temporal Pyramid is computed to incorporate the temporal information. The heterogeneous features from the two streams are combined and used as a dictionary to predict the label of the test samples. To that end, we propose a sparse-dense collaborative representation classification scheme that strikes a balance between the discriminative abilities of the dense and the sparse representations of the samples over the extracted heterogeneous dictionary.
With the easy availability of the Depth data through the Microsoft Kinect sensor, Depth video based action recognition became very popular over the last decade. In @cite_46 and @cite_44 , 3D points sampled from the silhouettes of the Depth images and 3D joint positions are used to extract features for action recognition. A binary range-sample feature was proposed for the Depth videos by Lu et al. @cite_32 , which demonstrated significant improvement in achieving viewpoint invariance. Rahmani et al. @cite_43 proposed the Histogram of Oriented Principal Components (HOPC) to detect interest points in Depth videos and extract their spatio-temporal descriptors. HOPC extracts local features in an object-centered local coordinate basis, thereby making them viewpoint invariant. Nevertheless, these features must be extracted at a large number of interest points, which makes the overall approach computationally expensive. In another work, @cite_35 clustered hypersurface normals in the Depth sequences to characterize the local motions and the shape information. An adaptive spatio-temporal pyramid is used in their approach to divide the Depth data into a set of space-time grids, and low-level polynormals are aggregated to form a Super Normal Vector (SNV). This vector is eventually employed in action recognition.
{ "cite_N": [ "@cite_35", "@cite_32", "@cite_44", "@cite_43", "@cite_46" ], "mid": [ "2091911422", "2086663212", "1953802779", "1874503286", "2144380653" ], "abstract": [ "This paper presents a new framework for human activity recognition from video sequences captured by a depth camera. We cluster hypersurface normals in a depth sequence to form the polynormal which is used to jointly characterize the local motion and shape information. In order to globally capture the spatial and temporal orders, an adaptive spatio-temporal pyramid is introduced to subdivide a depth video into a set of space-time grids. We then propose a novel scheme of aggregating the low-level polynormals into the super normal vector (SNV) which can be seen as a simplified version of the Fisher kernel representation. In the extensive experiments, we achieve classification results superior to all previous published results on the four public benchmark datasets, i.e., MSRAction3D, MSRDailyActivity3D, MSRGesture3D, and MSRActionPairs3D.", "We propose binary range-sample feature in depth. It is based on τ tests and achieves reasonable invariance with respect to possible change in scale, viewpoint, and background. It is robust to occlusion and data corruption as well. The descriptor works in a high speed thanks to its binary property. Working together with standard learning algorithms, the proposed descriptor achieves state-of-theart results on benchmark datasets in our experiments. Impressively short running time is also yielded.", "Our goal is to automatically segment and recognize basic human actions, such as stand, walk and wave hands, from a sequence of joint positions or pose angles. Such recognition is difficult due to high dimensionality of the data and large spatial and temporal variations in the same action. We decompose the high dimensional 3-D joint space into a set of feature spaces where each feature corresponds to the motion of a single joint or combination of related multiple joints. For each feature, the dynamics of each action class is learned with one HMM. Given a sequence, the observation probability is computed in each HMM and a weak classifier for that feature is formed based on those probabilities. The weak classifiers with strong discriminative power are then combined by the Multi-Class AdaBoost (AdaBoost.M2) algorithm. A dynamic programming algorithm is applied to segment and recognize actions simultaneously. Results of recognizing 22 actions on a large number of motion capture sequences as well as several annotated and automatically tracked sequences show the effectiveness of the proposed algorithms.", "Existing techniques for 3D action recognition are sensitive to viewpoint variations because they extract features from depth images which change significantly with viewpoint. In contrast, we directly process the pointclouds and propose a new technique for action recognition which is more robust to noise, action speed and viewpoint variations. Our technique consists of a novel descriptor and keypoint detection algorithm. The proposed descriptor is extracted at a point by encoding the Histogram of Oriented Principal Components (HOPC) within an adaptive spatio-temporal support volume around that point. Based on this descriptor, we present a novel method to detect Spatio-Temporal Key-Points (STKPs) in 3D pointcloud sequences. Experimental results show that the proposed descriptor and STKP detector outperform state-of-the-art algorithms on three benchmark human activity datasets. 
We also introduce a new multiview public dataset and show the robustness of our proposed method to viewpoint variations.", "This paper presents a method to recognize human actions from sequences of depth maps. Specifically, we employ an action graph to model explicitly the dynamics of the actions and a bag of 3D points to characterize a set of salient postures that correspond to the nodes in the action graph. In addition, we propose a simple, but effective projection based sampling scheme to sample the bag of 3D points from the depth maps. Experimental results have shown that over 90% recognition accuracy was achieved by sampling only about 1% of the 3D points from the depth maps. Compared to the 2D silhouette based recognition, the recognition errors were halved. In addition, we demonstrate the potential of the bag of points posture model to deal with occlusions through simulation." ] }
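To make the normal-based Depth features discussed above (e.g. the polynormal/SNV construction) more concrete, here is a minimal sketch of per-pixel surface normals estimated from a depth map with finite differences. The cited papers use more elaborate spatio-temporal constructions on top of such normals; this is only the basic ingredient:

import numpy as np

def depth_normals(depth):
    # Gradients along image rows (y) and columns (x) of the depth map.
    dz_dy, dz_dx = np.gradient(depth.astype(np.float64))
    n = np.dstack((-dz_dx, -dz_dy, np.ones_like(dz_dx)))
    return n / np.linalg.norm(n, axis=2, keepdims=True)

normals = depth_normals(np.random.rand(120, 160))  # (120, 160, 3) unit normals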
1709.05087
2754444390
In video-based action recognition, viewpoint variations often pose major challenges because the same actions can appear different from different views. We use the complementary RGB and Depth information from the RGB-D cameras to address this problem. The proposed technique capitalizes on the spatio-temporal information available in the two data streams to extract action features that are largely insensitive to the viewpoint variations. We use the RGB data to compute dense trajectories that are translated to viewpoint insensitive deep features under a non-linear knowledge transfer model. Similarly, the Depth stream is used to extract CNN-based view invariant features on which Fourier Temporal Pyramid is computed to incorporate the temporal information. The heterogeneous features from the two streams are combined and used as a dictionary to predict the label of the test samples. To that end, we propose a sparse-dense collaborative representation classification scheme that strikes a balance between the discriminative abilities of the dense and the sparse representations of the samples over the extracted heterogeneous dictionary.
Most of the Depth sensors also provide simultaneous RGB videos. This fact has led to significant interest in the scientific community in jointly exploiting the two data streams for various tasks, including action recognition @cite_6 @cite_34 @cite_47 @cite_50 @cite_45 @cite_42 @cite_5 . For instance, a restricted graph-based genetic programming approach was proposed by Liu and Shao @cite_45 for the fusion of the Depth and the RGB data streams for improved RGB-D data based classification. In another approach, @cite_42 proposed to learn heterogeneous features for RGB-D video based action recognition. They proposed a joint heterogeneous features learning model (JOULE) to take advantage of both shared and action-specific components in the RGB-D videos. Kong and Fu @cite_5 also projected and compressed both Depth and RGB features to a shared feature space, in which the decision boundaries are learned for classification.
{ "cite_N": [ "@cite_42", "@cite_6", "@cite_45", "@cite_50", "@cite_5", "@cite_47", "@cite_34" ], "mid": [ "1893516992", "2078157583", "1588788171", "1972283961", "1895914852", "2114216982", "2085900439" ], "abstract": [ "In this paper, we focus on heterogeneous feature learning for RGB-D activity recognition. Considering that features from different channels could share some similar hidden structures, we propose a joint learning model to simultaneously explore the shared and feature-specific components as an instance of heterogenous multi-task learning. The proposed model in an unified framework is capable of: 1) jointly mining a set of subspaces with the same dimensionality to enable the multi-task classifier learning, and 2) meanwhile, quantifying the shared and feature-specific components of features in the subspaces. To efficiently train the joint model, a three-step iterative optimization algorithm is proposed, followed by two inference models. Extensive results on three activity datasets have demonstrated the efficacy of the proposed method. In addition, a novel RGB-D activity dataset focusing on human-object interaction is collected for evaluating the proposed method, which will be made available to the community for RGB-D activity benchmarking and analysis.", "Since the Microsoft Kinect has been released, the usage of marker-less body pose estimation has been enormously eased. Based on 3D skelet al pose information, complex human gestures and actions can be recognised in real time. However, due to errors in tracking or occlusions, the obtained information can be noisy. Since the RGB-D data is available, the 3D or 2D shape of the person can be used instead. However, depending on the viewpoint and the action to recognise, it might present a low discriminative value. In this paper, the combination of body pose estimation and 2D shape, in order to provide additional characteristic value, is considered so as to improve human action recognition. Using efficient feature extraction techniques, skelet al and silhouette-based features are obtained which are low dimensional and can be obtained in real time. These two features are then combined by means of feature fusion. The proposed approach is validated using a state-of-the-art learning method and the MSR Action3D dataset as benchmark. The obtained results show that the fused feature achieves to improve the recognition rates, outperforming state-of-the-art results in recognition rate and robustness.", "Recently, the low-cost Microsoft Kinect sensor, which can capture real-time high-resolution RGB and depth visual information, has attracted increasing attentions for a wide range of applications in computer vision. Existing techniques extract hand-tuned features from the RGB and the depth data separately and heuristically fuse them, which would not fully exploit the complementarity of both data sources. In this paper, we introduce an adaptive learning methodology to automatically extract (holistic) spatio-temporal features, simultaneously fusing the RGB and depth information, from RGB-D video data for visual recognition tasks. We address this as an optimization problem using our proposed restricted graph-based genetic programming (RGGP) approach, in which a group of primitive 3D operators are first randomly assembled as graph-based combinations and then evolved generation by generation by evaluating on a set of RGB-D video samples. Finally the best-performed combination is selected as the (near-)optimal representation for a pre-defined task. 
The proposed method is systematically evaluated on a new hand gesture dataset, SKIG, that we collected ourselves and the public MSR Daily Activity 3D dataset, respectively. Extensive experimental results show that our approach leads to significant advantages compared with state-of-the-art hand-crafted and machine-learned features.", "Recognizing the events and objects in the video sequence are two challenging tasks due to the complex temporal structures and the large appearance variations. In this paper, we propose a 4D human-object interaction model, where the two tasks jointly boost each other. Our human-object interaction is defined in 4D space: i) the co occurrence and geometric constraints of human pose and object in 3D space, ii) the sub-events transition and objects coherence in 1D temporal dimension. We represent the structure of events, sub-events and objects in a hierarchical graph. For an input RGB-depth video, we design a dynamic programming beam search algorithm to: i) segment the video, ii) recognize the events, and iii) detect the objects simultaneously. For evaluation, we built a large-scale multiview 3D event dataset which contains 3815 video sequences and 383,036 RGBD frames captured by the Kinect cameras. The experiment results on this dataset show the effectiveness of our method.", "This paper proposes a novel approach to action recognition from RGB-D cameras, in which depth features and RGB visual features are jointly used. Rich heterogeneous RGB and depth data are effectively compressed and projected to a learned shared space, in order to reduce noise and capture useful information for recognition. Knowledge from various sources can then be shared with others in the learned space to learn cross-modal features. This guides the discovery of valuable information for recognition. To capture complex spatiotemporal structural relationships in visual and depth features, we represent both RGB and depth data in a matrix form. We formulate the recognition task as a low-rank bilinear model composed of row and column parameter matrices. The rank of the model parameter is minimized to build a low-rank classifier, which is beneficial for improving the generalization power. The proposed method is extensively evaluated on two public RGB-D action datasets, and achieves state-of-the-art results. It also shows promising results if RGB or depth data are missing in training or testing procedure.", "We consider the problem of detecting past activities as well as anticipating which activity will happen in the future and how. We start by modeling the rich spatio-temporal relations between human poses and objects (called affordances) using a conditional random field (CRF). However, because of the ambiguity in the temporal segmentation of the sub-activities that constitute an activity, in the past as well as in the future, multiple graph structures are possible. In this paper, we reason about these alternate possibilities by reasoning over multiple possible graph structures. We obtain them by approximating the graph with only additive features, which lends to efficient dynamic programming. Starting with this proposal graph structure, we then design moves to obtain several other likely graph structures. 
We then show that our approach improves the state-of-the-art significantly for detecting past activities as well as for anticipating future activities, on a dataset of 120 activity videos collected from four subjects.", "Microsoft Kinect's output is a multi-modal signal which gives RGB videos, depth sequences and skeleton information simultaneously. Various action recognition techniques focused on different single modalities of the signals and built their classifiers over the features extracted from one of these channels. For better recognition performance, it's desirable to fuse these multi-modal information into an integrated set of discriminative features. Most of current fusion methods merged heterogeneous features in a holistic manner and ignored the complementary properties of these modalities in finer levels. In this paper, we proposed a new hierarchical bag-of-words feature fusion technique based on multi-view structured sparsity learning to fuse atomic features from RGB and skeletons for the task of action recognition." ] }
1709.05087
2754444390
In video-based action recognition, viewpoint variations often pose major challenges because the same actions can appear different from different views. We use the complementary RGB and Depth information from the RGB-D cameras to address this problem. The proposed technique capitalizes on the spatio-temporal information available in the two data streams to extract action features that are largely insensitive to the viewpoint variations. We use the RGB data to compute dense trajectories that are translated to viewpoint insensitive deep features under a non-linear knowledge transfer model. Similarly, the Depth stream is used to extract CNN-based view invariant features on which Fourier Temporal Pyramid is computed to incorporate the temporal information. The heterogeneous features from the two streams are combined and used as a dictionary to predict the label of the test samples. To that end, we propose a sparse-dense collaborative representation classification scheme that strikes a balance between the discriminative abilities of the dense and the sparse representations of the samples over the extracted heterogeneous dictionary.
Whereas one of the major advantages of Depth videos is the easy availability of information useful for viewpoint invariant action recognition, none of the aforementioned approaches directly addresses this problem. Moreover, the Depth and the RGB video frames are mainly combined in those approaches by either projecting them to a common feature space @cite_42 @cite_41 or by using the same filtering-pooling operation for both modalities @cite_45 . We empirically verified that, on the action recognition datasets used here, which involve multiple camera views, these techniques achieve no more than a 4% improvement. On the other hand, the technique proposed in this work achieves up to a 7.7% improvement. The strength of our approach resides in processing the RGB and the Depth streams individually, to fully capitalize on their individual characteristics, and then fusing the two modalities at a later stage of the pipeline. This strategy has proven much more beneficial than combining the two data streams earlier in the processing.
{ "cite_N": [ "@cite_41", "@cite_42", "@cite_45" ], "mid": [ "", "1893516992", "1588788171" ], "abstract": [ "", "In this paper, we focus on heterogeneous feature learning for RGB-D activity recognition. Considering that features from different channels could share some similar hidden structures, we propose a joint learning model to simultaneously explore the shared and feature-specific components as an instance of heterogenous multi-task learning. The proposed model in an unified framework is capable of: 1) jointly mining a set of subspaces with the same dimensionality to enable the multi-task classifier learning, and 2) meanwhile, quantifying the shared and feature-specific components of features in the subspaces. To efficiently train the joint model, a three-step iterative optimization algorithm is proposed, followed by two inference models. Extensive results on three activity datasets have demonstrated the efficacy of the proposed method. In addition, a novel RGB-D activity dataset focusing on human-object interaction is collected for evaluating the proposed method, which will be made available to the community for RGB-D activity benchmarking and analysis.", "Recently, the low-cost Microsoft Kinect sensor, which can capture real-time high-resolution RGB and depth visual information, has attracted increasing attentions for a wide range of applications in computer vision. Existing techniques extract hand-tuned features from the RGB and the depth data separately and heuristically fuse them, which would not fully exploit the complementarity of both data sources. In this paper, we introduce an adaptive learning methodology to automatically extract (holistic) spatio-temporal features, simultaneously fusing the RGB and depth information, from RGB-D video data for visual recognition tasks. We address this as an optimization problem using our proposed restricted graph-based genetic programming (RGGP) approach, in which a group of primitive 3D operators are first randomly assembled as graph-based combinations and then evolved generation by generation by evaluating on a set of RGB-D video samples. Finally the best-performed combination is selected as the (near-)optimal representation for a pre-defined task. The proposed method is systematically evaluated on a new hand gesture dataset, SKIG, that we collected ourselves and the public MSR Daily Activity 3D dataset, respectively. Extensive experimental results show that our approach leads to significant advantages compared with state-of-the-art hand-crafted and machine-learned features." ] }
1709.05095
2754349512
Properties expressed as the provability of a first-order sentence can be disproved by just finding a model of the negation of the sentence. This fact, however, is meaningful in restricted cases only, depending on the shape of the sentence and the class of systems at stake. In this paper we show that a number of interesting properties of rewriting-based systems can be investigated in this way, including infeasibility and non-joinability of critical pairs in (conditional) rewriting, non-loopingness of conditional rewrite systems, or the secure access to protected pages of a web site modeled as an order-sorted rewrite theory. Interestingly, this uniform, semantic approach succeeds when specific techniques developed to deal with the aforementioned problems fail.
Other approaches, like the ITP tool @cite_3 , work similarly: the tool can be used to verify such properties, which are actually special versions of the Herbrand model of the underlying theory. Then, one may face similar decidability problems as discussed for .
{ "cite_N": [ "@cite_3" ], "mid": [ "2122891736" ], "abstract": [ "We present a tutorial of the ITP tool, a rewriting-based theorem prover that can be used to prove inductive properties of membership equational specifications. We also introduce membership equational logic as a formal language particularly ad- equate for specifying and verifying semantic data structures, such as ordered lists, binary search trees, priority queues, and powerlists. The ITP tool is a Maude program that makes extensive use of the reflective capabilities of this system. In fact, rewriting- based proof simplification steps are directly executed by the powerful underlying Maude rewriting engine. The ITP tool is currently available as a web-based application that includes a module editor, a formula editor, and a command editor. These editors allow users to create and modify their specifications, to formalize properties about them, and to guide their proofs by filling and submitting web forms." ] }
1709.05050
2755907759
According to a report online, more than 200 million unique users search for jobs online every month. This incredibly large and fast growing demand has enticed software giants such as Google and Facebook to enter this space, which was previously dominated by companies such as LinkedIn, Indeed and CareerBuilder. Recently, Google released their "AI-powered Jobs Search Engine", "Google For Jobs" while Facebook released "Facebook Jobs" within their platform. These current job search engines and platforms allow users to search for jobs based on general narrow filters such as job title, date posted, experience level, company and salary. However, they have severely limited filters relating to skill sets such as C++, Python, and Java and company related attributes such as employee size, revenue, technographics and micro-industries. These specialized filters can help applicants and companies connect at a very personalized, relevant and deeper level. In this paper we present a framework that provides an end-to-end "Data-driven Jobs Search Engine". In addition, users can also receive potential contacts of recruiters and senior positions for connection and networking opportunities. The high level implementation of the framework is described as follows: 1) Collect job postings data in the United States, 2) Extract meaningful tokens from the postings data using ETL pipelines, 3) Normalize the data set to link company names to their specific company websites, 4) Extract and rank the skill sets, 5) Link the company names and websites to their respective company level attributes with the EVERSTRING Company API, 6) Run user-specific search queries on the database to identify relevant job postings and 7) Rank the job search results. This framework offers a highly customizable and highly targeted search experience for end users.
Two widely popular classes of keyword extraction techniques were considered for the extraction/ranking of skill sets in the job postings. One class of keyword extraction/ranking techniques is based on keyword matching or Vector Space models with basic TF-IDF weighting [1]. The TF-IDF weighting is obtained by using only the content of the document itself. Several similarity measures are then used to compare the similarity of two documents based on their feature vectors [2]. The other class of keyword extraction/ranking techniques is based on using context information to improve keyword extraction. Recently, there has been a lot of work on developing machine learning methods that make use of the context in the document [3], @cite_7 . [4] discusses the use of support vector machines for keyword extraction from documents using both the local and the global context. A number of techniques have been developed to use local and global context in keyword extraction [3], [4], [5].
{ "cite_N": [ "@cite_7" ], "mid": [ "2198678892" ], "abstract": [ "Common diculties like the cold-start problem and a lack of sucient information about users due to their limited interactions have been major challenges for most recommender systems (RS). To overcome these challenges and many similar ones that result in low accuracy (precision and recall) recommendations, we propose a novel system that extracts semantically-related search keywords based on the aggregate behavioral data of many users. These semantically-related search keywords can be used to substantially increase the amount of knowledge about a specic user’s interests based upon even a few searches and thus improve the accuracy of the RS. The proposed system is capable of mining aggregate user search logs to discover semantic relationships between key phrases in a manner that is language agnostic, human understandable, and virtually noise-free. These semantically related keywords are obtained by looking at the links between queries of similar users which, we believe, represent a largely untapped source for discovering latent semantic relationships between search terms." ] }
1709.05050
2755907759
According to a report online, more than 200 million unique users search for jobs online every month. This incredibly large and fast growing demand has enticed software giants such as Google and Facebook to enter this space, which was previously dominated by companies such as LinkedIn, Indeed and CareerBuilder. Recently, Google released their "AI-powered Jobs Search Engine", "Google For Jobs" while Facebook released "Facebook Jobs" within their platform. These current job search engines and platforms allow users to search for jobs based on general narrow filters such as job title, date posted, experience level, company and salary. However, they have severely limited filters relating to skill sets such as C++, Python, and Java and company related attributes such as employee size, revenue, technographics and micro-industries. These specialized filters can help applicants and companies connect at a very personalized, relevant and deeper level. In this paper we present a framework that provides an end-to-end "Data-driven Jobs Search Engine". In addition, users can also receive potential contacts of recruiters and senior positions for connection and networking opportunities. The high level implementation of the framework is described as follows: 1) Collect job postings data in the United States, 2) Extract meaningful tokens from the postings data using ETL pipelines, 3) Normalize the data set to link company names to their specific company websites, 4) Extract and rank the skill sets, 5) Link the company names and websites to their respective company level attributes with the EVERSTRING Company API, 6) Run user-specific search queries on the database to identify relevant job postings and 7) Rank the job search results. This framework offers a highly customizable and highly targeted search experience for end users.
We also considered techniques that enhance information retrieval using concepts from semantic analysis, such as ontology based similarity measures [9], [10]. In these approaches, the ontology information is used to find the similarity between words and to find words even if an exact match is not available. Another way in which semantic information is extracted is through Wordnet libraries. Wordnet based approaches have used concepts such as relatedness of words for information retrieval [11]-[14]. It has been demonstrated that informative structured snippets of the job postings can be generated in an unsupervised way to improve the user experience of a job search engine. The author of @cite_4 , in his book [15], describes the various uses of search engines in information retrieval. Recent works [16] have shown the use of encyclopedic knowledge for information retrieval. [17] describes the use of Google distance to find concept similarity. Google distance based approaches have been used in various applications such as relevant information extraction [21], [18], keyword prediction [19], and tag filtering [20]. Ranking a given set of keywords against a given collection of documents is a closely related problem.
{ "cite_N": [ "@cite_4" ], "mid": [ "2148212498" ], "abstract": [ "KEY BENEFIT: Written by a leader in the field of information retrieval, this text provides the background and tools needed to evaluate, compare and modify search engines. KEY TOPICS: Coverage of the underlying IR and mathematical models reinforce key concepts. Numerous programming exercises make extensive use of Galago, a Java-based open source search engine. MARKET: A valuable tool for search engine and information retrieval professionals." ] }
1709.05185
2754695199
Our understanding of the world depends highly on our capacity to produce intuitive and simplified representations which can be easily used to solve problems. We reproduce this simplification process using a neural network to build a low dimensional state representation of the world from images acquired by a robot. As in 2015, we learn in an unsupervised way using prior knowledge about the world as loss functions called robotic priors and extend this approach to higher-dimensional, richer images to learn a 3D representation of the hand position of a robot from RGB images. We propose a quantitative evaluation of the learned representation using nearest neighbors in the state space that allows us to assess its quality and show both the potential and limitations of robotic priors in realistic environments. We augment image size, add distractors and domain randomization, all crucial components to achieve transfer learning to real robots. Finally, we also contribute a new prior to improve the robustness of the representation. The applications of such low dimensional state representation range from easing reinforcement learning (RL) and knowledge transfer across tasks, to facilitating learning from raw data with more efficient and compact high level representations. The results show that the robotic prior approach is able to extract a high level representation such as the 3D position of an arm and organize it into a compact and coherent space of states in a challenging dataset.
The goal of state representation learning is to find a mapping from a set of observations to a set of states that makes it possible to describe an environment with enough information, for example, to fulfill a given objective. State representation learning can thus be viewed as searching for a small set of hidden parameters that explain the observations. In our approach we impose a dimension on the state, and use the priors to guide the neural network in learning task-specific state representations in this given dimension. This is an alternative to selecting a state representation from a set ( @cite_11 , @cite_12 ) or creating an autoencoder to compress information into a lower dimensional state ( @cite_18 , @cite_5 , @cite_3 ).
{ "cite_N": [ "@cite_18", "@cite_3", "@cite_5", "@cite_12", "@cite_11" ], "mid": [ "1968962398", "2567455162", "2210483910", "1678414046", "2151620419" ], "abstract": [ "We propose a learning architecture, that is able to do reinforcement learning based on raw visual input data. In contrast to previous approaches, not only the control policy is learned. In order to be successful, the system must also autonomously learn, how to extract relevant information out of a high-dimensional stream of input information, for which the semantics are not provided to the learning system. We give a first proof-of-concept of this novel learning architecture on a challenging benchmark, namely visual control of a racing slot car. The resulting policy, learned only by success or failure, is hardly beaten by an experienced human player.", "For many tasks, tactile or visual feedback is helpful or even crucial. However, designing controllers that take such high-dimensional feedback into account is non-trivial. Therefore, robots should be able to learn tactile skills through trial and error by using reinforcement learning algorithms. The input domain for such tasks, however, might include strongly correlated or non-relevant dimensions, making it hard to specify a suitable metric on such domains. Auto-encoders specialize in finding compact representations, where defining such a metric is likely to be easier. Therefore, we propose a reinforcement learning algorithm that can learn non-linear policies in continuous state spaces, which leverages representations learned using auto-encoders. We first evaluate this method on a simulated toy-task with visual input. Then, we validate our approach on a real-robot tactile stabilization task.", "Reinforcement learning provides a powerful and flexible framework for automated acquisition of robotic motion skills. However, applying reinforcement learning requires a sufficiently detailed representation of the state, including the configuration of task-relevant objects. We present an approach that automates state-space construction by learning a state representation directly from camera images. Our method uses a deep spatial autoencoder to acquire a set of feature points that describe the environment for the current task, such as the positions of objects, and then learns a motion skill with these feature points using an efficient reinforcement learning method based on local linear models. The resulting controller reacts continuously to the learned feature points, allowing the robot to dynamically manipulate objects in the world with closed-loop control. We demonstrate our method with a PR2 robot on tasks that include pushing a free-standing toy block, picking up a bag of rice using a spatula, and hanging a loop of rope on a hook at various positions. In each task, our method automatically learns to track task-relevant objects and manipulate their configuration with the robot's arm.", "We present an algorithm for selecting an appropriate abstraction when learning a new skill. We show empirically that it can consistently select an appropriate abstraction using very little sample data, and that it significantly improves skill learning performance in a reasonably large real-valued reinforcement learning domain.", "This article addresses reinforcement learning problems based on factored Markov decision processes MDPs in which the agent must choose among a set of candidate abstractions, each build up from a different combination of state components. 
We present and evaluate a new approach that can perform effective abstraction selection that is more resource-efficient and/or more general than existing approaches. The core of the approach is to make selection of an abstraction part of the learning agent's decision-making process by augmenting the agent's action space with internal actions that select the abstraction it uses. We prove that under certain conditions this approach results in a derived MDP whose solution yields both the optimal abstraction for the original MDP and the optimal policy under that abstraction. We examine our approach in three domains of increasing complexity: contextual bandit problems, episodic MDPs, and general MDPs with context-specific structure. © 2013 Wiley Periodicals, Inc." ] }
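A minimal sketch of the "impose a dimension on the state" idea from the related-work paragraph above: the encoder's last layer fixes the state size, and the priors (rather than a decoder and a reconstruction loss) would supply the training signal. The image and layer sizes are assumptions:

import torch
import torch.nn as nn

STATE_DIM = 3  # assumed state dimensionality, e.g. a 3D hand position

encoder = nn.Sequential(          # observation -> low-dimensional state
    nn.Linear(64 * 64 * 3, 256), nn.ReLU(),
    nn.Linear(256, STATE_DIM),
)
obs = torch.rand(8, 64 * 64 * 3)  # batch of flattened 64x64 RGB observations
states = encoder(obs)             # shape (8, STATE_DIM)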
1709.05185
2754695199
Our understanding of the world depends highly on our capacity to produce intuitive and simplified representations which can be easily used to solve problems. We reproduce this simplification process using a neural network to build a low dimensional state representation of the world from images acquired by a robot. As in 2015, we learn in an unsupervised way using prior knowledge about the world as loss functions called robotic priors and extend this approach to higher-dimensional, richer images to learn a 3D representation of the hand position of a robot from RGB images. We propose a quantitative evaluation of the learned representation using nearest neighbors in the state space that allows us to assess its quality and show both the potential and limitations of robotic priors in realistic environments. We augment image size, add distractors and domain randomization, all crucial components to achieve transfer learning to real robots. Finally, we also contribute a new prior to improve the robustness of the representation. The applications of such low dimensional state representation range from easing reinforcement learning (RL) and knowledge transfer across tasks, to facilitating learning from raw data with more efficient and compact high level representations. The results show that the robotic prior approach is able to extract a high level representation such as the 3D position of an arm and organize it into a compact and coherent space of states in a challenging dataset.
An important aspect of our approach compared to other state representation works is the use of representation constraints based on both physics and a given task, which are exploited to find relevant information instead of trying to encode all available information. This characteristic bears some similarity with approaches such as @cite_1 , which learns states that follow a linear dynamic, or the approaches of @cite_3 , @cite_22 , @cite_6 , which learn states that make it possible to reconstruct the next observation with models such as PSRs (predictive state representations) @cite_16 . However, optimizing reconstruction is often a weak criterion for learning state representations, as the learning process may focus on reconstructing the most visible features and ignore small but relevant parts of the observations.
{ "cite_N": [ "@cite_22", "@cite_1", "@cite_3", "@cite_6", "@cite_16" ], "mid": [ "2951751411", "2963430173", "2567455162", "", "1540337045" ], "abstract": [ "Training deep feature hierarchies to solve supervised learning tasks has achieved state of the art performance on many problems in computer vision. However, a principled way in which to train such hierarchies in the unsupervised setting has remained elusive. In this work we suggest a new architecture and loss for training deep feature hierarchies that linearize the transformations observed in unlabeled natural video sequences. This is done by training a generative model to predict video frames. We also address the problem of inherent uncertainty in prediction by introducing latent variables that are non-deterministic functions of the input into the network architecture.", "We introduce Embed to Control (E2C), a method for model learning and control of non-linear dynamical systems from raw pixel images. E2C consists of a deep generative model, belonging to the family of variational autoencoders, that learns to generate image trajectories from a latent space in which the dynamics is constrained to be locally linear. Our model is derived directly from an optimal control formulation in latent space, supports long-term prediction of image sequences and exhibits strong performance on a variety of complex control problems.", "For many tasks, tactile or visual feedback is helpful or even crucial. However, designing controllers that take such high-dimensional feedback into account is non-trivial. Therefore, robots should be able to learn tactile skills through trial and error by using reinforcement learning algorithms. The input domain for such tasks, however, might include strongly correlated or non-relevant dimensions, making it hard to specify a suitable metric on such domains. Auto-encoders specialize in finding compact representations, where defining such a metric is likely to be easier. Therefore, we propose a reinforcement learning algorithm that can learn non-linear policies in continuous state spaces, which leverages representations learned using auto-encoders. We first evaluate this method on a simulated toy-task with visual input. Then, we validate our approach on a real-robot tactile stabilization task.", "", "Modeling dynamical systems, both for control purposes and to make predictions about their behavior, is ubiquitous in science and engineering. Predictive state representations (PSRs) are a recently introduced class of models for discrete-time dynamical systems. The key idea behind PSRs and the closely related OOMs (Jaeger's observable operator models) is to represent the state of the system as a set of predictions of observable outcomes of experiments one can do in the system. This makes PSRs rather different from history-based models such as nth-order Markov models and hidden-state-based models such as HMMs and POMDPs. We introduce an interesting construct, the system-dynamics matrix, and show how PSRs can be derived simply from it. We also use this construct to show formally that PSRs are more general than both nth-order Markov models and HMMs POMDPs. Finally, we discuss the main difference between PSRs and OOMs and conclude with directions for future work." ] }
1709.05185
2754695199
Our understanding of the world depends highly on our capacity to produce intuitive and simplified representations which can be easily used to solve problems. We reproduce this simplification process using a neural network to build a low dimensional state representation of the world from images acquired by a robot. As in 2015, we learn in an unsupervised way using prior knowledge about the world as loss functions called robotic priors and extend this approach to higher-dimensional, richer images to learn a 3D representation of the hand position of a robot from RGB images. We propose a quantitative evaluation of the learned representation using nearest neighbors in the state space that allows us to assess its quality and show both the potential and limitations of robotic priors in realistic environments. We augment image size, add distractors and domain randomization, all crucial components to achieve transfer learning to real robots. Finally, we also contribute a new prior to improve the robustness of the representation. The applications of such low dimensional state representation range from easing reinforcement learning (RL) and knowledge transfer across tasks, to facilitating learning from raw data with more efficient and compact high level representations. The results show that the robotic prior approach is able to extract a high level representation such as the 3D position of an arm and organize it into a compact and coherent space of states in a challenging dataset.
Several approaches rely on neural networks with an autoencoder or variational autoencoder architecture ( @cite_11 , @cite_12 , @cite_18 , @cite_5 ). However, in our approach, the priors are used as loss functions that encode constraints between states, a configuration that we address using Siamese networks (e.g., @cite_13 , @cite_23 , @cite_7 ), which use two (or more) copies of a network with tied weights to process two (or more) inputs whose relation has to be imposed. This strategy constructs a coherent space of representations in which the state representations are learned jointly, each depending on the others.
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_23", "@cite_5", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "1968962398", "2338684808", "2117154949", "2210483910", "", "1678414046", "2151620419" ], "abstract": [ "We propose a learning architecture, that is able to do reinforcement learning based on raw visual input data. In contrast to previous approaches, not only the control policy is learned. In order to be successful, the system must also autonomously learn, how to extract relevant information out of a high-dimensional stream of input information, for which the semantics are not provided to the learning system. We give a first proof-of-concept of this novel learning architecture on a challenging benchmark, namely visual control of a racing slot car. The resulting policy, learned only by success or failure, is hardly beaten by an experienced human player.", "What is the right supervisory signal to train visual representations? Current approaches in computer vision use category labels from datasets such as ImageNet to train ConvNets. However, in case of biological agents, visual representation learning does not require millions of semantic labels. We argue that biological agents use physical interactions with the world to learn visual representations unlike current vision systems which just use passive observations (images and videos downloaded from web). For example, babies push objects, poke them, put them in their mouth and throw them to learn representations. Towards this goal, we build one of the first systems on a Baxter platform that pushes, pokes, grasps and observes objects in a tabletop environment. It uses four different types of physical interactions to collect more than 130K datapoints, with each datapoint providing supervision to a shared ConvNet architecture allowing us to learn visual representations. We show the quality of learned representations by observing neuron activations and performing nearest neighbor retrieval on this learned representation. Quantitatively, we evaluate our learned ConvNet on image classification tasks and show improvements compared to learning without external data. Finally, on the task of instance retrieval, our network outperforms the ImageNet network on recall@1 by 3", "Many algorithms rely critically on being given a good metric over their inputs. For instance, data can often be clustered in many \"plausible\" ways, and if a clustering algorithm such as K-means initially fails to find one that is meaningful to a user, the only recourse may be for the user to manually tweak the metric until sufficiently good clusters are found. For these and other applications requiring good metrics, it is desirable that we provide a more systematic way for users to indicate what they consider \"similar.\" For instance, we may ask them to provide examples. In this paper, we present an algorithm that, given examples of similar (and, if desired, dissimilar) pairs of points in ℝn, learns a distance metric over ℝn that respects these relationships. Our method is based on posing metric learning as a convex optimization problem, which allows us to give efficient, local-optima-free algorithms. We also demonstrate empirically that the learned metrics can be used to significantly improve clustering performance.", "Reinforcement learning provides a powerful and flexible framework for automated acquisition of robotic motion skills. 
However, applying reinforcement learning requires a sufficiently detailed representation of the state, including the configuration of task-relevant objects. We present an approach that automates state-space construction by learning a state representation directly from camera images. Our method uses a deep spatial autoencoder to acquire a set of feature points that describe the environment for the current task, such as the positions of objects, and then learns a motion skill with these feature points using an efficient reinforcement learning method based on local linear models. The resulting controller reacts continuously to the learned feature points, allowing the robot to dynamically manipulate objects in the world with closed-loop control. We demonstrate our method with a PR2 robot on tasks that include pushing a free-standing toy block, picking up a bag of rice using a spatula, and hanging a loop of rope on a hook at various positions. In each task, our method automatically learns to track task-relevant objects and manipulate their configuration with the robot's arm.", "", "We present an algorithm for selecting an appropriate abstraction when learning a new skill. We show empirically that it can consistently select an appropriate abstraction using very little sample data, and that it significantly improves skill learning performance in a reasonably large real-valued reinforcement learning domain.", "This article addresses reinforcement learning problems based on factored Markov decision processes (MDPs) in which the agent must choose among a set of candidate abstractions, each built up from a different combination of state components. We present and evaluate a new approach that can perform effective abstraction selection that is more resource-efficient and/or more general than existing approaches. The core of the approach is to make selection of an abstraction part of the learning agent's decision-making process by augmenting the agent's action space with internal actions that select the abstraction it uses. We prove that under certain conditions this approach results in a derived MDP whose solution yields both the optimal abstraction for the original MDP and the optimal policy under that abstraction. We examine our approach in three domains of increasing complexity: contextual bandit problems, episodic MDPs, and general MDPs with context-specific structure. © 2013 Wiley Periodicals, Inc." ] }
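A minimal Siamese setup matching the description in the related-work paragraph above: a single encoder (hence tied weights) applied to both inputs, with a pair loss imposing the desired relation between the outputs. The encoder architecture here is a placeholder:

import torch
import torch.nn as nn

class Siamese(nn.Module):
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder          # one copy of the network = tied weights

    def forward(self, x1, x2):
        return self.encoder(x1), self.encoder(x2)

encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 3))
net = Siamese(encoder)
s1, s2 = net(torch.rand(8, 128), torch.rand(8, 128))
pair_loss = (s1 - s2).pow(2).sum(dim=1).mean()  # e.g. a similarity constraint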
1709.05185
2754695199
Our understanding of the world depends strongly on our capacity to produce intuitive and simplified representations that can be easily used to solve problems. We reproduce this simplification process with a neural network that builds a low-dimensional state representation of the world from images acquired by a robot. As in prior work, we learn in an unsupervised way using prior knowledge about the world in the form of loss functions called robotic priors, and we extend this approach to higher-dimensional, richer images to learn a 3D representation of a robot's hand position from RGB images. We propose a quantitative evaluation of the learned representation using nearest neighbors in the state space, which allows us to assess its quality and shows both the potential and the limitations of robotic priors in realistic environments. We increase the image size and add distractors and domain randomization, all crucial components for achieving transfer learning to real robots. Finally, we also contribute a new prior that improves the robustness of the representation. The applications of such a low-dimensional state representation range from easing reinforcement learning (RL) and knowledge transfer across tasks to facilitating learning from raw data with more efficient and compact high-level representations. The results show that the robotic prior approach can extract high-level representations, such as the 3D position of an arm, and organize them into a compact and coherent state space on a challenging dataset.
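The robotic priors mentioned in this abstract enter training as loss terms over the learned state sequence. The following is a minimal sketch of two commonly used priors (temporal coherence and causality), assuming PyTorch, a single trajectory of learned states, and precomputed index pairs; loss weighting and pair mining, which the cited approach also needs, are omitted.

```python
import torch

def temporal_coherence_loss(states):
    """Prior: the world changes slowly, so consecutive learned states
    should be close. states: (T, D) tensor along one trajectory."""
    deltas = states[1:] - states[:-1]
    return (deltas.norm(dim=1) ** 2).mean()

def causality_loss(states, idx_a, idx_b):
    """Prior: pairs of time steps (idx_a[i], idx_b[i]) where the same
    action produced different rewards should map to dissimilar states."""
    dist_sq = (states[idx_a] - states[idx_b]).norm(dim=1) ** 2
    return torch.exp(-dist_sq).mean()
```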
We follow the common approach of taking convolutional networks pre-trained on large image datasets and fine-tuning them on a robotic task. We use the ResNet18 network @cite_0 with additional fully connected layers that constrain the output to be low-dimensional.
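A minimal sketch of the described architecture, assuming PyTorch/torchvision; the hidden width of 128 and the 3-dimensional output are illustrative assumptions, not the authors' exact configuration.

```python
import torch.nn as nn
from torchvision import models

def make_low_dim_encoder(state_dim=3):
    """Pre-trained ResNet18 whose classification head is replaced by
    fully connected layers that squeeze the output to `state_dim`."""
    net = models.resnet18(pretrained=True)  # ImageNet weights
    net.fc = nn.Sequential(
        nn.Linear(net.fc.in_features, 128),
        nn.ReLU(),
        nn.Linear(128, state_dim),  # low-dimensional state representation
    )
    return net
```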
{ "cite_N": [ "@cite_0" ], "mid": [ "2949650786" ], "abstract": [ "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation." ] }
1709.05256
2754700116
Face detection has achieved great success using region-based methods. In this report, we propose a region-based face detector that applies deep networks in a fully convolutional fashion, named Face R-FCN. Built on Region-based Fully Convolutional Networks (R-FCN), our face detector is more accurate and computationally efficient than previous R-CNN based face detectors. In our approach, we adopt a fully convolutional Residual Network (ResNet) as the backbone network. In particular, we exploit several new techniques, including position-sensitive average pooling, multi-scale training and testing, and online hard example mining, to improve detection accuracy. On the two most popular and challenging face detection benchmarks, FDDB and WIDER FACE, Face R-FCN achieves superior performance over the state of the art.
In the past decades, face detection has been extensively studied. The pioneering work of Viola and Jones @cite_3 introduced a cascaded AdaBoost face detector using Haar-like features. After that, numerous works @cite_28 @cite_33 @cite_21 focused on developing more advanced features and more powerful classifiers. Besides the boosted cascade methods, several studies @cite_30 @cite_14 @cite_4 apply deformable part models (DPM) to face detection: they define a face as a collection of deformable parts and train a classifier to find these parts and model their spatial relationships.
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_4", "@cite_33", "@cite_28", "@cite_21", "@cite_3" ], "mid": [ "1818102884", "", "", "2041497292", "2153461700", "2099355420", "2137401668" ], "abstract": [ "We present a face detection algorithm based on Deformable Part Models and deep pyramidal features. The proposed method called DP2MFD is able to detect faces of various sizes and poses in unconstrained conditions. It reduces the gap in training and testing of DPM on deep features by adding a normalization layer to the deep convolutional neural network (CNN). Extensive experiments on four publicly available unconstrained face detection datasets show that our method is able to capture the meaningful structure of faces and performs significantly better than many competitive face detection algorithms.", "", "", "Face detection has drawn much attention in recent decades since the seminal work by Viola and Jones. While many subsequences have improved the work with more powerful learning algorithms, the feature representation used for face detection still can’t meet the demand for effectively and efficiently handling faces with large appearance variance in the wild. To solve this bottleneck, we borrow the concept of channel features to the face detection domain, which extends the image channel to diverse types like gradient magnitude and oriented gradient histograms and therefore encodes rich information in a simple form. We adopt a novel variant called aggregate channel features, make a full exploration of feature design, and discover a multiscale version of features with better performance. To deal with poses of faces in the wild, we propose a multi-view detection approach featuring score re-ranking and detection adjustment. Following the learning pipelines in ViolaJones framework, the multi-view face detector using aggregate channel features surpasses current state-of-the-art detectors on AFW and FDDB testsets, while runs at 42 FPS", "The integral image is typically used for fast integrating a function over a rectangular region in an image. We propose a method that extends the integral image to do fast integration over the interior of any polygon that is not necessarily rectilinear. The integration time of the method is fast, independent of the image resolution, and only linear to the polygon's number of vertices. We apply the method to Viola and Jones' object detection framework, in which we propose to improve classical Haar-like features with polygonal Haar-like features. We show that the extended feature set improves object detection's performance. The experiments are conducted in three domains: frontal face detection, fixed-pose hand detection, and rock detection for Mars' surface terrain assessment.", "We integrate the cascade-of-rejectors approach with the Histograms of Oriented Gradients (HoG) features to achieve a fast and accurate human detection system. The features used in our system are HoGs of variable-size blocks that capture salient features of humans automatically. Using AdaBoost for feature selection, we identify the appropriate set of blocks, from a large set of possible blocks. In our system, we use the integral image representation and a rejection cascade which significantly speed up the computation. 
For a 320 × 280 image, the system can process 5 to 30 frames per second depending on the density in which we scan the image, while maintaining an accuracy level similar to existing methods.", "This paper describes a face detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a simple and efficient classifier which is built using the AdaBoost learning algorithm (Freund and Schapire, 1995) to select a small number of critical visual features from a very large set of potential features. The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions. A set of experiments in the domain of face detection is presented. The system yields face detection performance comparable to the best previous systems (Sung and Poggio, 1998; , 1998; Schneiderman and Kanade, 2000; , 2000). Implemented on a conventional desktop, face detection proceeds at 15 frames per second." ] }
1709.05305
2754247709
Effective models of social dialog must understand a broad range of rhetorical and figurative devices. Rhetorical questions (RQs) are a type of figurative language whose aim is to achieve a pragmatic goal, such as structuring an argument, being persuasive, emphasizing a point, or being ironic. While there are computational models for other forms of figurative language, rhetorical questions have received little attention to date. We expand a small dataset from previous work, presenting a corpus of 10,270 RQs from debate forums and Twitter that represent different discourse functions. We show that we can clearly distinguish between RQs and sincere questions (0.76 F1). We then show that RQs can be used both sarcastically and non-sarcastically, observing that non-sarcastic (other) uses of RQs are frequently argumentative in forums, and persuasive in tweets. We present experiments to distinguish between these uses of RQs using SVM and LSTM models that represent linguistic features and post-level context, achieving results as high as 0.76 F1 for "sarcastic" and 0.77 F1 for "other" in forums, and 0.83 F1 for both "sarcastic" and "other" in tweets. We supplement our quantitative experiments with an in-depth characterization of the linguistic variation in RQs.
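As a concrete illustration of the LSTM variant mentioned in this abstract, here is a minimal PyTorch sketch of a two-way classifier over integer-encoded tokens. The vocabulary handling, the linguistic features, and the post-level context that the paper's models add are all omitted, and the layer sizes are our own assumptions.

```python
import torch
import torch.nn as nn

class RQClassifier(nn.Module):
    """Minimal LSTM classifier: sarcastic vs. other use of a rhetorical question."""
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):           # (B, T) integer token ids
        h, _ = self.lstm(self.embed(token_ids))
        return self.out(h[:, -1])           # class logits from the final time step
```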
Although it has been observed in the literature that RQs are often used sarcastically @cite_34 @cite_10 , previous work on sarcasm classification has not focused on RQs @cite_1 @cite_22 @cite_0 @cite_24 @cite_13 @cite_11 @cite_8 . One line of prior work investigated the utility of sequential features in tweets, emphasizing a subtype of sarcasm that consists of an expression of positive emotion contrasted with a negative situation, and showed that sequential features performed much better than features that did not capture sequential information. More recent work on sarcasm has focused specifically on sarcasm identification on Twitter using neural network approaches @cite_2 @cite_16 @cite_3 @cite_17 .
{ "cite_N": [ "@cite_13", "@cite_22", "@cite_8", "@cite_1", "@cite_17", "@cite_16", "@cite_3", "@cite_0", "@cite_24", "@cite_2", "@cite_34", "@cite_10", "@cite_11" ], "mid": [ "2250489604", "2250710744", "2099653665", "2263859238", "", "2512532697", "2575367545", "2114661483", "2250204095", "2544767710", "2008217938", "", "" ], "abstract": [ "Sarcasm transforms the polarity of an apparently positive or negative utterance into its opposite. We report on a method for constructing a corpus of sarcastic Twitter messages in which determination of the sarcasm of each message has been made by its author. We use this reliable corpus to compare sarcastic utterances in Twitter to utterances that express positive or negative attitudes without sarcasm. We investigate the impact of lexical and pragmatic factors on machine learning effectiveness for identifying sarcastic utterances and we compare the performance of machine learning techniques and human judges on this task. Perhaps unsurprisingly, neither the human judges nor the machine learning techniques perform very well.", "A common form of sarcasm on Twitter consists of a positive sentiment contrasted with a negative situation. For example, many sarcastic tweets include a positive sentiment, such as “love” or “enjoy”, followed by an expression that describes an undesirable activity or state (e.g., “taking exams” or “being ignored”). We have developed a sarcasm recognizer to identify this type of sarcasm in tweets. We present a novel bootstrapping algorithm that automatically learns lists of positive sentiment phrases and negative situation phrases from sarcastic tweets. We show that identifying contrasting contexts using the phrases learned through bootstrapping yields improved recall for sarcasm recognition.", "Sarcasm is a sophisticated form of speech act widely used in online communities. Automatic recognition of sarcasm is, however, a novel task. Sarcasm recognition could contribute to the performance of review summarization and ranking systems. This paper presents SASI, a novel Semi-supervised Algorithm for Sarcasm Identification that recognizes sarcastic sentences in product reviews. SASI has two stages: semisupervised pattern acquisition, and sarcasm classification. We experimented on a data set of about 66000 Amazon reviews for various books and products. Using a gold standard in which each sentence was tagged by 3 annotators, we obtained precision of 77 and recall of 83.1 for identifying sarcastic sentences. We found some strong features that characterize sarcastic utterances. However, a combination of more subtle pattern-based features proved more promising in identifying the various facets of sarcasm. We also speculate on the motivation for using sarcasm in online communities and social networks.", "Sarcasm requires some shared knowledge between speaker and audience; it is a profoundly contextual phenomenon. Most computational approaches to sarcasm detection, however, treat it as a purely linguistic matter, using information such as lexical cues and their corresponding sentiment as predictive features. 
We show that by including extra-linguistic information from the context of an utterance on Twitter — such as properties of the author, the audience and the immediate communicative environment — we are able to achieve gains in accuracy compared to purely linguistic features in the detection of this complex phenomenon, while also shedding light on features of interpersonal interaction that enable sarcasm in conversation.", "", "Precise semantic representation of a sentence and definitive information extraction are key steps in the accurate processing of sentence meaning, especially for figurative phenomena such as sarcasm, Irony, and metaphor cause literal meanings to be discounted and secondary or extended meanings to be intentionally profiled. Semantic modelling faces a new challenge in social media, because grammatical inaccuracy is commonplace yet many previous state-of-the-art methods exploit grammatical structure. For sarcasm detection over social media content, researchers so far have counted on Bag-of-Words(BOW), N-grams etc. In this paper, we propose a neural network semantic model for the task of sarcasm detection. We also review semantic modelling using Support Vector Machine (SVM) that employs constituency parsetrees fed and labeled with syntactic and semantic information. The proposed neural network model composed of Convolution Neural Network(CNN) and followed by a Long short term memory (LSTM) network and finally a Deep neural network(DNN). The proposed model outperforms state-of-the-art textbased methods for sarcasm detection, yielding an F-score of .92.", "", "To avoid a sarcastic message being understood in its unintended literal meaning, in microtexts such as messages on Twitter.com sarcasm is often explicitly marked with the hashtag ‘#sarcasm’. We collected a training corpus of about 78 thousand Dutch tweets with this hashtag. Assuming that the human labeling is correct (annotation of a sample indicates that about 85 of these tweets are indeed sarcastic), we train a machine learning classifier on the harvested examples, and apply it to a test set of a day’s stream of 3.3 million Dutch tweets. Of the 135 explicitly marked tweets on this day, we detect 101 (75 ) when we remove the hashtag. We annotate the top of the ranked list of tweets most likely to be sarcastic that do not have the explicit hashtag. 30 of the top-250 ranked tweets are indeed sarcastic. Analysis shows that sarcasm is often signalled by hyperbole, using intensifiers and exclamations; in contrast, non-hyperbolic sarcastic messages often receive an explicit marker. We hypothesize that explicit markers such as hashtags are the digital extralinguistic equivalent of nonverbal expressions that people employ in live interaction when conveying sarcasm.", "The ability to reliably identify sarcasm and irony in text can improve the performance of many Natural Language Processing (NLP) systems including summarization, sentiment analysis, etc. The existing sarcasm detection systems have focused on identifying sarcasm on a sentence level or for a specific phrase. However, often it is impossible to identify a sentence containing sarcasm without knowing the context. In this paper we describe a corpus generation experiment where we collect regular and sarcastic Amazon product reviews. We perform qualitative and quantitative analysis of the corpus. 
The resulting corpus can be used for identifying sarcasm on two levels: a document and a text utterance (where a text utterance can be as short as a sentence and as long as a whole document).", "Sarcasm detection is a key task for many natural language processing tasks. In sentiment analysis, for example, sarcasm can flip the polarity of an \"apparently positive\" sentence and, hence, negatively affect polarity detection performance. To date, most approaches to sarcasm detection have treated the task primarily as a text categorization problem. Sarcasm, however, can be expressed in very subtle ways and requires a deeper understanding of natural language that standard text categorization techniques cannot grasp. In this work, we develop models based on a pre-trained convolutional neural network for extracting sentiment, emotion and personality features for sarcasm detection. Such features, along with the network's baseline features, allow the proposed models to outperform the state of the art on benchmark datasets. We also address the often ignored generalizability issue of classifying data that have not been seen by the models at learning phase.", "This article reports the findings of a single study examining irony in talk among friends. Sixty-two 10-min conversations between college students and their friends were recorded and analyzed. Five main types of irony were found: jocularity, sarcasm, hyperbole, rhetorical questions, and understatements. These different forms of ironic language were part of 8 of all conversational turns. Analysis of these utterances revealed varying linguistic and social patterns and suggested several constraints on how and why people achieve ironic meaning. The implications of this conclusion for psychological theories of irony are discussed.", "", "" ] }
1709.05254
2756166446
Learning to detect fraud in large-scale accounting data is one of the long-standing challenges in financial statement audits and fraud investigations. Nowadays, the majority of applied techniques rely on handcrafted rules derived from known fraud scenarios. While fairly successful, these rules exhibit the drawback that they often fail to generalize beyond known fraud scenarios, and fraudsters gradually find ways to circumvent them. To overcome this disadvantage, and inspired by the recent success of deep learning, we propose the application of deep autoencoder neural networks to detect anomalous journal entries. We demonstrate that the trained network's reconstruction error for a journal entry, regularized by the entry's individual attribute probabilities, can be interpreted as a highly adaptive anomaly assessment. Experiments on two real-world datasets of journal entries show the effectiveness of the approach, resulting in high F1-scores of 32.93 (dataset A) and 16.95 (dataset B) and fewer false-positive alerts than state-of-the-art baseline methods. Initial feedback from chartered accountants and fraud examiners underpinned the quality of the approach in capturing highly relevant accounting anomalies.
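A minimal sketch of the core idea, assuming PyTorch and journal entries one-hot encoded into a fixed-length vector; the attribute-probability regularization described in the abstract is omitted, and the layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class JournalAutoencoder(nn.Module):
    """Minimal autoencoder over one-hot encoded journal-entry attributes."""
    def __init__(self, input_dim, bottleneck=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 64), nn.ReLU(),
                                     nn.Linear(64, bottleneck), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(bottleneck, 64), nn.ReLU(),
                                     nn.Linear(64, input_dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_scores(model, entries):
    """Per-entry reconstruction error; large values flag unusual entries."""
    with torch.no_grad():
        recon = model(entries)
    return ((entries - recon) ** 2).mean(dim=1)
```

After training on the full population of journal entries, the entries with the largest scores would be handed to the auditor for review first.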
The forensic analysis of journal entries emerged with the advent of Enterprise Resource Planning (ERP) systems and the increased volume of data recorded by such systems. The work in @cite_16 used Naive Bayes methods to identify suspicious general ledger accounts by evaluating attributes derived from journal entries that measure unusual account activity. This approach was later enhanced by applying link analysis to identify (sub-)groups of high-risk general ledger accounts @cite_41 .
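In the spirit of the Naive Bayes approach of @cite_16, here is a minimal scikit-learn sketch; the binary activity attributes and the random placeholder data are purely hypothetical stand-ins for features derived from real journal entries.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Hypothetical setup: each row describes one general ledger account via
# binary activity attributes; labels mark accounts auditors flagged before.
rng = np.random.default_rng(0)
X_train = rng.integers(0, 2, size=(200, 12))
y_train = rng.integers(0, 2, size=200)

clf = BernoulliNB().fit(X_train, y_train)

X_new = rng.integers(0, 2, size=(5, 12))
risk = clf.predict_proba(X_new)[:, 1]  # probability of the "suspicious" class
ranked = np.argsort(-risk)             # accounts to hand to auditors first
```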
{ "cite_N": [ "@cite_41", "@cite_16" ], "mid": [ "2077233202", "2127005418" ], "abstract": [ "Classifying nodes in networks is a task with a wide range of applications. It can be particularly useful in anomaly and fraud detection. Many resources are invested in the task of fraud detection due to the high cost of fraud, and being able to automatically detect potential fraud quickly and precisely allows human investigators to work more efficiently. Many data analytic schemes have been put into use; however, schemes that bolster link analysis prove promising. This work builds upon the belief propagation algorithm for use in detecting collusion and other fraud schemes. We propose an algorithm called SNARE (Social Network Analysis for Risk Evaluation). By allowing one to use domain knowledge as well as link knowledge, the method was very successful for pinpointing misstated accounts in our sample of general ledger data, with a significant improvement over the default heuristic in true positive rates, and a lift factor of up to 6.5 (more than twice that of the default heuristic). We also apply SNARE to the task of graph labeling in general on publicly-available datasets. We show that with only some information about the nodes themselves in a network, we get surprisingly high accuracy of labels. Not only is SNARE applicable in a wide variety of domains, but it is also robust to the choice of parameters and highly scalable-linearly with the number of edges in a graph.", "In recent years, there have been several large accounting frauds where a company's financial results have been intentionally misrepresented by billions of dollars. In response, regulatory bodies have mandated that auditors perform analytics on detailed financial data with the intent of discovering such misstatements. For a large auditing firm, this may mean analyzing millions of records from thousands of clients. This paper proposes techniques for automatic analysis of company general ledgers on such a large scale, identifying irregularities - which may indicate fraud or just honest errors - for additional review by auditors. These techniques have been implemented in a prototype system, called Sherlock, which combines aspects of both outlier detection and classification. In developing Sherlock, we faced three major challenges: developing an efficient process for obtaining data from many heterogeneous sources, training classifiers with only positive and unlabeled examples, and presenting information to auditors in an easily interpretable manner. In this paper, we describe how we addressed these challenges over the past two years and report on experiments evaluating Sherlock." ] }