Columns: aid (string, 9–15 chars) · mid (string, 7–10 chars) · abstract (string, 78–2.56k chars) · related_work (string, 92–1.77k chars) · ref_abstract (dict)
aid: 1610.04936
mid: 2536457637
Geometric model fitting is a fundamental task in computer graphics and computer vision. However, most geometric model fitting methods are unable to fit an arbitrary geometric model (e.g. a surface with holes) to incomplete data, because the similarity metrics they use cannot measure the rigid partial similarity between arbitrary models. This paper therefore proposes a novel rigid geometric similarity metric, which is able to measure both the full similarity and the partial similarity between arbitrary geometric models. The proposed metric enables us to perform partial procedural geometric model fitting (PPGMF). The task of PPGMF is to search a procedural geometric model space for the model rigidly similar to a query of a non-complete point set. Models in the procedural model space are generated according to a set of parametric modeling rules. A typical query is a point cloud. PPGMF is useful because it can fit arbitrary geometric models to non-complete (incomplete, over-complete or hybrid-complete) point cloud data. For example, most laser scanning data is non-complete due to occlusion. Our PPGMF method uses a Markov chain Monte Carlo (MCMC) technique to optimize the proposed similarity metric over the model space. To accelerate the optimization process, the method also employs a novel coarse-to-fine model dividing strategy to reject dissimilar models in advance. Our method has been demonstrated on a variety of geometric models and non-complete data. Experimental results show that the PPGMF method based on the proposed metric is able to fit non-complete data, whereas methods based on other metrics are not. It is also shown that our method can be accelerated severalfold via early rejection.
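The optimization loop described above (MCMC over a parametric model space, with a coarse-to-fine early-rejection step) can be illustrated on a toy problem. Everything below is an illustrative assumption rather than the paper's actual metric or model space: the "procedural models" are circles parameterized by a single radius, the query is a half-circle point cloud (i.e. incomplete data), and the one-sided distance, subsample stride, and rejection margin are made-up values.

```python
import math
import random

random.seed(0)

# Toy "procedural model space": circles of radius r centred at the origin.
# Query: an incomplete point cloud covering only the upper half of a
# radius-2 circle (mimicking occluded laser-scan data).
points = [(2 * math.cos(i * math.pi / 50), 2 * math.sin(i * math.pi / 50))
          for i in range(50)]

def dissimilarity(r, pts):
    # One-sided (query -> model) mean distance: missing regions of the
    # query do not penalise the model, which loosely sketches the idea
    # of *partial* similarity (not the paper's metric).
    return sum(abs(math.hypot(x, y) - r) for x, y in pts) / len(pts)

def mcmc_fit(pts, iters=3000, step=0.2, temp=0.05):
    r = 1.0                                  # initial model parameter
    d = dissimilarity(r, pts)
    best_r, best_d = r, d
    for _ in range(iters):
        cand = r + random.gauss(0.0, step)
        if cand <= 0.0:
            continue
        # Coarse-to-fine early rejection: score a sparse subsample first
        # and skip the full evaluation if the candidate is clearly worse.
        if dissimilarity(cand, pts[::5]) > d + 0.5:
            continue
        cd = dissimilarity(cand, pts)
        # Metropolis acceptance rule on exp(-d / temp).
        if cd < d or random.random() < math.exp((d - cd) / temp):
            r, d = cand, cd
            if d < best_d:
                best_r, best_d = r, d
    return best_r

fitted = mcmc_fit(points)   # typically settles close to the true radius 2.0
```

Despite seeing only half the circle, the one-sided metric lets the chain settle near the true radius, whereas a symmetric (full-similarity) metric would be biased by the missing half.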
It is worth noting that there are many other types of geometric data reconstruction methods besides GIPM methods, such as @cite_22 @cite_32 @cite_5 @cite_18 @cite_39 @cite_23 @cite_36 @cite_6 @cite_43 @cite_4 . A GIPM method produces procedural models, while the other types of methods usually produce mesh models. Procedural models are more powerful than mesh models @cite_33 , as they capture the abstract causal structure of the input data @cite_12 .
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_22", "@cite_33", "@cite_36", "@cite_32", "@cite_6", "@cite_39", "@cite_43", "@cite_23", "@cite_5", "@cite_12" ], "mid": [ "2052976238", "2511444570", "2087166203", "2148052781", "2033889706", "1995050439", "2149668583", "2041170980", "2028541416", "2152654418", "2050195189", "2194321275" ], "abstract": [ "Given a noisy and incomplete point set, we introduce a method that simultaneously recovers a set of locally fitted primitives along with their global mutual relations. We operate under the assumption that the data corresponds to a man-made engineering object consisting of basic primitives, possibly repeated and globally aligned under common relations. We introduce an algorithm to directly couple the local and global aspects of the problem. The local fit of the model is determined by how well the inferred model agrees to the observed data, while the global relations are iteratively learned and enforced through a constrained optimization. Starting with a set of initial RANSAC based locally fitted primitives, relations across the primitives such as orientation, placement, and equality are progressively learned and conformed to. In each stage, a set of feasible relations are extracted among the candidate relations, and then aligned to, while best fitting to the input data. The global coupling corrects the primitives obtained in the local RANSAC stage, and brings them to precise global alignment. We test the robustness of our algorithm on a range of synthesized and scanned data, with varying amounts of noise, outliers, and non-uniform sampling, and validate the results against ground truth, where available.", "LiDAR scanning has become a prevalent technique for digitalizing large-scale outdoor scenes. 
However, the raw LiDAR data often contain imperfections, e.g., missing large regions, anisotropy of sampling density, and contamination of noise and outliers, which are the major obstacles that hinder its more ambitious and higher level applications in digital city modeling. Observing that 3D urban scenes can be locally described with several low dimensional subspaces, we propose to locally classify the neighborhoods of the scans to model the substructures of the scenes. The key enabler is the adaptive kernel-scale scoring, filtering and clustering of substructures, making it possible to recover the local structures at all points simultaneously, even in the presence of severe data imperfections. Integrating the local analyses leads to robust shape detection from raw LiDAR data. On this basis, we develop several urban scene applications and verify them on a number of LiDAR scans with various complexities and styles, which demonstrates the effectiveness and robustness of our methods.", "Abstract This paper presents an automatic method for reconstruction of building facade models from terrestrial laser scanning data. Important facade elements such as walls and roofs are distinguished as features. Knowledge about the features’ sizes, positions, orientations, and topology is then introduced to recognize these features in a segmented laser point cloud. An outline polygon of each feature is generated by least squares fitting, convex hull fitting or concave polygon fitting, according to the size of the feature. Knowledge is used again to hypothesise the occluded parts from the directly extracted feature polygons. Finally, a polyhedron building model is combined from extracted feature polygons and hypothesised parts. The reconstruction method is tested with two data sets containing various building shapes.", "Urban models are key to navigation, architecture and entertainment. Apart from visualizing facades, a number of tedious tasks remain largely manual (e.g. 
compression, generating new facade designs and structurally comparing facades for classification, retrieval and clustering). We propose a novel procedural modelling method to automatically learn a grammar from a set of facades, generate new facade instances and compare facades. To deal with the difficulty of grammatical inference, we reformulate the problem. Instead of inferring a compromising, one-size-fits-all, single grammar for all tasks, we infer a model whose successive refinements are production rules tailored for each task. We demonstrate our automatic rule inference on datasets of two different architectural styles. Our method supercedes manual expert work and cuts the time required to build a procedural model of a facade from several days to a few milliseconds.", "We present a complete system to semantically decompose and reconstruct 3D models from point clouds. Different than previous urban modeling approaches, our system is designed for residential scenes, which consist of mainly low-rise buildings that do not exhibit the regularity and repetitiveness as high-rise buildings in downtown areas. Our system first automatically labels the input into distinctive categories using supervised learning techniques. Based on the semantic labels, objects in different categories are reconstructed with domain-specific knowledge. In particular, we present a novel building modeling scheme that aims to decompose and fit the building point cloud into basic blocks that are block-wise symmetric and convex. This building representation and its reconstruction algorithm are flexible, efficient, and robust to missing data. 
We demonstrate the effectiveness of our system on various datasets and compare our building modeling scheme with other state-of-the-art reconstruction algorithms to show its advantage in terms of both quality and speed.", "Recent advances in scanning technologies, in particular devices that extract depth through active sensing, allow fast scanning of urban scenes. Such rapid acquisition incurs imperfections: large regions remain missing, significant variation in sampling density is common, and the data is often corrupted with noise and outliers. However, buildings often exhibit large scale repetitions and self-similarities. Detecting, extracting, and utilizing such large scale repetitions provide powerful means to consolidate the imperfect data. Our key observation is that the same geometry, when scanned multiple times over reoccurrences of instances, allow application of a simple yet effective non-local filtering. The multiplicity of the geometry is fused together and projected to a base-geometry defined by clustering corresponding surfaces. Denoising is applied by separating the process into off-plane and in-plane phases. We show that the consolidation of the reoccurrences provides robust denoising and allow reliable completion of missing parts. We present evaluation results of the algorithm on several LiDAR scans of buildings of varying complexity and styles.", "Line segment detection in images is already a well-investigated topic, although it has received considerably less attention in 3D point clouds. Benefiting from current LiDAR devices, large-scale point clouds are becoming increasingly common. Most human-made objects have flat surfaces. Line segments that occur where pairs of planes intersect give important information regarding the geometric content of point clouds, which is especially useful for automatic building reconstruction and segmentation. 
This paper proposes a novel method that is capable of accurately extracting plane intersection line segments from large-scale raw scan points. The 3D line-support region, namely, a point set near a straight linear structure, is extracted simultaneously. The 3D line-support region is fitted by our Line-Segment-Half-Planes (LSHP) structure, which provides a geometric constraint for a line segment, making the line segment more reliable and accurate. We demonstrate our method on the point clouds of large-scale, complex, real-world scenes acquired by LiDAR devices. We also demonstrate the application of 3D line-support regions and their LSHP structures on urban scene abstraction.", "We present a novel and robust method for modeling cities from 3D-point data. Our algorithm provides a more complete description than existing approaches by reconstructing simultaneously buildings, trees and topologically complex grounds. A major contribution of our work is the original way of modeling buildings which guarantees a high generalization level while having semantized and compact representations. Geometric 3D-primitives such as planes, cylinders, spheres or cones describe regular roof sections, and are combined with mesh-patches that represent irregular roof components. The various urban components interact through a non-convex energy minimization problem in which they are propagated under arrangement constraints over a planimetric map. Our approach is experimentally validated on complex buildings and large urban scenes of millions of points, and is compared to state-of-the-art methods.", "With the proliferation of acquisition devices, gathering massive volumes of 3D data is now easy. Processing such large masses of pointclouds, however, remains a challenge. This is particularly a problem for raw scans with missing data, noise, and varying sampling density. In this work, we present a simple, scalable, yet powerful data reconstruction algorithm. 
We focus on reconstruction of man-made scenes as regular arrangements of planes (RAP), thereby selecting both local plane-based approximations along with their global inter-plane relations. We propose a novel selection formulation to directly balance between data fitting and the simplicity of the resulting arrangement of extracted planes. The main technical contribution is a formulation that allows less-dominant orientations to still retain their internal regularity, and not become overwhelmed and regularized by the dominant scene orientations. We evaluate our approach on a variety of complex 2D and 3D pointclouds, and demonstrate the advantages over existing alternative methods.", "We propose a complete framework for the automatic modeling from point cloud data. Initially, the point cloud data are preprocessed into manageable datasets, which are then separated into clusters using a novel two-step, unsupervised clustering algorithm. The boundaries extracted for each cluster are then simplified and refined using a fast energy minimization process. Finally, three-dimensional models are generated based on the roof outlines. The proposed framework has been extensively tested, and the results are reported.", "We introduce an interactive tool which enables a user to quickly assemble an architectural model directly over a 3D point cloud acquired from large-scale scanning of an urban scene. The user loosely defines and manipulates simple building blocks, which we call SmartBoxes, over the point samples. These boxes quickly snap to their proper locations to conform to common architectural structures. The key idea is that the building blocks are smart in the sense that their locations and sizes are automatically adjusted on-the-fly to fit well to the point data, while at the same time respecting contextual relations with nearby similar blocks. 
SmartBoxes are assembled through a discrete optimization to balance between two snapping forces defined respectively by a data-fitting term and a contextual term, which together assist the user in reconstructing the architectural model from a sparse and noisy point cloud. We show that a combination of the user's interactive guidance and high-level knowledge about the semantics of the underlying model, together with the snapping forces, allows the reconstruction of structures which are partially or even completely missing from the input.", "People learning new concepts can often generalize successfully from just a single example, yet machine learning algorithms typically require tens or hundreds of examples to perform with similar accuracy. People can also use learned concepts in richer ways than conventional algorithms—for action, imagination, and explanation. We present a computational model that captures these human learning abilities for a large class of simple visual concepts: handwritten characters from the world’s alphabets. The model represents concepts as simple programs that best explain observed examples under a Bayesian criterion. On a challenging one-shot classification task, the model achieves human-level performance while outperforming recent deep learning approaches. We also present several “visual Turing tests” probing the model’s creative generalization abilities, which in many cases are indistinguishable from human behavior." ] }
aid: 1610.04599
mid: 2537297001
We present a novel distribution-free approach, the data-driven threshold machine (DTM), for a fundamental problem at the core of many learning tasks: choosing a threshold for a given pre-specified level that bounds the tail probability of the maximum of a (possibly dependent but stationary) random sequence. We do not assume a data distribution, but rather rely on the asymptotic distribution of extremal values, reducing the problem to estimating three parameters of the extreme value distributions and the extremal index. We take special care of data dependence by estimating the extremal index, since in many settings, such as scan statistics, change-point detection, and extreme bandits, dependence in the sequence of statistics can be significant. Key features of DTM include robustness and computational efficiency: it requires only one sample path to form a reliable estimate of the threshold, in contrast to the Monte Carlo sampling approach, which requires drawing a large number of sample paths. We demonstrate the good performance of DTM via numerical examples in various dependent settings.
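The extreme-value machinery behind such threshold choices can be illustrated with a plain peaks-over-threshold sketch. This is not the DTM itself (which additionally estimates an extremal index to handle dependence): it fits a Generalized Pareto tail to i.i.d. scores by the method of moments, and the 90% anchor and moment estimators are simplifying assumptions chosen for brevity.

```python
import math
import random

random.seed(1)

def gpd_threshold(scores, p):
    """Peaks-over-threshold sketch: fit a Generalized Pareto Distribution
    (GPD) to the upper tail of `scores` by the method of moments, then
    return the threshold whose exceedance probability is `p`."""
    n = len(scores)
    s = sorted(scores)
    u = s[int(0.9 * n)]                    # anchor at the 90th percentile
    exc = [x - u for x in s if x > u]      # exceedances over the anchor
    m = sum(exc) / len(exc)
    v = sum((e - m) ** 2 for e in exc) / (len(exc) - 1)
    xi = 0.5 * (1.0 - m * m / v)           # moment estimate of the shape
    sigma = 0.5 * m * (m * m / v + 1.0)    # moment estimate of the scale
    ratio = len(exc) / (n * p)
    if abs(xi) < 1e-6:                     # exponential-tail limit
        return u + sigma * math.log(ratio)
    return u + (sigma / xi) * (ratio ** xi - 1.0)

# Sanity check on Exp(1) scores, where the true p = 0.01 threshold
# is -ln(0.01), about 4.61.
scores = [-math.log(random.random()) for _ in range(20000)]
thr = gpd_threshold(scores, 0.01)
```

On the exponential example the fitted shape comes out near zero and the recovered threshold lands close to the analytic value, which is the basic guarantee EVT provides without assuming the score distribution.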
Choosing thresholds using EVT has been studied in @cite_19 ; however, they assume independent samples, so their approach cannot be applied to the settings we consider here, such as scan statistics and change-point detection, where the dependence in the sequence of statistics is very significant. In other settings, EVT has been used to understand the theoretical basis of machine learning algorithms: for recognition score analysis @cite_1 @cite_13 , for novelty detection @cite_8 , and for satellite image analysis @cite_21 .
{ "cite_N": [ "@cite_8", "@cite_21", "@cite_1", "@cite_19", "@cite_13" ], "mid": [ "2170954383", "2143747537", "2123460005", "2170021787", "2100332042" ], "abstract": [ "Extreme Value Theory (EVT) describes the distribution of data considered extreme with respect to some generative distribution, effectively modelling the tails of that distribution. In novelty detection, or one-class classification, we wish to determine if data are “normal” with respect to some model of normality. If that model consists of generative distributions, then EVT is appropriate for describing the behaviour of extremes generated from the model, and can be used to determine the location of decision boundaries that separate “normal” areas of data space from “abnormal” areas in a principled manner. This paper introduces existing work in the use of EVT for novelty detection, shows that existing work does not accurately describe the extrema of multivariate, multimodal generative distributions, and proposes a novel method for overcoming such problems. The method is numerical, and provides optimal solutions for generative multivariate, multimodal distributions of arbitrary complexity. In a companion paper, we present analytical closed-form solutions which are currently limited to unimodal, multivariate generative distributions.", "This article presents a hierarchical classification method for high-resolution satellite imagery incorporating extreme value theory EVT-based normalization to calibrate multiple-feature scores. First, a simple linear iterative clustering algorithm is used to over-segment an image to build a superpixel representation of the scene. Then, each superpixel is characterized by three different visual descriptors. 
Finally, a two-phase classification model is proposed for achieving classification of the scene: (1) in the first phase, a support vector machine (SVM) with histogram intersection kernel is applied to each feature channel to obtain raw soft probability; and (2) in the second phase, the derived soft outputs are multiplied to build a product space for score-level fusion. The fused scores are subsequently further calibrated using the EVT and fed to an L1-regularized L2-loss SVM to obtain the final result. Experimental analysis on high-resolution satellite scenes shows that the proposed method achieves promising classification results and outperforms other competitive methods.", "Recognition problems in computer vision often benefit from a fusion of different algorithms and/or sensors, with score level fusion being among the most widely used fusion approaches. Choosing an appropriate score normalization technique before fusion is a fundamentally difficult problem because of the disparate nature of the underlying distributions of scores for different sources of data. Further complications are introduced when one or more fusion inputs outright fail or have adversarial inputs, which we find in the fields of biometrics and forgery detection. Ideally a score normalization should be robust to model assumptions, modeling errors, and parameter estimation errors, as well as robust to algorithm failure. In this paper, we introduce the w-score, a new technique for robust recognition score normalization. We do not assume a match or non-match distribution, but instead suggest that the top scores of a recognition system's non-match scores follow the statistical Extreme Value Theory, and show how to use that to provide consistent robust normalization with a strong statistical basis.", "Determining a detection threshold to automatically maintain a low false alarm rate is a challenging problem. 
In a number of different applications, the underlying parametric assumptions of most automatic target detection algorithms are invalid. Therefore, thresholds derived using these incorrect distribution assumptions do not produce desirable results when applied to real sensor data. Monte Carlo methods for threshold determination work well but tend to perform poorly when targets are present. In order to mitigate these effects, we propose an algorithm using extreme value theory through the use of the generalized Pareto distribution (GPD) and a Kolmogorov-Smirnov statistical test. Unlike previous work based on GPD estimates, this algorithm incorporates a way to adaptively maintain low false alarm rates in the presence of targets. Both synthetic and real-world detection results demonstrate the usefulness of this algorithm.", "In this paper, we define meta-recognition, a performance prediction method for recognition algorithms, and examine the theoretical basis for its postrecognition score analysis form through the use of the statistical extreme value theory (EVT). The ability to predict the performance of a recognition system based on its outputs for each match instance is desirable for a number of important reasons, including automatic threshold selection for determining matches and nonmatches, and automatic algorithm selection or weighting for multi-algorithm fusion. The emerging body of literature on postrecognition score analysis has been largely constrained to biometrics, where the analysis has been shown to successfully complement or replace image quality metrics as a predictor. We develop a new statistical predictor based upon the Weibull distribution, which produces accurate results on a per instance recognition basis across different recognition problems. Experimental results are provided for two different face recognition algorithms, a fingerprint recognition algorithm, a SIFT-based object recognition system, and a content-based image retrieval system." ] }
aid: 1610.04871
mid: 2949105277
We propose a probabilistic filtering method which fuses joint measurements with depth images to yield a precise, real-time estimate of the end-effector pose in the camera frame. This avoids the need for frame transformations when using it in combination with visual object tracking methods. Precision is achieved by modeling and correcting biases in the joint measurements as well as inaccuracies in the robot model, such as poor extrinsic camera calibration. We make our method computationally efficient through a principled combination of Kalman filtering of the joint measurements and asynchronous depth-image updates based on the Coordinate Particle Filter. We quantitatively evaluate our approach on a dataset recorded from a real robotic platform, annotated with ground truth from a motion capture system. We show that our approach is robust and accurate even under challenging conditions such as fast motion, significant and long-term occlusions, and time-varying biases. We release the dataset along with open-source code of our approach to allow for quantitative comparison with alternative approaches.
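The bias-correction idea above can be reduced to a toy scalar example. This is far simpler than the paper's method (which fuses full joint states with asynchronous depth-image updates via a Coordinate Particle Filter): here a one-dimensional Kalman filter tracks a slowly drifting encoder bias, treating each camera-based angle estimate as a direct, noisy observation of that bias. All noise levels, rates, and the sinusoidal trajectory are made-up simulation values.

```python
import math
import random

random.seed(2)

class BiasFilter:
    """Scalar Kalman filter for a drifting joint-encoder bias."""
    def __init__(self, q=1e-6, r=1e-2):
        self.b = 0.0       # bias estimate
        self.p = 1.0       # estimate variance
        self.q = q         # process noise (bias modelled as a random walk)
        self.r = r         # camera measurement noise variance

    def predict(self):
        # Between camera frames the bias estimate only grows less certain.
        self.p += self.q

    def update(self, encoder, camera):
        z = encoder - camera            # noisy observation of the bias
        k = self.p / (self.p + self.r)  # Kalman gain
        self.b += k * (z - self.b)
        self.p *= 1.0 - k

# Simulation: encoder reading at every step, camera estimate (slower,
# asynchronous) only every 10th step.
true_bias = 0.05
f = BiasFilter()
for t in range(1000):
    angle = 0.5 * math.sin(0.01 * t)                # true joint angle
    enc = angle + true_bias + random.gauss(0, 0.001)
    f.predict()
    if t % 10 == 0:                                 # asynchronous update
        cam = angle + random.gauss(0, 0.1)
        f.update(enc, cam)

corrected = enc - f.b    # bias-corrected encoder reading
```

Because the camera is slower and noisier than the encoder, the filter lets the precise encoder drive the short-term estimate while the camera gradually pins down the bias, mirroring the motivation for fusing the two sensor streams.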
One approach is to calibrate the transformation between hand and eye offline, prior to performing the manipulation task. Commonly, the robot is required to hold and move specific calibration objects in front of its eyes @cite_20 @cite_8 @cite_1 . This can, however, be tedious and time-consuming, and the calibrated parameters may degrade over time, so that the process has to be repeated. Instead, our approach is to continuously track the arm during a manipulation task, which is robust against drifting calibration parameters and against online effects such as cable stretch due to increased load or contact with the environment.
{ "cite_N": [ "@cite_1", "@cite_20", "@cite_8" ], "mid": [ "92914038", "2079716332", "2416872609" ], "abstract": [ "Complex robots with multiple arms and sensors need good calibration to perform precise tasks in unstructured environments. The sensors must be calibrated both to the manipulators and to each other, since fused sensor data is often needed. We propose an extendable framework that combines measurements from the robot’s various sensors (proprioceptive and external) to calibrate the robot’s joint offsets and external sensor locations. Our approach is unique in that it accounts for sensor measurement uncertainties, thereby allowing sensors with very different error characteristics to be used side by side in the calibration. The framework is general enough to handle complex robots with kinematic components, including external sensors on kinematic chains. We validate the framework by implementing it on the Willow Garage PR2 robot, providing a significant improvement in the robot’s calibration.", "Precise kinematic forward models are important for robots to successfully perform dexterous grasping and manipulation tasks, especially when visual servoing is rendered infeasible due to occlusions. A lot of research has been conducted to estimate geometric and non-geometric parameters of kinematic chains to minimize reconstruction errors. However, kinematic chains can include non-linearities, e.g. due to cable stretch and motor-side encoders, that result in significantly different errors for different parts of the state space. Previous work either does not consider such non-linearities or proposes to estimate non-geometric parameters of carefully engineered models that are robot specific. We propose a data-driven approach that learns task error models that account for such unmodeled non-linearities. We argue that in the context of grasping and manipulation, it is sufficient to achieve high accuracy in the task relevant state space. 
We identify this relevant state space using previously executed joint configurations and learn error corrections for those. Therefore, our system is developed to generate subsequent executions that are similar to previous ones. The experiments show that our method successfully captures the non-linearities in the head kinematic chain (due to a counterbalancing spring) and the arm kinematic chains (due to cable stretch) of the considered experimental platform, see Fig. 1. The feasibility of the presented error learning approach has also been evaluated in independent DARPA ARM-S testing contributing to successfully complete 67 out of 72 grasping and manipulation tasks.", "We present a novel on-line approach for extrinsic robot-camera calibration, a process often referred to as hand-eye calibration, that uses object pose estimates from a real-time model-based tracking approach. While off-line calibration has seen much progress recently due to the incorporation of bundle adjustment techniques, on-line calibration still remains a largely open problem. Since we update the calibration in each frame, the improvements can be incorporated immediately in the pose estimation itself to facilitate object tracking. Our method does not require the camera to observe the robot or to have markers at certain fixed locations on the robot. To comply with a limited computational budget, it maintains a fixed size configuration set of samples. This set is updated each frame in order to maximize an observability criterion. We show that a set of size 20 is sufficient in real-world scenarios with static and actuated cameras. With this set size, only 100 microseconds are required to update the calibration in each frame, and we typically achieve accurate robot-camera calibration in 10 to 20 seconds. Together, these characteristics enable the incorporation of calibration in normal task execution." ] }
aid: 1610.04871
mid: 2949105277
We propose a probabilistic filtering method which fuses joint measurements with depth images to yield a precise, real-time estimate of the end-effector pose in the camera frame. This avoids the need for frame transformations when using it in combination with visual object tracking methods. Precision is achieved by modeling and correcting biases in the joint measurements as well as inaccuracies in the robot model, such as poor extrinsic camera calibration. We make our method computationally efficient through a principled combination of Kalman filtering of the joint measurements and asynchronous depth-image updates based on the Coordinate Particle Filter. We quantitatively evaluate our approach on a dataset recorded from a real robotic platform, annotated with ground truth from a motion capture system. We show that our approach is robust and accurate even under challenging conditions such as fast motion, significant and long-term occlusions, and time-varying biases. We release the dataset along with open-source code of our approach to allow for quantitative comparison with alternative approaches.
One possible solution is the use of fiducial markers on specific parts of the robot @cite_17 . Markers have the advantage of being easy to detect, but the disadvantages of limiting the arm configurations to those that keep the markers in view, and of requiring their positions relative to the robot's kinematic chain to be precisely known. As an alternative to markers, different 2D visual cues can be used to track the entire arm, at the cost of higher computational demand, e.g. texture, gradients or silhouettes, which are also often used in general object tracking @cite_9 @cite_14 @cite_15 @cite_19 . Instead, we choose a depth camera as the visual sensor, which readily provides geometric information while being less dependent on illumination effects.
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_19", "@cite_15", "@cite_17" ], "mid": [ "2112725185", "2017161545", "2101199297", "2166654914", "1985382409" ], "abstract": [ "Describes a synergistic integration of a grasping simulator and a real-time visual tracking system, that work in concert to (1) find an object's pose, (2) plan grasps and movement trajectories, and (3) visually monitor task execution. Starting with a CAD model of an object to be grasped, the system can find the object's pose through vision which then synchronizes the state of the robot workcell with an online, model-based grasp planning and visualization system we have developed called GraspIt. GraspIt can then plan a stable grasp for the object, and direct the robotic hand system to perform the grasp. It can also generate trajectories for the movement of the grasped object, which are used by the visual control system to monitor the task and compare the actual grasp and trajectory with the planned ones. We present experimental results using typical grasping tasks.", "We propose a combined approach for 3D real-time object recognition and tracking, which is directly applicable to robotic manipulation. We use keypoints features for the initial pose estimation. This pose estimate serves as an initial estimate for edge-based tracking. The combination of these two complementary methods provides an efficient and robust tracking solution. The main contributions of this paper includes: 1) While most of the RAPiD style tracking methods have used simplified CAD models or at least manually well designed models, our system can handle any form of polygon mesh model. To achieve the generality of object shapes, salient edges are automatically identified during an offline stage. Dull edges usually invisible in images are maintained as well for the cases when they constitute the object boundaries. 
2) Our system provides a fully automatic recognition and tracking solution, unlike most of the previous edge-based tracking that require a manual pose initialization scheme. Since the edge-based tracking sometimes drift because of edge ambiguity, the proposed system monitors the tracking results and occasionally re-initialize when the tracking results are inconsistent. Experimental results demonstrate our system's efficiency as well as robustness.", "We present a method for real-time 3D object instance detection that does not require a time-consuming training stage, and can handle untextured objects. At its core, our approach is a novel image representation for template matching designed to be robust to small image transformations. This robustness is based on spread image gradient orientations and allows us to test only a small subset of all possible pixel locations when parsing the image, and to represent a 3D object with a limited set of templates. In addition, we demonstrate that if a dense depth sensor is available we can extend our approach for an even better performance also taking 3D surface normal orientations into account. We show how to take advantage of the architecture of modern computers to build an efficient but very discriminant representation of the input images that can be used to consider thousands of templates in real time. We demonstrate in many experiments on real data that our method is much faster and more robust with respect to background clutter than current state-of-the-art methods.", "We study visual servoing in a framework of detection and grasping of unknown objects. Classically, visual servoing has been used for applications where the object to be servoed on is known to the robot prior to the task execution. In addition, most of the methods concentrate on aligning the robot hand with the object without grasping it. 
In our work, visual servoing techniques are used as building blocks in a system capable of detecting and grasping unknown objects in natural scenes. We show how different visual servoing techniques facilitate a complete grasping cycle.", "In this work we present a visual servoing approach that enables a humanoid robot to robustly execute dual arm grasping and manipulation tasks. Therefore the target object(s) and both hands are tracked alternately and a combined open- closed-loop controller is used for positioning the hands with respect to the target(s). We address the perception system and how the observable workspace can be increased by using an active vision system on a humanoid head. Furthermore a control framework for reactive positioning of both hands using position based visual servoing is presented, where the sensor data streams coming from the vision system, the joint encoders and the force torque sensors are fused and joint velocity values are generated. This framework can be used for bimanual grasping as well as for two handed manipulations which is demonstrated with the humanoid robot Armar-III that executes grasping and manipulation tasks in a kitchen environment." ] }
1610.04871
2949105277
We propose a probabilistic filtering method which fuses joint measurements with depth images to yield a precise, real-time estimate of the end-effector pose in the camera frame. This avoids the need for frame transformations when using it in combination with visual object tracking methods. Precision is achieved by modeling and correcting biases in the joint measurements as well as inaccuracies in the robot model, such as poor extrinsic camera calibration. We make our method computationally efficient through a principled combination of Kalman filtering of the joint measurements and asynchronous depth-image updates based on the Coordinate Particle Filter. We quantitatively evaluate our approach on a dataset recorded from a real robotic platform, annotated with ground truth from a motion capture system. We show that our approach is robust and accurate even under challenging conditions such as fast motion, significant and long-term occlusions, and time-varying biases. We release the dataset along with open-source code of our approach to allow for quantitative comparison with alternative approaches.
Pauwels et al. @cite_4 consider the problem of simultaneously estimating the poses of multiple objects and of the robot arm while it interacts with them, all relative to a freely moving RGB-D camera. The objective is defined by the agreement between a number of visual cues (depth, motion) when comparing observations with images rendered from the state. Instead of estimating the robot joint configuration, the authors consider the arm as a rigid object posed by the encoder values, which are assumed to be precise. To cope with the remaining error in the model, the robot base and end-effector are treated as separate objects, with the kinematics enforced through soft constraints.
{ "cite_N": [ "@cite_4" ], "mid": [ "2084313563" ], "abstract": [ "We introduce a real-time system for recognizing and tracking the position and orientation of a large number of complex real-world objects, together with an articulated robotic manipulator operating upon them. The proposed system is fast, accurate and reliable and yet does not require precise camera calibration. The key to this high level of performance is a continuously-refined internal 3D representation of all the relevant scene elements. Occlusions are handled implicitly in this approach and a soft-constraint mechanism is used to obtain the highest precision at a specific region-of-interest. The system is well-suited for implementation on Graphics Processing Units and thanks to a tight integration of the latter's graphical and computational capability, scene updates can be obtained at framerates exceeding 40 Hz. We demonstrate the robustness and accuracy of this system on a complex real-world manipulation task involving active endpoint closed-loop visual servo control in the presence of both camera and target object motion." ] }
1610.04871
2949105277
We propose a probabilistic filtering method which fuses joint measurements with depth images to yield a precise, real-time estimate of the end-effector pose in the camera frame. This avoids the need for frame transformations when using it in combination with visual object tracking methods. Precision is achieved by modeling and correcting biases in the joint measurements as well as inaccuracies in the robot model, such as poor extrinsic camera calibration. We make our method computationally efficient through a principled combination of Kalman filtering of the joint measurements and asynchronous depth-image updates based on the Coordinate Particle Filter. We quantitatively evaluate our approach on a dataset recorded from a real robotic platform, annotated with ground truth from a motion capture system. We show that our approach is robust and accurate even under challenging conditions such as fast motion, significant and long-term occlusions, and time-varying biases. We release the dataset along with open-source code of our approach to allow for quantitative comparison with alternative approaches.
Hebert et al. @cite_5 estimate the pose of the object relative to the end-effector, as well as the offset between the end-effector pose according to forward kinematics and that observed in visual data. They consider a multitude of cues from stereo and RGB data, such as markers on the hand, the silhouettes of object and arm, and 3D point clouds. For optimization they employ articulated ICP, similar to @cite_18, or 2D signed distance transforms, similar to @cite_15. An Unscented Kalman Filter fuses the results.
{ "cite_N": [ "@cite_5", "@cite_18", "@cite_15" ], "mid": [ "2143320034", "2346963205", "2166654914" ], "abstract": [ "This paper develops an estimation framework for sensor-guided manipulation of a rigid object via a robot arm. Using an unscented Kalman Filter (UKF), the method combines dense range information (from stereo cameras and 3D ranging sensors) as well as visual appearance features and silhouettes of the object and manipulator to track both an object-fixed frame location as well as a manipulator tool or palm frame location. If available, tactile data is also incorporated. By using these different imaging sensors and different imaging properties, we can leverage the advantages of each sensor and each feature type to realize more accurate and robust object and reference frame tracking. The method is demonstrated using the DARPA ARM-S system, consisting of a Barrett™WAM manipulator.", "", "We study visual servoing in a framework of detection and grasping of unknown objects. Classically, visual servoing has been used for applications where the object to be servoed on is known to the robot prior to the task execution. In addition, most of the methods concentrate on aligning the robot hand with the object without grasping it. In our work, visual servoing techniques are used as building blocks in a system capable of detecting and grasping unknown objects in natural scenes. We show how different visual servoing techniques facilitate a complete grasping cycle." ] }
1610.04871
2949105277
We propose a probabilistic filtering method which fuses joint measurements with depth images to yield a precise, real-time estimate of the end-effector pose in the camera frame. This avoids the need for frame transformations when using it in combination with visual object tracking methods. Precision is achieved by modeling and correcting biases in the joint measurements as well as inaccuracies in the robot model, such as poor extrinsic camera calibration. We make our method computationally efficient through a principled combination of Kalman filtering of the joint measurements and asynchronous depth-image updates based on the Coordinate Particle Filter. We quantitatively evaluate our approach on a dataset recorded from a real robotic platform, annotated with ground truth from a motion capture system. We show that our approach is robust and accurate even under challenging conditions such as fast motion, significant and long-term occlusions, and time-varying biases. We release the dataset along with open-source code of our approach to allow for quantitative comparison with alternative approaches.
The method we propose extends our previous work on visual tracking based on a model of raw depth images @cite_12 @cite_11 . The resulting measurement models are highly nonlinear and non-Gaussian, for which particle filters are well suited.
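Why particle filters suit such likelihoods can be illustrated with a generic bootstrap particle filter on a 1-D random-walk state. This is a minimal sketch under assumed models, not the depth-image likelihood of @cite_12 ; the Cauchy-shaped likelihood and all parameter values are illustrative:

```python
import numpy as np

def bootstrap_pf(observations, n_particles=2000, process_std=0.5, seed=0):
    """Minimal bootstrap particle filter for a 1-D random-walk state.

    The heavy-tailed (Cauchy-shaped) likelihood stands in for a
    non-Gaussian observation model; a Kalman filter would require
    a Gaussian one and would be derailed by gross outliers.
    """
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 3.0, n_particles)  # broad prior over the state
    estimates = []
    for y in observations:
        # Predict: propagate each particle through the random-walk process model.
        particles = particles + rng.normal(0.0, process_std, n_particles)
        # Update: weight particles by the (unnormalized) measurement likelihood.
        weights = 1.0 / (1.0 + (y - particles) ** 2)
        weights /= weights.sum()
        # Resample: draw a new particle set proportional to the weights.
        particles = particles[rng.choice(n_particles, n_particles, p=weights)]
        estimates.append(particles.mean())
    return np.array(estimates)

# Track a constant latent state (3.0) despite periodic gross outliers.
obs = np.full(40, 3.0)
obs[::10] = 30.0  # outlier spikes, e.g. from occlusion artifacts
est = bootstrap_pf(obs)
```

Because the heavy-tailed likelihood assigns nearly equal (small) weight to all particles when an outlier arrives, the resampling step leaves the particle set almost unchanged and the estimate stays near the true state.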
{ "cite_N": [ "@cite_12", "@cite_11" ], "mid": [ "2012804028", "1797490870" ], "abstract": [ "We address the problem of tracking the 6-DoF pose of an object while it is being manipulated by a human or a robot. We use a dynamic Bayesian network to perform inference and compute a posterior distribution over the current object pose. Depending on whether a robot or a human manipulates the object, we employ a process model with or without knowledge of control inputs. Observations are obtained from a range camera. As opposed to previous object tracking methods, we explicitly model self-occlusions and occlusions from the environment, e.g, the human or robotic hand. This leads to a strongly non-linear observation model and additional dependencies in the Bayesian network. We employ a Rao-Blackwellised particle filter to compute an estimate of the object pose at every time step. In a set of experiments, we demonstrate the ability of our method to accurately and robustly track the object pose in real-time while it is being manipulated by a human or a robot.", "Parametric filters, such as the Extended Kalman Filter and the Unscented Kalman Filter, typically scale well with the dimensionality of the problem, but they are known to fail if the posterior state distribution cannot be closely approximated by a density of the assumed parametric form." ] }
1610.04871
2949105277
We propose a probabilistic filtering method which fuses joint measurements with depth images to yield a precise, real-time estimate of the end-effector pose in the camera frame. This avoids the need for frame transformations when using it in combination with visual object tracking methods. Precision is achieved by modeling and correcting biases in the joint measurements as well as inaccuracies in the robot model, such as poor extrinsic camera calibration. We make our method computationally efficient through a principled combination of Kalman filtering of the joint measurements and asynchronous depth-image updates based on the Coordinate Particle Filter. We quantitatively evaluate our approach on a dataset recorded from a real robotic platform, annotated with ground truth from a motion capture system. We show that our approach is robust and accurate even under challenging conditions such as fast motion, significant and long-term occlusions, and time-varying biases. We release the dataset along with open-source code of our approach to allow for quantitative comparison with alternative approaches.
In @cite_12 , we propose a method to track the @math -DoF pose of a rigid object given its shape. Our image model explicitly takes into account occlusions due to any other objects, which are common in real-world manipulation scenarios. The structure of our formulation makes it suitable to use a Rao-Blackwellized particle filter @cite_16 , which in combination with the factorization of the image likelihood over pixels yields a computationally efficient algorithm.
{ "cite_N": [ "@cite_16", "@cite_12" ], "mid": [ "2149020252", "2012804028" ], "abstract": [ "Particle filters (PFs) are powerful sampling-based inference learning algorithms for dynamic Bayesian networks (DBNs). They allow us to treat, in a principled way, any type of probability distribution, nonlinearity and non-stationarity. They have appeared in several fields under such names as \"condensation\", \"sequential Monte Carlo\" and \"survival of the fittest\". In this paper, we show how we can exploit the structure of the DBN to increase the efficiency of particle filtering, using a technique known as Rao-Blackwellisation. Essentially, this samples some of the variables, and marginalizes out the rest exactly, using the Kalman filter, HMM filter, junction tree algorithm, or any other finite dimensional optimal filter. We show that Rao-Blackwellised particle filters (RBPFs) lead to more accurate estimates than standard PFs. We demonstrate RBPFs on two problems, namely non-stationary online regression with radial basis function networks and robot localization and map building. We also discuss other potential application areas and provide references to some finite dimensional optimal filters.", "We address the problem of tracking the 6-DoF pose of an object while it is being manipulated by a human or a robot. We use a dynamic Bayesian network to perform inference and compute a posterior distribution over the current object pose. Depending on whether a robot or a human manipulates the object, we employ a process model with or without knowledge of control inputs. Observations are obtained from a range camera. As opposed to previous object tracking methods, we explicitly model self-occlusions and occlusions from the environment, e.g, the human or robotic hand. This leads to a strongly non-linear observation model and additional dependencies in the Bayesian network. We employ a Rao-Blackwellised particle filter to compute an estimate of the object pose at every time step. 
In a set of experiments, we demonstrate the ability of our method to accurately and robustly track the object pose in real-time while it is being manipulated by a human or a robot." ] }
1610.04871
2949105277
We propose a probabilistic filtering method which fuses joint measurements with depth images to yield a precise, real-time estimate of the end-effector pose in the camera frame. This avoids the need for frame transformations when using it in combination with visual object tracking methods. Precision is achieved by modeling and correcting biases in the joint measurements as well as inaccuracies in the robot model, such as poor extrinsic camera calibration. We make our method computationally efficient through a principled combination of Kalman filtering of the joint measurements and asynchronous depth-image updates based on the Coordinate Particle Filter. We quantitatively evaluate our approach on a dataset recorded from a real robotic platform, annotated with ground truth from a motion capture system. We show that our approach is robust and accurate even under challenging conditions such as fast motion, significant and long-term occlusions, and time-varying biases. We release the dataset along with open-source code of our approach to allow for quantitative comparison with alternative approaches.
This model can be extended to articulated objects given knowledge of the object's kinematics and shape. The additional difficulty is that joint configurations can be quite high-dimensional ( @math ). In @cite_11 , we propose a method which alleviates this issue by sampling dimension-wise, and show an application to robot arm tracking.
{ "cite_N": [ "@cite_11" ], "mid": [ "1797490870" ], "abstract": [ "Parametric filters, such as the Extended Kalman Filter and the Unscented Kalman Filter, typically scale well with the dimensionality of the problem, but they are known to fail if the posterior state distribution cannot be closely approximated by a density of the assumed parametric form." ] }
1610.04871
2949105277
We propose a probabilistic filtering method which fuses joint measurements with depth images to yield a precise, real-time estimate of the end-effector pose in the camera frame. This avoids the need for frame transformations when using it in combination with visual object tracking methods. Precision is achieved by modeling and correcting biases in the joint measurements as well as inaccuracies in the robot model, such as poor extrinsic camera calibration. We make our method computationally efficient through a principled combination of Kalman filtering of the joint measurements and asynchronous depth-image updates based on the Coordinate Particle Filter. We quantitatively evaluate our approach on a dataset recorded from a real robotic platform, annotated with ground truth from a motion capture system. We show that our approach is robust and accurate even under challenging conditions such as fast motion, significant and long-term occlusions, and time-varying biases. We release the dataset along with open-source code of our approach to allow for quantitative comparison with alternative approaches.
In this paper we take the latter method further by combining the visual data with measurements from the joint encoders. Due to the complementary nature of these two sensors, fusing them allows for much more robust tracking than using either one alone. In addition to estimating joint angles and/or biases as in @cite_11 @cite_18 @cite_0 @cite_5 @cite_2 , we simultaneously estimate the true camera pose relative to the kinematic chain of the robot. Further, unlike most of the approaches mentioned, we process all joint measurements as they arrive rather than limiting ourselves to the image rate, and we account for the delay in the images. A crucial difference to related work is that we model not only self-occlusion of the arm, but also occlusions due to external objects. Although we use fewer input cues than other methods @cite_4 @cite_0 @cite_5 , we already achieve remarkable robustness and accuracy in robot arm tracking. The dataset we propose in this paper will enable quantitative comparison with alternative approaches.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_0", "@cite_2", "@cite_5", "@cite_11" ], "mid": [ "2346963205", "2084313563", "2102393714", "2076593519", "2143320034", "1797490870" ], "abstract": [ "", "We introduce a real-time system for recognizing and tracking the position and orientation of a large number of complex real-world objects, together with an articulated robotic manipulator operating upon them. The proposed system is fast, accurate and reliable and yet does not require precise camera calibration. The key to this high level of performance is a continuously-refined internal 3D representation of all the relevant scene elements. Occlusions are handled implicitly in this approach and a soft-constraint mechanism is used to obtain the highest precision at a specific region-of-interest. The system is well-suited for implementation on Graphics Processing Units and thanks to a tight integration of the latter's graphical and computational capability, scene updates can be obtained at framerates exceeding 40 Hz. We demonstrate the robustness and accuracy of this system on a complex real-world manipulation task involving active endpoint closed-loop visual servo control in the presence of both camera and target object motion.", "This work integrates visual and physical constraints to perform real-time depth-only tracking of articulated objects, with a focus on tracking a robot's manipulators and manipulation targets in realistic scenarios. As such, we extend DART, an existing visual articulated object tracker, to additionally avoid interpenetration of multiple interacting objects, and to make use of contact information collected via torque sensors or touch sensors. To achieve greater stability, the tracker uses a switching model to detect when an object is stationary relative to the table or relative to the palm and then uses information from multiple frames to converge to an accurate and stable estimate. 
Deviation from stable states is detected in order to remain robust to failed grasps and dropped objects. The tracker is integrated into a shared autonomy system in which it provides state estimates used by a grasp planner and the controller of two anthropomorphic hands. We demonstrate the advantages and performance of the tracking system in simulation and on a real robot. Qualitative results are also provided for a number of challenging manipulations that are made possible by the speed, accuracy, and stability of the tracking system.", "Recognizing and manipulating objects is an important task for mobile robots performing useful services in everyday environments. While existing techniques for object recognition related to manipulation provide very good results even for noisy and incomplete data, they are typically trained using data generated in an offline process. As a result, they do not enable a robot to acquire new object models as it operates in an environment. In this paper we develop an approach to building 3D models of unknown objects based on a depth camera observing the robot's hand while moving an object. The approach integrates both shape and appearance information into an articulated Iterative Closest Point approach to track the robot's manipulator and the object. Objects are modeled by sets of surfels, which are small patches providing occlusion and appearance information. Experiments show that our approach provides very good 3D models even when the object is highly symmetric and lacks visual features and the manipulator motion is noisy. Autonomous object modeling represents a step toward improved semantic understanding, which will eventually enable robots to reason about their environments in terms of objects and their relations rather than through raw sensor data.", "This paper develops an estimation framework for sensor-guided manipulation of a rigid object via a robot arm. 
Using an unscented Kalman Filter (UKF), the method combines dense range information (from stereo cameras and 3D ranging sensors) as well as visual appearance features and silhouettes of the object and manipulator to track both an object-fixed frame location as well as a manipulator tool or palm frame location. If available, tactile data is also incorporated. By using these different imaging sensors and different imaging properties, we can leverage the advantages of each sensor and each feature type to realize more accurate and robust object and reference frame tracking. The method is demonstrated using the DARPA ARM-S system, consisting of a Barrett™WAM manipulator.", "Parametric filters, such as the Extended Kalman Filter and the Unscented Kalman Filter, typically scale well with the dimensionality of the problem, but they are known to fail if the posterior state distribution cannot be closely approximated by a density of the assumed parametric form." ] }
1610.04563
2533641151
Machine learning models are vulnerable to adversarial examples formed by applying small carefully chosen perturbations to inputs that cause unexpected classification errors. In this paper, we perform experiments on various adversarial example generation approaches with multiple deep convolutional neural networks including Residual Networks, the best performing models on ImageNet Large-Scale Visual Recognition Challenge 2015. We compare the adversarial example generation techniques with respect to the quality of the produced images, and measure the robustness of the tested machine learning models to adversarial examples. Finally, we conduct large-scale experiments on cross-model adversarial portability. We find that adversarial examples are mostly transferable across similar network topologies, and we demonstrate that better machine learning models are less vulnerable to adversarial examples.
Adversarial examples in deep neural networks were discovered by @cite_4 . The authors demonstrated the first method to reliably find these perturbations: a box-constrained optimization technique (L-BFGS) that relies on internal network state. However, the computationally expensive L-BFGS optimization makes this method impractical. Furthermore, the authors performed experiments on a few networks and datasets and concluded that a relatively large fraction of the adversarial examples were misclassified by different networks trained with varying hyperparameters or on disjoint training sets.
{ "cite_N": [ "@cite_4" ], "mid": [ "2964153729" ], "abstract": [ "Abstract: Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains of the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extend. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input." ] }
1610.04563
2533641151
Machine learning models are vulnerable to adversarial examples formed by applying small carefully chosen perturbations to inputs that cause unexpected classification errors. In this paper, we perform experiments on various adversarial example generation approaches with multiple deep convolutional neural networks including Residual Networks, the best performing models on ImageNet Large-Scale Visual Recognition Challenge 2015. We compare the adversarial example generation techniques with respect to the quality of the produced images, and measure the robustness of the tested machine learning models to adversarial examples. Finally, we conduct large-scale experiments on cross-model adversarial portability. We find that adversarial examples are mostly transferable across similar network topologies, and we demonstrate that better machine learning models are less vulnerable to adversarial examples.
@cite_18 presented the simpler and computationally cheaper fast gradient sign (FGS) algorithm for adversarial example generation, and also demonstrated that machine learning models can benefit from training on these perturbed inputs. FGS uses the sign of the gradient of the loss with respect to the input image to form adversarial perturbations. FGS is computationally efficient, as the gradient can be calculated effectively using backpropagation, and the generated perturbations cause unexpected classification errors in various machine learning models. Building upon FGS by using the raw gradient of the loss instead of its sign, @cite_11 formalized the fast gradient value (FGV) method and introduced the hot/cold (HC) approach, which can efficiently produce multiple adversarial examples for each input.
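The FGS construction can be written out for a toy logistic-regression classifier, where the input gradient has a closed form. The model, weights, input, and epsilon below are illustrative assumptions, not values from the cited work; deep networks obtain the same gradient via backpropagation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgs(x, w, b, y_true, eps):
    """Fast gradient sign: x_adv = clip(x + eps * sign(dL/dx)).

    For logistic regression with cross-entropy loss L and p = sigmoid(w.x + b),
    the gradient of the loss w.r.t. the input is dL/dx = (p - y_true) * w.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad = (p - y_true) * w
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Illustrative toy classifier and input (assumed values).
w = np.array([2.0, -2.0])
b = 0.0
x = np.array([0.8, 0.2])              # correctly classified as class 1
x_adv = fgs(x, w, b, y_true=1.0, eps=0.4)

p_clean = sigmoid(np.dot(w, x) + b)   # > 0.5, i.e. class 1
p_adv = sigmoid(np.dot(w, x_adv) + b) # < 0.5, i.e. prediction flipped
```

Each pixel moves by at most eps, so the perturbation is bounded in the L-infinity norm, yet the prediction flips; the single gradient evaluation is what makes FGS so much cheaper than L-BFGS-based search.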
{ "cite_N": [ "@cite_18", "@cite_11" ], "mid": [ "2963207607", "2963098487" ], "abstract": [ "Abstract: Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.", "State-of-the-art deep neural networks suffer from a fundamental problem – they misclassify adversarial examples formed by applying small perturbations to inputs. In this paper, we present a new psychometric perceptual adversarial similarity score (PASS) measure for quantifying adversarial images, introduce the notion of hard positive generation, and use a diverse set of adversarial perturbations – not just the closest ones – for data augmentation. We introduce a novel hot cold approach for adversarial example generation, which provides multiple possible adversarial perturbations for every single image. The perturbations generated by our novel approach often correspond to semantically meaningful image structures, and allow greater flexibility to scale perturbation-amplitudes, which yields an increased diversity of adversarial images. 
We present adversarial images on several network topologies and datasets, including LeNet on the MNIST dataset, and GoogLeNet and ResidualNet on the ImageNet dataset. Finally, we demonstrate on LeNet and GoogLeNet that fine-tuning with a diverse set of hard positives improves the robustness of these networks compared to training with prior methods of generating adversarial images." ] }
1610.04563
2533641151
Machine learning models are vulnerable to adversarial examples formed by applying small carefully chosen perturbations to inputs that cause unexpected classification errors. In this paper, we perform experiments on various adversarial example generation approaches with multiple deep convolutional neural networks including Residual Networks, the best performing models on ImageNet Large-Scale Visual Recognition Challenge 2015. We compare the adversarial example generation techniques with respect to the quality of the produced images, and measure the robustness of the tested machine learning models to adversarial examples. Finally, we conduct large-scale experiments on cross-model adversarial portability. We find that adversarial examples are mostly transferable across similar network topologies, and we demonstrate that better machine learning models are less vulnerable to adversarial examples.
@cite_6 introduced a method that produces adversarial perturbations by leveraging the mappings between inputs and outputs of neural networks. Their experiments on the MNIST dataset suggest that identifying those mappings is computationally expensive. @cite_23 proposed an approach which generates perturbations, applies them to input samples, and then observes how models respond to the perturbed images. They applied small affine image transformations to form perturbations without utilizing the internal state of the networks. Also, in order to filter out radical perturbations and identify useful ones for retraining, the authors used peer networks as a control mechanism. Although these types of approaches are capable of finding adversarial perturbations, they can be prohibitively expensive.
{ "cite_N": [ "@cite_23", "@cite_6" ], "mid": [ "2247148075", "2180612164" ], "abstract": [ "Much of the recent success of neural networks can be attributed to the deeper architectures that have become prevalent. However, the deeper architectures often yield unintelligible solutions, require enormous amounts of labeled data, and still remain brittle and easily broken. In this paper, we present a method to efficiently and intuitively discover input instances that are misclassified by well-trained neural networks. As in previous studies, we can identify instances that are so similar to previously seen examples such that the transformation is visually imperceptible. Additionally, unlike in previous studies, we can also generate mistakes that are significantly different from any training sample, while, importantly, still remaining in the space of samples that the network should be able to classify correctly. This is achieved by training a basket of N \"peer networks\" rather than a single network. These are similarly trained networks that serve to provide consistency pressure on each other. When an example is found for which a single network, S, disagrees with all of the other @math networks, which are consistent in their prediction, that example is a potential mistake for S. We present a simple method to find such examples and demonstrate it on two visual tasks. The examples discovered yield realistic images that clearly illuminate the weaknesses of the trained models, as well as provide a source of numerous, diverse, labeled-training samples.", "Deep learning takes advantage of large datasets and computationally efficient training algorithms to outperform other approaches at various machine learning tasks. However, imperfections in the training phase of deep neural networks make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. 
In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. In an application to computer vision, we show that our algorithms can reliably produce samples correctly classified by human subjects but misclassified in specific targets by a DNN with a 97 adversarial success rate while only modifying on average 4.02 of the input features per sample. We then evaluate the vulnerability of different sample classes to adversarial perturbations by defining a hardness measure. Finally, we describe preliminary work outlining defenses against adversarial samples by defining a predictive measure of distance between a benign input and a target classification." ] }
1610.04563
2533641151
Machine learning models are vulnerable to adversarial examples formed by applying small carefully chosen perturbations to inputs that cause unexpected classification errors. In this paper, we perform experiments on various adversarial example generation approaches with multiple deep convolutional neural networks including Residual Networks, the best performing models on ImageNet Large-Scale Visual Recognition Challenge 2015. We compare the adversarial example generation techniques with respect to the quality of the produced images, and measure the robustness of the tested machine learning models to adversarial examples. Finally, we conduct large-scale experiments on cross-model adversarial portability. We find that adversarial examples are mostly transferable across similar network topologies, and we demonstrate that better machine learning models are less vulnerable to adversarial examples.
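The black-box strategy described above (perturb the input with small affine transformations and watch how the model responds, without touching its internals) can be sketched as follows. This is a minimal illustration, not the cited method: the translation-only transform family, the `predict` callback, and the scan bounds are all assumptions made for the example.

```python
import numpy as np

def shift(img, dy, dx):
    """Translate a 2-D image by (dy, dx) pixels with zero fill --
    one very simple member of the affine-transformation family."""
    out = np.zeros_like(img)
    h, w = img.shape
    ys = slice(max(dy, 0), h + min(dy, 0))
    xs = slice(max(dx, 0), w + min(dx, 0))
    out[ys, xs] = img[slice(max(-dy, 0), h + min(-dy, 0)),
                      slice(max(-dx, 0), w + min(-dx, 0))]
    return out

def find_transform_mistake(img, predict, true_label, max_shift=2):
    """Scan tiny shifts; return the first transformed input the model
    misclassifies (a candidate adversarial example), or None."""
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            if (dy, dx) == (0, 0):
                continue
            cand = shift(img, dy, dx)
            if predict(cand) != true_label:
                return (dy, dx), cand
    return None
```

Because the search only queries `predict`, it needs no gradients or internal state, which is exactly why such approaches can require many model evaluations.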
@cite_20 demonstrated that adversarial images can be produced by manipulating internal representations to mimic those of targeted images. Their approach also relies on the inefficient L-BFGS algorithm. Moosavi-Dezfooli @cite_24 introduced a method that can produce adversarial perturbations with small L @math or L @math norms, but their approach is also computationally expensive.
{ "cite_N": [ "@cite_24", "@cite_20" ], "mid": [ "2243397390", "2963955657" ], "abstract": [ "State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, well sought, perturbations of the images. Despite the importance of this phenomenon, no effective methods have been proposed to accurately compute the robustness of state-of-the-art deep classifiers to such perturbations on large-scale datasets. In this paper, we fill this gap and propose the DeepFool algorithm to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers. Extensive experimental results show that our approach outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust.1", "Abstract: We show that the representation of an image in a deep neural network (DNN) can be manipulated to mimic those of other natural images, with only minor, imperceptible perturbations to the original image. Previous methods for generating adversarial images focused on image perturbations designed to produce erroneous class labels, while we concentrate on the internal layers of DNN representations. In this way our new class of adversarial images differs qualitatively from others. While the adversary is perceptually similar to one image, its internal representation appears remarkably similar to a different image, one from a different class, bearing little if any apparent similarity to the input; they appear generic and consistent with the space of natural images. This phenomenon raises questions about DNN representations, as well as the properties of natural images themselves." ] }
1610.04563
2533641151
Machine learning models are vulnerable to adversarial examples formed by applying small carefully chosen perturbations to inputs that cause unexpected classification errors. In this paper, we perform experiments on various adversarial example generation approaches with multiple deep convolutional neural networks including Residual Networks, the best performing models on ImageNet Large-Scale Visual Recognition Challenge 2015. We compare the adversarial example generation techniques with respect to the quality of the produced images, and measure the robustness of the tested machine learning models to adversarial examples. Finally, we conduct large-scale experiments on cross-model adversarial portability. We find that adversarial examples are mostly transferable across similar network topologies, and we demonstrate that better machine learning models are less vulnerable to adversarial examples.
Researchers have also tried to assess the severity of the security threat imposed by adversarial examples in the physical world @cite_22 , or even suggested attack strategies that rely on the transferability of such examples @cite_19 @cite_17 . Specifically, @cite_19 @cite_17 suggest a black-box attack that crafts adversarial samples on one model and applies them to the targeted oracle to cause misclassification. The authors report that such attacks can achieve high success rates with samples formed by adversarial perturbations of radically increased magnitude. Since the "informal" definition of adversarial examples requires that only small perturbations be applied to the original inputs, we note that FGS examples with visually perceptible modifications cannot be considered adversarial in nature. Furthermore, recent advances indicate that such noisy samples would probably be rejected by open set deep networks @cite_8 and, depending on the dataset and context, that might eliminate the threat posed by the introduced attack.
{ "cite_N": [ "@cite_19", "@cite_8", "@cite_22", "@cite_17" ], "mid": [ "2408141691", "2963149653", "2460937040", "" ], "abstract": [ "Many machine learning models are vulnerable to adversarial examples: inputs that are specially crafted to cause a machine learning model to produce an incorrect output. Adversarial examples that affect one model often affect another model, even if the two models have different architectures or were trained on different training sets, so long as both models were trained to perform the same task. An attacker may therefore train their own substitute model, craft adversarial examples against the substitute, and transfer them to a victim model, with very little information about the victim. Recent work has further developed a technique that uses the victim model as an oracle to label a synthetic training set for the substitute, so the attacker need not even collect a training set to mount the attack. We extend these recent techniques using reservoir sampling to greatly enhance the efficiency of the training procedure for the substitute model. We introduce new transferability attacks between previously unexplored (substitute, victim) pairs of machine learning model classes, most notably SVMs and decision trees. We demonstrate our attacks on two commercial machine learning classification systems from Amazon (96.19 misclassification rate) and Google (88.94 ) using only 800 queries of the victim model, thereby showing that existing machine learning approaches are in general vulnerable to systematic black-box attacks regardless of their structure.", "Deep networks have produced significant gains for various visual recognition problems, leading to high impact academic and commercial applications. 
Recent work in deep networks highlighted that it is easy to generate images that humans would never classify as a particular object class, yet networks classify such images high confidence as that given class – deep network are easily fooled with images humans do not consider meaningful. The closed set nature of deep networks forces them to choose from one of the known classes leading to such artifacts. Recognition in the real world is open set, i.e. the recognition system should reject unknown unseen classes at test time. We present a methodology to adapt deep networks for open set recognition, by introducing a new model layer, OpenMax, which estimates the probability of an input being from an unknown class. A key element of estimating the unknown probability is adapting Meta-Recognition concepts to the activation patterns in the penultimate layer of the network. Open-Max allows rejection of \"fooling\" and unrelated open set images presented to the system, OpenMax greatly reduces the number of obvious errors made by a deep network. We prove that the OpenMax concept provides bounded open space risk, thereby formally providing an open set recognition solution. We evaluate the resulting open set deep networks using pre-trained networks from the Caffe Model-zoo on ImageNet 2012 validation data, and thousands of fooling and open set images. The proposed OpenMax model significantly outperforms open set recognition accuracy of basic deep networks as well as deep networks with thresholding of SoftMax probabilities.", "Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. 
Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work have assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as an input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.", "" ] }
1610.04025
2532272970
Recently there has been much interest in performing search queries over encrypted data to enable functionality while protecting sensitive data. One particularly efficient mechanism for executing such queries is order-preserving encryption encoding (OPE) which results in ciphertexts that preserve the relative order of the underlying plaintexts thus allowing range and comparison queries to be performed directly on ciphertexts. Recently, (SP 2013) gave the first construction of an ideally-secure OPE scheme and Kerschbaum (CCS 2015) showed how to achieve the even stronger notion of frequency-hiding OPE. However, as (CCS 2015) have recently demonstrated, these constructions remain vulnerable to several attacks. Additionally, all previous ideal OPE schemes (with or without frequency-hiding) either require a large round complexity of O(log n) rounds for each insertion, or a large persistent client storage of size O(n), where n is the number of items in the database. It is thus desirable to achieve a range query scheme addressing both issues gracefully. In this paper, we propose an alternative approach to range queries over encrypted data that is optimized to support insert-heavy workloads as are common in "big data" applications while still maintaining search functionality and achieving stronger security. Specifically, we propose a new primitive called partial order preserving encoding (POPE) that achieves ideal OPE security with frequency hiding and also leaves a sizable fraction of the data pairwise incomparable. Using only O(1) persistent and O(ne) non-persistent client storage for 0
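The fast gradient sign (FGS) examples discussed above come from a single gradient step; the paragraph's point is that as the step size grows, the perturbation becomes visually perceptible. A minimal NumPy sketch on a toy logistic model (the linear weights and step size are illustrative assumptions, not from any cited experiment):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgs_perturb(x, grad, eps):
    """One fast-gradient-sign step: move every feature by eps in the
    direction that increases the loss, then clip to the valid range.
    Larger eps means larger, eventually perceptible, perturbations."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Toy demo on a linear classifier w @ x (illustrative values).
w = np.array([1.0, -1.0])
x = np.array([0.6, 0.4])            # w @ x = 0.2 > 0: classified positive
grad = (sigmoid(w @ x) - 1.0) * w   # d(logistic loss)/dx for true label y = 1
x_adv = fgs_perturb(x, grad, eps=0.3)
```

The per-feature change is bounded by `eps`, which is the knob whose "radically increased magnitude" the paragraph argues pushes examples outside the adversarial regime.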
Order-preserving encryption (OPE) @cite_40 @cite_48 @cite_29 guarantees that @math iff @math . Thus, range queries can be performed directly over ciphertexts in the same way that they would be performed over the plaintext data. However, OPE comes at a security cost. Neither of the original schemes @cite_40 @cite_48 achieves the security goal for OPE of IND-OCPA (indistinguishability under ordered chosen-plaintext attack) @cite_48 , in which ciphertexts reveal no information beyond the order of the plaintexts. In fact, @cite_48 prove that achieving a stateless encryption scheme with this security goal is impossible under reasonable assumptions. The existing schemes instead either lack formal analysis or strive for weaker notions of security, which have been shown to reveal a significant amount of information about the plaintexts @cite_29 . The first scheme to achieve IND-OCPA security was the order-preserving encoding scheme of @cite_17 .
{ "cite_N": [ "@cite_48", "@cite_40", "@cite_29", "@cite_17" ], "mid": [ "1895952394", "2154496743", "2163992091", "2030888439" ], "abstract": [ "We initiate the cryptographic study of order-preserving symmetric encryption (OPE), a primitive suggested in the database community by (SIGMOD '04) for allowing efficient range queries on encrypted data. Interestingly, we first show that a straightforward relaxation of standard security notions for encryption such as indistinguishability against chosen-plaintext attack (IND-CPA) is unachievable by a practical OPE scheme. Instead, we propose a security notion in the spirit of pseudorandom functions (PRFs) and related primitives asking that an OPE scheme look \"as-random-as-possible\" subject to the order-preserving constraint. We then design an efficient OPE scheme and prove its security under our notion based on pseudorandomness of an underlying blockcipher. Our construction is based on a natural relation we uncover between a random order-preserving function and the hypergeometric probability distribution. In particular, it makes black-box use of an efficient sampling algorithm for the latter.", "Encryption is a well established technology for protecting sensitive data. However, once encrypted, data can no longer be easily queried aside from exact matches. We present an order-preserving encryption scheme for numeric data that allows any comparison operation to be directly applied on encrypted data. Query results produced are sound (no false hits) and complete (no false drops). Our scheme handles updates gracefully and new values can be added without requiring changes in the encryption of other values. It allows standard databse indexes to be built over encrypted tables and can easily be integrated with existing database systems. 
The proposed scheme has been designed to be deployed in application environments in which the intruder can get access to the encrypted database, but does not have prior domain information such as the distribution of values and annot encrypt or decrypt arbitrary values of his choice. The encryption is robust against estimation of the true value in such environments.", "We further the study of order-preserving symmetric encryption (OPE), a primitive for allowing efficient range queries on encrypted data, recently initiated (from a cryptographic perspective) by (Eurocrypt'09). First, we address the open problem of characterizing what encryption via a random order-preserving function (ROPF) leaks about underlying data (ROPF being the \"ideal object\" in the security definition, POPF, satisfied by their scheme.) In particular, we show that, for a database of randomly distributed plaintexts and appropriate choice of parameters, ROPF encryption leaks neither the precise value of any plaintext nor the precise distance between any two of them. The analysis here is quite technically non-trivial and introduces useful new techniques. On the other hand, we also show that ROPF encryption does leak both the value of any plaintext as well as the distance between any two plaintexts to within a range of possibilities roughly the square root of the domain size. We then study schemes that are not order-preserving, but which nevertheless allow efficient range queries and achieve security notions stronger than POPF. In a setting where the entire database is known in advance of key-generation (considered in several prior works), we show that recent constructions of \"monotone minimal perfect hash functions\" allow to efficiently achieve (an adaptation of) the notion of IND-O(rdered) CPA also considered by , which asks that only the order relations among the plaintexts is leaked. 
Finally, we introduce modular order-preserving encryption (MOPE), in which the scheme of is prepended with a shift cipher. MOPE improves the security of OPE in a sense, as it does not leak any information about plaintext location. We clarify that our work should not be interpreted as saying the original scheme of , or the variants that we introduce, are \"secure\" or \"insecure.\" Rather, the goal of this line of research is to help practitioners decide whether the options provide a suitable security-functionality tradeoff for a given application.", "Order-preserving encryption - an encryption scheme where the sort order of ciphertexts matches the sort order of the corresponding plaintexts - allows databases and other applications to process queries involving order over encrypted data efficiently. The ideal security guarantee for order-preserving encryption put forth in the literature is for the ciphertexts to reveal no information about the plaintexts besides order. Even though more than a dozen schemes were proposed, all these schemes leak more information than order. This paper presents the first order-preserving scheme that achieves ideal security. Our main technique is mutable ciphertexts, meaning that over time, the ciphertexts for a small number of plaintext values change, and we prove that mutable ciphertexts are needed for ideal security. Our resulting protocol is interactive, with a small number of interactions. We implemented our scheme and evaluated it on microbenchmarks and in the context of an encrypted MySQL database application. We show that in addition to providing ideal security, our scheme achieves 1 - 2 orders of magnitude higher performance than the state-of-the-art order-preserving encryption scheme, which is less secure than our scheme." ] }
1610.04025
2532272970
Recently there has been much interest in performing search queries over encrypted data to enable functionality while protecting sensitive data. One particularly efficient mechanism for executing such queries is order-preserving encryption encoding (OPE) which results in ciphertexts that preserve the relative order of the underlying plaintexts thus allowing range and comparison queries to be performed directly on ciphertexts. Recently, (SP 2013) gave the first construction of an ideally-secure OPE scheme and Kerschbaum (CCS 2015) showed how to achieve the even stronger notion of frequency-hiding OPE. However, as (CCS 2015) have recently demonstrated, these constructions remain vulnerable to several attacks. Additionally, all previous ideal OPE schemes (with or without frequency-hiding) either require a large round complexity of O(log n) rounds for each insertion, or a large persistent client storage of size O(n), where n is the number of items in the database. It is thus desirable to achieve a range query scheme addressing both issues gracefully. In this paper, we propose an alternative approach to range queries over encrypted data that is optimized to support insert-heavy workloads as are common in "big data" applications while still maintaining search functionality and achieving stronger security. Specifically, we propose a new primitive called partial order preserving encoding (POPE) that achieves ideal OPE security with frequency hiding and also leaves a sizable fraction of the data pairwise incomparable. Using only O(1) persistent and O(ne) non-persistent client storage for 0
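The defining OPE property (ciphertext order mirrors plaintext order) can be illustrated with a toy keyed encoding: each ciphertext is a sum of keyed pseudorandom positive gaps. This is a sketch of the property only, assuming a SHA-256-based gap generator; it is not any of the cited constructions and is not secure (it leaks far more than order).

```python
import hashlib

def ope_encode(key: bytes, m: int) -> int:
    """Toy order-preserving encoding: the ciphertext of m is the sum of
    keyed pseudorandom gaps for every plaintext value up to m. Each gap
    is at least 1, so the encoding is strictly increasing in m."""
    c = 0
    for i in range(m + 1):
        h = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        c += 1 + h[0]          # gap in [1, 256]
    return c
```

Because `ope_encode(key, x) < ope_encode(key, y)` exactly when `x < y`, a server can answer range queries by comparing ciphertexts, which is also precisely the leakage that motivates the stronger notions discussed above.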
A primitive related to OPE is order-revealing encryption (ORE) @cite_48 , which provides a public mechanism for comparing two encrypted values and thus also enables range searches over encrypted data. (Note that OPE is the special case where this mechanism is lexicographic comparison.) The first construction of ORE satisfying best-possible security @cite_2 was based on multilinear maps @cite_34 and is thus unlikely to be practical in the near future. An alternative scheme based only on pseudorandom functions @cite_35 , however, has additional leakage that weakens the security achieved.
{ "cite_N": [ "@cite_48", "@cite_34", "@cite_35", "@cite_2" ], "mid": [ "1895952394", "1603256047", "2400905995", "754106230" ], "abstract": [ "We initiate the cryptographic study of order-preserving symmetric encryption (OPE), a primitive suggested in the database community by (SIGMOD '04) for allowing efficient range queries on encrypted data. Interestingly, we first show that a straightforward relaxation of standard security notions for encryption such as indistinguishability against chosen-plaintext attack (IND-CPA) is unachievable by a practical OPE scheme. Instead, we propose a security notion in the spirit of pseudorandom functions (PRFs) and related primitives asking that an OPE scheme look \"as-random-as-possible\" subject to the order-preserving constraint. We then design an efficient OPE scheme and prove its security under our notion based on pseudorandomness of an underlying blockcipher. Our construction is based on a natural relation we uncover between a random order-preserving function and the hypergeometric probability distribution. In particular, it makes black-box use of an efficient sampling algorithm for the latter.", "We describe plausible lattice-based constructions with properties that approximate the sought-after multilinear maps in hard-discrete-logarithm groups, and show an example application of such multi-linear maps that can be realized using our approximation. The security of our constructions relies on seemingly hard problems in ideal lattices, which can be viewed as extensions of the assumed hardness of the NTRU function.", "In an order-preserving encryption scheme, the encryption algorithm produces ciphertexts that preserve the order of their plaintexts. Order-preserving encryption schemes have been studied intensely in the last decade, and yet not much is known about the security of these schemes. Very recently, Boneh eti¾?al. 
Eurocrypti¾?2015 introduced a generalization of order-preserving encryption, called order-revealing encryption, and presented a construction which achieves this notion with best-possible security. Because their construction relies on multilinear maps, it is too impractical for most applications and therefore remains a theoretical result. In this work, we build efficiently implementable order-revealing encryption from pseudorandom functions. We present the first efficient order-revealing encryption scheme which achieves a simulation-based security notion with respect to a leakage function that precisely quantifies what is leaked by the scheme. In fact, ciphertexts in our scheme are only about 1.6 times longer than their plaintexts. Moreover, we show how composing our construction with existing order-preserving encryption schemes results in order-revealing encryption that is strictly more secure than all preceding order-preserving encryption schemes.", "Deciding “greater-than” relations among data items just given their encryptions is at the heart of search algorithms on encrypted data, most notably, non-interactive binary search on encrypted data. Order-preserving encryption provides one solution, but provably provides only limited security guarantees. Two-input functional encryption is another approach, but requires the full power of obfuscation machinery and is currently not implementable." ] }
1610.04025
2532272970
Recently there has been much interest in performing search queries over encrypted data to enable functionality while protecting sensitive data. One particularly efficient mechanism for executing such queries is order-preserving encryption encoding (OPE) which results in ciphertexts that preserve the relative order of the underlying plaintexts thus allowing range and comparison queries to be performed directly on ciphertexts. Recently, (SP 2013) gave the first construction of an ideally-secure OPE scheme and Kerschbaum (CCS 2015) showed how to achieve the even stronger notion of frequency-hiding OPE. However, as (CCS 2015) have recently demonstrated, these constructions remain vulnerable to several attacks. Additionally, all previous ideal OPE schemes (with or without frequency-hiding) either require a large round complexity of O(log n) rounds for each insertion, or a large persistent client storage of size O(n), where n is the number of items in the database. It is thus desirable to achieve a range query scheme addressing both issues gracefully. In this paper, we propose an alternative approach to range queries over encrypted data that is optimized to support insert-heavy workloads as are common in "big data" applications while still maintaining search functionality and achieving stronger security. Specifically, we propose a new primitive called partial order preserving encoding (POPE) that achieves ideal OPE security with frequency hiding and also leaves a sizable fraction of the data pairwise incomparable. Using only O(1) persistent and O(ne) non-persistent client storage for 0
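A much-simplified sketch in the spirit of the PRF-based approach shows how a public comparison mechanism can work: each bit position gets a token offset mod 3 by the bit value, and whoever holds two ciphertexts learns the order from the first differing token. The SHA-256 "PRF", the 8-bit domain, and the leakage behavior here are assumptions for illustration; the cited scheme's encodings and analysis differ.

```python
import hashlib

def _prf(key: bytes, prefix: str) -> int:
    """Stand-in PRF mapping a bit-prefix to {0, 1, 2}."""
    return hashlib.sha256(key + prefix.encode()).digest()[0] % 3

def ore_encrypt(key: bytes, m: int, nbits: int = 8):
    """One token per bit position: PRF of the preceding bits, offset
    by the bit value mod 3."""
    bits = format(m, f"0{nbits}b")
    return [(_prf(key, bits[:i]) + int(bits[i])) % 3 for i in range(nbits)]

def ore_compare(cx, cy) -> int:
    """Public comparison: -1 if x < y, 0 if equal, 1 if x > y.
    At the first differing token, v == u + 1 (mod 3) iff x's bit was 0."""
    for u, v in zip(cx, cy):
        if u != v:
            return -1 if v == (u + 1) % 3 else 1
    return 0
```

Note that the comparison also reveals the index of the first differing bit, an example of the "additional leakage" that distinguishes practical PRF-based ORE from the ideal notion.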
Symmetric searchable encryption (SSE) was first proposed by Song, Wagner, and Perrig @cite_15 , who showed how to search encrypted data for keyword matches in sub-linear time. The first formal security definition for SSE was given by Goh @cite_45 ; @cite_12 presented the first SSE scheme with sublinear search time and compact space, while @cite_39 gave the first SSE scheme with sublinear search time for conjunctive queries. Recent works @cite_10 @cite_28 @cite_36 achieve performance within a couple of orders of magnitude of unencrypted databases for rich classes of queries, including Boolean formulas over keywords and range queries. Of particular interest is the work of Hahn and Kerschbaum @cite_24 , who show how to use lazy techniques to build SSE with quick updates. We refer interested readers to the survey by Bösch @cite_30 for an excellent overview of this area.
{ "cite_N": [ "@cite_30", "@cite_28", "@cite_36", "@cite_39", "@cite_24", "@cite_45", "@cite_15", "@cite_10", "@cite_12" ], "mid": [ "2162567660", "", "", "1502708590", "1993536048", "", "2147929033", "1996453724", "2146828512" ], "abstract": [ "We survey the notion of provably secure searchable encryption (SE) by giving a complete and comprehensive overview of the two main SE techniques: searchable symmetric encryption (SSE) and public key encryption with keyword search (PEKS). Since the pioneering work of Song, Wagner, and Perrig (IEEE S&P '00), the field of provably secure SE has expanded to the point where we felt that taking stock would provide benefit to the community. The survey has been written primarily for the nonspecialist who has a basic information security background. Thus, we sacrifice full details and proofs of individual constructions in favor of an overview of the underlying key techniques. We categorize and compare the different SE schemes in terms of their security, efficiency, and functionality. For the experienced researcher, we point out connections between the many approaches to SE and identify open research problems. Two major conclusions can be drawn from our work. While the so-called IND-CKA2 security notion becomes prevalent in the literature and efficient (sublinear) SE schemes meeting this notion exist in the symmetric setting, achieving this strong form of security efficiently in the asymmetric setting remains an open problem. We observe that in multirecipient SE schemes, regardless of their efficiency drawbacks, there is a noticeable lack of query expressiveness that hinders deployment in practice.", "", "", "This work presents the design and analysis of the first searchable symmetric encryption (SSE) protocol that supports conjunctive search and general Boolean queries on outsourced symmetrically- encrypted data and that scales to very large databases and arbitrarily-structured data including free text search. 
To date, work in this area has focused mainly on single-keyword search. For the case of conjunctive search, prior SSE constructions required work linear in the total number of documents in the database and provided good privacy only for structured attribute-value data, rendering these solutions too slow and inflexible for large practical databases.", "Searchable (symmetric) encryption allows encryption while still enabling search for keywords. Its immediate application is cloud storage where a client outsources its files while the (cloud) service provider should search and selectively retrieve those. Searchable encryption is an active area of research and a number of schemes with different efficiency and security characteristics have been proposed in the literature. Any scheme for practical adoption should be efficient -- i.e. have sub-linear search time --, dynamic -- i.e. allow updates -- and semantically secure to the most possible extent. Unfortunately, efficient, dynamic searchable encryption schemes suffer from various drawbacks. Either they deteriorate from semantic security to the security of deterministic encryption under updates, they require to store information on the client and for deleted files and keywords or they have very large index sizes. All of this is a problem, since we can expect the majority of data to be later added or changed. Since these schemes are also less efficient than deterministic encryption, they are currently an unfavorable choice for encryption in the cloud. In this paper we present the first searchable encryption scheme whose updates leak no more information than the access pattern, that still has asymptotically optimal search time, linear, very small and asymptotically optimal index size and can be implemented without storage on the client (except the key). Our construction is based on the novel idea of learning the index for efficient access from the access pattern itself. 
Furthermore, we implement our system and show that it is highly efficient for cloud storage.", "", "It is desirable to store data on data storage servers such as mail servers and file servers in encrypted form to reduce security and privacy risks. But this usually implies that one has to sacrifice functionality for security. For example, if a client wishes to retrieve only documents containing certain words, it was not previously known how to let the data storage server perform the search and answer the query, without loss of data confidentiality. We describe our cryptographic schemes for the problem of searching on encrypted data and provide proofs of security for the resulting crypto systems. Our techniques have a number of crucial advantages. They are provably secure: they provide provable secrecy for encryption, in the sense that the untrusted server cannot learn anything about the plaintext when only given the ciphertext; they provide query isolation for searches, meaning that the untrusted server cannot learn anything more about the plaintext than the search result; they provide controlled searching, so that the untrusted server cannot search for an arbitrary word without the user's authorization; they also support hidden queries, so that the user may ask the untrusted server to search for a secret word without revealing the word to the server. The algorithms presented are simple, fast (for a document of length n, the encryption and search algorithms only need O(n) stream cipher and block cipher operations), and introduce almost no space and communication overhead, and hence are practical to use today.", "Query privacy in secure DBMS is an important feature, although rarely formally considered outside the theoretical community. Because of the high overheads of guaranteeing privacy in complex queries, almost all previous works addressing practical applications consider limited queries (e.g., just keyword search), or provide a weak guarantee of privacy. 
In this work, we address a major open problem in private DB: efficient sub-linear search for arbitrary Boolean queries. We consider scalable DBMS with provable security for all parties, including protection of the data from both server (who stores encrypted data) and client (who searches it), as well as protection of the query, and access control for the query. We design, build, and evaluate the performance of a rich DBMS system, suitable for real-world deployment on today's medium- to large-scale DBs. On a modern server, we are able to query a formula over 10TB, 100M-record DB, with 70 searchable index terms per DB row, in time comparable to (insecure) MySQL (many practical queries can be privately executed with work 1.2-3 times slower than MySQL, although some queries are costlier). We support a rich query set, including searching on arbitrary boolean formulas on keywords and ranges, support for stemming, and free keyword searches over text fields. We identify and permit a reasonable and controlled amount of leakage, proving that no further leakage is possible. In particular, we allow leakage of some search pattern information, but protect the query and data, provide a high level of privacy for individual terms in the executed search formula, and hide the difference between a query that returned no results and a query that returned a very small result set. We also support private and complex access policies, integrated in the search process so that a query with empty result set and a query that fails the policy are hard to tell apart.", "Searchable symmetric encryption (SSE) allows a party to outsource the storage of its data to another party (a server) in a private manner, while maintaining the ability to selectively search over it. This problem has been the focus of active research in recent years.
In this paper we show two solutions to SSE that simultaneously enjoy the following properties: Both solutions are more efficient than all previous constant-round schemes. In particular, the work performed by the server per returned document is constant as opposed to linear in the size of the data. Both solutions enjoy stronger security guarantees than previous constant-round schemes. In fact, we point out subtle but serious problems with previous notions of security for SSE, and show how to design constructions which avoid these pitfalls. Further, our second solution also achieves what we call adaptive SSE security, where queries to the server can be chosen adaptively (by the adversary) during the execution of the search; this notion is both important in practice and has not been previously considered. Surprisingly, despite being more secure and more efficient, our SSE schemes are remarkably simple. We consider the simplicity of both solutions as an important step towards the deployment of SSE technologies. As an additional contribution, we also consider multi-user SSE. All prior work on SSE studied the setting where only the owner of the data is capable of submitting search queries. We consider the natural extension where an arbitrary group of parties other than the owner can submit search queries. We formally define SSE in the multi-user setting, and present an efficient construction that achieves better performance than simply using access control mechanisms." ] }
1610.04025
2532272970
Recently there has been much interest in performing search queries over encrypted data to enable functionality while protecting sensitive data. One particularly efficient mechanism for executing such queries is order-preserving encryption/encoding (OPE) which results in ciphertexts that preserve the relative order of the underlying plaintexts thus allowing range and comparison queries to be performed directly on ciphertexts. Recently, (SP 2013) gave the first construction of an ideally-secure OPE scheme and Kerschbaum (CCS 2015) showed how to achieve the even stronger notion of frequency-hiding OPE. However, as (CCS 2015) have recently demonstrated, these constructions remain vulnerable to several attacks. Additionally, all previous ideal OPE schemes (with or without frequency-hiding) either require a large round complexity of O(log n) rounds for each insertion, or a large persistent client storage of size O(n), where n is the number of items in the database. It is thus desirable to achieve a range query scheme addressing both issues gracefully. In this paper, we propose an alternative approach to range queries over encrypted data that is optimized to support insert-heavy workloads as are common in "big data" applications while still maintaining search functionality and achieving stronger security. Specifically, we propose a new primitive called partial order preserving encoding (POPE) that achieves ideal OPE security with frequency hiding and also leaves a sizable fraction of the data pairwise incomparable. Using only O(1) persistent and O(n^ε) non-persistent client storage for 0
Oblivious RAM @cite_4 @cite_0 @cite_13 @cite_3 and oblivious storage schemes @cite_16 @cite_42 @cite_32 @cite_9 can be used for the same applications as OPE and POPE, but achieve a stronger security definition that additionally hides the access pattern, and therefore incur a larger performance cost than our approach.
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_42", "@cite_32", "@cite_3", "@cite_0", "@cite_16", "@cite_13" ], "mid": [ "2005756274", "1993131747", "2139829638", "2294070790", "2051576234", "1545698365", "2093852772", "2170993700" ], "abstract": [ "Software protection is one of the most important issues concerning computer practice. There exist many heuristics and ad-hoc methods for protection, but the problem as a whole has not received the theoretical treatment it deserves. In this paper, we make the first steps towards a theoretic treatment of software protection: First, we distill and formulate the key problem of learning about a program from its execution. Second, assuming the existence of one-way permutations, we present an efficient way of executing programs such that it is infeasible to learn anything about the program by monitoring its executions. How can one efficiently execute programs without allowing an adversary, monitoring the execution, to learn anything about the program? Traditional cryptographic techniques can be applied to keep the contents of the memory unknown throughout the execution, but are not applicable to the problem of hiding the access pattern. The problem of hiding the access pattern efficiently corresponds to efficient simulation of Random Access Machines (RAM) on an oblivious RAM. We define an oblivious RAM to be a (probabilistic) RAM for which (the distribution of) the memory access pattern is independent of the input. We present an (on-line) simulation of t steps of an arbitrary RAM with m memory cells, by less than t·m^ε steps of an oblivious RAM with 2m memory cells, where ε>0 is an arbitrary constant.", "There have been several attempts recently at using homomorphic encryption to increase the efficiency of Oblivious RAM protocols. One of the most successful has been Onion ORAM, which achieves O(1) communication overhead with polylogarithmic server computation. However, it has two drawbacks.
It requires a large block size of B = Ω(log^6 N) with large constants. Moreover, while it only needs polylogarithmic computation complexity, that computation consists mostly of expensive homomorphic multiplications. In this work, we address these problems and reduce the required block size to Ω(log^4 N). We remove most of the homomorphic multiplications while maintaining O(1) communication complexity. Our idea is to replace their homomorphic eviction routine with a new, much cheaper permute-and-merge eviction which eliminates homomorphic multiplications and maintains the same level of security. In turn, this removes the need for layered encryption that Onion ORAM relies on and reduces both the minimum block size and server computation.", "We formalize the notion of Verifiable Oblivious Storage (VOS), where a client outsources the storage of data to a server while ensuring data confidentiality, access pattern privacy, and integrity and freshness of data accesses. VOS generalizes the notion of Oblivious RAM (ORAM) in that it allows the server to perform computation, and also explicitly considers data integrity and freshness. We show that allowing server-side computation enables us to construct asymptotically more efficient VOS schemes whose bandwidth overhead cannot be matched by any ORAM scheme, due to a known lower bound by Goldreich and Ostrovsky. Specifically, for large block sizes we can construct a VOS scheme with constant bandwidth per query; further, answering queries requires only poly-logarithmic server computation. We describe applications of VOS to Dynamic Proofs of Retrievability, and RAM-model secure multi-party computation.", "We present Onion ORAM, an Oblivious RAM (ORAM) with constant worst-case bandwidth blowup that leverages poly-logarithmic server computation to circumvent the logarithmic lower bound on ORAM bandwidth blowup.
Our construction does not require fully homomorphic encryption, but employs an additively homomorphic encryption scheme such as the Damgard-Jurik cryptosystem, or alternatively a BGV-style somewhat homomorphic encryption scheme without bootstrapping. At the core of our construction is an ORAM scheme that has “shallow circuit depth” over the entire history of ORAM accesses. We also propose novel techniques to achieve security against a malicious server, without resorting to expensive and non-standard techniques such as SNARKs. To the best of our knowledge, Onion ORAM is the first concrete instantiation of a constant bandwidth blowup ORAM under standard assumptions (even for the semi-honest setting).", "We propose a new tree-based ORAM scheme called Circuit ORAM. Circuit ORAM makes both theoretical and practical contributions. From a theoretical perspective, Circuit ORAM shows that the well-known Goldreich-Ostrovsky logarithmic ORAM lower bound is tight under certain parameter ranges, for several performance metrics. Therefore, we are the first to give an answer to a theoretical challenge that remained open for the past twenty-seven years. Second, Circuit ORAM earns its name because it achieves (almost) optimal circuit size both in theory and in practice for realistic choices of block sizes. We demonstrate compelling practical performance and show that Circuit ORAM is an ideal candidate for secure multi-party computation applications.", "Oblivious RAM is a useful primitive that allows a client to hide its data access patterns from an untrusted server in storage outsourcing applications. Until recently, most prior works on Oblivious RAM aim to optimize its amortized cost, while suffering from linear or even higher worst-case cost. Such poor worst-case behavior renders these schemes impractical in realistic settings, since a data access request can occasionally be blocked waiting for an unreasonably large number of operations to complete. 
This paper proposes novel Oblivious RAM constructions that achieve poly-logarithmic worst-case cost, while consuming constant client-side storage. To achieve the desired worst-case asymptotic performance, we propose a novel technique in which we organize the O-RAM storage into a binary tree over data buckets, while moving data blocks obliviously along tree edges.", "We study oblivious storage (OS), a natural way to model privacy-preserving data outsourcing where a client, Alice, stores sensitive data at an honest-but-curious server, Bob. We show that Alice can hide both the content of her data and the pattern in which she accesses her data, with high probability, using a method that achieves O(1) amortized rounds of communication between her and Bob for each data access. We assume that Alice and Bob exchange small messages, of size O(N^(1/c)), for some constant c>=2, in a single round, where N is the size of the data set that Alice is storing with Bob. We also assume that Alice has a private memory of size 2N^(1/c). These assumptions model real-world cloud storage scenarios, where trade-offs occur between latency, bandwidth, and the size of the client's private memory.", "We present Path ORAM, an extremely simple Oblivious RAM protocol with a small amount of client storage. Partly due to its simplicity, Path ORAM is the most practical ORAM scheme for small client storage known to date. We formally prove that Path ORAM requires log^2 N / log X bandwidth overhead for block size B = X log N. For block sizes bigger than Omega(log^2 N), Path ORAM is asymptotically better than the best known ORAM scheme with small client storage. Due to its practicality, Path ORAM has been adopted in the design of secure processors since its proposal." ] }
1610.04025
2532272970
Recently there has been much interest in performing search queries over encrypted data to enable functionality while protecting sensitive data. One particularly efficient mechanism for executing such queries is order-preserving encryption/encoding (OPE) which results in ciphertexts that preserve the relative order of the underlying plaintexts thus allowing range and comparison queries to be performed directly on ciphertexts. Recently, (SP 2013) gave the first construction of an ideally-secure OPE scheme and Kerschbaum (CCS 2015) showed how to achieve the even stronger notion of frequency-hiding OPE. However, as (CCS 2015) have recently demonstrated, these constructions remain vulnerable to several attacks. Additionally, all previous ideal OPE schemes (with or without frequency-hiding) either require a large round complexity of O(log n) rounds for each insertion, or a large persistent client storage of size O(n), where n is the number of items in the database. It is thus desirable to achieve a range query scheme addressing both issues gracefully. In this paper, we propose an alternative approach to range queries over encrypted data that is optimized to support insert-heavy workloads as are common in "big data" applications while still maintaining search functionality and achieving stronger security. Specifically, we propose a new primitive called partial order preserving encoding (POPE) that achieves ideal OPE security with frequency hiding and also leaves a sizable fraction of the data pairwise incomparable. Using only O(1) persistent and O(n^ε) non-persistent client storage for 0
Finally, we note that techniques such as fully-homomorphic encryption @cite_14 , public-key searchable encryption @cite_19 @cite_5 @cite_47 , and secure multi-party computation @cite_37 @cite_6 @cite_31 can enable searching over encrypted data while achieving the strongest possible security. However, these approaches would require performing expensive cryptographic operations over the entire database on each query and are thus prohibitively expensive. Very recently, cryptographic primitives such as order-revealing encryption @cite_2 , as well as garbled random-access memory @cite_8 , have offered the potential to achieve this level of security for sub-linear time search. However, all constructions of these primitives either rely on very non-standard assumptions or are prohibitively slow.
{ "cite_N": [ "@cite_37", "@cite_14", "@cite_31", "@cite_8", "@cite_6", "@cite_19", "@cite_2", "@cite_5", "@cite_47" ], "mid": [ "2156001253", "2031533839", "", "2071026487", "", "2161214567", "754106230", "1589843374", "2154448764" ], "abstract": [ "In this paper we introduce a new tool for controlling the knowledge transfer process in cryptographic protocol design. It is applied to solve a general class of problems which include most of the two-party cryptographic problems in the literature. Specifically, we show how two parties A and B can interactively generate a random integer N = p·q such that its secret, i.e., the prime factors (p, q), is hidden from either party individually but is recoverable jointly if desired. This can be utilized to give a protocol for two parties with private values i and j to compute any polynomially computable functions f(i,j) and g(i,j) with minimal knowledge transfer and a strong fairness property. As a special case, A and B can exchange a pair of secrets SA, SB, e.g. the factorization of an integer and a Hamiltonian circuit in a graph, in such a way that SA becomes computable by B when and only when SB becomes computable by A. All these results are proved assuming only that the problem of factoring large integers is computationally intractable.", "We propose a fully homomorphic encryption scheme -- i.e., a scheme that allows one to evaluate circuits over encrypted data without being able to decrypt. Our solution comes in three steps. First, we provide a general result -- that, to construct an encryption scheme that permits evaluation of arbitrary circuits, it suffices to construct an encryption scheme that can evaluate (slightly augmented versions of) its own decryption circuit; we call a scheme that can evaluate its (augmented) decryption circuit bootstrappable. Next, we describe a public key encryption scheme using ideal lattices that is almost bootstrappable.
Lattice-based cryptosystems typically have decryption algorithms with low circuit complexity, often dominated by an inner product computation that is in NC1. Also, ideal lattices provide both additive and multiplicative homomorphisms (modulo a public-key ideal in a polynomial ring that is represented as a lattice), as needed to evaluate general circuits. Unfortunately, our initial scheme is not quite bootstrappable -- i.e., the depth that the scheme can correctly evaluate can be logarithmic in the lattice dimension, just like the depth of the decryption circuit, but the latter is greater than the former. In the final step, we show how to modify the scheme to reduce the depth of the decryption circuit, and thereby obtain a bootstrappable encryption scheme, without reducing the depth that the scheme can evaluate. Abstractly, we accomplish this by enabling the encrypter to start the decryption process, leaving less work for the decrypter, much like the server leaves less work for the decrypter in a server-aided cryptosystem.", "", "Yao's garbled circuit construction is a very fundamental result in cryptography and recent efficiency optimizations have brought it much closer to practice. However these constructions work only for circuits and garbling a RAM program involves the inefficient process of first converting it into a circuit. Towards the goal of avoiding this inefficiency, Lu and Ostrovsky (Eurocrypt 2013) introduced the notion of \"garbled RAM\" as a method to garble RAM programs directly. It can be seen as a RAM analogue of Yao's garbled circuits such that, the size of the garbled program and the time it takes to create and evaluate it, is proportional only to the running time on the RAM program rather than its circuit size. Known realizations of this primitive, either need to rely on strong computational assumptions or do not achieve the aforementioned efficiency (Gentry, Halevi, Lu, Ostrovsky, Raykova and Wichs, EUROCRYPT 2014). 
In this paper we provide the first construction with strictly poly-logarithmic overhead in both space and time based only on the minimal assumption that one-way functions exist. Our scheme allows for garbling multiple programs being executed on a persistent database, and has the additional feature that the program garbling is decoupled from the database garbling. This allows a client to provide multiple garbled programs to the server as part of a pre-processing phase and then later determine the order and the inputs on which these programs are to be executed, doing work independent of the running times of the programs itself.", "", "We study the problem of searching on data that is encrypted using a public key system. Consider user Bob who sends email to user Alice encrypted under Alice’s public key. An email gateway wants to test whether the email contains the keyword “urgent” so that it could route the email accordingly. Alice, on the other hand does not wish to give the gateway the ability to decrypt all her messages. We define and construct a mechanism that enables Alice to provide a key to the gateway that enables the gateway to test whether the word “urgent” is a keyword in the email without learning anything else about the email. We refer to this mechanism as Public Key Encryption with keyword Search. As another example, consider a mail server that stores various messages publicly encrypted for Alice by others. Using our mechanism Alice can send the mail server a key that will enable the server to identify all messages containing some specific keyword, but learn nothing else. We define the concept of public key encryption with keyword search and give several constructions.", "Deciding “greater-than” relations among data items just given their encryptions is at the heart of search algorithms on encrypted data, most notably, non-interactive binary search on encrypted data. 
Order-preserving encryption provides one solution, but provably provides only limited security guarantees. Two-input functional encryption is another approach, but requires the full power of obfuscation machinery and is currently not implementable.", "We construct public-key systems that support comparison queries (x ≥ a) on encrypted data as well as more general queries such as subset queries (x∈ S). Furthermore, these systems support arbitrary conjunctive queries (P1 ∧ ... ∧ Pl) without leaking information on individual conjuncts. We present a general framework for constructing and analyzing public-key systems supporting queries on encrypted data.", "We design an encryption scheme called multi-dimensional range query over encrypted data (MRQED), to address the privacy concerns related to the sharing of network audit logs and various other applications. Our scheme allows a network gateway to encrypt summaries of network flows before submitting them to an untrusted repository. When network intrusions are suspected, an authority can release a key to an auditor, allowing the auditor to decrypt flows whose attributes (e.g., source and destination addresses, port numbers, etc.) fall within specific ranges. However, the privacy of all irrelevant flows are still preserved. We formally define the security for MRQED and prove the security of our construction under the decision bilinear Diffie-Hellman and decision linear assumptions in certain bilinear groups. We study the practical performance of our construction in the context of network audit logs. Apart from network audit logs, our scheme also has interesting applications for financial audit logs, medical privacy, untrusted remote storage, etc. In particular, we show that MRQED implies a solution to its dual problem, which enables investors to trade stocks through a broker in a privacy-preserving manner." ] }
1610.04025
2532272970
Recently there has been much interest in performing search queries over encrypted data to enable functionality while protecting sensitive data. One particularly efficient mechanism for executing such queries is order-preserving encryption encoding (OPE) which results in ciphertexts that preserve the relative order of the underlying plaintexts thus allowing range and comparison queries to be performed directly on ciphertexts. Recently, (SP 2013) gave the first construction of an ideally-secure OPE scheme and Kerschbaum (CCS 2015) showed how to achieve the even stronger notion of frequency-hiding OPE. However, as (CCS 2015) have recently demonstrated, these constructions remain vulnerable to several attacks. Additionally, all previous ideal OPE schemes (with or without frequency-hiding) either require a large round complexity of O(log n) rounds for each insertion, or a large persistent client storage of size O(n), where n is the number of items in the database. It is thus desirable to achieve a range query scheme addressing both issues gracefully. In this paper, we propose an alternative approach to range queries over encrypted data that is optimized to support insert-heavy workloads as are common in "big data" applications while still maintaining search functionality and achieving stronger security. Specifically, we propose a new primitive called partial order preserving encoding (POPE) that achieves ideal OPE security with frequency hiding and also leaves a sizable fraction of the data pairwise incomparable. Using only O(1) persistent and O(ne) non-persistent client storage for 0
Our POPE tree is similar in concept to the Buffer Tree of @cite_38 . Their data structure delays insertions and searches in a buffer stored at each node; a buffer is cleared (thus executing the actual operations) when it becomes sufficiently full. The main difference here is that our buffers contain only insertions, and they are cleared only when a search operation passes through that node.
{ "cite_N": [ "@cite_38" ], "mid": [ "2088116875" ], "abstract": [ "We present a technique for designing external memory data structures that support batched operations I/O-efficiently. We show how the technique can be used to develop external versions of a search tree, a priority queue, and a segment tree, and give examples of how these structures can be used to develop I/O-efficient algorithms. The developed algorithms are either extremely simple or straightforward generalizations of known internal memory algorithms, given the developed external data structures." ] }
1610.04073
2949885492
Knowledge base completion aims to infer new relations from existing information. In this paper, we propose the path-augmented TransR (PTransR) model to improve the accuracy of link prediction. In our approach, we base the PTransR model on TransR, which is the best one-hop model at present. Then we regularize TransR with information of relation paths. In our experiment, we evaluate PTransR on the task of entity prediction. Experimental results show that PTransR outperforms previous models.
Text-based approaches extract entities and/or relations from plain text. For example, Hearst @cite_12 uses the ``is a(n)'' pattern to extract hyponymy relations. @cite_0 proposes to extract open-domain relations from the Web. Fully supervised relation extraction, which classifies two marked entities into one of several predefined relations, has become a hot research area in the past several years @cite_16 @cite_14 @cite_4 .
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_0", "@cite_16", "@cite_12" ], "mid": [ "", "1750263989", "2127978399", "2950371387", "2068737686" ], "abstract": [ "", "Relation classification is an important research arena in the field of natural language processing (NLP). In this paper, we present SDP-LSTM, a novel neural network to classify the relation of two entities in a sentence. Our neural architecture leverages the shortest dependency path (SDP) between two entities; multichannel recurrent neural networks, with long short term memory (LSTM) units, pick up heterogeneous information along the SDP. Our proposed model has several distinct features: (1) The shortest dependency paths retain most relevant information (to relation classification), while eliminating irrelevant words in the sentence. (2) The multichannel LSTM networks allow effective information integration from heterogeneous sources over the dependency paths. (3) A customized dropout strategy regularizes the neural network to alleviate overfitting. We test our model on the SemEval 2010 relation classification task, and achieve an @math -score of 83.7 , higher than competing methods in the literature.", "To implement open information extraction, a new extraction paradigm has been developed in which a system makes a single data-driven pass over a corpus of text, extracting a large set of relational tuples without requiring any human input. Using training data, a Self-Supervised Learner employs a parser and heuristics to determine criteria that will be used by an extraction classifier (or other ranking model) for evaluating the trustworthiness of candidate tuples that have been extracted from the corpus of text, by applying heuristics to the corpus of text. The classifier retains tuples with a sufficiently high probability of being trustworthy. 
A redundancy-based assessor assigns a probability to each retained tuple to indicate a likelihood that the retained tuple is an actual instance of a relationship between a plurality of objects comprising the retained tuple. The retained tuples comprise an extraction graph that can be queried for information.", "Relation classification is an important semantic processing task for which state-of-the-art systems still rely on costly handcrafted features. In this work we tackle the relation classification task using a convolutional neural network that performs classification by ranking (CR-CNN). We propose a new pairwise ranking loss function that makes it easy to reduce the impact of artificial classes. We perform experiments using the SemEval-2010 Task 8 dataset, which is designed for the task of classifying the relationship between two nominals marked in a sentence. Using CR-CNN, we outperform the state-of-the-art for this dataset and achieve an F1 of 84.1 without using any costly handcrafted features. Additionally, our experimental results show that: (1) our approach is more effective than CNN followed by a softmax classifier; (2) omitting the representation of the artificial class Other improves both precision and recall; and (3) using only word embeddings as input features is enough to achieve state-of-the-art results if we consider only the text between the two target nominals.", "We describe a method for the automatic acquisition of the hyponymy lexical relation from unrestricted text. Two goals motivate the approach: (i) avoidance of the need for pre-encoded knowledge and (ii) applicability across a wide range of text. We identify a set of lexico-syntactic patterns that are easily recognizable, that occur frequently and across text genre boundaries, and that indisputably indicate the lexical relation of interest. We describe a method for discovering these patterns and suggest that other lexical relations will also be acquirable in this way.
A subset of the acquisition algorithm is implemented and the results are used to augment and critique the structure of a large hand-built thesaurus. Extensions and applications to areas such as information retrieval are suggested." ] }
1610.04073
2949885492
Knowledge base completion aims to infer new relations from existing information. In this paper, we propose the path-augmented TransR (PTransR) model to improve the accuracy of link prediction. In our approach, we base the PTransR model on TransR, which is the best one-hop model at present. Then we regularize TransR with information of relation paths. In our experiment, we evaluate PTransR on the task of entity prediction. Experimental results show that PTransR outperforms previous models.
Knowledge base completion/population, on the other hand, does not use additional text. @cite_17 propose a tensor model to predict missing relations in an existing knowledge base, showing the ability of neural networks to infer entity-relation facts. Translation-based embedding approaches were then proposed for knowledge base completion @cite_5 @cite_9 @cite_19 @cite_6 . Recently, @cite_2 use textual context as additional information to improve knowledge base completion.
{ "cite_N": [ "@cite_9", "@cite_6", "@cite_19", "@cite_2", "@cite_5", "@cite_17" ], "mid": [ "2283196293", "2952854166", "2184957013", "2571811098", "2127795553", "2127426251" ], "abstract": [ "We deal with embedding a large scale knowledge graph composed of entities and relations into a continuous vector space. TransE is a promising method proposed recently, which is very efficient while achieving state-of-the-art predictive performance. We discuss some mapping properties of relations which should be considered in embedding, such as reflexive, one-to-many, many-to-one, and many-to-many. We note that TransE does not do well in dealing with these properties. Some complex models are capable of preserving these mapping properties but sacrifice efficiency in the process. To make a good trade-off between model capacity and efficiency, in this paper we propose TransH which models a relation as a hyperplane together with a translation operation on it. In this way, we can well preserve the above mapping properties of relations with almost the same model complexity of TransE. Additionally, as a practical knowledge graph is often far from completed, how to construct negative examples to reduce false negative labels in training is very important. Utilizing the one-to-many/many-to-one mapping property of a relation, we propose a simple trick to reduce the possibility of false negative labeling. We conduct extensive experiments on link prediction, triplet classification and fact extraction on benchmark datasets like WordNet and Freebase. Experiments show TransH delivers significant improvements over TransE on predictive accuracy with comparable capability to scale up.", "Representation learning of knowledge bases (KBs) aims to embed both entities and relations into a low-dimensional space. Most existing methods only consider direct relations in representation learning.
We argue that multiple-step relation paths also contain rich inference patterns between entities, and propose a path-based representation learning model. This model considers relation paths as translations between entities for representation learning, and addresses two key challenges: (1) Since not all relation paths are reliable, we design a path-constraint resource allocation algorithm to measure the reliability of relation paths. (2) We represent relation paths via semantic composition of relation embeddings. Experimental results on real-world datasets show that, as compared with baselines, our model achieves significant and consistent improvements on knowledge base completion and relation extraction from text.", "Knowledge graph completion aims to perform link prediction between entities. In this paper, we consider the approach of knowledge graph embeddings. Recently, models such as TransE and TransH build entity and relation embeddings by regarding a relation as translation from head entity to tail entity. We note that these models simply put both entities and relations within the same semantic space. In fact, an entity may have multiple aspects and various relations may focus on different aspects of entities, which makes a common space insufficient for modeling. In this paper, we propose TransR to build entity and relation embeddings in separate entity space and relation spaces. Afterwards, we learn embeddings by first projecting entities from entity space to corresponding relation space and then building translations between projected entities. In experiments, we evaluate our models on three tasks including link prediction, triple classification and relational fact extraction. Experimental results show significant and consistent improvements compared to state-of-the-art baselines including TransE and TransH. 
The source code of this paper can be obtained from https://github.com/mrlyk423/relation_extraction.", "Learning the representations of a knowledge graph has attracted significant research interest in the field of intelligent Web. By regarding each relation as one translation from head entity to tail entity, translation-based methods including TransE, TransH and TransR are simple, effective and achieving the state-of-the-art performance. However, they still suffer the following issues: (i) low performance when modeling 1-to-N, N-to-1 and N-to-N relations. (ii) limited performance due to the structure sparseness of the knowledge graph. In this paper, we propose a novel knowledge graph representation learning method by taking advantage of the rich context information in a text corpus. The rich textual context information is incorporated to expand the semantic structure of the knowledge graph and each relation is enabled to own different representations for different head and tail entities to better handle 1-to-N, N-to-1 and N-to-N relations. Experiments on multiple benchmark datasets show that our proposed method successfully addresses the above issues and significantly outperforms the state-of-the-art methods.", "We consider the problem of embedding entities and relationships of multi-relational data in low-dimensional vector spaces. Our objective is to propose a canonical model which is easy to train, contains a reduced number of parameters and can scale up to very large databases. Hence, we propose TransE, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. Despite its simplicity, this assumption proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases.
Besides, it can be successfully trained on a large scale data set with 1M entities, 25k relationships and more than 17M training samples.", "Knowledge bases are an important resource for question answering and other tasks but often suffer from incompleteness and lack of ability to reason over their discrete entities and relationships. In this paper we introduce an expressive neural tensor network suitable for reasoning over relationships between two entities. Previous work represented entities as either discrete atomic units or with a single entity vector representation. We show that performance can be improved when entities are represented as an average of their constituting word vectors. This allows sharing of statistical strength between, for instance, facts involving the \"Sumatran tiger\" and \"Bengal tiger.\" Lastly, we demonstrate that all models improve when these word vectors are initialized with vectors learned from unsupervised large corpora. We assess the model by considering the problem of predicting additional true relations between entities given a subset of the knowledge base. Our model outperforms previous models and can classify unseen relationships in WordNet and FreeBase with an accuracy of 86.2 and 90.0 , respectively." ] }
1610.04073
2949885492
Knowledge base completion aims to infer new relations from existing information. In this paper, we propose path-augmented TransR (PTransR) model to improve the accuracy of link prediction. In our approach, we base PTransR model on TransR, which is the best one-hop model at present. Then we regularize TransR with information of relation paths. In our experiment, we evaluate PTransR on the task of entity prediction. Experimental results show that PTransR outperforms previous models.
In this paper, we focus on pure knowledge base completion, i.e., we do not use additional resources. We combine the state-of-the-art one-hop TransR model @cite_19 with the path-augmentation method @cite_6 , resulting in the new PTransR variant.
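The two ingredients being combined — TransR's relation-specific projection and path composition over relation embeddings — can be sketched as follows; the matrices and vectors are toy values, and the additive path composition shown is one simple option, not necessarily the exact composition used by PTransR:

```python
def matvec(M, v):
    # Multiply a relation-specific projection matrix M_r by an entity vector.
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def transr_score(h, r, t, M_r):
    # TransR: project both entities into the relation-specific space,
    # then measure the L1 translation error ||M_r h + r - M_r t||_1.
    h_r, t_r = matvec(M_r, h), matvec(M_r, t)
    return sum(abs(a + b - c) for a, b, c in zip(h_r, r, t_r))

def compose_path(*relations):
    # Additive composition of relation embeddings along a multi-step
    # path, in the spirit of path-based models: p = r1 + r2 + ...
    return [sum(vals) for vals in zip(*relations)]

# Toy example: with the identity projection, TransR reduces to TransE.
I = [[1.0, 0.0], [0.0, 1.0]]
h, t = [0.1, 0.2], [0.4, 0.3]
r1, r2 = [0.1, 0.0], [0.2, 0.1]
path = compose_path(r1, r2)   # approximately [0.3, 0.1]
assert transr_score(h, path, t, I) < 1e-9
```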
{ "cite_N": [ "@cite_19", "@cite_6" ], "mid": [ "2184957013", "2952854166" ], "abstract": [ "Knowledge graph completion aims to perform link prediction between entities. In this paper, we consider the approach of knowledge graph embeddings. Recently, models such as TransE and TransH build entity and relation embeddings by regarding a relation as translation from head entity to tail entity. We note that these models simply put both entities and relations within the same semantic space. In fact, an entity may have multiple aspects and various relations may focus on different aspects of entities, which makes a common space insufficient for modeling. In this paper, we propose TransR to build entity and relation embeddings in separate entity space and relation spaces. Afterwards, we learn embeddings by first projecting entities from entity space to corresponding relation space and then building translations between projected entities. In experiments, we evaluate our models on three tasks including link prediction, triple classification and relational fact extraction. Experimental results show significant and consistent improvements compared to state-of-the-art baselines including TransE and TransH. The source code of this paper can be obtained from https://github.com/mrlyk423/relation_extraction.", "Representation learning of knowledge bases (KBs) aims to embed both entities and relations into a low-dimensional space. Most existing methods only consider direct relations in representation learning. We argue that multiple-step relation paths also contain rich inference patterns between entities, and propose a path-based representation learning model. This model considers relation paths as translations between entities for representation learning, and addresses two key challenges: (1) Since not all relation paths are reliable, we design a path-constraint resource allocation algorithm to measure the reliability of relation paths.
(2) We represent relation paths via semantic composition of relation embeddings. Experimental results on real-world datasets show that, as compared with baselines, our model achieves significant and consistent improvements on knowledge base completion and relation extraction from text." ] }
1610.04124
2951281804
The Stixel World is a medium-level, compact representation of road scenes that abstracts millions of disparity pixels into hundreds or thousands of stixels. The goal of this work is to implement and evaluate a complete multi-stixel estimation pipeline on an embedded, energy-efficient, GPU-accelerated device. This work presents a full GPU-accelerated implementation of stixel estimation that produces reliable results at 26 frames per second (real-time) on the Tegra X1 for disparity images of 1024x440 pixels and stixel widths of 5 pixels, and achieves more than 400 frames per second on a high-end Titan X GPU card.
The stixel world was introduced by @cite_7 as an intermediate representation suitable for high-level tasks such as object detection or tracking @cite_1 @cite_2 . The ground surface of a scene, also called free space (space without obstacles), is estimated using @cite_9 from a depth map computed with a stereo algorithm such as SGM @cite_12 . Dynamic programming is then employed to find the bottom and the height of each stixel.
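As a rough illustration of the per-column segmentation (a brute-force stand-in, not the actual dynamic program of the cited method), the following sketch picks, for a single image column, the row that best splits the disparities into an "object" segment of near-constant disparity above and a "ground" segment following an expected ground-disparity profile below:

```python
def best_object_base(column, ground_profile):
    """Brute-force stand-in for the stixel bottom search in one column.

    `column` holds disparities from the top row to the bottom row;
    `ground_profile` holds the expected ground disparity per row.
    Returns the row index b splitting object (rows < b) from ground.
    """
    n = len(column)
    best_b, best_cost = 1, float("inf")
    for b in range(1, n):  # both segments must be non-empty
        obj = column[:b]
        mean_obj = sum(obj) / len(obj)
        # Object pixels should share one disparity; ground pixels
        # should follow the expected ground-disparity profile.
        obj_cost = sum(abs(d - mean_obj) for d in obj)
        gnd_cost = sum(abs(d - g) for d, g in zip(column[b:], ground_profile[b:]))
        if obj_cost + gnd_cost < best_cost:
            best_b, best_cost = b, obj_cost + gnd_cost
    return best_b

# An obstacle at disparity 5 occupies the top three rows; below it the
# disparities follow the ground profile exactly.
column = [5, 5, 5, 2, 3, 4]
ground = [1, 1, 1, 2, 3, 4]
assert best_object_base(column, ground) == 3
```

The real pipeline solves this jointly for many stixels per column via dynamic programming, which is what makes GPU parallelization across columns attractive.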
{ "cite_N": [ "@cite_7", "@cite_9", "@cite_1", "@cite_2", "@cite_12" ], "mid": [ "2161337244", "2112921976", "1558922706", "", "2117248802" ], "abstract": [ "Ambitious driver assistance for complex urban scenarios demands a complete awareness of the situation, including all moving and stationary objects that limit the free space. Recent progress in real-time dense stereo vision provides precise depth information for nearly every pixel of an image. This rises new questions: How can one efficiently analyze half a million disparity values of next generation imagers? And how can one find all relevant obstacles in this huge amount of data in real-time? In this paper we build a medium-level representation named \"stixel-world\". It takes into account that the free space in front of vehicles is limited by objects with almost vertical surfaces. These surfaces are approximated by adjacent rectangular sticks of a certain width and height. The stixel-world turns out to be a compact but flexible representation of the three-dimensional traffic situation that can be used as the common basis for the scene understanding tasks of driver assistance and autonomous systems.", "We propose a general technique for modeling the visible road surface in front of a vehicle. The common assumption of a planar road surface is often violated in reality. A workaround proposed in the literature is the use of a piecewise linear or quadratic function to approximate the road surface. Our approach is based on representing the road surface as a general parametric B-spline curve. The surface parameters are tracked over time using a Kalman filter. The surface parameters are estimated from stereo measurements in the free space. To this end, we adopt a recently proposed road-obstacle segmentation algorithm to include disparity measurements and the B-spline road-surface representation. 
Experimental results in planar and undulating terrain verify the increase in free-space availability and accuracy using a flexible B-spline for road-surface modeling.", "Applications using pedestrian detection in street scene require both high speed and quality. Maximal speed is reached when exploiting the geometric information provided by stereo cameras. Yet, extracting useful information at speeds higher than 100 Hz is a non-trivial task. We propose a method to estimate the ground-obstacles boundary (and its distance), without computing a depth map. By properly parametrizing the search space in the image plane we improve the algorithmic performance, and reach speeds of @math on a desktop CPU. When connected with a state of the art GPU objects detector, we reach high quality detections at the record speed of @math .", "", "This paper describes the semiglobal matching (SGM) stereo method. It uses a pixelwise, mutual information (Ml)-based matching cost for compensating radiometric differences of input images. Pixelwise matching is supported by a smoothness constraint that is usually expressed as a global cost function. SGM performs a fast approximation by pathwise optimizations from all directions. The discussion also addresses occlusion detection, subpixel refinement, and multibaseline matching. Additionally, postprocessing steps for removing outliers, recovering from specific problems of structured environments, and the interpolation of gaps are presented. Finally, strategies for processing almost arbitrarily large images and fusion of disparity images using orthographic projection are proposed. A comparison on standard stereo images shows that SGM is among the currently top-ranked algorithms and is best, if subpixel accuracy is considered. The complexity is linear to the number of pixels and disparity range, which results in a runtime of just 1-2 seconds on typical test images. 
An in depth evaluation of the Ml-based matching cost demonstrates a tolerance against a wide range of radiometric transformations. Finally, examples of reconstructions from huge aerial frame and pushbroom images demonstrate that the presented ideas are working well on practical problems." ] }
To achieve real-time stixel estimation on the CPU, @cite_19 developed a less accurate method that calculates the ground surface by accumulating matching costs into vertical-disparity space, and then computes the bottom and height of the stixels in disparity-cost space. Disparities are computed only for object stixels, not for the ground.
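The accumulation into vertical-disparity space can be illustrated with a simplified variant that counts rounded disparities per image row rather than accumulating raw matching costs (an assumption of this sketch); in road scenes the ground plane then appears as a dominant diagonal in the resulting histogram:

```python
def v_disparity(disparity_map, max_d):
    # Build a vertical-disparity histogram: hist[v][d] counts how many
    # pixels in image row v have (rounded) disparity d. The ground
    # plane shows up as a strong diagonal in this space.
    hist = [[0] * (max_d + 1) for _ in disparity_map]
    for v, row in enumerate(disparity_map):
        for d in row:
            d = int(round(d))
            if 0 <= d <= max_d:
                hist[v][d] += 1
    return hist

# Tiny 2x3 disparity map: row 0 is mostly disparity 1, row 1 is all 3.
hist = v_disparity([[1, 1, 2], [3, 3, 3]], max_d=3)
assert hist[0][1] == 2 and hist[1][3] == 3
```

Fitting a line (or spline) to the dominant diagonal of `hist` yields the expected ground disparity per row, which the stixel bottom search then uses.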
{ "cite_N": [ "@cite_19" ], "mid": [ "2047166686" ], "abstract": [ "Mobile robots require object detection and classification for safe and smooth navigation. Stereo vision improves such detection by doubling the views of the scene and by giving indirect access to depth information. This depth information can also be used to reduce the set of candidate detection windows. Up to now, most algorithms compute a depth map to discard unpromising detection windows. We propose a novel approach where a stixel world model is computed directly from the stereo images, without computing an intermediate depth map. We experimentally demonstrate that such approach can considerably reduce the set of candidate detection windows at a fraction of the computation cost of previous approaches." ] }
The stixel world is the fundamental building block for more informative representations. Extensions include stixel tracking, which provides a velocity vector for each stixel @cite_0 ; stixel grouping, where stixels that appear to belong to the same object are grouped together @cite_15 ; and semantic stixels, which combine the stixel world with semantic segmentation @cite_5 . All of these extensions would benefit greatly from real-time stixel computation.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_15" ], "mid": [ "", "2509033610", "2025991977" ], "abstract": [ "", "In this paper we present Semantic Stixels, a novel vision-based scene model geared towards automated driving. Our model jointly infers the geometric and semantic layout of a scene and provides a compact yet rich abstraction of both cues using Stixels as primitive elements. Geometric information is incorporated into our model in terms of pixel-level disparity maps derived from stereo vision. For semantics, we leverage a modern deep learning-based scene labeling approach that provides an object class label for each pixel. Our experiments involve an in-depth analysis and a comprehensive assessment of the constituent parts of our approach using three public benchmark datasets. We evaluate the geometric and semantic accuracy of our model and analyze the underlying run-times and the complexity of the obtained representation. Our results indicate that the joint treatment of both cues on the Semantic Stixel level yields a highly compact environment representation while maintaining an accuracy comparable to the two individual pixel-level input data sources. Moreover, our framework compares favorably to related approaches in terms of computational costs and operates in real-time.", "This paper presents a novel attention mechanism to improve stereo-vision based object recognition systems in terms of recognition performance and computational efficiency at the same time. We utilize the Stixel World, a compact medium-level 3D representation of the local environment, as an early focus-of-attention stage for subsequent system modules. In particular, the search space of computationally expensive pattern classifiers is significantly narrowed down. We explicitly couple the 3D Stixel representation with prior knowledge about the object class of interest, i.e. 
3D geometry and symmetry, to precisely focus processing on well-defined local regions that are consistent with the environment model. Experiments are conducted on large real-world datasets captured from a moving vehicle in urban traffic. In case of vehicle recognition as an experimental testbed, we demonstrate that the proposed Stixel-based attention mechanism significantly reduces false positive rates at constant sensitivity levels by up to a factor of 8 over state-of-the-art. At the same time, computational costs are reduced by more than an order of magnitude." ] }
@cite_14 claim to run their FPGA implementation at 25 fps with stixel widths of 5 px, but the authors do not indicate the image resolution.
{ "cite_N": [ "@cite_14" ], "mid": [ "1978260476" ], "abstract": [ "In summer 2013, a Mercedes S-Class drove completely autonomously for about 100 km from Mannheim to Pforzheim, Germany, using only close-to-production sensors. In this project, called Mercedes Benz Intelligent Drive, stereo vision was one of the main sensing components. For the representation of free space and obstacles we relied on the so called Stixel World, a generic 3D intermediate representation which is computed from dense disparity images. In spite of the high performance of the Stixel World in most common traffic scenes, the availability of this technique is limited. For instance under adverse weather, rain or even spray water on the windshield results in erroneous disparity images which generate false Stixel results. This can lead to undesired behavior of autonomous vehicles. Our goal is to use the Stixel World for a robust free space estimation and a reliable obstacle detection even during difficult weather conditions. In this paper, we meet this challenge and fuse the Stixels incrementally into a reference grid map. Our new approach is formulated in a Bayesian manner and is based on existence estimation methods. We evaluate our new technique on a manually labeled database with emphasis on bad weather scenarios. The number of structures which are detected mistakenly within free space areas is reduced by a factor of two whereas the detection rate of obstacles increases at the same time." ] }
1610.04091
2538795372
Typical mobile agent networks, such as multi-unmanned aerial vehicle (UAV) systems, are constrained by limited resources: energy, computing power, memory and communication bandwidth. In particular, limited energy affects system performance directly, such as system lifetime. Moreover, it has been demonstrated experimentally in the wireless sensor network literature that the total energy consumption is often dominated by the communication cost, i.e., the computational and the sensing energy are small compared to the communication energy consumption. For this reason, the lifetime of the network can be extended significantly by minimizing the communication distance as well as the amount of communication data, at the expense of increasing computational cost. In this paper, we aim at attaining an optimal tradeoff between the communication and the computational energy. Specifically, we propose a mixed-integer optimization formulation for a multihop hierarchical clustering-based self-organizing UAV network incorporating data aggregation, to obtain an energy-efficient information routing scheme. The proposed framework is tested on two applications, namely target tracking and area mapping. Based on simulation results, our method can significantly save energy compared to a baseline strategy, where there is no data aggregation and clustering scheme.
Similar to our multi-UAV information-gathering problem, the goal of a wireless sensor network (WSN) is to maximize network lifetime while delivering raw data to the sink (base station) @cite_7 . To maximize network lifetime, data aggregation techniques have been proposed for WSNs, in which some computation is performed within the nodes to reduce the communication cost. It has been shown that by using a sensor node as a communication relay and aggregator, an energy-efficient communication strategy can be obtained @cite_6 @cite_34 . Data correlations between different sensor nodes can be exploited to minimize the number of sensors sending data to the base station @cite_19 . A compressed-sensing technique to reduce the data volume to be transmitted was proposed in @cite_8 .
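To see why relaying with aggregation can pay off, consider a first-order radio model sketch. The constants are typical values from the WSN literature, not taken from the cited papers, and perfect aggregation of two packets into one is assumed:

```python
E_ELEC = 50e-9   # J/bit, transmit/receive electronics energy (assumed)
E_AMP = 100e-12  # J/bit/m^2, amplifier energy with d^2 path loss (assumed)

def tx_energy(bits, dist):
    # First-order radio model: E_tx = E_elec * k + e_amp * k * d^2.
    return E_ELEC * bits + E_AMP * bits * dist ** 2

def rx_energy(bits):
    return E_ELEC * bits

# Two sensors 100 m from the sink, but only 10 m from a relay node
# that fuses both packets into one before forwarding.
bits = 4000
direct = 2 * tx_energy(bits, 100)
relayed = (2 * tx_energy(bits, 10)   # sensors -> relay
           + 2 * rx_energy(bits)     # relay receives both packets
           + tx_energy(bits, 100))   # relay forwards one fused packet
assert relayed < direct
```

Because the amplifier term grows with d^2, shortening the long links and halving the forwarded data volume dominates the extra receive and compute cost at the relay, which is exactly the communication/computation tradeoff the paper optimizes.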
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_6", "@cite_19", "@cite_34" ], "mid": [ "2099641228", "", "2111072230", "1996718482", "2145862204" ], "abstract": [ "We derive a general formula for the lifetime-of wireless sensor networks which holds independently of the underlying network model including network architecture and protocol, data collection initiation, lifetime definition, channel fading characteristics, and energy consumption model. This formula identifies two key parameters at the physical layer that affect the network lifetime: the channel state and the residual energy of sensors. As a result, it provides not only a gauge for performance evaluation of sensor networks but also a guideline for the design of network protocols. Based on this formula, we propose a medium access control protocol that exploits both the channel state information and the residual energy information of individual sensors. Referred to as the max-min approach, this protocol maximizes the minimum residual energy across the network in each data collection.", "", "A key challenge in ad-hoc, data-gathering, wireless sensor networks is achieving a lifetime of several years using nodes that carry merely hundreds of joules of stored energy. We explore the fundamental limits of energy-efficient collaborative data-gathering by deriving upper bounds on the lifetime of increasingly sophisticated sensor networks.", "In this article, we design techniques that exploit data correlations in sensor data to minimize communication costs (and hence, energy costs) incurred during data gathering in a sensor network. Our proposed approach is to select a small subset of sensor nodes that may be sufficient to reconstruct data for the entire sensor network. Then, during data gathering only the selected sensors need to be involved in communication. The selected set of sensors must also be connected, since they need to relay data to the data-gathering node. 
We define the problem of selecting such a set of sensors as the connected correlation-dominating set problem, and formulate it in terms of an appropriately defined correlation structure that captures general data correlations in a sensor network. We develop a set of energy-efficient distributed algorithms and competitive centralized heuristics to select a connected correlation-dominating set of small size. The designed distributed algorithms can be implemented in an asynchronous communication model, and can tolerate message losses. We also design an exponential (but nonexhaustive) centralized approximation algorithm that returns a solution within O(log n) of the optimal size. Based on the approximation algorithm, we design a class of centralized heuristics that are empirically shown to return near-optimal solutions. Simulation results over randomly generated sensor networks with both artificially and naturally generated data sets demonstrate the efficiency of the designed algorithms and the viability of our technique—even in dynamic conditions.", "The lifetime of a network, dependent upon battery capacity and energy consumption efficiency, plays an important role in wireless sensor networks. In this paper, the reduction of energy consumption is investigated by the considerations of the effects of cluster-based data aggregation routing in conjunction with well-scheduled and dynamic power range. Modeling power consumption as a mixed integer- and nonlinear-programming problem, the objective function provides the basis by which the total energy consumption is minimized. The problem of cluster construction and data aggregation trees were proven NP- complete. Thus, we proposed heuristics for cluster construction (i.e., AvgEnergy and MaxNumS) and data aggregation routing (i.e., CDAR). We also propose a duty cycle scheduling scheme and dynamic radius to ensure the total energy consumption is minimized. 
The experimental results show that the heuristic proposed above outperforms other modified existing algorithms." ] }
Another approach is to have a mobile sensing node collect data from the other nodes to reduce the communication load @cite_5 @cite_38 @cite_15 @cite_13 . Since the UAVs are themselves mobile, using another UAV to collect data from the surveying UAVs is not an ideal approach. However, similar to WSN data aggregation, the UAVs can perform computations on board to produce concise data and periodically transmit it to the base station, as in @cite_28 for an image-processing application. Data transmission to the base station can be performed either directly or through a UAV relay network @cite_45 @cite_48 . Therefore, in this work, we propose a self-organizing network topology that allows data aggregation as well as a multi-hop information routing pattern.
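The multi-hop tradeoff can be sketched with the same kind of first-order radio model (constants assumed, not from the cited work): splitting a long link into k hops cuts the d^2 amplifier cost to d^2/k, but each extra hop adds a fixed electronics cost, so an intermediate hop count minimizes the total energy:

```python
E_ELEC = 50e-9   # J/bit, transmit/receive electronics energy (assumed)
E_AMP = 100e-12  # J/bit/m^2, amplifier energy with d^2 path loss (assumed)

def multihop_energy(bits, dist, hops):
    # k equal hops over distance d: every hop pays electronics plus
    # amplifier energy to transmit, and each of the (k - 1)
    # intermediate relays also pays electronics energy to receive.
    per_hop_tx = E_ELEC * bits + E_AMP * bits * (dist / hops) ** 2
    relay_rx = (hops - 1) * E_ELEC * bits
    return hops * per_hop_tx + relay_rx

bits, dist = 4000, 1000.0
best_k = min(range(1, 100), key=lambda k: multihop_energy(bits, dist, k))
assert 1 < best_k < 99   # an interior optimum: neither direct nor max hops
assert multihop_energy(bits, dist, best_k) < multihop_energy(bits, dist, 1)
```

This is a toy version of the routing decision the proposed mixed-integer formulation makes jointly with clustering and aggregation.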
{ "cite_N": [ "@cite_38", "@cite_28", "@cite_48", "@cite_45", "@cite_5", "@cite_15", "@cite_13" ], "mid": [ "1832849348", "2091685917", "2157464905", "2151287854", "2103209505", "1978442646", "2024148187" ], "abstract": [ "Unlike traditional multihop forwarding among homogeneous static sensor nodes, use of mobile devices for data collection in wireless sensor networks has recently been gathering more attention. It is known that the use of mobility significantly reduces the energy consumption at each sensor, elongating the functional lifetime of the network, in exchange for increased data delivery latency. However, in previous work, mobility and communication capabilities are often underutilized, resulting in suboptimal solutions incurring unnecessarily large latency. In this paper, we focus on the problem of finding an optimal path of a mobile device, which we call \"data mule,\" to achieve the smallest data delivery latency in the case of minimum energy consumption at each sensor, i.e., each sensor only sends its data directly to the data mule. We formally define the path selection problem and show the problem is @math -hard. Then we present an approximation algorithm and analyze its approximation factor. Numerical experiments demonstrate that our approximation algorithm successfully finds the paths that result in 10 -50 shorter latency compared to previously proposed methods, suggesting that controlled mobility can be exploited much more effectively.", "An image based system and process for rendering novel views of a real or synthesized 3D scene based on a series of concentric mosaics depicting the scene. In one embodiment, each concentric mosaic represents a collection of consecutive slit images of the surrounding 3D scene taken from a different viewpoint tangent to a circle on a plane within the scene. 
Novel views from viewpoints within circular regions of the aforementioned circle plane defined by the concentric mosaics are rendered using these concentric mosaics. Specifically, a slit image can be identified by a ray originating at its viewpoint on the circle plane and extending toward the longitudinal midline of the slit image. Each of the rays associated with the slit images needed to construct a novel view will either coincide with one of the rays associated with a previously captured slit image, or it will pass between two of the concentric circles on the circle plane. If it coincides, then the previously captured slit image associated with the coinciding ray can be used directly to construct part of the novel view. If the ray passes between two of the concentric circles of the plane, then the needed slit image is interpolated using the two previously captured slit images associated with the rays originating from the adjacent concentric circles that are parallel to the non-coinciding ray. If the objects in the 3D scene are close to the camera, depth correction is applied to reduce image distortion for pixels located above and below the circle plane. In another embodiment, a single camera is used to capture a sequence of images. Each image includes image data that has a ray direction associated therewith. To render an image at a novel viewpoint, multiple ray directions from the novel viewpoint are chosen. Image data is combined from the sequence of images by selecting image data that has a ray direction substantially aligning with the ray direction from the novel viewpoint.", "This paper presents a cooperative distributed planning algorithm that ensures network connectivity for a team of heterogeneous agents operating in dynamic and communication-limited environments. The algorithm, named CBBA with Relays, builds on the Consensus-Based Bundle Algorithm (CBBA), a distributed task allocation framework developed previously by the authors and their colleagues. 
Information available through existing consensus phases of CBBA is leveraged to predict the network topology and to propose relay tasks to repair connectivity violations. The algorithm ensures network connectivity during task execution while preserving the distributed and polynomial-time guarantees of CBBA. By employing under-utilized agents as communication relays, CBBA with Relays improves the range of the team without limiting the scope of the active agents, thus improving mission performance. The algorithm is validated through simulation trials and through experimental indoor and outdoor field tests, demonstrating the real-time applicability of the approach.", "Herein, we investigate the optimal deployment of an unmanned aerial vehicle (UAV) in a wireless relay communication system. The optimal UAV position is found by maximizing the average data rate, while at the same time keeping the symbol error rate (SER) below a certain threshold. We derive a closed-form expression for the average data rate in a fixed wireless link using adaptive modulation. By using the alternate definite integral form for the Gaussian Q-function, the symbol error rate (SER) of the system in the link level is evaluated. An upper bound on the SER is also derived using the improved exponential bounds for the Q-function. It is shown that the derived SER expression matches the simulation results very well and the derived upper bound is tight for a wide range of SNRs. Simulation results also show that the system data rate matches the derived closed-form expression.", "We explore synergies among mobile robots and wireless sensor networks in environmental monitoring through a system in which robotic data mules collect measurements gathered by sensing nodes. 
A proof-of-concept implementation demonstrates that this approach significantly increases the lifetime of the system by conserving energy that the sensing nodes otherwise would use for communication.", "Autonomous Vehicles (AV) are used to solve the problem of data gathering in large scale sensor deployments with disconnected clusters of sensors networks. Our take is that an efficient strategy for data collection with AVs should leverage i) cooperation amongst sensors in communication range of each other forming a sensor cluster, ii) advanced coding and data storage techniques for easing the cooperation process, and iii) AV route-planning that is both content and cooperation-aware. Our work formulates the problem of efficient data gathering as a cooperative route-optimization problem with communication constraints. We also analyze (network) coded data transmission and storage for simplifying cooperation amongst sensors as well as data collection by the AV. Given the complexity of the problem, we focus on heuristic techniques, such as particle swarm optimization, to calculate the AV's route and the times for communication with each sensor and or cluster of sensors. We analyze two extreme cases, i.e., networks with and without intra- cluster cooperation, and provide numerical results to illustrate that the performance gap between them increases with the number of nodes. We show that cooperation in a 100 sensor deployment can increase the amount of data collected by up to a factor of 3 with respect to path planning without cooperation.", "In conventional wireless sensor networks (WSNs) to conserve energy Low Energy Adaptive Clustering Heirarchy (LEACH) is commonly used. Energy conversation is far more challenging for large scale deployments. This paper deals with selection of sensor network communication topology and the use of Unmanned Aerial Vehicle (UAV) for data gathering. 
Particle Swarm Optimzation (PSO) is proposed as an optimization method to find the optimal clusters in order to reduce the energy consumption, bit error rate (BER), and UAV travel time. PSO is compared to LEACH using a simulation case and the results show that PSO outperforms LEACH in terms of energy consumption and BER while the UAV travel time is similar. The numerical results further illustrate that the performance gap between them increases with the number of cluster head nodes. The reduced energy consumption means that the network life time can be significantly extended and the lower BER will increase the amount of data received from the entrie network." ] }
1610.04091
2538795372
Typical mobile agent networks, such as multi-unmanned aerial vehicle (UAV) systems, are constrained by limited resources: energy, computing power, memory and communication bandwidth. In particular, limited energy affects system performance directly, such as system lifetime. Moreover, it has been demonstrated experimentally in the wireless sensor network literature that the total energy consumption is often dominated by the communication cost, i.e., the computational and the sensing energy are small compared to the communication energy consumption. For this reason, the lifetime of the network can be extended significantly by minimizing the communication distance as well as the amount of communication data, at the expense of increasing computational cost. In this paper, we aim at attaining an optimal tradeoff between the communication and the computational energy. Specifically, we propose a mixed-integer optimization formulation for a multihop hierarchical clustering-based self-organizing UAV network incorporating data aggregation, to obtain an energy-efficient information routing scheme. The proposed framework is tested on two applications, namely target tracking and area mapping. Based on simulation results, our method can significantly save energy compared to a baseline strategy, where there is no data aggregation and clustering scheme.
UAVs have been used for mapping applications @cite_37 @cite_10 @cite_2 @cite_30 . However, the focus of UAV mapping applications has been on improving the quality of the acquired imagery, such as orthomosaic generation, vegetation classification, or improved video transmission range. In some applications, the objective is to determine the minimum-energy path for UAVs. In @cite_42 , the objective for the UAV is to visit a set of pre-defined target locations along a path that minimizes the total energy consumed in visiting the targets. In @cite_22 , the objective is to develop multi-UAV exploration strategies under limited battery constraints. In @cite_12 , a behavior-based multi-UAV cooperative system was developed to efficiently explore a region under the constraint that the UAVs have limited energy. In most of the above UAV mapping applications, the issue of optimizing communication energy to extend mission time is not considered. In our formulation, we optimize the energy consumed by both the communication and computation components so that the mission duration can be increased. This aspect has not been adequately addressed in the UAV mapping literature.
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_22", "@cite_42", "@cite_2", "@cite_10", "@cite_12" ], "mid": [ "1966614305", "186669090", "", "1561867011", "", "2019400639", "1601776805" ], "abstract": [ "The potential applications of unmanned aerial vehicles (UAVs), or drones, have generated intense interest across many fields. UAVs offer the potential to collect detailed spatial information in real time at relatively low cost and are being used increasingly in conservation and ecological research. Within infectious disease epidemiology and public health research, UAVs can provide spatially and temporally accurate data critical to understanding the linkages between disease transmission and environmental factors. Using UAVs avoids many of the limitations associated with satellite data (e.g., long repeat times, cloud contamination, low spatial resolution). However, the practicalities of using UAVs for field research limit their use to specific applications and settings. UAVs fill a niche but do not replace existing remote-sensing methods.", "In the last years UAV (Unmanned Aerial Vehicle)-systems became relevant for applications in precision farming and in infrastructure maintenance, like road maintenance and dam surveillance. This paper gives an overview about UAV (Unmanned Aerial Vehicle) systems and their application for photogrammetric recording and documentation of cultural heritage. First the historical development of UAV systems and the definition of UAV-helicopte rs will be given. The advantages of a photogrammetric system on-board a model helicopter will be briefly discussed and compared to standard aerial and terrestrial photogrammetry. UAVs are mostly low cost systems and flexible and therefore a suitable alternative solution compared to other mobile mapping systems. A mini UAV-system was used for photogrammetric image data acquisition near Palpa in Peru. 
A settlement from the 13 th century AD, which was presumably used as a mine, was flown with a model helicopter. Based on the image data, an accurate 3D-model will be generated in the future. With an orthophoto and a DEM derived from aerial images in a scale of 1:7 000, a flight planning was build up. The determined flying positions were implemented in the flight control system. Thus, the helicopter is able to fly to predefined pathpoints automatically. Tests in Switzerland and the flights in Pinchango Alto showed that using the built-in GPS INS- and stabilization units of the flight control system, predefined positions could be reached exactly to acquire the images. The predicted strip crossings and flying height were kept accurately in the autonomous flying mode.", "", "Coverage path planning is the operation of finding a path that covers all the points of a specific area. Thanks to the recent advances of hardware technology, Unmanned Aerial Vehicles (UAVs) are starting to be used for photogrammetric sensing of large areas in several application domains, such as agriculture, rescuing, and surveillance. However, most of the research focused on finding the optimal path taking only geometrical constraints into account, without considering the peculiar features of the robot, like available energy, weight, maximum speed, sensor resolution, etc. This paper proposes an energy-aware path planning algorithm that minimizes energy consumption while satisfying a set of other requirements, such as coverage and resolution. The algorithm is based on an energy model derived from real measurements. Finally, the proposed approach is validated through a set of experiments.", "", "Unmanned aerial vehicle (UAV) platforms are nowadays a valuable source of data for inspection, surveillance, mapping, and 3D modeling issues. 
As UAVs can be considered as a low-cost alternative to the classical manned aerial photogrammetry, new applications in the short- and close-range domain are introduced. Rotary or fixed-wing UAVs, capable of performing the photogrammetric data acquisition with amateur or SLR digital cameras, can fly in manual, semiautomated, and autonomous modes. Following a typical photogrammetric workflow, 3D results like digital surface or terrain models, contours, textured 3D models, vector information, etc. can be produced, even on large areas. The paper reports the state of the art of UAV for geomatics applications, giving an overview of different UAV platforms, applications, and case studies, showing also the latest developments of UAV image processing. New perspectives are also addressed.", "We propose a multi-robot exploration algorithm that uses adaptive coordination to provide heterogeneous behavior. The key idea is to maximize the efficiency of exploring and mapping an unknown environment when a team is faced with unreliable communication and limited battery life (e.g., with aerial rotorcraft). The proposed algorithm utilizes four states: explore, meet, sacrifice, and relay. The explore state uses a frontier-based exploration algorithm, the meet state returns to the last known location of communication to share data, the sacrifice state sends the robot out to explore without consideration of remaining battery, and the relay state lands the robot until a meeting occurs. This approach allows robots to take on the role of a relay to improve communication between team members. In addition, the robots can “sacrifice” themselves by continuing to explore even when they do not have sufficient battery to return to the base station. We compare the performance of the proposed approach to state-of-the-art frontier-based exploration, and results show gains in explored area. 
The feasibility of components of the proposed approach is also demonstrated on a team of two custom-built quadcopters exploring an office environment." ] }
1610.04256
2949162339
Deep neural networks are facing a potential security threat from adversarial examples, inputs that look normal but cause an incorrect classification by the deep neural network. For example, the proposed threat could result in hand-written digits on a scanned check being incorrectly classified but looking normal when humans see them. This research assesses the extent to which adversarial examples pose a security threat, when one considers the normal image acquisition process. This process is mimicked by simulating the transformations that normally occur in acquiring the image in a real world application, such as using a scanner to acquire digits for a check amount or using a camera in an autonomous car. These small transformations negate the effect of the carefully crafted perturbations of adversarial examples, resulting in a correct classification by the deep neural network. Thus just acquiring the image decreases the potential impact of the proposed security threat. We also show that the already widely used process of averaging over multiple crops neutralizes most adversarial examples. Normal preprocessing, such as text binarization, almost completely neutralizes adversarial examples. This is the first paper to show that for text driven classification, adversarial examples are an academic curiosity, not a security threat.
Since this discovery, several advancements in the understanding and creation of adversarial examples have taken place. @cite_14 demonstrated that the existence of adversarial examples could be the result of the architecture of DCNNs themselves. @cite_18 presented the fast gradient sign (FGS) method for generating adversarial examples, which forms perturbations \(\eta\) ``using the sign of the elements of the gradient of the cost function with respect to the input,'' defined as \(\eta = \epsilon \, \operatorname{sign}(\nabla_x J(\theta, x, y))\), where \(x\) is the input to the model, \(y\) is the target of \(x\), and \(J(\theta, x, y)\) is the cost used to train the network.
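The FGS perturbation above can be sketched on a toy logistic-regression "network"; this is a minimal illustration only, and all names and values here are illustrative assumptions, not from the cited papers.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_cost_wrt_input(w, b, x, y):
    # Gradient of the cross-entropy cost J(theta, x, y) with respect
    # to the input x; for logistic regression it is (p - y) * w.
    p = sigmoid(np.dot(w, x) + b)
    return (p - y) * w

def fgs_perturbation(w, b, x, y, eps):
    # eta = eps * sign(grad_x J(theta, x, y))
    return eps * np.sign(grad_cost_wrt_input(w, b, x, y))

# Toy model and input (illustrative values).
w = np.array([2.0, -1.0, 0.5])
b = 0.1
x = np.array([0.2, 0.4, 0.6])
y = 1.0

eta = fgs_perturbation(w, b, x, y, eps=0.25)
x_adv = x + eta  # candidate adversarial example
```

Note that every element of the perturbation has the same magnitude \(\epsilon\); only its sign varies per input dimension, which is what distinguishes FGS from gradient-magnitude-based variants.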
{ "cite_N": [ "@cite_18", "@cite_14" ], "mid": [ "1945616565", "2179402106" ], "abstract": [ "Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.", "We show that the representation of an image in a deep neural network (DNN) can be manipulated to mimic those of other natural images, with only minor, imperceptible perturbations to the original image. Previous methods for generating adversarial images focused on image perturbations designed to produce erroneous class labels, while we concentrate on the internal layers of DNN representations. In this way our new class of adversarial images differs qualitatively from others. While the adversary is perceptually similar to one image, its internal representation appears remarkably similar to a different image, one from a different class, bearing little if any apparent similarity to the input; they appear generic and consistent with the space of natural images. This phenomenon raises questions about DNN representations, as well as the properties of natural images themselves." ] }
1610.04256
2949162339
Deep neural networks are facing a potential security threat from adversarial examples, inputs that look normal but cause an incorrect classification by the deep neural network. For example, the proposed threat could result in hand-written digits on a scanned check being incorrectly classified but looking normal when humans see them. This research assesses the extent to which adversarial examples pose a security threat, when one considers the normal image acquisition process. This process is mimicked by simulating the transformations that normally occur in acquiring the image in a real world application, such as using a scanner to acquire digits for a check amount or using a camera in an autonomous car. These small transformations negate the effect of the carefully crafted perturbations of adversarial examples, resulting in a correct classification by the deep neural network. Thus just acquiring the image decreases the potential impact of the proposed security threat. We also show that the already widely used process of averaging over multiple crops neutralizes most adversarial examples. Normal preprocessing, such as text binarization, almost completely neutralizes adversarial examples. This is the first paper to show that for text driven classification, adversarial examples are an academic curiosity, not a security threat.
@cite_17 extended the FGS approach for generating adversarial examples and introduced two effective ways to produce more robust adversarial images: the fast gradient value (FGV) and the hot/cold (HC) approaches. The FGV approach uses ``a scaled version of the raw gradient of loss'' to create adversarial examples with distortions even less perceptible to humans. The perturbation formed by the FGV approach can be defined as \(\eta = \nabla_x J(\theta, x, y)\), i.e., the raw gradient itself rather than its sign, suitably scaled.
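The contrast between the sign-based FGS step and the raw-gradient FGV step can be sketched as follows; the max-normalization used for scaling here is an assumption for illustration, not the paper's exact scaling.

```python
import numpy as np

def fgs(g, eps):
    # FGS: uniform-magnitude step along the sign of the gradient g.
    return eps * np.sign(g)

def fgv(g, eps):
    # FGV: scaled raw gradient, so dimensions with a large gradient
    # move more and dimensions with a small gradient barely change.
    return eps * g / np.max(np.abs(g))

# Illustrative precomputed input gradient g = grad_x J(theta, x, y).
g = np.array([0.8, -0.1, 0.05])
step_fgs = fgs(g, 0.1)  # every element has magnitude 0.1
step_fgv = fgv(g, 0.1)  # magnitudes proportional to |g|
```

Because FGV concentrates the perturbation where the gradient is large, the resulting distortion tends to be less visible than the uniform FGS step of the same budget.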
{ "cite_N": [ "@cite_17" ], "mid": [ "2346735539" ], "abstract": [ "State-of-the-art deep neural networks suffer from a fundamental problem - they misclassify adversarial examples formed by applying small perturbations to inputs. In this paper, we present a new psychometric perceptual adversarial similarity score (PASS) measure for quantifying adversarial images, introduce the notion of hard positive generation, and use a diverse set of adversarial perturbations - not just the closest ones - for data augmentation. We introduce a novel hot cold approach for adversarial example generation, which provides multiple possible adversarial perturbations for every single image. The perturbations generated by our novel approach often correspond to semantically meaningful image structures, and allow greater flexibility to scale perturbation-amplitudes, which yields an increased diversity of adversarial images. We present adversarial images on several network topologies and datasets, including LeNet on the MNIST dataset, and GoogLeNet and ResidualNet on the ImageNet dataset. Finally, we demonstrate on LeNet and GoogLeNet that fine-tuning with a diverse set of hard positives improves the robustness of these networks compared to training with prior methods of generating adversarial images." ] }
1610.04256
2949162339
Deep neural networks are facing a potential security threat from adversarial examples, inputs that look normal but cause an incorrect classification by the deep neural network. For example, the proposed threat could result in hand-written digits on a scanned check being incorrectly classified but looking normal when humans see them. This research assesses the extent to which adversarial examples pose a security threat, when one considers the normal image acquisition process. This process is mimicked by simulating the transformations that normally occur in acquiring the image in a real world application, such as using a scanner to acquire digits for a check amount or using a camera in an autonomous car. These small transformations negate the effect of the carefully crafted perturbations of adversarial examples, resulting in a correct classification by the deep neural network. Thus just acquiring the image decreases the potential impact of the proposed security threat. We also show that the already widely used process of averaging over multiple crops neutralizes most adversarial examples. Normal preprocessing, such as text binarization, almost completely neutralizes adversarial examples. This is the first paper to show that for text driven classification, adversarial examples are an academic curiosity, not a security threat.
The hot/cold approach defines a hot class as the target classification class and a cold class as the original classification class. The method then uses these classes to create features that move the classification towards the hot (target) class and away from the original cold class. In addition to these approaches to generating adversarial examples, @cite_17 also defined a metric for quantifying adversarial examples, the Perceptual Adversarial Similarity Score (PASS), which measures both the element-wise difference and the probability that the image could be a different perspective of the original input. This score is a number between 0 and 1, where 1 denotes an adversarial example with no visible difference from the original image. @cite_17 also explored the effectiveness of fine-tuning a DCNN with adversarial examples and showed that such networks were able to correctly classify 86% of adversarial examples. @cite_5 further contributed to the known information about adversarial images by defining an adversarial example that exists in nature as an image ``that is misclassified, but that will be correctly classified when an imperceptible modification is applied.'' Natural adversarial images demonstrate an additional aspect of the security threat of adversarial examples that needs to be considered.
{ "cite_N": [ "@cite_5", "@cite_17" ], "mid": [ "2395521575", "2346735539" ], "abstract": [ "Facial attributes are emerging soft biometrics that have the potential to reject non-matches, for example, based on mismatching gender. To be usable in stand-alone systems, facial attributes must be extracted from images automatically and reliably. In this paper, we propose a simple yet effective solution for automatic facial attribute extraction by training a deep convolutional neural network (DCNN) for each facial attribute separately, without using any pre-training or dataset augmentation, and we obtain new state-of-the-art facial attribute classification results on the CelebA benchmark. To test the stability of the networks, we generated adversarial images -- formed by adding imperceptible non-random perturbations to original inputs which result in classification errors -- via a novel fast flipping attribute (FFA) technique. We show that FFA generates more adversarial examples than other related algorithms, and that DCNNs for certain attributes are generally robust to adversarial inputs, while DCNNs for other attributes are not. This result is surprising because no DCNNs tested to date have exhibited robustness to adversarial images without explicit augmentation in the training procedure to account for adversarial examples. Finally, we introduce the concept of natural adversarial samples, i.e., images that are misclassified but can be easily turned into correctly classified images by applying small perturbations. We demonstrate that natural adversarial samples commonly occur, even within the training set, and show that many of these images remain misclassified even with additional training epochs. 
This phenomenon is surprising because correcting the misclassification, particularly when guided by training data, should require only a small adjustment to the DCNN parameters.", "State-of-the-art deep neural networks suffer from a fundamental problem - they misclassify adversarial examples formed by applying small perturbations to inputs. In this paper, we present a new psychometric perceptual adversarial similarity score (PASS) measure for quantifying adversarial images, introduce the notion of hard positive generation, and use a diverse set of adversarial perturbations - not just the closest ones - for data augmentation. We introduce a novel hot cold approach for adversarial example generation, which provides multiple possible adversarial perturbations for every single image. The perturbations generated by our novel approach often correspond to semantically meaningful image structures, and allow greater flexibility to scale perturbation-amplitudes, which yields an increased diversity of adversarial images. We present adversarial images on several network topologies and datasets, including LeNet on the MNIST dataset, and GoogLeNet and ResidualNet on the ImageNet dataset. Finally, we demonstrate on LeNet and GoogLeNet that fine-tuning with a diverse set of hard positives improves the robustness of these networks compared to training with prior methods of generating adversarial images." ] }
1610.04256
2949162339
Deep neural networks are facing a potential security threat from adversarial examples, inputs that look normal but cause an incorrect classification by the deep neural network. For example, the proposed threat could result in hand-written digits on a scanned check being incorrectly classified but looking normal when humans see them. This research assesses the extent to which adversarial examples pose a security threat, when one considers the normal image acquisition process. This process is mimicked by simulating the transformations that normally occur in acquiring the image in a real world application, such as using a scanner to acquire digits for a check amount or using a camera in an autonomous car. These small transformations negate the effect of the carefully crafted perturbations of adversarial examples, resulting in a correct classification by the deep neural network. Thus just acquiring the image decreases the potential impact of the proposed security threat. We also show that the already widely used process of averaging over multiple crops neutralizes most adversarial examples. Normal preprocessing, such as text binarization, almost completely neutralizes adversarial examples. This is the first paper to show that for text driven classification, adversarial examples are an academic curiosity, not a security threat.
In response to the growing interest and research in adversarial examples, @cite_19 asserted that by using deep learning algorithms, system designers made security assumptions about DCNNs, specifically with respect to adversarial samples. @cite_19 attempted to address the problem by demonstrating that distillation, used as an alternative training procedure for a DCNN, increases the percentage of adversarial examples that the network classifies correctly, evaluated on the CIFAR-10 @cite_2 and MNIST @cite_13 data sets. However, @cite_3 demonstrated that the assumptions made in this approach are wrong and that the approach itself is not effective.
{ "cite_N": [ "@cite_19", "@cite_13", "@cite_3", "@cite_2" ], "mid": [ "2174868984", "200806003", "2517229335", "" ], "abstract": [ "Deep learning algorithms have been shown to perform extremely well on many classical machine learning problems. However, recent studies have shown that deep learning, like other machine learning techniques, is vulnerable to adversarial samples: inputs crafted to force a deep neural network (DNN) to provide adversary-selected outputs. Such attacks can seriously undermine the security of the system supported by the DNN, sometimes with devastating consequences. For example, autonomous vehicles can be crashed, illicit or illegal content can bypass content filters, or biometric authentication systems can be manipulated to allow improper access. In this work, we introduce a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples on DNNs. We analytically investigate the generalizability and robustness properties granted by the use of defensive distillation when training DNNs. We also empirically study the effectiveness of our defense mechanisms on two DNNs placed in adversarial settings. The study shows that defensive distillation can reduce effectiveness of sample creation from 95 to less than 0.5 on a studied DNN. Such dramatic gains can be explained by the fact that distillation leads gradients used in adversarial sample creation to be reduced by a factor of 10^30. We also find that distillation increases the average minimum number of features that need to be modified to create adversarial samples by about 800 on one of the DNNs we tested.", "Disclosed is an improved articulated bar flail having shearing edges for efficiently shredding materials. An improved shredder cylinder is disclosed with a plurality of these flails circumferentially spaced and pivotally attached to the periphery of a rotatable shaft. 
Also disclosed is an improved shredder apparatus which has a pair of these shredder cylinders mounted to rotate about spaced parallel axes which cooperates with a conveyer apparatus which has a pair of inclined converging conveyer belts with one of the belts mounted to move with respect to the other belt to allow the transport of articles of various sizes therethrough.", "We show that defensive distillation is not secure: it is no more resistant to targeted misclassification attacks than unprotected neural networks.", "" ] }
1610.04256
2949162339
Deep neural networks are facing a potential security threat from adversarial examples, inputs that look normal but cause an incorrect classification by the deep neural network. For example, the proposed threat could result in hand-written digits on a scanned check being incorrectly classified but looking normal when humans see them. This research assesses the extent to which adversarial examples pose a security threat, when one considers the normal image acquisition process. This process is mimicked by simulating the transformations that normally occur in acquiring the image in a real world application, such as using a scanner to acquire digits for a check amount or using a camera in an autonomous car. These small transformations negate the effect of the carefully crafted perturbations of adversarial examples, resulting in a correct classification by the deep neural network. Thus just acquiring the image decreases the potential impact of the proposed security threat. We also show that the already widely used process of averaging over multiple crops neutralizes most adversarial examples. Normal preprocessing, such as text binarization, almost completely neutralizes adversarial examples. This is the first paper to show that for text driven classification, adversarial examples are an academic curiosity, not a security threat.
@cite_16 also presented a method for attacking a DCNN with adversarial examples without prior knowledge of the architecture of the network itself, having access only to the targeted network's output and some knowledge of the type of input. To accomplish this, @cite_16 trained a substitute DCNN on possible inputs for the targeted DCNN. After the substitute network was trained, adversarial examples were crafted with the FGS method @cite_18 . These examples were generated with different values of \(\epsilon\), as defined in Eq. . Examples of the FGS adversarial images generated with various values of \(\epsilon\) can be seen in Fig. . The examples generated with higher values of \(\epsilon\) do not satisfy the requirement that adversarial perturbations be imperceptible to human observers.
{ "cite_N": [ "@cite_18", "@cite_16" ], "mid": [ "1945616565", "2274565976" ], "abstract": [ "Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.", "Advances in deep learning have led to the broad adoption of Deep Neural Networks (DNNs) to a range of important machine learning problems, e.g., guiding autonomous vehicles, speech recognition, malware detection. Yet, machine learning models, including DNNs, were shown to be vulnerable to adversarial samples-subtly (and often humanly indistinguishably) modified malicious inputs crafted to compromise the integrity of their outputs. Adversarial examples thus enable adversaries to manipulate system behaviors. Potential attacks include attempts to control the behavior of vehicles, have spam content identified as legitimate content, or have malware identified as legitimate software. Adversarial examples are known to transfer from one model to another, even if the second model has a different architecture or was trained on a different set. 
We introduce the first practical demonstration that this cross-model transfer phenomenon enables attackers to control a remotely hosted DNN with no access to the model, its parameters, or its training data. In our demonstration, we only assume that the adversary can observe outputs from the target DNN given inputs chosen by the adversary. We introduce the attack strategy of fitting a substitute model to the input-output pairs in this manner, then crafting adversarial examples based on this auxiliary model. We evaluate the approach on existing DNN datasets and real-world settings. In one experiment, we force a DNN supported by MetaMind (one of the online APIs for DNN classifiers) to mis-classify inputs at a rate of 84.24%. We conclude with experiments exploring why adversarial samples transfer between DNNs, and a discussion on the applicability of our attack when targeting machine learning algorithms distinct from DNNs." ] }
1610.04256
2949162339
Deep neural networks are facing a potential security threat from adversarial examples, inputs that look normal but cause an incorrect classification by the deep neural network. For example, the proposed threat could result in hand-written digits on a scanned check being incorrectly classified but looking normal when humans see them. This research assesses the extent to which adversarial examples pose a security threat, when one considers the normal image acquisition process. This process is mimicked by simulating the transformations that normally occur in acquiring the image in a real world application, such as using a scanner to acquire digits for a check amount or using a camera in an autonomous car. These small transformations negate the effect of the carefully crafted perturbations of adversarial examples, resulting in a correct classification by the deep neural network. Thus just acquiring the image decreases the potential impact of the proposed security threat. We also show that the already widely used process of averaging over multiple crops neutralizes most adversarial examples. Normal preprocessing, such as text binarization, almost completely neutralizes adversarial examples. This is the first paper to show that for text driven classification, adversarial examples are an academic curiosity, not a security threat.
A different technique for increasing a DCNN's ability to handle and correctly classify adversarial examples was put forward by @cite_6 . This technique uses a transformation of the image that selects a region in which the convolutional neural network (CNN) is applied, denoted a foveation, "discarding the information from the other regions" as the input to the DCNN. This technique enabled the DCNN to correctly classify over 70% of the adversarial examples. The previous state of the art @cite_8 takes crops of the original input and uses the average prediction over the classified crops, which mimics natural perturbations. Crops like the ones used in GoogLeNet predate the discovery of adversarial examples and are used to improve the accuracy of DCNNs. This improvement in accuracy also helped to mitigate classification errors caused by adversarial examples. By taking crops of images, the accuracy of the DCNN when classifying adversarial examples is greatly increased. The networks tested by @cite_19 did not use this previous state of the art. This research aims to assess the level of threat imposed by adversarial examples on DCNNs as suggested by @cite_19 .
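A minimal sketch of the crop-averaging defense described above: predictions are averaged over shifted crops, and the small translations tend to break the pixel-exact alignment a crafted perturbation relies on. Here `classifier` is a placeholder for any function returning class scores, and the crop size and stride are illustrative, not taken from the cited systems:

```python
import numpy as np

def classify_with_crops(image, classifier, crop=24, stride=4):
    """Average a classifier's outputs over shifted crops of a
    grayscale image, mimicking the multi-crop evaluation used
    by networks such as GoogLeNet."""
    h, w = image.shape
    preds = []
    for y in range(0, h - crop + 1, stride):
        for x in range(0, w - crop + 1, stride):
            preds.append(classifier(image[y:y + crop, x:x + crop]))
    return np.mean(preds, axis=0)
```

Foveation works similarly, except that a single informative region is selected and the rest of the image is discarded rather than averaged over.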
{ "cite_N": [ "@cite_19", "@cite_6", "@cite_8" ], "mid": [ "2174868984", "2217248474", "2950179405" ], "abstract": [ "Deep learning algorithms have been shown to perform extremely well on many classical machine learning problems. However, recent studies have shown that deep learning, like other machine learning techniques, is vulnerable to adversarial samples: inputs crafted to force a deep neural network (DNN) to provide adversary-selected outputs. Such attacks can seriously undermine the security of the system supported by the DNN, sometimes with devastating consequences. For example, autonomous vehicles can be crashed, illicit or illegal content can bypass content filters, or biometric authentication systems can be manipulated to allow improper access. In this work, we introduce a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples on DNNs. We analytically investigate the generalizability and robustness properties granted by the use of defensive distillation when training DNNs. We also empirically study the effectiveness of our defense mechanisms on two DNNs placed in adversarial settings. The study shows that defensive distillation can reduce effectiveness of sample creation from 95 to less than 0.5 on a studied DNN. Such dramatic gains can be explained by the fact that distillation leads gradients used in adversarial sample creation to be reduced by a factor of 10^30. We also find that distillation increases the average minimum number of features that need to be modified to create adversarial samples by about 800 on one of the DNNs we tested.", "We show that adversarial examples, i.e., the visually imperceptible perturbations that result in Convolutional Neural Networks (CNNs) fail, can be alleviated with a mechanism based on foveations---applying the CNN in different image regions. 
To see this, first, we report results in ImageNet that lead to a revision of the hypothesis that adversarial perturbations are a consequence of CNNs acting as a linear classifier: CNNs act locally linearly to changes in the image regions with objects recognized by the CNN, and in other regions the CNN may act non-linearly. Then, we corroborate that when the neural responses are linear, applying the foveation mechanism to the adversarial example tends to significantly reduce the effect of the perturbation. This is because, hypothetically, the CNNs for ImageNet are robust to changes of scale and translation of the object produced by the foveation, but this property does not generalize to transformations of the perturbation. As a result, the accuracy after a foveation is almost the same as the accuracy of the CNN without the adversarial perturbation, even if the adversarial perturbation is calculated taking into account a foveation.", "We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection." ] }
1610.04057
2566167679
In this paper, we propose a novel model, named Stroke Sequence-dependent Deep Convolutional Neural Network (SSDCNN), using the stroke sequence information and eight-directional features for Online Handwritten Chinese Character Recognition (OLHCCR). On one hand, SSDCNN can learn the representation of an Online Handwritten Chinese Character (OLHCC) by incorporating the natural sequence information of the strokes. On the other hand, SSDCNN can incorporate eight-directional features in a natural way. In order to train SSDCNN, we divide the training process into two stages: 1) the training data is used to pre-train the whole architecture until the performance tends to converge; 2) the fully-connected neural network, which combines the stroke sequence-dependent representation with eight-directional features, and the softmax layer are further trained. Experiments were conducted on the OLHCCR competition tasks of ICDAR 2013. Results show that SSDCNN can reduce the recognition error by 50% (5.13% vs. 2.56%) compared to the model which only uses eight-directional features. The proposed SSDCNN achieves 97.44% accuracy, which reduces the recognition error by about 1.9% compared with the best submitted system in the ICDAR 2013 competition. These results indicate that SSDCNN can exploit the stroke sequence information to learn a high-quality representation of OLHCCs. It also shows that the learnt representation and the classical eight-directional features complement each other within the SSDCNN architecture.
Intensive research efforts have been made on OLHCCR since the 1960s. Generally, the proposed methods can be divided into structural methods and statistical methods @cite_16 : before the 1990s, most works used structural methods, which are more relevant to human learning and perception. Afterward, statistical methods achieved higher recognition accuracy by learning from samples. However, both structural and statistical methods are based on handcrafted features. We first review some related works on online isolated character recognition according to the flowchart in Fig. . Then, we describe some recent DCNN-based works on OLHCCR.
{ "cite_N": [ "@cite_16" ], "mid": [ "2115675483" ], "abstract": [ "Online handwriting recognition is gaining renewed interest owing to the increase of pen computing applications and new pen input devices. The recognition of Chinese characters is different from western handwriting recognition and poses a special challenge. To provide an overview of the technical status and inspire future research, this paper reviews the advances in online Chinese character recognition (OLCCR), with emphasis on the research works from the 1990s. Compared to the research in the 1980s, the research efforts in the 1990s aimed to further relax the constraints of handwriting, namely, the adherence to standard stroke orders and stroke numbers and the restriction of recognition to isolated characters only. The target of recognition has shifted from regular script to fluent script in order to better meet the requirements of practical applications. The research works are reviewed in terms of pattern representation, character classification, learning adaptation, and contextual processing. We compare important results and discuss possible directions of future research." ] }
1610.04057
2566167679
In this paper, we propose a novel model, named Stroke Sequence-dependent Deep Convolutional Neural Network (SSDCNN), using the stroke sequence information and eight-directional features for Online Handwritten Chinese Character Recognition (OLHCCR). On one hand, SSDCNN can learn the representation of an Online Handwritten Chinese Character (OLHCC) by incorporating the natural sequence information of the strokes. On the other hand, SSDCNN can incorporate eight-directional features in a natural way. In order to train SSDCNN, we divide the training process into two stages: 1) the training data is used to pre-train the whole architecture until the performance tends to converge; 2) the fully-connected neural network, which combines the stroke sequence-dependent representation with eight-directional features, and the softmax layer are further trained. Experiments were conducted on the OLHCCR competition tasks of ICDAR 2013. Results show that SSDCNN can reduce the recognition error by 50% (5.13% vs. 2.56%) compared to the model which only uses eight-directional features. The proposed SSDCNN achieves 97.44% accuracy, which reduces the recognition error by about 1.9% compared with the best submitted system in the ICDAR 2013 competition. These results indicate that SSDCNN can exploit the stroke sequence information to learn a high-quality representation of OLHCCs. It also shows that the learnt representation and the classical eight-directional features complement each other within the SSDCNN architecture.
The performance of an OLHCCR system depends largely on feature extraction, since the feature vectors are the direct input to the classifier. For handwritten Chinese character recognition, directional features @cite_24 and gradient features @cite_1 are usually used. Kawamura et al. @cite_17 propose to use 4-directional features for online handwritten Japanese recognition. Bai and Huo extend 4-directional features to eight-directional features for OLHCCR. Ding et al. @cite_2 extract 8-directional features on imaginary strokes for OLHCCR, and Liu et al. @cite_6 extract the original stroke direction and normalized direction features for OLHCCR. Yun et al. @cite_20 propose to use a combination of distance map and direction histogram to recognize online handwritten digits and freehand sketches. Zhu et al. participated in the ICDAR 2013 Chinese Handwriting Recognition Competition http://www.nlpr.ia.ac.cn/events/CHRcompetition2013/competition/Home.html and submitted a system named TUAT using histograms of normalized stroke direction @cite_1 . TUAT achieved 93.85% accuracy.
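As a rough illustration of the directional features discussed above, the directions of a stroke's consecutive point pairs can be quantized into eight 45-degree bins and accumulated into a normalized histogram. The binning convention (bin 0 pointing east, counter-clockwise in math coordinates) and the normalization are assumptions for this sketch, not the exact scheme of any cited system:

```python
import math

def eight_direction_histogram(stroke):
    """Normalized histogram of quantized segment directions for a
    stroke given as a list of (x, y) points. Each consecutive
    point pair votes into one of 8 bins of 45 degrees each."""
    hist = [0.0] * 8
    for (x0, y0), (x1, y1) in zip(stroke, stroke[1:]):
        angle = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        hist[int(round(angle / (math.pi / 4))) % 8] += 1.0
    total = sum(hist) or 1.0
    return [v / total for v in hist]

# A purely horizontal stroke puts all its mass in the east bin.
print(eight_direction_histogram([(0, 0), (1, 0), (2, 0)]))
```

Real systems additionally blur each vote across neighboring bins and pool the histograms over a spatial grid, but the quantization step is the common core.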
{ "cite_N": [ "@cite_1", "@cite_6", "@cite_24", "@cite_2", "@cite_20", "@cite_17" ], "mid": [ "1978964824", "", "2119305530", "2158505079", "2161629270", "2131923426" ], "abstract": [ "This paper describes the Chinese handwriting recognition competition held at the 12th International Conference on Document Analysis and Recognition (ICDAR 2013). This third competition in the series again used the CASIA-HWDB OLHWDB databases as the training set, and all the submitted systems were evaluated on closed datasets to report character-level correct rates. This year, 10 groups submitted 27 systems for five tasks: classification on extracted features, online offline isolated character recognition, online offline handwritten text recognition. The best results (correct rates) are 93.89 for classification on extracted features, 94.77 for offline character recognition, 97.39 for online character recognition, 88.76 for offline text recognition, and 95.03 for online text recognition, respectively. In addition to the test results, we also provide short descriptions of the recognition methods and brief discussions on the results.", "", "Recently, the Institute of Automation of Chinese Academy of Sciences (CASIA) released the unconstrained online and offline Chinese handwriting databases CASIA-OLHWDB and CASIA-HWDB, which contain isolated character samples and handwritten texts produced by 1020 writers. This paper presents our benchmarking results using state-of-the-art methods on the isolated character datasets OLHWDB1.0 and HWDB1.0 (called DB1.0 in general), OLHWDB1.1 and HWDB1.1 (called DB1.1 in general). The DB1.1 covers 3755 Chinese character classes as in the level-1 set of GB2312-80. 
The evaluated methods include 1D and pseudo 2D normalization methods, gradient direction feature extraction from binary images and from gray-scale images, online stroke direction feature extraction from pen-down trajectory and from pen lifts, classification using the modified quadratic discriminant function (MQDF), discriminative feature extraction (DFE), and discriminative learning quadratic discriminant function (DLQDF). Our experiments reported the highest test accuracies 89.55 and 93.22 on the HWDB1.1 (offline) and OLHWDB1.1 (online), respectively, when using the MQDF classifier trained with DB1.1. When training with both the DB1.0 and DB1.1, the test accuracies on HWDB1.1 and OLHWDB are improved to 90.71 and 93.95 , respectively. Using DFE and DLQDF, the best results on HWDB1.1 and OLHWDB1.1 are 92.08 and 94.85 , respectively. Our results are comparable to the best results of the ICDAR2011 Chinese Handwriting Recognition Competition though we used less training samples.", "Imaginary stroke technique has been proved to be an effective solution to the problem of the stroke connection in online handwritten character recognition. However, it may cause confusions among characters with similar but actually different trajectories after adding imaginary strokes. In this paper, we first investigate both the benefit and the defect of the imaginary stroke technique, and then two modified methods are proposed under the framework of feature fusion and local feature enhance respectively. With the proposed methods, the feature of imaginary strokes is employed to unify the writing styles and the feature of real strokes is enhanced to strengthen discriminability. 
Experimental results for handwritten Chinese character recognition indicate that comparing with feature without imaginary strokes and feature with imaginary strokes, our proposed methods provide about 3 8 and 1 4 recognition accuracy improvement respectively.", "We propose a powerful shape representation to recognize sketches drawn on a pen-based input device. The proposed method is robust to the sketching order by using the combination of distance map and direction histogram. A distance map created after normalizing a freehand sketch represents a spatial feature of shape regardless of the writing order. Moreover, a distance map which acts a spatial feature is more robust to shape variation than chamfer distance. Direction histogram is also able to extract a directional feature unrelated to the drawing order by using the alignment of the spatial location between two neighboring points of the stroke. The combination of these two features represents rich information to recognize an input sketch. The experiment result demonstrates the superiority of the proposed method more than previous works. It shows 96 recognition performance for the experimental database, which consists of 28 freehand sketches and 10 on-line handwritten digits.", "The authors propose an online handwritten Japanese character recognition method permitting both stroke number and stroke order variations. The method is based on the pattern matching technique. Matching is done by the multiple similarity method using directional feature densities, which are independent of both stroke number and stroke order. This method has achieved a good recognition rate, 91 , for 2965 freely written Japanese kanji characters. >" ] }
1610.04141
2536828819
Digital whole-slide images of pathological tissue samples have recently become feasible for use within routine diagnostic practice. These gigapixel sized images enable pathologists to perform reviews using computer workstations instead of microscopes. Existing workstations visualize scanned images by providing a zoomable image space that reproduces the capabilities of the microscope. This paper presents a novel visualization approach that enables filtering of the scale-space according to color preference. The visualization method reveals diagnostically important patterns that are otherwise not visible. The paper demonstrates how this approach has been implemented into a fully functional prototype that lets the user navigate the visualization parameter space in real time. The prototype was evaluated for two common clinical tasks with eight pathologists in a within-subjects study. The data reveal that task efficiency increased by 15 using the prototype, with maintained accuracy. By analyzing behavioral strategies, it was possible to conclude that efficiency gain was caused by a reduction of the panning needed to perform systematic search of the images. The prototype system was well received by the pathologists who did not detect any risks that would hinder use in clinical routine.
Several advances of these methods have been explored. @cite_31 presented a method to weight the visibility of different image objects based on a predefined importance function, forcing specific features to become visible. Bruckner and Gröller @cite_30 formulated a method combining the benefits of MIP and DVR. Both methods counteract the occlusion of small image objects that can otherwise be hard to distinguish with normal DVR.
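For context on the two techniques @cite_30 combines, a minimal sketch: MIP keeps the brightest sample along each viewing ray, while DVR composites samples front to back. The fixed per-sample opacity here is a deliberate simplification; a real DVR renderer looks opacity up in a transfer function per sample:

```python
import numpy as np

def mip(samples):
    """Maximum Intensity Projection: the brightest sample
    along the ray wins, occlusion is ignored."""
    return np.max(samples, axis=-1)

def dvr(samples, opacity):
    """Simplified front-to-back emission/absorption compositing.
    Earlier (nearer) samples attenuate the contribution of
    later ones, which is what produces occlusion cues."""
    color, alpha = 0.0, 0.0
    for s in samples:
        color += (1.0 - alpha) * opacity * s
        alpha += (1.0 - alpha) * opacity
    return color
```

MIDA's contribution, per the cited abstract, is an accumulation rule that behaves like MIP where intensities peak while retaining DVR-style occlusion and shape cues elsewhere.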
{ "cite_N": [ "@cite_30", "@cite_31" ], "mid": [ "2085106745", "1983985737" ], "abstract": [ "It has long been recognized that transfer function setup for Direct Volume Rendering (DVR) is crucial to its usability. However, the task of finding an appropriate transfer function is complex and time-consuming even for experts. Thus, in many practical applications simpler techniques which do not rely on complex transfer functions are employed. One common example is Maximum Intensity Projection (MIP) which depicts the maximum value along each viewing ray. In this paper, we introduce Maximum Intensity Difference Accumulation (MIDA), a new approach which combines the advantages of DVR and MIP. Like MIP, MIDA exploits common data characteristics and hence does not require complex transfer functions to generate good visualization results. It does, however, feature occlusion and shape cues similar to DVR. Furthermore, we show that MIDA - in addition to being a useful technique in its own right - can be used to smoothly transition between DVR and MIP in an intuitive manner. MIDA can be easily implemented using volume raycasting and achieves real-time performance on current graphics hardware.", "This paper presents importance-driven feature enhancement as a technique for the automatic generation of cut-away and ghosted views out of volumetric data. The presented focus+context approach removes or suppresses less important parts of a scene to reveal more important underlying information. However, less important parts are fully visible in those regions, where important visual information is not lost, i.e., more relevant features are not occluded. Features within the volumetric data are first classified according to a new dimension, denoted as object importance. This property determines which structures should be readily discernible and which structures are less important. 
Next, for each feature, various representations (levels of sparseness) from a dense to a sparse depiction are defined. Levels of sparseness define a spectrum of optical properties or rendering styles. The resulting image is generated by ray-casting and combining the intersected features proportional to their importance (importance compositing). The paper includes an extended discussion on several possible schemes for levels of sparseness specification. Furthermore, different approaches to importance compositing are treated." ] }
1610.04141
2536828819
Digital whole-slide images of pathological tissue samples have recently become feasible for use within routine diagnostic practice. These gigapixel sized images enable pathologists to perform reviews using computer workstations instead of microscopes. Existing workstations visualize scanned images by providing a zoomable image space that reproduces the capabilities of the microscope. This paper presents a novel visualization approach that enables filtering of the scale-space according to color preference. The visualization method reveals diagnostically important patterns that are otherwise not visible. The paper demonstrates how this approach has been implemented into a fully functional prototype that lets the user navigate the visualization parameter space in real time. The prototype was evaluated for two common clinical tasks with eight pathologists in a within-subjects study. The data reveal that task efficiency increased by 15 using the prototype, with maintained accuracy. By analyzing behavioral strategies, it was possible to conclude that efficiency gain was caused by a reduction of the panning needed to perform systematic search of the images. The prototype system was well received by the pathologists who did not detect any risks that would hinder use in clinical routine.
Another way to raise the salience of important visual features is to modify their apparent size. @cite_12 enlarged image regions based on a user-selected transfer function. Correa and Ma @cite_14 @cite_15 experimented with the local size and occlusion of objects as additional transfer-function parameters, improving the discrimination between image features. Our method also extracts important details from a large dataset, but brings this capability to the domain of large two-dimensional images.
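A toy illustration of the idea behind size-based transfer functions: a voxel's local feature scale is mapped to opacity so that small features stay visible while large homogeneous regions fade out. The thresholds and the linear ramp below are invented for the example, not taken from the cited work:

```python
def size_based_opacity(local_scale, small=2.0, large=10.0):
    """Map a voxel's local feature scale (e.g. from a
    scale-space detection filter) to opacity: full opacity
    below `small`, zero above `large`, linear in between."""
    if local_scale <= small:
        return 1.0
    if local_scale >= large:
        return 0.0
    return (large - local_scale) / (large - small)
```

The point of the technique is that voxels with identical scalar values can still be classified differently, because size acts as a second, independent classification axis.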
{ "cite_N": [ "@cite_15", "@cite_14", "@cite_12" ], "mid": [ "", "2163363303", "2149557512" ], "abstract": [ "", "The visualization of complex 3D images remains a challenge, a fact that is magnified by the difficulty to classify or segment volume data. In this paper, we introduce size-based transfer functions, which map the local scale of features to color and opacity. Features in a data set with similar or identical scalar values can be classified based on their relative size. We achieve this with the use of scale fields, which are 3D fields that represent the relative size of the local feature at each voxel. We present a mechanism for obtaining these scale fields at interactive rates, through a continuous scale-space analysis and a set of detection filters. Through a number of examples, we show that size-based transfer functions can improve classification and enhance volume rendering techniques, such as maximum intensity projection. The ability to classify objects based on local size at interactive rates proves to be a powerful method for complex data exploration.", "The growing sizes of volumetric data sets pose a great challenge for interactive visualization. In this paper, we present a feature-preserving data reduction and focus+context visualization method based on transfer function driven, continuous voxel repositioning and resampling techniques. Rendering reduced data can enhance interactivity. Focus+context visualization can show details of selected features in context on display devices with limited resolution. Our method utilizes the input transfer function to assign importance values to regularly partitioned regions of the volume data. According to user interaction, it can then magnify regions corresponding to the features of interest while compressing the rest by deforming the 3D mesh. The level of data reduction achieved is significant enough to improve overall efficiency. 
By using continuous deformation, our method avoids the need to smooth the transition between low and high-resolution regions as often required by multiresolution methods. Furthermore, it is particularly attractive for focus+context visualization of multiple features. We demonstrate the effectiveness and efficiency of our method with several volume data sets from medical applications and scientific simulations." ] }
1610.04141
2536828819
Digital whole-slide images of pathological tissue samples have recently become feasible for use within routine diagnostic practice. These gigapixel sized images enable pathologists to perform reviews using computer workstations instead of microscopes. Existing workstations visualize scanned images by providing a zoomable image space that reproduces the capabilities of the microscope. This paper presents a novel visualization approach that enables filtering of the scale-space according to color preference. The visualization method reveals diagnostically important patterns that are otherwise not visible. The paper demonstrates how this approach has been implemented into a fully functional prototype that lets the user navigate the visualization parameter space in real time. The prototype was evaluated for two common clinical tasks with eight pathologists in a within-subjects study. The data reveal that task efficiency increased by 15 using the prototype, with maintained accuracy. By analyzing behavioral strategies, it was possible to conclude that efficiency gain was caused by a reduction of the panning needed to perform systematic search of the images. The prototype system was well received by the pathologists who did not detect any risks that would hinder use in clinical routine.
Previous visualization work for pathology imaging has focused on improving the speed of viewing these gigapixel images @cite_29 , or on dealing with 3D stacks of microscopic images @cite_0 . For the day-to-day needs of a pathology lab, commercially available scanners can capture, stitch, and organize digital slides into tiled pyramidal images in minutes @cite_10 , and digital viewers can closely reproduce the experience of using a microscope @cite_18 . However, it is important to keep viewing latency down; the bottlenecks are typically slow hard drives @cite_29 and slow network response times.
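The tiled pyramidal images mentioned above halve the resolution at each level until the whole slide fits in a single tile, so the viewer only ever fetches the tiles intersecting the current viewport at the current zoom. A quick sketch of the resulting level count; the 256-pixel tile size is an assumption (scanner formats vary):

```python
import math

def pyramid_levels(width, height, tile=256):
    """Number of levels in a tiled image pyramid that halves
    the resolution per level until the image fits one tile."""
    levels = 1
    w, h = width, height
    while w > tile or h > tile:
        w, h = math.ceil(w / 2), math.ceil(h / 2)
        levels += 1
    return levels

# A 100k x 80k whole-slide image needs 10 pyramid levels.
print(pyramid_levels(100_000, 80_000))
```

This is why viewing latency is dominated by tile I/O rather than decoding: each pan or zoom touches only a handful of tiles, but each tile is a separate disk or network read.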
{ "cite_N": [ "@cite_0", "@cite_29", "@cite_10", "@cite_18" ], "mid": [ "2001749563", "2117299321", "", "1978404837" ], "abstract": [ "This paper presents the first volume visualization system that scales to petascale volumes imaged as a continuous stream of high-resolution electron microscopy images. Our architecture scales to dense, anisotropic petascale volumes because it: (1) decouples construction of the 3D multi-resolution representation required for visualization from data acquisition, and (2) decouples sample access time during ray-casting from the size of the multi-resolution hierarchy. Our system is designed around a scalable multi-resolution virtual memory architecture that handles missing data naturally, does not pre-compute any 3D multi-resolution representation such as an octree, and can accept a constant stream of 2D image tiles from the microscopes. A novelty of our system design is that it is visualization-driven: we restrict most computations to the visible volume data. Leveraging the virtual memory architecture, missing data are detected during volume ray-casting as cache misses, which are propagated backwards for on-demand out-of-core processing. 3D blocks of volume data are only constructed from 2D microscope image tiles when they have actually been accessed during ray-casting. We extensively evaluate our system design choices with respect to scalability and performance, compare to previous best-of-breed systems, and illustrate the effectiveness of our system for real microscopy data from neuroscience.", "Histology is the study of the structure of biological tissue using microscopy techniques. As digital imaging technology advances, high resolution microscopy of large tissue volumes is becoming feasible; however, new interactive tools are needed to explore and analyze the enormous datasets. In this paper we present a visualization framework that specifically targets interactive examination of arbitrarily large image stacks. Our framework is built upon two core techniques: display-aware processing and GPU-accelerated texture compression. With display-aware processing, only the currently visible image tiles are fetched and aligned on-the-fly, reducing memory bandwidth and minimizing the need for time-consuming global pre-processing. Our novel texture compression scheme for GPUs is tailored for quick browsing of image stacks. We evaluate the usability of our viewer for two histology applications: digital pathology and visualization of neural structure at nanoscale-resolution in serial electron micrographs.", "", "Digital pathology promises a number of benefits in efficiency in surgical pathology, yet the longer time required to review a virtual slide than a glass slide currently represents a significant barrier to the routine use of digital pathology. We aimed to create a novel workstation that enables pathologists to view a case as quickly as on the conventional microscope. The Leeds Virtual Microscope (LVM) was evaluated using a mixed factorial experimental design. Twelve consultant pathologists took part, each viewing one long cancer case (12-25 slides) on the LVM and one on a conventional microscope. Total time taken and diagnostic confidence were similar for the microscope and LVM, as was the mean slide viewing time. On the LVM, participants spent a significantly greater proportion of the total task time viewing slides and revisited slides more often. The unique design of the LVM, enabling real-time rendering of virtual slides while providing users with a quick and intuitive way to navigate within and between slides, makes use of digital pathology in routine practice a realistic possibility. With further practice with the system, diagnostic efficiency on the LVM is likely to increase yet more." ] }
1610.03577
2531035071
Preserving privacy of continuous and or high-dimensional data such as images, videos and audios, can be challenging with syntactic anonymization methods which are designed for discrete attributes. Differential privacy, which provides a more formal definition of privacy, has shown more success in sanitizing continuous data. However, both syntactic and differential privacy are susceptible to inference attacks, i.e., an adversary can accurately infer sensitive attributes from sanitized data. The paper proposes a novel filter-based mechanism which preserves privacy of continuous and high-dimensional attributes against inference attacks. Finding the optimal utility-privacy tradeoff is formulated as a min-diff-max optimization problem. The paper provides an ERM-like analysis of the generalization error and also a practical algorithm to perform the optimization. In addition, the paper proposes an extension that combines minimax filter and differentially-private noisy mechanism. Advantages of the method over purely noisy mechanisms is explained and demonstrated with examples. Experiments with several real-world tasks including facial expression classification, speech emotion classification, and activity classification from motion, show that the minimax filter can simultaneously achieve similar or better target task accuracy and lower inference accuracy, often significantly lower than previous methods.
Algorithms for preserving the privacy of high-dimensional face images have been proposed previously. @cite_11 applied k-anonymity to images; @cite_8 proposed to learn a linear filter using Partial Least Squares to reduce the covariance between filtered data and private labels; @cite_18 also proposed a linear filter using the log-ratio of Fisher's Linear Discriminant Analysis metrics. @cite_15 presented a related method for preserving the privacy of linear predictors using the Augmented Fractional Knapsack algorithm. This paper differs from these works in several aspects: it is not limited to linear filters and is applicable to arbitrary differentiable nonlinear filters such as multilayer neural networks, and it directly optimizes the utility-privacy risk instead of heuristic criteria such as covariance differences or LDA log-ratios.
{ "cite_N": [ "@cite_8", "@cite_18", "@cite_15", "@cite_11" ], "mid": [ "1982183556", "2031366895", "2604162892", "44899178" ], "abstract": [ "Re-identification is a major privacy threat to public datasets containing individual records. Many privacy protection algorithms rely on generalization and suppression of \"quasi-identifier\" attributes such as ZIP code and birthdate. Their objective is usually syntactic sanitization: for example, k-anonymity requires that each \"quasi-identifier\" tuple appear in at least k records, while l-diversity requires that the distribution of sensitive attributes for each quasi-identifier have high entropy. The utility of sanitized data is also measured syntactically, by the number of generalization steps applied or the number of records with the same quasi-identifier. In this paper, we ask whether generalization and suppression of quasi-identifiers offer any benefits over trivial sanitization which simply separates quasi-identifiers from sensitive attributes. Previous work showed that k-anonymous databases can be useful for data mining, but k-anonymization does not guarantee any privacy. By contrast, we measure the tradeoff between privacy (how much can the adversary learn from the sanitized records?) and utility, measured as accuracy of data-mining algorithms executed on the same sanitized records. For our experimental evaluation, we use the same datasets from the UCI machine learning repository as were used in previous research on generalization and suppression. Our results demonstrate that even modest privacy gains require almost complete destruction of the data-mining utility. In most cases, trivial sanitization provides equivalent utility and better privacy than k-anonymity, l-diversity, and similar methods based on generalization and suppression.", "In machine learning and computer vision, input signals are often filtered to increase data discriminability. For example, preprocessing face images with Gabor band-pass filters is known to improve performance in expression recognition tasks [1]. Sometimes, however, one may wish to purposely decrease discriminability of one classification task (a “distractor” task), while simultaneously preserving information relevant to another task (the target task): For example, due to privacy concerns, it may be important to mask the identity of persons contained in face images before submitting them to a crowdsourcing site (e.g., Mechanical Turk) when labeling them for certain facial attributes. Suppressing discriminability in distractor tasks may also be needed to improve inter-dataset generalization: training datasets may sometimes contain spurious correlations between a target attribute (e.g., facial expression) and a distractor attribute (e.g., gender). We might improve generalization to new datasets by suppressing the signal related to the distractor task in the training dataset. This can be seen as a special form of supervised regularization. In this paper we present an approach to automatically learning preprocessing filters that suppress discriminability in distractor tasks while preserving it in target tasks. We present promising results in simulated image classification problems and in a realistic expression recognition problem.", "", "In a recent paper Dinur and Nissim considered a statistical database in which a trusted database administrator monitors queries and introduces noise to the responses with the goal of maintaining data privacy [5]. Under a rigorous definition of breach of privacy, Dinur and Nissim proved that unless the total number of queries is sub-linear in the size of the database, a substantial amount of noise is required to avoid a breach, rendering the database almost useless." ] }
1610.03677
2537458488
An accurate and reliable image based fruit detection system is critical for supporting higher level agriculture tasks such as yield mapping and robotic harvesting. This paper presents the use of a state-of-the-art object detection framework, Faster R-CNN, in the context of fruit detection in orchards, including mangoes, almonds and apples. Ablation studies are presented to better understand the practical deployment of the detection network, including how much training data is required to capture variability in the dataset. Data augmentation techniques are shown to yield significant performance gains, resulting in a greater than two-fold reduction in the number of training images required. In contrast, transferring knowledge between orchards contributed to negligible performance gain over initialising the Deep Convolutional Neural Network directly from ImageNet features. Finally, to operate over orchard data containing between 100-1000 fruit per image, a tiling approach is introduced for the Faster R-CNN framework. The study has resulted in the best yet detection performance for these orchards relative to previous works, with an F1-score of >0.9 achieved for apples and mangoes.
Analysis of local colours and textures has been used for pixel-wise mango classification, followed by blob extraction to identify individual mangoes @cite_19 . To avoid hand-engineered features, Convolutional Neural Networks (CNNs) have been used for pixel-wise apple classification @cite_0 , followed by Watershed Segmentation (WS) to identify individual apples. The radial symmetries in the specular reflection of berries have been shown to be useful for RoI extraction @cite_3 , where a KD-forest was used for berry classification. To detect citrus fruit, Circular Hough Transforms (CHT) have been used to extract key-points, which were then classified using a Support Vector Machine (SVM) @cite_10 .
{ "cite_N": [ "@cite_0", "@cite_19", "@cite_10", "@cite_3" ], "mid": [ "", "2091119928", "2087877962", "1892082890" ], "abstract": [ "", "This paper extends a previous study on the use of image analysis to automatically estimate mango crop yield (fruit on tree) (, 2013). Images were acquired at night, using artificial lighting of fruit at an earlier stage of maturation ('stone hardening' stage) than for the previous study. Multiple image sets were collected during the 2011 and 2012 seasons. Despite altering the settings of the filters in the algorithm presented in the previous study (based on colour segmentation using RGB and YCbCr, and texture), the less mature fruit were poorly identified, due to a lower extent of red colouration of the skin. The algorithm was altered to reduce its dependence on colour features and to increase its use of texture filtering, hessian filtering in particular, to remove leaves, trunk and stems. Results on a calibration set of images (2011) were significantly improved, with 78.3% of fruit detected, an error rate of 10.6% and an R^2 value (machine vision to manual count) of 0.63. Further application of the approach on validation sets from 2011 and 2012 had mixed results, with issues related to variation in foliage characteristics between sets. It is proposed the detection approaches within both of these algorithms be used as a 'toolkit' for a mango detection system, within an expert system that also uses user input to improve the accuracy of the system.", "Yield mapping for tree crops by mechanical harvesting requires automatic detection and counting of fruits in tree canopy. However, partial occlusion, shape irregularity, varying illumination, multiple sizes and similarity with the background make fruit identification a very difficult task to achieve. Therefore, immature green citrus-fruit detection within a green canopy is a challenging task due to all the above-mentioned problems. A novel algorithmic technique was used to detect immature green citrus fruit in tree canopy under natural outdoor conditions. Shape analysis and texture classification were two integral parts of the algorithm. Shape analysis was conducted to detect as many fruits as possible. Texture classification by a support vector machine (SVM), Canny edge detection combined with a graph-based connected component algorithm and Hough line detection, were used to remove false positives. Next, keypoints were detected using a scale invariant feature transform (SIFT) algorithm and to further remove false positives. A majority voting scheme was implemented to make the algorithm more robust. The algorithm was able to accurately detect and count 80.4% of citrus fruit in a validation set of images acquired from a citrus grove under natural outdoor conditions. The algorithm could be further improved to provide growers early yield estimation so that growers can manage grove more efficiently on a site-specific basis to increase yield and profit.", "We present a vision system that automatically predicts yield in vineyards accurately and with high resolution. Yield estimation traditionally requires tedious hand measurement, which is destructive, sparse in sampling, and inaccurate. Our method is efficient, high-resolution, and it is the first such system evaluated in realistic experimentation over several years and hundreds of vines spread over several acres of different vineyards. Other existing research is limited to small test sets of 10 vines or less, or just isolated grape clusters, with tightly controlled image acquisition and with artificially induced yield distributions. The system incorporates cameras and illumination mounted on a vehicle driving through the vineyard. We process images by exploiting the three prominent visual cues of texture, color, and shape into a strong classifier that detects berries even when they are of similar color to the vine leaves. We introduce methods to maximize the spatial and the overall accuracy of the yield estimates by optimizing the relationship between image measurements and yield. Our experimentation is conducted over four growing seasons in several wine and table-grape vineyards. These are the first such results from experimentation that is sufficiently sized for fair evaluation against true yield variation and real-world imaging conditions from a moving vehicle. Analysis of the results demonstrates yield estimates that capture up to 75% of spatial yield variance and with an average error between 3% and 11% of total yield." ] }
1610.03995
2952315658
In our today's information society more and more data emerges, e.g. in social networks, technical applications, or business applications. Companies try to commercialize these data using data mining or machine learning methods. For this purpose, the data are categorized or classified, but often at high (monetary or temporal) costs. An effective approach to reduce these costs is to apply any kind of active learning (AL) methods, as AL controls the training process of a classifier by specific querying individual data points (samples), which are then labeled (e.g., provided with class memberships) by a domain expert. However, an analysis of current AL research shows that AL still has some shortcomings. In particular, the structure information given by the spatial pattern of the (un)labeled data in the input space of a classification model (e.g., cluster information), is used in an insufficient way. In addition, many existing AL techniques pay too little attention to their practical applicability. To meet these challenges, this article presents several techniques that together build a new approach for combining AL and semi-supervised learning (SSL) for support vector machines (SVM) in classification tasks. Structure information is captured by means of probabilistic models that are iteratively improved at runtime when label information becomes available. The probabilistic models are considered in a selection strategy based on distance, density, diversity, and distribution (4DS strategy) information for AL and in a kernel function (Responsibility Weighted Mahalanobis kernel) for SVM. The approach fuses generative and discriminative modeling techniques. With 20 benchmark data sets and with the MNIST data set it is shown that our new solution yields significantly better results than state-of-the-art methods.
What do we intend to do better or differently? An actively trained SVM with our new RWM kernel uses a parametric density estimation to capture structure information, because a parametric estimation approach often leads to a more "robust" estimate than a non-parametric approach (used by the LapSVM (LAP kernel), for instance). An SVM with an RWM kernel can be actively trained with the help of any selection strategy, and the RWM kernel can be used in combination with any standard implementation of SVM (e.g., LIBSVM @cite_15 ) or with any solver for the SVM optimization problem (e.g., SMO) without additional algorithmic adjustments or extensions. In addition, the data modeling can be conducted once, i.e., based on the unlabeled samples before the AL process starts (i.e., in an unsupervised way). Moreover, it is also possible to refine the density model based on class information becoming known during the AL process, so that the model captures the structure information of both the unlabeled and labeled samples. Here, local adaptations of the model components are used for efficient model refinement.
{ "cite_N": [ "@cite_15" ], "mid": [ "2153635508" ], "abstract": [ "LIBSVM is a library for Support Vector Machines (SVMs). We have been actively developing this package since the year 2000. The goal is to help users to easily apply SVM to their applications. LIBSVM has gained wide popularity in machine learning and many other areas. In this article, we present all implementation details of LIBSVM. Issues such as solving SVM optimization problems theoretical convergence multiclass classification probability estimates and parameter selection are discussed in detail." ] }
1610.03771
2952620744
In this paper, we introduce the task of targeted aspect-based sentiment analysis. The goal is to extract fine-grained information with respect to entities mentioned in user comments. This work extends both aspect-based sentiment analysis that assumes a single entity per document and targeted sentiment analysis that assumes a single sentiment towards a target entity. In particular, we identify the sentiment towards each aspect of one or more entities. As a testbed for this task, we introduce the SentiHood dataset, extracted from a question answering (QA) platform where urban neighbourhoods are discussed by users. In this context units of text often mention several aspects of one or more neighbourhoods. This is the first time that a generic social media platform in this case a QA platform, is used for fine-grained opinion mining. Text coming from QA platforms is far less constrained compared to text from review specific platforms which current datasets are based on. We develop several strong baselines, relying on logistic regression and state-of-the-art recurrent neural networks.
The term sentiment analysis was first used in @cite_14 . Since then, the field has received much attention from both research and industry. Sentiment analysis has applications in almost every domain and has raised many interesting research questions. Furthermore, the availability of a huge volume of opinionated data on social media platforms has accelerated development in this area. Early work on sentiment analysis mainly focused on identifying the overall sentiment of a unit of text, where the unit varied from documents @cite_18 @cite_25 to paragraphs or sentences @cite_2 . However, considering only the overall sentiment fails to capture the sentiments over the aspects on which an entity can be reviewed, or the sentiments expressed towards different entities. To remedy this, two new tasks have been introduced: aspect-based sentiment analysis and targeted sentiment analysis. Aspect-based sentiment analysis assumes a single entity per unit of analysis and tries to identify sentiments towards different aspects of that entity @cite_26 @cite_7 @cite_19 @cite_10 @cite_3 @cite_20 @cite_27 . This task, however, does not consider more than one entity in the given text.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_26", "@cite_7", "@cite_3", "@cite_19", "@cite_27", "@cite_2", "@cite_10", "@cite_25", "@cite_20" ], "mid": [ "2166706824", "", "2165664073", "", "2139414979", "2251644290", "2113786470", "2160660844", "1977593555", "2155328222", "2096110600" ], "abstract": [ "We consider the problem of classifying documents not by topic, but by overall sentiment, e.g., determining whether a review is positive or negative. Using movie reviews as data, we find that standard machine learning techniques definitively outperform human-produced baselines. However, the three machine learning methods we employed (Naive Bayes, maximum entropy classification, and support vector machines) do not perform as well on sentiment classification as on traditional topic-based categorization. We conclude by examining factors that make the sentiment classification problem more challenging.", "", "We investigate the efficacy of topic model based approaches to two multi-aspect sentiment analysis tasks: multi-aspect sentence labeling and multi-aspect rating prediction. For sentence labeling, we propose a weakly-supervised approach that utilizes only minimal prior knowledge -- in the form of seed words -- to enforce a direct correspondence between topics and aspects. This correspondence is used to label sentences with performance that approaches a fully supervised baseline. For multi-aspect rating prediction, we find that overall ratings can be used in conjunction with our sentence labelings to achieve reasonable performance compared to a fully supervised baseline. When gold-standard aspect-ratings are available, we find that topic model based features can be used to improve unsophisticated supervised baseline performance, in agreement with previous multi-aspect rating prediction work. This improvement is diminished, however, when topic model features are paired with a more competitive supervised baseline -- a finding not acknowledged in previous work.", "", "The task of product feature extraction is to find product features that customers refer to their topic reviews. It would be useful to characterize the opinions about the products. We propose an approach for product feature extraction by combining lexical and syntactic features with a maximum entropy model. For the underlying principle of maximum entropy, it prefers the uniform distributions if there is no external knowledge. Using a maximum entropy approach, firstly we extract the learning features from the annotated corpus, secondly we train the maximum entropy model, thirdly we use trained model to extract product features, and finally we apply a natural language processing technique in postprocessing step to discover the remaining product features. Our experimental results show that this approach is suitable for automatic product feature extraction.", "Vector representations for language has been shown to be useful in a number of Natural Language Processing tasks. In this paper, we aim to investigate the effectiveness of word vector representations for the problem of Aspect Based Sentiment Analysis. In particular, we target three sub-tasks namely aspect term extraction, aspect category detection, and aspect sentiment prediction. We investigate the effectiveness of vector representations over different text data and evaluate the quality of domain-dependent vectors. We utilize vector representations to compute various vectorbased features and conduct extensive experiments to demonstrate their effectiveness. Using simple vector based features, we achieve F1 scores of 79.91 for aspect term extraction, 86.75 for category detection, and the accuracy 72.39 for aspect sentiment prediction.", "With the increase in popularity of online review sites comes a corresponding need for tools capable of extracting the information most important to the user from the plain text data. Due to the diversity in products and services being reviewed, supervised methods are often not practical. We present an unsuper-vised system for extracting aspects and determining sentiment in review text. The method is simple and flexible with regard to domain and language, and takes into account the influence of aspect on sentiment polarity, an issue largely ignored in previous literature. We demonstrate its effectiveness on both component tasks, where it achieves similar results to more complex semi-supervised methods that are restricted by their reliance on manual annotation and extensive knowledge sources.", "Merchants selling products on the Web often ask their customers to review the products that they have purchased and the associated services. As e-commerce is becoming more and more popular, the number of customer reviews that a product receives grows rapidly. For a popular product, the number of reviews can be in hundreds or even thousands. This makes it difficult for a potential customer to read them to make an informed decision on whether to purchase the product. It also makes it difficult for the manufacturer of the product to keep track and to manage customer opinions. For the manufacturer, there are additional difficulties because many merchant sites may sell the same product and the manufacturer normally produces many kinds of products. In this research, we aim to mine and to summarize all the customer reviews of a product. This summarization task is different from traditional text summarization because we only mine the features of the product on which the customers have expressed their opinions and whether the opinions are positive or negative. We do not summarize the reviews by selecting a subset or rewrite some of the original sentences from the reviews to capture the main points as in the classic text summarization. Our task is performed in three steps: (1) mining product features that have been commented on by customers; (2) identifying opinion sentences in each review and deciding whether each opinion sentence is positive or negative; (3) summarizing the results. This paper proposes several novel techniques to perform these tasks. Our experimental results using reviews of a number of products sold online demonstrate the effectiveness of the techniques.", "With the rapid growth of user-generated content on the internet, automatic sentiment analysis of online customer reviews has become a hot research topic recently, but due to variety and wide range of products and services being reviewed on the internet, the supervised and domain-specific models are often not practical. As the number of reviews expands, it is essential to develop an efficient sentiment analysis model that is capable of extracting product aspects and determining the sentiments for these aspects. In this paper, we propose a novel unsupervised and domain-independent model for detecting explicit and implicit aspects in reviews for sentiment analysis. In the model, first a generalized method is proposed to learn multi-word aspects and then a set of heuristic rules is employed to take into account the influence of an opinion word on detecting the aspect. Second a new metric based on mutual information and aspect frequency is proposed to score aspects with a new bootstrapping iterative algorithm. The presented bootstrapping algorithm works with an unsupervised seed set. Third, two pruning methods based on the relations between aspects in reviews are presented to remove incorrect aspects. Finally the model employs an approach which uses explicit aspects and opinion words to identify implicit aspects. Utilizing extracted polarity lexicon, the approach maps each opinion word in the lexicon to the set of pre-extracted explicit aspects with a co-occurrence metric. The proposed model was evaluated on a collection of English product review datasets. The model does not require any labeled training data and it can be easily applied to other languages or other domains such as movie reviews. Experimental results show considerable improvements of our model over conventional techniques including unsupervised and supervised approaches.", "This paper presents a simple unsupervised learning algorithm for classifying reviews as recommended (thumbs up) or not recommended (thumbs down). The classification of a review is predicted by the average semantic orientation of the phrases in the review that contain adjectives or adverbs. A phrase has a positive semantic orientation when it has good associations (e.g., \"subtle nuances\") and a negative semantic orientation when it has bad associations (e.g., \"very cavalier\"). In this paper, the semantic orientation of a phrase is calculated as the mutual information between the given phrase and the word \"excellent\" minus the mutual information between the given phrase and the word \"poor\". A review is classified as recommended if the average semantic orientation of its phrases is positive. The algorithm achieves an average accuracy of 74% when evaluated on 410 reviews from Epinions, sampled from four different domains (reviews of automobiles, banks, movies, and travel destinations). The accuracy ranges from 84% for automobile reviews to 66% for movie reviews.", "In this paper we present a novel framework for extracting the ratable aspects of objects from online user reviews. Extracting such aspects is an important challenge in automatically mining product opinions from the web and in generating opinion-based summaries of user reviews [18, 19, 7, 12, 27, 36, 21]. Our models are based on extensions to standard topic modeling methods such as LDA and PLSA to induce multi-grain topics. We argue that multi-grain models are more appropriate for our task since standard models tend to produce topics that correspond to global properties of objects (e.g., the brand of a product type) rather than the aspects of an object that tend to be rated by a user. The models we present not only extract ratable aspects, but also cluster them into coherent topics, e.g., 'waitress' and 'bartender' are part of the same topic 'staff' for restaurants. This differentiates it from much of the previous work which extracts aspects through term frequency analysis with minimal clustering. We evaluate the multi-grain models both qualitatively and quantitatively to show that they improve significantly upon standard topic models." ] }
1610.03138
2949642650
People enjoy encounters with generative software, but rarely are they encouraged to interact with, understand or engage with it. In this paper we define the term 'PCG-based game', and explain how this concept follows on from the idea of an AI-based game. We look at existing examples of games which foreground their AI, put forward a methodology for designing PCG-based games, describe some example case study designs for PCG-based games, and describe lessons learned during this process of sketching and developing ideas.
The authors of @cite_10 describe Endless Web, a platforming game with procedurally generated levels. By exploring the levels and choosing particular exits, the player can alter the parameters of the procedural generator, allowing them to explore the generative space through gameplay. This is the earliest example we are aware of in which a game is designed around a procedural generator with the explicit intention of giving the player control of the generator's output, and it is perhaps the best-known and most explicit use of procedural generation as a game mechanic.
{ "cite_N": [ "@cite_10" ], "mid": [ "2170183083" ], "abstract": [ "This paper describes the creation of the game Endless Web, a 2D platforming game in which the player's actions determine the ongoing creation of the world she is exploring. Endless Web is an example of a PCG-based game: it uses procedural content generation (PCG) as a mechanic, and its PCG system, Launchpad, greatly influenced the aesthetics of the game. All of the player's strategies for the game revolve around the use of procedural content generation. Many design challenges were encountered in the design and creation of Endless Web, for both the game and modifications that had to be made to Launchpad. These challenges arise largely from a loss of fine-grained control over the player's experience; instead of being able to carefully craft each element the player can interact with, the designer must instead craft algorithms to produce a range of content the player might experience. In this paper we provide a definition of PCG-based game design and describe the challenges faced in creating a PCG-based game. We offer our solutions, which impacted both the game and the underlying level generator, and identify issues which may be particularly important as this area matures." ] }
1610.03138
2949642650
People enjoy encounters with generative software, but rarely are they encouraged to interact with, understand or engage with it. In this paper we define the term 'PCG-based game', and explain how this concept follows on from the idea of an AI-based game. We look at existing examples of games which foreground their AI, put forward a methodology for designing PCG-based games, describe some example case study designs for PCG-based games, and describe lessons learned during this process of sketching and developing ideas.
Many researchers have looked at the issues that arise when people interact with procedural generators. In @cite_7 Khaled explores the metaphors designers use when talking about procedural generation. Even among only professional designers this is quite diverse, and includes metaphors such as tool, material, designer, and domain expert. Given the breadth of uses for generative techniques, and the varying levels of complexity to which they are employed, we imagine there are many more concepts about generative systems held by people who play and interact with them. Part of this work's aim is to develop games that bring these issues to the fore, and allow us to study user understanding.
{ "cite_N": [ "@cite_7" ], "mid": [ "2055044822" ], "abstract": [ "Procedural content generation (PCG), the algorithmic creation of game content with limited or indirect user input, has much to offer to game design. In recent years, it has become a mainstay of game AI, with significant research being put towards the investigation of new PCG systems, algorithms, and techniques. But for PCG to be absorbed into the practice of game design, it must be contextualised within design-centric as opposed to AI or engineering perspectives. We therefore provide a set of design metaphors for understanding potential relationships between a designer and PCG. These metaphors are: tool, material, designer, and domain expert. By examining PCG through these metaphors, we gain the ability to articulate qualities, consequences, affordances, and limitations of existing PCG approaches in relation to design. These metaphors are intended both to aid designers in understanding and appropriating PCG for their own contexts, and to advance PCG research by highlighting the assumptions implicit in existing systems and discourse." ] }
1610.03138
2949642650
People enjoy encounters with generative software, but rarely are they encouraged to interact with, understand or engage with it. In this paper we define the term 'PCG-based game', and explain how this concept follows on from the idea of an AI-based game. We look at existing examples of games which foreground their AI, put forward a methodology for designing PCG-based games, describe some example case study designs for PCG-based games, and describe lessons learned during this process of sketching and developing ideas.
Elsewhere, work such as @cite_13 and @cite_8 examines the relationship between users and generative systems, especially in the context of games or game-like applications. @cite_8 sheds light on, for example, how the presentation of generated content affects our perception of the system which generated it. Games like allow us to watch the slow creation process of a world, which may make us feel differently about the quality and intelligence of the system than if it had appeared out of nowhere.
{ "cite_N": [ "@cite_13", "@cite_8" ], "mid": [ "2224682476", "2290553183" ], "abstract": [ "In this paper, we show that personalized levels can be automatically generated for platform games. We build on previous work, where models were derived that predicted player experience based on features of level design and on playing styles. These models are constructed using preference learning, based on questionnaires administered to players after playing different levels. The contributions of the current paper are (1) more accurate models based on a much larger data set; (2) a mechanism for adapting level design parameters to given players and playing style; (3) evaluation of this adaptation mechanism using both algorithmic and human players. The results indicate that the adaptation mechanism effectively optimizes level design parameters for particular players.", "The common misconception among non-specialists is that a computer program can only perform tasks which the programmer knows how to perform (albeit much faster). This leads to a belief that if an artificial system exhibits creative behavior, it only does so because it is leveraging the programmer’s creativity. We review past efforts to evaluate creative systems and identify the biases against them. As evidenced in our case studies, a common bias indicates that creativity requires both intelligence and autonomy. We suggest that in order to overcome this skepticism, separation of programmer and program is crucial and that the program must be the responsible party for convincing the observer of this separation." ] }
1610.03541
2530161394
One of the primary objectives of a distributed storage system is to reliably store large amounts of source data for long durations using a large number @math of unreliable storage nodes, each with @math bits of storage capacity. Storage nodes fail randomly over time and are replaced with nodes of equal capacity initialized to zeroes, and thus bits are erased at some rate @math . To maintain recoverability of the source data, a repairer continually reads data over a network from nodes at a rate @math , and generates and writes data to nodes based on the read data. The distributed storage source data capacity is the maximum amount of source data that can be reliably stored for long periods of time. We prove the distributed storage source data capacity asymptotically approaches @math as @math and @math grow. This equation expresses a fundamental trade-off between network traffic and storage overhead to reliably store source data.
The construction of @cite_10 is more advanced. When a node fails, at each of the @math remaining nodes a selected subset of @math sub-fragments of its fragment is read over the interface from that node by the repairer, which uses all received sub-fragments over interfaces from the nodes to generate and store a fragment at the new node. This construction can use Reed-Solomon codes, and is efficient for small values of @math and @math .
{ "cite_N": [ "@cite_10" ], "mid": [ "2492240243" ], "abstract": [ "This paper presents an explicit construction for an @math regenerating code over a field @math operating at the Minimum Storage Regeneration (MSR) point. The MSR code can be constructed to have rate @math as close to @math as desired, sub-packetization given by @math , for @math , field size no larger than @math and where all code symbols can be repaired with the same minimum data download. The construction modifies a prior construction by Sasidharan et. al. which required far larger field-size. A building block appearing in the construction is a scalar MDS code of block length @math . The code has a simple layered structure with coupling across layers, that allows both node repair and data recovery to be carried out by making multiple calls to a decoder for the scalar MDS code. While this work was carried out independently, there is considerable overlap with a prior construction by Ye and Barg. It is shown here that essentially the same architecture can be employed to construct MSR codes using vector binary MDS codes as building blocks in place of scalar MDS codes. The advantage here is that computations can now be carried out over a field of smaller size potentially even over the binary field as we demonstrate in an example. Further, we show how the construction can be extended to handle the case of @math under a mild restriction on the choice of helper nodes." ] }
1610.03541
2530161394
One of the primary objectives of a distributed storage system is to reliably store large amounts of source data for long durations using a large number @math of unreliable storage nodes, each with @math bits of storage capacity. Storage nodes fail randomly over time and are replaced with nodes of equal capacity initialized to zeroes, and thus bits are erased at some rate @math . To maintain recoverability of the source data, a repairer continually reads data over a network from nodes at a rate @math , and generates and writes data to nodes based on the read data. The distributed storage source data capacity is the maximum amount of source data that can be reliably stored for long periods of time. We prove the distributed storage source data capacity asymptotically approaches @math as @math and @math grow. This equation expresses a fundamental trade-off between network traffic and storage overhead to reliably store source data.
Each fragment is partitioned into sub-fragments for the construction of @cite_10 , and the number of sub-fragments provably grows quickly as @math grows or as @math approaches zero. For example, there are @math sub-fragments per fragment for @math and @math , and the repairer generates a fragment at a new node from receiving @math non-consecutive sub-fragments over an interface from each of @math nodes. Accessing many non-consecutive sub-fragments directly from a node is sometimes efficient, in which case the amount of data read over interfaces from nodes is equal to the amount of accessed data used to create the read data, but typically it is more efficient to access an entire fragment at a node and locally select the appropriate sub-fragments to send over the interface from the node to the repairer, in which case the amount of data locally accessed at each node is @math times the size of the data counted as read over the interface from the node.
{ "cite_N": [ "@cite_10" ], "mid": [ "2492240243" ], "abstract": [ "This paper presents an explicit construction for an @math regenerating code over a field @math operating at the Minimum Storage Regeneration (MSR) point. The MSR code can be constructed to have rate @math as close to @math as desired, sub-packetization given by @math , for @math , field size no larger than @math and where all code symbols can be repaired with the same minimum data download. The construction modifies a prior construction by Sasidharan et. al. which required far larger field-size. A building block appearing in the construction is a scalar MDS code of block length @math . The code has a simple layered structure with coupling across layers, that allows both node repair and data recovery to be carried out by making multiple calls to a decoder for the scalar MDS code. While this work was carried out independently, there is considerable overlap with a prior construction by Ye and Barg. It is shown here that essentially the same architecture can be employed to construct MSR codes using vector binary MDS codes as building blocks in place of scalar MDS codes. The advantage here is that computations can now be carried out over a field of smaller size potentially even over the binary field as we demonstrate in an example. Further, we show how the construction can be extended to handle the case of @math under a mild restriction on the choice of helper nodes." ] }
1610.03541
2530161394
One of the primary objectives of a distributed storage system is to reliably store large amounts of source data for long durations using a large number @math of unreliable storage nodes, each with @math bits of storage capacity. Storage nodes fail randomly over time and are replaced with nodes of equal capacity initialized to zeroes, and thus bits are erased at some rate @math . To maintain recoverability of the source data, a repairer continually reads data over a network from nodes at a rate @math , and generates and writes data to nodes based on the read data. The distributed storage source data capacity is the maximum amount of source data that can be reliably stored for long periods of time. We prove the distributed storage source data capacity asymptotically approaches @math as @math and @math grow. This equation expresses a fundamental trade-off between network traffic and storage overhead to reliably store source data.
In contrast to the regenerating codes of @cite_11 , @cite_1 , @cite_10 , the repairer described in the proof of directly reads unmodified data over interfaces from nodes, e.g. using HTTP; thus the upper bound on the amount of data read over interfaces from nodes also accounts for all data accessed from nodes.
{ "cite_N": [ "@cite_10", "@cite_1", "@cite_11" ], "mid": [ "2492240243", "", "2119087823" ], "abstract": [ "This paper presents an explicit construction for an @math regenerating code over a field @math operating at the Minimum Storage Regeneration (MSR) point. The MSR code can be constructed to have rate @math as close to @math as desired, sub-packetization given by @math , for @math , field size no larger than @math and where all code symbols can be repaired with the same minimum data download. The construction modifies a prior construction by Sasidharan et. al. which required far larger field-size. A building block appearing in the construction is a scalar MDS code of block length @math . The code has a simple layered structure with coupling across layers, that allows both node repair and data recovery to be carried out by making multiple calls to a decoder for the scalar MDS code. While this work was carried out independently, there is considerable overlap with a prior construction by Ye and Barg. It is shown here that essentially the same architecture can be employed to construct MSR codes using vector binary MDS codes as building blocks in place of scalar MDS codes. The advantage here is that computations can now be carried out over a field of smaller size potentially even over the binary field as we demonstrate in an example. Further, we show how the construction can be extended to handle the case of @math under a mild restriction on the choice of helper nodes.", "", "Peer-to-peer distributed storage systems provide reliable access to data through redundancy spread over nodes across the Internet. A key goal is to minimize the amount of bandwidth used to maintain that redundancy. Storing a file using an erasure code, in fragments spread across nodes, promises to require less redundancy and hence less maintenance bandwidth than simple replication to provide the same level of reliability. 
However, since fragments must be periodically replaced as nodes fail, a key question is how to generate a new fragment in a distributed way while transferring as little data as possible across the network. In this paper, we introduce a general technique to analyze storage architectures that combine any form of coding and replication, as well as presenting two new schemes for maintaining redundancy using erasure codes. First, we show how to optimally generate MDS fragments directly from existing fragments in the system. Second, we introduce a new scheme called regenerating codes which use slightly larger fragments than MDS but have lower overall bandwidth use. We also show through simulation that in realistic environments, regenerating codes can reduce maintenance bandwidth use by 25 or more compared with the best previous design - a hybrid of replication and erasure codes - while simplifying system architecture." ] }
1610.03541
2530161394
One of the primary objectives of a distributed storage system is to reliably store large amounts of source data for long durations using a large number @math of unreliable storage nodes, each with @math bits of storage capacity. Storage nodes fail randomly over time and are replaced with nodes of equal capacity initialized to zeroes, and thus bits are erased at some rate @math . To maintain recoverability of the source data, a repairer continually reads data over a network from nodes at a rate @math , and generates and writes data to nodes based on the read data. The distributed storage source data capacity is the maximum amount of source data that can be reliably stored for long periods of time. We prove the distributed storage source data capacity asymptotically approaches @math as @math and @math grow. This equation expresses a fundamental trade-off between network traffic and storage overhead to reliably store source data.
When regenerating codes @cite_11 , @cite_1 , @cite_10 are used to repair all objects in a system, the ratio of the amount of data read per node failure to the capacity of a node is approximately @math with respect to any with a periodic timing sequence, which is around a factor of two above the lower bound of approximately @math from Equation .
{ "cite_N": [ "@cite_10", "@cite_1", "@cite_11" ], "mid": [ "2492240243", "", "2119087823" ], "abstract": [ "This paper presents an explicit construction for an @math regenerating code over a field @math operating at the Minimum Storage Regeneration (MSR) point. The MSR code can be constructed to have rate @math as close to @math as desired, sub-packetization given by @math , for @math , field size no larger than @math and where all code symbols can be repaired with the same minimum data download. The construction modifies a prior construction by Sasidharan et. al. which required far larger field-size. A building block appearing in the construction is a scalar MDS code of block length @math . The code has a simple layered structure with coupling across layers, that allows both node repair and data recovery to be carried out by making multiple calls to a decoder for the scalar MDS code. While this work was carried out independently, there is considerable overlap with a prior construction by Ye and Barg. It is shown here that essentially the same architecture can be employed to construct MSR codes using vector binary MDS codes as building blocks in place of scalar MDS codes. The advantage here is that computations can now be carried out over a field of smaller size potentially even over the binary field as we demonstrate in an example. Further, we show how the construction can be extended to handle the case of @math under a mild restriction on the choice of helper nodes.", "", "Peer-to-peer distributed storage systems provide reliable access to data through redundancy spread over nodes across the Internet. A key goal is to minimize the amount of bandwidth used to maintain that redundancy. Storing a file using an erasure code, in fragments spread across nodes, promises to require less redundancy and hence less maintenance bandwidth than simple replication to provide the same level of reliability. 
However, since fragments must be periodically replaced as nodes fail, a key question is how to generate a new fragment in a distributed way while transferring as little data as possible across the network. In this paper, we introduce a general technique to analyze storage architectures that combine any form of coding and replication, as well as presenting two new schemes for maintaining redundancy using erasure codes. First, we show how to optimally generate MDS fragments directly from existing fragments in the system. Second, we introduce a new scheme called regenerating codes which use slightly larger fragments than MDS but have lower overall bandwidth use. We also show through simulation that in realistic environments, regenerating codes can reduce maintenance bandwidth use by 25 or more compared with the best previous design - a hybrid of replication and erasure codes - while simplifying system architecture." ] }
1610.03541
2530161394
One of the primary objectives of a distributed storage system is to reliably store large amounts of source data for long durations using a large number @math of unreliable storage nodes, each with @math bits of storage capacity. Storage nodes fail randomly over time and are replaced with nodes of equal capacity initialized to zeroes, and thus bits are erased at some rate @math . To maintain recoverability of the source data, a repairer continually reads data over a network from nodes at a rate @math , and generates and writes data to nodes based on the read data. The distributed storage source data capacity is the maximum amount of source data that can be reliably stored for long periods of time. We prove the distributed storage source data capacity asymptotically approaches @math as @math and @math grow. This equation expresses a fundamental trade-off between network traffic and storage overhead to reliably store source data.
When regenerating codes @cite_11 , @cite_1 , @cite_10 with small @math and @math are used to repair objects as @math grows, the needed peak repairer read rate with respect to a grows far beyond the lower bound average of . This is because there is a good chance that multiple node failures occur over a small interval of time, implying that repair for these failures must occur in a very short interval of time. In contrast, for the repairer described in the proof of , the peak repairer read rate essentially matches the lower bound average .
{ "cite_N": [ "@cite_10", "@cite_1", "@cite_11" ], "mid": [ "2492240243", "", "2119087823" ], "abstract": [ "This paper presents an explicit construction for an @math regenerating code over a field @math operating at the Minimum Storage Regeneration (MSR) point. The MSR code can be constructed to have rate @math as close to @math as desired, sub-packetization given by @math , for @math , field size no larger than @math and where all code symbols can be repaired with the same minimum data download. The construction modifies a prior construction by Sasidharan et. al. which required far larger field-size. A building block appearing in the construction is a scalar MDS code of block length @math . The code has a simple layered structure with coupling across layers, that allows both node repair and data recovery to be carried out by making multiple calls to a decoder for the scalar MDS code. While this work was carried out independently, there is considerable overlap with a prior construction by Ye and Barg. It is shown here that essentially the same architecture can be employed to construct MSR codes using vector binary MDS codes as building blocks in place of scalar MDS codes. The advantage here is that computations can now be carried out over a field of smaller size potentially even over the binary field as we demonstrate in an example. Further, we show how the construction can be extended to handle the case of @math under a mild restriction on the choice of helper nodes.", "", "Peer-to-peer distributed storage systems provide reliable access to data through redundancy spread over nodes across the Internet. A key goal is to minimize the amount of bandwidth used to maintain that redundancy. Storing a file using an erasure code, in fragments spread across nodes, promises to require less redundancy and hence less maintenance bandwidth than simple replication to provide the same level of reliability. 
However, since fragments must be periodically replaced as nodes fail, a key question is how to generate a new fragment in a distributed way while transferring as little data as possible across the network. In this paper, we introduce a general technique to analyze storage architectures that combine any form of coding and replication, as well as presenting two new schemes for maintaining redundancy using erasure codes. First, we show how to optimally generate MDS fragments directly from existing fragments in the system. Second, we introduce a new scheme called regenerating codes which use slightly larger fragments than MDS but have lower overall bandwidth use. We also show through simulation that in realistic environments, regenerating codes can reduce maintenance bandwidth use by 25 or more compared with the best previous design - a hybrid of replication and erasure codes - while simplifying system architecture." ] }
1610.03541
2530161394
One of the primary objectives of a distributed storage system is to reliably store large amounts of source data for long durations using a large number @math of unreliable storage nodes, each with @math bits of storage capacity. Storage nodes fail randomly over time and are replaced with nodes of equal capacity initialized to zeroes, and thus bits are erased at some rate @math . To maintain recoverability of the source data, a repairer continually reads data over a network from nodes at a rate @math , and generates and writes data to nodes based on the read data. The distributed storage source data capacity is the maximum amount of source data that can be reliably stored for long periods of time. We prove the distributed storage source data capacity asymptotically approaches @math as @math and @math grow. This equation expresses a fundamental trade-off between network traffic and storage overhead to reliably store source data.
Thus, regenerating codes @cite_11 , @cite_1 , @cite_10 minimize the peak repairer read rate with respect to the for repairing individual objects using , but they do not provide an optimal peak repairer read rate at the system level when used to store and repair objects.
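The repair-bandwidth trade-off discussed above can be illustrated numerically. The sketch below uses the standard cut-set formulas from the regenerating-codes literature @cite_11 at the MSR point: each node stores a fragment of size alpha = size/k, naive repair rebuilds the object from k whole fragments, while MSR repair downloads alpha/(d-k+1) from each of d helper nodes. The function name, parameters, and the worked numbers are our own illustrative assumptions, not values taken from the paper.

```python
from fractions import Fraction

def repair_traffic(n, k, d, size=Fraction(1)):
    """Network traffic to regenerate one lost fragment of an object of the
    given size, stored across n nodes with an (n, k) MDS code at the MSR
    point. Returns (naive repair read, MSR repair read, local access
    amplification at each helper)."""
    assert k <= d < n
    alpha = size / k                  # fragment stored per node at the MSR point
    naive = k * alpha                 # rebuild the whole object from k fragments
    msr = d * alpha / (d - k + 1)     # MSR: alpha/(d-k+1) from each of d helpers
    local_read_factor = d - k + 1     # if each helper reads its full fragment
    return naive, msr, local_read_factor
```

For example, with n = 14, k = 10 and d = 13 helpers, naive repair reads the whole object, MSR repair reads only 13/40 of it, and each helper locally accesses d - k + 1 = 4 times the data it actually sends over its interface, matching the access amplification described above.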
{ "cite_N": [ "@cite_10", "@cite_1", "@cite_11" ], "mid": [ "2492240243", "", "2119087823" ], "abstract": [ "This paper presents an explicit construction for an @math regenerating code over a field @math operating at the Minimum Storage Regeneration (MSR) point. The MSR code can be constructed to have rate @math as close to @math as desired, sub-packetization given by @math , for @math , field size no larger than @math and where all code symbols can be repaired with the same minimum data download. The construction modifies a prior construction by Sasidharan et. al. which required far larger field-size. A building block appearing in the construction is a scalar MDS code of block length @math . The code has a simple layered structure with coupling across layers, that allows both node repair and data recovery to be carried out by making multiple calls to a decoder for the scalar MDS code. While this work was carried out independently, there is considerable overlap with a prior construction by Ye and Barg. It is shown here that essentially the same architecture can be employed to construct MSR codes using vector binary MDS codes as building blocks in place of scalar MDS codes. The advantage here is that computations can now be carried out over a field of smaller size potentially even over the binary field as we demonstrate in an example. Further, we show how the construction can be extended to handle the case of @math under a mild restriction on the choice of helper nodes.", "", "Peer-to-peer distributed storage systems provide reliable access to data through redundancy spread over nodes across the Internet. A key goal is to minimize the amount of bandwidth used to maintain that redundancy. Storing a file using an erasure code, in fragments spread across nodes, promises to require less redundancy and hence less maintenance bandwidth than simple replication to provide the same level of reliability. 
However, since fragments must be periodically replaced as nodes fail, a key question is how to generate a new fragment in a distributed way while transferring as little data as possible across the network. In this paper, we introduce a general technique to analyze storage architectures that combine any form of coding and replication, as well as presenting two new schemes for maintaining redundancy using erasure codes. First, we show how to optimally generate MDS fragments directly from existing fragments in the system. Second, we introduce a new scheme called regenerating codes which use slightly larger fragments than MDS but have lower overall bandwidth use. We also show through simulation that in realistic environments, regenerating codes can reduce maintenance bandwidth use by 25 or more compared with the best previous design - a hybrid of replication and erasure codes - while simplifying system architecture." ] }
1610.03266
2952893155
The problem of merging sorted lists in the least number of pairwise comparisons has been solved completely only for a few special cases. Graham and Karp taocp independently discovered that the tape merge algorithm is optimal in the worst case when the two lists have the same size. In the seminal papers, Stockmeyer and Yao yao , Murphy and Paull 3k3 , and Christen christen1978optimality independently showed when the lists to be merged are of size @math and @math satisfying @math , the tape merge algorithm is optimal in the worst case. This paper extends this result by showing that the tape merge algorithm is optimal in the worst case whenever the size of one list is no larger than 1.52 times the size of the other. The main tool we used to prove lower bounds is Knuth's adversary methods taocp . In addition, we show that the lower bound cannot be improved to 1.8 via Knuth's adversary methods. We also develop a new inequality about Knuth's adversary methods, which might be interesting in its own right. Moreover, we design a simple procedure to achieve constant improvement of the upper bounds for @math .
Besides the worst-case complexity, the average-case complexity of merge problems has also been investigated. Tanner @cite_24 designed an algorithm that uses at most @math comparisons on average. The average-case complexity of insertive merging algorithms, as well as of binary merge, has also been investigated @cite_29 @cite_19 .
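The tape merge algorithm whose worst-case optimality is discussed above is the standard two-pointer merge. A minimal sketch with a comparison counter (our own illustrative code, not from the paper) makes it easy to check the worst-case count m + n - 1 against the information-theoretic lower bound ceil(log2 C(m+n, m)).

```python
import math

def tape_merge(a, b):
    """Two-pointer ('tape') merge of sorted lists a and b.
    Returns the merged list and the number of pairwise comparisons used."""
    i = j = comparisons = 0
    out = []
    while i < len(a) and j < len(b):
        comparisons += 1
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])  # at most one of these two tails is non-empty
    out.extend(b[j:])
    return out, comparisons

def info_lower_bound(m, n):
    """Information-theoretic lower bound on worst-case comparisons
    needed to merge sorted lists of sizes m and n."""
    return math.ceil(math.log2(math.comb(m + n, m)))
```

On fully interleaved inputs of sizes m = n = 3, tape merge uses 2n - 1 = 5 comparisons, which equals ceil(log2 C(6,3)) = 5, consistent with the Graham-Karp result that tape merge is worst-case optimal when the two lists have the same size.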
{ "cite_N": [ "@cite_24", "@cite_19", "@cite_29" ], "mid": [ "2171900177", "", "2001345341" ], "abstract": [ "A simple partitioning algorithm for merging two disjoint linearly ordered sets is given, and an upper bound on the average number of comparisons required is established. The upper bound is @math , where n is the number of elements in the larger of the two sets, m the number of the smaller, and @math . An immediate corollary is that any sorting problem can be done with an average number of comparisons within @math of the information theoretic bound using repeated merges; it does not matter what the merging order used is. Although the provable bound is @math over the lower bound, computations indicate that the algorithm will asymptotically make only @math more comparisons than the lower bound. The algorithm is compared with the Hwang-Lin algorithm, and modifications to improve average efficiency of this well known algorithm are given.", "", "We derive an asymptotic equivalent to the average running time of the merging algorithm of Hwang and Lin applied on two linearly ordered lists of numbers a 1 <a 2 <. . . <a m and b 1 <b 2 < . . . <b n when m and n tend to infinity in such a way that the ratio ( ) = m n is constant. We show that the distribution of the running time is concentrated around its expectation except when ( ) is a power of 2. When ( ) is a power of 2, we obtain an asymptotic equivalent for the expectation of the running time." ] }
1610.03321
2531161093
Keystroke dynamics have been extensively used in psycholinguistic and writing research to gain insights into cognitive processing. But do keystroke logs contain actual signal that can be used to learn better natural language processing models? We postulate that keystroke dynamics contain information about syntactic structure that can inform shallow syntactic parsing. To test this hypothesis, we explore labels derived from keystroke logs as auxiliary task in a multi-task bidirectional Long Short-Term Memory (bi-LSTM). Our results show promising results on two shallow syntactic parsing tasks, chunking and CCG supertagging. Our model is simple, has the advantage that data can come from distinct sources, and produces models that are significantly better than models trained on the text annotations alone.
A recent related line of work explores eye-tracking data to inform sentence compression @cite_18 and to induce part-of-speech tags @cite_0 . Similarly, there are recent studies that predict fMRI activation from reading @cite_14 or use fMRI data for POS induction @cite_7 . The distinct advantage of keystroke dynamics is that the data are easy to collect, inexpensive, and non-intrusive.
{ "cite_N": [ "@cite_0", "@cite_18", "@cite_14", "@cite_7" ], "mid": [ "2252143990", "2964194677", "2170167574", "630306204" ], "abstract": [ "This paper investigates to what extent grammatical functions of a word can be predicted from gaze features obtained using eye-tracking. A recent study showed that reading behavior can be used to predict coarse-grained part of speech, but we go beyond this, and show that gaze features can also be used to make more finegrained distinctions between grammatical functions, e.g., subjects and objects. In addition, we show that gaze features can be used to improve a discriminative transition-based dependency parser.", "We show how eye-tracking corpora can be used to improve sentence compression models, presenting a novel multi-task learning algorithm based on multi-layer LSTMs. We obtain performance competitive with or better than state-of-the-art approaches.", "Many statistical models for natural language processing exist, including context-based neural networks that (1) model the previously seen context as a latent feature vector, (2) integrate successive words into the context using some learned representation (embedding), and (3) compute output probabilities for incoming words given the context. On the other hand, brain imaging studies have suggested that during reading, the brain (a) continuously builds a context from the successive words and every time it encounters a word it (b) fetches its properties from memory and (c) integrates it with the previous context with a degree of effort that is inversely proportional to how probable the word is. This hints to a parallelism between the neural networks and the brain in modeling context (1 and a), representing the incoming words (2 and b) and integrating it (3 and c). We explore this parallelism to better understand the brain processes and the neural networks representations. 
We study the alignment between the latent vectors used by neural networks and brain activity observed via Magnetoencephalography (MEG) when subjects read a story. For that purpose we apply the neural network to the same text the subjects are reading, and explore the ability of these three vector representations to predict the observed word-by-word brain activity. Our novel results show that: before a new word i is read, brain activity is well predicted by the neural network latent representation of context and the predictability decreases as the brain integrates the word and changes its own representation of context. Secondly, the neural network embedding of word i can predict the MEG activity when word i is presented to the subject, revealing that it is correlated with the brain’s own representation of word i. Moreover, we obtain that the activity is predicted in different regions of the brain with varying delay. The delay is consistent with the placement of each region on the processing pathway that starts in the visual cortex and moves to higher level regions. Finally, we show that the output probability computed by the neural networks agrees with the brain’s own assessment of the probability of word i, as it can be used to predict the brain activity after the word i’s properties have been fetched from memory and the brain is in the process of integrating it into the context.", "" ] }
1610.02714
2530771947
Egocentric, or first-person, vision, which became popular in recent years with the emergence of wearable technology, differs from exocentric (third-person) vision in some distinguishable ways, one being that the camera wearer is generally not visible in the video frames. Recent work has addressed action and object recognition in egocentric videos, as well as biometric extraction from first-person videos. Height estimation can be a useful feature for both soft biometrics and object tracking. Here, we propose a method of estimating the height of an egocentric camera without any calibration or reference points. We used both traditional computer vision approaches and deep learning to determine the visual cues that result in the best height estimation. We introduce a framework inspired by two-stream networks, comprising two Convolutional Neural Networks, one based on spatial information and one based on information given by optical flow in a frame. Given an egocentric video as input to the framework, our model yields a height estimate as output. We also incorporate late fusion to learn a combination of temporal and spatial cues. Comparing our model with the methods we used as baselines, we achieve height estimates for videos with a Mean Average Error of 14.04 cm over a range of 103 cm of data, and a classification accuracy for relative height (tall, medium or short) of up to 93.75%, where chance level is 33%.
With the increasing popularity of egocentric vision in recent years, much research is focused on solving computer vision problems from the first-person perspective. Egocentric, or first-person, vision problems are unlike classic computer vision problems, since the person whose actions are being recorded is not captured. Egocentric vision poses unique challenges such as non-static cameras, unusual viewpoints, motion blur @cite_2 , variations in illumination with the varying positions of the camera wearer, and real-time video analysis requirements @cite_8 . Tan @cite_26 demonstrate that the challenges posed by egocentric vision can be handled more efficiently if analyzed differently than exocentric vision. Much work has been done to address classic computer vision problems, now from an egocentric perspective, such as object understanding @cite_28 @cite_29 , object detection @cite_20 @cite_18 @cite_2 , object tracking @cite_3 @cite_22 , and activity recognition @cite_24 @cite_4 @cite_15 @cite_11 .
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_4", "@cite_22", "@cite_8", "@cite_28", "@cite_29", "@cite_3", "@cite_24", "@cite_2", "@cite_15", "@cite_20", "@cite_11" ], "mid": [ "48480781", "2106640360", "1947050545", "1410291062", "", "2025581566", "2434184923", "2337242548", "854053868", "2204609240", "2081490348", "", "2167626157" ], "abstract": [ "We focus on the task of hand pose estimation from egocentric viewpoints. For this problem specification, we show that depth sensors are particularly informative for extracting near-field interactions of the camera wearer with his her environment. Despite the recent advances in full-body pose estimation using Kinect-like sensors, reliable monocular hand pose estimation in RGB-D images is still an unsolved problem. The problem is exacerbated when considering a wearable sensor and a first-person camera viewpoint: the occlusions inherent to the particular camera view and the limitations in terms of field of view make the problem even more difficult. We propose to use task and viewpoint specific synthetic training exemplars in a discriminative detection framework. We also exploit the depth features for a sparser and faster detection. We evaluate our approach on a real-world annotated dataset and propose a novel annotation technique for accurate 3D hand labelling even in case of partial occlusions.", "First-person view (FPV) video data is set to proliferate rapidly, due to many consumer wearable-camera devices coming onto the market. Research into FPV (or \"egocentric\") vision is also becoming more common in the computer vision community. However, it is still unclear what the fundamental characteristics of such data are. How is it really different from third-person view (TPV) data? Can all FPV data be treated the same? In this first attempt to approach these questions in a quantitative and empirical manner, we analyzed a meta-collection of 21 FPV and TPV datasets totaling more than 165 hours of video. 
We performed the first quantitative characterization of FPV videos over multiple datasets, encompassing virtually all available FPV datasets. Validating this characterization, linear classifiers trained on low-level features to perform FPV-versus-TPV classification achieved good baseline performance. Accuracy peaked at 81% for 2-minute clips, but 67% accuracy was achieved even with 1-second clips. Our low-level features are fast to compute and do not require annotation. Overall, our work uncovered insights regarding the basic nature and characteristics of FPV data.", "We address the challenging problem of recognizing the camera wearer's actions from videos captured by an egocentric camera. Egocentric videos encode a rich set of signals regarding the camera wearer, including head movement, hand pose and gaze information. We propose to utilize these mid-level egocentric cues for egocentric action recognition. We present a novel set of egocentric features and show how they can be combined with motion and object features. The result is a compact representation with superior performance. In addition, we provide the first systematic evaluation of motion, object and egocentric cues in egocentric action recognition. Our benchmark leads to several surprising findings. These findings uncover the best practices for egocentric actions, with a significant performance boost over all previous state-of-the-art methods on three publicly available datasets.", "A method for multiface tracking in low frame rate egocentric videos is proposed (eBoT). eBoT generates a tracklet for each detected face and groups similar tracklets. eBoT extracts a prototype for each group of tracklets and estimates its confidence. eBoT is robust to drastic changes in location and appearance of faces. eBoT is robust to partial and severe occlusions and is able to localize them. Wearable cameras offer a hands-free way to record egocentric images of daily experiences, where social events are of special interest.
The first step towards detection of social events is to track the appearance of multiple persons involved in them. In this paper, we propose a novel method to find correspondences of multiple faces in low temporal resolution egocentric videos acquired through a wearable camera. This kind of photo-stream imposes additional challenges to the multi-tracking problem with respect to conventional videos. Due to the free motion of the camera and to its low temporal resolution, abrupt changes in the field of view, in illumination condition and in the target location are highly frequent. To overcome such difficulties, we propose a multi-face tracking method that generates a set of tracklets through finding correspondences along the whole sequence for each detected face and takes advantage of the tracklets redundancy to deal with unreliable ones. Similar tracklets are grouped into the so called extended bag-of-tracklets (eBoT), which is aimed to correspond to a specific person. Finally, a prototype tracklet is extracted for each eBoT, where the occurred occlusions are estimated by relying on a new measure of confidence. We validated our approach over an extensive dataset of egocentric photo-streams and compared it to state of the art methods, demonstrating its effectiveness and robustness.", "", "Egocentric cameras are becoming more popular, introducing increasing volumes of video in which the biases and framing of traditional photography are replaced with those of natural viewing tendencies. This paradigm enables new applications, including novel studies of social interaction and human development. Recent work has focused on identifying the camera wearer's hands as a first step towards more complex analysis. In this paper, we study how to disambiguate and track not only the observer's hands but also those of social partners. 
We present a probabilistic framework for modeling paired interactions that incorporates the spatial, temporal, and appearance constraints inherent in egocentric video. We test our approach on a dataset of over 30 minutes of video from six pairs of subjects.", "Knowing how hands move and what object is being manipulated are two key sub-tasks for analyzing first-person (egocentric) action. However, lack of fully annotated hand data as well as imprecise foreground segmentation make either sub-task challenging. This work aims to explicitly address these two issues via introducing a cascaded interactional targeting (i.e., infer both hand and active object regions) deep neural network. Firstly, a novel EM-like learning framework is proposed to train the pixel-level deep convolutional neural network (DCNN) by seamlessly integrating weakly supervised data (i.e., massive bounding box annotations) with a small set of strongly supervised data (i.e., fully annotated hand segmentation maps) to achieve state-of-the-art hand segmentation performance. Secondly, the resulting high-quality hand segmentation maps are further paired with the corresponding motion maps and object feature maps, in order to explore the contextual information among object, motion and hand to generate interactional foreground regions (operated objects). The resulting interactional target maps (hand + active object) from our cascaded DCNN are further utilized to form discriminative action representation. Experiments show that our framework has achieved the state-of-the-art egocentric action recognition performance on the benchmark dataset Activities of Daily Living (ADL).", "The way people look in terms of facial attributes (ethnicity, hair color, facial hair, etc.) and the clothes or accessories they wear (sunglasses, hat, hoodies, etc.) is highly dependent on geo-location and weather condition, respectively.
This work explores, for the first time, the use of this contextual information, as people with wearable cameras walk across different neighborhoods of a city, in order to learn a rich feature representation for facial attribute classification, without the costly manual annotation required by previous methods. By tracking the faces of casual walkers on more than 40 hours of egocentric video, we are able to cover tens of thousands of different identities and automatically extract nearly 5 million pairs of images connected by or from different face tracks, along with their weather and location context, under pose and lighting variations. These image pairs are then fed into a deep network that preserves similarity of images connected by the same track, in order to capture identity-related attribute features, and optimizes for location and weather prediction to capture additional facial attribute features. Finally, the network is fine-tuned with manually annotated samples. We perform an extensive experimental analysis on wearable data and two standard benchmark datasets based on web images (LFWA and CelebA). Our method outperforms by a large margin a network trained from scratch. Moreover, even without using manually annotated identity labels for pre-training as in previous methods, our approach achieves results that are better than the state of the art.", "While egocentric video is becoming increasingly popular, browsing it is very difficult. In this paper we present a compact 3D Convolutional Neural Network (CNN) architecture for long-term activity recognition in egocentric videos. Recognizing long-term activities enables us to temporally segment (index) long and unstructured egocentric videos. Existing methods for this task are based on hand tuned features derived from visible objects, location of hands, as well as optical flow. Given a sparse optical flow volume as input, our CNN classifies the camera wearer's activity. 
We obtain classification accuracy of 89%, which outperforms the current state-of-the-art by 19%. Additional evaluation is performed on an extended egocentric video dataset, classifying twice as many categories as the current state-of-the-art. Furthermore, our CNN is able to recognize whether a video is egocentric or not with 99.2% accuracy, up by 24% from the current state-of-the-art. To better understand what the network actually learns, we propose a novel visualization of CNN kernels as flow fields.", "Hands appear very often in egocentric video, and their appearance and pose give important cues about what people are doing and what they are paying attention to. But existing work in hand detection has made strong assumptions that work well in only simple scenarios, such as with limited interaction with other people or in lab settings. We develop methods to locate and distinguish between hands in egocentric video using strong appearance models with Convolutional Neural Networks, and introduce a simple candidate region generation approach that outperforms existing techniques at a fraction of the computational cost. We show how these high-quality bounding boxes can be used to create accurate pixelwise hand regions, and as an application, we investigate the extent to which hand segmentation alone can distinguish between different activities. We evaluate these techniques on a new dataset of 48 first-person videos of people interacting in realistic environments, with pixel-level ground truth for over 15,000 hand instances.", "This paper introduces the concept of first-person animal activity recognition, the problem of recognizing activities from the viewpoint of an animal (e.g., a dog). Similar to first-person activity recognition scenarios where humans wear cameras, our approach estimates activities performed by an animal wearing a camera. This enables monitoring and understanding of natural animal behaviors even when there are no people around them.
Its applications include automated logging of animal behaviors for medical biology experiments, monitoring of pets, and investigation of wildlife patterns. In this paper, we construct a new dataset composed of first-person animal videos obtained by mounting a camera on each of the four pet dogs. Our new dataset consists of 10 activities containing a heavy fair amount of ego-motion. We implemented multiple baseline approaches to recognize activities from such videos while utilizing multiple types of global local motion features. Animal ego-actions as well as human-animal interactions are recognized with the baseline approaches, and we discuss experimental results.", "", "This paper discusses the problem of recognizing interaction-level human activities from a first-person viewpoint. The goal is to enable an observer (e.g., a robot or a wearable camera) to understand 'what activity others are performing to it' from continuous video inputs. These include friendly interactions such as 'a person hugging the observer' as well as hostile interactions like 'punching the observer' or 'throwing objects to the observer', whose videos involve a large amount of camera ego-motion caused by physical interactions. The paper investigates multi-channel kernels to integrate global and local motion information, and presents a new activity learning recognition methodology that explicitly considers temporal structures displayed in first-person activity videos. In our experiments, we not only show classification results with segmented videos, but also confirm that our new approach is able to detect activities from continuous videos reliably." ] }
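The two-stream, late-fusion design described in the abstract above (a spatial CNN over RGB frames plus a temporal CNN over optical flow, fused into a single height regression) can be sketched as follows. This is an illustrative PyTorch sketch, not the authors' architecture; all layer sizes, channel counts and the number of stacked flow maps are assumptions:

```python
import torch
import torch.nn as nn

class TwoStreamHeightNet(nn.Module):
    """Illustrative two-stream regressor: one CNN over an RGB frame
    (spatial stream), one over a stack of optical-flow maps (temporal
    stream), fused late by concatenation into a single height output.
    All sizes here are hypothetical, not taken from the paper."""

    def __init__(self, flow_channels=10):
        super().__init__()

        def stream(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())

        self.spatial = stream(3)               # RGB frame
        self.temporal = stream(flow_channels)  # stacked flow fields
        self.head = nn.Linear(64, 1)           # late fusion + regression

    def forward(self, rgb, flow):
        # Late fusion: concatenate the two stream embeddings, then regress.
        fused = torch.cat([self.spatial(rgb), self.temporal(flow)], dim=1)
        return self.head(fused)

model = TwoStreamHeightNet()
rgb = torch.zeros(2, 3, 64, 64)    # batch of 2 dummy RGB frames
flow = torch.zeros(2, 10, 64, 64)  # 5 stacked (dx, dy) flow fields
height = model(rgb, flow)          # one height estimate per clip
```

Fusing after the last pooled layer (rather than averaging per-stream predictions) lets the regressor learn a weighted combination of spatial and temporal cues, which is the "late fusion" the abstract refers to.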
1610.02714
2530771947
Egocentric, or first-person, vision, which became popular in recent years with the emergence of wearable technology, differs from exocentric (third-person) vision in some distinguishable ways, one being that the camera wearer is generally not visible in the video frames. Recent work has addressed action and object recognition in egocentric videos, as well as biometric extraction from first-person videos. Height estimation can be a useful feature for both soft biometrics and object tracking. Here, we propose a method of estimating the height of an egocentric camera without any calibration or reference points. We used both traditional computer vision approaches and deep learning to determine the visual cues that result in the best height estimation. We introduce a framework inspired by two-stream networks, comprising two Convolutional Neural Networks, one based on spatial information and one based on information given by optical flow in a frame. Given an egocentric video as input to the framework, our model yields a height estimate as output. We also incorporate late fusion to learn a combination of temporal and spatial cues. Comparing our model with the methods we used as baselines, we achieve height estimates for videos with a Mean Average Error of 14.04 cm over a range of 103 cm of data, and a classification accuracy for relative height (tall, medium or short) of up to 93.75%, where chance level is 33%.
Recently, Zhou @cite_29 proposed a cascaded interactional targeting deep neural network for egocentric action recognition. In @cite_13 , saliency cues are exploited to predict objects of attention in egocentric videos. Another line of work summarizes hours-long videos recorded by camera wearers and predicts important objects, people and interesting events in life-logging data @cite_0 @cite_5 @cite_7 @cite_30 . Although the camera wearer is not captured, and thus apparently preserves his anonymity, much can nevertheless be told about the person @cite_19 . Peleg @cite_19 used camera motion cues to demonstrate how a person's identity can be revealed. Estimating the height of objects or people in a video with a calibrated camera @cite_17 @cite_14 @cite_27 @cite_1 has also been reported. In @cite_9 , dynamic motion signatures and static scene structures proved to be significant cues for pose estimation of the camera wearer when most of the significant body parts are invisible. Our problem, however, is unique, since the person whose height we are trying to estimate is not in the image frame, and the use of an uncalibrated camera further complicates the problem.
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_7", "@cite_29", "@cite_9", "@cite_1", "@cite_0", "@cite_19", "@cite_27", "@cite_5", "@cite_13", "@cite_17" ], "mid": [ "2106229755", "", "2120645068", "2434184923", "", "2400282315", "1910566948", "2302062357", "586894203", "2071711566", "", "2531951740" ], "abstract": [ "We present a video summarization approach for egocentric or “wearable” camera data. Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer's day. In contrast to traditional keyframe selection techniques, the resulting summary focuses on the most important objects and people with which the camera wearer interacts. To accomplish this, we develop region cues indicative of high-level saliency in egocentric video — such as the nearness to hands, gaze, and frequency of occurrence — and learn a regressor to predict the relative importance of any new region based on these cues. Using these predictions and a simple form of temporal event detection, our method selects frames for the storyboard that reflect the key object-driven happenings. Critically, the approach is neither camera-wearer-specific nor object-specific; that means the learned importance metric need not be trained for a given user or context, and it can predict the importance of objects and people that have never been seen previously. Our results with 17 hours of egocentric data show the method's promise relative to existing techniques for saliency and summarization.", "", "We present a video summarization approach that discovers the story of an egocentric video. Given a long input video, our method selects a short chain of video sub shots depicting the essential events. Inspired by work in text analysis that links news articles over time, we define a random-walk based metric of influence between sub shots that reflects how visual objects contribute to the progression of events. 
Using this influence metric, we define an objective for the optimal k-subshot summary. Whereas traditional methods optimize a summary's diversity or representativeness, ours explicitly accounts for how one sub-event \"leads to\" another, which, critically, captures event connectivity beyond simple object co-occurrence. As a result, our summaries provide a better sense of story. We apply our approach to over 12 hours of daily activity video taken from 23 unique camera wearers, and systematically evaluate its quality compared to multiple baselines with 34 human subjects.", "Knowing how hands move and what object is being manipulated are two key sub-tasks for analyzing first-person (egocentric) action. However, lack of fully annotated hand data as well as imprecise foreground segmentation make either sub-task challenging. This work aims to explicitly address these two issues via introducing a cascaded interactional targeting (i.e., infer both hand and active object regions) deep neural network. Firstly, a novel EM-like learning framework is proposed to train the pixel-level deep convolutional neural network (DCNN) by seamlessly integrating weakly supervised data (i.e., massive bounding box annotations) with a small set of strongly supervised data (i.e., fully annotated hand segmentation maps) to achieve state-of-the-art hand segmentation performance. Secondly, the resulting high-quality hand segmentation maps are further paired with the corresponding motion maps and object feature maps, in order to explore the contextual information among object, motion and hand to generate interactional foreground regions (operated objects). The resulting interactional target maps (hand + active object) from our cascaded DCNN are further utilized to form discriminative action representation.
Experiments show that our framework has achieved the state-of-the-art egocentric action recognition performance on the benchmark dataset Activities of Daily Living (ADL).", "", "This paper presents a novel technique for the estimation of the height of an object using a single camera view. In the proposed method, the only information required is the knowledge about the pose of the camera with respect to the world (i.e., height and pitch angle of the camera with respect to the ground) and a vanishing point. In the developed theory, the focal length may also be known, but in the proposed experiments it has not been employed: an approximation for small pitch angles has been taken into account and the consequent committed error has been then analysed. The presented method gives accurate results for any object placed in unstructured environments, regardless of the relative distance from the camera. The method has been tested in a series of outdoor and indoor environments, and the experimental results are presented in this paper.", "With the proliferation of wearable cameras, the number of videos of users documenting their personal lives using such devices is rapidly increasing. Since such videos may span hours, there is an important need for mechanisms that represent the information content in a compact form (i.e., shorter videos which are more easily browsable sharable). Motivated by these applications, this paper focuses on the problem of egocentric video summarization. Such videos are usually continuous with significant camera shake and other quality issues. Because of these reasons, there is growing consensus that direct application of standard video summarization tools to such data yields unsatisfactory performance. In this paper, we demonstrate that using gaze tracking information (such as fixation and saccade) significantly helps the summarization task. 
It allows meaningful comparison of different image frames and enables deriving personalized summaries (gaze provides a sense of the camera wearer's intent). We formulate a summarization model which captures common-sense properties of a good summary, and show that it can be solved as a submodular function maximization with partition matroid constraints, opening the door to a rich body of work from combinatorial optimization. We evaluate our approach on a new gaze-enabled egocentric video dataset (over 15 hours), which will be a valuable standalone resource.", "Egocentric cameras are being worn by an increasing number of users, among them many security forces worldwide. GoPro cameras already penetrated the mass market, reporting substantial increase in sales every year. As head-worn cameras do not capture the photographer, it may seem that the anonymity of the photographer is preserved even when the video is publicly distributed. We show that camera motion, as can be computed from the egocentric video, provides unique identity information. The photographer can be reliably recognized from a few seconds of video captured when walking. The proposed method achieves more than 90 recognition accuracy in cases where the random success rate is only 3 . Applications can include theft prevention by locking the camera when not worn by its rightful owner. Searching video sharing services (e.g. YouTube) for egocentric videos shot by a specific photographer may also become possible. An important message in this paper is that photographers should be aware that sharing egocentric video will compromise their anonymity, even when their face is not visible.", "Abstract Aim: The aim was to evaluate height measurements made with the single view metrology method and to investigate the influence of standing position and different phases of gait and running on vertical height. Method: Ten healthy men were recorded simultaneously by a 2D web camera and a 3D motion analysis system. 
They performed six trials, three standing and three during gait and running. The vertical height was measured with the single view metrology method and in Qualisys Track Manager. The results were compared for evaluation. The vertical height in the different postures was compared to the actual height. Results: The measurements made with the single view metrology method were significantly higher than the measurements made with Qualisys Track Manager (p < 0.05). Conclusion: The single view metrology method measured vertical heights with a mean error of +2.30 cm. Posture influences vertical body height. Midstance in walking is the position where vertical height corresponds best with actual height; in running it is the non-support phase.", "We present a video summarization approach for egocentric or \"wearable\" camera data. Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer's day. In contrast to traditional keyframe selection techniques, the resulting summary focuses on the most important objects and people with which the camera wearer interacts. To accomplish this, we develop region cues indicative of high-level saliency in egocentric video, such as the nearness to hands, gaze, and frequency of occurrence, and learn a regressor to predict the relative importance of any new region based on these cues. Using these predictions and a simple form of temporal event detection, our method selects frames for the storyboard that reflect the key object-driven happenings. We adjust the compactness of the final summary given either an importance selection criterion or a length budget; for the latter, we design an efficient dynamic programming solution that accounts for importance, visual uniqueness, and temporal displacement.
Critically, the approach is neither camera-wearer-specific nor object-specific; that means the learned importance metric need not be trained for a given user or context, and it can predict the importance of objects and people that have never been seen previously. Our results on two egocentric video datasets show the method's promise relative to existing techniques for saliency and summarization.", "", "We address the problem of estimating a person's body height from a single uncalibrated image. The novelty of our work lies in that we handle two difficult cases not previously addressed in the literature: (i) the image contains no reference length in the background scene to indicate absolute scale, (ii) the image contains the upper body part only. In a nutshell, our method combines well-known ideas from projective geometry and single-view metrology with prior probabilistic statistical knowledge of human anthropometry, in a Bayesian-like framework. The method is demonstrated with synthetic (randomly generated) data as well as a dataset of 96 frontal images." ] }
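The calibrated-camera height estimation cited above works from the camera pose and a vanishing point. For the simplest case, a level pinhole camera over a flat ground plane, the computation reduces to similar triangles, and the unknown focal length and depth cancel. A minimal sketch with purely hypothetical numbers:

```python
def object_height(cam_height, y_top, y_bottom, y_horizon):
    """Height of a vertical object standing on a flat ground plane,
    viewed by a level pinhole camera mounted cam_height metres above
    the ground.  Image y coordinates grow downward; y_horizon is the
    row of the horizon (the vanishing line of the ground plane).  By
    similar triangles, the horizon-to-foot image distance is
    proportional to the camera height and the top-to-foot distance to
    the object height, so focal length and depth cancel."""
    return cam_height * (y_bottom - y_top) / (y_bottom - y_horizon)

# Toy (hypothetical) numbers: camera 1.6 m up, horizon at row 0,
# object foot projecting to row 160 and object top to row -20.
h = object_height(1.6, y_top=-20.0, y_bottom=160.0, y_horizon=0.0)
```

The egocentric problem in the abstract above is harder precisely because no such calibration (camera height, pitch, horizon) is available.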
1610.02766
2949923111
Let @math be a discrete Gaussian free field in a two-dimensional box @math of side length @math with Dirichlet boundary conditions. We study the Liouville first passage percolation, i.e., the shortest path metric where each vertex is given a weight of @math for some @math . We show that for sufficiently small but fixed @math , with probability tending to @math as @math , all geodesics between vertices of macroscopic Euclidean distances simultaneously have (the conjecturally unique) length exponent strictly larger than 1.
In @cite_4 the chemical distance (i.e., the graph distance in the induced open cluster) for the percolation of level sets of the two-dimensional DGFF was studied; in particular, Theorem 1.1 of @cite_4 implies that there exists a path of dimension 1 joining the two boundaries of an annulus which has Liouville FPP weight @math . This can be seen as a complement of our Theorem . In @cite_9 , a non-universality result was proved on the weight exponent for the geodesic among log-correlated Gaussian fields, which suggests the subtlety of estimating the weight exponent. We would like to point out that the proof of our main result, Theorem , is expected to be adaptable to general log-correlated Gaussian fields with @math -scale invariant kernels as in @cite_1 . However, in light of @cite_9 , there seems to be no reason to expect that the geodesic dimension is universal among log-correlated fields.
{ "cite_N": [ "@cite_1", "@cite_9", "@cite_4" ], "mid": [ "2022385268", "2964208310", "" ], "abstract": [ "In this paper, we study Gaussian multiplicative chaos in the critical case. We show that the so-called derivative martingale, introduced in the context of branching Brownian motions and branching random walks, converges almost surely (in all dimensions) to a random measure with full support. We also show that the limiting measure has no atom. In connection with the derivative martingale, we write explicit conjectures about the glassy phase of logcorrelated Gaussian potentials and the relation with the asymptotic expansion of the maximum of log-correlated Gaussian random variables.", "We consider first passage percolation (FPP) where the vertex weight is given by the exponential of two-dimensional log-correlated Gaussian fields. Our work is motivated by understanding the discrete analog for the random metric associated with Liouville quantum gravity (LQG), which roughly corresponds to the exponential of a two-dimensional Gaussian free field (GFF). The particular focus of the present paper is an aspect of universality for such FPP among the family of log-correlated Gaussian fields. More precisely, we construct a family of log-correlated Gaussian fields, and show that the FPP distance between two typically sampled vertices (according to the LQG measure) is (N^ 1+ O( ) ), where N is the side length of the box and ( ) can be made arbitrarily small if we tune a certain parameter in our construction. That is, the exponents can be arbitrarily close to 1. Combined with a recent work of the first author and Goswami on an upper bound for this exponent when the underlying field is a GFF, our result implies that such exponent is not universal among the family of log-correlated Gaussian fields.", "" ] }
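The Liouville first passage percolation metric in the abstract above is concrete enough to simulate: it is a Dijkstra shortest path on the grid where visiting a vertex v costs the exponential of gamma times the field value at v. The sketch below uses i.i.d. standard Gaussians as a deliberate, loudly simplified surrogate for the DGFF (the DGFF's logarithmic correlations are exactly what makes the true weight exponent subtle):

```python
import heapq
import math
import random

def liouville_fpp_distance(phi, gamma, src, dst):
    """Dijkstra on the grid graph induced by the dict phi mapping
    (x, y) -> field value.  Passing through vertex v costs
    exp(gamma * phi[v]); the Liouville FPP distance is the minimum
    total cost over paths from src to dst, endpoints included."""
    dist = {src: math.exp(gamma * phi[src])}
    heap = [(dist[src], src)]
    while heap:
        d, (x, y) = heapq.heappop(heap)
        if (x, y) == dst:
            return d
        if d > dist[(x, y)]:
            continue  # stale heap entry
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb in phi:
                nd = d + math.exp(gamma * phi[nb])
                if nd < dist.get(nb, float("inf")):
                    dist[nb] = nd
                    heapq.heappush(heap, (nd, nb))
    return float("inf")

# Surrogate field: i.i.d. Gaussians stand in for the DGFF (a loud
# simplification; the real field is log-correlated, not independent).
random.seed(0)
N = 20
phi = {(x, y): random.gauss(0.0, 1.0) for x in range(N) for y in range(N)}
d = liouville_fpp_distance(phi, gamma=0.1, src=(0, 0), dst=(N - 1, N - 1))
```

With gamma = 0 (all vertex weights 1) the distance between opposite corners of an N x N box is exactly 2N - 1, matching the length exponent 1 of the Euclidean geodesic; the paper's result is that for small positive gamma the exponent is strictly larger.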
1610.03098
2952463445
In this paper, we propose a novel neural approach for paraphrase generation. Conventional paraphrase generation methods either leverage hand-written rules and thesauri-based alignments, or use statistical machine learning principles. To the best of our knowledge, this work is the first to explore deep learning models for paraphrase generation. Our primary contribution is a stacked residual LSTM network, where we add residual connections between LSTM layers. This allows for efficient training of deep LSTMs. We evaluate our model and other state-of-the-art deep learning models on three different datasets: PPDB, WikiAnswers and MSCOCO. Evaluation results demonstrate that our model outperforms sequence to sequence, attention-based and bidirectional LSTM models on BLEU, METEOR, TER and an embedding-based sentence similarity metric.
Prior approaches to paraphrase generation have applied relatively different methodologies, typically using knowledge-driven approaches or statistical machine translation (SMT) principles. Knowledge-driven methods for paraphrase generation @cite_0 utilize hand-crafted rules @cite_33 or automatically learned complex paraphrase patterns @cite_5 . Other paraphrase generation methods use thesaurus-based @cite_31 or semantic analysis-driven natural language generation approaches @cite_6 to generate paraphrases. In contrast, , show the effectiveness of SMT techniques for paraphrase generation given adequate monolingual parallel corpus extracted from comparable news articles. , propose a phrase-based SMT framework for sentential paraphrase generation by using a large aligned monolingual corpus of news headlines. , propose a combination of multiple resources to learn phrase-based paraphrase tables and corresponding feature functions to devise a log-linear SMT model. Other models generate application-specific paraphrases @cite_5 , leverage bilingual parallel corpora @cite_42 or apply a multi-pivot approach to output candidate paraphrases @cite_9 .
{ "cite_N": [ "@cite_33", "@cite_9", "@cite_42", "@cite_6", "@cite_0", "@cite_5", "@cite_31" ], "mid": [ "1776056560", "2120687633", "2143927888", "1980519283", "2051593977", "2103081392", "2045274176" ], "abstract": [ "The design and implementation of a paraphrase component for a natural language question-answering system (CO-OP) is presented. The component is used to produce a paraphrase of a user's question to the system, which is presented to the user before the question is evaluated and answered. A major point made is the role of given and new information in formulating a paraphrase that differs in a meaningful way from the user's question. A description is also given of the transformational grammar that is used by the paraphraser.", "This paper proposes a method that leverages multiple machine translation (MT) engines for paraphrase generation (PG). The method includes two stages. Firstly, we use a multi-pivot approach to acquire a set of candidate paraphrases for a source sentence S. Then, we employ two kinds of techniques, namely the selection-based technique and the decoding-based technique, to produce a best paraphrase T for S using the candidates acquired in the first stage. Experimental results show that: (1) The multi-pivot approach is effective for obtaining plenty of valuable candidate paraphrases. (2) Both the selection-based and decoding-based techniques can make good use of the candidates and produce high-quality paraphrases. Moreover, these two techniques are complementary. (3) The proposed method outperforms a state-of-the-art paraphrase generation approach.", "Previous work has used monolingual parallel corpora to extract and generate paraphrases. We show that this task can be done using bilingual parallel corpora, a much more commonly available resource. Using alignment techniques from phrase-based statistical machine translation, we show how paraphrases in one language can be identified using a phrase in another language as a pivot. 
We define a paraphrase probability that allows paraphrases extracted from a bilingual parallel corpus to be ranked using translation probabilities, and show how it can be refined to take contextual information into account. We evaluate our paraphrase extraction and ranking methods using a set of manual word alignments, and contrast the quality with paraphrases extracted from automatic alignments.", "Paraphrases, which stem from the variety of lexical and grammatical means of expressing meaning available in a language, pose challenges for a sentence generation system. In this paper, we discuss the generation of paraphrases from predicate argument structure using a simple, uniform generation methodology. Central to our approach are lexico-grammatical resources which pair elementary semantic structures with their syntactic realization and a simple but powerful mechanism for combining resources.", "The task of paraphrasing is inherently familiar to speakers of all languages. Moreover, the task of automatically generating or extracting semantic equivalences for the various units of language-words, phrases, and sentences-is an important part of natural language processing (NLP) and is being increasingly employed to improve the performance of several NLP applications. In this article, we attempt to conduct a comprehensive and application-independent survey of data-driven phrasal and sentential paraphrase generation methods, while also conveying an appreciation for the importance and potential use of paraphrases in the field of NLP research. Recent work done in manual and automatic construction of paraphrase corpora is also examined. We also discuss the strategies used for evaluating paraphrase generation techniques and briefly explore some future trends in paraphrase generation.", "Paraphrase generation (PG) is important in plenty of NLP applications. However, the research of PG is far from enough. 
In this paper, we propose a novel method for statistical paraphrase generation (SPG), which can (1) achieve various applications based on a uniform statistical model, and (2) naturally combine multiple resources to enhance the PG performance. In our experiments, we use the proposed method to generate paraphrases for three different applications. The results show that the method can be easily transformed from one application to another and generate valuable and interesting paraphrases.", "This paper describes the University of North Texas SubFinder system. The system is able to provide the most likely set of substitutes for a word in a given context, by combining several techniques and knowledge sources. SubFinder has successfully participated in the best and out of ten (oot) tracks in the SemEval lexical substitution task, consistently ranking in the first or second place." ] }
1610.03098
2952463445
In this paper, we propose a novel neural approach for paraphrase generation. Conventional paraphrase generation methods either leverage hand-written rules and thesauri-based alignments, or use statistical machine learning principles. To the best of our knowledge, this work is the first to explore deep learning models for paraphrase generation. Our primary contribution is a stacked residual LSTM network, where we add residual connections between LSTM layers. This allows for efficient training of deep LSTMs. We evaluate our model and other state-of-the-art deep learning models on three different datasets: PPDB, WikiAnswers and MSCOCO. Evaluation results demonstrate that our model outperforms sequence to sequence, attention-based and bidirectional LSTM models on BLEU, METEOR, TER and an embedding-based sentence similarity metric.
To the best of our knowledge, this is the first work on using residual connections with recurrent neural networks. Very recently, we found that used residual GRU to show an improvement in image compression rates for a given quality over JPEG. Another variant of residual network called @cite_14 , which uses dense connections over every layer, has been shown to be effective for image recognition tasks achieving state-of-the-art results in CIFAR and SVHN datasets. Such works further validate the efficacy of adding residual connections for training deep networks.
{ "cite_N": [ "@cite_14" ], "mid": [ "2511730936" ], "abstract": [ "Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1) 2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at this https URL ." ] }
1610.02876
2531820137
Local Process Model (LPM) discovery is focused on the mining of a set of process models where each model describes the behavior represented in the event log only partially, i.e. subsets of possible events are taken into account to create so-called local process models. Often such smaller models provide valuable insights into the behavior of the process, especially when no adequate and comprehensible single overall process model exists that is able to describe the traces of the process from start to end. The practical application of LPM discovery is however hindered by computational issues in the case of logs with many activities (problems may already occur when there are more than 17 unique activities). In this paper, we explore three heuristics to discover subsets of activities that lead to useful log projections with the goal of speeding up LPM discovery considerably while still finding high-quality LPMs. We found that a Markov clustering approach to create projection sets results in the largest improvement of execution time, with discovered LPMs still being better than with the use of randomly generated activity sets of the same size. Another heuristic, based on log entropy, yields a more moderate speedup, but enables the discovery of higher quality LPMs. The third heuristic, based on the relative information gain, shows unstable performance: for some data sets the speedup and LPM quality are higher than with the log entropy based method, while for other data sets there is no speedup at all.
@cite_15 describe an approach to generate overlapping sets of activities from a causal dependency graph and use it to speed up region-theory-based Petri net synthesis from a transition-system representation of the event log. The activities are grouped together in such a way that the synthesized Petri nets can be combined into a single Petri net generating all the traces of the log. In a more recent paper, Carmona @cite_18 describes an approach to discover a set of projections from an event log based on Principal Component Analysis (PCA).
{ "cite_N": [ "@cite_15", "@cite_18" ], "mid": [ "1910074535", "2010830289" ], "abstract": [ "The goal of Process Mining is to extract process models from logs of a system. Among the possible models to represent a process, Petri nets is an ideal candidate due to its graphical representation, clear semantics and expressive power. The theory of regions can be used to transform a log into a Petri net, but unfortunately the transformation requires algorithms with high complexity. This paper provides techniques to overcome this limitation. Either by using decomposition techniques, or by clustering events in the log and working on projections, the proposed approach can be used to widen the applicability of classical region-based techniques.", "Traces are everywhere from information systems that store their continuous executions, to any type of health care applications that record each patient's history. The transformation of a set of traces into a mathematical model that can be used for a formal reasoning is therefore of great value. The discovery of process models out of traces is an interesting problem that has received significant attention in the last years. This is a central problem in Process Mining, a novel area which tries to close the cycle between system design and validation, by resorting on methods for the automated discovery, analysis and extension of process models. In this work, algorithms for the derivation of a Petri net from a set of traces are presented. The methods are grounded on the theory of regions, which maps a model in the state-based domain (e.g., an automata) into a model in the event-based domain (e.g., a Petri net). When dealing with large examples, a direct application of the theory of regions will suffer from two problems: one is the state-explosion problem, i.e., the resources required by algorithms that work at the state-level are sometimes prohibitive. 
This paper introduces decomposition and projection techniques to alleviate the complexity of the region-based algorithms for Petri net discovery, thus extending its applicability to handle large inputs. A second problem is known as the overfitting problem for region-based approaches, which informally means that, in order to represent with high accuracy the trace set, the models obtained are often spaghetti-like. By focusing on special type of processes called conservative and for which an elegant theory and efficient algorithms can be devised, the techniques presented in this paper alleviate the overfitting problem and moreover incorporate structure into the models generated." ] }
1610.02482
2531166052
Autonomous crop monitoring at high spatial and temporal resolution is a critical problem in precision agriculture. While Structure from Motion and Multi-View Stereo algorithms can finely reconstruct the 3D structure of a field with low-cost image sensors, these algorithms fail to capture the dynamic nature of continuously growing crops. In this paper we propose a 4D reconstruction approach to crop monitoring, which employs a spatio-temporal model of dynamic scenes that is useful for precision agriculture applications. Additionally, we provide a robust data association algorithm to address the problem of large appearance changes due to scenes being viewed from different angles at different points in time, which is critical to achieving 4D reconstruction. Finally, we collected a high quality dataset with ground truth statistics to evaluate the performance of our method. We demonstrate that our 4D reconstruction approach provides models that are qualitatively correct with respect to visual appearance and quantitatively accurate when measured against the ground truth geometric properties of the monitored crops.
Automated crop monitoring is a key problem in precision agriculture, used to maximize crop yield while minimizing cost and environmental impact. Traditional crop monitoring techniques are based on measurements by human operators, which is both expensive and labor intensive. Early work on automated crop monitoring mainly relies on satellite imagery @cite_45 , which is expensive and lacks sufficient resolution in both space and time. Recently, crop monitoring with Unmanned Aerial Vehicles (UAVs) @cite_27 @cite_42 @cite_16 @cite_37 @cite_40 and ground vehicles @cite_24 @cite_14 @cite_48 @cite_33 has garnered interest from both agricultural and robotics communities due to the abilities of these systems to gather large quantities of data with high spatial and temporal resolution.
{ "cite_N": [ "@cite_37", "@cite_14", "@cite_33", "@cite_48", "@cite_42", "@cite_24", "@cite_27", "@cite_45", "@cite_40", "@cite_16" ], "mid": [ "", "", "1892082890", "", "2049492608", "1989863789", "", "2132077228", "2410496954", "1659349535" ], "abstract": [ "", "", "We present a vision system that automatically predicts yield in vineyards accurately and with high resolution. Yield estimation traditionally requires tedious hand measurement, which is destructive, sparse in sampling, and inaccurate. Our method is efficient, high-resolution, and it is the first such system evaluated in realistic experimentation over several years and hundreds of vines spread over several acres of different vineyards. Other existing research is limited to small test sets of 10 vines or less, or just isolated grape clusters, with tightly controlled image acquisition and with artificially induced yield distributions. The system incorporates cameras and illumination mounted on a vehicle driving through the vineyard. We process images by exploiting the three prominent visual cues of texture, color, and shape into a strong classifier that detects berries even when they are of similar color to the vine leaves. We introduce methods to maximize the spatial and the overall accuracy of the yield estimates by optimizing the relationship between image measurements and yield. Our experimentation is conducted over four growing seasons in several wine and table-grape vineyards. These are the first such results from experimentation that is sufficiently sized for fair evaluation against true yield variation and real-world imaging conditions from a moving vehicle. Analysis of the results demonstrates yield estimates that capture up to 75 of spatial yield variance and with an average error between 3 and 11 of total yield.", "", "Remote sensing by Unmanned Aerial Vehicles (UAVs) is changing the way agriculture operates by increasing the spatial-temporal resolution of data collection. 
Micro-UAVs have the potential to further improve and enrich the data collected by operating close to the crops, enabling the collection of higher spatio-temporal resolution data. In this paper, we present a UAV-mounted measurement system that utilizes a laser scanner to compute crop heights, a critical indicator of crop health. The system filters, transforms, and analyzes the cluttered range data in real-time to determine the distance to the ground and to the top of the crops. We assess the system in an indoor testbed and in a corn field. Our findings indicate that despite the dense canopy and highly variable sensor readings, we can precisely fly over crops and measure its height to within 5cm of measurements gathered using current measurement technology.", "An approach is described for automatic assessment of crop and weed area in images of widely spaced (0.25 m) cereal crops, captured from a tractor mounted camera. A form of vegetative index, which is invariant over the range of natural daylight illumination, was computed from the red, green and blue channels of a conventional CCD camera. The transformed image can be segmented into soil and vegetative components using a single fixed threshold. A previously reported algorithm was applied to robustly locate the crop rows. Assessment zones were automatically positioned; for crop growth directly over the crop rows, and for weed growth between the rows. The proportion of crop and weed pixels counted was compared with a manual assessment of area density on the basis of high resolution plan view photographs of the same area; this was performed for views with a range of crop and weed levels. The correlation of the manual and automatic measures was examined, and used to obtain a calibration for the automatic approach. The results of mapping of a small field, at two times, are presented. 
The results of the automated mapping appear to be consistent with manual assessment.", "", "Low resolution satellite imagery has been extensively used for crop monitoring and yield forecasting for over 30 years and plays an important role in a growing number of operational systems. The combination of their high temporal frequency with their extended geographical coverage generally associated with low costs per area unit makes these images a convenient choice at both national and regional scales. Several qualitative and quantitative approaches can be clearly distinguished, going from the use of low resolution satellite imagery as the main predictor of final crop yield to complex crop growth models where remote sensing-derived indicators play different roles, depending on the nature of the model and on the availability of data measured on the ground. Vegetation performance anomaly detection with low resolution images continues to be a fundamental component of early warning and drought monitoring systems at the regional scale. For applications at more detailed scales, the limitations created by the mixed nature of low resolution pixels are being progressively reduced by the higher resolution offered by new sensors, while the continuity of existing systems remains crucial for ensuring the", "Unmanned aerial vehicles (UAVs) have the potential to significantly impact early detection and monitoring of plant diseases. In this paper, we present preliminary work in developing a UAV-mounted sensor suite for detection of citrus greening disease, a major threat to Florida citrus production. We propose a depth-invariant sensing methodology for measuring reflectance of polarized amber light, a metric which has been found to measure starch accumulation in greening-infected leaves. 
We describe the implications of adding depth information to this method, including the use of machine learning models to discriminate between healthy and infected leaves with validation accuracies up to 93 . Additionally, we discuss stipulations and challenges of use of the system with UAV platforms. This sensing system has the potential to allow for rapid scanning of groves to determine the spread of the disease, especially in areas where infection is still in early stages, including citrus farms in California. Although presented in the context of citrus greening disease, the methods can be applied to a variety of plant pathology studies, enabling timely monitoring of plant health—impacting scientists, growers, and policymakers.", "Addressing the challenges of feeding the burgeoning world population with limited resources requires innovation in sustainable, efficient farming. The practice of precision agriculture offers many benefits towards addressing these challenges, such as improved yield and efficient use of such resources as water, fertilizer and pesticides. We describe the design and development of a light-weight, multi-spectral 3D imaging device that can be used for automated monitoring in precision agriculture. The sensor suite consists of a laser range scanner, multi-spectral cameras, a thermal imaging camera, and navigational sensors. We present techniques to extract four key data products — plant morphology, canopy volume, leaf area index, and fruit counts — using the sensor suite. We demonstrate its use with two systems: multi-rotor micro aerial vehicles and on a human-carried, shoulder-mounted harness. We show results of field experiments conducted in collaboration with growers and agronomists in vineyards, apple orchards and orange groves." ] }
1610.02482
2531166052
Autonomous crop monitoring at high spatial and temporal resolution is a critical problem in precision agriculture. While Structure from Motion and Multi-View Stereo algorithms can finely reconstruct the 3D structure of a field with low-cost image sensors, these algorithms fail to capture the dynamic nature of continuously growing crops. In this paper we propose a 4D reconstruction approach to crop monitoring, which employs a spatio-temporal model of dynamic scenes that is useful for precision agriculture applications. Additionally, we provide a robust data association algorithm to address the problem of large appearance changes due to scenes being viewed from different angles at different points in time, which is critical to achieving 4D reconstruction. Finally, we collected a high quality dataset with ground truth statistics to evaluate the performance of our method. We demonstrate that our 4D reconstruction approach provides models that are qualitatively correct with respect to visual appearance and quantitatively accurate when measured against the ground truth geometric properties of the monitored crops.
Computer vision is a powerful tool for monitoring crops and estimating yields with low-cost image sensors @cite_24 @cite_33 @cite_22 @cite_21 . However, the majority of this work only utilizes 2D information in individual images, failing to recover 3D geometric information from sequences of images. Structure from Motion (SfM) @cite_29 is a mature discipline within the computer vision community that enables the recovery of 3D geometric information from images. When combined with Multi-View Stereo (MVS) approaches @cite_6 , these methods can be used to obtain dense, fine-grained 3D reconstructions. The major barrier to the direct use of these methods for crop monitoring is that traditional SfM and MVS methods only work for static scenes, and thus cannot solve 3D reconstruction problems with continuously growing crops.
{ "cite_N": [ "@cite_22", "@cite_33", "@cite_29", "@cite_21", "@cite_6", "@cite_24" ], "mid": [ "", "1892082890", "2163446794", "2501369945", "2129404737", "1989863789" ], "abstract": [ "", "We present a vision system that automatically predicts yield in vineyards accurately and with high resolution. Yield estimation traditionally requires tedious hand measurement, which is destructive, sparse in sampling, and inaccurate. Our method is efficient, high-resolution, and it is the first such system evaluated in realistic experimentation over several years and hundreds of vines spread over several acres of different vineyards. Other existing research is limited to small test sets of 10 vines or less, or just isolated grape clusters, with tightly controlled image acquisition and with artificially induced yield distributions. The system incorporates cameras and illumination mounted on a vehicle driving through the vineyard. We process images by exploiting the three prominent visual cues of texture, color, and shape into a strong classifier that detects berries even when they are of similar color to the vine leaves. We introduce methods to maximize the spatial and the overall accuracy of the yield estimates by optimizing the relationship between image measurements and yield. Our experimentation is conducted over four growing seasons in several wine and table-grape vineyards. These are the first such results from experimentation that is sufficiently sized for fair evaluation against true yield variation and real-world imaging conditions from a moving vehicle. Analysis of the results demonstrates yield estimates that capture up to 75 of spatial yield variance and with an average error between 3 and 11 of total yield.", "We present a system that can reconstruct 3D geometry from large, unorganized collections of photographs such as those found by searching for a given city (e.g., Rome) on Internet photo-sharing sites. 
Our system is built on a set of new, distributed computer vision algorithms for image matching and 3D reconstruction, designed to maximize parallelism at each stage of the pipeline and to scale gracefully with both the size of the problem and the amount of available computation. Our experimental results demonstrate that it is now possible to reconstruct city-scale image collections with more than a hundred thousand images in less than a day.", "This paper presents a novel approach to fruit detection using deep convolutional neural networks. The aim is to build an accurate, fast and reliable fruit detection system, which is a vital element of an autonomous agricultural robotic platform; it is a key element for fruit yield estimation and automated harvesting. Recent work in deep neural networks has led to the development of a state-of-the-art object detector termed Faster Region-based CNN (Faster R-CNN). We adapt this model, through transfer learning, for the task of fruit detection using imagery obtained from two modalities: colour (RGB) and Near-Infrared (NIR). Early and late fusion methods are explored for combining the multi-modal (RGB and NIR) information. This leads to a novel multi-modal Faster R-CNN model, which achieves state-of-the-art results compared to prior work with the F1 score, which takes into account both precision and recall performances improving from 0 . 807 to 0 . 838 for the detection of sweet pepper. In addition to improved accuracy, this approach is also much quicker to deploy for new fruits, as it requires bounding box annotation rather than pixel-level annotation (annotating bounding boxes is approximately an order of magnitude quicker to perform). 
The model is retrained to perform the detection of seven fruits, with the entire process taking four hours to annotate and train the new model per fruit.", "This paper proposes a novel algorithm for multiview stereopsis that outputs a dense set of small rectangular patches covering the surfaces visible in the images. Stereopsis is implemented as a match, expand, and filter procedure, starting from a sparse set of matched keypoints, and repeatedly expanding these before using visibility constraints to filter away false matches. The keys to the performance of the proposed algorithm are effective techniques for enforcing local photometric consistency and global visibility constraints. Simple but effective methods are also proposed to turn the resulting patch model into a mesh which can be further refined by an algorithm that enforces both photometric consistency and regularization constraints. The proposed approach automatically detects and discards outliers and obstacles and does not require any initialization in the form of a visual hull, a bounding box, or valid depth ranges. We have tested our algorithm on various data sets including objects with fine surface details, deep concavities, and thin structures, outdoor scenes observed from a restricted set of viewpoints, and \"crowded\" scenes where moving obstacles appear in front of a static structure of interest. A quantitative evaluation on the Middlebury benchmark [1] shows that the proposed method outperforms all others submitted so far for four out of the six data sets.", "An approach is described for automatic assessment of crop and weed area in images of widely spaced (0.25 m) cereal crops, captured from a tractor mounted camera. A form of vegetative index, which is invariant over the range of natural daylight illumination, was computed from the red, green and blue channels of a conventional CCD camera. The transformed image can be segmented into soil and vegetative components using a single fixed threshold. 
A previously reported algorithm was applied to robustly locate the crop rows. Assessment zones were automatically positioned; for crop growth directly over the crop rows, and for weed growth between the rows. The proportion of crop and weed pixels counted was compared with a manual assessment of area density on the basis of high resolution plan view photographs of the same area; this was performed for views with a range of crop and weed levels. The correlation of the manual and automatic measures was examined, and used to obtain a calibration for the automatic approach. The results of mapping of a small field, at two times, are presented. The results of the automated mapping appear to be consistent with manual assessment." ] }
1610.02482
2531166052
Autonomous crop monitoring at high spatial and temporal resolution is a critical problem in precision agriculture. While Structure from Motion and Multi-View Stereo algorithms can finely reconstruct the 3D structure of a field with low-cost image sensors, these algorithms fail to capture the dynamic nature of continuously growing crops. In this paper we propose a 4D reconstruction approach to crop monitoring, which employs a spatio-temporal model of dynamic scenes that is useful for precision agriculture applications. Additionally, we provide a robust data association algorithm to address the problem of large appearance changes due to scenes being viewed from different angles at different points in time, which is critical to achieving 4D reconstruction. Finally, we collected a high quality dataset with ground truth statistics to evaluate the performance of our method. We demonstrate that our 4D reconstruction approach provides models that are qualitatively correct with respect to visual appearance and quantitatively accurate when measured against the ground truth geometric properties of the monitored crops.
Change detection in 2D images @cite_41 @cite_9 , 3D point clouds @cite_25 , and volumetric reconstructions @cite_8 @cite_26 has been extensively studied. @cite_3 generates a 3D reconstruction of changed parts from 2D images, and the approach has been applied to city-scale change detection and 3D reconstruction @cite_32 @cite_15 . 3D change detection has also been applied to urban construction site monitoring @cite_4 . However, this type of scene change detection, which focuses on local changes, does not apply to precision agriculture applications where crops are changing over time.
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_8", "@cite_41", "@cite_9", "@cite_32", "@cite_3", "@cite_15", "@cite_25" ], "mid": [ "199409740", "2072921235", "2098079560", "2149345424", "", "2053750911", "2097838991", "2042323109", "2205574202" ], "abstract": [ "This paper describes an approach to reconstruct the complete history of a 3-d scene over time from imagery. The proposed approach avoids rebuilding 3-d models of the scene at each time instant. Instead, the approach employs an initial 3-d model which is continuously updated with changes in the environment to form a full 4-d representation. This updating scheme is enabled by a novel algorithm that infers 3-d changes with respect to the model at one time step from images taken at a subsequent time step. This algorithm can effectively detect changes even when the illumination conditions between image collections are significantly different. The performance of the proposed framework is demonstrated on four challenging datasets in terms of 4-d modeling accuracy as well as quantitative evaluation of 3-d change detection.", "This paper proposes an automated method for monitoring changes of 3D building elements using unordered photo collections as well as Building Information Models (BIM) that pertain information about geometry and relationships of elements. This task is particularly challenging as existing construction photographs are taken under different lighting and viewpoint conditions, are uncalibrated and extremely cluttered by equipment and people. Given a set of these images, our system first uses structure-from-motion, multi-view stereo, and voxel coloring algorithms to calibrate cameras, reconstruct the construction scene, quantize the scene into voxels and traverse and label the voxels for observed occupancy. The BIM is subsequently fused into the observed scene by a registration-step and voxels are traversed and labeled for expected visibility. 
Next, a machine learning scheme built upon a Bayesian model is proposed that automatically detects and tracks building elements in the presence of occlusions. Finally, the system enables the expected and reconstructed elements to be explored with an interactive, image-based 3D viewer where construction progress deviations are automatically color-coded over the BIM. We present promising results on several challenging construction photo collections under different lighting conditions and severe occlusions.", "This paper examines the problem of detecting changes in a 3-d scene from a sequence of images, taken by cameras with arbitrary but known pose. No prior knowledge of the state of normal appearance and geometry of object surfaces is assumed, and abnormal changes can occur in any image of the sequence. To the authors' knowledge, this paper is the first to address the change detection problem in such a general framework. Existing change detection algorithms that exploit multiple image viewpoints typically can detect only motion changes or assume a planar world geometry which cannot cope effectively with appearance changes due to occlusion and un-modeled 3-d scene geometry (ego-motion parallax). The approach presented here can manage the complications of unknown and sometimes changing world surfaces by maintaining a 3-d voxel-based model, where probability distributions for surface occupancy and image appearance are stored in each voxel. The probability distributions at each voxel are continuously updated as new images are received. The key question of convergence of this joint estimation problem is answered by a formal proof based on realistic assumptions about the nature of real world scenes.
A series of experiments are presented that evaluate change detection accuracy under laboratory-controlled conditions as well as aerial reconnaissance scenarios.", "This paper proposes a method for detecting temporal changes of the three-dimensional structure of an outdoor scene from its multi-view images captured at two separate times. For the images, we consider those captured by a camera mounted on a vehicle running in a city street. The method estimates scene structures probabilistically, not deterministically, and based on their estimates, it evaluates the probability of structural changes in the scene, where the inputs are the similarity of the local image patches among the multi-view images. The aim of the probabilistic treatment is to maximize the accuracy of change detection, behind which there is our conjecture that although it is difficult to estimate the scene structures deterministically, it should be easier to detect their changes. The proposed method is compared with the methods that use multi-view stereo (MVS) to reconstruct the scene structures of the two time points and then differentiate them to detect changes. The experimental results show that the proposed method outperforms such MVS-based methods.", "", "In this paper, we propose a method to detect changes in the geometry of a city using panoramic images captured by a car driving around the city. We designed our approach to account for all the challenges involved in a large scale application of change detection, such as inaccuracies in the input geometry, errors in the geo-location data of the images, as well as the limited amount of information due to sparse imagery. We evaluated our approach on an area of 6 square kilometers inside a city, using 3420 images downloaded from Google Street View.
These images, besides being publicly available, are also a good example of panoramic images captured with a driving vehicle, and hence demonstrate all the possible challenges resulting from such an acquisition. We also quantitatively compared the performance of our approach with respect to a ground truth, as well as to prior work. This evaluation shows that our approach outperforms the current state of the art.", "In this paper, we propose an efficient technique to detect changes in the geometry of an urban environment using some images observing its current state. The proposed method can be used to significantly optimize the process of updating the 3D model of a city changing over time, by restricting this process to only those areas where changes are detected. With this application in mind, we designed our algorithm to specifically detect only structural changes in the environment, ignoring any changes in its appearance, and ignoring also all the changes which are not relevant for update purposes, such as cars, people etc. As a by-product, the algorithm also provides a coarse geometry of the detected changes. The performance of the proposed method was tested on four different kinds of urban environments and compared with two alternative techniques.", "We propose a method to detect changes in the geometry of a city using panoramic images captured by a car driving around the city. The proposed method can be used to significantly optimize the process of updating the 3D model of an urban environment that is changing over time, by restricting this process to only those areas where changes are detected. With this application in mind, we designed our algorithm to specifically detect only structural changes in the environment, ignoring any changes in its appearance, and ignoring also all the changes which are not relevant for update purposes such as cars, people etc.
The approach also accounts for the challenges involved in a large scale application of change detection, such as inaccuracies in the input geometry, errors in the geo-location data of the images as well as the limited amount of information due to sparse imagery. We evaluated our approach on a small scale setup using high resolution, densely captured images and a large scale setup covering an entire city using instead the more realistic scenario of low resolution, sparsely captured images. A quantitative evaluation was also conducted for the large scale setup consisting of @math images.", "Abstract Mobile laser scanning (MLS) has become a popular technique for road inventory, building modelling, infrastructure management, mobility assessment, etc. Meanwhile, due to the high mobility of MLS systems, it is easy to revisit interested areas. However, change detection using MLS data of street environment has seldom been studied. In this paper, an approach that combines occupancy grids and a distance-based method for change detection from MLS point clouds is proposed. Unlike conventional occupancy grids, our occupancy-based method models space based on scanning rays and local point distributions in 3D without voxelization. A local cylindrical reference frame is presented for the interpolation of occupancy between rays according to the scanning geometry. The Dempster–Shafer theory (DST) is utilized for both intra-data evidence fusion and inter-data consistency assessment. Occupancy of reference point cloud is fused at the location of target points and then the consistency is evaluated directly on the points. A point-to-triangle (PTT) distance-based method is combined to improve the occupancy-based method. Because it is robust to penetrable objects, e.g. vegetation, which cause self-conflicts when modelling occupancy. The combined method tackles irregular point density and occlusion problems, also eliminates false detections on penetrable objects." ] }
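The occupancy-based comparison underlying several of the cited change-detection systems reduces to a minimal core: voxelize two scans and diff the occupied sets. The real systems add ray casting, Dempster-Shafer evidence fusion and point-to-triangle distances; the grid size and example points below are illustrative:

```python
# Minimal sketch of voxel-occupancy change detection between two
# point clouds, in the spirit of the occupancy-grid methods above.

def voxelize(points, size=1.0):
    """Map 3D points to the set of occupied voxel indices."""
    return {(int(x // size), int(y // size), int(z // size))
            for (x, y, z) in points}

def detect_changes(reference, target, size=1.0):
    """Voxels that appeared in or disappeared from the target scan."""
    ref, tgt = voxelize(reference, size), voxelize(target, size)
    return {"appeared": tgt - ref, "disappeared": ref - tgt}

ref_scan = [(0.2, 0.3, 0.1), (5.1, 0.2, 0.0)]
new_scan = [(0.4, 0.6, 0.2), (9.5, 1.1, 0.3)]   # second object moved
changes = detect_changes(ref_scan, new_scan)
```

Set differencing alone is brittle for penetrable objects such as vegetation, which is exactly why the last cited work combines occupancy evidence with a distance-based consistency check.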
1610.02579
2949477300
The visual cues from multiple support regions of different sizes and resolutions are complementary in classifying a candidate box in object detection. Effective integration of local and contextual visual cues from these regions has become a fundamental problem in object detection. In this paper, we propose a gated bi-directional CNN (GBD-Net) to pass messages among features from different support regions during both feature learning and feature extraction. Such message passing can be implemented through convolution between neighboring support regions in two directions and can be conducted in various layers. Therefore, local and contextual visual patterns can validate the existence of each other by learning their nonlinear relationships and their close interactions are modeled in a more complex way. It is also shown that message passing is not always helpful but dependent on individual samples. Gated functions are therefore needed to control message transmission, whose on-or-off states are controlled by extra visual evidence from the input sample. The effectiveness of GBD-Net is shown through experiments on three object detection datasets, ImageNet, Pascal VOC2007 and Microsoft COCO. This paper also shows the details of our approach in winning the ImageNet object detection challenge of 2016, with source code provided on this https URL .
Selective search @cite_0 obtained region proposals by hierarchically grouping segmentation results. Edgeboxes @cite_22 evaluated the number of contours enclosed by a bounding box to indicate the likelihood of an object. Faster RCNN @cite_5 and CRAFT @cite_28 obtained region proposals with the help of convolutional neural networks. Pont-Tuset and Van Gool @cite_20 studied the statistical differences between the Pascal-VOC dataset @cite_31 and the Microsoft COCO dataset @cite_24 to obtain better object proposals. Region proposal can be considered a cascade of classifiers. At the first stage, millions of windows that are confidently classified as background are removed by region proposal methods, and the remaining hundreds or thousands are then used for classification. In this paper, we adopt an improved version of CRAFT for providing the region proposals.
{ "cite_N": [ "@cite_22", "@cite_28", "@cite_0", "@cite_24", "@cite_5", "@cite_31", "@cite_20" ], "mid": [ "7746136", "2339367607", "1577168949", "1861492603", "2613718673", "2031489346", "2198291924" ], "abstract": [ "The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96% object recall at an overlap threshold of 0.5 and over 75% recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy.", "Object detection is a fundamental problem in image understanding. One popular solution is the R-CNN framework and its fast versions. They decompose the object detection problem into two cascaded easier tasks: 1) generating object proposals from images, 2) classifying proposals into various object categories. Despite that we are handling with two relatively easier tasks, they are not solved perfectly and there's still room for improvement. In this paper, we push the \"divide and conquer\" solution even further by dividing each task into two sub-tasks.
We call the proposed method \"CRAFT\" (Cascade Region-proposal-network And FasT-rcnn), which tackles each task with a carefully designed network cascade. We show that the cascade structure helps in both tasks: in proposal generation, it provides more compact and better localized object proposals; in object classification, it reduces false positives (mainly between ambiguous categories) by capturing both inter- and intra-category variances. CRAFT achieves consistent and considerable improvement over the state-of-the-art on object detection benchmarks like PASCAL VOC 07/12 and ILSVRC.", "", "We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 object types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck.
In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https://github.com/ShaoqingRen/faster_rcnn.", "The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension.", "Computer vision in general, and object proposals in particular, are nowadays strongly influenced by the databases on which researchers evaluate the performance of their algorithms.
This paper studies the transition from the Pascal Visual Object Challenge dataset, which has been the benchmark of reference for the last years, to the updated, bigger, and more challenging Microsoft Common Objects in Context. We first review and deeply analyze the new challenges, and opportunities, that this database presents. We then survey the current state of the art in object proposals and evaluate it focusing on how it generalizes to the new dataset. In sight of these results, we propose various lines of research to take advantage of the new benchmark and improve the techniques. We explore one of these lines, which leads to an improvement over the state of the art of +5.2%." ] }
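The proposal-quality numbers quoted above (recall at IoU overlap thresholds of 0.5 and 0.7) all rest on the intersection-over-union of axis-aligned boxes. A minimal sketch, with illustrative ground-truth and proposal boxes:

```python
# IoU and proposal recall, the metrics used to compare region
# proposal methods above. Boxes are (x1, y1, x2, y2); the example
# boxes are made up for illustration.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def recall(ground_truth, proposals, threshold=0.5):
    """Fraction of ground-truth boxes covered by some proposal."""
    hits = sum(any(iou(g, p) >= threshold for p in proposals)
               for g in ground_truth)
    return hits / len(ground_truth)

gt = [(0, 0, 10, 10), (20, 20, 30, 30)]
props = [(1, 1, 11, 11), (50, 50, 60, 60)]
```

Here the first ground-truth box is covered at IoU 81/119 (about 0.68), so it counts as a hit at threshold 0.5 but not at 0.7, which is why recall drops as the overlap requirement tightens.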
1610.02579
2949477300
The visual cues from multiple support regions of different sizes and resolutions are complementary in classifying a candidate box in object detection. Effective integration of local and contextual visual cues from these regions has become a fundamental problem in object detection. In this paper, we propose a gated bi-directional CNN (GBD-Net) to pass messages among features from different support regions during both feature learning and feature extraction. Such message passing can be implemented through convolution between neighboring support regions in two directions and can be conducted in various layers. Therefore, local and contextual visual patterns can validate the existence of each other by learning their nonlinear relationships and their close interactions are modeled in a more complex way. It is also shown that message passing is not always helpful but dependent on individual samples. Gated functions are therefore needed to control message transmission, whose on-or-off states are controlled by extra visual evidence from the input sample. The effectiveness of GBD-Net is shown through experiments on three object detection datasets, ImageNet, Pascal VOC2007 and Microsoft COCO. This paper also shows the details of our approach in winning the ImageNet object detection challenge of 2016, with source code provided on this https URL .
Since the candidate regions are often not accurate in locating objects, multi-region CNN @cite_2 , LocNet @cite_4 and AttractioNet @cite_36 were proposed for more accurate localization of objects. These approaches conduct bounding box regression iteratively so that the candidate regions gradually move towards the ground-truth object.
{ "cite_N": [ "@cite_36", "@cite_4", "@cite_2" ], "mid": [ "", "2177544419", "1932624639" ], "abstract": [ "", "We propose a novel object localization methodology with the purpose of boosting the localization accuracy of state-of-the-art object detection systems. Our model, given a search region, aims at returning the bounding box of an object of interest inside this region. To accomplish its goal, it relies on assigning conditional probabilities to each row and column of this region, where these probabilities provide useful information regarding the location of the boundaries of the object inside the search region and allow the accurate inference of the object bounding box under a simple probabilistic framework. For implementing our localization model, we make use of a convolutional neural network architecture that is properly adapted for this task, called LocNet. We show experimentally that LocNet achieves a very significant improvement on the mAP for high IoU thresholds on PASCAL VOC2007 test set and that it can be very easily coupled with recent state-of-the-art object detection systems, helping them to boost their performance. Finally, we demonstrate that our detection approach can achieve high detection accuracy even when it is given as input a set of sliding windows, thus proving that it is independent of box proposal methods.", "We propose an object detection system that relies on a multi-region deep convolutional neural network (CNN) that also encodes semantic segmentation-aware features. The resulting CNN-based representation aims at capturing a diverse set of discriminative appearance factors and exhibits localization sensitivity that is essential for accurate object localization. We exploit the above properties of our recognition module by integrating it on an iterative localization mechanism that alternates between scoring a box proposal and refining its location with a deep CNN regression model. 
Thanks to the efficient use of our modules, we detect objects with very high localization accuracy. On the detection challenges of PASCAL VOC2007 and PASCAL VOC2012 we achieve mAP of 78.2 and 73.9 correspondingly, surpassing any other published work by a significant margin." ] }
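The iterative localization mechanism shared by these methods can be sketched as a loop that repeatedly regresses a box toward its target. The cited works learn the regression with a CNN (LocNet via per-row and per-column boundary probabilities); the stand-in "regressor" below simply moves each coordinate a fixed fraction of the remaining residual, purely to illustrate the loop:

```python
# Iterative bounding-box refinement: score/refine a box, feed the
# refined box back in, repeat. The fixed-fraction update is an
# illustrative stand-in for a learned CNN regressor.

def refine(box, target, rate=0.5):
    """One regression step: move each coordinate toward the target."""
    return tuple(c + rate * (t - c) for c, t in zip(box, target))

def iterative_localization(box, target, steps=4):
    for _ in range(steps):
        box = refine(box, target)
    return box

start = (0.0, 0.0, 8.0, 8.0)    # initial candidate box (x1, y1, x2, y2)
truth = (2.0, 2.0, 10.0, 10.0)  # ground-truth box
final = iterative_localization(start, truth)
```

Each pass shrinks the localization error geometrically (here by half per step), which mirrors why a few refinement iterations suffice in the cited systems.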
1610.02579
2949477300
The visual cues from multiple support regions of different sizes and resolutions are complementary in classifying a candidate box in object detection. Effective integration of local and contextual visual cues from these regions has become a fundamental problem in object detection. In this paper, we propose a gated bi-directional CNN (GBD-Net) to pass messages among features from different support regions during both feature learning and feature extraction. Such message passing can be implemented through convolution between neighboring support regions in two directions and can be conducted in various layers. Therefore, local and contextual visual patterns can validate the existence of each other by learning their nonlinear relationships and their close interactions are modeled in a more complex way. It is also shown that message passing is not always helpful but dependent on individual samples. Gated functions are therefore needed to control message transmission, whose on-or-off states are controlled by extra visual evidence from the input sample. The effectiveness of GBD-Net is shown through experiments on three object detection datasets, ImageNet, Pascal VOC2007 and Microsoft COCO. This paper also shows the details of our approach in winning the ImageNet object detection challenge of 2016, with source code provided on this https URL .
The state-of-the-art deep learning based object detection pipeline RCNN @cite_23 extracted CNN features from warped image regions and applied a linear SVM as the classifier. By pre-training on the ImageNet classification dataset, it achieved a great improvement in detection accuracy compared with previous sliding-window approaches that used handcrafted features, on both PASCAL-VOC and the large-scale ImageNet object detection dataset. In order to obtain higher speed, Fast RCNN @cite_29 shared the computational cost among candidate boxes in the same image and proposed a novel roi-pooling operation to extract feature vectors for each region proposal. Faster RCNN @cite_5 combined the region proposal step with the region classification step by sharing the same convolution layers for both tasks. Region proposals are not always necessary, however: some recent approaches, e.g. Deep MultiBox @cite_38 , YOLO @cite_3 and SSD @cite_27 , directly estimate the object classes from predefined sliding windows.
{ "cite_N": [ "@cite_38", "@cite_29", "@cite_3", "@cite_27", "@cite_23", "@cite_5" ], "mid": [ "", "", "1483870316", "2193145675", "2102605133", "2613718673" ], "abstract": [ "", "", "We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is far less likely to predict false detections where nothing exists. Finally, YOLO learns very general representations of objects. It outperforms all other detection methods, including DPM and R-CNN, by a wide margin when generalizing from natural images to artwork on both the Picasso Dataset and the People-Art Dataset.", "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. 
SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For 300×300 input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X, and for 512×512 input, SSD achieves 76.9 mAP, outperforming a comparable state-of-the-art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https://github.com/weiliu89/caffe/tree/ssd.", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features.
We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https://github.com/ShaoqingRen/faster_rcnn." ] }
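The roi-pooling operation that Fast RCNN introduces, mentioned in the related-work text above, converts an arbitrary region of a feature map into a fixed-size output by max-pooling each cell of a regular grid over the region. A minimal single-channel sketch on a plain list-of-lists feature map (no deep learning framework, integer roi coordinates for simplicity):

```python
# Minimal roi max-pooling: divide the roi into an out_size x out_size
# grid and take the max feature response in each cell.

def roi_pool(feature_map, roi, out_size=2):
    """Max-pool the roi (x1, y1, x2, y2) into an out_size x out_size grid."""
    x1, y1, x2, y2 = roi
    h, w = y2 - y1, x2 - x1
    out = []
    for gy in range(out_size):
        row = []
        for gx in range(out_size):
            ys = range(y1 + gy * h // out_size, y1 + (gy + 1) * h // out_size)
            xs = range(x1 + gx * w // out_size, x1 + (gx + 1) * w // out_size)
            row.append(max(feature_map[y][x] for y in ys for x in xs))
        out.append(row)
    return out

fmap = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 1, 2, 3],
        [4, 5, 6, 7]]
pooled = roi_pool(fmap, (0, 0, 4, 4), out_size=2)
```

Because every region proposal is reduced to the same fixed-size grid, one shared convolutional pass over the image can serve all candidate boxes, which is the source of Fast RCNN's speedup over per-region feature extraction.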
1610.02579
2949477300
The visual cues from multiple support regions of different sizes and resolutions are complementary in classifying a candidate box in object detection. Effective integration of local and contextual visual cues from these regions has become a fundamental problem in object detection. In this paper, we propose a gated bi-directional CNN (GBD-Net) to pass messages among features from different support regions during both feature learning and feature extraction. Such message passing can be implemented through convolution between neighboring support regions in two directions and can be conducted in various layers. Therefore, local and contextual visual patterns can validate the existence of each other by learning their nonlinear relationships and their close interactions are modeled in a more complex way. It is also shown that message passing is not always helpful but dependent on individual samples. Gated functions are therefore needed to control message transmission, whose on-or-offs are controlled by extra visual evidence from the input sample. The effectiveness of GBD-Net is shown through experiments on three object detection datasets, ImageNet, Pascal VOC2007 and Microsoft COCO. This paper also shows the details of our approach in wining the ImageNet object detection challenge of 2016, with source code provided on this https URL .
A large number of works @cite_6 @cite_21 @cite_17 @cite_7 @cite_8 @cite_10 aimed at designing network structures, and these designs have been shown to be effective in the detection task. The works in @cite_6 @cite_21 @cite_17 @cite_7 @cite_10 proposed deeper networks. Several works @cite_37 @cite_17 @cite_1 also investigated how to effectively train deep networks. Simonyan et al. @cite_17 learn deeper networks based on the parameters in shallow networks. Ioffe et al. @cite_37 normalized the inputs of each layer for each training mini-batch in order to avoid internal covariate shift. He et al. @cite_1 investigated parameter initialization approaches and proposed the parameterized ReLU. Li et al. @cite_35 proposed a multi-bias non-linear activation (MBA) layer to explore the information hidden in the magnitudes of responses.
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_7", "@cite_8", "@cite_21", "@cite_1", "@cite_6", "@cite_10", "@cite_17" ], "mid": [ "2963609427", "2949117887", "2950179405", "2339890635", "1487583988", "1677182931", "", "2949650786", "1686810756" ], "abstract": [ "As a widely used non-linear activation, Rectified Linear Unit (ReLU) separates noise and signal in a feature map by learning a threshold or bias. However, we argue that the classification of noise and signal not only depends on the magnitude of responses, but also the context of how the feature responses would be used to detect more abstract patterns in higher layers. In order to output multiple response maps with magnitude in different ranges for a particular visual pattern, existing networks employing ReLU and its variants have to learn a large number of redundant filters. In this paper, we propose a multi-bias non-linear activation (MBA) layer to explore the information hidden in the magnitudes of responses. It is placed after the convolution layer to decouple the responses to a convolution kernel into multiple maps by multi-thresholding magnitudes, thus generating more patterns in the feature space at a low computational cost. It provides great flexibility of selecting responses to different visual patterns in different magnitude ranges to form rich representations in higher layers. Such a simple and yet effective scheme achieves the state-of-the-art performance on several benchmarks.", "Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. 
Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters.", "We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.", "The recent COCO object detection dataset presents several new challenges for object detection. In particular, it contains objects at a broad range of scales, less prototypical images, and requires more precise localization. 
To address these challenges, we test three modifications to the standard Fast R-CNN object detector: (1) skip connections that give the detector access to features at multiple network layers, (2) a foveal structure to exploit object context at multiple object resolutions, and (3) an integral loss function and corresponding network adjustment that improve localization. The result of these modifications is that information can flow along multiple paths in our network, including through features from multiple network layers and from multiple object views. We refer to our modified classifier as a \"MultiPath\" network. We couple our MultiPath network with DeepMask object proposals, which are well suited for localization and small objects, and adapt our pipeline to predict segmentation masks in addition to bounding boxes. The combined system improves results over the baseline Fast R-CNN detector with Selective Search by 66% overall and by 4x on small objects. It placed second in both the COCO 2015 detection and segmentation challenges.", "We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. 
Finally, we release a feature extractor from our best model called OverFeat.", "Rectified activation units (rectifiers) are essential for state-of-the-art neural networks. In this work, we study rectifier neural networks for image classification from two aspects. First, we propose a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified unit. PReLU improves model fitting with nearly zero extra computational cost and little overfitting risk. Second, we derive a robust initialization method that particularly considers the rectifier nonlinearities. This method enables us to train extremely deep rectified models directly from scratch and to investigate deeper or wider network architectures. Based on the learnable activation and advanced initialization, we achieve 4.94% top-5 test error on the ImageNet 2012 classification dataset. This is a 26% relative improvement over the ILSVRC 2014 winner (GoogLeNet, 6.66% [33]). To our knowledge, our result is the first to surpass the reported human-level performance (5.1%, [26]) on this dataset.", "", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. 
The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision." ] }
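The batch normalization and parametric-ReLU operations discussed in this record have simple forward passes. Below is a minimal NumPy sketch, illustrative only: the function names, shapes, and test data are our own assumptions, not code from the cited papers.

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Training-time batch norm: standardize each feature over the
    mini-batch, then apply the learned scale (gamma) and shift (beta).

    x: (N, D) mini-batch; gamma, beta: (D,) learned parameters.
    """
    mu = x.mean(axis=0)                 # per-feature batch mean
    var = x.var(axis=0)                 # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

def prelu(x, a):
    """Parametric ReLU: identity for positive inputs, learned slope a otherwise."""
    return np.where(x > 0, x, a * x)

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(64, 8))
y = batchnorm_forward(x, gamma=np.ones(8), beta=np.zeros(8))
# With identity gamma/beta, each feature ends up with ~zero mean, ~unit variance.
print(np.allclose(y.mean(axis=0), 0.0, atol=1e-7),
      np.allclose(y.var(axis=0), 1.0, atol=1e-2))
```

Note the sketch omits the running-statistics bookkeeping that batch norm uses at inference time; only the training-time forward pass is shown.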
1610.02579
2949477300
The visual cues from multiple support regions of different sizes and resolutions are complementary in classifying a candidate box in object detection. Effective integration of local and contextual visual cues from these regions has become a fundamental problem in object detection. In this paper, we propose a gated bi-directional CNN (GBD-Net) to pass messages among features from different support regions during both feature learning and feature extraction. Such message passing can be implemented through convolution between neighboring support regions in two directions and can be conducted in various layers. Therefore, local and contextual visual patterns can validate the existence of each other by learning their nonlinear relationships and their close interactions are modeled in a more complex way. It is also shown that message passing is not always helpful but dependent on individual samples. Gated functions are therefore needed to control message transmission, whose on-or-offs are controlled by extra visual evidence from the input sample. The effectiveness of GBD-Net is shown through experiments on three object detection datasets, ImageNet, Pascal VOC2007 and Microsoft COCO. This paper also shows the details of our approach in winning the ImageNet object detection challenge of 2016, with source code provided on this https URL .
Our contributions focus on a novel bi-directional network structure to effectively make use of multi-scale and multi-context regions. Our design is complementary to the above region proposals, pipelines, CNN layer designs, and training approaches. There are many works on using visual cues from object parts @cite_12 @cite_2 @cite_32 and contextual information @cite_12 @cite_2 . Gidaris et al. @cite_2 adopted a multi-region CNN model and manually selected multiple image regions. Girshick et al. @cite_32 and Ouyang et al. @cite_12 learned the deformable parts from CNNs. In order to use the contextual information, multiple image regions surrounding the candidate box were cropped in @cite_2 and whole-image classification scores were used in @cite_12 . These works simply concatenated features or scores from object parts or context, while we pass messages among features representing local and contextual visual patterns so that they validate the existence of each other by non-linear relationship learning. Experimental results show that GBD-Net is complementary to the approaches in @cite_2 @cite_25 . As a step further, we propose to use gate functions for controlling message passing, which was not investigated in existing works.
{ "cite_N": [ "@cite_25", "@cite_32", "@cite_12", "@cite_2" ], "mid": [ "", "1960289438", "2952677200", "1932624639" ], "abstract": [ "", "Deformable part models (DPMs) and convolutional neural networks (CNNs) are two widely used tools for visual recognition. They are typically viewed as distinct approaches: DPMs are graphical models (Markov random fields), while CNNs are “black-box” non-linear classifiers. In this paper, we show that a DPM can be formulated as a CNN, thus providing a synthesis of the two ideas. Our construction involves unrolling the DPM inference algorithm and mapping each step to an equivalent CNN layer. From this perspective, it is natural to replace the standard image features used in DPMs with a learned feature extractor. We call the resulting model a DeepPyramid DPM and experimentally validate it on PASCAL VOC object detection. We find that DeepPyramid DPMs significantly outperform DPMs based on histograms of oriented gradients features (HOG) and slightly outperforms a comparable version of the recently introduced R-CNN detection system, while running significantly faster.", "In this paper, we propose deformable deep convolutional neural networks for generic object detection. This new deep learning object detection framework has innovations in multiple aspects. In the proposed new deep architecture, a new deformation constrained pooling (def-pooling) layer models the deformation of object parts with geometric constraint and penalty. A new pre-training strategy is proposed to learn feature representations more suitable for the object detection task and with good generalization capability. By changing the net structures, training strategies, adding and removing some key components in the detection pipeline, a set of models with large diversity are obtained, which significantly improves the effectiveness of model averaging. 
The proposed approach improves the mean averaged precision obtained by RCNN girshick2014rich , which was the state-of-the-art, from 31% to 50.3% on the ILSVRC2014 detection test set. It also outperforms the winner of ILSVRC2014, GoogLeNet, by 6.1%. Detailed component-wise analysis is also provided through extensive experimental evaluation, which provides a global view for people to understand the deep learning object detection pipeline.", "We propose an object detection system that relies on a multi-region deep convolutional neural network (CNN) that also encodes semantic segmentation-aware features. The resulting CNN-based representation aims at capturing a diverse set of discriminative appearance factors and exhibits localization sensitivity that is essential for accurate object localization. We exploit the above properties of our recognition module by integrating it on an iterative localization mechanism that alternates between scoring a box proposal and refining its location with a deep CNN regression model. Thanks to the efficient use of our modules, we detect objects with very high localization accuracy. On the detection challenges of PASCAL VOC2007 and PASCAL VOC2012 we achieve mAP of 78.2% and 73.9% correspondingly, surpassing any other published work by a significant margin." ] }
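The gating idea in this record can be sketched in a few lines: a scalar gate computed from the source region's features modulates how much of the transformed message reaches the target region's features. This is a minimal illustration with made-up shapes and weights, not GBD-Net's actual implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy gated message passing between features of two support regions:
# a gate computed from the source evidence decides how much of the
# (ReLU-transformed) message is added to the target features.
def gated_message(h_src, h_dst, W_msg, w_gate, b_gate):
    gate = sigmoid(h_src @ w_gate + b_gate)    # scalar gate in (0, 1)
    message = np.maximum(0.0, h_src @ W_msg)   # transformed message
    return h_dst + gate * message              # gated update of the target

h_src, h_dst = np.ones(4), np.zeros(4)
W_msg, w_gate = np.eye(4), 0.25 * np.ones(4)
half_open = gated_message(h_src, h_dst, W_msg, w_gate, b_gate=0.0)
shut = gated_message(h_src, h_dst, W_msg, w_gate, b_gate=-50.0)
print(half_open, shut)  # gate = sigmoid(1) passes ~73% of the message; a -50 bias shuts it off
```

A large negative gate bias effectively switches message passing off for a sample, which is the mechanism the paragraph above motivates: whether a message helps depends on the individual sample.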
1610.02609
2949577940
Human-robot handovers are characterized by high uncertainty and poor structure of the problem that make them difficult tasks. While machine learning methods have shown promising results, their application to problems with large state dimensionality, such as in the case of humanoid robots, is still limited. Additionally, by using these methods and during the interaction with the human operator, no guarantees can be obtained on the correct interpretation of spatial constraints (e.g., from social rules). In this paper, we present Policy Improvement with Spatio-Temporal Affordance Maps -- @math -STAM, a novel iterative algorithm to learn spatial affordances and generate robot behaviors. Our goal consists in generating a policy that adapts to the unknown action semantics by using affordances. In this way, while learning to perform a human-robot handover task, we can (1) efficiently generate good policies with few training episodes, and (2) easily encode action semantics and, if available, enforce prior knowledge in it. We experimentally validate our approach both in simulation and on a real NAO robot whose task consists in taking an object from the hands of a human. The obtained results show that our algorithm obtains a good policy while reducing the computational load and time duration of the learning process.
The idea of improving robot policies based on object affordances @cite_21 @cite_17 , or discovering object affordances given an initial policy @cite_24 , has already been investigated in recent works. For example, in @cite_24 the authors exploit a simple policy to learn the affordance of an object to be pushed. In @cite_21 and @cite_17 , instead, affordances have been exploited to learn action primitives for the imitation of humans and for autonomous pile manipulation, respectively. Likewise, we combine the use of affordances and policy improvement methods. However, while refining the robot behavior, we also modify the affordance model, depending on the outcomes of the current policy. In this way, on the one hand, a semantic representation of the space in the environment is incrementally learned; on the other hand, this representation is used to reduce the search space for the policy improvement.
{ "cite_N": [ "@cite_24", "@cite_21", "@cite_17" ], "mid": [ "2091713555", "2134194996", "2041108426" ], "abstract": [ "Learning to perform household tasks is a key step towards developing cognitive service robots. This requires that robots are capable of discovering how to use human-designed products. In this paper, we propose an active learning approach for acquiring object affordances and manipulation skills in a bottom-up manner. We address affordance learning in continuous state and action spaces without manual discretization of states or exploratory motor primitives. During exploration in the action space, the robot learns a forward model to predict action effects. It simultaneously updates the active exploration policy through reinforcement learning, whereby the prediction error serves as the intrinsic reward. By using the learned forward model, motor skills are obtained to achieve goal states of an object. We demonstrate through real-world experiments that a humanoid robot NAO is able to autonomously learn how to manipulate two types of garbage cans with lids that need to be opened and closed by different motor skills.", "In this paper we build an imitation learning algorithm for a humanoid robot on top of a general world model provided by learned object affordances. We consider that the robot has previously learned a task independent affordance-based model of its interaction with the world. This model is used to recognize the demonstration by another agent (a human) and infer the task to be learned. We discuss several important problems that arise in this combined framework, such as the influence of an inaccurate model in the recognition of the demonstration. We illustrate the ideas in the paper with some experimental results obtained with a real robot.", "Autonomous manipulation in unstructured environments will enable a large variety of exciting and important applications. Despite its promise, autonomous manipulation remains largely unsolved. 
Even the most rudimentary manipulation task--such as removing objects from a pile--remains challenging for robots. We identify three major challenges that must be addressed to enable autonomous manipulation: object segmentation, action selection, and motion generation. These challenges become more pronounced when unknown man-made or natural objects are cluttered together in a pile. We present a system capable of manipulating unknown objects in such an environment. Our robot is tasked with clearing a table by removing objects from a pile and placing them into a bin. To that end, we address the three aforementioned challenges. Our robot perceives the environment with an RGB-D sensor, segmenting the pile into object hypotheses using non-parametric surface models. Our system then computes the affordances of each object, and selects the best affordance and its associated action to execute. Finally, our robot instantiates the proper compliant motion primitive to safely execute the desired action. For efficient and reliable action selection, we developed a framework for supervised learning of manipulation expertise. To verify the performance of our system, we conducted dozens of trials and report on several hours of experiments involving more than 1,500 interactions. The results show that our learning-based approach for pile manipulation outperforms a common sense heuristic as well as a random strategy, and is on par with human action selection." ] }
1610.02527
2530417694
We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number of nodes. The goal is to train a high-quality centralized model. We refer to this setting as Federated Optimization. In this setting, communication efficiency is of the utmost importance and minimizing the number of rounds of communication is the principal goal. A motivating example arises when we keep the training data locally on users' mobile devices instead of logging it to a data center for training. In federated optimization, the devices are used as compute nodes performing computation on their local data in order to update a global model. We suppose that we have an extremely large number of devices in the network --- as many as the number of users of a given service, each of which has only a tiny fraction of the total data available. In particular, we expect the number of data points available locally to be much smaller than the number of devices. Additionally, since different users generate data with different patterns, it is reasonable to assume that no device has a representative sample of the overall distribution. We show that existing algorithms are not suitable for this setting, and propose a new algorithm which shows encouraging experimental results for sparse convex problems. This work also sets a path for future research needed in the context of optimization.
A trivial benchmark for solving problems of this structure is Gradient Descent (GD) in the case when functions @math are smooth (or Subgradient Descent for non-smooth functions) @cite_14 . The GD algorithm performs the iteration \[ w^{t+1} = w^t - h_t \nabla f(w^t), \] where @math is a stepsize parameter. As we mentioned earlier, the number of functions, or equivalently, the number of training data pairs, @math , is typically very large. This makes GD impractical, as it needs to process the whole dataset in order to evaluate a single gradient and update the model.
{ "cite_N": [ "@cite_14" ], "mid": [ "2124541940" ], "abstract": [ "It was in the middle of the 1980s, when the seminal paper by Karmarkar opened a new epoch in nonlinear optimization. The importance of this paper, containing a new polynomial-time algorithm for linear optimization problems, was not only in its complexity bound. At that time, the most surprising feature of this algorithm was that the theoretical prediction of its high efficiency was supported by excellent computational results. This unusual fact dramatically changed the style and directions of the research in nonlinear optimization. Thereafter it became more and more common that the new methods were provided with a complexity analysis, which was considered a better justification of their efficiency than computational experiments. In a new rapidly developing field, which got the name \"polynomial-time interior-point methods\", such a justification was obligatory. After almost fifteen years of intensive research, the main results of this development started to appear in monographs [12, 14, 16, 17, 18, 19]. Approximately at that time the author was asked to prepare a new course on nonlinear optimization for graduate students. The idea was to create a course which would reflect the new developments in the field. Actually, this was a major challenge. At the time only the theory of interior-point methods for linear optimization was polished enough to be explained to students. The general theory of self-concordant functions had appeared in print only once in the form of research monograph [12]." ] }
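To make the cost argument above concrete, here is a toy full-batch GD loop on a least-squares objective (our own illustrative setup, not taken from the cited text). Every iteration performs one full pass over all n examples, which is exactly why GD becomes impractical when n is huge.

```python
import numpy as np

# Full-batch gradient descent on f(w) = (1/n) * sum_i (x_i^T w - y_i)^2.
def gradient_descent(X, y, step, iters):
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        grad = (2.0 / n) * X.T @ (X @ w - y)  # one pass over the whole dataset
        w -= step * grad
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                       # noiseless targets, so the minimizer is w_true
w = gradient_descent(X, y, step=0.1, iters=500)
print(np.round(w, 3))                # close to w_true after 500 full passes
```

Each of the 500 iterations costs O(n d) work; a stochastic method would instead update after touching a single example, at O(d) per step.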
1610.02527
2530417694
We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number of nodes. The goal is to train a high-quality centralized model. We refer to this setting as Federated Optimization. In this setting, communication efficiency is of the utmost importance and minimizing the number of rounds of communication is the principal goal. A motivating example arises when we keep the training data locally on users' mobile devices instead of logging it to a data center for training. In federated optimization, the devices are used as compute nodes performing computation on their local data in order to update a global model. We suppose that we have an extremely large number of devices in the network --- as many as the number of users of a given service, each of which has only a tiny fraction of the total data available. In particular, we expect the number of data points available locally to be much smaller than the number of devices. Additionally, since different users generate data with different patterns, it is reasonable to assume that no device has a representative sample of the overall distribution. We show that existing algorithms are not suitable for this setting, and propose a new algorithm which shows encouraging experimental results for sparse convex problems. This work also sets a path for future research needed in the context of optimization.
Gradient descent can be substantially accelerated, in theory and practice, via the addition of a momentum term. Acceleration ideas for gradient methods in convex optimization can be traced back to the work of Polyak @cite_63 and Nesterov @cite_62 @cite_110 . While accelerated GD methods have a substantially better convergence rate, in each iteration they still need to do at least one pass over all data. As a result, they are not practical for problems where @math is very large.
{ "cite_N": [ "@cite_110", "@cite_62", "@cite_63" ], "mid": [ "", "604534808", "1988720110" ], "abstract": [ "", "In this work we develop a new algorithm for regularized empirical risk minimization. Our method extends recent techniques of Shalev-Shwartz [02 2015], which enable a dual-free analysis of SDCA, to arbitrary mini-batching schemes. Moreover, our method is able to better utilize the information in the data defining the ERM problem. For convex loss functions, our complexity results match those of QUARTZ, which is a primal-dual method also allowing for arbitrary mini-batching schemes. The advantage of a dual-free analysis comes from the fact that it guarantees convergence even for non-convex loss functions, as long as the average loss is convex. We illustrate through experiments the utility of being able to design arbitrary mini-batching schemes.", "For the solution of the functional equation P(x) = 0 (1) (where P is an operator, usually linear, from B into B, and B is a Banach space) iteration methods are generally used. These consist of the construction of a series x0, …, xn, …, which converges to the solution (see, for example [1]). Continuous analogues of these methods are also known, in which a trajectory x(t), 0 ⩽ t ⩽ ∞ is constructed, which satisfies the ordinary differential equation in B and is such that x(t) approaches the solution of (1) as t → ∞ (see [2]). We shall call the method a k-step method if for the construction of each successive iteration xn+1 we use k previous iterations xn, …, xn−k+1. The same term will also be used for continuous methods if x(t) satisfies a differential equation of the k-th order or k-th degree. Iteration methods which are more widely used are one-step (e.g. methods of successive approximations). They are generally simple from the calculation point of view but often converge very slowly. This is confirmed both by the evaluation of the speed of convergence and by calculation in practice (for more details see below). 
Therefore the question of the rate of convergence is most important. Some multistep methods, which we shall consider further, which are only slightly more complicated than the corresponding one-step methods, make it possible to speed up the convergence substantially. Note that all the methods mentioned below are applicable also to the problem of minimizing the differentiable functional @math in Hilbert space, so long as this problem reduces to the solution of the equation grad @math = 0." ] }
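As a sketch of the acceleration effect described above (a toy setup of our own, not taken from the cited works), Polyak's heavy-ball iteration w^{t+1} = w^t - h grad f(w^t) + beta (w^t - w^{t-1}) on an ill-conditioned quadratic converges far faster than plain GD with the same step size.

```python
import numpy as np

# Heavy-ball (Polyak) momentum on f(w) = 0.5 * w^T A w, whose minimizer is 0.
# The momentum term beta * (w^t - w^{t-1}) carries past progress along the
# low-curvature direction, which plain GD crawls along.
def heavy_ball(A, w0, step, beta, iters):
    w_prev, w = w0.copy(), w0.copy()
    for _ in range(iters):
        grad = A @ w
        w_next = w - step * grad + beta * (w - w_prev)
        w_prev, w = w, w_next
    return w

A = np.diag([1.0, 100.0])     # condition number 100
w0 = np.array([1.0, 1.0])
plain = heavy_ball(A, w0, step=0.018, beta=0.0, iters=300)  # beta = 0 is plain GD
accel = heavy_ball(A, w0, step=0.018, beta=0.9, iters=300)
print(np.linalg.norm(plain), np.linalg.norm(accel))  # momentum ends far closer to 0
```

Note that, as the paragraph stresses, momentum changes the convergence rate but not the per-iteration cost: each step still evaluates one full gradient.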
1610.02527
2530417694
We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number of nodes. The goal is to train a high-quality centralized model. We refer to this setting as Federated Optimization. In this setting, communication efficiency is of the utmost importance and minimizing the number of rounds of communication is the principal goal. A motivating example arises when we keep the training data locally on users' mobile devices instead of logging it to a data center for training. In federated optimization, the devices are used as compute nodes performing computation on their local data in order to update a global model. We suppose that we have an extremely large number of devices in the network --- as many as the number of users of a given service, each of which has only a tiny fraction of the total data available. In particular, we expect the number of data points available locally to be much smaller than the number of devices. Additionally, since different users generate data with different patterns, it is reasonable to assume that no device has a representative sample of the overall distribution. We show that existing algorithms are not suitable for this setting, and propose a new algorithm which shows encouraging experimental results for sparse convex problems. This work also sets a path for future research needed in the context of optimization.
When an explicit strongly convex, but not necessarily smooth, regularizer is added to the average loss, it is possible to write down its (Fenchel) dual, and the dual variables live in @math -dimensional space. Applying RCD leads to an algorithm for solving this problem, known under the name Stochastic Dual Coordinate Ascent @cite_74 . This method has gained broad popularity with practitioners, likely due to the fact that for a number of loss functions, the method comes without the need to tune any hyper-parameters. The work @cite_74 was the first to show that by applying RCD @cite_98 to the dual problem, one also solves the primal problem. For a theoretical and computational comparison of applying RCD to the primal versus the dual problems, see @cite_37 .
{ "cite_N": [ "@cite_98", "@cite_37", "@cite_74" ], "mid": [ "2399042127", "2414068143", "1636243882" ], "abstract": [ "Two types of low cost-per-iteration gradient descent methods have been extensively studied in parallel. One is online or stochastic gradient descent (OGD/SGD), and the other is randomized coordinate descent (RBCD). In this paper, we combine the two types of methods together and propose online randomized block coordinate descent (ORBCD). At each iteration, ORBCD only computes the partial gradient of one block coordinate of one mini-batch of samples. ORBCD is well suited for the composite minimization problem where one function is the average of the losses of a large number of samples and the other is a simple regularizer defined on high dimensional variables. We show that the iteration complexity of ORBCD has the same order as OGD or SGD. For strongly convex functions, by reducing the variance of stochastic gradients, we show that ORBCD can converge at a geometric rate in expectation, matching the convergence rate of SGD with variance reduction and RBCD.", "Randomized coordinate descent (RCD) methods are state-of-the-art algorithms for training linear predictors via minimizing regularized empirical risk. When the number of examples ( @math ) is much larger than the number of features ( @math ), a common strategy is to apply RCD to the dual problem. On the other hand, when the number of features is much larger than the number of examples, it makes sense to apply RCD directly to the primal problem. In this paper we provide the first joint study of these two approaches when applied to L2-regularized ERM. First, we show through a rigorous analysis that for dense data, the above intuition is precisely correct. However, we find that for sparse and structured data, primal RCD can significantly outperform dual RCD even if @math , and vice versa, dual RCD can be much faster than primal RCD even if @math . 
Moreover, we show that, surprisingly, a single sampling strategy minimizes both the (bound on the) number of iterations and the overall expected complexity of RCD. Note that the latter complexity measure also takes into account the average cost of the iterations, which depends on the structure and sparsity of the data, and on the sampling strategy employed. We confirm our theoretical predictions using extensive experiments with both synthetic and real data sets.", "We present an improved analysis of mini-batched stochastic dual coordinate ascent for regularized empirical loss minimization (i.e. SVM and SVM-type objectives). Our analysis allows for flexible sampling schemes, including where data is distribute across machines, and combines a dependence on the smoothness of the loss and or the data spread (measured through the spectral norm)." ] }
1610.02527
2530417694
We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number of nodes. The goal is to train a high-quality centralized model. We refer to this setting as Federated Optimization. In this setting, communication efficiency is of the utmost importance and minimizing the number of rounds of communication is the principal goal. A motivating example arises when we keep the training data locally on users' mobile devices instead of logging it to a data center for training. In federated optimization, the devices are used as compute nodes performing computation on their local data in order to update a global model. We suppose that we have an extremely large number of devices in the network --- as many as the number of users of a given service, each of which has only a tiny fraction of the total data available. In particular, we expect the number of data points available locally to be much smaller than the number of devices. Additionally, since different users generate data with different patterns, it is reasonable to assume that no device has a representative sample of the overall distribution. We show that existing algorithms are not suitable for this setting, and propose a new algorithm which shows encouraging experimental results for sparse convex problems. This work also sets a path for future research needed in the context of optimization.
A follow-up algorithm, SAGA @cite_69 , and its simplification @cite_48 modify the SAG algorithm to achieve an unbiased estimate of the gradients. The memory requirement is still present, but the method significantly simplifies the theoretical analysis and yields a slightly stronger convergence guarantee.
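The difference from SAG is easiest to see in code: SAGA keeps a table of per-example gradients and corrects the fresh gradient with it so that the estimator is unbiased at every step. A minimal sketch on a least-squares finite sum; sizes, seed, and the (standard, safe) step size are illustrative assumptions, not the cited papers' settings:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 40, 5
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true                            # noiseless targets: optimum is w_true

w = np.zeros(d)
table = X * (X @ w - y)[:, None]          # stored gradient of each f_i at w = 0
avg = table.mean(axis=0)                  # running average of the table
L = max(np.sum(X * X, axis=1))            # per-example smoothness constant
step = 1.0 / (3.0 * L)                    # a common safe SAGA step size

for t in range(20000):
    j = rng.integers(n)
    g_new = X[j] * (X[j] @ w - y[j])      # fresh gradient of f_j at current w
    # Unbiased estimator: E[g] = mean_i grad f_i(w), since E[table[j]] = avg.
    g = g_new - table[j] + avg
    w -= step * g
    avg += (g_new - table[j]) / n         # keep the table average in sync
    table[j] = g_new                      # overwrite only after updating avg
```

Note the update order: the running average must be refreshed using the old table entry before that entry is overwritten.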
{ "cite_N": [ "@cite_48", "@cite_69" ], "mid": [ "2254148458", "2952215077" ], "abstract": [ "We describe a novel optimization method for finite sums (such as empirical risk minimization problems) building on the recently introduced SAGA method. Our method achieves an accelerated convergence rate on strongly convex smooth problems. Our method has only one parameter (a step size), and is radically simpler than other accelerated methods for finite sums. Additionally it can be applied when the terms are non-smooth, yielding a method applicable in many areas where operator splitting methods would traditionally be applied.", "In this work we introduce a new optimisation method called SAGA in the spirit of SAG, SDCA, MISO and SVRG, a set of recently proposed incremental gradient algorithms with fast linear convergence rates. SAGA improves on the theory behind SAG and SVRG, with better theoretical convergence rates, and has support for composite objectives where a proximal operator is used on the regulariser. Unlike SDCA, SAGA supports non-strongly convex problems directly, and is adaptive to any inherent strong convexity of the problem. We give experimental results showing the effectiveness of our method." ] }
1610.02527
2530417694
We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number of nodes. The goal is to train a high-quality centralized model. We refer to this setting as Federated Optimization. In this setting, communication efficiency is of the utmost importance and minimizing the number of rounds of communication is the principal goal. A motivating example arises when we keep the training data locally on users' mobile devices instead of logging it to a data center for training. In federated optimization, the devices are used as compute nodes performing computation on their local data in order to update a global model. We suppose that we have an extremely large number of devices in the network --- as many as the number of users of a given service, each of which has only a tiny fraction of the total data available. In particular, we expect the number of data points available locally to be much smaller than the number of devices. Additionally, since different users generate data with different patterns, it is reasonable to assume that no device has a representative sample of the overall distribution. We show that existing algorithms are not suitable for this setting, and propose a new algorithm which shows encouraging experimental results for sparse convex problems. This work also sets a path for future research needed in the context of optimization.
There already exist attempts at combining SVRG-type algorithms with randomized coordinate descent @cite_95 @cite_66 . Although these works highlight some interesting theoretical properties, the algorithms do not seem to be practical at the moment; more work is needed in this area. The first attempt to unify algorithms such as SVRG and SAG/SAGA appeared already in the SAGA paper @cite_69 , where the authors interpret SAGA as a midpoint between SAG and SVRG. Recent work @cite_60 presents a general algorithm which recovers SVRG, SAGA, SAG and GD as special cases, and obtains an asynchronous variant of these algorithms as a byproduct of the formulation. SVRG can be equipped with momentum (and negative momentum), leading to a new accelerated SVRG method known as Katyusha @cite_61 . SVRG can be further accelerated via a raw clustering mechanism @cite_84 .
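A minimal sketch of plain SVRG on a least-squares finite sum may help fix ideas; Katyusha then adds a (negative) momentum term on top of exactly this variance-reduced estimator. Sizes, seed, epoch length, and the step-size rule below are illustrative assumptions, not taken from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 40, 5
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true                            # noiseless targets: optimum is w_true

def grad_i(w, i):
    """Gradient of the i-th loss f_i(w) = 0.5 * (x_i @ w - y_i)**2."""
    return X[i] * (X[i] @ w - y[i])

L = max(np.sum(X * X, axis=1))            # per-example smoothness constant
step = 0.1 / L
w = np.zeros(d)

for s in range(80):                       # outer loops (snapshots)
    w_snap = w.copy()
    mu = X.T @ (X @ w_snap - y) / n       # full gradient at the snapshot
    for t in range(5 * n):                # inner stochastic loop
        i = rng.integers(n)
        # Variance-reduced, unbiased estimate of the full gradient at w:
        # E[g] = grad f(w), and Var[g] -> 0 as w and w_snap approach the optimum.
        g = grad_i(w, i) - grad_i(w_snap, i) + mu
        w -= step * g
```

Unlike SAGA, no gradient table is stored; the price is one full-gradient pass per outer loop.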
{ "cite_N": [ "@cite_61", "@cite_69", "@cite_60", "@cite_84", "@cite_95", "@cite_66" ], "mid": [ "2414455940", "2952215077", "1501362825", "2272842178", "159656292", "" ], "abstract": [ "Nesterov's momentum trick is famously known for accelerating gradient descent, and has been proven useful in building fast iterative algorithms. However, in the stochastic setting, counterexamples exist and prevent Nesterov's momentum from providing similar acceleration, even if the underlying problem is convex. We introduce @math , a direct, primal-only stochastic gradient method to fix this issue. It has a provably accelerated convergence rate in convex (off-line) stochastic optimization. The main ingredient is @math , a novel \"negative momentum\" on top of Nesterov's momentum. It can be incorporated into a variance-reduction based algorithm and speed it up, both in terms of @math performance. Since variance reduction has been successfully applied to a growing list of practical problems, our paper suggests that in each of such cases, one could potentially try to give Katyusha a hug.", "In this work we introduce a new optimisation method called SAGA in the spirit of SAG, SDCA, MISO and SVRG, a set of recently proposed incremental gradient algorithms with fast linear convergence rates. SAGA improves on the theory behind SAG and SVRG, with better theoretical convergence rates, and has support for composite objectives where a proximal operator is used on the regulariser. Unlike SDCA, SAGA supports non-strongly convex problems directly, and is adaptive to any inherent strong convexity of the problem. We give experimental results showing the effectiveness of our method.", "We study optimization algorithms based on variance reduction for stochastic gradient descent (SGD). Remarkable recent progress has been made in this direction through development of algorithms like SAG, SVRG, SAGA. These algorithms have been shown to outperform SGD, both theoretically and empirically. 
However, asynchronous versions of these algorithms---a crucial requirement for modern large-scale applications---have not been studied. We bridge this gap by presenting a unifying framework for many variance reduction techniques. Subsequently, we propose an asynchronous algorithm grounded in our framework, and prove its fast convergence. An important consequence of our general approach is that it yields asynchronous versions of variance reduction algorithms such as SVRG and SAGA as a byproduct. Our method achieves near linear speedup in sparse settings common to machine learning. We demonstrate the empirical performance of our method through a concrete realization of asynchronous SVRG.", "The amount of data available in the world is growing faster and bigger than our ability to deal with it. However, if we take advantage of the internal structure, data may become much smaller for machine learning purposes. In this paper we focus on one of the most fundamental machine learning tasks, empirical risk minimization (ERM), and provide faster algorithms with the help from the clustering structure of the data. We introduce a simple notion of raw clustering that can be efficiently obtained with just one pass of the data, and propose two algorithms. Our variance-reduction based algorithm ClusterSVRG introduces a new gradient estimator using the clustering information, and our accelerated algorithm ClusterACDM is built on a novel Haar transformation applied to the dual space of each cluster. Our algorithms outperform their classical counterparts both in theory and practice.", "We propose a novel stochastic gradient method---semi-stochastic coordinate descent (S2CD)---for the problem of minimizing a strongly convex function represented as the average of a large number of smooth convex functions: @math . Our method first performs a deterministic step (computation of the gradient of @math at the starting point), followed by a large number of stochastic steps. 
The process is repeated a few times, with the last stochastic iterate becoming the new starting point where the deterministic step is taken. The novelty of our method is in how the stochastic steps are performed. In each such step, we pick a random function @math and a random coordinate @math ---both using nonuniform distributions---and update a single coordinate of the decision vector only, based on the computation of the @math partial derivative of @math at two different points. Each random step of the method constitutes an unbiased estimate of the gradient of @math and moreover, the squared norm of the steps goes to zero in expectation, meaning that the stochastic estimate of the gradient progressively improves. The complexity of the method is the sum of two terms: @math evaluations of gradients @math and @math evaluations of partial derivatives @math , where @math is a novel condition number.", "" ] }
1610.02527
2530417694
We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number of nodes. The goal is to train a high-quality centralized model. We refer to this setting as Federated Optimization. In this setting, communication efficiency is of the utmost importance and minimizing the number of rounds of communication is the principal goal. A motivating example arises when we keep the training data locally on users' mobile devices instead of logging it to a data center for training. In federated optimization, the devices are used as compute nodes performing computation on their local data in order to update a global model. We suppose that we have an extremely large number of devices in the network --- as many as the number of users of a given service, each of which has only a tiny fraction of the total data available. In particular, we expect the number of data points available locally to be much smaller than the number of devices. Additionally, since different users generate data with different patterns, it is reasonable to assume that no device has a representative sample of the overall distribution. We show that existing algorithms are not suitable for this setting, and propose a new algorithm which shows encouraging experimental results for sparse convex problems. This work also sets a path for future research needed in the context of optimization.
A third class of new algorithms comprises the stochastic quasi-Newton methods @cite_79 @cite_106 . These algorithms generally try to mimic the limited-memory BFGS method (L-BFGS) @cite_36 , but model the local curvature information using inexact gradients coming from the SGD procedure. A recent attempt at combining these methods with SVRG can be found in @cite_76 . In @cite_105 , the authors utilize recent progress in the area of stochastic matrix inversion @cite_57 , revealing new connections with quasi-Newton methods, and devise a new stochastic limited-memory BFGS method working in tandem with SVRG. The fact that this branch of research is the least theoretically understood, together with several details that make implementation more difficult than for the methods above, may limit its wider use. However, this approach could be the most promising one for deep learning once it is better understood.
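Since these methods all mimic L-BFGS, it may help to recall the deterministic two-loop recursion that they approximate with inexact gradients. A sketch on a small quadratic with Armijo backtracking; dimension, memory length, and iteration budget are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
d, m = 10, 5                              # dimension, L-BFGS memory length
G = rng.standard_normal((d, d))
A = G @ G.T + np.eye(d)                   # SPD Hessian of the test quadratic
b = rng.standard_normal(d)

f = lambda w: 0.5 * w @ A @ w - b @ w
grad = lambda w: A @ w - b

def lbfgs_direction(g, S, Y):
    """Two-loop recursion: apply the implicit inverse-Hessian estimate to g."""
    q = g.copy()
    alphas = []
    for s, yv in zip(reversed(S), reversed(Y)):   # newest pair first
        a = (s @ q) / (yv @ s)
        alphas.append(a)
        q = q - a * yv
    if S:                                  # scale by the most recent pair
        q = q * (S[-1] @ Y[-1]) / (Y[-1] @ Y[-1])
    for (s, yv), a in zip(zip(S, Y), reversed(alphas)):  # oldest pair first
        beta = (yv @ q) / (yv @ s)
        q = q + (a - beta) * s
    return -q

w, g = np.zeros(d), grad(np.zeros(d))
S, Y = [], []
for k in range(100):
    if np.linalg.norm(g) < 1e-10:
        break
    p = lbfgs_direction(g, S, Y)
    t, fw = 1.0, f(w)
    while f(w + t * p) > fw + 1e-4 * t * (g @ p):  # Armijo backtracking
        t *= 0.5
    w_new = w + t * p
    g_new = grad(w_new)
    s, yv = w_new - w, g_new - g
    if yv @ s > 1e-12:                     # keep only valid curvature pairs
        S.append(s); Y.append(yv)
        if len(S) > m:                     # drop the oldest pair
            S.pop(0); Y.pop(0)
    w, g = w_new, g_new

w_star = np.linalg.solve(A, b)
```

The stochastic variants replace `grad` with noisy estimates and must then be much more careful about the curvature pairs `(s, y)`, which is one source of the implementation difficulty mentioned above.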
{ "cite_N": [ "@cite_36", "@cite_57", "@cite_79", "@cite_106", "@cite_76", "@cite_105" ], "mid": [ "2051434435", "2950486997", "", "", "1942704184", "2316266851" ], "abstract": [ "We study the numerical performance of a limited memory quasi-Newton method for large scale optimization, which we call the L-BFGS method. We compare its performance with that of the method developed by Buckley and LeNir (1985), which combines cycles of BFGS steps and conjugate direction steps. Our numerical tests indicate that the L-BFGS method is faster than the method of Buckley and LeNir, and is better able to use additional storage to accelerate convergence. We show that the L-BFGS method can be greatly accelerated by means of a simple scaling. We then compare the L-BFGS method with the partitioned quasi-Newton method of Griewank and Toint (1982a). The results show that, for some problems, the partitioned quasi-Newton method is clearly superior to the L-BFGS method. However we find that for other problems the L-BFGS method is very competitive due to its low iteration cost. We also study the convergence properties of the L-BFGS method, and prove global convergence on uniformly convex problems.", "We develop and analyze a broad family of stochastic randomized algorithms for inverting a matrix. We also develop specialized variants maintaining symmetry or positive definiteness of the iterates. All methods in the family converge globally and linearly (i.e., the error decays exponentially), with explicit rates. In special cases, we obtain stochastic block variants of several quasi-Newton updates, including bad Broyden (BB), good Broyden (GB), Powell-symmetric-Broyden (PSB), Davidon-Fletcher-Powell (DFP) and Broyden-Fletcher-Goldfarb-Shanno (BFGS). Ours are the first stochastic versions of these updates shown to converge to an inverse of a fixed matrix. Through a dual viewpoint we uncover a fundamental link between quasi-Newton updates and approximate inverse preconditioning. 
Further, we develop an adaptive variant of randomized block BFGS, where we modify the distribution underlying the stochasticity of the method throughout the iterative process to achieve faster convergence. By inverting several matrices from varied applications, we demonstrate that AdaRBFGS is highly competitive when compared to the well established Newton-Schulz and minimal residual methods. In particular, on large-scale problems our method outperforms the standard methods by orders of magnitude. Development of efficient methods for estimating the inverse of very large matrices is a much needed tool for preconditioning and variable metric optimization methods in the advent of the big data era.", "", "", "We propose a new stochastic L-BFGS algorithm and prove a linear convergence rate for strongly convex and smooth functions. Our algorithm draws heavily from a recent stochastic variant of L-BFGS proposed in (2014) as well as a recent approach to variance reduction for stochastic gradient descent from Johnson and Zhang (2013). We demonstrate experimentally that our algorithm performs well on large-scale convex and non-convex optimization problems, exhibiting linear convergence and rapidly solving the optimization problems to high levels of precision. Furthermore, we show that our algorithm performs well for a wide-range of step sizes, often differing by several orders of magnitude.", "We propose a novel limited-memory stochastic block BFGS update for incorporating enriched curvature information in stochastic approximation methods. In our method, the estimate of the inverse Hessian matrix that is maintained by it, is updated at each iteration using a sketch of the Hessian, i.e., a randomly generated compressed form of the Hessian. 
We propose several sketching strategies, present a new quasi-Newton method that uses stochastic block BFGS updates combined with the variance reduction approach SVRG to compute batch stochastic gradients, and prove linear convergence of the resulting method. Numerical tests on large-scale logistic regression problems reveal that our method is more robust and substantially outperforms current state-of-the-art methods." ] }
1610.02527
2530417694
We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number of nodes. The goal is to train a high-quality centralized model. We refer to this setting as Federated Optimization. In this setting, communication efficiency is of the utmost importance and minimizing the number of rounds of communication is the principal goal. A motivating example arises when we keep the training data locally on users' mobile devices instead of logging it to a data center for training. In federated optimization, the devices are used as compute nodes performing computation on their local data in order to update a global model. We suppose that we have an extremely large number of devices in the network --- as many as the number of users of a given service, each of which has only a tiny fraction of the total data available. In particular, we expect the number of data points available locally to be much smaller than the number of devices. Additionally, since different users generate data with different patterns, it is reasonable to assume that no device has a representative sample of the overall distribution. We show that existing algorithms are not suitable for this setting, and propose a new algorithm which shows encouraging experimental results for sparse convex problems. This work also sets a path for future research needed in the context of optimization.
One important aspect of machine learning is that the Empirical Risk Minimization problem we are solving is just a proxy for the Expected Risk we are ultimately interested in. When one can find the exact minimum of the empirical risk, everything reduces to balancing the approximation--estimation tradeoff, which is the object of an abundant literature; see for instance @cite_75 . An assessment of the asymptotic performance of optimization algorithms as learning algorithms in large-scale learning problems (see [Section 2.3] of Bottou and Bousquet for their definition of a large-scale learning problem) was introduced in @cite_5 . A recent extension in @cite_83 has shown that the variance-reduced algorithms (SAG, SVRG, ...) can in certain settings be better learning algorithms than SGD, not just better optimization algorithms.
{ "cite_N": [ "@cite_5", "@cite_75", "@cite_83" ], "mid": [ "", "2149298154", "2132968152" ], "abstract": [ "", "Statistical learning theory was introduced in the late 1960's. Until the 1990's it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990's new types of learning algorithms (called support vector machines) based on the developed theory were proposed. This made statistical learning theory not only a tool for the theoretical analysis but also a tool for creating practical algorithms for estimating multidimensional functions. This article presents a very general overview of statistical learning theory including both theoretical and algorithmic aspects of the theory. The goal of this overview is to demonstrate how the abstract learning theory established conditions for generalization which are more general than those discussed in classical statistical paradigms and how the understanding of these conditions inspired new algorithmic approaches to function estimation problems.", "We present and analyze several strategies for improving the performance of stochastic variance-reduced gradient (SVRG) methods. We first show that the convergence rate of these methods can be preserved under a decreasing sequence of errors in the control variate, and use this to derive variants of SVRG that use growing-batch strategies to reduce the number of gradient calculations required in the early iterations. We further (i) show how to exploit support vectors to reduce the number of gradient computations in the later iterations, (ii) prove that the commonly-used regularized SVRG iteration is justified and improves the convergence rate, (iii) consider alternate mini-batch selection strategies, and (iv) consider the generalization error of the method." ] }
1610.02527
2530417694
We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number of nodes. The goal is to train a high-quality centralized model. We refer to this setting as Federated Optimization. In this setting, communication efficiency is of the utmost importance and minimizing the number of rounds of communication is the principal goal. A motivating example arises when we keep the training data locally on users' mobile devices instead of logging it to a data center for training. In federated optimization, the devices are used as compute nodes performing computation on their local data in order to update a global model. We suppose that we have an extremely large number of devices in the network --- as many as the number of users of a given service, each of which has only a tiny fraction of the total data available. In particular, we expect the number of data points available locally to be much smaller than the number of devices. Additionally, since different users generate data with different patterns, it is reasonable to assume that no device has a representative sample of the overall distribution. We show that existing algorithms are not suitable for this setting, and propose a new algorithm which shows encouraging experimental results for sparse convex problems. This work also sets a path for future research needed in the context of optimization.
Lower and upper bounds on the complexity of stochastic methods for problems of the form were recently obtained in @cite_102 .
{ "cite_N": [ "@cite_102" ], "mid": [ "2406665342" ], "abstract": [ "We provide tight upper and lower bounds on the complexity of minimizing the average of @math convex functions using gradient and prox oracles of the component functions. We show a significant gap between the complexity of deterministic vs randomized optimization. For smooth functions, we show that accelerated gradient descent (AGD) and an accelerated variant of SVRG are optimal in the deterministic and randomized settings respectively, and that a gradient oracle is sufficient for the optimal rate. For non-smooth functions, having access to prox oracles reduces the complexity and we present optimal methods based on smoothing that improve over methods using just gradient accesses." ] }
1610.02651
2530134766
This paper provides a framework to hash images containing instances of unknown object classes. In many object recognition problems, we might have access to huge amount of data. It may so happen that even this huge data doesn't cover the objects belonging to classes that we see in our day to day life. Zero shot learning exploits auxiliary information (also called as signatures) in order to predict the labels corresponding to unknown classes. In this work, we attempt to generate the hash codes for images belonging to unseen classes, information of which is available only through the textual corpus. We formulate this as an unsupervised hashing formulation as the exact labels are not available for the instances of unseen classes. We show that the proposed solution is able to generate hash codes which can predict labels corresponding to unseen classes with appreciably good precision.
Many hashing methods have been proposed in the past, which can be categorized into two types: data-dependent and data-independent hashing. Locality-sensitive hashing (LSH) is the most popular data-independent hashing technique; it uses randomized projections to generate hash functions and ensures a high collision probability for similar data points. Variants of LSH @cite_11 have been developed by taking different distance measures, such as the Mahalanobis distance @cite_40 , kernel similarity ( @cite_34 , @cite_27 ) and the @math norm distance @cite_41 . Other forms of LSH are covered in detail in @cite_42 . In general, data-independent hashing techniques exploit long hashes and several hash tables to achieve better performance in terms of precision and recall, which limits their use in large-scale applications. On the contrary, data-dependent techniques can be classified into two types: supervised and unsupervised hashing. These algorithms exploit the available training data to generate short binary codes. Among the data-dependent methods, PCA-based hashing ( @cite_20 , @cite_4 ), supervised and semi-supervised hashing ( @cite_4 , @cite_48 , @cite_37 , @cite_21 ) and graph-based hashing ( @cite_13 , @cite_30 ) techniques are quite popular.
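The random-projection idea behind LSH for cosine similarity fits in a few lines: each random hyperplane contributes one sign bit, and similar points disagree on few bits while dissimilar points disagree on roughly half. The code length, data, and seed below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
d, n_bits = 64, 128
R = rng.standard_normal((n_bits, d))      # one random hyperplane per hash bit

def lsh_code(x):
    """Sign-random-projection hash: 1 bit per hyperplane."""
    return (R @ x > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.sum(a != b))

x = rng.standard_normal(d)
x_similar = x + 0.05 * rng.standard_normal(d)   # small perturbation of x
x_far = rng.standard_normal(d)                   # unrelated random vector

# For this family, P(bit differs) = angle(x, z) / pi, so Hamming distance
# between codes estimates the angle between the original vectors.
ham_near = hamming(lsh_code(x), lsh_code(x_similar))
ham_far = hamming(lsh_code(x), lsh_code(x_far))
```

This is the data-independent flavor: `R` is drawn once, without looking at any training data, which is why long codes and multiple tables are needed for good precision and recall.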
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_4", "@cite_41", "@cite_48", "@cite_42", "@cite_21", "@cite_27", "@cite_40", "@cite_34", "@cite_13", "@cite_20", "@cite_11" ], "mid": [ "", "2221852422", "", "2162006472", "", "1870428314", "2164338181", "2125378448", "2144892774", "2171790913", "2251864938", "1974647172", "1502916507" ], "abstract": [ "", "We propose a method for learning similarity-preserving hash functions that map high-dimensional data onto binary codes. The formulation is based on structured prediction with latent variables and a hinge-like loss function. It is efficient to train for large datasets, scales well to large code lengths, and outperforms state-of-the-art methods.", "", "We present a novel Locality-Sensitive Hashing scheme for the Approximate Nearest Neighbor Problem under lp norm, based on p-stable distributions. Our scheme improves the running time of the earlier algorithm for the case of the lp norm. It also yields the first known provably efficient approximate NN algorithm for the case p<1. We also show that the algorithm finds the exact near neighbor in O(log n) time for data satisfying certain \"bounded growth\" condition. Unlike earlier schemes, our LSH scheme works directly on points in the Euclidean space without embeddings. Consequently, the resulting query time bound is free of large factors and is simple and easy to implement. Our experiments (on synthetic data sets) show that our data structure is up to 40 times faster than kd-tree.", "", "Similarity search (nearest neighbor search) is a problem of pursuing the data items whose distances to a query item are the smallest from a large database. Various methods have been developed to address this problem, and recently a lot of efforts have been devoted to approximate search. In this paper, we present a survey on one of the main solutions, hashing, which has been widely studied since the pioneering work on locality sensitive hashing. 
We divide the hashing algorithms into two main categories: locality sensitive hashing, which designs hash functions without exploring the data distribution, and learning to hash, which learns hash functions according to the data distribution, and review them from various aspects, including hash function design and distance measure and search scheme in the hash coding space.", "Fast retrieval methods are increasingly critical for many large-scale analysis tasks, and there have been several recent methods that attempt to learn hash functions for fast and accurate nearest neighbor searches. In this paper, we develop an algorithm for learning hash functions based on explicitly minimizing the reconstruction error between the original distances and the Hamming distances of the corresponding binary embeddings. We develop a scalable coordinate-descent algorithm for our proposed hashing objective that is able to efficiently learn hash functions in a variety of settings. Unlike existing methods such as semantic hashing and spectral hashing, our method is easily kernelized and does not require restrictive assumptions about the underlying distribution of the data. We present results over several domains to demonstrate that our method outperforms existing state-of-the-art techniques.", "This paper addresses the problem of designing binary codes for high-dimensional data such that vectors that are similar in the original space map to similar binary strings. We introduce a simple distribution-free encoding scheme based on random projections, such that the expected Hamming distance between the binary codes of two vectors is related to the value of a shift-invariant kernel (e.g., a Gaussian kernel) between the vectors. 
We present a full theoretical analysis of the convergence properties of the proposed scheme, and report favorable experimental performance as compared to a recent state-of-the-art method, spectral hashing.", "We introduce a method that enables scalable similarity search for learned metrics. Given pairwise similarity and dissimilarity constraints between some examples, we learn a Mahalanobis distance function that captures the examples' underlying relationships well. To allow sublinear time similarity search under the learned metric, we show how to encode the learned metric parameterization into randomized locality-sensitive hash functions. We further formulate an indirect solution that enables metric learning and hashing for vector spaces whose high dimensionality makes it infeasible to learn an explicit transformation over the feature dimensions. We demonstrate the approach applied to a variety of image data sets, as well as a systems data set. The learned metrics improve accuracy relative to commonly used metric baselines, while our hashing construction enables efficient indexing with learned distances and very large databases.", "Fast retrieval methods are critical for large-scale and data-driven vision applications. Recent work has explored ways to embed high-dimensional features or complex distance functions into a low-dimensional Hamming space where items can be efficiently searched. However, existing methods do not apply for high-dimensional kernelized data when the underlying feature embedding for the kernel is unknown. We show how to generalize locality-sensitive hashing to accommodate arbitrary kernel functions, making it possible to preserve the algorithm's sub-linear time similarity search guarantees for a wide class of useful similarity functions. Since a number of successful image-based kernels have unknown or incomputable embeddings, this is especially valuable for image retrieval tasks. 
We validate our technique on several large-scale datasets, and show that it enables accurate and fast performance for example-based object classification, feature matching, and content-based retrieval.", "Hashing is becoming increasingly popular for efficient nearest neighbor search in massive databases. However, learning short codes that yield good search performance is still a challenge. Moreover, in many cases real-world data lives on a low-dimensional manifold, which should be taken into account to capture meaningful nearest neighbors. In this paper, we propose a novel graph-based hashing method which automatically discovers the neighborhood structure inherent in the data to learn appropriate compact codes. To make such an approach computationally feasible, we utilize Anchor Graphs to obtain tractable low-rank adjacency matrices. Our formulation allows constant time hashing of a new data point by extrapolating graph Laplacian eigenvectors to eigenfunctions. Finally, we describe a hierarchical threshold learning procedure in which each eigenfunction yields multiple bits, leading to higher search accuracy. Experimental comparison with the other state-of-the-art methods on two large datasets demonstrates the efficacy of the proposed method.", "This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multiclass spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). 
The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or \"classemes\" on the ImageNet data set.", "The nearest- or near-neighbor query problems arise in a large variety of database applications, usually in the context of similarity searching. Of late, there has been increasing interest in building search index structures for performing similarity search over high-dimensional data, e.g., image databases, document collections, time-series databases, and genome databases. Unfortunately, all known techniques for solving this problem fall prey to the \"curse of dimensionality.\" That is, the data structures scale poorly with data dimensionality; in fact, if the number of dimensions exceeds 10 to 20, searching in k-d trees and related structures involves the inspection of a large fraction of the database, thereby doing no better than brute-force linear search. It has been suggested that since the selection of features and the choice of a distance metric in typical applications is rather heuristic, determining an approximate nearest neighbor should suffice for most practical purposes. In this paper, we examine a novel scheme for approximate similarity search based on hashing. The basic idea is to hash the points 
from the database so as to ensure that the probability of collision is much higher for objects that are close to each other than for those that are far apart. We provide experimental evidence that our method gives significant improvement in running time over other methods for searching in high-dimensional spaces based on hierarchical tree decomposition. Experimental results also indicate that our scheme scales well even for a relatively large number of dimensions (more than 50)." ] }
1610.02651
2530134766
This paper provides a framework to hash images containing instances of unknown object classes. In many object recognition problems, we might have access to a huge amount of data. It may so happen that even this huge dataset doesn't cover the objects belonging to classes that we see in our day-to-day life. Zero-shot learning exploits auxiliary information (also called signatures) in order to predict the labels corresponding to unknown classes. In this work, we attempt to generate the hash codes for images belonging to unseen classes, information about which is available only through a textual corpus. We formulate this as an unsupervised hashing problem, as the exact labels are not available for the instances of unseen classes. We show that the proposed solution is able to generate hash codes which can predict labels corresponding to unseen classes with appreciably good precision.
It has been shown that leveraging non-linear manifold embedding techniques helps in generating better pairwise-affinity-preserving dense binary codes. Among the well-known hashing algorithms which utilize this idea is spectral hashing ( @cite_30 , @cite_31 ), which uses the eigenfunctions of the Laplacian matrix to capture variation in the data. Anchor graph hashing, popularly known as AGH ( @cite_13 , @cite_25 ), uses anchor graphs to generate hash codes for training and out-of-sample data efficiently and effectively. Benefiting from the properties of manifold approaches, inductive manifold hashing (IMH) has been proposed in @cite_22 , which computes the manifold embedding of a given data point according to the manifold of its neighbours. In @cite_7 , a solution has been proposed for preserving the inner-product similarities among raw vectors while tackling the maximum inner product search (MIPS) problem.
{ "cite_N": [ "@cite_30", "@cite_22", "@cite_7", "@cite_31", "@cite_13", "@cite_25" ], "mid": [ "", "2171700594", "2197084977", "189214596", "2251864938", "2142881874" ], "abstract": [ "", "Learning based hashing methods have attracted considerable attention due to their ability to greatly increase the scale at which existing algorithms may operate. Most of these methods are designed to generate binary codes that preserve the Euclidean distance in the original space. Manifold learning techniques, in contrast, are better able to model the intrinsic structure embedded in the original high-dimensional data. The complexity of these models, and the problems with out-of-sample data, have previously rendered them unsuitable for application to large-scale embedding, however. In this work, we consider how to learn compact binary embeddings on their intrinsic manifolds. In order to address the above-mentioned difficulties, we describe an efficient, inductive solution to the out-of-sample data problem, and a process by which non-parametric manifold learning may be used as the basis of a hashing method. Our proposed approach thus allows the development of a range of new hashing techniques exploiting the flexibility of the wide variety of manifold learning approaches available. We particularly show that hashing on the basis of t-SNE [29] outperforms state-of-the-art hashing methods on large-scale benchmark datasets, and is very effective for image classification with very short code lengths.", "Binary coding or hashing techniques are recognized to accomplish efficient near neighbor search, and have thus attracted broad interests in the recent vision and learning studies. However, such studies have rarely been dedicated to Maximum Inner Product Search (MIPS), which plays a critical role in various vision applications. In this paper, we investigate learning binary codes to exclusively handle the MIPS problem. 
Inspired by the latest advance in asymmetric hashing schemes, we propose an asymmetric binary code learning framework based on inner product fitting. Specifically, two sets of coding functions are learned such that the inner products between their generated binary codes can reveal the inner products between original data vectors. We also propose an alternative simpler objective which maximizes the correlations between the inner products of the produced binary codes and raw data vectors. In both objectives, the binary codes and coding functions are simultaneously learned without continuous relaxations, which is the key to achieving high-quality binary codes. We evaluate the proposed method, dubbed Asymmetric Inner-product Binary Coding (AIBC), relying on the two objectives on several large-scale image datasets. Both of them are superior to the state-of-the-art binary coding and hashing methods in performing MIPS tasks.", "With the growing availability of very large image databases, there has been a surge of interest in methods based on \"semantic hashing\", i.e. compact binary codes of data-points so that the Hamming distance between codewords correlates with similarity. In reviewing and comparing existing methods, we show that their relative performance can change drastically depending on the definition of ground-truth neighbors. Motivated by this finding, we propose a new formulation for learning binary codes which seeks to reconstruct the affinity between datapoints, rather than their distances. We show that this criterion is intractable to solve exactly, but a spectral relaxation gives an algorithm where the bits correspond to thresholded eigenvectors of the affinity matrix, and as the number of datapoints goes to infinity these eigenvectors converge to eigenfunctions of Laplace-Beltrami operators, similar to the recently proposed Spectral Hashing (SH) method. 
Unlike SH whose performance may degrade as the number of bits increases, the optimal code using our formulation is guaranteed to faithfully reproduce the affinities as the number of bits increases. We show that the number of eigenfunctions needed may increase exponentially with dimension, but introduce a \"kernel trick\" to allow us to compute with an exponentially large number of bits but using only memory and computation that grows linearly with dimension. Experiments show that MDSH outperforms the state-of-the-art, especially in the challenging regime of small distance thresholds.", "Hashing is becoming increasingly popular for efficient nearest neighbor search in massive databases. However, learning short codes that yield good search performance is still a challenge. Moreover, in many cases real-world data lives on a low-dimensional manifold, which should be taken into account to capture meaningful nearest neighbors. In this paper, we propose a novel graph-based hashing method which automatically discovers the neighborhood structure inherent in the data to learn appropriate compact codes. To make such an approach computationally feasible, we utilize Anchor Graphs to obtain tractable low-rank adjacency matrices. Our formulation allows constant time hashing of a new data point by extrapolating graph Laplacian eigenvectors to eigenfunctions. Finally, we describe a hierarchical threshold learning procedure in which each eigenfunction yields multiple bits, leading to higher search accuracy. Experimental comparison with the other state-of-the-art methods on two large datasets demonstrates the efficacy of the proposed method.", "Hashing has emerged as a popular technique for fast nearest neighbor search in gigantic databases. In particular, learning based hashing has received considerable attention due to its appealing storage and search efficiency. 
However, the performance of most unsupervised learning based hashing methods deteriorates rapidly as the hash code length increases. We argue that the degraded performance is due to inferior optimization procedures used to achieve discrete binary codes. This paper presents a graph-based unsupervised hashing model to preserve the neighborhood structure of massive data in a discrete code space. We cast the graph hashing problem into a discrete optimization framework which directly learns the binary codes. A tractable alternating maximization algorithm is then proposed to explicitly deal with the discrete constraints, yielding high-quality codes to well capture the local neighborhoods. Extensive experiments performed on four large datasets with up to one million samples show that our discrete optimization based graph hashing method obtains superior search accuracy over state-of-the-art unsupervised hashing methods, especially for longer codes." ] }
1610.02454
2952434594
Generative Adversarial Networks (GANs) have recently demonstrated the capability to synthesize compelling real-world images, such as room interiors, album covers, manga, faces, birds, and flowers. While existing models can synthesize images based on global constraints such as a class label or caption, they do not provide control over pose or object location. We propose a new model, the Generative Adversarial What-Where Network (GAWWN), that synthesizes images given instructions describing what content to draw in which location. We show high-quality 128 x 128 image synthesis on the Caltech-UCSD Birds dataset, conditioned on both informal text descriptions and also object location. Our system exposes control over both the bounding box around the bird and its constituent parts. By modeling the conditional distributions over part locations, our system also enables conditioning on arbitrary subsets of parts (e.g. only the beak and tail), yielding an efficient interface for picking part locations. We also show preliminary results on the more challenging domain of text- and location-controllable synthesis of images of human actions on the MPII Human Pose dataset.
In addition to recognizing patterns within images, deep convolutional networks have shown remarkable capability to generate images. trained a deconvolutional network to generate 3D chair renderings conditioned on a set of graphics codes indicating shape, position and lighting. followed with a recurrent convolutional encoder-decoder that learned to apply incremental 3D rotations to generate sequences of rotated chair and face images. @cite_1 used a similar approach in order to predict action-conditional future frames of Atari games. trained a network to generate images that solved visual analogy problems.
{ "cite_N": [ "@cite_1" ], "mid": [ "2118688707" ], "abstract": [ "Motivated by vision-based reinforcement learning (RL) problems, in particular Atari games from the recent benchmark Arcade Learning Environment (ALE), we consider spatio-temporal prediction problems where future (image-)frames are dependent on control variables or actions as well as previous frames. While not composed of natural scenes, frames in Atari games are high-dimensional in size, can involve tens of objects with one or more objects being controlled by the actions directly and many other objects being influenced indirectly, can involve entry and departure of objects, and can involve deep partial observability. We propose and evaluate two deep neural network architectures that consist of encoding, action-conditional transformation, and decoding layers based on convolutional neural networks and recurrent neural networks. Experimental results show that the proposed architectures are able to generate visually-realistic frames that are also useful for control over approximately 100-step action-conditional futures in some games. To the best of our knowledge, this paper is the first to make and evaluate long-term predictions on high-dimensional video conditioned by control inputs." ] }
1610.02442
2530217065
Lecture notes are important for students to review and understand the key points of a class. Unfortunately, students often miss or lose part of the lecture notes. In this paper, we design and implement an infrared sensor based system, InfraNotes, to automatically record the notes on the board by sensing and analyzing hand gestures of the lecturer. Compared with existing techniques, our system does not require special accessories for lecturers, such as sensor-facilitated pens, writing surfaces, or video-taping infrastructure. Instead, it only has an infrared-sensor module on the eraser holder of the black/white board to capture handwritten trajectories. With a lightweight framework for handwritten trajectory processing, clear lecture notes can be generated automatically. We evaluate the quality of lecture notes by three standard character recognition techniques. The results indicate that InfraNotes is a promising solution to create clear and complete lecture notes to promote education.
Some universities recorded lecture videos; therefore, a variety of research has focused on photo/video analysis in order to generate class notes. One practical solution was blackboard segmentation @cite_0 . This system first analyzed video to segment written regions on the blackboard, and then exported these regions of notes in photo format. This solution was effective, especially when processing past lectures. However, it could only generate photos, which are not search-friendly content. Another method @cite_17 @cite_15 was to use a camera to scan the whole whiteboard with several photos, stitch those images together as a panorama, eliminate the background, and keep an enhanced notes image. This solution was practical and generated readable notes images. There was also a system @cite_21 which could recognize handwritten text in these photos by OCR. However, these solutions could not automatically generate notes with a time line for each line of writing. In real settings, such a system might miss content if it missed a single photo. Modifications to the notes also could not be preserved. Most significantly, the writer must not block the notes on the board, which is hard to achieve during a lecture.
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_21", "@cite_17" ], "mid": [ "2118877014", "2542995786", "2952953660", "2097138604" ], "abstract": [ "We propose a method for segmentation of written regions on a blackboard in the lecture room using a video image. We first detect the static edges whose locations on the image are stationary. Next, we extract several rectangular regions in which these static edges are located densely. Finally, by use of fuzzy rules, the extracted rectangles are merged as contextual regions, where letters and figures in each contextual region explain each context. We apply our method to automatic production of lecture video, and achieve segmentation of written regions on the blackboard in lecture rooms.", "A whiteboard provides a large shared space for the participants to focus their attention and express their ideas, and is therefore a great collaboration tool for information workers. However, it has several limitations; notably, the contents on the whiteboard are hard to archive or share with people who are not present in the discussions. This paper presents our work in developing tools to facilitate collaboration on physical whiteboards by using a camera and a microphone. In particular, we have developed two systems: a whiteboard-camera system and a projector-whiteboard-camera system. The whiteboard-camera system allows a user to take notes of a whiteboard meeting when he/she wants or in an automatic way, to transmit the whiteboard content to remote participants in real time and in an efficient way, and to archive the whole whiteboard session for efficient post-viewing. The projector-whiteboard-camera system incorporates a whiteboard into a projector-camera system. The whiteboard serves as the writing surface (input) as well as the projecting surface (output). Many applications of such a system inevitably require extracting handwritings from video images that contain both handwritings and the projected content. 
By analogy with echo cancellation in audio conferencing, we call this problem visual echo cancellation, and we describe one approach to accomplish the task. Our systems can be retrofit to any existing whiteboard. With the help of a camera and a microphone and optionally a projector, we are effectively bridging the physical and digital worlds.", "", "The paper describes a system for scanning the content on a whiteboard into a computer by a digital camera and also for enhancing the visual quality of whiteboard images. Because digital cameras are becoming accessible to average users, more and more people use digital cameras to take pictures of whiteboards instead of copying manually, thus increasing productivity significantly. However, the images are usually taken from an angle, resulting in undesired perspective distortion. They also contain other distracting regions such as walls. We have developed a system that automatically locates the boundary of a whiteboard, crops out the whiteboard region, rectifies it into a rectangle, and corrects the color to make the whiteboard completely white. In the case where a single image is not enough (e.g., large whiteboard and low-res camera), we have developed a robust feature-based technique to stitch multiple overlapping images automatically. We therefore reproduce the whiteboard content as a faithful electronic document which can be archived or shared with others. The system has been tested extensively, and very good results have been obtained." ] }
1610.02442
2530217065
Lecture notes are important for students to review and understand the key points of a class. Unfortunately, students often miss or lose part of the lecture notes. In this paper, we design and implement an infrared sensor based system, InfraNotes, to automatically record the notes on the board by sensing and analyzing hand gestures of the lecturer. Compared with existing techniques, our system does not require special accessories for lecturers, such as sensor-facilitated pens, writing surfaces, or video-taping infrastructure. Instead, it only has an infrared-sensor module on the eraser holder of the black/white board to capture handwritten trajectories. With a lightweight framework for handwritten trajectory processing, clear lecture notes can be generated automatically. We evaluate the quality of lecture notes by three standard character recognition techniques. The results indicate that InfraNotes is a promising solution to create clear and complete lecture notes to promote education.
Handwriting recognition is the task of transforming a language represented in its spatial form of graphical marks into its symbolic representation. There are two kinds of handwriting input, on-line and off-line @cite_5 . On-line handwriting input maintains the time series of writing points, the order of strokes, and additional information about the pen tip (velocity, acceleration). For example, handwriting input methods on cell phones and tablets receive on-line handwriting input when users touch the screen. Preprocessing for on-line recognition includes noise removal and stroke and character segmentation. Off-line handwriting input only preserves images of the completed on-board writing area. For example, banks recognize handwritten amounts on checks. Preprocessing for off-line recognition includes setting thresholds to extract writing points, removal of noise, segmentation of writing lines, and finally segmentation of characters and words. Our system is a type of on-line handwriting input system and records time stamps of each point on the handwritten trajectory. However, the sensor tracks both on-board and off-board movement of writing tools. Thus, we need classification after segmentation to determine which strokes belong to on-board writing and which to off-board hand movement.
{ "cite_N": [ "@cite_5" ], "mid": [ "2142069714" ], "abstract": [ "Handwriting has continued to persist as a means of communication and recording information in day-to-day life even with the introduction of new technologies. Given its ubiquity in human transactions, machine recognition of handwriting has practical significance, as in reading handwritten notes in a PDA, in postal addresses on envelopes, in amounts in bank checks, in handwritten fields in forms, etc. This overview describes the nature of handwritten language, how it is transduced into electronic data, and the basic concepts behind written language recognition algorithms. Both the online case (which pertains to the availability of trajectory data during writing) and the off-line case (which pertains to scanned images) are considered. Algorithms for preprocessing, character and word recognition, and performance with practical systems are indicated. Other fields of application, like signature verification, writer authentication, handwriting learning tools are also considered." ] }
1610.02003
2529570878
Machine translation is the discipline concerned with developing automated tools for translating from one human language to another. Statistical machine translation (SMT) is the dominant paradigm in this field. In SMT, translations are generated by means of statistical models whose parameters are learned from bilingual data. Scalability is a key concern in SMT, as one would like to make use of as much data as possible to train better translation systems. In recent years, mobile devices with adequate computing power have become widely available. Despite being very successful, mobile applications relying on NLP systems continue to follow a client-server architecture, which is of limited use because access to the internet is often limited and expensive. The goal of this dissertation is to show how to construct a scalable machine translation system that can operate with the limited resources available on a mobile device. The main challenge for porting translation systems to mobile devices is memory usage. The amount of memory available on a mobile device is far less than what is typically available on the server side of a client-server application. In this thesis, we investigate alternatives for the two components which prevent standard translation systems from working on mobile devices due to high memory usage. We show that once these standard components are replaced with our proposed alternatives, we obtain a scalable translation system that can work on a device with limited memory.
Another area of related research is focused on producing compact n-gram language models by pruning weights for n-grams that are unlikely to appear at test time. explore two simple pruning techniques: filtering n-grams below a certain frequency threshold and n-grams whose weight is close to the back-off weight. proposes a similar approach which aims to minimize the difference in perplexity between the original model and the pruned model. construct models for estimating the probability of a n-gram not occurring in the test corpus which they use for pruning the language model. These techniques can be combined with a memory efficient implementation (e.g. the trie based language models available in KenLM @cite_32 ) to obtain accurate, compact back-off n-gram language models.
{ "cite_N": [ "@cite_32" ], "mid": [ "2134800885" ], "abstract": [ "We present KenLM, a library that implements two data structures for efficient language model queries, reducing both time and memory costs. The Probing data structure uses linear probing hash tables and is designed for speed. Compared with the widely-used SRILM, our Probing model is 2.4 times as fast while using 57% of the memory. The Trie data structure is a trie with bit-level packing, sorted records, interpolation search, and optional quantization aimed at lower memory consumption. Trie simultaneously uses less memory than the smallest lossless baseline and less CPU than the fastest baseline. Our code is open-source, thread-safe, and integrated into the Moses, cdec, and Joshua translation systems. This paper describes the several performance techniques used and presents benchmarks against alternative implementations." ] }
1610.02237
2952124503
We present an approach for weakly supervised learning of human actions from video transcriptions. Our system is based on the idea that, given a sequence of input data and a transcript, i.e. a list of the actions in the order they occur in the video, it is possible to infer the actions within the video stream, and thus, learn the related action models without the need for any frame-based annotation. Starting from the transcript information at hand, we split the given data sequences uniformly based on the number of expected actions. We then learn action models for each class by maximizing the probability that the training video sequences are generated by the action models given the sequence order as defined by the transcripts. The learned model can be used to temporally segment an unseen video with or without transcript. We evaluate our approach on four distinct activity datasets, namely Hollywood Extended, MPII Cooking, Breakfast and CRIM13. We show that our system is able to align the scripted actions with the video data and that the learned models localize and classify actions competitively in comparison to models trained with full supervision, i.e. with frame level annotations, and that they outperform any current state-of-the-art approach for aligning transcripts with video data.
Additionally, in the context of hierarchical action detection, where long activities can be subdivided into smaller action units, several works make use of grammars @cite_1 @cite_20 @cite_14 @cite_21 . While the method of @cite_14 is particularly designed for hierarchical action detection and makes use of a forward-backward algorithm over the grammar's syntax tree, our method builds on @cite_21 , where a HMM is used in combination with a context free grammar to parse the most likely video segmentation and corresponding action classification. While all previously mentioned methods use fully supervised training and rely on annotation of all actions and their exact segment boundaries, our method only requires the sequence of actions that occur in the video but no segment boundaries at all.
{ "cite_N": [ "@cite_14", "@cite_1", "@cite_21", "@cite_20" ], "mid": [ "1983845824", "2099614498", "2314362175", "1988650444" ], "abstract": [ "We propose a probabilistic method for parsing a temporal sequence such as a complex activity defined as composition of sub-activities actions. The temporal structure of the high-level activity is represented by a string-length limited stochastic context-free grammar. Given the grammar, a Bayes network, which we term Sequential Interval Network (SIN), is generated where the variable nodes correspond to the start and end times of component actions. The network integrates information about the duration of each primitive action, visual detection results for each primitive action, and the activity's temporal structure. At any moment in time during the activity, message passing is used to perform exact inference yielding the posterior probabilities of the start and end times for each different activity action. We provide demonstrations of this framework being applied to vision tasks such as action prediction, classification of the high-level activities or temporal segmentation of a test sequence, the method is also applicable in Human Robot Interaction domain where continual prediction of human action is needed.", "This paper describes a framework for modeling human activities as temporally structured processes. Our approach is motivated by the inherently hierarchical nature of human activities and the close correspondence between human actions and speech: We model action units using Hidden Markov Models, much like words in speech. These action units then form the building blocks to model complex human activities as sentences using an action grammar. To evaluate our approach, we collected a large dataset of daily cooking activities: The dataset includes a total of 52 participants, each performing a total of 10 cooking activities in multiple real-life kitchens, resulting in over 77 hours of video footage. 
We evaluate the HTK toolkit, a state-of-the-art speech recognition engine, in combination with multiple video feature descriptors, for both the recognition of cooking activities (e.g., making pancakes) as well as the semantic parsing of videos into action units (e.g., cracking eggs). Our results demonstrate the benefits of structured temporal generative approaches over existing discriminative approaches in coping with the complexity of human daily life activities.", "We describe an end-to-end generative approach for the segmentation and recognition of human activities. In this approach, a visual representation based on reduced Fisher Vectors is combined with a structured temporal model for recognition. We show that the statistical properties of Fisher Vectors make them an especially suitable front-end for generative models such as Gaussian mixtures. The system is evaluated for both the recognition of complex activities as well as their parsing into action units. Using a variety of video datasets ranging from human cooking activities to animal behaviors, our experiments demonstrate that the resulting architecture outperforms state-of-the-art approaches for larger datasets, i.e. when sufficient amount of data is available for training structured generative models.", "Real-world videos of human activities exhibit temporal structure at various scales, long videos are typically composed out of multiple action instances, where each instance is itself composed of sub-actions with variable durations and orderings. Temporal grammars can presumably model such hierarchical structure, but are computationally difficult to apply for long video streams. We describe simple grammars that capture hierarchical temporal structure while admitting inference with a finite-state-machine. This makes parsing linear time, constant storage, and naturally online. We train grammar parameters using a latent structural SVM, where latent subactions are learned automatically. 
We illustrate the effectiveness of our approach over common baselines on a new half-million frame dataset of continuous YouTube videos." ] }
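The grammar-constrained parsing described in the preceding entry can be illustrated with a minimal dynamic-programming sketch: given per-frame class scores and an action order fixed by the grammar, it finds the segment boundaries with the maximal total score. All names and the toy scores below are our own illustration, not the cited systems' actual implementation.

```python
import numpy as np

def segment_video(frame_scores, action_order):
    """Segment T frames into len(action_order) ordered segments (each >= 1
    frame) so that the sum of per-frame log-scores is maximized.
    frame_scores: (T, C) array of scores; action_order: grammar-fixed classes."""
    T = frame_scores.shape[0]
    K = len(action_order)
    dp = np.full((K, T), -np.inf)          # dp[k, t]: best score, frame t has action k
    back = np.zeros((K, T), dtype=int)     # 1 where we switched from action k-1
    dp[0, 0] = frame_scores[0, action_order[0]]
    for t in range(1, T):
        dp[0, t] = dp[0, t - 1] + frame_scores[t, action_order[0]]
    for k in range(1, K):
        for t in range(k, T):              # action k cannot start before frame k
            stay, switch = dp[k, t - 1], dp[k - 1, t - 1]
            if switch > stay:
                dp[k, t] = switch + frame_scores[t, action_order[k]]
                back[k, t] = 1
            else:
                dp[k, t] = stay + frame_scores[t, action_order[k]]
    labels = np.empty(T, dtype=int)        # backtrace to per-frame labels
    k = K - 1
    for t in range(T - 1, -1, -1):
        labels[t] = action_order[k]
        if back[k, t] and k > 0:
            k -= 1
    return labels
```

With three frames favoring class 0 followed by three favoring class 1 and the grammar order [0, 1], the sketch recovers the boundary between the two actions.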
1610.01891
2952808631
One essential task in information extraction from the medical corpus is drug name recognition. Compared with text sources from other domains, medical text is special and has unique characteristics. In addition, medical text mining poses more challenges, e.g., more unstructured text, the rapid addition of new terms, and a wide range of name variations for the same drug. The mining is even more challenging due to the lack of labeled datasets and external knowledge, as well as the multiple-token representations of a single drug name that are common in real application settings. Although many approaches have been proposed to tackle the task, some problems remain, with poor F-score performance (less than 0.75). This paper presents a new treatment of data representation techniques to overcome some of those challenges. We propose three data representation techniques based on the characteristics of word distributions and word similarities resulting from word embedding training. The first technique is evaluated with a standard NN model, i.e., an MLP (Multi-Layer Perceptron). The second technique involves two deep network classifiers, i.e., DBN (Deep Belief Networks) and SAE (Stacked Denoising Autoencoders). The third technique represents the sentence as a sequence that is evaluated with a recurrent NN model, i.e., LSTM (Long Short-Term Memory). In extracting drug name entities, the third technique gives the best F-score performance compared to the state of the art, with an average F-score of 0.8645.
Drug name extraction and classification are among the challenges in the Semantic Evaluation Task (SemEval 2013). The best-reported performance for this challenge was an F-score of 71.5%. A hybrid model, which combines statistical learning with a dictionary-based approach, is proposed by @cite_11 . In their study, the authors utilize a word2vec representation, a CRF learning model and DINTO, a drug ontology. With this word2vec representation, the target drug is treated as the current token in a context window consisting of three tokens on the left and three tokens on the right. Additional features are included in the data representation, such as POS tags, lemmas in the context window, and orthographic features such as uppercase, lowercase, and mixed case. The authors also used Wikipedia text as an additional resource for word2vec training. The best F-score achieved by the method in extracting drug names is 0.72.
{ "cite_N": [ "@cite_11" ], "mid": [ "2251235442" ], "abstract": [ "This paper describes a machine learningbased approach that uses word embedding features to recognize drug names from biomedical texts. As a starting point, we developed a baseline system based on Conditional Random Field (CRF) trained with standard features used in current Named Entity Recognition (NER) systems. Then, the system was extended to incorporate new features, such as word vectors and word clusters generated by the Word2Vec tool and a lexicon feature from the DINTO ontology. We trained the Word2vec tool over two different corpus: Wikipedia and MedLine. Our main goal is to study the effectiveness of using word embeddings as features to improve performance on our baseline system, as well as to analyze whether the DINTO ontology could be a valuable complementary data source integrated in a machine learning NER system. To evaluate our approach and compare it with previous work, we conducted a series of experiments on the dataset of SemEval-2013 Task 9.1 Drug Name Recognition." ] }
1610.01891
2952808631
One essential task in information extraction from the medical corpus is drug name recognition. Compared with text sources from other domains, medical text is special and has unique characteristics. In addition, medical text mining poses more challenges, e.g., more unstructured text, the rapid addition of new terms, and a wide range of name variations for the same drug. The mining is even more challenging due to the lack of labeled datasets and external knowledge, as well as the multiple-token representations of a single drug name that are common in real application settings. Although many approaches have been proposed to tackle the task, some problems remain, with poor F-score performance (less than 0.75). This paper presents a new treatment of data representation techniques to overcome some of those challenges. We propose three data representation techniques based on the characteristics of word distributions and word similarities resulting from word embedding training. The first technique is evaluated with a standard NN model, i.e., an MLP (Multi-Layer Perceptron). The second technique involves two deep network classifiers, i.e., DBN (Deep Belief Networks) and SAE (Stacked Denoising Autoencoders). The third technique represents the sentence as a sequence that is evaluated with a recurrent NN model, i.e., LSTM (Long Short-Term Memory). In extracting drug name entities, the third technique gives the best F-score performance compared to the state of the art, with an average F-score of 0.8645.
Another approach for discovering valuable information from clinical text data, which adopts an event extraction model, was examined in @cite_10 . They use an SVM classifier to predict drug or non-drug entities, applied to the DrugBank dataset. The best performance achieved by their method is an F-score of 0.6. The drawback of their approach is that it only deals with single-token drug names.
{ "cite_N": [ "@cite_10" ], "mid": [ "2250619635" ], "abstract": [ "The DDIExtraction 2013 task in the SemEval conference concerns the detection of drug names and statements of drug-drug interactions (DDI) from text. Extraction of DDIs is important for providing up-to-date knowledge on adverse interactions between coadministered drugs. We apply the machine learning based Turku Event Extraction System to both tasks. We evaluate three feature sets, syntactic features derived from deep parsing, enhanced optionally with features derived from DrugBank or from both DrugBank and MetaMap. TEES achieves F-scores of 60 for the drug name recognition task and 59 for the DDI extraction task." ] }
1610.01891
2952808631
One essential task in information extraction from the medical corpus is drug name recognition. Compared with text sources from other domains, medical text is special and has unique characteristics. In addition, medical text mining poses more challenges, e.g., more unstructured text, the rapid addition of new terms, and a wide range of name variations for the same drug. The mining is even more challenging due to the lack of labeled datasets and external knowledge, as well as the multiple-token representations of a single drug name that are common in real application settings. Although many approaches have been proposed to tackle the task, some problems remain, with poor F-score performance (less than 0.75). This paper presents a new treatment of data representation techniques to overcome some of those challenges. We propose three data representation techniques based on the characteristics of word distributions and word similarities resulting from word embedding training. The first technique is evaluated with a standard NN model, i.e., an MLP (Multi-Layer Perceptron). The second technique involves two deep network classifiers, i.e., DBN (Deep Belief Networks) and SAE (Stacked Denoising Autoencoders). The third technique represents the sentence as a sequence that is evaluated with a recurrent NN model, i.e., LSTM (Long Short-Term Memory). In extracting drug name entities, the third technique gives the best F-score performance compared to the state of the art, with an average F-score of 0.8645.
Our proposed method is based only on the data distribution pattern and the characteristics of the word vectors, so there is no need for external knowledge or additional handcrafted features. To overcome the multiple-token problem, we propose a new technique that treats a target entity as a set of tokens (a tuple) at once, rather than treating the target entity as a single token surrounded by other tokens as done by @cite_11 or @cite_37 . By addressing a set of tokens as a single sample, our proposed method can predict whether a set of tokens is a drug name or not. In our first experiment, we evaluate the first and second data representation techniques and apply the MLP learning model. In our second scenario, we take the second technique, which gave the best result with the MLP, and apply it to two different machine learning methods: DBN and SAE. In our third experiment, we examine the third data representation technique, which utilizes the Euclidean distance between successive words in a given sentence of medical text. The third data representation is then fed into an LSTM model. Based on the resulting F-score values, the second experiment gives the best performance.
{ "cite_N": [ "@cite_37", "@cite_11" ], "mid": [ "2124479345", "2251235442" ], "abstract": [ "Motivation: Knowledge of drug-drug interactions (DDIs) is crucial for healthcare professionals in order to avoid adverse effects when co-administering drugs to patients. Since most newly discovered DDIs are made available through scientific publications, automatic DDI extraction is highly relevant. Results: We propose a novel feature-based approach to extract DDIs from text. Our approach consists of three steps. First, we apply text preprocessing to convert input sentences from a given dataset into structured representations. Second, we map each candidate DDI pair from that dataset into a suitable syntactic structure. Based on that, a novel set of features is used to generate feature vectors for these candidate DDI pairs. Third, the obtained feature vectors are used to train a support vector machine (SVM) classifier. When evaluated on two DDI extraction challenge test datasets from 2011 and 2013, our system achieves F-scores of 71.1 and 83.5 , respectively, outperforming any state-of-the-art DDI extraction system. Availability: The source code is available for academic use at http: www.biosemantics.org uploads DDI.zip", "This paper describes a machine learningbased approach that uses word embedding features to recognize drug names from biomedical texts. As a starting point, we developed a baseline system based on Conditional Random Field (CRF) trained with standard features used in current Named Entity Recognition (NER) systems. Then, the system was extended to incorporate new features, such as word vectors and word clusters generated by the Word2Vec tool and a lexicon feature from the DINTO ontology. We trained the Word2vec tool over two different corpus: Wikipedia and MedLine. 
Our main goal is to study the effectiveness of using word embeddings as features to improve performance on our baseline system, as well as to analyze whether the DINTO ontology could be a valuable complementary data source integrated in a machine learning NER system. To evaluate our approach and compare it with previous work, we conducted a series of experiments on the dataset of SemEval-2013 Task 9.1 Drug Name Recognition." ] }
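The two representation ideas described in the preceding entry can be sketched in a few lines: treating a multi-token candidate as one sample, and feeding the Euclidean distances between successive word vectors to a sequence model. Averaging as the tuple aggregation is our simplifying assumption; the paper's exact aggregation may differ.

```python
import numpy as np

def tuple_vector(tokens, embeddings, dim):
    """Represent a multi-token candidate entity as one sample by
    averaging its token vectors (illustrative aggregation choice)."""
    vecs = [embeddings.get(t.lower(), np.zeros(dim)) for t in tokens]
    return np.mean(vecs, axis=0)

def distance_sequence(sentence, embeddings, dim):
    """Euclidean distance between successive word vectors, usable as the
    per-step input of a sequence model such as an LSTM."""
    vecs = [embeddings.get(t.lower(), np.zeros(dim)) for t in sentence]
    return np.array([np.linalg.norm(b - a) for a, b in zip(vecs, vecs[1:])])
```

A sentence of n tokens thus yields n-1 distance features, and every candidate tuple, regardless of its length, yields one fixed-size vector to classify as drug / non-drug.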
1610.01733
2528501250
Exploration in an unknown environment is a core functionality for mobile robots. Learning-based exploration methods, including convolutional neural networks, provide excellent strategies without human-designed logic for feature extraction. However, conventional supervised learning algorithms inevitably require a great deal of effort for labeling datasets, and scenes not included in the training set mostly go unrecognized. We propose a deep reinforcement learning method for the exploration of mobile robots in an indoor environment using only the depth information from an RGB-D sensor. Based on the Deep Q-Network framework, the raw depth image is taken as the only input to estimate the Q values corresponding to all moving commands. The training of the network weights is end-to-end. In arbitrarily constructed simulation environments, we show that the robot can quickly adapt to unfamiliar scenes without any man-made labeling. Moreover, through analysis of the receptive fields of the feature representations, we find that deep reinforcement learning motivates the convolutional networks to estimate the traversability of the scenes. The test results are compared with exploration strategies based on deep learning or reinforcement learning alone. Even though it is trained only in the simulated environment, experimental results in a real-world environment demonstrate that the cognitive ability of the robot controller is dramatically improved compared with the supervised method. We believe this is the first time that raw sensor information has been used to build a cognitive exploration strategy for mobile robots through end-to-end deep reinforcement learning.
By regarding RGB or RGB-D images as the states of robots, reinforcement learning can be directly used to learn visual control policies. In our previous work @cite_20 , a Q-learning based reinforcement learning controller was used to help a robot navigate in the simulation environment.
{ "cite_N": [ "@cite_20" ], "mid": [ "2562788852" ], "abstract": [ "This paper introduces a reinforcement learning method for exploring a corridor environment with the depth information from an RGB-D sensor only. The robot controller achieves obstacle avoidance ability by pre-training of feature maps using the depth information. The system is based on the recent Deep Q-Network (DQN) framework where a convolution neural network structure was adopted in the Q-value estimation of the Q-learning method. We separate the DQN into a supervised deep learning structure and a Q-learning network. The experiments of a Turtlebot in the Gazebo simulation environment show the robustness to different kinds of corridor environments. All of the experiments use the same pre-training deep learning structure. Note that the robot is traveling in environments which are different from the pre-training environment. It is the first time that raw sensor information is used to build such an exploring strategy for robotics by reinforcement learning." ] }
1610.01733
2528501250
Exploration in an unknown environment is a core functionality for mobile robots. Learning-based exploration methods, including convolutional neural networks, provide excellent strategies without human-designed logic for feature extraction. However, conventional supervised learning algorithms inevitably require a great deal of effort for labeling datasets, and scenes not included in the training set mostly go unrecognized. We propose a deep reinforcement learning method for the exploration of mobile robots in an indoor environment using only the depth information from an RGB-D sensor. Based on the Deep Q-Network framework, the raw depth image is taken as the only input to estimate the Q values corresponding to all moving commands. The training of the network weights is end-to-end. In arbitrarily constructed simulation environments, we show that the robot can quickly adapt to unfamiliar scenes without any man-made labeling. Moreover, through analysis of the receptive fields of the feature representations, we find that deep reinforcement learning motivates the convolutional networks to estimate the traversability of the scenes. The test results are compared with exploration strategies based on deep learning or reinforcement learning alone. Even though it is trained only in the simulated environment, experimental results in a real-world environment demonstrate that the cognitive ability of the robot controller is dramatically improved compared with the supervised method. We believe this is the first time that raw sensor information has been used to build a cognitive exploration strategy for mobile robots through end-to-end deep reinforcement learning.
Due to its potential for automating the design of data representations, deep reinforcement learning has attracted considerable attention recently @cite_22 . Deep reinforcement learning was first applied to playing Atari 2600 games @cite_3 . The typical model-free Q-learning method was combined with convolutional feature extraction structures as the Deep Q-Network (DQN). The learned policies beat human players and previous algorithms in most Atari games. Building on the success of DQN @cite_3 , revised deep reinforcement learning methods have appeared that improve performance on a variety of applications. Unlike DQN, which takes several consecutive images as input, DRQN @cite_13 replaced the first fully-connected layer after the convolutions with a recurrent long short-term memory (LSTM) layer. Taking only one frame as input, the trained model performed as well as DQN in Atari games. The dueling network @cite_19 separated the Q-value estimator into two independent network structures, one for the state value function and one for the advantage function. It is currently the state-of-the-art method on the Atari 2600 domain.
{ "cite_N": [ "@cite_19", "@cite_13", "@cite_22", "@cite_3" ], "mid": [ "2951799221", "2121092017", "2342662072", "2145339207" ], "abstract": [ "In recent years there have been many successes of using deep representations in reinforcement learning. Still, many of these applications use conventional architectures, such as convolutional networks, LSTMs, or auto-encoders. In this paper, we present a new neural network architecture for model-free reinforcement learning. Our dueling network represents two separate estimators: one for the state value function and one for the state-dependent action advantage function. The main benefit of this factoring is to generalize learning across actions without imposing any change to the underlying reinforcement learning algorithm. Our results show that this architecture leads to better policy evaluation in the presence of many similar-valued actions. Moreover, the dueling architecture enables our RL agent to outperform the state-of-the-art on the Atari 2600 domain.", "Deep Reinforcement Learning has yielded proficient controllers for complex tasks. However, these controllers have limited memory and rely on being able to perceive the complete game screen at each decision point. To address these shortcomings, this article investigates the effects of adding recurrency to a Deep Q-Network (DQN) by replacing the first post-convolutional fully-connected layer with a recurrent LSTM. The resulting (DRQN), although capable of seeing only a single frame at each timestep, successfully integrates information through time and replicates DQN's performance on standard Atari games and partially observed equivalents featuring flickering game screens. Additionally, when trained with partial observations and evaluated with incrementally more complete observations, DRQN's performance scales as a function of observability. 
Conversely, when trained with full observations and evaluated with partial observations, DRQN's performance degrades less than DQN's. Thus, given the same length of history, recurrency is a viable alternative to stacking a history of frames in the DQN's input layer and while recurrency confers no systematic advantage when learning to play the game, the recurrent net can better adapt at evaluation time if the quality of observations changes.", "Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at this https URL in order to facilitate experimental reproducibility and to encourage adoption by other researchers.", "An artificial agent is developed that learns to play a diverse range of classic Atari 2600 computer games directly from sensory experience, achieving a performance comparable to that of an expert human player; this work paves the way to building general-purpose learning algorithms that bridge the divide between perception and action." ] }
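The dueling aggregation mentioned in this record's related work can be written in a few lines; the mean-subtraction is the identifiability trick from the cited paper, and the batch-by-actions shapes are our illustrative choice.

```python
import numpy as np

def dueling_q(value, advantages):
    """Combine the two dueling streams into Q-values:
    Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a').
    value: (batch, 1); advantages: (batch, n_actions)."""
    return value + advantages - advantages.mean(axis=-1, keepdims=True)
```

Subtracting the mean advantage makes the decomposition unique: adding a constant to V and subtracting it from every A would otherwise leave Q unchanged.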
1610.01733
2528501250
Exploration in an unknown environment is a core functionality for mobile robots. Learning-based exploration methods, including convolutional neural networks, provide excellent strategies without human-designed logic for feature extraction. However, conventional supervised learning algorithms inevitably require a great deal of effort for labeling datasets, and scenes not included in the training set mostly go unrecognized. We propose a deep reinforcement learning method for the exploration of mobile robots in an indoor environment using only the depth information from an RGB-D sensor. Based on the Deep Q-Network framework, the raw depth image is taken as the only input to estimate the Q values corresponding to all moving commands. The training of the network weights is end-to-end. In arbitrarily constructed simulation environments, we show that the robot can quickly adapt to unfamiliar scenes without any man-made labeling. Moreover, through analysis of the receptive fields of the feature representations, we find that deep reinforcement learning motivates the convolutional networks to estimate the traversability of the scenes. The test results are compared with exploration strategies based on deep learning or reinforcement learning alone. Even though it is trained only in the simulated environment, experimental results in a real-world environment demonstrate that the cognitive ability of the robot controller is dramatically improved compared with the supervised method. We believe this is the first time that raw sensor information has been used to build a cognitive exploration strategy for mobile robots through end-to-end deep reinforcement learning.
For robotics control, deep reinforcement learning has also accomplished various simulated control tasks. In the continuous control domain @cite_8 , the same model-free algorithm robustly solved more than 20 simulated physics tasks, with control policies learned directly from raw pixel inputs. Considering the complexity of control problems, a model-based reinforcement learning algorithm was shown to be able to accelerate the learning procedure @cite_24 , so that the deep reinforcement learning framework can handle more challenging problems.
{ "cite_N": [ "@cite_24", "@cite_8" ], "mid": [ "2950471160", "2173248099" ], "abstract": [ "Model-free reinforcement learning has been successfully applied to a range of challenging problems, and has recently been extended to handle large neural network policies and value functions. However, the sample complexity of model-free algorithms, particularly when using high-dimensional function approximators, tends to limit their applicability to physical systems. In this paper, we explore algorithms and representations to reduce the sample complexity of deep reinforcement learning for continuous control tasks. We propose two complementary techniques for improving the efficiency of such algorithms. First, we derive a continuous variant of the Q-learning algorithm, which we call normalized adantage functions (NAF), as an alternative to the more commonly used policy gradient and actor-critic methods. NAF representation allows us to apply Q-learning with experience replay to continuous tasks, and substantially improves performance on a set of simulated robotic control tasks. To further improve the efficiency of our approach, we explore the use of learned models for accelerating model-free reinforcement learning. We show that iteratively refitted local linear models are especially effective for this, and demonstrate substantially faster learning on domains where such models are applicable.", "We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. 
Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs." ] }
1610.01733
2528501250
Exploration in an unknown environment is a core functionality for mobile robots. Learning-based exploration methods, including convolutional neural networks, provide excellent strategies without human-designed logic for feature extraction. However, conventional supervised learning algorithms inevitably require a great deal of effort for labeling datasets, and scenes not included in the training set mostly go unrecognized. We propose a deep reinforcement learning method for the exploration of mobile robots in an indoor environment using only the depth information from an RGB-D sensor. Based on the Deep Q-Network framework, the raw depth image is taken as the only input to estimate the Q values corresponding to all moving commands. The training of the network weights is end-to-end. In arbitrarily constructed simulation environments, we show that the robot can quickly adapt to unfamiliar scenes without any man-made labeling. Moreover, through analysis of the receptive fields of the feature representations, we find that deep reinforcement learning motivates the convolutional networks to estimate the traversability of the scenes. The test results are compared with exploration strategies based on deep learning or reinforcement learning alone. Even though it is trained only in the simulated environment, experimental results in a real-world environment demonstrate that the cognitive ability of the robot controller is dramatically improved compared with the supervised method. We believe this is the first time that raw sensor information has been used to build a cognitive exploration strategy for mobile robots through end-to-end deep reinforcement learning.
Whether in Atari games or in the control tasks mentioned above, deep reinforcement learning has consistently shown its advantages in simulated environments. However, it is rarely used to address robotics problems in real-world environments. As shown in @cite_21 , deep-reinforcement-learning-based motion control of a robot only worked with simulated synthetic images, not raw images taken by real cameras. Thus, we consider the feasibility of deep reinforcement learning in real-world tasks to be the primary contribution of our work.
{ "cite_N": [ "@cite_21" ], "mid": [ "2121615981" ], "abstract": [ "This paper introduces a machine learning based system for controlling a robotic manipulator with visual perception only. The capability to autonomously learn robot controllers solely from raw-pixel images and without any prior knowledge of configuration is shown for the first time. We build upon the success of recent deep reinforcement learning and develop a system for learning target reaching with a three-joint robot manipulator using external visual observation. A Deep Q Network (DQN) was demonstrated to perform target reaching after training in simulation. Transferring the network to real hardware and real observation in a naive approach failed, but experiments show that the network works when replacing camera images with synthetic images." ] }
1610.01511
2951355983
With current efforts to design Future Internet Architectures (FIAs), the evaluation and comparison of different proposals is an interesting research challenge. Previously, metrics such as bandwidth or latency have commonly been used to compare FIAs to IP networks. We suggest the use of power consumption as a metric to compare FIAs. While low power consumption is an important goal in its own right (as lower energy use translates to smaller environmental impact as well as lower operating costs), power consumption can also serve as a proxy for other metrics such as bandwidth and processor load. Lacking power consumption statistics about either commodity FIA routers or widely deployed FIA testbeds, we propose models for power consumption of FIA routers. Based on our models, we simulate scenarios for measuring power consumption of content delivery in different FIAs. Specifically, we address two questions: 1) which of the proposed FIA candidates achieves the lowest energy footprint; and 2) which set of design choices yields a power-efficient network architecture? Although the lack of real-world data makes numerous assumptions necessary for our analysis, we explore the uncertainty of our calculations through sensitivity analysis of input parameters.
Previous work mainly focuses on the evaluation of content-centric networks (CCNs) or on the evaluation of specific CCN subsystems. Fricker et al. @cite_10 evaluate the caching performance of CCNs under the influence of different network traffic compositions. Muscariello, Carofiglio and Gallo @cite_12 evaluate the performance of bandwidth and shared storage in CCN designs. Fayazbakhsh et al. @cite_14 use network latency, network congestion and origin server load to compare different caching strategies in CCNs. Perino and Varvello @cite_31 use router throughput, monetary costs, and energy efficiency to estimate the feasibility of deploying CCN routers. The main objective of our work is not limited to exploring the power consumption of each individual FIA. Rather, we concentrate on presenting an evaluation framework to explore the power implications of the adopted FIA design principles. The results obtained herein can help guide designers toward power-efficient network designs.
{ "cite_N": [ "@cite_31", "@cite_14", "@cite_10", "@cite_12" ], "mid": [ "2130124274", "2156104365", "", "2117053637" ], "abstract": [ "Content-Centric Networking (CCN) is a novel networking paradigm centered around content distribution rather than host-to-host connectivity. This change from host-centric to content-centric has several attractive advantages, such as network load reduction, low dissemination latency, and energy efficiency. However, it is unclear whether today's technology is ready for the CCN (r)evolution. The major contribution of this paper is a systematic evaluation of the suitability of existing software and hardware components in today's routers for the support of CCN. Our main conclusion is that a CCN deployment is feasible at a Content Distribution Network (CDN) and ISP scale, whereas today's technology is not yet ready to support an Internet scale deployment.", "Information-Centric Networking (ICN) has seen a significant resurgence in recent years. ICN promises benefits to users and service providers along several dimensions (e.g., performance, security, and mobility). These benefits, however, come at a non-trivial cost as many ICN proposals envision adding significant complexity to the network by having routers serve as content caches and support nearest-replica routing. This paper is driven by the simple question of whether this additional complexity is justified and if we can achieve these benefits in an incrementally deployable fashion. To this end, we use trace-driven simulations to analyze the quantitative benefits attributed to ICN (e.g., lower latency and congestion). Somewhat surprisingly, we find that pervasive caching and nearest-replica routing are not fundamentally necessary---most of the performance benefits can be achieved with simpler caching architectures. We also discuss how the qualitative benefits of ICN (e.g., security, mobility) can be achieved without any changes to the network. 
Building on these insights, we present a proof-of-concept design of an incrementally deployable ICN architecture.", "", "Internet usage has dramatically evolved towards content dissemination and retrieval, whilst the underlying infrastructure remains tied up to hosts interconnection. Information centric networking (ICN) proposals have recently emerged to rethink Internet foundations and design a natively content-centric network environment. Important features of such networks are the availability of built-in network storage and of receiver-driven chunk-level transport, whose interaction significantly impacts overall system and user performance. In the paper, we provide an analytical characterization of statistical bandwidth and storage sharing, under fairly general assumption on total demand, topology, content popularity and limited network resources. A closed-form expression for average content delivery time is derived and its accuracy confirmed by event-driven simulations. Finally, we present some applications of our model, leveraging on explicit formulae for the optimal dimensioning and localization of storage resources." ] }
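The row above models router power consumption without giving a concrete model form. A common simple choice in network energy studies is a linear model: an idle floor plus per-packet and per-byte terms. The sketch below uses that assumed form; every constant is illustrative, not a measurement from the source.

```python
def router_power_watts(packet_rate, byte_rate,
                       p_idle=150.0,   # baseline chassis draw in watts (assumed)
                       e_pkt=2e-6,     # per-packet processing energy in joules (assumed)
                       e_byte=1e-9):   # per-byte forwarding energy in joules (assumed)
    """Instantaneous router power under a linear load model:
    P = P_idle + E_pkt * packet_rate + E_byte * byte_rate.
    """
    return p_idle + e_pkt * packet_rate + e_byte * byte_rate

# 1 Mpps of 1000-byte packets:
p = router_power_watts(1e6, 1e9)  # -> 153.0 W (150 idle + 2 per-packet + 1 per-byte)
```

Because the load-dependent terms can stand in for bandwidth and processor load, sensitivity analysis over `e_pkt` and `e_byte` is what makes such a model usable despite the missing real-world data.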
1610.01394
2529709694
We describe an end-to-end framework for learning parameters of the min-cost flow multi-target tracking problem with quadratic trajectory interactions, including suppression of overlapping tracks and contextual cues about co-occurrence of different objects. Our approach utilizes structured prediction with a tracking-specific loss function to learn the complete set of model parameters. In this learning framework, we evaluate two different approaches to finding an optimal set of tracks under a quadratic model objective, one based on a linear program (LP) relaxation and the other based on novel greedy variants of dynamic programming that handle pairwise interactions. We find the greedy algorithms achieve almost equivalent accuracy to the LP relaxation while being up to 10 @math faster than a commercial LP solver. We evaluate trained models on three challenging benchmarks. Surprisingly, we find that with proper parameter learning, our simple data association model without explicit appearance/motion reasoning is able to achieve comparable or better accuracy than many state-of-the-art methods that use far more complex motion features or appearance affinity metric learning.
Multi-target tracking problems have been tackled in a number of different ways. One approach is to first group detections into candidate tracklets and then perform scoring and association of these tracklets ( @cite_4 @cite_7 @cite_1 ). Compared to individual detections, tracklets allow for evaluating much richer trajectory and appearance models while maintaining some benefits of purely combinatorial grouping. However, since the problem is at least two-layered (tracklet generation and tracklet association), these models are difficult to reason about mathematically and typically lack guarantees of optimality. Furthermore, tracklet generation can only be done offline (or with substantial latency), and thus approaches that rely on tracklets are inherently limited in circumstances where online tracking is desired.
{ "cite_N": [ "@cite_1", "@cite_4", "@cite_7" ], "mid": [ "2053744956", "2035153336", "1966136723" ], "abstract": [ "This paper presents a novel introduction of online target-specific metric learning in track fragment (tracklet) association by network flow optimization for long-term multi-person tracking. Different from other network flow formulation, each node in our network represents a tracklet, and each edge represents the likelihood of neighboring tracklets belonging to the same trajectory as measured by our proposed affinity score. In our method, target-specific similarity metrics are learned, which give rise to the appearance-based models used in the tracklet affinity estimation. Trajectory-based tracklets are refined by using the learned metrics to account for appearance consistency and to identify reliable tracklets. The metrics are then re-learned using reliable tracklets for computing tracklet affinity scores. Long-term trajectories are then obtained through network flow optimization. Occlusions and missed detections are handled by a trajectory completion step. Our method is effective for long-term tracking even when the targets are spatially close or completely occluded by others. We validate our proposed framework on several public datasets and show that it outperforms several state of art methods.", "We introduce an online learning approach for multitarget tracking. Detection responses are gradually associated into tracklets in multiple levels to produce final tracks. Unlike most previous approaches which only focus on producing discriminative motion and appearance models for all targets, we further consider discriminative features for distinguishing difficult pairs of targets. The tracking problem is formulated using an online learned CRF model, and is transformed into an energy minimization problem. 
The energy functions include a set of unary functions that are based on motion and appearance models for discriminating all targets, as well as a set of pairwise functions that are based on models for differentiating corresponding pairs of tracklets. The online CRF approach is more powerful at distinguishing spatially close targets with similar appearances, as well as in dealing with camera motions. An efficient algorithm is introduced for finding an association with low energy cost. We evaluate our approach on three public data sets, and show significant improvements compared with several state-of-art methods.", "This paper addresses the problem of simultaneous tracking of multiple targets in a video. We first apply object detectors to every video frame. Pairs of detection responses from every two consecutive frames are then used to build a graph of tracklets. The graph helps transitively link the best matching tracklets that do not violate hard and soft contextual constraints between the resulting tracks. We prove that this data association problem can be formulated as finding the maximum-weight independent set (MWIS) of the graph. We present a new, polynomial-time MWIS algorithm, and prove that it converges to an optimum. Similarity and contextual constraints between object detections, used for data association, are learned online from object appearance and motion properties. Long-term occlusions are addressed by iteratively repeating MWIS to hierarchically merge smaller tracks into longer ones. Our results demonstrate advantages of simultaneously accounting for soft and hard contextual constraints in multitarget tracking. We outperform the state of the art on the benchmark datasets." ] }
1610.01394
2529709694
We describe an end-to-end framework for learning parameters of the min-cost flow multi-target tracking problem with quadratic trajectory interactions, including suppression of overlapping tracks and contextual cues about co-occurrence of different objects. Our approach utilizes structured prediction with a tracking-specific loss function to learn the complete set of model parameters. In this learning framework, we evaluate two different approaches to finding an optimal set of tracks under a quadratic model objective, one based on a linear program (LP) relaxation and the other based on novel greedy variants of dynamic programming that handle pairwise interactions. We find the greedy algorithms achieve almost equivalent accuracy to the LP relaxation while being up to 10 @math faster than a commercial LP solver. We evaluate trained models on three challenging benchmarks. Surprisingly, we find that with proper parameter learning, our simple data association model without explicit appearance/motion reasoning is able to achieve comparable or better accuracy than many state-of-the-art methods that use far more complex motion features or appearance affinity metric learning.
An alternative to tracklets is to attempt to include higher-order constraints directly in a combinatorial framework ( @cite_3 @cite_8 ). Such methods often operate directly over raw detections, either in an online or offline fashion. Offline formulations benefit from having a single well-defined objective function or likelihood, and thus can give either a globally optimal solution ( @cite_20 ) or an approximate solution with a certificate of (sub)optimality ( @cite_8 ). @cite_24 propose a subgraph multi-cut approach which differs from traditional "path-finding" algorithms such as @cite_20 , @cite_32 and @cite_3 . Although designed to work directly on raw detections, in practice @cite_24 use tracklets to reduce the dimension of the inference problem. Such is the trade-off between finding globally optimal solutions and using rich tracking features.
{ "cite_N": [ "@cite_8", "@cite_32", "@cite_3", "@cite_24", "@cite_20" ], "mid": [ "1908451628", "2016135469", "2127084114", "2007352603", "2111644456" ], "abstract": [ "Multi-object tracking has been recently approached with the min-cost network flow optimization techniques. Such methods simultaneously resolve multiple object tracks in a video and enable modeling of dependencies among tracks. Min-cost network flow methods also fit well within the “tracking-by-detection” paradigm where object trajectories are obtained by connecting per-frame outputs of an object detector. Object detectors, however, often fail due to occlusions and clutter in the video. To cope with such situations, we propose to add pairwise costs to the min-cost network flow framework. While integer solutions to such a problem become NP-hard, we design a convex relaxation solution with an efficient rounding heuristic which empirically gives certificates of small suboptimality. We evaluate two particular types of pairwise costs and demonstrate improvements over recent tracking methods in real-world video sequences.", "We analyze the computational problem of multi-object tracking in video sequences. We formulate the problem using a cost function that requires estimating the number of tracks, as well as their birth and death states. We show that the global solution can be obtained with a greedy algorithm that sequentially instantiates tracks using shortest path computations on a flow network. Greedy algorithms allow one to embed pre-processing steps, such as nonmax suppression, within the tracking algorithm. Furthermore, we give a near-optimal algorithm based on dynamic programming which runs in time linear in the number of objects and linear in the sequence length. Our algorithms are fast, simple, and scalable, allowing us to process dense input data. 
This results in state-of-the-art performance.", "We propose a method for global multi-target tracking that can incorporate higher-order track smoothness constraints such as constant velocity. Our problem formulation readily lends itself to path estimation in a trellis graph, but unlike previous methods, each node in our network represents a candidate pair of matching observations between consecutive frames. Extra constraints on binary flow variables in the graph result in a problem that can no longer be solved by min-cost network flow. We therefore propose an iterative solution method that relaxes these extra constraints using Lagrangian relaxation, resulting in a series of problems that ARE solvable by min-cost flow, and that progressively improve towards a high-quality solution to our original optimization problem. We present experimental results showing that our method outperforms the standard network-flow formulation as well as other recent algorithms that attempt to incorporate higher-order smoothness constraints.", "Tracking multiple targets in a video, based on a finite set of detection hypotheses, is a persistent problem in computer vision. A common strategy for tracking is to first select hypotheses spatially and then to link these over time while maintaining disjoint path constraints [14, 15, 24]. In crowded scenes multiple hypotheses will often be similar to each other making selection of optimal links an unnecessary hard optimization problem due to the sequential treatment of space and time. Embracing this observation, we propose to link and cluster plausible detections jointly across space and time. Specifically, we state multi-target tracking as a Minimum Cost Subgraph Multicut Problem. Evidence about pairs of detection hypotheses is incorporated whether the detections are in the same frame, neighboring frames or distant frames. This facilitates long-range re-identification and within-frame clustering. 
Results for published benchmark sequences demonstrate the superiority of this approach.", "We propose a network flow based optimization method for data association needed for multiple object tracking. The maximum-a-posteriori (MAP) data association problem is mapped into a cost-flow network with a non-overlap constraint on trajectories. The optimal data association is found by a min-cost flow algorithm in the network. The network is augmented to include an explicit occlusion model(EOM) to track with long-term inter-object occlusions. A solution to the EOM-based network is found by an iterative approach built upon the original algorithm. Initialization and termination of trajectories and potential false observations are modeled by the formulation intrinsically. The method is efficient and does not require hypotheses pruning. Performance is compared with previous results on two public pedestrian datasets to show its improvement." ] }
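The "path-finding" formulations in the row above (e.g. the greedy shortest-path method of @cite_32) extract tracks by repeated minimum-cost path computations on a detection trellis. The sketch below implements that greedy idea in plain Python, without the pairwise interaction terms; the unary, birth, and death costs are illustrative assumptions, not values from any of the cited papers.

```python
def greedy_tracks(frames, trans_cost, det_cost=-1.0, birth=0.3, death=0.3):
    """Greedily extract minimum-cost tracks from a detection trellis.

    frames: list (over time) of lists of detections (any objects).
    trans_cost(a, b): cost of linking detection a to detection b in the
    next frame. Each used detection contributes det_cost (negative, i.e.
    a reward for explaining evidence); birth/death penalize short tracks.
    Repeats a shortest-path sweep until no negative-cost track remains.
    """
    used = [[False] * len(f) for f in frames]
    tracks = []
    while True:
        best = {}  # (frame, index) -> (cost of best track ending here, backpointer)
        for t, dets in enumerate(frames):
            for i, d in enumerate(dets):
                if used[t][i]:
                    continue
                cost, back = birth + det_cost, None  # option: start a track here
                if t > 0:
                    for j, p in enumerate(frames[t - 1]):
                        if (t - 1, j) not in best:  # skips already-claimed detections
                            continue
                        c = best[(t - 1, j)][0] + trans_cost(p, d) + det_cost
                        if c < cost:
                            cost, back = c, (t - 1, j)
                best[(t, i)] = (cost, back)
        if not best:
            break
        end, (cost, _) = min(best.items(), key=lambda kv: kv[1][0])
        if cost + death >= 0:
            break  # remaining candidate tracks would not lower the objective
        track, node = [], end
        while node is not None:  # trace back and claim the detections
            track.append(node)
            used[node[0]][node[1]] = True
            node = best[node][1]
        tracks.append(track[::-1])
    return tracks

# Two well-separated targets across two frames (detections given as positions):
tracks = greedy_tracks([[0.0, 10.0], [0.0, 10.0]], lambda a, b: 0.1 * abs(a - b))
# -> [[(0, 0), (1, 0)], [(0, 1), (1, 1)]] : one (frame, index) list per target
```

Removing claimed detections and re-running the sweep is what makes this greedy rather than jointly optimal; the LP relaxation discussed above recovers the joint problem at higher cost.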
1610.01394
2529709694
We describe an end-to-end framework for learning parameters of the min-cost flow multi-target tracking problem with quadratic trajectory interactions, including suppression of overlapping tracks and contextual cues about co-occurrence of different objects. Our approach utilizes structured prediction with a tracking-specific loss function to learn the complete set of model parameters. In this learning framework, we evaluate two different approaches to finding an optimal set of tracks under a quadratic model objective, one based on a linear program (LP) relaxation and the other based on novel greedy variants of dynamic programming that handle pairwise interactions. We find the greedy algorithms achieve almost equivalent accuracy to the LP relaxation while being up to 10 @math faster than a commercial LP solver. We evaluate trained models on three challenging benchmarks. Surprisingly, we find that with proper parameter learning, our simple data association model without explicit appearance/motion reasoning is able to achieve comparable or better accuracy than many state-of-the-art methods that use far more complex motion features or appearance affinity metric learning.
@cite_2 attempt to solve the data association and trajectory smoothing problems simultaneously, which results in a problem with varying dimensionality and difficult approximate inference. @cite_25 avoid this varying-dimensionality problem by integrating out trajectory-related variables and using Markov chain Monte Carlo sampling to estimate the marginal likelihoods for data association and trajectory estimation. @cite_5 propose yet another way to avoid varying dimensionality: instead of explicitly enumerating the number of tracks, they assign a latent variable to each real detection track and conduct data association on these latent variables.
{ "cite_N": [ "@cite_5", "@cite_25", "@cite_2" ], "mid": [ "2118740913", "2112517951", "2104828970" ], "abstract": [ "We propose a novel parametrization of the data association problem for multi-target tracking. In our formulation, the number of targets is implicitly inferred together with the data association, effectively solving data association and model selection as a single inference problem. The novel formulation allows us to interpret data association and tracking as a single Switching Linear Dynamical System (SLDS). We compute an approximate posterior solution to this problem using a dynamic programming message passing technique. This inference-based approach allows us to incorporate richer probabilistic models into the tracking system. In particular, we incorporate inference over inliers outliers and track termination times into the system. We evaluate our approach on publicly available datasets and demonstrate results competitive with, and in some cases exceeding the state of the art.", "We develop a Bayesian modeling approach for tracking people in 3D from monocular video with unknown cameras. Modeling in 3D provides natural explanations for occlusions and smoothness discontinuities that result from projection, and allows priors on velocity and smoothness to be grounded in physical quantities: meters and seconds vs. pixels and frames. We pose the problem in the context of data association, in which observations are assigned to tracks. A correct application of Bayesian inference to multi-target tracking must address the fact that the model's dimension changes as tracks are added or removed, and thus, posterior densities of different hypotheses are not comparable. We address this by marginalizing out the trajectory parameters so the resulting posterior over data associations has constant dimension. This is made tractable by using (a) Gaussian process priors for smooth trajectories and (b) approximately Gaussian likelihood functions. 
Our approach provides a principled method for incorporating multiple sources of evidence, we present results using both optical flow and object detector outputs. Results are comparable to recent work on 3D tracking and, unlike others, our method requires no pre-calibrated cameras.", "The problem of multi-target tracking is comprised of two distinct, but tightly coupled challenges: (i) the naturally discrete problem of data association, i.e. assigning image observations to the appropriate target; (ii) the naturally continuous problem of trajectory estimation, i.e. recovering the trajectories of all targets. To go beyond simple greedy solutions for data association, recent approaches often perform multi-target tracking using discrete optimization. This has the disadvantage that trajectories need to be pre-computed or represented discretely, thus limiting accuracy. In this paper we instead formulate multi-target tracking as a discrete-continuous optimization problem that handles each aspect in its natural domain and allows leveraging powerful methods for multi-model fitting. Data association is performed using discrete optimization with label costs, yielding near optimality. Trajectory estimation is posed as a continuous fitting problem with a simple closed-form solution, which is used in turn to update the label costs. We demonstrate the accuracy and robustness of our approach with state-of-the-art performance on several standard datasets." ] }
1610.01394
2529709694
We describe an end-to-end framework for learning parameters of the min-cost flow multi-target tracking problem with quadratic trajectory interactions, including suppression of overlapping tracks and contextual cues about co-occurrence of different objects. Our approach utilizes structured prediction with a tracking-specific loss function to learn the complete set of model parameters. In this learning framework, we evaluate two different approaches to finding an optimal set of tracks under a quadratic model objective, one based on a linear program (LP) relaxation and the other based on novel greedy variants of dynamic programming that handle pairwise interactions. We find the greedy algorithms achieve almost equivalent accuracy to the LP relaxation while being up to 10 @math faster than a commercial LP solver. We evaluate trained models on three challenging benchmarks. Surprisingly, we find that with proper parameter learning, our simple data association model without explicit appearance/motion reasoning is able to achieve comparable or better accuracy than many state-of-the-art methods that use far more complex motion features or appearance affinity metric learning.
Online tracking algorithms take advantage of previously identified track associations to build rich feature models over past trajectories that facilitate data association at the current frame. The capability to perform streaming data association on incoming video frames without seeing the entire video is a desirable property for real-time applications such as autonomous driving. However, when the whole video is available, online tracking may make errors that are avoidable in offline algorithms that access future frames to resolve ambiguities. @cite_0 revisit the legacy Multiple Hypothesis Tracking method and introduce a novel online recursive appearance filter. @cite_39 propose a novel flow descriptor designed specifically for multi-target tracking and introduce a delay period to allow correction of possible errors made in previous frames (hence the name "near online"), which yields state-of-the-art accuracy. @cite_6 use relatively simple Hungarian matching with the novel extension of choosing either "simple" or "complex" features depending on the difficulty of the inference problem at each frame.
{ "cite_N": [ "@cite_0", "@cite_6", "@cite_39" ], "mid": [ "2237765446", "2219554887", "1531192956" ], "abstract": [ "This paper revisits the classical multiple hypotheses tracking (MHT) algorithm in a tracking-by-detection framework. The success of MHT largely depends on the ability to maintain a small list of potential hypotheses, which can be facilitated with the accurate object detectors that are currently available. We demonstrate that a classical MHT implementation from the 90's can come surprisingly close to the performance of state-of-the-art methods on standard benchmark datasets. In order to further utilize the strength of MHT in exploiting higher-order information, we introduce a method for training online appearance models for each track hypothesis. We show that appearance models can be learned efficiently via a regularized least squares framework, requiring only a few extra operations for each hypothesis branch. We obtain state-of-the-art results on popular tracking-by-detection datasets such as PETS and the recent MOT challenge.", "Online Multiple Target Tracking (MTT) is often addressed within the tracking-by-detection paradigm. Detections are previously extracted independently in each frame and then objects trajectories are built by maximizing specifically designed coherence functions. Nevertheless, ambiguities arise in presence of occlusions or detection errors. In this paper we claim that the ambiguities in tracking could be solved by a selective use of the features, by working with more reliable features if possible and exploiting a deeper representation of the target only if necessary. To this end, we propose an online divide and conquer tracker for static camera scenes, which partitions the assignment problem in local subproblems and solves them by selectively choosing and combining the best features. The complete framework is cast as a structural learning task that unifies these phases and learns tracker parameters from examples. 
Experiments on two different datasets highlights a significant improvement of tracking performances (MOTA +10 ) over the state of the art.", "In this paper, we tackle two key aspects of multiple target tracking problem: 1) designing an accurate affinity measure to associate detections and 2) implementing an efficient and accurate (near) online multiple target tracking algorithm. As for the first contribution, we introduce a novel Aggregated Local Flow Descriptor (ALFD) that encodes the relative motion pattern between a pair of temporally distant detections using long term interest point trajectories (IPTs). Leveraging on the IPTs, the ALFD provides a robust affinity measure for estimating the likelihood of matching detections regardless of the application scenarios. As for another contribution, we present a Near-Online Multi-target Tracking (NOMT) algorithm. The tracking problem is formulated as a data-association between targets and detections in a temporal window, that is performed repeatedly at every frame. While being efficient, NOMT achieves robustness via integrating multiple cues including ALFD metric, target dynamics, appearance similarity, and long term trajectory regularization into the model. Our ablative analysis verifies the superiority of the ALFD metric over the other conventional affinity metrics. We run a comprehensive experimental evaluation on two challenging tracking datasets, KITTI [16] and MOT [2] datasets. The NOMT method combined with ALFD metric achieves the best accuracy in both datasets with significant margins (about 10 higher MOTA) over the state-of-the-art." ] }
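The Hungarian matching mentioned for @cite_6 in the row above solves a minimum-cost one-to-one assignment between existing tracks and new detections each frame. As a self-contained illustration, the sketch below finds the same optimum by exhaustive search over permutations; this is only viable for the tiny cost matrices of a sketch, and a real tracker would use an O(n^3) solver such as scipy.optimize.linear_sum_assignment.

```python
from itertools import permutations

def assignment_bruteforce(cost):
    """Minimum-cost one-to-one assignment by exhaustive search.

    cost[i][j] is the cost of matching track i to detection j (square
    matrix). Exponential in n, so only a stand-in for the Hungarian
    algorithm; returns (total cost, tuple mapping track i -> detection).
    """
    n = len(cost)
    best_total, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_total:
            best_total, best_perm = total, perm
    return best_total, best_perm

# e.g. pairwise distances between 3 tracks and 3 detections:
total, match = assignment_bruteforce([[4, 1, 3], [2, 0, 5], [3, 2, 2]])
# -> total 5, match (1, 0, 2): track 0 takes detection 1, track 1 takes 0, track 2 takes 2
```

In a full online tracker the cost entries would come from the "simple" or "complex" features chosen per frame, with gating to leave implausible pairs unmatched.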
1610.01394
2529709694
We describe an end-to-end framework for learning parameters of the min-cost flow multi-target tracking problem with quadratic trajectory interactions, including suppression of overlapping tracks and contextual cues about co-occurrence of different objects. Our approach utilizes structured prediction with a tracking-specific loss function to learn the complete set of model parameters. In this learning framework, we evaluate two different approaches to finding an optimal set of tracks under a quadratic model objective, one based on a linear program (LP) relaxation and the other based on novel greedy variants of dynamic programming that handle pairwise interactions. We find the greedy algorithms achieve almost equivalent accuracy to the LP relaxation while being up to 10 @math faster than a commercial LP solver. We evaluate trained models on three challenging benchmarks. Surprisingly, we find that with proper parameter learning, our simple data association model without explicit appearance/motion reasoning is able to achieve comparable or better accuracy than many state-of-the-art methods that use far more complex motion features or appearance affinity metric learning.
For any multi-target tracking approach, there are a large number of associated model parameters which must be accurately tuned to achieve high performance. This is particularly true for (undirected) combinatorial models based on, e.g., network flow, where parameters have often been set empirically by hand or learned using piecewise training. @cite_6 and @cite_15 both use structured SVM to learn the parameters of their online data association models. @cite_16 use structured SVM to learn parameters for offline multi-target tracking with quadratic interactions for the purpose of activity recognition. Our work differs in that it focuses on generic activity-independent tracking and a global end-to-end formulation of the learning problem. In particular, we develop a novel loss function that penalizes false transitions and id-errors based on the MOTA ( @cite_48 ) tracking score.
{ "cite_N": [ "@cite_48", "@cite_15", "@cite_16", "@cite_6" ], "mid": [ "2124781496", "1921895346", "100367037", "2219554887" ], "abstract": [ "Simultaneous tracking of multiple persons in real-world environments is an active research field and several approaches have been proposed, based on a variety of features and algorithms. Recently, there has been a growing interest in organizing systematic evaluations to compare the various techniques. Unfortunately, the lack of common metrics for measuring the performance of multiple object trackers still makes it hard to compare their results. In this work, we introduce two intuitive and general metrics to allow for objective comparison of tracker characteristics, focusing on their precision in estimating object locations, their accuracy in recognizing object configurations and their ability to consistently label objects over time. These metrics have been extensively used in two large-scale international evaluations, the 2006 and 2007 CLEAR evaluations, to measure and compare the performance of multiple object trackers for a wide variety of tracking tasks. Selected performance results are presented and the advantages and drawbacks of the presented metrics are discussed based on the experience gained during the evaluations.", "In this paper we show that multiple object tracking (MOT) can be formulated in a framework, where the detection and data-association are performed simultaneously. Our method allows us to overcome the confinements of data association based MOT approaches; where the performance is dependent on the object detection results provided at input level. At the core of our method lies structured learning which learns a model for each target and infers the best location of all targets simultaneously in a video clip. 
The inference of our structured learning is done through a new Target Identity-aware Network Flow (TINF), where each node in the network encodes the probability of each target identity belonging to that node. The proposed Lagrangian relaxation optimization finds the high quality solution to the network. During optimization a soft spatial constraint is enforced between the nodes of the graph which helps reducing the ambiguity caused by nearby targets with similar appearance in crowded scenarios. We show that automatically detecting and tracking targets in a single framework can help resolve the ambiguities due to frequent occlusion and heavy articulation of targets. Our experiments involve challenging yet distinct datasets and show that our method can achieve results better than the state-of-art.", "We present a coherent, discriminative framework for simultaneously tracking multiple people and estimating their collective activities. Instead of treating the two problems separately, our model is grounded in the intuition that a strong correlation exists between a person's motion, their activity, and the motion and activities of other nearby people. Instead of directly linking the solutions to these two problems, we introduce a hierarchy of activity types that creates a natural progression that leads from a specific person's motion to the activity of the group as a whole. Our model is capable of jointly tracking multiple people, recognizing individual activities (atomic activities), the interactions between pairs of people (interaction activities), and finally the behavior of groups of people (collective activities). We also propose an algorithm for solving this otherwise intractable joint inference problem by combining belief propagation with a version of the branch and bound algorithm equipped with integer programming. 
Experimental results on challenging video datasets demonstrate our theoretical claims and indicate that our model achieves the best collective activity classification results to date.", "Online Multiple Target Tracking (MTT) is often addressed within the tracking-by-detection paradigm. Detections are previously extracted independently in each frame and then objects trajectories are built by maximizing specifically designed coherence functions. Nevertheless, ambiguities arise in presence of occlusions or detection errors. In this paper we claim that the ambiguities in tracking could be solved by a selective use of the features, by working with more reliable features if possible and exploiting a deeper representation of the target only if necessary. To this end, we propose an online divide and conquer tracker for static camera scenes, which partitions the assignment problem in local subproblems and solves them by selectively choosing and combining the best features. The complete framework is cast as a structural learning task that unifies these phases and learns tracker parameters from examples. Experiments on two different datasets highlights a significant improvement of tracking performances (MOTA +10 ) over the state of the art." ] }
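The MOTA score ( @cite_48 ) referenced in the row above reduces per-frame counting errors to a single accuracy number. A minimal sketch of the standard CLEAR MOT formula (the helper name is ours):

```python
def mota(num_gt, false_neg, false_pos, id_switches):
    """CLEAR MOT accuracy: 1 - (sum FN + sum FP + sum IDSW) / (sum GT).

    Each argument is a per-frame list of counts; num_gt[t] is the number
    of ground-truth objects present in frame t. A perfect tracker scores
    1.0; heavily erring trackers can score below 0.
    """
    errors = sum(false_neg) + sum(false_pos) + sum(id_switches)
    return 1.0 - errors / sum(num_gt)

# 20 ground-truth objects over two frames, 3 errors in total:
score = mota([10, 10], [1, 0], [0, 1], [0, 1])  # -> 0.85
```

Because identity switches enter the numerator alongside misses and false alarms, a loss function built on MOTA naturally penalizes exactly the false transitions and id-errors described above.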