| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1903.12063 | 2924636932 | We present a 3-step registration pipeline for differently stained histological serial sections that consists of 1) a robust pre-alignment, 2) a parametric registration computed on coarse resolution images, and 3) an accurate nonlinear registration. In all three steps the NGF distance measure is minimized with respect to an increasingly flexible transformation. We apply the method in the ANHIR image registration challenge and evaluate its performance on the training data. The presented method is robust (error reduction in 99.6% of the cases), fast (runtime < 4 seconds) and accurate (median relative target registration error 0.0019). | The underlying variational image registration framework of this work has been described in @cite_5 @cite_10 and its application to histological images was first described in @cite_11 in 2006. A general issue has been the handling of large images and the associated computational complexity and runtimes. At that time, the elastic registration of two images from slices of a human brain with @math pixels took about 100 minutes on a workstation and 3 minutes on a cluster computer. Later, a faster implementation for regular workstations reducing memory read and write operations was proposed in @cite_7 in 2013. The authors report a registration time of 104 seconds for a pair of images from the DIR-Lab 4DCT dataset (approx. @math voxels). Additional optimizations, including the exploitation of special instruction sets of modern CPUs, have recently been described in @cite_12, reducing the registration time for two @math images to 19 seconds. The present work builds on top of these implementations. | {
"cite_N": [
"@cite_7",
"@cite_5",
"@cite_10",
"@cite_12",
"@cite_11"
],
"mid": [
"1965820798",
"2461984154",
"564060940",
"2962889862",
"1993487882"
],
"abstract": [
"Lung registration in thoracic CT scans has received much attention in the medical imaging community. Possible applications range from follow-up analysis, motion correction for radiation therapy, monitoring of air flow and pulmonary function to lung elasticity analysis. In a clinical environment, runtime is always a critical issue, ruling out quite a few excellent registration approaches. In this paper, a highly efficient variational lung registration method based on minimizing the normalized gradient fields distance measure with curvature regularization is presented. The method ensures diffeomorphic deformations by an additional volume regularization. Supplemental user knowledge, like a segmentation of the lungs, may be incorporated as well. The accuracy of our method was evaluated on 40 test cases from clinical routine. In the EMPIRE10 lung registration challenge, our scheme ranks third, with respect to various validation criteria, out of 28 algorithms with an average landmark distance of 0.72 mm. The average runtime is about 1:50 min on a standard PC, making it by far the fastest approach of the top-ranking algorithms. Additionally, the ten publicly available DIR-Lab inhale-exhale scan pairs were registered to subvoxel accuracy at computation times of only 20 seconds. Our method thus combines very attractive runtimes with state-of-the-art accuracy in a unique way.",
"Image registration is central to many challenges in medical imaging and therefore it has a vast range of applications. The purpose of this note is to provide a unified but extremely flexible framework for image registration. This framework is based on a variational formulation of the registration problem. We discuss the framework as well as some of its most important building blocks. These include some of the most promising non-linear registration strategies used in today's medical imaging. The overall goal of image registration is to compute a deformation, such that a deformed version of an image becomes similar to a so-called reference image. Hence, the similarity measure is an important building block. Depending on the application at hand, it is inevitable to constrain the wanted deformation in an appropriate way. Thus, regularization is also a main building block. Finally, it is often desirable to incorporate higher level information about the expected deformation. We show how such constraints or information can easily be integrated in our general framework and discuss some examples. Moreover, the proposed general framework allows for a unified algorithmic treatment of the various building blocks.",
"Whenever images taken at different times, from different viewpoints, and/or by different sensors need to be compared, merged, or integrated, image registration is required. Registration, also known as alignment, fusion, or warping, is the process of transforming data into a common reference frame. This book provides an overview of state-of-the-art registration techniques from theory to practice, plus numerous exercises designed to enhance readers' understanding of the principles and mechanisms of the described techniques. It also provides, via a supplementary Web page, free access to FAIR.m, a package that is based on the MATLAB software environment, which enables readers to experiment with the proposed algorithms and explore the presented examples in more depth. Written from an interdisciplinary point of view, this book will appeal to mathematicians who want to learn more about image registration, medical imaging professionals who want to know more about and explore available imaging techniques, and computer scientists and engineers who want to understand the numerical schemes behind the techniques. The book is also appropriate for use as a course text at the advanced graduate level. Contents: FAIR Listings; FAIR Examples; List of Figures; List of Tables; Preface; Chapter 1: Introduction; Chapter 2: FAIR Concepts; Chapter 3: Image Interpolation; Chapter 4: Transforming Images by Parameterized Transformations; Chapter 5: Landmark-Based Registration; Chapter 6: Parametric Image Registration; Chapter 7: Distance Measures; Chapter 8: Regularization; Chapter 9: Nonparametric Image Registration; Chapter 10: Outlook; Bibliography; Symbols, Acronyms, Index.",
"We present a novel computational approach to fast and memory-efficient deformable image registration. In the variational registration model, the computation of the objective function derivatives is the computationally most expensive operation, both in terms of runtime and memory requirements. In order to target this bottleneck, we analyze the matrix structure of gradient and Hessian computations for the case of the normalized gradient fields distance measure and curvature regularization. Based on this analysis, we derive equivalent matrix-free closed-form expressions for derivative computations, eliminating the need for storing intermediate results and the costs of sparse matrix arithmetic. This has further benefits: (1) matrix computations can be fully parallelized, (2) memory complexity for derivative computation is reduced from linear to constant, and (3) overall computation times are substantially reduced. In comparison with an optimized matrix-based reference implementation, the CPU implementation ac...",
"The physical (microtomy), optical (microscopy), and radiologic (tomography) sectioning of biological objects and their digitization lead to stacks of images. Due to the sectioning process and disturbances, movement of objects during imaging, for example, adjacent images of the image stack are not optimally aligned to each other. Such mismatches have to be corrected automatically by suitable registration methods. Here, a whole brain of a Sprague Dawley rat was serially sectioned and stained, followed by digitizing the 20 µm thin histologic sections. We describe how to prepare the images for subsequent automatic intensity based registration. Different registration schemes are presented and their results compared to each other from an anatomical and mathematical perspective. In the first part we concentrate on rigid and affine linear methods and deal only with linear mismatches of the images. Digitized images of stained histologic sections often exhibit inhomogeneities of the gray level distribution coming from staining and/or sectioning variations. Therefore, a method is developed that is robust with respect to inhomogeneities and artifacts. Furthermore we combined this approach by minimizing a suitable distance measure for shear and rotation mismatches of foreground objects after applying the principal axes transform. As a consequence of our investigations, we must emphasize that the combination of a robust principal axes based registration in combination with optimizing translation, rotation and shearing errors gives rise to the best reconstruction results from the mathematical and anatomical viewpoint. Because the sectioning process introduces nonlinear deformations to the relatively thin histologic sections as well, an elastic registration has to be applied to correct these deformations. In the second part of the study a detailed description of the advances of an elastic registration after affine linear registration of the rat brain is given. We found quantitative evidence that affine linear registration is a suitable starting point for the alignment of histologic sections, but elastic registration must be performed to significantly improve the registration result. A strategy is presented that enables elastic registration of the affine-linearly preregistered rat brain sections and the first one hundred images of serial histologic sections through both occipital lobes of a human brain (6112 images). Additionally, we will describe how a parallel implementation of the elastic registration was realized. Finally, the computed force fields have been applied here for the first time to the morphometrized data of cells determined automatically by an image analytic framework."
]
} |
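The NGF (normalized gradient fields) distance minimized throughout the pipeline in the row above rewards locally parallel image gradients rather than matching intensities, which is what makes it usable across differently stained sections. As a minimal NumPy sketch of the edge-parameter variant used in the matrix-free implementations cited (the single `eps` parameter and the finite-difference discretization are illustrative assumptions, not the papers' exact scheme):

```python
import numpy as np

def ngf_distance(ref, tmp, eps=1e-2):
    """Normalized Gradient Fields distance between two 2-D images.

    Sums 1 - (<grad R, grad T> + eps^2)^2 / ((|grad R|^2 + eps^2)(|grad T|^2 + eps^2))
    over all pixels; the eps term damps the contribution of flat, noisy regions.
    Zero when gradients are everywhere parallel (e.g. identical images).
    """
    gr = np.gradient(ref.astype(float))  # [d/dy, d/dx]
    gt = np.gradient(tmp.astype(float))
    dot = gr[0] * gt[0] + gr[1] * gt[1] + eps**2
    nr2 = gr[0]**2 + gr[1]**2 + eps**2
    nt2 = gt[0]**2 + gt[1]**2 + eps**2
    return float(np.sum(1.0 - dot**2 / (nr2 * nt2)))
```

By a Cauchy-Schwarz argument on the eps-augmented gradient vectors, each per-pixel term lies in [0, 1], so the distance is bounded by the pixel count.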
1903.12239 | 2929499422 | Detection and segmentation of objects in overhead imagery is a challenging task. The variable density, random orientation, small size, and instance-to-instance heterogeneity of objects in overhead imagery call for approaches distinct from existing models designed for natural scene datasets. Though new overhead imagery datasets are being developed, they almost universally comprise a single view taken from directly overhead ("at nadir"), failing to address one critical variable: look angle. By contrast, views vary in real-world overhead imagery, particularly in dynamic scenarios such as natural disasters where first looks are often over 40 degrees off-nadir. This represents an important challenge to computer vision methods, as changing view angle adds distortions, alters resolution, and changes lighting. At present, the impact of these perturbations on algorithmic detection and segmentation of objects is untested. To address this problem, we introduce the SpaceNet Multi-View Overhead Imagery (MVOI) Dataset, an extension of the SpaceNet open source remote sensing dataset. MVOI comprises 27 unique looks from a broad range of viewing angles (-32 to 54 degrees). Each of these images covers the same geography and is annotated with 126,747 building footprint labels, enabling direct assessment of the impact of viewpoint perturbation on model performance. We benchmark multiple leading segmentation and object detection models on: (1) building detection, (2) generalization to unseen viewing angles and resolutions, and (3) sensitivity of building footprint extraction to changes in resolution. We find that segmentation and object detection models struggle to identify buildings in off-nadir imagery and generalize poorly to unseen views, presenting an important benchmark to explore the broadly relevant challenge of detecting small, heterogeneous target objects in visually dynamic contexts.
| Object detection and segmentation is a well-studied problem for natural scene images, but those objects are generally much larger and suffer minimally from distortions exacerbated in overhead imagery. Natural scene research is driven by datasets such as MS COCO @cite_7 and PASCAL VOC @cite_31 , but those datasets lack multiple views of each object. PASCAL3D @cite_29 , autonomous driving datasets such as KITTI @cite_43 , CityScapes @cite_11 , existing multi-view datasets @cite_18 @cite_13 , and tracking datasets such as MOT2017 @cite_42 or OBT @cite_32 contain different views, but they are confined to a narrow range of angles, lack sufficient heterogeneity to test generalization between views, and are restricted to natural scene images. Multiple viewpoints are found in 3D model datasets @cite_33 @cite_5 , but those are not photorealistic and lack the occlusion and visual distortion properties encountered with real imagery. | {
"cite_N": [
"@cite_18",
"@cite_33",
"@cite_7",
"@cite_29",
"@cite_42",
"@cite_32",
"@cite_43",
"@cite_5",
"@cite_31",
"@cite_13",
"@cite_11"
],
"mid": [
"2963488642",
"2253156915",
"1861492603",
"1991264156",
"2291627510",
"2089961441",
"2150066425",
"2175711684",
"2031489346",
"",
"2340897893"
],
"abstract": [
"We present an approach that uses a multi-camera system to train fine-grained detectors for keypoints that are prone to occlusion, such as the joints of a hand. We call this procedure multiview bootstrapping: first, an initial keypoint detector is used to produce noisy labels in multiple views of the hand. The noisy detections are then triangulated in 3D using multiview geometry or marked as outliers. Finally, the reprojected triangulations are used as new labeled training data to improve the detector. We repeat this process, generating more labeled data in each iteration. We derive a result analytically relating the minimum number of views to achieve target true and false positive rates for a given detector. The method is used to train a hand keypoint detector for single images. The resulting keypoint detector runs in realtime on RGB images and has accuracy comparable to methods that use depth sensors. The single view detector, triangulated over multiple views, enables 3D markerless hand motion capture with complex object interactions.",
"We have created a dataset of more than ten thousand 3D scans of real objects. To create the dataset, we recruited 70 operators, equipped them with consumer-grade mobile 3D scanning setups, and paid them to scan objects in their environments. The operators scanned objects of their choosing, outside the laboratory and without direct supervision by computer vision professionals. The result is a large and diverse collection of object scans: from shoes, mugs, and toys to grand pianos, construction vehicles, and large outdoor sculptures. We worked with an attorney to ensure that data acquisition did not violate privacy constraints. The acquired data was placed irrevocably in the public domain and is available freely at http://redwood-data.org/3dscan.",
"We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 object types that would be easily recognizable by a 4-year-old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.",
"3D object detection and pose estimation methods have become popular in recent years since they can handle ambiguities in 2D images and also provide a richer description for objects compared to 2D object detectors. However, most of the datasets for 3D recognition are limited to a small number of images per category or are captured in controlled environments. In this paper, we contribute the PASCAL3D+ dataset, which is a novel and challenging dataset for 3D object detection and pose estimation. PASCAL3D+ augments 12 rigid categories of the PASCAL VOC 2012 [4] with 3D annotations. Furthermore, more images are added for each category from ImageNet [3]. PASCAL3D+ images exhibit much more variability compared to the existing 3D datasets, and on average there are more than 3,000 object instances per category. We believe this dataset will provide a rich testbed to study 3D detection and pose estimation and will help to significantly push forward research in this area. We provide the results of variations of DPM [6] on our new dataset for object detection and viewpoint estimation in different scenarios, which can be used as baselines for the community. Our benchmark is available online at http://cvgl.stanford.edu/projects/pascal3d",
"Standardized benchmarks are crucial for the majority of computer vision applications. Although leaderboards and ranking tables should not be over-claimed, benchmarks often provide the most objective measure of performance and are therefore important guides for research. Recently, a new benchmark for Multiple Object Tracking, MOTChallenge, was launched with the goal of collecting existing and new data and creating a framework for the standardized evaluation of multiple object tracking methods. The first release of the benchmark focuses on multiple people tracking, since pedestrians are by far the most studied object in the tracking community. This paper accompanies a new release of the MOTChallenge benchmark. Unlike the initial release, all videos of MOT16 have been carefully annotated following a consistent protocol. Moreover, it not only offers a significant increase in the number of labeled boxes, but also provides multiple object classes besides pedestrians and the level of visibility for every single object of interest.",
"Object tracking is one of the most important components in numerous applications of computer vision. While much progress has been made in recent years with efforts on sharing code and datasets, it is of great importance to develop a library and benchmark to gauge the state of the art. After briefly reviewing recent advances of online object tracking, we carry out large scale experiments with various evaluation criteria to understand how these algorithms perform. The test image sequences are annotated with different attributes for performance evaluation and analysis. By analyzing quantitative results, we identify effective approaches for robust tracking and provide potential future research directions in this field.",
"Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti",
"The ability to predict future states of the environment is a central pillar of intelligence. At its core, effective prediction requires an internal model of the world and an understanding of the rules by which the world changes. Here, we explore the internal models developed by deep neural networks trained using a loss based on predicting future frames in synthetic video sequences, using a CNN-LSTM-deCNN framework. We first show that this architecture can achieve excellent performance in visual sequence prediction tasks, including state-of-the-art performance in a standard 'bouncing balls' dataset (, 2009). Using a weighted mean-squared error and adversarial loss (, 2014), the same architecture successfully extrapolates out-of-the-plane rotations of computer-generated faces. Furthermore, despite being trained end-to-end to predict only pixel-level information, our Predictive Generative Networks learn a representation of the latent structure of the underlying three-dimensional objects themselves. Importantly, we find that this representation is naturally tolerant to object transformations, and generalizes well to new tasks, such as classification of static images. Similar models trained solely with a reconstruction loss fail to generalize as effectively. We argue that prediction can serve as a powerful unsupervised loss for learning rich internal representations of high-level object features.",
"The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension.",
"",
"Visual understanding of complex urban street scenes is an enabling factor for a wide range of applications. Object detection has benefited enormously from large-scale datasets, especially in the context of deep learning. For semantic urban scene understanding, however, no current dataset adequately captures the complexity of real-world urban scenes. To address this, we introduce Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling. Cityscapes is comprised of a large, diverse set of stereo video sequences recorded in streets from 50 different cities. 5000 of these images have high quality pixel-level annotations, 20 000 additional images have coarse annotations to enable methods that leverage large volumes of weakly-labeled data. Crucially, our effort exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity. Our accompanying empirical study provides an in-depth analysis of the dataset characteristics, as well as a performance evaluation of several state-of-the-art approaches based on our benchmark."
]
} |
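The natural-scene benchmarks surveyed in this row (MS COCO, PASCAL VOC, KITTI, Cityscapes) all score detections by intersection-over-union (IoU) between a prediction and a ground-truth region, and the same criterion underlies building-footprint evaluation in overhead imagery. A minimal sketch for axis-aligned boxes (the function name and the `(xmin, ymin, xmax, ymax)` convention are illustrative, not any specific benchmark's implementation):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (xmin, ymin, xmax, ymax)."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    # Overlap extent along each axis; clamp at 0 for disjoint boxes.
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union if union > 0 else 0.0
```

Benchmarks then typically count a detection as a true positive only when IoU against an unmatched ground-truth object exceeds a threshold such as 0.5; for the small buildings discussed in this paper, a few pixels of positional error can drop IoU below that threshold.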
1903.12239 | 2929499422 | Detection and segmentation of objects in overhead imagery is a challenging task. The variable density, random orientation, small size, and instance-to-instance heterogeneity of objects in overhead imagery call for approaches distinct from existing models designed for natural scene datasets. Though new overhead imagery datasets are being developed, they almost universally comprise a single view taken from directly overhead ("at nadir"), failing to address one critical variable: look angle. By contrast, views vary in real-world overhead imagery, particularly in dynamic scenarios such as natural disasters where first looks are often over 40 degrees off-nadir. This represents an important challenge to computer vision methods, as changing view angle adds distortions, alters resolution, and changes lighting. At present, the impact of these perturbations on algorithmic detection and segmentation of objects is untested. To address this problem, we introduce the SpaceNet Multi-View Overhead Imagery (MVOI) Dataset, an extension of the SpaceNet open source remote sensing dataset. MVOI comprises 27 unique looks from a broad range of viewing angles (-32 to 54 degrees). Each of these images covers the same geography and is annotated with 126,747 building footprint labels, enabling direct assessment of the impact of viewpoint perturbation on model performance. We benchmark multiple leading segmentation and object detection models on: (1) building detection, (2) generalization to unseen viewing angles and resolutions, and (3) sensitivity of building footprint extraction to changes in resolution. We find that segmentation and object detection models struggle to identify buildings in off-nadir imagery and generalize poorly to unseen views, presenting an important benchmark to explore the broadly relevant challenge of detecting small, heterogeneous target objects in visually dynamic contexts.
| Previous datasets for overhead imagery focus on classification @cite_27 , bounding box object detection @cite_3 @cite_19 @cite_4 , instance-based segmentation @cite_35 , and object tracking @cite_22 tasks. None of these datasets comprise multiple collections of the same field of view from substantially different look angles, making it difficult to assess model robustness to new views. Within segmentation datasets, Van @cite_35 represents the closest work, with dense building and road annotations. We summarize the key characteristics of each dataset in Table . Our dataset matches or exceeds existing datasets in terms of imagery size and annotation density, but critically includes varying look direction and angle to better reflect the visual heterogeneity of real-world imagery. | {
"cite_N": [
"@cite_35",
"@cite_4",
"@cite_22",
"@cite_3",
"@cite_19",
"@cite_27"
],
"mid": [
"2811199523",
"",
"2519586580",
"2962749812",
"",
"2963785576"
],
"abstract": [
"Foundational mapping remains a challenge in many parts of the world, particularly in dynamic scenarios such as natural disasters when timely updates are critical. Updating maps is currently a highly manual process requiring a large number of human labelers to either create features or rigorously validate automated outputs. We propose that the frequent revisits of earth imaging satellite constellations may accelerate existing efforts to quickly update foundational maps when combined with advanced machine learning techniques. Accordingly, the SpaceNet partners (CosmiQ Works, Radiant Solutions, and NVIDIA), released a large corpus of labeled satellite imagery on Amazon Web Services (AWS) called SpaceNet. The SpaceNet partners also launched a series of public prize competitions to encourage improvement of remote sensing machine learning algorithms. The first two of these competitions focused on automated building footprint extraction, and the most recent challenge focused on road network extraction. In this paper we discuss the SpaceNet imagery, labels, evaluation metrics, prize challenge results to date, and future plans for the SpaceNet challenge series.",
"",
"Humans navigate crowded spaces such as a university campus by following common sense rules based on social etiquette. In this paper, we argue that in order to enable the design of new target tracking or trajectory forecasting methods that can take full advantage of these rules, we need to have access to better data in the first place. To that end, we contribute a new large-scale dataset that collects videos of various types of targets (not just pedestrians, but also bikers, skateboarders, cars, buses, golf carts) that navigate in a real world outdoor environment such as a university campus. Moreover, we introduce a new characterization that describes the “social sensitivity” at which two targets interact. We use this characterization to define “navigation styles” and improve both forecasting models and state-of-the-art multi-target tracking–whereby the learnt forecasting models help the data association step.",
"Object detection is an important and challenging problem in computer vision. Although the past decade has witnessed major advances in object detection in natural scenes, such successes have been slow to aerial imagery, not only because of the huge variation in the scale, orientation and shape of the object instances on the earth's surface, but also due to the scarcity of well-annotated datasets of objects in aerial scenes. To advance object detection research in Earth Vision, also known as Earth Observation and Remote Sensing, we introduce a large-scale Dataset for Object deTection in Aerial images (DOTA). To this end, we collect 2806 aerial images from different sensors and platforms. Each image is of the size about 4000 × 4000 pixels and contains objects exhibiting a wide variety of scales, orientations, and shapes. These DOTA images are then annotated by experts in aerial image interpretation using 15 common object categories. The fully annotated DOTA images contain 188,282 instances, each of which is labeled by an arbitrary (8 d.o.f.) quadrilateral. To build a baseline for object detection in Earth Vision, we evaluate state-of-the-art object detection algorithms on DOTA. Experiments demonstrate that DOTA well represents real Earth Vision applications and are quite challenging.",
"",
"We present a new dataset, Functional Map of the World (fMoW), which aims to inspire the development of machine learning models capable of predicting the functional purpose of buildings and land use from temporal sequences of satellite images and a rich set of metadata features. The metadata provided with each image enables reasoning about location, time, sun angles, physical sizes, and other features when making predictions about objects in the image. Our dataset consists of over 1 million images from over 200 countries. For each image, we provide at least one bounding box annotation containing one of 63 categories, including a \"false detection\" category. We present an analysis of the dataset along with baseline approaches that reason about metadata and temporal views. Our data, code, and pretrained models have been made publicly available."
]
} |
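One reason the off-nadir views central to this paper are hard is that ground resolution itself degrades with look angle: the sensor is farther from the scene and views it obliquely. Under a simple flat-earth approximation (an illustrative model, not a value taken from any of the abstracts), slant range grows as 1/cos(theta) and the cross-track footprint is additionally stretched by obliquity:

```python
import math

def off_nadir_gsd(nadir_gsd_m, look_angle_deg):
    """Approximate (cross_track, along_track) ground sample distance in meters.

    Flat-earth sketch: GSD_cross ~ nadir_gsd / cos(theta)^2 (range plus
    obliquity), GSD_along ~ nadir_gsd / cos(theta) (range only).
    """
    c = math.cos(math.radians(look_angle_deg))
    return nadir_gsd_m / c**2, nadir_gsd_m / c
```

For example, at the dataset's extreme 54-degree look angle this model predicts the cross-track GSD growing to roughly 1/cos(54°)² ≈ 2.9 times its nadir value, consistent with the paper's point that view angle changes effective resolution, not just geometry.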
1903.12239 | 2929499422 | Detection and segmentation of objects in overhead imagery is a challenging task. The variable density, random orientation, small size, and instance-to-instance heterogeneity of objects in overhead imagery call for approaches distinct from existing models designed for natural scene datasets. Though new overhead imagery datasets are being developed, they almost universally comprise a single view taken from directly overhead ("at nadir"), failing to address one critical variable: look angle. By contrast, views vary in real-world overhead imagery, particularly in dynamic scenarios such as natural disasters where first looks are often over 40 degrees off-nadir. This represents an important challenge to computer vision methods, as changing view angle adds distortions, alters resolution, and changes lighting. At present, the impact of these perturbations on algorithmic detection and segmentation of objects is untested. To address this problem, we introduce the SpaceNet Multi-View Overhead Imagery (MVOI) Dataset, an extension of the SpaceNet open source remote sensing dataset. MVOI comprises 27 unique looks from a broad range of viewing angles (-32 to 54 degrees). Each of these images covers the same geography and is annotated with 126,747 building footprint labels, enabling direct assessment of the impact of viewpoint perturbation on model performance. We benchmark multiple leading segmentation and object detection models on: (1) building detection, (2) generalization to unseen viewing angles and resolutions, and (3) sensitivity of building footprint extraction to changes in resolution. We find that segmentation and object detection models struggle to identify buildings in off-nadir imagery and generalize poorly to unseen views, presenting an important benchmark to explore the broadly relevant challenge of detecting small, heterogeneous target objects in visually dynamic contexts.
| The effect of different views on segmentation or object detection in natural scenes has not been thoroughly studied, as feature characteristics are relatively preserved even under rotation of the object in that context. Nonetheless, preliminary studies of classification model performance on video frames suggest that minimal pixel-level changes can impact performance @cite_2 . By contrast, substantial occlusion and distortion occur in off-nadir overhead imagery, complicating segmentation and placement of geospatially accurate object footprints, as shown in Figure A-B. Furthermore, due to the comparatively small size of target objects (buildings) in overhead imagery, changing view substantially alters their appearance (Figure C-D). We expect similar challenges to occur when detecting objects in natural scene images at a distance or in crowded views. Existing solutions to occlusion are often domain specific, exploiting either face-specific structure @cite_21 for recognition or relying on attention mechanisms to identify common elements @cite_0 or landmarks @cite_40 . The heterogeneity in building appearance in overhead imagery, and the absence of landmark features to identify them, make their detection an ideal research task for developing domain-agnostic models that are robust to occlusion. | {
"cite_N": [
"@cite_0",
"@cite_40",
"@cite_21",
"@cite_2"
],
"mid": [
"2792824754",
"2963980377",
"2209882149",
"2807007689"
],
"abstract": [
"Pedestrian detection has progressed significantly in the last years. However, occluded people are notoriously hard to detect, as their appearance varies substantially depending on a wide range of occlusion patterns. In this paper, we aim to propose a simple and compact method based on the FasterRCNN architecture for occluded pedestrian detection. We start with interpreting CNN channel features of a pedestrian detector, and we find that different channels activate responses for different body parts respectively. These findings motivate us to employ an attention mechanism across channels to represent various occlusion patterns in one single model, as each occlusion pattern can be formulated as some specific combination of body parts. Therefore, an attention network with self or external guidances is proposed as an add-on to the baseline FasterRCNN detector. When evaluating on the heavy occlusion subset, we achieve a significant improvement of 8pp to the baseline FasterRCNN detector on CityPersons and on Caltech we outperform the state-of-the-art method by 4pp.",
"A key step to driver safety is to observe the driver's activities, with the face being a key element in this process for extracting information such as head pose, blink rate, yawns, and talking to a passenger, which can then help derive higher level information, such as distraction, drowsiness, intent, and where they are looking. In the context of driving safety, it is important that the system perform robust estimation under harsh lighting and occlusion but also be able to detect when the occlusion occurs so that information predicted from occluded parts of the face can be taken into account properly. This paper introduces the Occluded Stacked Hourglass, based on the work of the original Stacked Hourglass network for body pose joint estimation, which is retrained to process a detected face window and output 68 occlusion heat maps, each corresponding to a facial landmark. Landmark location, occlusion levels and a refined face detection score, to reject false positives, are extracted from these heat maps. Using the facial landmark locations, features such as head pose and eye/mouth openness can be extracted to derive driver attention and activity. The system is evaluated for face detection, head pose, and occlusion estimation on various datasets in the wild, both quantitatively and qualitatively, and shows state-of-the-art results.",
"In this paper, we propose a novel deep convolutional network (DCN) that achieves outstanding performance on FDDB, PASCAL Face, and AFW. Specifically, our method achieves a high recall rate of 90.99% on the challenging FDDB benchmark, outperforming the state-of-the-art method [23] by a large margin of 2.91%. Importantly, we consider finding faces from a new perspective through scoring facial parts responses by their spatial structure and arrangement. The scoring mechanism is carefully formulated considering challenging cases where faces are only partially visible. This consideration allows our network to detect faces under severe occlusion and unconstrained pose variation, which are the main difficulty and bottleneck of most existing face detection approaches. We show that despite the use of DCN, our network can achieve practical runtime speed.",
"Deep convolutional network architectures are often assumed to guarantee generalization for small image translations and deformations. In this paper we show that modern CNNs (VGG16, ResNet50, and InceptionResNetV2) can drastically change their output when an image is translated in the image plane by a few pixels, and that this failure of generalization also happens with other realistic small image transformations. Furthermore, the deeper the network the more we see these failures to generalize. We show that these failures are related to the fact that the architecture of modern CNNs ignores the classical sampling theorem so that generalization is not guaranteed. We also show that biases in the statistics of commonly used image datasets makes it unlikely that CNNs will learn to be invariant to these transformations. Taken together our results suggest that the performance of CNNs in object recognition falls far short of the generalization capabilities of humans."
]
} |
1903.12133 | 2922757351 | Humans use a host of signals to infer the emotional state of others. In general, computer systems that leverage signals from multiple modalities will be more robust and accurate in the same task. We present a multimodal affect and context sensing platform. The system is composed of video, audio and application analysis pipelines that leverage ubiquitous sensors (camera and microphone) to log and broadcast emotion data in real-time. The platform is designed to enable easy prototyping of novel computer interfaces that sense, respond and adapt to human emotion. This paper describes the different audio, visual and application processing components and explains how the data is stored and/or broadcast for other applications to consume. We hope that this platform helps advance the state-of-the-art in affective computing by enabling development of novel human-computer interfaces. | There is an extensive literature on automated affect recognition. We will not cover the prior work completely here as surveys of the existing work provide a much more complete review of the field than would be possible in this section @cite_24 @cite_28 . However, we highlight a few highly relevant and seminal papers on multimodal affect recognition platforms and applications. | {
"cite_N": [
"@cite_24",
"@cite_28"
],
"mid": [
"1985867508",
"2753840835"
],
"abstract": [
"Affect detection is an important pattern recognition problem that has inspired researchers from several areas. The field is in need of a systematic review due to the recent influx of Multimodal (MM) affect detection systems that differ in several respects and sometimes yield incompatible results. This article provides such a survey via a quantitative review and meta-analysis of 90 peer-reviewed MM systems. The review indicated that the state of the art mainly consists of person-dependent models (62.2% of systems) that fuse audio and visual (55.6%) information to detect acted (52.2%) expressions of basic emotions and simple dimensions of arousal and valence (64.5%) with feature- (38.9%) and decision-level (35.6%) fusion techniques. However, there were also person-independent systems that considered additional modalities to detect nonbasic emotions and complex dimensions using model-level fusion techniques. The meta-analysis revealed that MM systems were consistently (85% of systems) more accurate than their best unimodal counterparts, with an average improvement of 9.83% (median of 6.60%). However, improvements were three times lower when systems were trained on natural (4.59%) versus acted data (12.7%). Importantly, MM accuracy could be accurately predicted (cross-validated R2 of 0.803) from unimodal accuracies and two system-level factors. Theoretical and applied implications and recommendations are discussed.",
"Sentiment analysis aims to automatically uncover the underlying attitude that we hold towards an entity. The aggregation of these sentiments over a population represents opinion polling and has numerous applications. Current text-based sentiment analysis relies on the construction of dictionaries and machine learning models that learn sentiment from large text corpora. Sentiment analysis from text is currently widely used for customer satisfaction assessment and brand perception analysis, among others. With the proliferation of social media, multimodal sentiment analysis is set to bring new opportunities with the arrival of complementary data streams for improving and going beyond text-based sentiment analysis. Since sentiment can be detected through affective traces it leaves, such as facial and vocal displays, multimodal sentiment analysis offers promising avenues for analyzing facial and vocal expressions in addition to the transcript or textual content. These approaches leverage emotion recognition and context inference to determine the underlying polarity and scope of an individual's sentiment. In this survey, we define sentiment and the problem of multimodal sentiment analysis and review recent developments in multimodal sentiment analysis in different domains, including spoken reviews, images, video blogs, human–machine and human–human interactions. Challenges and opportunities of this emerging field are also discussed leading to our thesis that multimodal sentiment analysis holds a significant untapped potential."
]
} |
1903.12133 | 2922757351 | Humans use a host of signals to infer the emotional state of others. In general, computer systems that leverage signals from multiple modalities will be more robust and accurate in the same task. We present a multimodal affect and context sensing platform. The system is composed of video, audio and application analysis pipelines that leverage ubiquitous sensors (camera and microphone) to log and broadcast emotion data in real-time. The platform is designed to enable easy prototyping of novel computer interfaces that sense, respond and adapt to human emotion. This paper describes the different audio, visual and application processing components and explains how the data is stored and/or broadcast for other applications to consume. We hope that this platform helps advance the state-of-the-art in affective computing by enabling development of novel human-computer interfaces. | Multimodal affect sensing has been applied in numerous contexts including teaching and learning environments @cite_16 @cite_33 , healthcare @cite_4 , the arts @cite_18 , and human-robot interaction @cite_22 . The first work on affect recognition started almost three decades ago, when physiological sensors, cameras and microphones were used to detect a host of affective responses. Early multimodal systems often comprised bulky equipment and wired sensors @cite_16 . The miniaturization of electronics and improvements in wireless communications now mean that sensing can be performed more easily using off-the-shelf devices that are small and ubiquitous (such as webcams, microphones, accelerometers). | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_33",
"@cite_22",
"@cite_16"
],
"mid": [
"2152742269",
"2109636054",
"1995641269",
"2022964551",
"2109626108"
],
"abstract": [
"The goal of the EyesWeb project is to develop a modular system for the real-time analysis of body movement and gesture. Such information can be used to control and generate sound, music, and visual media, and to control actuators (e.g., robots). Another goal of the project is to explore and develop models of interaction by extending music language toward gesture and visual languages, with a particular focus on the understanding of affect and expressive content in gesture. For example, we attempt to distinguish the expressive content from two instances of the same movement",
"We present SimSensei Kiosk, an implemented virtual human interviewer designed to create an engaging face-to-face interaction where the user feels comfortable talking and sharing information. SimSensei Kiosk is also designed to create interactional situations favorable to the automatic assessment of distress indicators, defined as verbal and nonverbal behaviors correlated with depression, anxiety or post-traumatic stress disorder (PTSD). In this paper, we summarize the design methodology, performed over the past two years, which is based on three main development cycles: (1) analysis of face-to-face human interactions to identify potential distress indicators, dialogue policies and virtual human gestures, (2) development and analysis of a Wizard-of-Oz prototype system where two human operators were deciding the spoken and gestural responses, and (3) development of a fully automatic virtual interviewer able to engage users in 15-25 minute interactions. We show the potential of our fully automatic virtual human interviewer in a user study, and situate its performance in relation to the Wizard-of-Oz prototype.",
"We developed and evaluated a multimodal affect detector that combines conversational cues, gross body language, and facial features. The multimodal affect detector uses feature-level fusion to combine the sensory channels and linear discriminant analyses to discriminate between naturally occurring experiences of boredom, engagement/flow, confusion, frustration, delight, and neutral. Training and validation data for the affect detector were collected in a study where 28 learners completed a 32-min. tutorial session with AutoTutor, an intelligent tutoring system with conversational dialogue. Classification results supported a channel × judgment type interaction, where the face was the most diagnostic channel for spontaneous affect judgments (i.e., at any time in the tutorial session), while conversational cues were superior for fixed judgments (i.e., every 20 s in the session). The analyses also indicated that the accuracy of the multichannel model (face, dialogue, and posture) was statistically higher than the best single-channel model for the fixed but not spontaneous affect expressions. However, multichannel models reduced the discrepancy (i.e., variance in the precision of the different emotions) of the discriminant models for both judgment types. The results also indicated that the combination of channels yielded superadditive effects for some affective states, but additive, redundant, and inhibitory effects for others. We explore the structure of the multimodal linear discriminant models and discuss the implications of some of our major findings.",
"In this paper, a multimodal user-emotion detection system for social robots is presented. This system is intended to be used during human–robot interaction, and it is integrated as part of the overall interaction system of the robot: the Robotics Dialog System (RDS). Two modes are used to detect emotions: the voice and face expression analysis. In order to analyze the voice of the user, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), which is written using the Chuck language. For emotion detection in facial expressions, the system, Gender and Emotion Facial Analysis (GEFA), has been also developed. This last system integrates two third-party solutions: Sophisticated High-speed Object Recognition Engine (SHORE) and Computer Expression Recognition Toolbox (CERT). Once these new components (GEVA and GEFA) give their results, a decision rule is applied in order to combine the information given by both of them. The result of this rule, the detected emotion, is integrated into the dialog system through communicative acts. Hence, each communicative act gives, among other things, the detected emotion of the user to the RDS so it can adapt its strategy in order to get a greater satisfaction degree during the human–robot dialog. Each of the new components, GEVA and GEFA, can also be used individually. Moreover, they are integrated with the robotic control platform ROS (Robot Operating System). Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results obtained from applying this decision rule in these experiments show a high success rate in automatic user emotion recognition, improving the results given by the two information channels (audio and visual) separately.",
"We propose a multi-sensor affect recognition system and evaluate it on the challenging task of classifying interest (or disinterest) in children trying to solve an educational puzzle on the computer. The multimodal sensory information from facial expressions and postural shifts of the learner is combined with information about the learner's activity on the computer. We propose a unified approach, based on a mixture of Gaussian Processes, for achieving sensor fusion under the problematic conditions of missing channels and noisy labels. This approach generates separate class labels corresponding to each individual modality. The final classification is based upon a hidden random variable, which probabilistically combines the sensors. The multimodal Gaussian Process approach achieves accuracy of over 86%, significantly outperforming classification using the individual modalities, and several other combination schemes."
]
} |
1903.12133 | 2922757351 | Humans use a host of signals to infer the emotional state of others. In general, computer systems that leverage signals from multiple modalities will be more robust and accurate in the same task. We present a multimodal affect and context sensing platform. The system is composed of video, audio and application analysis pipelines that leverage ubiquitous sensors (camera and microphone) to log and broadcast emotion data in real-time. The platform is designed to enable easy prototyping of novel computer interfaces that sense, respond and adapt to human emotion. This paper describes the different audio, visual and application processing components and explains how the data is stored and/or broadcast for other applications to consume. We hope that this platform helps advance the state-of-the-art in affective computing by enabling development of novel human-computer interfaces. | Multisense @cite_11 is a platform for multimodal affect sensing that incorporates both visual and audio components. Specifically, components included 3D head position-orientation and facial tracking, facial expression and gaze analysis, and audio analysis. It leverages existing public tools for some of these components. For example, audio analysis is performed using the openSMILE @cite_8 package. SimSensei @cite_4 is a virtual human interviewer designed to create engaging face-to-face interactions that are driven in part via the Multisense sensing algorithms. Multisense broadcasts signals to the Kiosk using the Perception Markup Language (PML) standard. | {
"cite_N": [
"@cite_8",
"@cite_4",
"@cite_11"
],
"mid": [
"2085662862",
"2109636054",
"2526567570"
],
"abstract": [
"We introduce the openSMILE feature extraction toolkit, which unites feature extraction algorithms from the speech processing and the Music Information Retrieval communities. Audio low-level descriptors such as CHROMA and CENS features, loudness, Mel-frequency cepstral coefficients, perceptual linear predictive cepstral coefficients, linear predictive coefficients, line spectral frequencies, fundamental frequency, and formant frequencies are supported. Delta regression and various statistical functionals can be applied to the low-level descriptors. openSMILE is implemented in C++ with no third-party dependencies for the core functionality. It is fast, runs on Unix and Windows platforms, and has a modular, component based architecture which makes extensions via plug-ins easy. It supports on-line incremental processing for all implemented features as well as off-line and batch processing. Numeric compatibility with future versions is ensured by means of unit tests. openSMILE can be downloaded from http://opensmile.sourceforge.net.",
"We present SimSensei Kiosk, an implemented virtual human interviewer designed to create an engaging face-to-face interaction where the user feels comfortable talking and sharing information. SimSensei Kiosk is also designed to create interactional situations favorable to the automatic assessment of distress indicators, defined as verbal and nonverbal behaviors correlated with depression, anxiety or post-traumatic stress disorder (PTSD). In this paper, we summarize the design methodology, performed over the past two years, which is based on three main development cycles: (1) analysis of face-to-face human interactions to identify potential distress indicators, dialogue policies and virtual human gestures, (2) development and analysis of a Wizard-of-Oz prototype system where two human operators were deciding the spoken and gestural responses, and (3) development of a fully automatic virtual interviewer able to engage users in 15-25 minute interactions. We show the potential of our fully automatic virtual human interviewer in a user study, and situate its performance in relation to the Wizard-of-Oz prototype.",
"During face-to-face interactions, people naturally integrate nonverbal behaviors such as facial expressions and body postures as part of the conversation to infer the communicative intent or emotional state of their interlocutor. The interpretation of these nonverbal behaviors will often be contextualized by interactional cues such as the previous spoken question, the general discussion topic or the physical environment. A critical step in creating computers able to understand or participate in this type of social face-to-face interactions is to develop a computational platform to synchronously recognize nonverbal behaviors as part of the interactional context. In this platform, information for the acoustic and visual modalities should be carefully synchronized and rapidly processed. At the same time, contextual and interactional cues should be remembered and integrated to better interpret nonverbal (and verbal) behaviors. In this article, we introduce a real-time computational framework, MultiSense, which offers flexible and efficient synchronization approaches for context-based nonverbal behavior analysis. MultiSense is designed to utilize interactional cues from both interlocutors (e.g., from the computer and the human participant) and integrate this contextual information when interpreting nonverbal behaviors. MultiSense can also assimilate behaviors over a full interaction and summarize the observed affective states of the user. We demonstrate the capabilities of the new framework with a concrete use case from the mental health domain where MultiSense is used as part of a decision support tool to assess indicators of psychological distress such as depression and post-traumatic stress disorder (PTSD). 
In this scenario, MultiSense not only infers psychological distress indicators from nonverbal behaviors but also broadcasts the user state in real-time to a virtual agent (i.e., a digital interviewer) designed to conduct semi-structured interviews with human participants. Our experiments show the added value of our multimodal synchronization approaches and also demonstrate the importance of MultiSense contextual interpretation when inferring distress indicators."
]
} |
1903.11725 | 2969202087 | We propose a learning framework, named Multi-Coordinate Cost Balancing (MCCB), to address the problem of acquiring point-to-point movement skills from demonstrations. MCCB encodes demonstrations simultaneously in multiple differential coordinates that specify local geometric properties. MCCB generates reproductions by solving a convex optimization problem with a multi-coordinate cost function and linear constraints on the reproductions, such as initial, target, and via points. Further, since the relative importance of each coordinate system in the cost function might be unknown for a given skill, MCCB learns optimal weighting factors that balance the cost function. We demonstrate the effectiveness of MCCB via detailed experiments conducted on one handwriting dataset and three complex skill datasets. | Learning from demonstration has attracted a lot of attention from researchers in the past few decades. While several categories of LfD methods exist @cite_18 , our work falls under the category of trajectory-based LfD. In this category, demonstrations take the form of trajectories and the methods aim to synthesize trajectories that accurately reproduce the demonstrations. | {
"cite_N": [
"@cite_18"
],
"mid": [
"1986014385"
],
"abstract": [
"We present a comprehensive survey of robot Learning from Demonstration (LfD), a technique that develops policies from example state to action mappings. We introduce the LfD design choices in terms of demonstrator, problem space, policy derivation and performance, and contribute the foundations for a structure in which to categorize LfD research. Specifically, we analyze and categorize the multiple ways in which examples are gathered, ranging from teleoperation to imitation, as well as the various techniques for policy derivation, including matching functions, dynamics models and plans. To conclude we discuss LfD limitations and related promising areas for future research."
]
} |
1903.11725 | 2969202087 | We propose a learning framework, named Multi-Coordinate Cost Balancing (MCCB), to address the problem of acquiring point-to-point movement skills from demonstrations. MCCB encodes demonstrations simultaneously in multiple differential coordinates that specify local geometric properties. MCCB generates reproductions by solving a convex optimization problem with a multi-coordinate cost function and linear constraints on the reproductions, such as initial, target, and via points. Further, since the relative importance of each coordinate system in the cost function might be unknown for a given skill, MCCB learns optimal weighting factors that balance the cost function. We demonstrate the effectiveness of MCCB via detailed experiments conducted on one handwriting dataset and three complex skill datasets. | Dynamical systems-based trajectory learning methods, such as @cite_17 @cite_0 @cite_19 , encode demonstrations using statistical dynamical systems and generate reproductions by forward propagating the dynamics. While such deterministic methods exhibit several advantages, such as convergence guarantees and robustness to perturbations, they are restricted to learning in a single coordinate system and ignore inherent uncertainties in the demonstrations. They incentivize conformance to the norm even when demonstrations exhibit high variance. | {
"cite_N": [
"@cite_0",
"@cite_19",
"@cite_17"
],
"mid": [
"2802383953",
"1965421192",
"2129202194"
],
"abstract": [
"This paper presents a unified framework of model-learning algorithms, called contracting dynamical system primitives (CDSP), that can be used to learn pose (i.e., position and orientation) dynamics of point-to-point motions from demonstrations. The position and the orientation (represented using quaternions) trajectories are modeled as two separate autonomous nonlinear dynamical systems. The special constraints of the $S^3$ manifold are enforced in the formulation of the system that models the orientation dynamics. To capture the variability in the demonstrations, the dynamical systems are estimated using Gaussian mixture models (GMMs). The parameters of the GMMs are learned subject to the constraints derived using partial contraction analysis. The learned models’ reproductions are shown to accurately reproduce the demonstrations and are guaranteed to converge to the desired goal location. Experimental results illustrate the CDSP algorithm’s ability to accurately learn position and orientation dynamics and the utility of the learned models in path generation for a Baxter robot arm. The CDSP algorithm is evaluated on a publicly available dataset and a synthetic dataset, and is shown to have the lowest and comparable average reproduction errors when compared to state-of-the-art imitation learning algorithms.",
"Nonlinear dynamical systems are a promising representation to learn complex robot movements. Besides their undoubted modeling power, it is of major importance that such systems work in a stable manner. We therefore present a neural learning scheme that estimates stable dynamical systems from demonstrations based on a two-stage process: first, a data-driven Lyapunov function candidate is estimated. Second, stability is incorporated by means of a novel method to respect local constraints in the neural learning. We show in two experiments that this method is capable of learning stable dynamics while simultaneously sustaining the accuracy of the estimate and robustly generates complex movements.",
"This paper presents a method to learn discrete robot motions from a set of demonstrations. We model a motion as a nonlinear autonomous (i.e., time-invariant) dynamical system (DS) and define sufficient conditions to ensure global asymptotic stability at the target. We propose a learning method, which is called Stable Estimator of Dynamical Systems (SEDS), to learn the parameters of the DS to ensure that all motions closely follow the demonstrations while ultimately reaching and stopping at the target. Time-invariance and global asymptotic stability at the target ensures that the system can respond immediately and appropriately to perturbations that are encountered during the motion. The method is evaluated through a set of robot experiments and on a library of human handwriting motions."
]
} |
1903.11725 | 2969202087 | We propose a learning framework, named Multi-Coordinate Cost Balancing (MCCB), to address the problem of acquiring point-to-point movement skills from demonstrations. MCCB encodes demonstrations simultaneously in multiple differential coordinates that specify local geometric properties. MCCB generates reproductions by solving a convex optimization problem with a multi-coordinate cost function and linear constraints on the reproductions, such as initial, target, and via points. Further, since the relative importance of each coordinate system in the cost function might be unknown for a given skill, MCCB learns optimal weighting factors that balance the cost function. We demonstrate the effectiveness of MCCB via detailed experiments conducted on one handwriting dataset and three complex skill datasets. | Trajectory optimization methods, such as @cite_2 and @cite_10 , focus on geometric features by minimizing costs specified using predefined norms. An optimization framework proposed in @cite_22 attempts to adapt multiple demonstrations to new initial and target locations by minimizing the distance between the demonstrations and the reproduction according to a learned Hilbert space norm. Indeed, learning an appropriate Hilbert space norm is related to finding an appropriate coordinate system based on the demonstrations. However, similar to the dynamical systems-based methods, the methods in @cite_2 @cite_10 @cite_22 are restricted to a single predefined or learned coordinate system and do not explicitly model and utilize the inherent time-dependent variations in the demonstrations. | {
"cite_N": [
"@cite_10",
"@cite_22",
"@cite_2"
],
"mid": [
"2142224528",
"1551864151",
"2099893201"
],
"abstract": [
"We present a new optimization-based approach for robotic motion planning among obstacles. Like CHOMP (Covariant Hamiltonian Optimization for Motion Planning), our algorithm can be used to find collision-free trajectories from naïve, straight-line initializations that might be in collision. At the core of our approach are (a) a sequential convex optimization procedure, which penalizes collisions with a hinge loss and increases the penalty coefficients in an outer loop as necessary, and (b) an efficient formulation of the no-collisions constraint that directly considers continuous-time safety. Our algorithm is implemented in a software package called TrajOpt. We report results from a series of experiments comparing TrajOpt with CHOMP and randomized planners from OMPL, with regard to planning time and path quality. We consider motion planning for 7 DOF robot arms, 18 DOF full-body robots, statically stable walking motion for the 34 DOF Atlas humanoid robot, and physical experiments with the 18 DOF PR2. We also apply TrajOpt to plan curvature-constrained steerable needle trajectories in the SE(3) configuration space and multiple non-intersecting curved channels within 3D-printed implants for intracavitary brachytherapy. Details, videos, and source code are freely available at: http://rll.berkeley.edu/trajopt/ijrr.",
"We formalize the problem of adapting a demonstrated trajectory to a new start and goal configuration as an optimization problem over a Hilbert space of trajectories: minimize the distance between the demonstration and the new trajectory subject to the new end point constraints. We show that the commonly used version of Dynamic Movement Primitives (DMPs) implement this minimization in the way they adapt demonstrations, for a particular choice of the Hilbert space norm. The generalization to arbitrary norms enables the robot to select a more appropriate norm for the task, as well as learn how to adapt the demonstration from the user. Our experiments show that this can significantly improve the robot's ability to accurately generalize the demonstration.",
"Existing high-dimensional motion planning algorithms are simultaneously overpowered and underpowered. In domains sparsely populated by obstacles, the heuristics used by sampling-based planners to navigate “narrow passages” can be needlessly complex; furthermore, additional post-processing is required to remove the jerky or extraneous motions from the paths that such planners generate. In this paper, we present CHOMP, a novel method for continuous path refinement that uses covariant gradient techniques to improve the quality of sampled trajectories. Our optimization technique both optimizes higher-order dynamics and is able to converge over a wider range of input paths relative to previous path optimization strategies. In particular, we relax the collision-free feasibility prerequisite on input paths required by those strategies. As a result, CHOMP can be used as a standalone motion planner in many real-world planning queries. We demonstrate the effectiveness of our proposed method in manipulation planning for a 6-DOF robotic arm as well as in trajectory generation for a walking quadruped robot."
]
} |
1903.11725 | 2969202087 | We propose a learning framework, named Multi-Coordinate Cost Balancing (MCCB), to address the problem of acquiring point-to-point movement skills from demonstrations. MCCB encodes demonstrations simultaneously in multiple differential coordinates that specify local geometric properties. MCCB generates reproductions by solving a convex optimization problem with a multi-coordinate cost function and linear constraints on the reproductions, such as initial, target, and via points. Further, since the relative importance of each coordinate system in the cost function might be unknown for a given skill, MCCB learns optimal weighting factors that balance the cost function. We demonstrate the effectiveness of MCCB via detailed experiments conducted on one handwriting dataset and three complex skill datasets. | Probabilistic trajectory-learning methods, such as @cite_16 @cite_21 and @cite_20 , on the other hand, capture and utilize the variation observed in the demonstrations. However, these methods are also restricted to encoding demonstrations in a single predefined coordinate system that is assumed to be known. | {
"cite_N": [
"@cite_16",
"@cite_21",
"@cite_20"
],
"mid": [
"2769555377",
"2740824349",
"2140801763"
],
"abstract": [
"",
"A data-driven identification of dynamical systems requiring only minimal prior knowledge is promising whenever no analytically derived model structure is available, e.g., from first principles in physics. However, meta-knowledge on the system’s behavior is often given and should be exploited: Stability as fundamental property is essential when the model is used for controller design or movement generation. Therefore, this paper proposes a framework for learning stable stochastic systems from data. We focus on identifying a state-dependent coefficient form of the nonlinear stochastic model which is globally asymptotically stable according to probabilistic Lyapunov methods. We compare our approach to other state of the art methods on real-world datasets in terms of flexibility and stability.",
"Movement Primitives (MP) are a well-established approach for representing modular and re-usable robot movement generators. Many state-of-the-art robot learning successes are based on MPs, due to their compact representation of the inherently continuous and high dimensional robot movements. A major goal in robot learning is to combine multiple MPs as building blocks in a modular control architecture to solve complex tasks. To this effect, a MP representation has to allow for blending between motions, adapting to altered task variables, and co-activating multiple MPs in parallel. We present a probabilistic formulation of the MP concept that maintains a distribution over trajectories. Our probabilistic approach allows for the derivation of new operations which are essential for implementing all aforementioned properties in one framework. In order to use such a trajectory distribution for robot movement control, we analytically derive a stochastic feedback controller which reproduces the given trajectory distribution. We evaluate and compare our approach to existing methods on several simulated as well as real robot scenarios."
]
} |
1903.11725 | 2969202087 | We propose a learning framework, named Multi-Coordinate Cost Balancing (MCCB), to address the problem of acquiring point-to-point movement skills from demonstrations. MCCB encodes demonstrations simultaneously in multiple differential coordinates that specify local geometric properties. MCCB generates reproductions by solving a convex optimization problem with a multi-coordinate cost function and linear constraints on the reproductions, such as initial, target, and via points. Further, since the relative importance of each coordinate system in the cost function might be unknown for a given skill, MCCB learns optimal weighting factors that balance the cost function. We demonstrate the effectiveness of MCCB via detailed experiments conducted on one handwriting dataset and three complex skill datasets. | Our design of the costs in each differential coordinate is inspired by the minimal intervention principle @cite_4 that takes variance into account. While the approach in @cite_4 does encode demonstrations in different frames of references, all the frames are restricted to Cartesian coordinates or orientation space. Furthermore, all the relevant frames for a given task are also expected to be provided by the user. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2053324916"
],
"abstract": [
"We present a task-parameterized probabilistic model encoding movements in the form of virtual spring-damper systems acting in multiple frames of reference. Each candidate coordinate system observes a set of demonstrations from its own perspective, by extracting an attractor path whose variations depend on the relevance of the frame at each step of the task. This information is exploited to generate new attractor paths in new situations (new position and orientation of the frames), with the predicted covariances used to estimate the varying stiffness and damping of the spring-damper systems, resulting in a minimal intervention control strategy. The approach is tested with a 7-DOFs Barrett WAM manipulator whose movement and impedance behavior need to be modulated in regard to the position and orientation of two external objects varying during demonstration and reproduction."
]
} |
1903.11725 | 2969202087 | We propose a learning framework, named Multi-Coordinate Cost Balancing (MCCB), to address the problem of acquiring point-to-point movement skills from demonstrations. MCCB encodes demonstrations simultaneously in multiple differential coordinates that specify local geometric properties. MCCB generates reproductions by solving a convex optimization problem with a multi-coordinate cost function and linear constraints on the reproductions, such as initial, target, and via points. Further, since the relative importance of each coordinate system in the cost function might be unknown for a given skill, MCCB learns optimal weighting factors that balance the cost function. We demonstrate the effectiveness of MCCB via detailed experiments conducted on one handwriting dataset and three complex skill datasets. | The motion planning framework in @cite_13 , complementary to our approach, utilizes a blended cost function, the construction of which is guided by probability distributions learned from the demonstrations. This framework incentivizes factors such as smoothness, manipulability, and obstacle avoidance, but is restricted to the Cartesian coordinate system. MCCB, on the other hand, encodes demonstrations in multiple differential coordinates and learns to optimally balance their relative influences, but does not consider factors such as manipulability and obstacle avoidance. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2577634205"
],
"abstract": [
"Trajectory optimization is an essential tool for motion planning under multiple constraints of robotic manipulators. Optimization-based methods can explicitly optimize a trajectory by leveraging prior knowledge of the system and have been used in various applications such as collision avoidance. However, these methods often require a hand-coded cost function in order to achieve the desired behavior. Specifying such cost function for a complex desired behavior, e.g., disentangling a rope, is a nontrivial task that is often even infeasible. Learning from demonstration (LfD) methods offer an alternative way to program robot motion. LfD methods are less dependent on analytical models and instead learn the behavior of experts implicitly from the demonstrated trajectories. However, the problem of adapting the demonstrations to new situations, e.g., avoiding newly introduced obstacles, has not been fully investigated in the literature. In this letter, we present a motion planning framework that combines the advantages of optimization-based and demonstration-based methods. We learn a distribution of trajectories demonstrated by human experts and use it to guide the trajectory optimization process. The resulting trajectory maintains the demonstrated behaviors, which are essential to performing the task successfully, while adapting the trajectory to avoid obstacles. In simulated experiments and with a real robotic system, we verify that our approach optimizes the trajectory to avoid obstacles and encodes the demonstrated behavior in the resulting trajectory."
]
} |
1903.11725 | 2969202087 | We propose a learning framework, named Multi-Coordinate Cost Balancing (MCCB), to address the problem of acquiring point-to-point movement skills from demonstrations. MCCB encodes demonstrations simultaneously in multiple differential coordinates that specify local geometric properties. MCCB generates reproductions by solving a convex optimization problem with a multi-coordinate cost function and linear constraints on the reproductions, such as initial, target, and via points. Further, since the relative importance of each coordinate system in the cost function might be unknown for a given skill, MCCB learns optimal weighting factors that balance the cost function. We demonstrate the effectiveness of MCCB via detailed experiments conducted on one handwriting dataset and three complex skill datasets. | Differential coordinates have been extensively used in the computer graphics community @cite_6 @cite_11 . Prior work in trajectory learning that incorporates differential coordinates includes the Laplacian trajectory editing (LTE) algorithm @cite_9 . Using Laplacian coordinates, the LTE algorithm adapts a single demonstration to new initial, target, and via points while preserving the shape. However, the LTE algorithm does not reason about the relative importances of multiple coordinates. | {
"cite_N": [
"@cite_9",
"@cite_6",
"@cite_11"
],
"mid": [
"2107221262",
"2161578027",
"1984159596"
],
"abstract": [
"Assuming that a robot trajectory is given from a high-level planning or learning mechanism, it needs to be adapted to react to dynamic environment changes. In this article we propose a novel approach to deform trajectories while keeping their local shape similar, which is based on the discrete Laplace-Beltrami operator. The approach can be readily extended and covers multiple deformation techniques including fixed waypoints that must be passed, positional constraints for collision avoidance or a cooperative manipulation scheme for the coordination of multiple robots. Due to its low computational complexity it allows for real-time trajectory deformation both on local and global scale and online adaptation to changed environmental constraints. Simulations illustrate the straightforward combination of the proposed approach with other established trajectory-related methods like artificial potential fields or prioritized inverse kinematics. Experiments with the HRP-4 humanoid successfully demonstrate the applicability in complex daily-life tasks.",
"One of the main challenges in editing a mesh is to retain the visual appearance of the surface after applying various modifications. In this paper we advocate the use of linear differential coordinates as means to preserve the high-frequency detail of the surface. The differential coordinates represent the details and are defined by a linear transformation of the mesh vertices. This allows the reconstruction of the edited surface by solving a linear system that satisfies the reconstruction of the local details in least squares sense. Since the differential coordinates are defined in a global coordinate system they are not rotation-invariant. To compensate for that, we rotate them to agree with the rotation of an approximated local frame. We show that the linear least squares system can be solved fast enough to guarantee interactive response time thanks to a precomputed factorization of the coefficient matrix. We demonstrate that our approach enables to edit complex detailed meshes while keeping the shape of the details in their natural orientation.",
"One of the challenges in geometry processing is to automatically reconstruct a higher-level representation from raw geometric data. For instance, computing a parameterization of an object helps attaching information to it and converting between various representations. More generally, this family of problems may be thought of in terms of constructing structured function bases attached to surfaces. In this paper, we study a specific type of hierarchical function bases, defined by the eigenfunctions of the Laplace-Beltrami operator. When applied to a sphere, this function basis corresponds to the classical spherical harmonics. On more general objects, this defines a function basis well adapted to the geometry and the topology of the object. Based on physical analogies (vibration modes), we first give an intuitive view before explaining the underlying theory. We then explain in practice how to compute an approximation of the eigenfunctions of a differential operator, and show possible applications in geometry processing."
]
} |
1903.11741 | 2924655441 | The scarcity of richly annotated medical images is limiting supervised deep learning based solutions to medical image analysis tasks, such as localizing discriminatory radiomic disease signatures. Therefore, it is desirable to leverage unsupervised and weakly supervised models. Most recent weakly supervised localization methods apply attention maps or region proposals in a multiple instance learning formulation. While attention maps can be noisy, leading to erroneously highlighted regions, it is not simple to decide on an optimal window bag size for multiple instance learning approaches. In this paper, we propose a learned spatial masking mechanism to filter out irrelevant background signals from attention maps. The proposed method minimizes mutual information between a masked variational representation and the input while maximizing the information between the masked representation and class labels. This results in more accurate localization of discriminatory regions. We tested the proposed model on the ChestX-ray8 dataset to localize pneumonia from chest X-ray images without using any pixel-level or bounding-box annotations. | @cite_21 and @cite_13 applied region-proposal and beam search based methods to localize objects from natural (i.e., non-medical) images. Training such hybrid localization-classification models requires large amounts of bounding-box level image annotations, which can suffer from rater-variability and can be prohibitively expensive or time consuming. Several existing methods @cite_16 @cite_3 @cite_14 @cite_0 @cite_9 formulate the weakly-supervised localization as a multiple instance learning (MIL) problem. However, like for region-proposal based methods, it is difficult to find an optimal window size. | {
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_21",
"@cite_3",
"@cite_0",
"@cite_16",
"@cite_13"
],
"mid": [
"2952072685",
"318792885",
"2290280043",
"2132984949",
"2106841609",
"2963603913",
"2613833277"
],
"abstract": [
"Learning to localize objects with minimal supervision is an important problem in computer vision, since large fully annotated datasets are extremely costly to obtain. In this paper, we propose a new method that achieves this goal with only image-level labels of whether the objects are present or not. Our approach combines a discriminative submodular cover problem for automatically discovering a set of positive object windows with a smoothed latent SVM formulation. The latter allows us to leverage efficient quasi-Newton optimization techniques. Our experiments demonstrate that the proposed approach provides a 50% relative improvement in mean average precision over the current state-of-the-art on PASCAL VOC 2007 detection.",
"Localizing objects in cluttered backgrounds is a challenging task in weakly supervised localization. Due to large object variations in cluttered images, objects have large ambiguity with backgrounds. However, backgrounds contain useful latent information, e.g., the sky for aeroplanes. If we can learn this latent information, object-background ambiguity can be reduced to suppress the background. In this paper, we propose the latent category learning (LCL), which is an unsupervised learning problem given only image-level class labels. Firstly, inspired by the latent semantic discovery, we use the typical probabilistic Latent Semantic Analysis (pLSA) to learn the latent categories, which can represent objects, object parts or backgrounds. Secondly, to determine which category contains the target object, we propose a category selection method evaluating each category’s discrimination. We evaluate the method on the PASCAL VOC 2007 database and ILSVRC 2013 detection challenge. On VOC 2007, the proposed method yields the annotation accuracy of 48%, which outperforms previous results by 10%. More importantly, we achieve the detection average precision of 30.9%, which improves previous results by 8% and can be competitive with the supervised deformable part model (DPM) 5.0 baseline 33.7%. On ILSVRC 2013 detection, the method yields the precision of 6.0%, which is also competitive with the DPM 5.0.",
"Object localization is an important computer vision problem with a variety of applications. The lack of large scale object-level annotations and the relative abundance of image-level labels makes a compelling case for weak supervision in the object localization task. Deep Convolutional Neural Networks are a class of state-of-the-art methods for the related problem of object recognition. In this paper, we describe a novel object localization algorithm which uses classification networks trained on only image labels. This weakly supervised method leverages local spatial and semantic patterns captured in the convolutional layers of classification networks. We propose an efficient beam search based approach to detect and localize multiple objects in images. The proposed method significantly outperforms the state-of-the-art in standard object localization data-sets.",
"Latent variable models are a powerful tool for addressing several tasks in machine learning. However, the algorithms for learning the parameters of latent variable models are prone to getting stuck in a bad local optimum. To alleviate this problem, we build on the intuition that, rather than considering all samples simultaneously, the algorithm should be presented with the training data in a meaningful order that facilitates learning. The order of the samples is determined by how easy they are. The main challenge is that often we are not provided with a readily computable measure of the easiness of samples. We address this issue by proposing a novel, iterative self-paced learning algorithm where each iteration simultaneously selects easy samples and learns a new parameter vector. The number of samples selected is governed by a weight that is annealed until the entire training data has been considered. We empirically demonstrate that the self-paced learning algorithm outperforms the state of the art method for learning a latent structural SVM on four applications: object localization, noun phrase coreference, motif finding and handwritten digit recognition.",
"The prominence of weakly labeled data gives rise to a growing demand for object detection methods that can cope with minimal supervision. We propose an approach that automatically identifies discriminative configurations of visual patterns that are characteristic of a given object class. We formulate the problem as a constrained submodular optimization problem and demonstrate the benefits of the discovered configurations in remedying mislocalizations and finding informative positive and negative training examples. Together, these lead to state-of-the-art weakly-supervised detection results on the challenging PASCAL VOC dataset.",
"Weakly supervised learning of object detection is an important problem in image understanding that still does not have a satisfactory solution. In this paper, we address this problem by exploiting the power of deep convolutional neural networks pre-trained on large-scale image-level classification tasks. We propose a weakly supervised deep detection architecture that modifies one such network to operate at the level of image regions, performing simultaneously region selection and classification. Trained as an image classifier, the architecture implicitly learns object detectors that are better than alternative weakly supervised detection systems on the PASCAL VOC data. The model, which is a simple and elegant end-to-end architecture, outperforms standard data augmentation and fine-tuning techniques for the task of image-level classification as well.",
""
]
} |
1903.11741 | 2924655441 | The scarcity of richly annotated medical images is limiting supervised deep learning based solutions to medical image analysis tasks, such as localizing discriminatory radiomic disease signatures. Therefore, it is desirable to leverage unsupervised and weakly supervised models. Most recent weakly supervised localization methods apply attention maps or region proposals in a multiple instance learning formulation. While attention maps can be noisy, leading to erroneously highlighted regions, it is not simple to decide on an optimal window bag size for multiple instance learning approaches. In this paper, we propose a learned spatial masking mechanism to filter out irrelevant background signals from attention maps. The proposed method minimizes mutual information between a masked variational representation and the input while maximizing the information between the masked representation and class labels. This results in more accurate localization of discriminatory regions. We tested the proposed model on the ChestX-ray8 dataset to localize pneumonia from chest X-ray images without using any pixel-level or bounding-box annotations. | Similarly to previous works @cite_4 @cite_18 @cite_12 , @cite_2 proposed an activation map based framework to produce tight bounding boxes around objects. However, in the context of object localization, there might be erroneously detected regions (false positives) or regions activations which spread over unrealistically wide ranges. This is because saliency maps @cite_6 are usually noisy. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_6",
"@cite_2",
"@cite_12"
],
"mid": [
"2770241596",
"2884149198",
"2962851944",
"2884195989",
"2920507514"
],
"abstract": [
"We develop an algorithm that can detect pneumonia from chest X-rays at a level exceeding practicing radiologists. Our algorithm, CheXNet, is a 121-layer convolutional neural network trained on ChestX-ray14, currently the largest publicly available chest X-ray dataset, containing over 100,000 frontal-view X-ray images with 14 diseases. Four practicing academic radiologists annotate a test set, on which we compare the performance of CheXNet to that of radiologists. We find that CheXNet exceeds average radiologist performance on the F1 metric. We extend CheXNet to detect all 14 diseases in ChestX-ray14 and achieve state of the art results on all 14 diseases.",
"Chest X-rays is one of the most commonly available and affordable radiological examinations in clinical practice. While detecting thoracic diseases on chest X-rays is still a challenging task for machine intelligence, due to 1) the highly varied appearance of lesion areas on X-rays from patients of different thoracic disease and 2) the shortage of accurate pixel-level annotations by radiologists for model training. Existing machine learning methods are unable to deal with the challenge that thoracic diseases usually happen in localized disease-specific areas. In this article, we propose a weakly supervised deep learning framework equipped with squeeze-and-excitation blocks, multi-map transfer and max-min pooling for classifying common thoracic diseases as well as localizing suspicious lesion regions on chest X-rays. The comprehensive experiments and discussions are performed on the ChestX-ray14 dataset. Both numerical and visual results have demonstrated the effectiveness of proposed model and its better performance against the state-of-the-art pipelines.",
"This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [5], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [13].",
"This work provides a simple approach to discover tight object bounding boxes with only image-level supervision, called Tight box mining with Surrounding Segmentation Context (TS2C). We observe that object candidates mined through current multiple instance learning methods are usually trapped to discriminative object parts, rather than the entire object. TS2C leverages surrounding segmentation context derived from weakly-supervised segmentation to suppress such low-quality distracting candidates and boost the high-quality ones. Specifically, TS2C is developed based on two key properties of desirable bounding boxes: (1) high purity, meaning most pixels in the box are with high object response, and (2) high completeness, meaning the box covers high object response pixels comprehensively. With such novel and computable criteria, more tight candidates can be discovered for learning a better object detector. With TS2C, we obtain 48.0 and 44.4 mAP scores on VOC 2007 and 2012 benchmarks, which are the new state-of-the-arts.",
""
]
} |
1903.11782 | 2922854965 | Inter-cell interference (ICI) is one of the major performance-limiting factors in the context of modern cellular systems. To tackle ICI, coordinated multi-point (CoMP) schemes have been proposed as a key technology for next-generation mobile communication systems. Although CoMP schemes offer promising theoretical gains, their performance could degrade significantly because of practical issues such as limited backhaul. To address this issue, we explore a novel uplink interference management scheme called anywhere decoding, which requires exchanging just a few bits of information per coding interval among the base stations (BSs). In spite of the low overhead of anywhere decoding, we observe considerable gains in the outage probability performance of cell-edge users, compared to no cooperation between BSs. Additionally, asymptotic results of the outage probability for high-SNR regimes demonstrate that anywhere decoding schemes achieve full spatial diversity through multiple decoding opportunities, and they are within 1.5 dB of full cooperation. | : In this scheme, is necessary between the BSs. Instead, BSs estimate the channels of interfering terminals and either perform successive interference cancellation (SIC), take spatial characteristics of interference into account in adjusting receive filters, i.e., interference rejection combining (IRC), or implement a combination of these two schemes (IRC+SIC) @cite_1 . | {
"cite_N": [
"@cite_1"
],
"mid": [
"2593447939"
],
"abstract": [
"Recently, the 3rd Generation Partnership Project (3GPP) has developed a sidelink system to compensate for the explosive increase in mobile data traffic. However, there are numerous challenging technical problems that must be overcome. One major problem is inter-cell interference management. 3GPP has considered network-assisted interference cancellation and suppression to improve both the signal-to-noise-plus-interference ratio (SINR) and receiver performance by suppression or cancellation of inter-cell interference signals. In this paper, we propose a novel advanced receiver to reduce the interference from neighbor cell in sidelink systems. The proposed receiver can suppress and cancel the interference by combining interference rejection combining with successive interference cancellation. We perform a system-level simulation based on 20 MHz bandwidth of 3GPP LTE-Advanced system. Simulation results show that the proposed receiver can improve the SINR, throughput, and spectral efficiency, relative to a conventional system."
]
} |
1903.11633 | 2922959542 | Landmark localization in images and videos is a classic problem solved in various ways. Nowadays, with deep networks prevailing throughout machine learning, there are revamped interests in pushing facial landmark detection technologies to handle more challenging data. Most efforts use network objectives based on L1 or L2 norms, which have several disadvantages. First of all, the locations of landmarks are determined from generated heatmaps (i.e., confidence maps) from which predicted landmark locations (i.e., the means) get penalized without accounting for the spread: a high scatter corresponds to low confidence and vice-versa. For this, we introduce a LaplaceKL objective that penalizes for a low confidence. Another issue is a dependency on labeled data, which are expensive to obtain and susceptible to error. To address both issues we propose an adversarial training framework that leverages unlabeled data to improve model performance. Our method claims state-of-the-art on all of the 300W benchmarks and ranks second-to-best on the Annotated Facial Landmarks in the Wild (AFLW) dataset. Furthermore, our model is robust with a reduced size: 1 8 the number of channels (i.e., 0.0398MB) is comparable to state-of-that-art in real-time on CPU. Thus, we show that our method is of high practical value to real-life application. | has been of interest to researchers for decades. At first, most methods were based on Active Shape Models @cite_4 and Active Appearance Models @cite_25 . Then, Cascaded Regression Methods (CRMs) were introduced, which operate in a sequential fashion; starting with the average shape, then incrementally shifting the shape closer to the target shape. CRMs offer high speed and accuracy ( @math 1,000 fps on CPU @cite_11 @cite_13 ). | {
"cite_N": [
"@cite_13",
"@cite_4",
"@cite_25",
"@cite_11"
],
"mid": [
"2087681821",
"",
"2152826865",
"1998294030"
],
"abstract": [
"This paper addresses the problem of Face Alignment for a single image. We show how an ensemble of regression trees can be used to estimate the face's landmark positions directly from a sparse subset of pixel intensities, achieving super-realtime performance with high quality predictions. We present a general framework based on gradient boosting for learning an ensemble of regression trees that optimizes the sum of square error loss and naturally handles missing or partially labelled data. We show how using appropriate priors exploiting the structure of image data helps with efficient feature selection. Different regularization strategies and its importance to combat overfitting are also investigated. In addition, we analyse the effect of the quantity of training data on the accuracy of the predictions and explore the effect of data augmentation using synthesized data.",
"",
"We describe a new method of matching statistical models of appearance to images. A set of model parameters control modes of shape and gray-level variation learned from a training set. We construct an efficient iterative matching algorithm by learning the relationship between perturbations in the model parameters and the induced image errors.",
"This paper presents a highly efficient, very accurate regression approach for face alignment. Our approach has two novel components: a set of local binary features, and a locality principle for learning those features. The locality principle guides us to learn a set of highly discriminative local binary features for each facial landmark independently. The obtained local binary features are used to jointly learn a linear regression for the final output. Our approach achieves the state-of-the-art results when tested on the current most challenging benchmarks. Furthermore, because extracting and regressing local binary features is computationally very cheap, our system is much faster than previous methods. It achieves over 3,000 fps on a desktop or 300 fps on a mobile phone for locating a few dozens of landmarks."
]
} |
1903.11633 | 2922959542 | Landmark localization in images and videos is a classic problem solved in various ways. Nowadays, with deep networks prevailing throughout machine learning, there are revamped interests in pushing facial landmark detection technologies to handle more challenging data. Most efforts use network objectives based on L1 or L2 norms, which have several disadvantages. First of all, the locations of landmarks are determined from generated heatmaps (i.e., confidence maps) from which predicted landmark locations (i.e., the means) get penalized without accounting for the spread: a high scatter corresponds to low confidence and vice-versa. For this, we introduce a LaplaceKL objective that penalizes for a low confidence. Another issue is a dependency on labeled data, which are expensive to obtain and susceptible to error. To address both issues we propose an adversarial training framework that leverages unlabeled data to improve model performance. Our method claims state-of-the-art on all of the 300W benchmarks and ranks second-to-best on the Annotated Facial Landmarks in the Wild (AFLW) dataset. Furthermore, our model is robust with a reduced size: 1/8 the number of channels (i.e., 0.0398MB) is comparable to state-of-the-art in real-time on CPU. Thus, we show that our method is of high practical value to real-life application. | More recently, deep-learning-based approaches have prevailed in the community thanks to end-to-end learning and improved accuracy. Initial works mimicked the iterative nature of cascaded methods using recurrent convolutional neural networks @cite_39 @cite_41 @cite_30 @cite_19 . In addition, several methods for dense landmark localization @cite_8 @cite_36 and 3D face alignment @cite_17 @cite_5 have been proposed, all of which are fully supervised and thus require labels for each image. | {
"cite_N": [
"@cite_30",
"@cite_8",
"@cite_41",
"@cite_36",
"@cite_39",
"@cite_19",
"@cite_5",
"@cite_17"
],
"mid": [
"2593580652",
"2964145484",
"",
"1567532702",
"2964192142",
"2792922843",
"",
"2756148933"
],
"abstract": [
"Mainstream direction in face alignment is now dominated by cascaded regression methods. These methods start from an image with an initial shape and build a set of shape increments by computing features with respect to the current shape estimate. These shape increments move the initial shape to the desired location. Despite the advantages of the cascaded methods, they all share two major limitations: (i) shape increments are learned separately from each other in a cascaded manner, (ii) the use of standard generic computer vision features such as SIFT, HOG, does not allow these methods to learn problem-specific features. In this work, we propose a novel Recurrent Convolutional Face Alignment method that overcomes these limitations. We frame the standard cascaded alignment problem as a recurrent process and learn all shape increments jointly, by using a recurrent neural network with the gated recurrent unit. Importantly, by combining a convolutional neural network with a recurrent one we alleviate hand-crafted features, widely adopted in the literature and thus allowing the model to learn task-specific features. Moreover, both the convolutional and the recurrent neural networks are learned jointly. Experimental evaluation shows that the proposed method has better performance than the state-of-the-art methods, and further supports the importance of learning a single end-to-end model for face alignment.",
"In this paper we propose to learn a mapping from image pixels into a dense template grid through a fully convolutional network. We formulate this task as a regression problem and train our network by leveraging upon manually annotated facial landmarks in-the-wild. We use such landmarks to establish a dense correspondence field between a three-dimensional object template and the input image, which then serves as the ground-truth for training our regression system. We show that we can combine ideas from semantic segmentation with regression networks, yielding a highly-accurate quantized regression architecture. Our system, called DenseReg, allows us to estimate dense image-to-template correspondences in a fully convolutional manner. As such our network can provide useful correspondence information as a stand-alone system, while when used as an initialization for Statistical Deformable Models we obtain landmark localization results that largely outperform the current state-of-the-art on the challenging 300W benchmark. We thoroughly evaluate our method on a host of facial analysis tasks, and demonstrate its use for other correspondence estimation tasks, such as the human body and the human ear. DenseReg code is made available at http://alpguler.com/DenseReg.html along with supplementary materials.",
"",
"To enable real-time, person-independent 3D registration from 2D video, we developed a 3D cascade regression approach in which facial landmarks remain invariant across pose over a range of approximately 60 degrees. From a single 2D image of a person's face, a dense 3D shape is registered in real time for each frame. The algorithm utilizes a fast cascade regression framework trained on high-resolution 3D face-scans of posed and spontaneous emotion expression. The algorithm first estimates the location of a dense set of markers and their visibility, then reconstructs face shapes by fitting a part-based 3D model. Because no assumptions are required about illumination or surface properties, the method can be applied to a wide range of imaging conditions that include 2D video and uncalibrated multi-view video. The method has been validated in a battery of experiments that evaluate its precision of 3D reconstruction and extension to multi-view reconstruction. Experimental findings strongly support the validity of real-time, 3D registration and reconstruction from 2D video. The software is available online at http://zface.org.",
"We propose a novel method for real-time face alignment in videos based on a recurrent encoder–decoder network model. Our proposed model predicts 2D facial point heat maps regularized by both detection and regression loss, while uniquely exploiting recurrent learning at both spatial and temporal dimensions. At the spatial level, we add a feedback loop connection between the combined output response map and the input, in order to enable iterative coarse-to-fine face alignment using a single network model, instead of relying on traditional cascaded model ensembles. At the temporal level, we first decouple the features in the bottleneck of the network into temporal-variant factors, such as pose and expression, and temporal-invariant factors, such as identity information. Temporal recurrent learning is then applied to the decoupled temporal-variant features. We show that such feature disentangling yields better generalization and significantly more accurate results at test time. We perform a comprehensive experimental analysis, showing the importance of each component of our proposed model, as well as superior results over the state of the art and several variations of our method in standard datasets.",
"The mainstream direction in face alignment is now dominated by cascaded regression methods. These methods start from an image with an initial shape and build a set of shape increments based on features with respect to the current estimated shape. These shape increments move the initial shape to the desired location. Despite the advantages of the cascaded methods, they all share two major limitations: (i) shape increments are learned independently from each other in a cascaded manner, (ii) the use of standard generic computer vision features such as SIFT, HOG, does not allow these methods to learn problem-specific features. In this work, we propose a novel Recurrent Convolutional Shape Regression (RCSR) method that overcomes these limitations. We formulate the standard cascaded alignment problem as a recurrent process and learn all shape increments jointly, by using a recurrent neural network with a gated recurrent unit. Importantly, by combining a convolutional neural network with a recurrent one we avoid hand-crafted features, widely adopted in the literature and thus we allow the model to learn task-specific features. Besides, we employ the convolutional gated recurrent unit which takes as input the feature tensors instead of flattened feature vectors. Therefore, the spatial structure of the features can be better preserved in the memory of the recurrent neural network. Moreover, both the convolutional and the recurrent neural networks are learned jointly. Experimental evaluation shows that the proposed method has better performance than the state-of-the-art methods, and further supports the importance of learning a single end-to-end model for face alignment.",
"",
"Most approaches to face alignment treat the face as a 2D object, which fails to represent depth variation and is vulnerable to loss of shape consistency when the face rotates along a 3D axis. Because faces commonly rotate three dimensionally, 2D approaches are vulnerable to significant error. 3D morphable models, employed as a second step in 2D+3D approaches are robust to face rotation but are computationally too expensive for many applications, yet their ability to maintain viewpoint consistency is unknown. We present an alternative approach that estimates 3D face landmarks in a single face image. The method uses a regression forest-based algorithm that adds a third dimension to the common cascade pipeline. 3D face landmarks are estimated directly, which avoids fitting a 3D morphable model. The proposed method achieves viewpoint consistency in a computationally efficient manner that is robust to 3D face rotation. To train and test our approach, we introduce the Multi-PIE Viewpoint Consistent database. In empirical tests, the proposed method achieved simple yet effective head pose estimation and viewpoint consistency on multiple measures relative to alternative approaches."
]
} |
1903.11633 | 2922959542 | Landmark localization in images and videos is a classic problem solved in various ways. Nowadays, with deep networks prevailing throughout machine learning, there are revamped interests in pushing facial landmark detection technologies to handle more challenging data. Most efforts use network objectives based on L1 or L2 norms, which have several disadvantages. First of all, the locations of landmarks are determined from generated heatmaps (i.e., confidence maps) from which predicted landmark locations (i.e., the means) get penalized without accounting for the spread: a high scatter corresponds to low confidence and vice-versa. For this, we introduce a LaplaceKL objective that penalizes for a low confidence. Another issue is a dependency on labeled data, which are expensive to obtain and susceptible to error. To address both issues we propose an adversarial training framework that leverages unlabeled data to improve model performance. Our method claims state-of-the-art on all of the 300W benchmarks and ranks second-to-best on the Annotated Facial Landmarks in the Wild (AFLW) dataset. Furthermore, our model is robust with a reduced size: 1/8 the number of channels (i.e., 0.0398MB) is comparable to state-of-the-art in real-time on CPU. Thus, we show that our method is of high practical value to real-life application. | Nowadays, there is increasing interest in semi-supervised methods for landmark localization. Recent work used a sequential multitasking method capable of injecting two types of labels into the training pipeline: one type consisting of annotated landmarks and the other of facial expressions (or hand gestures) @cite_45 . The authors argued that the latter label type is more easily obtainable, and showed the benefits of using both types of annotations by claiming state-of-the-art on several tasks. Additionally, they explored other semi-supervised techniques (e.g., an equivariance loss). 
In @cite_44 , a supervision-by-registration method was proposed that makes extensive use of unlabeled videos to train a landmark detector. The key assumption is that landmark detections in neighboring frames should be consistent with the optical flow computed between those frames. This approach yields a more stable detector on videos, as well as improved accuracy on public benchmarks. | {
"cite_N": [
"@cite_44",
"@cite_45"
],
"mid": [
"2798730128",
"2962887041"
],
"abstract": [
"In this paper, we present supervision-by-registration, an unsupervised approach to improve the precision of facial landmark detectors on both images and video. Our key observation is that the detections of the same landmark in adjacent frames should be coherent with registration, i.e., optical flow. Interestingly, coherency of optical flow is a source of supervision that does not require manual labeling, and can be leveraged during detector training. For example, we can enforce in the training loss function that a detected landmark at framet-1 followed by optical flow tracking from framet-1 to framet should coincide with the location of the detection at framet. Essentially, supervision-by-registration augments the training loss function with a registration loss, thus training the detector to have output that is not only close to the annotations in labeled images, but also consistent with registration on large amounts of unlabeled videos. End-to-end training with the registration loss is made possible by a differentiable Lucas-Kanade operation, which computes optical flow registration in the forward pass, and back-propagates gradients that encourage temporal coherency in the detector. The output of our method is a more precise image-based facial landmark detector, which can be applied to single images or video. With supervision-by-registration, we demonstrate (1) improvements in facial landmark detection on both images (300W, ALFW) and video (300VW, Youtube-Celebrities), and (2) significant reduction of jittering in video detections.",
"We present two techniques to improve landmark localization in images from partially annotated datasets. Our primary goal is to leverage the common situation where precise landmark locations are only provided for a small data subset, but where class labels for classification or regression tasks related to the landmarks are more abundantly available. First, we propose the framework of sequential multitasking and explore it here through an architecture for landmark localization where training with class labels acts as an auxiliary signal to guide the landmark localization on unlabeled data. A key aspect of our approach is that errors can be backpropagated through a complete landmark localization model. Second, we propose and explore an unsupervised learning technique for landmark localization based on having a model predict equivariant landmarks with respect to transformations applied to the image. We show that these techniques, improve landmark prediction considerably and can learn effective detectors even when only a small fraction of the dataset has landmark labels. We present results on two toy datasets and four real datasets, with hands and faces, and report new state-of-the-art on two datasets in the wild, e.g. with only 5 of labeled images we outperform previous state-of-the-art trained on the AFLW dataset."
]
} |
1903.11633 | 2922959542 | Landmark localization in images and videos is a classic problem solved in various ways. Nowadays, with deep networks prevailing throughout machine learning, there are revamped interests in pushing facial landmark detection technologies to handle more challenging data. Most efforts use network objectives based on L1 or L2 norms, which have several disadvantages. First of all, the locations of landmarks are determined from generated heatmaps (i.e., confidence maps) from which predicted landmark locations (i.e., the means) get penalized without accounting for the spread: a high scatter corresponds to low confidence and vice-versa. For this, we introduce a LaplaceKL objective that penalizes for a low confidence. Another issue is a dependency on labeled data, which are expensive to obtain and susceptible to error. To address both issues we propose an adversarial training framework that leverages unlabeled data to improve model performance. Our method claims state-of-the-art on all of the 300W benchmarks and ranks second-to-best on the Annotated Facial Landmarks in the Wild (AFLW) dataset. Furthermore, our model is robust with a reduced size: 1/8 the number of channels (i.e., 0.0398MB) is comparable to state-of-the-art in real-time on CPU. Thus, we show that our method is of high practical value to real-life application. | Landmark localization datasets and benchmarks have evolved significantly as well. The 68-point mark-up scheme of the MultiPIE dataset @cite_3 has been widely adopted. Despite the initial excitement for MultiPIE throughout the landmark localization community @cite_42 , it is now considered one of the easier datasets, as it was captured entirely in a controlled lab setting. A more challenging dataset, AFLW @cite_16 , was then released with up to 21 facial landmarks per face ("occluded or invisible" landmarks were not marked). 
Finally came the 300W dataset, made up of face images from the internet, labeled with the same 68-point mark-up scheme as MultiPIE, and promoted as a data challenge @cite_31 . Currently, 300W is among the most widely used benchmarks for facial landmark localization. In addition to 2D datasets, the community has created several datasets annotated with 3D keypoints @cite_23 . | {
"cite_N": [
"@cite_42",
"@cite_3",
"@cite_23",
"@cite_31",
"@cite_16"
],
"mid": [
"2047508432",
"",
"2605105738",
"2058961190",
"2012885984"
],
"abstract": [
"We present a unified model for face detection, pose estimation, and landmark estimation in real-world, cluttered images. Our model is based on a mixtures of trees with a shared pool of parts; we model every facial landmark as a part and use global mixtures to capture topological changes due to viewpoint. We show that tree-structured models are surprisingly effective at capturing global elastic deformation, while being easy to optimize unlike dense graph structures. We present extensive results on standard face benchmarks, as well as a new “in the wild” annotated dataset, that suggests our system advances the state-of-the-art, sometimes considerably, for all three tasks. Though our model is modestly trained with hundreds of faces, it compares favorably to commercial systems trained with billions of examples (such as Google Picasa and face.com).",
"",
"This paper investigates how far a very deep neural network is from attaining close to saturating performance on existing 2D and 3D face alignment datasets. To this end, we make the following 5 contributions: (a) we construct, for the first time, a very strong baseline by combining a state-of-the-art architecture for landmark localization with a state-of-the-art residual block, train it on a very large yet synthetically expanded 2D facial landmark dataset and finally evaluate it on all other 2D facial landmark datasets. (b) We create a guided by 2D landmarks network which converts 2D landmark annotations to 3D and unifies all existing datasets, leading to the creation of LS3D-W, the largest and most challenging 3D facial landmark dataset to date (~230,000 images). (c) Following that, we train a neural network for 3D face alignment and evaluate it on the newly introduced LS3D-W. (d) We further look into the effect of all “traditional” factors affecting face alignment performance like large pose, initialization and resolution, and introduce a “new” one, namely the size of the network. (e) We show that both 2D and 3D face alignment networks achieve performance of remarkable accuracy which is probably close to saturating the datasets used. Training and testing code as well as the dataset can be downloaded from https://www.adrianbulat.com/face-alignment",
"Automatic facial point detection plays arguably the most important role in face analysis. Several methods have been proposed which reported their results on databases of both constrained and unconstrained conditions. Most of these databases provide annotations with different mark-ups and in some cases there are problems related to the accuracy of the fiducial points. The aforementioned issues as well as the lack of an evaluation protocol make it difficult to compare performance between different systems. In this paper, we present the 300 Faces in-the-Wild Challenge: The first facial landmark localization Challenge which is held in conjunction with the International Conference on Computer Vision 2013, Sydney, Australia. The main goal of this challenge is to compare the performance of different methods on a newly collected dataset using the same evaluation protocol and the same mark-up and hence to develop the first standardized benchmark for facial landmark localization.",
"Face alignment is a crucial step in face recognition tasks. Especially, using landmark localization for geometric face normalization has shown to be very effective, clearly improving the recognition results. However, no adequate databases exist that provide a sufficient number of annotated facial landmarks. The databases are either limited to frontal views, provide only a small number of annotated images or have been acquired under controlled conditions. Hence, we introduce a novel database overcoming these limitations: Annotated Facial Landmarks in the Wild (AFLW). AFLW provides a large-scale collection of images gathered from Flickr, exhibiting a large variety in face appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total 25,993 faces in 21,997 real-world images are annotated with up to 21 landmarks per image. Due to the comprehensive set of annotations AFLW is well suited to train and test algorithms for multi-view face detection, facial landmark localization and face pose estimation. Further, we offer a rich set of tools that ease the integration of other face databases and associated annotations into our joint framework."
]
} |
1903.11633 | 2922959542 | Landmark localization in images and videos is a classic problem solved in various ways. Nowadays, with deep networks prevailing throughout machine learning, there are revamped interests in pushing facial landmark detection technologies to handle more challenging data. Most efforts use network objectives based on L1 or L2 norms, which have several disadvantages. First of all, the locations of landmarks are determined from generated heatmaps (i.e., confidence maps) from which predicted landmark locations (i.e., the means) get penalized without accounting for the spread: a high scatter corresponds to low confidence and vice-versa. For this, we introduce a LaplaceKL objective that penalizes for a low confidence. Another issue is a dependency on labeled data, which are expensive to obtain and susceptible to error. To address both issues we propose an adversarial training framework that leverages unlabeled data to improve model performance. Our method claims state-of-the-art on all of the 300W benchmarks and ranks second-to-best on the Annotated Facial Landmarks in the Wild (AFLW) dataset. Furthermore, our model is robust with a reduced size: 1/8 the number of channels (i.e., 0.0398MB) is comparable to state-of-the-art in real-time on CPU. Thus, we show that our method is of high practical value to real-life application. | Generative adversarial networks (GANs) were recently introduced @cite_15 and quickly became popular in both research and practice. GANs have been used to generate images @cite_33 and videos @cite_12 @cite_1 , and to perform image manipulation @cite_7 , text-to-image @cite_22 , image-to-image @cite_34 , and video-to-video @cite_40 translation, as well as re-targeting @cite_9 . | {
"cite_N": [
"@cite_33",
"@cite_7",
"@cite_22",
"@cite_9",
"@cite_1",
"@cite_40",
"@cite_15",
"@cite_34",
"@cite_12"
],
"mid": [
"2173520492",
"2974067445",
"2964024144",
"2963168844",
"2737548191",
"2886748926",
"",
"2962793481",
"2964245526"
],
"abstract": [
"In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.",
"",
"Synthesizing high-quality images from text descriptions is a challenging problem in computer vision and has many practical applications. Samples generated by existing text-to-image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) to generate 256×256 photo-realistic images conditioned on text descriptions. We decompose the hard problem into more manageable sub-problems through a sketch-refinement process. The Stage-I GAN sketches the primitive shape and colors of the object based on the given text description, yielding Stage-I low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photo-realistic details. It is able to rectify defects in Stage-I results and add compelling details with the refinement process. To improve the diversity of the synthesized images and stabilize the training of the conditional-GAN, we introduce a novel Conditioning Augmentation technique that encourages smoothness in the latent conditioning manifold. Extensive experiments and comparisons with state-of-the-arts on benchmark datasets demonstrate that the proposed method achieves significant improvements on generating photo-realistic images conditioned on text descriptions.",
"",
"Visual signals in a video can be divided into content and motion. While content specifies which objects are in the video, motion describes their dynamics. Based on this prior, we propose the Motion and Content decomposed Generative Adversarial Network (MoCoGAN) framework for video generation. The proposed framework generates a video by mapping a sequence of random vectors to a sequence of video frames. Each random vector consists of a content part and a motion part. While the content part is kept fixed, the motion part is realized as a stochastic process. To learn motion and content decomposition in an unsupervised manner, we introduce a novel adversarial learning scheme utilizing both image and video discriminators. Extensive experimental results on several challenging datasets with qualitative and quantitative comparison to the state-of-the-art approaches, verify effectiveness of the proposed framework. In addition, we show that MoCoGAN allows one to generate videos with same content but different motion as well as videos with different content and same motion.",
"We study the problem of video-to-video synthesis, whose goal is to learn a mapping function from an input source video (e.g., a sequence of semantic segmentation masks) to an output photorealistic video that precisely depicts the content of the source video. While its image counterpart, the image-to-image synthesis problem, is a popular topic, the video-to-video synthesis problem is less explored in the literature. Without understanding temporal dynamics, directly applying existing image synthesis approaches to an input video often results in temporally incoherent videos of low visual quality. In this paper, we propose a novel video-to-video synthesis approach under the generative adversarial learning framework. Through carefully-designed generator and discriminator architectures, coupled with a spatio-temporal adversarial objective, we achieve high-resolution, photorealistic, temporally coherent video results on a diverse set of input formats including segmentation masks, sketches, and poses. Experiments on multiple benchmarks show the advantage of our method compared to strong baselines. In particular, our model is capable of synthesizing 2K resolution videos of street scenes up to 30 seconds long, which significantly advances the state-of-the-art of video synthesis. Finally, we apply our approach to future video prediction, outperforming several state-of-the-art competing systems.",
"",
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.",
"In this paper, we propose a generative model, Temporal Generative Adversarial Nets (TGAN), which can learn a semantic representation of unlabeled videos, and is capable of generating videos. Unlike existing Generative Adversarial Nets (GAN)-based methods that generate videos with a single generator consisting of 3D deconvolutional layers, our model exploits two different types of generators: a temporal generator and an image generator. The temporal generator takes a single latent variable as input and outputs a set of latent variables, each of which corresponds to an image frame in a video. The image generator transforms a set of such latent variables into a video. To deal with instability in training of GAN with such advanced networks, we adopt a recently proposed model, Wasserstein GAN, and propose a novel method to train it stably in an end-to-end manner. The experimental results demonstrate the effectiveness of our methods."
]
} |
1903.11633 | 2922959542 | Landmark localization in images and videos is a classic problem solved in various ways. Nowadays, with deep networks prevailing throughout machine learning, there are revamped interests in pushing facial landmark detection technologies to handle more challenging data. Most efforts use network objectives based on L1 or L2 norms, which have several disadvantages. First of all, the locations of landmarks are determined from generated heatmaps (i.e., confidence maps) from which predicted landmark locations (i.e., the means) get penalized without accounting for the spread: a high scatter corresponds to low confidence and vice-versa. For this, we introduce a LaplaceKL objective that penalizes for a low confidence. Another issue is a dependency on labeled data, which are expensive to obtain and susceptible to error. To address both issues we propose an adversarial training framework that leverages unlabeled data to improve model performance. Our method claims state-of-the-art on all of the 300W benchmarks and ranks second-to-best on the Annotated Facial Landmarks in the Wild (AFLW) dataset. Furthermore, our model is robust with a reduced size: 1/8 the number of channels (i.e., 0.0398MB) is comparable to state-of-the-art in real-time on CPU. Thus, we show that our method is of high practical value to real-life application. | An interesting feature of GANs is the ability to transfer images and videos across different domains. Thus, GANs were adopted in various semi-supervised and domain-adaptation tasks. Many have leveraged synthetic data to improve model performance on real data. For example, a GAN transferred images of human eyes from the real domain to bootstrap training data @cite_0 . Other researchers used a neural network to make synthetically generated images of outdoor scenes more photo-realistic, which also were used to improve performance for image segmentation @cite_43 .
Sometimes, labeling images captured in a controlled setting is more manageable (versus an uncontrolled setting). For instance, 2D body pose annotations were available for images in the wild, while 3D annotations were mostly for images captured in a lab setting. Therefore, images with 3D annotations were used in the adversarial training to predict 3D human body poses in images in-the-wild @cite_10 . | {
"cite_N": [
"@cite_0",
"@cite_43",
"@cite_10"
],
"mid": [
"2963709863",
"2767657961",
"2795089319"
],
"abstract": [
"With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator's output using unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training: (i) a self-regularization term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data.",
"Domain adaptation is critical for success in new, unseen environments. Adversarial adaptation models applied in feature spaces discover domain invariant representations, but are difficult to visualize and sometimes fail to capture pixel-level and low-level domain shifts. Recent work has shown that generative adversarial networks combined with cycle-consistency constraints are surprisingly effective at mapping images between domains, even without the use of aligned image pairs. We propose a novel discriminatively-trained Cycle-Consistent Adversarial Domain Adaptation model. CyCADA adapts representations at both the pixel-level and feature-level, enforces cycle-consistency while leveraging a task loss, and does not require aligned pairs. Our model can be applied in a variety of visual recognition and prediction settings. We show new state-of-the-art results across multiple adaptation tasks, including digit classification and semantic segmentation of road scenes demonstrating transfer from synthetic to real world domains.",
"Recently, remarkable advances have been achieved in 3D human pose estimation from monocular images because of the powerful Deep Convolutional Neural Networks (DCNNs). Despite their success on large-scale datasets collected in the constrained lab environment, it is difficult to obtain the 3D pose annotations for in-the-wild images. Therefore, 3D human pose estimation in the wild is still a challenge. In this paper, we propose an adversarial learning framework, which distills the 3D human pose structures learned from the fully annotated dataset to in-the-wild images with only 2D pose annotations. Instead of defining hard-coded rules to constrain the pose estimation results, we design a novel multi-source discriminator to distinguish the predicted 3D poses from the ground-truth, which helps to enforce the pose estimator to generate anthropometrically valid poses even with images in the wild. We also observe that a carefully designed information source for the discriminator is essential to boost the performance. Thus, we design a geometric descriptor, which computes the pairwise relative locations and distances between body joints, as a new information source for the discriminator. The efficacy of our adversarial learning framework with the new geometric descriptor has been demonstrated through extensive experiments on widely used public benchmarks. Our approach significantly improves the performance compared with previous state-of-the-art approaches."
]
} |
1903.11821 | 2923377306 | Single Image Super Resolution (SISR) is the task of producing a high resolution (HR) image from a given low-resolution (LR) image. It is a well researched problem with extensive commercial applications such as digital cameras, video compression, medical imaging and so on. Most super resolution works focus on the feature learning architecture, which can recover the texture details as close as possible. However, these works suffer from the following challenges: (1) The low-resolution (LR) training images are artificially synthesized using HR images with bicubic downsampling, which have much richer information than real demosaic-upscaled mobile images. The mismatch between training and inference mobile data heavily blocks the improvement of practical super resolution algorithms. (2) These methods cannot effectively handle the blind distortions during super resolution in practical applications. In this work, an end-to-end novel framework, including high-to-low network and low-to-high network, is proposed to solve the above problems with dual Generative Adversarial Networks (GAN). First, the above mismatch problems are well explored with the high-to-low network, where clear high-resolution image and the corresponding realistic low-resolution image pairs can be generated. Moreover, a large-scale General Mobile Super Resolution Dataset, GMSR, is proposed, which can be utilized for training or as a fair comparison benchmark for super resolution methods. Second, an effective low-to-high network (super resolution network) is proposed in the framework. Benefiting from the GMSR dataset and novel training strategies, the super resolution model can effectively handle detail recovery and denoising at the same time. | Single image super resolution (SISR) is an important topic and has been developed for a long time.
Early methods @cite_23 @cite_42 @cite_17 @cite_35 based on interpolation theory can be very fast, but they usually yield over-smooth results. Methods relying on neighbor embedding and sparse representation @cite_19 @cite_29 @cite_11 @cite_12 @cite_36 @cite_34 target learning the mapping between LR and HR images. Some example-based approaches used the self-similarity property of images to reduce the amount of training data needed @cite_31 @cite_14 @cite_32 , and increased the size of the limited internal dictionary @cite_0 . | {
"cite_N": [
"@cite_35",
"@cite_11",
"@cite_14",
"@cite_36",
"@cite_29",
"@cite_42",
"@cite_32",
"@cite_0",
"@cite_19",
"@cite_23",
"@cite_31",
"@cite_34",
"@cite_12",
"@cite_17"
],
"mid": [
"2035677848",
"",
"1976416062",
"",
"",
"",
"2067625321",
"1930824406",
"2118963448",
"",
"2534320940",
"",
"",
"2157190232"
],
"abstract": [
"Abstract A Fourier method of filtering digital data called Lanczos filtering is described. Its principal feature is the use of “sigma factors” which significantly reduce the amplitude of the Gibbs oscillation. A pair of graphs is developed that can be used to determine filter response quality given the number of weights and the value of the cutoff frequency, the only two inputs required by the method. Examples of response functions in one and two dimensions are given and comparisons are made with response functions from other filters. The simplicity of calculating the weights and the adequate response make Lanczos filtering an attractive filtering method.",
"",
"We propose a new high-quality and efficient single-image upscaling technique that extends existing example-based super-resolution frameworks. In our approach we do not rely on an external example database or use the whole input image as a source for example patches. Instead, we follow a local self-similarity assumption on natural images and extract patches from extremely localized regions in the input image. This allows us to reduce considerably the nearest-patch search time without compromising quality in most images. Tests, that we perform and report, show that the local self-similarity assumption holds better for small scaling factors where there are more example patches of greater relevance. We implement these small scalings using dedicated novel nondyadic filter banks, that we derive based on principles that model the upscaling process. Moreover, the new filters are nearly biorthogonal and hence produce high-resolution images that are highly consistent with the input image without solving implicit back-projection equations. The local and explicit nature of our algorithm makes it simple, efficient, and allows a trivial parallel implementation on a GPU. We demonstrate the new method ability to produce high-quality resolution enhancement, its application to video sequences with no algorithmic modification, and its efficiency to perform real-time enhancement of low-resolution video standard into recent high-definition formats.",
"",
"",
"",
"The neighbor-embedding (NE) algorithm for single-image super-resolution (SR) reconstruction assumes that the feature spaces of low-resolution (LR) and high-resolution (HR) patches are locally isometric. However, this is not true for SR because of one-to-many mappings between LR and HR patches. To overcome or at least to reduce the problem for NE-based SR reconstruction, we apply a joint learning technique to train two projection matrices simultaneously and to map the original LR and HR feature spaces onto a unified feature subspace. Subsequently, the k-nearest neighbor selection of the input LR image patches is conducted in the unified feature subspace to estimate the reconstruction weights. To handle a large number of samples, joint learning locally exploits a coupled constraint by linking the LR-HR counterparts together with the k-nearest grouping patch pairs. In order to refine further the initial SR estimate, we impose a global reconstruction constraint on the SR outcome based on the maximum a posteriori framework. Preliminary experiments suggest that the proposed algorithm outperforms NE-related baselines.",
"Self-similarity based super-resolution (SR) algorithms are able to produce visually pleasing results without extensive training on external databases. Such algorithms exploit the statistical prior that patches in a natural image tend to recur within and across scales of the same image. However, the internal dictionary obtained from the given image may not always be sufficiently expressive to cover the textural appearance variations in the scene. In this paper, we extend self-similarity based SR to overcome this drawback. We expand the internal patch search space by allowing geometric variations. We do so by explicitly localizing planes in the scene and using the detected perspective geometry to guide the patch search process. We also incorporate additional affine transformations to accommodate local shape variations. We propose a compositional model to simultaneously handle both types of transformations. We extensively evaluate the performance in both urban and natural scenes. Even without using any external training databases, we achieve significantly superior results on urban scenes, while maintaining comparable performance on natural scenes as other state-of-the-art SR algorithms.",
"In this paper, we propose a novel method for solving single-image super-resolution problems. Given a low-resolution image as input, we recover its high-resolution counterpart using a set of training examples. While this formulation resembles other learning-based methods for super-resolution, our method has been inspired by recent manifold learning methods, particularly locally linear embedding (LLE). Specifically, small image patches in the low- and high-resolution images form manifolds with similar local geometry in two distinct feature spaces. As in LLE, local geometry is characterized by how a feature vector corresponding to a patch can be reconstructed by its neighbors in the feature space. Besides using the training image pairs to estimate the high-resolution embedding, we also enforce local compatibility and smoothness constraints between patches in the target high-resolution image through overlapping. Experiments show that our method is very flexible and gives good empirical results.",
"",
"Methods for super-resolution can be broadly classified into two families of methods: (i) The classical multi-image super-resolution (combining images obtained at subpixel misalignments), and (ii) Example-Based super-resolution (learning correspondence between low and high resolution image patches from a database). In this paper we propose a unified framework for combining these two families of methods. We further show how this combined approach can be applied to obtain super resolution from as little as a single image (with no database or prior examples). Our approach is based on the observation that patches in a natural image tend to redundantly recur many times inside the image, both within the same scale, as well as across different scales. Recurrence of patches within the same image scale (at subpixel misalignments) gives rise to the classical super-resolution, whereas recurrence of patches across different scales of the same image gives rise to example-based super-resolution. Our approach attempts to recover at each pixel its best possible resolution increase based on its patch redundancy within and across scales.",
"",
"",
"Preserving edge structures is a challenge to image interpolation algorithms that reconstruct a high-resolution image from a low-resolution counterpart. We propose a new edge-guided nonlinear interpolation technique through directional filtering and data fusion. For a pixel to be interpolated, two observation sets are defined in two orthogonal directions, and each set produces an estimate of the pixel value. These directional estimates, modeled as different noisy measurements of the missing pixel are fused by the linear minimum mean square-error estimation (LMMSE) technique into a more robust estimate, using the statistics of the two observation sets. We also present a simplified version of the LMMSE-based interpolation algorithm to reduce computational cost without sacrificing much the interpolation performance. Experiments show that the new interpolation techniques can preserve edge sharpness and reduce ringing artifacts"
]
} |
1903.11980 | 2950757391 | The facility location problem is an NP-hard optimization problem. Therefore, approximation algorithms are often used to solve large instances. Such algorithms often perform much better than worst-case analysis suggests. Therefore, probabilistic analysis is a widely used tool to analyze such algorithms. Most research on probabilistic analysis of NP-hard optimization problems involving metric spaces, such as the facility location problem, has been focused on Euclidean instances, and also instances with independent (random) edge lengths, which are non-metric, have been researched. We would like to extend this knowledge to other, more general, metrics. We investigate the facility location problem using random shortest path metrics. We analyze some probabilistic properties for a simple greedy heuristic which gives a solution to the facility location problem: opening the @math cheapest facilities (with @math only depending on the facility opening costs). If the facility opening costs are such that @math is not too large, then we show that this heuristic is asymptotically optimal. On the other hand, for large values of @math , the analysis becomes more difficult, and we provide a closed-form expression as upper bound for the expected approximation ratio. In the special case where all facility opening costs are equal this closed-form expression reduces to @math or @math or even @math if the opening costs are sufficiently small. | Although a lot of studies have been conducted on random shortest path metrics, or first-passage percolation (e.g. @cite_5 @cite_14 @cite_10 ), systematic research of the behavior of (simple) heuristics and approximation algorithms for optimization problems on random shortest path metrics was initiated only recently @cite_2 . They provide some structural properties of random shortest path metrics, including the existence of a good clustering. 
These properties are then used for a probabilistic analysis of simple algorithms for several optimization problems, including the minimum-weight perfect matching problem and the @math -median problem. For the facility location problem, several sophisticated polynomial-time approximation algorithms exist, the best one currently having a worst-case approximation ratio of @math @cite_7 . A probabilistic analysis of the facility location problem using Euclidean distances was conducted in @cite_1 . The authors expected to show that some polynomial-time approximation algorithms would be asymptotically optimal under these circumstances, but found that this is not the case. On the other hand, they described a trivial heuristic which is asymptotically optimal in the Euclidean model. | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_1",
"@cite_2",
"@cite_5",
"@cite_10"
],
"mid": [
"2051213257",
"2177090493",
"2078422037",
"",
"2074567977",
"2031541804"
],
"abstract": [
"We consider the shortest paths between all pairs of nodes in a directed or undirected complete graph with edge lengths which are uniformly and independently distributed in [0, 1]. We show that the longest of these paths is bounded by c log n / n almost surely, where c is a constant and n is the number of nodes. Our bound is the best possible up to a constant. We apply this result to some well-known problems and obtain several algorithmic improvements over existing results. Our results hold with obvious modifications to random (as opposed to complete) graphs and to any distribution of weights whose density is positive and bounded from below at a neighborhood of zero. As a corollary of our proof we get a new result concerning the diameter of random graphs.",
"We present a 1.488-approximation algorithm for the metric uncapacitated facility location (UFL) problem. Previously, the best algorithm was due to Byrka (2007). Byrka proposed an algorithm parametrized by γ and used it with γ = 1.6774. By either running his algorithm or the algorithm proposed by Jain, Mahdian and Saberi (STOC'02), Byrka obtained an algorithm that gives expected approximation ratio 1.5. We show that if γ is randomly selected, the approximation ratio can be improved to 1.488. Our algorithm cuts the gap with the 1.463 approximability lower bound by almost 1/3.",
"A flame arrestor is disclosed which includes a conduit extending along inside of a drum arranged to contain a quantity of non-combustible liquid. This conduit is equipped with bubbler nozzles that discharge a combustible gas into the liquid in the form of separate, discrete bubbles. The gas is drawn from the drum through outlets above the liquid level and deflectors are positioned to deflect and distribute a flame front entering the drum through any of the nozzles.",
"",
"Abstract We derive an exact summation formula and a closed-form approximation for the expected length of a shortest path for a complete graph where the arc lengths are independent and exponentially distributed random variables. Experimental data validates both results. The property of completeness allows us to exploit certain symmetries to derive these results, which would otherwise require computing an exponential number of recursive equations. We have also found that this formula is a close approximation for the expected length of a shortest path in complete graphs with uniformly distributed arc lengths.",
"Consider the minimal weights of paths between two points in a complete graph Kn with random weights on the edges, the weights being, for instance, uniformly distributed. It is shown that, asymptotically, this is log n / n for two given points, that the maximum if one point is fixed and the other varies is 2 log n / n, and that the maximum over all pairs of points is 3 log n / n. Some further related results are given as well, including results on asymptotic distributions and moments, and on the number of edges in the minimal weight paths."
]
} |
1903.11750 | 2923631003 | Navigation underwater traditionally is done by keeping a safe distance from obstacles, resulting in "fly-overs" of the area of interest. An Autonomous Underwater Vehicle (AUV) moving through a cluttered space, such as a shipwreck, or a decorated cave is an extremely challenging problem and has not been addressed in the past. This paper proposed a novel navigation framework utilizing an enhanced version of Trajopt for fast 3D path-optimization with near-optimal guarantees for AUVs. A sampling based correction procedure ensures that the planning is not limited by local minima, enabling navigation through narrow spaces. The method is shown, both on simulation and in-pool experiments, to be fast enough to enable real-time autonomous navigation for an Aqua2 AUV with strong safety guarantees. | The underwater domain introduces additional complexities to path planning. Because underwater environments often are highly dynamic, generating safe paths becomes more difficult as robots must account for their own drift. In many cases, underwater robots are also affected by currents in the environment. Several methods have been explored to correct the deviations caused by currents, including the FM* planning system @cite_44 . Other methods rely on observations about the structure of the terrain @cite_38 and satellite imagery @cite_18 to estimate the effects of currents. Genetic algorithms @cite_35 and mixed integer linear programming @cite_37 have also been used to support the computation of paths in dynamic underwater environments. | {
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_37",
"@cite_35",
"@cite_44"
],
"mid": [
"",
"102256072",
"2106086387",
"2112724282",
"2146500120"
],
"abstract": [
"",
"Autonomous Underwater Vehicles (AUVs) usually operate in ocean environments characterized by complex spatial variability which can jeopardize their missions. To avoid this, planning safety routes with minimum energy cost is of primary importance. This work revisits the benefits, in terms of travelling time, of path planning in marine environments showing spatial variability. By means of a path planner presented in a previous paper, this work focuses on the application to a real environment of such techniques. Extensive computations have been carried out to calculate optimal paths on realistic ocean environments, based on autonomous underwater glider properties as the mobile platform. Unlike previous works, the more realistic and applied case of an autonomous underwater glider surveying the Western Mediterranean Sea is considered. Results indicate that substantial energy savings of planned paths compared to straight line trajectories are obtained when the current intensity and the vehicle speed are comparable. Conversely, the straight line path between starting and ending points can be considered an optimum path when the current speed does not exceed half of the vehicle velocity. In both situations, benefits of path planning seem dependent also on the spatial structure of the current field.",
"The goal of adaptive sampling in the ocean is to predict the types and locations of additional ocean measurements that would be most useful to collect. Quantitatively, what is most useful is defined by an objective function and the goal is then to optimize this objective under the constraints of the available observing network. Examples of objectives are better oceanic understanding, to improve forecast quality, or to sample regions of high interest. This work provides a new path-planning scheme for the adaptive sampling problem. We define the path-planning problem in terms of an optimization framework and propose a method based on mixed integer linear programming (MILP). The mathematical goal is to find the vehicle path that maximizes the line integral of the uncertainty of field estimates along this path. Sampling this path can improve the accuracy of the field estimates the most. While achieving this objective, several constraints must be satisfied and are implemented. They relate to vehicle motion, intervehicle coordination, communication, collision avoidance, etc. The MILP formulation is quite powerful to handle different problem constraints and flexible enough to allow easy extensions of the problem. The formulation covers single- and multiple-vehicle cases as well as single- and multiple-day formulations. The need for a multiple-day formulation arises when the ocean sampling mission is optimized for several days ahead. We first introduce the details of the formulation, then elaborate on the objective function and constraints, and finally, present a varied set of examples to illustrate the applicability of the proposed method.",
"This paper proposes a genetic algorithm (GA) for path planning of an autonomous underwater vehicle in an ocean environment characterized by strong currents and enhanced space-time variability. The goal is to find a safe path that takes the vehicle from its starting location to a mission-specified destination, minimizing the energy cost. The GA includes novel genetic operators that ensure the convergence to the global minimum even in cases where the structure (in space and time) of the current field implies the existence of different local minima. The performance of these operators is discussed. The proposed algorithm is suitable for situations in which the vehicle has to operate energy-exhaustive missions.",
"Efficient path-planning algorithms are a crucial issue for modern autonomous underwater vehicles. Classical path-planning algorithms in artificial intelligence are not designed to deal with wide continuous environments prone to currents. We present a novel Fast Marching (FM)-based approach to address the following issues. First, we develop an algorithm we call FM* to efficiently extract a 2-D continuous path from a discrete representation of the environment. Second, we take underwater currents into account thanks to an anisotropic extension of the original FM algorithm. Third, the vehicle turning radius is introduced as a constraint on the optimal path curvature for both isotropic and anisotropic media. Finally, a multiresolution method is introduced to speed up the overall path-planning process"
]
} |
1903.11750 | 2923631003 | Navigation underwater traditionally is done by keeping a safe distance from obstacles, resulting in "fly-overs" of the area of interest. An Autonomous Underwater Vehicle (AUV) moving through a cluttered space, such as a shipwreck, or a decorated cave is an extremely challenging problem and has not been addressed in the past. This paper proposed a novel navigation framework utilizing an enhanced version of Trajopt for fast 3D path-optimization with near-optimal guarantees for AUVs. A sampling based correction procedure ensures that the planning is not limited by local minima, enabling navigation through narrow spaces. The method is shown, both on simulation and in-pool experiments, to be fast enough to enable real-time autonomous navigation for an Aqua2 AUV with strong safety guarantees. | Not only is it important for AUVs to be able to plan in a known environment, but it is also often necessary for AUVs to navigate in an environment without global knowledge of the environment. In such cases, obstacles are observed, often by stereo vision as has been done on aerial vehicles @cite_1 . Exploration of an unknown environment by aerial vehicles has been represented using a 3D occupancy grid using probabilistic roadmaps (PRM) and the D* Lite algorithm for planning as in @cite_47 . Although underwater and aerial domains provide different challenges, both require path planning in 3D. For an AUV such as Aqua2 whose movements do not correlate exactly with control inputs, planning becomes even more difficult. Other AUVs have also been used for path planning, such as RAIS @cite_32 and DeepC @cite_31 . Another AUV, REMUS @cite_0 , used obstacle avoidance specifically for exploration of shallow waters. | {
"cite_N": [
"@cite_47",
"@cite_1",
"@cite_32",
"@cite_0",
"@cite_31"
],
"mid": [
"2154444440",
"1415278901",
"",
"1592265573",
"2044413640"
],
"abstract": [
"We present a synthesis of techniques for rotorcraft UAV navigation through unknown environments which may contain obstacles. D* Lite and probabilistic roadmaps are combined for path planning, together with stereo vision for obstacle detection and dynamic path updating. A 3D occupancy map is used to represent the environment, and is updated online using stereo data. The target application is autonomous helicopter-based structure inspections, which require the UAV to fly safely close to the structures it is inspecting. Results are presented from simulation and with real flight hardware mounted onboard a cable array robot, demonstrating successful navigation through unknown environments containing obstacles.",
"Abstract The main goal of this research effort is to find a flyable collision-free path for an unmanned aerial vehicle (UAV) in a dynamic environment. Given that the UAV path planning needs to adapt in near real-time to the dynamic nature of the operational scenario, and to react rapidly to updates in the situational awareness, a modified artificial potential field (MAPF) approach is utilized to provide collision avoidance in view of pop-up threats and a random set of moving obstacles. To ensure a practically reachable trajectory, this paper proposes a constraint reference frame to develop MAPF so that the decomposed forces from MAPF can be matched with the physical constraints of the UAV for online adjustment. Simulations and experimental results provide promising validation in terms of the efficiency and scalability of the proposed approach.",
"",
"Abstract Future Naval operations necessitate the incorporation of autonomous underwater vehicles into a collaborative network. In future complex missions, a forward look capability will also be required to map and avoid obstacles such as sunken ships. This work examines obstacle avoidance behaviors using a hypothetical forward-looking sonar for the autonomous underwater vehicle REMUS. Hydrodynamic coefficients are used to develop steering equations that model REMUS through a track of specified points similar to a real-world mission track. A two-dimensional forward-looking sonar model with a 120° horizontal scan and a 110 meter radial range is modeled for obstacle detection. Sonar mappings from geographic range-bearing coordinates are developed for implementation in MATLAB simulations. The product of bearing and range weighting functions form the gain factor for a dynamic obstacle avoidance behavior. The overall vehicle heading error incorporates this obstacle avoidance term to develop a path around detected objects. REMUS is a highly responsive vehicle in the model and is capable of avoiding multiple objects in proximity along its track path.",
"Abstract This paper presents a reactive local level of an Obstacle Avoidance System for an Autonomous Underwater Vehicle (AUV). The specific requirements of the underwater world, the computational capacity and the sensors of the vehicle as well as its manoeuvrability were considered by the choice and the development of strategies used. Such requirements include the sea current information, the consideration of moving objects and the constrained view of sonar."
]
} |
1903.11750 | 2923631003 | Navigation underwater is traditionally done by keeping a safe distance from obstacles, resulting in "fly-overs" of the area of interest. Moving an Autonomous Underwater Vehicle (AUV) through a cluttered space, such as a shipwreck or a decorated cave, is an extremely challenging problem that has not been addressed in the past. This paper proposes a novel navigation framework utilizing an enhanced version of Trajopt for fast 3D path-optimization with near-optimal guarantees for AUVs. A sampling-based correction procedure ensures that the planning is not limited by local minima, enabling navigation through narrow spaces. The method is shown, both in simulation and in pool experiments, to be fast enough to enable real-time autonomous navigation for an Aqua2 AUV with strong safety guarantees. | Enabling Aqua2 AUVs to perform complex tasks has been attempted a few times. In the beginning, basic patterns were used; later, complex swimming gaits were developed to perform maneuvers such as swimming on the side, moving in a corkscrew motion, or performing a barrel roll @cite_2 . Visual tags placed on structures were used to enable the AUV to navigate @cite_16 , while a learned reactive controller had the vehicle maintain a safe distance while moving over a coral reef @cite_24 @cite_39 . | {
"cite_N": [
"@cite_24",
"@cite_16",
"@cite_39",
"@cite_2"
],
"mid": [
"2908986427",
"2045701406",
"2909695427",
"1511334422"
],
"abstract": [
"We present a GPU-based integrated robotic platform that enables collision avoidance, navigation, and image understanding on a single underwater vehicle. The platform enables observational tasks such as coral reef health assessment by enabling simultaneous operation of multiple image analysis tasks while navigating in close proximity to obstacles. The integration of a GPU allows us to leverage deep neural networks for collision avoidance and automated object detection and classification while a general purpose CPU processes images to perform visual Simultaneous Localization and Mapping (SLAM). In this paper, we describe the system architecture and summarize experimental results for coral detection and collision-free navigation.",
"Inspection and exploration of complex underwater structures requires the development of agile and easy to program platforms. In this paper, we describe a system that enables the deployment of an autonomous underwater vehicle in 3D environments proximal to the ocean bottom. Unlike many previous approaches, our solution: uses oscillating hydrofoil propulsion; allows for stable control of the robot’s motion and sensor directions; allows human operators to specify detailed trajectories in a natural fashion; and has been successfully demonstrated as a holistic system in the open ocean near both coral reefs and a sunken cargo ship. A key component of our system is the 3D control of a hexapod swimming robot, which can move the vehicle through agile sequences of orientations despite challenging marine conditions. We present two methods to easily generate robot trajectories appropriate for deployments in close proximity to challenging contours of the sea floor. Both offline recording of trajectories using augmented reality and online placement of fiducial tags in the marine environment are shown to have desirable properties, with complementary strengths and weaknesses. Finally, qualitative and quantitative results of the 3D control system are presented.",
"We address the problem of learning vision-based, collision-avoiding, and target-selecting controllers in 3D, specifically in underwater environments densely populated with coral reefs. Using a highly maneuverable, dynamic, six-legged (or flippered) vehicle to swim underwater, we exploit real time visual feedback to make close-range navigation decisions that would be hard to achieve with other sensors. Our approach uses computer vision as the sole mechanism for both collision avoidance and visual target selection. In particular, we seek to swim close to the reef to make observations while avoiding both collisions and barren, coral-deprived regions. To carry out path selection while avoiding collisions, we use monocular image data processed in real time. The proposed system uses a convolutional neural network that takes an image from a forward-facing camera as input and predicts unscaled and relative path changes. The network is trained to encode our desired obstacle-avoidance and reef-exploration objectives via supervised learning from human-labeled data. The predictions from the network are transformed into absolute path changes via a combination of a temporally-smoothed proportional controller for heading targets and a low-level motor controller. This system enables safe and autonomous coral reef navigation in underwater environments. We validate our approach using an untethered and fully autonomous robot swimming through coral reef in the open ocean. Our robot successfully traverses 1000 m of the ocean floor collision-free while collecting close-up footage of coral reefs.",
"We present an end-to-end framework for realizing fully automated gait learning for a complex underwater legged robot. Using this framework, we demonstrate that a hexapod flipper-propelled robot can learn task-specific control policies purely from experience data. Our method couples a state-of-the-art policy search technique with a family of periodic low-level controls that are well suited for underwater propulsion. We demonstrate the practical efficacy of tabula rasa learning, that is, learning without the use of any prior knowledge, of policies for a six-legged swimmer to carry out a variety of acrobatic maneuvers in three dimensional space. We also demonstrate informed learning that relies on simulated experience from a realistic simulator. In numerous cases, novel emergent gait behaviors have arisen from learning, such as the use of one stationary flipper to create drag while another oscillates to create thrust. Similar effective results have been demonstrated in under-actuated configurations, where as few as two flippers are used to maneuver the robot to a desired pose, or through an acrobatic motion such as a corkscrew. The success of our learning framework is assessed both in simulation and in the field using an underwater swimming robot."
]
} |
1903.11260 | 2922706279 | Small data challenges have emerged in many learning problems, since the success of deep neural networks often relies on the availability of a huge amount of labeled data that is expensive to collect. To address this, many efforts have been made on training complex models with small data in an unsupervised and semi-supervised fashion. In this paper, we will review the recent progress on these two major categories of methods. A wide spectrum of small data models will be categorized in a big picture, where we will show how they interplay with each other to motivate explorations of new ideas. We will review the criteria of learning the transformation equivariant, disentangled, self-supervised and semi-supervised representations, which underpin the foundations of recent developments. Many instantiations of unsupervised and semi-supervised generative models have been developed on the basis of these criteria, greatly expanding the territory of existing autoencoders, generative adversarial nets (GANs) and other deep networks by exploring the distribution of unlabeled data for more powerful representations. While we focus on the unsupervised and semi-supervised methods, we will also provide a broader review of other emerging topics, from unsupervised and semi-supervised domain adaptation to the fundamental roles of transformation equivariance and invariance in training a wide spectrum of deep networks. It is impossible for us to write an exclusive encyclopedia to include all related works. Instead, we aim at exploring the main ideas, principles and methods in this area to reveal where we are heading on the journey towards addressing the small data challenges in this big data era. | There exist other variants of unsupervised domain adaptation methods based on adversarial or non-adversarial training. For example, Domain Confusion @cite_71 proposes an objective under which the untied representations of the two domains are trained so that a domain classifier's predictions map onto a uniform distribution, treating the two domains identically. CoGAN @cite_91 trains two GANs that generate the source and target images, respectively. Domain invariance is achieved by tying the high-level parameters of the two GANs, and a classifier is trained on the output of the discriminator. | {
"cite_N": [
"@cite_91",
"@cite_71"
],
"mid": [
"2963784072",
"2214409633"
],
"abstract": [
"We propose coupled generative adversarial network (CoGAN) for learning a joint distribution of multi-domain images. In contrast to the existing approaches, which require tuples of corresponding images in different domains in the training set, CoGAN can learn a joint distribution without any tuple of corresponding images. It can learn a joint distribution with just samples drawn from the marginal distributions. This is achieved by enforcing a weight-sharing constraint that limits the network capacity and favors a joint distribution solution over a product of marginal distributions one. We apply CoGAN to several joint distribution learning tasks, including learning a joint distribution of color and depth images, and learning a joint distribution of face images with different attributes. For each task it successfully learns the joint distribution without any tuple of corresponding images. We also demonstrate its applications to domain adaptation and image transformation.",
"Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias. Fine-tuning deep models in a new domain can require a significant amount of labeled data, which for many applications is simply not available. We propose a new CNN architecture to exploit unlabeled and sparsely labeled target domain data. Our approach simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks. Our proposed adaptation method offers empirical performance which exceeds previously published results on two standard benchmark visual domain adaptation tasks, evaluated across supervised and semi-supervised adaptation settings."
]
} |
1903.11248 | 2923009181 | We present a method for compositing virtual objects into a photograph such that the object colors appear to have been processed by the photo's camera imaging pipeline. Compositing in such a camera-aware manner is essential for high realism, and it requires the color transformation in the photo's pipeline to be inferred, which is challenging due to the inherent one-to-many mapping that exists from a scene to a photo. To address this problem for the case of a single photo taken from an unknown camera, we propose a dual-learning approach in which the reverse color transformation (from the photo to the scene) is jointly estimated. Learning of the reverse transformation is used to facilitate learning of the forward mapping, by enforcing cycle consistency of the two processes. We additionally employ a feature sharing schema to extract evidence from the target photo in the reverse mapping to guide the forward color transformation. Our dual-learning approach achieves object compositing results that surpass those of alternative techniques. | Physics-based computer vision methods such as shape-from-shading require measurements of scene radiance that are physically accurate. Towards obtaining accurate measurements from photographs, the imaging pipeline of cameras has been modeled and used to undo the effects of in-camera processing. Many techniques have been proposed for modeling a particular component of an imaging pipeline, such as tone mapping @cite_19 @cite_10 @cite_4 or white balancing @cite_8 @cite_14 @cite_15 . More comprehensive are works that aim to model the sequence of processing operations that occur within an imaging device @cite_20 @cite_12 . Recently, a deep neural network was presented for modeling the scene-dependent color processing of a given camera, where RAW-JPEG image pairs are captured from the camera for training @cite_2 . In our work, we utilize this deep network for modeling color transformations in the imaging pipeline, but infer the model using only a single photograph from an unknown camera. This inference from a single image is made possible through the use of contextual color priors on common scene objects and our proposed dual-learning approach with a feature sharing schema. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_8",
"@cite_19",
"@cite_2",
"@cite_15",
"@cite_10",
"@cite_12",
"@cite_20"
],
"mid": [
"2739489527",
"2133934946",
"2556952061",
"2093580403",
"2740930075",
"2519060067",
"2123315723",
"2127701282",
"2110288559"
],
"abstract": [
"Improvements in color constancy have arisen from the use of convolutional neural networks (CNNs). However, the patch-based CNNs that exist for this problem are faced with the issue of estimation ambiguity, where a patch may contain insufficient information to establish a unique or even a limited possible range of illumination colors. Image patches with estimation ambiguity not only appear with great frequency in photographs, but also significantly degrade the quality of network training and inference. To overcome this problem, we present a fully convolutional network architecture in which patches throughout an image can carry different confidence weights according to the value they provide for color constancy estimation. These confidence weights are learned and applied within a novel pooling layer where the local estimates are merged into a global solution. With this formulation, the network is able to determine what to learn and how to pool automatically from color constancy datasets without additional supervision. The proposed network also allows for end-to-end training, and achieves higher efficiency and accuracy. On standard benchmarks, our network outperforms the previous state-of-the-art while achieving 120x greater efficiency.",
"Photometric methods in computer vision require calibration of the camera's radiometric response, and previous works have addressed this problem using multiple registered images captured under different camera exposure settings. In many instances, such an image set is not available, so we propose a method that performs radiometric calibration from only a single image, based on measured RGB distributions at color edges. This technique automatically selects appropriate edge information for processing, and employs a Bayesian approach to compute the calibration. Extensive experimentation has shown that accurate calibration results can be obtained using only a single input image.",
"We present Fast Fourier Color Constancy (FFCC), a color constancy algorithm which solves illuminant estimation by reducing it to a spatial localization task on a torus. By operating in the frequency domain, FFCC produces lower error rates than the previous state-of-the-art by 13–20% while being 250–3000 times faster. This unconventional approach introduces challenges regarding aliasing, directional statistics, and preconditioning, which we address. By producing a complete posterior distribution over illuminants instead of a single illuminant estimate, FFCC enables better training techniques, an effective temporal smoothing technique, and richer methods for error analysis. Our implementation of FFCC runs at 700 frames per second on a mobile device, allowing it to be used as an accurate, real-time, temporally-coherent automatic white balance algorithm.",
"To produce images that are suitable for display, tone-mapping is widely used in digital cameras to map linear color measurements into narrow gamuts with limited dynamic range. This introduces non-linear distortion that must be undone, through a radiometric calibration process, before computer vision systems can analyze such photographs radiometrically. This paper considers the inherent uncertainty of undoing the effects of tone-mapping. We observe that this uncertainty varies substantially across color space, making some pixels more reliable than others. We introduce a model for this uncertainty and a method for fitting it to a given camera or imaging pipeline. Once fit, the model provides for each pixel in a tone-mapped digital photograph a probability distribution over linear scene colors that could have induced it. We demonstrate how these distributions can be useful for visual inference by incorporating them into estimation algorithms for a representative set of vision tasks.",
"We present a novel deep learning framework that models the scene dependent image processing inside cameras. Often called as the radiometric calibration, the process of recovering RAW images from processed images (JPEG format in the sRGB color space) is essential for many computer vision tasks that rely on physically accurate radiance values. All previous works rely on the deterministic imaging model where the color transformation stays the same regardless of the scene and thus they can only be applied for images taken under the manual mode. In this paper, we propose a data-driven approach to learn the scene dependent and locally varying image processing inside cameras under the automode. Our method incorporates both the global and the local scene context into pixel-wise features via multi-scale pyramid of learnable histogram layers. The results show that we can model the imaging pipeline of different cameras that operate under the automode accurately in both directions (from RAW to sRGB, from sRGB to RAW) and we show how we can apply our method to improve the performance of image deblurring.",
"Illuminant estimation to achieve color constancy is an ill-posed problem. Searching the large hypothesis space for an accurate illuminant estimation is hard due to the ambiguities of unknown reflections and local patch appearances. In this work, we propose a novel Deep Specialized Network (DS-Net) that is adaptive to diverse local regions for estimating robust local illuminants. This is achieved through a new convolutional network architecture with two interacting sub-networks, i.e. an hypotheses network (HypNet) and a selection network (SelNet). In particular, HypNet generates multiple illuminant hypotheses that inherently capture different modes of illuminants with its unique two-branch structure. SelNet then adaptively picks for confident estimations from these plausible hypotheses. Extensive experiments on the two largest color constancy benchmark datasets show that the proposed ‘hypothesis selection’ approach is effective to overcome erroneous estimation. Through the synergy of HypNet and SelNet, our approach outperforms state-of-the-art methods such as [1, 2, 3].",
"In many computer vision systems, it is assumed that the image brightness of a point directly reflects the scene radiance of the point. However, the assumption does not hold in most cases due to nonlinear camera response function, exposure changes, and vignetting. The effects of these factors are most visible in image mosaics and textures of 3D models where colors look inconsistent and notable boundaries exist. In this paper, we propose a full radiometric calibration algorithm that includes robust estimation of the radiometric response function, exposures, and vignetting. By decoupling the effect of vignetting from the response function estimation, we approach each process in a manner that is robust to noise and outliers. We verify our algorithm with both synthetic and real data, which shows significant improvement compared to existing methods. We apply our estimation results to radiometrically align images for seamless mosaics and 3D model textures. We also use our method to create high dynamic range (HDR) mosaics that are more representative of the scene than normal mosaics.",
"We present a study of in-camera image processing through an extensive analysis of more than 10,000 images from over 30 cameras. The goal of this work is to investigate if image values can be transformed to physically meaningful values, and if so, when and how this can be done. From our analysis, we found a major limitation of the imaging model employed in conventional radiometric calibration methods and propose a new in-camera imaging model that fits well with today's cameras. With the new model, we present associated calibration procedures that allow us to convert sRGB images back to their original CCD RAW responses in a manner that is significantly more accurate than any existing methods. Additionally, we show how this new imaging model can be used to build an image correction application that converts an sRGB input image captured with the wrong camera settings to an sRGB output image that would have been recorded under the correct settings of a specific camera.",
"Images harvested from the Web are proving to be useful for many visual tasks, including recognition, geo-location, and three-dimensional reconstruction. These images are captured under a variety of lighting conditions by consumer-level digital cameras, and these cameras have color processing pipelines that are diverse, complex, and scenedependent. As a result, the color information contained in these images is difficult to exploit. In this paper, we analyze the factors that contribute to the color output of a typical camera, and we explore the use of parametric models for relating these output colors to meaningful scenes properties. We evaluate these models using a database of registered images captured with varying camera models, camera settings, and lighting conditions. The database is available online at http: vision.middlebury.edu color ."
]
} |
1903.11248 | 2923009181 | We present a method for compositing virtual objects into a photograph such that the object colors appear to have been processed by the photo's camera imaging pipeline. Compositing in such a camera-aware manner is essential for high realism, and it requires the color transformation in the photo's pipeline to be inferred, which is challenging due to the inherent one-to-many mapping that exists from a scene to a photo. To address this problem for the case of a single photo taken from an unknown camera, we propose a dual-learning approach in which the reverse color transformation (from the photo to the scene) is jointly estimated. Learning of the reverse transformation is used to facilitate learning of the forward mapping, by enforcing cycle consistency of the two processes. We additionally employ a feature sharing schema to extract evidence from the target photo in the reverse mapping to guide the forward color transformation. Our dual-learning approach achieves object compositing results that surpass those of alternative techniques. | For increasing the realism of objects composited into photographs, methods have been presented for estimating scene illumination @cite_16 @cite_18 and for recovering camera distortions such as those resulting from sensor noise and motion blur @cite_0 , or caused by the camera's lens and rolling shutter @cite_11 . In contrast to these previous techniques, our work seeks to heighten realism by estimating and applying the in-camera color processing to composited objects, and thus is complementary to this prior research. Moreover, unlike the methods that model imaging distortions @cite_0 @cite_11 , which require access to the camera for calibrating these effects, our method is specifically developed not to need the camera at hand, so that it can be applied to arbitrary images. | {
"cite_N": [
"@cite_0",
"@cite_18",
"@cite_16",
"@cite_11"
],
"mid": [
"2142028894",
"2043470575",
"2069088802",
"2120943485"
],
"abstract": [
"In video see-through augmented reality (AR), virtual objects are overlaid over digital video images. One particular problem of this image mixing process is that the visual appearance of the computer graphics differs strongly from the real background image. The reason for this is that typical AR systems use fast but simple real-time rendering techniques for displaying virtual objects. In this paper, methods for reducing the impact of three effects which make virtual and real objects easily distinguishable are presented. The first effect is camera image noise, which is contained in the data delivered by the image sensor used for capturing the real scene. The second effect considered is edge aliasing, which makes distinguishing virtual objects from real objects simple. Finally, we consider motion blur, which is caused by the temporal integration of color intensities in the image sensor during fast movements of the camera or observed objects. In this paper, we present a system for generating a realistic simulation of image noise based on a new camera calibration step. Additionally, a rendering algorithm is introduced, which performs a smooth blending between the camera image and virtual objects at their boundary in order to reduce aliasing. Lastly, a rendering method is presented, which produces motion blur according to the current camera movement. The implementation of the new rendering techniques utilizes the programmability of modern graphics processing units (GPUs) and delivers real-time frame rates.",
"We present a method for estimating the real-world lighting conditions within a scene in real-time. The estimation is based on the visual appearance of a human face in the real scene captured in a single image of a monocular camera. In hardware setups featuring a user-facing camera, an image of the user’s face can be acquired at any time. The limited range in variations between different human faces makes it possible to analyze their appearance offline, and to apply the results to new faces. Our approach uses radiance transfer functions – learned offline from a dataset of images of faces under different known illuminations – for particular points on the human face. Based on these functions, we recover the most plausible real-world lighting conditions for measured reflections in a face, represented by a function depending on incident light angle using Spherical Harmonics. The pose of the camera relative to the face is determined by means of optical tracking, and virtual 3D content is rendered and overlaid onto the real scene with a fixed spatial relationship to the face. By applying the estimated lighting conditions to the rendering of the virtual content, the augmented scene is shaded coherently with regard to the real and virtual parts of the scene. We show with different examples under a variety of lighting conditions, that our approach provides plausible results, which considerably enhance the visual realism in real-time Augmented Reality applications.",
"We propose a method to realistically insert synthetic objects into existing photographs without requiring access to the scene or any additional scene measurements. With a single image and a small amount of annotation, our method creates a physical model of the scene that is suitable for realistically rendering synthetic objects with diffuse, specular, and even glowing materials while accounting for lighting interactions between the objects and the scene. We demonstrate in a user study that synthetic images produced by our method are confusable with real scenes, even for people who believe they are good at telling the difference. Further, our study shows that our method is competitive with other insertion methods while requiring less scene information. We also collected new illumination and reflectance datasets; renderings produced by our system compare well to ground truth. Our system has applications in the movie and gaming industry, as well as home decorating and user content creation, among others.",
"Video see-through Augmented Reality adds computer graphics to the real world in real time by overlaying graphics onto a live video feed. To achieve a realistic integration of the virtual and real imagery, the rendered images should have a similar appearance and quality to those produced by the video camera. This paper describes a compositing method which models the artifacts produced by a small low-cost camera, and adds these effects to an ideal pinhole image produced by conventional rendering methods. We attempt to model and simulate each step of the imaging process, including distortions, chromatic aberrations, blur, Bayer masking, noise, sharpening, and color-space compression, all while requiring only an RGBA image and an estimate of camera velocity as inputs."
]
} |
1903.11248 | 2923009181 | We present a method for compositing virtual objects into a photograph such that the object colors appear to have been processed by the photo's camera imaging pipeline. Compositing in such a camera-aware manner is essential for high realism, and it requires the color transformation in the photo's pipeline to be inferred, which is challenging due to the inherent one-to-many mapping that exists from a scene to a photo. To address this problem for the case of a single photo taken from an unknown camera, we propose a dual-learning approach in which the reverse color transformation (from the photo to the scene) is jointly estimated. Learning of the reverse transformation is used to facilitate learning of the forward mapping, by enforcing cycle consistency of the two processes. We additionally employ a feature sharing schema to extract evidence from the target photo in the reverse mapping to guide the forward color transformation. Our dual-learning approach achieves object compositing results that surpass those of alternative techniques. | Many image processing problems can be viewed as translating an input image into an output image that exhibits a different representation of the scene. A general framework for this translation problem was introduced using a Generative Adversarial Network (GAN) that learns this mapping from a training set of aligned image pairs from the two domains @cite_17 . To relax the requirement of paired training data, recent methods have exploited the duality in the image translation problem by jointly learning an additional GAN that maps images from the output domain to the input domain while enforcing a cycle-consistency constraint in which an image mapped from the input domain to the output domain and then back to the input domain should yield the original input @cite_24 @cite_29 @cite_13 . Through this coupling of GANs, the training data need not be paired, but rather it is sufficient to have independent sets of images in each of the two domains. | {
"cite_N": [
"@cite_24",
"@cite_29",
"@cite_13",
"@cite_17"
],
"mid": [
"2962793481",
"2963784072",
"2598581049",
""
],
"abstract": [
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.",
"We propose coupled generative adversarial network (CoGAN) for learning a joint distribution of multi-domain images. In contrast to the existing approaches, which require tuples of corresponding images in different domains in the training set, CoGAN can learn a joint distribution without any tuple of corresponding images. It can learn a joint distribution with just samples drawn from the marginal distributions. This is achieved by enforcing a weight-sharing constraint that limits the network capacity and favors a joint distribution solution over a product of marginal distributions one. We apply CoGAN to several joint distribution learning tasks, including learning a joint distribution of color and depth images, and learning a joint distribution of face images with different attributes. For each task it successfully learns the joint distribution without any tuple of corresponding images. We also demonstrate its applications to domain adaptation and image transformation.",
"While humans easily recognize relations between data from different domains without any supervision, learning to automatically discover them is in general very challenging and needs many ground-truth pairs that illustrate the relations. To avoid costly pairing, we address the task of discovering cross-domain relations when given unpaired data. We propose a method based on generative adversarial networks that learns to discover relations between different domains (DiscoGAN). Using the discovered relations, our proposed network successfully transfers style from one domain to another while preserving key attributes such as orientation and face identity.",
""
]
} |
1903.11222 | 2922940586 | For those languages which use it, capitalization is an important signal for the fundamental NLP tasks of Named Entity Recognition (NER) and Part of Speech (POS) tagging. In fact, it is such a strong signal that model performance on these tasks drops sharply in common lowercased scenarios, such as noisy web text or machine translation outputs. In this work, we perform a systematic analysis of solutions to this problem, modifying only the casing of the train or test data using lowercasing and truecasing methods. While prior work and first impressions might suggest training a caseless model, or using a truecaser at test time, we show that the most effective strategy is a concatenation of cased and lowercased training data, producing a single model with high performance on both cased and uncased text. As shown in our experiments, this result holds across tasks and input representations. Finally, we show that our proposed solution gives an 8% F1 improvement in mention detection on noisy out-of-domain Twitter data. | A practical, common solution to this problem is summarized by the Stanford CoreNLP system @cite_16 : train on uncased text, or use a truecaser on test data. We include these suggested solutions in our analysis below. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2123442489"
],
"abstract": [
"We describe the design and use of the Stanford CoreNLP toolkit, an extensible pipeline that provides core natural language analysis. This toolkit is quite widely used, both in the research NLP community and also among commercial and government users of open source NLP technology. We suggest that this follows from a simple, approachable design, straightforward interfaces, the inclusion of robust and good quality analysis components, and not requiring use of a large amount of associated baggage."
]
} |
1903.11222 | 2922940586 | For those languages which use it, capitalization is an important signal for the fundamental NLP tasks of Named Entity Recognition (NER) and Part of Speech (POS) tagging. In fact, it is such a strong signal that model performance on these tasks drops sharply in common lowercased scenarios, such as noisy web text or machine translation outputs. In this work, we perform a systematic analysis of solutions to this problem, modifying only the casing of the train or test data using lowercasing and truecasing methods. While prior work and first impressions might suggest training a caseless model, or using a truecaser at test time, we show that the most effective strategy is a concatenation of cased and lowercased training data, producing a single model with high performance on both cased and uncased text. As shown in our experiments, this result holds across tasks and input representations. Finally, we show that our proposed solution gives an 8% F1 improvement in mention detection on noisy out-of-domain Twitter data. | Truecasing presents a natural solution for situations with noisy or uncertain text capitalization. It has been studied in the context of many fields, including speech recognition @cite_10 @cite_4 , and machine translation @cite_2 , as the outputs of these tasks are traditionally lowercased. | {
"cite_N": [
"@cite_10",
"@cite_4",
"@cite_2"
],
"mid": [
"2502285915",
"2169187092",
"2130784467"
],
"abstract": [
"Proper capitalization in text is a useful, often mandatory characteristic. Many text processing techniques rely on proper capitalization, and people can more easily read mixed case text. Proper capitalization, however, is often absent in a number of text sources, including automatic speech recognition output and closed caption text. The value of these text sources can be greatly enhanced with proper capitalization. We describe and evaluate a series of techniques that can recover proper capitalization. Our final system is able to recover more than 88% of the capitalized words with better than 90% accuracy.",
"Adding punctuation and capitalization greatly improves the readability of automatic speech transcripts. We discuss an approach for performing both tasks in a single pass using a purely text-based n-gram language model. We study the effect on performance of varying the n-gram order (from n = 3 to n = 6) and the amount of training data (from 58 million to 55 billion tokens). Our results show that using larger training data sets consistently improves performance, while increasing the n-gram order does not help nearly as much.",
"We present a probabilistic bilingual capitalization model for capitalizing machine translation outputs using conditional random fields. Experiments carried out on three language pairs and a variety of experiment conditions show that our model significantly outperforms a strong monolingual capitalization model baseline, especially when working with small datasets and/or European language pairs."
]
} |
1903.11397 | 2924022660 | To increase productivity, today's compilers offer a two-fold abstraction: they hide hardware complexity from the software developer, and they support many architectures and programming languages. At the same time, due to fierce market competition, most processor vendors do not disclose many of their implementation details. These factors force software developers to treat both compilers and architectures as black boxes. In practice, this leads to a suboptimal compiler behavior where the maximum potential of improving an application's resource usage, such as execution time, is often not realized. This paper exposes missed optimization opportunities and is of interest to all three communities, compiler engineers, software developers and hardware architects. By exploiting the behavior of the standard optimization levels, such as the -O3, of the LLVM v6.0 compiler, we show how to reveal hidden cross-architecture and architecture-dependent potential optimizations on two popular processors: the Intel i5-6300U, widely used in portable PCs, and the ARM Cortex-A53-based Broadcom BCM2837 used in the Raspberry Pi 3B+. The classic nightly regression testing can then be extended to use the resource usage and compilation information collected while exploiting subsequences of the standard optimization levels. This provides a systematic means of detecting and tracking missed optimization opportunities. The enhanced nightly regression system is capable of driving the improvement and tuning of the compiler's common optimizer | Iterative compilation typically randomly samples the optimization configuration space until finding a configuration that outperforms a predefined optimization level @cite_2 . 
The technique has in many cases proven to provide significant performance gains @cite_7 @cite_15 , but typically a large number of optimization configurations, on the order of hundreds to thousands, need to be evaluated before reaching any performance gains over standard optimization levels. Thus, iterative compilation has been traditionally used as a baseline to assess the performance of MLB compiler auto-tuning techniques @cite_0 @cite_2 @cite_12 . MLB techniques aim to beat the performance of iterative compilation by finding a better optimization configuration in a shorter time. Thus, MLB techniques try to strategically sample the optimization configuration space based on the models built during their training phase. Such models are trained on either static code features @cite_0 or profiling information @cite_19 , such as performance counter values that characterize the programs in the training set, with a performance metric as the dependent variable. An example of such a performance metric is the execution time of programs when applying a specific optimization configuration. | {
"cite_N": [
"@cite_7",
"@cite_0",
"@cite_19",
"@cite_2",
"@cite_15",
"@cite_12"
],
"mid": [
"",
"2156560068",
"2142079700",
"2751901133",
"2785111092",
""
],
"abstract": [
"",
"Tuning compiler optimizations for rapidly evolving hardware makes porting and extending an optimizing compiler for each new platform extremely challenging. Iterative optimization is a popular approach to adapting programs to a new architecture automatically using feedback-directed compilation. However, the large number of evaluations required for each program has prevented iterative compilation from widespread take-up in production compilers. Machine learning has been proposed to tune optimizations across programs systematically but is currently limited to a few transformations, long training phases and critically lacks publicly released, stable tools. Our approach is to develop a modular, extensible, self-tuning optimization infrastructure to automatically learn the best optimizations across multiple programs and architectures based on the correlation between program features, run-time behavior and optimizations. In this paper we describe Milepost GCC, the first publicly-available open-source machine learning-based compiler. It consists of an Interactive Compilation Interface (ICI) and plugins to extract program features and exchange optimization data with the cTuning.org open public repository. It automatically adapts the internal optimization heuristic at function-level granularity to improve execution time, code size and compilation time of a new program on a given architecture. Part of the MILEPOST technology together with low-level ICI-inspired plugin framework is now included in the mainline GCC. We developed machine learning plugins based on probabilistic and transductive approaches to predict good combinations of optimizations. Our preliminary experimental results show that it is possible to automatically reduce the execution time of individual MiBench programs, some by more than a factor of 2, while also improving compilation time and code size. 
On average we are able to reduce the execution time of the MiBench benchmark suite by 11% for the ARC reconfigurable processor. We also present a realistic multi-objective optimization scenario for Berkeley DB library using Milepost GCC and improve execution time by approximately 17%, while reducing compilation time and code size by 12% and 7% respectively on Intel Xeon processor.",
"Applying the right compiler optimizations to a particular program can have a significant impact on program performance. Due to the non-linear interaction of compiler optimizations, however, determining the best setting is nontrivial. There have been several proposed techniques that search the space of compiler options to find good solutions; however such approaches can be expensive. This paper proposes a different approach using performance counters as a means of determining good compiler optimization settings. This is achieved by learning a model off-line which can then be used to determine good settings for any new program. We show that such an approach outperforms the state-of-the-art and is two orders of magnitude faster on average. Furthermore, we show that our performance counter-based approach outperforms techniques based on static code features. Using our technique we achieve a 17% improvement over the highest optimization setting of the commercial PathScale EKOPath 2.3.1 optimizing compiler on the SPEC benchmark suite on a recent AMD Athlon 64 3700+ platform.",
"Recent compilers offer a vast number of multilayered optimizations targeting different code segments of an application. Choosing among these optimizations can significantly impact the performance of the code being optimized. The selection of the right set of compiler optimizations for a particular code segment is a very hard problem, but finding the best ordering of these optimizations adds further complexity. Finding the best ordering represents a long standing problem in compilation research, named the phase-ordering problem. The traditional approach of constructing compiler heuristics to solve this problem simply cannot cope with the enormous complexity of choosing the right ordering of optimizations for every code segment in an application. This article proposes an automatic optimization framework we call MiCOMP, which Mitigates the Compiler Phase-ordering problem. We perform phase ordering of the optimizations in LLVM’s highest optimization level using optimization sub-sequences and machine learning. The idea is to cluster the optimization passes of LLVM’s O3 setting into different clusters to predict the speedup of a complete sequence of all the optimization clusters instead of having to deal with the ordering of more than 60 different individual optimizations. The predictive model uses (1) dynamic features, (2) an encoded version of the compiler sequence, and (3) an exploration heuristic to tackle the problem. Experimental results using the LLVM compiler framework and the Cbench suite show the effectiveness of the proposed clustering and encoding techniques to application-based reordering of passes, while using a number of predictive models. We perform statistical analysis on the results and compare against (1) random iterative compilation, (2) standard optimization levels, and (3) two recent prediction approaches. We show that MiCOMP’s iterative compilation using its sub-sequences can reach an average performance speedup of 1.31 (up to 1.51).
Additionally, we demonstrate that MiCOMP’s prediction model outperforms the -O1, -O2, and -O3 optimization levels within using just a few predictions and reduces the prediction error rate down to only 5%. Overall, it achieves 90% of the available speedup by exploring less than 0.001% of the optimization space.",
"Developing efficient software and hardware has never been harder whether it is for a tiny IoT device or an Exascale supercomputer. Apart from the ever growing design and optimization complexity, there exist even more fundamental problems such as lack of interdisciplinary knowledge required for effective software/hardware co-design, and a growing technology transfer gap between academia and industry. We introduce our new educational initiative to tackle these problems by developing Collective Knowledge (CK), a unified experimental framework for computer systems research and development. We use CK to teach the community how to make their research artifacts and experimental workflows portable, reproducible, customizable and reusable while enabling sustainable R&D crowdsource experimentation across diverse platforms; share experimental results, models, visualizations; gradually expose more design and optimization choices using a simple JSON API; and ultimately build upon each other's findings. As the first practical step, we have implemented customizable compiler autotuning, crowdsourced optimization of diverse workloads across Raspberry Pi 3 devices, reduced the execution time and code size by up to 40%, and applied machine learning to predict optimizations. We hope such approach will help teach students how to build upon each others' work to enable efficient and self-optimizing software/hardware model stack for emerging workloads.",
""
]
} |
1903.11397 | 2924022660 | To increase productivity, today's compilers offer a two-fold abstraction: they hide hardware complexity from the software developer, and they support many architectures and programming languages. At the same time, due to fierce market competition, most processor vendors do not disclose many of their implementation details. These factors force software developers to treat both compilers and architectures as black boxes. In practice, this leads to a suboptimal compiler behavior where the maximum potential of improving an application's resource usage, such as execution time, is often not realized. This paper exposes missed optimization opportunities and is of interest to all three communities, compiler engineers, software developers and hardware architects. By exploiting the behavior of the standard optimization levels, such as the -O3, of the LLVM v6.0 compiler, we show how to reveal hidden cross-architecture and architecture-dependent potential optimizations on two popular processors: the Intel i5-6300U, widely used in portable PCs, and the ARM Cortex-A53-based Broadcom BCM2837 used in the Raspberry Pi 3B+. The classic nightly regression testing can then be extended to use the resource usage and compilation information collected while exploiting subsequences of the standard optimization levels. This provides a systematic means of detecting and tracking missed optimization opportunities. The enhanced nightly regression system is capable of driving the improvement and tuning of the compiler's common optimizer | Typically, these techniques require a large training phase @cite_10 to create their predictive models. Furthermore, they are hardly portable across different compilers, different versions of the same compiler, or different architectures. Even if a single flag is added to the set of a compiler's existing flags, the whole training phase has to be repeated. 
Moreover, extracting some of the metrics that these techniques depend on, such as static code features, might require a significant amount of engineering @cite_17 . Thus, MLB techniques are inadequate for systematic testing and improvement of compilers. | {
"cite_N": [
"@cite_10",
"@cite_17"
],
"mid": [
"2586408124",
"2962724414"
],
"abstract": [
"Since performance is not portable between platforms, engineers must fine-tune heuristics for each processor in turn. This is such a laborious task that high-profile compilers, supporting many architectures, cannot keep up with hardware innovation and are actually out-of-date. Iterative compilation driven by machine learning has been shown to be efficient at generating portable optimization models automatically. However, good quality models require costly, repetitive, and extensive training which greatly hinders the wide adoption of this powerful technique. In this work, we show that much of this cost is spent collecting training data, runtime measurements for different optimization decisions, which contribute little to the final heuristic. Current implementations evaluate randomly chosen, often redundant, training examples a pre-configured, almost always excessive, number of times – a large source of wasted effort. Our approach optimizes not only the selection of training examples but also the number of samples per example, independently. To evaluate, we construct 11 high-quality models which use a combination of optimization settings to predict the runtime of benchmarks from the SPAPT suite. Our novel, broadly applicable, methodology is able to reduce the training overhead by up to 26x compared to an approach with a fixed number of sample runs, transforming what is potentially months of work into days.",
"In the last decade, machine-learning-based compilation has moved from an obscure research niche to a mainstream activity. In this paper, we describe the relationship between machine learning and compiler optimization and introduce the main concepts of features, models, training, and deployment. We then provide a comprehensive survey and provide a road map for the wide variety of different research areas. We conclude with a discussion on open issues in the area and potential research directions. This paper provides both an accessible introduction to the fast moving area of machine-learning-based compilation and a detailed bibliography of its main achievements."
]
} |
1903.11397 | 2924022660 | To increase productivity, today's compilers offer a two-fold abstraction: they hide hardware complexity from the software developer, and they support many architectures and programming languages. At the same time, due to fierce market competition, most processor vendors do not disclose many of their implementation details. These factors force software developers to treat both compilers and architectures as black boxes. In practice, this leads to a suboptimal compiler behavior where the maximum potential of improving an application's resource usage, such as execution time, is often not realized. This paper exposes missed optimization opportunities and is of interest to all three communities, compiler engineers, software developers and hardware architects. By exploiting the behavior of the standard optimization levels, such as the -O3, of the LLVM v6.0 compiler, we show how to reveal hidden cross-architecture and architecture-dependent potential optimizations on two popular processors: the Intel i5-6300U, widely used in portable PCs, and the ARM Cortex-A53-based Broadcom BCM2837 used in the Raspberry Pi 3B+. The classic nightly regression testing can then be extended to use the resource usage and compilation information collected while exploiting subsequences of the standard optimization levels. This provides a systematic means of detecting and tracking missed optimization opportunities. The enhanced nightly regression system is capable of driving the improvement and tuning of the compiler's common optimizer | Energy consumption of computing is becoming critically important for economic, environmental, and reliability reasons @cite_8 @cite_6 . @cite_1 , the technique also used in this paper for exploring the standard optimization levels, was able to accurately account for energy consumption through physical hardware measurements on deeply embedded devices. 
In future work, we will explore whether energy profilers @cite_22 can achieve the same for platforms with higher-end architectures that do not allow for direct processor energy measurements, such as the ones explored in this paper. | {
"cite_N": [
"@cite_1",
"@cite_22",
"@cite_6",
"@cite_8"
],
"mid": [
"2788459922",
"",
"2727098759",
"2963423308"
],
"abstract": [
"This paper presents the interesting observation that by performing fewer of the optimizations available in a standard compiler optimization level such as -O2, while preserving their original ordering, significant savings can be achieved in both execution time and energy consumption. This observation has been validated on two embedded processors, namely the ARM Cortex-M0 and the ARM Cortex-M3, using two different versions of the LLVM compilation framework; v3.8 and v5.0. Experimental evaluation with 71 embedded benchmarks demonstrated performance gains for at least half of the benchmarks for both processors. An average execution time reduction of 2.4% and 5.3% was achieved across all the benchmarks for the Cortex-M0 and Cortex-M3 processors, respectively, with execution time improvements ranging from 1% up to 90% over the -O2. The savings that can be achieved are in the same range as what can be achieved by the state-of-the-art compilation approaches that use iterative compilation or machine learning to select flags or to determine phase orderings that result in more efficient code. In contrast to these time consuming and expensive to apply techniques, our approach only needs to test a limited number of optimization configurations, less than 64, to obtain similar or even better savings. Furthermore, our approach can support multi-criteria optimization as it targets execution time, energy consumption and code size at the same time.",
"",
"The Internet of Things (IoT) sparks a whole new world of embedded applications. Most of these applications are based on deeply embedded systems that have to operate on limited or unreliable sources of energy, such as batteries or energy harvesters. Meeting the energy requirements for such applications is a hard challenge, which threatens the future growth of the IoT. Software has the ultimate control over hardware. Therefore, its role is significant in optimizing the energy consumption of a system. Currently, programmers have no feedback on how their software affects the energy consumption of a system. Such feedback can be enabled by energy transparency, a concept that makes a program’s energy consumption visible, from hardware to software. This letter discusses the need for energy transparency in software development and emphasizes on how such transparency can be realized to help tackle the IoT energy challenge.",
"Abstract Promoting energy efficiency to a first class system design goal is an important research challenge. Although more energy-efficient hardware can be designed, it is software that controls the hardware; for a given system the potential for energy savings is likely to be much greater at the higher levels of abstraction in the system stack. Thus the greatest savings are expected from energy-aware software development, which is the vision of the EU ENTRA project. This article presents the concept of energy transparency as a foundation for energy-aware software development. We show how energy modelling of hardware is combined with static analysis to allow the programmer to understand the energy consumption of a program without executing it, thus enabling exploration of the design space taking energy into consideration. The paper concludes by summarising the current and future challenges identified in the ENTRA project."
]
} |
1903.11279 | 2952895179 | Visually rich documents (VRDs) are ubiquitous in daily business and life. Examples are purchase receipts, insurance policy documents, custom declaration forms and so on. In VRDs, visual and layout information is critical for document understanding, and texts in such documents cannot be serialized into the one-dimensional sequence without losing information. Classic information extraction models such as BiLSTM-CRF typically operate on text sequences and do not incorporate visual features. In this paper, we introduce a graph convolution based model to combine textual and visual information presented in VRDs. Graph embeddings are trained to summarize the context of a text segment in the document, and further combined with text embeddings for entity extraction. Extensive experiments have been conducted to show that our method outperforms BiLSTM-CRF baselines by significant margins, on two real-world datasets. Additionally, ablation studies are also performed to evaluate the effectiveness of each component of our model. | Neural network architectures such as CNN and RNNs have demonstrated huge success on many artificial intelligence tasks where the underlying data has grid-like or sequential structure @cite_1 @cite_8 @cite_9 . Recently, there is a surge of interest in studying the neural network structure operating on graphs @cite_11 @cite_6 , since much data in the real world is naturally represented as graphs. Many works attempt to generalize convolution on the graph structure. Some use a spectrum based approach where the learned model depends on the structure of the graph. As a result, the approach does not work well on dynamic graph structures. The others define convolution directly on the graph @cite_26 @cite_6 @cite_19 @cite_15 @cite_4 . We follow the latter approach in our work to model the text segment graph of VRDs. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_6",
"@cite_19",
"@cite_15",
"@cite_11"
],
"mid": [
"2766453196",
"2964113829",
"1938755728",
"2949541494",
"2163605009",
"2962767366",
"2796167946",
"2796341166",
"2519887557"
],
"abstract": [
"We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods' features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems. Our GAT models have achieved or matched state-of-the-art results across four established transductive and inductive graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as a protein-protein interaction dataset (wherein test graphs remain unseen during training).",
"We introduce a convolutional neural network that operates directly on graphs. These networks allow end-to-end learning of prediction pipelines whose inputs are graphs of arbitrary size and shape. The architecture we present generalizes standard molecular feature extraction methods based on circular fingerprints. We show that these data-driven features are more interpretable, and have better predictive performance on a variety of tasks.",
"We describe a simple neural language model that relies only on character-level inputs. Predictions are still made at the word-level. Our model employs a convolutional neural network (CNN) and a highway network over characters, whose output is given to a long short-term memory (LSTM) recurrent neural network language model (RNN-LM). On the English Penn Treebank the model is on par with the existing state-of-the-art despite having 60% fewer parameters. On languages with rich morphology (Arabic, Czech, French, German, Spanish, Russian), the model outperforms word-level/morpheme-level LSTM baselines, again with fewer parameters. The results suggest that on many languages, character inputs are sufficient for language modeling. Analysis of word representations obtained from the character composition part of the model reveals that the model is able to encode, from characters only, both semantic and orthographic information.",
"We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We show that a simple CNN with little hyperparameter tuning and static vectors achieves excellent results on multiple benchmarks. Learning task-specific vectors through fine-tuning offers further gains in performance. We additionally propose a simple modification to the architecture to allow for the use of both task-specific and static vectors. The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, which include sentiment analysis and question classification.",
"We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overriding in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry.",
"Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.",
"Celebrated and its fruitful variants are powerful models to achieve excellent performance on the tasks that map sequences to sequences. However, these are many machine learning tasks with inputs naturally represented in a form of graphs, which imposes significant challenges to existing Seq2Seq models for lossless conversion from its graph form to the sequence. In this work, we present a general end-to-end approach to map the input graph to a sequence of vectors, and then another attention-based LSTM to decode the target sequence from these vectors. Specifically, to address inevitable information loss for data conversion, we introduce a novel graph-to-sequence neural network model that follows the encoder-decoder architecture. Our method first uses an improved graph-based neural network to generate the node and graph embeddings by a novel aggregation strategy to incorporate the edge direction information into the node embeddings. We also propose an attention based mechanism that aligns node embeddings and decoding sequence to better cope with large graphs. Experimental results on bAbI task, Shortest Path Task, and Natural Language Generation Task demonstrate that our model achieves the state-of-the-art performance and significantly outperforms other baselines. We also show that with the proposed aggregation strategy, our proposed model is able to quickly converge to good performance.",
"To truly understand the visual world our models should be able not only to recognize images but also generate them. To this end, there has been exciting recent progress on generating images from natural language descriptions. These methods give stunning results on limited domains such as descriptions of birds or flowers, but struggle to faithfully reproduce complex sentences with many objects and relationships. To overcome this limitation we propose a method for generating images from scene graphs, enabling explicitly reasoning about objects and their relationships. Our model uses graph convolution to process input graphs, computes a scene layout by predicting bounding boxes and segmentation masks for objects, and converts the layout to an image with a cascaded refinement network. The network is trained adversarially against a pair of discriminators to ensure realistic outputs. We validate our approach on Visual Genome and COCO-Stuff, where qualitative results, ablations, and user studies demonstrate our method's ability to generate complex images with multiple objects.",
"We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin."
]
} |
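The spectral-variant graph convolution summarized in the last abstract of the record above propagates node features through a symmetrically normalized adjacency matrix. A minimal NumPy sketch of one such layer follows; the toy graph, features, and weights are illustrative inputs, not values from any cited experiment:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer: ReLU(D^-1/2 (A + I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # D^-1/2 as a vector
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # symmetric normalization
    return np.maximum(A_norm @ X @ W, 0.0)         # linear transform + ReLU

# toy 3-node path graph with 2-d features
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.eye(3, 2)
W = np.eye(2)
H = gcn_layer(A, X, W)
print(H.shape)  # (3, 2)
```

Stacking two such layers with learned weight matrices roughly corresponds to the two-layer model used for the citation-network benchmarks mentioned in that abstract.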
1903.11279 | 2952895179 | Visually rich documents (VRDs) are ubiquitous in daily business and life. Examples are purchase receipts, insurance policy documents, custom declaration forms and so on. In VRDs, visual and layout information is critical for document understanding, and texts in such documents cannot be serialized into the one-dimensional sequence without losing information. Classic information extraction models such as BiLSTM-CRF typically operate on text sequences and do not incorporate visual features. In this paper, we introduce a graph convolution based model to combine textual and visual information presented in VRDs. Graph embeddings are trained to summarize the context of a text segment in the document, and further combined with text embeddings for entity extraction. Extensive experiments have been conducted to show that our method outperforms BiLSTM-CRF baselines by significant margins, on two real-world datasets. Additionally, ablation studies are also performed to evaluate the effectiveness of each component of our model. | Different from existing works, this paper introduces explicit edge embeddings into the graph convolution network, which models the relationship between vertices directly. Similar to @cite_26 , we apply self-attention @cite_10 to define convolution on variable-sized neighbors, and the approach is computationally efficient since the operation is parallelizable across node pairs. | {
"cite_N": [
"@cite_26",
"@cite_10"
],
"mid": [
"2766453196",
"2963403868"
],
"abstract": [
"We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods' features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems. Our GAT models have achieved or matched state-of-the-art results across four established transductive and inductive graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as a protein-protein interaction dataset (wherein test graphs remain unseen during training).",
"The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature."
]
} |
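The record above combines @cite_26-style masked attention with explicit edge embeddings: attention scores over a variable-sized neighborhood also see the features of the connecting edge. A toy sketch of that aggregation step follows; the scoring vector `w` and all feature values are made up for illustration, not taken from the paper:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(node_feats, edge_feats, neighbors, i, w):
    """Aggregate the neighbors of node i; each attention score is computed
    from [h_i ; h_j ; e_ij], so the edge embedding influences the weights."""
    scores = np.array([
        w @ np.concatenate([node_feats[i], node_feats[j], edge_feats[(i, j)]])
        for j in neighbors[i]
    ])
    alpha = softmax(scores)  # normalized over however many neighbors exist
    return sum(a * node_feats[j] for a, j in zip(alpha, neighbors[i]))

# toy graph: node 0 attends over neighbors 1 and 2
node_feats = {0: np.array([1., 0.]), 1: np.array([0., 1.]), 2: np.array([1., 1.])}
edge_feats = {(0, 1): np.array([1.]), (0, 2): np.array([0.])}
neighbors = {0: [1, 2]}
w = np.ones(5)  # scoring vector over the concatenated [h_i ; h_j ; e_ij]
h0 = attend(node_feats, edge_feats, neighbors, 0, w)
print(h0)
```

Because each node pair is scored independently, the loop over pairs is trivially parallelizable, which is the efficiency point the record makes.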
1903.11279 | 2952895179 | Visually rich documents (VRDs) are ubiquitous in daily business and life. Examples are purchase receipts, insurance policy documents, custom declaration forms and so on. In VRDs, visual and layout information is critical for document understanding, and texts in such documents cannot be serialized into the one-dimensional sequence without losing information. Classic information extraction models such as BiLSTM-CRF typically operate on text sequences and do not incorporate visual features. In this paper, we introduce a graph convolution based model to combine textual and visual information presented in VRDs. Graph embeddings are trained to summarize the context of a text segment in the document, and further combined with text embeddings for entity extraction. Extensive experiments have been conducted to show that our method outperforms BiLSTM-CRF baselines by significant margins, on two real-world datasets. Additionally, ablation studies are also performed to evaluate the effectiveness of each component of our model. | Recently, significant progress has been made in information extraction from unstructured or semi-structured text. However, most works focus on plain text documents @cite_13 @cite_5 @cite_18 @cite_25 . For information extraction from VRDs, @cite_2 , which uses a recurrent neural network (RNN) to extract entities of interest from VRDs (invoices), is the closest to our work, but does not take visual features into account. Besides, some of the studies @cite_3 @cite_23 @cite_12 in the area of document understanding deal with a similar problem to our work, and explore using visual features to aid text extraction from VRDs; however, the approaches they proposed are based on a large amount of heuristic knowledge and human-designed features, as well as limited to known templates, which are not scalable in real-world business settings. We also acknowledge the concurrent work of @cite_7 , which models 2-D documents using convolution networks.
However, there are several key differences. Our neural network architecture is graph-based, and our model operates on text segments instead of characters as in @cite_7 . | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_3",
"@cite_23",
"@cite_2",
"@cite_5",
"@cite_13",
"@cite_25",
"@cite_12"
],
"mid": [
"2295030615",
"2949370368",
"2811000613",
"2058053171",
"2748159032",
"2296283641",
"2612364175",
"2963625095",
"2087291337"
],
"abstract": [
"State-of-the-art sequence labeling systems traditionally require large amounts of task-specific knowledge in the form of hand-crafted features and data pre-processing. In this paper, we introduce a novel neutral network architecture that benefits from both word- and character-level representations automatically, by using combination of bidirectional LSTM, CNN and CRF. Our system is truly end-to-end, requiring no feature engineering or data pre-processing, thus making it applicable to a wide range of sequence labeling tasks. We evaluate our system on two data sets for two sequence labeling tasks --- Penn Treebank WSJ corpus for part-of-speech (POS) tagging and CoNLL 2003 corpus for named entity recognition (NER). We obtain state-of-the-art performance on both the two data --- 97.55 accuracy for POS tagging and 91.21 F1 for NER.",
"We introduce a novel type of text representation that preserves the 2D layout of a document. This is achieved by encoding each document page as a two-dimensional grid of characters. Based on this representation, we present a generic document understanding pipeline for structured documents. This pipeline makes use of a fully convolutional encoder-decoder network that predicts a segmentation mask and bounding boxes. We demonstrate its capabilities on an information extraction task from invoices and show that it significantly outperforms approaches based on sequential text or document images.",
"In this paper, we present an incremental frame-work for extracting information fields from administrative documents. First, we demonstrate some limits of the existing state-of-the-art methods such as the delay of the system efficiency. This is a concern in industrial context when we have only few samples of each document class. Based on this analysis, we propose a hybrid system combining incremental learning by means of itf-df statistics and a-priori generic models. We report in the experimental section our results obtained with a dataset of real invoices.",
"We propose an approach for information extraction for multi-page printed document understanding. The approach is designed for scenarios in which the set of possible document classes, i.e., documents sharing similar content and layout, is large and may evolve over time. Describing a new class is a very simple task: the operator merely provides a few samples and then, by means of a GUI, clicks on the OCR-generated blocks of a document containing the information to be extracted. Our approach is based on probability: we derived a general form for the probability that a sequence of blocks contains the searched information. We estimate the parameters for a new class by applying the maximum likelihood method to the samples of the class. All these parameters depend only on block properties that can be extracted automatically from the operator actions on the GUI. Processing a document of a given class consists in finding the sequence of blocks, which maximizes the corresponding probability for that class. We evaluated experimentally our proposal using 807 multi-page printed documents of different domains (invoices, patents, data-sheets), obtaining very good results—e.g., a success rate often greater than 90 even for classes with just two samples.",
"We present CloudScan; an invoice analysis system that requires zero configuration or upfront annotation. In contrast to previous work, CloudScan does not rely on templates of invoice layout, instead it learns a single global model of invoices that naturally generalizes to unseen invoice layouts. The model is trained using data automatically extracted from end-user provided feedback. This automatic training data extraction removes the requirement for users to annotate the data precisely. We describe a recurrent neural network model that can capture long range context and compare it to a baseline logistic regression model corresponding to the current CloudScan production system. We train and evaluate the system on 8 important fields using a dataset of 326,471 invoices. The recurrent neural network and baseline model achieve 0.891 and 0.887 average F1 scores respectively on seen invoice layouts. For the harder task of unseen invoice layouts, the recurrent neural network model outperforms the baseline with 0.840 average F1 compared to 0.788.",
"Comunicacio presentada a la 2016 Conference of the North American Chapter of the Association for Computational Linguistics, celebrada a San Diego (CA, EUA) els dies 12 a 17 de juny 2016.",
"Past work in relation extraction has focused on binary relations in single sentences. Recent NLP inroads in high-value domains have sparked interest in the more general setting of extracting n-ary relations that span multiple sentences. In this paper, we explore a general relation extraction framework based on graph long short-term memory networks (graph LSTMs) that can be easily extended to cross-sentence n-ary relation extraction. The graph formulation provides a unified way of exploring different LSTM approaches and incorporating various intra-sentential and inter-sentential dependencies, such as sequential, syntactic, and discourse relations. A robust contextual representation is learned for the entities, which serves as input to the relation classifier. This simplifies handling of relations with arbitrary arity, and enables multi-task learning with related relations. We evaluate this framework in two important precision medicine settings, demonstrating its effectiveness with both conventional supervised learning and distant supervision. Cross-sentence extraction produced larger knowledge bases. and multi-task learning significantly improved extraction accuracy. A thorough analysis of various LSTM approaches yielded useful insight the impact of linguistic analysis on extraction accuracy.",
"Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance. In this paper, we present a novel neural network architecture that automatically detects word- and character-level features using a hybrid bidirectional LSTM and CNN architecture, eliminating the need for most feature engineering. We also propose a novel method of encoding partial lexicon matches in neural networks and compare it to existing approaches. Extensive evaluation shows that, given only tokenized text and publicly available word embeddings, our system is competitive on the CoNLL-2003 dataset and surpasses the previously reported state of the art performance on the OntoNotes 5.0 dataset by 2.13 F1 points. By using two lexicons constructed from publicly-available sources, we establish new state of the art performance with an F1 score of 91.62 on CoNLL-2003 and 86.28 on OntoNotes, surpassing systems that employ heavy feature engineering, proprietary lexicons, and rich entity linking information.",
"In this paper we present an incremental framework aimed at extracting field information from administrative document images in the context of a Digital Mail-room scenario. Given a single training sample in which the user has marked which fields have to be extracted from a particular document class, a document model representing structural relationships among words is built. This model is incrementally refined as the system processes more and more documents from the same class. A reformulation of the tf-idf statistic scheme allows to adjust the importance weights of the structural relationships among words. We report in the experimental section our results obtained with a large dataset of real invoices."
]
} |
1903.11279 | 2952895179 | Visually rich documents (VRDs) are ubiquitous in daily business and life. Examples are purchase receipts, insurance policy documents, custom declaration forms and so on. In VRDs, visual and layout information is critical for document understanding, and texts in such documents cannot be serialized into the one-dimensional sequence without losing information. Classic information extraction models such as BiLSTM-CRF typically operate on text sequences and do not incorporate visual features. In this paper, we introduce a graph convolution based model to combine textual and visual information presented in VRDs. Graph embeddings are trained to summarize the context of a text segment in the document, and further combined with text embeddings for entity extraction. Extensive experiments have been conducted to show that our method outperforms BiLSTM-CRF baselines by significant margins, on two real-world datasets. Additionally, ablation studies are also performed to evaluate the effectiveness of each component of our model. | Besides, information extraction based on the graph structure has been developed most recently. @cite_13 @cite_17 present a graph LSTM to capture various dependencies among the input words and @cite_0 designs a novel graph schema to extract entities and relations jointly. However, their models are not concerned with visual information directly. | {
"cite_N": [
"@cite_0",
"@cite_13",
"@cite_17"
],
"mid": [
"2964273534",
"2612364175",
"2951403367"
],
"abstract": [
"Joint extraction of entities and relations is an important task in information extraction. To tackle this problem, we firstly propose a novel tagging scheme that can convert the joint extraction task to a tagging problem. Then, based on our tagging scheme, we study different end-toend models to extract entities and their relations directly, without identifying entities and relations separately. We conduct experiments on a public dataset produced by distant supervision method and the experimental results show that the tagging based methods are better than most of the existing pipelined and joint learning methods. What’s more, the end-to-end model proposed in this paper, achieves the best results on the public dataset.",
"Past work in relation extraction has focused on binary relations in single sentences. Recent NLP inroads in high-value domains have sparked interest in the more general setting of extracting n-ary relations that span multiple sentences. In this paper, we explore a general relation extraction framework based on graph long short-term memory networks (graph LSTMs) that can be easily extended to cross-sentence n-ary relation extraction. The graph formulation provides a unified way of exploring different LSTM approaches and incorporating various intra-sentential and inter-sentential dependencies, such as sequential, syntactic, and discourse relations. A robust contextual representation is learned for the entities, which serves as input to the relation classifier. This simplifies handling of relations with arbitrary arity, and enables multi-task learning with related relations. We evaluate this framework in two important precision medicine settings, demonstrating its effectiveness with both conventional supervised learning and distant supervision. Cross-sentence extraction produced larger knowledge bases. and multi-task learning significantly improved extraction accuracy. A thorough analysis of various LSTM approaches yielded useful insight the impact of linguistic analysis on extraction accuracy.",
"Cross-sentence @math -ary relation extraction detects relations among @math entities across multiple sentences. Typical methods formulate an input as a , integrating various intra-sentential and inter-sentential dependencies. The current state-of-the-art method splits the input graph into two DAGs, adopting a DAG-structured LSTM for each. Though being able to model rich linguistic knowledge by leveraging graph edges, important information can be lost in the splitting procedure. We propose a graph-state LSTM model, which uses a parallel state to model each word, recurrently enriching state values via message passing. Compared with DAG LSTMs, our graph LSTM keeps the original graph structure, and speeds up computation by allowing more parallelization. On a standard benchmark, our model shows the best result in the literature."
]
} |
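The tagging scheme in the first abstract of the record above reduces joint extraction to sequence labeling. Recovering entity spans from predicted tags, the final step of any such tagger, can be sketched for the common BIO convention (the convention is an assumption here, not necessarily the cited paper's exact tag set):

```python
def bio_spans(tags):
    """Decode BIO tags into (label, start, end) spans, end exclusive."""
    spans, start, label = [], None, None
    for i, tag in enumerate(tags + ["O"]):           # sentinel flushes the last span
        if tag.startswith("B-") or tag == "O":
            if start is not None:                    # close the span in progress
                spans.append((label, start, i))
                start, label = None, None
        if tag.startswith("B-"):
            start, label = i, tag[2:]
        elif tag.startswith("I-") and start is None:  # tolerate a stray I- tag
            start, label = i, tag[2:]
    return spans

print(bio_spans(["B-PER", "I-PER", "O", "B-LOC"]))
# → [('PER', 0, 2), ('LOC', 3, 4)]
```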
1903.11329 | 2923913124 | We propose a new objective, the counterfactual objective, unifying existing objectives for off-policy policy gradient algorithms in the continuing reinforcement learning (RL) setting. Compared to the commonly used excursion objective, which can be misleading about the performance of the target policy when deployed, our new objective better predicts such performance. We prove the Generalized Off-Policy Policy Gradient Theorem to compute the policy gradient of the counterfactual objective and use an emphatic approach to get an unbiased sample from this policy gradient, yielding the Generalized Off-Policy Actor-Critic (Geoff-PAC) algorithm. We demonstrate the merits of Geoff-PAC over existing algorithms in Mujoco robot simulation tasks, the first empirical success of emphatic algorithms in prevailing deep RL benchmarks. | There have been many applications of OPPG, e.g., DPG , DDPG , ACER , EPG , and IMPALA . Particularly, @cite_3 propose IPG to unify on- and off-policy policy gradients. IPG is a mix of the from the on-policy objective and the excursion objective. To compute the gradients of the on-policy objective, IPG does need on-policy samples. In this paper, the counterfactual objective is a mix of , and we do not need on-policy samples to compute the policy gradient of the counterfactual objective. Mixing @math and @math directly in IPG-style is a possibility for future work. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2620671107"
],
"abstract": [
"Off-policy model-free deep reinforcement learning methods using previously collected data can improve sample efficiency over on-policy policy gradient techniques. On the other hand, on-policy algorithms are often more stable and easier to use. This paper examines, both theoretically and empirically, approaches to merging on- and off-policy updates for deep reinforcement learning. Theoretical results show that off-policy updates with a value function estimator can be interpolated with on-policy policy gradient updates whilst still satisfying performance bounds. Our analysis uses control variate methods to produce a family of policy gradient algorithms, with several recently proposed algorithms being special cases of this family. We then provide an empirical comparison of these techniques with the remaining algorithmic details fixed, and show how different mixing of off-policy gradient estimates with on-policy samples contribute to improvements in empirical performance. The final algorithm provides a generalization and unification of existing deep policy gradient techniques, has theoretical guarantees on the bias introduced by off-policy updates, and improves on the state-of-the-art model-free deep RL methods on a number of OpenAI Gym continuous control benchmarks."
]
} |
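The IPG idea discussed above interpolates on-policy and off-policy gradient estimates; at its core this is a convex combination controlled by a mixing scalar. A sketch follows, where the scalar `nu` and both gradient arguments are placeholders standing in for the paper's actual estimators:

```python
import numpy as np

def interpolated_gradient(g_on, g_off, nu):
    """IPG-style mix: (1 - nu) * on-policy gradient + nu * off-policy gradient."""
    return (1.0 - nu) * np.asarray(g_on) + nu * np.asarray(g_off)

g = interpolated_gradient([1.0, 0.0], [0.0, 1.0], nu=0.25)
print(g)  # [0.75 0.25]
```

Setting `nu = 0` or `nu = 1` recovers the pure on-policy or pure off-policy update as special cases, which is how the family view in the abstract arises.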
1903.11329 | 2923913124 | We propose a new objective, the counterfactual objective, unifying existing objectives for off-policy policy gradient algorithms in the continuing reinforcement learning (RL) setting. Compared to the commonly used excursion objective, which can be misleading about the performance of the target policy when deployed, our new objective better predicts such performance. We prove the Generalized Off-Policy Policy Gradient Theorem to compute the policy gradient of the counterfactual objective and use an emphatic approach to get an unbiased sample from this policy gradient, yielding the Generalized Off-Policy Actor-Critic (Geoff-PAC) algorithm. We demonstrate the merits of Geoff-PAC over existing algorithms in Mujoco robot simulation tasks, the first empirical success of emphatic algorithms in prevailing deep RL benchmarks. | There have been other policy-based off-policy algorithms. @cite_8 provide an unbiased sample for @math , assuming the value function is linear. Theoretical results are provided without empirical study. @cite_13 eliminate the linear assumption and provide a thorough empirical study. We therefore conduct our comparison with @cite_13 instead of @cite_8 . In another line of work, the policy entropy is used for reward shaping. The target policy can then be derived from the value function directly . This line of work includes the deep energy-based RL , where a value function is learned off-policy and the policy is derived from the value function directly, and path consistency learning , where gradients are computed to satisfy certain path consistencies. This line of work is orthogonal to this paper, where we compute the policy gradients of a given objective directly in an off-policy manner. | {
"cite_N": [
"@cite_13",
"@cite_8"
],
"mid": [
"2963744705",
"2788366696"
],
"abstract": [
"Policy gradient methods are widely used for control in reinforcement learning, particularly for the continuous action setting. There have been a host of theoretically sound algorithms proposed for the on-policy setting, due to the existence of the policy gradient theorem which provides a simplified form for the gradient. In off-policy learning, however, where the behaviour policy is not necessarily attempting to learn and follow the optimal policy for the given task, the existence of such a theorem has been elusive. In this work, we solve this open problem by providing the first off-policy policy gradient theorem. The key to the derivation is the use of emphatic weightings. We develop a new actor-critic algorithm---called Actor Critic with Emphatic weightings (ACE)---that approximates the simplified gradients provided by the theorem. We demonstrate in a simple counterexample that previous off-policy policy gradient methods---particularly OffPAC and DPG---converge to the wrong solution whereas ACE finds the optimal solution.",
"We present the first class of policy-gradient algorithms that work with both state-value and policy function-approximation, and are guaranteed to converge under off-policy training. Our solution targets problems in reinforcement learning where the action representation adds to the-curse-of-dimensionality; that is, with continuous or large action sets, thus making it infeasible to estimate state-action value functions (Q functions). Using state-value functions helps to lift the curse and as a result naturally turn our policy-gradient solution into classical Actor-Critic architecture whose Actor uses state-value function for the update. Our algorithms, Gradient Actor-Critic and Emphatic Actor-Critic, are derived based on the exact gradient of averaged state-value function objective and thus are guaranteed to converge to its optimal solution, while maintaining all the desirable properties of classical Actor-Critic methods with no additional hyper-parameters. To our knowledge, this is the first time that convergent off-policy learning methods have been extended to classical Actor-Critic methods with function approximation."
]
} |
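Both abstracts above correct off-policy samples toward the target policy. The basic importance-sampling ingredient, scaling a sampled gradient by the policy ratio, can be sketched as follows (the probabilities and gradient values are placeholders, and real emphatic algorithms add further weighting on top of this):

```python
def is_weighted_grad(grad_sample, pi_prob, mu_prob):
    """Scale an off-policy gradient sample by rho = pi(a|s) / mu(a|s),
    the importance-sampling ratio between target and behaviour policies."""
    rho = pi_prob / mu_prob
    return [rho * g for g in grad_sample]

print(is_weighted_grad([0.2, -0.4], pi_prob=0.5, mu_prob=0.25))  # [0.4, -0.8]
```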
1903.10972 | 2923890923 | Following recent successes in applying BERT to question answering, we explore simple applications to ad hoc document retrieval. This required confronting the challenge posed by documents that are typically longer than the length of input BERT was designed to handle. We address this issue by applying inference on sentences individually, and then aggregating sentence scores to produce document scores. Experiments on TREC microblog and newswire test collections show that our approach is simple yet effective, as we report the highest average precision on these datasets by neural approaches that we are aware of. | However, there are aspects of the task worth discussing. Researchers have understood for a few years now that relevance matching and semantic matching (for example, paraphrase detection, natural language inference, etc.) are different tasks, despite shared common characteristics @cite_1 . The first task has a heavier dependence on exact match (i.e., "one-hot") signals, whereas the second task generally requires models to more accurately capture semantics. Question answering has elements of both, but nevertheless remains a different task from document retrieval. Due to these task differences, neural models for document ranking, for example, DRMM @cite_1 , are quite different architecturally from neural models for capturing similarity; see, for example, the survey of . | {
"cite_N": [
"@cite_1"
],
"mid": [
"2536015822"
],
"abstract": [
"In recent years, deep neural networks have led to exciting breakthroughs in speech recognition, computer vision, and natural language processing (NLP) tasks. However, there have been few positive results of deep models on ad-hoc retrieval tasks. This is partially due to the fact that many important characteristics of the ad-hoc retrieval task have not been well addressed in deep models yet. Typically, the ad-hoc retrieval task is formalized as a matching problem between two pieces of text in existing work using deep models, and treated equivalent to many NLP tasks such as paraphrase identification, question answering and automatic conversation. However, we argue that the ad-hoc retrieval task is mainly about relevance matching while most NLP matching tasks concern semantic matching, and there are some fundamental differences between these two matching tasks. Successful relevance matching requires proper handling of the exact matching signals, query term importance, and diverse matching requirements. In this paper, we propose a novel deep relevance matching model (DRMM) for ad-hoc retrieval. Specifically, our model employs a joint deep architecture at the query term level for relevance matching. By using matching histogram mapping, a feed forward matching network, and a term gating network, we can effectively deal with the three relevance matching factors mentioned above. Experimental results on two representative benchmark collections show that our model can significantly outperform some well-known retrieval models as well as state-of-the-art deep matching models."
]
} |
1903.10995 | 2924268550 | Autonomous vehicles are more likely to be accepted if they drive accurately, comfortably, but also similar to how human drivers would. This is especially true when autonomous and human-driven vehicles need to share the same road. The main research focus thus far, however, is still on improving driving accuracy only. This paper formalizes the three concerns with the aim of accurate, comfortable and human-like driving. Three contributions are made in this paper. First, numerical map data from HERE Technologies are employed for more accurate driving; a set of map features which are believed to be relevant to driving are engineered to navigate better. Second, the learning procedure is improved from a pointwise prediction to a sequence-based prediction and passengers' comfort measures are embedded into the learning algorithm. Finally, we take advantage of the advances in adversary learning to learn human-like driving; specifically, the standard L1 or L2 loss is augmented by an adversary loss which is based on a discriminator trained to distinguish between human driving and machine driving. Our model is trained and evaluated on the Drive360 dataset, which features 60 hours and 3000 km of real-world driving data. Extensive experiments show that our driving model is more accurate, more comfortable and behaves more like a human driver than previous methods. The resources of this work will be released on the project page. | . Significant progress has been made in autonomous driving in the last few years. Classical approaches require the recognition of all driving-relevant objects, such as lanes, traffic signs, traffic lights, cars and pedestrians, and then perform motion planning, which is further used for final vehicle control @cite_49 . These types of systems are sophisticated and represent the current state-of-the-art for autonomous driving, but they are hard to maintain and prone to error accumulation over the pipeline. | {
"cite_N": [
"@cite_49"
],
"mid": [
"2121806728"
],
"abstract": [
"Boss is an autonomous vehicle that uses on-board sensors (global positioning system, lasers, radars, and cameras) to track other vehicles, detect static obstacles, and localize itself relative to a road model. A three-layer planning system combines mission, behavioral, and motion planning to drive in urban environments. The mission planning layer considers which street to take to achieve a mission goal. The behavioral layer determines when to change lanes and precedence at intersections and performs error recovery maneuvers. The motion planning layer selects actions to avoid obstacles while making progress toward local goals. The system was developed from the ground up to address the requirements of the DARPA Urban Challenge using a spiral system development process with a heavy emphasis on regular, regressive system testing. During the National Qualification Event and the 85-km Urban Challenge Final Event, Boss demonstrated some of its capabilities, qualifying first and winning the challenge. © 2008 Wiley Periodicals, Inc."
]
} |
1903.10995 | 2924268550 | Autonomous vehicles are more likely to be accepted if they drive accurately, comfortably, but also similar to how human drivers would. This is especially true when autonomous and human-driven vehicles need to share the same road. The main research focus thus far, however, is still on improving driving accuracy only. This paper formalizes the three concerns with the aim of accurate, comfortable and human-like driving. Three contributions are made in this paper. First, numerical map data from HERE Technologies are employed for more accurate driving; a set of map features which are believed to be relevant to driving are engineered to navigate better. Second, the learning procedure is improved from a pointwise prediction to a sequence-based prediction and passengers' comfort measures are embedded into the learning algorithm. Finally, we take advantage of the advances in adversary learning to learn human-like driving; specifically, the standard L1 or L2 loss is augmented by an adversary loss which is based on a discriminator trained to distinguish between human driving and machine driving. Our model is trained and evaluated on the Drive360 dataset, which features 60 hours and 3000 km of real-world driving data. Extensive experiments show that our driving model is more accurate, more comfortable and behaves more like a human driver than previous methods. The resources of this work will be released on the project page. | End-to-end mapping methods on the other hand construct a direct mapping from the sensory input to the maneuvers. The idea can be traced back to the 1980s @cite_35 . Other more recent end-to-end examples include @cite_1 @cite_28 @cite_50 @cite_14 @cite_31 @cite_40 @cite_36 @cite_16 . In @cite_28 , the authors trained an end-to-end method with a collection of front-facing videos. 
The idea was extended later on by using a larger video dataset @cite_50 , by adding side tasks to regularize the training @cite_50 @cite_16 , by introducing directional commands @cite_40 and route planners @cite_36 to indicate the destination, by using multiple surround-view cameras to extend the visual field @cite_36 , by adding synthesized off-the-road scenarios @cite_0 , and by adding modules to predict when the model fails @cite_22 . The main contributions of this work, namely using numerical map data, incorporating ride comfort measures, and rendering human-like driving in an end-to-end learning framework, are complementary to all methods developed before. | {
"cite_N": [
"@cite_35",
"@cite_14",
"@cite_22",
"@cite_28",
"@cite_36",
"@cite_1",
"@cite_0",
"@cite_40",
"@cite_50",
"@cite_31",
"@cite_16"
],
"mid": [
"",
"2798873012",
"2964003311",
"2342840547",
"2887286974",
"2133233905",
"2905173465",
"",
"2559767995",
"2963580221",
"2963448286"
],
"abstract": [
"",
"Learning autonomous-driving policies is one of the most challenging but promising tasks for computer vision. Most researchers believe that future research and applications should combine cameras, video recorders and laser scanners to obtain comprehensive semantic understanding of real traffic. However, current approaches only learn from large-scale videos, due to the lack of benchmarks that consist of precise laser-scanner data. In this paper, we are the first to propose a LiDAR-Video dataset, which provides large-scale high-quality point clouds scanned by a Velodyne laser, videos recorded by a dashboard camera and standard drivers' behaviors. Extensive experiments demonstrate that extra depth information help networks to determine driving policies indeed.",
"The primary focus of autonomous driving research is to improve driving accuracy. While great progress has been made, state-of-the-art algorithms still fail at times. Such failures may have catastrophic consequences. It therefore is important that automated cars foresee problems ahead as early as possible. This is also of paramount importance if the driver will be asked to take over. We conjecture that failures do not occur randomly. For instance, driving models may fail more likely at places with heavy traffic, at complex intersections, and/or under adverse weather/illumination conditions. This work presents a method to learn to predict the occurrence of these failures, i.e., to assess how difficult a scene is to a given driving model and to possibly give the human driver an early heads-up. A camera-based driving model is developed and trained over real driving datasets. The discrepancies between the model's predictions and the human 'ground-truth' maneuvers were then recorded, to yield the 'failure' scores. Experimental results show that the failure score can indeed be learned and predicted. Thus, our prediction method is able to improve the overall safety of an automated driving model by alerting the human driver timely, leading to better human-vehicle collaborative driving.",
"We trained a convolutional neural network (CNN) to map raw pixels from a single front-facing camera directly to steering commands. This end-to-end approach proved surprisingly powerful. With minimum training data from humans the system learns to drive in traffic on local roads with or without lane markings and on highways. It also operates in areas with unclear visual guidance such as in parking lots and on unpaved roads. The system automatically learns internal representations of the necessary processing steps such as detecting useful road features with only the human steering angle as the training signal. We never explicitly trained it to detect, for example, the outline of roads. Compared to explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously. We argue that this will eventually lead to better performance and smaller systems. Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection. Such criteria understandably are selected for ease of human interpretation which doesn't automatically guarantee maximum system performance. Smaller networks are possible because the system learns to solve the problem with the minimal number of processing steps. We used an NVIDIA DevBox and Torch 7 for training and an NVIDIA DRIVE(TM) PX self-driving car computer also running Torch 7 for determining where to drive. The system operates at 30 frames per second (FPS).",
"For human drivers, having rear and side-view mirrors is vital for safe driving. They deliver a more complete view of what is happening around the car. Human drivers also heavily exploit their mental map for navigation. Nonetheless, several methods have been published that learn driving models with only a front-facing camera and without a route planner. This lack of information renders the self-driving task quite intractable. We investigate the problem in a more realistic setting, which consists of a surround-view camera system with eight cameras, a route planner, and a CAN bus reader. In particular, we develop a sensor setup that provides data for a 360-degree view of the area surrounding the vehicle, the driving route to the destination, and low-level driving maneuvers (e.g. steering angle and speed) by human drivers. With such a sensor setup we collect a new driving dataset, covering diverse driving scenarios and varying weather illumination conditions. Finally, we learn a novel driving model by integrating information from the surround-view cameras and the route planner. Two route planners are exploited: (1) by representing the planned routes on OpenStreetMap as a stack of GPS coordinates, and (2) by rendering the planned routes on TomTom Go Mobile and recording the progression into a video. Our experiments show that: (1) 360-degree surround-view cameras help avoid failures made with a single front-view camera, in particular for city driving and intersection scenarios; and (2) route planners help the driving task significantly, especially for steering angle prediction. Code, data and more visual results will be made available at http: www.vision.ee.ethz.ch heckers Drive360.",
"We describe a vision-based obstacle avoidance system for off-road mobile robots. The system is trained from end to end to map raw input images to steering angles. It is trained in supervised mode to predict the steering angles provided by a human driver during training runs collected in a wide variety of terrains, weather conditions, lighting conditions, and obstacle types. The robot is a 50cm off-road truck, with two forward-pointing wireless color cameras. A remote computer processes the video and controls the robot via radio. The learning system is a large 6-layer convolutional network whose input is a single left/right pair of unprocessed low-resolution images. The robot exhibits an excellent ability to detect obstacles and navigate around them in real time at speeds of 2 m/s.",
"Our goal is to train a policy for autonomous driving via imitation learning that is robust enough to drive a real vehicle. We find that standard behavior cloning is insufficient for handling complex driving scenarios, even when we leverage a perception system for preprocessing the input and a controller for executing the output on the car: 30 million examples are still not enough. We propose exposing the learner to synthesized data in the form of perturbations to the expert's driving, which creates interesting situations such as collisions and/or going off the road. Rather than purely imitating all data, we augment the imitation loss with additional losses that penalize undesirable events and encourage progress -- the perturbations then provide an important signal for these losses and lead to robustness of the learned model. We show that the ChauffeurNet model can handle complex situations in simulation, and present ablation experiments that emphasize the importance of each of our proposed changes and show that the model is responding to the appropriate causal factors. Finally, we demonstrate the model driving a car in the real world.",
"",
"Robust perception-action models should be learned from training data with diverse visual appearances and realistic behaviors, yet current approaches to deep visuomotor policy learning have been generally limited to in-situ models learned from a single vehicle or simulation environment. We advocate learning a generic vehicle motion model from large scale crowd-sourced video data, and develop an end-to-end trainable architecture for learning to predict a distribution over future vehicle egomotion from instantaneous monocular camera observations and previous vehicle state. Our model incorporates a novel FCN-LSTM architecture, which can be learned from large-scale crowd-sourced vehicle action data, and leverages available scene segmentation side tasks to improve performance under a privileged learning paradigm. We provide a novel large-scale dataset of crowd-sourced driving behavior suitable for training our model, and report results predicting the driver action on held out sequences across diverse conditions.",
"Event cameras are bio-inspired vision sensors that naturally capture the dynamics of a scene, filtering out redundant information. This paper presents a deep neural network approach that unlocks the potential of event cameras on a challenging motion-estimation task: prediction of a vehicle's steering angle. To make the best out of this sensor-algorithm combination, we adapt state-of-the-art convolutional architectures to the output of event sensors and extensively evaluate the performance of our approach on a publicly available large scale event-camera dataset ( 1000 km). We present qualitative and quantitative explanations of why event cameras allow robust steering prediction even in cases where traditional cameras fail, e.g. challenging illumination conditions and fast motion. Finally, we demonstrate the advantages of leveraging transfer learning from traditional to event-based vision, and show that our approach outperforms state-of-the-art algorithms based on standard cameras.",
""
]
} |
1903.10995 | 2924268550 | Autonomous vehicles are more likely to be accepted if they drive accurately, comfortably, but also similar to how human drivers would. This is especially true when autonomous and human-driven vehicles need to share the same road. The main research focus thus far, however, is still on improving driving accuracy only. This paper formalizes the three concerns with the aim of accurate, comfortable and human-like driving. Three contributions are made in this paper. First, numerical map data from HERE Technologies are employed for more accurate driving; a set of map features which are believed to be relevant to driving are engineered to navigate better. Second, the learning procedure is improved from a pointwise prediction to a sequence-based prediction and passengers' comfort measures are embedded into the learning algorithm. Finally, we take advantage of the advances in adversary learning to learn human-like driving; specifically, the standard L1 or L2 loss is augmented by an adversary loss which is based on a discriminator trained to distinguish between human driving and machine driving. Our model is trained and evaluated on the Drive360 dataset, which features 60 hours and 3000 km of real-world driving data. Extensive experiments show that our driving model is more accurate, more comfortable and behaves more like a human driver than previous methods. The resources of this work will be released on the project page. | There are also methods dedicated to robust transfer of driving policies from a synthetic domain to the real world domain @cite_46 @cite_4 . Some other works study how to better evaluate the learned driving models @cite_21 @cite_3 . Those works are complementary to our work. Other contributions have chosen the middle ground between traditional pipe-lined methods and the monolithic end-to-end approach. They learn driving models from compact intermediate representations called affordance indicators such as and @cite_10 @cite_47 . 
Our engineered features from numerical maps can be considered as some sort of affordance indicators. Recently, reinforcement learning for driving has received increased attention @cite_12 @cite_20 @cite_13 . The trend is especially fueled by the release of multiple driving simulators @cite_34 @cite_32 . | {
"cite_N": [
"@cite_13",
"@cite_47",
"@cite_4",
"@cite_21",
"@cite_32",
"@cite_3",
"@cite_34",
"@cite_46",
"@cite_20",
"@cite_10",
"@cite_12"
],
"mid": [
"2968983352",
"2963727600",
"2968095971",
"2328067583",
"2962867954",
"2890235476",
"2615547864",
"2962696750",
"2583993537",
"2119112357",
"2530849036"
],
"abstract": [
"We demonstrate the first application of deep reinforcement learning to autonomous driving. From randomly initialised parameters, our model is able to learn a policy for lane following in a handful of training episodes using a single monocular image as input. We provide a general and easy to obtain reward: the distance travelled by the vehicle without the safety driver taking control. We use a continuous, model-free deep reinforcement learning algorithm, with all exploration and optimisation performed on-vehicle. This demonstrates a new framework for autonomous driving which moves away from reliance on defined logical rules, mapping, and direct supervision. We discuss the challenges and opportunities to scale this approach to a broader range of autonomous driving tasks.",
"",
"Simulation can be a powerful tool for understanding machine learning systems and designing methods to solve real-world problems. Training and evaluating methods purely in simulation is often “doomed to succeed” at the desired task in a simulated environment, but the resulting models are incapable of operation in the real world. Here we present and evaluate a method for transferring a vision-based lane following driving policy from simulation to operation on a rural road without any real-world labels. Our approach leverages recent advances in image-to-image translation to achieve domain transfer while jointly learning a single-camera control policy from simulation control labels. We assess the driving performance of this method using both open-loop regression metrics, and closed-loop performance operating an autonomous vehicle on rural and urban roads.",
"Software testing is all too often simply a bug hunt rather than a well-considered exercise in ensuring quality. A more methodical approach than a simple cycle of system-level test-fail-patch-test will be required to deploy safe autonomous vehicles at scale. The ISO 26262 development V process sets up a framework that ties each type of testing to a corresponding design or requirement document, but presents challenges when adapted to deal with the sorts of novel testing problems that face autonomous vehicles. This paper identifies five major challenge areas in testing according to the V model for autonomous vehicles: driver out of the loop, complex requirements, non-deterministic algorithms, inductive learning algorithms, and fail-operational systems. General solution approaches that seem promising across these different challenge areas include: phased deployment using successively relaxed operational scenarios, use of a monitor/actuator pair architecture to separate the most complex autonomy functions from simpler safety functions, and fault injection as a way to perform more efficient edge case testing. While significant challenges remain in safety-certifying the type of algorithms that provide high-level autonomy themselves, it seems within reach to instead architect the system and its accompanying design process to be able to employ existing software safety approaches.",
"",
"Autonomous driving models should ideally be evaluated by deploying them on a fleet of physical vehicles in the real world. Unfortunately, this approach is not practical for the vast majority of researchers. An attractive alternative is to evaluate models offline, on a pre-collected validation dataset with ground truth annotation. In this paper, we investigate the relation between various online and offline metrics for evaluation of autonomous driving models. We find that offline prediction error is not necessarily correlated with driving quality, and two models with identical prediction error can differ dramatically in their driving performance. We show that the correlation of offline evaluation with driving quality can be significantly improved by selecting an appropriate validation dataset and suitable offline metrics.",
"Developing and testing algorithms for autonomous vehicles in real world is an expensive and time consuming process. Also, in order to utilize recent advances in machine intelligence and deep learning we need to collect a large amount of annotated training data in a variety of conditions and environments. We present a new simulator built on Unreal Engine that offers physically and visually realistic simulations for both of these goals. Our simulator includes a physics engine that can operate at a high frequency for real-time hardware-in-the-loop (HITL) simulations with support for popular protocols (e.g. MavLink). The simulator is designed from the ground up to be extensible to accommodate new types of vehicles, hardware platforms and software protocols. In addition, the modular design enables various components to be easily usable independently in other projects. We demonstrate the simulator by first implementing a quadrotor as an autonomous vehicle and then experimentally comparing the software components with real-world flights.",
"",
"Reinforcement learning is considered to be a strong AI paradigm which can be used to teach machines through interaction with the environment and learning from their mistakes. Despite its perceived utility, it has not yet been successfully applied in automotive applications. Motivated by the successful demonstrations of learning of Atari games and Go by Google DeepMind, we propose a framework for autonomous driving using deep reinforcement learning. This is of particular relevance as it is difficult to pose autonomous driving as a supervised learning problem due to strong interactions with the environment including other vehicles, pedestrians and roadworks. As it is a relatively new area of research for autonomous driving, we provide a short overview of deep reinforcement learning and then describe our proposed framework. It incorporates Recurrent Neural Networks for information integration, enabling the car to handle partially observable scenarios. It also integrates the recent work on attention models to focus on relevant information, thereby reducing the computational complexity for deployment on embedded hardware. The framework was tested in an open source 3D car racing simulator called TORCS. Our simulation results demonstrate learning of autonomous maneuvering in a scenario of complex road curvatures and simple interaction of other vehicles.",
"Today, there are two major paradigms for vision-based autonomous driving systems: mediated perception approaches that parse an entire scene to make a driving decision, and behavior reflex approaches that directly map an input image to a driving action by a regressor. In this paper, we propose a third paradigm: a direct perception approach to estimate the affordance for driving. We propose to map an input image to a small number of key perception indicators that directly relate to the affordance of a road traffic state for driving. Our representation provides a set of compact yet complete descriptions of the scene to enable a simple controller to drive autonomously. Falling in between the two extremes of mediated perception and behavior reflex, we argue that our direct perception representation provides the right level of abstraction. To demonstrate this, we train a deep Convolutional Neural Network using recording from 12 hours of human driving in a video game and show that our model can work well to drive a car in a very diverse set of virtual environments. We also train a model for car distance estimation on the KITTI dataset. Results show that our direct perception approach can generalize well to real driving images. Source code and data are available on our project website.",
"Autonomous driving is a multi-agent setting where the host vehicle must apply sophisticated negotiation skills with other road users when overtaking, giving way, merging, taking left and right turns and while pushing ahead in unstructured urban roadways. Since there are many possible scenarios, manually tackling all possible cases will likely yield a too simplistic policy. Moreover, one must balance between unexpected behavior of other drivers pedestrians and at the same time not to be too defensive so that normal traffic flow is maintained. In this paper we apply deep reinforcement learning to the problem of forming long term driving strategies. We note that there are two major challenges that make autonomous driving different from other robotic tasks. First, is the necessity for ensuring functional safety - something that machine learning has difficulty with given that performance is optimized at the level of an expectation over many instances. Second, the Markov Decision Process model often used in robotics is problematic in our case because of unpredictable behavior of other agents in this multi-agent scenario. We make three contributions in our work. First, we show how policy gradient iterations can be used without Markovian assumptions. Second, we decompose the problem into a composition of a Policy for Desires (which is to be learned) and trajectory planning with hard constraints (which is not learned). The goal of Desires is to enable comfort of driving, while hard constraints guarantees the safety of driving. Third, we introduce a hierarchical temporal abstraction we call an \"Option Graph\" with a gating mechanism that significantly reduces the effective horizon and thereby reducing the variance of the gradient estimation even further."
]
} |
1903.10995 | 2924268550 | Autonomous vehicles are more likely to be accepted if they drive accurately, comfortably, but also similar to how human drivers would. This is especially true when autonomous and human-driven vehicles need to share the same road. The main research focus thus far, however, is still on improving driving accuracy only. This paper formalizes the three concerns with the aim of accurate, comfortable and human-like driving. Three contributions are made in this paper. First, numerical map data from HERE Technologies are employed for more accurate driving; a set of map features which are believed to be relevant to driving are engineered to navigate better. Second, the learning procedure is improved from a pointwise prediction to a sequence-based prediction and passengers' comfort measures are embedded into the learning algorithm. Finally, we take advantage of the advances in adversary learning to learn human-like driving; specifically, the standard L1 or L2 loss is augmented by an adversary loss which is based on a discriminator trained to distinguish between human driving and machine driving. Our model is trained and evaluated on the Drive360 dataset, which features 60 hours and 3000 km of real-world driving data. Extensive experiments show that our driving model is more accurate, more comfortable and behaves more like a human driver than previous methods. The resources of this work will be released on the project page. | . Increasing the accuracy and robustness of self-localization on a map @cite_5 @cite_11 @cite_45 and computing the fastest, most fuel-efficient trajectory from one point to another through a road network @cite_25 @cite_38 @cite_48 @cite_15 @cite_7 have both been popular research fields for many years. By now, navigation systems are widely used to aid human drivers or pedestrians. 
Yet, their integration for learning driving models has not received due attention in the academic community, mainly due to limited accessibility @cite_36 . We integrate industrial standard numerical maps -- from HERE Technologies -- into the learning of our driving models. We show the advantage of using numerical maps and further combine the engineered features of our numerical maps with the visually rendered navigation routes by @cite_36 . | {
"cite_N": [
"@cite_38",
"@cite_7",
"@cite_36",
"@cite_48",
"@cite_45",
"@cite_5",
"@cite_15",
"@cite_25",
"@cite_11"
],
"mid": [
"2104317982",
"2008003976",
"2887286974",
"2030740234",
"1956142090",
"1989185123",
"2001155602",
"2172041433",
"2012531858"
],
"abstract": [
"We survey recent advances in algorithms for route planning in transportation networks. For road networks, we show that one can compute driving directions in milliseconds or less even at continental scale. A variety of techniques provide different trade-offs between preprocessing effort, space requirements, and query time. Some algorithms can answer queries in a fraction of a microsecond, while others can deal efficiently with real-time traffic. Journey planning on public transportation systems, although conceptually similar, is a significantly harder problem due to its inherent time-dependent and multicriteria nature. Although exact algorithms are fast enough for interactive queries on metropolitan transit systems, dealing with continent-sized instances requires simplifications or heavy preprocessing. The multimodal route planning problem, which seeks journeys combining schedule-based transportation (buses, trains) with unrestricted modes (walking, driving), is even harder, relying on approximate solutions even for metropolitan inputs.",
"A driver's choice of a route to a destination may depend on the route's length and travel time, but a multitude of other, possibly hard-to-formalize aspects, may also factor into the driver's decision. There is evidence that a driver's choice of route is context dependent, e.g., varies across time, and that route choice also varies from driver to driver. In contrast, conventional routing services support little in the way of context dependence, and they deliver the same routes to all drivers. We study how to identify context-aware driving preferences for individual drivers from historical trajectories, and thus how to provide foundations for personalized navigation, but also professional driver education and traffic planning. We provide techniques that are able to capture time-dependent and uncertain properties of dynamic travel costs, such as travel time and fuel consumption, from trajectories, and we provide techniques capable of capturing the driving behaviors of different drivers in terms of multiple dynamic travel costs. Further, we propose techniques that are able to identify a driver's contexts and then to identify driving preferences for each context using historical trajectories from the driver. Empirical studies with a large trajectory data set offer insight into the design properties of the proposed techniques and suggest that they are effective.",
"For human drivers, having rear and side-view mirrors is vital for safe driving. They deliver a more complete view of what is happening around the car. Human drivers also heavily exploit their mental map for navigation. Nonetheless, several methods have been published that learn driving models with only a front-facing camera and without a route planner. This lack of information renders the self-driving task quite intractable. We investigate the problem in a more realistic setting, which consists of a surround-view camera system with eight cameras, a route planner, and a CAN bus reader. In particular, we develop a sensor setup that provides data for a 360-degree view of the area surrounding the vehicle, the driving route to the destination, and low-level driving maneuvers (e.g. steering angle and speed) by human drivers. With such a sensor setup we collect a new driving dataset, covering diverse driving scenarios and varying weather illumination conditions. Finally, we learn a novel driving model by integrating information from the surround-view cameras and the route planner. Two route planners are exploited: (1) by representing the planned routes on OpenStreetMap as a stack of GPS coordinates, and (2) by rendering the planned routes on TomTom Go Mobile and recording the progression into a video. Our experiments show that: (1) 360-degree surround-view cameras help avoid failures made with a single front-view camera, in particular for city driving and intersection scenarios; and (2) route planners help the driving task significantly, especially for steering angle prediction. Code, data and more visual results will be made available at http: www.vision.ee.ethz.ch heckers Drive360.",
"GPS devices have been widely used in automobiles to compute navigation routes to destinations. The generated driving route targets the minimal traveling distance, but neglects the sightseeing experience of the route. In this study, we propose an augmented GPS navigation system, GPSView, to incorporate a scenic factor into the routing. The goal of GPSView is to plan a driving route with scenery and sightseeing qualities, and therefore allow travelers to enjoy sightseeing on the drive. To do so, we first build a database of scenic roadways with vistas of landscapes and sights along the roadside. Specifically, we adapt an attention-based approach to exploit community-contributed GPS-tagged photos on the Internet to discover scenic roadways. The premise is: a multitude of photos taken along a roadway imply that this roadway is probably appealing and catches the public's attention. By analyzing the geospatial distribution of photos, the proposed approach discovers the roadside sight spots, or Points-Of-Interest (POIs), which have good scenic qualities and visibility to travelers on the roadway. Finally, we formulate scenic driving route planning as an optimization task towards the best trade-off between sightseeing experience and traveling distance. Testing in the northern California area shows that the proposed system can deliver promising results.",
"One significant barrier in introducing autonomous driving is the liability issue of a collision; e.g. when two autonomous vehicles collide, it is unclear which vehicle should be held accountable. To solve this issue, we view traffic rules from legal texts as requirements for autonomous vehicles. If we can prove that an autonomous vehicle always satisfies these requirements during its operation, then it cannot be held responsible in a collision. We present our approach by formalising a subset of traffic rules from the Vienna Convention on Road Traffic for highway scenarios in Isabelle HOL.",
"Lane relative vehicle navigation and control requires accurate lane-relative positioning of the vehicle. This relative position can be computed by comparing the vehicle absolute position with analytic roadway maps, which requires both high-accuracy positioning of the vehicle and high-accuracy lane maps. Carrier Phase Differential GPS (CPDGPS) aided INS or CPDGPS aided encoders is capable of estimating vehicle absolute position (relative to earth center) with centimeter level accuracy; however, to the best of the author's knowledge, the accuracy of lane level maps is currently not sufficient. In this paper, we first consider the structure of lane level maps that are compatible with standard practices of GIS road modeling. Then, various analytic lane definition are discussed. We also present a method of building lane level maps from high-accuracy positioning data along the lane center. The data is segmented according to road intersections. Shape points (vertices) as a function of arclength are located based on changes in estimated curvature. For each segment, the parameters are estimated by least-square criteria and can be refined as new datasets become available. This process is shown by an example.",
"Planning an itinerary before traveling to a city is one of the most important travel preparation activities. In this paper, we propose a novel framework called TripPlanner, leveraging a combination of location-based social network (i.e., LBSN) and taxi GPS digital footprints to achieve personalized, interactive, and traffic-aware trip planning. First, we construct a dynamic point-of-interest network model by extracting relevant information from crowdsourced LBSN and taxi GPS traces. Then, we propose a two-phase approach for personalized trip planning. In the route search phase, TripPlanner works interactively with users to generate candidate routes with specified venues. In the route augmentation phase, TripPlanner applies heuristic algorithms to add user's preferred venues iteratively to the candidate routes, with the objective of maximizing the route score while satisfying both the venue visiting time and total travel time constraints. To validate the efficiency and effectiveness of the proposed approach, extensive empirical studies were performed on two real-world data sets from the city of San Francisco, which contain more than 391 900 passenger delivery trips generated by 536 taxis in a month and 110 214 check-ins left by 15 680 Foursquare users in six months.",
"This paper presents a Cloud-based system computing customized and practically fast driving routes for an end user using (historical and real-time) traffic conditions and driver behavior. In this system, GPS-equipped taxicabs are employed as mobile sensors constantly probing the traffic rhythm of a city and taxi drivers' intelligence in choosing driving directions in the physical world. Meanwhile, a Cloud aggregates and mines the information from these taxis and other sources from the Internet, like Web maps and weather forecast. The Cloud builds a model incorporating day of the week, time of day, weather conditions, and individual driving strategies (both of the taxi drivers and of the end user for whom the route is being computed). Using this model, our system predicts the traffic conditions of a future time (when the computed route is actually driven) and performs a self-adaptive driving direction service for a particular user. This service gradually learns a user's driving behavior from the user's GPS logs and customizes the fastest route for the user with the help of the Cloud. We evaluate our service using a real-world dataset generated by over 33,000 taxis over a period of 3 months in Beijing. As a result, our service accurately estimates the travel time of a route for a user; hence finding the fastest route customized for the user.",
"Digital maps can provide essential information for many advanced driver assistance systems (ADAS) dedicated to both safety and comfort applications. As the level of detail and global accuracy of state-of-the-art digital maps are not sufficient for a multitude of applications, we present methods and models for the generation of high precision maps. The proposed modeling includes 3D lane level information, road markings, landmarks and additional attributes with benefits for many ADAS. The extensive use of circular arc splines enables both adjustable accuracy and high efficiency as our cartographic methodology guarantees the minimum number of curve segments with respect to a given error threshold."
]
} |
1903.10995 | 2924268550 | Autonomous vehicles are more likely to be accepted if they drive accurately, comfortably, but also similar to how human drivers would. This is especially true when autonomous and human-driven vehicles need to share the same road. The main research focus thus far, however, is still on improving driving accuracy only. This paper formalizes the three concerns with the aim of accurate, comfortable and human-like driving. Three contributions are made in this paper. First, numerical map data from HERE Technologies are employed for more accurate driving; a set of map features which are believed to be relevant to driving are engineered to navigate better. Second, the learning procedure is improved from a pointwise prediction to a sequence-based prediction and passengers' comfort measures are embedded into the learning algorithm. Finally, we take advantage of the advances in adversary learning to learn human-like driving; specifically, the standard L1 or L2 loss is augmented by an adversary loss which is based on a discriminator trained to distinguish between human driving and machine driving. Our model is trained and evaluated on the Drive360 dataset, which features 60 hours and 3000 km of real-world driving data. Extensive experiments show that our driving model is more accurate, more comfortable and behaves more like a human driver than previous methods. The resources of this work will be released on the project page. | . Cars transport passengers. This has led to passenger comfort research for human-driven vehicles @cite_2 . Driver comfort is also considered when developing the control system of human-driven vehicles @cite_29 @cite_33 . Autonomous cars can lead to concerns about how well-controlled such a car is @cite_17 , motion sickness @cite_27 and apparent safety @cite_27 . While research on passenger comfort started to receive more attention @cite_27 , it is still missing in current driving models. 
To address this problem, this work incorporates passenger comfort measures into learned autonomous driving models. | {
"cite_N": [
"@cite_33",
"@cite_29",
"@cite_27",
"@cite_2",
"@cite_17"
],
"mid": [
"2551773494",
"2107537851",
"1955305638",
"1969152264",
"2962703144"
],
"abstract": [
"Autonomous vehicle field of study has seen considerable researches within three decades. In the last decade particularly, interests in this field has undergone tremendous improvement. One of the main aspects in autonomous vehicle is the path tracking control, focusing on the vehicle control in lateral and longitudinal direction in order to follow a specified path or trajectory. In this paper, path tracking control is reviewed in terms of the basic vehicle model usually used; the control strategies usually employed in path tracking control, and the performance criteria used to evaluate the controller's performance. Vehicle model is categorised into several types depending on its linearity and the type of behaviour it simulates, while path tracking control is categorised depending on its approach. This paper provides critical review of each of these aspects in terms of its usage and disadvantages advantages. Each aspect is summarised for better overall understanding. Based on the critical reviews, main challenges in the field of path tracking control is identified and future research direction is proposed. Several promising advancement is proposed with the main prospect is focused on adaptive geometric controller developed on a nonlinear vehicle model and tested with hardware-in-the-loop (HIL). It is hoped that this review can be treated as preliminary insight into the choice of controllers in path tracking control development for an autonomous ground vehicle.",
"The autonomous vehicle is a mobile robot integrating multi-sensor navigation and positioning, intelligent decision making and control technology. This paper presents the control system architecture of the autonomous vehicle, called “Intelligent Pioneer”, and the path tracking and stability of motion to effectively navigate in unknown environments is discussed. In this approach, a two degree-of-freedom dynamic model is developed to formulate the path-tracking problem in state space format. For controlling the instantaneous path error, traditional controllers have difficulty in guaranteeing performance and stability over a wide range of parameter changes and disturbances. Therefore, a newly developed adaptive-PID controller will be used. By using this approach the flexibility of the vehicle control system will be increased and achieving great advantages. Throughout, we provide examples and results from Intelligent Pioneer and the autonomous vehicle using this approach competed in the 2010 and 2011 Future Ch...",
"The prospect of driverless cars wide-scale deployment is imminent owing to the advances in robotics, computational power, communications, and sensor technologies. This promises highway fatality reductions and improvements in traffic and fuel efficiency. Our understanding of the effects arising from commuting in autonomous cars is still limited. The novel concept of the loss of driver controllability is introduced here. It requires a reassessment of vehicle's comfort criteria. In this review paper, traditional comfort measures are examined and autonomous passenger awareness factors are proposed. We categorize path-planning methods in light of the offered factors. The objective of the review presented in this article is to highlight the gap in path planning from a passenger comfort perspective and propose some research solutions. It is expected that this investigation will generate more research interest and bring innovative solutions into this field.",
"Abstract This paper shows the development of a system (Hardware, Firmware and Software) focused to assess the dynamic motion factors that affect the comfort in public transportation systems. The data is collected, on-board processed and transported using the public transportation system vehicles as mobile smart sensors. Therefore, the acceleration measurement using a tri-axial accelerometer, the position detection using Global Positioning System (GPS) and the appropriate algorithms allow the system to detect rude driver styles and defects on the pavement. The firmware is composed by two algorithms. The first one is based on the detection of acceleration and Jerk magnitudes out of the comfort range, which is called Jerk-Acceleration Threshold Detection (JATD). An algorithm to compute the Jerk with comparable results to prior researches is proposed in this paper. The second algorithm, called Comfort Index with Acceleration Threshold Detection (CI-ATD), is based on the detection of acceleration values out of comfort range and the average ride comfort. The average ride comfort is supported by the recommendation of the international standard ISO2631-1. The comfort range or threshold values can be set using the user’s perception. A software developed in LabVIEW™ interface, visualizes discomfort event in online maps for geographic location of each event. Also, the software implements road unevenness detection, which is based on the collected data analysis. The system was successful tested in a conventional bus line on its daily ride, the results reveals that most of the events are due to vertical acceleration disturbances. Also, a preliminary test indicates higher sensibility for vertical than longitudinal or transversal accelerations.",
"We investigate the problem of object referring (OR) i.e. to localize a target object in a visual scene coming with a language description. Humans perceive the world more as continued video snippets than as static images, and describe objects not only by their appearance, but also by their spatio-temporal context and motion features. Humans also gaze at the object when they issue a referring expression. Existing works for OR mostly focus on static images only, which fall short in providing many such cues. This paper addresses OR in videos with language and human gaze. To that end, we present a new video dataset for OR, with 30, 000 objects over 5, 000 stereo video sequences annotated for their descriptions and gaze. We further propose a novel network model for OR in videos, by integrating appearance, motion, gaze, and spatio-temporal context into one network. Experimental results show that our method effectively utilizes motion cues, human gaze, and spatio-temporal context. Our method outperforms previous OR methods. For dataset and code, please refer https: people.ee.ethz.ch arunv ORGaze.html."
]
} |
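The jerk- and threshold-based comfort measures surveyed in the row above (the jerk-acceleration threshold detection and the ISO 2631-based index) can be illustrated with a minimal sketch. The function names and the 0.9 m/s^3 threshold are illustrative assumptions, not values taken from ISO 2631 or the cited work:

```python
def jerk_profile(accel, dt):
    """Finite-difference jerk (rate of change of acceleration, m/s^3).

    accel: list of acceleration samples in m/s^2, dt: sample period in s.
    """
    return [(a1 - a0) / dt for a0, a1 in zip(accel, accel[1:])]

def discomfort_events(accel, dt, jerk_limit=0.9):
    """Count jerk samples whose magnitude exceeds a comfort threshold.

    The 0.9 m/s^3 limit is a placeholder chosen for illustration only.
    """
    return sum(1 for j in jerk_profile(accel, dt) if abs(j) > jerk_limit)
```

A real comfort index would additionally frequency-weight the acceleration signal as ISO 2631-1 prescribes; the sketch only captures the thresholding idea.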
1903.10995 | 2924268550 | Autonomous vehicles are more likely to be accepted if they drive accurately, comfortably, but also similar to how human drivers would. This is especially true when autonomous and human-driven vehicles need to share the same road. The main research focus thus far, however, is still on improving driving accuracy only. This paper formalizes the three concerns with the aim of accurate, comfortable and human-like driving. Three contributions are made in this paper. First, numerical map data from HERE Technologies are employed for more accurate driving; a set of map features which are believed to be relevant to driving are engineered to navigate better. Second, the learning procedure is improved from a pointwise prediction to a sequence-based prediction and passengers' comfort measures are embedded into the learning algorithm. Finally, we take advantage of the advances in adversary learning to learn human-like driving; specifically, the standard L1 or L2 loss is augmented by an adversary loss which is based on a discriminator trained to distinguish between human driving and machine driving. Our model is trained and evaluated on the Drive360 dataset, which features 60 hours and 3000 km of real-world driving data. Extensive experiments show that our driving model is more accurate, more comfortable and behaves more like a human driver than previous methods. The resources of this work will be released on the project page. | . A large body of work has studied human driving styles @cite_43 @cite_42 . Statistical approaches were employed to evaluate human drivers and to suggest improvements @cite_18 @cite_39 . This line of research inspired us to ask whether machine driving can be made more human-like. Human-like driving is hard to quantify. Fortunately, recent advances in adversarial learning provide the tools to extract the characteristics of human-like driving, using them to adjust machine driving so that it becomes more human-like.
Some work has studied human-like motion planning of autonomous cars, but it was constrained to simulated scenarios @cite_41 @cite_19 . The closest work to ours is @cite_44 , in which a set of manually-crafted features is used to characterize human driving style. Our method learns the features directly from the data using adversarial neural networks. | {
"cite_N": [
"@cite_18",
"@cite_41",
"@cite_42",
"@cite_39",
"@cite_44",
"@cite_43",
"@cite_19"
],
"mid": [
"2070420170",
"2083433284",
"2190194936",
"1985423199",
"1527702126",
"2170003873",
"2145015998"
],
"abstract": [
"We evaluate a mobile application that assesses driving behavior based on in-vehicle acceleration measurements and gives corresponding feedback to drivers. In the insurance business, such applications have recently gained traction as a viable alternative to the monitoring of drivers via \"black boxes\" installed in vehicles, which lacks interaction opportunities and is perceived as privacy intrusive by policyholders. However, pose uncertainty and other noise-inducing factors make smartphones potentially less reliable as sensor platforms. We therefore compare critical driving events generated by a smartphone with reference measurements from a vehicle-fixed IMU in a controlled field study. The study was designed to capture driver variability under real-world conditions, while minimizing the influence of external factors. We find that the mobile measurements tend to overestimate critical driving events, possibly due to deviation from the calibrated initial device pose. While weather and daytime do not appear to influence event counts, road type is a significant factor that is not considered in most current state-of-the-art implementations.",
"A framework for modeling driver behavior within driving simulators is described in this paper. This framework serves as a basis for building human- like driving behavior models for autonomous vehicles operating within the virtual environment of a driving simulator. The framework consists of four units, the Perception Unit, the Emotions Unit, the Decision- making Unit (DMU), and the Decision- implementation Unit (DIU). The Perception Unit defines how the model perceives its environment in local and global terms. The Emotions Unit defines how the model responds emotionally to its environment. The DMU investigates the environment for possible actions that might potentially serve the model's emotional demands. And finally the DIU tries to implement these decisions when a traffic condition, perceived as safe enough for such an implementation, emerges. Each of these units has its own set of fuzzy variables and fuzzy ifthen rules. Any driving model, that is based on this framework, should provide membership function parameters for these fuzzy variables in accordance with the category of human driving behavior this model is targeting. Our framework addresses decision making and implementation at the maneuvering and operational levels of the driving task. Decisions at the planning level are addressed through a script- based traffic controller. The present model is limited to simulating human behaviors when driving in a two- lane rural environment.",
"In this paper the various driving style analysis solutions are investigated. An in-depth investigation is performed to identify the relevant machine learning and artificial intelligence algorithms utilised in current driver behaviour and driving style analysis systems. This review therefore serves as a trove of information, and will inform the specialist and the student regarding the current state of the art in driver style analysis systems, the application of these systems and the underlying artificial intelligence algorithms applied to these applications. The aim of the investigation is to evaluate the possibilities for unique driver identification utilizing the approaches identified in other driver behaviour studies. It was found that Fuzzy Logic inference systems, Hidden Markov Models and Support Vector Machines consist of promising capabilities to address unique driver identification algorithms if model complexity can be reduced.",
"In this paper, we develop a smart phone-based driving behavior evaluation system, named Join Driving, which helps drivers notice how aggressive their driving behaviors are and be aware of the riding comfort level of passengers. The proposed evaluation system is made of two parts: driving events detection and evaluation part and riding comfort level evaluation part. In driving events detection and evaluation part, the proposed system, Join Driving, first presents a model to detect drivers' driving events, based on the data collected from the acceleration, orientation and GPS sensors in smart phones. Then, based on the detected drivers' driving events, Join Driving implements a novel scoring mechanism to quantitatively evaluate how aggressive these driving events are. In riding comfort level evaluation part, the proposed system gives the specific scores to rate passengers' riding comfort level based on ISO 2631. Finally, several practical experiments are conducted to evaluate the effectiveness of the proposed scoring system.",
"It is expected that autonomous vehicles capable of driving without human supervision will be released to market within the next decade. For user acceptance, such vehicles should not only be safe and reliable, they should also provide a comfortable user experience. However, individual perception of comfort may vary considerably among users. Whereas some users might prefer sporty driving with high accelerations, others might prefer a more relaxed style. Typically, a large number of parameters such as acceleration profiles, distances to other cars, speed during lane changes, etc., characterize a human driver's style. Manual tuning of these parameters may be a tedious and error-prone task. Therefore, we propose a learning from demonstration approach that allows the user to simply demonstrate the desired style by driving the car manually. We model the individual style in terms of a cost function and use feature-based inverse reinforcement learning to find the model parameters that fit the observed style best. Once the model has been learned, it can be used to efficiently compute trajectories for the vehicle in autonomous mode. We show that our approach is capable of learning cost functions and reproducing different driving styles using data from real drivers.",
"This paper considers a comprehensive and collaborative project to collect large amounts of driving data on the road for use in a wide range of areas of vehicle-related research centered on driving behavior. Unlike previous data collection efforts, the corpora collected here contain both human and vehicle sensor data, together with rich and continuous transcriptions. While most efforts on in-vehicle research are generally focused within individual countries, this effort links a collaborative team from three diverse regions (i.e., Asia, American, and Europe). Details relating to the data collection paradigm, such as sensors, driver information, routes, and transcription protocols, are discussed, and a preliminary analysis of the data across the three data collection sites from the U.S. (Dallas), Japan (Nagoya), and Turkey (Istanbul) is provided. The usability of the corpora has been experimentally verified with a Cohen's kappa coefficient of 0.74 for transcription reliability, as well as being successfully exploited for several in-vehicle applications. Most importantly, the corpora are publicly available for research use and represent one of the first multination efforts to share resources and understand driver characteristics. Future work on distributing the corpora to the wider research community is also discussed.",
"Autonomous vehicles are perhaps the most encountered element in a driving simulator. Their effect on the realism of the simulator is critical. For autonomous vehicles to contribute positively to the realism of the hosting driving simulator, they need to have a realistic appearance and, possibly more importantly, realistic behavior. Addressed is the problem of modeling realistic and humanlike behaviors on simulated highway systems by developing an abstract framework that captures the details of human driving at the microscopic level. This framework consists of four units that together define and specify the elements needed for a concrete humanlike driving model to be implemented within a driving simulator. These units are the perception unit, the emotions unit, the decision-making unit, and the decision-implementation unit. Realistic models of humanlike driving behavior can be built by implementing the specifications set by the driving framework. Four humanlike driving models have been implemented on the basis of the driving framework: (a) a generic normal driving model, (b) an aggressive driving model, (c) an alcoholic driving model, and (d) an elderly driving model. These driving models provide experiment designers with a powerful tool for generating complex traffic scenarios in their experiments. These behavioral models were incorporated along with three-dimensional visual models and vehicle dynamics models into one entity, which is the autonomous vehicle. Subjects perceived the autonomous vehicles with the described behavioral models as having a positive effect on the realism of the driving simulator. The erratic driving models were identified correctly by the subjects in most cases."
]
} |
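The abstract's augmentation of the standard L1 loss with an adversarial term can be sketched as follows. The non-saturating generator form, the weighting, and all names here are assumptions for illustration, not the paper's exact formulation:

```python
import math

def combined_loss(pred, target, disc_scores, l1_weight=1.0, adv_weight=0.1):
    """L1 imitation loss plus a non-saturating adversarial generator term.

    pred, target: sequences of predicted / human driving commands.
    disc_scores: discriminator probabilities that each prediction is
    "human" driving; the driving model is rewarded when it fools the
    discriminator. Both weights are illustrative placeholders.
    """
    l1 = sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)
    adv = -sum(math.log(s + 1e-8) for s in disc_scores) / len(disc_scores)
    return l1_weight * l1 + adv_weight * adv
```

In training, this generator loss would alternate with discriminator updates on pairs of human and machine driving sequences, as in standard GAN training.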
1903.11174 | 2968354794 | In the task of Autonomous aerial filming of a moving actor (e.g. a person or a vehicle), it is crucial to have a good heading direction estimation for the actor from the visual input. However, the models obtained in other similar tasks, such as pedestrian collision risk analysis and human-robot interaction, are very difficult to generalize to the aerial filming task, because of the difference in data distributions. Towards improving generalization with less amount of labeled data, this paper presents a semi-supervised algorithm for heading direction estimation problem. We utilize temporal continuity as the unsupervised signal to regularize the model and achieve better generalization ability. This semi-supervised algorithm is applied to both training and testing phases, which increases the testing performance by a large margin. We show that by leveraging unlabeled sequences, the amount of labeled data required can be significantly reduced. We also discuss several important details on improving the performance by balancing labeled and unlabeled loss, and making good combinations. Experimental results show that our approach robustly outputs the heading direction for different types of actor. The aesthetic value of the video is also improved in the aerial filming task. | Heading direction estimation is a widely studied problem, in particular focused on humans and cars. One option to tackle the problem is to use inertial and GPS sensors to estimate a human's @cite_29 @cite_35 or a car's @cite_7 heading direction. In the context of aerial filming, the target actor generally does not carry extra sensors; hence our emphasis on vision-based solutions in this paper. | {
"cite_N": [
"@cite_35",
"@cite_29",
"@cite_7"
],
"mid": [
"",
"2519253192",
"2077405386"
],
"abstract": [
"",
"The paper proposes a novel approach for direction estimation of a moving pedestrian as perceived in a 2-D coordinate of field camera. The proposed direction estimation method is intended for pedestrian monitoring in traffic control systems. Apart from traffic control, direction of motion estimation is also very important in accident avoidance system for smart cars, assisted living systems, in occlusion prediction for seamless tracking in visual surveillance, and so on. The proposed video-based direction estimation method exploits the notion of perspective distortion as perceived in monocular vision of 2-D camera co-ordinate. The temporal pattern of change in dimension of pedestrian in a frame sequence is unique for each direction; hence, the dimensional change-based feature is used to estimate the direction of motion; eight discrete directions of motion are considered and the hidden Markov model is used for classification. The experiments are conducted over CASIA Dataset A , CASIA Dataset B , and over a self-acquired dataset: NITR Conscious Walk Dataset . The balanced accuracy of direction estimation for these experiments yields satisfactory results with accuracy indices as 94.58 , 90.87 , and 95.83 , respectively. The experiment also justifies with suitable test conditions about the characteristic features, such as robustness toward improper segmentation, partial occlusion, and changing orientation of head or body during walk of a pedestrian. The proposed method can be used as a standalone system or can be integrated with existing frame-based direction estimation methods for implementing a pedestrian monitoring system.",
"An efficient approach for deriving accurate pose and heading values through multi-sensor fusion of data from several inexpensive sensors (such as multiple GPS (Global Positioning Systems), EC (electronic compass), rate gyro) is presented. The proposed multisensor fusion approach is composed of several sub-methods namely initial heading calculation, classification and weighing (CnW), extended Kalman filter (EKF) and then covariance intersection (CI) algorithms. The consecutive implementation of the sub-methods gives an accurate heading value with lesser RMSE (root mean square error) compared to the original GPS COG (course over ground) and EC. Several experimental tests were done to confirm the good performance of the proposed process."
]
} |
1903.11174 | 2968354794 | In the task of autonomous aerial filming of a moving actor (e.g. a person or a vehicle), it is crucial to have a good heading direction estimation for the actor from the visual input. However, the models obtained in other similar tasks, such as pedestrian collision risk analysis and human-robot interaction, are very difficult to generalize to the aerial filming task, because of the difference in data distributions. Towards improving generalization with less labeled data, this paper presents a semi-supervised algorithm for the heading direction estimation problem. We utilize temporal continuity as the unsupervised signal to regularize the model and achieve better generalization ability. This semi-supervised algorithm is applied to both training and testing phases, which increases the testing performance by a large margin. We show that by leveraging unlabeled sequences, the amount of labeled data required can be significantly reduced. We also discuss several important details on improving the performance by balancing the labeled and unlabeled losses and combining them effectively. Experimental results show that our approach robustly outputs the heading direction for different types of actors. The aesthetic value of the video is also improved in the aerial filming task. | Based on a probabilistic framework, @cite_10 present a joint pedestrian head and body orientation estimation method, in which they design a HOG/linSVM pedestrian detector combined with a Kalman filter. Learning-based methods, however, seem to achieve more robust and generalizable results, being more prevalent in the HDE literature. Most existing learning-based methods use large amounts of labeled data and supervised learning to train a model @cite_3 @cite_14 @cite_13 @cite_31 . However, open datasets @cite_19 @cite_28 @cite_9 @cite_25 generalize poorly to our aerial filming task, mainly due to the mismatch between image viewpoints, scales, and image blur. 
Human key-point detection and 2D pose estimation have also been widely studied @cite_22 @cite_16 . However, such works are focused only on human bodies, and the 3D heading direction cannot be trivially recovered directly from 2D points because the keypoint's depth remains undefined. | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_28",
"@cite_10",
"@cite_9",
"@cite_3",
"@cite_19",
"@cite_31",
"@cite_16",
"@cite_13",
"@cite_25"
],
"mid": [
"2519833594",
"2113325037",
"",
"2048960138",
"",
"",
"2101032778",
"2963842958",
"",
"2562663242",
""
],
"abstract": [
"Personal robots are expected to interact with the user by recognizing the user's face. However, in most of the service robot applications, the user needs to move himself/herself to allow the robot to see him/her face to face. To overcome such limitations, a method for estimating human body orientation is required. Previous studies used various components such as feature extractors and classification models to classify the orientation which resulted in low performance. For a more robust and accurate approach, we propose the light weight convolutional neural networks, an end to end system, for estimating human body orientation. Our body orientation estimation model achieved 81.58% and 94% accuracy with the benchmark dataset and our own dataset respectively. The proposed method can be used in a wide range of service robot applications which depend on the ability to estimate human body orientation. To show its usefulness in service robot applications, we designed a simple robot application which allows the robot to move towards the user's frontal plane. With this, we demonstrated an improved face detection rate.",
"We propose a method for human pose estimation based on Deep Neural Networks (DNNs). The pose estimation is formulated as a DNN-based regression problem towards body joints. We present a cascade of such DNN regressors which results in high precision pose estimates. The approach has the advantage of reasoning about pose in a holistic fashion and has a simple but yet powerful formulation which capitalizes on recent advances in Deep Learning. We present a detailed empirical analysis with state-of-the-art or better performance on four academic benchmarks of diverse real-world images.",
"",
"We present a probabilistic framework for the joint estimation of pedestrian head and body orientation from a mobile stereo vision platform. For both head and body parts, we convert the responses of a set of orientation-specific detectors into a (continuous) probability density function. The parts are localized by means of a pictorial structure approach, which balances part-based detector responses with spatial constraints. Head and body orientations are estimated jointly to account for anatomical constraints. The joint single-frame orientation estimates are integrated over time by particle filtering. The experiments involved data from a vehicle-mounted stereo vision camera in a realistic traffic setting; 65 pedestrian tracks were supplied by a state-of-the-art pedestrian tracker. We show that the proposed joint probabilistic orientation estimation framework reduces the mean absolute head and body orientation error up to 15° compared with simpler methods. This results in a mean absolute head/body orientation error of about 21°/19°, which remains fairly constant up to a distance of 25 m. Our system currently runs in near real time (8–9 Hz).",
"",
"",
"We introduce a new dataset, Human3.6M, of 3.6 Million accurate 3D Human poses, acquired by recording the performance of 5 female and 6 male subjects, under 4 different viewpoints, for training realistic human sensing systems and for evaluating the next generation of human pose estimation models and algorithms. Besides increasing the size of the datasets in the current state-of-the-art by several orders of magnitude, we also aim to complement such datasets with a diverse set of motions and poses encountered as part of typical human activities (taking photos, talking on the phone, posing, greeting, eating, etc.), with additional synchronized image, human motion capture, and time of flight (depth) data, and with accurate 3D body scans of all the subject actors involved. We also provide controlled mixed reality evaluation scenarios where 3D human models are animated using motion capture and inserted using correct 3D geometry, in complex real environments, viewed with moving cameras, and under occlusion. Finally, we provide a set of large-scale statistical models and detailed evaluation baselines for the dataset illustrating its diversity and the scope for improvement by future work in the research community. Our experiments show that our best large-scale model can leverage our full training set to obtain a 20% improvement in performance compared to a training set of the scale of the largest existing public dataset for this problem. Yet the potential for improvement by leveraging higher capacity, more complex models with our large dataset, is substantially vaster and should stimulate future research. The dataset together with code for the associated large-scale learning models, features, visualization tools, as well as the evaluation server, is available online at http://vision.imar.ro/human3.6m .",
"Modern deep learning systems successfully solve many perception tasks such as object pose estimation when the input image is of high quality. However, in challenging imaging conditions such as on low resolution images or when the image is corrupted by imaging artifacts, current systems degrade considerably in accuracy. While a loss in performance is unavoidable, we would like our models to quantify their uncertainty to achieve robustness against images of varying quality. Probabilistic deep learning models combine the expressive power of deep learning with uncertainty quantification. In this paper we propose a novel probabilistic deep learning model for the task of angular regression. Our model uses von Mises distributions to predict a distribution over object pose angle. Whereas a single von Mises distribution is making strong assumptions about the shape of the distribution, we extend the basic model to predict a mixture of von Mises distributions. We show how to learn a mixture model using a finite and infinite number of mixture components. Our model allows for likelihood-based training and efficient inference at test time. We demonstrate on a number of challenging pose estimation datasets that our model produces calibrated probability predictions and competitive or superior point estimates compared to the current state-of-the-art.",
"",
"This paper presents a novel approach for joint object detection and orientation estimation in a single deep convolutional neural network utilizing proposals calculated from 3D data. For orientation estimation, we extend a R-CNN like architecture by several carefully designed layers. Two new object proposal methods are introduced, to make use of stereo as well as lidar data. Our experiments on the KITTI dataset show that by combining proposals of both domains, high recall can be achieved while keeping the number of proposals low. Furthermore, our method for joint detection and orientation estimation outperforms state of the art approaches for cyclists on the easy test scenario of the KITTI test dataset.",
""
]
} |
1903.11174 | 2968354794 | In the task of autonomous aerial filming of a moving actor (e.g. a person or a vehicle), it is crucial to have a good heading direction estimation for the actor from the visual input. However, the models obtained in other similar tasks, such as pedestrian collision risk analysis and human-robot interaction, are very difficult to generalize to the aerial filming task, because of the difference in data distributions. Towards improving generalization with less labeled data, this paper presents a semi-supervised algorithm for the heading direction estimation problem. We utilize temporal continuity as the unsupervised signal to regularize the model and achieve better generalization ability. This semi-supervised algorithm is applied to both training and testing phases, which increases the testing performance by a large margin. We show that by leveraging unlabeled sequences, the amount of labeled data required can be significantly reduced. We also discuss several important details on improving the performance by balancing the labeled and unlabeled losses and combining them effectively. Experimental results show that our approach robustly outputs the heading direction for different types of actors. The aesthetic value of the video is also improved in the aerial filming task. | Semi-supervised learning (SSL) is also an active research area. Self-training is a commonly used technique for SSL @cite_18 . A graph-based method for semi-supervised classification was proposed in @cite_37 , and more related work has recently appeared in this area @cite_38 @cite_4 @cite_30 . Most of the existing SSL works focus on classification problems, which assume that different classes are separated by a low-density area. This assumption is not directly applicable to regression problems. | {
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_18",
"@cite_37",
"@cite_4"
],
"mid": [
"2963170156",
"2552765184",
"2101210369",
"2407712691",
"830076066"
],
"abstract": [
"Semi-supervised learning methods based on generative adversarial networks (GANs) obtained strong empirical results, but it is not clear 1) how the discriminator benefits from joint training with a generator, and 2) why good semi-supervised classification performance and a good generator cannot be obtained at the same time. Theoretically we show that given the discriminator objective, good semi-supervised learning indeed requires a bad generator, and propose the definition of a preferred generator. Empirically, we derive a novel formulation based on our analysis that substantially improves over feature matching GANs, obtaining state-of-the-art results on multiple benchmark datasets.",
"Deep networks are successfully used as classification models yielding state-of-the-art results when trained on a large number of labeled samples. These models, however, are usually much less suited for semi-supervised problems because of their tendency to overfit easily when trained on small amounts of data. In this work we will explore a new training objective that is targeting a semi-supervised regime with only a small subset of labeled data. This criterion is based on a deep metric embedding over distance relations within the set of labeled samples, together with constraints over the embeddings of the unlabeled set. The final learned representations are discriminative in euclidean space, and hence can be used with subsequent nearest-neighbor classification using the labeled samples.",
"This paper presents an unsupervised learning algorithm for sense disambiguation that, when trained on unannotated English text, rivals the performance of supervised techniques that require time-consuming hand annotations. The algorithm is based on two powerful constraints---that words tend to have one sense per discourse and one sense per collocation---exploited in an iterative bootstrapping procedure. Tested accuracy exceeds 96%.",
"We show how nonlinear embedding algorithms popular for use with \"shallow\" semi-supervised learning techniques such as kernel methods can be easily applied to deep multi-layer architectures, either as a regularizer at the output layer, or on each layer of the architecture. This trick provides a simple alternative to existing approaches to deep learning whilst yielding competitive error rates compared to those methods, and existing shallow semi-supervised techniques.",
"We combine supervised learning with unsupervised learning in deep neural networks. The proposed model is trained to simultaneously minimize the sum of supervised and unsupervised cost functions by backpropagation, avoiding the need for layer-wise pre-training. Our work builds on top of the Ladder network proposed by Valpola [1] which we extend by combining the model with supervision. We show that the resulting model reaches state-of-the-art performance in semi-supervised MNIST and CIFAR-10 classification in addition to permutation-invariant MNIST classification with all labels."
]
} |
1903.10899 | 2924509970 | This article proposes and evaluates a technique to predict the level of interference in wireless networks. We design a recursive predictor that computes future interference values at a given location by filtering measured interference at this location. The parametrization of the predictor is done offline by translating the autocorrelation of interference into an autoregressive moving average (ARMA) representation. This ARMA model is inserted into a steady-state Kalman filter enabling nodes to predict with low computational effort. Results show good performance in terms of accuracy between predicted and true values for relevant time horizons. Although the predictor is parametrized for the case of Poisson networks, Rayleigh fading, and fixed message lengths, a sensitivity analysis shows that it also works well in more general network scenarios. Numerical examples for underlay device-to-device communications and a common wireless sensor technology illustrate its broad applicability. The predictor can be applied as part of interference management to improve medium access, scheduling, and resource allocation. | In this context, interference is described as a random variable whose properties depend on several parameters including node locations, mobility, and data traffic patterns. These properties can be calculated in a given setup using tools from stochastic geometry. Examples include the mean interference (see @cite_18 @cite_36 ), higher-order statistics @cite_12 , and distribution (see @cite_31 @cite_0 @cite_30 @cite_9 @cite_41 ). These publications take into account the spatial features of wireless networks, which is fundamentally different from many "classical" works on interference modeling and analysis (such as @cite_8 @cite_9 ). The past years have also seen a branch of research that analyzes how interference changes over time and space @cite_24 @cite_27 @cite_22 @cite_21 @cite_38 @cite_4 . 
Such interference dynamics can be described, for example, in terms of the autocorrelation of the received interference power (see @cite_27 @cite_34 ). Correlation influences the system behavior, such as the performance of diversity, relaying, multiple-input multiple-output (MIMO), and MAC protocols (see @cite_13 @cite_22 @cite_21 @cite_15 ). | {
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_18",
"@cite_4",
"@cite_22",
"@cite_8",
"@cite_36",
"@cite_41",
"@cite_9",
"@cite_21",
"@cite_15",
"@cite_0",
"@cite_24",
"@cite_27",
"@cite_31",
"@cite_34",
"@cite_13",
"@cite_12"
],
"mid": [
"",
"",
"2156356530",
"2625225493",
"2019715307",
"2125663988",
"1884504066",
"2143252188",
"2121874910",
"2049504097",
"2424483325",
"2042164227",
"2135248699",
"2092609929",
"2171882038",
"2963458669",
"2089527438",
"2100195693"
],
"abstract": [
"",
"",
"Matérn hard core processes of types I and II are the point processes of choice to model concurrent transmitters in CSMA networks. We determine the mean interference observed at a node of the process and compare it with the mean interference in a Poisson point process of the same density. It turns out that despite the similarity of the two models, they behave rather differently. For type I, the excess interference (relative to the Poisson case) increases exponentially in the hard-core distance, while for type II, the gap never exceeds 1 dB.",
"In practice, wireless networks are deployed over finite domains, the level of mobility is different at different locations, and user mobility is correlated over time. All these features have an impact on the temporal properties of interference which is often neglected. In this paper, we show how to incorporate correlated user mobility into the interference and outage correlation models. We use the random waypoint mobility model over a bounded one-dimensional domain as an example model inducing correlation, and we calculate its displacement law at different locations. Based on that, we illustrate that the temporal correlations of interference and outage are location-dependent, being lower close to the center of the domain, where the level of mobility is higher than near the boundary. Close to the boundary, more time is also needed to see uncorrelated interference. Our findings suggest that an accurate description of the mobility pattern is important, because it leads to more accurate understanding/modeling of interference and receiver performance.",
"The interference in wireless networks is temporally correlated, since the node or user locations are correlated over time and the interfering transmitters are a subset of these nodes. For a wireless network where (potential) interferers form a Poisson point process and use ALOHA for channel access, we calculate the joint success and outage probabilities of n transmissions over a reference link. The results are based on the diversity polynomial, which captures the temporal interference correlation. The joint outage probability is used to determine the diversity gain (as the SIR goes to infinity), and it turns out that there is no diversity gain in simple retransmission schemes, even with independent Rayleigh fading over all links. We also determine the complete joint SIR distribution for two transmissions and the distribution of the local delay, which is the time until a repeated transmission over the reference link succeeds.",
"A microcell interference model termed the Nakagami m_x/m_y model is introduced. The desired signal and the cochannel interferers are assumed to have Nakagami statistics but with different amounts of fading. A special case of this model is obtained when the desired signal has Nakagami statistics while the cochannel interferers are subject to Rayleigh fading. The probability density function of the signal-to-interference ratio in the Nakagami model is derived. This model is also compared with a Rician/Rayleigh microcellular model. Expressions for the outage probabilities in microcell systems are derived. Numerical results show that, compared to medium/large cell systems, the microcellular systems have a lower outage probability. The impact of diversity on the microcellular system is also studied. An improvement of the outage probability due to diversity is observed.",
"We propose and prove a theorem that allows the calculation of a class of functionals on Poisson point processes that have the form of expected values of sum–products of functions. In proving the theorem, we present a variant of the Campbell–Mecke theorem from stochastic geometry. We proceed to apply our result in the calculation of expected values involving interference in wireless Poisson networks. Based on this, we derive outage probabilities for transmissions in a Poisson network with Nakagami fading. Our results extend the stochastic geometry toolbox used for the mathematical analysis of interference-limited wireless networks.",
"In the analysis of large random wireless networks, the underlying node distribution is almost ubiquitously assumed to be the homogeneous Poisson point process. In this paper, the node locations are assumed to form a Poisson cluster process on the plane. We derive the distributional properties of the interference and provide upper and lower bounds for its distribution. We consider the probability of successful transmission in an interference-limited channel when fading is modeled as Rayleigh. We provide a numerically integrable expression for the outage probability and closed-form upper and lower bounds. We show that when the transmitter-receiver distance is large, the success probability is greater than that of a Poisson arrangement. These results characterize the performance of the system under geographical or MAC-induced clustering. We obtain the maximum intensity of transmitting nodes for a given outage constraint, i.e., the transmission capacity (of this spatial arrangement) and show that it is equal to that of a Poisson arrangement of nodes. For the analysis, techniques from stochastic geometry are used, in particular the probability generating functional of Poisson cluster processes, the Palm characterization of Poisson cluster processes, and the Campbell-Mecke theorem.",
"This paper presents a unified analytical method for efficient computation of first and second order statistics of the signal-to-interference-plus-noise-ratio (SINR) in wireless communication systems. New exact expressions are derived for the moments of SINR and its autocorrelation function that are valid for arbitrary interfering signals. These are used to examine the accuracy of the commonly used approximations for the average SINR and to investigate effects of fast fading and interfering signals' temporal behavior on the autocorrelation properties of SINR.",
"While the performance of maximum ratio combining (MRC) is well understood for a single isolated link, the same is not true in the presence of interference, which is typically correlated across antennas due to the common locations of interferers. For tractability, prior work focuses on the two extreme cases where the interference power across antennas is either assumed to be fully correlated or fully uncorrelated. In this paper, we address this shortcoming and characterize the performance of MRC in the presence of spatially-correlated interference across antennas. Modeling the interference field as a Poisson point process, we derive the exact distribution of the signal-to-interference ratio (SIR) for the case of two receive antennas, and upper and lower bounds for the general case. Using these results, we study the diversity behavior of MRC and characterize the critical density of simultaneous transmissions for a given outage constraint. The exact SIR distribution is also useful in benchmarking simpler correlation models. We show that the full-correlation assumption is considerably pessimistic (up to 30% higher outage probability for typical values) and the no-correlation assumption is significantly optimistic compared to the true performance.",
"Despite being ubiquitous in practice, the performance of maximal-ratio combining (MRC) in the presence of interference is not well understood. Because the interference received at each antenna originates from the same set of interferers but partially decorrelates over the fading channel, it possesses a complex correlation structure. This paper develops a realistic analytic model that accurately accounts for the interference correlation using stochastic geometry. Modeling interference by a Poisson shot noise process with independent Nakagami fading, we derive the link success probability for dual-branch interference-aware MRC. Using this result, we show that the common assumption that all receive antennas experience equal interference power underestimates the true performance, although this gap rapidly decays with increasing the Nakagami parameter @math of the interfering links. In contrast, ignoring interference correlation leads to a highly optimistic performance estimate for MRC, especially for large @math . In the low outage probability regime, our success probability expression can be considerably simplified. Observations based from the analysis include the following: 1) For small path loss exponents, MRC and minimum mean square error combining exhibit similar performance, and 2) the gains of MRC over selection combining are smaller in the interference-limited case than in the well-studied noise-limited case.",
"This paper deals with the distribution of cumulated instantaneous interference power in a Rayleigh fading channel for an infinite number of interfering stations, where each station transmits with a certain probability, independently of all others. If all distances are known, a necessary and sufficient condition is given for the corresponding distribution to be nondefective. Explicit formulae of density and distribution functions are obtained in the interesting special case that interfering stations are located on a linear grid. Moreover, the Laplace transform of cumulated power is investigated when the positions of stations follow a one- or two-dimensional Poisson process. It turns out that the corresponding distribution is defective for the two-dimensional models.",
"Interference is a main limiting factor of the performance of a wireless ad hoc network. The temporal and the spatial correlation of the interference makes the outages correlated temporally (important for retransmissions) and spatially correlated (important for routing). In this letter we quantify the temporal and spatial correlation of the interference in a wireless ad hoc network whose nodes are distributed as a Poisson point process on the plane when ALOHA is used as the multiple-access scheme.",
"The temporal correlation of interference is a key performance factor of several technologies and protocols for wireless communications. A comprehensive understanding of interference correlation is especially important in the design of diversity schemes, whose performance can severely degrade in case of highly correlated interference. Taking into account three sources of correlation-node locations, channel, and traffic-and using common modeling assumptions-random homogeneous node positions, Rayleigh block fading, and slotted ALOHA traffic-we derive closed-form expressions and calculation rules for the correlation coefficient of the overall interference power received at a certain point in space. Plots give an intuitive understanding as to how model parameters influence the interference correlation.",
"The authors obtain the optimum transmission ranges to maximize throughput for a direct-sequence spread-spectrum multihop packet radio network. In the analysis, they model the network self-interference as a random variable which is equal to the sum of the interference power of all other terminals plus background noise. The model is applicable to other spread-spectrum schemes where the interference of one user appears as a noise source with constant power spectral density to the other users. The network terminals are modeled as a random Poisson field of interference power emitters. The statistics of the interference power at a receiving terminal are obtained and shown to be the stable distributions of a parameter that is dependent on the propagation power loss law. The optimum transmission range in such a network is of the form CK^α where C is a constant, K is a function of the processing gain, the background noise power spectral density, and the degree of error-correction coding used, and α is related to the power loss law. The results obtained can be used in heuristics to determine optimum routing strategies in multihop networks.",
"",
"Interference in wireless systems is both temporally and spatially correlated. Yet very little research has analyzed the effect of such correlation. Here we focus on its impact on the diversity in Poisson networks with multi-antenna receivers. Most work on multi-antenna communication does not consider interference, and if it is included, it is assumed independent across the receive antennas. Here we show that interference correlation significantly reduces the probability of successful reception over SIMO links. The diversity loss is quantified via the diversity polynomial. For the two-antenna case, we provide the complete joint SIR distribution.",
"The paper considers interference in a wireless communication network caused by users that share the same propagation medium. Under the assumption that the interfering users are spatially Poisson distributed and under a power-law propagation loss function, it has been shown in the past that the interference instantaneous amplitude at the receiver is α-stable distributed. Past work has not considered the second-order statistics of the interference and has relied on the assumption that interference samples are independent. In this paper, we provide analytic expressions for the interference second-order statistics and show that depending on the properties of the users' holding times, the interference can be correlated. We provide conditions under which the interference becomes m-dependent, φ-mixing, or long-range dependent. Finally, we present some implications of our theoretical findings on signal detection."
]
} |
1903.10899 | 2924509970 | This article proposes and evaluates a technique to predict the level of interference in wireless networks. We design a recursive predictor that computes future interference values at a given location by filtering measured interference at this location. The parametrization of the predictor is done offline by translating the autocorrelation of interference into an autoregressive moving average (ARMA) representation. This ARMA model is inserted into a steady-state Kalman filter enabling nodes to predict with low computational effort. Results show good performance in terms of accuracy between predicted and true values for relevant time horizons. Although the predictor is parametrized for the case of Poisson networks, Rayleigh fading, and fixed message lengths, a sensitivity analysis shows that it also works well in more general network scenarios. Numerical examples for underlay device-to-device communications and a common wireless sensor technology illustrate its broad applicability. The predictor can be applied as part of interference management to improve medium access, scheduling, and resource allocation. | Despite these advances in interference modeling, this new knowledge has not been exploited to actually improve the performance of wireless systems. The state of research is not as advanced as in channel modeling, where knowledge about the channel dynamics, such as coherence time and decorrelation distances, is indeed used in state-of-the-art technologies (e.g., space-time coding and MIMO). This gap from modeling to design is the motivation for our research: the investigation of concepts and solutions for interference prediction . At the core of our work is the fundamental issue: How well can we predict, in a probabilistic manner, the interference power at a given location in a given network into the future? 
An initial step in this direction is made in @cite_14 , where a simple prediction technique based on low-complexity learning of traffic patterns is proposed. The paper at hand presents a conceptually completely different predictor and discusses the problem more comprehensively. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2768950606"
],
"abstract": [
"Our research investigates the concept of interference prediction as an unprecedented approach for interference management and medium access in wireless networks. This paper is a first step in this direction: it proposes and evaluates a simple interference prediction technique that is based on low-complexity learning. Nodes predict the interference situation they expect to experience in the near future and select the most favorable time slot to start the transmission of a multislot message. The performance gain is evaluated in a small-scale fading environment in terms of link outage and delay against random slot selection. Simulation results show that interference prediction is a promising building block for wireless systems. Additional studies are needed to explore advanced techniques and assess their feasibility."
]
} |
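The row above describes parametrizing a steady-state Kalman filter from an ARMA representation of the interference autocorrelation. The following is a hypothetical minimal sketch of that idea, not the paper's implementation: it uses a scalar AR(1) model in place of a full ARMA model, and all function names and parameter values (`a`, `q`, `r`) are invented for illustration.

```python
# Illustrative sketch (assumptions, not the paper's code): scalar AR(1)
# interference model x[t+1] = a*x[t] + w with process noise variance q,
# observed as y[t] = x[t] + v with measurement noise variance r.
def steady_state_gain(a, q, r, iters=500):
    """Iterate the scalar Riccati recursion until the predictive
    covariance p converges, then return the steady-state Kalman gain."""
    p = q
    for _ in range(iters):
        k = p / (p + r)
        p = a * a * (1.0 - k) * p + q
    return p / (p + r)

def predict(measurements, a=0.9, q=0.1, r=0.5, horizon=3):
    """Filter measured interference with the precomputed steady-state
    gain, then extrapolate `horizon` steps ahead via the AR(1) dynamics."""
    k = steady_state_gain(a, q, r)
    x = measurements[0]
    for y in measurements[1:]:
        x_prior = a * x                  # time update
        x = x_prior + k * (y - x_prior)  # measurement update
    return x * a ** horizon              # multi-step extrapolation
```

Because the gain is precomputed offline, the per-sample update is a single multiply-add, matching the "low computational effort" claim in the abstract.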
1903.10920 | 2923677086 | Predicting human perceptual similarity is a challenging subject of ongoing research. The visual process underlying this aspect of human vision is thought to employ multiple different levels of visual analysis (shapes, objects, texture, layout, color, etc). In this paper, we postulate that the perception of image similarity is not an explicitly learned capability, but rather one that is a byproduct of learning others. This claim is supported by leveraging representations learned from a diverse set of visual tasks and using them jointly to predict perceptual similarity. This is done via simple feature concatenation, without any further learning. Nevertheless, experiments performed on the challenging Totally-Looks-Like (TLL) benchmark significantly surpass recent baselines, closing much of the reported gap towards prediction of human perceptual similarity. We provide an analysis of these results and discuss them in a broader context of emergent visual capabilities and their implications on the course of machine-vision research. | Top: the Totally-Looks-Like dataset @cite_44 . The images within each pair have been judged by some user to be similar. Bottom : images from @cite_42 , where a user must choose which of the two distorted versions (left, right) of an image is more similar to a reference (center). | {
"cite_N": [
"@cite_44",
"@cite_42"
],
"mid": [
"2962778531",
"2783879794"
],
"abstract": [
"Perceptual judgment of image similarity by humans relies on rich internal representations ranging from low-level features to high-level concepts, scene properties and even cultural associations. Existing methods and datasets attempting to explain perceived similarity use stimuli which arguably do not cover the full breadth of factors that affect human similarity judgments, even those geared toward this goal. We introduce a new dataset dubbed Totally-Looks-Like (TLL) after a popular entertainment website, which contains images paired by humans as being visually similar. The dataset contains 6016 image-pairs from the wild, shedding light upon a rich and diverse set of criteria employed by human beings. We conduct experiments to try to reproduce the pairings via features extracted from state-of-the-art deep convolutional neural networks, as well as additional human experiments to verify the consistency of the collected data. Even though we create conditions to artificially make the matching task increasingly easier, we show that machine-extracted representations perform very poorly in terms of reproducing the matching selected by humans. The results suggest future directions for improvement of learned image representations. Data and code will be available at https://sites.google.com/view/totally-looks-like-dataset.",
"While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions, and fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on the ImageNet classification task has been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called \"perceptual losses\"? What elements are critical for their success? To answer these questions, we introduce a new Full Reference Image Quality Assessment (FR-IQA) dataset of perceptual human judgments, orders of magnitude larger than previous datasets. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by huge margins. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations."
]
} |
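The abstract above proposes predicting perceptual similarity by simple concatenation of representations from multiple visual tasks, without any further learning. A minimal hypothetical sketch of that idea follows; the feature extractors themselves are assumed to exist elsewhere, and the function names and per-task normalization choice are illustrative assumptions, not the paper's code.

```python
import numpy as np

# Sketch of similarity-by-concatenation: L2-normalize each task's feature
# vector, concatenate into one joint embedding, and compare by cosine
# similarity. Inputs are lists of per-task feature vectors for one image.
def normalize(v):
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def joint_embedding(feature_vectors):
    """Concatenate per-task features after per-task L2 normalization,
    so no single representation dominates the joint distance."""
    return normalize(np.concatenate([normalize(v) for v in feature_vectors]))

def cosine_similarity(a, b):
    """Similarity between two images given their per-task feature lists."""
    return float(np.dot(joint_embedding(a), joint_embedding(b)))
```

Ranking candidate matches by `cosine_similarity` against a query image would then approximate the "simple feature concatenation, without any further learning" setup described in the abstract.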
1903.10920 | 2923677086 | Predicting human perceptual similarity is a challenging subject of ongoing research. The visual process underlying this aspect of human vision is thought to employ multiple different levels of visual analysis (shapes, objects, texture, layout, color, etc). In this paper, we postulate that the perception of image similarity is not an explicitly learned capability, but rather one that is a byproduct of learning others. This claim is supported by leveraging representations learned from a diverse set of visual tasks and using them jointly to predict perceptual similarity. This is done via simple feature concatenation, without any further learning. Nevertheless, experiments performed on the challenging Totally-Looks-Like (TLL) benchmark significantly surpass recent baselines, closing much of the reported gap towards prediction of human perceptual similarity. We provide an analysis of these results and discuss them in a broader context of emergent visual capabilities and their implications on the course of machine-vision research. | : classical works on perceptual similarity already recognize it as a multi-faceted @cite_28 @cite_23 @cite_0 , knowledge and context dependent @cite_18 @cite_30 problem. More recent benchmarks include subjective image quality assessment with a reference image, which have served for evaluating similarity metrics @cite_38 @cite_2 @cite_8 . The large-scale BAPPS dataset has been recently introduced by @cite_42 , more geared towards perceptual similarity than quality assessment per se. Several lines of work have claimed that human perceptual similarity judgment is solved to a good extent by CNN-based methods @cite_31 @cite_1 @cite_48 . | {
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_18",
"@cite_8",
"@cite_28",
"@cite_48",
"@cite_42",
"@cite_1",
"@cite_0",
"@cite_23",
"@cite_2",
"@cite_31"
],
"mid": [
"2045300586",
"2161907179",
"2114826854",
"2171349048",
"1979104110",
"2492109573",
"2783879794",
"2762409611",
"2059975159",
"1987551799",
"1974013408",
"2768180730"
],
"abstract": [
"The representation of physics problems in relation to the organization of physics knowledge is investigated in experts and novices. Four experiments examine (a) the existence of problem categories as a basis for representation; (b) differences in the categories used by experts and novices; (c) differences in the knowledge associated with the categories; and (d) features in the problems that contribute to problem categorization and representation. Results from sorting tasks and protocols reveal that experts and novices begin their problem representations with specifiably different problem categories, and completion of the representations depends on the knowledge associated with the categories. For the experts initially abstract physics principles to approach and solve a problem representation, whereas novices base their representation and approaches on the problem's literal features.",
"Measurement of visual quality is of fundamental importance for numerous image and video processing applications, where the goal of quality assessment (QA) algorithms is to automatically assess the quality of images or videos in agreement with human quality judgments. Over the years, many researchers have taken different approaches to the problem and have contributed significant research in this area and claim to have made progress in their respective domains. It is important to evaluate the performance of these algorithms in a comparative setting and analyze the strengths and weaknesses of these methods. In this paper, we present results of an extensive subjective quality assessment study in which a total of 779 distorted images were evaluated by about two dozen human subjects. The \"ground truth\" image quality data obtained from about 25 000 individual human quality judgments is used to evaluate the performance of several prominent full-reference image quality assessment algorithms. To the best of our knowledge, apart from video quality studies conducted by the Video Quality Experts Group, the study presented in this paper is the largest subjective image quality study in the literature in terms of number of images, distortion types, and number of human judgments per image. Moreover, we have made the data from the study freely available to the research community. This would allow other researchers to easily report comparative results in the future",
"The question of what makes a concept coherent (what makes its members form a comprehensible class) has received a variety of answers. In this article we review accounts based on similarity, feature correlations, and various theories of categorization. We find that each theory provides an inadequate account of conceptual coherence (or no account at all) because none provides enough constraints on possible concepts. We propose that concepts are coherent to the extent that they fit people's background knowledge or naive theories about the world. These theories help to relate the concepts in a domain and to structure the attributes that are internal to a concept. Evidence of the influence of theories on various conceptual tasks is presented, and the possible importance of theories in cognitive development is discussed.",
"The mainstream approach to image quality assessment has centered around accurately modeling the single most relevant strategy employed by the human visual system (HVS) when judging image quality (e.g., detecting visible differences, and extracting image structure information). In this work, we suggest that a single strategy may not be sufficient; rather, we advocate that the HVS uses multiple strategies to determine image quality. For images containing near-threshold distortions, the image is most apparent, and thus the HVS attempts to look past the image and look for the distortions (a detection-based strategy). For images containing clearly visible distortions, the distortions are most apparent, and thus the HVS attempts to look past the distortion and look for the image's subject matter (an appearance-based strategy). Here, we present a quality assessment method [most apparent distortion (MAD)], which attempts to explicitly model these two separate strategies. Local luminance and contrast masking are used to estimate detection-based perceived distortion in high-quality images, whereas changes in the local statistics of spatial-frequency components are used to estimate appearance-based perceived distortion in low-quality images. We show that a combination of these two measures can perform well in predicting subjective ratings of image quality.",
"This article reviews the status of similarity as an explanatory construct with a focus on similarity judgments. For similarity to be a useful construct, one must be able to specify the ways or respects in which two things are similar. One solution to this problem is to restrict the notion of similarity to hard-wired perceptual processes. It is argued that this view is too narrow and limiting. Instead, it is proposed that an important source of constraints derives from the similarity comparison process itself. Both new experiments and other evidence are described that support the idea that respects are determined by processes internal to comparisons",
"Deep neural networks have become increasingly successful at solving classic perception problems such as object recognition, semantic segmentation, and scene understanding, often reaching or surpassing human-level accuracy. This success is due in part to the ability of DNNs to learn useful representations of high-dimensional inputs, a problem that humans must also solve. We examine the relationship between the representations learned by these networks and human psychological representations recovered from similarity judgments. We find that deep features learned in service of object classification account for a significant amount of the variance in human similarity judgments for a set of animal images. However, these features do not capture some qualitative distinctions that are a key part of human representations. To remedy this, we develop a method for adapting deep features to align with human similarity judgments, resulting in image representations that can potentially be used to extend the scope of psychological experiments.",
"While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions, and fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on the ImageNet classification task has been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called \"perceptual losses\"? What elements are critical for their success? To answer these questions, we introduce a new Full Reference Image Quality Assessment (FR-IQA) dataset of perceptual human judgments, orders of magnitude larger than previous datasets. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by huge margins. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations.",
"Recent advances in Deep convolutional Neural Networks (DNNs) have enabled unprecedentedly accurate computational models of brain representations, and present an exciting opportunity to model diverse cognitive functions. State-of-the-art DNNs achieve human-level performance on object categorisation, but it is unclear how well they capture human behaviour on complex cognitive tasks. Recent reports suggest that DNNs can explain significant variance in one such task, judging object similarity. Here, we extend these findings by replicating them for a rich set of object images, comparing performance across layers within two DNNs of different depths, and examining how the DNNs’ performance compares to that of non-computational “conceptual” models. Human observers performed similarity judgements for a set of 92 images of real-world objects. Representations of the same images were obtained in each of the layers of two DNNs of different depths (8-layer AlexNet and 16-layer VGG-16). To create conceptual models, other human observers generated visual-feature labels (e.g. “eye”) and category labels (e.g. “animal”) for the same image set. Feature labels were divided into parts, colours, textures and contours, while category labels were divided into subordinate, basic and superordinate categories. We fitted models derived from the features, categories, and from each layer of each DNN to the similarity judgements, using representational similarity analysis to evaluate model performance. In both DNNs, similarity within the last layer explains most of the explainable variance in human similarity judgements. The last layer outperforms almost all feature-based models. Late and mid-level layers outperform some but not all feature-based models. Importantly, categorical models predict similarity judgements significantly better than any DNN layer. Our results provide further evidence for commonalities between DNNs and brain representations. 
Models derived from visual features other than object parts perform relatively poorly, perhaps because DNNs more comprehensively capture the colours, textures and contours which matter to human object perception. However, categorical models outperform DNNs, suggesting that further work may be needed to bring high-level semantic representations in DNNs closer to those extracted by humans. Modern DNNs explain similarity judgements remarkably well considering they were not trained on this task, and are promising models for many aspects of human cognition.",
"The metric and dimensional assumptions that underlie the geometric representation of similarity are questioned on both theoretical and empirical grounds. A new set-theoretical approach to similarity is developed in which objects are represented as collections of features, and similarity is described as a feature-matching process. Specifically, a set of qualitative assumptions is shown to imply the contrast model, which expresses the similarity between objects as a linear combination of the measures of their common and distinctive features. Several predictions of the contrast model are tested in studies of similarity with both semantic and perceptual stimuli. The model is used to uncover, analyze, and explain a variety of empirical phenomena such as the role of common and distinctive features, the relations between judgments of similarity and difference, the presence of asymmetric similarities, and the effects of context on judgments of similarity. The contrast model generalizes standard representations of similarity data in terms of clusters and trees. It is also used to analyze the relations of prototypicality and family resemblance",
"By adding the same component (e.g., glasses) to two stimuli (e.g., schematic faces) or to one stimulus only, it is possible to assess the impact of that component as a common or as a distinctive feature. A formal procedure, based on the contrast model (A. Tversky, 1977, Psychological Review, 84, 327–352), for estimating the relative weight of common to distinctive features from similarity judgments between separable stimuli with independent components, was developed. The results show that in verbal stimuli (e.g., descriptions of persons, meals, trips) common features loom larger than distinctive features. On the other hand, in pictorial stimuli (e.g., schematic faces, landscapes) distinctive features loom larger than common features. Verbal descriptions of pictorial stimuli were evaluated like other verbal stimuli and unlike their pictorial counterparts. In conceptual comparisons, the relative weight of common to distinctive features was higher in judgments of similarity than in judgments of dissimilarity.",
"This paper describes a recently created image database, TID2013, intended for evaluation of full-reference visual quality assessment metrics. With respect to TID2008, the new database contains a larger number (3000) of test images obtained from 25 reference images, 24 types of distortions for each reference image, and 5 levels for each type of distortion. Motivations for introducing 7 new types of distortions and one additional level of distortions are given; examples of distorted images are presented. Mean opinion scores (MOS) for the new database have been collected by performing 985 subjective experiments with volunteers (observers) from five countries (Finland, France, Italy, Ukraine, and USA). The availability of MOS allows the use of the designed database as a fundamental tool for assessing the effectiveness of visual quality. Furthermore, existing visual quality metrics have been tested with the proposed database and the collected results have been analyzed using rank order correlation coefficients between MOS and considered metrics. These correlation indices have been obtained both considering the full set of distorted images and specific image subsets, for highlighting advantages and drawbacks of existing, state of the art, quality metrics. Approaches to thorough performance analysis for a given metric are presented to detect practical situations or distortion types for which this metric is not adequate enough to human perception. The created image database and the collected MOS values are freely available for downloading and utilization for scientific purposes. We have created a new large database. This database contains a larger number of distorted images and distortion types. MOS values for all images are obtained and provided. Analysis of correlation between MOS and a wide set of existing metrics is carried out. Methodology for determining drawbacks of existing visual quality metrics is described.",
"Over the last few decades, psychologists have developed sophisticated formal models of human categorization using simple artificial stimuli. In this paper, we use modern machine learning methods to extend this work into the realm of naturalistic stimuli, enabling human categorization to be studied over the complex visual domain in which it evolved and developed. We show that representations derived from a convolutional neural network can be used to model behavior over a database of >300,000 human natural image classifications, and find that a group of models based on these representations perform well, near the reliability of human judgments. Interestingly, this group includes both exemplar and prototype models, contrasting with the dominance of exemplar models in previous work. We are able to improve the performance of the remaining models by preprocessing neural network representations to more closely capture human similarity judgments."
]
} |
1903.10920 | 2923677086 | Predicting human perceptual similarity is a challenging subject of ongoing research. The visual process underlying this aspect of human vision is thought to employ multiple different levels of visual analysis (shapes, objects, texture, layout, color, etc). In this paper, we postulate that the perception of image similarity is not an explicitly learned capability, but rather one that is a byproduct of learning others. This claim is supported by leveraging representations learned from a diverse set of visual tasks and using them jointly to predict perceptual similarity. This is done via simple feature concatenation, without any further learning. Nevertheless, experiments performed on the challenging Totally-Looks-Like (TLL) benchmark significantly surpass recent baselines, closing much of the reported gap towards prediction of human perceptual similarity. We provide an analysis of these results and discuss them in a broader context of emergent visual capabilities and their implications on the course of machine-vision research. | : Some visual capabilities such as gaze direction prediction and hand detection can be explained by using causal reasoning together with innate capabilities (e.g., face detection) @cite_36 . Object naming does receive some amount of supervision during childhood and this is indeed shown to assist development in early stages @cite_37 . Nevertheless, many other abilities are either seldom or even never supervised. Examples include stereopsis, contour integration, perception of motion and others @cite_15 . Behavioral patterns linked with vision also emerge; a notable example is saliency, the prediction of gaze given a visual stimulus @cite_41 . Though saliency is clearly a measurement of a single behavioral aspect of the visual system, virtually all recent leading methods of predicting saliency have been based on purely data-driven methods @cite_46 .
With some rare exceptions @cite_12 , most methods treat saliency as a goal rather than observing it as a part of a functioning system. There are a few recent papers that report emergence of useful visual representations, such as emergence of visual tracking by the need to color videos in a consistent manner @cite_35 as well as motor @cite_39 or visual @cite_17 skills. | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_15",
"@cite_36",
"@cite_41",
"@cite_39",
"@cite_46",
"@cite_12",
"@cite_17"
],
"mid": [
"2963426332",
"2168297200",
"2802723470",
"2023906772",
"1497599070",
"2726187156",
"",
"2900846027",
"2788718066"
],
"abstract": [
"We use large amounts of unlabeled video to learn models for visual tracking without manual human supervision. We leverage the natural temporal coherency of color to create a model that learns to colorize gray-scale videos by copying colors from a reference frame. Quantitative and qualitative experiments suggest that this task causes the model to automatically learn to track visual regions. Although the model is trained without any ground-truth labels, our method learns to track well enough to outperform the latest methods based on optical flow. Moreover, our results suggest that failures to track are correlated with failures to colorize, indicating that advancing video colorization may further improve self-supervised visual tracking.",
"Can object names and functions act as cues to categories for infants? In Study 1, 14- and 18-month-old infants were shown novel category exemplars along with a function, a name, or no cues. Infants were then asked to “find another one,” choosing between 2 novel objects (1 from the familiar category and the other not). Infants at both ages were more likely to select the category match in the function than in the no-cue condition. However, only at 18 months did naming the objects enhance categorization. Study 2 shows that names can facilitate categorization for 14-month-olds as well when a hint regarding the core meaning of the objects (the function of a single familiarization object) is provided. Partitioning the world into meaningful categories is a formidable task, especially considering the vast amount of information that could be organized in the process. Nevertheless, infants succeed at forming a wide variety of categories within their first year of life (e.g., Quinn & Eimas, 1997; Mandler & McDonough, 1993). Although it is commonly assumed that processing biases in the infant can help to explain this remarkable ability, the precise nature of such constraints, and the mechanisms by which they exert their",
"",
"Early in development, infants learn to solve visual problems that are highly challenging for current computational methods. We present a model that deals with two fundamental problems in which the gap between computational difficulty and infant learning is particularly striking: learning to recognize hands and learning to recognize gaze direction. The model is shown a stream of natural videos and learns without any supervision to detect human hands by appearance and by context, as well as direction of gaze, in complex natural scenes. The algorithm is guided by an empirically motivated innate mechanism—the detection of “mover” events in dynamic images, which are the events of a moving image region causing a stationary region to move or change after contact. Mover events provide an internal teaching signal, which is shown to be more effective than alternative cues and sufficient for the efficient acquisition of hand and gaze representations. The implications go beyond the specific tasks, by showing how domain-specific “proto concepts” can guide the system to acquire meaningful concepts, which are significant to the observer but statistically inconspicuous in the sensory input.",
"A number of psychophysical studies concerning the detection, localization and recognition of objects in the visual field have suggested a two-stage theory of human visual perception. The first stage is the \"preattentive\" mode, in which simple features are processed rapidly and in parallel over the entire visual field. In the second, \"attentive\" mode, a specialized processing focus, usually called the focus of attention, is directed to particular locations in the visual field. The analysis of complex forms and the recognition of objects are associated with this second stage. The computational justification for such a hypothesis comes from the realization that while it is possible to imagine specific algorithms performing tasks such as shape analysis and recognition at specific locations, it is difficult to imagine these algorithms operating in parallel over the whole visual scene, since such an approach will quickly lead to a combinatorial explosion in terms of required computational resources. This is essentially the major critique of Minsky and Papert to a universal application of perceptrons in visual perception. Taken together, these empirical and theoretical studies suggest that beyond a certain preprocessing stage, the analysis of visual information proceeds in a sequence of operations, each one applied to a selected location (or locations).",
"The reinforcement learning paradigm allows, in principle, for complex behaviours to be learned directly from simple reward signals. In practice, however, it is common to carefully hand-design the reward function to encourage a particular solution, or to derive it from demonstration data. In this paper we explore how a rich environment can help to promote the learning of complex behavior. Specifically, we train agents in diverse environmental contexts, and find that this encourages the emergence of robust behaviours that perform well across a suite of tasks. We demonstrate this principle for locomotion -- behaviours that are known for their sensitivity to the choice of reward. We train several simulated bodies on a diverse set of challenging terrains and obstacles, using a simple reward function based on forward progress. Using a novel scalable variant of policy gradient reinforcement learning, our agents learn to run, jump, crouch and turn as required by the environment without explicit reward-based guidance. A visual depiction of highlights of the learned behavior can be viewed following this https URL .",
"",
"Recent machine learning models have shown that including attention as a component results in improved model accuracy and interpretability, despite the concept of attention in these approaches only loosely approximating the brain's attention mechanism. Here we extend this work by building a more brain-inspired deep network model of the primate ATTention Network (ATTNet) that learns to shift its attention so as to maximize the reward. Using deep reinforcement learning, ATTNet learned to shift its attention to the visual features of a target category in the context of a search task. ATTNet's dorsal layers also learned to prioritize these shifts of attention so as to maximize success of the ventral pathway classification and receive greater reward. Model behavior was tested against the fixations made by subjects searching images for the same cued category. Both subjects and ATTNet showed evidence for attention being preferentially directed to target goals, behaviorally measured as oculomotor guidance to targets. More fundamentally, ATTNet learned to shift its attention to target like objects and spatially route its visual inputs to accomplish the task. This work makes a step toward a better understanding of the role of attention in the brain and other computational systems.",
"Infants are experts at playing, with an amazing ability to generate novel structured behaviors in unstructured environments that lack clear extrinsic reward signals. We seek to replicate some of these abilities with a neural network that implements curiosity-driven intrinsic motivation. Using a simple but ecologically naturalistic simulated environment in which the agent can move and interact with objects it sees, the agent learns a world model predicting the dynamic consequences of its actions. Simultaneously, the agent learns to take actions that adversarially challenge the developing world model, pushing the agent to explore novel and informative interactions with its environment. We demonstrate that this policy leads to the self-supervised emergence of a spectrum of complex behaviors, including ego motion prediction, object attention, and object gathering. Moreover, the world model that the agent learns supports improved performance on object dynamics prediction and localization tasks. Our results are a proof-of-principle that computational models of intrinsic motivation might account for key features of developmental visuomotor learning in infants."
]
} |
1903.10920 | 2923677086 | Predicting human perceptual similarity is a challenging subject of ongoing research. The visual process underlying this aspect of human vision is thought to employ multiple different levels of visual analysis (shapes, objects, texture, layout, color, etc). In this paper, we postulate that the perception of image similarity is not an explicitly learned capability, but rather one that is a byproduct of learning others. This claim is supported by leveraging representations learned from a diverse set of visual tasks and using them jointly to predict perceptual similarity. This is done via simple feature concatenation, without any further learning. Nevertheless, experiments performed on the challenging Totally-Looks-Like (TLL) benchmark significantly surpass recent baselines, closing much of the reported gap towards prediction of human perceptual similarity. We provide an analysis of these results and discuss them in a broader context of emergent visual capabilities and their implications on the course of machine-vision research. | Transfer learning has already been established as the tool to enable learning of new tasks by leveraging already learned ones @cite_9 . The transferability of tasks to related ones has also been explored @cite_19 . Recently, the work of @cite_3 has shown a method of predicting which feature extractors will perform well on a given task. In multi-task learning, a single network is adapted to multiple tasks @cite_10 . The shared representation is more compact than using an exclusive network for each task independently. We propose not to adapt one net to multiple representations, but to adapt multiple representations to a single task. Related approaches exist in NLP where pre-trained representations via multiple tasks turn out useful for many downstream ones @cite_33 @cite_21 . | {
"cite_N": [
"@cite_33",
"@cite_9",
"@cite_21",
"@cite_3",
"@cite_19",
"@cite_10"
],
"mid": [
"2896457183",
"2149933564",
"",
"",
"2798512429",
"2624871570"
],
"abstract": [
"We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5 (7.7 point absolute improvement), MultiNLI accuracy to 86.7 (4.6 absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).",
"Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset.",
"",
"",
"Do visual tasks have a relationship, or are they unrelated? For instance, could having surface normals simplify estimating the depth of an image? Intuition answers these questions positively, implying existence of a structure among visual tasks. Knowing this structure has notable values; it is the concept underlying transfer learning and provides a principled way for identifying redundancies across tasks, e.g., to seamlessly reuse supervision among related tasks or solve many tasks in one system without piling up the complexity. We propose a fully computational approach for modeling the structure of space of visual tasks. This is done via finding (first and higher-order) transfer learning dependencies across a dictionary of twenty six 2D, 2.5D, 3D, and semantic tasks in a latent space. The product is a computational taxonomic map for task transfer learning. We study the consequences of this structure, e.g. nontrivial emerged relationships, and exploit them to reduce the demand for labeled data. For example, we show that the total number of labeled datapoints needed for solving a set of 10 tasks can be reduced by roughly 2/3 (compared to training independently) while keeping the performance nearly the same. We provide a set of tools for computing and probing this taxonomical structure including a solver that users can employ to devise efficient supervision policies for their use cases.",
"Multi-task learning (MTL) has led to successes in many applications of machine learning, from natural language processing and speech recognition to computer vision and drug discovery. This article aims to give a general overview of MTL, particularly in deep neural networks. It introduces the two most common methods for MTL in Deep Learning, gives an overview of the literature, and discusses recent advances. In particular, it seeks to help ML practitioners apply MTL by shedding light on how MTL works and providing guidelines for choosing appropriate auxiliary tasks."
]
} |
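The "simple feature concatenation, without any further learning" recipe from the abstract above can be sketched directly. The per-task L2 normalization below is an assumption I add so that representations with different scales contribute comparably; it is an illustration, not code from the paper:

```python
import numpy as np

def joint_similarity(feats_a, feats_b):
    # feats_a / feats_b: lists of per-task feature vectors for images A and B,
    # e.g. one vector per pretrained network. Normalize each task's features,
    # concatenate them into one joint representation, and compare with cosine
    # similarity -- no further learning involved.
    def normed_concat(feats):
        return np.concatenate([f / np.linalg.norm(f) for f in feats])

    a, b = normed_concat(feats_a), normed_concat(feats_b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Identical feature sets yield similarity 1, and the score degrades as any of the constituent representations diverge.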
1903.10630 | 2949528210 | We consider the problem of diversifying automated reply suggestions for a commercial instant-messaging (IM) system (Skype). Our conversation model is a standard matching based information retrieval architecture, which consists of two parallel encoders to project messages and replies into a common feature representation. During inference, we select replies from a fixed response set using nearest neighbors in the feature space. To diversify responses, we formulate the model as a generative latent variable model with Conditional Variational Auto-Encoder (M-CVAE). We propose a constrained-sampling approach to make the variational inference in M-CVAE efficient for our production system. In offline experiments, M-CVAE consistently increased diversity by 30-40 without significant impact on relevance. This translated to a 5 gain in click-rate in our online production system. | Several researchers have used CVAEs @cite_12 for generating text @cite_7 @cite_14 @cite_0 , modeling conversations @cite_9 , diversifying responses in dialogues @cite_19 @cite_1 and improving translations @cite_22 . These papers use S2S architectures which we found impractical for production. We demonstrate similar objectives without having to rely on any sequential generative process, in an IR setting. VAE has been also used in IR @cite_16 to generate hash maps for semantically similar documents and top-n recommendation systems @cite_8 . In contrast, we demonstrate semantic-diversity in intents in a conversational IR model with M-CVAE. Novelty and diversity are well-studied problems in IR @cite_3 @cite_17 where it is assumed that document topics are available (and not latent) during training. Diversification effect as shown in @cite_15 relies on relevance (click) data, and thus is not directly applicable in our system. MMR @cite_11 is a relevant prior work which we use as a baseline. | {
"cite_N": [
"@cite_14",
"@cite_11",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_15",
"@cite_16",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"2083305840",
"2798493043",
"2963773425",
"2883308936",
"2963879591",
"2963958388",
"2104895009",
"",
"2962717182",
"2037098674",
"",
"2188365844",
""
],
"abstract": [
"",
"This paper presents a method for combining query-relevance with information-novelty in the context of text retrieval and summarization. The Maximal Marginal Relevance (MMR) criterion strives to reduce redundancy while maintaining query relevance in re-ranking retrieved documents and in selecting appropriate passages for text summarization. Preliminary results indicate some benefits for MMR diversity ranking in document retrieval and in single document summarization. The latter are borne out by the recent results of the SUMMAC conference in the evaluation of summarization systems. However, the clearest advantage is demonstrated in constructing non-redundant multi-document summaries, where MMR results are clearly superior to non-MMR passage selection.",
"",
"Recent advances in neural variational inference have spawned a renaissance in deep latent variable models. In this paper we introduce a generic variational inference framework for generative and conditional models of text. While traditional variational methods derive an analytic approximation for the intractable distributions over latent variables, here we construct an inference network conditioned on the discrete text input to provide the variational distribution. We validate this framework on two very different text modelling applications, generative document modelling and supervised question answering. Our neural variational document model combines a continuous stochastic document representation with a bag-of-words generative model and achieves the lowest reported perplexities on two standard test corpora. The neural answer selection model employs a stochastic representation layer within an attention mechanism to extract the semantics between a question and answer pair. On two question answering benchmarks this model exceeds all previous published benchmarks.",
"Recommender systems have been studied extensively due to their practical use in real-world scenarios. Despite this, generating effective recommendations with sparse user ratings remains a challenge. Side information has been widely utilized to address rating sparsity. Existing recommendation models that use side information are linear and, hence, have restricted expressiveness. Deep learning has been used to capture non-linearities by learning deep item representations from side information but as side information is high-dimensional, existing deep models tend to have large input dimensionality, which dominates their overall size. This makes them difficult to train, especially with insufficient inputs. Rather than learning item representations, in this paper, we propose to learn feature representations through deep learning from side information. Learning feature representations ensures a sufficient number of inputs to train a deep network. To achieve this, we propose to simultaneously recover user ratings and side information, by using a Variational Autoencoder (VAE). Specifically, user ratings and side information are encoded and decoded collectively through the same inference network and generation network. This is possible as both user ratings and side information are associated with items. To account for the heterogeneity of user ratings and side information, the final layer of the generation network follows different distributions. The proposed model is easy to implement and efficient to optimize and is shown to outperform state-of-the-art top-N recommendation methods that use side information.",
"",
"",
"In many retrieval tasks, one important goal involves retrieving a diverse set of results (e.g., documents covering a wide range of topics for a search query). First of all, this reduces redundancy, effectively showing more information with the presented results. Secondly, queries are often ambiguous at some level. For example, the query \"Jaguar\" can refer to many different topics (such as the car or feline). A set of documents with high topic diversity ensures that fewer users abandon the query because no results are relevant to them. Unlike existing approaches to learning retrieval functions, we present a method that explicitly trains to diversify results. In particular, we formulate the learning problem of predicting diverse subsets and derive a training method based on structural SVMs.",
"",
"",
"Traditionally, information retrieval systems aim to maximize the number of relevant documents returned to a user within some window of the top. For that goal, the probability ranking principle, which ranks documents in decreasing order of probability of relevance, is provably optimal. However, there are many scenarios in which that ranking does not optimize for the user's information need. One example is when the user would be satisfied with some limited number of relevant documents, rather than needing all relevant documents. We show that in such a scenario, an attempt to return many relevant documents can actually reduce the chances of finding any relevant documents. We consider a number of information retrieval metrics from the literature, including the rank of the first relevant result, the %no metric that penalizes a system only for retrieving no relevant results near the top, and the diversity of retrieved results when queries have multiple interpretations. We observe that given a probabilistic model of relevance, it is appropriate to rank so as to directly optimize these metrics in expectation. While doing so may be computationally intractable, we show that a simple greedy optimization algorithm that approximately optimizes the given objectives produces rankings for TREC queries that outperform the standard approach based on the probability ranking principle.",
"",
"Supervised deep learning has been successfully applied to many recognition problems. Although it can approximate a complex many-to-one function well when a large amount of training data is provided, it is still challenging to model complex structured output representations that effectively perform probabilistic inference and make diverse predictions. In this work, we develop a deep conditional generative model for structured output prediction using Gaussian latent variables. The model is trained efficiently in the framework of stochastic gradient variational Bayes, and allows for fast prediction using stochastic feed-forward inference. In addition, we provide novel strategies to build robust structured prediction algorithms, such as input noise-injection and multi-scale prediction objective at training. In experiments, we demonstrate the effectiveness of our proposed algorithm in comparison to the deterministic deep neural network counterparts in generating diverse but realistic structured output predictions using stochastic inference. Furthermore, the proposed training methods are complimentary, which leads to strong pixel-level object segmentation and semantic labeling performance on Caltech-UCSD Birds 200 and the subset of Labeled Faces in the Wild dataset.",
""
]
} |
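MMR, used as a baseline in the related work above, is simple enough to sketch concretely. The following is an illustrative implementation of the greedy criterion described in the Carbonell & Goldstein abstract quoted above, not code from either paper; the cosine similarity and the default lambda are my choices:

```python
import numpy as np

def mmr(query_vec, doc_vecs, lam=0.7, k=3):
    # Greedy Maximal Marginal Relevance: repeatedly pick the candidate
    # maximizing  lam * sim(d, query) - (1 - lam) * max_{s in selected} sim(d, s),
    # trading query relevance against redundancy with already-selected results.
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    candidates = list(range(len(doc_vecs)))
    selected = []
    while candidates and len(selected) < k:
        def score(i):
            rel = cos(query_vec, doc_vecs[i])
            red = max((cos(doc_vecs[i], doc_vecs[j]) for j in selected), default=0.0)
            return lam * rel - (1 - lam) * red
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

With a small lambda the second pick skips a near-duplicate of the first result in favor of a less relevant but novel one, which is exactly the diversification effect the M-CVAE work targets at the level of latent intents.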
1903.10863 | 2924479113 | The learning of Transformation-Equivariant Representations (TERs), which is introduced by hinton2011transforming , has been considered as a principle to reveal visual structures under various transformations. It contains the celebrated Convolutional Neural Networks (CNNs) as a special case that only equivary to the translations. In contrast, we seek to train TERs for a generic class of transformations and train them in an unsupervised fashion. To this end, we present a novel principled method by Autoencoding Variational Transformations (AVT), compared with the conventional approach to autoencoding data. Formally, given transformed images, the AVT seeks to train the networks by maximizing the mutual information between the transformations and representations. This ensures the resultant TERs of individual images contain the intrinsic information about their visual structures that would equivary extricably under various transformations. Technically, we show that the resultant optimization problem can be efficiently solved by maximizing a variational lower-bound of the mutual information. This variational approach introduces a transformation decoder to approximate the intractable posterior of transformations, resulting in an autoencoding architecture with a pair of the representation encoder and the transformation decoder. Experiments demonstrate the proposed AVT model sets a new record for the performances on unsupervised tasks, greatly closing the performance gap to the supervised models. | The study of transformation-equivariance can be traced back to the idea of training capsule nets @cite_15 @cite_3 @cite_28 , where the capsules are designed to equivary to various transformations with vectorized rather than scalar representations. However, there was no explicit training mechanism to ensure that the resultant capsules are transformation-equivariant. 
To address this problem, many efforts have been made in the literature @cite_31 @cite_12 @cite_14 to extend the conventional translation-equivariant convolutions to cover more transformations. For example, group equivariant convolutions (G-convolution) @cite_31 have been developed to equivary to more types of transformations so that a richer family of geometric structures can be explored by the classification layers on top of the generated representations. The idea of group equivariance has also been introduced to the capsule nets @cite_14 by ensuring the equivariance of output pose vectors to a group of transformations with a generic routing mechanism. | {
"cite_N": [
"@cite_31",
"@cite_14",
"@cite_28",
"@cite_3",
"@cite_15",
"@cite_12"
],
"mid": [
"2279221249",
"2807754035",
"2785994986",
"",
"2963703618",
""
],
"abstract": [
"We introduce Group equivariant Convolutional Neural Networks (G-CNNs), a natural generalization of convolutional neural networks that reduces sample complexity by exploiting symmetries. G-CNNs use G-convolutions, a new type of layer that enjoys a substantially higher degree of weight sharing than regular convolution layers. G-convolutions increase the expressive capacity of the network without increasing the number of parameters. Group convolution layers are easy to use and can be implemented with negligible computational overhead for discrete groups generated by translations, reflections and rotations. G-CNNs achieve state of the art results on CI- FAR10 and rotated MNIST.",
"We present group equivariant capsule networks, a framework to introduce guaranteed equivariance and invariance properties to the capsule network idea. Our work can be divided into two contributions. First, we present a generic routing by agreement algorithm defined on elements of a group and prove that equivariance of output pose vectors, as well as invariance of output activations, hold under certain conditions. Second, we connect the resulting equivariant capsule networks with work from the field of group convolutional networks. Through this connection, we provide intuitions of how both methods relate and are able to combine the strengths of both approaches in one deep neural network architecture. The resulting framework allows sparse evaluation of the group convolution operator, provides control over specific equivariance and invariance properties, and can use routing by agreement instead of pooling operations. In addition, it is able to provide interpretable and equivariant representation vectors as output capsules, which disentangle evidence of object existence from its pose.",
"A capsule is a group of neurons whose outputs represent different properties of the same entity. Each layer in a capsule network contains many capsules [a group of capsules forms a capsule layer and can be used in place of a traditional layer in a neural net]. We describe a version of capsules in which each capsule has a logistic unit to represent the presence of an entity and a 4x4 matrix which could learn to represent the relationship between that entity and the viewer (the pose). A capsule in one layer votes for the pose matrix of many different capsules in the layer above by multiplying its own pose matrix by trainable viewpoint-invariant transformation matrices that could learn to represent part-whole relationships. Each of these votes is weighted by an assignment coefficient. These coefficients are iteratively updated for each image using the Expectation-Maximization algorithm such that the output of each capsule is routed to a capsule in the layer above that receives a cluster of similar votes. The transformation matrices are trained discriminatively by backpropagating through the unrolled iterations of EM between each pair of adjacent capsule layers. On the smallNORB benchmark, capsules reduce the number of test errors by 45 compared to the state-of-the-art. Capsules also show far more resistance to white box adversarial attack than our baseline convolutional neural network.",
"",
"A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule becomes active. We show that a discriminatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule.",
""
]
} |
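Translation equivariance, the property that G-convolutions generalize to larger transformation groups, can be checked numerically for the base case. The sketch below uses a periodic (circular) cross-correlation so that the identity holds exactly; it illustrates the property and is not code from the cited papers:

```python
import numpy as np

def circ_corr2d(img, kernel):
    # Periodic ("circular") 2-D cross-correlation: the simplest setting in
    # which an ordinary convolutional filter is exactly translation-equivariant.
    H, W = img.shape
    kh, kw = kernel.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            for a in range(kh):
                for b in range(kw):
                    out[i, j] += kernel[a, b] * img[(i + a) % H, (j + b) % W]
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((6, 6))
kernel = rng.standard_normal((3, 3))
shift = (1, 2)

# Equivariance: shifting the input then filtering equals filtering then shifting.
lhs = circ_corr2d(np.roll(img, shift, axis=(0, 1)), kernel)
rhs = np.roll(circ_corr2d(img, kernel), shift, axis=(0, 1))
assert np.allclose(lhs, rhs)
```

Group equivariant networks extend exactly this commuting property from the translation group to richer groups such as rotations and reflections.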
1903.10863 | 2924479113 | The learning of Transformation-Equivariant Representations (TERs), which is introduced by hinton2011transforming , has been considered as a principle to reveal visual structures under various transformations. It contains the celebrated Convolutional Neural Networks (CNNs) as a special case that only equivary to the translations. In contrast, we seek to train TERs for a generic class of transformations and train them in an unsupervised fashion. To this end, we present a novel principled method by Autoencoding Variational Transformations (AVT), compared with the conventional approach to autoencoding data. Formally, given transformed images, the AVT seeks to train the networks by maximizing the mutual information between the transformations and representations. This ensures the resultant TERs of individual images contain the intrinsic information about their visual structures that would equivary extricably under various transformations. Technically, we show that the resultant optimization problem can be efficiently solved by maximizing a variational lower-bound of the mutual information. This variational approach introduces a transformation decoder to approximate the intractable posterior of transformations, resulting in an autoencoding architecture with a pair of the representation encoder and the transformation decoder. Experiments demonstrate the proposed AVT model sets a new record for the performances on unsupervised tasks, greatly closing the performance gap to the supervised models. | However, these group equivariant convolutions and capsules must be trained in a supervised fashion @cite_31 @cite_14 with labeled data for specific tasks, instead of learning unsupervised transformation-equivariant representations generalizable to unseen tasks. Moreover, their representations are restricted to be a function of groups, which limits the ability of training future classifiers on top of more flexible representations. | {
"cite_N": [
"@cite_31",
"@cite_14"
],
"mid": [
"2279221249",
"2807754035"
],
"abstract": [
"We introduce Group equivariant Convolutional Neural Networks (G-CNNs), a natural generalization of convolutional neural networks that reduces sample complexity by exploiting symmetries. G-CNNs use G-convolutions, a new type of layer that enjoys a substantially higher degree of weight sharing than regular convolution layers. G-convolutions increase the expressive capacity of the network without increasing the number of parameters. Group convolution layers are easy to use and can be implemented with negligible computational overhead for discrete groups generated by translations, reflections and rotations. G-CNNs achieve state of the art results on CI- FAR10 and rotated MNIST.",
"We present group equivariant capsule networks, a framework to introduce guaranteed equivariance and invariance properties to the capsule network idea. Our work can be divided into two contributions. First, we present a generic routing by agreement algorithm defined on elements of a group and prove that equivariance of output pose vectors, as well as invariance of output activations, hold under certain conditions. Second, we connect the resulting equivariant capsule networks with work from the field of group convolutional networks. Through this connection, we provide intuitions of how both methods relate and are able to combine the strengths of both approaches in one deep neural network architecture. The resulting framework allows sparse evaluation of the group convolution operator, provides control over specific equivariance and invariance properties, and can use routing by agreement instead of pooling operations. In addition, it is able to provide interpretable and equivariant representation vectors as output capsules, which disentangle evidence of object existence from its pose."
]
} |
1903.10863 | 2924479113 | The learning of Transformation-Equivariant Representations (TERs), which is introduced by hinton2011transforming , has been considered as a principle to reveal visual structures under various transformations. It contains the celebrated Convolutional Neural Networks (CNNs) as a special case that only equivary to the translations. In contrast, we seek to train TERs for a generic class of transformations and train them in an unsupervised fashion. To this end, we present a novel principled method by Autoencoding Variational Transformations (AVT), compared with the conventional approach to autoencoding data. Formally, given transformed images, the AVT seeks to train the networks by maximizing the mutual information between the transformations and representations. This ensures the resultant TERs of individual images contain the intrinsic information about their visual structures that would equivary extricably under various transformations. Technically, we show that the resultant optimization problem can be efficiently solved by maximizing a variational lower-bound of the mutual information. This variational approach introduces a transformation decoder to approximate the intractable posterior of transformations, resulting in an autoencoding architecture with a pair of the representation encoder and the transformation decoder. Experiments demonstrate the proposed AVT model sets a new record for the performances on unsupervised tasks, greatly closing the performance gap to the supervised models. | Recently, @cite_34 present a novel Auto-Encoding Transformation (AET) model by learning a representation from which an input transformation can be reconstructed. This is closely related to our motivation of learning transformation equivariant representations, considering the transformation can be decoded from the learned representation of original and transformed images. 
In contrast, in this paper we approach it from an information-theoretic point of view in a more principled fashion. | {
"cite_N": [
"@cite_34"
],
"mid": [
"2909671697"
],
"abstract": [
"The success of deep neural networks often relies on a large amount of labeled examples, which can be difficult to obtain in many real scenarios. To address this challenge, unsupervised methods are strongly preferred for training neural networks without using any labeled data. In this paper, we present a novel paradigm of unsupervised representation learning by Auto-Encoding Transformation (AET) in contrast to the conventional Auto-Encoding Data (AED) approach. Given a randomly sampled transformation, AET seeks to predict it merely from the encoded features as accurately as possible at the output end. The idea is the following: as long as the unsupervised features successfully encode the essential information about the visual structures of original and transformed images, the transformation can be well predicted. We will show that this AET paradigm allows us to instantiate a large variety of transformations, from parameterized, to non-parameterized and GAN-induced ones. Our experiments show that AET greatly improves over existing unsupervised approaches, setting new state-of-the-art performances being greatly closer to the upper bounds by their fully supervised counterparts on CIFAR-10, ImageNet and Places datasets."
]
} |
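The variational treatment of mutual information described in the AVT abstract above can be written out with the standard Barber-Agakov bound. Here $t$ denotes the sampled transformation, $z$ the learned representation, and $q_\phi$ the transformation decoder; the notation is mine, chosen to match the verbal description, not taken from the paper:

```latex
I(t; z) \;=\; H(t) - H(t \mid z)
        \;\ge\; H(t) + \mathbb{E}_{p(t, z)}\!\left[ \log q_\phi(t \mid z) \right]
```

The gap is the expected KL divergence $\mathrm{KL}\!\left( p(t \mid z) \,\|\, q_\phi(t \mid z) \right) \ge 0$. When transformations are drawn from a fixed distribution, $H(t)$ is constant, so maximizing the bound reduces to maximizing the decoder's expected log-likelihood of the transformation, matching the encoder-decoder training described above.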
1903.10863 | 2924479113 | The learning of Transformation-Equivariant Representations (TERs), which is introduced by hinton2011transforming , has been considered as a principle to reveal visual structures under various transformations. It contains the celebrated Convolutional Neural Networks (CNNs) as a special case that only equivary to the translations. In contrast, we seek to train TERs for a generic class of transformations and train them in an unsupervised fashion. To this end, we present a novel principled method by Autoencoding Variational Transformations (AVT), compared with the conventional approach to autoencoding data. Formally, given transformed images, the AVT seeks to train the networks by maximizing the mutual information between the transformations and representations. This ensures the resultant TERs of individual images contain the intrinsic information about their visual structures that would equivary extricably under various transformations. Technically, we show that the resultant optimization problem can be efficiently solved by maximizing a variational lower-bound of the mutual information. This variational approach introduces a transformation decoder to approximate the intractable posterior of transformations, resulting in an autoencoding architecture with a pair of the representation encoder and the transformation decoder. Experiments demonstrate the proposed AVT model sets a new record for the performances on unsupervised tasks, greatly closing the performance gap to the supervised models. | Auto-Encoders and GANs. Training auto-encoders in an unsupervised fashion has been studied in literature @cite_2 @cite_1 @cite_10 . Most auto-encoders are trained by minimizing the reconstruction errors on input data from the encoded representations. A large category of auto-encoder variants have been proposed. 
Among them is the Variational Auto-Encoder (VAE) @cite_30 , which maximizes a lower bound of the data likelihood to train a pair of probabilistic encoder and decoder, while beta-VAE seeks to disentangle representations by introducing an adjustable hyperparameter on the capacity of the latent channel to balance the independence constraint against the reconstruction accuracy @cite_25 . The denoising auto-encoder @cite_10 reconstructs noise-corrupted data to learn robust representations, while the contractive Auto-Encoder @cite_11 encourages representations that are invariant to small perturbations of the data. Along this line, @cite_3 propose capsule nets by minimizing the discrepancy between the reconstructed and target data. | {
"cite_N": [
"@cite_30",
"@cite_1",
"@cite_3",
"@cite_2",
"@cite_10",
"@cite_25",
"@cite_11"
],
"mid": [
"",
"2164122462",
"",
"2102409316",
"2025768430",
"2753738274",
"2218318129"
],
"abstract": [
"",
"A common misperception within the neural network community is that even with nonlinearities in their hidden layer, autoassociators trained with backpropagation are equivalent to linear methods such as principal component analysis (PCA). Our purpose is to demonstrate that nonlinear autoassociators actually behave differently from linear methods and that they can outperform these methods when used for latent extraction, projection, and classification. While linear autoassociators emulate PCA, and thus exhibit a flat or unimodal reconstruction error surface, autoassociators with nonlinearities in their hidden layer learn domains by building error reconstruction surfaces that, depending on the task, contain multiple local valleys. This interpolation bias allows nonlinear autoassociators to represent appropriate classifications of nonlinear multimodal domains, in contrast to linear autoassociators, which are inappropriate for such tasks. In fact, autoassociators with hidden unit nonlinearities can be shown to perform nonlinear classification and nonlinear recognition.",
"",
"An autoencoder network uses a set of recognition weights to convert an input vector into a code vector. It then uses a set of generative weights to convert the code vector into an approximate reconstruction of the input vector. We derive an objective function for training autoencoders based on the Minimum Description Length (MDL) principle. The aim is to minimize the information required to describe both the code vector and the reconstruction error. We show that this information is minimized by choosing code vectors stochastically according to a Boltzmann distribution, where the generative weights define the energy of each possible code vector given the input vector. Unfortunately, if the code vectors use distributed representations, it is exponentially expensive to compute this Boltzmann distribution because it involves all possible code vectors. We show that the recognition weights of an autoencoder can be used to compute an approximation to the Boltzmann distribution and that this approximation gives an upper bound on the description length. Even when this bound is poor, it can be used as a Lyapunov function for learning both the generative and the recognition weights. We demonstrate that this approach can be used to learn factorial codes.",
"Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite.",
"Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do. We introduce beta-VAE, a new state-of-the-art framework for automated discovery of interpretable factorised latent representations from raw image data in a completely unsupervised manner. Our approach is a modification of the variational autoencoder (VAE) framework. We introduce an adjustable hyperparameter beta that balances latent channel capacity and independence constraints with reconstruction accuracy. We demonstrate that beta-VAE with appropriately tuned beta > 1 qualitatively outperforms VAE (beta = 1), as well as state of the art unsupervised (InfoGAN) and semi-supervised (DC-IGN) approaches to disentangled factor learning on a variety of datasets (celebA, faces and chairs). Furthermore, we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models, and show that our approach also significantly outperforms all baselines quantitatively. Unlike InfoGAN, beta-VAE is stable to train, makes few assumptions about the data and relies on tuning a single hyperparameter, which can be directly optimised through a hyper parameter search using weakly labelled data or through heuristic visual inspection for purely unsupervised data.",
"We present in this paper a novel approach for training deterministic auto-encoders. We show that by adding a well chosen penalty term to the classical reconstruction cost function, we can achieve results that equal or surpass those attained by other regularized auto-encoders as well as denoising auto-encoders on a range of datasets. This penalty term corresponds to the Frobenius norm of the Jacobian matrix of the encoder activations with respect to the input. We show that this penalty term results in a localized space contraction which in turn yields robust features on the activation layer. Furthermore, we show how this penalty term is related to both regularized auto-encoders and denoising auto-encoders and how it can be seen as a link between deterministic and non-deterministic auto-encoders. We find empirically that this penalty helps to carve a representation that better captures the local directions of variation dictated by the data, corresponding to a lower-dimensional non-linear manifold, while being more invariant to the vast majority of directions orthogonal to the manifold. Finally, we show that by using the learned features to initialize a MLP, we achieve state of the art classification error on a range of datasets, surpassing other methods of pretraining."
]
} |
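The VAE and beta-VAE objectives summarized above share the same two-term structure: a reconstruction term plus a KL divergence between the approximate posterior and the prior, weighted by beta. A minimal numeric sketch under the usual diagonal-Gaussian assumptions (function and variable names are illustrative):

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=1.0):
    """Negative ELBO for a Gaussian encoder q(z|x) = N(mu, diag(exp(log_var)))
    against a standard-normal prior. beta = 1 recovers the plain VAE;
    beta > 1 trades reconstruction accuracy for disentanglement (beta-VAE)."""
    # Reconstruction term: squared error (a Gaussian likelihood up to constants).
    recon = np.mean(np.sum((x - x_recon) ** 2, axis=1))
    # KL(q(z|x) || N(0, I)) in closed form for diagonal Gaussians.
    kl = np.mean(0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=1))
    return recon + beta * kl

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
mu = np.zeros((4, 2)); log_var = np.zeros((4, 2))
# With a perfect reconstruction and q(z|x) equal to the prior, the loss is zero.
print(beta_vae_loss(x, x, mu, log_var, beta=4.0))  # -> 0.0
```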
1903.10863 | 2924479113 | The learning of Transformation-Equivariant Representations (TERs), which is introduced by hinton2011transforming , has been considered as a principle to reveal visual structures under various transformations. It contains the celebrated Convolutional Neural Networks (CNNs) as a special case that only equivary to the translations. In contrast, we seek to train TERs for a generic class of transformations and train them in an unsupervised fashion. To this end, we present a novel principled method by Autoencoding Variational Transformations (AVT), compared with the conventional approach to autoencoding data. Formally, given transformed images, the AVT seeks to train the networks by maximizing the mutual information between the transformations and representations. This ensures the resultant TERs of individual images contain the intrinsic information about their visual structures that would equivary extricably under various transformations. Technically, we show that the resultant optimization problem can be efficiently solved by maximizing a variational lower-bound of the mutual information. This variational approach introduces a transformation decoder to approximate the intractable posterior of transformations, resulting in an autoencoding architecture with a pair of the representation encoder and the transformation decoder. Experiments demonstrate the proposed AVT model sets a new record for the performances on unsupervised tasks, greatly closing the performance gap to the supervised models. | Meanwhile, Generative Adversarial Nets (GANs) have also been used to train unsupervised representations in the literature. In contrast to auto-encoders, a GAN model generates data from noise drawn from a simple distribution, with a discriminator trained adversarially to distinguish between real and fake data. 
The sampled noise can be viewed as the representation of the generated data over a manifold, and one can train an encoder by inverting the generator to find the generating noise. This can be implemented by jointly training a pair of mutually inverse generator and encoder @cite_20 @cite_5 . There also exist GANs that generalize better to producing unseen data, based on a Lipschitz assumption on the real data distribution @cite_17 @cite_27 , which can give rise to more powerful representations of data beyond the training examples @cite_20 @cite_5 @cite_21 . Compared with auto-encoders, GANs do not rely on learning a one-to-one reconstruction of the data; instead, they aim to generate the entire data distribution. | {
"cite_N": [
"@cite_21",
"@cite_27",
"@cite_5",
"@cite_20",
"@cite_17"
],
"mid": [
"2894573160",
"",
"2411541852",
"2412320034",
"2580360036"
],
"abstract": [
"The classic Generative Adversarial Net and its variants can be roughly categorized into two large families: the unregularized versus regularized GANs. By relaxing the non-parametric assumption on the discriminator in the classic GAN, the regularized GANs have better generalization ability to produce new samples drawn from the real distribution. It is well known that the real data like natural images are not uniformly distributed over the whole data space. Instead, they are often restricted to a low-dimensional manifold of the ambient space. Such a manifold assumption suggests the distance over the manifold should be a better measure to characterize the distinct between real and fake samples. Thus, we define a pullback operator to map samples back to their data manifold, and a manifold margin is defined as the distance between the pullback representations to distinguish between real and fake samples and learn the optimal generators. We justify the effectiveness of the proposed model both theoretically and empirically.",
"",
"We introduce the adversarially learned inference (ALI) model, which jointly learns a generation network and an inference network using an adversarial process. The generation network maps samples from stochastic latent variables to the data space while the inference network maps training examples in data space to the space of latent variables. An adversarial game is cast between these two networks and a discriminative network is trained to distinguish between joint latent data-space samples from the generative network and joint samples from the inference network. We illustrate the ability of the model to learn mutually coherent inference and generation networks through the inspections of model samples and reconstructions and confirm the usefulness of the learned representations by obtaining a performance competitive with state-of-the-art on the semi-supervised SVHN and CIFAR10 tasks.",
"The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing that the latent space of such generators captures semantic variation in the data distribution. Intuitively, models trained to predict these semantic latent representations given data may serve as useful feature representations for auxiliary problems where semantics are relevant. However, in their existing form, GANs have no means of learning the inverse mapping -- projecting data back into the latent space. We propose Bidirectional Generative Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and demonstrate that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.",
"In this paper, we present the Lipschitz regularization theory and algorithms for a novel Loss-Sensitive Generative Adversarial Network (LS-GAN). Specifically, it trains a loss function to distinguish between real and fake samples by designated margins, while learning a generator alternately to produce realistic samples by minimizing their losses. The LS-GAN further regularizes its loss function with a Lipschitz regularity condition on the density of real data, yielding a regularized model that can better generalize to produce new data from a reasonable number of training examples than the classic GAN. We will further present a Generalized LS-GAN (GLS-GAN) and show it contains a large family of regularized GAN models, including both LS-GAN and Wasserstein GAN, as its special cases. Compared with the other GAN models, we will conduct experiments to show both LS-GAN and GLS-GAN exhibit competitive ability in generating new images in terms of the Minimum Reconstruction Error (MRE) assessed on a separate test set. We further extend the LS-GAN to a conditional form for supervised and semi-supervised learning problems, and demonstrate its outstanding performance on image classification tasks."
]
} |
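The adversarial game described above is usually written as a minimax value function that the discriminator maximizes and the generator minimizes. A small numeric sketch of the classic objective (the values and names are illustrative):

```python
import numpy as np

def gan_value(d_real, d_fake):
    """Classic GAN minimax value V(D, G): the discriminator maximizes this,
    while the generator minimizes the second term. d_real / d_fake are the
    discriminator's probabilities on real and generated samples."""
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# At the theoretical equilibrium D outputs 1/2 everywhere, giving -log 4.
d = np.full(8, 0.5)
print(gan_value(d, d))  # -> about -1.386 (= -log 4)
```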
1903.10863 | 2924479113 | The learning of Transformation-Equivariant Representations (TERs), which is introduced by hinton2011transforming , has been considered as a principle to reveal visual structures under various transformations. It contains the celebrated Convolutional Neural Networks (CNNs) as a special case that only equivary to the translations. In contrast, we seek to train TERs for a generic class of transformations and train them in an unsupervised fashion. To this end, we present a novel principled method by Autoencoding Variational Transformations (AVT), compared with the conventional approach to autoencoding data. Formally, given transformed images, the AVT seeks to train the networks by maximizing the mutual information between the transformations and representations. This ensures the resultant TERs of individual images contain the intrinsic information about their visual structures that would equivary extricably under various transformations. Technically, we show that the resultant optimization problem can be efficiently solved by maximizing a variational lower-bound of the mutual information. This variational approach introduces a transformation decoder to approximate the intractable posterior of transformations, resulting in an autoencoding architecture with a pair of the representation encoder and the transformation decoder. Experiments demonstrate the proposed AVT model sets a new record for the performances on unsupervised tasks, greatly closing the performance gap to the supervised models. | Self-Supervisory Signals. There exist many other unsupervised learning methods using different types of self-supervised signals to train deep networks. Noroozi and Favaro @cite_8 propose solving Jigsaw puzzles to train a convolutional neural network. @cite_16 train the network by predicting the relative positions of sampled patches from an image as self-supervised information. 
Instead, @cite_4 count features that satisfy equivalence relations between downsampled and tiled images, while @cite_6 classify a discrete set of image rotations to train deep networks. @cite_35 create a set of surrogate classes by applying various transformations to individual images. However, the resultant features could over-discriminate visually similar images, since they always belong to different surrogate classes. Unsupervised features have also been learned from videos by estimating the egomotion of the moving agent between consecutive frames @cite_9 . | {
"cite_N": [
"@cite_35",
"@cite_4",
"@cite_8",
"@cite_9",
"@cite_6",
"@cite_16"
],
"mid": [
"2148349024",
"2750549109",
"2321533354",
"1520997877",
"2785325870",
"343636949"
],
"abstract": [
"Current methods for training convolutional neural networks depend on large amounts of labeled samples for supervised training. In this paper we present an approach for training a convolutional neural network using only unlabeled data. We train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. We find that this simple feature learning algorithm is surprisingly successful when applied to visual object recognition. The feature representation learned by our algorithm achieves classification results matching or outperforming the current state-of-the-art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101).",
"We introduce a novel method for representation learning that uses an artificial supervision signal based on counting visual primitives. This supervision signal is obtained from an equivariance relation, which does not require any manual annotation. We relate transformations of images to transformations of the representations. More specifically, we look for the representation that satisfies such relation rather than the transformations that match a given representation. In this paper, we use two image transformations in the context of counting: scaling and tiling. The first transformation exploits the fact that the number of visual primitives should be invariant to scale. The second transformation allows us to equate the total number of visual primitives in each tile to that in the whole image. These two transformations are combined in one constraint and used to train a neural network with a contrastive loss. The proposed task produces representations that perform on par or exceed the state of the art in transfer learning benchmarks.",
"We propose a novel unsupervised learning approach to build features suitable for object detection and classification. The features are pre-trained on a large dataset without human annotation and later transferred via fine-tuning on a different, smaller and labeled dataset. The pre-training consists of solving jigsaw puzzles of natural images. To facilitate the transfer of features to other tasks, we introduce the context-free network (CFN), a siamese-ennead convolutional neural network. The features correspond to the columns of the CFN and they process image tiles independently (i.e., free of context). The later layers of the CFN then use the features to identify their geometric arrangement. Our experimental evaluations show that the learned features capture semantically relevant content. We pre-train the CFN on the training set of the ILSVRC2012 dataset and transfer the features on the combined training and validation set of Pascal VOC 2007 for object detection (via fast RCNN) and classification. These features outperform all current unsupervised features with (51.8 , ) for detection and (68.6 , ) for classification, and reduce the gap with supervised learning ( (56.5 , ) and (78.2 , ) respectively).",
"The current dominant paradigm for feature learning in computer vision relies on training neural networks for the task of object recognition using millions of hand labelled images. Is it also possible to learn features for a diverse set of visual tasks using any other form of supervision? In biology, living organisms developed the ability of visual perception for the purpose of moving and acting in the world. Drawing inspiration from this observation, in this work we investigated if the awareness of egomotion(i.e. self motion) can be used as a supervisory signal for feature learning. As opposed to the knowledge of class labels, information about egomotion is freely available to mobile agents. We found that using the same number of training images, features learnt using egomotion as supervision compare favourably to features learnt using class-label as supervision on the tasks of scene recognition, object recognition, visual odometry and keypoint matching.",
"Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in the PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4 that is only 2.4 points lower than the supervised case. We get similarly striking results when we transfer our unsupervised learned features to various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification. The code and models of our paper will be published on: this https URL .",
"This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. We demonstrate that the feature representation learned using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework [19] and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations."
]
} |
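The rotation-based self-supervision of @cite_6 mentioned above derives free labels from the transformation itself: each image is rotated by one of four angles and the network is trained to classify which one. A minimal sketch of the task construction (names are illustrative):

```python
import numpy as np

def rotation_task(image):
    """Build the self-supervised rotation-prediction task: each image yields
    four inputs (0/90/180/270 degrees) and the rotation index k as a free
    pseudo-label. A network is then trained to classify k."""
    inputs = [np.rot90(image, k) for k in range(4)]
    labels = list(range(4))
    return inputs, labels

img = np.arange(16).reshape(4, 4)
xs, ys = rotation_task(img)
print(len(xs), ys)  # -> 4 [0, 1, 2, 3]
```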
1903.10623 | 2923136150 | This paper presents the mathematical modeling, controller design, and flight-testing of an over-actuated Vertical Take-off and Landing (VTOL) tiltwing Unmanned Aerial Vehicle (UAV). Based on simplified aerodynamics and first-principles, a dynamical model of the UAV is developed which captures key aerodynamic effects including propeller slipstream on the wing and post-stall characteristics of the airfoils. The model-based steady-state flight envelope and the corresponding trim-actuation is analyzed and the overactuation of the UAV solved by optimizing for, e.g., power-optimal trims. The developed control system is composed of two controllers: First, a low-level attitude controller based on dynamic inversion and a daisy-chaining approach to handle allocation of redundant actuators. Secondly, a higher-level cruise controller to track a desired vertical velocity. It is based on a linearization of the system and look-up tables to determine the strong and nonlinear variation of the trims throughout the flight-envelope. We demonstrate the performance of the control-system for all flight phases (hover, transition, cruise) in extensive flight-tests. | Existing work on design, modeling and control of tiltwing UAVs considers tandem-wing @cite_5 @cite_7 @cite_8 and single-wing vehicles @cite_6 @cite_19 @cite_3 . Employed control systems are either unified @cite_19 @cite_3 or switch between different controllers for hover, transition, and cruise @cite_17 . For both attitude- and velocity control, decoupled PID and full state feedback LQR-architectures are reported @cite_7 @cite_13 @cite_3 . They are typically combined with local linearizations and gain-scheduling to address the strong non-linearities. Examples of @math -based attitude- and cruise control are found as well @cite_10 @cite_5 . A popular non-linear control technique involves DI @cite_4 and is used both for tailsitters @cite_18 @cite_24 and tandem TWV @cite_5 . 
It enables reference-model following but requires an accurate model to estimate state-dependent moments and forces. High-fidelity models are required to address the complex transition phase and typically consider the prominent propeller slipstream interaction with the wing @cite_13 . Instead of modeling, @cite_3 describes a control system that is based exclusively on state- and control derivatives obtained from wind-tunnel testing, while @cite_24 and @cite_18 introduce lumped-parameter models to fit experimental data for a flying-wing tailsitter. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_10",
"@cite_6",
"@cite_3",
"@cite_24",
"@cite_19",
"@cite_5",
"@cite_13",
"@cite_17"
],
"mid": [
"2738968852",
"",
"2787047823",
"2325088837",
"2158337616",
"2510380710",
"2585428954",
"2413000173",
"2099923336",
"",
"",
"2948403564"
],
"abstract": [
"We present a global controller for tracking nominal trajectories with a flying wing tailsitter vehicle. The control strategy is based on a first-principles model of the vehicle dynamics that captures all relevant aerodynamic effects, and we apply an onboard parameter learning scheme in order to estimate unknown aerodynamic parameters. A cascaded control architecture is used: Based on position and velocity errors an outer control loop computes a desired attitude keeping the vehicle in coordinated flight, while an inner control loop tracks the desired attitude using a lookup table with precomputed optimal attitude trajectories. The proposed algorithms can be implemented on a typical microcontroller and the performance is demonstrated in various experiments.",
"",
"",
"",
"This paper describes the development of robust, multi-variable H∞ control systems for the conversion of the High-Speed Autonomous Rotorcraft Vehicle (HARVee), an experimental tilt-wing aircraft. Tilt-wing rotorcraft combine the high-speed cruise capabilities of a conventional airplane with the hovering capabilities of a helicopter by rotating their wings at the fuselage. Changing between cruise and hover flight modes in mid-air is referred to as the conversion process, or simply conversion. A nonlinear aerodynamic model was previously developed that captures the unique dynamics of the tilt-wing aircraft. An H∞ design methodology was used to develop cruise and hover control systems because it directly addresses multi-variable and robust design issues. The development of these control systems was governed not only by performance specifications at each particular operating point, but also by the unique requirements of a gain-scheduled conversion control system. The cruise and hover control designs form the basis for the conversion control system. The performance of the resulting conversion closed-loop systems is analyzed in the frequency and time domains. A tilt-wing rotorcraft Modeling, Simulation, Animation, and Real-Time Control (MoSART) software environment provides 3D visualization of the vehicle’s dynamics. The environment is useful for conceptualizing the natural rotorcraft dynamics and for gaining an intuitive understanding of the closed-loop system performance.",
"In this article a Tilt-Wing Unmanned Aerial Vehicle (TW-UAV) and the preliminary evaluation of its hovering characteristics in extended simulation studies are presented. In the beginning, an overview of the TW-UAV's design properties are established, highlighting the novelties of the proposed structure and the overall merits. The TW-UAV's design and structural properties are mathematically modeled and utilized for the synthesis of a cascaded P-PI and PID based control structure for the regulation of its hovering performance. In addition, extensive simulation trials are performed in order to evaluate the structure's efficiency in controlling the TW-UAV's attitude and position under various noise and disturbance scenarios.",
"This paper presents an approach for velocity control of tilt-wing aircraft over their entire flight envelope, ranging from hovering flight to wing-borne flight. With their capability of vertical takeoff and landing operation in combination with efficient cruise flight, tilt-wing aircraft offer multiple benefits in unmanned aerial vehicle applications. Control of tilt-wing aircraft, in particular for fully automated flight, is challenging because along with their versatility comes significant variations in flight mechanics characteristics. Known approaches to this problem subdivide tilt-wing flight into discrete aircraft configurations. The presented control concept omits discrete configurations and instead allows for continuous flight state transitions over a unified flight configuration space. Actuation that is unique to tilt-wing aircraft, for example, tilt angle control, is not only used as an aircraft configuration parameter but also as a full-fledged motion control device. The concept includes a map-...",
"This paper addresses the challenges of the design, development and control of a new convertible VTOL tailsitter unmanned aerial vehicle that combines the advantages of both fixed wing and rotary wing systems. Wind tunnel measurements are used to get an understanding of the control allocation and to model the static forces and moments acting on the system. Based on the derived model, a novel controller that operates in SO(3) and handles the dynamics of the vehicle at any attitude configuration, including the rotorcraft and fixed-wing regimes as well as their transitions, is presented. This unified controller allows the autonomous transition of the system without discontinuities of switching, as well as its overall high performance flight control. The capabilities and flying qualities of the platform and the controller are demonstrated and evaluated by means of extensive experimental studies.",
"This paper describes the development and analysis of gain-scheduled, multi-variable H∞ control law for the conversion of a linear parameter varying (LPV) model of a high-speed autonomous rotorcraft vehicle (HARVee), an experimental tilt-wing aircraft. Tilt-wing aircraft combine the high-speed cruise capabilities of a conventional airplane with the vertical takeoff and station keeping abilities of a helicopter by rotating their wings at the fuselage. Changing between cruise and hover flight modes in mid-air is referred to as the conversion process, or simply conversion. A nonlinear aerodynamic model was previously developed that captures the unique dynamics of the tilt-wing aircraft. An H∞ design methodology was used to develop linear controllers along various operating points of a conversion trajectory. The development of these control systems was governed not only by performance specifications at each particular operating point, but also by the unique requirements of a gain-scheduled conversion control system. The performance of the resulting conversion closed-loop systems is analyzed in the frequency and time domains. Performance robustness with respect to variation in the location of the center of gravity (cg) has been studied.",
"",
"",
""
]
} |
1903.10735 | 2924723865 | Modern large-scale automation systems integrate thousands to hundreds of thousands of physical sensors and actuators. Demands for more flexible reconfiguration of production systems and optimization across different information models, standards and legacy systems challenge current system interoperability concepts. Automatic semantic translation across information models and standards is an increasingly important problem that needs to be addressed to fulfill these demands in a cost-efficient manner under constraints of human capacity and resources in relation to timing requirements and system complexity. Here we define a translator-based operational interoperability model for interacting cyber-physical systems in mathematical terms, which includes system identification and ontology-based translation as special cases. We present alternative mathematical definitions of the translator learning task and mappings to similar machine learning tasks and solutions based on recent developments in machine learning. Possibilities to learn translators between artefacts without a common physical context, for example in simulations of digital twins and across layers of the automation pyramid, are briefly discussed. | The development of more potent interoperability methods and technologies is of central importance for modern SOA, like the aforementioned Arrowhead Framework. For example, ontology-based XML-message translation has been extended with semantic annotations @cite_30 , see also former work in @cite_4 . That translator can map elements, perform unit conversion, detect missing data and, in certain cases, find and add the missing data. Another example is the architecture for device management using autonomic computing @cite_32 , where a manager monitors and plans execution using ontologies and a reasoning engine. | {
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_32"
],
"mid": [
"2771342381",
"84638386",
"2809852038"
],
"abstract": [
"In order to create distributed automation systems, it is required to ensure their interoperability; however, ensuring interoperability between heterogeneous systems (using different communication protocols, data formats, and semantics) is a challenging task. Among many interoperability challenges, this paper addresses issues concerning semantic and data interoperability, namely, it provides a contribution to support the semantic compatibility verification and the generation of translators for XML messages. Translators are generated based on XML-schemas that are annotated with a reference ontology. We base our annotations on an extension of an existing declarative annotation method. In particular, our extension explicitly addresses ambiguities of annotations, schema instance mismatches, and coverage mismatches that frequently occur on Internet-of-things message schemas. We have evaluated our approach based on a set of interaction scenarios from the domain of the Arrowhead Project. A tool prototype that supports the semantic compatibility verification and the generation of XML translators is available at http://gres.uninova.pt/tag/ .",
"Dissertation submitted for the degree of Doctor in Electrical and Computer Engineering",
"Recent advances in the Internet of Things (IoT) has kindled the possibility of a lot of smart industrial systems. With the evolution of these IoT systems in the form of size and complexity, there is a growing need for a high level of interoperability. Autonomic Computing, with the vision of equipping software systems with self-management capabilities, emerges as a potential catalyst to support interoperability. In this paper, we present an approach which exploits Autonomic Computing to facilitate the development of interoperable IoT systems at semantic level. Our approach extends state-of-the-art IoT ontologies as well as Semantic Web Technologies to fit the MAPE-K (Monitor-Analyze-Plan-Execute-Knowledge) paradigm in Autonomic Computing. By using a Smart Home Use Case, the approach is also evaluated under different performance criteria."
]
} |
1903.10872 | 2925285461 | In this paper, we investigate relay selection for cooperative multiple-antenna systems that are equipped with buffers, which increase the reliability of wireless links. In particular, we present a novel relay selection technique based on switching and the selection of the best link, that is named Switched Max-Link. We also introduce a novel relay selection criterion based on the Maximum Likelihood (ML) principle and the Pairwise Error Probability (PEP) denoted Maximum Minimum Distance (MMD) that is incorporated into the proposed Switched Max-Link protocol. We compare the proposed MMD to the existing Quadratic Norm (QN), in terms of PEP and computational complexity. Simulations are then employed to evaluate the performance of the proposed and existing techniques. | The main idea of Max-Link is to select in each time slot the strongest link among all the available SR and RD links (i.e., among @math links) for transmission @cite_28 . For independent and identically distributed (i.i.d.) links and no delay constraints, Max-Link achieves a diversity gain of @math , which is twice the diversity gain of BRS and MMRS. Max-Link has been extended in @cite_40 to account for direct source-destination (SD) connectivity, which provides resiliency in low transmit SNR conditions @cite_28 . In @cite_12 @cite_31 @cite_4 @cite_34 , some buffer-aided relay selection protocols improve the Max-Link performance by: reducing the average packet delay, maintaining a good diversity gain, and/or achieving full diversity gain with a smaller buffer size compared to Max-Link. In summary, the previous schemes (MMRS, HRS and Max-Link) only use buffer-aided relay selection for cooperative single-antenna systems. | {
"cite_N": [
"@cite_4",
"@cite_28",
"@cite_40",
"@cite_31",
"@cite_34",
"@cite_12"
],
"mid": [
"2601956795",
"2343523109",
"1980226883",
"2520283863",
"2791940919",
"2344440752"
],
"abstract": [
"In this paper, we propose a relay selection scheme for buffer-aided cooperative relay networks. The proposed scheme exploits the channel state information and the buffer state information to minimize the outage probability. To achieve that, it constantly seeks to maintain the states of the buffers by balancing the arrival and departure rates at each relay's buffer. More specifically, the half-full buffer state is used as a reference to monitor the state of balance, where a relay's buffer is considered balanced if its arrival and departure rates are equal. In each time slot, among all links that are available for selection, the one that can enhance the balance status of the most unbalanced buffer is selected. The outage probability performances in independent and identically distributed and independent and nonidentically distributed Rayleigh fading channels are investigated. In terms of outage probability, simulation results show that the proposed scheme significantly outperforms the max-link scheme and achieves some improvement compared to the buffer-state-based scheme.",
"Relays receive and retransmit signals between one or more sources and one or more destinations. Cooperative relaying is a novel technique for wireless communications that increases throughput and extends the coverage of networks. The task of relay selection serves as a building block to realize cooperative relaying. Recently, relays with buffers have been incorporated into cooperative relaying providing extra degrees of freedom in selection, thus improving various performance metrics, such as outage probability, power reduction, and throughput, at the expense of tolerating an increase in packet delay. In this survey, we review and classify various buffer-aided relay selection policies and discuss their importance through applications. The classification is mainly based on the following aspects: 1) duplexing capabilities, 2) channel state information (CSI), 3) transmission strategies, 4) relay mode, and 5) performance metrics. Relay selection policies for enhanced physical-layer security and cognitive communications with reduced interference are also discussed. Then, a framework for modeling such algorithms is presented based on Markov Chain theory. In addition, performance evaluation is conducted for various buffer-aided relay selection algorithms. To provide a broad perspective on the role of buffer-aided relay selection, various issues relevant to fifth-generation (5G) networks are discussed. Finally, we draw conclusion and discuss current challenges, possible future directions, and emerging technologies.",
"We consider a wireless relay network that consists of a source, half-duplex decode-and-forward buffer-aided relays and a destination. While the majority of previous works on relay selection assume no direct transmission between source and destination in such a setting, we lift this assumption and propose a link selection policy that exploits both the buffering ability and the opportunity for successful reception of a packet directly from the source. The proposed relay selection scheme incorporates the instantaneous strength of the wireless links and adapts the relay selection decision based on the strongest available link. The evolution of the network as a whole is modeled by means of a Markov chain and thus, the outage probability is associated with the steady state of the Markov chain. It is deduced that even if the link between the source and the destination is in principle a very unreliable link, it is always beneficial for the source to multicast a packet to both the relay with the strongest available link and the destination.",
"In this paper, we propose novel relay selection policies that aim at reducing the average delay by incorporating the buffer size of the relay nodes into the relay selection process. More specifically, we propose two delay-aware protocols that are based on the max-link relay selection protocol. First, a delay-aware-only approach: while it reduces the delays considerably, it starves the buffers and increases the outage probability of the system. Towards this end, we propose a delay- and diversity-aware buffer-aided relay selection policy that aims at reducing the average delay considerably and at the same time maintaining good diversity. The protocols are analyzed by means of Markov Chains and expressions for the outage, throughput and delay are derived. The performance and use of our proposed algorithms are demonstrated via extensive simulations and comparisons.",
"In this paper, we propose a new relaying scheme, referred to as priority-based max-link relay selection, for buffer-aided decode-and-forward cooperative networks. We give the first priority to the relay buffers having status full, the second priority to the relay buffers having status empty, and the third priority to the relay buffers having status neither full nor empty. The best relay node is selected corresponding to the link having the highest channel gain among the links within a priority class. By adopting a Markov chain approach to analyze the state transition matrix that models the evolution of the buffers status, we derive analytical expressions for the outage probability and the average bit error rate. Analytical expressions for the steady-state probability vector are also obtained, and through these expressions, it is shown that states with the same probabilities can be grouped, thus reducing the size of the state transition matrix. We propose a general state-grouping-based method to obtain the reduced state transition matrix, which in turn reduces the computational complexity in obtaining the steady-state distribution. Our analytical and simulation results demonstrate that the proposed relaying scheme has better performance gain over the conventional max-link scheme.",
"This paper investigates the buffer-aided relay selection problem for a decode-and-forward cooperative wireless network with @math relays. We propose a new relay selection scheme that incorporates the status of the relay buffers and the instantaneous strength of the wireless links. Specifically, each link is assigned with a weight related to the buffer status; then, the best relay is selected with the largest weight among all the qualified source–relay and relay–destination links. We derive the closed-form expression for the outage probability and the diversity gain by introducing several Markov chains (MCs) to model the evolution of the buffer status. The analysis shows that the proposed scheme can achieve the optimal diversity gain @math for a small @math (i.e., @math ), an improvement in comparison with the existing max-link scheme that achieves the optimal diversity gain only when @math is sufficiently large, where @math denotes the buffer size of each relay. The provided theoretical and numerical results confirm the performance gain of the proposed relay selection scheme over the existing max-link scheme."
]
} |
1903.10601 | 2923819218 | Unsupervised domain adaptation aims to transfer knowledge from a source domain to a target domain so that the target domain data can be recognized without any explicit labelling information for this domain. One limitation of the problem setting is that testing data, despite having no labels, from the target domain is needed during training, which prevents the trained model being directly applied to classify unseen test instances. We formulate a new cross-domain classification problem arising from real-world scenarios where labelled data is available for a subset of classes (known classes) in the target domain, and we expect to recognize new samples belonging to any class (known and unseen classes) once the model is learned. This is a generalized zero-shot learning problem where the side information comes from the source domain in the form of labelled samples instead of class-level semantic representations commonly used in traditional zero-shot learning. We present a unified domain adaptation framework for both unsupervised and zero-shot learning conditions. Our approach learns a joint subspace from source and target domains so that the projections of both data in the subspace can be domain invariant and easily separable. We use the supervised locality preserving projection (SLPP) as the enabling technique and conduct experiments under both unsupervised and zero-shot learning conditions, achieving state-of-the-art results on three domain adaptation benchmark datasets: Office-Caltech, Office31 and Office-Home. | Zero-shot learning (ZSL) aims to recognize novel classes by transferring knowledge learned from known classes to unseen classes @cite_19 . ZSL has attracted much attention since it provides a promising solution to the sparse labelling issues in real world applications. 
In traditional zero-shot visual recognition tasks, the source domain data are usually of a different modality, such as human-defined class attributes or a large text corpus, and hence suffer from the semantic gap between visual and semantic representations @cite_19 . Since the domain adaptation problem under the zero-shot learning condition formulated in Section assumes that both source and target data are from the visual domain, the semantic gap issue suffered in traditional zero-shot learning tasks can be alleviated, though the domain shift still exists. Traditional ZSL approaches can only handle class-level semantic representations (e.g., attributes and word vectors) even when the source domain data come with multiple labelled examples @cite_47 . As a result, most existing ZSL methods cannot be directly applied to our proposed problem. | {
"cite_N": [
"@cite_19",
"@cite_47"
],
"mid": [
"2463762378",
"2724511873"
],
"abstract": [
"Zero-shot learning for visual recognition, e.g., object and action recognition, has recently attracted a lot of attention. However, it still remains challenging in bridging the semantic gap between visual features and their underlying semantics and transferring knowledge to semantic categories unseen during learning. Unlike most of the existing zero-shot visual recognition methods, we propose a stagewise bidirectional latent embedding framework of two subsequent learning stages for zero-shot visual recognition. In the bottom–up stage, a latent embedding space is first created by exploring the topological and labeling information underlying training data of known classes via a proper supervised subspace learning algorithm and the latent embedding of training data are used to form landmarks that guide embedding semantics underlying unseen classes into this learned latent space. In the top–down stage, semantic representations of unseen-class labels in a given label vocabulary are then embedded to the same latent space to preserve the semantic relatedness between all different classes via our proposed semi-supervised Sammon mapping with the guidance of landmarks. Thus, the resultant latent embedding space allows for predicting the label of a test instance with a simple nearest-neighbor rule. To evaluate the effectiveness of the proposed framework, we have conducted extensive experiments on four benchmark datasets in object and action recognition, i.e., AwA, CUB-200-2011, UCF101 and HMDB51. The experimental results under comparative studies demonstrate that our proposed approach yields the state-of-the-art performance under inductive and transductive settings.",
"A proper semantic representation for encoding side information is key to the success of zero-shot learning. In this paper, we explore two alternative semantic representations especially for zero-shot human action recognition: textual descriptions of human actions and deep features extracted from still images relevant to human actions. Such side information are accessible on Web with little cost, which paves a new way in gaining side information for large-scale zero-shot human action recognition. We investigate different encoding methods to generate semantic representations for human actions from such side information. Based on our zero-shot visual recognition method, we conducted experiments on UCF101 and HMDB51 to evaluate two proposed semantic representations. The results suggest that our proposed text- and image-based semantic representations outperform traditional attributes and word vectors considerably for zero-shot human action recognition. In particular, the image-based semantic representations yield the favourable performance even though the representation is extracted from a small number of images per class."
]
} |
1903.10601 | 2923819218 | Unsupervised domain adaptation aims to transfer knowledge from a source domain to a target domain so that the target domain data can be recognized without any explicit labelling information for this domain. One limitation of the problem setting is that testing data, despite having no labels, from the target domain is needed during training, which prevents the trained model being directly applied to classify unseen test instances. We formulate a new cross-domain classification problem arising from real-world scenarios where labelled data is available for a subset of classes (known classes) in the target domain, and we expect to recognize new samples belonging to any class (known and unseen classes) once the model is learned. This is a generalized zero-shot learning problem where the side information comes from the source domain in the form of labelled samples instead of class-level semantic representations commonly used in traditional zero-shot learning. We present a unified domain adaptation framework for both unsupervised and zero-shot learning conditions. Our approach learns a joint subspace from source and target domains so that the projections of both data in the subspace can be domain invariant and easily separable. We use the supervised locality preserving projection (SLPP) as the enabling technique and conduct experiments under both unsupervised and zero-shot learning conditions, achieving state-of-the-art results on three domain adaptation benchmark datasets: Office-Caltech, Office31 and Office-Home. | Domain adaptation under the zero-shot learning condition has been investigated in @cite_32 and @cite_43 . However, this work only focused on the conventional zero-shot learning @cite_45 where the test instances are restricted to be only from unseen classes. Our work aims to address the generalized zero-shot learning problem @cite_45 which arises from a more realistic situation where test instances can belong to any class (i.e. 
either known or unseen classes). | {
"cite_N": [
"@cite_43",
"@cite_45",
"@cite_32"
],
"mid": [
"2214409633",
"",
"1722318740"
],
"abstract": [
"Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias. Fine-tuning deep models in a new domain can require a significant amount of labeled data, which for many applications is simply not available. We propose a new CNN architecture to exploit unlabeled and sparsely labeled target domain data. Our approach simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks. Our proposed adaptation method offers empirical performance which exceeds previously published results on two standard benchmark visual domain adaptation tasks, evaluated across supervised and semi-supervised adaptation settings.",
"",
"Domain adaptation is an important emerging topic in computer vision. In this paper, we present one of the first studies of domain shift in the context of object recognition. We introduce a method that adapts object models acquired in a particular visual domain to new imaging conditions by learning a transformation that minimizes the effect of domain-induced changes in the feature distribution. The transformation is learned in a supervised manner and can be applied to categories for which there are no labeled examples in the new domain. While we focus our evaluation on object recognition tasks, the transform-based adaptation technique we develop is general and could be applied to nonimage data. Another contribution is a new multi-domain object database, freely available for download. We experimentally demonstrate the ability of our method to improve recognition on categories with few or no target domain labels and moderate to large changes in the imaging conditions."
]
} |
1903.10709 | 2969836179 | We propose the Autoencoding Binary Classifiers (ABC), a novel supervised anomaly detector based on the Autoencoder (AE). There are two main approaches in anomaly detection: supervised and unsupervised. The supervised approach accurately detects the known anomalies included in training data, but it cannot detect the unknown anomalies. Meanwhile, the unsupervised approach can detect both known and unknown anomalies that are located away from normal data points. However, it does not detect known anomalies as accurately as the supervised approach. Furthermore, even if we have labeled normal data points and anomalies, the unsupervised approach cannot utilize these labels. The ABC is a probabilistic binary classifier that effectively exploits the label information, where normal data points are modeled using the AE as a component. By maximizing the likelihood, the AE in the proposed ABC is trained to minimize the reconstruction error for normal data points, and to maximize it for known anomalies. Since our approach becomes able to reconstruct the normal data points accurately and fails to reconstruct the known and unknown anomalies, it can accurately discriminate both known and unknown anomalies from normal data points. Experimental results show that the ABC achieves higher detection performance than existing supervised and unsupervised methods. | If the label information is given perfectly, anomaly detection can be regarded as a binary classification problem. In this situation, supervised classifiers such as support vector machines @cite_8 , gradient tree boosting @cite_16 and feed-forward neural networks @cite_5 are usually used. However, these standard supervised classifiers cannot detect unknown anomalies accurately and do not work well in the class imbalance situations. 
There are several approaches for imbalanced data, including cost-sensitive and ensemble methods such as random undersampling boost @cite_20 , although these approaches do not aim to detect unknown anomalies. Our ABC also works well for imbalanced data and can detect unknown anomalies since it exploits the reconstruction error of the unsupervised AE. To achieve high detection performance when label information is available for part of the dataset, semi-supervised approaches @cite_9 that utilize both labeled and unlabeled data have been presented. The semi-supervised setting, which can use unlabeled data, is out of the scope of this paper. | {
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_5",
"@cite_16",
"@cite_20"
],
"mid": [
"2008056655",
"2593576259",
"1981976602",
"2070493638",
"2096945460"
],
"abstract": [
"My first exposure to Support Vector Machines came this spring when heard Sue Dumais present impressive results on text categorization using this analysis technique. This issue's collection of essays should help familiarize our readers with this interesting new racehorse in the Machine Learning stable. Bernhard Scholkopf, in an introductory overview, points out that a particular advantage of SVMs over other learning algorithms is that it can be analyzed theoretically using concepts from computational learning theory, and at the same time can achieve good performance when applied to real problems. Examples of these real-world applications are provided by Sue Dumais, who describes the aforementioned text-categorization problem, yielding the best results to date on the Reuters collection, and Edgar Osuna, who presents strong results on application to face detection. Our fourth author, John Platt, gives us a practical guide and a new technique for implementing the algorithm efficiently.",
"From only positive (P) and unlabeled (U) data, a binary classifier could be trained with PU learning, in which the state of the art is unbiased PU learning. However, if its model is very flexible, empirical risks on training data will go negative, and we will suffer from serious overfitting. In this paper, we propose a non-negative risk estimator for PU learning: when getting minimized, it is more robust against overfitting, and thus we are able to use very flexible models (such as deep neural networks) given limited P data. Moreover, we analyze the bias, consistency, and mean-squared-error reduction of the proposed risk estimator, and bound the estimation error of the resulting empirical risk minimizer. Experiments demonstrate that our risk estimator fixes the overfitting problem of its unbiased counterparts.",
"Logistic regression and artificial neural networks are the models of choice in many medical data classification tasks. In this review, we summarize the differences and similarities of these models from a technical point of view, and compare them with other machine learning algorithms. We provide considerations useful for critically assessing the quality of the models and the results based on these models. Finally, we summarize our findings on how quality criteria for logistic regression and artificial neural network models are met in a sample of papers from the medical literature.",
"Gradient boosting constructs additive regression models by sequentially fitting a simple parameterized function (base learner) to current \"pseudo'-residuals by least squares at each iteration. The pseudo-residuals are the gradient of the loss functional being minimized, with respect to the model values at each training data point evaluated at the current step. It is shown that both the approximation accuracy and execution speed of gradient boosting can be substantially improved by incorporating randomization into the procedure. Specifically, at each iteration a subsample of the training data is drawn at random (without replacement) from the full training data set. This randomly selected subsample is then used in place of the full sample to fit the base learner and compute the model update for the current iteration. This randomized approach also increases robustness against overcapacity of the base learner.",
"Class imbalance is a problem that is common to many application domains. When examples of one class in a training data set vastly outnumber examples of the other class(es), traditional data mining algorithms tend to create suboptimal classification models. Several techniques have been used to alleviate the problem of class imbalance, including data sampling and boosting. In this paper, we present a new hybrid sampling boosting algorithm, called RUSBoost, for learning from skewed training data. This algorithm provides a simpler and faster alternative to SMOTEBoost, which is another algorithm that combines boosting and data sampling. This paper evaluates the performances of RUSBoost and SMOTEBoost, as well as their individual components (random undersampling, synthetic minority oversampling technique, and AdaBoost). We conduct experiments using 15 data sets from various application domains, four base learners, and four evaluation metrics. RUSBoost and SMOTEBoost both outperform the other procedures, and RUSBoost performs comparably to (and often better than) SMOTEBoost while being a simpler and faster technique. Given these experimental results, we highly recommend RUSBoost as an attractive alternative for improving the classification performance of learners built using imbalanced data."
]
} |
1903.10360 | 2923632643 | We represent 3D shape by structured 2D representations of fixed length making it feasible to apply well investigated 2D convolutional neural networks (CNN) for both discriminative and geometric tasks on 3D shapes. We first provide a general introduction to such structured descriptors, analyze their different forms and show how a simple 2D CNN can be used to achieve good classification result. With a specialized classification network for images and our structured representation, we achieve the classification accuracy of 99.7 in the ModelNet40 test set - improving the previous state-of-the-art by a large margin. We finally provide a novel framework for performing the geometric task of 3D segmentation using 2D CNNs and the structured representation - concluding the utility of such descriptors for both discriminative and geometric tasks. | In these methods, 3D shape is represented as a binary occupancy grid in a 3D voxel grid on which 3D CNN is applied. @cite_24 uses deep 3D CNN for voxelized shapes and provides the popular classification benchmark dataset of ModelNet40 and ModelNet10. This work is quickly followed by network design that take ideas from popular 2D CNNs giving a big boost in performance over the baseline @cite_27 @cite_21 . @cite_22 @cite_11 design special CNNs optimized for the task of 3D classification. However, because of the fundamental problem of memory overhead associated with 3D networks, the input size was restricted to @math , making them the least accurate methods for both discriminative and geometric tasks. In contrast to voxel gird, we use structured 2D descriptors and use 2D CNN and perform better in both classification and segmentation. | {
"cite_N": [
"@cite_22",
"@cite_21",
"@cite_24",
"@cite_27",
"@cite_11"
],
"mid": [
"2962731536",
"2962948813",
"1920022804",
"2211722331",
"2336098239"
],
"abstract": [
"3D shape models are becoming widely available and easier to capture, making available 3D information crucial for progress in object classification. Current state-of-the-art methods rely on CNNs to address this problem. Recently, we witness two types of CNNs being developed: CNNs based upon volumetric representations versus CNNs based upon multi-view representations. Empirical results from these two types of CNNs exhibit a large gap, indicating that existing volumetric CNN architectures and approaches are unable to fully exploit the power of 3D representations. In this paper, we aim to improve both volumetric CNNs and multi-view CNNs according to extensive analysis of existing approaches. To this end, we introduce two distinct network architectures of volumetric CNNs. In addition, we examine multi-view CNNs, where we introduce multiresolution filtering in 3D. Overall, we are able to outperform current state-of-the-art methods for both volumetric CNNs and multi-view CNNs. We provide extensive experiments designed to evaluate underlying design choices, thus providing a better understanding of the space of methods available for object classification on 3D data.",
"",
"3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representation automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet - a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the-state-of-the-arts in a variety of tasks.",
"Robust object recognition is a crucial skill for robots operating autonomously in real world environments. Range sensors such as LiDAR and RGBD cameras are increasingly found in modern robotic systems, providing a rich source of 3D information that can aid in this task. However, many current systems do not fully utilize this information and have trouble efficiently dealing with large amounts of point cloud data. In this paper, we propose VoxNet, an architecture to tackle this problem by integrating a volumetric Occupancy Grid representation with a supervised 3D Convolutional Neural Network (3D CNN). We evaluate our approach on publicly available benchmarks using LiDAR, RGBD, and CAD data. VoxNet achieves accuracy beyond the state of the art while labeling hundreds of instances per second.",
"Recent work has shown good recognition results in 3D object recognition using 3D convolutional networks. In this paper, we show that the object orientation plays an important role in 3D recognition. More specifically, we argue that objects induce different features in the network under rotation. Thus, we approach the category-level classification task as a multi-task problem, in which the network is trained to predict the pose of the object in addition to the class label as a parallel task. We show that this yields significant improvements in the classification results. We test our suggested architecture on several datasets representing various 3D data sources: LiDAR data, CAD models, and RGB-D images. We report state-of-the-art results on classification as well as significant improvements in precision and speed over the baseline on 3D detection."
]
} |
1903.10360 | 2923632643 | We represent 3D shape by structured 2D representations of fixed length, making it feasible to apply well-investigated 2D convolutional neural networks (CNNs) for both discriminative and geometric tasks on 3D shapes. We first provide a general introduction to such structured descriptors, analyze their different forms and show how a simple 2D CNN can be used to achieve good classification results. With a specialized classification network for images and our structured representation, we achieve a classification accuracy of 99.7% on the ModelNet40 test set - improving the previous state-of-the-art by a large margin. We finally provide a novel framework for performing the geometric task of 3D segmentation using 2D CNNs and the structured representation - demonstrating the utility of such descriptors for both discriminative and geometric tasks. | These methods take virtual snapshots of the shape as input descriptors and perform the task of classification using a 2D CNN architecture. Their contributions are novel feature descriptors based on rendering @cite_2 @cite_17 @cite_34 and specialized network designs for the purpose of classification @cite_38 @cite_1 @cite_14 . The specialized CNN used for classification in this paper is inspired by @cite_1 , where classification and orientation estimation are jointly performed to increase the classification performance. Even though rendered images are, by definition, structured 2D descriptors, they do not provide any direct geometric information. For this reason, they are not considered part of the representation in this paper. With the same network architecture, all forms of our representation perform significantly better than the rendered images. | {
"cite_N": [
"@cite_38",
"@cite_14",
"@cite_1",
"@cite_2",
"@cite_34",
"@cite_17"
],
"mid": [
"2893477965",
"2799162093",
"2964342398",
"2608645542",
"1644641054",
"2962724911"
],
"abstract": [
"",
"3D shape recognition has attracted much attention recently. Its recent advances advocate the usage of deep features and achieve the state-of-the-art performance. However, existing deep features for 3D shape recognition are restricted to a view-to-shape setting, which learns the shape descriptor from the view-level feature directly. Despite the exciting progress on view-based 3D shape description, the intrinsic hierarchical correlation and discriminability among views have not been well exploited, which is important for 3D shape representation. To tackle this issue, in this paper, we propose a group-view convolutional neural network (GVCNN) framework for hierarchical correlation modeling towards discriminative 3D shape description. The proposed GVCNN framework is composed of a hierarchical view-group-shape architecture, i.e., from the view level, the group level and the shape level, which are organized using a grouping strategy. Concretely, we first use an expanded CNN to extract a view level descriptor. Then, a grouping module is introduced to estimate the content discrimination of each view, based on which all views can be split into different groups according to their discriminative level. A group level description can be further generated by pooling from view descriptors. Finally, all group level descriptors are combined into the shape level descriptor according to their discriminative weights. Experimental results and comparison with state-of-the-art methods show that our proposed GVCNN method can achieve a significant performance gain on both the 3D shape classification and retrieval tasks.",
"We propose a Convolutional Neural Network (CNN)-based model \"RotationNet,\" which takes multi-view images of an object as input and jointly estimates its pose and object category. Unlike previous approaches that use known viewpoint labels for training, our method treats the viewpoint labels as latent variables, which are learned in an unsupervised manner during the training using an unaligned object dataset. RotationNet is designed to use only a partial set of multi-view images for inference, and this property makes it useful in practical scenarios where only partial views are available. Moreover, our pose alignment strategy enables one to obtain view-specific feature representations shared across classes, which is important to maintain high accuracy in both object categorization and pose estimation. Effectiveness of RotationNet is demonstrated by its superior performance to the state-of-the-art methods of 3D object classification on 10- and 40-class ModelNet datasets. We also show that RotationNet, even trained without known poses, achieves the state-of-the-art performance on an object pose estimation dataset.",
"",
"A longstanding question in computer vision concerns the representation of 3D shapes for recognition: should 3D shapes be represented with descriptors operating on their native 3D formats, such as voxel grid or polygon mesh, or can they be effectively represented with view-based descriptors? We address this question in the context of learning to recognize 3D shapes from a collection of their rendered views on 2D images. We first present a standard CNN architecture trained to recognize the shapes' rendered views independently of each other, and show that a 3D shape can be recognized even from a single view at an accuracy far higher than using state-of-the-art 3D shape descriptors. Recognition rates further increase when multiple views of the shapes are provided. In addition, we present a novel CNN architecture that combines information from multiple views of a 3D shape into a single and compact shape descriptor offering even better recognition performance. The same architecture can be applied to accurately recognize human hand-drawn sketches of shapes. We conclude that a collection of 2D views can be highly informative for 3D shape recognition and is amenable to emerging CNN architectures and their derivatives.",
"A multi-view image sequence provides a much richer capacity for object recognition than from a single image. However, most existing solutions to multi-view recognition typically adopt hand-crafted, model-based geometric methods, which do not readily embrace recent trends in deep learning. We propose to bring Convolutional Neural Networks to generic multi-view recognition, by decomposing an image sequence into a set of image pairs, classifying each pair independently, and then learning an object classifier by weighting the contribution of each pair. This allows for recognition over arbitrary camera trajectories, without requiring explicit training over the potentially infinite number of camera paths and lengths. Building these pairwise relationships then naturally extends to the next-best-view problem in an active recognition framework. To achieve this, we train a second Convolutional Neural Network to map directly from an observed image to next viewpoint. Finally, we incorporate this into a trajectory optimisation task, whereby the best recognition confidence is sought for a given trajectory length. We present state-of-the-art results in both guided and unguided multi-view recognition on the ModelNet dataset, and show how our method can be used with depth images, greyscale images, or both."
]
} |
1903.10360 | 2923632643 | We represent 3D shape by structured 2D representations of fixed length, making it feasible to apply well-investigated 2D convolutional neural networks (CNNs) for both discriminative and geometric tasks on 3D shapes. We first provide a general introduction to such structured descriptors, analyze their different forms and show how a simple 2D CNN can be used to achieve good classification results. With a specialized classification network for images and our structured representation, we achieve a classification accuracy of 99.7% on the ModelNet40 test set - improving the previous state-of-the-art by a large margin. We finally provide a novel framework for performing the geometric task of 3D segmentation using 2D CNNs and the structured representation - demonstrating the utility of such descriptors for both discriminative and geometric tasks. | These methods project the point cloud collected from 3D sensors such as LIDAR onto a plane and discretize it to a 2D grid for 2D convolution, targeting 3D object detection @cite_3 @cite_36 @cite_12 . The projection on the ground plane, which is often referred to as 'Bird's Eye View', is augmented with other information and finally fed to a network designed for 3D detection. Here the 3D data is assumed to be sparse along the Z direction - across which convolution is performed. | {
"cite_N": [
"@cite_36",
"@cite_12",
"@cite_3"
],
"mid": [
"2798965597",
"2798930779",
"2555618208"
],
"abstract": [
"We address the problem of real-time 3D object detection from point clouds in the context of autonomous driving. Speed is critical as detection is a necessary component for safety. Existing approaches are, however, expensive in computation due to high dimensionality of point clouds. We utilize the 3D data more efficiently by representing the scene from the Bird's Eye View (BEV), and propose PIXOR, a proposal-free, single-stage detector that outputs oriented 3D object estimates decoded from pixel-wise neural network predictions. The input representation, network architecture, and model optimization are specially designed to balance high accuracy and real-time efficiency. We validate PIXOR on two datasets: the KITTI BEV object detection benchmark, and a large-scale 3D vehicle detection benchmark. In both datasets we show that the proposed detector surpasses other state-of-the-art methods notably in terms of Average Precision (AP), while still runs at 10 FPS.",
"In this paper we propose a novel deep neural network that is able to jointly reason about 3D detection, tracking and motion forecasting given data captured by a 3D sensor. By jointly reasoning about these tasks, our holistic approach is more robust to occlusion as well as sparse data at range. Our approach performs 3D convolutions across space and time over a bird's eye view representation of the 3D world, which is very efficient in terms of both memory and computation. Our experiments on a new very large scale dataset captured in several north american cities, show that we can outperform the state-of-the-art by a large margin. Importantly, by sharing computation we can perform all tasks in as little as 30 ms.",
"This paper aims at high-accuracy 3D object detection in autonomous driving scenario. We propose Multi-View 3D networks (MV3D), a sensory-fusion framework that takes both LIDAR point cloud and RGB images as input and predicts oriented 3D bounding boxes. We encode the sparse 3D point cloud with a compact multi-view representation. The network is composed of two subnetworks: one for 3D object proposal generation and another for multi-view feature fusion. The proposal network generates 3D candidate boxes efficiently from the birds eye view representation of 3D point cloud. We design a deep fusion scheme to combine region-wise features from multiple views and enable interactions between intermediate layers of different paths. Experiments on the challenging KITTI benchmark show that our approach outperforms the state-of-the-art by around 25 and 30 AP on the tasks of 3D localization and 3D detection. In addition, for 2D detection, our approach obtains 14.9 higher AP than the state-of-the-art on the hard data among the LIDAR-based methods."
]
} |
1903.10360 | 2923632643 | We represent 3D shape by structured 2D representations of fixed length, making it feasible to apply well-investigated 2D convolutional neural networks (CNNs) for both discriminative and geometric tasks on 3D shapes. We first provide a general introduction to such structured descriptors, analyze their different forms and show how a simple 2D CNN can be used to achieve good classification results. With a specialized classification network for images and our structured representation, we achieve a classification accuracy of 99.7% on the ModelNet40 test set - improving the previous state-of-the-art by a large margin. We finally provide a novel framework for performing the geometric task of 3D segmentation using 2D CNNs and the structured representation - demonstrating the utility of such descriptors for both discriminative and geometric tasks. | Gomez- @cite_31 represents shape by 2D 'slices' - the binary occupancy information along the cross section of the shape at a fixed height. @cite_32 , on the other hand, represents shape by height maps at multiple layers from a 2D grid. Both of them combine descriptors from different views by an MVCNN-like architecture @cite_34 for classification. | {
"cite_N": [
"@cite_31",
"@cite_34",
"@cite_32"
],
"mid": [
"",
"1644641054",
"2884345717"
],
"abstract": [
"",
"A longstanding question in computer vision concerns the representation of 3D shapes for recognition: should 3D shapes be represented with descriptors operating on their native 3D formats, such as voxel grid or polygon mesh, or can they be effectively represented with view-based descriptors? We address this question in the context of learning to recognize 3D shapes from a collection of their rendered views on 2D images. We first present a standard CNN architecture trained to recognize the shapes' rendered views independently of each other, and show that a 3D shape can be recognized even from a single view at an accuracy far higher than using state-of-the-art 3D shape descriptors. Recognition rates further increase when multiple views of the shapes are provided. In addition, we present a novel CNN architecture that combines information from multiple views of a 3D shape into a single and compact shape descriptor offering even better recognition performance. The same architecture can be applied to accurately recognize human hand-drawn sketches of shapes. We conclude that a collection of 2D views can be highly informative for 3D shape recognition and is amenable to emerging CNN architectures and their derivatives.",
"We present a novel global representation of 3D shapes, suitable for the application of 2D CNNs. We represent 3D shapes as multi-layered height-maps (MLH) where at each grid location, we store multiple instances of height maps, thereby representing 3D shape detail that is hidden behind several layers of occlusion. We provide a novel view merging method for combining view dependent information (Eg. MLH descriptors) from multiple views. Because of the ability of using 2D CNNs, our method is highly memory efficient in terms of input resolution compared to the voxel based input. Together with MLH descriptors and our multi view merging, we achieve the state-of-the-art result in classification on ModelNet dataset."
]
} |
1903.10360 | 2923632643 | We represent 3D shape by structured 2D representations of fixed length, making it feasible to apply well-investigated 2D convolutional neural networks (CNNs) for both discriminative and geometric tasks on 3D shapes. We first provide a general introduction to such structured descriptors, analyze their different forms and show how a simple 2D CNN can be used to achieve good classification results. With a specialized classification network for images and our structured representation, we achieve a classification accuracy of 99.7% on the ModelNet40 test set - improving the previous state-of-the-art by a large margin. We finally provide a novel framework for performing the geometric task of 3D segmentation using 2D CNNs and the structured representation - demonstrating the utility of such descriptors for both discriminative and geometric tasks. | Recently, there has been a serious effort to find alternative ways of applying CNNs to 3D data, such as OctNet @cite_39 and PointNet @cite_26 . OctNet uses a compact version of the voxel-based representation, where only the occupied cells are stored in an octree instead of the entire voxel grid. PointNet @cite_26 takes unstructured 3D points as input and obtains a global feature by using max pooling as a symmetric function on the output of a multi-layer perceptron applied to individual points. Our method is conceptually different, as it respects the actual spatial ordering of points in 3D space. | {
"cite_N": [
"@cite_26",
"@cite_39"
],
"mid": [
"2950642167",
"2556802233"
],
"abstract": [
"Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds and well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.",
"We present OctNet, a representation for deep learning with sparse 3D data. In contrast to existing models, our representation enables 3D convolutional networks which are both deep and high resolution. Towards this goal, we exploit the sparsity in the input data to hierarchically partition the space using a set of unbalanced octrees where each leaf node stores a pooled feature representation. This allows to focus memory allocation and computation to the relevant dense regions and enables deeper networks without compromising resolution. We demonstrate the utility of our OctNet representation by analyzing the impact of resolution on several 3D tasks including 3D object classification, orientation estimation and point cloud labeling."
]
} |
1903.10412 | 2923050609 | In this paper, we introduce the ShopSign dataset, which is a newly developed natural scene text dataset of Chinese shop signs in street views. Although a few scene text datasets are already publicly available (e.g. ICDAR2015, COCO-Text), there are few images in these datasets that contain Chinese text. Hence, we collect and annotate the ShopSign dataset to advance research in Chinese scene text detection and recognition. The new dataset has three distinctive characteristics: (1) large-scale: it contains 25,362 Chinese shop sign images, with a total number of 196,010 text-lines. (2) diversity: the images in ShopSign were captured in different scenes, from downtown to developing regions, using more than 50 different mobile phones. (3) difficulty: the dataset is very sparse and imbalanced. It also includes five categories of hard images (mirror, wooden, deformed, exposed and obscure). To illustrate the challenges in ShopSign, we run baseline experiments using state-of-the-art scene text detection methods (including CTPN, TextBoxes++ and EAST), and cross-dataset validation to compare their corresponding performance on related datasets such as CTW, RCTW and the ICPR 2018 MTWI challenge dataset. The sample images and detailed descriptions of our ShopSign dataset are publicly available at: this https URL. | The authors of @cite_11 propose to detect texts with segments and links. They first detect a number of text parts, then predict the linking relationships between neighboring parts to form text bounding boxes. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2605076167"
],
"abstract": [
"Most state-of-the-art text detection methods are specific to horizontal Latin text and are not fast enough for real-time applications. We introduce Segment Linking (SegLink), an oriented text detection method. The main idea is to decompose text into two locally detectable elements, namely segments and links. A segment is an oriented box covering a part of a word or text line, A link connects two adjacent segments, indicating that they belong to the same word or text line. Both elements are detected densely at multiple scales by an end-to-end trained, fully-convolutional neural network. Final detections are produced by combining segments connected by links. Compared with previous methods, SegLink improves along the dimensions of accuracy, speed, and ease of training. It achieves an f-measure of 75.0 on the standard ICDAR 2015 Incidental (Challenge 4) benchmark, outperforming the previous best by a large margin. It runs at over 20 FPS on 512x512 images. Moreover, without modification, SegLink is able to detect long lines of non-Latin text, such as Chinese."
]
} |
1903.10412 | 2923050609 | In this paper, we introduce the ShopSign dataset, which is a newly developed natural scene text dataset of Chinese shop signs in street views. Although a few scene text datasets are already publicly available (e.g. ICDAR2015, COCO-Text), there are few images in these datasets that contain Chinese text. Hence, we collect and annotate the ShopSign dataset to advance research in Chinese scene text detection and recognition. The new dataset has three distinctive characteristics: (1) large-scale: it contains 25,362 Chinese shop sign images, with a total number of 196,010 text-lines. (2) diversity: the images in ShopSign were captured in different scenes, from downtown to developing regions, using more than 50 different mobile phones. (3) difficulty: the dataset is very sparse and imbalanced. It also includes five categories of hard images (mirror, wooden, deformed, exposed and obscure). To illustrate the challenges in ShopSign, we run baseline experiments using state-of-the-art scene text detection methods (including CTPN, TextBoxes++ and EAST), and cross-dataset validation to compare their corresponding performance on related datasets such as CTW, RCTW and the ICPR 2018 MTWI challenge dataset. The sample images and detailed descriptions of our ShopSign dataset are publicly available at: this https URL. | CTPN @cite_9 first detects text as sequences of fine-scale proposals, then recurrently connects these sequential proposals using a BLSTM. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2519818067"
],
"abstract": [
"We propose a novel Connectionist Text Proposal Network (CTPN) that accurately localizes text lines in natural image. The CTPN detects a text line in a sequence of fine-scale text proposals directly in convolutional feature maps. We develop a vertical anchor mechanism that jointly predicts location and text non-text score of each fixed-width proposal, considerably improving localization accuracy. The sequential proposals are naturally connected by a recurrent neural network, which is seamlessly incorporated into the convolutional network, resulting in an end-to-end trainable model. This allows the CTPN to explore rich context information of image, making it powerful to detect extremely ambiguous text. The CTPN works reliably on multi-scale and multi-language text without further post-processing, departing from previous bottom-up methods requiring multi-step post filtering. It achieves 0.88 and 0.61 F-measure on the ICDAR 2013 and 2015 benchmarks, surpassing recent results [8, 35] by a large margin. The CTPN is computationally efficient with 0.14 s image, by using the very deep VGG16 model [27]. Online demo is available: http: textdet.com ."
]
} |
1903.10412 | 2923050609 | In this paper, we introduce the ShopSign dataset, which is a newly developed natural scene text dataset of Chinese shop signs in street views. Although a few scene text datasets are already publicly available (e.g. ICDAR2015, COCO-Text), there are few images in these datasets that contain Chinese text. Hence, we collect and annotate the ShopSign dataset to advance research in Chinese scene text detection and recognition. The new dataset has three distinctive characteristics: (1) large-scale: it contains 25,362 Chinese shop sign images, with a total number of 196,010 text-lines. (2) diversity: the images in ShopSign were captured in different scenes, from downtown to developing regions, using more than 50 different mobile phones. (3) difficulty: the dataset is very sparse and imbalanced. It also includes five categories of hard images (mirror, wooden, deformed, exposed and obscure). To illustrate the challenges in ShopSign, we run baseline experiments using state-of-the-art scene text detection methods (including CTPN, TextBoxes++ and EAST), and cross-dataset validation to compare their corresponding performance on related datasets such as CTW, RCTW and the ICPR 2018 MTWI challenge dataset. The sample images and detailed descriptions of our ShopSign dataset are publicly available at: this https URL. | TextBoxes @cite_0 was designed based on SSD, but it adopts long default boxes with large aspect ratios (as well as vertical offsets), because texts tend to have larger aspect ratios than general objects. Initially it only supported the detection of horizontal (or vertical) texts; later, the same authors proposed TextBoxes++ @cite_15 to support multi-oriented scene text detection. | {
"cite_N": [
"@cite_0",
"@cite_15"
],
"mid": [
"2962773189",
"2784050770"
],
"abstract": [
"This paper presents an end-to-end trainable fast scene text detector, named TextBoxes, which detects scene text with both high accuracy and efficiency in a single network forward pass, involving no post-process except for a standard non-maximum suppression. TextBoxes outperforms competing methods in terms of text localization accuracy and is much faster, taking only 0.09s per image in a fast implementation. Furthermore, combined with a text recognizer, TextBoxes significantly outperforms state-of-the-art approaches on word spotting and end-to-end text recognition tasks.",
"Scene text detection is an important step of scene text recognition system and also a challenging problem. Different from general object detections, the main challenges of scene text detection lie on arbitrary orientations, small sizes, and significantly variant aspect ratios of text in natural images. In this paper, we present an end-to-end trainable fast scene text detector, named TextBoxes++, which detects arbitrary-oriented scene text with both high accuracy and efficiency in a single network forward pass. No post-processing other than efficient non-maximum suppression is involved. We have evaluated the proposed TextBoxes++ on four public data sets. In all experiments, TextBoxes++ outperforms competing methods in terms of text localization accuracy and runtime. More specifically, TextBoxes++ achieves an f-measure of 0.817 at 11.6 frames s for 1024 × 1024 ICDAR 2015 incidental text images and an f-measure of 0.5591 at 19.8 frames s for 768 × 768 COCO-Text images. Furthermore, combined with a text recognizer, TextBoxes++ significantly outperforms the state-of-the-art approaches for word spotting and end-to-end text recognition tasks on popular benchmarks. Code is available at: https: github.com MhLiao TextBoxes_plusplus."
]
} |
1903.10412 | 2923050609 | In this paper, we introduce the ShopSign dataset, which is a newly developed natural scene text dataset of Chinese shop signs in street views. Although a few scene text datasets are already publicly available (e.g. ICDAR2015, COCO-Text), there are few images in these datasets that contain Chinese text. Hence, we collect and annotate the ShopSign dataset to advance research in Chinese scene text detection and recognition. The new dataset has three distinctive characteristics: (1) large-scale: it contains 25,362 Chinese shop sign images, with a total number of 196,010 text-lines. (2) diversity: the images in ShopSign were captured in different scenes, from downtown to developing regions, using more than 50 different mobile phones. (3) difficulty: the dataset is very sparse and imbalanced. It also includes five categories of hard images (mirror, wooden, deformed, exposed and obscure). To illustrate the challenges in ShopSign, we run baseline experiments using state-of-the-art scene text detection methods (including CTPN, TextBoxes++ and EAST), and cross-dataset validation to compare their corresponding performance on related datasets such as CTW, RCTW and the ICPR 2018 MTWI challenge dataset. The sample images and detailed descriptions of our ShopSign dataset are publicly available at: this https URL. | EAST @cite_2 is a U-shaped fully convolutional network for detecting multi-oriented texts; it uses PVANet to speed up the computation. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2605982830"
],
"abstract": [
"Previous approaches for scene text detection have already achieved promising performances across various benchmarks. However, they usually fall short when dealing with challenging scenarios, even when equipped with deep neural network models, because the overall performance is determined by the interplay of multiple stages and components in the pipelines. In this work, we propose a simple yet powerful pipeline that yields fast and accurate text detection in natural scenes. The pipeline directly predicts words or text lines of arbitrary orientations and quadrilateral shapes in full images, eliminating unnecessary intermediate steps (e.g., candidate aggregation and word partitioning), with a single neural network. The simplicity of our pipeline allows concentrating efforts on designing loss functions and neural network architecture. Experiments on standard datasets including ICDAR 2015, COCO-Text and MSRA-TD500 demonstrate that the proposed algorithm significantly outperforms state-of-the-art methods in terms of both accuracy and efficiency. On the ICDAR 2015 dataset, the proposed algorithm achieves an F-score of 0.7820 at 13.2fps at 720p resolution."
]
} |
1903.10412 | 2923050609 | In this paper, we introduce the ShopSign dataset, which is a newly developed natural scene text dataset of Chinese shop signs in street views. Although a few scene text datasets are already publicly available (e.g. ICDAR2015, COCO-Text), there are few images in these datasets that contain Chinese text characters. Hence, we collect and annotate the ShopSign dataset to advance research in Chinese scene text detection and recognition. The new dataset has three distinctive characteristics: (1) large-scale: it contains 25,362 Chinese shop sign images, with a total number of 196,010 text-lines. (2) diversity: the images in ShopSign were captured in different scenes, from downtown to developing regions, using more than 50 different mobile phones. (3) difficulty: the dataset is very sparse and imbalanced. It also includes five categories of hard images (mirror, wooden, deformed, exposed and obscure). To illustrate the challenges in ShopSign, we run baseline experiments using state-of-the-art scene text detection methods (including CTPN, TextBoxes++ and EAST), and cross-dataset validation to compare their corresponding performance on the related datasets such as CTW, RCTW and ICPR 2018 MTWI challenge dataset. The sample images and detailed descriptions of our ShopSign dataset are publicly available at: this https URL. | In @cite_8, the authors propose to use CNN and RNN to model image features and Connectionist Temporal Classification (CTC) @cite_7 to transcribe the feature sequences into texts. In @cite_1, scene text is recognized via an attention-based sequence-to-sequence model. | {
"cite_N": [
"@cite_1",
"@cite_7",
"@cite_8"
],
"mid": [
"2963517393",
"2127141656",
"2194187530"
],
"abstract": [
"Recognizing text in natural images is a challenging task with many unsolved problems. Different from those in documents, words in natural images often possess irregular shapes, which are caused by perspective distortion, curved character placement, etc. We propose RARE (Robust text recognizer with Automatic REctification), a recognition model that is robust to irregular text. RARE is a specially-designed deep neural network, which consists of a Spatial Transformer Network (STN) and a Sequence Recognition Network (SRN). In testing, an image is firstly rectified via a predicted Thin-Plate-Spline (TPS) transformation, into a more \"readable\" image for the following SRN, which recognizes text through a sequence recognition approach. We show that the model is able to recognize several types of irregular text, including perspective text and curved text. RARE is end-to-end trainable, requiring only images and associated text labels, making it convenient to train and deploy the model in practical systems. State-of-the-art or highly-competitive performance achieved on several benchmarks well demonstrates the effectiveness of the proposed model.",
"Many real-world sequence learning tasks require the prediction of sequences of labels from noisy, unsegmented input data. In speech recognition, for example, an acoustic signal is transcribed into words or sub-word units. Recurrent neural networks (RNNs) are powerful sequence learners that would seem well suited to such tasks. However, because they require pre-segmented training data, and post-processing to transform their outputs into label sequences, their applicability has so far been limited. This paper presents a novel method for training RNNs to label unsegmented sequences directly, thereby solving both problems. An experiment on the TIMIT speech corpus demonstrates its advantages over both a baseline HMM and a hybrid HMM-RNN.",
"Image-based sequence recognition has been a long-standing research topic in computer vision. In this paper, we investigate the problem of scene text recognition, which is among the most important and challenging tasks in image-based sequence recognition. A novel neural network architecture, which integrates feature extraction, sequence modeling and transcription into a unified framework, is proposed. Compared with previous systems for scene text recognition, the proposed architecture possesses four distinctive properties: (1) It is end-to-end trainable, in contrast to most of the existing algorithms whose components are separately trained and tuned. (2) It naturally handles sequences in arbitrary lengths, involving no character segmentation or horizontal scale normalization. (3) It is not confined to any predefined lexicon and achieves remarkable performances in both lexicon-free and lexicon-based scene text recognition tasks. (4) It generates an effective yet much smaller model, which is more practical for real-world application scenarios. The experiments on standard benchmarks, including the IIIT-5K, Street View Text and ICDAR datasets, demonstrate the superiority of the proposed algorithm over the prior arts. Moreover, the proposed algorithm performs well in the task of image-based music score recognition, which evidently verifies the generality of it."
]
} |
1903.10412 | 2923050609 | In this paper, we introduce the ShopSign dataset, which is a newly developed natural scene text dataset of Chinese shop signs in street views. Although a few scene text datasets are already publicly available (e.g. ICDAR2015, COCO-Text), there are few images in these datasets that contain Chinese text characters. Hence, we collect and annotate the ShopSign dataset to advance research in Chinese scene text detection and recognition. The new dataset has three distinctive characteristics: (1) large-scale: it contains 25,362 Chinese shop sign images, with a total number of 196,010 text-lines. (2) diversity: the images in ShopSign were captured in different scenes, from downtown to developing regions, using more than 50 different mobile phones. (3) difficulty: the dataset is very sparse and imbalanced. It also includes five categories of hard images (mirror, wooden, deformed, exposed and obscure). To illustrate the challenges in ShopSign, we run baseline experiments using state-of-the-art scene text detection methods (including CTPN, TextBoxes++ and EAST), and cross-dataset validation to compare their corresponding performance on the related datasets such as CTW, RCTW and ICPR 2018 MTWI challenge dataset. The sample images and detailed descriptions of our ShopSign dataset are publicly available at: this https URL. | In @cite_10, the authors propose the sliding convolutional character model, in which a sliding window is used to transform a text-line image into sequential character-size crops. Then, for each crop (of character size, e.g. 32×40), they extract deep features using convolutional neural networks and make predictions. These outputs from the sequential sliding windows are finally decoded with CTC. Sliding CNN avoids the gradient vanishing/exploding problem in training RNN-LSTM based models. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2751748110"
],
"abstract": [
"Scene text recognition has attracted great interests from the computer vision and pattern recognition community in recent years. State-of-the-art methods use convolutional neural networks (CNNs), recurrent neural networks with long short-term memory (RNN-LSTM) or the combination of them. In this paper, we investigate the intrinsic characteristics of text recognition, and inspired by human cognition mechanisms in reading texts, we propose a scene text recognition method with character models on convolutional feature map. The method simultaneously detects and recognizes characters by sliding the text line image with character models, which are learned end-to-end on text line images labeled with text transcripts. The character classifier outputs on the sliding windows are normalized and decoded with Connectionist Temporal Classification (CTC) based algorithm. Compared to previous methods, our method has a number of appealing properties: (1) It avoids the difficulty of character segmentation which hinders the performance of segmentation-based recognition methods; (2) The model can be trained simply and efficiently because it avoids gradient vanishing/exploding in training RNN-LSTM based models; (3) It bases on character models trained free of lexicon, and can recognize unknown words. (4) The recognition process is highly parallel and enables fast recognition. Our experiments on several challenging English and Chinese benchmarks, including the IIIT-5K, SVT, ICDAR03/13 and TRW15 datasets, demonstrate that the proposed method yields superior or comparable performance to state-of-the-art methods while the model size is relatively small."
]
} |
1903.10412 | 2923050609 | In this paper, we introduce the ShopSign dataset, which is a newly developed natural scene text dataset of Chinese shop signs in street views. Although a few scene text datasets are already publicly available (e.g. ICDAR2015, COCO-Text), there are few images in these datasets that contain Chinese text characters. Hence, we collect and annotate the ShopSign dataset to advance research in Chinese scene text detection and recognition. The new dataset has three distinctive characteristics: (1) large-scale: it contains 25,362 Chinese shop sign images, with a total number of 196,010 text-lines. (2) diversity: the images in ShopSign were captured in different scenes, from downtown to developing regions, using more than 50 different mobile phones. (3) difficulty: the dataset is very sparse and imbalanced. It also includes five categories of hard images (mirror, wooden, deformed, exposed and obscure). To illustrate the challenges in ShopSign, we run baseline experiments using state-of-the-art scene text detection methods (including CTPN, TextBoxes++ and EAST), and cross-dataset validation to compare their corresponding performance on the related datasets such as CTW, RCTW and ICPR 2018 MTWI challenge dataset. The sample images and detailed descriptions of our ShopSign dataset are publicly available at: this https URL. | In @cite_13 @cite_12, two end-to-end methods were proposed to localize and recognize text in a unified network, but they require relatively complex training procedures. | {
"cite_N": [
"@cite_13",
"@cite_12"
],
"mid": [
"2964296749",
"2777652944"
],
"abstract": [
"In this work, we jointly address the problem of text detection and recognition in natural scene images based on convolutional recurrent neural networks. We propose a unified network that simultaneously localizes and recognizes text with a single forward pass, avoiding intermediate processes, such as image cropping, feature re-calculation, word separation, and character grouping. In contrast to existing approaches that consider text detection and recognition as two distinct tasks and tackle them one by one, the proposed framework settles these two tasks concurrently. The whole framework can be trained end-to-end, requiring only images, ground-truth bounding boxes and text labels. The convolutional features are calculated only once and shared by both detection and recognition, which saves processing time. Through multi-task training, the learned features become more informative and improves the overall performance. Our proposed method has achieved competitive performance on several benchmark datasets.",
"A method for scene text localization and recognition is proposed. The novelties include: training of both text detection and recognition in a single end-to-end pass, the structure of the recognition CNN and the geometry of its input layer that preserves the aspect of the text and adapts its resolution to the data.,,The proposed method achieves state-of-the-art accuracy in the end-to-end text recognition on two standard datasets – ICDAR 2013 and ICDAR 2015, whilst being an order of magnitude faster than competing methods - the whole pipeline runs at 10 frames per second on an NVidia K80 GPU."
]
} |
1903.10412 | 2923050609 | In this paper, we introduce the ShopSign dataset, which is a newly developed natural scene text dataset of Chinese shop signs in street views. Although a few scene text datasets are already publicly available (e.g. ICDAR2015, COCO-Text), there are few images in these datasets that contain Chinese text characters. Hence, we collect and annotate the ShopSign dataset to advance research in Chinese scene text detection and recognition. The new dataset has three distinctive characteristics: (1) large-scale: it contains 25,362 Chinese shop sign images, with a total number of 196,010 text-lines. (2) diversity: the images in ShopSign were captured in different scenes, from downtown to developing regions, using more than 50 different mobile phones. (3) difficulty: the dataset is very sparse and imbalanced. It also includes five categories of hard images (mirror, wooden, deformed, exposed and obscure). To illustrate the challenges in ShopSign, we run baseline experiments using state-of-the-art scene text detection methods (including CTPN, TextBoxes++ and EAST), and cross-dataset validation to compare their corresponding performance on the related datasets such as CTW, RCTW and ICPR 2018 MTWI challenge dataset. The sample images and detailed descriptions of our ShopSign dataset are publicly available at: this https URL. | In @cite_14, the authors design an end-to-end framework that is able to detect and recognize arbitrarily shaped (horizontal, oriented, and curved) scene texts. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2875814315"
],
"abstract": [
"Recently, models based on deep neural networks have dominated the fields of scene text detection and recognition. In this paper, we investigate the problem of scene text spotting, which aims at simultaneous text detection and recognition in natural images. An end-to-end trainable neural network model for scene text spotting is proposed. The proposed model, named as Mask TextSpotter, is inspired by the newly published work Mask R-CNN. Different from previous methods that also accomplish text spotting with end-to-end trainable deep neural networks, Mask TextSpotter takes advantage of simple and smooth end-to-end learning procedure, in which precise text detection and recognition are acquired via semantic segmentation. Moreover, it is superior to previous methods in handling text instances of irregular shapes, for example, curved text. Experiments on ICDAR2013, ICDAR2015 and Total-Text demonstrate that the proposed method achieves state-of-the-art results in both scene text detection and end-to-end text recognition tasks."
]
} |
1903.10412 | 2923050609 | In this paper, we introduce the ShopSign dataset, which is a newly developed natural scene text dataset of Chinese shop signs in street views. Although a few scene text datasets are already publicly available (e.g. ICDAR2015, COCO-Text), there are few images in these datasets that contain Chinese text characters. Hence, we collect and annotate the ShopSign dataset to advance research in Chinese scene text detection and recognition. The new dataset has three distinctive characteristics: (1) large-scale: it contains 25,362 Chinese shop sign images, with a total number of 196,010 text-lines. (2) diversity: the images in ShopSign were captured in different scenes, from downtown to developing regions, using more than 50 different mobile phones. (3) difficulty: the dataset is very sparse and imbalanced. It also includes five categories of hard images (mirror, wooden, deformed, exposed and obscure). To illustrate the challenges in ShopSign, we run baseline experiments using state-of-the-art scene text detection methods (including CTPN, TextBoxes++ and EAST), and cross-dataset validation to compare their corresponding performance on the related datasets such as CTW, RCTW and ICPR 2018 MTWI challenge dataset. The sample images and detailed descriptions of our ShopSign dataset are publicly available at: this https URL. | For English scene text detection, ICDAR2013, ICDAR2015 @cite_17 and COCO-Text @cite_16 are well-known real-world datasets, while SynthText @cite_3 is a commonly used synthetic English scene text dataset. The training sets of ICDAR2013 and ICDAR2015 are rather small, containing only 229 and 1,000 images, respectively. COCO-Text has 43,686 training images (yet the annotations of some images are not very accurate), whereas SynthText has 800,000 images with 8 million synthetic cropped image patches. | {
"cite_N": [
"@cite_16",
"@cite_3",
"@cite_17"
],
"mid": [
"2786962101",
"2343052201",
"2144554289"
],
"abstract": [
"This report presents the final results of the ICDAR 2017 Robust Reading Challenge on COCO-Text. A challenge on scene text detection and recognition based on the largest real scene text dataset currently available: the COCO-Text dataset. The competition is structured around three tasks: Text Localization, Cropped Word Recognition and End-To-End Recognition. The competition received a total of 27 submissions over the different opened tasks. This report describes the datasets and the ground truth, details the performance evaluation protocols used and presents the final results along with a brief summary of the participating methods.",
"In this paper we introduce a new method for text detection in natural images. The method comprises two contributions: First, a fast and scalable engine to generate synthetic images of text in clutter. This engine overlays synthetic text to existing background images in a natural way, accounting for the local 3D scene geometry. Second, we use the synthetic images to train a Fully-Convolutional Regression Network (FCRN) which efficiently performs text detection and bounding-box regression at all locations and multiple scales in an image. We discuss the relation of FCRN to the recently-introduced YOLO detector, as well as other end-to-end object detection systems based on deep learning. The resulting detection network significantly outperforms current methods for text detection in natural images, achieving an F-measure of 84.2% on the standard ICDAR 2013 benchmark. Furthermore, it can process 15 images per second on a GPU.",
"Results of the ICDAR 2015 Robust Reading Competition are presented. A new Challenge 4 on Incidental Scene Text has been added to the Challenges on Born-Digital Images, Focused Scene Images and Video Text. Challenge 4 is run on a newly acquired dataset of 1,670 images evaluating Text Localisation, Word Recognition and End-to-End pipelines. In addition, the dataset for Challenge 3 on Video Text has been substantially updated with more video sequences and more accurate ground truth data. Finally, tasks assessing End-to-End system performance have been introduced to all Challenges. The competition took place in the first quarter of 2015, and received a total of 44 submissions. Only the tasks newly introduced in 2015 are reported on. The datasets, the ground truth specification and the evaluation protocols are presented together with the results and a brief summary of the participating methods."
]
} |
1903.10412 | 2923050609 | In this paper, we introduce the ShopSign dataset, which is a newly developed natural scene text dataset of Chinese shop signs in street views. Although a few scene text datasets are already publicly available (e.g. ICDAR2015, COCO-Text), there are few images in these datasets that contain Chinese text characters. Hence, we collect and annotate the ShopSign dataset to advance research in Chinese scene text detection and recognition. The new dataset has three distinctive characteristics: (1) large-scale: it contains 25,362 Chinese shop sign images, with a total number of 196,010 text-lines. (2) diversity: the images in ShopSign were captured in different scenes, from downtown to developing regions, using more than 50 different mobile phones. (3) difficulty: the dataset is very sparse and imbalanced. It also includes five categories of hard images (mirror, wooden, deformed, exposed and obscure). To illustrate the challenges in ShopSign, we run baseline experiments using state-of-the-art scene text detection methods (including CTPN, TextBoxes++ and EAST), and cross-dataset validation to compare their corresponding performance on the related datasets such as CTW, RCTW and ICPR 2018 MTWI challenge dataset. The sample images and detailed descriptions of our ShopSign dataset are publicly available at: this https URL. | For Chinese scene text detection and recognition, the three datasets most related to ours are RCTW @cite_4, CTW @cite_5 and the ICPR 2018 MTWI challenge dataset @cite_6, all of which were recently released (in 2017 and 2018). | {
"cite_N": [
"@cite_5",
"@cite_4",
"@cite_6"
],
"mid": [
"2792781829",
"2964065044",
""
],
"abstract": [
"We introduce Chinese Text in the Wild, a very large dataset of Chinese text in street view images. While optical character recognition (OCR) in document images is well studied and many commercial tools are available, detection and recognition of text in natural images is still a challenging problem, especially for more complicated character sets such as Chinese text. Lack of training data has always been a problem, especially for deep learning methods which require massive training data. In this paper we provide details of a newly created dataset of Chinese text with about 1 million Chinese characters annotated by experts in over 30 thousand street view images. This is a challenging dataset with good diversity. It contains planar text, raised text, text in cities, text in rural areas, text under poor illumination, distant text, partially occluded text, etc. For each character in the dataset, the annotation includes its underlying character, its bounding box, and 6 attributes. The attributes indicate whether it has complex background, whether it is raised, whether it is handwritten or printed, etc. The large size and diversity of this dataset make it suitable for training robust neural networks for various tasks, particularly detection and recognition. We give baseline results using several state-of-the-art networks, including AlexNet, OverFeat, Google Inception and ResNet for character recognition, and YOLOv2 for character detection in images. Overall Google Inception has the best performance on recognition with 80.5% top-1 accuracy, while YOLOv2 achieves an mAP of 71.0% on detection. Dataset, source code and trained models will all be publicly available on the website.",
"Chinese is the most widely used language in the world. Algorithms that read Chinese text in natural images facilitate applications of various kinds. Despite the large potential value, datasets and competitions in the past primarily focus on English, which bares very different characteristics than Chinese. This report introduces RCTW, a new competition that focuses on Chinese text reading. The competition features a large-scale dataset with over 12,000 annotated images. Two tasks, namely text localization and end-to-end recognition, are set up. The competition took place from January 20 to May 31, 2017. 23 valid submissions were received from 19 teams. This report includes dataset description, task definitions, evaluation protocols, and results summaries and analysis. Through this competition, we call for more future research on the Chinese text reading problem.",
""
]
} |
1903.10258 | 2924888702 | In this paper, we propose a novel meta learning approach for automatic channel pruning of very deep neural networks. We first train a PruningNet, a kind of meta network, which is able to generate weight parameters for any pruned structure given the target network. We use a simple stochastic structure sampling method for training the PruningNet. Then, we apply an evolutionary procedure to search for good-performing pruned networks. The search is highly efficient because the weights are directly generated by the trained PruningNet and we do not need any finetuning. With a single PruningNet trained for the target network, we can search for various Pruned Networks under different constraints with little human participation. We have demonstrated competitive performances on MobileNet V1/V2 networks, up to 9.0%/9.9% higher ImageNet accuracy than V1/V2. Compared to the previous state-of-the-art AutoML-based pruning methods, like AMC and NetAdapt, we achieve higher or comparable accuracy under various conditions. | Network pruning is a prevalent approach for removing redundancy in DNNs. In weight pruning, individual weights are pruned to compress the model size @cite_17 @cite_42 @cite_48 @cite_9. However, weight pruning results in unstructured sparse filters, which can hardly be accelerated by general-purpose hardware. Recent works @cite_61 @cite_43 @cite_22 @cite_40 @cite_30 @cite_18 focus on channel pruning in CNNs, which removes entire weight filters instead of individual weights. Traditional channel pruning methods trim channels based on the importance of each channel, either in an iterative mode @cite_11 @cite_30 or by adding a data-driven sparsity @cite_1 @cite_45. In most traditional channel pruning methods, the compression ratio for each layer needs to be manually set based on human expertise or heuristics, which is time-consuming and prone to yield sub-optimal solutions. | {
"cite_N": [
"@cite_61",
"@cite_30",
"@cite_18",
"@cite_11",
"@cite_22",
"@cite_48",
"@cite_9",
"@cite_42",
"@cite_1",
"@cite_43",
"@cite_40",
"@cite_45",
"@cite_17"
],
"mid": [
"2495425901",
"",
"",
"2963363373",
"",
"",
"",
"",
"2963382930",
"",
"",
"2962851801",
"2114766824"
],
"abstract": [
"State-of-the-art neural networks are getting deeper and wider. While their performance increases with the increasing number of layers and neurons, it is crucial to design an efficient deep architecture in order to reduce computational and memory costs. Designing an efficient neural network, however, is labor intensive requiring many experiments, and fine-tunings. In this paper, we introduce network trimming which iteratively optimizes the network by pruning unimportant neurons based on analysis of their outputs on a large dataset. Our algorithm is inspired by an observation that the outputs of a significant portion of neurons in a large network are mostly zero, regardless of what inputs the network received. These zero activation neurons are redundant, and can be removed without affecting the overall accuracy of the network. After pruning the zero activation neurons, we retrain the network using the weights before pruning as initialization. We alternate the pruning and retraining to further reduce zero activations in a network. Our experiments on the LeNet and VGG-16 show that we can achieve high compression ratio of parameters without losing or even achieving higher accuracy than the original network.",
"",
"",
"In this paper, we introduce a new channel pruning method to accelerate very deep convolutional neural networks. Given a trained CNN model, we propose an iterative two-step algorithm to effectively prune each layer, by a LASSO regression based channel selection and least square reconstruction. We further generalize this algorithm to multi-layer and multi-branch cases. Our method reduces the accumulated error and enhances the compatibility with various architectures. Our pruned VGG-16 achieves the state-of-the-art results by 5× speed-up along with only 0.3% increase of error. More importantly, our method is able to accelerate modern networks like ResNet, Xception and suffers only 1.4%, 1.0% accuracy loss under 2× speedup respectively, which is significant.",
"",
"",
"",
"",
"Deep convolutional neural networks have liberated its extraordinary power on various tasks. However, it is still very challenging to deploy state-of-the-art models into real-world applications due to their high computational complexity. How can we design a compact and effective network without massive experiments and expert knowledge? In this paper, we propose a simple and effective framework to learn and prune deep models in an end-to-end manner. In our framework, a new type of parameter – scaling factor is first introduced to scale the outputs of specific structures, such as neurons, groups or residual blocks. Then we add sparsity regularizations on these factors, and solve this optimization problem by a modified stochastic Accelerated Proximal Gradient (APG) method. By forcing some of the factors to zero, we can safely remove the corresponding structures, thus prune the unimportant parts of a CNN. Comparing with other structure selection methods that may need thousands of trials or iterative fine-tuning, our method is trained fully end-to-end in one training pass without bells and whistles. We evaluate our method, Sparse Structure Selection with several state-of-the-art CNNs, and demonstrate very promising results with adaptive depth and width selection. Code is available at: https://github.com/huangzehao/sparse-structure-selection.",
"",
"",
"The deployment of deep convolutional neural networks (CNNs) in many real world applications is largely hindered by their high computational cost. In this paper, we propose a novel learning scheme for CNNs to simultaneously 1) reduce the model size; 2) decrease the run-time memory footprint; and 3) lower the number of computing operations, without compromising accuracy. This is achieved by enforcing channel-level sparsity in the network in a simple but effective way. Different from many existing approaches, the proposed method directly applies to modern CNN architectures, introduces minimum overhead to the training process, and requires no special software hardware accelerators for the resulting models. We call our approach network slimming, which takes wide and large networks as input models, but during training insignificant channels are automatically identified and pruned afterwards, yielding thin and compact models with comparable accuracy. We empirically demonstrate the effectiveness of our approach with several state-of-the-art CNN models, including VGGNet, ResNet and DenseNet, on various image classification datasets. For VGGNet, a multi-pass version of network slimming gives a 20× reduction in model size and a 5× reduction in computing operations.",
"We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application."
]
} |
1903.10258 | 2924888702 | In this paper, we propose a novel meta learning approach for automatic channel pruning of very deep neural networks. We first train a PruningNet, a kind of meta network, which is able to generate weight parameters for any pruned structure given the target network. We use a simple stochastic structure sampling method for training the PruningNet. Then, we apply an evolutionary procedure to search for good-performing pruned networks. The search is highly efficient because the weights are directly generated by the trained PruningNet and we do not need any finetuning. With a single PruningNet trained for the target network, we can search for various Pruned Networks under different constraints with little human participation. We have demonstrated competitive performances on MobileNet V1/V2 networks, up to 9.0%/9.9% higher ImageNet accuracy than V1/V2. Compared to the previous state-of-the-art AutoML-based pruning methods, like AMC and NetAdapt, we achieve higher or comparable accuracy under various conditions. | Recently, AutoML methods @cite_59 @cite_27 take the real-time inference latency on multiple devices into account and iteratively prune channels in different layers of a network via reinforcement learning @cite_59 or an automatic feedback loop @cite_62. Compared with traditional channel pruning methods, AutoML methods help to alleviate the manual effort of tuning the hyper-parameters in channel pruning. Our proposed MetaPruning also involves little human participation. Different from previous AutoML pruning methods, which are carried out in a layer-wise pruning and finetuning loop, our method is motivated by recent findings @cite_33, which suggest that instead of selecting "important" weights, the essence of channel pruning sometimes lies in identifying the best pruned network. From this perspective, we propose MetaPruning for directly finding the optimal pruned network structures.
Compared to previous AutoML pruning methods @cite_59 @cite_27, the MetaPruning method enjoys higher flexibility in precisely meeting the constraints and possesses the ability to prune channels in the shortcut. | {
"cite_N": [
"@cite_27",
"@cite_62",
"@cite_33",
"@cite_59"
],
"mid": [
"",
"2962861284",
"2951569836",
"2886851211"
],
"abstract": [
"",
"This work proposes an algorithm, called NetAdapt, that automatically adapts a pre-trained deep neural network to a mobile platform given a resource budget. While many existing algorithms simplify networks based on the number of MACs or weights, optimizing those indirect metrics may not necessarily reduce the direct metrics, such as latency and energy consumption. To solve this problem, NetAdapt incorporates direct metrics into its adaptation algorithm. These direct metrics are evaluated using empirical measurements, so that detailed knowledge of the platform and toolchain is not required. NetAdapt automatically and progressively simplifies a pre-trained network until the resource budget is met while maximizing the accuracy. Experiment results show that NetAdapt achieves better accuracy versus latency trade-offs on both mobile CPU and mobile GPU, compared with the state-of-the-art automated network simplification algorithms. For image classification on the ImageNet dataset, NetAdapt achieves up to a 1.7× speedup in measured inference latency with equal or higher accuracy on MobileNets (V1&V2).",
"Network pruning is widely used for reducing the heavy inference cost of deep models in low-resource settings. A typical pruning algorithm is a three-stage pipeline, i.e., training (a large model), pruning and fine-tuning. During pruning, according to a certain criterion, redundant weights are pruned and important weights are kept to best preserve the accuracy. In this work, we make several surprising observations which contradict common beliefs. For all state-of-the-art structured pruning algorithms we examined, fine-tuning a pruned model only gives comparable or worse performance than training that model with randomly initialized weights. For pruning algorithms which assume a predefined target network architecture, one can get rid of the full pipeline and directly train the target network from scratch. Our observations are consistent for multiple network architectures, datasets, and tasks, which imply that: 1) training a large, over-parameterized model is often not necessary to obtain an efficient final model, 2) learned \"important\" weights of the large model are typically not useful for the small pruned model, 3) the pruned architecture itself, rather than a set of inherited \"important\" weights, is more crucial to the efficiency in the final model, which suggests that in some cases pruning can be useful as an architecture search paradigm. Our results suggest the need for more careful baseline evaluations in future research on structured pruning methods. We also compare with the \"Lottery Ticket Hypothesis\" (Frankle & Carbin 2019), and find that with optimal learning rate, the \"winning ticket\" initialization as used in Frankle & Carbin (2019) does not bring improvement over random initialization.",
"Model compression is an effective technique to efficiently deploy neural network models on mobile devices which have limited computation resources and tight power budgets. Conventional model compression techniques rely on hand-crafted features and require domain experts to explore the large design space trading off among model size, speed, and accuracy, which is usually sub-optimal and time-consuming. In this paper, we propose AutoML for Model Compression (AMC) which leverages reinforcement learning to efficiently sample the design space and can improve the model compression quality. We achieved state-of-the-art model compression results in a fully automated way without any human efforts. Under 4× FLOPs reduction, we achieved 2.7% better accuracy than the hand-crafted model compression method for VGG-16 on ImageNet. We applied this automated, push-the-button compression pipeline to MobileNet-V1 and achieved a speedup of 1.53× on the GPU (Titan Xp) and 1.95× on an Android phone (Google Pixel 1), with negligible loss of accuracy."
]
} |
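The budget-constrained pruning loop described in the NetAdapt abstract above (progressively simplify a network until a resource budget is met) can be caricatured in a few lines of numpy; the FLOPs proxy, layer widths, and greedy widest-layer rule below are invented for illustration and are not the published algorithm:

```python
import numpy as np

def flops(channels, in_dim=3, hw=32):
    # Crude FLOPs proxy for a chain of 3x3 convs (illustrative only).
    dims = [in_dim] + list(channels)
    return sum(9 * hw * hw * dims[i] * dims[i + 1] for i in range(len(dims) - 1))

def shrink_to_budget(channels, budget, step=1):
    """Greedily remove one channel at a time from the widest layer
    until the FLOPs proxy meets the budget (a toy stand-in for the
    iterative prune-and-evaluate loops surveyed above)."""
    channels = list(channels)
    while flops(channels) > budget:
        i = int(np.argmax(channels))   # shrink the currently widest layer
        if channels[i] <= 1:
            break                      # nothing left to remove
        channels[i] -= step
    return channels

print(shrink_to_budget([32, 64, 64], budget=0.5 * flops([32, 64, 64])))
```

Real systems replace the FLOPs proxy with measured latency on the target device and re-evaluate accuracy after each shrink step.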
1903.10258 | 2924888702 | In this paper, we propose a novel meta learning approach for automatic channel pruning of very deep neural networks. We first train a PruningNet, a kind of meta network, which is able to generate weight parameters for any pruned structure given the target network. We use a simple stochastic structure sampling method for training the PruningNet. Then, we apply an evolutionary procedure to search for good-performing pruned networks. The search is highly efficient because the weights are directly generated by the trained PruningNet and we do not need any finetuning. With a single PruningNet trained for the target network, we can search for various Pruned Networks under different constraints with little human participation. We have demonstrated competitive performances on MobileNet V1/V2 networks, up to 9.0/9.9 higher ImageNet accuracy than V1/V2. Compared to the previous state-of-the-art AutoML-based pruning methods, like AMC and NetAdapt, we achieve higher or comparable accuracy under various conditions. | Meta-learning refers to learning from observing how different machine learning approaches perform on various learning tasks. Meta learning can be used in few/zero-shot learning @cite_32 @cite_2 and transfer learning @cite_8 . A comprehensive overview of meta learning is provided in @cite_21 . In this work we are inspired by @cite_50 to use meta learning for weight prediction. Weight prediction refers to the weights of a neural network being predicted by another neural network rather than directly learned @cite_50 . Recent works also apply meta learning to various tasks and achieve state-of-the-art results in detection @cite_13 , super-resolution with arbitrary magnification @cite_46 and instance segmentation @cite_16 . | {
"cite_N": [
"@cite_8",
"@cite_21",
"@cite_32",
"@cite_50",
"@cite_2",
"@cite_46",
"@cite_16",
"@cite_13"
],
"mid": [
"2519882289",
"2091118421",
"2753160622",
"",
"",
"2918405586",
"2963921921",
"2810862788"
],
"abstract": [
"We develop a conceptually simple but powerful approach that can learn novel categories from few annotated examples. In this approach, the experience with already learned categories is used to facilitate the learning of novel classes. Our insight is two-fold: (1) there exists a generic, category agnostic transformation from models learned from few samples to models learned from large enough sample sets, and (2) such a transformation could be effectively learned by high-capacity regressors. In particular, we automatically learn the transformation with a deep model regression network on a large collection of model pairs. Experiments demonstrate that encoding this transformation as prior knowledge greatly facilitates the recognition in the small sample size regime on a broad range of tasks, including domain adaptation, fine-grained recognition, action recognition, and scene classification.",
"Metalearning attracted considerable interest in the machine learning community in the last years. Yet, some disagreement remains on what does or what does not constitute a metalearning problem and in which contexts the term is used in. This survey aims at giving an all-encompassing overview of the research directions pursued under the umbrella of metalearning, reconciling different definitions given in scientific literature, listing the choices involved when designing a metalearning system and identifying some of the future research challenges in this domain.",
"Though deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a model has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity models requires many iterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network in the few-shot regime. The parametrization of our model allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner network that allows for quick convergence of training. We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning.",
"",
"",
"Recent research on super-resolution has achieved great success due to the development of deep convolutional neural networks (DCNNs). However, super-resolution of arbitrary scale factor has been ignored for a long time. Most previous researchers regard super-resolution of different scale factors as independent tasks. They train a specific model for each scale factor which is inefficient in computing, and prior work only take the super-resolution of several integer scale factors into consideration. In this work, we propose a novel method called Meta-SR to firstly solve super-resolution of arbitrary scale factor (including non-integer scale factors) with a single model. In our Meta-SR, the Meta-Upscale Module is proposed to replace the traditional upscale module. For arbitrary scale factor, the Meta-Upscale Module dynamically predicts the weights of the upscale filters by taking the scale factor as input and use these weights to generate the HR image of arbitrary size. For any low-resolution image, our Meta-SR can continuously zoom in it with arbitrary scale factor by only using a single model. We evaluated the proposed method through extensive experiments on widely used benchmark datasets on single image super-resolution. The experimental results show the superiority of our Meta-Upscale.",
"Most methods for object instance segmentation require all training examples to be labeled with segmentation masks. This requirement makes it expensive to annotate new categories and has restricted instance segmentation models to 100 well-annotated classes. The goal of this paper is to propose a new partially supervised training paradigm, together with a novel weight transfer function, that enables training instance segmentation models on a large set of categories all of which have box annotations, but only a small fraction of which have mask annotations. These contributions allow us to train Mask R-CNN to detect and segment 3000 visual concepts using box annotations from the Visual Genome dataset and mask annotations from the 80 classes in the COCO dataset. We evaluate our approach in a controlled study on the COCO dataset. This work is a first step towards instance segmentation models that have broad comprehension of the visual world.",
"We propose a novel and flexible anchor mechanism named MetaAnchor for object detection frameworks. Unlike many previous detectors, which model anchors in a predefined manner, in MetaAnchor anchor functions could be dynamically generated from the arbitrary customized prior boxes. Taking advantage of weight prediction, MetaAnchor is able to work with most of the anchor-based object detection systems such as RetinaNet. Compared with the predefined anchor scheme, we empirically find that MetaAnchor is more robust to anchor settings and bounding box distributions; in addition, it also shows the potential on the transfer task. Our experiment on COCO detection task shows MetaAnchor consistently outperforms the counterparts in various scenarios."
]
} |
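The weight-prediction idea surveyed in this row (one network generating another network's weights rather than learning them directly) can be sketched as a toy hypernetwork; the structure encoding, the sizes, and the single linear meta layer below are all assumptions for illustration, not the PruningNet from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "meta" network: one linear map from a structure encoding (here just
# the normalized output-channel count) to a flat weight vector for the
# target layer. Sizes and names are illustrative only.
max_out, in_dim = 8, 4
meta_W = rng.standard_normal((max_out * in_dim, 1)) * 0.1

def predict_weights(n_channels):
    """Generate weights for a target linear layer with n_channels outputs."""
    encoding = np.array([[n_channels / max_out]])  # structure encoding
    flat = (meta_W @ encoding).ravel()             # predicted flat weights
    return flat[: n_channels * in_dim].reshape(n_channels, in_dim)

x = rng.standard_normal(in_dim)
for c in (2, 5, 8):
    y = predict_weights(c) @ x  # forward pass with generated weights
    print(c, y.shape)
```

The point of the construction is that one trained meta network can emit weights for many differently sized target structures, which is what makes finetuning-free evaluation of candidate pruned networks possible.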
1903.10258 | 2924888702 | In this paper, we propose a novel meta learning approach for automatic channel pruning of very deep neural networks. We first train a PruningNet, a kind of meta network, which is able to generate weight parameters for any pruned structure given the target network. We use a simple stochastic structure sampling method for training the PruningNet. Then, we apply an evolutionary procedure to search for good-performing pruned networks. The search is highly efficient because the weights are directly generated by the trained PruningNet and we do not need any finetuning. With a single PruningNet trained for the target network, we can search for various Pruned Networks under different constraints with little human participation. We have demonstrated competitive performances on MobileNet V1/V2 networks, up to 9.0/9.9 higher ImageNet accuracy than V1/V2. Compared to the previous state-of-the-art AutoML-based pruning methods, like AMC and NetAdapt, we achieve higher or comparable accuracy under various conditions. | Network pruning is a prevalent approach for removing redundancy in DNNs. In weight pruning, people prune individual weights to compress the model size @cite_17 @cite_42 @cite_48 @cite_9 . However, weight pruning results in unstructured sparse filters, which can hardly be accelerated by general-purpose hardware. Recent works @cite_61 @cite_43 @cite_22 @cite_40 @cite_30 @cite_18 focus on channel pruning in the CNNs, which removes entire weight filters instead of individual weights. Traditional channel pruning methods trim channels based on the importance of each channel either in an iterative mode @cite_11 @cite_30 or by adding a data-driven sparsity @cite_1 @cite_45 . In most traditional channel pruning, the compression ratio for each layer needs to be manually set based on human expertise or heuristics, which is time-consuming and prone to being trapped in sub-optimal solutions. | {
"cite_N": [
"@cite_61",
"@cite_30",
"@cite_18",
"@cite_11",
"@cite_22",
"@cite_48",
"@cite_9",
"@cite_42",
"@cite_1",
"@cite_43",
"@cite_40",
"@cite_45",
"@cite_17"
],
"mid": [
"2495425901",
"",
"",
"2963363373",
"",
"",
"",
"",
"2963382930",
"",
"",
"2962851801",
"2114766824"
],
"abstract": [
"State-of-the-art neural networks are getting deeper and wider. While their performance increases with the increasing number of layers and neurons, it is crucial to design an efficient deep architecture in order to reduce computational and memory costs. Designing an efficient neural network, however, is labor intensive requiring many experiments, and fine-tunings. In this paper, we introduce network trimming which iteratively optimizes the network by pruning unimportant neurons based on analysis of their outputs on a large dataset. Our algorithm is inspired by an observation that the outputs of a significant portion of neurons in a large network are mostly zero, regardless of what inputs the network received. These zero activation neurons are redundant, and can be removed without affecting the overall accuracy of the network. After pruning the zero activation neurons, we retrain the network using the weights before pruning as initialization. We alternate the pruning and retraining to further reduce zero activations in a network. Our experiments on the LeNet and VGG-16 show that we can achieve high compression ratio of parameters without losing or even achieving higher accuracy than the original network.",
"",
"",
"In this paper, we introduce a new channel pruning method to accelerate very deep convolutional neural networks. Given a trained CNN model, we propose an iterative two-step algorithm to effectively prune each layer, by a LASSO regression based channel selection and least square reconstruction. We further generalize this algorithm to multi-layer and multi-branch cases. Our method reduces the accumulated error and enhance the compatibility with various architectures. Our pruned VGG-16 achieves the state-of-the-art results by 5× speed-up along with only 0.3% increase of error. More importantly, our method is able to accelerate modern networks like ResNet, Xception and suffers only 1.4%, 1.0% accuracy loss under 2× speedup respectively, which is significant.",
"",
"",
"",
"",
"Deep convolutional neural networks have liberated its extraordinary power on various tasks. However, it is still very challenging to deploy state-of-the-art models into real-world applications due to their high computational complexity. How can we design a compact and effective network without massive experiments and expert knowledge? In this paper, we propose a simple and effective framework to learn and prune deep models in an end-to-end manner. In our framework, a new type of parameter – scaling factor is first introduced to scale the outputs of specific structures, such as neurons, groups or residual blocks. Then we add sparsity regularizations on these factors, and solve this optimization problem by a modified stochastic Accelerated Proximal Gradient (APG) method. By forcing some of the factors to zero, we can safely remove the corresponding structures, thus prune the unimportant parts of a CNN. Comparing with other structure selection methods that may need thousands of trials or iterative fine-tuning, our method is trained fully end-to-end in one training pass without bells and whistles. We evaluate our method, Sparse Structure Selection with several state-of-the-art CNNs, and demonstrate very promising results with adaptive depth and width selection. Code is available at: https: github.com huangzehao sparse-structure-selection.",
"",
"",
"The deployment of deep convolutional neural networks (CNNs) in many real world applications is largely hindered by their high computational cost. In this paper, we propose a novel learning scheme for CNNs to simultaneously 1) reduce the model size; 2) decrease the run-time memory footprint; and 3) lower the number of computing operations, without compromising accuracy. This is achieved by enforcing channel-level sparsity in the network in a simple but effective way. Different from many existing approaches, the proposed method directly applies to modern CNN architectures, introduces minimum overhead to the training process, and requires no special software hardware accelerators for the resulting models. We call our approach network slimming, which takes wide and large networks as input models, but during training insignificant channels are automatically identified and pruned afterwards, yielding thin and compact models with comparable accuracy. We empirically demonstrate the effectiveness of our approach with several state-of-the-art CNN models, including VGGNet, ResNet and DenseNet, on various image classification datasets. For VGGNet, a multi-pass version of network slimming gives a 20× reduction in model size and a 5× reduction in computing operations.",
"We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and/or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application."
]
} |
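As a minimal sketch of the channel pruning surveyed in this row (removing entire weight filters instead of individual weights), the snippet below ranks a conv layer's output filters by L1 norm and keeps the top fraction; the magnitude criterion and tensor sizes are generic assumptions, not the implementation of any specific cited method:

```python
import numpy as np

def prune_channels(weights, keep_ratio=0.5):
    """Keep the output channels (whole filters) with the largest L1 norms.

    weights: conv kernel of shape (out_channels, in_channels, kh, kw).
    Returns the pruned kernel and the indices of the kept channels.
    """
    out_channels = weights.shape[0]
    n_keep = max(1, int(out_channels * keep_ratio))
    # Channel importance = L1 norm of each output filter.
    scores = np.abs(weights).reshape(out_channels, -1).sum(axis=1)
    kept = np.sort(np.argsort(scores)[-n_keep:])   # top-n_keep channels, in order
    return weights[kept], kept

w = np.random.randn(8, 3, 3, 3)
pruned, kept = prune_channels(w, keep_ratio=0.5)
print(pruned.shape)  # (4, 3, 3, 3)
```

Because whole filters are dropped, the result is a smaller dense kernel that runs on general-purpose hardware without sparse-matrix support, which is the structural advantage over weight pruning noted above.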
1903.10433 | 2914050157 | Social recommendation leverages social information to solve data sparsity and cold-start problems in traditional collaborative filtering methods. However, most existing models assume that social effects from friend users are static and under the forms of constant weights or fixed constraints. To relax this strong assumption, in this paper, we propose dual graph attention networks to collaboratively learn representations for two-fold social effects, where one is modeled by a user-specific attention weight and the other is modeled by a dynamic and context-aware attention weight. We also extend the social effects in user domain to item domain, so that information from related items can be leveraged to further alleviate the data sparsity problem. Furthermore, considering that different social effects in two domains could interact with each other and jointly influence users' preferences for items, we propose a new policy-based fusion strategy based on contextual multi-armed bandit to weigh interactions of various social effects. Experiments on one benchmark dataset and a commercial dataset verify the efficacy of the key components in our model. The results show that our model achieves great improvement for recommendation accuracy compared with other state-of-the-art social recommendation methods. | GCN and GAT, as two powerful techniques for encoding a complex graph into low-dimensional representations, have been extensively applied to various problems involving graph data since their proposals @cite_31 @cite_40 @cite_3. Using GCN and GAT to solve the semi-supervised classification problem on graphs achieves state-of-the-art performance (GAT can be seen as an extension of GCN and provides better performance according to @cite_6), and their good scalability enables them to tackle large-scale datasets.
Existing studies leverage GAT to tackle social influence analysis @cite_37, graph node classification @cite_7, conversation generation @cite_8, and relevance matching @cite_10. For recommendation, some recent studies like @cite_13 adopt GCN to convolve on the user-item network (a bipartite graph) to obtain better representations for items, and there are also works using GCN to capture social information for recommendation @cite_43 @cite_11. We are the first to use GAT for the social recommendation task, and our new architecture, dual GATs, can capture social information in both user and item networks. | {
"cite_N": [
"@cite_13",
"@cite_37",
"@cite_7",
"@cite_8",
"@cite_6",
"@cite_3",
"@cite_43",
"@cite_40",
"@cite_31",
"@cite_10",
"@cite_11"
],
"mid": [
"2807021761",
"2809583854",
"2891639995",
"2807873315",
"2766453196",
"",
"",
"",
"2964015378",
"2896140001",
"2900041539"
],
"abstract": [
"Recent advancements in deep neural networks for graph-structured data have led to state-of-the-art performance on recommender system benchmarks. However, making these methods practical and scalable to web-scale recommendation tasks with billions of items and hundreds of millions of users remains an unsolved challenge. Here we describe a large-scale deep recommendation engine that we developed and deployed at Pinterest. We develop a data-efficient Graph Convolutional Network (GCN) algorithm, which combines efficient random walks and graph convolutions to generate embeddings of nodes (i.e., items) that incorporate both graph structure as well as node feature information. Compared to prior GCN approaches, we develop a novel method based on highly efficient random walks to structure the convolutions and design a novel training strategy that relies on harder-and-harder training examples to improve robustness and convergence of the model. We also develop an efficient MapReduce model inference algorithm to generate embeddings using a trained model. Overall, we can train on and embed graphs that are four orders of magnitude larger than typical GCN implementations. We show how GCN embeddings can be used to make high-quality recommendations in various settings at Pinterest, which has a massive underlying graph with 3 billion nodes representing pins and boards, and 17 billion edges. According to offline metrics, user studies, as well as A B tests, our approach generates higher-quality recommendations than comparable deep learning based systems. To our knowledge, this is by far the largest application of deep graph embeddings to date and paves the way for a new generation of web-scale recommender systems based on graph convolutional architectures.",
"Social and information networking activities such as on Facebook, Twitter, WeChat, and Weibo have become an indispensable part of our everyday life, where we can easily access friends' behaviors and are in turn influenced by them. Consequently, an effective social influence prediction for each user is critical for a variety of applications such as online recommendation and advertising. Conventional social influence prediction approaches typically design various hand-crafted rules to extract user- and network-specific features. However, their effectiveness heavily relies on the knowledge of domain experts. As a result, it is usually difficult to generalize them into different domains. Inspired by the recent success of deep neural networks in a wide range of computing applications, we design an end-to-end framework, DeepInf, to learn users' latent feature representation for predicting social influence. In general, DeepInf takes a user's local network as the input to a graph neural network for learning her latent social representation. We design strategies to incorporate both network structures and user-specific features into convolutional neural and attention networks. Extensive experiments on Open Academic Graph, Twitter, Weibo, and Digg, representing different types of social and information networks, demonstrate that the proposed end-to-end model, DeepInf, significantly outperforms traditional feature engineering-based approaches, suggesting the effectiveness of representation learning for social applications.",
"Edge features contain important information about graphs. However, current state-of-the-art neural network models designed for graph learning do not consider incorporating edge features, especially multi-dimensional edge features. In this paper, we propose an attention mechanism which combines both node features and edge features. Guided by the edge features, the attention mechanism on a pair of graph nodes will not only depend on node contents, but also ajust automatically with respect to the properties of the edge connecting these two nodes. Moreover, the edge features are adjusted by the attention function and fed to the next layer, which means our edge features are adaptive across network layers. As a result, our proposed adaptive edge features guided graph attention model can consolidate a rich source of graph information that current state-of-the-art graph learning methods cannot. We apply our proposed model to graph node classification, and experimental results on three citaion network datasets and a biological network dataset show that out method outperforms the current state-of-the-art methods, testifying to the discriminative capability of edge features and the effectiveness of our adaptive edge features guided attention model. Additional ablation experimental study further shows that both the edge features and adaptiveness components contribute to our model.",
"",
"We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods' features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems. Our GAT models have achieved or matched state-of-the-art results across four established transductive and inductive graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as a protein-protein interaction dataset (wherein test graphs remain unseen during training).",
"",
"",
"",
"We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.",
"A large number of deep learning models have been proposed for the text matching problem, which is at the core of various typical natural language processing (NLP) tasks. However, existing deep models are mainly designed for the semantic matching between a pair of short texts, such as paraphrase identification and question answering, and do not perform well on the task of relevance matching between short-long text pairs. This is partially due to the fact that the essential characteristics of short-long text matching have not been well considered in these deep models. More specifically, these methods fail to handle extreme length discrepancy between text pieces and neither can they fully characterize the underlying structural information in long text documents. In this paper, we are especially interested in relevance matching between a piece of short text and a long document, which is critical to problems like query-document matching in information retrieval and web searching. To extract the structural information of documents, an undirected graph is constructed, with each vertex representing a keyword and the weight of an edge indicating the degree of interaction between keywords. Based on the keyword graph, we further propose a Multiresolution Graph Attention Network to learn multi-layered representations of vertices through a Graph Convolutional Network (GCN), and then match the short text snippet with the graphical representation of the document with an attention mechanism applied over each layer of the GCN. Experimental results on two datasets demonstrate that our graph approach outperforms other state-of-the-art deep matching models.",
"Collaborative Filtering (CF) is one of the most successful approaches for recommender systems. With the emergence of online social networks, social recommendation has become a popular research direction. Most of these social recommendation models utilized each user's local neighbors' preferences to alleviate the data sparsity issue in CF. However, they only considered the local neighbors of each user and neglected the process that users' preferences are influenced as information diffuses in the social network. Recently, Graph Convolutional Networks (GCN) have shown promising results by modeling the information diffusion process in graphs that leverage both graph structure and node feature information. To this end, in this paper, we propose an effective graph convolutional neural network based model for social recommendation. Based on a classical CF model, the key idea of our proposed model is that we borrow the strengths of GCNs to capture how users' preferences are influenced by the social diffusion process in social networks. The diffusion of users' preferences is built on a layer-wise diffusion manner, with the initial user embedding as a function of the current user's features and a free base user latent vector that is not contained in the user feature. Similarly, each item's latent vector is also a combination of the item's free latent vector, as well as its feature representation. Furthermore, we show that our proposed model is flexible when user and item features are not available. Finally, extensive experimental results on two real-world datasets clearly show the effectiveness of our proposed model."
]
} |
1903.10346 | 2952730822 | Adversarial examples are inputs to machine learning models designed by an adversary to cause an incorrect output. So far, adversarial examples have been studied most extensively in the image domain. In this domain, adversarial examples can be constructed by imperceptibly modifying images to cause misclassification, and are practical in the physical world. In contrast, current targeted adversarial examples applied to speech recognition systems have neither of these properties: humans can easily identify the adversarial perturbations, and they are not effective when played over-the-air. This paper makes advances on both of these fronts. First, we develop effectively imperceptible audio adversarial examples (verified through a human study) by leveraging the psychoacoustic principle of auditory masking, while retaining 100% targeted success rate on arbitrary full-sentence targets. Next, we make progress towards physical-world over-the-air audio adversarial examples by constructing perturbations which remain effective even after applying realistic simulated environmental distortions. | We build on a long line of work studying the robustness of neural networks. This research area largely began with @cite_10 @cite_4, who first studied adversarial examples for deep neural networks. | {
"cite_N": [
"@cite_10",
"@cite_4"
],
"mid": [
"9657784",
"1673923490"
],
"abstract": [
"In security-sensitive applications, the success of machine learning depends on a thorough vetting of their resistance to adversarial data. In one pertinent, well-motivated attack scenario, an adversary may attempt to evade a deployed system at test time by carefully manipulating attack samples. In this work, we present a simple but effective gradient-based approach that can be exploited to systematically assess the security of several, widely-used classification algorithms against evasion attacks. Following a recently proposed framework for security evaluation, we simulate attack scenarios that exhibit different risk levels for the classifier by increasing the attacker's knowledge of the system and her ability to manipulate attack samples. This gives the classifier designer a better picture of the classifier performance under evasion attacks, and allows him to perform a more informed model selection (or parameter setting). We evaluate our approach on the relevant security task of malware detection in PDF files, and show that such systems can be easily evaded. We also sketch some countermeasures suggested by our analysis.",
"Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains of the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extend. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input."
]
} |
1903.10346 | 2952730822 | Adversarial examples are inputs to machine learning models designed by an adversary to cause an incorrect output. So far, adversarial examples have been studied most extensively in the image domain. In this domain, adversarial examples can be constructed by imperceptibly modifying images to cause misclassification, and are practical in the physical world. In contrast, current targeted adversarial examples applied to speech recognition systems have neither of these properties: humans can easily identify the adversarial perturbations, and they are not effective when played over-the-air. This paper makes advances on both of these fronts. First, we develop effectively imperceptible audio adversarial examples (verified through a human study) by leveraging the psychoacoustic principle of auditory masking, while retaining 100% targeted success rate on arbitrary full-sentence targets. Next, we make progress towards physical-world over-the-air audio adversarial examples by constructing perturbations which remain effective even after applying realistic simulated environmental distortions. | This paper focuses on adversarial examples on automatic speech recognition systems. Early work in this space @cite_23 @cite_18 was successful when generating adversarial examples that produced incorrect, but arbitrary, transcriptions. A concurrent line of work succeeded at generating targeted attacks in practice, even when played over a speaker and recorded by a microphone (a so-called "over-the-air" attack), but only by both (a) synthesizing completely new audio and (b) targeting older, traditional (i.e., not neural network based) speech recognition systems @cite_0 @cite_13 @cite_17. | {
"cite_N": [
"@cite_18",
"@cite_0",
"@cite_23",
"@cite_13",
"@cite_17"
],
"mid": [
"2738841453",
"2486441166",
"2767951891",
"",
"2747678151"
],
"abstract": [
"Generating adversarial examples is a critical step for evaluating and improving the robustness of learning machines. So far, most existing methods only work for classification and are not designed to alter the true performance measure of the problem at hand. We introduce a novel flexible approach named Houdini for generating adversarial examples specifically tailored for the final performance measure of the task considered, be it combinatorial and non-decomposable. We successfully apply Houdini to a range of applications such as speech recognition, pose estimation and semantic segmentation. In all cases, the attacks based on Houdini achieve higher success rate than those based on the traditional surrogates used to train the models while using a less perceptible adversarial perturbation.",
"Voice interfaces are becoming more ubiquitous and are now the primary input method for many devices. We explore in this paper how they can be attacked with hidden voice commands that are unintelligible to human listeners but which are interpreted as commands by devices. We evaluate these attacks under two different threat models. In the black-box model, an attacker uses the speech recognition system as an opaque oracle. We show that the adversary can produce difficult to understand commands that are effective against existing systems in the black-box model. Under the white-box model, the attacker has full knowledge of the internals of the speech recognition system and uses it to create attack commands that we demonstrate through user testing are not understandable by humans. We then evaluate several defenses, including notifying the user when a voice command is accepted; a verbal challenge-response protocol; and a machine learning approach that can detect our attacks with 99.8 accuracy.",
"Computational paralinguistic analysis is increasingly being used in a wide range of applications, including security-sensitive applications such as speaker verification, deceptive speech detection, and medical diagnosis. While state-of-the-art machine learning techniques, such as deep neural networks, can provide robust and accurate speech analysis, they are susceptible to adversarial attacks. In this work, we propose a novel end-to-end scheme to generate adversarial examples by perturbing directly the raw waveform of an audio recording rather than specific acoustic features. Our experiments show that the proposed adversarial perturbation can lead to a significant performance drop of state-of-the-art deep neural networks, while only minimally impairing the audio quality.",
"",
"Voice assistants like Siri enable us to control IoT devices conveniently with voice commands, however, they also provide new attack opportunities for adversaries. Previous papers attack voice assistants with obfuscated voice commands by leveraging the gap between speech recognition system and human voice perception. The limitation is that these obfuscated commands are audible and thus conspicuous to device owners. In this paper, we propose a novel mechanism to directly attack the microphone used for sensing voice data with inaudible voice commands. We show that the adversary can exploit the microphone's non-linearity and play well-designed inaudible ultrasounds to cause the microphone to record normal voice commands, and thus control the victim device inconspicuously. We demonstrate via end-to-end real-world experiments that our inaudible voice commands can attack an Android phone and an Amazon Echo device with high success rates at a range of 2-3 meters."
]
} |
1903.10346 | 2952730822 | Adversarial examples are inputs to machine learning models designed by an adversary to cause an incorrect output. So far, adversarial examples have been studied most extensively in the image domain. In this domain, adversarial examples can be constructed by imperceptibly modifying images to cause misclassification, and are practical in the physical world. In contrast, current targeted adversarial examples applied to speech recognition systems have neither of these properties: humans can easily identify the adversarial perturbations, and they are not effective when played over-the-air. This paper makes advances on both of these fronts. First, we develop effectively imperceptible audio adversarial examples (verified through a human study) by leveraging the psychoacoustic principle of auditory masking, while retaining 100% targeted success rate on arbitrary full-sentence targets. Next, we make progress towards physical-world over-the-air audio adversarial examples by constructing perturbations which remain effective even after applying realistic simulated environmental distortions. | These two lines of work were partially unified by later work that constructed adversarial perturbations for speech recognition systems targeting arbitrary (multi-word) sentences. However, this attack was neither effective over-the-air, nor was the adversarial perturbation completely inaudible; while the perturbations it introduces are very quiet, they can be heard by a human (see ). Concurrently, the CommanderSong @cite_24 attack developed adversarial examples that are effective over-the-air, but at the cost of introducing a significant perturbation to the original audio. Following this, concurrent work with ours develops attacks on deep learning ASR systems that either work over-the-air or are less obviously perceptible. Some of this work creates adversarial examples which can be played over-the-air.
These attacks are highly effective on short two- or three-word phrases, but not on the full-sentence phrases originally studied. Further, these adversarial examples often have a significantly larger perturbation, and in one case, the perturbation introduced had a greater magnitude than the original audio. | {
"cite_N": [
"@cite_24"
],
"mid": [
"2784500886"
],
"abstract": [
"ASR (automatic speech recognition) systems like Siri, Alexa, Google Voice or Cortana has become quite popular recently. One of the key techniques enabling the practical use of such systems in people's daily life is deep learning. Though deep learning in computer vision is known to be vulnerable to adversarial perturbations, little is known whether such perturbations are still valid on the practical speech recognition. In this paper, we not only demonstrate such attacks can happen in reality, but also show that the attacks can be systematically conducted. To minimize users' attention, we choose to embed the voice commands into a song, called CommandSong. In this way, the song carrying the command can spread through radio, TV or even any media player installed in the portable devices like smartphones, potentially impacting millions of users in long distance. In particular, we overcome two major challenges: minimizing the revision of a song in the process of embedding commands, and letting the CommandSong spread through the air without losing the voice \"command\". Our evaluation demonstrates that we can craft random songs to \"carry\" any commands and the modify is extremely difficult to be noticed. Specially, the physical attack that we play the CommandSongs over the air and record them can success with 94 percentage."
]
} |
1903.10346 | 2952730822 | Adversarial examples are inputs to machine learning models designed by an adversary to cause an incorrect output. So far, adversarial examples have been studied most extensively in the image domain. In this domain, adversarial examples can be constructed by imperceptibly modifying images to cause misclassification, and are practical in the physical world. In contrast, current targeted adversarial examples applied to speech recognition systems have neither of these properties: humans can easily identify the adversarial perturbations, and they are not effective when played over-the-air. This paper makes advances on both of these fronts. First, we develop effectively imperceptible audio adversarial examples (verified through a human study) by leveraging the psychoacoustic principle of auditory masking, while retaining 100% targeted success rate on arbitrary full-sentence targets. Next, we make progress towards physical-world over-the-air audio adversarial examples by constructing perturbations which remain effective even after applying realistic simulated environmental distortions. | A final line of work extends adversarial example generation on ASR systems from the white-box setting (where the adversary has complete knowledge of the underlying classifier) to the black-box setting @cite_14 @cite_11 (where the adversary is only allowed to query the system). This work is complementary and independent of ours: we assume a white-box threat model. | {
"cite_N": [
"@cite_14",
"@cite_11"
],
"mid": [
"2899171775",
"2803853585"
],
"abstract": [
"Fooling deep neural networks with adversarial input have exposed a significant vulnerability in current state-of-the-art systems in multiple domains. Both black-box and white-box approaches have been used to either replicate the model itself or to craft examples which cause the model to fail. In this work, we use a multi-objective genetic algorithm based approach to perform both targeted and un-targeted black-box attacks on automatic speech recognition (ASR) systems. The main contribution of this research is the proposal of a generic framework which can be used to attack any ASR system, even if it's internal working is hidden. During the un-targeted attacks, the Word Error Rates (WER) of the ASR degrades from 0.5 to 5.4, indicating the potency of our approach. In targeted attacks, our solution reaches a WER of 2.14. In both attacks, the adversarial samples maintain a high acoustic similarity of 0.98 and 0.97.",
"The application of deep recurrent networks to audio transcription has led to impressive gains in automatic speech recognition (ASR) systems. Many have demonstrated that small adversarial perturbations can fool deep neural networks into incorrectly predicting a specified target with high confidence. Current work on fooling ASR systems have focused on white-box attacks, in which the model architecture and parameters are known. In this paper, we adopt a black-box approach to adversarial generation, combining the approaches of both genetic algorithms and gradient estimation to solve the task. We achieve a 89.25 targeted attack similarity after 3000 generations while maintaining 94.6 audio file similarity."
]
} |
1903.10269 | 2923537944 | To monitor critical infrastructure, high quality sensors sampled at a high frequency are increasingly installed. However, due to the big amounts of data produced, only simple aggregates are stored. This removes outliers and hides fluctuations that could indicate problems. As a solution we propose compressing time series with dimensions using a model-based method we name Multi-model Group Compression (MMGC). MMGC adaptively compresses groups of correlated time series with dimensions using an extensible set of models within a user-defined error bound (possibly zero). To partition time series into groups, we propose a set of primitives for efficiently describing correlation for data sets of varying sizes. We also propose efficient query processing algorithms for executing multi-dimensional aggregate queries on models instead of data points. Last, we provide an open-source implementation of our methods as extensions to the model-based Time Series Management System (TSMS) ModelarDB. ModelarDB interfaces with the stock versions of Apache Spark and Apache Cassandra and thus can reuse existing infrastructure. Through an evaluation we show that, compared to widely used systems, our extended ModelarDB provides up to 11 times faster ingestion due to high compression, 65 times better compression due to the adaptivity of MMGC, 92 times faster aggregate queries as they are executed on models, and close to linear scalability while also being extensible and supporting online query processing. | We summarize papers about model-based time series management and model-based OLAP. These are surveys about model-based time series management @cite_3 @cite_12, Hadoop OLAP @cite_22, and TSMSs @cite_2. | {
"cite_N": [
"@cite_22",
"@cite_12",
"@cite_3",
"@cite_2"
],
"mid": [
"2735688000",
"",
"1831821983",
"2749874879"
],
"abstract": [
"The growth of social networks and affordability of various sensing devices has lead to a huge increase of both human and non-human entities that are interconnected via various networks, mostly Internet. All of these entities generate large amounts of various data, and BI analysts have realized that such data contain knowledge that can no longer be ignored. However, traditional support for extraction of knowledge from mostly transactional data - data warehouse - can no longer cope with large amounts of fast incoming various, unstructured data - big data - and is facing a paradigm shift. Big data analytics has become a very active research area in the last few years, as well as the research of underlying data organization that would enhance it, which could be addressed as big data warehousing. One research direction is enhancing data warehouse with new paradigms that have proven to be successful at handling big data. Most popular of them is the MapReduce paradigm. This paper provides an overview on research and attempts to incorporate MapReduce with data warehouse in order to empower it for handling of big data.",
"",
"In recent years, due to the proliferation of sensor networks, there has been a genuine need of researching techniques for sensor data acquisition and management. To this end, a large number of techniques have emerged that advocate model-based sensor data acquisition and management. These techniques use mathematical models for performing various, day-to-day tasks involved in managing sensor data. In this chapter, we survey the state-of-the-art techniques for model-based sensor data acquisition and management. We start by discussing the techniques for.",
"The collection of time series data increases as more monitoring and automation are being deployed. These deployments range in scale from an Internet of things (IoT) device located in a household to enormous distributed Cyber-Physical Systems (CPSs) producing large volumes of data at high velocity. To store and analyze these vast amounts of data, specialized Time Series Management Systems (TSMSs) have been developed to overcome the limitations of general purpose Database Management Systems (DBMSs) for times series management. In this paper, we present a thorough analysis and classification of TSMSs developed through academic or industrial research and documented through publications. Our classification is organized into categories based on the architectures observed during our analysis. In addition, we provide an overview of each system with a focus on the motivational use case that drove the development of the system, the functionality for storage and querying of time series a system implements, the components the system is composed of, and the capabilities of each system with regard to Stream Processing and Approximate Query Processing (AQP) . Last, we provide a summary of research directions proposed by other researchers in the field and present our vision for a next generation TSMS."
]
} |
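The abstract in this row hinges on error-bounded, model-based compression: a run of points is replaced by a model as long as no point deviates from the model's reconstruction by more than a user-defined bound. A minimal sketch of that idea with a single constant (midrange) model — an illustration only, not MMGC's or ModelarDB's actual implementation:

```python
def compress_constant(values, error_bound):
    """Greedy error-bounded compression with one constant model.

    Splits the series into segments; each segment is replaced by its
    midrange, which guarantees that every point in the segment lies
    within error_bound of the stored value.
    """
    segments = []                  # (number_of_points, representative_value)
    lo = hi = values[0]
    count = 1
    for v in values[1:]:
        new_lo, new_hi = min(lo, v), max(hi, v)
        if new_hi - new_lo <= 2 * error_bound:
            lo, hi, count = new_lo, new_hi, count + 1   # v fits the segment
        else:
            segments.append((count, (lo + hi) / 2))     # close the segment
            lo = hi = v
            count = 1
    segments.append((count, (lo + hi) / 2))
    return segments

series = [10.0, 10.3, 9.9, 10.1, 25.0, 25.2, 24.9]
print(compress_constant(series, error_bound=0.5))
```

With a bound of zero this degenerates to storing every distinct run exactly, which matches the abstract's "possibly zero" error bound.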
1903.10269 | 2923537944 | To monitor critical infrastructure, high quality sensors sampled at a high frequency are increasingly installed. However, due to the big amounts of data produced, only simple aggregates are stored. This removes outliers and hides fluctuations that could indicate problems. As a solution we propose compressing time series with dimensions using a model-based method we name Multi-model Group Compression (MMGC). MMGC adaptively compresses groups of correlated time series with dimensions using an extensible set of models within a user-defined error bound (possibly zero). To partition time series into groups, we propose a set of primitives for efficiently describing correlation for data sets of varying sizes. We also propose efficient query processing algorithms for executing multi-dimensional aggregate queries on models instead of data points. Last, we provide an open-source implementation of our methods as extensions to the model-based Time Series Management System (TSMS) ModelarDB. ModelarDB interfaces with the stock versions of Apache Spark and Apache Cassandra and thus can reuse existing infrastructure. Through an evaluation we show that, compared to widely used systems, our extended ModelarDB provides up to 11 times faster ingestion due to high compression, 65 times better compression due to the adaptivity of MMGC, 92 times faster aggregate queries as they are executed on models, and close to linear scalability while also being extensible and supporting online query processing. | MMC was proposed in @cite_16 @cite_44. Models are fitted to a time series in parallel until they all fail; the model with the highest compression ratio is then stored. The Adaptive Approximation (AA) algorithm @cite_30 fits models in parallel and creates segments as each model fails. After all models have failed, the segments from the model with the highest compression ratio are stored.
In @cite_10, regression models are fitted in sequence, with coefficients added as required by the error bound. The model providing the best compression ratio is stored when @math coefficients are reached. | {
"cite_N": [
"@cite_44",
"@cite_16",
"@cite_10",
"@cite_30"
],
"mid": [
"",
"2157773023",
"2033201837",
"1964219482"
],
"abstract": [
"",
"The increasing use of sensor technology for various monitoring applications (e.g. air-pollution, traffic, climate-change, etc.) has led to an unprecedented volume of streaming data that has to be efficiently aggregated, stored and retrieved. Real-time model-based data approximation and filtering is a common solution for reducing the storage (and communication) overhead. However, the selection of the most efficient model depends on the characteristics of the data stream, namely rate, burstiness, data range, etc., which cannot be always known a priori for (mobile) sensors and they can even dynamically change. In this paper, we investigate the innovative concept of efficiently combining multiple approximation models in real-time. Our approach dynamically adapts to the properties of the data stream and approximates each data segment with the most suitable model. As experimentally proved, our multi-model approximation approach always produces fewer or equal data segments than those of the best individual model, and thus provably achieves higher data compression ratio than individual linear models.",
"Time-series data is increasingly collected in many domains. One example is the smart electricity infrastructure, which generates huge volumes of such data from sources such as smart electricity meters. Although today these data are used for visualization and billing in mostly 15-min resolution, its original temporal resolution frequently is more fine-grained, e.g., seconds. This is useful for various analytical applications such as short-term forecasting, disaggregation and visualization. However, transmitting and storing huge amounts of such fine-grained data are prohibitively expensive in terms of storage space in many cases. In this article, we present a compression technique based on piecewise regression and two methods which describe the performance of the compression. Although our technique is a general approach for time-series compression, smart grids serve as our running example and as our evaluation scenario. Depending on the data and the use-case scenario, the technique compresses data by ratios of up to factor 5,000 while maintaining its usefulness for analytics. The proposed technique has outperformed related work and has been applied to three real-world energy datasets in different scenarios. Finally, we show that the proposed compression technique can be implemented in a state-of-the-art database management system.",
"The volume of time series stream data grows rapidly in various applications. To reduce the storage, transmission and processing costs of time series data, segmentation and approximation is a common approach. In this paper, we propose a novel online segmentation algorithm that approximates time series by a set of different types of candidate functions (polynomials of different orders, exponential functions, etc.) and adaptively chooses the most compact one as the pattern of the time series changes. We call this algorithm the Adaptive Approximation (AA) algorithm. The AA algorithm incrementally narrows the feasible coefficient spaces (FCS) of candidate functions in coefficient coordinate systems to make each segment as long as possible given an error bound on each data point. We propose an algorithm called the FCS algorithm for the incremental computation of the feasible coefficient spaces. We further propose a mapping based index for similarity searches on the approximated time series. Experimental results show that our AA algorithm generates more compact approximations of the time series with lower average errors than the state-of-the-art algorithm, and our indexing method processes similarity searches on the approximated time series efficiently."
]
} |
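The multi-model schemes cited in this row share a common skeleton: several candidate models ingest points in parallel, each candidate drops out when it can no longer honor the error bound, and once all have failed the survivor with the highest compression ratio is emitted. A hedged sketch of that skeleton — the two model classes and all names are illustrative, not taken from the cited systems:

```python
class ConstantModel:
    """One stored value per segment (cost = 1 coefficient)."""
    cost = 1

    def __init__(self):
        self.lo = self.hi = None

    def try_append(self, v, bound):
        lo = v if self.lo is None else min(self.lo, v)
        hi = v if self.hi is None else max(self.hi, v)
        if hi - lo > 2 * bound:
            return False                    # no constant fits all points
        self.lo, self.hi = lo, hi
        return True


class SwingModel:
    """Line anchored at the first point, swing-filter style (cost = 2)."""
    cost = 2

    def __init__(self):
        self.v0 = None
        self.n = 0
        self.slope_lo, self.slope_hi = float("-inf"), float("inf")

    def try_append(self, v, bound):
        if self.v0 is None:
            self.v0, self.n = v, 1          # line passes through first point
            return True
        t = self.n
        new_lo = max(self.slope_lo, (v - bound - self.v0) / t)
        new_hi = min(self.slope_hi, (v + bound - self.v0) / t)
        if new_lo > new_hi:                 # no slope satisfies the bound
            return False
        self.slope_lo, self.slope_hi = new_lo, new_hi
        self.n += 1
        return True


def best_model(values, bound):
    """Feed points to all candidates in parallel until every model fails,
    then pick the one with the best compression ratio (points represented
    per stored coefficient)."""
    candidates = [ConstantModel(), SwingModel()]
    covered = {id(m): 0 for m in candidates}
    alive = list(candidates)
    for v in values:
        alive = [m for m in alive if m.try_append(v, bound)]
        for m in alive:
            covered[id(m)] += 1
        if not alive:
            break
    best = max(candidates, key=lambda m: covered[id(m)] / m.cost)
    return best, covered[id(best)]
```

For a steadily increasing series the swing model wins (many points per two coefficients); for a flat series the constant model wins. Real systems also weigh timestamps and storage metadata when computing the ratio.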